diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzmmae" "b/data_all_eng_slimpj/shuffled/split2/finalzzmmae" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzmmae" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nThe discovery of a Higgs boson, honoured with the 2013 Nobel prize in physics, marks a turning point in particle physics, \nas the last missing building block of the Standard Model falls into place and opens the door to completely new studies of a \nparticle unlike any other discovered before. \nAs in many earlier instances in the history of particle physics, it did not come as a surprise, but was anticipated and sought for. The Higgs mass had been predicted with increasing precision from the analysis of electro-weak quantum corrections, in which measurements at the previous generation of $e^+e^-$ colliders played a prominent r\\^ole. \n\nToday, Higgs physics has been identified as one of the prime \"drivers\" of the field, as a compelling line of research with great promise, where surprises may be expected. \nThe main question is to fully establish the profile of the Higgs particle, measure its quantum numbers and, above all, its precisely predicted couplings to almost all other fundamental particles, and to find out whether it fulfils its r\\^ole in the Standard Model, or whether it holds the key to new physics beyond. \n\nThe accuracy required to detect possible mechanisms behind electroweak symmetry breaking through deviations of the Higgs couplings from their pure Standard Model values has been quantitatively investigated in the framework of the Snowmass study 2013~\\cite{snowmasshiggs}. \nPopular models like two-Higgs-doublet or composite Higgs schemes, which predict new particles at the TeV scale, and which are still compatible with recent limits from direct searches at the LHC, typically lead to such deviations in the per-cent or sub-per-cent range.
\nThis sets the scale of the future experimental challenges and demonstrates the discovery potential of precision measurements in the Higgs sector. \n\n\\subsection*{The ILC and its detectors}\n \n The ILC has been proposed as the next big high energy accelerator project. \n It is designed to have centre-of-mass energies ranging from 250 to 500~GeV and is upgradeable to reach 1~TeV. \n The delivered luminosity increases with energy and amounts to typically 100 -- 300~fb$^{-1}\/$y, \n with beam polarisations of up to 80\\% and 30\\% for electrons and positrons, respectively. \n The superconducting technology is mature, as is demonstrated by the on-going construction of the European XFEL at DESY, which uses a very similar design at industrial scales. \n A technical design report (TDR)~\\cite{tdr} for the ILC was completed in 2012, a proposed site has been selected in the Kitakami mountains in the North of Japan, and the project is currently being discussed at ministerial levels. \n\nTwo detector concepts have been proposed~\\cite{dbd} for the ILC, which have been optimised for precision, as radiation hardness and rate capability requirements are very relaxed with respect to those at the LHC. \nThe detectors feature highly granular and compact calorimeters for particle flow reconstruction, ultra-thin and precise trackers, and vertex detectors capable of identifying not only beauty but also charm quarks. \nDetailed designs have been implemented in the simulations to evaluate the physics potential under realistic conditions, including beam-induced backgrounds.\n\n\\section{Measurements of Higgs couplings}\n\nIt is instructive to recall the necessary ingredients of a measurement of a coupling strength.
\nThe number of particles $N$ observed in a given final state $f$, normalised to the integrated luminosity $L$, is given by the product of cross-section $\\sigma$ and branching fraction ${\\cal B}$, which is the ratio of partial width $\\Gamma_f$ to total width \n$\\Gamma_T$.\nThe couplings to the initial and final state, $g_i$ and $g_f$, enter via the production cross section and the partial width, such that one has\n\\begin{equation}\nN \/ L = \\sigma \\cdot {\\cal B} = \\sigma \\cdot \\Gamma_f \/ \\Gamma_T \\sim g_i^2 \\cdot g_f^2\\, \/ \\, \\Gamma_T \\, .\n\\end{equation}\nIn order to extract $g_f$, one needs a measurement of the inclusive cross section -- to obtain $g_i$ -- and the total width. \nIn the Z line shape analysis at LEP, the width of the Z resonance was directly observable, and the cross section \nin the $e^+e^-$ final (and initial) state provided a normalisation of the couplings of the Z to fermions. \nThe width of the Higgs particle, however, is expected to be about 4~MeV in the Standard Model and too narrow to be resolved experimentally, so it has to be extracted from the branching ratio of a channel for which the coupling is already known, e.g.\\ from a production measurement, \n\\begin{equation}\\label{Eq:GammaT}\n\\Gamma_T = \\Gamma_f \/ {\\cal B} \\sim g^2_f \/ {\\cal B} \\, .\n\\end{equation}\nAt the LHC the total cross section and total width are poorly constrained, and in general the Standard Model values are assumed.
\nAt the ILC, however, one can make use of the unique features of an $e^+e^-$ collider to obtain a self-contained set of observables.\n\n\\subsection{Higgs production at the ILC}\nThe dominant Higgs production processes at the ILC are Higgs strahlung and W fusion~\\cite{ilctdrphys}.\nFigure~\\ref{Fig:Hprod} shows the diagrams and the dependence of the cross-section on the centre-of-mass energy.\nHiggs strahlung as an $s$ channel process dominates at threshold, whilst the cross section of the $t$ channel process W fusion increases logarithmically with energy and takes over at about 450~GeV. \nHere, one has made use of the beam polarisation to enhance the cross section. \nNow, since at an $e^+e^-$ machine one can control the energy of the incoming fermions, one can select the dominant process by tuning the beam energy. \n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[width=0.25\\textwidth]{sefkow_felix_fig1a.pdf}\n\\hspace{1cm}\n\\includegraphics[width=0.35\\textwidth]{sefkow_felix_fig1.pdf}\n\\end{center}\n\\caption{Higgs production diagrams and cross section vs.\\ centre-of-mass energy.}\\label{Fig:Hprod}\n\\end{figure}\n\nAnother consequence of the well-defined initial state is the possibility to apply kinematic constraints. \nIn ZH events, a Higgs signal can be observed in the spectrum of recoil masses against the Z decay products,\n\\begin{equation*}\nM_{\\mbox{\\small recoil}}^2 = E^2 - p^2 \\: \\mbox{with} \\: E = \\sqrt{s} - E_Z \\;\\mbox{and} \\; p=p_Z \\, .\n\\end{equation*}\n This works best for Z decays into muon pairs, as shown in Figure~\\ref{Fig:Hrecoil}, but also well for the electron channel, whilst for hadronic Z decays it is more difficult.
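As a numerical illustration of the recoil-mass method above, the short sketch below reconstructs the Higgs mass from toy two-body ZH kinematics; all input values are hypothetical and chosen only to mimic an event at 250~GeV, not taken from any ILC study.

```python
import math

# Minimal sketch of the recoil-mass method (hypothetical numbers, GeV units):
# M_recoil^2 = E^2 - p^2 with E = sqrt(s) - E_Z and p = |p_Z|.
def recoil_mass(sqrt_s, e_z, p_z):
    m2 = (sqrt_s - e_z) ** 2 - p_z ** 2
    return math.sqrt(m2) if m2 > 0 else float("nan")

# For a two-body ZH event at sqrt(s) = 250 GeV, the Z energy is fixed:
# E_Z = (s + m_Z^2 - m_H^2) / (2 sqrt(s)).
sqrt_s, m_z, m_h = 250.0, 91.19, 125.0
e_z = (sqrt_s**2 + m_z**2 - m_h**2) / (2 * sqrt_s)
p_z = math.sqrt(e_z**2 - m_z**2)
m_recoil = recoil_mass(sqrt_s, e_z, p_z)  # recovers m_H ~ 125 GeV
```

Note that nothing about the Higgs decay enters the calculation, which is the point of the method: only the Z four-momentum is used.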
\nHere, no requirements whatsoever on the Higgs final state have been made; it can even be invisible, and thus the measurement is fully inclusive.\nIt provides an absolute normalisation for all branching ratios into specific final states and a model-independent extraction of \nthe absolute value of $g_Z$, the Higgs Z coupling, which is {\\em the} central measurement of the Higgs coupling analyses. \n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[width=0.35\\textwidth]{sefkow_felix_fig2.png}\n\\hspace{1cm}\n\\includegraphics[width=0.45\\textwidth]{sefkow_felix_fig3.pdf}\n\\end{center}\n\\caption{Higgs signal in the recoil mass spectrum (ZH production), and in the $b\\bar{b}$ di-jet mass (W fusion).}\\label{Fig:Hrecoil}\n\\end{figure}\n\n\n\\subsection{The Higgs total width}\n\nThe Higgs mass of 125~GeV is almost ideally suited for the study of a large number of decay modes with not too small branching ratios. \nHowever, the fraction of decays into Z pairs is only a few per-cent, and the statistics for specific Z channels is very small.\nAn extraction of the total width, using Eq.~\\ref{Eq:GammaT} with $g_Z$ and ${\\cal B}({\\rm H}\\rightarrow {\\rm ZZ}^*)$, is in principle possible, but would suffer from large uncertainties of $\\sim 20\\%$.\n\nIt is more advantageous to use the W fusion cross section and the branching ratio ${\\cal B}({\\rm H}\\rightarrow {\\rm WW}^*)$.\nSince in W fusion the Higgs is accompanied by two neutrinos, the recoil method cannot be applied for a decay-mode independent measurement, but a specific Higgs channel must be used. \nBoth the $b\\bar{b}$ and the $WW^*$ channels are suited~\\cite{duerig}; the $b\\bar{b}$ signal is shown in Figure~\\ref{Fig:Hrecoil}.\nSince these decay modes are also measured in HZ production, the ratio $g_W\/g_Z$ and thus $g_W$ can be extracted, and \n$\\Gamma_T$ follows from Eq.~\\ref{Eq:GammaT}.
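The extraction chain just described reduces to a few lines of arithmetic; the rates below are purely illustrative placeholders in arbitrary units (not ILC projections), chosen only to show how the inclusive ZH rate, the two $b\bar{b}$ measurements and ${\cal B}({\rm H}\to{\rm WW}^*)$ combine into the total width.

```python
# Toy version of the width extraction (all inputs hypothetical, arbitrary units):
sigma_zh = 1.00      # inclusive ZH cross section from the recoil mass  ~ g_Z^2
rate_zh_bb = 0.577   # sigma_ZH * B(H -> bb), measured in ZH events
rate_nnh_bb = 0.35   # sigma_nunuH * B(H -> bb), measured in W fusion
br_ww = 0.215        # B(H -> WW*)

g_z2 = sigma_zh                # g_Z^2 from the inclusive measurement
br_bb = rate_zh_bb / sigma_zh  # B(H -> bb), normalised by the recoil rate
g_w2 = rate_nnh_bb / br_bb     # W-fusion cross section ~ g_W^2
gamma_t = g_w2 / br_ww         # total width: Gamma_T ~ g_W^2 / B(H -> WW*)
```

In this schematic form, every quantity on the right-hand side is measured at the collider itself, which is what makes the set of observables self-contained.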
\nNow one has all ingredients to convert also the other branching ratio measurements into absolute couplings. \n\n\\subsection{Higgs couplings to fermions and the self-coupling}\n\nThanks to the relatively benign beam conditions at the ILC, vertex detector systems can be realised which can not only identify \n$b$ flavoured hadron decays on the basis of the finite decay length, but can also tag charmed hadrons and disentangle prompt open charm from tertiary vertices, which originate from $b\\rightarrow c$ decays. \nParticularly well suited are ZH events with the Z decaying into neutrinos, such that the final state consists of the two jets from the Higgs only, giving a signal in the di-jet invariant mass. \nA multivariate analysis of the vertex topologies then yields a simultaneous measurement of \n${\\cal B}({\\rm H}\\rightarrow b\\bar{b})$, ${\\cal B}({\\rm H}\\rightarrow c\\bar{c})$ and ${\\cal B}({\\rm H}\\rightarrow gg)$, and thus \n$g_b$, $g_c$ and a model-dependent value for $g_t$, like the $\\gamma\\gamma$ mode.\n\nThe measurement of the coupling to the second quark generation is unique for testing the mass dependence of the Higgs coupling in the quark sector, since couplings to $u$, $d$ and $s$ quarks are unobservable. \nIn the lepton sector, $g_{\\tau}$ can be measured well, but in the H$\\rightarrow\\mu\\mu$ channel only very few events can be observed, and only at the highest energies attainable at the ILC, where luminosity and cross section are maximal. \n\nThe direct observation of the top Higgs Yukawa coupling is made through a production cross section measurement for the \n$t\\bar{t}$H channel, where, e.g., a Higgs is radiated from one of the two quarks in a $t\\bar{t}$ pair. \nThis involves the analysis of complex 8- or 10-fermion final states, where even after using flavour tags and di-jet masses, the signal basically consists of an excess over the expectation without a $t\\bar{t}$H coupling.
\nThis is a particularly good example for cases where a large gain in precision can be obtained from a combined evaluation of ILC and LHC data, see below. \n\nFinally, a measurement of the Higgs self-coupling would represent the last cornerstone in establishing the Higgs profile and demonstrating that it has the properties required for electro-weak symmetry breaking. \nThe strength $g_{HHH}$ can be measured at the ILC, albeit with only moderate precision. \nThis is due to the fact that ZHH events are not only produced with diagrams involving the triple-Higgs coupling, but also through processes like double Higgs strahlung, which constitute an irreducible background. \nThe situation is more favourable in the case of W fusion leading to $\\nu\\bar{\\nu}$HH events; therefore, the best precision is obtained at the highest energies, where the dilution is less and luminosity and cross section are largest. \n\n\\section{Global fits and achievable precision: Summary}\n\nIn a staged running scenario, each centre-of-mass energy, 250, 500 and 1000~GeV, provides an independent set of measurements.\nAltogether, 33 measurements of $\\sigma\\cdot {\\cal B}$ values are made and injected into a global fit with 10 free parameters -- the couplings to W, Z and the $t$, $b$, $c$, $\\tau$, $\\mu$ fermions, the indirect couplings to $gg$ and $\\gamma\\gamma$ pairs, and the total width \n$\\Gamma_T$. \nThe result is shown in Figure~\\ref{Fig:Hprecision}. \n\nThe precision has been compared to that expected for the LHC~\\cite{peskin} and its high-luminosity upgrade~\\cite{zerwas}.\nIn these studies, consistent assumptions and constraints have been used for both colliders' data sets, which is important for a fair comparison. \nAs Figure~\\ref{Fig:Hprecision} shows, with linear collider results the per-cent and sub-per-cent level precision can be reached which is required to detect deviations from the Standard Model of the magnitude expected in theories for mechanisms behind electro-weak symmetry breaking.
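Since every $\sigma\cdot{\cal B}$ observable is a product of squared couplings divided by the total width, such a global fit is linear in the logarithms of the parameters. The sketch below solves a deliberately tiny three-parameter stand-in for the 33-measurement fit, with hypothetical rates and errors; it only illustrates the mechanics, not the actual ILC analysis.

```python
import math

# Toy stand-in for the global fit: x = (ln gZ^2, ln gW^2, ln Gamma_T),
# and each measurement obeys ln O_i = sum_j a_ij x_j (all values hypothetical).
meas = [
    # (gZ^2, gW^2, Gamma_T) exponents, value, relative error
    ((1, 0, 0), 1.00, 0.01),   # inclusive ZH recoil      ~ gZ^2
    ((1, 1, -1), 0.20, 0.02),  # ZH, H->WW*               ~ gZ^2 gW^2 / Gamma_T
    ((0, 2, -1), 0.24, 0.03),  # W fusion, H->WW*         ~ gW^4 / Gamma_T
    ((1, 1, -1), 0.21, 0.05),  # an independent ZH, H->WW* measurement
]

# Weighted least squares in log space: solve (A^T W A) x = A^T W y.
ata = [[0.0] * 3 for _ in range(3)]
aty = [0.0] * 3
for a, val, err in meas:
    w, y = 1.0 / err**2, math.log(val)
    for i in range(3):
        aty[i] += w * a[i] * y
        for j in range(3):
            ata[i][j] += w * a[i] * a[j]

for i in range(3):  # Gauss-Jordan elimination (the matrix is positive definite)
    piv = ata[i][i]
    ata[i] = [v / piv for v in ata[i]]
    aty[i] /= piv
    for k in range(3):
        if k != i:
            f = ata[k][i]
            ata[k] = [vk - f * vi for vk, vi in zip(ata[k], ata[i])]
            aty[k] -= f * aty[i]

g_z2, g_w2, gamma_t = (math.exp(v) for v in aty)
```

The two slightly inconsistent H$\to$WW* rates are averaged with their weights, exactly as tensions between channels are absorbed in the real fit.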
\n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[width=0.40\\textwidth]{sefkow_felix_fig4.pdf}\n\\hspace{1cm}\n\\includegraphics[width=0.50\\textwidth]{sefkow_felix_fig5.pdf}\n\\end{center}\n\\caption{Higgs coupling strengths, measured at the ILC, as a function of mass: relative precision for expected ILC and LHC results, including the luminosity upgrade, and combination of data. }\\label{Fig:Hprecision}\n\\end{figure}\n\n\n\n\\section{Bibliography}\n \n\n\\begin{footnotesize}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nAccording to the current cosmological observations, our universe\nis experiencing a late-time accelerating expansion.\nAlthough the $\\Lambda$CDM model can describe the accelerating\nuniverse by introducing dark energy~\\cite{Amendola:2015}, it fails to\nsolve the cosmological constant problem,\nrelated to the ``fine-tuning\"~\\cite{Weinberg:1988cp, Weinberg:1972} and ``coincidence\"~\\cite{ArkaniHamed:2000tc, Peebles:2002gy} puzzles. Many efforts have been made to understand these issues.\nFor example, one can modify the gravitational theory to\nobtain viable cosmological models with dynamical dark energy\nto explain the accelerating universe~\\cite{Copeland:2006wr}.\n\nOn the other hand, one can reconstruct the Friedmann equations through \nthe implications of thermodynamics. It has been shown that\nEinstein's equations can be derived by considering\nthe Clausius relation for a local Rindler observer~\\cite{Jacobson:1995ab}.\n
In particular, this idea\nhas been applied to cosmology,\nwhere the Friedmann equations have been obtained by using the \nfirst law of thermodynamics on the horizon of the universe~\\cite{Cai:2005ra}.\nIt has also been demonstrated that the modified Friedmann equations \ncan be acquired from the thermodynamical approach\nby just replacing the entropy-area relation with a proper one in a wide variety of gravitational theories~\\cite{Cai:2005ra,Akbar:2006er,Akbar:2006kj,Jamil:2009eb,Fan:2014ala,Gim:2014nba}.\nThus, as long as there is a new entropy-area relation, \nthermodynamics gives us a new way to determine the\nmodified Friedmann equations without knowing the \nunderlying gravitational theory.\n Furthermore, since the entropy-area relation obtained from a modified gravity theory can be used to extract the dark energy dynamics along with the modified Friedmann equations, it is reasonable to believe that even if we do not know the underlying theory of modified gravity, some modifications of \n the entropy relation will still give us additional information on the modified Friedmann equations as well as the dynamics of dark energy, which would be different from $\\Lambda$CDM.\nAs a result, we expect that the modification of the entropy is also relevant to the cosmological evolution.\n\nIt is known that a power-law corrected term from quantum entanglement can be included in the black hole entropy near its horizon~\\cite{Das:2007mj}.\nInterestingly, one can apply it to cosmology by taking it\nas the entropy on the horizon of the universe.\nOn the other hand,\nthe universe is regarded as a non-extensive thermodynamical\nsystem, so the Boltzmann-Gibbs entropy should be\ngeneralized to a non-extensive quantity, the Tsallis entropy, while the standard one can be treated as a limit~\\cite{Tsallis:1987eu, Lyra:1998wz, Wilk:1999dr}.\nThe Tsallis entropy has been widely discussed in the literature.\nIn the entropic-cosmology scenario~\\cite{Easson:2010av},\nthe Tsallis\n
entropy model predicts a decelerating and accelerating\nuniverse~\\cite{Komatsu:2013qia}. In addition, a number of works on the Tsallis holographic dark energy have been proposed and investigated~\\cite{Abreu:2014ara}. The Tsallis entropy has also been used in many different dark energy models, such as \nthe Barboza-Alcaniz and Chevallier-Polarski-Linder parametric dark energy and Wang-Meng and Dalal vacuum decay models~\\cite{Nunes:2014jra}.\nMoreover, it has been shown that modified cosmology from the first law of thermodynamics with a varying-exponent Tsallis entropy can provide a description of both inflation and late-time acceleration with the same parameter choices~\\cite{Nojiri:2019skr}.\nIn particular,\nthe Tsallis entropy is proportional to a power of the horizon \narea, i.e.\\ $S_T \\propto A^{\\delta}$, when the universe is assumed to be a spherically symmetric system~\\cite{Tsallis:2012js}.\n\nAlthough it is possible to modify the Friedmann equations by just considering a fluid with an inhomogeneous equation of state of the corresponding form~\\cite{1205.3421}, we still choose the thermodynamical approach,\nas in Ref.~\\cite{Lymperis:2018iuz}, in which\n the authors considered the first law of thermodynamics of the universe with a fixed-exponent Tsallis entropy\nand showed that the cosmological evolution mimics that of $\\Lambda$CDM\nand is in great agreement with Type Ia supernova observational\ndata.\nIn this paper, we examine the features of the modified Friedmann\nequations obtained by replacing the usual\nBekenstein-Hawking entropy-area relation,\n$S=A\/4G$, with the power-law corrected\nand Tsallis entropies~\\cite{Das:2007mj, Tsallis:1987eu, Lyra:1998wz, Wilk:1999dr,\nKomatsu:2013qia, Abreu:2014ara, Nunes:2014jra, Zadeh:2018wub, Nojiri:2019skr, Tsallis:2012js, Lymperis:2018iuz}, where $G$ is the gravitational constant.\n\nThis paper is organized as follows.\n
In Sec.~\\uppercase\\expandafter{\\romannumeral 2}, we consider the\npower-law corrected and Tsallis entropy models and derive\nthe modified Friedmann equations and dynamical equation of state parameters by applying the first law of thermodynamics to the\napparent horizon of the universe.\nIn Sec.~\\uppercase\\expandafter{\\romannumeral 3},\nwe present the cosmological evolutions of the two models and compare them with those in $\\Lambda$CDM.\nFinally, the conclusions are given in Sec.~\\uppercase\\expandafter{\\romannumeral 4}.\nThe paper is written in units of $c=\\hbar=k_B=1$.\n\n\\section{The Models}\nWe use the flat Friedmann-Lema\\^{i}tre-Robertson-Walker\n(FLRW) metric:\n\\begin{equation}\nds^2=-dt^2+a^{2}(t)\\Big(dr^2+r^2 d\\Omega^2\\Big),\n\\end{equation}\nwhere $a(t)$ is the scale factor.\nThe modified Friedmann equations can be constructed by considering\nthe first law of thermodynamics on the apparent horizon\nof the universe and using the new entropy-area relation\nrather than the Bekenstein-Hawking one.\nWe concentrate on two models:\nthe power-law corrected entropy (PLCE)\nand Tsallis entropy cosmological evolution (TECE) models.\n\\subsection{Power Law Corrected Entropy (PLCE) Model}\nIn the PLCE model, the entropy\nhas the form~\\cite{Das:2007mj}\n\\begin{align}\n\\label{eq:splce}\nS_{pl}= \\frac{A}{4L_p^2}\\bigg(1 - K_\\nu A^{1-\\frac{\\nu}{2}}\\bigg),\n\\end{align}\nwhere $\\nu$ is a dimensionless constant parameter, $K_\\nu=\\nu (4 \\pi)^{(\\nu-2)\/2}(4-\\nu)^{-1}r_c^{\\nu-2}$ with $r_c$ the crossover scale, $A$ is the area of the system, and $L_p$ is the Planck length.\n
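As a quick numerical cross-check of the entropy relation above, the sketch below evaluates it in units where $L_p=1$; the area and crossover-scale values are arbitrary and serve only to verify the limiting behaviour.

```python
import math

def plce_entropy(area, nu, r_c, l_p=1.0):
    # S_pl = A/(4 L_p^2) * (1 - K_nu * A^(1 - nu/2)), with
    # K_nu = nu * (4 pi)^((nu-2)/2) * (4 - nu)^(-1) * r_c^(nu-2).
    k_nu = nu * (4 * math.pi) ** ((nu - 2) / 2) / (4 - nu) * r_c ** (nu - 2)
    return area / (4 * l_p**2) * (1 - k_nu * area ** (1 - nu / 2))

# nu = 0 switches the correction off and recovers S = A/4 (Planck units);
# a small positive nu gives an entropy slightly below A/4.
s0 = plce_entropy(10.0, 0.0, 5.0)   # Bekenstein-Hawking limit: A/4 = 2.5
s1 = plce_entropy(10.0, 0.02, 5.0)  # below A/4
```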
With the method described in Ref.~\\cite{Lymperis:2018iuz},\none is able to extract the modified Friedmann equations:\n\\begin{align}\nH^2&=\\frac{8\\pi G}{3}(\\rho_m +\\rho_r + \\rho_{DE}),\\nonumber\\\\\n\\dot{H}&=-4\\pi G(\\rho_m +\\rho_r+ \\rho_{DE}+p_m +p_r+p_{DE}),\\label{FeqPLCE}\n\\end{align}\nwhere $\\rho_{DE}$ and $p_{DE}$ are the dark energy density\nand pressure, given by\n\\begin{align}\n\\rho_{DE} &= \\frac{3}{8 \\pi G}\\frac{1}{r^{2-\\nu}_c}\\big(H^{\\nu}-1\\big)\n+ \\frac{\\Lambda}{8 \\pi G},\\\\\np_{DE} &=\\frac{-\\nu}{8 \\pi G}\\frac{\\dot{H}}{r^{2-\\nu}_c}H^{\\nu-2}- \\frac{3}{8 \\pi G}\\frac{1}{r^{2-\\nu}_c}\\big(H^{\\nu}-1\\big)\n-\\frac{\\Lambda}{8 \\pi G},\n\\end{align}\nrespectively.\nTo discuss the evolution of dark energy, it is convenient to\ndefine the equation of state parameter, $w_{DE} \\equiv p_{DE}\/\\rho_{DE}$,\nwhich is found to be\n\\begin{align}\nw_{DE} = -1 + \\frac{-\\nu\\dot{H}H^{\\nu-2}}{3(H^{\\nu}-1)\n\t+ \\Lambda r^{2-\\nu}_c}.\n\\end{align}\n\\subsection{Tsallis Entropy Cosmological Evolution Model}\nIn the TECE model, we have~\\cite{Tsallis:2012js}\n\\begin{equation}\n\\label{eq:stece}\nS_T=\\frac{\\tilde{\\alpha}}{4G}A^{\\delta},\n\\end{equation}\nwhere $A$ is the area of the\nsystem with dimension [$L^2$], $\\tilde{\\alpha}$ is a\npositive constant with dimension [$L^{2-2\\delta}$], and \n$\\delta$ denotes the non-additivity parameter.\nSimilarly, by following the procedure in Ref.~\\cite{Lymperis:2018iuz},\nwe obtain\n\\begin{align}\nH^2&=\\frac{8\\pi G}{3}(\\rho_m +\\rho_r + \\rho_{DE}),\\nonumber\\\\\n\\dot{H}&=-4\\pi G(\\rho_m +\\rho_r+ \\rho_{DE}+p_m +p_r+p_{DE}),\\label{FeqTsa}\n\\end{align}\nwith\n\\begin{align}\n\\rho_{DE}&=\\frac{3}{8 \\pi G}\\bigg[\\frac{\\Lambda}{3}+\nH^2\\bigg(1-\\alpha \\frac{\\delta}{2-\\delta} H^{2(1-\\delta)}\\bigg)\\bigg],\\\\\np_{DE}&=-\\frac{1}{8 \\pi G}\\bigg[\\Lambda \n+2 \\dot{H}(1-\\alpha\\delta\n
H^{2(1-\\delta)})\n+3H^2\\bigg(1-\\alpha\\frac{\\delta}{2-\\delta}H^{2(1-\\delta)}\\bigg)\\bigg],\n\\end{align}\nwhere $\\alpha= (4 \\pi)^{\\delta-1}\\tilde{\\alpha}$, and $\\Lambda$ is a constant related to the present values of $H_0, \\rho_{m0}$ and $\\rho_{r0}$, given by\n\\begin{align}\n\\Lambda&=\\frac{3\\alpha\\delta}{2-\\delta}H_0^{2(2-\\delta)}-8\\pi G(\\rho_{m0}\n+\\rho_{r0}).\n\\end{align}\nThus, the equation of state parameter for the TECE model is evaluated to be\n\\begin{align}\nw_{DE} = \\frac{p_{DE}}{\\rho_{DE}}\n=-1 + \\frac{2\\dot{H}(\\alpha \\delta H^{2(1-\\delta)}-1)}\n{3H^2\\bigg(1-\\frac{\\alpha \\delta}{2-\\delta}H^{2(1-\\delta)}\\bigg)+\\Lambda}.\n\\end{align}\n\n\n\\section{Cosmological Evolutions}\n\\label{sec:observation}\n\n\\subsection{Power Law Corrected Entropy Model}\nSince $\\rho_{DE}$ and $w_{DE}$ are determined by the Hubble parameter $H(z)$, we use the Newton-Raphson method~\\cite{press:2007} to obtain the cosmological evolutions of the PLCE model. \n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.45 \\linewidth, angle=270]{W.eps}\n\t\\caption{Evolutions of the equation-of-state parameter $w_{DE}$ in $\\Lambda$CDM and PLCE models.} \n\t\\label{fg:W1}\n\\end{figure}\n\nBecause the PLCE model reduces to $\\Lambda$CDM when $\\nu = 0$, we choose $\\nu =\\pm0.02$ to compare the differences between the two models. We also take a larger value of $\\nu = 0.2$ to check the sensitivity to $\\nu$. The results in Fig.~\\ref{fg:W1} show that $w_{DE}$ does not overlap or cross $-1$ for any non-zero value of $\\nu$.
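The two equation-of-state expressions derived above translate directly into code. The sketch below (arbitrary units, illustrative parameter values) also makes the $\Lambda$CDM limits explicit: $w_{DE}=-1$ for $\nu=0$ in PLCE and for $\delta=\tilde{\alpha}=1$ in TECE.

```python
def w_de_plce(h, h_dot, nu, r_c, lam):
    # w = -1 - nu * Hdot * H^(nu-2) / (3(H^nu - 1) + Lambda * r_c^(2-nu))
    return -1 - nu * h_dot * h ** (nu - 2) / (3 * (h**nu - 1) + lam * r_c ** (2 - nu))

def w_de_tece(h, h_dot, alpha, delta, lam):
    # w = -1 + 2*Hdot*(alpha*delta*H^(2(1-delta)) - 1)
    #        / (3 H^2 (1 - alpha*delta/(2-delta) * H^(2(1-delta))) + Lambda)
    hp = h ** (2 * (1 - delta))
    num = 2 * h_dot * (alpha * delta * hp - 1)
    den = 3 * h**2 * (1 - alpha * delta / (2 - delta) * hp) + lam
    return -1 + num / den

# LambdaCDM limits (toy values for H, Hdot and Lambda):
w0 = w_de_plce(1.5, -0.1, 0.0, 1.0, 0.7)  # nu = 0            -> w = -1
w1 = w_de_tece(1.5, -0.1, 1.0, 1.0, 0.7)  # alpha = delta = 1 -> w = -1
```

For a decelerating-to-accelerating history one would evaluate these along a solved $H(z)$, which is what the Newton-Raphson treatment in the text provides.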
In addition, it maintains its value in the early universe, and only trends to -1 for $z<2$.\n\n\\begin{figure}[b]\n\t\\centering\n\t\\includegraphics[width=0.45 \\linewidth, angle=270]{0CMB-TT.eps}\n\t\\caption{CMB power spectra of the TT mode in $\\Lambda$CDM and PLCE models along with the Planck 2018 data.}\n\t\\label{fg:TT1}\n\\end{figure}\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.45 \\linewidth, angle=270]{0CMB-diffTT.eps}\t\n\t\\caption{\n\t\tThe change $\\Delta D_{\\ell}^{TT}$ of the TT mode of CMB power spectra between PLCE and $\\Lambda$CDM, where the legend is the same as Fig.~\\ref{fg:W1}.}\n\t\\label{fg:DIFTT1}\n\\end{figure}\n\nIn Fig.~\\ref{fg:TT1}, we display the CMB power spectra in the $\\Lambda$CDM and PLCE models\nalong with the data from Planck 2018.\nSince the TT spectra of PLCE and $\\Lambda$CDM are almost identical to the data from Planck 2018 for the high values of the multipole $l$, \nwe focus on the differences between the two models and the data when $l<100$ as depicted in Fig.~\\ref{fg:DIFTT1}. The TT power spectrum in the PLCE model for $\\nu > 0$ is larger than that of $\\Lambda$CDM when $l < 100$ with the error in the allowable range of the observational data.\n\nFor the TE mode, the spectra in PLCE for the different parameters $\\nu$ are always close to that in $\\Lambda$CDM as well as the observational data of Planck 2018, as shown in Fig.~\\ref{fg:TE1}. 
However, when we carefully compare the differences between the results in PLCE and $\\Lambda$CDM in Fig.~\\ref{fg:DIFTE1}, we notice that those of PLCE are closer to the Planck 2018 data than those of $\\Lambda$CDM.\n\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.45 \\linewidth, angle=270]{0CMB-TE.eps}\n\t\\caption{CMB TE power spectra in the $\\Lambda$CDM and PLCE models along with the Planck 2018 data.} \n\t\\label{fg:TE1}\n\\end{figure}\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.45 \\linewidth, angle=270]{0CMB-diffTE.eps}\t\n\t\\caption{\n\t\t The change $\\Delta D_{\\ell}^{TE}$ of the TE mode of CMB power spectra between PLCE and $\\Lambda$CDM, where the legend is the\n\t same as Fig.~\\ref{fg:TE1}.} \n\t\\label{fg:DIFTE1}\n\\end{figure}\n\n\n\n\\subsection{Tsallis Entropy Cosmological Evolution Model}\n\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.45 \\linewidth, angle=270]{W1-TECE.eps}\n\n\t\\caption{Evolutions of the equation-of-state parameter $w_{DE}$ in $\\Lambda$CDM and TECE models.}\n\t\\label{fg:W2}\n\\end{figure}\n\tEq.~\\eqref{eq:stece} of TECE becomes the one in $\\Lambda$CDM when $\\delta=\\tilde{\\alpha}=1$. In our study, we only focus on the effects\n\twhen $\\delta \\neq 1$, so we set $\\tilde{\\alpha}=1$ and $\\delta=1+\\xi$. In Fig.~\\ref{fg:W2}, we find that the equation of state, $w_{DE}$, behaves differently for different values of $\\xi$. In particular, it is larger (smaller) than $-1$ when $\\xi$ is larger (smaller) than zero, without crossing $-1$ at any time.\n\nIn Figs.~\\ref{fg:TT2} and ~\\ref{fg:DIFTT2}, we see that the TT power spectra of TECE and $\\Lambda$CDM differ significantly at large scales. Note that there is a significant discrepancy between $\\Lambda$CDM and the data at $l\\sim 20-27$. However, the spectrum of TECE \nfor $\\xi=0.002$ and $l\\sim 20-27$ is below that in $\\Lambda$CDM, and closer to the observational data of Planck 2018.\n
The shifts of the\nTE mode between TECE and $\\Lambda$CDM are shown in Figs.~\\ref{fg:TE2} and\n\\ref{fg:DIFTE2}.\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.45 \\linewidth, angle=270]{0CMB-TT-TECE.eps}\n\t\\caption{Legend is the same as Fig.~\\ref{fg:TT1} but in the TECE model\n\t\twith a set of $\\xi$.} \n\t\\label{fg:TT2}\n\\end{figure}\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.45 \\linewidth, angle=270]{0CMB-diffTT-TECE.eps}\t\n\t\\caption{\n\t\tLegend is the same as Fig.~\\ref{fg:DIFTT1} but in the TECE model\n\t\twith a set of $\\xi$.} \n\t\\label{fg:DIFTT2}\n\\end{figure}\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.45 \\linewidth, angle=270]{0CMB-TE-TECE.eps}\n\t\\caption{Legend is the same as Fig.~\\ref{fg:TE1} but in the TECE model\n\t\twith a set of $\\xi$.} \n\t\\label{fg:TE2}\n\\end{figure}\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.45 \\linewidth, angle=270]{0CMB-diffTE-TECE.eps}\t\n\t\\caption{\n\t\tLegend is the same as Fig.~\\ref{fg:DIFTE1} but in the TECE model\n\t\twith a set of $\\xi$.}\n\t\\label{fg:DIFTE2}\n\\end{figure}\n\n\\subsection{Global Fits}\n\nWe use the modified ${\\bf CAMB}$ and {\\bf CosmoMC} programs~\\cite{Lewis:2002ah} to perform the global cosmological\nfits for the PLCE and TECE models from the observational data with the MCMC method.\nThe dataset includes those of the CMB temperature fluctuation from {\\it Planck 2015} with TT, TE, EE, low-$l$\npolarization and CMB lensing from SMICA~\\cite{Adam:2015wua, Aghanim:2015xee, Ade:2015zua}, the weak lensing (WL) data from CFHTLenS~\\cite{Heymans:2013fya}, and the BAO data from 6dF Galaxy Survey~\\cite{Beutler:2011hx} and BOSS~\\cite{Anderson:2013zyy}.\nIn particular, we include 35 points for the $H(z)$ measurements in our fits, which are listed in Table~\\ref{tab:0}.\nThe $\\chi^2$ fit is given\n
by\n\\begin{eqnarray}\n\\label{eq:chi}\n{\\chi^2}={\\chi^2_{CMB}}+{\\chi^2_{WL}}+{\\chi^2_{BAO}}+{\\chi^2_{H(z)}},\n\\end{eqnarray}\nwith\n\\begin{eqnarray}\n\\chi^2_c = \\sum_{i=1}^n \\frac{(T_c(z_i) - O_c(z_i))^2}{E^i_c} \\,,\n\\end{eqnarray}\nwhere the subscript of ``$c$\" denotes the category of the data, $n$ represents the number of the dataset, \n$T_c$ is the prediction from {\\bf CAMB}, and $O_c$ ($E_c$) corresponds to the observational value (covariance). The priors of the various \ncosmological parameters are given in Table~\\ref{tab:1}.\n\n\\begin{table}[!hbp]\n\t\\caption{$H(z)$ data points}\n\t\\begin{tabular}{|c|c|c|c||c|c|c|c||c|c|c|c|}\n\t\t\\hline\n\t\t\\ & $z$ & $H(z)$ & Ref. & \\ & $z$ & $H(z)$ & Ref. & \\ & $z$ & $H(z)$ & Ref. \\\\\n\t\t\\hline\n\t\t~1~ & 0.07 & 69.0$\\pm$19.6 & ~\\cite{Zhang:2012mp} &\n\t\t~13~ & 0.4 & 95.0$\\pm$17.0 &~\\cite{Simon:2004tf} &\n\t\t~25~ & 0.9 & 117.0$\\pm$23.0 & ~\\cite{Simon:2004tf} \\\\\n\t\t\\hline\n\t\t2 & 0.09 & 69.0$\\pm$12.0 & ~\\cite{Jimenez:2003iv} & 14 & 0.4004 & 77.0$\\pm$10.2 &~\\cite{Moresco:2016mzx} &26 & 1.037 & 154.0$\\pm$20.0 & ~\\cite{Moresco:2012jh}\\\\\n\t\t\\hline\n\t\t3 & 0.12 & 68.6$\\pm$26.2 & ~\\cite{Zhang:2012mp} &15 & 0.4247 & 87.1$\\pm$11.2 & ~\\cite{Moresco:2016mzx}&27 & 1.3 & 168.0$\\pm$17.0 & ~\\cite{Simon:2004tf}\\\\\n\t\t\\hline\n\t\t4 & 0.17 & 83.0$\\pm$8.0 & ~\\cite{Simon:2004tf} &16 & 0.4497 & 92.8$\\pm$12.9 & ~\\cite{Moresco:2016mzx}& 28 & 1.363 & 160.0$\\pm$33.6 & ~\\cite{Moresco:2015cya}\\\\\n\t\t\\hline\n\t\t5 & 0.179 & 75.0$\\pm$4.0 & ~\\cite{Moresco:2012jh} &17 & 0.4783 & 80.9$\\pm$9.0 & ~\\cite{Moresco:2016mzx}& 29 & 1.43 & 177.0$\\pm$18.0 & ~\\cite{Simon:2004tf}\\\\\n\t\t\\hline\n\t\t6 & 0.199 & 75.0$\\pm$5.0 & ~\\cite{Moresco:2012jh} &18 & 0.48 & 97.0$\\pm$62.0 &~\\cite{Stern:2009ep} & 30 & 1.53 & 140.0$\\pm$14.0 & ~\\cite{Simon:2004tf}\\\\\n\t\t\\hline\n\t\t7 & 0.2 & 72.9$\\pm$29.6 & ~\\cite{Zhang:2012mp} &19 & 0.57 & 92.4$\\pm$4.5 & ~\\cite{Reid:2012sw}&31 & 1.75 & 
202.0$\\pm$40.0 & ~\\cite{Simon:2004tf} \\\\\n\t\t\\hline\n\t\t8 & 0.27 & 77.0$\\pm$14.0 & ~\\cite{Simon:2004tf} &20 & 0.5929 & 104.0$\\pm$13.0 & ~\\cite{Moresco:2012jh} & 32 & 1.965 & 186.5$\\pm$50.4 & ~\\cite{Moresco:2015cya}\\\\\n\t\t\\hline\n\t\t9 & 0.24 & 79.69$\\pm$2.65 & ~\\cite{Gaztanaga:2008xz} &21 & 0.6797 & 92.0$\\pm$8.0 & ~\\cite{Moresco:2012jh}& 33 & 2.3 & 224$\\pm$8 & ~\\cite{Busca:2012bu}\\\\\n\t\t\\hline\n\t\t10 & 0.28 & 88.8$\\pm$36.6 & ~\\cite{Zhang:2012mp} &22 & 0.7812 & 105.0$\\pm$12.0 & ~\\cite{Moresco:2012jh}&34 & 2.34 & 222$\\pm$7 & ~\\cite{Hu:2014vua}\\\\\n\t\t\\hline\n\t\t11 & 0.352 & 83.0$\\pm$14.0 & ~\\cite{Moresco:2012jh} & 23 & 0.8754 & 125.0$\\pm$17.0 & ~\\cite{Moresco:2012jh}& 35 & 2.36 & 226$\\pm$8 &~\\cite{Font-Ribera:2013wce} \\\\\n\t\t\\hline\n\t\t~12~ & 0.3802 & 83.0$\\pm$13.5 & ~\\cite{Moresco:2016mzx} &24 & 0.88 & 90.0$\\pm$40.0 &~\\cite{Stern:2009ep}& & & & \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\label{tab:0}\n\\end{table}\n\n\n\\begin{table}[ht]\n\t\\begin{center}\n\t\t\\caption{ Priors for cosmological parameters in the PLCE and TECE models. 
}\n\t\t\\begin{tabular}{|c||c|} \\hline\n\t\t\tParameter & Prior\n\t\t\t\\\\ \\hline\n\t\t\tPLCE Model parameter $\\nu$& $-0.025 \\leq \\nu \\leq 1.0$\n\t\t\t\\\\ \\hline\n\t\t\tTECE Model parameter $\\xi$& $-0.01 \\leq \\xi \\leq 0.02$\n\t\t\t\\\\ \\hline\n\t\t\tBaryon density & $0.5 \\leq 100\\Omega_bh^2 \\leq 10$\n\t\t\t\\\\ \\hline\n\t\t\tCDM density & $0.1 \\leq 100\\Omega_ch^2 \\leq 99$\n\t\t\t\\\\ \\hline\n\t\t\tOptical depth & $0.01 \\leq \\tau \\leq 0.8$\n\t\t\t\\\\ \\hline\n\t\t\tNeutrino mass sum& $0 \\leq \\Sigma m_{\\nu} \\leq 2$~eV\n\t\t\t\\\\ \\hline\n\t\t\t$\\frac{\\mathrm{Sound \\ horizon}}{\\mathrm{Angular \\ diameter \\ distance}}$ & $0.5 \\leq 100 \\theta_{MC} \\leq 10$\n\t\t\t\\\\ \\hline\n\t\t\tScalar power spectrum amplitude & $2 \\leq \\ln \\left( 10^{10} A_s \\right) \\leq 4$\n\t\t\t\\\\ \\hline\n\t\t\tSpectral index & $0.8 \\leq n_s \\leq 1.2$\n\t\t\t\\\\ \\hline\n\t\t\\end{tabular}\n\t\n\t\t\\label{tab:1}\n\t\\end{center}\n\\end{table}\n\n\n\n\n\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.96 \\linewidth]{PLCE03.eps}\n\t\\caption{One and two-dimensional distributions of $\\Omega_b h^2$, $\\Omega_c h^2$, $\\sum m_\\nu$, $\\nu$, $H_0$, and $\\sigma_8$ in the PLCE and $\\Lambda$CDM models, where the contour lines represent 68$\\%$~ and 95$\\%$~ C.L., respectively.}\n\t\\label{fg:P}\n\\end{figure}\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.96 \\linewidth]{TECE03.eps}\n\t\\caption{Legend is the same as Fig.~\\ref{fg:P} but for the TECE\n\t\tand $\\Lambda$CDM models.}\n\t\\label{fg:T}\n\\end{figure}\n\n\\begin{table}[h]\n\t\\begin{center}\n\t\t\\caption{Fitting results for the PLCE and $\\Lambda$CDM models, where the limits are given at 68$\\%$ and 95$\\%$ C.L., respectively }\n\t\t\\begin{tabular} {|c|c|c|c|c|}\n\t\t\t\\hline\n\t\t\tParameter & PLCE (68\\% C.L.)& PLCE (95\\% C.L.) 
& $\\Lambda$CDM (68\\% C.L.)& $\\Lambda$CDM (95\\% C.L.)\\\\\n\t\t\t\\hline\n {\\boldmath$\\Omega_b h^2 $} & $0.02237\\pm 0.00014 $& $0.02237 \\pm 0.00027 $& $0.02235\\pm 0.00014 $& $0.02235^{+0.00028}_{-0.00027}$\\\\\n \n {\\boldmath$\\Omega_c h^2 $} & $0.1172^{+0.0012}_{-0.0011}$& $0.1172^{+0.0022}_{-0.0023}$& $0.1173\\pm 0.0012 $& $0.1173 \\pm 0.0023 $\\\\\n \n {\\boldmath$100\\theta_{MC} $} & $1.04101\\pm 0.00030 $& $1.04101 \\pm 0.00059$& $1.04100\\pm 0.00029 $& $1.04100^{+0.00057}_{-0.00058}$\\\\\n \n {\\boldmath$\\tau $} & $0.079^{+0.017}_{-0.019} $& $0.079^{+0.036}_{-0.034} $& $0.078\\pm 0.018 $& $0.078^{+0.035}_{-0.034} $\\\\\n \n {\\boldmath$\\Sigma m_\\nu $\/eV} & $< 0.0982 $& $< 0.183 $& $< 0.100 $ & $< 0.195 $\\\\\n \n {\\boldmath$\\nu $} & $0.0240^{+0.0110}_{-0.0085} $& $0.024^{+0.022}_{-0.033} $&$-$&$-$\\\\\n \n {\\boldmath${\\rm{ln}}(10^{10} A_s)$} & $3.086^{+0.031}_{-0.035} $& $3.086^{+0.068}_{-0.063} $& $3.083\\pm 0.033 $& $3.083^{+0.066}_{-0.064} $\\\\\n \n $H_0 $ & $67.96\\pm 0.56 $& $68.0 \\pm 1.1 $& $68.14\\pm 0.55 $& $68.1 \\pm 1.1 $\\\\\n \n $\\sigma_8 $ & $0.814^{+0.013}_{-0.011} $& $0.814^{+0.023}_{-0.026} $ & $0.815^{+0.013}_{-0.011} $& $0.815^{+0.023}_{-0.025} $\\\\\n \\hline\n\t\t\t$\\chi^2_{best-fit} $& \\multicolumn{2}{c|}{3017.12}& \\multicolumn{2}{|c|}{3018.32}\\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\t\\label{tab:2}\n\t\\end{center}\n\\end{table}\n\n\\begin{table}[h]\n\t\\begin{center}\n\t\t\\caption{Fitting results for the TECE and $\\Lambda$CDM models, where the limits are given at 68$\\%$ and 95$\\%$ C.L., respectively }\n\t\t\\begin{tabular} {|c|c|c|c|c|}\n\t\t\t\\hline\n\t\t\tParameter & TECE (68\\% C.L.)& TECE (95\\% C.L.) 
& $\\Lambda$CDM (68\\% C.L.)& $\\Lambda$CDM (95\\% C.L.)\\\\\n\t\t\t\\hline\n\t\t\n\t\t\t{\\boldmath$\\Omega_b h^2 $} & $0.02226\\pm 0.00016 $& $0.02226^{+0.00033}_{-0.00032}$ & $0.02236\\pm 0.00014 $& $0.02236^{+0.00028}_{-0.00027}$\\\\\n\t\t\t\n\t\t\t{\\boldmath$\\Omega_c h^2 $} & $0.1174\\pm 0.0013 $& $0.1174 \\pm 0.0025$ & $0.1173\\pm 0.0012 $& $0.1173 \\pm 0.0023$\\\\\n\t\t\t\n\t\t\t{\\boldmath$100\\theta_{MC} $} & $1.04125\\pm 0.00036 $& $1.04125^{+0.00073}_{-0.00069}$& $1.04099\\pm 0.00031 $& $1.04099^{+0.00062}_{-0.00060}$\\\\\n\t\t\t\n\t\t\t{\\boldmath$\\tau $} & $0.090\\pm 0.021 $& $0.090 \\pm 0.042 $ & $0.079\\pm 0.018 $ & $0.079^{+0.037}_{-0.035} $\\\\\n\t\t\t\n\t\t\t{\\boldmath$\\Sigma m_\\nu $\/eV} & $< 0.186 $& $< 0.317 $ & $< 0.107 $& $< 0.195 $\\\\\n\t\t\t\n\t\t\t{\\boldmath$\\xi $} & $0.00038\\pm 0.00027 $& $0.00038^{+0.00055}_{-0.00053}$&$-$&$-$\\\\\n\t\t\t\n\t\t\t{\\boldmath${\\rm{ln}}(10^{10} A_s)$} & $3.103\\pm 0.039 $& $3.103 \\pm 0.076 $& $3.085\\pm 0.034 $& $3.085^{+0.068}_{-0.065} $\\\\\n\t\t\t\n\t\t\t$H_0 $ & $68.42\\pm 0.71 $& $68.4 \\pm 1.4 $ & $68.05^{+0.60}_{-0.54} $ & $68.1^{+1.1}_{-1.2} $\\\\\n\t\t\t\n\t\t\t$\\sigma_8 $ & $0.814^{+0.017}_{-0.014} $& $0.814^{+0.028}_{-0.032} $& $0.814^{+0.014}_{-0.012} $ & $0.814^{+0.024}_{-0.027} $\\\\\n\t\t\t\\hline\n\t\t\t$\\chi^2_{best-fit} $& \\multicolumn{2}{c|}{3018.96}& \\multicolumn{2}{|c|}{3019.28}\\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\t\\label{tab:3}\n\t\\end{center}\n\\end{table}\n\n\n\nIn Fig.~\\ref{fg:P}, we present our fitting results for the PLCE (red) and $\\Lambda$CDM (blue) models.\nAlthough the PLCE model has been discussed in the literature, this is the first time that its numerical cosmological effects have been illustrated.\nIn particular, we find that $\\nu$ = $( 0.0240^{+0.0110}_{-0.0085})$ at 68$\\%$~C.L., which shows that PLCE can be clearly distinguished from $\\Lambda$CDM.\nIt is interesting to note that the value of $\\sigma_8$=$0.814^{+0.023}_{-0.026} $ (95$\\%$~C.L.) 
in PLCE\nis smaller than the value of $0.815^{+0.023}_{-0.025}$ (95$\\%$~C.L.) in $\\Lambda$CDM. \nAs shown in Table~\\ref{tab:2}, the best-fit $\\chi^2$ value in PLCE is 3017.12, which is also smaller than 3018.32 in $\\Lambda$CDM.\nAlthough the cosmological observables for the best $\\chi^2$ fit in PLCE do not significantly deviate from those in $\\Lambda$CDM,\nthe smaller $\\chi^2$ indicates that the PLCE model fits the observational data slightly better than $\\Lambda$CDM.\n\nSimilarly, we show our results for the TECE (red) and $\\Lambda$CDM (blue) models\nin Fig.~\\ref{fg:T}. Explicitly, we obtain $\\xi$ = $( 3.8\\pm {2.7})\\times 10^{-4}$ \nat 68$\\%$~C.L.\nIn addition, the TECE model relaxes the limit on the total mass of the active neutrinos.\nIn particular, we have $\\Sigma m_\\nu$ $< 0.317$ eV, compared to $\\Sigma m_\\nu$ $< 0.195$ eV in $\\Lambda$CDM at 95$\\%$~C.L.\nMoreover, the value of $H_0$ in TECE is $68.42\\pm {0.71}$ $(68.4\\pm {1.4})$, which is larger than $68.05^{+0.60}_{-0.54}$ $(68.1^{+1.1}_{-1.2})$ in $\\Lambda$CDM at 68$\\%$ (95$\\%$)~C.L.\n\nAs shown in Table \\ref{tab:3}, the best-fit $\\chi^2$ value in the TECE\nmodel is 3018.96, which is smaller than 3019.28 in the $\\Lambda$CDM model.\nAlthough the difference between the $\\chi^2$ values in TECE and $\\Lambda$CDM is not significant,\nit still implies that the TECE model cannot be ignored. 
Clearly, more considerations and discussions are needed in the future.\n\n\n\n\\section{Conclusions}\n\nWe have calculated the cosmological evolution of $\\rho_{DE}$ and $w_{DE}$ in the PLCE and TECE models.\nWe have found that the EoS of dark energy in PLCE (TECE) does not cross $-1$.\nWe have shown that the CMB TE power spectrum of the PLCE model with a positive $\\nu$ is closer to the Planck 2018 data than that in $\\Lambda$CDM, while the CMB TT spectrum in the TECE model has smaller values around $l\\sim 20-27$ than that in $\\Lambda$CDM, which are close to the data of Planck 2018.\nBy using the Newton method in the global fitting, we have obtained the first numerical result in the PLCE model, $\\nu=0.0240^{+0.0110}_{-0.0085} $\nat 68$\\%$ C.L., which is well distinguished from $\\Lambda$CDM. Our fitting results indicate that the PLCE model gives a smaller value of $\\sigma_8$ with a better $\\chi^2$ value than $\\Lambda$CDM.\nIn the TECE model, we have obtained $\\xi=(3.8\\pm2.7)\\times 10^{-4}$ and $\\Sigma m_\\nu$ $< 0.186$ eV at 68$\\%$ C.L., while $H_0$ is closer to 70. \nThe best-fit value of $\\chi^2$ is 3018.96 in the TECE model, which is smaller than 3019.28 in $\\Lambda$CDM. These results have demonstrated that the TECE model deserves more attention and research in the future.\n\n\n\n\n\n\n\\begin{acknowledgments}\nThis work was supported in part by National Center for Theoretical Sciences and\nMoST (MoST-107-2119-M-007-013-MY3).\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n\nUltra-compact H{\\small II} (UCH{\\small II}) regions are small nebulae that surround massive, young stars that are still embedded within a natal cloud \\citep{2002ARA&A..40...27C, 2007prpl.conf..181H}. UCH{\\small II} regions represent an important evolutionary stage in the formation of massive stars. 
They may correspond to the still embedded accretion phase while the star is already ionising its surroundings. Their study is critical to our understanding of how stars manage to reach high masses despite the high pressure generated by their growing H{\\small II} regions. UCH{\\small II} regions are amongst the brightest Galactic objects at submillimetre wavelengths and can be used as a probe of star formation in distant galaxies. Thus, they make excellent candidates both for probing all phases of massive star formation and for testing new instruments.\n\n\\object{G29.96-0.02} is a UCH{\\small II} region \\citep{1989ApJS...69..831W,2002A&A...381..571P} located at a distance of $8.9^{+0.6}_{-0.09}$\\,kpc \\citep{2004ApJS..154..553S}. The bright IR source at the centre of G29.96-0.02 is an O5-6 star \\citep{1997ApJ...490L.165W, 2003A&A...405..175M}, with a luminosity of $L_{bol} \\sim 3-4 \\times10^{6}$\\,L$_{\\odot}$ for the \\citet{1997ApJ...490L.165W} temperature limits at the \\citet{2004ApJS..154..553S} distance. However, models indicate that a cluster of young stars must be present in G29.96-0.02 to account for its observed luminosity \\citep{2003MNRAS.340..799L}. \n\nAt the head of the cometary-shaped radio emission \\citep{1989ApJS...69..831W} is a small cluster of hot molecular cores ($\\sim$300\\,K) with masses of 3-11\\,M$_{\\odot}$ \\citep{2007A&A...468.1045B}. These arcsec-sized structures are embedded within an arcmin-sized submillimetre clump called \\object{G29.956-0.017SMM} \\citep{2006A&A...453.1003T}. \\citet{1995ApJ...453..308F} suggested that the cometary shape of the UCH{\\small II} region is caused by its expansion westwards into the denser part of the surrounding clump. This clump is part of a larger complex that has been mapped by {\\it Herschel}-SPIRE\/PACS as part of the Hi-GAL key programme \\citep[][Beltran et al. 
in prep.]{2010molinari}.\n\nIn this paper we present new submillimetre spectra taken towards \\object{G29.956-0.017SMM} (hereafter G29) with the ESA {\\it Herschel} Space Observatory \\citep{2010herschel} as part of the ``Evolution of Interstellar Dust'' key programme \\citep{2010sag4}. The data were taken with the SPIRE Fourier-Transform Spectrometer (FTS). The SPIRE instrument, its in-orbit performance, and its scientific capabilities are described by \\citet{2010spire}, and the SPIRE astronomical calibration methods and accuracy are outlined by \\citet{2010spirecal}.\n\n\\section{Observations}\n\\label{obs}\n\n\n\\begin{figure}\n\\centering{\n\t\\includegraphics[width=0.90\\columnwidth]{14625fg1.eps}\n}\n\\caption{\\label{fig1} SPIRE FTS SLWC3 (red) and SSWD4 (blue\/grey) spectra of G29. \\textbf{Upper:} On-source spectrum with continuum, point-source calibration. \\textbf{Middle:} As above with continuum subtracted. Major lines are annotated. \\textbf{Lower:} Mean off-source spectrum, extended calibration. See text for details.}\n\\end{figure}\n\nG29 was observed with the high-resolution mode of the SPIRE FTS on 13 September, 2009 at 19:49 ({\\it Herschel} observation ID, 1342183824). Two scan repetitions were observed giving an on-source integration time of 266.4 seconds. The pointing centre was at a Right Ascension and Declination (J2000) of 18$^{h}$46$^{m}$04.07$^{s}$, $-$02\\degr 39\\arcmin 21\\farcs88 and Galactic coordinates of 29.9561, $-$0.01726. The unapodized spectral resolution was 1.2\\,GHz (0.04\\,cm$^{-1}$), after apodization \\citep[using extended Norton-Beer function 1.5, ][]{2007naylor} this became 2.17\\,GHz.\n\nThe SPIRE FTS measures the Fourier transform of the source spectrum across short (SSW, 194-313\\,$\\mu$m) and long (SLW, 303-671\\,$\\mu$m) wavelength bands simultaneously. Each waveband is imaged with a hexagonal bolometer array with pixel spacing of approximately twice the beam-width. 
The FWHM beam-widths of the SSW and SLW arrays vary between 17-21\\arcsec\\ and 29-42\\arcsec\\ respectively. The source spectrum, including the continuum, is restored by taking the inverse transform of the observed interferogram. For details of the SPIRE FTS and its calibration see \\citet{2010spire} and \\citet{2010spirecal}.\n\n\n\n\n\n\\section{Results}\n\\label{res}\n\n\\begin{table}\n\\caption{\\label{tab1} Best Fit Line Parameters}\n\\centering{\n\\begin{tabular}{lll r@{.}l c r@{ $\\pm$ }l }\n\\hline\n\nSpecies & Line & Band & \\multicolumn{2}{c}{$\\nu$} & $\\nu_{offset}$ & \\multicolumn{2}{c}{$S_{peak}$} \\\\\n\n\t&\t & & \\multicolumn{2}{c}{[GHz]} & [GHz] & \\multicolumn{2}{c}{[Jy]} \\\\\n\t\t\\hline\n\n12CO & 4-3 & SLW & 461&0 & 0.48 & 57.1 & 12.6 \\\\\n\t& 5-4 & SLW & 576&3 & 0.09 & 61.6 & 5.8 \\\\ \n\t& 6-5 & SLW & 691&5 & 0.36 & 80.7 & 1.1 \\\\\n\t& 7-6 & SLW & 806&7 & 0.46 & 154 & 2 \\\\\n\t& 8-7 & SLW & 921&8 & 0.45 & 154 & 4 \\\\ \n\t& 9-8 & SSW & 1037& & 0.51 & 140 & 6 \\\\\n\t& 10-9 & SSW & 1152& & 0.46 & 170 & 7 \\\\\n\t& 11-10 & SSW & 1267& & 0.53 & 179 & 15 \\\\\n\t& 12-11 & SSW & 1382& & 0.70 & 183 & 7 \\\\\n\t& 13-12 & SSW & 1497& & 0.71 & 192 & 14 \\\\\n\n13CO & 5-4 & SLW & 550&9 & 0.52 & 22.8 & 3.4 \\\\\n\t& 6-5 & SLW & 661&1 & 0.74 & 32.9 & 3.0 \\\\\n\t& 7-6 & SLW & 771&2 & 0.61 & 25.9 & 4.6 \\\\\n\t& 8-7 & SLW & 881&3 & 0.82 & 65.7 & 3.8 \\\\\n\t\n {[}C~{\\sc i}] & & SLW & 492&2 & -0.08 & 29.9 & 6.5 \\\\\n {[}C~{\\sc i}] & & SLW & 809&3 & 0.50 & 26.8 & 3.3 \\\\\n\t\t\\hline\n\t\\end{tabular}\n}\n\\end{table}\n\n\nThe upper panel of Fig. \\ref{fig1} shows the apodized SLW (red) and SSW (grey) spectra. Only the data from the central bolometer (C3 for the SLW and D4 for the SSW) as calibrated for a point source are shown for each spectrum. The spectra are dominated by the thermal continuum. Superimposed on this are a series of bright lines, the most noticeable of which are the ladder of CO lines. 
The shape of the continuum was estimated by masking the CO lines and performing a linear regression of the form $\\log S_{\\nu} = C + p\\log\\nu$ to both spectra. The data were best fit by power-laws with indices of $p_{SSW}=2.71$ and $p_{SLW}=3.40$ respectively. There is a disconnect between the two spectra. The offset in power-law constants was $C_{SSW}-C_{SLW}=0.164$, equivalent to a linear factor of 1.45. The SSW spectrum was shifted upwards by this margin and replotted as the blue spectrum under the assumption that the discontinuity was due to differing structure within the SLW and SSW beams. The power-laws for each spectrum are plotted as black lines.\n\n\nThe Levenberg-Marquardt least-squares minimisation package MPFIT \\citep{2009ASPC..411..251M} was used to simultaneously fit a catalogue of lines and an 8th-order polynomial to each of the SLW and SSW spectra. It was assumed that the lines were Gaussian and that the linewidths were not resolved. The middle panel of Fig. \\ref{fig1} shows the spectra after the polynomial background has been subtracted. The best-fit line shapes are shown in red and blue for the SLW and SSW bands respectively. These show the same ladder of CO lines as the upper panel as well as their $^{13}$CO counterparts. The position of the $^{12}$CO and $^{13}$CO lines are annotated \\citep{1998JQSRT..60..883P}. The brightest non-CO lines are the 492 and 809\\,GHz lines of [C~{\\sc i}]. The 809\\,GHz [C~{\\sc i}] line is blended with the 806\\,GHz J=7-6 line of $^{12}$CO. Table \\ref{tab1} lists the transitions, reference frequencies, frequency offsets, and peak flux densities for each significant line fit. \n\n\\begin{figure}\n\\centering{\n\t\\includegraphics[width=0.9\\columnwidth]{14625fg2.eps}\n}\n\\caption{\\label{figRot} Population diagram for $^{12}$CO and $^{13}$CO towards G29. }\n\\end{figure}\n\nFigure \\ref{figRot} shows the rotational population diagram for $^{12}$CO in green and $^{13}$CO in blue. 
Following \\citet{1999ApJ...517..209G} we fit a rotational temperature to each species. The weighted linear fits and results are plotted for each species. The quoted errors are the errors on the fit and do not include uncertainties in beam size, calibration, and sub-structure within the beam. The $^{13}$CO lines come from a greater depth of material than the $^{12}$CO lines and will be more indicative of the temperature of the interior of the clump. The 7-6 transition of $^{13}$CO and the 9-8 transition of $^{12}$CO appear to be lower than the trends. This could be due to the structure in the beam changing between transitions (the beam FWHM changes by a factor of 2.5 between the CO 4-3 and 13-12 transitions). \n\n\\begin{figure}\n\\centering{\n\t\\includegraphics[width=\\columnwidth]{14625fg3.eps}\n}\n\\caption{\\label{fig2} {\\bf Left:} False-colour map of G29 showing GLIMPSE 4.5 (blue) and 8-$\\mu$m (green) \\citep{2003PASP..115..953B} and MAGPIS 20cm (red) \\citep{2006AJ....131.2525H}. {\\bf Right:} [N~{\\sc ii}] line flux towards G29. The circles show the position of the bolometers on the sky. The space between them has been interpolated using inverse distance weighting. The X marks the position of the G29.96-0.02 radio source \\citep[ 18$^{h}$46$^{m}$03.96$^{s}$, $-$02\\degr 39\\arcmin 21\\farcs5 ]{1989ApJS...69..831W}. The diamond marks the position of the NII peak. The contours are SCUBA 850\\,$\\mu$m emission (5, 10, 20, 50 and 100 times the local off-source rms) showing the extent of the G29.956-0.017SMM clump.}\n\\end{figure}\n\nThe region around G29 is shown in the left panel of Fig. \\ref{fig2} as a false-colour image - blue and green are GLIMPSE 4.5 and 8-$\\mu$m \\citep{2003PASP..115..953B} and red is MAGPIS 20cm \\citep{2006AJ....131.2525H} - and archival SCUBA 850\\,$\\mu$m contours. The X marks the position of the VLA radio emission associated with the UCH{\\small II} region \\citep{1989ApJS...69..831W}. 
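The rotational temperatures quoted in Fig.~\ref{figRot} come from a weighted linear fit to the population diagram. A minimal sketch of this kind of fit, run on synthetic data (the level energies, populations, and temperature below are illustrative values, not the measured ones):

```python
import numpy as np

def fit_rotation_temperature(E_upper_K, ln_Nu_gu, weights=None):
    """Weighted linear fit of a population diagram:
    ln(N_u/g_u) = intercept - E_u / T_rot, with E_u in Kelvin.
    Returns (T_rot, intercept)."""
    slope, intercept = np.polyfit(E_upper_K, ln_Nu_gu, 1, w=weights)
    return -1.0 / slope, intercept

# Synthetic check: levels populated at T_rot = 80 K should be recovered.
E_u = np.array([55.3, 83.0, 116.2, 155.0, 199.1])  # upper-level energies [K]
ln_pop = 30.0 - E_u / 80.0                         # exactly thermal populations
T_fit, _ = fit_rotation_temperature(E_u, ln_pop)   # recovers ~80 K
```

In the real fit, the weights would come from the measured line-flux uncertainties in Table~\ref{tab1}, and the quoted caveats about beam size and calibration apply on top of the formal fit errors.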
The spectra measured towards positions surrounding G29 show several notable differences from the on-source spectra. The lower panel of Fig. \\ref{fig1} shows the mean background-subtracted spectrum for co-aligned off-source bolometers. The 835\\,GHz line of CH$^+$ is seen in absorption in the SLW band and appears to be probing the ISM \\citep[see ][ for a detailed decomposition of the CH$^+$ line for this source]{2010naylor}. The deepest absorption feature is coincident with the 1.232\\,THz line of HF \\citep{2010neufeld}. The higher-order $^{12}$CO and $^{13}$CO lines are not detected in the SSW band, but the 1.46\\,THz fine structure line of [N~{\\sc ii}] is strongly detected. This line is present in the extended-source calibration, but not in the on-source point-source calibration. \n\nA fit to the [N~{\\sc ii}] line was performed for all bolometers of the SSW array. The results are plotted in the right panel of Fig. \\ref{fig2}. The coloured circle markers show the bolometer positions and the measured line intensities at those positions. For these early results, the bolometer array was not offset to create a fully sampled map. To compensate for this, the pixels between the bolometers have been interpolated using an inverse distance weighting (modified Shepard's) method. The southern [N~{\\sc ii}] peak is coincident with a region of 20cm emission that is bounded to the north by an 8-$\\mu$m filament, suggesting that it is an H{\\small II} region separate from the G29 UCH{\\small II} region. Its diameter is $\\sim$1\\arcmin\\ (2.5\\,pc at 8.9\\,kpc). The morphology around G29 is clearly complex and a more detailed, fully-sampled study will be required to disentangle the various components. 
As stated, this preliminary map was not fully sampled, but it does show the potential of the FTS as a mapping spectrometer.\n\n\\section{Spectral energy distribution}\n\\label{sed}\n\n\\begin{figure*}\n\\centering{\n\t\\includegraphics[width=0.77\\textwidth]{14625fg4.eps}\n}\n\\caption{\\label{fig3} The G29 spectral energy distribution. SPIRE SSW-D4\/SLW-C3 spectra are shown by the blue\/red lines. \nThe light and dark grey spectra are archival data from the ISO Short Wavelength Spectrograph \\citep[SWS]{1996A&A...315L..49D} and ISO Long Wavelength Spectrograph \\citep[LWS]{1996A&A...315L..38C} respectively and were originally published by \\citet{2002A&A...381..571P}. Archival MIPS SED mode data \\citep{2004ApJS..154...25R} are shown by the green line.\nThe error bars show IRAS 12.5-100\\,$\\mu$m \\citep[IRAS 18434-0242 is coincident with G29]{1988iras....7.....H}, SCUBA 450 and 850\\,$\\mu$m \\citep{2006A&A...453.1003T}, and IRTF 1.3\\,mm \\citep{1986A&A...154L...8C} data points. The FTS SED data have been shifted to the same calibration as the ISO LWS data. A best-fit SED is shown by the dashed line.\n}\n\\end{figure*}\n\nFigure \\ref{fig3} shows the spectral energy distribution (SED) measured towards G29. SPIRE FTS SLW-C3 and SSW-D4 spectra are shown in the same colours (red and blue) as Fig. \\ref{fig1}. A search was made for archival data coincident with G29. For consistency with the FTS data, the downloaded spectra and data points were converted into intensity units by dividing by their wavelength-dependent beam area or, where relevant, their aperture area. The archival data are plotted in Fig. \\ref{fig3} and described in the figure caption. \n\nFigure \\ref{fig3} does not include the relative correction applied in Fig. \\ref{fig1}. It was found that the FTS SLW data were marginally higher than the SCUBA 450$\\mu$m data point and that the first point of the FTS SSW spectrum was significantly higher than the ISO Longwave Spectrograph (LWS) final data point. 
It was assumed that the three sections of the long-wavelength SED (ISO LWS, FTS SSW, FTS SLW) followed the same single-temperature greybody and that the offsets between them were due to differences in their absolute calibration. A greybody equation was fitted to these three spectra, with the offsets between them accounted for by giving the flux from each section a multiplier coefficient. The greybody had the form\n\\begin{equation}\nI_{\\nu} = C_{X} B_{\\nu}(T) \\left(1- e^{ -(\\nu\/\\nu_{c})^{\\beta}}\\right)\n\\end{equation}\nwhere $C_{X}$ is the coefficient, $B_{\\nu}(T)$ is the Planck function at temperature $T$, $\\nu_c$ is the frequency where the emission becomes optically thin, and $\\beta$ is the dust emissivity index. The data were resampled into evenly spaced bins in log-wavelength space to prevent the differing density of data points between the ISO LWS and the SPIRE FTS corrupting the minimisation. \n\nThe best fit to the SED was $T=80.3\\pm0.6$\\,K, $\\beta=1.73\\pm0.02$, and $\\nu_{c}=20.0\\pm0.5$\\,THz (equivalent to $\\lambda=15.0\\pm0.4$\\,$\\mu$m). We note that the SED temperature is similar to the $^{13}$CO rotational temperature from Fig. \\ref{figRot}. The multiplicative coefficients for the FTS SSW and FTS SLW were found to be $9.3\\pm0.08$ and $4.04\\pm0.06$ respectively; the ISO LWS coefficient was fixed at 1.0. The SPIRE FTS spectra have been aligned to the ISO LWS calibration. A single-temperature greybody fits the data well longwards of $\\sim40$\\,$\\mu$m. However, fitting a single-component greybody to a complicated source like G29 can only yield a broadly characteristic temperature. The relatively flat SED shortwards of the peak shows that there must be several hotter temperature components. Nevertheless, Fig. \\ref{fig3} does show that the long-wavelength slope of the SPIRE FTS data is consistent with the slope of the literature data. 
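The fitted greybody can be evaluated directly from the equation above; a minimal sketch (not the actual fitting code) using the best-fit values quoted in the text:

```python
import numpy as np

H = 6.626e-34   # Planck constant [J s]
KB = 1.381e-23  # Boltzmann constant [J/K]
C = 2.998e8     # speed of light [m/s]

def planck(nu, T):
    """Planck function B_nu(T) [W m^-2 Hz^-1 sr^-1]."""
    return 2.0 * H * nu**3 / C**2 / (np.exp(H * nu / (KB * T)) - 1.0)

def greybody(nu, T, beta, nu_c, C_X=1.0):
    """Greybody of the fitted form: I_nu = C_X B_nu(T) (1 - exp(-(nu/nu_c)^beta))."""
    return C_X * planck(nu, T) * (1.0 - np.exp(-(nu / nu_c)**beta))

# Best-fit values quoted in the text.
T, beta, nu_c = 80.3, 1.73, 20.0e12
# Far above nu_c the emission saturates to the blackbody;
# far below nu_c it is suppressed by the (nu/nu_c)^beta opacity term.
```

This makes the role of $\nu_c$ explicit: it marks the transition between the optically thick (blackbody) and optically thin (modified-blackbody) regimes.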
\n\nIn the following paragraphs we adopt the distance of 8.9\\,kpc \\citep{2004ApJS..154..553S} for G29 and assume that the majority of the emission comes from a region comparable to, or smaller than, the Hershel beam at 250\\,$\\mu$m ($\\theta$=18\\arcsec) -- (at 850\\,$\\mu$m, 60\\% of the extended emission towards G29 is within the central SCUBA 14.7\\arcsec\\ beam, \\citep{2006A&A...453.1003T}). Based on the fitted dust temperature, G29's published 850\\,$\\mu$m peak flux density \\citep{2006A&A...453.1003T}, and making typical assumptions \\citep[e.g.][]{2005MNRAS.360.1506K} we estimate the mass of the G29 clump to be $M \\sim 1500$\\,M$_{\\odot}$. As with the fitted temperature, the actual mass will depend on the internal profile\/geometry of the clump, but we note that this mass is similar to the median mass (940\\,M$_\\odot$) of the infra-red dark clouds (IRDCs) from which high-mass stars are believed to form \\citep{2006ApJ...641..389R}.\n \nThe luminosity integrated under the fitted greybody in the range 2-2000\\,$\\mu$m is $L_{Dust} = 61 D^2 \\theta^2$~L$_{\\odot}$ where $D$ is the distance to the source in kpc and $\\theta$ is the diameter of the emitting area. The above assumptions give a luminosity of $L_{Dust} = 1.6\\times10^6$\\,L$_\\odot$. This agrees with the luminosity of 10$^{6}$\\,L$_{\\odot}$ estimated from IRAS measurements alone \\citep{1991ApJ...372..199W}. Likewise, the bolometric luminosity in the range 2-2000\\,$\\mu$m, interpolating to the fitted SED at $\\lambda>650$\\,$\\mu$m, is $L_{bol} = 154 D^2 \\theta^2$~L$_{\\odot}$. The greybody luminosity is $\\sim$40\\% of $L_{bol}$. At the assumed distance $L_{bol} = 4.0\\times10^6$\\,L$_{\\odot}$. The $L_{bol}$ of the clump containing the UCH{\\small II} should equal the luminosity of the driving sources if the region is in equilibrium. 
Our calculated $L_{bol}$ is at the upper limit of the range of luminosities for the identified single O-star in G29 \\citep[$3-4\\times 10^{6}$\\,L$_{\\odot}$, ][]{1997ApJ...490L.165W}, which may support the idea that the luminosity from more than one star is powering the reprocessed SED \\citep{2003MNRAS.340..799L}. \n\n\\section{Conclusions}\n\nWe have presented new SPIRE FTS 190-670\\,$\\mu$m spectra of the submillimetre clump G29.956-0.017SMM, which contains the G29.96-0.02 UCH{\\small II} region. The capabilities of the SPIRE FTS have allowed us to simultaneously observe both the dust continuum and prominent spectral lines towards G29. We have conducted basic line-fitting and shown the distribution of [N~{\\sc ii}] emission towards G29. While this preliminary map was not fully sampled, it does show at least one other H{\\small II} region neighbouring G29 and demonstrates the potential of the FTS as a mapping spectrometer. We have reconstructed the SED of G29 using the FTS spectra and archival measurements and have shown that the FTS calibration is broadly consistent with earlier observations. The combined data-set allowed us to fit a precise greybody with $T=80$\\,K and $\\beta=1.73$. Based on a distance of 8.9\\,kpc we estimated the mass of G29 to be approximately $1500$\\,M$_{\\odot}$. The calculated luminosity of the G29 clump is slightly greater than that of the known O-star at the centre of the UCH{\\small II} region. \n\n\\begin{acknowledgements} We thank D. Neufeld for identifying the 1.2\\,THz HF absorption feature. JMK acknowledges STFC funding while this work was carried out, under the auspices of the Cardiff Astronomy Rolling Grant. SPIRE has been developed by a consortium of institutes led by Cardiff University (UK) and including Univ. Lethbridge (Canada); NAOC (China); CEA, LAM (France); IFSI, Univ. Padua (Italy); IAC (Spain); Stockholm Observatory (Sweden); Imperial College London, RAL, UCL-MSSL, UKATC, Univ. 
Sussex (UK); and Caltech, JPL, NHSC, Univ. Colorado (USA). This development has been supported by national funding agencies: CSA (Canada); NAOC (China); CEA, CNES, CNRS (France); ASI (Italy); MCINN (Spain); Stockholm Observatory (Sweden); STFC (UK); and NASA (USA).\n\\end{acknowledgements}\n\n\\bibliographystyle{aa}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nA modern theory describing large-scale universe evolution is the $\\Lambda$CDM model. This model provides a satisfactory fit to the majority of evidence related to the formation and evolution of cosmic structures on large scales \\cite{DEL_POPOLO_2014, Spergel_2003, Komatsu_2011}, however, several\nproblems still remain unresolved. One of such problems is known as the \\emph{core-cusp} problem and consists in the description of density distribution behavior near the galaxy center.\n\nThe core-cusp problem is the discrepancy between observed dark matter density distribution near galaxy centers, where density profiles preserve smoothness in the center and form a \\emph{core}, and the results of numerical simulations of the formation process for cosmic structures, where initial matter distribution evolves into a singular density profile called \\emph{cusp}.\n\nThe classic result of works dedicated to numerical simulations is the universal density profile obtained in works \\cite{Navarro_1996, Navarro_1997} and frequently referred to as the NFW profile. In the NFW profile a density of matter is singular in the center and proportional to $r^{\\alpha}$, where $\\alpha = -1$. This profile is steadily reproduced in simulations corresponding to the formation of cosmic structures from non-interacting dark matter.\nIt is possible to suggest mechanisms \\cite{10.1093\/mnras\/283.3.L72} relying on star formation processes and dynamics of a gas, which can efficiently smooth the NFW distribution. 
However, these mechanisms are unable to resolve the core-cusp problem in the case of galaxies where dark matter is the strongly dominant component.\n\n\nWe can pick the dwarf satellite galaxies of the Milky Way and the Andromeda galaxy as candidates for the role of observed galaxies with a dominant dark component; among them, both gas-rich galaxies with ongoing baryonic processes and gas-poor galaxies are present. Initially, the core-cusp problem was found in the study of rotation curves of gas-rich dwarf galaxies \\cite{Moore1994, 1994ApJ...427L...1F,Burkert_1995}, where the density profile differs from $1\/r$ and corresponds to a core with constant density in the center, which means that the density profile is proportional to $r^\\alpha$ with $\\alpha = 0$. More recent observations show less unambiguous results regarding the central structure of density profiles. One can find a thorough description of the core-cusp problem from the side of both observations and numerical simulations, together with a comprehensive list of references, in the review \\cite{1606.07790} or in the recently published review \\cite{galaxies10010005}.\n\nOne possible way to look for a solution to the core-cusp problem is to introduce a self-interaction for dark matter (see \\cite{1705.02358} for examples), because self-interaction may prevent excessive accumulation of dark matter in the center of a galaxy. It is worth noting that no one has succeeded in direct detection of dark matter \\cite{1509.08767,1604.00014} in the framework of the various hypotheses regarding the interaction of dark matter with regular matter (see, for example, \\cite{1703.07364,astro-ph\/0003365,1705.02358}). 
This leads to the suggestion that dark-matter-related effects may be explained by replacing GR with some modified theory of gravity which contains \\emph{extra solutions}, provided that such solutions can be treated as solutions of GR with an additional contribution from a fictitious matter in the r.h.s. of the Einstein equations. In such a framework, dark matter has a purely gravitational origin instead of being a regular matter, which explains the failures of its direct detection.\n\nWe can consider the popular model of mimetic gravity \\cite{mukhanov,Golovnev201439} (see also the review \\cite{mimetic-review17}) as an example of a theory containing extra solutions along with the whole set of regular GR solutions. Alternatively, we can use another theory called embedding gravity (or embedding theory) \\cite{regge,deser}, which is close to mimetic gravity since both theories are formulated with differential field transformations in GR (see details in \\cite{statja60}). With this approach we can try to explain dark-matter-related effects on cosmological scales \\cite{davids01,statja26}, as well as on the scales of galaxies,\nbecause the dark matter arising in this approach behaves in the non-relativistic limit as dust-like matter with some self-interaction \\cite{statja51,statja68}.\n\nRegardless of whether dark matter is real matter with self-interaction or a modified-gravity effect simulating it, it is necessary to evaluate how the appearance of self-interaction affects the emerging profile of the matter distribution in a galaxy, i.e. whether a \\emph{core} or a \\emph{cusp} appears.\nIn order to avoid resource-consuming simulations, we aim to obtain an analytic criterion for when \\emph{core} and when \\emph{cusp} distributions arise. As a first step, this task should be considered in the case without any self-interaction, which is the purpose of the current paper. 
In future works, a particular model of dark matter self-interaction may be introduced as a modification of this task.\n\nThe capability of purely analytical approaches is strongly limited in the framework of cosmic structure formation, and it is usually possible to proceed only by considering spherically symmetric problems. One of the simplest and most popular models is the isothermal sphere, which is described, for example, in \\cite{Doroshkevich_2012,10.1046\/j.1365-8711.1999.02609.x}. The pseudo-isothermal sphere with density profile $\\rho \\sim \\left(1+ r^2\/r_0^2 \\right)^{-1}$ gives a flat central density which appears to be in good agreement with observations.\nAn analytical formulation for a dynamical investigation of galaxy formation processes was proposed, for example, in \\cite{https:\/\/doi.org\/10.48550\/arxiv.astro-ph\/0006184,10.1046\/j.1365-8711.1999.02609.x, https:\/\/doi.org\/10.48550\/arxiv.astro-ph\/0409173, El_Zant_2008, El_Zant_2013}. There, the formation of a cosmic structure from an initial fluctuation and the stability of this process are discussed.\nAnother class of analytical problems with a setup similar to ours comprises works devoted to the relation between the distribution function of particles in velocity space and the density profile of a static cosmic structure (see, for example, \\cite{1985AJ.....90.1027M,refId0}).\n\nIn the current work, we suggest a simple analytical approach for obtaining the asymptotic at zero of the radial dependence of the density distribution of dust-like matter in a galaxy by considering the distribution function of particles over all possible trajectories.\nWe focus on a distribution which is static and spherically symmetric on average, while assuming that the motion of a single particle can violate spherical symmetry. 
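For the pseudo-isothermal sphere mentioned above, the enclosed mass has the standard closed form $M(r) = 4\pi\rho_0 r_0^3\left[r/r_0 - \arctan(r/r_0)\right]$, which can be checked against direct numerical integration; a small sketch in arbitrary units:

```python
import numpy as np

def rho_piso(r, rho0=1.0, r0=1.0):
    """Pseudo-isothermal profile rho = rho0 / (1 + r^2/r0^2)."""
    return rho0 / (1.0 + (r / r0)**2)

def mass_analytic(r, rho0=1.0, r0=1.0):
    """Closed-form enclosed mass M(r) = 4 pi rho0 r0^3 (r/r0 - arctan(r/r0))."""
    return 4.0 * np.pi * rho0 * r0**3 * (r / r0 - np.arctan(r / r0))

# Direct numerical check of M(r) = integral_0^r 4 pi r'^2 rho(r') dr'.
r_grid = np.linspace(0.0, 5.0, 200001)
m_numeric = np.trapz(4.0 * np.pi * r_grid**2 * rho_piso(r_grid), r_grid)
```

At large radii $M(r)\propto r$, so the circular velocity $\sqrt{GM(r)/r}$ tends to a constant, which is precisely the flat-rotation-curve behaviour that makes this profile attractive observationally.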
In section \\ref{razd1} we obtain a relation between the arising profile type (\\emph{core} or \\emph{cusp}) and the asymptotics at zero of the distribution function over the modulus of angular momentum.\nIn section \\ref{razd21} we show that for an exactly spherically symmetric gravitational potential this asymptotics corresponds to the \\emph{core} profile. Taking this into account, the \\emph{cusp} profile can arise only as a result of a deviation from exact spherical symmetry. In section \\ref{razd22} we discuss a possible mechanism for the change in time of the asymptotics of the distribution function over the modulus of angular momentum, which leads to a transition of the density profile to the \\emph{cusp} type.\n\n\n\n\n\n\\section{Relation between the density profile and the distribution function of particles}\\label{razd1}\n\nWe will consider a matter distribution which is spherically symmetric on average and independent of time, neglecting deviations from spherical symmetry. Assume that the matter consists of massive particles which interact only by means of gravity and move along bounded trajectories. Let us also assume that the particles move with non-relativistic speeds and that their density is sufficiently small so that Newtonian gravity can be applied.\nIn that case the matter density is given by a static spherically symmetric function $\\rho(x)$, which is related to the static spherically symmetric gravitational potential $\\varphi(x)$ by the Poisson equation\n\\begin{equation}\n\\label{poisson}\n\t\\partial_k \\partial_k \\varphi(x) = 4 \\pi G \\rho(x),\n\\end{equation}\nwhere $G$ is the Newtonian gravitational constant; hereinafter $i,k,\\ldots=1,2,3$.\n\nWhen moving in a spherically symmetric potential, each particle has a conserved total energy $E$ and angular momentum $L_k$. In this case, the motion of each particle occurs along a planar orbit whose plane passes through the center of symmetry. 
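The conservation of $E$ and $L_k$ and the planarity of the orbit can be checked with a quick numerical sketch (our illustration, not part of the derivation; it assumes a softened point-mass potential $\varphi(r)=-1/\sqrt{r^2+a^2}$ with $a^2=0.25$ and arbitrary bounded initial conditions):

```python
import numpy as np

def accel(x, a2=0.25):
    # acceleration for the softened point-mass potential phi(r) = -1/sqrt(r^2 + a2)
    return -x / (np.dot(x, x) + a2) ** 1.5

x = np.array([1.0, 0.0, 0.3])   # initial position (gives a bounded, non-circular orbit)
v = np.array([0.0, 0.7, 0.1])   # initial velocity
dt = 1e-3
E0 = 0.5 * np.dot(v, v) - 1 / np.sqrt(np.dot(x, x) + 0.25)
L0 = np.cross(x, v)             # normalized angular momentum l_k = L_k / m

for _ in range(20000):          # leapfrog (kick-drift-kick) integration
    v += 0.5 * dt * accel(x)
    x += dt * v
    v += 0.5 * dt * accel(x)

E1 = 0.5 * np.dot(v, v) - 1 / np.sqrt(np.dot(x, x) + 0.25)
L1 = np.cross(x, v)
# E is conserved up to the small symplectic error, L_k essentially exactly
# (the kicks are radial), and the orbit stays in the plane orthogonal to L_k.
```

Since the force is central, each kick changes $v$ along $x$ and each drift changes $x$ along $v$, so the discrete trajectory never leaves the initial orbital plane.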
Since we are considering finite motion only, the change in the radial coordinate will be periodic, but the orbit may or may not be closed. In the case of an open orbit, we will, for definiteness, call the orbit only the part corresponding to one period of radial motion (see fig.~\\ref{pic1}).\n\n\n\n\\begin{figure}[h!]\n\\begin{minipage}{0.49\\linewidth}\n\t\\centering\n\t\\begin{tikzpicture}[scale = 1.5]\n\t\t\\begin{scope}[rotate = -40]\n\t\t\t\\draw[thick] (0, -0.5) to[out = 0, in = 0] (0, 2);\n\t\t\t\\draw[thick] (0, 2) to[out = 180, in = 180] (0, -0.5);\n\t\t\t\\draw[thick, dashed] (0, 0) -- (90:2.5);\n\t\t\t\\draw[thick, ->] (0, 0) -- (90:1)node[pos = 0.75, below]{$\\vec{\\tau}$};\n\t\t\\end{scope}\n\t\t\\fill (0, 0)node[above]{$O$} circle(0.07);\t\t\n\t\\end{tikzpicture}\n\\end{minipage}\n\\hskip -1cm\n\\begin{minipage}{0.49\\linewidth}\n\t\\centering\n\t\\begin{tikzpicture}[scale = 1.5]\n\t\t\\begin{scope}[rotate = -40]\n\t\t\t\\draw[thick] (-0.3, -0.5) to[out = -31, in = 0] (0, 2);\n\t\t\t\\draw[thick] (0, 2) to[out = 180, in = 211] (0.3, -0.5);\t\n\t\t\t\\draw[thick, dashed] (0, 0) -- (90:2.5);\n\t\t\t\\draw[thick, ->] (0, 0) -- (90:1)node[pos = 0.75, below]{$\\vec{\\tau}$};\n\t\t\\end{scope}\n\t\t\\fill (0, 0)node[above]{$O$} circle(0.07);\n\t\t\\begin{scope}[rotate = 22, gray!30]\n\t\t\t\\draw[thick] (-0.3, -0.5) to[out = -31, in = 0] (0, 2);\n\t\t\t\\draw[thick] (0, 2) to[out = 180, in = 211] (0.3, -0.5);\t\n\t\t\\end{scope}\n\t\t\\begin{scope}[rotate = -102, gray!30]\n\t\t\t\\draw[thick] (-0.3, -0.5) to[out = -31, in = 0] (0, 2);\n\t\t\t\\draw[thick] (0, 2) to[out = 180, in = 211] (0.3, -0.5);\t\n\t\t\\end{scope}\n\t\\end{tikzpicture}\n\\end{minipage}\n\\caption{The definition of the orbit and its characteristic direction for a closed and an open orbit.}\n\\label{pic1}\n\\end{figure}\n\nThen each orbit can be uniquely defined by the normalized (divided by the particle mass $m$) energy $\\varepsilon = {E}\/{m}$, normalized angular momentum $\\ell_k = 
{L_k}\/{m}$ and a direction $\\tau_k$ defining the orientation of the orbit in the plane orthogonal to the angular momentum vector $L_k$.\nHence, the vector $\\tau_k$ must satisfy the conditions\n\\begin{equation}\\label{sp1}\n\\tau_k \\ell_k=0,\\qquad \\tau_k\\tau_k=1\n\\end{equation}\nand therefore has only one independent component.\nTo complete the description of the particle motion, we need to introduce one more scalar parameter $\\gamma$, which defines the phase of the particle's periodic motion at the initial moment of time.\nAs a result, the motion of a single particle is given by the function\n$\\hat{x}_m \\left( t, \\varepsilon, \\ell_k, \\tau_l, \\gamma \\right)$, where $t$ is time.\n\nNow we can introduce the distribution function of particles $f$ (we can assume that all particles have the same mass $m$ without loss of generality), which depends on all the mentioned parameters. This function is defined by the number of particles within a small range of the parameters according to the formula\n\\begin{equation}\n\td N = f \\left( \\varepsilon, \\ell_k, \\tau_l, \\gamma \\right) \\, d \\varepsilon \\,d^3 \\ell \\,d \\tau \\,d \\gamma,\n\\end{equation}\nwhere $d\\tau$ is considered as a one-dimensional measure since the vector $\\tau_l$ must satisfy the conditions \\eqref{sp1}.\nTo be more accurate, $d\\tau$ should be defined as the product $d^3\\tau\\,\\delta(\\tau_k \\tau_k-1)\\delta(\\tau_k \\ell_k\/\\ell)$, where $\\ell=\\sqrt{\\ell_k\\ell_k}$.\nIn this section, we will assume that the process of galaxy formation has already been completed, so the distribution function $f$ has become time-independent.\n\n\nLet us derive the expression for the density of matter at a given point using the distribution function $f$. 
Since the contribution to the density from a single particle can be written using a $\\delta$-function, the expression for the density takes the form\n\\begin{equation}\n\\label{rho}\n\t\\rho(x_m) = m \\int d \\varepsilon \\,d^3 \\ell \\,d \\tau \\,d \\gamma \\ f \\left( \\varepsilon, \\ell_k, \\tau_l, \\gamma \\right) \\delta \\left( x_m - \\hat{x}_m \\left( t, \\varepsilon, \\ell_k, \\tau_l, \\gamma \\right) \\right).\n\\end{equation}\nThe assumed spherical symmetry and time independence of the matter density $\\rho$ must be provided by certain properties of the distribution function $f$; however, we will not specify them explicitly.\nUsing the independence of the l.h.s. of equation \\eqref{rho} from time and from the particular direction of the vector $x_m$, we can integrate both sides of the equation over time from $0$ to $T$ and over a sphere of radius $r=\\sqrt{x_k x_k}$, and then divide the result by the surface area of the sphere and by the length of the time interval $T$.\nAs a result we obtain\n\\begin{equation}\n\\label{rho2}\n\t\\rho(r) = \\frac{m}{4\\pi r^2 T} \\int d \\varepsilon \\,d^3 \\ell \\,d \\tau \\,d \\gamma \\ f \\left( \\varepsilon, \\ell_k, \\tau_l, \\gamma \\right) \\int \\limits_{S_r} d^2 x \\int \\limits_0^T dt \\, \\delta \\left( x_m - \\hat{x}_m \\left( t, \\varepsilon, \\ell_k, \\tau_l, \\gamma \\right) \\right).\n\\end{equation}\nThe delta-function in this expression can be removed if we switch from integration over $t$ to integration over $r$ (obtaining an additional multiplier $1\/|v_r|$, where $v_r$ is the radial component of the velocity) and combine this integration with the integration over the sphere $S_r$. 
This results in\n\\begin{equation}\n\\label{rho3}\n\t\\rho(r) = \\frac{m}{4\\pi r^2 T} \\int d \\varepsilon \\,d^3 \\ell \\,d \\tau \\,d \\gamma \\ f \\left( \\varepsilon, \\ell_k, \\tau_l, \\gamma \\right)\n\\frac{n}{|v_r|},\n\\end{equation}\nwhere $n$ is the number of $\\delta$-function supports lying within the interval of integration over time. For any given $r$, this number depends only on $T$, $\\varepsilon$ and $\\ell$.\n\nTo proceed, let us return to the motion of a single particle. While moving in the spherically symmetric field it conserves its energy\n\\begin{equation}\n\tE = \\frac{m}{2} \\left( v_r^2 + v_\\tau^2 \\right) + m \\varphi(r)\n\\end{equation}\n(here $v_\\tau$ is the tangential component of the velocity)\n and its angular momentum $L_k$, whose magnitude has the form\n\\begin{equation}\n\tL = m r v_\\tau.\n\\end{equation}\nThis allows us to express the modulus of the radial component of the velocity through the integrals of motion $\\ell=L\/m$ and $\\varepsilon=E\/m$:\n\\begin{equation}\n\\label{v}\n\t|v_r| = \\sqrt{2\\varepsilon - 2\\varphi(r) - \\frac{\\ell^2}{r^2}}.\n\\end{equation}\nThe obtained expression can be used in \\eqref{rho3}.\n\nThe number of $\\delta$-function supports $n(\\varepsilon, \\ell, r)$ from \\eqref{rho3} can be estimated by introducing the period of radial motion $\\hat{T} \\left( \\varepsilon, \\ell \\right)$, which depends only on the normalized energy and angular momentum:\n\\begin{equation}\\label{sp3}\n\tn\\left( \\varepsilon, \\ell, r \\right) \\approx \\frac{2 T}{\\hat{T} \\left( \\varepsilon, \\ell \\right)} \\Theta \\left( 2\\varepsilon - 2\\varphi(r) - \\frac{\\ell^2}{r^2}\\right),\n\\end{equation}\nwhere $\\Theta(x)$ is the Heaviside step function; the accuracy of the approximate equality \\eqref{sp3} increases as the arbitrary parameter $T$ grows.\nAs a result, using \\eqref{v} and \\eqref{sp3} in \\eqref{rho3}, we obtain in the limit of large $T$\n\\begin{equation}\\label{rho4}\n\\rho(r) = \\frac{m}{2\\pi r^2} 
\\int d \\varepsilon \\, d^3\\ell\\,\n\\frac{f(\\varepsilon, \\ell_k)\\,\\Theta \\left( 2\\varepsilon - 2\\varphi(r) - \\frac{\\ell^2}{r^2}\\right)}{\\hat{T} \\left( \\varepsilon, \\ell \\right)\\sqrt{2\\varepsilon - 2\\varphi(r) - \\frac{\\ell^2}{r^2}}},\n\\end{equation}\nwhere the following notation is introduced (we will denote different distribution functions by the same symbol if they can be distinguished by the number of arguments)\n\\begin{equation}\\label{sp8}\nf(\\varepsilon, \\ell_k)=\\int d \\tau \\,d \\gamma\\, f(\\varepsilon, \\ell_k, \\tau_l, \\gamma)\n\\end{equation}\nfor the distribution over only the normalized energy and angular momentum.\n\nMoving on, let us split the integration over the normalized angular momentum $\\ell_k$ in \\eqref{rho4} into integration over its magnitude $\\ell$ and integration over the sphere $S_\\ell$ of radius $\\ell$:\n\\begin{equation}\\label{rho5a}\n\\rho(r) = \\frac{m}{2\\pi r^2} \\int d \\varepsilon \\, \\int\\limits_0^\\infty d\\ell\\,\n\\frac{\\hat f(\\varepsilon, \\ell)\\,\\Theta \\left( 2\\varepsilon - 2\\varphi(r) - \\frac{\\ell^2}{r^2}\\right)}{\\hat{T} \\left( \\varepsilon, \\ell \\right)\\sqrt{2\\varepsilon - 2\\varphi(r) - \\frac{\\ell^2}{r^2}}},\n\\end{equation}\nwhere the following notation is introduced:\n\\begin{equation}\\label{sp11}\n\\hat f(\\varepsilon, \\ell)=\n\\int \\limits_{S_\\ell} d^2 \\ell \\,\nf \\left( \\varepsilon, \\ell_k\\right).\n\\end{equation}\nThe quantity $\\hat f(\\varepsilon, \\ell)$ has the meaning of the particle distribution function over the normalized energy and the angular momentum modulus.\n\nLet us consider the behavior of the obtained density \\eqref{rho5a} in the limit $r \\to 0$. Since the gravitational potential is related to the non-negative matter density by equation \\eqref{poisson}, the quantity $-\\varphi(r)$ cannot grow at $r \\to 0$ as fast as ${1}\/{ r^2}$ or faster. 
Hence, the term ${\\ell^2}\/{r^2}$ dominates in the argument of the $\\Theta$-function at small $r$, and the only values which contribute to the integral over $\\ell$ are the small values satisfying the inequality\n\\begin{equation}\n\\ell\\le r\\sqrt{2\\varepsilon - 2\\varphi(r)}.\n\\end{equation}\nAs a result, the asymptotics of $\\rho(r)$ at $r\\to0$ is determined by the asymptotics at $\\ell\\to0$ of the functions $\\hat f(\\varepsilon, \\ell)$ and $\\hat{T}(\\varepsilon,\\ell)$ entering \\eqref{rho5a}.\nThe period of radial motion $\\hat{T}(\\varepsilon,\\ell)$ has a finite limit $\\hat{T}(\\varepsilon,0)$ at $\\ell\\to0$, except for the range of small values of the normalized energy $\\varepsilon$, which gives a small contribution and corresponds to particles which are almost at rest at the center.\nThe behavior of $\\hat f(\\varepsilon, \\ell)$ is not predetermined, so let us consider various options.\n\nAssume first that $\\hat f(\\varepsilon, \\ell)$ has a finite limit at $\\ell\\to0$ and $\\hat f(\\varepsilon, 0)\\ne0$ at least on some interval of $\\varepsilon$.\nIn this case we can perform the change of variables $\\ell=r\\tilde\\ell$ in the integration in \\eqref{rho5a}, which leads to the asymptotics\n\\begin{equation}\\label{rho5}\n\\rho(r) = \\frac{m}{2\\pi r} \\int d \\varepsilon \\, d\\tilde\\ell\\,\n\\frac{\\hat f(\\varepsilon, 0)\\,\\Theta \\left( 2\\varepsilon - 2\\varphi(r) - \\tilde\\ell^2\\right)}{\\hat{T} \\left( \\varepsilon, 0 \\right)\n\\sqrt{2\\varepsilon - 2\\varphi(r) - \\tilde\\ell^2}}=\n\\frac{m}{4 r} \\int d \\varepsilon \\,\\frac{\\hat f(\\varepsilon, 0)}{\\hat{T} \\left( \\varepsilon, 0 \\right)}.\n\\end{equation}\nIt can be seen that the particle density is singular at $r\\to0$ in this case; moreover, it is proportional to $1\/r$ (note that $\\hat f(\\varepsilon, 0)\\ge0$, so the integral in \\eqref{rho5} does not vanish).\nSuch behavior exactly coincides with the one obtained in numerical simulations for the \\emph{cusp}-like 
density profile with $\\al=-1$ (see Introduction). It is worth noting that the gravitational potential $\\varphi(r)$ in this case remains finite at $r=0$,\nbut has a cusp at this point, i.e. $\\varphi'(0)\\ne0$.\n\nNow assume that the distribution function $\\hat f(\\varepsilon, \\ell)$ can be expanded into a series w.r.t. the variable $\\ell$ at the point $\\ell=0$ and that its zeroth-order term vanishes for every value of $\\varepsilon$, i.e. at $\\ell\\to0$\n\\begin{equation}\\label{sp4}\n\\hat f(\\varepsilon, \\ell)\\approx \\hat f'(\\varepsilon, 0)\\ell,\n\\end{equation}\nwhere the prime denotes the derivative w.r.t. $\\ell$.\nIn this case we get\n\\begin{equation}\\label{rho6}\n\\rho(r) = \\frac{m}{2\\pi} \\int d \\varepsilon \\, d\\tilde\\ell\\,\n\\frac{\\hat f'(\\varepsilon, 0)\\tilde \\ell\\,\\Theta \\left( 2\\varepsilon - 2\\varphi(r) - \\tilde\\ell^2\\right)}{\\hat{T} \\left( \\varepsilon, 0 \\right)\n\\sqrt{2\\varepsilon - 2\\varphi(r) - \\tilde\\ell^2}}=\n\\frac{m}{2\\pi} \\int d \\varepsilon \\,\\frac{\\hat f'(\\varepsilon, 0)}{\\hat{T} \\left( \\varepsilon, 0 \\right)}\\sqrt{2\\varepsilon - 2\\varphi(r)}\n\\end{equation}\ninstead of \\eqref{rho5} by analogous reasoning.\nAssuming that the gravitational potential $\\varphi(0)$ is finite at zero (which is true even in the \\emph{cusp} case), we can replace $\\varphi(r)$ by $\\varphi(0)$ in the last expression in \\eqref{rho6} when considering the asymptotics at $r\\to0$.\nThis means that in this case the dependence of the density on the radius corresponds to the \\emph{core}-type profile.\n\nTo sum up, a rather simple analytical derivation shows that the matter density profile in the central region is determined by the presence or absence of a zeroth-order term in the series expansion of the function $\\hat f(\\varepsilon, \\ell)$ w.r.t. $\\ell$ at the point $\\ell = 0$.\nSince $\\hat f(\\varepsilon, \\ell)$ is positive, it is sufficient to check the expansion of the distribution function\n\\disn{sp15}{\n\\hat 
f(\\ell)=\\int d\\varepsilon \\hat f(\\varepsilon, \\ell)\n\\nom}\nof particles over the modulus of the angular momentum only.\nIn the next section, we will analyze the possible behavior of this function at $\\ell\\to0$.\n\n\n\n\\section{The asymptotics of the distribution function at $\\ell\\to0$}\\label{razd2}\n\\subsection{The case of the spherically symmetric potential}\\label{razd21}\nFirst, let us consider the case when the formation of the static structure from a cloud of particles takes place in an already existing spherically symmetric gravitational potential $\\ff(x_i)$. In this case, each particle preserves its angular momentum, so the distribution function $\\hat f(\\ell)$ defined by \\eqref{sp15} does not change with time.\nHence, it is sufficient to consider the distribution of particles at the initial moment of time in order to find the asymptotics of the distribution function in the resulting static configuration.\n\nLet us start by obtaining the distribution function over the normalized energy and angular momentum, $f(\\varepsilon, \\ell_k)$. At first sight, this function should be smooth at $\\ell_k=0$. Let us show that this is not the case in general.\nAssume that we have a large number of point particles with coordinates $x_i$ and velocities $v_i$ which are described by the distribution function $\\chi(x_i,v_i)$ at the initial moment. 
Then\n\\begin{equation}\\label{sp10}\nf(\\varepsilon, \\ell_k)=\n\\int d^3 x \\, d^3 v\\, \\chi(x_i,v_i)\n\\delta(\\ell_i-\\epsilon_{ikl}x_k v_l)\n\\delta\\left(\\varepsilon-\\frac{v^2}{2}-\\ff(x_i)\\right),\n\\end{equation}\nwhere $v$ is the velocity magnitude and $\\epsilon_{ikl}$ is the antisymmetric unit tensor.\nWe assume for simplicity that the function $\\chi(x_i,v_i)$ is spherically symmetric, which means that its value does not change if we simultaneously rotate the vectors $x_i$ and $v_i$.\nTaking into account that $\\ff(x_i)$ is also spherically symmetric, the function $f(\\varepsilon, \\ell_k)$ given by \\eqref{sp10} can depend on $\\ell_k$ only through its magnitude.\nThen we can take $\\ell_k = (\\ell, 0, 0)$ without loss of generality.\nAs a result, we have\n\\disn{sp9}{\nf(\\varepsilon, \\ell_k)=\\int d^3 x \\, d^3 v \\, \\chi(x_i,v_i) \\times\\ns \\times\n\\delta(\\ell - x_2 v_3 + x_3 v_2)\\delta(x_1 v_3 - x_3 v_1)\\delta(x_1 v_2 - x_2 v_1) \\delta\\left(\\varepsilon-\\frac{v^2}{2}-\\ff(x_i)\\right)= \\no\n\t\t= \\int d x_2 \\, d x_3 \\, d^3 v \\, \\left.\\left[\\chi(x_i,v_i)\\delta\\left(\\varepsilon-\\frac{v^2}{2}-\\ff(x_i)\\right)\\right]\\right|_{x_1 = \\frac{v_1}{v_3} x_3}\\times\\ns \\times \\frac{1}{|v_3|}\\, \\delta(\\ell - x_2 v_3 + x_3 v_2) \\, \\delta \\left( \\left( \\frac{x_3 v_2}{v_3} - x_2 \\right) v_1 \\right) = \\ns\n\t\t= \\frac{1}{\\ell}\\int d x_2 \\, d x_3 \\, d v_2 \\, d v_3 \\, \\left.\\left[\\chi(x_i,v_i)\\delta\\left(\\varepsilon-\\frac{v^2}{2}-\\ff(x_i)\\right)\\right]\\right|_{x_1 =v_1= 0} \\delta(\\ell - x_2 v_3 + x_3 v_2).\n\\nom}\nIt is easy to check that, after factoring out the multiplier $1\/\\ell$, the remaining expression is finite at $\\ell=0$ in the general case.\nThis is true even if we abandon the spherical symmetry of the function $\\chi(x_i,v_i)$; in this case\nthe limit at $\\ell\\to0$ of the coefficient of $1\/\\ell$ depends smoothly on the direction of the vector $\\ell_k$.\n\nAs a result, we 
see that $f(\\varepsilon, \\ell_k)$ is indeed not smooth at $\\ell_k=0$.\nIn this case, the distribution function \\eqref{sp15} over the modulus of the angular momentum behaves at $\\ell\\to0$ as\n\\disn{sp11a}{\n\\hat f(\\ell)\\approx C\\ell,\n\\nom}\nwhere $C$ is obtained from the above-mentioned limit of the coefficient of $1\/\\ell$ in \\eqref{sp9} by integrating over $\\varepsilon$ and averaging over angles.\nSince this corresponds to \\eqref{sp4}, we can conclude that the \\emph{core}-type matter density profile emerges in the considered case.\n\nThe discussed situation, in which the gravitational potential remains spherically symmetric while the density profile is being formed, may take place when the gravitational potential is created primarily by dark matter.\nAt the same time, dark matter can obey its own laws (for example, it may have some specific self-interaction) which define whether it forms a \\emph{cusp} or a \\emph{core}, while we study the formation of the density profile of regular matter and obtain the \\emph{core}-type result.\n\n\\subsection{The case with a deviation from the spherical symmetry}\\label{razd22}\nNow let us switch to the alternative situation, when the static structure forms from a cloud of particles without a spherically symmetric gravitational background. Let us assume that the gravitational potential forms simultaneously with the static configuration, so a significant deviation from spherical symmetry may occur during this process.\nThis situation might take place if we consider the formation of structures from dark matter particles, assuming that they behave as regular matter without any self-interaction. Such a setup is close to the one used in the mentioned numerical simulations (see Introduction).\n\nIn this case the trajectories of particles would not be exactly as shown in figure~\\ref{pic1} and would not be defined by the parameters $\\varepsilon, \\ell_k, \\tau_l, \\gamma$. 
However, if the deviation from spherical symmetry is small, then the deviations from the trajectories defined above are also small, and particles can still be described by the parameters $\\varepsilon, \\ell_k, \\tau_l, \\gamma$, with the caveat that not all of them remain constant in time.\nIn particular, the angular momentum of each particle, determined relative to the future center of the emerging structure, will no longer conserve its original value, since it can change under the action of forces arising from small concentrations of particles that can evolve into satellite galaxies.\nAs a result, in contrast with the previous case, the distribution function $f(\\varepsilon, \\ell_k, \\tau_l, \\gamma)$, and hence $f(\\varepsilon,\\ell_i)$, can change over time, including the distribution over $\\ell_i$.\nThis means that the distribution function over the modulus of the angular momentum defined by formulas \\eqref{sp11},\\eqref{sp15} can also change, and its asymptotics at $\\ell=0$ defines whether the \\emph{core} profile or the \\emph{cusp} profile arises.\n\nAs shown in the previous subsection, this function has the asymptotics \\eqref{sp11a} at the initial moment in the general case. However, in contrast with the previous case, the function $\\hat f(\\ell)$ can now change with time, and so can its asymptotics. Hence the expansion into a series over $\\ell$ can acquire a non-zero zeroth-order term, which changes the emerging density profile type to \\emph{cusp}.\n\nThe rate of change of the normalized angular momentum $\\ell_i$ of a single particle is determined by the moment (per unit mass) of the force acting on the particle:\n\\disn{sp12}{\n\\dot \\ell_i = \\epsilon_{ikl} x_k a_l,\n\\nom}\nwhere $x_i$ is the location of the particle at a given moment of time and $a_l$ is its acceleration; the dot denotes the derivative w.r.t. 
time.\nHence, for the modulus of the normalized angular momentum $\\ell$ of a single particle, we obtain the expression\n\\disn{sp13}{\n\\dot \\ell = \\frac{d}{dt} \\sqrt{\\ell_i \\ell_i} = \\epsilon_{ikl}\\frac{\\ell_i}{\\ell}x_k a_l,\n\\nom}\nwhich can be either positive or negative.\n\nThe time evolution of the distribution function $\\hat f(\\ell)$ can be described by the standard continuity equation\n\\disn{sp14}{\n\\frac{d}{dt}\\hat f(\\ell)=-\\frac{d}{d\\ell}\\ls \\hat f(\\ell)\\bar{\\dot\\ell}\\,\\rs,\n\\nom}\nwhere $\\bar{\\dot\\ell}$ is the rate of change $\\dot\\ell$ averaged over all particles with a given $\\ell$. Thus $\\hat f(\\ell)\\bar{\\dot\\ell}$ is a \"flow\"{} of particles in the space of values of the angular momentum modulus. This \"flow\"{} can be either positive or negative. A positive value means an \"outflow\"{} of particles away from the value $\\ell=0$. A negative value means an \"inflow\"{} towards this value, which after some time should lead to the appearance of a non-zero value $\\hat f(0)$.\nThis can be seen more precisely from equation \\eqref{sp14} if we substitute \\eqref{sp11a} into it as an initial value (note that $C>0$ because the distribution function $\\hat f(\\ell)$ is positive) and neglect the variation of $\\bar{\\dot\\ell}$ with $\\ell$ near the point $\\ell=0$.\nIf $\\bar{\\dot\\ell}<0$ we obtain the solution\n\\disn{sp15a}{\n\\hat f(\\ell)=C(\\ell-\\bar{\\dot\\ell}\\,t),\n\\nom}\nwhich leads to the appearance after some time $t$ of a non-zero value $\\hat f(0)=-C\\bar{\\dot\\ell}\\,t>0$.\n\nAs can be seen from \\eqref{sp13}, particles with opposite values of the normalized angular momentum $\\ell_i$ give opposite contributions to $\\bar{\\dot\\ell}$ at $\\ell\\to0$. 
If there were exactly the same number of particles of both types, then this would give a zero value of $\\bar{\\dot\\ell}$ (note that, as can be seen from \\eqref{sp9}, the distribution of particles over the vector $\\ell_i$ is not smooth at $\\ell\\to0$, which reduces the reliability of this conclusion).\nHowever, if spherical symmetry is violated and the system has a net angular momentum, then the numbers of such particles would not be equal, which may lead to the appearance of a non-zero $\\bar{\\dot\\ell}$.\n\nDetermining exactly which sign the value $\\bar{\\dot\\ell}$ averaged over all particles will acquire as a result of the appearance of deviations from spherical symmetry is too difficult.\nHowever, it is natural to assume that in a generic situation there will be intervals of time when $\\bar{\\dot\\ell}<0$. During such a period of time, the quantity $\\hat f(0)$ will acquire a non-zero value, which may change with further dynamics but, without fine-tuning of the initial data, will not vanish completely. Thus, we have significant arguments supporting the conclusion that, in the presence of deviations from spherical symmetry, in the general case after some time the distribution function over the modulus of the normalized angular momentum at $\\ell\\to0$ will no longer behave according to \\eqref{sp11a}, but will have the form\n\\disn{sp17}{\n\\hat f(\\ell)\\approx \\hat f(0)+C\\ell\n\\nom}\nwith $\\hat f(0)>0$. 
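The build-up of a non-zero $\hat f(0)$ described by \eqref{sp14} and \eqref{sp15a} can be illustrated with a minimal numerical sketch (our illustration, assuming a constant drift $\bar{\dot\ell}=-0.5$ near $\ell=0$): advecting the initial linear profile $\hat f(\ell)=C\ell$ with a first-order upwind scheme reproduces the exact solution $\hat f(\ell)=C(\ell-\bar{\dot\ell}\,t)$, whose value at $\ell=0$ grows linearly in time.

```python
import numpy as np

C, v = 1.0, -0.5                 # assumed constant negative drift v = lbar_dot
dl, dt, steps = 0.02, 0.01, 100  # grid spacing, time step, number of steps
l = np.arange(0.0, 2.0, dl)
f = C * l                        # initial asymptotics f(l) = C*l, so f(0) = 0

for n in range(steps):
    # continuity equation df/dt = -d(f*v)/dl with constant v < 0;
    # for v < 0 the upwind difference uses the right neighbour
    f[:-1] -= dt * v * (f[1:] - f[:-1]) / dl
    f[-1] = C * (l[-1] - v * dt * (n + 1))  # inflow boundary from the exact solution

t = dt * steps                   # exact solution: f = C*(l - v*t), so f(0) = -C*v*t
```

For a linear initial profile the upwind update is exact, so after $t=1$ the boundary value is $\hat f(0)=-C\bar{\dot\ell}\,t=0.5$.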
As shown in section~\\ref{razd1}, this corresponds to the matter profile of the \\emph{cusp} type.\n\n\n\\section{Conclusion}\nWe have performed an analytical analysis of the formation of a static density profile $\\rho(r)$ of the radial matter distribution arising for dust-like matter that is non-interacting (except for the gravitational interaction).\nThe case of spherical symmetry of the emerging stationary configuration is considered, which can simulate the process of galaxy formation if we neglect the deviations from spherical symmetry that exist for real galaxies.\nIt is possible to relate the type of the resulting profile to the asymptotics at zero of the distribution function $\\hat f(\\ell)$\nof particles over the modulus of angular momentum: if a constant term is present in the expansion, i.e. $\\hat f(0)\\ne0$ (see~\\eqref{sp17}),\nthen the \\emph{cusp}-type density profile emerges (with $\\al=-1$, i.e. $\\rho\\sim r^{-1}$), and if $\\hat f(0)=0$ and the expansion starts from the linear term (see~\\eqref{sp11a}),\nthen the \\emph{core}-type profile takes place.\nWhich of these two options is realized depends on how accurately spherical symmetry is maintained in the process of formation of the stationary configuration.\n\nIn the first case, we can assume that a spherically symmetric potential well (gravitational potential $\\ff(r)$) formed prior to the static structure formation and was created by dark matter, which may have properties different from those of regular matter (for example, the existence of self-interaction).\nThen, studying the formation of the distribution of regular dust-like matter with the given potential as the background, we find the asymptotics at zero of the function $\\hat f(\\ell)$ in the form of the linear behavior \\eqref{sp11a}, which means that the profile of the dust-like matter will have the \\emph{core} type.\nIf, in this case, the self-interaction of dark matter also provides a \\emph{core} distribution for it, then this case will 
not contradict the available observations.\n\nIn the second case, we can assume that there is no pre-formed potential well, and the gravitational potential forms simultaneously with the distribution of dust-like matter.\nThen, during the formation of a stationary configuration, noticeable deviations from spherical symmetry can occur.\nWe present arguments supporting the conclusion that a non-zero value $\\hat f(0)\\ne0$ emerges in such a situation.\nThis leads to the profile of such dust-like matter (it can be either dark matter or ordinary matter) having the \\emph{cusp} type.\nThis conclusion is consistent with the results of numerical simulations giving the $r^{-1}$ profile, in which the formulation of the problem is close to the second case described above.\n\n{\\bf Acknowledgments.}\nThe authors are grateful to A.~Golovnev for useful discussions\nand to A.~El~Zant for the provided references.\nThe work is supported by RFBR Grant No.~20-01-00081.\nThe work of A.D.~Kapustin is supported by the Foundation for the Advancement of Theoretical\nPhysics and Mathematics \"BASIS\"{}.\n\n\\section{Introduction}\n\n\\subsection{Motivation}\nOver the last ten years constraint programming has emerged as an interesting 
In this approach the programming process\nis limited to the generation of requirements (``constraints'') and\nthe solution of these requirements by means of general and domain-specific\nmethods.\nThe techniques useful for finding solutions to sets of\nconstraints were studied for\nsome twenty years in the field of Constraint Satisfaction.\nOne of the most important of them is\n{\\em constraint propagation\\\/}, a process of reducing\na constraint satisfaction problem to another one that is equivalent\nbut ``simpler''.\n\nThe algorithms that achieve such a reduction\nusually aim at reaching some ``local consistency'', which\ndenotes a property approximating, in some loose sense,\n``global consistency'', that is,\nthe consistency of the whole constraint satisfaction problem.\nIn fact, most of the notions of local consistency are neither\nimplied by nor imply global consistency (for a simple\nillustration of this statement see, e.g., Example\n\\ref{exa:arccon} in Subsection \\ref{subsec:auto}).\n\nFor some constraint satisfaction problems such an enforcement\nof local consistency is already sufficient for finding a solution \nin an efficient way or for\ndetermining that none exists. 
\nIn some other cases this process substantially reduces\nthe size of the search space which makes it possible to solve the original\nproblem more efficiently by means of some search algorithm.\n\nThe aim of this paper is to show that the constraint propagation\nalgorithms (also called (local) consistency, consistency enforcing,\nWaltz, filtering or narrowing algorithms) can be naturally explained\nby means of {\\em chaotic iteration}, a basic technique used for\ncomputing limits of iterations of finite sets of functions that\noriginated from numerical analysis (see, e.g., \nChazan and Miranker \\cite{CM69}) and\nwas adapted for computer science needs by Cousot and Cousot \\cite{CC77a}.\n\nIn our presentation we study chaotic iteration of monotonic and\ninflationary functions on partial orders first. This is done in\nSection \\ref{sec:chaotic}. Then, in Section \\ref{sec:cons} we show\nhow specific constraint propagation algorithms can be obtained by\nchoosing specific functions and specific partial orders.\n\nThis two-step presentation reveals that several constraint propagation\nalgorithms proposed in the literature are instances of generic chaotic\niteration algorithms studied here.\n\nThe adopted framework allows us to prove properties of these\nalgorithms in a simple, uniform way. This clarifies which properties\nof the so-called reduction functions (also called relaxation rules or\nnarrowing functions) account for correctness of these algorithms. For\nexample, it turns out that idempotence is not needed here. Further,\nthis framework allows us to separate an analysis of general properties, such as\ntermination and independence of the scheduling strategy, from\nconsideration of specific, constraint-related properties, such as\nequivalence. 
Even the consequences of choosing a queue instead of a\nset for scheduling purposes can already be clarified without\nintroducing constraints.\n\nWe also explain how,\nby characterizing a given notion of local consistency as a\ncommon fixed point of a finite set of monotonic and inflationary\nfunctions, we can automatically generate an algorithm achieving this\nnotion of consistency by ``feeding'' these functions into a generic\nchaotic iteration algorithm.\nBy studying these functions in isolation we can also compare\nspecific constraint propagation algorithms.\n\nRecent work of Monfroy and {R{\'{e}}ty} \n\cite{MR99} also shows how this approach\nmakes it possible to derive generic distributed constraint propagation\nalgorithms in a uniform way.\n\nSeveral general presentations of constraint propagation algorithms\nhave been published before. In Section \ref{sec:concluding}\nwe explain how our work relates to and generalizes the work of others.\n\n\n\subsection{Preliminaries}\n\label{subsec:prel}\n\n\begin{definition} Consider a sequence of domains ${\cal D} := D_1, \mbox{$\ldots$}, D_n$.\n \begin{itemize}\n\n \item By a {\em scheme\\\/} (on $n$) we mean a sequence of distinct elements from $[1..n]$.\n \item \nWe say that $C$ is a {\em constraint (on ${\cal D}$) with scheme\\\/} \n$i_1, \mbox{$\ldots$}, i_l$ if\n$C \mbox{$\:\subseteq\:$} D_{i_1} \times \cdots \times D_{i_l}$.\n\n\item Let ${\bf s} := s_1, \mbox{$\ldots$}, s_k$ be a sequence of schemes.\nWe say that a sequence of constraints $C_1, \mbox{$\ldots$}, C_k$ on ${\cal D}$ is an\n{\bf s}-{\em sequence\\\/} if each $C_i$ is with scheme $s_i$.\n\n\item\nBy a {\em Constraint Satisfaction Problem\\\/} $\langle \cal D; \cal C\rangle$, in short CSP, we mean\na sequence of domains ${\cal D}$ together with an {\bf s}-sequence of\nconstraints ${\cal C}$ on ${\cal D}$.
We call then {\\bf s} the {\\em scheme\\\/} of \n$\\langle \\cal D; \\cal C\\rangle$.\n\\hfill{$\\Box$} \n\\end{itemize}\n\\end{definition}\n\nIn principle a constraint can have more than one scheme, for example\nwhen all domains are equal.\nThis eventuality should not cause any problems in the sequel.\n Given an $n$-tuple $d := d_1, \\mbox{$\\ldots$}, d_n$\nin $D_1 \\times \\cdots \\times D_n$ and a scheme $s := i_1, \\mbox{$\\ldots$}, i_l$ on\n$n$ we denote by $d[s]$ the tuple $d_{i_1}, \\mbox{$\\ldots$} , d_{i_l}$. In\nparticular, for $j \\in [1..n]$ \\ $d[j]$ is the $j$-th element of $d$.\nBy a {\\em solution\\\/} to a CSP $\\langle \\cal D; \\cal C\\rangle$, where\n${\\cal D} := D_1, \\mbox{$\\ldots$}, D_n$, we mean an $n$-tuple $d \\in D_1 \\times\n\\cdots \\times D_n$ such that for each constraint $C$ in ${\\cal C}$ with\nscheme $s$ we have $d[s] \\in C$.\n\nConsider now a sequence of schemes $s_1, \\mbox{$\\ldots$}, s_k$. By its {\\em\n union}, written as $\\langle s_1, \\mbox{$\\ldots$}, s_k \\rangle$ \n we mean the scheme obtained from the sequences $s_1, \\mbox{$\\ldots$}, s_k$ by removing from\neach $s_i$ the elements present in some $s_j$, where $j < i$, and by concatenating\nthe resulting sequences. For example, $\\langle (3,7,2), (4,3,7,5), (3,5,8) \\rangle = (3,7,2,4,5,8)$.\nRecall that for an $s_1, \\mbox{$\\ldots$}, s_k$-sequence of\nconstraints $C_1, \\mbox{$\\ldots$}, C_k$\ntheir {\\em join\\\/}, written as $C_1 \\Join \\cdots \\Join C_k$,\nis defined as the constraint with scheme\n$\\langle s_1, \\mbox{$\\ldots$}, s_k \\rangle$ and such that\n\\[\nd \\in C_1 \\Join \\cdots \\Join C_k \\mbox{ iff $d[s_i] \\in C_i$ for $i \\in [1..k]$}.\n\\]\n\nFurther, given a constraint $C$ and a subsequence $s$ of its scheme,\nwe denote by $\\Pi_{s}(C)$ the constraint with scheme $s$\ndefined by \n\\[\n\\Pi_{s}(C) := \\C{d[s] \\mid d \\in C},\n\\]\nand call it {\\em the projection of $C$ on $s$}. 
\nIn particular, for a constraint $C$ with scheme $s$ and an element $j$ of $s$,\n$\Pi_{j}(C) = \C{a \mid \mbox{$\exists$} d \in C \: a = d[j]}$.\n\nGiven a CSP $\langle \cal D; \cal C\rangle$ we denote by $Sol(\langle \cal D; \cal C\rangle)$\nthe set of all solutions to it. If the domains are clear from the context we drop the\nreference to $\cal D$ and just write $Sol({\cal C})$.\nThe following observation is useful.\n\begin{note} \label{not:sol}\nConsider a CSP $\langle \cal D; \cal C\rangle$ with \n ${\cal D} := D_1, \mbox{$\ldots$}, D_n$ and ${\cal C} := C_1, \mbox{$\ldots$}, C_k$ and with\nscheme {\bf s}.\n\begin{enumerate} \renewcommand{\theenumi}{\roman{enumi}}\n\item \mbox{}\\[-9mm]\n\[\nSol(\langle {\cal D}; {\cal C}\rangle) = C_1 \Join \cdots \Join C_k \Join_{i \in I} D_i,\n\]\nwhere $I := \C{i \in [1..n] \mid \mbox{ $i$ {\rm does not appear in {\bf s}}}}$.\n\item\nFor every {\bf s}-subsequence {\bf C} of ${\cal C}$ and\n$d \in Sol(\langle {\cal D}; {\cal C} \rangle)$ we have\n$d[\langle {\bf s} \rangle] \in Sol({\bf C})$.\n\n\hfill{$\Box$}\n\end{enumerate}\n\end{note}\n\nFinally, we call two CSP's {\em equivalent\\\/} if they have the same\nset of solutions. Note that we do not insist that these CSP's have\nthe same sequence of domains or the same scheme.\n\n\section{Chaotic Iterations}\n\label{sec:chaotic}\n\nIn our study of constraint propagation we proceed in two stages. In\nthis section we study chaotic iterations of functions on partial\norders.
Then in the next section we explain how this framework can be\nreadily used to explain constraint propagation algorithms.\n\n\\subsection{Chaotic Iterations on Simple Domains}\n\\label{subsec:ci-sd}\n\nIn general, chaotic iterations are defined for functions that\nare projections on individual components \nof a specific function with several arguments.\nIn our approach we study a more elementary situation in which the\nfunctions are unrelated but satisfy certain properties.\nWe need the following concepts.\n\n\\begin{definition}\nConsider a set $D$, an element $d \\in D$ and a set of functions \n $F := \\C{f_1, \\mbox{$\\ldots$} , f_k}$ on $D$.\n \\begin{itemize}\n \\item \nBy a {\\em run\\\/} (of the functions $f_1, \\mbox{$\\ldots$}, f_k$) we mean\nan infinite sequence of numbers from $[1..k]$.\n\n\n\\item A run $i_1, i_2, \\mbox{$\\ldots$}$ is called {\\em fair\\\/} if \nevery $i \\in [1..k]$ appears in it infinitely often.\n\n\\item \nBy an {\\em iteration of $F$ associated with a run\n$i_1, i_2, \\mbox{$\\ldots$}$ and starting with $d$\\\/} \nwe mean\nan infinite sequence of values \n$d_0, d_1, \\mbox{$\\ldots$} $ defined inductively by\n\\[\nd_0 := d,\n\\]\n\\[\nd_{j} := f_{i_{j}}(d_{j-1}).\n\\]\n\nWhen \n$d$ is the least element of $D$ in some partial\norder clear from the context, we drop the reference to $d$\nand talk about an {\\em iteration of $F$}.\n\n\\item An iteration of $F$ is \ncalled {\\em chaotic\\\/} if it is associated with \na fair run.\n\\hfill{$\\Box$}\n \\end{itemize}\n\\end{definition}\n\n\\begin{definition}\nConsider a partial order $(D, \\mbox{$\\ \\sqsubseteq\\ $})$. 
A function $f$ on $D$ is called\n\\begin{itemize}\n\\item {\\em inflationary\\\/} if \n$x \\mbox{$\\ \\sqsubseteq\\ $} f(x)$ for all $x$,\n\n\\item {\\em monotonic\\\/} \\index{function!monotonic}\nif $x \\mbox{$\\ \\sqsubseteq\\ $} y$ implies \n$f(x) \\mbox{$\\ \\sqsubseteq\\ $} f(y)$ for all $x, y$,\n\n\\item {\\em idempotent} if\n$f(f(x)) = f(x)$ for all $x$.\n\\hfill{$\\Box$}\n\\end{itemize}\n\\end{definition}\n\nIn what follows we study chaotic iterations\non specific partial orders.\n\n\\begin{definition}\nWe call a partial order $(D, \\mbox{$\\ \\sqsubseteq\\ $} )$ an {\\em $\\sqcup$-po\\\/} if\n\\begin{itemize}\n\\item $D$ contains the\nleast element, denoted by $\\bot$, \n\n\\item for every increasing sequence\n\\[\nd_0 \\: \\mbox{$\\ \\sqsubseteq\\ $} \\: d_1 \\: \\mbox{$\\ \\sqsubseteq\\ $} \\: d_2 \\: \\mbox{$\\ldots$}\n\\]\nof elements from $D$, the least upper bound of the set\n\\[\n\\C{ d_0 , \\: d_1 , \\: d_2 , \\mbox{$\\ldots$} },\n\\]\ndenoted by $\\bigsqcup_{n=0}^{\\infty} d_n$ and called\nthe {\\em limit of\\\/} $d_0, d_1, \\mbox{$\\ldots$}$, exists,\n\n\\item for all $a,b \\in D$ the least upper bound\nof the set $\\C{a,b}$, denoted by $a \\sqcup b$, exists.\n\\end{itemize}\n\nFurther, we say that \n\\begin{itemize}\n\n\\item an increasing sequence\n$d_0 \\: \\mbox{$\\ \\sqsubseteq\\ $} \\: d_1 \\: \\mbox{$\\ \\sqsubseteq\\ $} \\: d_2 \\: \\mbox{$\\ldots$}$\n{\\em eventually stabilizes at d\\\/} if for some $j \\geq 0$ we have\n$d_i = d$ for $i \\geq j$,\n\n\\item \na partial order satisfies\nthe {\\em finite chain property} if \nevery increasing sequence of its elements eventually stabilizes.\n\\hfill{$\\Box$}\n\\end{itemize}\n\n\\end{definition}\n\nIntuitively, $\\bot$ is an element with the least amount of information\nand $a \\sqsubseteq b$ means that $b$ contains more information than\n$a$. 
Clearly, the second condition of the definition of $\\sqcup$-po\nis automatically satisfied if $D$ is finite.\n\nIt is also clear that $\\sqcup$-po's are closed under the Cartesian product.\nIn the applications we shall use specific $\\sqcup$-po's built out of sets\nand their Cartesian products.\n\n\\begin{definition}\nLet $D$ be a set.\nWe say that a family ${\\cal F}(D)$ of subsets of $D$ is\n{\\em based on $D$\\\/} if\n\n\\begin{itemize}\n\n\\item $D \\in {\\cal F}(D)$,\n\n\\item for every decreasing sequence\n\\[\nX_0 \\supseteq X_1 \\supseteq X_2 \\mbox{$\\ldots$}\n\\]\nof elements of ${\\cal F}(D)$ \n\\[\n\\cap^{\\infty}_{i=0} X_i \\in {\\cal F}(D),\n\\]\n\n\\item for all $X, Y \\in {\\cal F}(D)$ we have $X \\cap Y \\in {\\cal F}(D)$.\n\\end{itemize}\n\nThat is, a set ${\\cal F}(D)$ of subsets of $D$ is based on $D$ iff\n${\\cal F}(D)$ with the relation $\\mbox{$\\ \\sqsubseteq\\ $}$ defined by\n\\[\nX \\sqsubseteq Y \\mbox{ iff } X \\supseteq Y\n\\]\nis an $\\sqcup$-po. \nIn this $\\sqcup$-po $\\bot = D$ and $X \\sqcup Y = X \\cap Y$.\nWe call $({\\cal F}(D), \\sqsubseteq)$ an $\\sqcup$-po {\\em based on $D$}. 
\n\\hfill{$\\Box$}\n\\end{definition}\n\nThe following two examples of families of subsets based on a domain \nwill be used in the sequel.\n\n\\begin{example}\n\nDefine\n\\[\n{\\cal F}(D) := {\\cal P}(D),\n\\] \nthat is ${\\cal F}(D)$ consists of all subsets of $D$.\nThis family of subsets will be used to discuss general\nconstraint propagation algorithms.\n\\hfill{$\\Box$}\n\\end{example}\n\n\\begin{example}\n\\label{exa:partial}\nLet $(D, \\mbox{$\\ \\sqsubseteq\\ $})$ be a partial order with \nthe $\\mbox{$\\ \\sqsubseteq\\ $}$-least element {\\em min}, the $\\mbox{$\\ \\sqsubseteq\\ $}$-greatest element {\\em max}\nand such that for every two elements $a, b \\in D$\nboth $a \\sqcup b$ and $a \\sqcap b$ exists.\n\nExamples of such partial orders are\na linear order with the $\\mbox{$\\ \\sqsubseteq\\ $}$-least element and the $\\mbox{$\\ \\sqsubseteq\\ $}$-greatest element\nand the set of all subsets of a given set with the subset relation.\n\nGiven two elements $a,b$ of $D$ define\n\\[\n[a,b] := \\C{c \\mid a \\leq c \\mbox{ and } c \\leq b}\n\\]\nand call such a set an {\\em interval}.\nSo for $b < a$ we have $[a,b] = \\mbox{$\\emptyset$}$, for $b = a$ we have $[a,b] = \\C{a}$\nand $[{\\em min} .. 
{\\em max}] = D$.\n\nLet now $F$ be a finite subset of $D$ containing {\\em min} and {\\em max}.\nDefine\n\\[\n{\\cal F}(D) := \\C{[a,b] \\mid a,b \\in F},\n\\] \nthat is ${\\cal F}(D)$ consists of all intervals with the bounds in $F$.\nNote that ${\\cal F}(D)$ is indeed a\nfamily of subsets based on $D$ since\n\n\\begin{itemize}\n\n\\item $D = [{\\em min} ..{\\em max}]$,\n\n\\item ${\\cal F}(D)$ is finite, so every decreasing sequence of elements of ${\\cal F}(D)$ \neventually stabilizes,\n\n\\item for $a,b,c,d \\in F$ we have\n\\[\n[a,b] \\cap [c,d] = [a \\sqcup c, b \\sqcap d].\n\\]\n\\end{itemize}\n\nSuch families of subsets will be used to\ndiscuss constraint propagation algorithms on reals.\nIn these applications $D$ will be the set of real numbers augmented\nwith $- \\infty$ and $+ \\infty$ and $F$ the set of floating point numbers. \n\\hfill{$\\Box$}\n\\end{example}\n\n\nThe following observation can be easily distilled from a more general\nresult due to Cousot and Cousot \\cite{CC77a}. 
\nTo keep the paper self-contained we\nprovide a direct proof.\n\n\\begin{theorem}[(Chaotic Iteration)] \\label{thm:chaotic}\nConsider an $\\sqcup$-po $(D , \\mbox{$\\ \\sqsubseteq\\ $} )$\nand a set of functions \n $F := \\C{f_1, \\mbox{$\\ldots$} , f_k}$ on $D$.\nSuppose that all functions in $F$ are inflationary and monotonic.\nThen the limit of every chaotic iteration of $F$ exists and\ncoincides with \n\\[\n\\bigsqcup_{j=0}^{\\infty} f\\uparrow j,\n\\]\nwhere the function $f$ on $D$ is defined by:\n\\[\nf(x) := \\bigsqcup_{i=1}^{k} f_i(x)\n\\]\nand $f\\uparrow j$ is an abbreviation for $f^{j}(\\bot)$, the\n$j$-th fold iteration of $f$ started at $\\bot$.\n\\end{theorem}\n\\Proof\nFirst, notice that $f$ is inflationary, so \n$\\bigsqcup_{j=0}^{\\infty} f\\uparrow j$ exists.\nFix a chaotic iteration $d_0, d_1, \\mbox{$\\ldots$}$ of $F$ \nassociated with a fair run $i_1, i_2, \\mbox{$\\ldots$}$.\nSince all functions $f_i$ are inflationary, \n$\\bigsqcup_{j=0}^{\\infty} d_j$ exists.\nThe result follows directly from the following two claims.\n\n\\begin{claim}\n$\\mbox{$\\forall$} j \\: \\mbox{$\\exists$} m \\: f \\uparrow j \\mbox{$\\ \\sqsubseteq\\ $} d_m$.\n\\end{claim}\n{\\em Proof.}\nWe proceed by induction on $j$.\n\\vspace{2 mm}\n\n\\noindent\n{\\bf Base}. 
$j = 0$.\nAs $f \\uparrow 0 = \\bot = d_0$, the claim is obvious.\n\\vspace{2 mm}\n\n\\noindent\n{\\bf Induction step}.\nAssume that for some $j \\geq 0$ we have\n$f \\uparrow j \\mbox{$\\ \\sqsubseteq\\ $} d_m$ for some $m \\geq 0$.\nSince\n\n\\[\nf \\uparrow (j+1) = f(f \\uparrow j) = \\bigsqcup_{i=1}^{k} f_i(f \\uparrow j), \n\\]\nit suffices to prove \n\\begin{equation}\n\\mbox{$\\forall$} i \\in [1..k] \\: \\mbox{$\\exists$} m_i \\: f_i(f \\uparrow j) \\mbox{$\\ \\sqsubseteq\\ $} d_{m_i}.\n\\label{equ:incl}\n\\end{equation}\nIndeed, we have then by the fact that\n$d_l \\mbox{$\\ \\sqsubseteq\\ $} d_{l+1}$ for $l \\geq 0$ \n\\[\n\\bigsqcup_{i=1}^{k} f_i(f \\uparrow j) \\mbox{$\\ \\sqsubseteq\\ $} \\bigsqcup_{i=1}^{k} d_{m_i} \\mbox{$\\ \\sqsubseteq\\ $} d_{m'}\n\\]\nwhere $m' := max \\C{m_i \\mid i \\in [1..k]}$.\n\nSo fix $i \\in [1..k]$. By fairness of the considered run\n $i_1, i_2, \\mbox{$\\ldots$}$,\nfor some $m_{i} > m$\nwe have $i_{m_{i}} = i$. \nThen $d_{m_i} = f_i(d_{m_{i} -1})$.\nNow $d_m \\mbox{$\\ \\sqsubseteq\\ $} d_{m_{i} -1}$, so\nby the monotonicity of $f_i$ we have\n\n\\[\nf_i(f \\uparrow j) \\mbox{$\\ \\sqsubseteq\\ $} f_i(d_m) \\mbox{$\\ \\sqsubseteq\\ $} f_i(d_{m_{i} -1}) = d_{m_i}.\n\\]\nThis proves (\\ref{equ:incl}).\n\\hfill{$\\Box$}\n\\vspace{3 mm}\n\n\\begin{claim}\n$\\mbox{$\\forall$} m \\: d_m \\mbox{$\\ \\sqsubseteq\\ $} f \\uparrow m$.\n\\end{claim}\n{\\em Proof.}\nThe proof is by a straightforward induction on $m$.\nIndeed, for $m = 0$ we have\n$d_0 = \\bot = f \\uparrow 0$, so the induction base holds.\n\nTo prove the induction step \nsuppose that for some $m \\geq 0$\nwe have $d_m \\mbox{$\\ \\sqsubseteq\\ $} f \\uparrow m$. 
For some $i \\in [1..k]$ \nwe have $d_{m+1} = f_i(d_m)$, so by the monotonicity of $f$ we get\n$\nd_{m+1} = f_i(d_m) \\mbox{$\\ \\sqsubseteq\\ $} f(d_m)\\mbox{$\\ \\sqsubseteq\\ $} f(f \\uparrow m) = f \\uparrow (m+1).\n$\n\\hfill{$\\Box$}\n\n\\hfill{$\\Box$}\n\\vspace{5 mm}\n\nIn many situations some chaotic iteration studied in the Chaotic\nIteration Theorem \\ref{thm:chaotic} eventually stabilizes. \nThis is for example the case when\n$(D, \\mbox{$\\ \\sqsubseteq\\ $} )$ satisfies the finite chain property.\nIn such cases the limit of every chaotic iteration\ncan be characterized in an alternative way.\n\n\\begin{corollary}[(Stabilization)] \\label{cor:chaotic}\nSuppose that \nunder the assumptions of the Chaotic Iteration Theorem \\ref{thm:chaotic}\nsome chaotic iteration of $F$ eventually stabilizes. Then\nevery chaotic iteration of $F$ eventually stabilizes at\nthe least fixed point of $f$.\n\\end{corollary}\n\\Proof\nIt suffices to note that if some chaotic iteration \n$d_0, d_1 \\mbox{$\\ldots$}$ of $F$ eventually stabilizes at some $d_m$\nthen by Claims 1 and 2 $f \\uparrow m = d_m$, so\n\n\\begin{equation}\n\\bigsqcup_{j=0}^{\\infty} f\\uparrow j = f \\uparrow m.\n\\label{eq:stab}\n\\end{equation}\nThen, again by Claims 1 and 2, every chaotic iteration \nof $F$ stabilizes at $f \\uparrow m$\nand it is easy to see that by virtue of (\\ref{eq:stab})\n$f \\uparrow m$ is the least fixed point of $f$.\n\\hfill{$\\Box$}\n\\vspace{5 mm}\n\nFinally, using the above results we can compare chaotic iterations\nresulting from different sets of functions.\n\n\\begin{corollary}[(Comparison)] \\label{cor:chaotic2}\nConsider an $\\sqcup$-po $(D , \\mbox{$\\ \\sqsubseteq\\ $} )$\nand two set of functions, \n$F := \\C{f_1, \\mbox{$\\ldots$} , f_k}$ and $G := \\C{g_1, \\mbox{$\\ldots$} , g_l}$ on $D$.\nSuppose that all functions in $F$ and $G$ are inflationary and monotonic.\nFurther, assume that for $i \\in [1..k]$ there exist $j_1, \\mbox{$\\ldots$}, j_m \\in 
[1..l]$\nsuch that\n\[\nf_i(x) \mbox{$\ \sqsubseteq\ $} g_{j_1} \circ \mbox{$\ldots$} \circ g_{j_m}(x) \mbox{ for all $x$.}\n\]\nThen $lim(F) \mbox{$\ \sqsubseteq\ $} lim(G)$ for the uniquely defined\nlimits $lim(F)$ and $lim(G)$ of the \nchaotic iterations of $F$ and $G$.\n\end{corollary}\n\Proof\nStraightforward using the Chaotic\nIteration Theorem \ref{thm:chaotic}\nand the fact that the functions in $G$ are inflationary.\n\hfill{$\Box$} \n\n\subsection{Chaotic Iterations on Compound Domains} \n\label{subsec:ci-cd}\nNot much more can be deduced about the process of chaotic iteration \nunless more is known about the structure of the domain $D$.\nSo assume now that the $\sqcup$-po $(D, \mbox{$\ \sqsubseteq\ $} )$ is the \nCartesian product of the $\sqcup$-po's\n$(D_i, \mbox{$\ \sqsubseteq\ $}_i )$, for $i \in [1..n]$.\nIn what follows we consider a modification of the situation studied in \nthe Chaotic Iteration Theorem \ref{thm:chaotic} in which each\nfunction $f_i$ affects only certain components of $D$.\n\nConsider the partial orders\n$(D_i, \mbox{$\ \sqsubseteq\ $}_i )$, for $i \in [1..n]$ and\na scheme $s := i_1, \mbox{$\ldots$}, i_l$ on $n$.\nThen by $(D_s, \mbox{$\ \sqsubseteq\ $}_s)$ we mean the Cartesian product of the\npartial orders\n$(D_{i_j}, \mbox{$\ \sqsubseteq\ $}_{i_j})$, for $j \in [1..l]$.\n\nGiven a function $f$ on $D_s$ we say that $f$ is {\em with scheme $s$}. \nInstead of defining iterations for functions with schemes,\nwe reduce the situation to the one studied in the previous subsection.\nTo this end we canonically extend each function $f$ on $D_s$\nto a function $f^+$ on $D$ as follows.
\nSuppose that $s = i_1, \\mbox{$\\ldots$}, i_l$ and\n\\[\nf(d_{i_1}, \\mbox{$\\ldots$}, d_{i_l}) = (e'_{i_1}, \\mbox{$\\ldots$}, e'_{i_l}).\n\\]\nLet for $j \\in [1..n]$\n\\[\ne_j := \\left \\{ \\begin{array}{ll}\n e'_j & \\mbox{if $j$ is an element of $s$}, \\\\\n d_j & \\mbox{otherwise. }\n \\end{array}\n \\right.\n\\]\nThen we set\n\\[\nf^+(d_1, \\mbox{$\\ldots$}, d_n) := (e_1, \\mbox{$\\ldots$}, e_n).\n\\]\n\n\nSuppose now that $(D, \\mbox{$\\ \\sqsubseteq\\ $} )$ is the Cartesian product of the\n$\\sqcup$-po's $(D_i, \\mbox{$\\ \\sqsubseteq\\ $}_i )$, for $i \\in [1..n]$, and $F := \\C{f_1, \\mbox{$\\ldots$}, f_k}$ \nis a set of functions with schemes that are all inflationary and monotonic.\nThen the following algorithm can be used\nto compute the limit of the chaotic iterations of $F^+ := \\C{f^+_1, \\mbox{$\\ldots$},f^+_k}$.\nWe say here that a function $f$\n{\\em depends on $i$\\\/} if $i$ is an element of its scheme.\n\\vspace{2 mm}\n\n\n\\noindent\n{\\sc Generic Chaotic Iteration Algorithm ({\\tt CI})}\n\\begin{tabbing}\n\\= $d := \\underbrace{(\\bot, \\mbox{$\\ldots$}, \\bot)}_{\\mbox{$n$ times}}$; \\\\[1mm]\n\\> $d' := d$; \\\\ \n\\> $G := F$; \\\\ \n\\> {\\bf while} $G \\neq \\mbox{$\\emptyset$}$ {\\bf do} \\\\\n\\> \\qquad choose $g \\in G$; suppose $g$ is with scheme $s$; \\\\\n\\> \\qquad $G := G - \\C{g}$; \\\\\n\\> \\qquad $d'[s] := g(d[s])$; \\\\\n\\> \\qquad {\\bf if} $d[s] \\neq d'[s]$ {\\bf then} \\\\\n\\> \\qquad \\qquad $G := G \\cup \\C{f \\in F \\mid \\mbox{$f$ depends on\n some } i \\mbox{ in $s$ such that } d[i] \\neq d'[i]}$; \\\\\n\\> \\qquad \\qquad $d[s] := d'[s]$ \\\\\n\\> \\qquad {\\bf fi} \\\\\n\\> {\\bf od} \n\\end{tabbing}\n\nObviously, the condition $d[s] \\neq d'[s]$ can be\nomitted here. 
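The {\tt CI} loop admits a direct executable rendering. The sketch below is ours, not part of the formal development: components are indexed from $0$, each reduction function is paired with its scheme, and the example pruning function enforces $x < y$ on the $\sqcup$-po of subsets ordered by inverse inclusion, so that $\bot$ is the full domain:

```python
def CI(bottoms, functions):
    # bottoms:   one least element per component of D
    # functions: list of (scheme, g) pairs; g maps the sub-tuple d[scheme]
    #            to a new sub-tuple and is inflationary and monotonic
    d = list(bottoms)
    G = set(range(len(functions)))
    while G:
        j = G.pop()                          # choose g in G; G := G - {g}
        scheme, g = functions[j]
        new = g(tuple(d[i] for i in scheme))
        changed = [i for i, v in zip(scheme, new) if d[i] != v]
        if changed:                          # if d[s] != d'[s] then ...
            # re-schedule every function depending on a changed component
            G |= {k for k, (s, _) in enumerate(functions)
                  if any(i in s for i in changed)}
            for i, v in zip(scheme, new):    # d[s] := d'[s]
                d[i] = v
    return tuple(d)

def prune_lt(args):
    # keep only the values that can participate in a solution of x < y
    X, Y = args
    return (frozenset(x for x in X if any(x < y for y in Y)),
            frozenset(y for y in Y if any(x < y for x in X)))

full = frozenset({1, 2, 3})                  # bottom: the whole domain
result = CI((full, full), [((0, 1), prune_lt)])
```

Running the example prunes the two components to $\{1,2\}$ and $\{2,3\}$, after which `prune_lt` no longer changes anything and the loop terminates.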
We retained it to keep the form of the\nalgorithm more intuitive.\n\nThe following observation will be useful in\nthe proof of correctness of this algorithm.\n\n\\begin{note} \\label{not:extend}\nConsider the partial orders\n$(D_i, \\mbox{$\\ \\sqsubseteq\\ $}_i )$, for $i \\in [1..n]$,\na scheme $s$ on $n$\nand a function $f$ with scheme $s$. Then\n\\begin{enumerate}\\renewcommand{\\theenumi}{\\roman{enumi}\n\\item $f$ is inflationary iff $f^+$ is.\n\n\\item $f$ is monotonic iff $f^+$ is.\n\\hfill{$\\Box$}\n\\end{enumerate}\n\\end{note}\n\nObserve that, in spite of the name of the algorithm, its infinite\nexecutions do not need to correspond to chaotic iterations. \nThe following example will be of use for a number of different purposes.\n\n\\begin{example} \\label{exa:inf}\nConsider the set of natural numbers $\\cal N$ augmented with $\\omega$, with\nthe order $\\leq$. In this order $k \\leq \\omega$ for $k \\in {\\cal N}$.\nNext, we consider the following three functions on ${\\cal N} \\cup \\C{\\omega}$:\n\n\\[\nf_1(n) := \\left \\{ \\begin{array}{ll}\n n+1 & \\mbox{if $n$ is even}, \\\\\n n & \\mbox{if $n$ is odd}, \\\\\n \\omega & \\mbox{if $n$ is $\\omega$, }\n \\end{array}\n \\right.\n\\]\n\n\\[\nf_2(n) := \\left \\{ \\begin{array}{ll}\n n+1 & \\mbox{if $n$ is odd}, \\\\\n n & \\mbox{if $n$ is even}, \\\\\n \\omega & \\mbox{if $n$ is $\\omega$, }\n \\end{array}\n \\right.\n\\]\n\\[\nf_3(n) := \\omega.\n\\]\n\n\\noindent\nClearly, the underlying order is an $\\sqcup$-po and the functions\n$f_1, f_2$ and $f_3$ are all inflationary, monotonic and idempotent.\nNow, there is an infinite execution of the {\\tt CI} algorithm\nthat corresponds with the run $1,2,1,2, \\mbox{$\\ldots$}$.\nThis execution does not correspond to any chaotic iteration\nof $\\C{f_1, f_2, f_3}$.\n\\hfill{$\\Box$}\n\\end{example}\n\n\nHowever, when we focus on terminating executions we obtain \nthe following result in the proof of which our analysis of chaotic\niterations is of 
help.\n\n\\begin{theorem}[({\\tt CI})] \\label{thm:CI}\n \\mbox{} \\\\[-6mm]\n\\begin{enumerate}\\renewcommand{\\theenumi}{\\roman{enumi}\n\\item Every terminating execution of the {\\tt CI} algorithm computes\nin $d$ the least fixed point of\nthe function $f$ on $D$ defined by\n\\[\nf(x) := \\bigsqcup_{i=1}^{k} f^+_i(x).\n\\]\n\n\\item If all $(D_i, \\mbox{$\\ \\sqsubseteq\\ $}_i )$, where $i \\in [1..n]$, \nsatisfy the finite chain property, then every execution of the \n{\\tt CI} algorithm terminates.\n \\end{enumerate}\n\\end{theorem}\n\n\\Proof\nIt is simpler to reason about a modified, but equivalent, algorithm\nin which the assignments $d'[s] := g(d[s])$ and $d[s] := d'[s]$ are\nrespectively replaced by $d' := g^+(d)$ and $d := d'$ and the test\n$d[s] \\neq d'[s]$ by $d \\neq d'$. \n\\vspace{2 mm}\n\n\\noindent\n$(i)$\nNote that the formula\n\\[\nI :=\\mbox{$\\forall$} f \\in F - G \\: f^+(d) = d\n\\]\nis an invariant of the {\\bf while} loop of the modified algorithm.\nThus upon its termination\n\\[\n(G = \\mbox{$\\emptyset$}) \\mbox{$\\ \\wedge\\ $} I\n\\]\nholds, that is \n\\[\n\\mbox{$\\forall$} f \\in F \\: f^+(d) = d.\n\\]\nConsequently, some chaotic iteration of $F^+$ eventually stabilizes at $d$.\nHence $d$ is the least fixpoint of the function $f$ defined in item $(i)$\nbecause the\nStabilization Corollary \\ref{cor:chaotic} is applicable here by\nvirtue of Note \\ref{not:extend}.\n\\vspace{2 mm}\n\n\\noindent\n$(ii)$\nConsider the lexicographic order of the partial orders\n$(D, \\sqsupseteq)$ and $({\\cal N}, \\leq)$, \ndefined on the elements of $D \\times {\\cal N}$ by\n\\[ \n(d_1, n_1) \\leq_{lex} (d_2, n_2)\\ {\\rm iff} \\ d_1 \\sqsupset d_2\n \\ {\\rm or}\\ ( d_1 = d_2 \\ {\\rm and}\\ n_1 \\leq n_2). 
\n\\]\nWe use here the inverse order $\\sqsupset$\ndefined by: $d_1 \\sqsupset d_2$ iff $d_2 \\sqsubseteq d_1$ and $d_2 \\neq d_1$.\n\nBy Note \\ref{not:extend}(i) all functions $f^+_i$\nare inflationary, so\nwith each {\\bf while} loop iteration of the modified algorithm\nthe pair\n\\[\n(d, card \\: G)\n\\]\nstrictly decreases in this order $\\leq_{lex}$.\nHowever, in general the lexicographic order $(D \\times {\\cal N}, \\leq_{lex})$ is not\nwell-founded and in fact termination is not guaranteed.\nBut assume now additionally that each partial order\n$(D_i, \\mbox{$\\ \\sqsubseteq\\ $}_i )$ satisfies the finite chain property. Then so does their\nCartesian product $(D, \\mbox{$\\ \\sqsubseteq\\ $} )$.\nThis means that $(D, \\sqsupseteq)$ is well-founded and consequently so is\n$(D \\times {\\cal N}, \\leq_{lex})$ which implies termination.\n\\hfill{$\\Box$}\n\\vspace{5 mm}\n\nWhen all considered functions $f_i$ are also idempotent,\nwe can reverse the order of\nthe two assignments to $G$, that is to put the assignment\n$G := G - \\C{g}$ after the {\\bf if-then-fi} statement,\nbecause after applying an idempotent function there is\nno use in applying it immediately again.\nLet us denote by {\\tt CII} the algorithm resulting from this movement of\nthe assignment $G := G - \\C{g}$.\n\nMore specialized versions of the {\\tt CI} and {\\tt CII} algorithms\ncan be obtained by representing $G$ as a queue. 
To this end we use the operation\n${\\bf enqueue}(F, Q)$ which for a set $F$ and a queue $Q$ enqueues in an \narbitrary order all the elements of $F$ in $Q$,\ndenote the empty queue by {\\bf empty}, and the head and the tail \nof a non-empty queue $Q$ respectively by ${\\bf head}(Q)$ and ${\\bf tail}(Q)$.\nThe following algorithm is then a counterpart of the {\\tt CI} algorithm.\n\\vspace{2 mm}\n\n\\noindent\n{\\sc Generic Chaotic Iteration Algorithm with a Queue ({\\tt CIQ})}\n\\begin{tabbing}\n\\= $d := \\underbrace{(\\bot, \\mbox{$\\ldots$}, \\bot)}_{\\mbox{$n$ times}}$; \\\\[1mm]\n\\> $d' := d$; \\\\ \n\\> $Q := {\\bf empty}$; \\\\\n\\> ${\\bf enqueue}(F, Q)$; \\\\\n\\> {\\bf while} $Q \\neq {\\bf empty}$ {\\bf do} \\\\\n\\> \\qquad $g := {\\bf head}(Q)$; suppose $g$ is with scheme $s$; \\\\\n\\> \\qquad $Q := {\\bf tail}(Q)$; \\\\\n\\> \\qquad $d'[s] := g(d[s])$; \\\\\n\\> \\qquad {\\bf if} $d[s] \\neq d'[s]$ {\\bf then} \\\\\n\\> \\qquad \\qquad ${\\bf enqueue}(\\C{f \\in F \\mid \\mbox{$f$ depends on\n some } i \\mbox{ in $s$ such that } d[i] \\neq d'[i]}, Q)$; \\\\\n\\> \\qquad \\qquad $d[s] := d'[s]$ \\\\\n\\> \\qquad {\\bf fi} \\\\\n\\> {\\bf od} \n\\end{tabbing}\n\nDenote by {\\tt CIIQ} the modification of the {\\tt CIQ} algorithm\nthat is appropriate for the idempotent functions, so the one in which the \nassignment $Q := {\\bf tail}(Q)$ is performed after the {\\bf if-then-fi} statement.\n\nIt is easy to see that the claims of the {\\tt CI}\nTheorem \\ref{thm:CI} also hold for the\n{\\tt CII, CIQ} and {\\tt CIIQ} algorithms.\nA natural question arises whether for the specialized versions\n{\\tt CIQ} and {\\tt CIIQ}\nsome additional properties can be established. The answer is positive. 
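Analogously, the queue discipline of {\tt CIQ} can be rendered executable. This is again a sketch with representation choices of our own (components indexed from $0$, functions stored by index, ${\bf enqueue}$ realized as appending; duplicate queue entries are harmless for terminating instances):

```python
from collections import deque

def CIQ(bottoms, functions):
    # functions: list of (scheme, g) pairs; g maps the sub-tuple
    # d[scheme] to a new sub-tuple (inflationary, monotonic)
    d = list(bottoms)
    Q = deque(range(len(functions)))            # enqueue(F, Q)
    while Q:
        j = Q.popleft()                         # g := head(Q); Q := tail(Q)
        scheme, g = functions[j]
        new = g(tuple(d[i] for i in scheme))
        changed = [i for i, v in zip(scheme, new) if d[i] != v]
        if changed:
            # enqueue the functions depending on a changed component
            Q.extend(k for k, (s, _) in enumerate(functions)
                     if any(i in s for i in changed))
            for i, v in zip(scheme, new):
                d[i] = v
    return tuple(d)

def prune_double(args):
    # propagate the constraint y = 2x between two set-valued components
    X, Y = args
    return (frozenset(x for x in X if 2 * x in Y),
            frozenset(y for y in Y if y % 2 == 0 and y // 2 in X))

full = frozenset({1, 2, 3, 4})                  # bottom: the whole domain
result = CIQ((full, full), [((0, 1), prune_double)])
```

On this small instance the queue empties after two dequeues and the components stabilize at $\{1,2\}$ and $\{2,4\}$.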
\nWe need an auxiliary notion and a result first.\n\n\begin{definition}\nConsider a set of functions $F := \C{f_1, \mbox{$\ldots$}, f_k}$ on a domain $D$.\n\n\begin{itemize}\n\item We say that an element $i \in [1..k]$ is {\em eventually \nirrelevant for an iteration $d_0, d_1, \mbox{$\ldots$}$ of $F$\\\/} if\n$\mbox{$\exists$} m \geq 0 \: \mbox{$\forall$} j \geq m \: f_i(d_j) = d_j$.\n\n\item An iteration of $F$ is called {\em semi-chaotic\\\/} if every\n$i \in [1..k]$ that appears finitely often in its run is eventually \nirrelevant for this iteration.\n\hfill{$\Box$}\n\end{itemize}\n\end{definition}\n\nSo every chaotic iteration is semi-chaotic but not conversely.\n\n\begin{note} \label{note:semi}\n \mbox{} \\[-6mm]\n\begin{enumerate}\renewcommand{\theenumi}{\roman{enumi}}\n\item Every semi-chaotic iteration $\xi$ corresponds to a chaotic\n iteration $\xi'$ with the same limit as $\xi$ and such that $\xi$\n eventually stabilizes at some $d$ iff $\xi'$ does.\n\item Every infinite execution of the {\tt CIQ} (respectively {\tt CIIQ})\nalgorithm corresponds to a semi-chaotic iteration.\n\n \end{enumerate}\n\n\end{note}\n\Proof \\\n{\em (i)\\\/}\n$\xi$ can\nbe transformed into the desired chaotic iteration $\xi'$\nby repeating, from a certain moment on, some of its elements.\n\vspace{2 mm}\n\n\noindent\n{\em (ii)\\\/}\nConsider an infinite execution of the {\tt CIQ} algorithm. \nLet $i_1, i_2, \mbox{$\ldots$}$ be the run associated with it and\n$\xi := d_0, d_1, \mbox{$\ldots$}$ the iteration of $F^+$ associated\nwith this run.\n\nConsider the set $A$ of the elements of $[1..k]$ that appear \nfinitely often in the run $i_1, i_2, \mbox{$\ldots$}$.
\nFor some $m \\geq 0$ we have $i_j \\not\\in A$ for $j > m$.\nThis means by the structure of this algorithm\nthat after $m$ iterations of the {\\bf while} loop no function $f_i$\nwith $i \\in A$ is ever present in the queue $Q$.\n\nBy virtue of the invariant $I$ used in the proof of the {\\tt CI}\nTheorem \\ref{thm:CI} we then have \n$f^+_i(d_j) = d_j$ for $i \\in A$ and $j \\geq m$.\nThis proves that $\\xi$ is semi-chaotic. \n\nThe proof for the {\\tt CIIQ} algorithm is the same.\n\\hfill{$\\Box$}\n\\vspace{5 mm}\n\nItem {\\em (i)\\\/} shows that the results of Subsection \\ref{subsec:ci-sd}\ncan be strengthened to semi-chaotic iterations. However, the \nproperty of being a semi-chaotic iteration cannot be determined from\nthe run only. So, for simplicity, we decided to limit our exposition\nto chaotic iterations. Next, it is easy to\nshow that item {\\em (ii)\\\/} cannot be strengthened to chaotic\niterations.\n\nWe can now prove the desired results. The first one shows that the\nnondeterminism present in the {\\tt CIQ} and {\\tt CIIQ} algorithms has\nno bearing on their termination.\n\n\\begin{theorem}[(Termination)] \\label{thm:CIQ}\nIf some execution of the {\\tt CIQ} (respectively {\\tt CIIQ})\nalgorithm terminates, then\nall executions of the {\\tt CIQ} (respectively {\\tt CIIQ})\nalgorithm terminate.\n\\end{theorem}\n\\Proof\nWe concentrate on the {\\tt CIQ} algorithm. For the {\\tt CIIQ} algorithm\nthe proof is the same. \n\nConsider a terminating execution of the {\\tt CIQ} algorithm.\nConstruct a chaotic iteration of $F^+$ the initial prefix of which\ncorresponds with this execution. By virtue of the invariant $I$ this\niteration eventually stabilizes. By the Stabilization Corollary\n\\ref{cor:chaotic}\n\\begin{equation}\n\\mbox{every chaotic iteration of $F^+$ eventually stabilizes.}\n\\label{eq:stab1}\n\\end{equation}\n\nSuppose now by contradiction that some execution of the {\\tt CIQ}\nalgorithm does not terminate. 
Let $\\xi$ be the iteration of $F^+$\nassociated with this execution.\nBy the structure of this algorithm\n\\begin{equation}\n\\mbox{$\\xi$ does not eventually stabilize.}\n\\label{eq:stab2}\n\\end{equation}\n\nBy Note \\ref{note:semi}(ii) $\\xi$ is a semi-chaotic iteration.\nConsider a chaotic iteration $\\xi'$ of $F^+$ that corresponds with\n$\\xi$ by virtue of Note \\ref{note:semi}(i). We conclude by\n(\\ref{eq:stab2}) that $\\xi'$ does not eventually stabilize. This\ncontradicts (\\ref{eq:stab1}). \\hfill{$\\Box$} \\vspace{5 mm}\n\nSo for a given Cartesian product $(D, \\mbox{$\\ \\sqsubseteq\\ $} )$ of the $\\sqcup$-po's\nand a finite set $F$ of inflationary, monotonic and idempotent \nfunctions either all executions\nof the {\\tt CIQ} (respectively {\\tt CIIQ}) algorithm terminate or\nall of them are infinite. In the latter case we can be more specific.\n\n\\begin{theorem}[(Non-termination)] \\label{thm:limit}\n For every infinite execution of the {\\tt CIQ} (respectively {\\tt\n CIIQ}) algorithm the limit of the corresponding iteration of $F$\n exists and coincides with\n\\[\n\\bigsqcup_{j=0}^{\\infty} f\\uparrow j,\n\\]\nwhere $f$ is defined as in the {\\tt CI}\nTheorem \\ref{thm:CI}(i).\n\\end{theorem}\n\\Proof \nConsider an infinite execution of the {\\tt CIQ} algorithm. By\nNote \\ref{note:semi}(ii) it corresponds to a semi-chaotic iteration $\\xi$ of\n$F^+$. \nBy Note \\ref{note:semi}(i) $\\xi$ corresponds to a chaotic iteration\nof $F^+$ with the same limit. The desired conclusion now follows by\nthe Chaotic Iteration Theorem \\ref{thm:chaotic}.\n\nThe proof for the {\\tt CIIQ} algorithm is the same.\n\\hfill{$\\Box$} \n\\vspace{5 mm}\n\nNeither of the above two results holds for the {\\tt CI} and {\\tt CII}\nalgorithms. Indeed, take the $\\sqcup$-po $({\\cal N} \\cup \\C{\\omega}, \\leq)$\nand the functions $f_1, f_2, f_3$ of Example \\ref{exa:inf}. Then\nclearly both infinite and finite executions of the {\\tt CI} and {\\tt\n CII} algorithms exist. 
We leave to the reader the task of modifying\nExample \\ref{exa:inf} in such a way that for both {\\tt CI} and {\\tt\n CII} algorithms infinite executions exist with different limits\nof the corresponding iterations.\n\n\\section{Constraint Propagation}\n\\label{sec:cons}\nLet us return now to the study of CSP's. We show here how the results\nof the previous section can be used to explain the constraint\npropagation process.\n\nIn general, two basic approaches fall under this name:\n\n\\begin{itemize}\n\n\\item reduce the constraints while maintaining equivalence;\n\n\\item reduce the domains while maintaining equivalence.\n\\end{itemize}\n\n\\subsection{Constraint Reduction} \n\\label{subsec:cr}\n\nIn each step of the constraint reduction process one or more\nconstraints are replaced by smaller ones. In general, the smaller\nconstraints are not arbitrary. For example, when studying linear\nconstraints usually the smaller constraints are also linear.\n\nTo model this aspect of constraint reduction \nwe associate with each CSP an $\\sqcup$-po\nthat consists of the CSP's that can be generated during the\nconstraint reduction process.\n\nBecause the domains are assumed to remain unchanged, we can\nidentify each CSP with the sequence of its constraints.\nThis leads us to the following notions.\n\nConsider a CSP ${\\cal P} := \\langle {\\cal D}; C_1, \\mbox{$\\ldots$}, C_k\\rangle$. Let\nfor $i \\in [1..k]$ \n$({\\cal F}(C_i), \\supseteq)$ be an $\\sqcup$-po based on $C_i$. 
\nWe call the Cartesian product\n$(CO, \\mbox{$\\ \\sqsubseteq\\ $})$ of $({\\cal F}(C_i), \\supseteq)$, with $i \\in [1..k]$,\n{\\em a constraint $\\sqcup$-po associated with ${\\cal P}$}.\n\nAs in Subsection \\ref{subsec:ci-cd}, for\na scheme $s := i_1, \\mbox{$\\ldots$}, i_l$ we denote by \n$(CO_s, \\mbox{$\\ \\sqsubseteq\\ $}_s)$ the Cartesian product of the partial orders\n$({\\cal F}(C_{i_j}), \\supseteq)$, where $j \\in [1..l]$.\n\nNote that \n$CO_s = {\\cal F}(C_{i_1}) \\times \\cdots \\times {\\cal F}(C_{i_l})$.\nBecause we now want to use constraints in our analysis\nand constraints are sets of tuples, we identify $CO_s$ with the set\n\\[\n\\C{ X_1 \\times \\cdots \\times X_l \\mid \\mbox{ $X_j \\in {\\cal F}(C_{i_j})$ for $j \\in [1..l]$}}.\n\\]\nIn this way we can write the elements of $CO_s$ as \nCartesian products\n$X_1 \\times \\cdots \\times X_l$, so as\n(specific) sets of $l$-tuples, \ninstead of as $(X_1, \\mbox{$\\ldots$}, X_l)$,\nand similarly with $CO$.\n\nNote that $C_1 \\times \\cdots \\times C_k$ is the $\\mbox{$\\ \\sqsubseteq\\ $}$-least element of $CO$.\nAlso, note that because of the use of the inverse subset order\n$\\supseteq$\nwe have for\n$X_{1} \\times \\cdots \\times X_{l} \\in CO_s$ and\n$Y_{1} \\times \\cdots \\times Y_{l} \\in CO_s$\n\\begin{center}\n \n\\begin{tabular}{lll}\n$X_1 \\times \\cdots \\times X_l \\mbox{$\\ \\sqsubseteq\\ $}_s Y_1 \\times \\cdots \\times Y_l$ & \\ iff &\n$X_1 \\times \\cdots \\times X_l \\supseteq Y_1 \\times \\cdots \\times Y_l$ \\\\\n& (iff & $X_i \\supseteq Y_i$ for $i \\in [1..l]$),\n\\end{tabular}\n\\end{center}\n\\begin{center}\n\\begin{tabular}{lll}\n$(X_1 \\times \\cdots \\times X_l) \\sqcup_s (Y_1 \\times \\cdots \\times Y_l)$ & \\ = &\n$(X_1 \\times \\cdots \\times X_l) \\cap (Y_1 \\times \\cdots \\times Y_l)$ \\\\\n & (= & $(X_1 \\cap Y_1) \\times \\cdots \\times (X_l \\cap Y_l))$.\n\\end{tabular}\n\\end{center}\n\nThis allows us to use from now on the\nset theoretic counterparts 
$\\supseteq$ and $\\cap$ of $\\mbox{$\\ \\sqsubseteq\\ $}_s$ and $\\sqcup_s$.\nNote that for the partial order $(CO_s, \\mbox{$\\ \\sqsubseteq\\ $}_s)$\na function $g$ on $CO_s$ is inflationary iff \n${\\bf C} \\supseteq g({\\bf C})$ and $g$ is monotonic iff it is monotonic w.r.t.\nthe set inclusion.\n\nSo far we have introduced an $\\sqcup$-po associated with a CSP. Next, we\nintroduce functions by means of which chaotic iterations will\nbe generated.\n\n\\begin{definition} \\label{def:crf}\n Consider a CSP $\\langle {\\cal D}; C_1, \\mbox{$\\ldots$}, C_k \\rangle$ together with a\n sequence of families of sets ${\\cal F}(C_i)$ based on $C_i$, for $i\n \\in [1..k]$, and a scheme $s$ on $k$. By a {\\em constraint\n reduction function with scheme $s$\\\/} we mean a function $g$ on\n $CO_s$ such that for all ${\\bf C} \\in CO_s$\n\\begin{itemize}\n\\item ${\\bf C} \\supseteq g({\\bf C})$,\n\n\\item $Sol({\\bf C}) = Sol(g({\\bf C}))$.\n\\hfill{$\\Box$}\n\\end{itemize}\n\\end{definition}\n\n{\\bf C} is here a Cartesian product of some constraints and \nin the second condition we\nidentified it with the sequence of these constraints,\nand similarly with $g({\\bf C})$.\nThe first condition states that $g$ reduces the constraints\n$C_i$, where $i$ is an element of $s$,\nwhile the second condition states that during this \nconstraint reduction\nprocess no solution to ${\\bf C}$ is lost.\n\n\\begin{example} \\label{exa:projection}\n As a first example of a constraint reduction function\ntake ${\\cal F}(C) := {\\cal P}(C)$ for each constraint $C$\nand consider the following function $g$ on some $CO_s$:\n\n\\[\ng(C \\times {\\bf C} ) := C' \\times {\\bf C},\n\\]\nwhere $C' = \\Pi_t(Sol(C, {\\bf C}))$ and $t$ is the scheme of\n$C$. 
\nIn other words, $C'$ is the projection of the set of solutions\nof $(C, {\\bf C})$ on the scheme of $C$.\n\n \nTo see that $g$ is indeed a constraint reduction function,\nfirst note that\nby the definition of $Sol$ we have $C' \\mbox{$\\:\\subseteq\\:$} C$, so\n$C \\times {\\bf C} \\supseteq g(C \\times {\\bf C})$.\nNext, note that for $d \\in Sol(C, {\\bf C})$ we have\n$d[t] \\in \\Pi_t(Sol(C, {\\bf C}))$,\nso $d \\in Sol(C', {\\bf C})$.\nThis implies that \n$Sol(C, {\\bf C}) = Sol(g(C, {\\bf C})).$\n\nNote also that $g$ is monotonic w.r.t. the set inclusion and idempotent.\n\\hfill{$\\Box$}\n\\end{example}\n\n\\begin{example} \\label{exa:path}\n As another example that is of importance for the discussion in\n Subsection \\ref{subsec:related} consider a CSP $\\langle D_1, \\mbox{$\\ldots$},\n D_n; \\cal C\\rangle$ of binary constraints such that for each scheme\n $i,j$ on $n$ there is exactly one constraint, which we denote by\n $C_{i,j}$. Again put ${\\cal F}(C) := {\\cal P}(C)$ for each\n constraint $C$.\n\nDefine now for each scheme $k,l,m$ on $n$ \nthe following function $g^m_{k,l}$ on $CO_s$, where\n$s$ is the triple corresponding to the positions of the constraints\n$C_{k,l}, C_{k,m}$ and $C_{m,l}$ in ${\\cal C}$:\n\n\\[\ng^m_{k,l}(X_{k,l} \\times X_{k,m} \\times X_{m,l}) := \n(X_{k,l} \\cap \\Pi_{k,l}(X_{k,m} \\Join X_{m,l})) \\times X_{k,m} \\times X_{m,l}.\n\\]\n\nTo prove that the functions $g^m_{k,l}$ are\nconstraint reduction functions it suffices to note that\nby simple properties of the $\\Join$ operation and by\nNote \\ref{not:sol}(i) we have\n\\begin{center}\n \n\\begin{tabular}{lll}\n$X_{k,l} \\cap \\Pi_{k,l}(X_{k,m} \\Join X_{m,l})$ & = & $\\Pi_{k,l}(X_{k,l} \\Join X_{k,m} \\Join X_{m,l})$ \\\\\n & = & $\\Pi_{k,l}(Sol(X_{k,l}, X_{k,m}, X_{m,l}))$,\n\\end{tabular}\n\\end{center}\nso these functions are special cases of the functions defined in\nExample \\ref{exa:projection}.\n\\hfill{$\\Box$} \n\\end{example}\n\n\n\\begin{example} 
\\label{exa:cuts}\n\nAs a final example consider linear inequalities over integers.\nLet $x_1, \\mbox{$\\ldots$} ,x_n$ be different variables ranging over integers,\nwhere $n > 0$.\nBy a {\\em linear inequality\\\/} we mean here a formula of the\nform\n\\[\n\\sum_{i = 1}^{n} a_i x_i \\leq b,\n\\]\nwhere $a_1, \\mbox{$\\ldots$} ,a_n$ and $b$ are integers.\n\nIn what follows we consider CSP's that consist of finite\nor countable sets of linear inequalities. Each such set determines\na subset of ${\\cal N}^n$ which we view as a single constraint.\nCall such a subset an {\\em INT-LIN\\\/} set.\n\nFix now a constraint $C$ that is an {\\em INT-LIN\\\/} set formed by a\nfinite or countable set {\\em LI\\\/} of linear inequalities. Define\n${\\cal F}(C)$ to be the set of {\\em INT-LIN\\\/} sets formed by a finite\nor countable set of linear inequalities extending {\\em LI}.\nClearly, ${\\cal F}(C)$ is a family of sets based on $C$.\n\nGiven now $m$ linear inequalities\n\\[\n\\sum_{i = 1}^{n} a^{j}_{i} x_{i} \\leq b^{j},\n\\]\nwhere $j \\in [1..m]$, and $m$ nonnegative reals $c_1, \\mbox{$\\ldots$} ,c_m$, we\nconstruct a new linear inequality\n\\[\n\\sum_{i = 1}^{n} (\\sum_{j = 1}^{m} c_j a^{j}_{i}) x_{i} \\leq \\sum_{j = 1}^{m} c_j b^{j}.\n\\]\n\nIf for $i \\in [1..n]$ each coefficient\n$\\sum_{j = 1}^{m} c_j a^{j}_{i}$ is an integer, then\nwe replace the right-hand side by $\\floor{\\sum_{j = 1}^{m} c_j b^{j}}$.\n\nThis yields the inequality\n\\[\n\\sum_{i = 1}^{n} (\\sum_{j = 1}^{m} c_j a^{j}_{i}) x_{i} \\leq \\floor{\\sum_{j = 1}^{m} c_j b^{j}}\n\\]\nthat is called a {\\em Gomory-Chv\\'{a}tal cutting plane}.\n\nThe addition of a cutting plane to a set of linear inequalities on\nintegers maintains equivalence, so it is an example of a constraint\nreduction function. 
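The combination-and-rounding step above can be sketched concretely. The following is a minimal illustration, not part of the original example; the function name `chvatal_cut` and the encoding of an inequality as a pair of a coefficient list and a bound are our assumptions, and exact rationals stand in for the nonnegative reals $c_j$:

```python
from fractions import Fraction
from math import floor

def chvatal_cut(ineqs, mults):
    """Combine inequalities sum_i a[i]*x_i <= b, given as (a, b) pairs,
    with nonnegative rational multipliers. If every combined coefficient
    is an integer, floor the right-hand side -- sound because the x_i
    range over integers (the Gomory-Chvatal rounding step)."""
    n = len(ineqs[0][0])
    coeffs = [sum(Fraction(c) * a[i] for c, (a, _) in zip(mults, ineqs))
              for i in range(n)]
    rhs = sum(Fraction(c) * b for c, (_, b) in zip(mults, ineqs))
    if all(x.denominator == 1 for x in coeffs):
        rhs = Fraction(floor(rhs))
    return coeffs, rhs
```

For instance, multiplying the single inequality $2x \leq 1$ by $c_1 = 1/2$ gives $x \leq 1/2$, whose rounding yields the cut $x \leq 0$.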
\n\nIt is well-known that the process of deriving cutting planes does not\nhave to stop after one application (see, e.g., \nCook, Cunningham, Pulleyblank, and Schrijver \\cite[Section\n6.7]{CCPS98}), so this reduction function is non-idempotent.\n\\hfill{$\\Box$}\n\\end{example}\n\nWe now show that when the constraint reduction function discussed in\nExample \\ref{exa:projection} is modified by applying it to\neach argument constraint\nsimultaneously, it becomes a constraint reduction function that\nis in some sense optimal. \n\nMore precisely, assume\nthe notation of Definition \\ref{def:crf} and let $s := i_1, \\mbox{$\\ldots$}, i_l$.\nDefine a function $\\rho$\non $CO_s$ as follows:\n\\[\n\\rho({\\bf C}) := {\\bf C}',\n\\]\n\n\\noindent\nwhere \n\\[\n{\\bf C} := C_{i_1} \\times \\cdots \\times C_{i_l},\n\\]\n\n\\[\n{\\bf C}' := C'_{i_1} \\times \\cdots \\times C'_{i_l},\n\\]\nwith each $C'_{i_j} := \\Pi_{t_j}(Sol({\\bf C}))$,\nwhere $t_j$ is the scheme of $C_{i_j}$.\n\nSo $\\rho({\\bf C})$ replaces every constraint $C$ in {\\bf C} \nby the projection of $Sol({\\bf C})$ on the scheme of $C$.\n\n\n\\begin{note}[(Characterization)] \\label{note:crf-char}\nAssume the notation of Definition \\ref{def:crf}.\nA function $g$ on $CO_s$ is a constraint reduction function \niff for all ${\\bf C} \\in CO_s$\n\\[\n\\rho({\\bf C}) \\mbox{$\\:\\subseteq\\:$} g({\\bf C}) \\mbox{$\\:\\subseteq\\:$} {\\bf C}.\n\\]\n\\end{note}\n\\Proof\nSuppose that $s := i_1, \\mbox{$\\ldots$}, i_l$.\nWe have the following string of equivalences for \n\\[\ng({\\bf C}) := X_{i_1} \\times \\cdots \\times X_{i_l}:\n\\]\n$\\rho({\\bf C}) \\mbox{$\\:\\subseteq\\:$} g({\\bf C})$ iff \n$\\Pi_{t_j}(Sol({\\bf C})) \\mbox{$\\:\\subseteq\\:$} X_{i_j}$ for $j \\in [1..l]$ iff\n$Sol({\\bf C}) \\mbox{$\\:\\subseteq\\:$} Sol(g({\\bf C}))$.\n\nSo \n$\\rho({\\bf C}) \\mbox{$\\:\\subseteq\\:$} g({\\bf C}) \\mbox{$\\:\\subseteq\\:$} {\\bf C}$ iff\n($Sol({\\bf C}) = Sol(g({\\bf C}))$ and $g({\\bf C}) 
\\mbox{$\\:\\subseteq\\:$} {\\bf C}$).\n\\hfill{$\\Box$}\n\\vspace{5 mm}\n\nTake now a CSP ${\\cal P} := \\langle {\\cal D}; C_1, \\mbox{$\\ldots$}, C_k\\rangle$\nand a sequence of constraints $C'_1, \\mbox{$\\ldots$}, C'_k$ such that $C'_i \\mbox{$\\:\\subseteq\\:$} C_i$ for\n$i \\in [1..k]$.\nLet ${\\cal P}' := \\langle {\\cal D}; C'_1, \\mbox{$\\ldots$}, C'_k\\rangle$.\nWe say then that ${\\cal P'}$ {\\em is determined by ${\\cal P}$\nand $C'_1 \\times \\cdots \\times C'_k$}.\nFurther, we say that ${\\cal P'}$ is {\\em smaller than\\\/} ${\\cal P}$ and\n${\\cal P}$ is {\\em larger than\\\/} ${\\cal P'}$.\n\n\nConsider now a CSP ${\\cal P} := \\langle {\\cal D}; C_1, \\mbox{$\\ldots$}, C_k\\rangle$\nand a constraint reduction function $g$.\nSuppose that \n\\[\ng^+(C_1 \\times \\cdots \\times C_k) = C'_1 \\times \\cdots \\times C'_k,\n\\]\nwhere $g^+$ is the canonic extension of $g$ to $CO$ defined \nin Subsection \\ref{subsec:ci-cd}.\nWe now define \n\\[\ng({\\cal P}) := \\langle {\\cal D}; C'_1, \\mbox{$\\ldots$}, C'_k\\rangle.\n\\]\nWe have the following observation.\n\n\\begin{lemma} \\label{lem:c-equ}\nConsider a CSP ${\\cal P}$ and a constraint reduction function $g$.\nThen ${\\cal P}$ and $g({\\cal P})$ are equivalent.\n\\end{lemma}\n\\Proof\nSuppose that $s$ is the scheme of the function $g$ and let\n{\\bf C} be an element of $CO_s$. So {\\bf C} is a Cartesian product\nof some constraints.\nAs before we identify it with the sequence of these constraints. \nFor some sequence of schemes\n{\\bf s}, \\ {\\bf C} is\nthe {\\bf s}-sequence of the\nconstraints of ${\\cal P}$.\n\nLet now $d$ be a solution to ${\\cal P}$.\nThen by Note \\ref{not:sol}(ii) we have \n$d[\\langle {\\bf s} \\rangle] \\in Sol({\\bf C})$, so by the definition of $g$ also\n$d[\\langle {\\bf s} \\rangle] \\in Sol(g({\\bf C}))$.\nHence for every constraint $C'$ in $g({\\bf C})$ with scheme $s'$ we have \n$d[s'] \\in C'$ since\n$d[\\langle {\\bf s} \\rangle][s'] = d[s']$. 
So\n$d$ is a solution to $g({\\cal P})$.\nThe converse implication holds by the definition of a constraint reduction function.\n\\hfill{$\\Box$}\n\\vspace{5 mm}\n\nWhen dealing with a specific CSP with a constraint $\\sqcup$-po\nassociated with it we have in general several constraint reduction\nfunctions, each defined on a possibly different domain. To study the\neffect of their interaction we can use the Chaotic Iteration Theorem\n\\ref{thm:chaotic} in conjunction with the above Lemma. After\ntranslating the relevant notions into set theoretic terms we get the\nfollowing direct consequence of these results. (In this translation\n$CO_s$ corresponds to $D_s$ and $CO$ to $D$.)\n\n\\begin{theorem}[(Constraint Reduction)] \\label{thm:cons}\nConsider a CSP ${\\cal P} := \\langle {\\cal D}; C_1, \\mbox{$\\ldots$}, C_k\\rangle$\nwith a constraint $\\sqcup$-po associated with it.\nLet $F := \\C{g_1, \\mbox{$\\ldots$} , g_k}$, where each $g_i$ is a \nconstraint reduction function. \nSuppose that all functions $g_i$ are monotonic\nw.r.t. the set inclusion. 
Then \n\\begin{itemize}\n\\item the limit of \nevery chaotic iteration of \n$F^+ := \\C{g^+_1, \\mbox{$\\ldots$} , g^+_k}$\nexists;\n\\item this limit coincides with \n\\[\n\\bigcap_{j=0}^{\\infty} g^{j}(C_1 \\times \\cdots \\times C_k),\n\\]\nwhere the function $g$ on $CO$ is defined by:\n\\[\ng({\\bf C}) := \\bigcap_{i=1}^{k} g^+_i({\\bf C}),\n\\]\n\n\\item the CSP determined by ${\\cal P}$ and this limit is equivalent to ${\\cal P}$.\n\\hfill{$\\Box$}\n\\end{itemize}\n\\end{theorem}\n\nInformally, this theorem states that the order of the applications of the\nconstraint reduction functions does not matter, as long as none of them is\nindefinitely neglected.\nMoreover, the CSP corresponding to the limit of such an iteration\nprocess of the constraint reduction functions is equivalent to the\noriginal one.\n\nConsider now a CSP ${\\cal P}$ with a constraint $\\sqcup$-po associated with it\nthat satisfies the finite chain property. Then we can use the\n{\\tt CI, CII, CIQ} and {\\tt CIIQ} algorithms to compute the limits of\nthe chaotic iterations considered in the above Theorem.\nWe shall explain in Subsection \\ref{subsec:related} how\nby instantiating these algorithms with specific constraint $\\sqcup$-po's and\nconstraint reduction functions\nwe obtain specific algorithms considered in the literature.\n\nIn each case, by virtue of the {\\tt CI} Theorem \\ref{thm:CI} and its reformulations\nfor the {\\tt CII, CIQ} and {\\tt CIIQ} algorithms, we can conclude that\nthese algorithms compute the greatest common\nfixpoint w.r.t. 
the set inclusion of the functions from $F^+$.\nConsequently, the CSP determined by ${\\cal P}$ and this limit is the \nlargest CSP that is both smaller than ${\\cal P}$ and is a fixpoint of\nthe considered constraint reduction functions.\n\nSo the limit of the constraint propagation process\ncould be added to the collection of important greatest fixpoints \npresented in Barwise and Moss \\cite{BM96}.\n\n\\subsection{Domain Reduction}\n\\label{subsec:dr}\n\nIn this subsection we study the domain reduction\nprocess. First, we associate with each CSP an $\\sqcup$-po\nthat ``focuses'' on the domain reduction.\n\nConsider a CSP ${\\cal P} := \\langle D_1, \\mbox{$\\ldots$}, D_n; {\\cal C} \\rangle$. Let\nfor $i \\in [1..n]$ \n$({\\cal F}(D_i), \\supseteq)$ be an $\\sqcup$-po based on $D_i$. \nWe call the Cartesian product\n$(DO, \\mbox{$\\ \\sqsubseteq\\ $})$ of $({\\cal F}(D_i), \\supseteq)$, with $i \\in [1..n]$\n{\\em a domain $\\sqcup$-po associated with ${\\cal P}$}.\n\nAs in Subsection \\ref{subsec:ci-cd}, for\na scheme $s := i_1, \\mbox{$\\ldots$}, i_l$ we denote by \n$(DO_s, \\mbox{$\\ \\sqsubseteq\\ $}_s)$ the Cartesian product of the partial orders\n$({\\cal F}(D_{i_j}), \\supseteq)$, where $j \\in [1..l]$.\nThen, as in the previous subsection, we identify $DO_s$ with the set\n\\[\n\\C{ X_1 \\times \\cdots \\times X_l \\mid \\mbox{ $X_j \\in {\\cal F}(D_{i_j})$ for $j \\in [1..l]$}}.\n\\]\n\nNext, we introduce functions that reduce domains.\nThese functions are associated with constraints. Constraints are arbitrary\nsets of $k$-tuples for some $k$, while the $\\mbox{$\\ \\sqsubseteq\\ $}_s$ order and the $\\sqcup_s$ \noperation are defined only on Cartesian\nproducts. 
So to define these functions we use the\nset theoretic counterparts $\\supseteq$ and $\\cap$ of $\\mbox{$\\ \\sqsubseteq\\ $}_s$ and $\\sqcup_s$\nwhich are defined on arbitrary sets.\n\n\\begin{definition} \\label{def:drf}\nConsider a sequence of domains $D_1, \\mbox{$\\ldots$}, D_n$\ntogether with a sequence of families of sets ${\\cal F}(D_i)$ based on $D_i$, \nfor $i \\in [1..n]$, and a scheme $s$ on $n$.\nBy a {\\em domain reduction function\\\/} for a constraint $C$ with scheme $s$\nwe mean a function $f$ on $DO_s$ such that for all ${\\bf D} \\in DO_s$\n\\begin{itemize}\n\\item ${\\bf D} \\supseteq f({\\bf D})$,\n\n\\item $C \\cap {\\bf D} = C \\cap f({\\bf D})$.\n\\hfill{$\\Box$}\n\\end{itemize}\n\\end{definition}\n\nThe first condition states that $f$ reduces the ``current'' domains\nassociated with the constraint $C$\n(so no solution to $C$ is ``gained''),\nwhile the second condition states that during this \ndomain reduction process no solution to $C$ is ``lost''.\nIn particular, the second condition implies that \nif $C \\mbox{$\\:\\subseteq\\:$} {\\bf D}$ then\n$C \\mbox{$\\:\\subseteq\\:$} f({\\bf D})$.\n\n\\begin{example} \\label{exa:arc}\nAs a simple example of a domain reduction functions consider a binary constraint\n$C \\mbox{$\\:\\subseteq\\:$} D_1 \\times D_2$. \nLet ${\\cal F}(D_i) := {\\cal P}(D_i)$ with $i \\in [1,2]$ \nbe the families of sets based on $D_1$ and $D_2$.\n \nDefine now the projection functions $\\pi_1$ and $\\pi_2$ on\n$DO_{1,2} = {\\cal P}(D_1) \\times {\\cal P}(D_2)$ as follows:\n\\[\n\\pi_1(X \\times Y) := X' \\times Y,\n\\]\nwhere $X' = \\C{a \\in X \\mid \\mbox{$\\exists$} b \\in Y \\: (a,b) \\in C}$, and\n\\[\n\\pi_2(X \\times Y) := X \\times Y',\n\\]\nwhere $Y' = \\C{b \\in Y \\mid \\mbox{$\\exists$} a \\in X \\: (a,b) \\in C}$.\nIt is straightforward to check that $\\pi_1$ and $\\pi_2$ are\nindeed domain reduction functions.\nFurther, these functions are\nmonotonic w.r.t. 
the set inclusion and idempotent.\n\\hfill{$\\Box$}\n\\end{example}\n\n\\begin{example} \\label{exa:arcn}\nAs another example of a domain reduction function \nconsider an $n$-ary constraint\n$C \\mbox{$\\:\\subseteq\\:$} D_1 \\times \\cdots \\times D_n$.\nLet for $i \\in [1..n]$ the family of sets based on $D_i$\nbe defined by\n${\\cal F}(D_i) := {\\cal P}(D_i)$.\n\nNote that \n$DO = {\\cal P}(D_1) \\times \\cdots \\times {\\cal P}(D_n)$.\nDefine now the projection function $\\pi_{C}$ by putting for ${\\bf D} \\in DO$\n\n\\[\n\\pi_{C}({\\bf D}) := \\Pi_1(C \\cap {\\bf D}) \\times \\cdots \\times \\Pi_n(C \\cap {\\bf D}).\n\\]\nRecall from Subsection \\ref{subsec:prel}\nthat $\\Pi_i(C \\cap {\\bf D}) = \\C{a \\mid \\mbox{$\\exists$} d \\in C \\cap {\\bf D} \\ a = d[i]}$.\nClearly $\\pi_{C}$ is a domain reduction function for $C$ and is\nmonotonic w.r.t. the set inclusion and idempotent.\n\nHere the scheme of $C$ is $1,\\mbox{$\\ldots$}, n$. Obviously, $\\pi_{C}$\ncan be defined in an analogous way for a constraint $C$ with an \narbitrary scheme.\n\\hfill{$\\Box$} \n\\end{example}\n\nSo all three domain reduction functions deal with projections, respectively\non the first, second or all components\n and\ncan be visualized by means of Figure\n\\ref{fig:projection}.\n\n\\begin{figure}[htbp]\n \\begin{center}\n \\leavevmode\n\\epsfxsize7cm\n\\centerline{\\epsfbox{icalp97-1.ps}}\n \\caption{Domain reduction functions.}\n \\label{fig:projection}\n \\end{center}\n\\end{figure}\n\nThe following observation provides an equivalent definition of a domain\nreduction function in terms of the projection function defined in the\nlast example. 
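Before stating that observation, the projection function $\pi_{C}$ of Example \ref{exa:arcn} can be made concrete on explicit finite relations. The following is a minimal sketch of ours, not part of the original text; it assumes a constraint is encoded as a Python set of tuples and the domains as a list of sets:

```python
def pi_C(C, domains):
    """Projection-based domain reduction: restrict the constraint C
    (a set of n-tuples) to the current domains, then project the
    surviving tuples back onto every coordinate."""
    restricted = [d for d in C
                  if all(d[i] in domains[i] for i in range(len(domains)))]
    return [{d[i] for d in restricted} for i in range(len(domains))]
```

For the binary inequality constraint over $\{0,1\}$, `pi_C({(0, 1), (1, 0)}, [{0}, {0, 1}])` reduces the second domain to $\{1\}$, and applying `pi_C` again changes nothing, reflecting the inflationarity (w.r.t. $\supseteq$) and idempotence noted in Example \ref{exa:arcn}.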
\n\n\\begin{note}[(Characterization)] \\label{note:char}\nAssume the notation of Definition \\ref{def:drf}.\nA function $f$ on $DO_s$ is a domain reduction function \nfor the constraint $C$ iff\nfor all ${\\bf D} \\in DO_s$\n\\[\n\\pi_{C}({\\bf D}) \\mbox{$\\:\\subseteq\\:$} f({\\bf D}) \\mbox{$\\:\\subseteq\\:$} {\\bf D}.\n\\]\n\\end{note}\n\\Proof\nSuppose that $s := i_1, \\mbox{$\\ldots$}, i_l$.\nWe have the following string of equivalences for \n\\[\nf({\\bf D}) := X_{i_1} \\times \\cdots \\times X_{i_l}:\n\\]\n$\\pi_{C}({\\bf D}) \\mbox{$\\:\\subseteq\\:$} f({\\bf D})$ iff\n$\\Pi_{i_j}(C \\cap {\\bf D}) \\mbox{$\\:\\subseteq\\:$} X_{i_j}$ for $j \\in [1..l]$ iff\n$C \\cap {\\bf D} \\mbox{$\\:\\subseteq\\:$} f({\\bf D})$.\n\nSo $\\pi_{C}({\\bf D}) \\mbox{$\\:\\subseteq\\:$} f({\\bf D}) \\mbox{$\\:\\subseteq\\:$} {\\bf D}$ iff\n($C \\cap {\\bf D} = C \\cap f({\\bf D})$ and $f({\\bf D}) \\mbox{$\\:\\subseteq\\:$} {\\bf D}$).\n\\hfill{$\\Box$}\n\\vspace{5 mm}\n\nIntuitively, this observation means that the projection function\n$\\pi_{C}$ is an ``optimal'' domain reduction function. In general,\nhowever, $\\pi_{C}$ does not need to be a domain reduction function, since\nthe sets $\\Pi_i(C \\cap {\\bf D})$ do not have to belong to the\nused families of sets based on the domain $D_i$. 
The next\nexample provides an illustration of such a situation.\n\n\\begin{example}\n\\label{exa:reals}\nConsider an $n$-ary constraint $C$ on reals, that is\n$C \\mbox{$\\:\\subseteq\\:$} {\\cal R}_{+}^{n}$.\nLet ${\\cal R}_{+} := {\\cal R} \\cup \\C{+ \\infty, - \\infty}$,\n$F$ be a finite subset of ${\\cal R}_{+}$ containing \n$- \\infty$ and $+ \\infty$\nand let the family \n${\\cal F}({\\cal R}_{+})$ of subsets of ${\\cal R}_{+}$\nbe defined as in Example \\ref{exa:partial}.\nSo \n\\[\n{\\cal F}({\\cal R}_{+}) = \\C{[a,b] \\mid a,b \\in F}\n\\]\nand\n\\[\nDO = \\C{[a_1, b_1] \\times \\cdots \\times [a_n, b_n] \\mid a_i,b_i \\in F \\mbox{ for } i \\in [1..n]}.\n\\]\n\nFurther, given a subset $X$ of ${\\cal R}_{+}$ we define\n\\[\nint(X) := \\cap \\C{Y \\in {\\cal F}({\\cal R}_{+}) \\mid X \\subseteq Y}.\n\\]\nSo $int(X)$ is the smallest interval with bounds in $F$ that contains $X$.\nClearly, $int(X)$ exists for every $X$.\n\nDefine now the function $f$ on $DO$ by putting for ${\\bf D} \\in DO$\n\n\\[\nf({\\bf D}) := int(\\Pi_1(C \\cap {\\bf D})) \\times \\cdots \\times int(\\Pi_n(C \\cap {\\bf D})).\n\\]\n\nBenhamou and Older \\cite{BO97} proved that $f$ is a domain reduction function that is\nmonotonic w.r.t. the set inclusion and idempotent.\nNote that the first property is a direct consequence of the\nCharacterization Note \\ref{note:char}.\n\\hfill{$\\Box$}\n\\end{example}\n\nAll the domain reduction functions given so far\nwere idempotent. We now provide an example of a natural non-idempotent\nreduction function. 
\n\n\\begin{example}\n\\label{exa:lineq}\nWe consider linear equalities over integer interval domains.\nBy a {\\em linear equality\\\/} we mean here a formula of the\nform\n\\[\n\\sum_{i = 1}^{n} a_i x_i = b,\n\\]\nwhere $a_1, \\mbox{$\\ldots$} ,a_n$ and $b$ are integers.\n\nIn turn, by an {\\em integer interval\\\/} we mean\nan expression of the form\n\\[\n[a..b]\n\\]\nwhere $a$ and $b$ are integers;\n$[a..b]$ denotes the set of all integers between $a$ and $b$,\nincluding $a$ and $b$.\n\nThe domain reduction functions for linear equalities over\ninteger intervals are simple modifications of the reduction rule introduced in\nDavis \\cite[page 306]{davis87} that dealt with linear constraints over\nclosed intervals of reals.\nIn the case of a linear equality\n\\[\n\\sum_{i \\in {\\em POS}} a_i x_i - \\sum_{i \\in {\\em NEG}} a_i x_i = b\n\\]\nwhere \n\\begin{itemize}\n\\item $a_i$ is a positive integer for $i \\in {\\em POS} \\cup {\\em NEG}$,\n\n\\item $x_i$ and $x_j$ are different variables for $i \\neq j$ and $i,j\n \\in {\\em POS} \\cup {\\em NEG}$,\n\n\\item $b$ is an integer,\n\\end{itemize}\nsuch a function is defined as follows (see, e.g., Apt \\cite{Apt98a}):\n\n\n\\[\nf([l_1 .. h_1], \\mbox{$\\ldots$}, [l_n..h_n]) := \n([l'_1 .. 
h'_1], \\mbox{$\\ldots$}, [l'_n..h'_n])\n\\]\nwhere for $j \\in {\\em POS}$\n\\[\nl'_j := max(l_j, \\ceiling{\\gamma_j}), \\ h'_j := min(h_j, \\floor{\\alpha_j}),\n\\]\nfor $j \\in {\\em NEG}$\n\\[\nl'_j := max(l_j, \\ceiling{\\beta_j}), \\ h'_j := min(h_j, \\floor{\\delta_j}),\n\\]\nand where\n\\[\n\\alpha_j := \\frac{b - \\sum_{i \\in {\\em POS} - \\{j\\}} a_i l_i + \\sum_{i \\in {\\em NEG}} a_i h_i}{a_j}\n\\]\n\n\\[\n\\beta_j := \\frac{- b + \\sum_{i \\in {\\em POS}} a_i l_i - \\sum_{i \\in {\\em NEG} - \\{j\\}} a_i h_i}{a_j}\n\\]\n\n\\[\n\\gamma_j := \\frac{b - \\sum_{i \\in {\\em POS} - \\{j\\}} a_i h_i + \\sum_{i \\in {\\em NEG}} a_i l_i}{a_j}\n\\]\nand \n\\[\n\\delta_j := \\frac{- b + \\sum_{i \\in {\\em POS}} a_i h_i - \\sum_{i \\in {\\em NEG} - \\{j\\}} a_i l_i}{a_j}\n\\]\n(It is worthwhile to mention that this function can be derived by\nmeans of cutting planes mentioned in Example \\ref{exa:cuts}).\n\nFix now some initial integer intervals $I_1, \\mbox{$\\ldots$}, I_n$ and\nlet for $i \\in [1..n]$ the family of sets ${\\cal F}(I_i)$ consist\nof all integer subintervals of $I_i$.\n\nThe above defined function $f$ is then a domain reduction function\ndefined on the Cartesian product of ${\\cal F}(I_i)$ for $i \\in [1..n]$\nand is easily seen to be non-idempotent. For example, in case of the\nCSP\n\\[\n{\\p{x \\in [0..9], y \\in [1..8]}{3x - 5y = 4}}\n\\]\na straightforward calculation shows that \n\\[\nf([0..9], [1..8]) = ([3..9], [1..4])\n\\]\nand\n\\[\nf([3..9], [1..4]) = ([3..8], [1..4]).\n\\]\n\\hfill{$\\Box$} \n\\end{example}\n\n\nTake now a CSP ${\\cal P} := \\langle D_1, \\mbox{$\\ldots$}, D_n; {\\cal C}\\rangle$ \nand a sequence of domains $D'_1, \\mbox{$\\ldots$}, D'_n$ such that $D'_i \\mbox{$\\:\\subseteq\\:$} D_i$ for\n$i \\in [1..n]$.\nConsider a CSP ${\\cal P'}$ obtained from ${\\cal P}$ by replacing\neach domain $D_i$ by $D'_i$ and by restricting each constraint in ${\\cal C}$ to\nthese new domains. 
We say then that ${\\cal P'}$ {\\em is determined by ${\\cal P}$\nand $D'_1 \\times \\cdots \\times D'_n$}.\n\n\nConsider now a CSP ${\\cal P} := \\langle D_1, \\mbox{$\\ldots$}, D_n; {\\cal C}\\rangle$ \nwith a domain $\\sqcup$-po associated with it\nand a domain reduction function $f$ for a constraint $C$ of ${\\cal C}$.\nWe now define $f({\\cal P})$ to be the CSP obtained from ${\\cal P}$ \nby reducing its domains using the function $f$. \n\nMore precisely, suppose that \n\\[\nf^+(D_1 \\times \\cdots \\times D_n) = D'_1 \\times \\cdots \\times D'_n,\n\\]\nwhere $f^+$ is the canonic extension of $f$ to $DO$ defined \nin Subsection \\ref{subsec:ci-cd}.\nThen $f({\\cal P})$ is the CSP determined by ${\\cal P}$ \nand $D'_1 \\times \\cdots \\times D'_n$. The following observation is\nan analogue of Lemma \\ref{lem:c-equ}.\n\\begin{lemma}\nConsider a CSP ${\\cal P}$ and a domain reduction function $f$.\nThen ${\\cal P}$ and $f({\\cal P})$ are equivalent.\n\\end{lemma}\n\n\\Proof\nSuppose that $D_1, \\mbox{$\\ldots$}, D_n$ are\nthe domains of ${\\cal P}$ and assume that $f$ is a domain reduction function \nfor $C$ with scheme $i_1, \\mbox{$\\ldots$}, i_l$. \nBy definition $f$ is defined on $D_{i_1} \\times \\cdots \\times D_{i_l}$.\nLet\n\\[\nf(D_{i_1} \\times \\cdots \\times D_{i_l}) = D'_{i_1} \\times \\cdots \\times D'_{i_l}.\n\\]\nTake now a solution $d$ to ${\\cal P}$. Then \n$d[i_1, \\mbox{$\\ldots$}, i_l] \\in C$, so by the definition of $f$ also \n$d[i_1, \\mbox{$\\ldots$}, i_l] \\in D'_{i_1} \\times \\cdots \\times D'_{i_l}$. So $d$ is\nalso a solution to $f({\\cal P})$.\nThe converse implication holds by the definition of a domain reduction function.\n\\hfill{$\\Box$}\n\\vspace{5 mm}\n\nFinally, the following result is an analogue of the Constraint\nReduction Theorem \\ref{thm:cons}. 
It is a consequence of the Chaotic Iteration\nTheorem \\ref{thm:chaotic} and the above Lemma, obtained by translating\nthe relevant notions into set theoretic terms.\n(In this translation $DO_s$ corresponds to $D_s$ and $DO$ to $D$.)\n\n\\begin{theorem}[(Domain Reduction)] \\label{thm:dom}\nConsider a CSP ${\\cal P} := \\langle D_1, \\mbox{$\\ldots$}, D_n; \\cal C\\rangle$\nwith a domain $\\sqcup$-po associated with it.\nLet $F := \\C{f_1, \\mbox{$\\ldots$} , f_k}$, where each $f_i$ is a \ndomain reduction function for some constraint in\n${\\cal C}$. Suppose that all functions $f_i$ are monotonic\nw.r.t. the set inclusion. Then \n\\begin{itemize}\n\\item the limit of \nevery chaotic iteration of \n$F^+ := \\C{f^+_1, \\mbox{$\\ldots$} , f^+_k}$\nexists;\n\\item this limit coincides with \n\\[\n\\bigcap_{j=0}^{\\infty} f^{j}(D_1 \\times \\cdots \\times D_n),\n\\]\nwhere the function $f$ on $DO$ is defined by:\n\\[\nf({\\bf D}) := \\bigcap_{i=1}^{k} f^+_i({\\bf D}),\n\\]\n\n\\item the CSP determined by ${\\cal P}$ and this limit is equivalent to ${\\cal P}$.\n\\hfill{$\\Box$}\n\\end{itemize}\n\\end{theorem}\n\nThe above result shows an analogy between the constraint reduction\nfunctions and the domain reduction functions. In fact, the domain reduction functions can be modeled as\nconstraint reduction functions in the following way.\n\nFirst, given a CSP $\\langle D_1, \\mbox{$\\ldots$}, D_n; {\\cal C} \\rangle$\nadd to it $n$ unary constraints, each of which coincides with a\ndifferent domain $D_i$. This yields\n${\\cal P} := \\langle D_1, \\mbox{$\\ldots$}, D_n; {\\cal C}, D_1, \\mbox{$\\ldots$}, D_n \\rangle$.\nObviously, both CSP's are equivalent.\n\nNext, associate, as in the previous\nsubsection, with each constraint $C$ of ${\\cal P}$\nan $\\sqcup$-po ${\\cal F}(C)$ based on it. 
\n\nTake now a constraint $C \\in {\\cal C}$ with a \nscheme $s := i_1, \\mbox{$\\ldots$}, i_l$ and a function $f$ on $DO_s$.\nDefine a function $g$ on \n\n\\[\n{\\cal F}(C) \\times {\\cal F}(D_{i_1}) \\cdots \\times {\\cal F}(D_{i_l})\n\\]\nby\n\\[\ng(C', {\\bf D}) := (C', f({\\bf D})).\n\\]\n\nThen $f$ is a domain reduction function iff\n$g$ is a constraint reduction function, since\n$Sol(C', {\\bf D}) := C' \\cap {\\bf D}$.\n\nThis simple representation of the domain reduction functions as the\nconstraint reduction functions shows that the latter concept is more\ngeneral and explains the analogy between the results on the constraint\nreduction functions and domain reduction functions.\nIt also allows us to analyze the outcome of ``hybrid'' chaotic\niterations in which both domain reduction functions and constraint\nreduction functions are used.\n\nWe discussed the domain reduction functions separately,\nbecause, as we shall see in the next section, they have\nbeen extensively studied, especially in the context of CSP's with\nbinary constraints and of interval arithmetic. \n\n\n\\subsection{Automatic Derivation of Constraint Propagation Algorithms}\n\\label{subsec:auto}\n\nWe now show how specific provably correct algorithms for achieving a\nlocal consistency notion can be automatically derived. 
The idea is\nthat we characterize a given local consistency notion as a\ncommon fixpoint of a finite set of monotonic, inflationary\nand possibly idempotent\nfunctions and then instantiate any of the \n{\\tt CI, CII, CIQ} or {\\tt CIIQ} algorithms with these functions.\n As it is difficult to define local consistency formally,\nwe illustrate the idea on two examples.\n\n\\begin{example}\n\\label{exa:arccon}\nFirst, consider the notion of arc-consistency for $n$-ary relations,\ndefined in Mohr and Masini \\cite{MM88}.\nWe say that a constraint $C \\mbox{$\\:\\subseteq\\:$} D_1 \\times \\cdots \\times D_n$ is\n{\\em arc-consistent\\\/} if for every $i \\in [1..n]$ and $a \\in D_i$\nthere exists $d \\in C$ such that $a = d[i]$.\nThat is, for every involved domain each element of it participates in a\nsolution to $C$.\nA CSP is called {\\em arc consistent\\\/} if every constraint \nof it is.\n\nFor instance, the CSP $\\langle \\C{0,1}, \\C{0,1}; =, \\neq \\rangle$ that\nconsists of two binary constraints, that of equality and inequality\nover the 0-1 domain, is arc consistent (though obviously\ninconsistent).\n\nNote that a CSP $\\langle D_1, \\mbox{$\\ldots$}, D_n ; {\\cal C}\\rangle$ \nis arc consistent iff for every constraint $C$ of it\nwith a scheme $s := i_1, \\mbox{$\\ldots$}, i_l$ we have\n$\\pi_C(D_{i_1} \\times \\cdots \\times D_{i_l}) = D_{i_1} \\times \\cdots \\times D_{i_l}$,\nwhere $\\pi_C$ is defined in Example \\ref{exa:arcn}.\nWe noted there that the projection functions\n$\\pi_C$ are domain reduction functions that are monotonic w.r.t. 
the\nset inclusion and idempotent.\n\nBy virtue of the {\\tt CI} Theorem \\ref{thm:CI} reformulated for the {\\tt CII} algorithm,\nwe can now use the {\\tt CII} algorithm to achieve arc consistency \nfor a CSP with finite domains by instantiating the\nfunctions of this algorithm with the projection functions $\\pi_C$.\n\nBy the Domain Reduction Theorem\n\\ref{thm:dom} we conclude that the CSP computed by this\nalgorithm is equivalent to the original one\nand is the greatest arc consistent CSP that is smaller than\nthe original one.\n\\hfill{$\\Box$}\n\\end{example}\n\n\\begin{example}\nNext, consider the notion of relational consistency proposed\nin Dechter and van Beek \\cite{DvB97}.\nRelational consistency is\na very powerful concept that generalizes several\nconsistency notions discussed until now.\n\nTo define it we need to introduce some auxiliary concepts first.\nConsider a CSP $\\langle D_1, \\mbox{$\\ldots$}, D_n ; {\\cal C}\\rangle$.\nTake a scheme $t := i_1, \\mbox{$\\ldots$}, i_l$ on $n$.\nWe call \n$d \\in D_{i_1} \\times \\cdots \\times D_{i_l}$ a tuple of {\\em type $t$}.\nFurther, we say that $d$ is\n{\\em consistent\\\/} if for every \nsubsequence $s$ of $t$ and a constraint $C \\in {\\cal C}$ \nwith scheme $s$ we have $d[s] \\in C$.\n\nA CSP ${\\cal P}$ is called {\\em relationally $m$-consistent} if for\nany {\\bf s}-sequence $C_1, \\mbox{$\\ldots$}, C_m$ of different constraints of\n${\\cal P}$ and a subsequence $t$ of $\\langle {\\bf s} \\rangle$, every\nconsistent tuple of type $t$ belongs to $\\Pi_t(C_1 \\Join \\cdots \\Join\nC_m)$, that is, every consistent tuple of type $t$ can be extended to\nan element of $Sol(C_1, \\mbox{$\\ldots$}, C_m)$.\n\nAs the first step we characterize this notion \nas a common fixed point\nof a finite set of monotonic and inflationary functions.\n\nConsider a CSP ${\\cal P} := \\langle D_1, \\mbox{$\\ldots$}, D_n; C_1, \\mbox{$\\ldots$}, C_k \\rangle$.\nAssume for simplicity that for every scheme $s$ on $n$ there 
is a\nunique constraint with scheme $s$. Each CSP is trivially equivalent\nto such a CSP --- it suffices to replace for each scheme $s$ the set\nof constraints with scheme $s$ by their intersection and to introduce\n``universal constraints'' for the schemes without a constraint.\nBy a ``universal constraint'' we mean here a Cartesian product of some \ndomains.\n\nConsider now a scheme $i_1, \\mbox{$\\ldots$}, i_m$ on $k$.\nLet {\\bf s} be such that $C_{i_1}, \\mbox{$\\ldots$}, C_{i_m}$ is an\n{\\bf s}-sequence of constraints and let \n$t$ be a subsequence of $\\langle {\\bf s} \\rangle$.\nFurther, let $C_{i_0}$ be the constraint of $\\cal P$ with scheme $t$.\nPut\n$s := \\langle (i_0), (i_1, \\mbox{$\\ldots$}, i_m) \\rangle$.\n(Note that if $i_0$ does not appear in $i_1, \\mbox{$\\ldots$}, i_m$ then\n$s = i_0, i_1, \\mbox{$\\ldots$}, i_m$ and otherwise $s$ is the permutation of \n$i_1, \\mbox{$\\ldots$}, i_m$ obtained by transposing $i_0$ with the first element.)\n\nDefine now a function $g_{s}$ on $CO_{s}$ by\n\\[\ng_{s}(C \\times {\\bf C}) := (C \\cap \\Pi_t(\\Join {\\bf C})) \\times {\\bf C}.\n\\]\nIt is easy to see that \nif for each function $g_s$ of the above form we have\n\\[\ng^+_s(C_1 \\times \\cdots \\times C_k) = C_1 \\times \\cdots \\times C_k,\n\\]\nthen ${\\cal P}$ is relationally $m$-consistent.\n(The converse implication is in general not true.)\nNote that the functions $g_s$ are inflationary and monotonic\nw.r.t. the inverse subset order $\\supseteq$ and also idempotent. 
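For illustration, the function $g_s$ can be sketched in Python, with relations stored as sets of tuples together with their schemes. The data representation and the helper names join and project are our own assumptions, not the paper's notation.

```python
# Relations are sets of tuples; a scheme lists, per tuple position, the
# index of the variable stored there.

def join(r1, s1, r2, s2):
    """Natural join (the |><| operation) of r1 with scheme s1 and r2 with
    scheme s2; returns the joined relation and its scheme."""
    scheme = s1 + [v for v in s2 if v not in s1]
    joined = set()
    for t1 in r1:
        for t2 in r2:
            if all(t1[s1.index(v)] == t2[s2.index(v)] for v in s2 if v in s1):
                joined.add(tuple(t1[s1.index(v)] if v in s1 else t2[s2.index(v)]
                                 for v in scheme))
    return joined, scheme

def project(r, scheme, t):
    """Pi_t: project relation r with the given scheme onto the variables t."""
    return {tuple(tup[scheme.index(v)] for v in t) for tup in r}

def g_s(C, t, constraints):
    """g_s(C x C-vec) = (C intersect Pi_t(join of C-vec)) x C-vec."""
    (rel, scheme), *rest = constraints
    for rel2, scheme2 in rest:
        rel, scheme = join(rel, scheme, rel2, scheme2)
    return C & project(rel, scheme, t), constraints

# Two binary "not equal" constraints on variables (0,1) and (1,2);
# the constraint with scheme t = [0, 2] shrinks to the consistent pairs.
ne = {(0, 1), (1, 0)}
full = {(a, b) for a in (0, 1) for b in (0, 1)}
reduced, _ = g_s(full, [0, 2], [(ne, [0, 1]), (ne, [1, 2])])
# reduced == {(0, 0), (1, 1)}
```

Applying g_s a second time leaves the result unchanged, which matches the idempotence noted above.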
\n\nConsequently, again by the {\\tt CI} Theorem \\ref{thm:CI}\nreformulated for the {\\tt CII} algorithm,\nwe can use the {\\tt CII} algorithm to achieve relational $m$-consistency \nfor a CSP with finite domains\nby ``feeding'' into this algorithm the above defined functions.\nThe obtained algorithm improves upon the brute force algorithm\n(the authors' terminology) proposed in Dechter and van Beek \n\\cite{DvB97} since useless constraint modifications \nare avoided.\n\nAs in Example \\ref{exa:path}, by simple properties of the $\\Join$\noperation and by Note \\ref{not:sol}(i) we have \n\\[\nC \\cap \\Pi_t(\\Join {\\bf C}) = \\Pi_t(C \\Join (\\Join {\\bf C})) = \n\\Pi_t(Sol(C, {\\bf C})).\n\\]\nHence, by virtue of Example\n\\ref{exa:projection}, the functions $g_s$ are all constraint reduction\nfunctions. Consequently, by the Constraint Reduction Theorem\n\\ref{thm:cons} we conclude that the CSP computed by the just\ndiscussed algorithm is equivalent to the original one. \n\\hfill{$\\Box$}\n\\end{example}\n\n\n\\section{Concluding Remarks}\n\\label{sec:concluding}\n\n\\subsection{Related Work}\n\\label{subsec:related}\n\nAs already mentioned in the introduction, the idea of chaotic\niterations was originally used in numerical analysis. 
The concept\ngoes back to the fifties and was successively generalized into the\nframework of Baudet \\cite{Bau78} on which Cousot and Cousot\n\\cite{CC77a} was based.\nOur notion of chaotic iterations on partial orders is derived from the\nlast reference.\nA historical overview can be found in Cousot \\cite{Cou78}.\n\nLet us turn now to a review of the work on constraint propagation.\nWe show how our results provide a uniform framework\nto explain and generalize the work of others.\n\nIt is illuminating to see how attempts at finding general principles\nbehind the constraint propagation algorithms repeatedly recur\nin the literature on constraint satisfaction problems spanning \nthe last twenty years.\n\nAs already stated in the introduction, the aim of the constraint\npropagation algorithms is most often to achieve some form of local\nconsistency. As a result these algorithms are usually called in the\nliterature ``consistency algorithms'' or ``consistency enforcing\nalgorithms'' though, as already mentioned, some\nother names are also used.\n\nIn an early work of Montanari \\cite{montanari-networks} the notion\nof path-consistency was defined and a constraint propagation \nalgorithm was introduced to achieve it.\nThen, in the context of analysis of polyhedral scenes,\nanother constraint propagation algorithm was proposed in \nWaltz \\cite{waltz75}. \n\nIn Mackworth \\cite{mackworth-consistency} the notion of\narc-consistency was introduced and Waltz' algorithm was explained in\nmore general terms of CSP's with binary constraints. Also, a unified\nframework was proposed to explain the arc- and path-consis\\-tency\nalgorithms. 
To this end the arc-consistency algorithm {\\tt AC-3} and\nthe path-consistency algorithm {\\tt PC-2} were introduced and the\nlatter algorithm was obtained from the former one by pursuing the\nanalogy between both notions of consistency.\n\nA version of the {\\tt AC-3} consistency algorithm can be obtained by instantiating\nthe {\\tt CII} algorithm with the domain reduction functions defined\nin Example \\ref{exa:arc}, whereas a version of the {\\tt PC-2} algorithm can be\nobtained by instantiating this algorithm with the constraint reduction\nfunctions defined in Example \\ref{exa:path}.\n\nIn Davis \\cite{davis87} another generalization of Waltz' algorithm was\nproposed that dealt with $n$-ary constraints. The algorithm proposed\nthere can be obtained by instantiating the {\\tt CIQ} algorithm with\nthe projection functions of Example \\ref{exa:arc} generalized to $n$-ary\nconstraints. To obtain a precise match the {\\bf enqueue} operation in this\nalgorithm should enqueue the projection functions related to one constraint\nin ``blocks''.\n\nIn Dechter and Pearl \\cite{dechter88} the notions of arc- and path-consistency\nwere modified to directional arc- and path-consistency, versions\nthat take into account some total order $<_d$ of the domain indices,\nand the algorithms for achieving these forms of consistency were\npresented. Such algorithms can be obtained as instances of the {\\tt\n CIQ} algorithm as follows.\n\nFor the case of directional arc-consistency the queue in this\nalgorithm should be instantiated with the set of the domain reduction\nfunctions $\\pi_1$ of Example \\ref{exa:arc} for the constraints the\nscheme of which is consistent with the $<_d$ order. 
These functions\nshould be ordered in such a way that the domain reduction functions\nfor the constraint with the $<_d$-large second index appear earlier.\nThis order has the effect that the first argument\nof the {\\bf enqueue} operation within\nthe {\\bf if-then-fi} statement always consists of domain\nreduction functions that {\\em are already\\\/} in the queue.\nSo this {\\bf if-then-fi} statement can be deleted.\nConsequently, the algorithm can be rewritten as a simple {\\bf for}\nloop that processes the selected domain reduction functions $\\pi_1$ in the\nappropriate order.\n\nFor the case of directional path-consistency the constraint reduction\nfunctions $g^m_{k,l}$ should be used only with $k,l <_d m$ and the\nqueue in the {\\tt CIQ} algorithm should be initialized in such a way\nthat the functions $g^m_{k,l}$ with the $<_d$-large $m$ index appear\nearlier. As in the case of directional arc-consistency this algorithm\ncan be rewritten as a simple {\\bf for} loop.\n\nIn Montanari and Rossi \\cite{MR91} a general study of constraint propagation was\nundertaken by defining the notion of a relaxation rule and by\nproposing a general relaxation algorithm. The notion of a relaxation\nrule coincides with our notion of a constraint propagation function\ninstantiated with the functions defined in Example\n\\ref{exa:projection} and the general relaxation algorithm is the\ncorresponding instance of our {\\tt CI} algorithm.\n\nIn Montanari and Rossi \\cite{MR91} it was also shown that the notions\nof arc-consistency and path-consistency can be defined by means of\nrelaxation rules and that as a result arc-consistency and\npath-consistency algorithms can be obtained by instantiating with\nthese rules their general relaxation algorithm.\n\nAnother, early attempt at providing a general framework to explain\nconstraint propagation was\nundertaken in Caseau \\cite{caseau91}. 
In this paper abstract\ninterpretations and a version of the {\\tt CIQ} algorithm are used to\nstudy iterations that result from applying approximations\nof the projection functions of Example \\ref{exa:arc} generalized \nto $n$-ary constraints.\nIt seems that for finite domains these approximation functions coincide with \nour concept of domain reduction functions.\n\nNext, Van Hentenryck, Deville and Teng\n\\cite{vanhentenryck-generic} presented a generic arc consistency\nalgorithm, called {\\tt AC-5}, that can be specialized to the known\narc-consistency algorithms {\\tt AC-3} and {\\tt AC-4} and also to new\narc-consistency algorithms for specific classes of constraints.\nMore recently, this work was extended in \nDeville, Barette and Van Hentenryck \n\\cite{DBV97} to path-consistency algorithms.\n\nLet us now turn our attention to constraints over reals.\nIn Lhomme \\cite{Lho93} the notion of arc B-consistency was introduced and\nan algorithm proposed that enforces it for constraint satisfaction problems\ndefined on reals. This algorithm can be obtained by\ninstantiating our {\\tt CI} algorithm with the functions\ndefined in Example \\ref{exa:reals}.\n\nNext, in Benhamou, McAllester, and Van Hentenryck \n\\cite{BMV94} and Benhamou and Older \\cite{BO97} specific\nfunctions, called narrowing functions, were \nassociated with constraints in the context of interval arithmetic\nfor reals and some properties of them were\nestablished. In our terminology it\nmeans that these are idempotent and monotonic domain reduction functions.\nOne such function is defined in Example \\ref{exa:arcn}.\nAs a consequence, the algorithms proposed\nin these papers, called respectively a fixpoint algorithm\nand a narrowing algorithm, become\ninstances of our {\\tt CIIQ} algorithm and {\\tt CII} algorithm.\n\nTwo other attempts to provide a general setting for\nconstraint propagation algorithms can be found in\nBenhamou \\cite{Ben96} and Telerman and Ushakov \\cite{TU96}. 
In these papers \ninstead of $\\sqcup$-po's specific\nfamilies of subsets of the considered domain are taken with the\ninverse subset order. In Benhamou \\cite{Ben96} they are called approximate\ndomains and in Telerman and Ushakov \\cite{TU96} subdefinite models. \nThen specific algorithms are used to compute the outcome of\nconstraint propagation.\nThe considered families of subsets correspond to our $\\sqcup$-po's, \nthe discussed functions are in our terminology \nidempotent and monotonic domain reduction\nfunctions and the considered algorithms are, respectively,\ninstances of our {\\tt CII} and {\\tt CI} algorithm.\n\nIn both papers it was noted that the algorithms compute the same value\nindependently of the order of the applications of the functions used.\nIn Benhamou \\cite{Ben96} local consistency is defined as the largest\nfixpoint of such a collection of functions and it is observed that on finite\ndomains the {\\tt CII} algorithm computes this largest fixpoint.\nIn Telerman and Ushakov \n\\cite{TU96} the subdefinite models are discussed as a general\napproach to model simulation, imprecise data and constraint programming.\nRelated articles published in the 1980s in Russian are also discussed there.\n\nThe importance of fairness for the study of constraint propagation was\nfirst noticed in G{\\\"u}sgen and Hertzberg \\cite{guesgen-fundamental}\nwhere chaotic iterations of monotonic domain reduction functions were considered.\nResults of Section \\ref{sec:chaotic} (in view of their applications to the\ndomain reduction process in Subsection \\ref{subsec:dr}) \ngeneralize the results of this paper to arbitrary \n$\\sqcup$-po's and their Cartesian products. 
In particular, \nStabilization Corollary \\ref{cor:chaotic} generalizes the main result of this paper.\n\nFairness also plays a prominent role \nin Montanari and Rossi \\cite{MR91}, while the relevance of the chaotic\niteration was independently noticed in Fages, Fowler, and Sola \\cite{FFS96} and\nvan Emden \\cite{Emd97}. In the latter paper the generic chaotic iteration\nalgorithm {\\tt CII} was formulated and proved correct for the domain\nreduction functions defined in Benhamou and Older \\cite{BO97} and it was\nshown that the limit of the constraint propagation process for these\nfunctions is their greatest common fixpoint.\n\nThe idea that the meaning of a constraint is a function (on a\nconstraint store) with some algebraic properties was put forward in\nSaraswat, Rinard, and Panangaden \\cite{saraswat-semantic}, where the\nproperties of being inflationary (called there extensive), monotonic\nand idempotent were singled out.\n\nA number of other constraint propagation algorithms that were proposed\nin the literature, for example, in four out of the first five issues of\nthe Constraints journal, can be shown to be instances of the generic\nchaotic iteration algorithms.\n\nIn each of the discussed algorithms a minor optimization can be incorporated,\nthe purpose of which is to stop the computation as soon as one of the\nvariable domains becomes empty. In some of the algorithms discussed\nabove this optimization is already present. 
For simplicity we disregarded\nit in our discussion.\nThis modification can be easily incorporated into our generic algorithms\nby using $\\mbox{$\\ \\sqsubseteq\\ $}$-po's with the greatest element $\\top$ and by\nenforcing an exit from the {\\bf while} loop as soon as one of the \ncomponents of $d$ becomes $\\top$.\n\n\\subsection{Idempotence}\n\nIn most of the above papers the (often implicitly) considered semantic,\nconstraint or domain reduction functions are idempotent, so we now\ncomment on the relevance of this assumption.\n\nTo start with, we exhibited in Examples \\ref{exa:cuts} and\n\\ref{exa:lineq} natural constraint and domain reduction functions\nthat are not idempotent. Secondly, as noticed in\nOlder and Vellino \\cite{older-constraint}, another paper on constraints for\ninterval arithmetic on reals, we can always replace each\nnon-idempotent inflationary function $f$ by\n\\[\nf^{*}(x) := \\bigsqcup_{i=1}^{\\infty} f^{i}(x).\n\\]\nThe following is now straightforward to check.\n\n\\begin{note}\nConsider an $\\sqcup$-po $(D, \\mbox{$\\ \\sqsubseteq\\ $})$ and a function $f$ on $D$.\n\\begin{itemize}\n\\item If $f$ is inflationary, then so is $f^{*}$. \n\n\\item If $f$ is monotonic, then so is $f^{*}$. \n\n\\item If $f$ is inflationary and $(D, \\mbox{$\\ \\sqsubseteq\\ $})$ has the finite chain property, \nthen $f^{*}$ is idempotent.\n\n\\item If $f$ is idempotent, then $f^{*} = f$.\n\n\\item Suppose that $(D, \\mbox{$\\ \\sqsubseteq\\ $})$ has the finite chain property.\nLet $F := \\C{f_1, \\mbox{$\\ldots$}, f_k}$ be a set of inflationary, monotonic\nfunctions on $D$ \nand let $F^{*} := \\C{f^{*}_1, \\mbox{$\\ldots$}, f^{*}_k}$.\nThen the limits of all\nchaotic iterations of $F$ and of $F^{*}$ exist and always coincide.\n\\hfill{$\\Box$}\n\\end{itemize}\n\\end{note}\n\nConsequently, under the conditions of the last item,\nevery chaotic iteration of $F^{*}$ can be modeled by \na chaotic iteration of $F$, though \nnot conversely. 
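On domains satisfying the finite chain property, $f^{*}$ can be computed by simply iterating $f$ until a fixpoint is reached. A minimal Python sketch follows; the function add_one is our own, purely illustrative example of an inflationary, monotonic but non-idempotent function.

```python
def star(f):
    """f*: iterate f until a fixpoint is reached.  This terminates whenever
    f is inflationary and the order has the finite chain property."""
    def f_star(x):
        while True:
            y = f(x)
            if y == x:
                return x
            x = y
    return f_star

# A hypothetical inflationary, monotonic, non-idempotent function on
# subsets of {0,1,2,3} ordered by inclusion: add one missing element.
UNIVERSE = frozenset({0, 1, 2, 3})

def add_one(s):
    missing = UNIVERSE - s
    return s | {min(missing)} if missing else s

add_one_star = star(add_one)
# add_one needs four applications to climb from the empty set to the top
# element; add_one_star gets there in one call and is idempotent.
```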
In fact, the use of $F^{*}$ instead of $F$ can lead to a\nmore limited number of chaotic iterations. This may mean that in \nsome specific algorithms\nsome more efficient chaotic iterations of $F$ cannot be\nrealized when using $F^{*}$.\nFor specific functions, for instance those \nstudied in Examples \\ref{exa:cuts} and \\ref{exa:lineq},\nthe computation by means of $F^{*}$ instead of $F$\nimposes a forced delay on the application of other reduction functions.\n\n\n\\subsection{Comparing Constraint Propagation Algorithms}\n\nThe {\\tt CI} Theorem \\ref{thm:CI} and its\nreformulations for the {\\tt CII, CIQ} and {\\tt CIIQ} algorithms allow\nus to establish equivalence between these algorithms. More precisely,\nthese results show that in case of termination all four algorithms\ncompute in the variable $d$ the same value.\n\nIn specific situations it is natural to consider various domain\nreduction or constraint reduction functions. When the adopted\npropagation algorithms are instances of the generic algorithms here\nstudied, we can use the Comparison Corollary \\ref{cor:chaotic2} to\ncompare their outcomes. By way of example\nconsider two instances of the {\\tt CII} algorithm: one in which for\nsome binary constraints the pair of the domain reduction functions\ndefined in Example \\ref{exa:arc} is used, and another in which for\nthese binary constraints the domain reduction function defined in\nExample \\ref{exa:arcn} is used.\n\nWe now prove that in case of termination both algorithms compute in\n$d$ the same value. Fix a binary constraint $C$ and adopt the notation\nof Example \\ref{exa:arc} and of Example \\ref{exa:arcn} used with\n$n=2$. 
Note that for ${\\bf X} \\in DO_{1,2}$\n\\begin{itemize}\n\\item $\\pi_C({\\bf X}) = \\pi_1 \\circ \\pi_2({\\bf X})$,\n\n\\item $\\pi_i({\\bf X}) \\supseteq \\pi_C({\\bf X})$ for $i \\in [1..2]$.\n\\end{itemize}\nClearly, both properties hold when each function $f \\in \\C{\\pi_C, \\pi_1,\n \\pi_2}$ is replaced by its canonic extension $f^+$ to the Cartesian\nproduct $DO$ of all domains ${\\cal P}(D_i)$. By the Stabilization\nCorollary \\ref{cor:chaotic}, Comparison Corollary \\ref{cor:chaotic2}\nand the counterpart of the {\\tt CI} Theorem \\ref{thm:CI} for the {\\tt\n CIIQ} algorithm we conclude that both algorithms compute in $d$ the\nsame value.\n\nAn analogous analysis for arbitrary constraints allows us to compare\nthe algorithm of Davis \\cite{davis87} discussed in\nSubsection \\ref{subsec:related} with that \ndefined in Example \\ref{exa:arccon}.\nWe can conclude that in case of termination both algorithms\nachieve arc-consistency for $n$-ary constraints.\n\n\n\\subsection{Assessment and Future Work}\n\nIn this paper we showed that several constraint propagation\nalgorithms can be explained as simple instances of the chaotic iteration \nalgorithms. Such a generic presentation also provides a framework for\ngenerating new constraint propagation algorithms that can be \ntailored for specific application domains.\nCorrectness of these constraint propagation\nalgorithms does not have to be reproved each time anew.\n\nIt is unrealistic, however, to expect that all\nconstraint propagation algorithms presented in the literature can be\nexpressed as direct instances of the generic algorithms here considered.\nThe reason is that for some specific reduction functions\nadditional properties can be exploited.\n\nAn example is perhaps the best known algorithm, the {\\tt AC-3}\narc-consistency algorithm of Mackworth \\cite{mackworth-consistency}. 
We\nfound that its correctness relies in a subtle way on a commutativity\nproperty of the projection functions discussed in Example\n\\ref{exa:arc}. This can be explained by means of a generic algorithm\nonly once one uses the information about which function was applied last.\n\nAnother issue is that some algorithms, for example the {\\tt AC-4}\nalgorithm of Mohr and Henderson \\cite{MH86} and the {\\tt GAC-4} algorithm of\nMohr and Masini \\cite{MM88}, associate with each domain element some information\nconcerning its links with the elements of other domains. As a result\nthese algorithms operate on some ``enhancement'' of the original\ndomains. To reason about these algorithms one has to relate the\noriginal CSP to a CSP defined on the enhanced domains.\n\nIn an article under preparation we plan to discuss the\nrefinements of the general framework here presented that allow us\nto prove correctness of such algorithms in a generic way.\n\n\\section*{Acknowledgements}\nThis work was prompted by our study of the first version of\nvan Emden \\cite{Emd97}. Rina Dechter helped us to clarify (most of) our\ninitial confusion about constraint propagation. Discussions with Eric\nMonfroy helped us to better articulate various points put forward\nhere. Nissim Francez, Dmitry Ushakov and both anonymous referees\nprovided us with helpful comments on previous versions of this paper.\n\n\n\\bibliographystyle{plain}\n\n