diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzcicp" "b/data_all_eng_slimpj/shuffled/split2/finalzzcicp" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzcicp" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\IEEEPARstart{F}{uzzy} logic addresses reasoning about vague perceptions of true or false \\cite{zadeh1965fuzzy,zadeh1968probability,zadeh1988fuzzy}. Moreover, it represents very well our linguistic description about everyday dynamic processes \\cite{zadeh1976fuzzy, zadeh1975conceptI, zadeh1975conceptII, zadeh1975conceptIII, zadeh1996fuzzy,zadeh1983computational, efstathiou1979multiattribute}. Thus, the introduction of tense operators becomes a necessity \\cite{chajda2015tense, thiele1993fuzzy, moon2004fuzzy, cardenas2006sound, mukherjee2013fuzzy} because they are useful to represent the temporal indeterminism contained in some propositions \\cite{prior2003time,macfarlane2003future}. For example, the hypothesis ``The weather is very hot'' has its sense of truth revealed when we say ``The weather will always be very hot''. The quantifier ``always'' communicates the belief about hypothesis confirmation in the future.\n\nThe task of developing a general theory for decision-making in a fuzzy environment was started in \\cite{bellman1970decision}, but a temporal approach still needs to be discussed.\nFor example, do you prefer to receive 7 dollars today or to receive it 5 years later?\nIf you prefer today, then you made a distinction of the same value at distant points from time, even though you have used vague reasoning about value changes because we do not access our equity value in memory to perform calculations. \n\nIn decision-making, we do not only evaluate fuzzy goals, but also the time. The change caused by 7 dollars is strongly distinguishable between a short time and a distant future. This suggests the existence of temporal arguments in fuzzy logic, where ``little change in wealth today'' is different from ``little change in wealth 5 years later''. The linguistic value ``little'' is irrelevant in this problem, but the arguments ``today'' and ``5 years later'' are decisive in the judgment.\n\n\n\nThe proposal here is to connect different fuzzy sets through time attenuators and intensifiers. Hence, it is possible to simulate two figures of thought in hypotheses of dynamic systems: meiosis to argue underestimated changes and hyperbole to argue overestimated changes.\n\n\nThrough meiosis it is possible to formulate a concave expected change curve because it argues minor changes for maximizing the sense of truth. By this figure of thought within the fuzzy temporal logic it is noticeable that the hyperbolic discounting \\cite{benzion1989discount, chapman1996temporal, chapman1995valuing, pender1996discount, redelmeier1993time, frederick2002time} and its subadditivity \\cite{read2001time, read2003subadditive}, despite its numerous expected utility theory anomalies \\cite{loewenstein1992anomalies,ainslie2016cardinal,loewenstein1989anomalies}, is an intuitive dynamic evaluation of wealth, where ergodicity is relevant in the problem \\cite{peters2016evaluating, peters2011time}. \nSimilarly, concave expected change curve in lotteries \\cite{bernoulli2011exposition} make certain outcomes more attractive than lotteries with uncertain outcomes.\n\nHyperbole has an inverse reasoning to meiosis and it produces a convex expected change curve. 
Similarly, risky losses in the Prospect Theory have the same characteristic in its subjective value function \\cite{tversky1986rational,tversky1992advances, kahneman2013prospect}. Then, it is shown here that the risk seeking, which is a preference for risky losses rather than certain losses \\cite{kahneman2013choices}, can be described in fuzzy environment by an imprecise perception for small losses. \nThus, the indistinguishability between small negative changes leads to preference for hopes when only these are perfectly distinguishable.\nOn the other hand, when the losses are high, the risk seeking disappears and a kind of ruin aversion prevails \\cite{taleb2018skin}, where it is better to lose a lot and stay with little than risk losing all means of survival after the lottery. \nIn addition, the loss aversion behavior, where people prefer not to lose a given amount than to win the same amount \\cite{kahneman2013prospect}, is interpreted by a disjunction between gains and losses hypotheses leading to the conclusion that such behavior is also amount dependent. \n\n\nIn essence, all the behaviors analyzed here are speculation examples in dynamic systems, where we evaluate hypotheses and commit to outcomes before they emerge.\nThis paper shows, by modeling the Time Preference and the Prospect Theory, that the fuzzy temporal logic allows to construct a rhetoric for intertemporal and probabilistic choices in fuzzy environment. The first problem takes us to focus on values and time and the second focuses on values and probabilities. However, if the future is uncertain, there is no reason for time and uncertainty are studied in different matters \\cite{lu2010many}. \nIn addition, the feelings about judgments are amount dependents where the fuzziness can be decisive in some situations. Therefore, time, uncertainty and fuzziness are concepts that can be properly studied by the fuzzy temporal logic in order to elaborate the decision-making rhetoric.\n\n\n\\section{Theoretical model}\nThis section provides a theoretical framework for the reader to understand as the figures of thought, meiosis and hyperbole, can be elaborated in the fuzzy temporal logic. In short, it is discussed the need of many-valued temporal logic, the existence of different temporal prepositions with similar goals over time and, finally, it is shown as to perform the rhetorical development to make a judgment between two different hypotheses about the future. \n\n\\subsection{Temporal and many-valued logic}\n\\label{TIL}\n Usually we make decisions about dynamic processes whose states are unknown in the future. An amusing example can be found in an Aesop's fable, where the Grasshopper and the Ant have different outlooks for an intertemporal decision dilemma \\cite{perry1965babrius}. In short, while the Ant is helping to lay up food for the winter, the Grasshopper is enjoying the summer without worrying about the food stock. \n \nThe narrative teaches about hard work, collaboration, and planning by presenting temporal propositions. These propositions have invariant meaning in time, but sometimes they are true and sometimes false, yet never simultaneously true and false \\cite{ohrstrom2007temporal}. This property can be noted in the statement:\n\\[\n\\begin{array}{l}\nD_1 = \\text{``We have got plenty of food at present'' \\cite{perry1965babrius}.} \n\\end{array}\n\\]\nAlthough $D_1$ has constant meaning over time, its logical value is not constant. 
According to the fable, this statement is true in the summer, but it is false in the winter, hence there is a need to stock food.\n \n \nIf the logical value varies in time according to random circumstances, such as the weather of the seasons, how can we make inferences about the future? There seems to be a natural uncertainty that sets a haze over the vision of what will happen. For instance, consider the following statements about the Grasshopper: \n\\[\\centering\n\\begin{array}{rl}\nD_2 = \\text{``The Grasshopper stored a lot of food }\\\\ \n \\text{during the summer'';}\\\\\n D_3 = \\text{``The Grasshopper stores food for winter''.}\n\\end{array}\n\\] \nAccording to the fable, we know that the statement $D_2$ is false, \nbut at the fable end, when the winter came, \nthe Grasshopper says ``It is best to prepare for days of need'' \\cite{perry1965babrius}. In this way, the truth value in $D_3$ is ambiguous. We can not say with certainty that it is false, but \nwe do not know how true it can be. Thus, we can only propose hypotheses to speculate the Grasshopper's behavior.\n\n\n A hypothesis is a proposition (or a group of propositions) provisionally anticipated as an explanation of facts, behaviors, or natural phenomena that must be later verified by deduction or experience. They should always be spoken or written in the present tense because they are referring to the research being conducted. In the scientific method, this is done independently whether they are true or false and, depending on rigor, no logical value is attributed to them. However, in everyday language this rigor does not exist. Hypotheses are guesses that argue for the decision-making before the facts are verified, so they have some belief degree about the logical value that is a contingency sense performing a sound practical judgment concerning future events, actions or whatever is at stake.\n\n \n\nSince there is no rigor in everyday speculations, then different propositions may arise about the same fact. If they are analyzed by binary logic, then we may have unnecessary redundancy that can lead to contradictions.\n However, the redundancy of propositions is not a problem within the speculation process, what suggests a many-valued temporal logic.\n \n We can discuss the limitations of a binary temporal logic by speculating the Grasshopper's behavior. For example, is it possible to guarantee that the two hypotheses below are completely true simultaneously?\n\\[\n\\begin{array}{l}\n\\Theta = \\text{``The Grasshopper stores a lot of food'';}\\\\\n\\theta = \\text{``The Grasshopper stores little food''.}\n\\end{array}\n\\]\nThe hypotheses $\\Theta$ and $\\theta$ propose changes to different states. If $S_0$ is the initial state for stored food, the new state after storing a lot of food $M$ is $S_{\\Theta}=S_0+M$. Analogously, the next state after storing little food is \n$S_{\\theta}=S_0+m$ for $m S_\\theta. \\]\nTherefore, affirming both as true leads to a contradiction because the same subject can not simultaneously produce two different results on the same object. However, in a speculation of the Grasshopper's behavior, none of the propositions can be discarded.\n\n\n\n\n\nEvaluating by fuzzy logic, the linguistic variable ``stored food'' has values ``a lot of'' and ``little'' in the propositions $\\Theta$ and $\\theta$. According to Bellman and Zadeh's model \\cite{bellman1970decision}, these linguistic values are the fuzzy constraints for inputs $M$ and $m$ about the food supply. 
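As a toy illustration only (the trapezoidal shapes and breakpoints below are assumptions made for this example, not taken from the fable or from \\cite{bellman1970decision}), such linguistic values can be encoded as membership functions over the amount of stored food:
\\begin{verbatim}
# Hypothetical membership functions for the linguistic values of the
# variable "stored food"; the breakpoints are illustrative only.

def mu_little(amount, full=2.0, zero=10.0):
    """Degree to which 'amount' counts as 'little' stored food."""
    if amount <= full:
        return 1.0
    if amount >= zero:
        return 0.0
    return (zero - amount) / (zero - full)   # linear decay in between

def mu_a_lot(amount, zero=2.0, full=10.0):
    """Degree to which 'amount' counts as 'a lot of' stored food."""
    if amount <= zero:
        return 0.0
    if amount >= full:
        return 1.0
    return (amount - zero) / (full - zero)   # linear growth in between

if __name__ == "__main__":
    for m in (1.0, 5.0, 12.0):
        print(m, round(mu_little(m), 2), round(mu_a_lot(m), 2))
\\end{verbatim}
Any other reasonable shapes would serve equally well; the point is only that ``little'' and ``a lot of'' admit degrees rather than a sharp boundary.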
Meanwhile, the target states $S_\\theta$ and $S_\\Theta$ can be the fuzzy goals.\n\nAn alternative development, but similar, can be done through changes, what is in accordance with human psychophysical reality \\cite{kahneman2013choices}. Thus, in this paper, the goals are the factors $ S_{\\Theta}\/S_0= 1+X$ and $S_{\\theta}\/S_0= 1+x$, where $X=M\/S_0$ and $x=m\/S_0$ are changes. For example, in \\cite{mukherjee2017loss} the respondents were asked how they would feel gaining (or losing) a certain amount. It was noted that the emotional effects gradually increased as the amounts had grown. Therefore, the intensity of gains and losses are easily ordered by our perception, $x\\text{now}$. \n\n\n\n\n\nAbout the certain sense, $F\\Theta$ is known as the weak operator because $\\Theta$ is true only once in the future, while $G\\theta$ is known as the strong operator because it is true in all future periods.\nIf $\\Theta$ does not always come true, then we can investigate its sense of truth through the affirmative: \n\\[\n\\begin{array}{rl}\nGF\\Theta = \\text{``The Grasshopper will frequently store}\\\\\n\\text{ a lot of food''.}\\\\\n\\end{array}\n\\]\nWhere the quantifier ``frequently'' better argues for the sense of truth of $\\Theta$ because we have undefined repetitions in the same period in which $G\\theta$ is true.\n\n \n\nThe frequency in which the propositions $\\Theta$ and $\\theta$ are true and the changes proposed by them determine the outcomes over time. \nThe affirmative $GF\\Theta$ communicates that the Grasshopper will frequently produce a strong change in the stored food stock (change factor $1+X$). On the other hand, $G\\theta$ proposes a small change factor, $1+x$, but continuously over time. So what is the relation between $X$ and $x$ that generates a similarity of states between the two hypotheses over time? The relation that constructs this similarity can be obtained by the time average. Therefore, let us consider $\\tau(t)$ as the total time where $GF\\Theta$ is true and $t$ as the observation time. If $\\Theta$ is true with a frequency given by\n\\begin{equation}\n\\lim\\limits_{t\\to \\infty} \\frac{\\tau(t)}{t}=s,\n\\end{equation} \nthen, the relation between change factors $1+x =(1+X)^s$, estimated by time average \\cite{peters2016evaluating}, indicates that the sentences $GF\\Theta$ and $G\\theta$ have similar goals in the long run. This similarity is denoted in this work by\n\n\\begin{equation}\nGF\\Theta \\sim G {\\theta}.\n\\end{equation} \n\n\nThe sense of truth for the sentence $GF\\Theta$ is quantified in the parameter $s$. It can be a stationary probability when\n $t$ is big enough, but we do not have much time to calculate this probability in practice.\nIn this way, we assume that the sense of truth is an imprecise suggestion (intuition) for the time probability.\nFigure \\ref{fig1} presents some adverbs of frequency that can suggest the sense of truth in future sentences.\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[scale=0.8]{Gfrequency}\n\t\t\\caption{Adverbs of frequency that can suggest the sense of truth of $GF$. The quantifier ``always'' indicates certainty, while ``never'' indicates impossibility.}\t\n\t\\label{fig1}\n\\end{figure}\n\n\n\n\nThe axiomatic system of temporal logic proposes that \n $G {\\theta} \\Rightarrow GN {\\theta}$, where $N\\theta$ stands for $\\theta$ is true at the next instant. 
Therefore, the similarity $GF\\Theta \\sim G {\\theta}$ can be written by\n\\begin{equation}\n F\\Theta \\sim N {\\theta},\n\\end{equation}\nwhen $1+x \\approx (1+X)^s$. \nThus, the statement ``the Grasshopper will sometime store a lot of food'', which have a change factor $1+X$, is similar to the statement ``the Grasshopper will store little food at the next moment'', which has a change factor $(1+X)^s$.\n\n\n\n\\subsection{Rhetoric: Meiosis and Hyperbole}\n\\label{MH}\nIn general, different hypotheses do not have similar changes in the future. For this reason, it is required that a rhetorical development make a judgment between them. In this section, two figures of thought are presented, meiosis and hyperbole, which can be used to compare hypotheses with different changes and senses of truth. \n\nStill using Aesop's fable, imagine that we want to compare Ant and Grasshopper's performance with the following hypotheses:\n\\[\n\\begin{array}{l}\n\\phi = \\text{``The Ant stores little food'';}\\\\\n\\Theta = \\text{``The Grasshopper stores a lot of food''.}\\\\\n\\end{array}\n\\]\nAssume that the Ant produces a change $y$ in its food stock while the Grasshopper produces a change $X$. If we think that the Ant is more diligent in its work than the Grasshopper, i.e., the sense of truth for $\\phi$ is maximum while the sense of truth for $\\Theta$ is ambiguous, so we can affirm $N\\phi$ and $F\\Theta$ in order to develop the following argumentation process:\n\\begin{enumerate}\n\\item elaborate a proposition $\\theta$, similar to $\\Theta$, which proposes a lower outcome, that has a change $x$ (for example, $\\theta $ = ``The Grasshopper stores little food'');\n\\item express $\\theta$ with maximum certainty in the future, $N\\theta$ = ``The Grasshopper will store little food at the next moment'';\n\\item calculate the relation $1+x \\approx (1+X)^s$ to match the average changes between $N\\theta$ and $F\\Theta$ in order to obtain the similarity $F\\Theta \\sim N {\\theta}$, where $X>x$ and $s\\in [0,1]$;\n\\item finally, judge the affirmations $N\\phi$ and $N\\theta$ through fuzzy logic. In this specific problem we have\n\\[N\\theta \\text{ or } N\\phi = \\max\\left\\{\\mu \\left((1+X)^s-1\\right),\\mu(y)\\right\\}.\\]\n\\end{enumerate}\n\n\nThe above argument uses meiosis and the upper part of Figure \\ref{figMH} summarizes this procedure. \nIn linguistic meiosis, the meaning of something is reduced to simultaneously increase something else in its place.\nIn the above mentioned case, proposing $\\theta$ means reducing the stored food change by the Grasshopper.\nAt the same time, this suggests greater certainty because it makes the process more feasible in the future.\nHowever, it is only a figure of thought to make an easy comparison because judging two sentences with the same sense of truth, looking only at the change goals, is much simpler.\n\n The meiosis for Grasshopper's case has a membership composite function given by\n\\[ \\mu_{\\text{ Grasshopper's goal}}= \\mu \\circ \\mu_{\\text{meiotic change}}(X)=\\mu \\left((1+X)^s-1\\right).\\] \nIn general, $\\mu_{\\text{ Grasshopper's goal}}$ refers to the fuzzy goal ``the bigger the better is the change $(1+X)^s-1$ at the next moment''. Like this, we can evaluate it for decision-making in real time. 
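As an illustrative sketch of the meiotic steps 1)--4) above (the membership function $\\mu$ and the numerical values below are hypothetical choices, not part of the formal development), the judgment can be coded as:
\\begin{verbatim}
# Sketch of the meiotic judgment: attenuate the Grasshopper's change X by its
# sense of truth s, then compare both certain changes with a common
# "the bigger the better" membership mu (mu itself is a hypothetical choice).

def meiotic_change(X, s):
    """Change argued by the similar proposition N-theta: (1+X)^s - 1."""
    return (1.0 + X) ** s - 1.0

def mu(change, saturation=1.0):
    """Hypothetical fuzzy goal 'the bigger the better', saturating at 'saturation'."""
    return max(0.0, min(1.0, change / saturation))

def judge(X, s, y):
    """Return which certain statement wins: N-theta (Grasshopper) or N-phi (Ant)."""
    grasshopper = mu(meiotic_change(X, s))   # mu((1+X)^s - 1)
    ant = mu(y)                              # mu(y), already affirmed with certainty
    return ("N-theta", grasshopper) if grasshopper >= ant else ("N-phi", ant)

if __name__ == "__main__":
    # Hypothetical numbers: the Grasshopper argues X = 1.0 (100%) with sense of
    # truth s = 0.3, while the Ant argues a certain change y = 0.25 (25%).
    print(judge(X=1.0, s=0.3, y=0.25))
\\end{verbatim}
With these hypothetical numbers the Ant's certain change wins the judgment, but a larger $X$ or a higher sense of truth $s$ reverses it.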
\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\begin{tikzpicture}[node distance=2cm and 1cm,>=stealth',auto, every place\/.style={draw}]\n \n \\node [state,initial text=,accepting by double] (1) [] {$\\begin{array}{l}\n F\\Theta \\\\\n \\\\\n{\\color{blue} F\\Phi}\n\\end{array}$}; \n \\node [state,initial text=,accepting by double] (0) [right=of 1] {$\\begin{array}{l}\n{\\color{blue} N\\theta} \\\\\n\\\\\n N\\phi\n\\end{array}$};\n \\path[->] (1) edge [bend left] node {$\\underbrace{(1+X)^s}_{\\text{Meiosis}}$} (0);\n \\path[->] (0) edge [bend left] node {$\\overbrace{(1+y)^\\frac{1}{s}}^{\\text{Hyperbole}}$} (1); \n\\end{tikzpicture}\n\t\t\\caption{Diagram representing the meiosis and hyperbole procedure for the judgment of hypotheses. The blue sentences $F\\Phi$ and $N\\theta$ are the figures of thought.}\t \n\t\\label{figMH}\n\\end{figure}\n\n\n \n\n\n\n\nOn the other hand, there is an inverse process to meiosis that is called hyperbole. Basically, it exaggerates a change in the outcome to reduce its sense of truth. In everyday language, such statements are commonplace, as ``the bag weighed a ton''.\nIn this statement, we realize that the bag is really heavy. However, is there a bag weighing really a ton? This is just a figure of speech.\n\n\nIn order to understand how to judge two hypotheses through a process of hyperbolic argumentation, consider again that we can make the future statements $F\\Theta$ and $N\\phi$ pass through the following steps:\n\\begin{enumerate}\n\\item elaborate a proposition $\\Phi$, similar to $\\phi$, which proposes a larger outcome, that has a change $Y$ (for instance, $\\Phi $ = ``The Ant stores a lot of food'');\n\\item affirm $\\Phi$ in the future with the same sense of truth as the proposition $\\Theta$, that is, $F\\Phi$ = ``The Ant will sometime store a lot of food'';\n\\item calculate the relation $(1+y)^{\\frac{1}{s}}\\approx 1+Y$ to match the average changes between $N\\phi$ and $F\\Phi$ in order to obtain the similarity $N\\phi \\sim F\\Phi$, where $Y>y$ and $s\\in [0,1]$; \n\\item finally, judge the fuzzy changes goals of the affirmative $F\\Phi$ and $F\\Theta$. In this specific problem we have\n\\[F\\Theta \\text{ or } F\\Phi = \\max\\left\\{\\mu(X), \\mu\\left((1+y)^\\frac{1}{s}-1\\right)\\right\\}.\\]\n\\end{enumerate}\n\n\n\nThe hyperbole for Ant's case has a membership composite function given by\n\\[ \\mu_{\\text{ Ant's goal}}= \\mu \\circ \\mu_{\\text{hyperbolic change}}(y)=\\mu \\left((1+y)^\\frac{1}{s} -1\\right).\\] \nThus, $\\mu_{\\text{ Ant's goal}}$ refers to the fuzzy goal ``the bigger the better is the change $ (1+y)^\\frac{1}{s} -1 $ sometime in the future''. \n\n\n The bottom part of Figure \\ref{figMH} summarizes hyperbolic argumentation procedure. Note that the arguments by meiosis and hyperbole lead to the same conclusion. Therefore, they can be only two ways to solve the same problem. However, in section \\ref{PT}, where the Prospect Theory is evaluated, there may be preference for one of the methods according to the frame in which the hypotheses are inserted.\n\n\n\n\n\\section{Time Preference}\n\\label{PTemp}\n\nTime preference is the valuation placed on receiving a good on a short date compared with receiving it on a farther date. 
A typical situation is choosing to receive a monetary amount $m$ after a brief period (a day or an hour) or to receive $M>m$ in a distant time (after some months or years).\n\n\nThe time preference choice is a problem of logic about the future and in order to model it consider the following hypotheses:\n\n\\begin{itemize}\n\\item $\\Theta_m =$ ``to receive $m$'' represents the receipt of the amount $m$ in short period, $t_m=t_0+\\delta t$;\n\\item $\\Theta_M =$ ``to receive $M$'' represents the receipt of the amount $M$ in longer time horizons, $t_M=t_0+\\Delta t$. \n\\end{itemize}\nEach proposition has a change factor for the individual's wealth. The proposition $\\Theta_M$ has the change factor $(1+M\/W_0)$, while $\\Theta_m$ has the change factor $(1+m\/W_0)$. Now, we perform the meiosis procedure for both hypotheses, reducing the changes and maximizing the sense of truth:\n\\begin{itemize}\n\\item $N\\theta_m =$ ``to receive an amount less than $m$ at the next moment''. This affirmative proposes a change factor $1+x_m$ in the individual's wealth;\n\\item $N\\theta_M =$ ``to receive an amount less than $M$ at the next moment''. Similarly, this affirmative proposes a change factor $1+x_M$.\n\\end{itemize}\n\nThe senses of truth for the hypotheses $\\Theta_M$ and $\\Theta_m$ are revealed when they are affirmed in the future. Therefore, we have the following similarities:\n\\begin{itemize}\n\\item $F\\Theta_M \\sim N\\theta_M$, if $(1+x_M)\\approx \\left(1+\\frac{M}{W_0}\\right)^{s_M}$ ;\n\\item $F\\Theta_m \\sim N\\theta_m$, if $(1+x_m)\\approx \\left(1+\\frac{m}{W_0}\\right)^{s_m}$ .\n\\end{itemize}\nWhere $s_M$ and $s_m$ are the senses of truth regarding the receipt of the values $M$ and $m$.\nIn this problem, they cannot be time probabilities since the probabilistic investigation in this case is not convenient. However, intuitions about the realization of the hypotheses $\\Theta_M$ and $\\Theta_m$ are feasible for individuals and they should be represented.\n\nNow suppose, without loss of generality, that $M$ is large enough so that the individual prefers to receive it in the distant future. If we consider $n=\\Delta t\/\\delta t$ periods in which $n$ attempts to receive $m$ until $M$'s receipt date are allowed, then we have\n\\begin{eqnarray}\n\\nonumber (1+x_M) &>& (1+x_m)^n \\\\\n\\Rightarrow\\left(1+\\frac{M}{W_0} \\right)^{s_M} &>& \\left(1+\\frac{m}{W_0} \\right)^{ns_m}.\n\\label{BhomBawerk}\n\\end{eqnarray}\nJudging by fuzzy logic the two hyprothesis in the future, the ``or'' operation between change goals is indicated to finalize the meiosis procedure, \n\\begin{eqnarray}\n\\nonumber \\mu(x_M) &=& \\max \\left\\{\\mu(x_M), \\mu \\left( (1+x_m)^n-1 \\right)\\right\\}\\\\\n&=& \\mu\\left(\\left(1+\\frac{M}{W_0} \\right)^{s_M}-1\\right).\n\\end{eqnarray}\n\nIn general the time preference solution is presented through a discount function. In order to use this strategy it is necessary to develop the same form on both sides of the inequality \\ref{BhomBawerk}. 
For this, there is a value $\\kappa$ such that $\\kappa s_M > m$, where we can write\n\\begin{equation}\n\\left(1+\\frac{M}{W_0} \\right)^{s_M} = \\left(1+\\frac{\\kappa {s_M}}{W_0} \\right)^{ns_m}>\\left(1+\\frac{m}{W_0} \\right)^{ns_m}.\n\\label{Passagem}\n\\end{equation}\nThe discount function undoes exactly the change caused by the proposition $\\Theta_M$, that is,\n\\begin{equation}\n \\frac{1}{{\\left(1+\\frac{M}{W_0} \\right)}} =\\left(1+\\frac{\\kappa s_M}{W_0} \\right)^{-\\frac{s_m}{s_M}n} .\n \\label{DescontoHiperbolico}\n\\end{equation}\n\n\nEquation \\ref{DescontoHiperbolico} describes the hyperbolic discount, the most well documented empirical observation of discounted utility \\cite{frederick2002time}. When mathematical functions are explicitly fitted to experiment data, a hyperbolic shape fits the data better than the exponential form \\cite{kirby1997bidding, kirby1995modeling, myerson1995discounting,rachlin1991subjective}. Among the functions proposed for the adjustment of experimental data, the discount function proposed by Benhabib, Bisin and Schotter \\cite{benhabib2004hyperbolic}\n\\[e_h^{-\\rho n}\\equiv \\left(1-h \\rho n \\right)^{\\frac{1}{h}}\\]\nallows for greater flexibility of fit for exponential, hyperbolic, and quasi-hyperbolic discounting \\cite{laibson1997golden} (see Figure \\ref{figDiscounting}). In order to obtain it, we must reparametrize equation \\ref{DescontoHiperbolico} by doing \n\\begin{eqnarray}\n\\frac{1}{h} &=& -\\frac{s_m}{s_M}n, \\label{eqh} \\\\\n\\rho &=& \\frac{\\kappa s_m}{W_0}.\n\\end{eqnarray}\n\nThe parameter $h$ denotes hyperbolicity and it gives the curve shape over time. For instance, $e_h^{-\\rho x}$ equals the exponential function $e^{-\\rho x}$ when $h \\to 0^-$. This means that there is plenty of time for possible trials with higher sense of truth until the date of the great reward. On the other hand, $h\\ll 0$ indicates time shortage for trials. In theses cases, only the first periods have strong declines in the discount function. In short, equation \\ref{eqh} shows that the senses of truth and the time between rewards determinate the value of $h$.\n\n\n Furthermore, the discount rate $\\rho$ quantifies the preference for goods and it is influenced by individual states of scarcity and abundance of goods. For instance, let us consider an individual called Bob. If an object is scarce for him (small $W_0$), then he places a higher preference (great $\\rho$). Analogously, if $W_0$ represents an abundance state for him, he has a lower preference (small $\\rho$). This may cause great variability in experiments because the wealth distribution follows the power law \\cite{levy1997new,druagulescu2001exponential,sinha2006evidence,klass2006forbes}. This means that $W_0$ can vary abruptly from one individual to another in the same intertemporal arbitrage experiment. \n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[scale=0.65]{Discount}\n\t\t\\caption{Discount function $e_h^{-\\rho n}$ versus number of delayed periods: the dashed black curve is the exponential discounting for $\\rho = 0.005$; the $\\circ$-blue curve is quasi-hyperbolic discounting for $\\rho = 0.7$ and $h = -3$; \nand the $\\ast$-red and cyan curves are hyperbolic discounting for $\\rho_1 = 0.0175$ and $h_1 = -3$, and $\\rho_2 = 0.05$ and $h_2 = -5$, respectively. 
}\t\n\t\\label{figDiscounting}\n\\end{figure} \n\n\\subsection{Discussion about time preference behaviors}\n\n\nDaniel Read pointed out that common evidence in time preference is ``subadditive discounting'', in other words, the discounting over a delay is greater when the delay is divided into subintervals than when it is left undivided \\cite{read2001time}. For example, in \\cite{takahashi2006time,takahashi2008psychophysics} it has been argued that abstinent drug addicts may more readily relapse into addiction if an abstinence period is presented as a series of shorter periods, rather than as an undivided long period. This property is present in the function $e_h^{-x}$, because if $h<0$, then\n\\[e_h^{-x} \\; e_h^{-y}=e_h^{-x-y+h xy} < e_h^{-x-y} \\quad \\text{for } x,y>0.\\]\n\nThe hyperbolicity, due to intertemporal arbitrage, is always negative, $\\frac{1}{h} = -\\frac{s_m}{s_M}n$. Therefore, subadditivity is a mandatory property of time preference for positive rewards. \n\n\n\nHowever, the main experimentally observed behavior, hereinafter referred to as the ``time effect'', is that the discount rate varies inversely with the length of the waiting period. Here, we can verify this effect as a consequence of the subadditivity found in the function $e_h^{-\\rho n}$. After all, when we want to find the discount rate implied by a given discount $D$ after $n$ periods, we calculate $-\\frac{1}{n}\\ln D $. For example, for an exponential discounting $e^{-\\rho n}$, we recover $\\rho= -\\frac{1}{n}\\ln e^{-\\rho n} $. Thus, using the subadditive property we can develop \n\\begin{eqnarray}\n\\nonumber \\left(e_h^{-\\rho}\\right)^n\\leq e_h^{-\\rho n} & \\Rightarrow & \n-\\frac{1}{n}\\ln \\left(e_h^{-\\rho}\\right)^n\\geq -\\frac{1}{n}\\ln e_h^{-\\rho n} \\\\\n\\nonumber & \\Rightarrow & \\left\\langle -\\ln e_h^{-\\rho} \\right\\rangle \\geq \\left\\langle -\\frac{1}{n}\\ln e_h^{-\\rho n} \\right\\rangle .\n\\end{eqnarray}\nTherefore, if $h$ does not tend to zero from the left side, then the average discount rate over shorter time horizons is higher than the average discount rate over longer time horizons. \nFor example, in \\cite{thaler1981some} respondents were asked how much money they would require to make waiting one month, one year and ten years just as attractive as getting \\$ 250 now. The median responses (US \\$ 300, US \\$ 400 and US \\$ 1000) imply average (annual) discount rates of 219\\% over one month, 120\\% over one year and 19\\% over ten years. Other experiments presented a similar pattern \\cite{benzion1989discount, chapman1996temporal, chapman1995valuing, redelmeier1993time, pender1996discount}. Therefore, the time effect is a consequence of subadditivity when there is not plenty of time for trials at one of the hypotheses. \n\n\n\n\nA second behavior, referred to as the ``magnitude effect'', is also a consequence of subadditivity. The reason is that the magnitude and time effects are mathematically similar, because the discount rate is $\\rho=\\kappa s_m\/W_0$ and $ \\kappa $ grows for large values of $ M $ (see equation \\ref{Passagem}). In order to understand the similarity, note that the function $e_h^{-\\rho n}$ varies with $ n \\rho $. If we fix the value of $ n $, for example $ n = 1 $, and vary only the rate $ \\rho = r \\rho_0 $, where $ \\rho_0 $ is constant and $ r>1 $ is a multiplier that grows with $ M $, then the function $e_h^{-r\\rho_0}$ is analogous to $e_h^{-\\rho n}$. 
Therefore, the magnitude effect made by $r$, similarly to the time effect, results in\n\\[\\left\\langle -\\ln e_h^{-\\rho_0} \\right\\rangle \\geq \\left\\langle -\\frac{1}{r}\\ln e_h^{-\\rho_0 r} \\right\\rangle .\\]\n\nFor example, in Thaler's investigation \\cite{thaler1981some}, the respondents preferred, on average, \\$ 30 in 3 months rather than \\$ 15 now, \\$ 300 in 3 months rather than \\$ 250 now, and \\$ 3500 in 3 months rather than \\$ 3000 now, where the discount rates are 277\\%, 73\\% and 62\\%, respectively. Other experiments have found similar results \\cite{ainslie1983motives, benzion1989discount,green1994temporal,holcomb1992another,kirby1997bidding, kirby1995modeling, kirby1999heroin, loewenstein1987anticipation,raineri1993effect,shelley1993outcome, green1997rate}.\n\n\nAnother experimentally observed behavior is the ``preference reversal''. Initially, when individuals are asked to choose between one apple today and two apples tomorrow, then they may be tempted to prefer only one apple today. However, when the same options are long delayed, for example, choosing between one apple in one year and two apples in one year plus one day, then to add one day to receive two apples becomes acceptable \\cite{thaler1981some}. \n\nThe preference reversal shows how we evaluate value and time in the same hypothesis. For example, if $M_1 \\left( 1+\\frac{M_2}{W_0}\\right)^{s_2}.\n\\label{Twice}\n\\end{equation}\n\n\nOn the other hand, when one has to choose between the hypotheses $ H_1=$``to receive $M_1$ in $n$ days'' and $ H_2=$ ``to receive $M_2$ in $n+1$ days'', then a similar judgment can be made by evaluating the proposed action execution over and over again over time. Since the waiting time to receive $ M_1 $ is shorter, then we can realize that the number of attempts to receive $M_1$ will be greater in the future ($(n+1)\/n$ trials to receive $M_1$ for each trial to receive $M_2$). By fuzzy temporal logic, the choice between $ H_1$ and $ H_2$ depends on the following result: \n\\[ \\max \\left\\{\\left( 1+\\frac{M_1}{W_0}\\right)^{\\frac{n+1}{n} s_1},\\left( 1+\\frac{M_2}{W_0}\\right)^{s_2}\\right\\}.\\]\nWhen $n=1$, then it will be preferable to receive the reward $M_1$ (see equation \\ref{Twice}). This can also happen to other small values of $n$, for example, $n$ equals 2 or 3. \nHowever, when $n$ is large enough, the relation $(n+1)\/n$ tends to 1 and makes $M_2$ a preferable reward (see equation \\ref{today}). Thus, the preference between the rewards are reversed when they are shifted in time. In similar experiments, this behavior can be observed in humans \\cite{kirby1995preference,green1994temporal,millar1984self,solnick1980experimental} and in pigeons \\cite{ainslie1981preference,green1981preference}.\n\n\n\nHence, the time effect and magnitude effect on the discount rates, preference reversal and subadditivity are strong empirical evidences for the application of fuzzy temporal logic in intertemporal choices. \n\n\n\n\n\n\n \n\n\\section{Lotteries}\n\\label{PT}\nIn a more realistic human behavior descriptive analysis with the psychophysics, the subjective value of lottery must be related to the wealth change \\cite{kahneman2013choices}. However, changes after lotteries have incomplete information because we avoid calculating them using equity values, premiums and probabilities. Therefore, fuzzy sets are good candidates for representing these hypothetical changes. 
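As a toy illustration of this point (the triangular shape and its breakpoints are assumptions, not taken from the cited experiments), a vaguely perceived change can be represented by a fuzzy number over the relative change $x=M\/W_0$ rather than by a single computed ratio:
\\begin{verbatim}
# A hypothetical change "roughly a 10% gain" represented as a triangular fuzzy
# number over the relative change x = M / W_0 (shape parameters are illustrative).

def triangular(x, low, mode, high):
    """Membership of x in a triangular fuzzy number (low, mode, high)."""
    if x <= low or x >= high:
        return 0.0
    if x <= mode:
        return (x - low) / (mode - low)
    return (high - x) / (high - mode)

if __name__ == "__main__":
    roughly_ten_percent = lambda x: triangular(x, 0.05, 0.10, 0.20)
    for x in (0.04, 0.08, 0.10, 0.15, 0.25):
        print(x, round(roughly_ten_percent(x), 2))
\\end{verbatim}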
In addition, more realistic expectations should consider the evolution of outcomes over time \\cite{peters2016evaluating}. Thus, the changes may have their expected values attenuated or intensified by the sense of truth (intuitive time probability).\n\n \n\\subsection{Meiosis and risk aversion}\n In order to model lotteries consider the hypothesis $\\Theta_2$ = ``to win $M $'' and a probability $p \\in [0,1]$. If $M>0$, then by fuzzy temporal logic we can have:\n\\begin{itemize}\n\\item $l_1$ = to win $Mp$ (at the next moment);\n\\item $l_2$ = to win $M$ (at the next moment) with probability $p$.\n\\end{itemize}\nThe expression ``at the next moment'' does not appear in the experiments, but they are implicit because the low waiting time for the two lotteries seems to be the same. Moreover, note that both lotteries are equivalents in the ensemble average ($E=pM$ for the two lotteries). \n\nAgain, let us consider an individual called Bob who may repeat similar lotteries in the future. Therefore, this repetition can affect his decision \\cite{peters2016evaluating}. If the lottery $l_2$ is repeated several times until he wins $M$, then $l_2$ is similar to affirm\n\\[F\\Theta_2=\\text{ ``will sometime win } M \\text{'',}\\]\nwhere $p$ is the time probability (or sense of truth for lottery). Thus, if we perform the meiotic argumentation procedure (see section \\ref{MH}), then the similar sentence which have equivalent outcomes to $F\\Theta_2$ is\n\\[N\\theta_2=\\text{ ``will win } W_0\\left(1+\\frac{M}{W_0}\\right)^p - W_0 \\text{ at the next moment'',}\\]\nin which this sentence is a future affirmation of the hypothesis \\[\\theta_2= \\text{ ``to win } W_0\\left(1+\\frac{M}{W_0}\\right)^p - W_0 \\text{''}.\\]\n\nNow, the difference between $N\\theta_2$ and $Nl_1$ consists only in the value of the award. Although values are reported in lotteries, variations on wealth are unknown because the cognitive effort to perform the division $M\/W_0$ is avoided. Thus, taking $x\\geq 0$, such that $x=M\/W_0$, we can only evaluate changes, \n\\begin{eqnarray}\n\\nonumber N l_1 \\text{ or } N\\theta_2 &=& \\max \\{\\mu(px),\\mu\\left((1+x)^p-1\\right)\\}\\\\\n&=& \\mu (px) \\text{ for all } x\\geq 0.\n\\label{MeioseComp}\n\\end{eqnarray}\n\n\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[scale=0.65]{PT1}\n\t\t\\caption{Function ${\\cal M}_p^+(x)$ to represent meiosis. The blue dashed line $x\/2$ is tangent to the $\\circ$-blue curve given by ${\\cal M}_{1\/2}^+(x)$. Analogously, the red dashed line $x\/10$ is tangent to the $\\ast$-red curve given by ${\\cal M}_{1\/10}^+(x)$. Note that in the vicinity of zero the curves are close, so this is a region of low distinguishability for the changes.}\t\n\t\\label{figPT1}\n\\end{figure} \n\nThe line $px$ is tangent to the concave curve $(1+x)^p-1$ at the point $x=0$ what results in $px \\geq (1+x)^p-1$ for $x\\geq 0$. An example can be seen in Figure \\ref{figPT1} where the dashed blue line $x\/2$ is above the curve $\\circ$-blue $(1+x)^\\frac{1}{2}-1$. Analogously, a similar illustration can be seen for $p=1\/10$. Thus, we can note that the lottery $l_1$ is preferable for any values of $M$ and $p$, because the line $px$ is always above the curve $(1+x)^p-1$. This result is consistent with the respondents' choices in the Kahneman and Tversky experiments \\cite{kahneman2013prospect}. 
Thus, the concave curve representing the expected positive change at the next moment can be described by\n\\begin{eqnarray}\n\\nonumber {\\cal M}^+_p(x)&=& (1+x)^p -1 \\\\\n&=& p\\ \\text{ln}_p (1+x) \\text{ for all } x\\geq 0.\n\\label{MeioseMais}\n\\end{eqnarray}\nThe function $\\text{ln}_p (x)\\equiv (x^p -1)\/p$ is defined here as in \\cite{nivanen2003generalized} and it is commonly used in nonextensive statistics \\cite{tsallis1988possible,tsallis1999nonextensive}.\n\n\n\n\n\n \n\n\n\n\n\\subsection{Hyperbole and risk seeking}\n Kahneman and Tversky also show that the subjective value function is not always concave \\cite{kahneman2013choices}. They noted that in a loss scenario there is a convexity revealing a preference for uncertain loss rather than for certain loss.\n If we replace the word ``win'' for ``lose'' in the lotteries $l_1$ and $l_2$, then we have the following lotteries that result in the wealth decrease: \n\\begin{itemize}\n\\item $l_3$ = to lose $Mp$ (at the next moment);\n\\item $l_4$ = to lose $M$ (at the next moment) with probability $p$.\n\\end{itemize}\n\nNow consider the hypothesis $\\Theta_4$ = ``to lose $M$''. If $l_4$ is repeated until the individual loses $M$, then this lottery becomes similar to \n\\[F\\Theta_4=\\text{ ``will sometime lose } M \\text{'',}\\] \nwhere $p$ is the time probability (sense of truth for $F\\Theta_4$). \n\nSimultaneously, the lottery $l_3$ proposes a certain loss. Then we can reduce its sense of truth for $p$ to compare with $F\\Theta_4$. For this, let us consider the following hyperbole\n \\[L_3 = \\text{``to lose } W_0-W_0\\left(1-\\frac{pM}{W_0}\\right)^\\frac{1}{p}\\text{'',}\\]\nin which the affirmation in the future\n\\[\\centering\n\\begin{array}{rl}\nFL_3 = \\text{``will sometime lose } W_0-W_0\\left(1-\\frac{pM}{W_0}\\right)^\\frac{1}{p}\\text{''}\n \\end{array}\n\\]\narguments an expected change $(1+px)^\\frac{1}{p}-1$ for $-1\\leq x<0$.\n\n\n\n The line $x$ is tangent to the convex curve $(1+px)^\\frac{1}{p}-1$ at the point $x=0$ for any $p$, what results in $(1+x)^\\frac{1}{p}-1 \\geq x$ for $-1\\leq x<0$. In Figure \\ref{figPT2} the dashed black line $x$ represents the proposed change in $l_4$ and the curves $\\ast$-red and $\\circ$-blue, \nbelonging to the family of curves $(1+px)^\\frac{1}{p}-1$, represent the hyperbolic argumentation for $l_3$.\n \\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[scale=0.65]{PT2}\n\t\t\\caption{The hyperbolic curves $(1+px)^{\\frac{1}{p}}$ for $p=1\/2$ and $p=1\/10$. Note that the curves tangentiate the black dashed line $x$ at the point zero. The interval $-0.1\\leq x <0$ is a region of low distinguishability. }\t\n\t\\label{figPT2}\n\\end{figure} \nTherefore, note that the family of curves $(1+px)^\\frac{1}{p}-1$ is very close to line $x$ until 0.1. \nThis means that they have low distinguishability in this region, in other words, uncertain and certain losses can be imperceptible changes when the losses are small.\nIn fuzzy logic is equivalent to choose between ``small decrease in wealth with certainty'' and ``small decrease in wealth with probability $p$''. \nThe decreases in wealth are almost the same and undesirable, but the uncertainty argues hope for escaping losses and it is desirable. Thus, the uncertain option for losses will be more attractive in this situation. 
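This low distinguishability can be checked numerically: for small losses the hyperbolically argued change $(1+px)^{1\/p}-1$ and the certain change $x$ are almost identical, while for large losses they clearly separate. The short sketch below merely evaluates the two expressions on an arbitrary grid of losses:
\\begin{verbatim}
# Compare the certain loss x with the hyperbolically argued loss (1+p*x)^(1/p)-1.
# Near zero the two are nearly indistinguishable; for large losses they diverge.

def hyperbolic_change(x, p):
    return (1.0 + p * x) ** (1.0 / p) - 1.0

if __name__ == "__main__":
    p = 0.5
    for x in (-0.01, -0.05, -0.10, -0.40, -0.80):
        h = hyperbolic_change(x, p)
        print(f"x = {x:+.2f}  (1+px)^(1/p)-1 = {h:+.4f}  gap = {h - x:+.4f}")
\\end{verbatim}
For $p=1\/2$ the gap is about $2.5\\times 10^{-5}$ at a 1\\% loss but 0.16 at an 80\\% loss.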
Then, in order to simulate risk seeking in the losses region we must insert a rate $\\rho$ into the hyperbolic argumentation process, so that\n\\begin{eqnarray}\n\\nonumber {\\cal H}^-_p(\\rho x)&\\equiv &(1+p\\rho x)^\\frac{1}{p}-1\\\\\n\\nonumber &=& e_p^{\\rho x}-1.\n\\end{eqnarray}\n\n\nThe rate $\\rho$ makes the curve ${\\cal H}^-_p(\\rho x) $ more convex. Thus, its first values pass below the line $x$ to simulate the risk seeking. In Figure \\ref{figPT3} the red curve has $\\rho=1.2$ and $p=1\/2$ to simulate this effect in the interval $-0.55<x<0$.\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[scale=0.65]{PT3}\n\t\t\\caption{Function ${\\cal S}_{0.5} (x)$ for $\\rho=1.2$. When the red curve is below the dashed black line we have risk seeking (interval $-0.55<x<0$), while for gains the dashed black line $x\/2$ lies above the red curve, indicating risk aversion. }\t\n\t\\label{figPT3}\n\\end{figure} \n\\subsection{Loss aversion and disjunction between hypotheses}\n\\label{averisco}\nThe loss aversion principle, ${\\cal{S}}_p(x)<-{\\cal{S}}_p(-x)$, refers to the tendency to avoid losses rather than to acquire equivalent gains; in other words, it is better not to lose \\$ 1,000 than to win \\$ 1,000. \n\nIn order to understand why the curve is steeper on the losses side, consider that Bob has only \\$10,000 (all of his money). If he now loses \\$ 1,000, then the variation is -10\\% and his new wealth is \\$ 9,000. However, if he wins \\$ 1,000, then the positive variation is 10\\% and his new wealth is \\$ 11,000. So far this process seems fair, but we need to look at it dynamically.\n If he has \\$ 9,000 at the next moment, then he will need to gain 11.11\\% to get back to \\$ 10,000. On the other hand, if he has \\$ 11,000 at the next moment, then the required variation is -9.09\\% to get back to \\$ 10,000. So which is the most difficult change to happen? Exactly, the 11.11\\% that restores the previous state after losing 10\\%. Therefore, it is better not to lose 10\\% than to gain 10\\% in the long-term gamble.\n\n\nThis behavior can be modeled in fuzzy temporal logic through the disjunction operator. In order to understand the details of the disjunction between loss and gain hypotheses, consider the lottery \n\\begin{eqnarray}\n\\nonumber L_{wl} &=& \\text{``to win } M_1 \\text{ with probability } p\\\\\n\\nonumber &\\text{ }&\\text{ or to lose }M_2\\text{ with probability } q\\text{''.}\n\\label{AversoRisco}\n\\end{eqnarray}\nIf winning $M_1$ produces a gain $x_1>0$ and losing $M_2$ produces a loss $-1\\leq x_2 <0$, then we have the following atomic hypotheses\n\\[\n\\begin{array}{l}\nH_1 = \\text{``to win }M_1\\text{'' and } H_2 = \\text{``to lose }M_2\\text{'',}\n\\end{array}\n\\]\nwhere the sense of truth for $H_1$ is $p$ and the sense of truth for $H_2$ is $q$. The future statement for the disjunction is \n\\begin{eqnarray}\n\\nonumber F (H_1 \\vee H_2) &=& \\text{``to win } M_1 \\text{ or to lose }M_2\\\\\\nonumber &\\text{ }&\\text{once in the future''.}\n\\label{LoteriaDisjunta}\n\\end{eqnarray}\nThe average change factor in this disjunction is $(1+x_1)^p(1+x_2)^q$ for $p+q<1$. This means that one of the hypotheses may be true at the next instant, or none, because only one will sometime be true in the future.\n \n Another way of affirming a disjunction of losses and gains is ensuring that one or the other will be true at the next instant, $N (H_1 \\vee H_2)$. The average change factor in this case is $(1+x_1)^p(1+x_2)^q$ for $p+q=1$. This means that $H_1$ or $H_2$ will be true at the next moment with absolute certainty. 
Uncertainty is just ``which hypothesis is true?''. Therefore, the judgment preceding the decision whether or not participating in this lottery, $N(H_1 \\vee H_2)$ or nothing, is equal to\n\\begin{equation}\n\\nonumber \\max \\{\\mu\\left((1+x_1)^p(1+x_2)^q-1\\right),\\mu(0)\\}.\n\\end{equation}\nThe lottery $L_{wl}$, which is a loss and gain disjunction, will be considered fair if the parameters $x_1$, $x_2$, $p$ and $q$ guarantee $(1+x_1)^p(1+x_2)^q-1>0$. In the experiment described at \\cite{kahneman2013prospect}, the value $p=1\/2$ and $x_1=x_2=x$ generate the inequality $\\sqrt{1-x^2}-1<0$ that makes the lottery unfair. Thus, the respondent's choice for not betting seems to reveal a perception about the lottery dynamics. In addition, it can be noted that expected negative change has its intensity increased by $x$. Therefore, the intensity of loss aversion is amount magnitude dependent. This means that the feeling of aversion to the lottery increases with the growth of the amount. In \\cite{mukherjee2017loss} is presented an empirical evidence of this behavior. \n\n\n \n\\section{Conclusion}\nHeuristics are cognitive processes that ignore part of the information and use simple rules to make a decision quicker and easier. In general, they are defined as strategies that seek alternatives, stop searches in a short time, and make a decision soon after.\n\n\nWithin heuristic processes, some decision-making requires the hypotheses judgment about dynamic processes before they take place in time. Time Preference Problem and Prospect Theory are famous examples. The first evaluates the goods receipt at different future dates and the second requires lotteries valuation before their outcomes are known. The common characteristics between these two problems noted here are the magnitude dependence and the inseparability between time and uncertainty. On the magnitude dependence it can be concluded that:\n\\begin{itemize}\n\\item the magnitude effect in time preference is a consequence of subadditivity;\n\\item the risk seeking can disappear in the Prospect Theory if high magnitude losses were considered. In addition, the aversion risk increases with the growth of the amounts.\n\\end{itemize}\nOn the other hand, on the inseparability between time and uncertainty it can be concluded that:\n\\begin{itemize}\n\\item in the time preference problem, the number of uncertain trials for the short-term hypotheses until verification of the long-term hypothesis produces the subadditive discounting, and consequently, higher annual average rates as the waiting time decreases. In addition, the preference reversal occurs because the number of allowed trials is changed when the hypothesis deadlines are shifted in time;\n\\item the probabilities of lotteries represent the temporal indeterminism about the future. Thus, the S-shaped curve in the Prospect Theory can be described by expected fuzzy changes of temporal hyptotheses.\n\\end{itemize}\n\nIf the future is uncertain, then time and uncertainty about changes can not mean two independent matters. For this reason, choice under uncertainty and intertemporal choice, traditionally treated as separate subjects, are unified in a same matter in this paper to elaborate the rhetoric for the decision-making. \n\nIn addition, it is shown that the fuzziness can changes to prospective judgments about magnitude dependent gains and losses. 
This means that a given problem may have different decisions simply by changing the values of the rewards, even if time and uncertainty context are not changed. \nExactly in these situations, fuzzy environment modeling will be essential to represent the decision-making.\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nUplift modeling involves a set of methods for estimating the expected causal impact of taking an action or a treatment at an individual or subgroup level, which could lead to an increase in their conversion probability \\cite{Zhao2019UpliftMF}.\nTypically for financial services and commercial companies looking to provide additional value-added services and products to their customers, marketers may be interested in evaluating the effectiveness of numerous marketing techniques, such as sending promotional coupons.\nWith the change in customer conversion possibilities, marketers are able to efficiently target prospects.\nMore than marketing campaigns, uplift modeling can be applied to a variety of real-world scenarios related to personalization, such as online advertising, insurance, or healthcare, where patients with varying levels of response to a new drug are identified, including the discovery of adverse effects on specific subgroups \\cite{Jaskowski2012UpliftMF}.\n\nIn essence, uplift modeling is a problem that combines causal inference and machine learning. For the former, it is mutually exclusive to estimate the change between two outcomes for the same individual. To overcome this counterfactual problem, samples are randomly assigned to a treatment group (receiving online advertisement or marketing campaign) and a control group (not receiving online advertisement nor marketing campaign). For the latter, the task is to train a model that predicts the difference in the probability of belonging to a given class on the two groups.\nAt present, two major categories of estimation techniques have been proposed in the literature, namely meta-learners and tailored methods \\cite{Zhang2022AUS}. The first includes the Two-Model approach \\cite{Radcliffe2007UsingCG}, the X-learner \\cite{Knzel2017MetalearnersFE} and the transformed outcome methods \\cite{Athey2015MachineLM} which extend classical machine learning techniques. The second refers to direct uplift modeling such as uplift trees \\cite{5693998} and various neural network based methods \\cite{ Louizos2017CausalEI,Yoon2018GANITEEO}, which modify the existing machine learning algorithms to estimate treatment effects. \nAlso, uplift trees can be extended to more general ensemble tree models, such as causal forests \\cite{doi:10.1080\/01621459.2017.1319839,10.1214\/18-AOS1709}, at the cost of losing true interpretability.\n\nIn order to take advantage of decision trees and the ensemble approach, we propose causal inference based single-branch ensemble trees for uplift modeling (CIET) with two completely different partition criteria that directly maximizes the difference between outcome distributions of the treatment and control groups. When building a single-branch tree, we employ lift gain and lift gain ratio as loss functions or partition criteria for node splitting in a recursive manner. \nSince our proposed splitting criteria are highly related to the incremental impact, the performance of CIET is thus expected to be reflected in the uplift estimation. 
Meanwhile, \nthe splitting logic of all nodes along the path from root to leaf is combined to form a single rule to ensure the interpretability of CIET. Moreover, the dataset not covered by the rule is then censored and the above tree-building process continue to repeat on the censored data. \nDue to this divide and conquer learning strategy, the dependencies between the formed rules can be effectively avoided. It leads to the formation of single-branch ensemble trees and a set of decorrelated inference rules. \n\nNote that our CIET is essentially different from decision trees for uplift modeling and causal forests. There are three major differences: (1) single-branch tree $vs$ standard binary tree; (2) lift gain and its ratio as loss function or splitting criterion $vs$ Kullback-Leibler divergence and squared Euclidean distance; and (3) decorrelated inference rules $vs$ correlated inference rules or even no inference rules. \nIt is demonstrated through empirical experiments that CIET can achieve better uplift estimation compared with the existing models. Extensive experimental results on synthetic data and the public credit card data show the success of CIET in uplift modeling. We also train an ensemble model and evaluate its performance on a large real-world online loan application dataset from a national financial holdings group in China. As expected, the corresponding results show a significant improvement in the evaluation metrics in terms of both AUUC and Qini coefficient.\n\n\n\n\nThe rest of this ms is organized as follows. First, causal inference based single-branch ensemble trees for uplift modeling is introduced. Next, full details of our experimental results on synthetic data, credit card data and real-world online loan application data are given. \nIt is demonstrated that CIET performs well in estimating causal effects compared to decision trees for uplift modeling. Finally, conclusions are presented.\n\n\n\n\\section{Causal Inference Based Single-Branch Ensemble Trees (CIET) for Uplift Modeling}\\label{sec:Alg}\nThis section consists of three parts. We first present two splitting criteria, single-branch ensemble approach and pruning strategy specially designed for the uplift estimation problem. Evaluation metrics for uplift modeling are then discoursed. Three key algorithms of CIET are further described in detail.\n\n\\subsection{Splitting Criteria, Single-Branch Ensemble Method and Pruning Strategy}\nTwo distinguishing characteristics of CIET are splitting criteria for tree generation and the single-branch ensemble method, respectively.\n\nAs for splitting criteria in estimating uplift, it is motivated by our expectation to achieve the maximum difference between the distributions of the treatment and control groups. \nGiven a class-labeled dataset with $N$ samples, $N^{T}$ and $N^{C}$ are sample size of the treatment and control groups (recall that $N = N^{T} + N^{C}$, $T$ and $C$ represent the treatment and control groups).\nFormally, in the case of a balanced randomized experiment, the estimator of the difference in sample average outcomes between the two groups is given by:\n\\begin{align}\n \\label{tau} \\tau = (P^{T} - P^{C})(N^{T} + N^{C})\n\\end{align}\nwhere $P^{T}$ and $P^{C}$ are the probability distribution of the outcome for the two groups. Motivated by Eq. (\\ref{tau}), the divergence measures for uplift modeling we propose are lift gain and its ratio, namely LG and LGR for short. 
The corresponding mathematical forms of LG and LGR can be thus expressed as\n\\begin{align}\n \\label{LG} LG &=(P_{R}^{T} - P_{R}^{C})N_{R} - \\tau_{0} = \\tau_{R} - \\tau_{0}\\\\\n \\label{LGR} LGR &=\\frac{(P_{R}^{T} - P_{R}^{C})}{(P_{0}^{T} - P_{0}^{C})} \\propto (P_{R}^{T} - P_{R}^{C}) = \\frac{\\tau_{R}}{N_{R}}\n\\end{align}\nwhere $P_{0}^{T}$ and $P_{0}^{C}$ are the initial probability distribution of the outcome for the two groups, $\\tau_{0} = (P_{0}^{T} - P_{0}^{C})N_{R}$. And $N_{R}$ and $Y_{R}$ for a node logic $R$ represent coverage and correction classification,\nwhile $P_{R}^{T}$ and $P_{R}^{C}$ are the corresponding probability distribution of the outcome for both groups, respectively. Evidently, both Eq. (\\ref{LG}) and Eq. (\\ref{LGR}) represent the estimator for uplift, which are proposed as two criteria in our ms. Compared to the standard binary tree with left and right branches, only one branch is created after each node splitting in this ms. It is characterized by the fact that both LG and LGR are calculated using the single-branch observations that present following a node split. Accordingly, subscript $k$ indicating binary branches doesn't exist in the above equation. Furthermore, the second term of LG makes every node partition better than randomization, while LGR has the identical advantages to information gain ratio. \n\n\n\nThe proposed splitting criterion for a test attribute A is then defined for any divergence measure $D$ as \n\\begin{equation}\\label{CRITERION}\n \\Delta = D(P^T(Y):P^C(Y)|A) - D(P^T(Y):P^C(Y)) \n\\end{equation}\nwhere $D(P^T(Y):P^C(Y)|A)$ is the conditional divergence measure. Apparently, $\\Delta$ is the incremental gain in divergence following a node splitting. Substituting for $D$ the $LG$ and $LGR$, \nwe obtain our proposed splitting criteria $\\Delta_{LG}$ and $\\Delta_{LGR}$. \nThe intuition behind these splitting criteria is as follows: we want to build a single-branch tree such that the distribution divergence between the treatment and control groups before and after splitting an attribute differ as much as possible. \nThus, an attribute with the highest $\\Delta$ is chosen as the best splitting one. In order to achieve it, we need to calculate and find the best splitting point for each attribute. In particular, an attribute is sorted in descending order by value when it is numerical. \nFor categorical attributes, some encoding methods are adopted for numerical type conversion. The average of each pair of adjacent values in an attribute with $n$ value, forms $n-1$ splitting points or values. \nAs for this attribute, the point of the highest $\\Delta$ can be seen as the best partition one. \nFurthermore, the best splitting attribute with the highest $\\Delta$ can be achieved by traversing all attributes. \nAs for the best splitting attribute, the instances are thus divided into two subsets at the best splitting point. One feeds into a single-branch node, while the other is censored. \nNote that the top\u2013down, recursive partition will continue unless there is no attribute that explains the incremental estimation with statistical significance. Also, histogram-based method can be employed to select the best splitting for each feature, which can reduce the time complexity effectively.\n\nDue to noise and outliers in the dataset, a node may merely represent these abnormal points, resulting in model overfitting. Pruning can often effectively deal with this problem. That is, using statistics to cut off unreliable branches. 
Since no pruning method is essentially better than the others, we use a relatively simple pre-pruning strategy. If the $\\Delta$ gain is less than a certain threshold, the node partition stops. Thus, a smaller and simpler tree is constructed after pruning. Naturally, decision-makers prefer less complex inference rules, since they are considered to be more comprehensible and robust from a business perspective.\n\n\n\\subsection{Evaluation Metrics for Uplift Modeling}\nAs noted above, it is impossible to observe both the control and treatment outcomes for an individual, which makes it difficult to define a measure of loss for each observation.\nAs a result, uplift evaluation differs drastically from traditional machine learning model evaluation. \nThat is, improving the predictive accuracy of the outcome \ndoes not necessarily indicate that the models will have better performance in identifying targets with higher uplift. In practice, most of the uplift literature resorts to aggregated\nmeasures such as uplift bins or curves. Two key metrics are the area under the uplift curve (AUUC) and the Qini coefficient \\cite{Gutierrez2016CausalIA}. In order to define AUUC, the binned uplift predictions are sorted from largest to smallest. For each $t$, the following cumulative statistic is computed over the top $t$ observations,\n\\begin{equation}\\label{AUUC}\n f(t) = (\\frac{Y_t^{T}}{N_t^{T}} - \\frac{Y_t^{C}}{N_t^{C}})(N_t^{T} + N_t^{C}) \n\\end{equation}\nwhere the $t$ subscript implies that the quantity is calculated on the first or top $t$ observations.\nThe higher this value, the better the uplift model. The continuity of the uplift curves makes it \npossible to calculate AUUC, i.e. the area under the real uplift curve, which can be used to evaluate and compare different models. As for the Qini coefficient, it represents a natural generalization of \nthe Gini coefficient to uplift modeling. The Qini curve is introduced with the following equation,\n\\begin{equation}\\label{QINI_Curve}\n g(t) = {Y_t^{T}} - \\frac{Y_t^{C}N_t^{T}}{N_t^{C}}\n\\end{equation}\nThere is an obvious parallelism with the uplift curve since $f(t)=g(t)(N_t^{T}+N_t^{C})\/N_t^{T}$.\nThe difference between the area under the actual Qini curve and that under the diagonal corresponding to random targeting can be obtained. \nIt is further normalized by the area between the random and the optimal targeting curves, which defines the Qini coefficient. \n\n\n\\subsection{Algorithm Modules}\nThe following three algorithms cover: selecting the best split for each feature using the splitting criteria described above, learning a top-down induced single-branch tree, and progressively forming ensemble trees from the resulting trees.\n\nAlgorithm \\ref{single1:algorithm} depicts how to find the best split of a single feature $F$ on a given dataset $D[group\\_key, feature,$ $ target]$ using a histogram-based method with the proposed two splitting criteria. Gain\\_left and Gain\\_right are the uplift gains for the child nodes after each node partition. If the maximum value of Gain\\_left is greater than that of Gain\\_right, the right branch is censored and vice versa. Thus, the best split with its corresponding splitting logic, threshold and uplift gain is found, which is denoted by Best$\\_$Direction, Best$\\_$Threshold and Best$\\_\\Delta$. 
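As an illustration of the split search just described, the following minimal Python sketch (our own simplification, not the exact implementation; it omits the histogram binning and the threshold checks introduced next) scores the left and right branches of every candidate threshold of one numeric feature with the lift gain of Eq. (\\ref{LG}) and keeps the better single branch:
\\begin{verbatim}
import numpy as np

def lift_gain(y, w, mask, p0_t, p0_c):
    # LG criterion on the observations selected by `mask`;
    # y: binary outcomes, w: 1 for treatment, 0 for control.
    n_t = int(np.sum(w[mask] == 1))
    n_c = int(np.sum(w[mask] == 0))
    if n_t == 0 or n_c == 0:
        return -np.inf
    p_t = y[mask][w[mask] == 1].mean()
    p_c = y[mask][w[mask] == 0].mean()
    n_r = n_t + n_c
    return (p_t - p_c) * n_r - (p0_t - p0_c) * n_r   # tau_R - tau_0

def best_split_one_feature(x, y, w):
    # Scan candidate thresholds of a numeric feature x and keep the
    # single branch ("<=" or ">") with the largest lift gain.
    p0_t, p0_c = y[w == 1].mean(), y[w == 0].mean()
    best = (-np.inf, None, None)                     # (gain, threshold, side)
    for v in np.unique(x)[:-1]:
        for side, mask in (("<=", x <= v), (">", x > v)):
            g = lift_gain(y, w, mask, p0_t, p0_c)
            if g > best[0]:
                best = (g, v, side)
    return best
\\end{verbatim}
Dividing $(P_{R}^{T} - P_{R}^{C})$ by $(P_{0}^{T} - P_{0}^{C})$ instead gives the LGR variant of Eq. (\\ref{LGR}).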
Besides, there are several thresholds to be initialized before training a CIET model, including the minimum number of samples at an inner node $min\\_samples$, the minimum recall $min\\_recall$ and the minimum uplift gain required for splitting $min\\_\\Delta$. The top-down process continues only while these restrictions are satisfied.\n\n\\begin{algorithm}[htbp]\n\\caption{Selecting the Best Split for One Feature}\n\\label{single1:algorithm}\n\\textbf{Input}: D, the given class-labeled dataset, including the group key (treatment\/control);\\\\\n\\textbf{Parameter}: feature $F$, min$\\_$samples, min$\\_$recall, min$\\_\\Delta$\\\\\n\\textbf{Output}: the best split that maximizes the lift gain or lift gain ratio on a feature\n\n\\begin{algorithmic}[1]\n\\STATE \\textbf{Set} Best$\\_$Value = 0, Best$\\_$Direction = \"\", Best$\\_\\Delta$ = 0, Best$\\_$Threshold = None\n\\STATE calculate $Y^T$, $Y^C$, $N^T$, $N^C$ on D\n\\STATE For each feature value $v$, calculate the values of $Y_{F\\leq v}^T$, $Y_{F\\leq v}^C$, $N_{F\\leq v}^T$, $N_{F\\leq v}^C$, and then the Gain\\_left($v$) and Gain\\_right($v$) with LG \\eqref{LG} or LGR \\eqref{LGR}.\n\\STATE For any $v$ whose split does not satisfy the restrictions on the number of samples\/recall rate\/divergence measure gain, set Gain\\_left($v$) and Gain\\_right($v$) to their minimum value. \n\\STATE $v_1$ = argmax(Gain\\_left($v$)), $v_2$ = argmax(Gain\\_right($v$))\n\\IF {max(Gain\\_left) $\\geq$ max(Gain\\_right)}\n\\STATE Best$\\_$Value = max(Gain\\_left)\n\\STATE Best$\\_$Direction = \"$\\leq$\"\n\\STATE Best$\\_$Threshold = $v_1$\n\\ELSE\n\\STATE Best$\\_$Value = max(Gain\\_right)\n\\STATE Best$\\_$Direction = \"$>$\"\n\\STATE Best$\\_$Threshold = $v_2$\n\\ENDIF\n\\STATE \\textbf{return} Best$\\_$Value, Best$\\_$Direction, Best$\\_$Threshold\n\\end{algorithmic} \n\\end{algorithm}\n\nAlgorithm \\ref{single:algorithm} presents a typical algorithmic framework for top\u2013down induction of a single-branch uplift tree, which is built in a recursive manner using a greedy depth-first strategy. \nThe parameter $max\\_depth$ represents the depth of the tree and $cost$ indicates the threshold of LG or LGR.\nAs the tree grows deeper, more instances are censored since they are not covered by the node partition logic of each layer. \nAs a result, each child node subdivides the original dataset hierarchically into a smaller subset until the stopping criterion is satisfied. By tracing the splitting logic on the path from the root to the leaf node of the tree, an \"IF-THEN\" inference rule is thus extracted. 
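Once extracted, such a rule is simply a conjunction of (feature, direction, threshold) clauses. The following minimal Python sketch (ours; the rule representation is hypothetical) shows how a learned rule set can be applied to new samples, e.g. to select the subpopulation to be treated:
\\begin{verbatim}
def rule_covers(rule, x):
    # True if sample x (a dict: feature name -> value) satisfies every
    # (feature, direction, threshold) clause of an "IF-THEN" uplift rule.
    for feature, direction, threshold in rule:
        v = x[feature]
        if direction == "<=" and not v <= threshold:
            return False
        if direction == ">" and not v > threshold:
            return False
    return True

def covered(rule_set, samples):
    # Samples selected by at least one rule in the learned rule set.
    return [x for x in samples if any(rule_covers(r, x) for r in rule_set)]
\\end{verbatim}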
\n\nFinally, adopting a divide-and-conquer strategy, the above tree-building process is repeated on the censored samples to form ensemble trees, resulting in the formation of a set of inference rules as shown in Algorithm \\ref{multi:algorithm}.\n\n\n\\begin{algorithm}[htbp]\n\\caption{Learning An \"IF-THEN\" Uplift Rule of A Single-branch Tree}\n\\label{single:algorithm}\n\\textbf{Input}: D, the given class-labeled dataset, including the group key (treatment\/control);\\\\\n\\textbf{Parameter}: max$\\_$depth, cost, min$\\_$samples, min$\\_$recall, min$\\_\\Delta$\\\\\n\\textbf{Output}: an \"IF-THEN\" uplift rule\n\n\\begin{algorithmic}[1]\n\\STATE \\textbf{Set} Rule$\\_$Single = [], Max$\\_$Gain = 0.0\n\\STATE \\textbf{Set} Add\\_Rule = \\textbf{True}\n\\WHILE{depth $\\leq$ max$\\_$depth \\textbf{and} Add\\_Rule}\n\\IF {the treatment group or control group in D is empty}\n\\STATE break\n\\ENDIF\n\\STATE \\textbf{Set} Keep = \\{ \\}, Best$\\_$Split = \\{ \\}\n\\STATE depth $\\leftarrow $ depth + 1\n\\STATE Add\\_Rule = \\textbf{False}\n\\FOR{feature in features}\n\\STATE Keep[feature] = Best\\_Split\\_for\\_One\\_Feature(D, feature, min$\\_$samples, min$\\_$recall, min$\\_\\Delta$) (Algorithm \\ref{single1:algorithm})\n\\ENDFOR\n\\FOR{feature in Keep}\n\\IF {feature's best gain $>$ Max\\_Gain + cost}\n\\STATE Max$\\_$Gain = feature's best gain\n\\STATE \\textbf{Add} Keep[feature] \\textbf{to} Best$\\_$Split\n\\STATE Add\\_Rule = \\textbf{True}\n\\ELSE\n\\STATE continue\n\\ENDIF\n\\ENDFOR\n\\STATE \\textbf{Add} Best$\\_$Split \\textbf{to} Rule$\\_$Single\n\\STATE D $\\leftarrow $ D $\\setminus$ \\{\nSamples covered by Rule$\\_$Single\\} \n\\ENDWHILE\n\\STATE \\textbf{return} Rule$\\_$Single\n\\end{algorithmic} \n\\end{algorithm}\n\n\n\\begin{algorithm}[htbp]\n\\caption{Learning A Set of \"IF-THEN\" Uplift Rules}\n\\label{multi:algorithm}\n\\textbf{Input}: D, the given class-labeled dataset, including the group key (treatment\/control);\\\\\n\\textbf{Parameter}: max$\\_$depth, rule$\\_$count, cost, min$\\_$samples, min$\\_$recall, min$\\_\\Delta$\\\\\n\\textbf{Output}: a set of \"IF-THEN\" uplift rules\n\n\\begin{algorithmic}[1]\n\\STATE \\textbf{Set} Rule$\\_$Set = \\{\\}, number = 0\n\\WHILE{number $\\leq$ rule$\\_$count}\n\\STATE rule = Single$\\_$Uplift$\\_$Rule(D, cost, max$\\_$depth, min$\\_$samples, min$\\_$recall, min$\\_\\Delta$) (Algorithm \\ref{single:algorithm})\\\\\n\\STATE \\textbf{Add} rule \\textbf{to} Rule$\\_$Set \n\\STATE D $\\leftarrow $ D $\\setminus$ dataset covered by rule \n\\STATE number $\\leftarrow$ number + 1\n\\ENDWHILE\n\\STATE \\textbf{return} Rule$\\_$Set\n\\end{algorithmic} \n\\end{algorithm}\n\n\\begin{figure*}[htbp]\n\\centering\n\\includegraphics[width=0.95\\textwidth]{figs\/AUUC.png} \n\\caption{The uplift curves of the four analyzed classifiers (in four different colors) for the synthetic dataset; the dashed line corresponds to random targeting.}\n\\label{fig:AUUC_SSD} \n\\end{figure*}\n\n\\section{Experiments}\\label{sec:er}\nIn this section, the effectiveness of our CIET is evaluated on synthetic and real-world business datasets. 
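All comparisons below are reported in terms of AUUC and the Qini coefficient introduced above. As a concrete illustration (ours, not part of the original implementation), the short Python sketch below evaluates the two curves of Eqs. (\\ref{AUUC}) and (\\ref{QINI_Curve}) on ranked uplift scores; the final normalization of the Qini coefficient by the area between the random and optimal targeting curves is omitted for brevity:
\\begin{verbatim}
import numpy as np

def uplift_and_qini_curves(score, y, w):
    # f(t) and g(t) over the top-t observations, ranked by predicted uplift.
    order = np.argsort(-score)
    y, w = y[order], w[order]
    n_t = np.cumsum(w == 1)            # N_t^T
    n_c = np.cumsum(w == 0)            # N_t^C
    y_t = np.cumsum(y * (w == 1))      # Y_t^T
    y_c = np.cumsum(y * (w == 0))      # Y_t^C
    valid = (n_t > 0) & (n_c > 0)
    f = (y_t / np.maximum(n_t, 1) - y_c / np.maximum(n_c, 1)) * (n_t + n_c)
    g = y_t - y_c * n_t / np.maximum(n_c, 1)
    return f[valid], g[valid]

def auuc_and_qini_area(score, y, w):
    # Discrete area under f(t), and area between g(t) and the straight
    # line joining its endpoints (random targeting).
    f, g = uplift_and_qini_curves(score, y, w)
    diag = np.linspace(0.0, g[-1], len(g))
    return f.sum(), (g - diag).sum()
\\end{verbatim}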
Since CIET fundamentally stems from tree-based approaches, we implement it and compare it with uplift decision trees based on squared Euclidean distance and Kullback-Leibler divergence \\cite{5693998}, which are referred to as baselines.\n\n\n\n\\begin{table*}[ht]\n\\centering\n\\begin{tabular}{llll}\n\\hline\nrule number &\"1\" &\"2\" &\"3\"\\\\\n\\hline\nnode logic &x9$\\_$uplift $\\leq$ 0.17 &x3$\\_$informative $>$ -1.04 &x6$\\_$informative $>$ 0.95 \\\\\nnode logic &x10$\\_$uplift $\\leq$ 2.59 &x1$\\_$informative $\\leq$ 2.71 &x1$\\_$informative $\\leq$ 1.58\\\\\nnode logic &null &x2$\\_$informative $\\leq$ 1.28 &x9$\\_$uplift $\\leq$ 2.09 \\\\\n$N_{before}$ &3000 &2210 &675 \\\\\n$N_{before}^T$ &1500 &1117 &348 \\\\\n$N_{before}^C$ &1500 &1093 &327 \\\\\n$N_{rule}$ &790 &1535 &180 \\\\\n$N_{rule}^T$ &383 &769 &76 \\\\\n$N_{rule}^C$ &407 &766 &104 \\\\\nnet gain &195.99 &87.14 &37.66 \\\\\n$recall_{treatment}$ &36.62$\\%$ &70.42$\\%$ &42.11$\\%$ \\\\\n$recall_{control}$ &28.00$\\%$ &63.70$\\%$ &44.90$\\%$ \\\\\n\\hline\n\\end{tabular}\n\\caption{A set of inference rules found by CIET and their corresponding statistical indicators with $criterion\\_type = $\"LG\", $rule\\_count = 3$ and $max\\_depth = 3$.}\n\\label{tab:RS_CIET}\n\\end{table*}\n\n\n\\subsection{Experiments on Synthetic Data}\\label{subsec: SD}\n\\textbf{Dataset} We first test the methodology with numerical simulations, that is, by generating synthetic datasets with known causal and non-causal relationships between the outcome, the action (treatment\/control) and some confounding variables. More specifically, both the outcome and the action\/treatment variables are binary. A synthetic dataset is generated with the $make\\_uplift\\_classification$ function in the \"Causal ML\" package, based on the algorithm in \\cite{Guyon2003DesignOE}.\nThere are 3,000 instances in total across the treatment and control groups, with response rates of 0.6 and 0.5, respectively. The input consists of 11 features in three categories. Eight of them are used for base classification, comprising 6 informative and 2 irrelevant variables. Two positive uplift variables are created to capture a positive treatment effect. The remaining one is a mix variable, which is defined as a linear superposition of a randomly selected informative classification variable and a randomly selected positive uplift variable.\n\n\n\n\n\\textbf{Parameters and Results} There are three main hyper-parameters in CIET: $criterion\\_type$, $max\\_depth$ and $rule\\_count$. $criterion\\_type$ has two options, LG and LGR. \nMore precisely, two main factors, business complexity and the difficulty of online deployment, determine the parameter assignment.\nDue to the requirements of model generalization and interpretability, $max\\_depth$ is set to 3. That is, the number of business logic conditions in a single inference rule is at most 3. Moreover, $rule\\_count$ is set to 3, indicating that a set of no more than three rules is used to model the causal effect of a treatment on the outcome. 
Meanwhile, the default values for $min\\_samples$, $min\\_recall$, $cost$ and $min\\_\\Delta$ are 50, 0.1, 0.01 and 0, respectively.\n\n\n\n\\begin{table}[htbp]\n\\centering\n\\begin{tabular}{lllll}\n\\hline\ndataset &KL &Euclid &LG &LGR\\\\\n\\hline\ntraining &0.187 &0.189 &0.239 &0.235 \\\\\ntest &0.176 &0.178 &0.210 &0.225 \\\\\n\\hline\n\\end{tabular}\n\\caption{Qini coefficients of four analyzed classifiers on the training and test sets of the synthetic data.}\n\\label{tab:QC_SSD}\n\\end{table}\n\n\nStratified sampling is used to divide the synthetic dataset into equally sized training and test sets. Figure \\ref{fig:AUUC_SSD} shows the uplift curves of the four analyzed classifiers. The AUUC values of CIET with LG and LGR are 294 and 292 on the training set, significantly greater than the 266 and 265 obtained by the decision trees for uplift modeling with KL divergence and squared Euclidean distance. \nAt the 36th percentile of the population, the cumulative profit increases reach 303 and 354 for LG and LGR, corresponding to gains of more than 18$\\%$ and 37$\\%$ over the baselines. Besides, AUUC shows little variation between the training and test datasets, indicating that the stability of CIET is also excellent. According to Table \\ref{tab:QC_SSD}, the Qini coefficients of CIET are also clearly greater, with increases of more than 24.5$\\%$ and 17.8$\\%$. Furthermore, all three rules are determined by uplift and informative variables as expected, which can be seen from Table \\ref{tab:RS_CIET}. \n\n\n\n\n\n\\subsection{Experiments on Credit Card Data}\n\n\\textbf{Dataset} We use the publicly available dataset $Credit$ $Approval$ from the UCI repository as one of the real-world examples, which contains 690 credit card applications. All 15 attributes and the outcome are encoded as meaningless symbols, where $A7 \\neq v$ \nis applied as the condition for dividing the dataset into treatment and control groups. \nThere are 291 and 399 observations in the two groups with response rates of 0.47 and 0.42, respectively.\nAttributes with more than a 25$\\%$ difference in distribution between the two groups are removed before any experiments are performed, which leaves 12 attributes as input variables. 
For further preprocessing, categorical features are binarized through one-hot encoding.\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=0.95\\linewidth]{figs\/CC.png} \n\\caption{The uplift curves of the four analyzed classifiers (in four different colors) for the $Credit\\ Approval$ dataset; the dashed curve corresponds to random targeting.}\n\\label{fig:FIG_CAD} \n\\end{figure}\n\n\\begin{table}[htbp]\n\\centering\n\\begin{tabular}{lllll}\n\\hline\nmetrics &KL &Euclid &LG &LGR\\\\\n\\hline\nAUUC &37.337 &40.887 &42.893 &48.222 \\\\\nQini &0.201 &0.236 &0.257 &0.310 \\\\\n\\hline\n\\end{tabular}\n\\caption{Model performance of four analyzed classifiers on the $Credit\\ Approval$ dataset.}\n\\label{tab:CAD}\n\\end{table}\n\n\n\\textbf{Parameters and Results} From a business decision-making perspective, the initial parameters are kept the same as above.\n\n\n\n\\begin{figure*}[htbp]\n\\centering\n\\includegraphics[width=0.95\\textwidth]{figs\/ROP.png} \n\\caption{The uplift curves of the four analyzed classifiers (in four different colors) for the real-world online loan application dataset; the dashed line corresponds to random targeting.}\n\\label{fig:AUUC_ROP} \n\\end{figure*}\n\nIn order to avoid the distribution bias caused by splitting such a small dataset, we do not divide $Credit\\ Approval$ into training and test parts. Figure \\ref{fig:FIG_CAD} shows the uplift curves for the four analyzed classifiers, from which we can see that CIET is able to obtain higher AUUC and Qini coefficients. As shown in Table \\ref{tab:CAD}, the former increases from approximately 37$\\sim$40 for the baselines to 42$\\sim$48 for CIET, while the latter also improves significantly from 0.20$\\sim$0.23 to 0.25$\\sim$0.31. In particular, when LGR serves as the splitting criterion, the cumulative profit reaches a distinct peak of 74.5 while covering only 48.4$\\%$ of the samples.\n\n\n\n\n\\subsection{Experiments on Online Loan Application Data}\n\n\n\n\\textbf{Dataset} We further extend our CIET to precision marketing for new customer applications. A telephone marketing campaign is designed to encourage customers to apply for personal credit loans at a national financial holdings group in China via its official mobile app. The target is binary (1\/0), indicating whether or not a customer submits an application. \nThe data contains 53,629 individuals, consisting of a treated group of 32,984 (receiving marketing calls) and a control group of 20,645 (not receiving marketing calls). These two groups have 300 and 124 credit loan applications with response rates of 0.9\\% and 0.6\\%, which are typical values in real-world marketing practice. There are 24 variables in all, covering credit card-related information, loan history, customer demographics, etc.\n\n\n\n\n\n\\begin{table}[htbp]\n\\centering\n\\begin{tabular}{lllll}\n\\hline\ndataset &KL &Euclid &LG &LGR\\\\\n\\hline\ntraining &0.173 &0.168 &0.414 &0.302 \\\\\ntest &0.108 &0.124 &0.385 &0.319 \\\\\n\\hline\n\\end{tabular}\n\\caption{Qini coefficients of four analyzed classifiers on the training and test sets of the real-world online loan application data.}\n\\label{tab:QC_ROP}\n\\end{table}\n\n\n\n\\textbf{Parameters and Results} All parameters are the same as in the above experiments. The dataset is first divided into training and test sets in a 60\/40 ratio. The response rates are consistent across the two sets for both groups. 
Figure \\ref{fig:AUUC_ROP} diplays the results graphically. As for the training dataset, CIET based on LG and LGR reach AUUC of about 104 and 89, while the decision trees based on KL divergence and squared Euclidean distance are 73 and 72. It can be seen that CIET achieves a significant improvement compared to baselines on this real-world dataset even with a very low response rate. Moreover, as can be seen in Table \\ref{tab:QC_ROP}, Qini coefficient based on our approaches increases to 0.30$\\sim$0.41 from 0.16$\\sim$0.17 \non the training dataset. Meanwhile, Qini coefficient changes little when crossing to test dataset, indicating a better stability. Consequently, classifier with our CIET for precision marketing is effectively improved as well as stabilized in terms of AUUC and Qini coefficient. At present, CIET has already been applied to personal credit telemarketing. \n\n\n\n\\section{Conclusion}\\label{sec:con}\nIn this ms, we propose new methods for constructing causal inference based single-branch ensemble trees for uplift estimation, CIET for short. Our methods provide two partition criteria for node splitting and strategy for generating ensemble trees. The corresponding outputs are uplift gain between the two outcomes and a set of interpretable inference rules, respectively. Compared with the classical decision tree for uplift modeling, CIET can not only be able to avoid dependencies among inference rules, but also improve the model performance in terms of AUUC and Qini coefficient. It would be widely applicable to any randomized controlled trial, such as medical studies and precision marketing. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nQuantum teleportation experiments have shown that quantum coherence can be \nmaintained for ever increasing distances. Indeed, the factor that hinders \ncoherence (breaking the required entanglement for teleportation) is the \nloss of signal to the medium, mainly the atmosphere. This obstacle is no \nlonger present in space, which hints at the possibility of performing \nsuch experiments at interstellar distances, or even detecting quantum \nsignals from astrophysical sources. In this context, one of us showed \nrecently that the quantum state of a photon could indeed be maintained \nat galactic distances, at least for a range of the electromagnetic \nspectrum \\cite{Berera2020}. The reason for this is that the mean free \npaths associated to the different interactions the photon could have are \nmany order of magnitude larger than the galactic scales \n(or even the observable Universe). \nAs an outcome of this observation, one seminal suggestion that paper \nmade was the possibility\nfor interstellar quantum communication, due to the viability for\nmaintaining quantum coherence over these distances\nfor certain frequency bands. Another possibility suggested in\nthat paper was if there were any natural quantum coherent sources,\nsuch signals could maintain their coherence over interstellar\ndistances. Extending on these ideas,\nthat paper also noted that this (lack of) effect most likely can be extrapolated to cosmological distances. \n\nThis work will explore that possibility. Here we consider\na wider variety of decoherence factors, like the expansion of the Universe itself. However, even for this case we do not give up on the philosophy that decoherence takes place due to the interaction of the quantum state with some environment. 
To do so, we consider the environment to be constituted by particles produced by the expansion of the Universe at different epochs. The mechanism to achieve this is squeezing, which has been widely studied in quantum optics and, in cosmology, in the theory of inflationary perturbations. So, borrowing from this mechanism, we compute the number of scalar particles through squeezing, and argue that this effect is essentially absent for fermions and $U(1)$ gauge bosons. Moreover, we identify the scalar field (interacting with photons) to be that of axion-like particles (ALPs), as a natural extension of the Standard Model. With these considerations, we are able to look at different interactions of the photon with the ALPs (or their decay products) in order to estimate the probability of interactions, which we find to be basically null. Thus, in practice, the expansion is not a decoherence factor for photons (at the energies we shall consider). We also look at other potential sources of decoherence, like interaction with CMB radiation or with electrons after reionization. The latter is more likely to be a source of decoherence, although the probabilities remain low enough to consider that the quantum state could remain undisturbed after decoupling. This opens up a new window to look for quantum signals from certain astrophysical objects or even from cosmic strings. \n\n\\section{Expansion-induced decoherence}\n\n\\subsection{Scalar fields}\n\nIn order to learn how the expansion of the Universe can lead to decoherence, let us look at the theory of cosmic inflation for guidance. \nCosmological perturbations during inflation undergo a process known as {\\it squeezing}, where states of the type $|n_{\\bf k}, n_{-{\\bf k}}\\rangle$ are created at superhorizon scales. This is an effect purely due to expansion, whose basic principles can be grasped just by studying a massless scalar field minimally coupled to gravity, as follows:\n\\begin{align}\n\t{\\cal S} & = \\frac{1}{2} \\int dt\\ d^3 x\\ \\sqrt{-g} \\partial_{\\mu} \\phi \\partial^{\\mu} \\phi \\nonumber \\\\\n\t& = \\frac{1}{2} \\int d\\tau\\ d^3 x\\ a^2 \\left[(\\phi')^2 - (\\nabla \\phi)^2 \\right],\n\\end{align}\nwhere primes denote derivative w.r.t. the conformal time $\\tau$. It is convenient to introduce the change of variable $\\varphi \\equiv a \\phi$, such that\n\\begin{equation}\\label{act1}\n\t{\\cal S} = \\frac{1}{2}\\int d\\tau\\ d^3 x\\ \\left[ \\left(\\varphi' - \\frac{a'}{a}\\varphi\\right)^2 - (\\nabla \\varphi)^2 \\right].\n\\end{equation}\nUsing the Euler-Lagrange equations, and going to Fourier space, one gets the mode equations\n\\begin{equation}\\label{eom1}\n\t\\varphi_k'' + \\left(k^2 - \\frac{a''}{a}\\right) \\varphi_k = 0.\n\\end{equation} \nIn the case of a perfect de Sitter expansion, $a''\/a = 2\/\\tau^2$, the equation of motion becomes\n\\begin{equation}\n\t\\varphi_k'' + \\left(k^2 - \\frac{2}{\\tau^2} \\right) \\varphi_k = \\varphi_k'' + \\left(k^2 - 2 (aH)^2 \\right) \\varphi_k = 0\\,.\n\\end{equation}\nClearly, the solutions are oscillatory for $k^2 > 2 (aH)^2$, whereas for $k^2 < 2(aH)^2$ there is a growing and a decaying-mode solution. The question which then arises is what should be the right initial state for solving this equation. 
For inflation, one usually takes Bunch-Davies initial states\n\\begin{equation}\n\t\\varphi_k (\\tau) = \\frac{e^{-i k \\tau}}{\\sqrt{2k}} \\left(1 - \\frac{i}{k\\tau}\\right)\\,,\n\\end{equation}\nsuch that the time-dependent field operator and the canonical momentum are given by\n\\begin{widetext}\n\\begin{equation}\n\t\\hat{\\varphi} (\\tau,\\vt{x}) = \\int \\frac{d^3k}{(2\\pi)^3} \\frac{1}{\\sqrt{2k}} \\left[e^{-ik\\tau}\\left(1 - \\frac{i}{k\\tau}\\right) \\cm{k} (\\tau_0) + e^{ik\\tau} \\left(1 + \\frac{i}{k\\tau}\\right) \\cpp{k} (\\tau_0) \\right] e^{i \\dpr{k}{x}},\n\\end{equation}\n\\begin{equation}\n\\hat{\\pi}(\\tau, \\vt{x}) = \\varphi' - \\frac{a'}{a}\\varphi = -i \\int \\frac{d^3k}{(2\\pi)^3} \\sqrt{\\frac{k}{2}} \\left[e^{-ik\\tau} \\cm{k} (\\tau_0) - e^{ik\\tau} \\cpp{k} (\\tau_0)\\right] e^{i \\dpr{k}{x}}\\,.\n\\end{equation}\n\\end{widetext}\n\nThe creation and annihilation operators at later times can be found through a Bogolyubov transformation, such that\n\\begin{align}\n\t\\cm{k} (\\tau) & = \\alpha_k (\\tau) \\cm{k} (\\tau_0) + \\beta_k (\\tau) \\cpp{k}(\\tau_0)\\,, \\nonumber\\\\\n\t\\cpp{k} (\\tau) & = \\alpha_k^* (\\tau) \\cpp{k} (\\tau_0) + \\beta_k^* (\\tau) \\cm{k} (\\tau_0)\\,,\n\\end{align}\nwhere $|\\alpha_k|^2 - |\\beta_k|^2 = 1$.\n\nConsidering this, one can parametrize these coefficients as\n\\begin{equation}\n\t\\alpha_k = \\cosh (r_k) e^{-i \\Theta_k}, \\quad \\beta_k = -\\sinh (r_k) e^{i (\\Theta_k + 2\\phi_k)}\\,,\n\\end{equation}\nwhich renders\n\\begin{widetext}\n\\begin{align}\\label{sqpar}\n\t\\hat{\\varphi}_k (\\tau) =& \\frac{1}{\\sqrt{2k}} \\left\\{\\left[ \\cosh (r_k) e^{-i \\Theta_k} - \\sinh(r_k) e^{-i(\\Theta_k + 2\\phi_k)}\\right] \\cm{k} + \\left[\\cosh(r_k) e^{i\\Theta_k} - \\sinh(r_k) e^{i(\\Theta_k + 2\\phi_k)}\\right] \\cpp{k} \\right\\}, \\nonumber \\\\\n\t\\hat{\\pi}_k (\\tau) = & -i \\sqrt{\\frac{k}{2}} \\left\\{\\left[ \\cosh (r_k) e^{-i \\Theta_k} + \\sinh(r_k) e^{-i(\\Theta_k + 2\\phi_k)}\\right] \\cm{k} - \\left[\\cosh(r_k) e^{i\\Theta_k} + \\sinh(r_k) e^{i(\\Theta_k + 2\\phi_k)}\\right] \\cpp{k} \\right\\}\\,.\n\\end{align}\n\\end{widetext}\n\nComparing with the equations above (depending on Bunch-Davies functions), one readily finds the parameters\n\\begin{align}\n\tr_k & = \\sinh^{-1} \\left(\\frac{1}{2k\\tau}\\right), \\qquad \\Theta_k = k\\tau + \\tan^{-1} \\left(\\frac{1}{2k\\tau}\\right)\\,, \\nonumber \\\\\n\t\\phi_k & = -\\frac{\\pi}{4} - \\frac{1}{2} \\tan^{-1} \\left(\\frac{1}{2k\\tau}\\right).\n\\end{align}\nThe vacuum expectation value of the number of particles for the new vacuum in the $k$ mode is given by\n\\begin{equation}\n\t\\left\\langle N_k \\right\\rangle = |\\beta_k|^2 = \\sinh^2 (r_k) = \\left(\\frac{1}{2k\\tau}\\right)^2.\n\\end{equation}\nThus, for $k < aH\/2$ the expectation number is bigger than $1$. In practice, this is comparable to the region for which the equation of motion has the non-oscillatory solutions, and in particular, where squeezing takes place. \n\n\\subsubsection{Particle production during the standard cosmological expansion}\n\nIn order to find the density of particles created during the expansion history, it is convenient to have at hand the evolution of the scale factor as a function of the conformal time, starting from the inflationary era until the matter dominated era. 
We shall assume the transitions between epochs to be instantaneous, commonly known as the sudden approximation.\\footnote{Relaxing this assumption does not change our main findings.} Using the sudden approximation between the (quasi)-de Sitter expansion and the hot big bang phase, the scale factor is given by\n\n\\begin{equation*}\n\ta(\\tau)=\\left\\{\\begin{array}{l}\n(H_{\\rm Inf}|\\tau|)^{-1}, \\quad \\tau<\\tau_{e}<0 \\\\\n\\alpha_M (\\tau-\\tau_e)^2 + \\alpha_R (\\tau - \\tau_e) + \\alpha_I, \\quad \\tau>\\tau_{e}\n\\end{array}\\right.\\,,\n\\end{equation*}\n\\begin{align}\n\t\\alpha_M & = \\frac{\\pi G}{3} \\rho_{eq} a_{eq}^3, \\quad \\alpha_I = \\frac{1}{H_{\\rm Inf} |\\tau_e|}, \\nonumber \\\\\n\t \\alpha_R & = \\left[ \\frac{4\\pi G}{3} \\rho_{eq} a_{eq}^3 \\left(\\frac{1}{H_{\\rm Inf} |\\tau_e|} + a_{eq} \\right)\\right]^{1\/2}\\,,\n\\end{align}\nwhere $\\tau_e$ denotes the conformal time at the end of inflation and ``$eq$'' refers to the time of matter-radiation equality. The quadratic term corresponds to the evolution during matter domination, whereas the linear term to radiation domination. \n\nThen, the equation of motion Eq. \\eqref{eom1} (which is for a completely general cosmological background) during this epoch(s) is given by\n\\begin{equation}\n\t\\varphi_k'' + \\left(k^2 - \\frac{2\\alpha_M}{\\alpha_M(\\tau-\\tau_e)^2 + \\alpha_R(\\tau-\\tau_e) + \\alpha_I} \\right) \\varphi_k = 0\\,.\n\\end{equation}\nNaturally, one can identify regions where the equation gets simplified. For the radiation-dominated era, the e.o.m. is\n\\begin{equation}\n\t\\varphi_k'' + k^2 \\varphi_k = 0\\,,\n\\end{equation}\nwhereas for the matter-dominated era, it is given by\n\\begin{equation}\n\t\\varphi_k'' + \\left(k^2 - \\frac{2}{\\tau^2} \\right) \\varphi_k = 0\\,,\n\\end{equation}\ni.e., the same equation as for the inflationary era. In principle, there can be small changes to the usual (positive-frequency) vacuum state coming from effects of gravitational phase transitions. However, the corrections to the positive-frequency vacuum are small or, in other words, the number of particles created due to these phase transitions quickly dilute. Therefore, it is reasonable to consider the Bunch Davies-like initial states, such that the solutions to these equations are, respectively, given by\n\\begin{eqnarray}\n{}^{\\rm Rad}\\varphi_k (\\tau) = \\frac{1}{\\sqrt{2k}} e^{-ik\/\\mathcal{H}}\\,,\n\\end{eqnarray}\nand the basis for the matter-dominated era as\n\\begin{eqnarray}\\label{solm}\n\t{}^{\\rm Mat}\\varphi_k (\\tau) = \\frac{1}{\\sqrt{2k}} \\left(1 - \\frac{i \\mathcal{H}}{2k}\\right) e^{-2ik\/\\mathcal{H}}\\,,\n\\end{eqnarray}\nwhere ${\\cal H}$ is the comoving rate of expansion.\\footnote{We here ignore any\n squeezing which takes place before the epoch of radiation domination.} As mentioned, the matching of the solutions during the different epochs will lead to excited states that will increase the number of generated particles. However, for now it will be enough to concentrate on this simple form of solutions. Moreover, notice that Eq. \\eqref{solm} has the same functional form as the Bunch-Davies solution for de Sitter spacetime, and thus the squeezing formalism derived for that case also applies here. 
In particular, the vacuum expectation of the number of particles is \n\\begin{equation}\n\t\\langle N_k \\rangle = |\\beta_k|^2 = \\left(\\frac{1}{2k\\tau}\\right)^2\\,.\n\\end{equation}\nOn the contrary, during radiation domination there is no mass term in the e.o.m., so there is no squeezing and particle production during this era. Therefore, expansion induces particle excitations of a scalar field only during the de Sitter and matter-dominated eras. Before moving on, we will cover the case of massive scalar fields during inflation, and similar calculations can be done for standard expansion.\n\n\\subsubsection{The massive scalar case}\n\nHere we will cover (although somewhat superficially) the case of a massive scalar field. A priori, one would expect particle production for massive fields to be less efficient, so we need to quantify the required corrections to the functions displayed above.\n\nLet us start with a generalization of the action Eq. \\eqref{act1},\n\\begin{equation}\n\t{\\cal S} = \\frac{1}{2} \\int d\\tau d^3 x \\left[ \\left(\\varphi' - \\frac{a'}{a}\\varphi \\right)^2 - (\\nabla \\varphi)^2 - a^2 m^2 \\varphi^2 \\right]\\,,\n\\end{equation}\nwhich renders the following equation of motion for the field\n\\begin{equation}\n\t\\varphi_k'' + \\left[k^2 - \\left( \\frac{a''}{a} - a^2 m^2 \\right) \\right] \\varphi_k = 0\\,.\n\\end{equation}\nOnce again, this equation is completely general for any cosmological epoch. For inflation this becomes\n\\begin{equation}\n\t\\varphi_k'' + \\left[k^2 - \\frac{1}{\\tau^2} \\left(2 - \\frac{m^2}{H^2}\\right) \\right]\\varphi_k = 0\\,,\n\\end{equation}\nwhere the Bunch-Davies solution is\n\\begin{equation}\\label{fim}\n\t\\varphi_k = \\frac{e^{i (2\\nu+1)\\frac{\\pi}{4}}}{\\sqrt{2k}} \\sqrt{\\frac{\\pi}{2}} w^{1\/2} H_{\\nu}^{(1)} (w)\\,,\n\\end{equation}\nwhere $w = |k \\tau|$ and $\\nu^2 = 9\/4 - (m\/H)^2$. The conjugate momentum is\n\\begin{align}\\label{pim}\n\t\\pi_k = & -i \\sqrt{\\frac{k}{2}} \\bigg\\{-i \\sqrt{\\frac{\\pi}{2}} e^{i (2\\nu+1)\\frac{\\pi}{4}} \\bigg[ w^{1\/2} H_{\\nu-1}^{(1)} (w) \\nonumber \\\\\n\t& + w^{-1\/2} \\left(3\/2 - \\nu \\right) H_{\\nu}^{(1)} (w) \\bigg] \\bigg\\}\\,.\n\\end{align}\nNotice that negative values of $\\nu$ lead to exponentially suppressed solutions. Thus, as expected, there is no particle production for $m \\gtrsim H$. Then, assuming that the mass term is small enough so that $\\nu$ is safely larger than 0, one can compute the number of generated particles due to squeezing by comparing the equations above with Eq. \\eqref{sqpar}. In order to have analytical expressions, one can expand eqs.\\eqref{fim},\\eqref{pim} in powers of $m\/H$, which yields\n\\begin{align}\n\t|\\beta_k|^2 \\simeq \\left(\\frac{1}{2k\\tau}\\right)^2 \\left[1 + \\frac{2}{3} \\frac{m^2}{H^2} \\left( -1 + \\gamma_E + \\ln (2w) \\right)\\right]\\,,\n\\end{align}\nwhere $\\gamma_E$ denotes the Euler-Mascheroni constant. Naturally, during radiation domination this type of mass term does not enhance squeezing, whereas during matter dominance it is more subdominant than in the other eras ($\\tau^{-4}$ vs. $\\tau^{-2}$). \n\n\\subsection{Setting up an environment}\n\nWhat we have covered so far is valid for a scalar field, so the natural next step is to try and reproduce this for photons. However, in this case there is no induced time-dependent mass-term and thus no squeezing (similarly to the scalar case during radiation domination). 
Naturally, there can be particle production due to interactions with other fields, but such processes are not linked to the background dynamics. In fact, in some cases the expansion just dilutes whatever number of particles are produced through these couplings. Consequently, in order to grasp the effects of decoherence of photons due to expansion alone, the next best thing is to look at the interactions between the quantum state (of a test photon) and an environment encompassed by either pseudoscalar particles produced by the squeezing of super-horizon states, or by decay products of these scalars, in particular, into photons. Arguably, the preeminent example of a scalar field in such scenario is the axion, which has a well-known interaction with $U(1)$ fields. Moreover, the interactions between axions and photons through other means have been widely explored in the literature, where the search of this particle is largely based on this interaction. \nThe interaction between axions and $U(1)$ gauge fields is described by the Lagrangian\n\\begin{equation}\n\t{\\cal L}_{A\\gamma\\gamma} = - \\frac{g_{A \\gamma\\gamma}}{4} F_{\\mu\\nu} \\tilde{F}^{\\mu\\nu} \\phi_A = g_{A\\gamma\\gamma} \\vt{E}\\cdot \\vt{B}\\ \\phi_A\\,,\n\\end{equation}\nwith\n\\begin{align}\n\tg_{A\\gamma\\gamma} & = \\frac{\\alpha}{2\\pi f_A} \\left(\\frac{E}{N} - 1.92(4)\\right) \\nonumber \\\\\n\t& = \\left(0.203(3) \\frac{E}{N} - 0.39(1) \\right) \\frac{m_A}{{\\rm GeV}^2}\\,,\n\\end{align}\nwhere $E$ and $N$ are the electromagnetic and color anomalies of the axial current \\cite{RRR2018}. \n\n\\subsubsection{Number density}\n\nLet us estimate the number density of $\\phi_A$-particles created during inflation (just by squeezing). For this, we need to compute the total number of particles. Assuming the states are homogeneously distributed, the amount of states within a ``radius'' $k$ is \n\\begin{equation}\n\tG(k) = \\frac{V}{(2\\pi)^3} \\frac{4\\pi k^3}{3}\\,,\n\\end{equation}\nwhere $V$ stands for a comoving volume. In this way, the density of states is given by\n\\begin{equation}\n\tg(k) = \\frac{\\partial G}{\\partial k} = \\frac{V}{2\\pi^2} k^2\\,.\n\\end{equation}\nThus, the total number of particles is\n\\begin{widetext}\n\\begin{equation}\n\tN = \\sum_k N_k = \\int dk\\ g(k) f(k) \\nonumber = \\frac{V}{2\\pi^2} \\int_{0}^{-1\/\\tau_e} dk\\ k^2 \\left(\\frac{1}{2k\\tau_e}\\right)^2 \\left[1 + \\frac{2}{3} \\frac{m^2}{H_{\\rm Inf}^2} \\left( -1 + \\gamma_E + \\ln (-2k\\tau_e) \\right)\\right]\\,,\n\\end{equation}\n\\end{widetext}\nwhere we have used the formula for the average number of particles on a mode $k$ created due to squeezing, which we identified with $f(k)$. The integration limits correspond only to modes that have been superhorizon at some point during inflation, as those are the ones that undergo squeezing. Performing the integral we get\n\\begin{eqnarray}\n\tN & = & \\frac{V}{8\\pi^2} \\left. 
\\frac{k}{\\tau_e^2} \\left[1 + \\frac{2}{3}\\frac{m_a^2}{H_{\\rm Inf}^2} \\left( -2 + \\gamma_E + \\ln (-2k\\tau_e)\\right] \\right]\\right|_{0}^{-1\/\\tau_e} \\nonumber \\\\\n\t& = & -\\frac{V}{8\\pi^2} \\frac{1}{\\tau_e^3} \\left[1 + \\frac{2}{3}\\frac{m_a^2}{H_{\\rm Inf}^2} \\left( -2 + \\gamma_E + \\ln 2 \\right) \\right]\\,.\n\\end{eqnarray}\nThen, one can obtain the density of these particles at any given time through\n\\begin{equation*}\n\tn = \\frac{N}{V_{\\rm phys}} = - \\frac{1}{8\\pi^2} \\left(\\frac{1}{a \\tau_e}\\right)^3 \\left[1 + \\frac{2}{3} \\frac{m_a^2}{H_{\\rm Inf}^2} \\left( -2 + \\gamma_E + \\ln 2 \\right) \\right]\\,.\n\\end{equation*}\n\nThis is a good point to make some estimates. First, one can get away (for now) with not choosing a value of $m_a$, as it will be subdominant. Thus, we are left to find $\\tau_e$. To do so, notice that \n\\begin{equation}\n\t\\frac{1}{k_0 |\\tau_e|} = \\frac{k_0|\\tau_*|}{k_0 |\\tau_e|} = \\frac{a_e}{a_*} \\sim e^{60}\\,,\n\\end{equation}\nwhere `$0$' and `$*$' stand for present-day and horizon-crossing magnitudes. In particular, $k_0$ can be identified with the current horizon length. As it is widely known, inflation had to last at least 60 $e-$folds after this mode crossed the horizon in order to solve the horizon problem.\\footnote{Actually, the number of $e-$folds needed to solve the horizon problem depends on the energy scale of inflation, but we are only interested in rough estimates here.} Then, the above is equivalent to \n\\begin{equation}\n\t\\frac{(aH)_0^{-1}}{|\\tau_e|} = \\frac{H_0^{-1}}{|\\tau_e|} \\sim e^{60}\\,,\n\\end{equation}\nrendering a conformal time at the end of inflation,\n\\begin{equation}\n\t\\tau_e \\sim - 4 \\times 10^{-9}\\ {\\rm sec} = - 1.465\\times 10^{34} \\Mp^{-1}\\,.\n\\end{equation}\nWith this, we have the necessary values to estimate the number density of squeezing-generated ALPs at any given era. The free parameters are the energy scale of inflation and the mass of the particles. However, if the latter is small in comparison to the former, the contribution from the ratio will be negligible and one can get away with working with the first term. \n\n\\subsection{$\\phi_A \\gamma_t \\rightarrow x \\overline{x}$}\n\nFermion production from the interaction of an ALP and a (test) photon $\\gamma_t$ is mediated by the Lagrangian\n\\begin{equation}\n\tq_x A_{\\mu} \\overline{\\psi} \\gamma^{\\mu} \\psi\\,.\n\\end{equation}\n\nLet us take the initial momenta of the particles to be\n\\begin{equation}\n\tk_a = E_a(1, \\cos \\theta, \\sin \\theta ,0), \\qquad k_{\\gamma} = E_{\\gamma}(1, 1, 0, 0)\\,.\n\\end{equation}\nWith a center of mass energy given by $E^2_{\\rm com} = 2 E_a E_{\\gamma} (1-\\cos \\theta)$, one can find the cross section of the interaction to be\n\\begin{equation}\n\t\\sigma = \\frac{1}{4E_a E_{\\gamma} |v_a - v_{\\gamma}|} \\frac{q_x^2 m_x^2}{2\\pi f_a^2} \\ln \\left(\\frac{E' + p'}{E'-p'}\\right)\\,,\n\\end{equation}\nwhere $E' = E_{\\rm com}\/2$, $p' = \\sqrt{(E')^2 - m_x^2}$ and $m_x$ denotes the mass of the fermions. 
Now, we introduce the variables $\\lambda = 2 m_x^2\/(E_a E_{\\gamma})$ and $y = \\cos \\theta$, such that the average over the initial axion momentum is \\cite{Conlon:2013isa}\n\\begin{widetext}\n\\begin{align}\n\t\\langle \\sigma v \\rangle & = \\frac{q_x^2 \\lambda}{16\\pi f_a^2} \\int_{-1}^{1-\\lambda} dy\\ \\ln \\left(\\frac{\\sqrt{1-y} + \\sqrt{1-y-\\lambda}}{\\sqrt{1-y} - \\sqrt{1-y-\\lambda}} \\right) \\nonumber \\\\\n\t& = \\frac{q_x^2 \\lambda}{16\\pi f_a^2} \\left[ - \\sqrt{4-2\\lambda} + (\\lambda - 2) \\ln (\\sqrt{2} - \\sqrt{2-\\lambda}) + 2 \\ln (\\sqrt{2-\\lambda} + \\sqrt{2}) - \\frac{1}{2} \\lambda \\ln \\lambda \\right] \\nonumber \\\\\n\t& = \\frac{q_x^2 \\lambda}{16\\pi f_a^2} \\left[ - \\sqrt{4-2\\lambda} + (4-\\lambda) \\ln (\\sqrt{2} + \\sqrt{2-\\lambda}) + \\left(\\frac{\\lambda}{2} - 2\\right) \\ln \\lambda \\right]\\,.\n\\end{align}\n\\end{widetext}\nNotice this expression tells us that $0 < \\lambda \\leq 2$. \n\nNext, we identify two contributions to the ALP number density during the matter dominated era: those produced during inflation and those produced during matter domination itself. \n\\begin{equation*}\n\tn = \\int dn = \\frac{1}{8\\pi^2} \\left[ \\int_{aH}^{-1\/\\tau_e} \\frac{dk}{a^3 \\tau_e^2} + \\int_{aH}^{(aH)_{eq}} \\frac{dk}{a^3 \\tau^2} \\right]\\,.\n\\end{equation*}\nThen, it is convenient to write every expression in terms of the variable $\\lambda$ introduced above, \n\\begin{equation}\nk = \\frac{2 a m_x^2}{\\lambda E_{\\gamma}} \\implies d k= - \\frac{2am_x^2}{E_{\\gamma}} \\frac{d\\lambda}{\\lambda^2}\\,,\n\\end{equation}\nsuch that the interaction rate is given by\n\\begin{widetext}\n\\begin{align}\n\t\\langle n \\sigma v \\rangle = & \\frac{q_x^2}{16\\pi f_a^2} \\frac{1}{8\\pi^2 a^3} \\frac{2am_x^2}{E_{\\gamma}} \\bigg\\{ \\int_{\\lambda_e}^{\\lambda_{\\tau}} \\frac{d\\lambda}{\\lambda \\tau_e^2} \\left[ - \\sqrt{4-2\\lambda} + (4-\\lambda) \\ln (\\sqrt{2} + \\sqrt{2-\\lambda}) + \\left(\\frac{\\lambda}{2} - 2\\right) \\ln \\lambda \\right] \\nonumber \\\\\n\t& + \\int_{\\lambda_{\\tau}}^{\\lambda_{eq}} \\frac{d\\lambda}{\\lambda \\tau^2} \\left[ - \\sqrt{4-2\\lambda} + (4-\\lambda) \\ln (\\sqrt{2} + \\sqrt{2-\\lambda}) + \\left(\\frac{\\lambda}{2} - 2\\right) \\ln \\lambda \\right] \\bigg\\}\\,,\n\\end{align}\n\\end{widetext}\nwhere \n\\begin{equation*}\n\t\\lambda_e = \\frac{2m_x^2}{E_{\\gamma}} a|\\tau_e|\\,, \\qquad \\lambda_{\\tau} = \\frac{2m_x^2}{H E_{\\gamma}}\\,, \\qquad \\lambda_{eq} = \\frac{2m_x^2}{H_{eq} E_{\\gamma}}\\,.\n\\end{equation*}\nThe kinematic constraints on $\\lambda$ place stringent bounds on the allowed values of the parameters of the model, in particular on the ratio $m_x^2\/E_{\\gamma}$. To see this, take the values at matter-radiation equality, where\n\\begin{equation*}\n\t\\lambda_e(a_{eq}) \\sim 10^{30} \\Mp^{-1} \\frac{2m_x^2}{E_{\\gamma}}\\,, \\quad \\lambda_{\\tau} (\\lambda_{eq}) \\sim 10^{55} \\Mp^{-1} \\frac{2m_x^2}{E_{\\gamma}}\\,.\n\\end{equation*}\nSo, taking the maximum allowed value of $\\lambda$, we conclude that $m_x^2\/E_{\\gamma} \\sim 10^{-55} \\Mp$. This could be satisfied only for extremely light fermions (even for not-so-realistic values of the photon energy). Assuming these rather implausible conditions are satisfied, we can notice that the first integral will dominate ($\\tau_e^{-2} \\gg \\tau^{-2}$), so we will just focus on this one (for now). 
Then, the interaction rate is\n\\begin{equation}\n\t\\langle n \\sigma v \\rangle \\approx \\frac{3356 q_x^2}{16\\pi f_a^2} \\frac{(1+z_{eq})^2}{8\\pi^2} \\frac{2\\times 10^{-55}\\Mp}{\\tau_e^2}\\,, \n\\end{equation} \nwhere we have solved the integral numerically. Plugging the numerical values of $\\tau_e$ and $z_{eq}$, we have that \n\\begin{equation}\n\t\\langle n \\sigma v \\rangle \\sim 10^{-112} \\Mp^3 \\frac{q_x^2}{128\\pi^2 f_a^2}\\,.\n\\end{equation}\nClearly $f_a$ would need to be abnormally small in order to have a non negligible interaction rate. The only way to obtain non negligible values would be to suppress even more the ratio $m_x^2\/E_{\\gamma}$, such that the corresponding versions of $\\lambda$ approach to $0$, where the integral actually diverges. Needless to say, even considering very light fermions, the energy of the photon would be out of reach (and can even become trans-Planckian). Indeed, for axions coming from string theory, we generically expect $f_a>\\Mp$ from the Weak Gravity Conjecture (WGC) \\cite{Heidenreich:2015nta, Rudelius:2014wla, Bachlechner:2015qja}. Interestingly, the WGC also constrains the ratio of the charge-to-mass of fermions to be less than $q_x\/m_x < 1$ in Planck units. On excluding trans-Planckian photons on physical grounds, this means that $q_x$ gets naturally suppressed on considering very small values for $m_x^2\/E_{\\gamma}$. Therefore, it seems that the WGC highly disfavours having a non-negligible value for this interaction rate.\n\n\\subsection{$\\phi_A \\rightarrow \\gamma \\gamma \\implies \\gamma_t \\gamma \\rightarrow \\gamma \\gamma$}\n\nIn this case, we will check how likely it is for the photon to interact with an environment composed of photons which are produced from the decay of an ALP. For this, we need the decay width of the process, which is \n\\begin{equation}\n\t\\Gamma_{A\\rightarrow \\gamma\\gamma} = \\frac{g_{A\\gamma\\gamma}^2 m_A^3}{64\\pi}\\,,\n\\end{equation}\nand, assuming $E\/N = 0$, this becomes\n\\begin{equation}\n\t\\Gamma_{A\\rightarrow \\gamma\\gamma} = 1.1 \\times 10^{-24}\\ {\\rm s}^{-1} \\left(\\frac{m_A}{\\rm eV}\\right)^5\\,.\n\\end{equation}\nWithout any further calculations, one can see that for masses $m_A \\sim {\\cal O}(1)\\ {\\rm eV}$ or less, the decay width is too small even considering the age of the Universe ($\\sim 10^{17}\\ {\\rm s}$), and so no photons would be produced. Current bounds on the mass of the axion highly disfavour higher masses. This is why it is more appropriate to talk about ALPs, as they are more generic and well suited to be a test lab. \n\nNaturally, the photons resulting from the decaying of the ALP will not have the same momentum as it. We label the resulting photons as $1'$ and $2'$, with an angle $\\theta'$ between their momenta. Then, one can easily show that \n\\begin{equation}\n\t\\langle \\cos \\theta' \\rangle = - \\frac{m_A^2}{4 p_{1'} p_{2'}}\\,, \\qquad \\langle p_{1'} p_{2'} \\rangle = \\frac{m_A^2}{4}\\,,\n\\end{equation}\nso that\n\\begin{equation}\n\tp_{1'}^2 + p_{2'}^2 = p_A^2 + \\frac{m_A^2}{2}\\,,\n\\end{equation}\nleading to the following direction-averaged momenta\n\\begin{align}\n\tp_{1'}^2 &= \\frac{m_A^2}{4} + \\frac{p_A^2}{2} \\left[ 1 + \\sqrt{1+\\frac{m_A^2}{p_A^2}} \\right]\\,, \\nonumber \\\\\n\tp_{2'}^2 &= \\frac{m_A^2}{4} + \\frac{p_A^2}{2} \\left[ 1 - \\sqrt{1+\\frac{m_A^2}{p_A^2}} \\right]\\,.\n\\end{align}\nThis leads to a not-so-simple distribution of photons. 
However, considering the range of masses that render a photon population at matter domination, the distribution can be somewhat simplified. To see this, first notice that the comoving momentum is between $(aH)_{eq} \\lesssim k \\leq (aH)_{e}$, or plugging in numbers, $10^{-59}\\ \\Mp \\lesssim k \\lesssim 10^{-34}\\ \\Mp$. The physical momentum of massive particles varies with expansion the same way as for massless particles ($p \\propto a^{-1}$). Thus, the physical momentum of ALPs should be on the range $10^{-56}\\ \\Mp \\lesssim p \\lesssim 10^{-31}\\ \\Mp$ (or $ 10^{-38}\\ {\\rm GeV} \\lesssim p \\lesssim 10^{-13}\\ {\\rm GeV}$). Even for the upper limit, the physical momentum of ALPs is rather negligible in comparison with the rest mass required for it to decay by the matter dominated era (${\\cal O}(10^3)\\ {\\rm eV}$). Thus, it is a good approximation to treat the ALPs as non-relativistic. Then, the momentum of the resulting photons are roughly\n\\begin{equation}\n\tp_{1'} \\approx \\frac{m_A + p_A}{2}\\,, \\qquad p_{2'} \\approx \\frac{m_A - p_A}{2}\\,,\n\\end{equation}\nwhere for the sake of simplicity, we take $p_{1'} \\approx p_{2'} \\approx m_A\/2$.\n\nWith these considerations, one can compute the mean free path of a test photon interacting with an environment of photons decaying from ALPs. For starters, Euler and Kockel computed the cross section for photon-photon interactions \\cite{Euler:1935zz, Liang:2011sj}, \n\\begin{equation}\n\t\\sigma(\\gamma\\gamma \\rightarrow \\gamma\\gamma) = \\frac{937 \\alpha^4 \\omega^6}{10125 \\pi m^8},\n\\end{equation} \nwhere $\\alpha \\simeq 1\/137$ is the fine structure constant, $\\omega$ is the energy of the photons in the center-of-momentum frame, and $m$ is the mass of the electron. The momentum of each photon in the lab-frame can be written as\n\\begin{equation}\np_1^{\\mu} = E_{1} (1,1,0,0), \\qquad p_2^{\\mu} = E_2 (1, -\\cos \\theta, \\sin \\theta, 0),\n\\end{equation}\nsuch that\n\\begin{equation}\n\t\\omega = \\sqrt{E_1 E_2}\\; \\cos \\frac{\\theta}{2} .\n\\end{equation}\nNext, recalling the number density of ALPs (which translates into the number density of photons up to a factor of $2$), and considering that their mass is negligible in comparison to the energy scale of inflation, we have\n\\begin{equation}\\label{npeq}\n\tn = - \\frac{1}{4\\pi^2} \\left(\\frac{1}{a \\tau_e}\\right)^3\\,,\n\\end{equation}\nsuch that\n\\begin{equation}\n\t\\sigma n \\sim \\frac{937 \\alpha^4}{10125\\pi m^8} E_{\\gamma}^3 E_{1}^3 \\frac{2}{\\pi} \\frac{(1+z_{eq})^3}{4\\pi^2} |\\tau_e|^{-3}\\,,\n\\end{equation}\nwhere $E_{\\gamma}$ denotes the energy of the test photon (quantum state) and $E_{1}$ the energy of the environment photon. Then, taking $E_{\\gamma} = 10^{-17}\\ \\Mp$ and $E_{1} \\sim m_A = 10^{-24}\\ \\Mp\\ (1\\ {\\rm keV})$, the resulting mean free path is\n\\begin{equation}\n\t\\ell = (\\sigma n)^{-1} \t\\sim 10^{21}\\ {\\rm cm}\\,,\n\\end{equation}\nwhich should be compared to $H_{eq}^{-1} \\sim 10^{50}\\ {\\rm cm}$. Nevertheless, notice that we have taken a rather high energy for the test photon, so much so that the cross section formula may be invalid due to other processes being predominant. A more sensible value would be $E_{\\gamma} = 10^{-24}\\ \\Mp$, which yields\n\\begin{equation}\n\t\\ell = (\\sigma n)^{-1} \t\\sim 10^{42}\\ {\\rm cm}\\,.\n\\end{equation}\nThus, in principle photons could interact with other photons emerging from the decay of ALPs (we will check this more carefully below). 
However, it is instructive to compare the possibility of these interactions to the interaction with CMB photons. According to our estimation for the number density of photons created through the process $\\phi_A \\rightarrow \\gamma\\gamma$, by the time of photon decoupling we have $ n \\sim 20 \\ {\\rm cm}^{-3}$ $(600\\ {\\rm cm}^{-3}$ by matter-radiation equality), whereas for CMB photons $n_{pd} \\approx n_{\\gamma,0} (1+z_{pd})^3 \\sim 4 \\times 10^{11}\\ {\\rm cm}^{-3}$. Thus, the number density of ALP photons is negligible in comparison to CMB photons, so the latter are in principle a more important source of decoherence than the former after $z \\sim 1000$. Let us compute next the mean free path due to this interaction. \n\n\\subsection*{Mean free path}\n\nIn order to compute the mean free path (or redshift in a cosmological setting), we will use the optical depth, defined as \n\\begin{equation}\n\t{\\cal T} = \\int \\sigma j_{\\mu} dx^{\\mu}\\,,\n\\end{equation}\nwhere $\\sigma$ is the cross section of the interaction and $j_{\\mu}$ is the four-current \\cite{Ruffini:2015oha}. The integral over the spatial dimensions are null due to isotropy and homogeneity. \nThis will be used to compute in a more robust manner the mean free path for the interaction of a photon with others produced by the decay of an ALP. Moreover, we will incorporate the time dependence from the decay width. With these considerations, the optical depth is written as \n\\begin{widetext}\n\\begin{equation}\\label{opd}\n\t{\\cal T} = \\int_{t}^{t_0} dt\\ (1-e^{-\\Gamma t}) \\frac{937\\alpha^4 E_{\\gamma}^3 m_A^3}{10125\\pi m^8} \\int_{-1}^{1} d(\\cos \\theta) \\cos^6 \\frac{\\theta}{2}\\ \\frac{1}{4\\pi^2} \\left[\\int_{aH}^{-1\/\\tau_e} \\frac{dk}{\\tau_e^2} + \\int_{aH}^{(aH)_{eq}} \\frac{dk}{\\tau^2} \\right]\\,.\n\\end{equation}\n\\end{widetext}\n\nNext, we shall assume a matter dominated Universe throughout the entire propagation of the photon. This will be convenient in order to deal with the explicit time dependence in the expression. Thus, we have that\n\\begin{equation}\n\tt = \\frac{2}{3H_0} (1+z)^{-3\/2}\\,.\n\\end{equation}\n\nWe shall focus on the first term inside the brackets of \\eqref{opd}, which is dominant (by many orders of magnitude). In doing so, the optical depth is written as\n\\begin{align}\n{\\cal T} & = \\frac{937\\alpha^4 E_{\\gamma,0}^3 m_A^3}{10125\\pi m^8} \\frac{1}{8\\pi^2} \\times \\nonumber \\\\ & \\int_0^z \\frac{dz'\\ (1+z')^3}{H_0 (1+z')^{5\/2}} (1-e^{-\\Gamma t})\\frac{1}{\\tau_e^2}\\left[-\\frac{1}{\\tau_e} - H_0 (1+z)^{1\/2}\\right]\\,. \\nonumber\n\\end{align}\nIn order to have numerical estimates we take $E_{\\gamma,0} = m_A = 10^{-24}\\ \\Mp$, such that\n\\begin{align}\n\t\\frac{937\\alpha^4 E_{\\gamma,0}^3 m_A^3}{10125\\pi m^8} \\frac{1}{8\\pi^2 H_0} \\simeq 10^{78}\\,.\n\\end{align}\nThe probability of the photon travelling without interacting with the environment is given by $P(z) = e^{- {\\cal T}(z)}$. For $z = 3400$, one gets ${\\cal T} \\sim 10^{-20}$, meaning that basically $P = 1$, and so there is no decoherence due to the interaction between the photon in some quantum state and the photons produced by the decay of expansion-generated ALPs. One could entertain the idea of going further into the past (higher redshift) in order to obtain non-trivial probabilities (even though the single-fluid approximation would break in the realistic setup). 
However, even for redshifts as high as $10^{20}$, the optical depth is just around $10^{-13}$, so that interactions remain highly unlikely. One could also argue that different input parameters could change this conclusion; however, smaller masses only lead to less efficient interactions and a slower decay, effectively increasing the mean free path. \n\nLet us emphasize that we have studied the potential interactions with particles that have been produced directly or indirectly due to the dynamics of the expansion of the Universe. In this sense, one could also ask if there can be interactions with a primordial population of ALPs (or their offspring). Such interactions can be potentially more important than the ones we have considered; however, it has been found that for realistic values of the parameters the growth of the photon field in particular is strongly suppressed \\cite{Garretson1992, Arza2020}, and thus by the time of decoupling this scenario should not be considered a source of decoherence. \n\nAn interesting thing to note is that the strength of the interaction, which we have considered in this work, has recently been constrained from the observation of the birefringence angle from the CMB data \\cite{Minami:2020odp}. It is also well-known that photons travelling over large distances, and interacting with magnetic fields, can lead to the production of ALPs (see, for instance, \\cite{DeAngelis:2008sk}). Conversions between photons and ALPs, in the presence of primordial magnetic fields, can also leave observable signatures in the CMB \\cite{Mirizzi:2009nq}, which, together with other cosmological considerations, has been used to constrain a considerable region of the parameter space \\cite{Irastorza:2018dyq}. In the future, we plan to combine the estimate coming from polarization data, and the requirement that ALPs from the early universe do not decohere, to find new probes for the so-called cosmological axion background \\cite{Dror:2021nyr}.\n\n\n\n\\section{Decoherence through the cosmological medium}\n\nIn this section we will look at the potential sources of decoherence of a photon in some quantum state due to the interaction with other particles in the cosmological medium. Unlike for the estimates in the previous section, we know from observations the number density of the other species, with values that make interactions more likely. We already had a first glance at such interactions, like photon-photon scattering with CMB radiation. \n\n\\subsection{Abundance of particles}\\label{abpar}\n\nFirst, we shall compute the number density of photons. This is given by\n\\begin{align}\n& n_{\\gamma}=\\frac{8 \\pi}{c^{3}} \\int_{0}^{\\infty}\\left(\\frac{k T}{h}\\right)^{3} \\frac{x^{2} d x}{e^{x}-1} \\nonumber \\\\\n& \\implies n_\\gamma = 4.11 \\times 10^8 \\, (1+z)^3 \n\\, m^{-3}\\,,\n\\end{align}\nwhere the temperature of the CMB is $T_0 = 2.72548\\pm 0.00057$K. Other sources give far fewer photons. \n\nNext, we look at the abundance of baryons. The baryon-to-photon ratio is \n\\begin{equation}\n\\eta = \\frac{n_{\\rm b}}{n_\\gamma} = 2.75 \\times 10^{-8} \\Omega_b h^2\\,.\n\\end{equation}\nWith Planck's (2018) value of $\\Omega_{\\rm b} h^2 = 0.02237 \\pm 0.00015$ \\cite{Planck:2018vyg}, this gives an average baryon density today (if fully ionized) of\n\\begin{equation}\nn_{\\rm b,0} = 0.2526 \\, m^{-3}.\n\\end{equation}\nPrimordial nucleosynthesis and the CMB tell us that the Helium-4 mass fraction is about $Y_{\\rm P} = 0.246$. 
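For convenience, the short Python snippet below chains together the numbers quoted in this subsection (together with the helium bookkeeping of the next paragraph) and reproduces the baryon and proton densities used later; it is only a numerical cross-check:
\\begin{verbatim}
# Baryon and proton number densities today (values quoted in the text).
n_gamma0 = 4.11e8                      # CMB photons per m^3 at z = 0
omega_b_h2 = 0.02237                   # Planck 2018
eta = 2.75e-8 * omega_b_h2             # baryon-to-photon ratio ~ 6.2e-10
n_b0 = eta * n_gamma0                  # ~ 0.253 baryons per m^3
Y_p = 0.246                            # Helium-4 mass fraction
n_p_over_n_He = 4 * (1 - Y_p) / Y_p    # ~ 12.26
n_p0 = n_b0 / (1 + 4 / n_p_over_n_He)  # ~ 0.190 protons per m^3
print(n_b0, n_p_over_n_He, n_p0)
\\end{verbatim}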
To a good approximation, all the mass is in protons and Helium --- everything else is negligible in terms of number density.\n\nThe number density of Helium is given by $Y_{\\rm P} = 4n_{\\rm He}\/(4n_{\\rm He}+n_{\\rm p})$. With $Y_{\\rm P}=0.246$, $n_{\\rm p}\/n_{\\rm He} = 12.26$. This means that the fraction of baryonic nuclei that is Helium-4 is 0.0754. We also have $n_{\\rm b} = n_{\\rm p} + 4 n_{\\rm He} = n_{\\rm p}(1+4\/12.26) = 1.33 n_{\\rm p}$.\n\nNext, the abundance of protons is related to that of baryons by\n\\begin{equation}\nn_{\\rm p,0} = \\frac{n_{\\rm b}}{1.33} \\implies n_{\\rm p} = 0.190 \\,(1+z)^3\\, m^{-3}\\,, z0$. Let us call them Algorithms 1 and 2 as they are in \\cite{KS}. Algorithm 2 is simpler than Algorithm 1 and has been intensely studied: see for example Aronson, Frieze and Pittel \\cite{AFP}, Bohman and Frieze \\cite{BF}, Balister and Gerke \\cite{BG} or Bordenave and Lelarge \\cite{BL}. In particular, \\cite{AFP} together with Frieze and Pittel \\cite{FP} shows that w.h.p. Algorithm 2 finds a matching that is within $\\tilde{\\Theta}(n^{1\/5})$ of the optimum, when applied to $G_{n,m}$. Subsequently, Chebolu, Frieze and Melsted \\cite{CFP} showed how to use Algorithm 2 as a basis for a linear expected time algorithm, when $c$ is sufficiently large.\n\nAlgorithm 2 proceeds as follows (a formal definition of Algorithm 1 is given in the next section). While there are isolated vertices it deletes them. After which, while there are vertices of degree one in the graph, it chooses one at random and adds the edge incident with it to the matching and deletes the endpoints of the edge. Otherwise, if the current graph has minimum degree at least two, then it adds a random edge to the matching and deletes the endpoints of the edge. \n\nIn the same paper Karp and Sipser proposed another algorithm for finding a matching that also runs in linear time. This was Algorithm 1. The algorithm sequentially reduces the graph until it reaches the empty graph. Then it unwinds some of the actions that it has taken and grows a matching which is then output. Even though it was shown empirically to outperform Algorithm 2, it has not been rigorously analyzed. In this paper we analyze Algorithm 2 in the special case where the graph is random with a fixed degree sequence $3\\leq d(i)\\leq 4$ for $i=1,2,\\ldots,n$. We prove the following:\n\\begin{thm}\\label{main}\nLet $G$ be a random graph with degree sequence $3\\leq d(i)\\leq 4$ for $i=1,2,\\ldots,n$. Then \n\\begin{enumerate}[(a)]\n\\item Algorithm 1 finds a matching of size $n\/2-O(\\log n)$, w.h.p.\n\\item Algorithm 1 can be modified to find a (near) perfect matching in $O(n)$ time w.h.p. and in expectation.\n\\end{enumerate}\n\\end{thm}\nA (near) perfect matching is one of size $\\rdown{n\/2}$. Note that in the case of cubic graphs, it is known that they have (near) perfect matchings w.h.p., see Bollob\\'as \\cite{Bol}. Note also that it was shown by Frieze, Radcliffe and Suen \\cite{FRS} that w.h.p. Algorithm 2 finds a matching of size $n\/2-\\tilde{\\Theta}(n^{1\/5})$. \\footnote{Recently, the junior author has extended Theorem \\ref{main} to random $r$-regular graphs for all $3\\leq r=O(1)$.}\n\\section{The Algorithm}\nThe algorithm that is given in \\cite{KS} can be split into two parts. The first part sequentially reduces the graph until it reaches the empty graph. Then the second part reverses part of this reduction and grows a matching which is then output. 
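Before giving the reduction steps of Algorithm 1 it may help to keep the simpler Algorithm 2 in front of us. The following sketch is only an illustration of the verbal description above, not the implementation analyzed in \\cite{KS} or here; it assumes a simple graph given as a dictionary \\verb|adj| mapping each (sortable) vertex label to its set of neighbours.
\\begin{verbatim}
import random

def karp_sipser_2(adj):
    """Greedy matching heuristic (Algorithm 2 as described above)."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    matching = []

    def delete(v):                                # remove v and incident edges
        for u in adj.pop(v, set()):
            adj[u].discard(v)

    while adj:
        for v in [v for v, ns in adj.items() if not ns]:
            del adj[v]                            # delete isolated vertices
        if not adj:
            break
        pendants = [v for v, ns in adj.items() if len(ns) == 1]
        if pendants:                              # a vertex of degree one
            v = random.choice(pendants)
            u = next(iter(adj[v]))
        else:                                     # minimum degree at least two
            edges = [(a, b) for a in adj for b in adj[a] if a < b]
            v, u = random.choice(edges)           # a random edge
        matching.append((v, u))                   # add it to the matching
        delete(v)
        delete(u)
    return matching
\\end{verbatim}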
\n\nTo reduce the graph, \n\\begin{enumerate}[(1)]\n\\item First, while there are vertices of degree 0 or degree 1 the algorithm removes them along with any edge incident to them. The edges removed at this stage will be part of the output matching. \n\\item Second, while there are vertices of degree 2 the algorithm contracts them along with their two neighbors. That is the induced path $(x,y,z)$ is replaced by a single contracted vertex $y_c$ whose neighbors are those of $x,z$ other than $y$. The description in \\cite{KS} does not explicitly say what to do with loops or multiple edges created by this process. In any case, such creations are very rare. We say a little more on this in Section \\ref{details}.\n\nIn the unwinding, if we have so far constructed a matching containing an edge $\\set{y_c,\\xi}$ incident with $y_c$ and $\\xi$ is a neighbor of $x$ then in our matching we replace this edge by $\\set{x,\\xi}$ and $\\set{y,z}$. If there is no matching edge so far chosen incident with $y_c$ then we add an arbitrary one of $\\set{x,y}$ or $\\set{y,z}$ to our matching.\n\\item Finally if the graph has minimum degree 3 then a random vertex is chosen among those of maximum degree and then a random edge incident to that vertex is deleted. These edges will not be used in the unwinding.\n\\end{enumerate} \n\\subsection{Idea of proof:} No mistakes are made while handling vertices of degree 0,1 or 2. Each mistake decreases the size of the final matching produced by one from the maximum size. We will show that mistakes occur only at parts of the graph that have become denser than is likely. \n\nWe show that w.h.p. the maximum degree remains $O(\\log^{2}\\nu)$ where $\\nu$ is the number of vertices remaining and so as long as $\\nu\\log n$, say, then w.h.p. there will be no dense subgraphs and the algorithm will not make any mistakes. This explains the $O(\\log n)$ error term. Finally, we assert that removing an edge incident to a vertex of a maximum degree will help to control the maximum degree, explaining this choice of edge to delete.\n\n\\subsection{Details}\\label{details}\nThe precise algorithm that we analyze is called {\\sc reduce-construct} The algorithm description given in \\cite{KS} is not explicit in how to deal with loops and multiple edges, as they arise. We will remove loops immediately, but keep the multiple edges until removed by other operations. \n\nWe assume that our input (multi-)graph $G = G([n],E)$ has degree sequence $\\bd$ and is generated by the configuration model of Bollob\\'as \\cite{Bol}. Let $W=[2\\nu]$, $2\\nu=\\sum_{i=1}^n d(i)$, be our set of {\\em configuration points} and let $\\Phi$ be the set of {\\em configurations} i.e. functions $\\phi:W \\mapsto [n]$ that such that $|\\f^{-1}(i)|=d(i)$ for every $i \\in [n]$. Given $\\phi \\in \\Phi$ we define the graph $G_\\phi=([n],E_\\phi)$ where $E_\\phi=\\{\\{\\phi(2j-1),\\phi(2j)\\}: j\\in [\\nu] \\}$. Choosing a function $\\phi \\in \\Phi$ uniformly at random yields a random (multi-)graph $G_\\phi$ with degree sequence $\\bd$. \n\nIt is known that conditional on $G_\\f$ being simple, i.e. having no loops or multiple edges, it is equally likely to be any graph that has degree sequence $\\bd$. Also, if the maximum degree is $O(1)$ then the probability that $G_\\f$ is simple is bounded below by a positive quantity that is independent of $n$. 
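Sampling from the configuration model is immediate: one only has to choose a uniformly random pairing of the $2\\nu$ configuration points. The sketch below (integer vertex labels assumed; loops and multiple edges are kept, exactly as in the model) is given purely for illustration.
\\begin{verbatim}
import random

def configuration_multigraph(degrees):
    """Sample G_phi: pair the configuration points uniformly at random.

    degrees: list with d(i) for i = 0,...,n-1 (the sum must be even).
    Returns a list of edges; loops and multiple edges may occur."""
    points = [v for v, d in enumerate(degrees) for _ in range(d)]
    random.shuffle(points)                        # a uniform random pairing
    return [(points[2*j], points[2*j + 1]) for j in range(len(points) // 2)]

# Example: a degree sequence with 3 <= d(i) <= 4
n = 1000
deg = [random.choice([3, 4]) for _ in range(n)]
if sum(deg) % 2:                                  # fix the parity if needed
    deg[0] = 7 - deg[0]                           # swap a 3 for a 4 (or back)
edges = configuration_multigraph(deg)
\\end{verbatim}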
Thus results on this model can be translated immediately to random simple graphs.\n\nWe split the {\\sc reduce-construct} Algorithm into the {\\sc reduce} and {\\sc construct} algorithms which we present separately.\n\n\\textbf{Algorithm} \\textsc{Reduce}:\n\nThe input $G_0=G_\\f$ where we condition on there being no loops.\\\\ \n$i=\\hat{\\tau}=0$.\n\\\\ \\textbf{While} $G_i=(V_i,E_i) \\neq (\\emptyset,\\emptyset)$ do: \n\\begin{itemize}\n\\item[]\\textbf{If} $\\delta(G_i)=0$: Perform a {\\bf vertex-0 removal}: choose a vertex of degree 0 and remove it from $V_i$.\n\\item[]\\textbf{Else if} $\\delta(G_i)=1$: Perform a {\\bf vertex-1 removal}: choose a random vertex $v$ of degree 1 and remove it along with its neighbor $w$ and any edge incident to $w$. \n\\item[]\\textbf{Else if} $\\delta(G_i)=2$: Perform a {\\bf contraction}: choose a random vertex $v$ of degree 2. Then replace $\\{v\\} \\cup N(v)$ ($v$ and its neighbors $N(v)$) by a single vertex $v_c$. For $u\\in V \\setminus (\\{v\\} \\cup N(v))$, $u$ is joined to $v_c$ by as many edges as there are in $G_i$ from $u$ to $\\{v\\} \\cup N(v)$. Remove any loops created.\n\\item[]\\textbf{Else if } $\\delta(G_i)\\geq 3$: Perform a {\\bf max-edge removal}: choose a random vertex of maximum degree and remove a random edge incident with it.\n\\\\ \\textbf{End if}\n\\item[]\\textbf{If} the last action was a max-edge removal, say the removal of edge $\\{u,v\\}$ and in the current graph we have $d(u)=2$ and $u$ is joined to a single vertex $w$ by a pair of parallel edges then perform an {\\bf auto correction contraction}: contract $u,v$ and $w$ into a single vertex. Remove any loops created.\n\\\\ \\textbf{End If}\n\\item[] Set $i=i+1$ and let $G_i$ be the current graph.\n\\end{itemize}\n\\textbf{End While}\n\\\\Set $\\hat{\\tau}=i$.\n\nObserve that we only reveal edges (pairs of the form $(\\phi(2j-1),\\phi(2j)): j\\in [\\nu]$) of $G_\\phi$ as the need arises in the algorithm. Moreover the algorithm removes any edges that are revealed. Thus if we let $\\bd(i)$ be the degree sequence of $G_i$ then, given $\\bd(i)$ and the actions performed by {\\sc reduce} until it generates $G_i$ we have that $G_i$ is uniformly distributed among all configurations with degree sequence $\\bd(i)$ and no loops.\n\nCall a contraction that is performed by {\\sc reduce} and involves only 2 vertices \\emph{bad} i.e. one where we contract $u,v$ to a single vertex given that $G$ contains a parallel pair of the edge $\\set{u,v}$ and $u$ has degree 2. Otherwise call it \\emph{good}. Observe that a bad contraction can potentially be a mistake while a good contraction is never a mistake. By introducing the auto correction contraction we replace the bad contraction of the vertex set $\\{u,w\\}$, as presented in the description of {\\sc reduce}, with the good contraction of the vertex set $\\{v,u,w\\}$. Note that we do not claim that all bad contractions can be dealt with in this way. We only show later that other instance of bad contractions are very unlikely.\n\nWe now describe how we unwind the operations of {\\sc reduce} to provide us with a matching.\n\n\\textbf{Algorithm} {\\sc construct}:\n\n\\textbf{Input:} $G_0,G_1,...,G_{\\hat{\\tau}}$ - \nthe graph sequence produced by {\\sc reduce},\nan integer $j\\in \\{0,1,...,\\hat{\\tau}\\}$ and a matching $M_{j}$ of $G_j$. 
(We allow the possibility of stopping {\\sc reduce} before it has finished.\nIf we do so when $|V(G_j)|=\\Theta(n^{2\/3})$ then, given that $G_j$ has a perfect matching w.h.p., we can use the $O(|E||V|^{1\/2})$ algorithm of \\cite{MV} applied to $G_j$ to find a perfect matching $M_j$ of $G_j$ in $O(n)$ time. Thereafter we can use {\\sc construct} to extend $M_j$ to a matching of $G_0$.)\n\\\\ \\textbf{For $i=1$ to $j$ do}: \n\\begin{itemize}\n\\item[]\\textbf{If} $\\delta(G_{j-i})=0$: Set $M_{j-i}=M_{j-i+1}$\n\\item[]\\textbf{Else if} $\\delta(G_{j-i})=1$: Let $v$ be the vertex of degree 1 chosen at the $(j-i)th$ step of {\\sc reduce} and let $e$ be the edge that is incident to $v$ in $G_{j-i}$. Then, \nSet $M_{j-i}=M_{j-i+1}\\cup\\{e\\}$.\n\\item[]\\textbf{Else if} $\\delta(G_{j-i})=2$: Let $v$ be the vertex of degree 2 selected in $G_{j-i}$. If $|N(v)|=1$ i.e. $v$ is joined to a single vertex by a double edge in $G_{j-i}$, set $M_{j-i}=M_{j-i+1}$. Else let $N(v)=\\{u,w\\}$ and $v_c$ be the new vertex resulting from the contraction of $\\{v,u,w\\}$. If $v_c$ is not covered by $M_{j-i+1}$ then set $M_{j-i}=M_{j-i+1}\\cup\\{v,u\\}$. Otherwise assume that $\\{v_c,z\\}\\in M_{j-i+1}$ for some $z\\in V(G_{j-i})$. Without loss of generality assume that in $G_{j-i}$, $z$ is connected to $u$. Set\n$M_{j-i}=(M_{j-i+1}\\cup \\{\\{v,w\\},\\{u,z\\}\\})\\setminus \\{ v_c,z\\}$.\n\\item[]\\textbf{Else if }: $\\delta(G_{j-i})\\geq 3$: Set $M_{j-i}=M_{j-i+1}$\n\\end{itemize}\n\\textbf{End For}\n\nFor a graph $G$ and $j\\in \\{0,1,...,\\hat{\\tau}\\}$ denote by $R_0(G,j)$ and $R_{2b}(G,j)$ the number of times that {\\sc reduce} has performed a vertex-0 removal and a bad contraction respectively until it generates $G_j$. For a graph $G$ and a matching $M$ denote by $\\kappa(G,M)$ the number of vertices that are not covered by $M$. The following Lemma determines the quality of the output of the {\\sc reduce-construct} algorithm.\n\\begin{lem}\\label{correction}\nLet $G$ be a graph and $M$ be the output of the Reduce-Backtrack algorithm applied to $G$. Then, for $j\\geq 0$,\n\\beq{true}{\n\\kappa(G,M)=R_0(G,j)+R_{2b}(G,j)+\\kappa(G_j,M_j).\n}\n\\end{lem}\n\\begin{proof}\nLet $G=G_0,G_1,...,G_{\\hat{\\tau}}$ be the sequence of graphs produced by {\\sc reduce} and let $M_j,M_{j-1},...,M_0=M$ be the sequence of matchings produced by {\\sc construct}. Let $R_0(G,j,i)$ and $R_{2b}(G,j,i)$ be the number of vertex-0 removals and bad contractions performed by {\\sc reduce} going from $G_{j-i}$ to $G_j$. We will prove that for every $0\\leq i\\leq j$, \n\\beq{troo}{\n\\kappa(G_{j-i},M_{j-i})=R_0(G,j,i)+R_{2b}(G,j,i)+\\kappa(G_j,M_j).\n} \nTaking $i=j$ yields the desired result.\n\nFor $i=0$, equation \\eqref{true} holds as $R_0(G,j,0)=R_{2b}(G,j,0)=0$. Assume inductively that \\eqref{true} holds for $i=k-1$ where $k$ satisfies $0\\ell).$$\n{\\bf Hyperactions of Interest}\\\\\nFor the analysis of {\\sc reduce} we consider 7 distinct hyperactions (sequences of actions) which we call hyperactions of Type 1,2,3,4,5,33 and 34 respectively. In the case that the maximum degree is larger than 3 we consider the following hyperactions: we have put some diagrams of these hyperactions at the end of the paper.\n\\begin{itemize}\n\\item[]\\textbf{ Type 1}: A single max-edge removal,\n\\item[]\\textbf{ Type 2}: A max edge-removal followed by an auto correction contraction.\n\\item[]\\textbf{ Type 3}: A single max-edge removal followed by a good contraction. 
\n\\item[]\\textbf{ Type 4}: A single max-edge removal followed by 2 good contractions.\nIn this case we add the restriction that there are exactly 6 distinct vertices $v,u,x_1,x_2,w_1,w_2$ involved in this hyperaction and they satisfy the following:\n(i) $v$ is a vertex of maximum degree, it is adjacent to $u$ and $\\{u,v\\}$ is removed during the max-edge removal, (ii) $d(u)=d(x_1)=d(x_2)=3$, (iii) $N(u)=\\{v,x_1,x_2\\}$, $N(x_1)=\\{u,x_2,w_1\\}$ and $N(x_2)=\\{u,x_1,w_2\\}$. (Thus $\\set{u,x_1,x_2}$ form a triangle.) The two contractions have the same effect as contracting $\\{u,x_1,x_2,w_1,w_2\\}$ into a single vertex.\n\\end{itemize}\n In the case that the maximum degree equals 3 we also consider the following hyperactions:\n\\begin{itemize}\n\\item[] \\textbf{Type 5}: A max-edge removal followed by 2 good contractions that interact.\nIn this case the 5 vertices $u,v,x_1,x_2,z$ involved in the hyperaction satisfy the following:\n(i) $\\{u,v\\}$ is the edge removed by the max-edge removal, (ii) $N(v)=\\{u,x_1,x_2\\}$, $N(u)=\\{v,x_1,z\\}$, (so $\\set{u,v,x_1}$ form a triangle), (iii) $|(N(x_1) \\cup N(x_2) \\cup N(z)) \\setminus \\set{u,v,x_1,x_2,z}|\\geq 3$. This hyperaction has the same effect as contracting all of $\\set{u,v,x_1,x_2,z}$ into a single vertex.\n\\item[] \\textbf{Type 33}: A max-edge removal followed by 2 good contractions that do not interact. There are 6 distinct vertices involved $v,v_1,v_2,u,u_1,u_2$. During the max-edge removal $\\{u,v\\}$ is removed. Thereafter each of the 2 sets of vertices $\\{v,v_1,v_2\\}$ and $\\{u,u_1,u_2\\}$ is contracted to a single vertex.\n\\item[] \\textbf{Type 34}: A max-edge removal followed by 3 good contractions. There are 8 distinct vertices involved $v,v_1,v_2,v,u,u_1,u_2,w_1,w_2$. During the max-edge removal $\\{u,v\\}$ is removed. The conditions satisfied by $v,u,u_1,u_2,w_1,w_2$ and the actions that are performed on them are similar to the ones in a hyperaction of Type 4. The difference now is that $v$ has degree 3 before the hyperaction. In addition $\\{v,v_1,v_2\\}$ is contracted into a single vertex.\n\\end{itemize}\nWe divide Hyperactions of Type 3 into three classes. Assume that during a Hyperaction of Type 3 the set $\\{v,a,b\\}$ is contracted, $v$ is the contracted vertex and $v_c$ is the new vertex. We say that such a Hyperaction is of \\textbf{Type 3a} if $d(v_c)=d(a)+d(b)-2$, is of \\textbf{Type 3b} if $d(v_c)=d(a)+d(b)-4$ and is of \\textbf{Type 3c} if $d(v_c)j}(G):=\\sum_{h>j} p_h(G)$,\n\\vspace{-2mm}\n\\item $ ex_\\ell(G):=\\sum_{v \\in V(G)} [d(v)-\\ell]\\mathbb{I}(d(v)>\\ell)$, \n\\end{itemize}\nWe denote by $n_i,e_i, n_{j,i},p_{j,i},p_{>j,i}$ and $ex_{\\ell,i}$ the corresponding quantities in relation to $\\Gamma_i$. \n\\begin{lem}\\label{dence}\nLet $K$ be an arbitrary fixed positive integer. Let $\\bd$ be a degree sequence of length $n$ that satisfies $ex_\\ell(G)\\leq \\log^2 n$ for some $3\\leq \\ell=O(1)$. Let $G$ be a random graph with degree sequence $\\bd$ and no loops. Let $b \\in \\{0,1\\}$, then $\\mathbf{Pr}(\\cB_K(G,v,b))=o(n^{-0.9-b})$.\n\\end{lem}\n\\begin{proof}\nLet $G$ be a random graph with degree sequence $\\bd$. The fact that $ex_\\ell(G)\\leq \\log^2 n$ implies that $G$ has no loops with probability bounded below by a positive constant (see for example \\cite{FK}). Hence the condition of having no loops can be ignored in the proof that events have probability $o(1)$. 
Also it implies that $\\Delta=\\Delta(G)\\leq \\ell+ ex_\\ell(G) \\leq \\ell+\\log^2 n$.\n\nLet $2m=\\sum_{i=1}^n d(i)\\leq \\ell n+ex_{\\ell}(G) =\\Theta(n)$. Then for vertex $v$ and for $b=0,1$ the probability that $G$ spans a subgraph that covers $v$, spans $a\\leq K$ vertices and $a+b$ edges can be bounded above by \n\\begin{align*}\n&\\leq_O \\sum_{a=2}^{K} \\binom{n}{a-1} (\\Delta a)^{2(a+b)} \\frac{(2m-2(a+b))!}{2^{m-(a+b)}[m-(a+b)]!}\\times \\frac{2^{m}m!}{(2m)!}\\\\ \n&\\leq_O \\sum_{a=2}^{K} n^{a-1} \\Delta^{2(a+b)} \\frac{m(m-1)...(m-(a+b)+1)}{2m(2m-1)...(2m-2(a+b)+1)}\\\\ \n&\\leq_O \\sum_{a=2}^{K} n^{a-1} \\Delta^{2(a+b)} m^{-(a+b)}\\\\\n&=o(n^{-0.9-b}).\n\\end{align*}\n\\end{proof}\nWe will drop the subscript $K$ from $\\cB$. Taking $K=20$ will easily suffice for the rest of the proof. Thus $\\cB(G,v,b)=\\cB_{20}(G,v,b)$.\n\\subsection{Proof of Lemma \\ref{hyper}}\nLet $v$ be the vertex of maximum degree chosen by {\\sc reduce} and let $u$ be the vertex adjacent to $v$ such that $\\{u,v\\}$ is chosen for removal. We will show that if $\\cB(\\Gamma_i,v,1)$ does not occur then {\\sc reduce} performs one of the hyperactions given in Section \\ref{list}.\nAlso observe that if a Hyperaction of Type 2,3b,4,5 or 34 occurs then $\\cB(\\Gamma_i,v,0)$ occurs. Lemma \\ref{dence} states that $\\mathbf{Pr}(\\cB(\\Gamma_i,v,0))=o(|V(\\Gamma_i)|^{-0.9})$ thus proving the second part of Lemma \\ref{hyper}. Also note that if a Hyperaction of Type 3c, occurs, corresponding to a bad Hyperaction, then $\\cB(\\Gamma_i,v,1)$ occurs. Lemma \\ref{dence} states that $\\mathbf{Pr}(\\cB(\\Gamma_i,v,1))=o(|V(\\Gamma_i)|^{-1.9})$.\n\n{\\bf Case A: $d(v) \\geq 4$.}\\\\\nIf $d(u)\\geq 4$ then a hyperaction of Type 1 is performed. \nThus assume $d(u)=3$ and consider the cases where $|N(u)|=1,2,3$, (recall that we allow parallel edges but not self-loops). \n\n{\\textbf{Case A1: $|N(u)|=1$}}.\\\\\n$u$ is connected to $v$ by 3 parallel edges and so $\\cB(\\Gamma_i,v,1)$ occurs.\t\n\n{\\textbf{Case A2: $|N(u)|=2$}}.\\\\\nLet $N(u)=\\{v,u'\\}$ and $S=\\set{u,u',v}$. Let $T=(N(u')\\cup N(v) )\\setminus S$. If $|T|\\leq 2$ then we have $d(S)\\geq 10$ and either $S$ spans more than 3 edges or $S \\cup T$ spans at least 7 edges. In both cases $\\cB(\\Gamma_i,v,1)$ occurs. Assume then that $|T|\\geq 3$. Now exactly one of $\\set{u,u'}$, $\\set{u,v}$ is repeated, else $\\cB(\\Gamma_i,v,1)$ occurs. If $\\set{u,u'}$ is repeated then we perform an auto correction contraction resulting to a Hyperaction of Type 2 . If $\\set{u,v}$ is repeated then we contract the remaining path $(u',u,v)$. Hence we have performed a Hyperaction of Type 3b.\n\n{\\textbf{Case A3: $|N(u)|=3$.} }\\\\\nLet $N(u)=\\set{v,x_1,x_2}$ and $T=(N(x_1)\\cup N(x_2))\\setminus\\{u,x_1,x_2\\}$. \n\n{\\textbf{Sub-case A3a: $|T|\\leq 1$}. \\\\\n$\\{u,x_1,x_2\\} cup T$ spans at least $(d(u)-1+d(x_1)+d(x_2)+|T|)\/2 \\geq 4+|T|\/2$ edges and the event $\\cB(\\Gamma_i,v,1)$ occurs.\n\n{\\textbf{Sub-case A3b: $T=\\{w_1,w_2\\}$}}. \\\\\nIf $v$ is at distance less than 6 from $\\{u\\} \\cup N(u)$ in $\\Gamma\\setminus \\{ u,v\\}$ then $\\cB(\\Gamma_i,v,1)$ occurs. To see this consider the subgraph $H$ spanned by $\\{u,v,x_1,x_2,w_1,w_2,y\\}$ and the vertices on the shortest path $P$ from $v$ to $u$ in $\\Gamma_i\\setminus \\{ u,v\\}$. Here $y$ is the neighbor of $v$ on $P$. It must contain at least two distinct cycles. One contained in $\\set{u,x_1,x_2,w_1,w_2}$ and $u,v,P$. 
If there is no edge from $x_1$ to $x_2$ then $\\{v,u,x_1,x_2,w_1,w_2\\}$ spans at least 7 edges and so $\\cB(\\Gamma_i,v,1)$ occurs.\n\nThus we may assume that $N(x_1)=\\{u,x_2,w_1\\}$, $N(x_2)=\\{u,x_1,w_2\\}$ and $v\\notin \\{w_1,w_2\\}\\cup N(w_1) \\cup N(w_2)$. We may also assume that $\\set{w_1,w_2}$ is not an edge of $\\Gamma$, for otherwise $\\{u,x_1,x_2,w_1,w_2\\}$ contains two distinct cycles and $\\cB(G_i,v,1)$ occurs. The algorithm {\\sc Reduce} proceeds by contracting $u,x_1,x_2$ into a single vertex $x'$. $x'$ has degree 2 and the algorithm proceeds by performing a contraction of $x',w_1,w_2$ into a new vertex $w'$. Let $S=N\\{w_1,w_2\\}\\setminus \\{x_1,x_2\\}$. If $|S|\\leq 3$ then $\\cB(\\Gamma_i,u,1)$ occurs. To see this observe that $w_1,w_2$ must then have a common neighbor $w_3$ say. Consider the subgraph $H$ spanned by $\\{u,x_1,x_2,w_1,w_2,w_3\\}$. $H$ contains at least 7 edges and 6 vertices. If $|S|\\geq 4$ then the new vertex has degree 4 and the sequence of actions taken by {\\sc reduce} corresponds to a hyperaction of Type 4.\n\\vspace{3mm}\n\\\\{\\textbf{Sub-case A3c: $|T| \\geq 3$.}}\nAfter the removal of $\\{v,u\\}$ we contract $\\{u,x_1,x_2\\}$ into a single vertex of degree at least 3, hence a hyperaction of Type 3 is performed.\n\\vspace{3mm}\n\\\\\n{\\bf Case B: $d(v)=d(u)=3$.}\\\\\nLet $\\Gamma'=\\Gamma_i\\setminus \\{e\\}$ where $e=\\{v,u\\}$.\n\\vspace{3mm}\n\\\\{\\textbf{Case B1: In $\\Gamma'$, $u$ and $v$ are at distance at least 4.}}\\\\ \nIf $|N(N (u))|$ and $|N(N(v))| \\leq 3$ then $\\cB(\\Gamma_i,v,1)$ occurs. Thus we can assume that either $|N(N (u))|=4$ and\/or $|N(N(v))|=4$. If both $|N(N (u))|,|N(N(v))|=4$ then {\\em Reduce} will perform 2 good contractions and this amounts to a hyperaction of Type 33. Assume then that $|N(N (u))|=4$ and that $|N(N (v))|\\leq 3$. If $|N(N (v))|=3$ then again {\\em Reduce} will perform 2 good contractions amounting to a hyperaction of Type 33. If $|N(N (v))|=2$ and so $v$ is in a triangle then {\\em Reduce} will perform a hyperaction of Type 34. Finally, if $|N(N (v))|=1$ then $\\cB(\\Gamma_i,v,1)$ occurs.\n\n{\\textbf{Case B2: In $\\Gamma'$, $u$ and $v$ are at distance 3.}}\\\\\nIn $\\Gamma_i$ there is a cycle $C$ of length 4 containing $u,v$. If in $\\Gamma'$ we find that $|N(N (u))|\\leq 3$ or $ |N(N(v))| \\leq 3$ or $|N(u)\\cap N(N(v))|>1$ or $|N(v)\\cap N(N(u))|>1$ then $\\cB(\\Gamma_i,v,1)$ occurs. This is because the graph spanned by $\\{u,v\\} \\cup N(u)\\cup N(v) \\cup N(N(u) \\cup N(N(v))$ in $\\Gamma_i$ will contain a cycle distinct from $C$. Assume this is not the case. Then after the max-edge removal of $\\{u,v\\}$ we have a contraction of $\\{u\\} \\cup N(u)$ followed by a contraction of $\\{u\\} \\cup N(u)$. Observe that neither contraction reduces the size of $N(N(u))$ or $N(N(v))$. Thus {\\sc reduce} performs a hyperaction of Type 33.\n\\vspace{3mm}\n\\\\{\\textbf{Case B3: In $\\Gamma'$, $u$ and $v$ are at distance 2.}\\\\\nIn the case that $u,v$ have 2 common neighbors in $\\Gamma'$ we see that $\\cB(\\Gamma_i,v,1)$ occurs. Assume then that they have a single common neighbor $x_1$. Let $z,x_2$ be the other neighbors of $u,v$ respectively. Then either $\\cB(\\Gamma_i,v,1)$ occurs or {\\sc reduce} performs a hyperaction of Type 5.\n\\vspace{3mm}\n\\\\{\\textbf{Case B4: In $\\Gamma'$, $u$ and $v$ are at distance 1}.\\\\\nSo here we have that $\\set{u,v}$ is a double edge in $\\Gamma_i$. Let $x,y$ be the other neighbors of $u,v$ repectively in $\\Gamma$. 
Assuming that $\\cB(\\Gamma_i,v,1)$ does not occur, {\\sc reduce} performs a max-edge removal followed by a single good contraction and this will be equivalent to a hyperaction of Type 3, involving the contraction of one of $x,u,v$ or $u,v,y$.\n\\qed\n\\subsection{Proof of Lemma \\ref{drift}}\nThe inequality $ex_{4,i} \\leq \\log^2 n_i$ implies that $p_{j,i}\\leq \\log^2n_i\/2e_i =o(n_i^{-0.95})$ for $5\\leq j \\leq \\Delta$. It also implies that the maximum degree $\\Delta$ of $\\Gamma$ satisfies $\\Delta=O(\\log^2 n)$. \n\nIn the case that $\\cB(\\Gamma_i,v,1)$ occurs, i.e. with probability $O(n_i^{-1.9})$ (see Lemma \\ref{hyper}), we have\n$$|ex_{4,i}-ex_{4,i+1}|\\leq 2e_i \\leq 4n_i+ex_{4,i} \\leq 5n_i.$$ \nObserve that if a hyperaction of Type 2, 3b, 4, 5 or 34 takes place then $v$ lies on a subgraph with $a<12$ vertices and $a$ edges. Lemma \\ref{hyper} states that this occurs with probability $o(n^{-0.9})$. For all the above hyperactions we have $|ex_{4,i+1}-ex_{4,i}|\\leq 2$. This follows from the fact that performing a contraction can increase $ex_4$ by at most 2. This is because the initial vertices with degrees say $2,d_1,d_2$ contributed $\\max\\{ 0, (d_1-4)+(d_2-4) \\}$ to $ex_{4,i}$ while the new contracted vertex has degree $d_1+d_2-2$ and contributes $\\max\\{ 0, (d_1+d_2-2)-4 \\}$ to $ex_{4,i+1}$. Moreover observe that if a hyperaction of Type 5, 33 or 34 occurs then all the vertices involved have degree 3. If $\\cB(\\Gamma,v,1)$ does not occur then a hyperaction of Type 5 will increase $ex_{4,i}$ by 1 (since there will be one new vertex of degree 5). Hyperactions of Type 33 or Type 34 do not change $ex_4$. Thus it remains to examine the effects of a hyperaction of Type 1 or of Type 3a. \n\nIf $ex_{4,i}=0$ then a hyperaction of Type 1 does not increase $ex_{4,i}$ while a hyperaction of Type 3 could increase it by 2. \n\nIf $ex_{4,i}>0$ then the $i^{th}$ hyperaction starts with a max-edge removal. \\\\\n{\\bf Case 1:} If the smaller degree vertex involved is of degree larger than 3, then this results in a hyperaction of Type 1. This happens with probability $(1+o(1))(1-p_{3,i})$. Furthermore in this case $ex_{4,i}-2\\leq ex_{4,i+1}\\leq ex_{4,i}-1$.\n (The $(1+o(1))$ factor arises because of $O(1)$ degree changes during the hyperaction.)\\\\\n{\\bf Case 2:} If the smaller degree vertex $v$ involved is of degree 3 then a contraction is performed, and this occurs with probability $(1+o(1))p_{3,i}$. If the contraction results in a vertex of degree at least 3 then we have a hyperaction of Type 3a, and not of Type 1. Also this is the only way for a hyperaction of Type 3a to occur. Let the other two vertices involved in the contraction be $a,b$ and have degrees $d_a,d_b$ respectively. Now $d_a=d_b=3$ with probability $(1+o(1))p_{3,i}^2$, resulting in a new vertex that has degree at most 4. In this case, $ex_{4,i+1}-ex_{4,i}= -1$ (the $-1$ here originates from the max-edge removal). Else if $d_a=3,d_b=4$ then we have $-1\\leq ex_{4,i+1}-ex_{4,i}\\leq -1+1=0$, and this occurs with probability $(1+o(1))2p_{3,i}p_{4,i}$. Else if $d_a=d_b=4$ then we have $-1\\leq ex_{4,i+1}-ex_{4,i}\\leq -1+2= 1$ (this occurs with probability $p_{3,i}p_{4,i}^2$). Otherwise a vertex of degree at least 5 is involved and given our upper bound on $ex_{4,i}$, this happens with probability $o(1)$. If the event $\\cB(\\Gamma,v,1)$ does not occur then the new contracted vertex has degree $d_a+d_b-2$. 
Hence $-1\\leq ex_{4,i+1} - ex_{4,i} \\leq -1+2=1.$ \n\nThus in all cases $|ex_{4,i+1} - ex_{4,i}|\\leq 2$ and if $ex_{4,i}>0$ then\n\\begin{align*}\n\\ex[ex_{4,i+1}-ex_{4,i}| \\Gamma_i] &\\leq -(5n_i)\\cdot o(n_i^{-1.9})+ 2n_i^{-0.95} - (1-p_{3,i}) -p_{3,i}^3 + p_{3,i}p_{4,i}^2+o(1)\n\\\\&\\leq - (1-p_{3,i}) -p_{3,i}^3 +p_{3,i}(1-p_{3,i})^2+o(1)\n \\leq -\\frac13\n\\end{align*} \n(The final expression in $p_{3,i}$ is maximized at $p_{3,i}=1\/2$, for $p_{3,i}\\in [0,1]$.)\n\\qed\n\\subsection{Proof of Lemma \\ref{4inf}} \nWe start by proving the following Lemma:\n\\begin{lem}\\label{intervals}\nLet $\\Gamma_h$ be such that $ex_{4,h}=0$. Then with probability $1-o(n_h^{-1.8})$ there exists $1\\leq j\\leq 0.25 \\log^2n_h$ satisfying $ex_{4,h+j}=0$. Furthermore\n$ex_{4,h+i}\\leq \\log^2n_{h+i}$ for $i\\leq j$. \n\\end{lem}\n\\begin{proof}\nIf $ex_{4,h+1}=0$ then we are done. Otherwise Lemma \\ref{drift} implies that \n$ex_{4,h+1}\\in\\{1, 2\\}$ with probability $1-o(n_h^{-1.9})$.\n\nLet $\\cE_T$ be the event that for $j\\leq 0.25\\log^2 n_h$ {\\sc reduce} performs only hyperactions of Type 1,2,3,4,5,33 or 34. Such a hyperaction reduces the vertex set by at most 8. Lemma \\ref{hyper} implies that $\\cE_T$ occurs with probability $1-o(n^{-1.8}_h)$. Moreover if $\\cE_T$ occurs for $j<0.25\\log^2 n_h$ then $n_{h+j} \\geq n_h-8\\cdot 0.25 \\log^2 n_h$. In addition from Lemma \\ref{drift} we have that with probability $1-o(n^{-1.8})$, $|ex_{4,h+j}-ex_{4,h+j-1}|<2$ for $j<0.25 \\log^2 n$ hence $ex_{4,h+j} \\leq 2\\cdot 0.25 \\log^2 n_h \\leq \\log^2 n_{h+j}$. Finally conditioned on $\\cE_T$ since $ex_{4,h+1}=1$ or 2, the probability that there is no $2\\leq j\\leq 0.25\\log^2 n_h$ satisfying $ex_{4,h+j}=0$ is bounded by the probability that the sum of $0.25 \\log^2 n-2$ independent random variables with magnitude at most 2 and expected value smaller than $-1\/3$ (see Lemma \\ref{drift}) is positive. From Hoeffding's inequality \\cite{Hoeff} we have that the latter probability is bounded by $\\exp\\set{-\\frac{2(\\frac13\\log^2n_h-3)^2}{\\log^2 n_h}}=o(n^{-2}_h)$.\n\\end{proof}\nNow let $\\Gamma_0,\\Gamma_{i_1},...,\\Gamma_{i_\\ell}$ be the subsequence of $\\Gamma_{0},\\Gamma_{1},..,\\Gamma_{\\tau}$ that includes all the graphs that have $ex_4=0$ and at least $\\omega$ vertices. Then, since $\\Gamma_i$ has minimum degree 3 and $e_i$ is decreasing with respect to $i$, Lemma \\ref{intervals} implies that with probability \n$$1-\\sum_{i:n_i\\geq \\omega}o( n_i^{-1.8} )\\geq 1-\\sum_{i:n_i\\geq \\omega} 2e_i^{-1.8}\/3 \\geq 1-\\sum_{i=\\omega}^{\\infty} i^{-1.8}=1-o(\\omega^{-0.8})$$\nthe following holds: for $j<\\ell$ we have $n_{i_j} -n_{i_{j+1}} \\leq 8\\cdot 0.25\\log ^2 n_{i_j}=2\\log ^2 n_{i_j}$ and all the graphs $\\Gamma_i$ preceding $\\Gamma_{i_\\ell}$ in $\\Gamma_0,\\Gamma_1,...,\\Gamma_\\tau$ satisfy $ex_{4,i} \\leq \\log^2 n_i$. Suppose now that $n_{i_\\ell}> 2\\omega$. Then the above argument implies that w.h.p. there is $j>i_\\ell$ such that $ex_{4,j}=0$ and $n_j\\geq n_{i_\\ell}-2\\log^2n_{i_\\ell}\\geq \\omega$ and this contradicts the definition of $i_\\ell$. Thus, w.h.p., $\\omega\\leq n_{i_\\ell}\\leq 2\\omega$ and this completes the proof of Lemma \\ref{4inf}.\n\\qed\n\\section{Existence of a Perfect Matching}\\label{Proofs Matchings}\nWe devote this section to the proof of Lemma \\ref{34match}. 
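Before starting we note that the statement of the lemma lends itself to a quick empirical check. The sketch below is not part of the argument: it reuses the configuration-model sampler sketched earlier, collapses loops and multiple edges (which cannot decrease the maximum matching), and calls networkx's maximum-cardinality matching routine, with small sizes chosen only for speed. For degree sequences with $d(i)\\in\\{3,4\\}$ it should report a fraction close to 1, in line with the lemma.
\\begin{verbatim}
import random
import networkx as nx

def near_perfect_fraction(n=300, trials=10):
    """Fraction of sampled multigraphs whose maximum matching is (near) perfect."""
    hits = 0
    for _ in range(trials):
        deg = [random.choice([3, 4]) for _ in range(n)]
        if sum(deg) % 2:
            deg[0] = 7 - deg[0]                   # fix parity
        G = nx.Graph()                            # parallel edges are collapsed
        G.add_nodes_from(range(n))
        G.add_edges_from(configuration_multigraph(deg))   # sampler from above
        G.remove_edges_from(list(nx.selfloop_edges(G)))   # drop loops
        M = nx.max_weight_matching(G, maxcardinality=True)
        hits += (len(M) >= n // 2)
    return hits / trials

# print(near_perfect_fraction())                  # typically 1.0
\\end{verbatim}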
As discussed in the previous section it is enough to prove that given a degree sequence $\\bd=(d(1),...,d(n))$ consisting only of 3's and 4's, if we let $G$ be a random configuration (multi)-graph with degree sequence $\\bd$, then w.h.p. $G$ has a (near)-perfect matching (i.e we can lift the condition that $G$ has no loops). We will first assume that $n$ is even and verify Tutte's condition. That is for every $W\\subset V$ the number of odd components induced by $V \\setminus W$, $q(V\/W)$, is not larger that $|W|$. We split the verification of Tutte's condition into two lemmas. \n\\begin{lem}\\label{minimal}\nLet $W\\subset V$ be a set of minimum size that satisfies $q(V\/W)>|W|$. Then with probability $1-O(n^{-3\/4})$, $|W| > 10^{-5}n$. \n\\end{lem}\n\\begin{lem}\\label{maximal}\nLet $W\\subset V$ be a set of maximum size that satisfies $q(V\/W)>|W|$. Then with probability $1-O(n^{-3\/4})$, $|W| < 10^{-5}n$. \n\\end{lem}\nLemmas \\ref{minimal} and \\ref{maximal} together imply Tutte's condition as there exists no set with size that is both strictly larger and strictly smaller than $10^{-5}n$. In the proof of these lemmas we use the following estimates.\n\\begin{lem}\\label{estimates}\nThe number of distinct partitions of a set of size $2r$ into 2-element subsets, denoted by $\\phi(2r)$, satisfies $\\phi(2r)=\\Theta\\left(\\bfrac{2r}{e}^r\\right)$. Also for $\\ell|W|$ of minimum size and assume $2\\leq w=|W|\\leq 10^{-5}n$. We can rule out the case $w=1$ from the fact that with probability $1-O(1\/n)$, $G$ will be 3-connected, see e.g. the proof of Theorem 10.8 in \\cite{FK} . Let $C_z$ be a component spanned by $V\\setminus W$ of maximum size and let $r=|C_z|$. \n\n\\textbf{Case 1: $r=|C_z|\\leq 0.997n$.}} In this case we can partition $V\\setminus W$ into two parts $V_1,V_2$ such that (i) each $V_l,l=1,2$ is the union of components of $V\\setminus W$, (ii) $|V_1|\\geq |V_2|$, and (iii) $|V_2|\\geq (n-(r+w))\/2\\geq 10^{-3}n$. \n\nLet $d_2=d(V_2)$ and $d_W=d(W)$. Out of the $d_W$ endpoints in $W$ (i.e. configuration points that correspond to vertices in $W$), $\\ell\\leq d_W$ are matched with endpoints in $V_2$ and the rest with endpoints in $V_1$. \n\nFor fixed $i,w,d_W$ the probability that there are sets $V_1,V_2,W$ with $w=|W|, d(W)=d_W$ and $|V_2|=i$ satisfying $1\\leq w\\leq 10^{-5}n, 10^{-3}n\\leq i\\leq 0.5n$ and $d_W\\leq 4w\\leq 0.04i$, such that $V_1 \\times V_2$ spans no edges is bounded by \n\\begin{align*}\np_1 &\\leq \\sum_{\\ell=0}^{d_W}\n\\binom{n}{i} \\binom{n-i}{w} \\binom{d_W}{\\ell} \\frac{\\phi(d_2+\\ell)\\cdot \\phi(2m-d_2-\\ell) }{\\phi(2m)}\\\\ \n&\\leq_O \\sum_{\\ell=0}^{d_W}\n\\bfrac{en}{i}^i \\bfrac{en}{w}^w 2^{d_W} \\frac{\\bfrac{d_2+\\ell}{e}^{(d_2+\\ell)\/2} \n\\bfrac{2m-d_2-\\ell}{e}^{(2m-d_2-\\ell)\/2} }{\\bfrac{2m}{e}^m}\\\\ \n&\\leq\\sum_{\\ell=0}^{d_W}\n \\bfrac{en}{i}^i \\bfrac{100en}{i}^{i\/100} 2^{i\/25} \\bfrac{d_2+\\ell}{2m}^{(d_2+\\ell)\/2} \n\\bigg(1-\\frac{d_2+\\ell}{2m}\\bigg)^{(2m-d_2-\\ell)\/2} \\\\ \n& \\leq_O \\sum_{\\ell=0}^{d_W}\n \\bfrac{1600(en)^{101}}{i^{101}}^{i\/100} \\bfrac{d_2+\\ell}{2m}^{(d_2+\\ell)\/2} \\exp\\set{- \\frac{d_2+\\ell}{2}\\brac{1-\\frac{d_2+\\ell}{2m}}} \n\\end{align*}\nFor the third line we used the fact that $w\\leq i\/100$ and $d_W\\leq 4w \\leq i\/25$. \n\nLet $f(x)=x^xe^{-x(1-x)}$ and $L(x)=\\log f(x)$. Then $L''(x)=x^{-1}+2$ and so $L$ and hence $f$ is convex for $x>0$. 
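For completeness, the computation behind this convexity claim is the following routine check:
$$L(x)=\\log f(x)=x\\log x-x(1-x)=x\\log x-x+x^2,\\qquad L'(x)=\\log x+2x,\\qquad L''(x)=\\frac{1}{x}+2>0 \\qquad (x>0),$$
so $f=e^{L}$ is convex on $(0,\\infty)$ and therefore attains its maximum over any closed interval at an endpoint of that interval.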
Now $d_2+\\ell \\in J= [3i,4.04i]$ and since $\\bfrac{d_2+\\ell}{2m}^{(d_2+\\ell)\/2}\\exp\\set{- \\frac{d_2+\\ell}{2}\\brac{1-\\frac{d_2+\\ell}{2m}}}=f\\bfrac{d_2+\\ell}{2m}^m$ we see that its maxima are at the endpoints of $J$. In general $3i\\leq 3n\/2 \\leq m$. However when $d_2+\\ell=4.04i$ we have that \n\\beq{2m}{\n2m\\geq 4.04i+3(n-i-w)\\geq 4.04i+ 3(n-1.01i)= 3n+1.01i.\n}\n{\\bf Case 1a: $d_2+\\ell=3i$.}\\\\\nWe have $\\frac{d_2+\\ell}{2m} \\leq \\frac{3i}{3n}\\leq \\frac{1}{2}$ and $(d_2+\\ell)(1-\\frac{d_2+\\ell}{2m})\\geq 3i\/2$. Therefore,\n\\begin{align*}\np_1&\\leq_O w \\bfrac{1600(en)^{101}}{i^{101}}^{i\/100}\\bfrac{i}{n}^{3i\/2} e^{-3i\/4} \\\\\n&= w \\bigg[\\frac{1600e^{26}}{2^{49}} \\bfrac{2i}{n}^{49}\\bigg]^{i\/100} \\\\ \n&\\leq w \\bigg[ e^{-1\/2} \\bfrac{2i}{n}^{49}\\bigg]^{i\/100}\\\\\n&\\leq w e^{-i\/200}.\n\\end{align*}\n{\\bf Case 1b: $d_2+\\ell=4.04i$.}\\\\\nIt follows from \\eqref{2m} that $\\frac{d_2+\\ell}{2m} \\leq \\frac{4.04i}{3n+1.01i} \\leq 0.577$ where the second inequality uses $i\\leq n\/2$. It follows from this that $(d_2+\\ell)(1-\\frac{d_2+\\ell}{2m})\/2\\geq 0.85i$. Hence,\n\\begin{align*}\np_1&\\leq_O w \\bfrac{1600(en)^{101}}{i^{101}}^{i\/100} \\bfrac{4.04i}{3n+1.01i}^{2.02i} e^{-0.85i}\\\\\n&\\leq_O w \\bigg[1600e^{16} \\bigg(\\frac{n}{i}\\bigg)^{101} \\bigg( \\frac{4.04i}{3n}\\bigg)^{101} \\bigg(\\frac{4.04i}{3n+1.01i}\\bigg) ^{101}\\bigg]^{i\/100}\\\\ \n&\\leq_O w \\bigg[1600e^{16} \\bigg(\\frac{4.04}{3} \\cdot 0.577\\bigg)^{101}\\bigg]^{i\/100}\\\\\n& \\leq_O w e^{-i\/100}.\n\\end{align*}\nTherefore the probability that Case 1 is satisfied is bounded by a constant times\n\\begin{align*}\n& \\sum_{w=1}^{10^{-5}n} \\sum_{i=10^{-3}n}^{0.5n} w e^{-i\/200}=O(n^{-3\/4}).\n\\end{align*}\n\\vspace{3mm}\n\\\\ \\textbf{Case 2:} $r=|C_z|\\geq 0.997n$. \nLet $V_1=V(C_z)$, \n$V_2=V\\setminus (V_1\\cup W)$. First note that $V_2$ spans at least $w$ components. Therefore $|V_2|\\geq w$. To lower bound $e(V_2:W)$ we use the following Claim.\n\\vspace{3mm}\n\\\\ \\textbf{Claim 1} Every vertex in $W$ is adjacent to at least 3 distinct components in $V \\setminus W$, and hence to at least 2 vertices in $V_2$.\n\\vspace{3mm}\n\\\\ \\textbf{Proof of Claim 1:} Let $v \\in W$ be such that it is adjacent to $t\\in \\{0,1,2\\}$ components in $V\\setminus W$. Consider $W'=W\\setminus\\{v\\}$. Thus $|W'|=|W|-1$ . If $t=0$ then $q(V\\setminus W')=q(V\\setminus W)+1$. If $t=1$ then $q(V\\setminus W')\\geq q(V\\setminus W)-1$. If $t=2$ then if the both of the corresponding components have odd size then the new component will also have odd side, while if only one of them has odd size then the new one has even size. Finally if both have even size the new one has odd size. In all three cases the inequality $q(V\\setminus W')\\geq q(V\\setminus W)-1$ is satisfied. Therefore $q(V\\setminus W') \\geq q(V\\setminus W) -1 >|W|-1=|W'| $ contradicting the minimality of $W$.\n\\qed \n\\vspace{3mm}\n\\\\ From Claim 1 it follows that $W:V_2$ spans at least $2w$ edges. \nWe also have that $|V_2|\\leq n-r-w \\leq 0.003n$. 
For fixed $2\\leq w\\leq 10^{-5}n$, $3w\\leq d_W \\leq 4w$ and $w\\leq i$ the probability that there exist such sets $V_1,V_2,W$, $|V_2|=i$, $w=|W|$, $d(W)=d_W$ and $2w\\leq \\ell= e(V_2:W)\\leq 4w$ is bounded by \n\\begin{align*}\n&\\sum_{\\ell=2w}^{d_W} \n \\binom{n}{i} \\binom{n-i}{w} \\binom{d_W}{\\ell} \\frac{\\phi(d_2+\\ell)\\cdot \\phi(2m-d_2-\\ell) }{\\phi(2m)}\\\\\n& \\leq_O \\sum_{\\ell=2w}^{d_W} \\bfrac{en}{i}^i \\bfrac{en}{w}^w 2^{4w} \\frac{\\bfrac{d_2+\\ell}{2}^{(d_2+\\ell-2w)\/2} \\bfrac{2w}{e}^{w} \\bfrac{2m-d_2-\\ell}{e}^{(2m-d_2-\\ell)\/2} }{\\bfrac{2m}{e}^m}\\\\ \n&= \\sum_{\\ell=2w}^{d_W} \\bfrac{en}{i}^i \\bfrac{en}{w}^w 2^{4w} \\bfrac{2w}{2m}^{w} \\brac{\\frac{d_2+\\ell}{2m}\\cdot \\frac{e}2}^{(d_2+\\ell-2w)\/2} \n\\bigg(1-\\frac{d_2+\\ell}{2m}\\bigg)^{(2m-d_2-\\ell)\/2} \\\\ \n& \\leq_O w \\bfrac{en}{i}^i \\bfrac{16e}{3}^{w} \\bigg( \\frac{5i}{3n} \\cdot \\frac{e}{2} \\bigg)^{3i\/2}\\\\\n&\\leq_O w \\brac{e^2 \\bfrac{16e}{3}^{2w\/i} \\frac{5^3i}{3^3n}}^{i\/2}.\n\\end{align*} \nFor the second line we used the second inequality of Lemma \\ref{estimates}. For the fourth line we used that $2w\\leq \\ell $, $d_2+\\ell \\leq 4|V_2|+4w \\leq 0.01204n$ and so $\\brac{\\frac{d_2+\\ell}{2m}\\cdot\\frac{e}2}^{(d_2+\\ell-2w)\/2}$ is maximized when $d_2,\\ell$ are as small as possible, that is $d_2=3i,\\ell=2w$. Furthermore note that $d_2+\\ell-2w\\geq d_2\\geq 3i$ and $i\\geq q(V\\setminus W)-1\\geq w$. Therefore the probability that Case 2 is satisfied is bounded by a constant times\n\\begin{align*}\n&\\sum_{w=2}^{10^{-5}n} \\sum_{i=w}^{0.003n} \\brac{e^2\\bfrac{16e}{3}^{2w\/i} \\frac{5^3i}{3^3n}}^{i\/2}\\\\\n&\\leq_O \\sum_{w=2}^{10^{-5}n} \\sum_{i=w}^{2w}\\bfrac{C_1i}{n}^{i\/2}+\\sum_{w=2}^{10^{-5}n} \\sum_{i=2w}^{0.003n}\\bfrac{C_2i}{n}^{i\/2}\\\\\n\\noalign{where $C_1=16^25^3e^4\/3^5,C_2=16\\cdot 5^3 e^3\/3^4$,}\n&\\leq \\sum_{i=2}^{n^{1\/4}}i\\brac{\\bfrac{C_1}{n^{3\/4}}^{i\/2}+\\bfrac{C_2}{n^{3\/4}}^{i\/2}} +\\sum_{i=n^{1\/4}}^{2\\cdot 10^{-5}n}i\\bfrac{2C_1}{10^5}^{i\/2}+\\sum_{i=n^{1\/4}}^{0.003n}i\\bfrac{6C_2}{10^3}^{i\/2}\\\\\n&=O(n^{-3\/4}).\n\\end{align*} \nFinally, since $G$ has an even number of vertices, for $W=\\emptyset$ we have $|W|=q(V\\setminus W)=0$.\n\\qed\n\\subsection{Proof of Lemma \\ref{maximal}:}\nLet $W$ be a set satisfying $q(V \\setminus W)>|W|$ of maximum size and assume $w=|W|\\geq 10^{-5}n$.\n\\vspace{3mm}\n\\\\\n{\\textbf{Claim 2}} No component induced by $V\\setminus W$ is a tree with more than one vertex.\n\\vspace{3mm}\n\\\\ \\textbf{Proof of Claim 2:} Indeed assume that $C_i$ is such a component. If $|C_i|$ is even then\nlet $v$ be a leaf of $C_i$ and define $W'=W \\cup \\{v\\}$. Then $C_i \\setminus \\{v\\}$ is an odd component in $V \\setminus W'$ and $q(V \\setminus W')=q(V \\setminus W)+1>|W|+1=|W'|$ contradicting the maximality of $W$.\n\nThus assume that $|C_i|$ is odd. Let $L_1$ be the set of leaves of $C_i$ and $L_2$ be the neighbors of $L_1$. Set $W'=W \\cup |L_1|$. Then $|L_1| \\geq |L_2|$. Furthermore every vertex in $L_1$ is an odd component in $V \\setminus W'$ and in the case $|L_1|=|L_2|$ then $C_i \\setminus (L_1\\cup L_2)$ is also an odd component in $V \\setminus W'$. Therefore,\n\\begin{align*}\nq(V\/W') &=q(V\/W) -1 +|L_1|+\\mathbb{I}(|L_1|=|L_2|) \n\\\\&\\geq q(V\/W) +|L_2| +|L_1|-|L_2| +\\mathbb{I}(|L_1|=|L_2|)-1\n\\\\&> |W|+|L_2|=|W'|,\n\\end{align*} \ncontradicting the maximality of $W$. \\qed\n\\vspace{3mm}\n\\\\\nWe partition $V \\setminus W$ into three sets, $W_1,W_2$ and $W_3$, as follows. 
With the underlying graph being the one spanned by $V \\setminus W$, $W_1$ consists of the isolated vertices in $V \\setminus W$, $W_2$ consists of the vertices spanned by components that contain a cycle and have size $s\\leq \\frac{1}{10}\\log n$ and $W_3$ consists of the vertices that are spanned by a component of size at least $\\frac1{10}\\log n$. Finally let $W_4=W_2 \\cup W_3$. To lower bound $W_1$ we use the following claim.\n\n\\textbf{{{Claim 3:}}} W.h.p.\\@ $W_4$ spans at most $\\frac{11w}{\\log n}$ components in $V\\setminus W$.\n\n\\textbf{Proof of Claim 3:} First observe that the number of components spanned by $W_2$ is smaller than the number of cycles of size at most $\\frac1{10} \\log n$ in $G$, which we denote by $r$. \n\\begin{align*}\n\\mathbf{Pr}(r\\geq n^{0.3}) &\\leq n^{-0.3} \\sum_{q=1}^{0.1 \\log n} \\binom{n}{q} 4^q q! \\frac{\\phi(2q) \\phi(2m-2q)}{\\phi(2m)}\\\\\n& \\leq_O n^{-0.3} \\sum_{q=1}^{0.1 \\log n} \\bfrac{en}{q}^q 4^q \\bfrac{2q}{e}^q \\bfrac{e}{2m}^q\n\\\\& \\leq_O n^{-0.3} \\sum_{q=1}^{0.1 \\log n} \\bfrac{8e}{3}^q \n\\leq_O n^{-0.3} (\\log n) 8^{0.1 \\log n}=o(1). \n\\end{align*}\nHence w.h.p.\\@ $W_2$ spans at most $n^{0.3}$ components. Moreover every component spanned by $W_3$ has size at least $\\frac{1}{10}\\log n$. Therefore $W_4$ spans at most $n^{0.3}+\\frac{10w}{\\log n}= \\frac{(1+o(1))10w}{\\log n}$ components in $V\\setminus W$.\\qed\n\nSince $W_4$ spans at most $u=\\frac{11w}{\\log n}$ components in $V\\setminus W$ and no component is a tree it follows that the rest of the components consist of at least $q(V\\setminus W) -u > w-u$ isolated vertices that lie in $W_1$.\n\nFor convenience, we move $|W_1|-(w-u)$ vertices from $W_1$ to $W_4$. Therefore $|W_1|= w-u$.\nLet $k_1$ be the number of vertices of degree 4 in $W_1$ and $d=d(W)-d(W_1)$.\nThen $0\\leq d\\leq 4w-(3(w-u)+k_1)=w+3u-k_1$. For fixed $10^{-5}n\\leq w\\leq 0.5n$ the probability that there exist such sets $W,W_1,W_4$ is bounded by \n\n\\begin{align}\np_2&\\leq \\sum_{k_1=0}^{w-u} \\sum_{d=0}^{w+3u-k_1}\n\\binom{n}{2w} \\binom{2w}{w}\\binom{w}{u} \\binom{4w}{d} \\mathbf{Pr}(d(W)-d(W_1)=d) \\label{2}\\\\ \n&\\times (3(w-u)+k_1)! \\times \\frac{ [2m- [6(w-u)+2k_1]]!}{2^{m-[3(w-u)+k_1}[m-[3(w-u)+k_1]]!} \\times \\frac{2^m m!}{(2m)!}.\\label{3}\n\\end{align}\n\\textbf{Explanation:} We first choose the sets $W,W_1$ and $W_4$ of size $w,w-u$ and $n-2w+u$ respectively. This can be done in $\\binom{n}{2w} \\binom{2w}{w}\\binom{w}{u} \\binom{n-2w+u}{u}^{-1} $ ways, but we ignore the final factor. \n\nFrom the at most $4w$ copies of vertices in $W$ we choose a set $W''\\subset W$ be of size $d$. \nWe let $W'=W\/W''$. These are the copies of vertices that will be matched with those in $W_1$. \n\nIn the calculations that follow we let $a=w\/n\\geq 10^{-5}$. We also let $k_4$ be the number of vertices of degree 4 that lie in $W_4$. We first bound the binomial coefficients, found in the first line.\n\\begin{align}\n &\\binom{n}{2w} \\binom{2w}{w}\\binom{w}{u} \\binom{4w}{d} \n= \\binom{n}{2an} \\binom{2an}{an}\\binom{an}{u} \\binom{4an}{d} \\nonumber\\\\\n&\\leq 2^{o(n)} \\bfrac{1}{2a}^{2an} \\bfrac{1}{1-2a}^{(1-2a)n} 2^{2an} \\bfrac{4ean}{d}^{d} \\nonumber\\\\\n& =2^{o(n)}\\bfrac{1}{a}^{2an} \\bfrac{1}{1-2a}^{(1-2a)n} \\bfrac{4ean}{d}^{d}.\\label{f1}\n\\end{align}\nFor the second line we used that $u\\leq u_0$ which implies that $\\binom{an}{u}=2^{o(n)}$. Observe that \n\\begin{align}\\label{2m2}\n2m=6(w-u)+2k_1+d+3(n-2w+u)+k_4=3n+d+2k_1+k_4-3u.\n\\end{align}\nLet $m_0=d+2k_1+k_4-3u$. 
For the terms in line \\eqref{3} we have\n\\begin{align*}\n\\frac{(2m)!}{2^m m!}&=\\frac{(3n)!}{2^{1.5n}(1.5n)!} \n\\frac{\\prod_{i=1}^{m_0}(3n+i)}{2^{m_0\/2} \\prod_{i=1}^{m_0\/2}(1.5n+i)}\n\\geq_O\\bfrac{3n}{e}^{1.5n} \\prod_{i=1}^{m_0\/2}[3n+(2i-1)].\\\\\n& \\geq \\bfrac{3n}{e}^{1.5n} e^{-o(n)}(3n)^{-3u\/2} \\prod_{i=1}^{d\/2+k_1+k_4\/2}[3n+(2i-1)]\n\\end{align*}\nEquation \\eqref{2m2} implies that \n$$2m-[6(w-u)+2k_1]=3(1-2a)n+3u+k_4+d.$$ \nThus, \n\\begin{align*}\n& \\frac{ [2m- [6(w-u)+2k_1]]!}{2^{m-[3(w-u)+2k_1]}[m-[3(w-u)+k_1]]!}=\\frac{ [3(1-2a)n]!} {2^{1.5(1-2a)n}[1.5(1-2a)n]!} \\cdot \\frac{\\prod_{i=1}^{d}3(1-2a)n+i}{2^{d\/2}\\prod_{j=1}^{\\frac{d}{2}} 1.5(1-2a)n+j} \n\\\\&\\hspace{5mm} \\times \\frac{\\prod_{i=1}^{k_4}3(1-2a)n+d+i}{2^{k_4\/2}\\prod_{j=1}^{k_4\/2} 1.5(1-2a)n+d\/2 +j } \\cdot \\frac{\\prod_{i=1}^{3u}3(1-2a)n+d+k_4+i}{2^{3u\/2}\\prod_{j=1}^{\\frac{3u}{2}} 1.5(1-2a)n+d\/2+k_4\/2 +j}\n\\\\ &\\hspace{5mm}\\leq_O \\bfrac{3(1-2a)n}{e}^{1.5(1-2a)n} \\prod_{i=1}^{d\/2} [3(1-2a)n+(2i-1)]\n \\prod_{j=1}^{k_4\/2} [3(1-2a)n+d+(2j-1) ] \\cdot (2m)^{3u\/2}\n\\\\ &\\hspace{5mm} \\leq_O \\bfrac{3(1-2a)n}{e}^{1.5(1-2a)n} [3(1-2a)n+an\/2]^{d\/2} (2m)^{3u\/2} \\prod_{j=1}^{k_4\/2} [3(1-2a)n+d+(2j-1) ]\n\\end{align*}\nFor the last inequality we used the Arithmetic Mean-Geometric Mean inequality and the fact that $d\/2\\leq an\/2+ o(n)$, which follows from $d\\leq w+3u-k_1$. \n\nFor the first term of \\eqref{3} we have\n\\begin{align*}\n[3(w-u)+k_1]! \\leq \\frac{3w!}{ (3(w-u))^{3u}} \\prod_{i=1}^{k_1}(3(w-u)+i)\\leq \\bfrac{3an}{e}^{3an} \\frac{2^{o(n)}}{n^{3u}} \\prod_{i=1}^{k_1}(3(w-u)+i).\n\\end{align*}\nThus the expression in \\eqref{3} is bounded by\n\\begin{align}\n&2^{o(n)} \\bfrac{3an}{e}^{3an} \\frac{1}{n^{3u}} \\prod_{i=1}^{k_1}(3(w-u)+i) \\nonumber\\\\ \n&\\times \\bfrac{3(1-2a)n}{e}^{1.5(1-2a)n} [3(1-2a)n+an\/2]^{d\/2} (2m)^{3u\/2} \\prod_{j=1}^{k_4\/2} [3(1-2a)n+d+2j-1 ]\\nonumber\\\\ \n&\\times \\bigg[ \\bfrac{3n}{e}^{1.5n} (3n)^{-3u\/2} \\prod_{i=1}^{d\/2+k_1+k_4\/2}[3n+(2i-1)] \\bigg]^{-1}\\nonumber\\\\\n&=2^{o(n)} a^{3an} [(1-2a)n]^{1.5(1-2a)n} \\bfrac{6m}{n}^{3u\/2} \\prod_{i=1}^{d\/2}\\frac{ 3(1-2a)n+an\/2}{3n+(2i-1)} \\nonumber\\\\\n&\\times \n\\prod_{i=1}^{k_1}\\frac{ 3(w-u)+i}{3n+d+(2i-1)} \n\\prod_{i=1}^{k_4\/2} \\frac{ 3(1-2a)n+d+2i-1}{3n+d+2k_1+2i-1}\\nonumber\\\\\n&\\leq_O 2^{o(n)} a^{3an} [(1-2a)n]^{1.5(1-2a)n}\n\\prod_{i=1}^{d\/2}\\frac{ 3(1-2a)n+an\/2}{3n} \n\\prod_{i=1}^{k_1}\\frac{ 3(w-u)+i}{3n+d+(2i-1)} \\prod_{i=1}^{k_4\/2} 1 \\nonumber\\\\ \n&\\leq_O 2^{o(n)} a^{3an} [(1-2a)n]^{1.5(1-2a)n}[(1-2a)+a\/6]^{d\/2} 2^{-k_1} \\label{f2\n\\end{align}\nFinally we consider the term $\\mathbf{Pr}(d(W)-d(W_1)=d) $ and assume that $h$ vertices of degree 4 were chosen to be included in $W\\cup W_1$, so that $d=h+3u-2k_1$. Then, because there are $\\binom{h}{k_1}\\binom{2w-u-h}{(w-u)-k_1}$ out of $\\binom{2w}{w-u}$ ways to distribute the $k_1$ vertices of degree 4,\n\\begin{align*}\np_3&=\\mathbf{Pr}(d(W)-d(W_1)=d) =\\binom{h}{k_1}\\binom{2w-u-h}{(w-u)-k_1} \\bigg\/ \\binom{2w-u}{w-u}\\\\\n&\\leq \\binom{h}{k_1}\\binom{2w-u-h}{w-u}\\bigg\/\\binom{2w-u}{w-u} \\leq \\binom{h}{k_1} \\prod_{i=0}^{h-1} \\frac{w-i}{2w-i}\\\\\n&\\leq 2^{hH(k_1\/h) -h}=2^{k_1}2^{-k_1+h\\cdot H(k_1\/h)-h}.\n\\end{align*}\nHere $H(x)=-x\\log_2( x) -(1-x) \\log_2(1-x)$ is the entropy function. For fixed $d$ we have $h=d+2k_1+o(n)$. 
Thus \n$$p_3 \\leq 2^{o(n)+k_1+df(k_1\/d)},\\text{ where }f(x)= -x + (1+2x) H\\brac{\\frac{x}{1+2x}}-(1+2x).$$ \n$f(x)$ has a unique maximum at $x^*$, the solution to $8x(1+x)=(1+2x)^2$ and $f(x^*) \\leq -0.771$. Hence \n\\beq{f3}{\np_3\\leq 2^{-0.771d+k_1+o(n)}.\n} \nMultiplying the bounds in \\eqref{f1}, \\eqref{f2}, \\eqref{f3} together we have a bound\n\\begin{align*}\np_2 & \\leq 2^{o(n)-0.771d+k_1} \\bfrac{1}{a}^{2an} \\bfrac{1}{1-2a}^{(1-2a)n} \\bfrac{4ean}{d}^{d} \n\\\\ &\\times a^{3an} (1-2a)^{1.5(1-2a)n} \\bigg(1-2a+ \\frac{a}{6}\\bigg)^{d\/2} 2^{-k_1}\n\\\\ & = 2^{o(n)} \\bfrac{2^{1.229}ean}{d}^{d} a^{an} (1-2a)^{0.5(1-2a)n} \\bigg(1-\\frac{11a}{6}\\bigg)^{d\/2}\n\\end{align*}\nThus $p_2=o(1)$ when $d=o(n)$. Let $d=ban$ for some $0e$ then $\\bfrac{g(a)}{b}^b$ is maximized at $b=1$. Hence\n$$ p_2 \\leq \\bigg\\{ 2^{o(1)} \\ 2^{1.229} e \\brac{1-\\frac{11a}{6} }^{0.5} a (1-2a)^{0.5(1-2a)\/a} \\bigg\\}^{an} \\leq \\bfrac{19}{20}^{an}.$$\nThe last inequality is most easily verified numerically. Thus the probability that there exists a set $W$ satisfying $q(V \\setminus W)>|W|$ of size $w=|W|\\leq 10^{-5}n$\nis bounded by \n\\begin{align*}\n\\sum_{w=10^{-5}n}^{0.5n} \\bfrac{99}{100}^w =o(1). \n\\end{align*}\nThis only leaves the case of $n$ odd. The reader will notice that in none of the calculations above, did we use the fact that $n$ was even. The Tutte-Berge formula for the maximum size of a matching $\\nu(G)$ is\n$$\\nu(G)=\\min_{W\\subseteq V}\\frac12(|V|+|W|-q(V\\setminus W)).$$\nWe have shown that the above expression is at least $|V|\/2$ for $W\\neq\\emptyset$ and so the case of $n$ odd is handled by putting $W=\\emptyset$ and $q(W)=1$.\n\\section{Conclusions and open questions}\nThe paper of Karp and Sipser \\cite{KS} has been a springboard for research on matching algorithms in random graphs. Algorithm 1 of that paper has not been the subject of a great deal of analysis, mainly because of the way it disturbs the degree sequences of the graphs that it produces along the way. In this paper we have shown that if the original graph has small maximum degree then the maximum degree is controllable and the great efficiency of the algorithm can be verified.\n\nIt is natural to try to extend the analysis to random regular graphs with degree more than four and we are in the process of trying to overcome some technical problems. 
It would also be of interest to analyse the algorithm on $G_{n,p}$, as originally intended.\n\n\\section{Diagrams of Hyperactions of interest}\n{\\bf Type 2.}\n\n\\begin{center}\n\\pic{\n\\node at (0,0.2) {$w$};\n\\node at (1,0.2) {$u$};\n\\node at (2,0.2) {$v$};\n\\node at (3,1.2) {$x$};\n\\node at (3,0.2) {$y$};\n\\node at (3,-.8) {$z$};\n\\node at (-1,1.2) {$a$};\n\\node at (-1,-0.8) {$b$};\n\\draw (1,0) -- (2,0);\n\\draw (2,0) -- (3,1);\n\\draw (2,0) -- (3,-1);\n\\draw (2,0) -- (3,0);\n\\draw (0,0) -- (-1,1);\n\\draw (0,0) -- (-1,-1);\n\\draw (0,0) to [out=45,in=135] (1,0);\n\\draw (0,0) to [out=-45,in=-135] (1,0);\n\\draw [fill=black] (0,0) circle [radius=.05];\n\\draw [fill=black] (1,0) circle [radius=.05];\n\\draw [fill=black] (2,0) circle [radius=.05];\n\\draw [fill=black] (3,0) circle [radius=.05];\n\\draw [fill=black] (3,1) circle [radius=.05];\n\\draw [fill=black] (3,-1) circle [radius=.05];\n\\draw [fill=black] (-1,1) circle [radius=.05];\n\\draw [fill=black] (-1,-1) circle [radius=.05];\n\\draw [->] [ultra thick] (4,0) -- (5,0);\n\\draw (7,0) ellipse (.6 and .3);\n\\node at (7.05,0) {$wuv$};\n\\draw (7.6,0) -- (8.6,1);\n\\draw (7.6,0) -- (8.6,-1);\n\\draw (7.6,0) -- (8.6,0);\n\\draw (5.6,1) -- (6.5,0);\n\\draw (5.6,-1) -- (6.5,0);\n\\draw [fill=black] (8.6,1) circle [radius=.05];\n\\draw [fill=black] (8.6,-1) circle [radius=.05];\n\\draw [fill=black] (8.6,0) circle [radius=.05];\n\\draw [fill=black] (5.6,-1) circle [radius=.05];\n\\draw [fill=black] (5.6,1) circle [radius=.05];\n\\node at (5.6,1.2) {$a$};\n\\node at (5.6,-0.8) {$b$};\n\\node at (8.6,1.2) {$x$};\n\\node at (8.6,0.2) {$y$};\n\\node at (8.6,-.8) {$z$};\n\\node at (8.6,0.2) {$y$};\n\\node at (8.6,-.8) {$z$};\n}\n\\end{center}\n\n{\\bf Type 3.}\n\n\\begin{center}\n\\pic{\n\\node at (1,0.2) {$u$};\n\\node at (2,0.2) {$v$};\n\\node at (3,1.2) {$x$};\n\\node at (3,0.2) {$y$};\n\\node at (3,-0.8) {$z$};\n\\node at (0,1.2) {$a$};\n\\node at (0,-.8) {$b$};\n\\node at (-1,2.2) {$c$};\n\\node at (-1,1.2) {$d$};\n\\node at (-1,-.8) {$e$};\n\\node at (-1,-1.8) {$f$};\n\\draw (1,0) -- (2,0);\n\\draw (2,0) -- (3,1);\n\\draw (2,0) -- (3,-1);\n\\draw (2,0) -- (3,0);\n\\draw (0,1) -- (1,0);\n\\draw (0,1) -- (-1,2);\n\\draw (0,1) -- (-1,1);\n\\draw (0,-1) -- (-1,-1);\n\\draw (0,-1) -- (-1,-2);\n\\draw (0,-1) -- (1,0);\n\\draw [fill=black] (-1,2) circle [radius=.05];\n\\draw [fill=black] (-1,1) circle [radius=.05];\n\\draw [fill=black] (-1,-2) circle [radius=.05];\n\\draw [fill=black] (-1,-1) circle [radius=.05];\n\\draw [fill=black] (0,1) circle [radius=.05];\n\\draw [fill=black] (0,-1) circle [radius=.05];\n\\draw [fill=black] (1,0) circle [radius=.05];\n\\draw [fill=black] (2,0) circle [radius=.05];\n\\draw [fill=black] (3,0) circle [radius=.05];\n\\draw [fill=black] (3,1) circle [radius=.05];\n\\draw [fill=black] (3,-1) circle [radius=.05];\n\\draw [->] [ultra thick] (4,0) -- (5,0);\n\\draw (7.5,0) ellipse (.6 and .3);\n\\node at (7.55,0) {$avb$};\n\\draw (9.1,0) -- (10.1,1);\n\\draw (9.1,0) -- (10.1,-1);\n\\draw (9.1,0) -- (10.1,0);\n\\draw [fill=black] (9.1,0) circle [radius=.05];\n\\draw [fill=black] (10.1,1) circle [radius=.05];\n\\draw [fill=black] (10.1,-1) circle [radius=.05];\n\\draw [fill=black] (10.1,0) circle [radius=.05];\n\\node at (9.1,0.2) {$u$};\n\\node at (10.1,1.2) {$x$};\n\\node at (10.1,0.2) {$y$};\n\\node at (10.1,-.8) {$z$};\n\\node at (6,2.2) {$c$};\n\\node at (6,1.2) {$d$};\n\\node at (6,-.8) {$e$};\n\\node at (6,-1.8) {$f$};\n\\draw [fill=black] (6,2) circle [radius=.05];\n\\draw [fill=black] (6,1) 
circle [radius=.05];\n\\draw [fill=black] (6,-1) circle [radius=.05];\n\\draw [fill=black] (6,-2) circle [radius=.05];\n\\draw (6,2) -- (7,0);\n\\draw (6,1) -- (7,0);\n\\draw (6,-2) -- (7,0);\n\\draw (6,-1) -- (7,0);\n}\n\\end{center}\nWe allow the edge $\\set{a,b}$ to be a single edge in this construction. This gives us a Type 3b hyperaction.\n\n{\\bf Type 4.}\n\n\\begin{center}\n\\pic{\n\\node at (0,0.2) {$v$};\n\\draw [fill=black] (0,0) circle [radius=.05];\n\\node at (-1,1.2) {$a$};\n\\node at (-1,0.2) {$b$};\n\\node at (-1,-0.8) {$c$};\n\\draw [fill=black] (-1,1) circle [radius=.05];\n\\draw [fill=black] (-1,0) circle [radius=.05];\n\\draw [fill=black] (-1,-1) circle [radius=.05];\n\\draw (-1,1) -- (0,0);\n\\draw (-1,0) -- (0,0);\n\\draw (-1,-1) -- (0,0);\n\\draw [fill=black] (1,0) circle [radius=.05];\n\\node at (1,0.2) {$u$};\n\\draw (0,0) -- (1,0);\n\\node at (2,1.2) {$x_1$};\n\\node at (2,-1.2) {$x_2$};\n\\draw [fill=black] (2,1) circle [radius=.05];\n\\draw [fill=black] (2,-1) circle [radius=.05];\n\\draw (1,0) -- (2,1);\n\\draw (1,0) -- (2,-1);\n\\draw (2,-1) -- (2,1);\n\\node at (3,1.2) {$w_1$};\n\\node at (3,-1.2) {$w_2$};\n\\draw [fill=black] (3,1) circle [radius=.05];\n\\draw [fill=black] (3,-1) circle [radius=.05];\n\\draw (2,1) -- (3,1);\n\\draw (2,-1) -- (3,-1);\n\\node at (4,2.2) {$p$};\n\\node at (4,1.2) {$q$};\n\\node at (4,-0.8) {$r$};\n\\node at (4,-1.8) {$s$};\n\\draw [fill=black] (4,2) circle [radius=.05];\n\\draw [fill=black] (4,1) circle [radius=.05];\n\\draw [fill=black] (4,-1) circle [radius=.05];\n\\draw [fill=black] (4,-2) circle [radius=.05];\n\\draw (3,1) -- (4,2);\n\\draw (3,1) -- (4,1);\n\\draw (3,-1) -- (4,-2);\n\\draw (3,-1) -- (4,-1);\n\\draw [->] [ultra thick] (5,0) -- (6,0);\n\\node at (8,0.2) {$v$};\n\\draw [fill=black] (0,0) circle [radius=.05];\n\\node at (7,1.2) {$a$};\n\\node at (7,0.2) {$b$};\n\\node at (7,-0.8) {$c$};\n\\draw [fill=black] (7,1) circle [radius=.05];\n\\draw [fill=black] (7,0) circle [radius=.05];\n\\draw [fill=black] (7,-1) circle [radius=.05];\n\\draw [fill=black] (8,0) circle [radius=.05];\n\\draw (8,0) -- (7,1);\n\\draw (8,0) -- (7,0);\n\\draw (8,0) -- (7,-1);\n\\draw (10,0) ellipse (1.4 and .3);\n\\node at (10,0) {$u,x_1,x_2,w_1,w_2$};\n\\node at (13,2.2) {$p$};\n\\node at (13,1.2) {$q$};\n\\node at (13,-0.8) {$r$};\n\\node at (13,-1.8) {$s$};\n\\draw [fill=black] (13,2) circle [radius=.05];\n\\draw [fill=black] (13,1) circle [radius=.05];\n\\draw [fill=black] (13,-1) circle [radius=.05];\n\\draw [fill=black] (13,-2) circle [radius=.05];\n\\draw (11.3,0) -- (13,2);\n\\draw (11.3,0) -- (13,1);\n\\draw (11.3,0) -- (13,-1);\n\\draw (11.3,0) -- (13,-2);\n}\n\\end{center}\n\n{\\bf Type 5}.\n\n\\begin{center}\n\\pic{\n\\draw [fill=black] (0,1) circle [radius=.05];\n\\draw [fill=black] (0,-1) circle [radius=.05];\n\\node at (0,1.2) {$a$};\n\\node at (0,-.8) {$b$};\n\\draw [fill=black] (1,0) circle [radius=.05];\n\\node at (1,.2) {$z$};\n\\draw (0,1) -- (1,0);\n\\draw (0,-1) -- (1,0);\n\\draw [fill=black] (2,0) circle [radius=.05];\n\\draw (1,0) -- (2,0);\n\\node at (2,.2) {$u$};\n\\draw [fill=black] (3,0) circle [radius=.05];\n\\draw (3,0) -- (2,0);\n\\node at (3,-.2) {$v$};\n\\draw [fill=black] (3,1) circle [radius=.05];\n\\node at (3,1.2) {$x_1$};\n\\draw (3,1) -- (2,0);\n\\draw (3,1) -- (3,0);\n\\draw [fill=black] (4,0) circle [radius=.05];\n\\node at (4,.2) {$x_2$};\n\\draw (3,0) -- (4,0);\n\\draw [fill=black] (5,0) circle [radius=.05];\n\\draw [fill=black] (5,1) circle [radius=.05];\n\\draw [fill=black] (5,-1) circle 
[radius=.05];\n\\draw (4,0) -- (5,0);\n\\draw (3,1) -- (5,1);\n\\draw (4,0) -- (5,-1);\n\\node at (5,1.2) {$p$};\n\\node at (5,0.2) {$q$};\n\\node at (5,-.8) {$r$};\n\\draw [->] [ultra thick] (6,0) -- (7,0);\n\\draw (10,0) ellipse (1.4 and .3);\n\\node at (10,0) {$z,u,v,x_1,x_2$};\n\\draw [fill=black] (8,1) circle [radius=.05];\n\\draw [fill=black] (8,-1) circle [radius=.05];\n\\node at (8,-0.8) {$b$};\n\\node at (8,1.2) {$a$};\n\\draw [fill=black] (12.5,0) circle [radius=.05];\n\\draw [fill=black] (12.5,1) circle [radius=.05];\n\\draw [fill=black] (12.5,-1) circle [radius=.05];\n\\draw (11.4,0) -- (12.5,1);\n\\draw (11.4,0) -- (12.5,-1);\n\\draw (11.4,0) -- (12.5,0);\n\\node at (12.5,1.2) {$p$};\n\\node at (12.5,-0.8) {$r$};\n\\node at (12.5,0.2) {$q$};\n\\draw (8,1) -- (8.6,0);\n\\draw (8,-1) -- (8.6,0);\n}\n\\end{center}\n\n\n{\\bf Type 33.}\n\\begin{center}\n\\pic{\n\\draw [fill=black] (0,1) circle [radius=.05];\n\\draw [fill=black] (0,-1) circle [radius=.05];\n\\node at (0,1.2) {$u_1$};\n\\node at (0,-.8) {$u_2$};\n\\draw [fill=black] (1,0) circle [radius=.05];\n\\node at (1,.2) {$u$};\n\\node at (2,.2) {$v$};\n\\draw (0,1) -- (1,0);\n\\draw (0,-1) -- (1,0);\n\\node at (-1,2.2) {$a$};\n\\node at (-1,1.2) {$b$};\n\\node at (-1,-.8) {$c$};\n\\node at (-1,-1.8) {$d$};\n\\draw (1,0) -- (2,0);\n\\draw (2,0) -- (3,1);\n\\draw (2,0) -- (3,-1);\n\\draw (0,1) -- (1,0);\n\\draw (0,1) -- (-1,2);\n\\draw (0,1) -- (-1,1);\n\\draw (0,-1) -- (-1,-1);\n\\draw (0,-1) -- (-1,-2);\n\\draw (0,-1) -- (1,0);\n\\draw [fill=black] (-1,2) circle [radius=.05];\n\\draw [fill=black] (-1,1) circle [radius=.05];\n\\draw [fill=black] (-1,-2) circle [radius=.05];\n\\draw [fill=black] (-1,-1) circle [radius=.05];\n\\draw [fill=black] (0,1) circle [radius=.05];\n\\draw [fill=black] (0,-1) circle [radius=.05];\n\\draw [fill=black] (3,1) circle [radius=.05];\n\\draw [fill=black] (3,-1) circle [radius=.05];\n\\node at (3,-.8) {$v_2$};\n\\node at (3,1.2) {$v_2$};\n\\draw [fill=black] (4,1) circle [radius=.05];\n\\draw [fill=black] (4,-1) circle [radius=.05];\n\\node at (4,2.2) {$p$};\n\\node at (4,1.2) {$q$};\n\\node at (4,-.8) {$r$};\n\\node at (4,-1.8) {$s$};\n\\draw [fill=black] (4,2) circle [radius=.05];\n\\draw [fill=black] (4,-2) circle [radius=.05];\n\\draw (3,1) -- (4,1);\n\\draw (3,1) -- (4,2);\n\\draw (3,-1) -- (4,-1);\n\\draw (3,-1) -- (4,-2);\n\\draw [->] [ultra thick] (5,0) -- (6,0);\n\\draw (9,0) ellipse (1 and .2);\n\\draw (12,0) ellipse (1 and .2);\n\\node at (14,2.2) {$p$};\n\\node at (14,1.2) {$q$};\n\\node at (14,-.8) {$r$};\n\\node at (14,-1.8) {$s$};\n\\node at (7,2.2) {$a$};\n\\node at (7,1.2) {$b$};\n\\node at (7,-.8) {$c$};\n\\node at (7,-1.8) {$d$};\n\\draw [fill=black] (14,2) circle [radius=.05];\n\\draw [fill=black] (14,1) circle [radius=.05];\n\\draw [fill=black] (14,-2) circle [radius=.05];\n\\draw [fill=black] (14,-1) circle [radius=.05];\n\\draw [fill=black] (7,1) circle [radius=.05];\n\\draw [fill=black] (7,-1) circle [radius=.05];\n\\draw [fill=black] (7,2) circle [radius=.05];\n\\draw [fill=black] (7,-2) circle [radius=.05];\n\\draw (14,2) -- (13,0);\n\\draw (14,1) -- (13,0);\n\\draw (14,-1) -- (13,0);\n\\draw (14,-2) -- (13,0);\n\\draw (7,2) -- (8,0);\n\\draw (7,1) -- (8,0);\n\\draw (7,-2) -- (8,0);\n\\draw (7,-1) -- (8,0);\n\\node at (9,0) {$u,u_1,u_2$};\n\\node at (12,0) {$v,v_1,v_2$};\n}\n\\end{center}\n\n{\\bf Type 34.}\n\n\\hspace{1in}\n\\pic{\n\\draw [fill=black] (0,1) circle [radius=.05];\n\\draw [fill=black] (0,-1) circle [radius=.05];\n\\node at (0,1.2) {$v_1$};\n\\node at 
(0,-.8) {$v_2$};\n\\draw [fill=black] (1,0) circle [radius=.05];\n\\node at (1,.2) {$v$};\n\\node at (2,.2) {$u$};\n\\draw (0,1) -- (1,0);\n\\draw (0,-1) -- (1,0);\n\\node at (-1,2.2) {$a$};\n\\node at (-1,1.2) {$b$};\n\\node at (-1,-.8) {$c$};\n\\node at (-1,-1.8) {$d$};\n\\draw (1,0) -- (2,0);\n\\draw (2,0) -- (3,1);\n\\draw (2,0) -- (3,-1);\n\\draw (0,1) -- (1,0);\n\\draw (0,1) -- (-1,2);\n\\draw (0,1) -- (-1,1);\n\\draw (0,-1) -- (-1,-1);\n\\draw (0,-1) -- (-1,-2);\n\\draw (0,-1) -- (1,0);\n\\draw [fill=black] (-1,2) circle [radius=.05];\n\\draw [fill=black] (-1,1) circle [radius=.05];\n\\draw [fill=black] (-1,-2) circle [radius=.05];\n\\draw [fill=black] (-1,-1) circle [radius=.05];\n\\draw [fill=black] (0,1) circle [radius=.05];\n\\draw [fill=black] (0,-1) circle [radius=.05];\n\\draw [fill=black] (3,1) circle [radius=.05];\n\\draw [fill=black] (3,-1) circle [radius=.05];\n\\node at (3,-1.2) {$u_2$};\n\\node at (3,1.2) {$u_1$};\n\\draw (3,1) -- (3,-1);\n\\draw [fill=black] (4,1) circle [radius=.05];\n\\draw [fill=black] (4,-1) circle [radius=.05];\n\\node at (4,-1.2) {$w_2$};\n\\node at (4,1.2) {$w_1$};\n\\draw (3,1) -- (4,1);\n\\draw (3,-1) -- (4,-1);\n\\node at (5,2.2) {$p$};\n\\node at (5,1.2) {$q$};\n\\node at (5,-.8) {$r$};\n\\node at (5,-1.8) {$s$};\n\\draw [fill=black] (5,2) circle [radius=.05];\n\\draw [fill=black] (5,1) circle [radius=.05];\n\\draw [fill=black] (5,-2) circle [radius=.05];\n\\draw [fill=black] (5,-1) circle [radius=.05];\n\\draw (4,1) -- (5,1);\n\\draw (4,1) -- (5,2);\n\\draw (4,-1) -- (5,-1);\n\\draw (4,-1) -- (5,-2);\n\\draw [->] [ultra thick] (6,0) -- (7,0);\n}\n\n\\hspace{3in}\n\\pic{\n\\draw (10,0) ellipse (1 and .2);\n\\draw (13,0) ellipse (1.4 and .3);\n\\node at (10,0) {$v,v_1,v_2$};\n\\node at (13,0) {$u,u_1,u_2,w_1,w_2$};\n\\node at (8,2.2) {$a$};\n\\node at (8,1.2) {$b$};\n\\node at (8,-.8) {$c$};\n\\node at (8,-1.8) {$d$};\n\\draw [fill=black] (8,2) circle [radius=.05];\n\\draw [fill=black] (8,1) circle [radius=.05];\n\\draw [fill=black] (8,-2) circle [radius=.05];\n\\draw [fill=black] (8,-1) circle [radius=.05];\n\\draw (9,0) -- (8,1);\n\\draw (9,0) -- (8,2);\n\\draw (9,0) -- (8,-1);\n\\draw (9,0) -- (8,-2);\n\\node at (15.4,2.2) {$p$};\n\\node at (15.4,1.2) {$q$};\n\\node at (15.4,-.8) {$r$};\n\\node at (15.4,-1.8) {$s$};\n\\draw [fill=black] (15.4,2) circle [radius=.05];\n\\draw [fill=black] (15.4,1) circle [radius=.05];\n\\draw [fill=black] (15.4,-2) circle [radius=.05];\n\\draw [fill=black] (15.4,-1) circle [radius=.05];\n\\draw (14.4,0) -- (15.4,1);\n\\draw (14.4,0) -- (15.4,2);\n\\draw (14.4,0) -- (15.4,-1);\n\\draw (14.4,0) -- (15.4,-2);\n}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nIn the past years, {\\it Integral}--IBIS (Ubertini et al. 2003)\nand {\\it Swift}--BAT (Barthelmy et al. 2005) catalogs of hard X--ray selected \nextragalactic sources have been published (Sambruna et al. 2007; Bird et al. 2010), opening \na new window in the study of blazars.\n\nThe hard-X-ray selection produces a set of diverse sources, including extremely ``red'' \nflat spectrum radio quasars (FSRQs), with Inverse Compton (IC) peak at keV-MeV\nenergies, or ``blue'' BL Lacs with synchrotron peak at these frequencies,\nand provides a way to test the validity of the blazar sequence, \nlooking for outliers not predicted by the sequence itself (Giommi et al. 2007).\nIn particular, by selecting hard X--ray luminous FSRQ at high redshift, i.e. 
with the highest intrinsic \nbolometric luminosities, it is possible to collect samples of \nsupermassive black holes (SMBHs) in the early Universe, thus introducing important observational \nconstraint on the number density of \nheavy black holes at high redshift and hence on their formation processes and timing.\n\nIGR J22517+2217 was first reported by Krivonos et al.\n(2007) as an unidentified object detected by {\\it Integral}--IBIS (150 ks of total IBIS exposure).\n{\\it Swift} follow-up observations were used to associate the source \nwith MG3 J225155+2217 (Bassani et al. 2007). MG3 J225155+2217 has been optically identified as a QSO by Falco et al. (1998) in a redshift survey of \n177 FSRQs, on the basis of S$_{IV}$, Ly$\\alpha$, C$_{II}$ and C$_{IV}$ emission lines.\nIGR J22517+2217 is the highest redshift ($z=3.668$) blazar detected in the fourth {\\it Integral}--IBIS hard X--rays catalog (Bird et al. 2010).\nThe source has a {\\it Swift}--BAT counterpart (SWIFT J2252.0+2218) in the 3 years BAT survey (Ajello et al. 2009) and is present in the multifrequency ``Roma--BZCAT\" \ncatalog of blazars (Massaro et al. 2009).\n\nUsing XRT, IBIS and archival optical\/IR data, Bassani et al. (2007) constructed a non simultaneous SED \nof the source, showing an extremely bright X--ray emission with respect to the\noptical emission ($\\alpha_{OX}<0.75$), \nand suggested that IGR J22517+2217 could be a rare FSRQ with synchrotron peak at X--ray frequencies,\nor a more canonical FSRQ, i.e. with the synchrotron peak at radio--mm frequencies and IC peak at MeV-GeV energies, \nbut with an exceptionally strong Compton dominance.\n\nThis ``controversial\" blazar has been studied also by Maraschi et al. (2008). \nThey reanalyzed the existing {\\it Swift} (XRT and BAT) and {\\it Integral}--IBIS data, \nand propose a ''standard leptonic one-zone emission model'' (Ghisellini \\& Tavecchio 2009, see Sect. 3) with the peak of the synchrotron component at microwave\/radio frequencies, \nand a high luminosity external Compton (EC) component peaking in hard X-rays to reproduce the SED of the source.\nThis model ruled out both a synchrotron and a synchrotron self--Compton (SSC) interpretation for the \nX--ray emission. \n\nGhisellini et al. (2010) included IGR J22517+2217 in their sample of 10 X--ray selected blazar at $z>2$:\nthe intent of the paper was to characterize the physical properties of these powerful sources, and to confirm\nthe capability of the hard X--ray selection in finding \nextreme blazars with massive SMBH, powerful jets and luminous accretion disks. \nIGR J22517+2217 is the highest redshift FSRQ in their sample and shows the highest total jet power (P$_{Jet}=1.5\\times10^{48}$ erg s$^{-1}$).\n\nAll these previous studies have been performed through the analysis of the ''average'' X-ray spectra obtained with the INTEGRAL and {\\it Swift}-BAT surveys, \nwithout taking into account any possible flux variation of the source during the period of monitoring (5 years for INTEGRAL and BAT).\nIn this paper we present the discovery of strong flaring activity, \nin X--ray (IBIS and BAT) archival data, of this extremely bright and peculiar\nFSRQ, and the modelization of both its flaring and quiescent SEDs.\nNew {\\it Suzaku} and {\\it Fermi} data are used for characterizing the \nquiescent state.\nOur goal is to investigate the evolution \nof the SED and obtain information on the physical condition of the source in the two different states.\n\nThe paper is organized as follows. 
In \\S2 we report the multiwavelength data analysis of the instruments involved in the SED building. \nIn \\S3 we describe the model adopted to reproduce the broad band SED, while in \\S4 we discuss the SED fitting of both the flaring and quiescent state. \nThe summary of our results is presented in \\S5.\nThroughout the paper, a $\\Lambda-CDM$ cosmology with $H_0 = 71$ km s$^{-1}$ Mpc$^{-1}$, \n$\\Omega_\\Lambda = 0.73$, and $\\Omega_m = 0.27$ is adopted.\n\n\n\n\\section{Data reduction}\n\n\n\\subsection{ New {\\it Suzaku} data}\n\n{\\it Suzaku} observed the source on 2009 Nov 26 (ID 704060010, PI A. De Rosa), \nfor a net exposure of $\\sim$40 ks, with both the X--ray Imaging Spectrometer (XIS; Koyama et al. 2007) and the \nHard X--ray Detector (HXD; Takahashi et al. 2007).\nThe XIS instrument has 3 operating detectors at the time of the observation: the front illuminated XIS 0 and XIS 3, sensitive in the 0.5--10 keV band, and the \nback--illuminated XIS 1, extending the low energy range to 0.2 keV.\nThe HXD is instead composed of GSO scintillators and silicon PIN diodes.\nThe PIN detectors observe in the 12--60 keV energy band, while the GSO ones can observe up to 600 keV.\nData reduction and processing were performed using HEASOFT $v6.9$ and {\\it Suzaku ftools v16}.\nOnly XIS and PIN data have been used in this analysis, since the source is below the \nsensitivity limit of the GSO scintillators.\nThe cleaned event files produced by the data processing with the standard selection criteria were used.\n\nXIS source events were extracted from a region of radius 200 arcsec, and the \nbackground events from an annulus (external radius 400 arcsec) outside the source region.\nThe response matrix and effective area were calculated for each detector using {\\it Suzaku} \nftools tasks {\\it xisrmfgen} and {\\it xissimarfgen}.\nGiven that XIS 0 and XIS 3 have similar responses, their event files were summed.\nOnly data in the 0.5--8 keV band were considered, and the spectral counts were \nrebinned using at least 30 counts per bin, in order to allow the use of $\\chi^2$ statistic.\n\nPIN data were extracted from the HXD cleaned event files after standard screening. \nThe tuned background model supplied\nby the {\\it Suzaku} team (Fukazawa et al. 2009), was used for the ``Non X--ray Background\" (NXB) events. \nThe background light curve was corrected for the 10x oversampling rate. \nThe source spectra were corrected for deadtime using {\\it hxddtcor}. \nWe estimate the cosmic X--ray background (CXB) contribution to the\nPIN background using the model given in Gruber et al. (1999),\nwhich is folded with the PIN response to estimate the CXB rate.\n\nThe XIS and PIN data were fitted with a simple power-law, modified by neutral absorption at the source redshift, plus \ngalactic absorption, fixed to the value measured by Kalberla et al. (2005) at the source \ncoordinates\\footnote{http:\/\/heasarc.nasa.gov\/cgi-bin\/Tools\/w3nh\/w3nh.pl} ($N_{H, Gal.} = 5\\times10^{20} cm^{-2}$). 
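\n\nAs a purely illustrative sketch of this baseline continuum (not the fitting code actually used, which relies on standard X-ray fitting packages and tabulated photoelectric cross-sections), the absorbed power law can be written as below; the $E^{-3}$ scaling of the cross-section and the normalization are rough assumptions adopted only for this example.\n\\begin{verbatim}\nimport numpy as np\n\ndef absorbed_powerlaw(E_keV, K, Gamma, nH_gal=0.05, nH_src=1.5, z=3.668):\n    # Photon spectrum K * E^-Gamma, attenuated by Galactic absorption at the\n    # observed energy and by intrinsic absorption at the rest-frame energy\n    # E*(1+z).  Columns are in units of 10^22 cm^-2; the cross-section is a\n    # crude sigma(E) ~ sigma_1keV * E^-3 stand-in for the tabulated values.\n    sigma_1keV = 2.4   # approximate cross-section at 1 keV, 10^-22 cm^2 (assumed)\n    tau_gal = nH_gal * sigma_1keV * E_keV**(-3.0)\n    tau_src = nH_src * sigma_1keV * (E_keV * (1.0 + z))**(-3.0)\n    return K * E_keV**(-Gamma) * np.exp(-tau_gal - tau_src)\n\nE = np.logspace(np.log10(0.5), np.log10(8.0), 200)   # XIS band, keV\nmodel = absorbed_powerlaw(E, K=1.0e-3, Gamma=1.5)\n\\end{verbatim}\nSince the intrinsic absorber acts at rest-frame energies a factor $(1+z)$ higher, where the cross-section is much smaller, a column density of the order of $10^{22}$ cm$^{-2}$ is needed to curve the observed soft X-ray band.\n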
\nThe best fit values are reported in Table 1.\nThe XIS and PIN data show a flat spectrum, with $\\Gamma=1.5\\pm0.1$ and $\\Gamma=1.5\\pm0.8$, \nrespectively, with the PIN power-law normalization being $\\sim1.2$ times the XIS one, a known cross-calibration issue\n\\footnote{http:\/\/heasarc.gsfs.nasa.gov\/docs\/suzaku\/analysis\/abc sec 5.4}.\nThe XIS data show some curvature below 1 keV, which can be reproduced either by an intrinsic \ncolumn density of $N_H=(1.5\\pm1.1)\\times10^{22}$ cm$^{-2}$ at the source redshift (compatible with the value found in Bassani et al. using XRT data)\nor by a broken power law with a break energy of $0.8\\pm0.1$ keV and $\\Delta\\Gamma=0.8$.\nBoth models give a comparable reduced $\\chi^2$ of 1.03 and 1.04, respectively, thus the \nquality of the data does not allow us to disentangle the two possibilities.\nIn Sect. 4 we will attempt to disentangle these two different models using broadband data. \n\nThe hard X-ray flux obtained by fitting the PIN data is a factor of $\\sim$10 (4) lower than the one\nreported in the literature from {\\it Integral}-IBIS ({\\it Swift}-BAT) survey data (Bassani et al. 2007; Maraschi et al. 2008; Ghisellini et al. 2010).\nThis suggests that the source was in a much less active state during the 2009 {\\it Suzaku} observation, compared to previous measurements.\nThis led us to reanalyze the archival IBIS and BAT data in order to check for the presence of variability in the hard X-ray source flux.\nIn the top panel of Figure 1 we show the {\\it Suzaku} XIS (red, black) and PIN (green) unfolded spectra and the best-fit model found for IGR J22517+2217. \nFor comparison, the IBIS spectrum (blue) is also shown, to emphasize the different flux state.\n\n\n\\subsection{{\\it Integral}-IBIS and {\\it Swift}-BAT survey data}\n\n\\begin{figure}\n\\begin{center}\n\\psfig{figure=plot_xis_integral_def.ps,width=8cm,height=6.5cm}\\hspace{1cm}\\psfig{figure=igr22_lc.ps,width=8cm,height=7cm}\n\\caption{\n{\\it a) Top panel:} {\\it Suzaku} XIS (red, black) and PIN (green) unfolded spectrum and model of IGR J22517+2217. \nFor comparison the IBIS cat4 spectrum is shown (blue). \n{\\it b) Bottom panel:} Hard X--ray light curve of IGR J22517+2217. \nRed squares (black circles) represent BAT (IBIS) 15--55 keV flux. \nThe blue empty (green filled) triangles show the time of XRT\/UVOT ({\\it Suzaku}) observation. \nThe cyan solid lines represent the time intervals used in the BAT spectra extraction.\nThe orange dashed line represents the period of observation of {\\it Fermi}\/LAT.\n}\n\\end{center}\n\\label{fig:light}\n\\end{figure}\n\n\nIBIS data are taken from the Fourth IBIS\/ISGRI Soft $\\gamma$--ray Survey Catalog (Bird et al. 2010), \ncovering the period Feb 2003 -- Apr 2008.\nImages from the ISGRI detector (Lebrun et al. 2003) for each pointing have been generated\nusing the Off-line Scientific Analysis software (OSA) version 7.0 (Goldwurm et al. 2003). \nFive primary energy bands (20--40, 30--60, 20--100, 17--30, and\n18--60 keV) were used to maximize the detection sensitivity\nfor sources with various energy spectra (for details see Bird et al. 2010).\n\nThe BAT analysis results presented in this paper were derived with all the available data \nduring the time interval 2005 Jan 19 -- 2010 Apr 14.\nThe 15--55 keV spectra and light curve were extracted following the recipes presented in Ajello \net al. (2008, 2009). \nThe spectra are constructed by weighted averaging of the source spectra extracted from short \nexposures (e.g., 300 s) and are accurate to the mCrab level. 
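\nSchematically, and only as an illustration of the weighting scheme (the actual survey pipeline is considerably more involved), such a survey spectrum can be thought of as an inverse-variance weighted average of the individual snapshot spectra:\n\\begin{verbatim}\nimport numpy as np\n\ndef combine_survey_spectra(rates, errors):\n    # rates, errors: arrays of shape (N_snapshots, N_channels) holding the\n    # source count rates and uncertainties of the short exposures.\n    # Returns the weighted mean spectrum and its uncertainty per channel.\n    w = errors**(-2.0)\n    mean = np.sum(w * rates, axis=0) / np.sum(w, axis=0)\n    err = np.sum(w, axis=0)**(-0.5)\n    return mean, err\n\\end{verbatim}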
\nThe reader is referred to Ajello et al. (2009) for more details. \n\nThe total IBIS and BAT spectra can be reproduced with a simple power law having $\\Gamma=1.6\\pm 0.6$ and \n$1.6\\pm0.5$, respectively;\nthe 15--55 keV flux is F$_{15-55 \\, {\\rm keV}}=(2.5\\pm0.9)\\times10^{-11}$ and \nF$_{15-55 \\, {\\rm keV}}=(2.1\\pm0.8)\\times10^{-11}$ erg cm$^{-2}$ s$^{-1}$, respectively.\n\nThe bottom panel of Fig. 1 shows the BAT (red squares) and IBIS (black diamonds) historical light curve \nof IGR J22517+2217 from 2003 Dec 04 to 2010 Mar 23.\nThe 15--55 keV IBIS light curve was extracted from the ISDC Science Products Archive\\footnote{http:\/\/www.isdc.unige.ch\/heavens\\_webapp\/integral} \nadopting a nominal bin size of 100 days. \nWe converted the observed IBIS and BAT count rates into 15--55 keV observed flux, using the \nWebPimms HEASARC tool\\footnote{http:\/\/heasarc.nasa.gov\/Tools\/w3pimms.html}, \nassuming as underling model a power law with photon index $\\Gamma=1.6$, consistent with \nthe values observed in the BAT and IBIS spectra, and assuming a constant cross-calibration between IBIS and BAT, equal to one.\n\n\\noindent Fig.1 (bottom panel) shows that: \n\n-- the source displays quite strong long term variability in hard X--rays;\n\n-- a strong flare episode occurred around Jan 2005, and the source reached a 15--55 keV flux maximum of $(8\\pm2) \\times10^{-11}$ erg cm$^{-2}$ s$^{-1}$\n(a factor of 20 higher than the flux measured by {\\it Suzaku}-PIN in 2009); \n\n-- after the flare, the source faded into a quiescent state, reaching a flux that is at or below the detection limit of both BAT and IBIS instruments.\nAs can be seen, the IBIS light curve is completely dominated by the flare, \ni.e. the source flux is below the IBIS detection limit after \nMJD 53550 and the total spectrum extracted from the entire period can be considered representative of the flare.\n\nAs a result of its different pointing strategy, BAT has a much more regular and extended coverage of the source, \nand we were able to characterize the source in both states.\nWe extracted a BAT spectrum from the period around the flare of 2005 (2004 Dec 11--2005 Mar 21)\nand also one from the remaining quiescent period (2005 May 10--2009 Jun 18, \nsolid cyan lines in the bottom panel of Fig. 1). 
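\n\nThe band fluxes quoted here and in Table 1 translate into luminosities through the luminosity distance of the adopted cosmology, with a power-law k-correction. A minimal sketch (assuming the quoted photon indices and neglecting absorption corrections) is:\n\\begin{verbatim}\nimport numpy as np\nfrom astropy.cosmology import FlatLambdaCDM\n\ncosmo = FlatLambdaCDM(H0=71, Om0=0.27)   # flat Lambda-CDM adopted in this paper\nz = 3.668\ndL_cm = cosmo.luminosity_distance(z).to('cm').value   # roughly 1e29 cm\n\ndef band_luminosity(flux_cgs, Gamma):\n    # Standard conversion L = 4 pi dL^2 F (1+z)^(Gamma-2) for a power law of\n    # photon index Gamma (k-correction into the same rest-frame band).\n    return 4.0 * np.pi * dL_cm**2 * flux_cgs * (1.0 + z)**(Gamma - 2.0)\n\nlogL_flare = np.log10(band_luminosity(3.69e-11, 1.5))   # BAT flare, 15-55 keV\n# close to the logarithmic luminosity listed in Table 1 for the flare spectrum\n\\end{verbatim}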
\n\n\nThe BAT flux relative to the flare state is the highest one measured in the hard X-ray energy range\n(F$_{15-55 \\, {\\rm keV}}=(3.7\\pm0.8)\\times10^{-11}$ erg cm$^{-2}$ s$^{-1}$).\nDuring this state the source is detected up to 200 keV, and the spectrum is characterized by photon index of 1.5$\\pm$0.5.\nIn the quiescent state the source is detected, with significance $\\sim3\\sigma$, only up to $\\sim75$ keV, and \nthe spectrum has a flux F$_{15-55 \\, {\\rm keV}}$ a factor of $\\sim15$ lower than the flaring one, while the photon index is 1.7$\\pm$1.1.\nConsidering the large uncertainties on $\\Gamma$ we can consider the spectra (in flaring and quiescent state) comparable (see\nTable 1 for IBIS and BAT spectral analysis results).\n\n\\begin{table*}\n\\begin{center}\n\\label{tab:xray}\n\\begin{tabular}{ccccccccccccccc}\n\\hline\\hline\\\\\n\\multicolumn{1}{c} {Date}&\n\\multicolumn{1}{c} {Inst.}&\n\\multicolumn{1}{c} {Exp.}&\n\\multicolumn{1}{c} {$N_{\\rm H}$}&\n\\multicolumn{1}{c} {$\\Gamma$}&\n\\multicolumn{1}{c} {F$_{2-10}$}&\n\\multicolumn{1}{c} {$\\log L_{2-10}$}&\n\\multicolumn{1}{c} {F$_{15-55}$}& \n\\multicolumn{1}{c} {$\\log L_{15-55}$}\\\\\n (1) & (2) &(3) & (4) & (5) & (6) & (7) & (8) & (9) \\\\\n\\hline\\\\ \n2003 Dec 04--2007 Nov 17 & IBIS & 191 & - & 1.6$\\pm$0.6 & - & - &25.1$\\pm$8.9 & 48.1 \\\\\n2004 Dec 11--2005 Mar 21 & BAT Flare & 160 & - & 1.5$\\pm$0.5 & - & - &36.9$\\pm$8.1 & 48.3 \\\\\n2005 May 10--2009 Jun 18 & BAT quiesc.&1610 & - & 1.7$\\pm$1.1 & - & - &2.6$\\pm$1.6 & 47.3 \\\\\n2007 May 01 & XRT & 40 & $2.0\\pm1.5$ & 1.4$\\pm$0.1 & 2.4$\\pm$0.4& 47.1 & - & - \\\\\n2009 Nov 01 & XIS & 40 & $1.5\\pm1.1$ & 1.5$\\pm$0.1 & 1.2$\\pm$0.1& 46.7 & - & - \\\\\n2009 Nov 01 & PIN & 40 & - & 1.5$\\pm$0.8 & - & - &3.8$\\pm$1.8 & 47.4\\\\\n\\hline \n\\hline \n\\end{tabular}\n\\end{center} \n\\caption{Best fit model parameters for the different observations of IGR 22517+2217 analyzed in this work.\nThe broadband continuum is reproduced with a simple power-law, modified by intrinsic absorption at the source redshift, \nplus galactic absorption N$_{\\rm H, Gal}$ = 5$\\times$10$^{20}$ cm$^{-2}$.\nColumn: \n(1) Date;\n(2) Instrument; \n(3) Effective exposure (ks);\n(4) Column density (in $10^{22}$ cm$^{-2}$ units);\n(5) Photon index; \n(6) 2--10 keV observed flux (in $10^{-12}$ erg s$^{-1}$ cm$^{-2}$ units); \n(7) Log 2--10 keV deabsorbed luminosity (erg s$^{-1}$);\n(8) 15--55 keV flux (in $10^{-12}$ erg s$^{-1}$ cm$^{-2}$ units);\n(9) Log 15--55 keV luminosity (erg s$^{-1}$).}\n\\end{table*}\n \n\n\n\n\\subsection{{\\it Fermi}\/LAT data}\n\n{\\it Fermi} Large Area Telescope (LAT; Atwood et al. 2009) data were collected from Aug 2008 (MJD 54679) to Aug 2010 (MJD 55409). \nDuring this period, the {\\it Fermi}\/LAT instrument operated mostly in survey mode, \nscanning the entire $\\gamma$-ray sky every 3 hours.\nThe analysis was performed with the ScienceTools software package version {\\it v9r17p0}, which\nis available from the Fermi Science Support Center. \nOnly events having a high probability of being photons\n-- those in the ``diffuse class\" -- were used. 
\n\nThe energy range used was 100 MeV -- 100 GeV, and the maximum zenith angle value was $105^{\\circ}$.\nWe adopted the unbinned likelihood analysis, using the standard P6\\_V3\\_DIFFUSE response functions.\nThe diffuse isotropic background used is {\\it isotropic\\_iem\\_v02} and the galactic diffuse emission model used is \n{\\it gll\\_iem\\_v02}\\footnote{http:\/\/fermi.gsfc.nasa.gov\/ssc\/data\/access\/lat\/BackgroundModels.html}.\nWe considered a region of interest (RoI) of $15^{\\circ}$ from the IGR J22517+2217 position.\nAll sources from the 1FGL catalog (Abdo et al. 2010) within the RoI of the source were included in the fit, \nwith their photon indexes and the integral fluxes free to vary, \nplus a point source with power law spectrum at the IGR J22517+2217 position, having photon index fixed to 2.2 (a value typical of FSRQ in the GeV band).\nThe normalization of the background was also left free to vary.\nWe also repeated the analysis using the source list of the 2FGL catalog (Ackermann et al. 2011), obtaining consistent results.\n\nIGR J22517+2217 is located $\\sim6^{\\circ}$ from 3C 454.3, a bright, extremely variable source in \nthe $\\gamma$--ray sky (Ackermann et al. 2010). It contributes for more than 90\\% to the total counts in the RoI.\nWe tried to exclude the period of flaring activity of 3c 454.3 from the data set, in order to minimize the contamination by this source.\nThe results obtained for IGR J22517+2217, however, did not change significantly.\n\nIGR J22517+2217 is not detected in the 2 year observation, and the computed Test Statistic (TS, Mattox et al. 1996)\nis $TS \\simeq3.5$ in the full band.\nWe therefore calculated the 95\\% upper limits in 5 energy bands (i.e. 0.1--0.3, 0.3--1, 1--3, 3--10, 10--100 GeV), using the profile likelihood method.\nThe upper-limits were corrected for attenuation due to extragalactic background light (EBL) through $\\gamma-\\gamma$ interactions (Chen et al. 2004, Razzaque et al. 2009), \nalthough only the 10-100 GeV band was found to be affected by significant attenuation.\nAs a consequence of being close to 3C 454.3, the background around the position \nof IGR J22517+2217 is higher than in a typical extragalactic field, making it difficult to constrain more strongly the upper limits at GeV energies.\nThe upper limits are plotted in Figure 2, with all other data discussed in this paper.\n\nGiven that {\\it Fermi} data were collected starting from Aug 2008, they completely fall\nin the quiescent period of the source, and so it is not surprising that IGR J22517+2217 was not detected.\nTherefore, they will be used to characterize the \nstate of the source in the quiescent SED only, while we do not have available $\\gamma$-ray data \nfor the flaring state SED.\n\n\n\\subsection{Archival Data}\n\nWe collected archival radio and optical data from NASA\/IPAC EXTRAGALACTIC DATABASE \\footnote{http:\/\/ned.ipac.caltech.edu\/} for the source.\nRadio data at 1.4 4.8 and 8.4 GHz comes from NRAO\/VLA.\nOptical data in J, H, and K bands are taken with the UFTI instrument at UKIRT (Kuhn 2004).\nThe UV data are taken from Ghisellini et al. (2010), where they corrected the {\\it Swift}-UVOT observed magnitudes\nfor the absorption of neutral hydrogen in intervening Lyman$-\\alpha$ absorption systems. \nWe also reanalyzed the archival XRT spectrum (average of 4 contiguous observations), obtaining consistent results with those found in Bassani et al. 
(2007)\nusing the same set of data, and with the results reported in \\S2.1 for the XIS spectra.\nThe only difference is in the XRT observed 2-10 keV flux, which is a factor of 2 higher than the flux measured by XIS.\n\n\\begin{figure*}\n\\begin{center}\n\\psfig{figure=sed_tot2.ps,width=16cm,height=12cm}\n\\hspace{0.3cm}\n\\vspace{-0.5cm}\n\\psfig{figure=zoom.ps,width=10cm,height=8cm}\n\\caption{\n{\\it a) Top panel:} \nSpectral energy distribution of IGR J22517+2217. \nGray circles and arrows represent archival radio\/optical\/UV data from NED. \nEmpty red triangles and magenta \nsquares represent XIS 0 and XIS 3 data, \nempty green circles and orange pentagons represent PIN and BAT quiescent data respectively, \nwhile black arrows are {\\it Fermi} upper limits in 5 bands.\nFilled violet squares represent XRT data, filled cyan diamonds and blue pentagons represent \nIBIS and BAT flare data respectively.\nThe solid cyan and orange curves are the results of the modeling of the quiescent \nand flaring states, respectively.\nWith gray lines we show the different components of the non--thermal emission:\nsynchrotron (dotted), synchrotron self--Compton (long dashed) and\nexternal Compton (dot--dashed).\nThe black dashed line corresponds to the thermal emission of the disk, the IR torus and the X--ray disk corona. \nThe model does not account for radio emission, produced from much larger regions of the jet.\n{\\it b) Bottom panel:} Zoom in the X--ray energy range for the two SEDs. Symbols as in top panel.\n}\n\\end{center}\n\\label{fig:sed2}\n\\end{figure*}\n\n\\section{The SED model}\n\nThe model adopted to fit the SED is a leptonic, one--zone synchrotron and inverse Compton model, \nfully discussed in Ghisellini \\& Tavecchio (2009).\nThe assumptions can be summarized as follows:\n\n\\begin{table*}\n\\label{tab:sed}\n\\begin{center}\n\\begin{tabular}{ccccccccc}\n\\hline\\hline\\\\\n\\multicolumn{1}{c} {State}&\n\\multicolumn{1}{c} {$R_{\\rm diss}$}&\n\\multicolumn{1}{c} {$P'_{\\rm inj}$}&\n\\multicolumn{1}{c} {$B$}&\n\\multicolumn{1}{c} {$\\Gamma$}&\n\\multicolumn{1}{c} {$\\gamma_{\\rm b}$}&\n\\multicolumn{1}{c} {$\\gamma_{\\rm max}$}&\n\\multicolumn{1}{c} {$s_1$}&\n\\multicolumn{1}{c} {$s_2$}\\\\\n (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\\\\n\\hline\\\\ \nLow & 570 (1900) & 0.045 & 1.06 & 16 & 70 & 2e3 & -1 & 4 \\\\\nHigh & 990 (3300) & 0.30 & 0.61 & 16* & 70* & 2e3* & -1* & 4* \\\\\n\\hline \n\\end{tabular}\\end{center} \n\\caption{Input parameters of the SED fitting for the low and high state of IGR J22517+2217.\nColumn: \n(1) State; \n(2) dissipation radius in units of $10^{15}$ cm and, in parentheses, in units of\nSchwarzschild radii;\n(3) intrinsic injected power ($10^{45}$ erg s$^{-1}$) in the form of relativistic electrons;\n(4) magnetic field intensity (Gauss);\n(5) bulk Lorentz factor at $R_{\\rm diss}$; \n(6) and (7) break and maximum random Lorentz factors of the injected electrons; \n(8) and (9) slopes of the injected electron distribution [Q($\\gamma$)] below and above $\\gamma_{\\rm b}$;\n* parameters held fixed at the low-state values.}\n\\end{table*}\n\n-- The emitting region is assumed to be spherical \n(with radius $R$) and at a distance $R_{\\rm diss}$ (dissipation radius) from the central black hole.\nThe emitting electrons are injected at a rate Q($\\gamma$) [cm$^{-3}$ s$^{-1}$] for a finite time equal to the\nlight crossing time $R\/c$. 
\nThe adopted function Q($\\gamma$) is a smoothly broken power law with a break at $\\gamma_{\\rm b}$\nand slopes $s_1$ and $s_2$ below and above $\\gamma_{\\rm b}$, respectively.\nThe emitting region is moving with a velocity $\\beta_{\\rm c}$ corresponding to a bulk Lorentz factor $\\Gamma$. \nWe observe the source at the viewing angle $\\theta_{\\rm v}$.\n\n-- The external radiation sources taken into account are:\nthe broad line region (BLR) photons, assumed to re--emit 10\\% of the \naccretion disk luminosity from a shell--like distribution of clouds located at a distance \n$R_{\\rm BLR}=10^{17}L_{\\rm d, 45}^{1\/2}$ cm (Kaspi et al. 2005), where $L_{\\rm d}$ is the disk luminosity; \nthe IR emission from a dusty torus located at a distance \n$R_{\\rm IR}=2.5\\times10^{18}L_{d, 45}^{1\/2}$ cm (Elitzur 2006); \nthe direct emission from the accretion disk, including its X--ray corona. \nAlso the starlight contribution from the inner region of the host galaxy and the cosmic\nbackground radiation are taken into account, but these photon sources are unimportant in\nthe case of IGR J22517+2217.\n\n-- The accretion disk is a standard ``Shakura \\& Syunyaev\" (1973) disk, emitting as a blackbody at each radius.\nThe maximum temperature ($T_{\\rm max}$), i.e. the peak of the disk luminosity, is assumed to occur at $\\sim5$ \nSchwarzschild radii ($R_S$).\nThus from the position of the peak of the disk luminosity, and the total luminosity of the accretion \ndisk ($L_d$) it is possible to derive $M_{\\rm BH}$ and $\\dot M$, once a value for the efficiency $\\eta$ is assumed ($\\eta=0.08$ for a Schwarzschild black hole). \nSee Ghisellini et al. (2010a) for a discussion on caveats of this black hole mass estimate.\n\nWe can estimate the black hole mass $M_{\\rm BH}$ and the accretion luminosity \n$L_{\\rm d}$ of IGR J22517+2217 using optical and UV data and upper-limits corrected for the absorption\nof neutral hydrogen in intervening Lyman $\\alpha$ systems along the line of sight \n(see Ghisellini et al. 2010a for details).\nGiven the uncertainties in the amount of intervening Ly$\\alpha$ systems \nand the paucity of data the results must be considered an approximation.\nWe find $M_{\\rm BH}=10^9 M_{\\sun}$ and $L_{\\rm d}=6.8\\times10^{46}$ erg s$^{-1}$.\nThese values correspond to a disk radiating at 45\\% of the Eddington level.\n\nThe BLR are located at $R_{\\rm BLR}= 8\\times 10^{17}$ cm and the \nIR emitting torus at $R_{\\rm IR}=2.5\\times10^{19}$ cm.\nThe total X--ray corona luminosity is assumed to emit 30\\% of $L_{\\rm d}$. \nIts spectral shape is assumed to be $\\propto \\nu^{-1} \\exp(-h\\nu\/150 \\, {\\rm keV})$. 
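\n\nAs a quick numerical cross-check of the characteristic scales introduced above (a rough sketch with rounded constants; the exact figures depend on the rounding of $M_{\\rm BH}$ and $L_{\\rm d}$):\n\\begin{verbatim}\nimport numpy as np\n\nG, c, Msun = 6.674e-8, 2.998e10, 1.989e33   # cgs units\nM_BH = 1.0e9 * Msun                         # black hole mass estimated above\nL_d = 6.8e46                                # disk luminosity, erg s^-1\n\nR_S = 2.0 * G * M_BH / c**2                 # Schwarzschild radius, ~3e14 cm\nR_BLR = 1.0e17 * np.sqrt(L_d / 1.0e45)      # ~8e17 cm, as quoted in the text\n\n# Dissipation radii of Table 2, converted from 10^15 cm to Schwarzschild radii\nR_diss_low, R_diss_high = 570.0e15, 990.0e15\nprint(R_diss_low / R_S, R_diss_high / R_S)  # ~1900 and ~3300\nprint(R_diss_low < R_BLR < R_diss_high)     # True: only the high-state region\n                                            # lies beyond R_BLR (see Discussion)\n\\end{verbatim}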
\n\n\n\\section{Discussion}\n\nOur analysis shows that the extremely Compton dominated FSRQ IGR J22517+2217,\nexperienced a strong flare in the high energy hump in Jan 2005, and then faded in a quiescent state.\nIn order to investigate the physical properties of the source we built two SEDs for the two different states, and fitted \nthe data with the leptonic, one--zone synchrotron and inverse Compton model described in \\S3.\n\nIn order to build the SED for the quiescent state of IGR J22517+2217, we used the X-ray data from {\\it Suzaku} (XIS and PIN), \nthe BAT spectrum extracted from the quiescent period and {\\it Fermi}\/LAT 24 months upper limits.\nThe archival non--simultaneous optical\/UV data were added to our data.\n\nFor the flaring SED we used hard X-ray data from {\\it Integral}--IBIS and the {\\it Swift}--BAT \nspectrum extracted from the Jan 2005 flare.\nWe do not have soft X-ray data available for the flaring period. \nHowever, in order to put some constraint at soft X-ray frequencies we chose to include the XRT spectrum in \nthe flaring SED. This has a factor 2 higher normalization with respect to the XIS data.\nWe stress that, as observed in other samples of bright, red blazars (Ghisellini et al. 2010a), \nthe flux variability is larger at higher energies (i.e. in hard X--rays)\nbut modest at few keV. This support our choice of include the XRT data in the flaring SED.\n\nAll data points and SED model components of the flaring and quiescent SED are plotted in Fig. 2, top panel. Fig. 2, bottom\npanel, shows a zoom in the region of X-ray data.\nThe strong and very bright hard X-ray spectrum, together with the upper limits in the {\\it Fermi}\/LAT\nenergy range, constrain the peak of the high energy hump of the quiescent SED to be located \nat $\\sim10^{20}$--$10^{21}$ Hz, and thus the form of energy spectrum of radiating electrons.\nThe same energy spectrum is then assumed in the flaring SED, for which no MeV\/GeV data are available.\nThe corresponding synchrotron peak falls in a region of the spectrum localized around \n$10^{11}$ Hz for both SEDs.\n\nHowever, in our single--zone leptonic model, the synchrotron and the high \nenergy humps are produced by the same population of electrons.\nFurthermore, if the high energy emission is given by the external Compton process,\nthe energies of the electrons emitting at the $\\sim$MeV peak are rather modest,\nimplying a corresponding low frequency synchrotron peak.\nWe then require that the synchrotron component peaks at low energies,\nclose to the self--absorption frequency, and furthermore require that the\nthin synchrotron emission has a steep spectrum, whose flux is smaller\nthan the optical archival data (characterized instead by a rather hard slope,\nthat we interpret as emission from the accretion disk).\n\nGiven the relative paucity of the observational data, the choice of the model parameters is not unique,\nand some further assumption has to be made.\nWe indeed fix the viewing angle $\\theta_{\\rm v}$ to 3$^\\circ$, to be close to the\n$\\theta_{\\rm v}\\sim 1\/\\Gamma$ condition.\n\nNote that we model both states of\nthe source by changing the minimum number of parameters, given that fitting \ntwo data sets with the same model, allows to constraint better\nthe model parameters.\nIn particular, we assume that the accretion luminosity does not change \nfor the two states, and require also the bulk Lorentz factor and the parameters\nof the distribution of the injected electrons (break and maximum random Lorentz factors\nand 
slopes below and above $\\gamma_{\\rm b}$) to be the same for the two SEDs.\nThus, the parameters that are left free to vary from one state to the other are \nthe dissipation radius $R_{\\rm diss}$, the injected power $P'_{\\rm inj}$ and the \nmagnetic field $B$, that is proportional to $1\/R_{\\rm diss}$ (the Poynting flux is assumed to be constant).\n\nThe results of our modeling are shown in Fig. 2, where we show the total flux together\nwith the contributions of the non--thermal (synchrotron, self Compton, external Compton)\nand thermal (accretion disk, IR torus, X--ray corona) components.\nAs can be seen, the curvature around 1 keV observed in the XRT and XIS spectra\ncan be well reproduced by an EC component changing (softening) slope from \n$\\sim10^{17}$ to $\\sim10^{19}$ Hz, disfavoring the intrinsic obscuration scenario\nfor the shape of the X-ray emission of IGR J22517+2217.\n\nTable 2 reports the parameters of the SED fitting for the quiescent and flaring states. \nThe main difference between the two SEDs is the power $P'_{\\rm inj}$ \ninjected in the source in the form of relativistic electrons, that changes by a factor $\\sim$7.\nThe increase of $P'_{\\rm inj}$ accounts for the enhanced X--ray flux in the high state.\nThe other difference is the location of the emitting region $R_{\\rm diss}$, \nbecoming larger for the high state.\nThis is dictated by the detailed modeling of the slope of the soft to hard X--ray spectrum,\nrequiring an electron distribution with a break at low energies, and another break\nat somewhat larger energies.\nThis is accounted for, in our modeling, by requiring that electron of very low energies\n(corresponding to random Lorentz factors $\\gamma<5$) do not cool in one light crossing time.\nThis can be achieved if the location of the emitting region is slightly beyond the BLR,\nin a zone where the BLR radiation energy density is somewhat smaller.\nThis is the reason leading to a larger $R_{\\rm diss}$ in the high state.\nAs a consequence of assuming a larger region, the magnetic field is lower,\nfollowing the assumption of a constant Poynting flux ($\\propto B^2 R^2$).\nThe large Compton dominance constrains the value of the magnetic field,\nand in turn the relevance of the self Compton flux, found to be almost negligible.\n\nThis is also in agreement with the results pointed out in Sikora et al. (2009):\nbright blazars with very hard ($\\alpha_x<0.5$) X--ray spectra and high luminosity ratio \nbetween high and low frequency spectral components\nchallenge both the standard synchrotron self-Compton and \nhadronic models, while EC can easily account for these observed properties.\n\nIn the analysis described above, the bulk Lorentz factor is assumed to be constant. 
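\nThe constant Poynting flux assumption can be checked directly against the values of Table 2 (an illustrative check only, taking the emitting-region radius to scale with $R_{\\rm diss}$ as in the adopted one-zone geometry):\n\\begin{verbatim}\n# Magnetic field (Gauss) and dissipation radius (10^15 cm) from Table 2\nB_low, R_low = 1.06, 570.0\nB_high, R_high = 0.61, 990.0\n\n# The Poynting flux scales as B^2 R^2: the two products are nearly identical,\n# so the field indeed drops roughly as 1/R_diss between the two states.\nprint(B_low**2 * R_low**2, B_high**2 * R_high**2)\n\\end{verbatim}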
\nHowever, a change of the bulk Lorentz factor is often invoked to explain the variability of FSRQs.\nAs a further check, we performed a new fit of both low and high states, \nleaving $\\Gamma$ as a free parameter, in addition to $R_{\\rm diss}$ and $P'_{\\rm inj}$, although the fit is limited by the small number of data points.\n\nBoth SEDs are well reproduced with these new parameters for the low (high) state: $\\Gamma$=15 (20); $R_{\\rm diss}$=1700 (3500) $R_S$ and \nLog($P'_{\\rm inj}$)= 43.48 (44.17) erg s$^{-1}$.\nIn this case the values of $P'_{\\rm inj}$ are slightly lower for both low and high states,\nand their difference is smaller (a factor of $\\sim5$), while the difference \nin $R_{\\rm diss}$ is slightly larger: \nfrom 1700 to 3500 $R_S$ instead of from 1900 to 3300 $R_S$.\nThus the change in $\\Gamma$ has the main effect of slightly decreasing the required variation of the total injected power\nto account for the observed variability, but no other substantial differences are introduced.\nThis new fit also gives an idea of the degree of degeneracy in the fit parameters, due to the incompleteness of the data set,\nespecially in the $\\gamma$-ray band.\n\n\n\nIn Table 3 we report the logarithm of the jet power in the form of radiation, \nPoynting flux and bulk motion of electrons and protons, \nin erg s$^{-1}$, calculated for both SEDs.\nThey have been calculated from\n\\begin{equation}\nP_{\\rm i} = \\pi R^2 \\Gamma^2 c U^\\prime_{\\rm i} \n\\end{equation}\nwhere $U^\\prime_{\\rm i}$ is the energy density of interest, calculated in the comoving frame\n(see e.g. Celotti \\& Ghisellini 2008).\nAs discussed in Ghisellini et al. (2011), the power $P_{\\rm r}$ dissipated by the jet\nto produce the radiation we see is almost model--independent, since it depends\nonly on the observed luminosity and on the bulk Lorentz factor $\\Gamma$.\nThe power dissipated in other forms, on the other hand, depends on the amount of electrons ($P_{\\rm e}$),\nprotons ($P_{\\rm p}$), and magnetic field ($P_{\\rm B}$) carried by the jet,\nwhich have been estimated by applying our specific model.\nFurthermore, the power carried in the bulk motion of protons requires \nknowledge of how many protons are present per emitting lepton.\nThe values given in Table 3 assume one proton per emitting lepton.\n\n\\begin{table}\n\\label{tab:power}\n\\begin{center}\n\\begin{tabular}{ccccc}\n\\hline\\hline\\\\\n\\multicolumn{1}{c} {State}&\n\\multicolumn{1}{c} {$\\log P_{\\rm r}$}&\n\\multicolumn{1}{c} {$\\log P_{\\rm B}$}&\n\\multicolumn{1}{c} {$\\log P_{\\rm e}$}&\n\\multicolumn{1}{c} {$\\log P_{\\rm p}$}\\\\\n (1) & (2) & (3) & (4) & (5) \\\\\n\\hline\\\\ \nLow & 46.04 & 45.54 & 45.00 & 47.56 \\\\\nHigh & 46.83 & 45.54 & 46.17 & 48.41 \\\\\n\\hline \n\\end{tabular}\\end{center} \n\\caption{Jet powers derived from the SED modeling of the low and high states of IGR J22517+2217. \nColumn: \n(1) State; \n(2)--(5) logarithm of the jet power in the form of radiation ($P_{\\rm r}$), \nPoynting flux ($P_{\\rm B}$), bulk motion of electrons ($P_{\\rm e}$) and protons ($P_{\\rm p}$,\nassuming one proton per emitting electron). Powers are in erg s$^{-1}$.}\n\\end{table}\n\nFrom Table 3 we can see that the power\n$P_{\\rm r}$ changes from $\\sim0.15\\times L_{\\rm d}$ in the quiescent state, \nto $P_{\\rm r} \\sim L_{\\rm d}$ in the flaring state, i.e. the jet requires a power comparable \nto the disk luminosity to produce the radiation we see, in the latter case.\n\nTanaka et al. 
(2011) reported a similar behavior, based on {\\it Fermi}-LAT data, for the strong 2010 GeV flare of the blazar 4C+21.35:\nassuming similar efficiencies for the accretion disk and the jet, they estimated an intrinsic jet power $L_{jet}$\nchanging from $\\sim0.1 L_{acc}$ in the quiescent state to $\\sim1 L_{acc}$ during the flare ($L_{acc}$ being the intrinsic accretion power).\nThey argued that these results, combined with the findings of Abdo et al. (2010b) on several FSRQs detected by {\\it Fermi}-LAT,\nsuggest a scenario in which the observed $\\gamma$-ray variability of blazars is due to changes in the jet power,\nwhich normally represents only a small fraction of the accretion power, while during major flares it is capable of \ncarrying away almost all the available accretion power.\nOur findings are quantitatively similar, and are thus in agreement with this view.\n\nHowever, $P_{\\rm r}$ is only a {\\it lower limit} to the total jet power.\nThe total jet power, dominated by the bulk motion of the protons associated with the emitting \nelectrons, is $P_{\\rm jet} = P_{\\rm B} + P_{\\rm e} + P_{\\rm p} = 3.6\\times10^{47}$ \nand $2.6\\times10^{48}$ erg s$^{-1}$, in the low and high state, respectively. \n$P_{\\rm jet}$ is dominated by $P_{\\rm p}$: if there is indeed one proton per\nemitting lepton, then the jet is from 3 to 30 times more powerful than the \naccretion luminosity. \n\nIt has been proposed that jets in luminous blazars may well be numerically dominated by pairs,\nwhile still being dynamically dominated by protons (Sikora \\& Madejski 2000, Kataoka et al. 2008).\nWe computed the jet power due to protons assuming an upper limit for this ratio of one proton per 20 pairs.\nAbove this limit the jet is too ``light'' and the Compton drag produces significant\ndeceleration (Ghisellini \\& Tavecchio 2010).\nThe one-to-one ratio can instead be assumed as a lower limit.\n\nThe lower limits for $P_{\\rm p}$ obtained in this way are $1.8\\times10^{46}$ and $1.3\\times10^{47}$ erg s$^{-1}$ for the low and high state, respectively.\nTherefore, with this assumption, the jet power in electrons, protons and magnetic field becomes comparable with the radiation power.\nThis translates into a total jet power of $P_{\\rm jet} = 2.2\\times10^{46}$ and $1.4\\times10^{47}$ erg s$^{-1}$, respectively.\n\nThe values obtained assuming one proton per lepton are extreme, even when compared with the distribution of $P_{\\rm jet}$ and $L_{\\rm d}$ \ncomputed, with the same assumptions, for a sample of high redshift {\\it Fermi}\/LAT and BAT blazars in Ghisellini et al. (2011, Fig. 9), \nwhich are in the range $P_{\\rm jet}\\simeq10^{46}-2\\times10^{48}$ and $L_{\\rm d}\\simeq8\\times10^{45}-2\\times10^{47}$ erg s$^{-1}$. \nTo better clarify the remarkable behavior of IGR J22517+2217, \nwe note that similar values of jet power have been achieved during the exceptional flare \nof 3C 454.3 in December 2009 (Ackermann et al. 2010; Bonnoli et al. 2011). 
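\n\nThe jet power budget of Table 3, together with its pair-loaded variant discussed above, can be summed explicitly (a simple numerical cross-check of the figures quoted in the text):\n\\begin{verbatim}\n# Logarithmic jet powers from Table 3 (erg s^-1)\nlow = dict(PB=45.54, Pe=45.00, Pp=47.56)\nhigh = dict(PB=45.54, Pe=46.17, Pp=48.41)\n\ndef P_jet(p, protons_per_lepton=1.0):\n    # Total jet power P_B + P_e + P_p, rescaling the proton term by the\n    # assumed number of protons per emitting lepton.\n    return 10**p['PB'] + 10**p['Pe'] + protons_per_lepton * 10**p['Pp']\n\nprint(P_jet(low), P_jet(high))              # ~3.6e47 and ~2.6e48 erg/s\nprint(P_jet(low, 0.05), P_jet(high, 0.05))  # proton content reduced by a\n                                            # factor of 20: ~2.2e46 and ~1.4e47\n\\end{verbatim}\n\n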
IGR J22517+2217 therefore represents one of the ``monsters'' of the high-redshift Universe.\n\n\n\n\n\\section{Summary and conclusion}\n\n\nThanks to a new {\\it Suzaku} observation in the X-ray energy band, the {\\it Fermi} upper limits in the 0.1-100 GeV band,\nthe flux-selected spectra obtained through a re-analysis of the IBIS and BAT hard X-ray data, and other optical and radio archival data sets, \nwe were able to identify a strong flare episode in the high redshift, hard X-ray selected blazar IGR J22517+2217, which occurred in Jan 2005,\nfollowed by a period of quiescence, extending up to the present day.\nTo model the overall SEDs of the source in the flare and quiescent states, we adopted a leptonic, one-zone synchrotron and inverse Compton model. \nThe optical\/UV emission is interpreted as thermal emission from the accretion disk, plus IC from the corona, and reprocessed emission from the BLR. \n\nThe curvature observed in the X-ray spectra was previously proposed to be due to intrinsic, moderate absorption (N$_H\\sim2\\times10^{22}$ cm$^{-2}$).\nHowever, in the context of the broad-band SED modeling proposed in this paper, it is naturally accounted for by an intrinsic softening of the \nEC component around $\\sim10^{18}$ Hz. \n\nIn both states a very strong Compton dominance is observed, with the high energy hump (produced by EC) at least two orders of magnitude \nhigher than the low energy (synchrotron) one.\nThe high energy peak flux varies by a factor of 10 between the two states, while the high energy peak frequency remains almost constant, between $10^{20}$ and $10^{22}$ Hz.\nThe observed large Compton dominance constrains the value of the magnetic field,\nand hence the relevance of the self--Compton component, which is found to be negligible in both states.\nThe model can explain the observed variability as a variation of the total number of emitting electrons (a variation of a factor of $\\sim7$) \nand as a change in the dissipation radius, moving from within to outside the broad line region as the luminosity increases.\n\nIn the flaring state, the lower limit to the jet power, represented by the radiative component $P_{\\rm r}$,\nimplies that a power comparable to the disk luminosity is needed to produce the observed radiation.\nThe upper limit to the total jet power, dominated by the bulk motion of protons and estimated assuming one proton per emitting electron, \nis more than $\\sim30$ times larger than the accretion luminosity ($2.6\\times10^{48}$ erg s$^{-1}$). \nSuch extreme values have been derived only recently for a handful of extreme, high redshift, hard-X\/soft-$\\gamma$ ray selected FSRQs \nshowing a similarly strong Compton dominance,\nand are comparable with the value reached by 3C 454.3 during its exceptional 2009 flare.\n\n\n\\section*{Acknowledgements}\n\nWe thank the referee for useful comments that improved the paper.\n\nPartial support from the Italian Space Agency (contracts ASI-INAF\nASI\/INAF\/I\/009\/10\/0) is acknowledged. 
\n\nThis research has made use of the NASA\/IPAC Extragalactic Database (NED) \nwhich is operated by the Jet Propulsion Laboratory, California Institute of Technology, \nunder contract with the National Aeronautics and Space Administration.\n\nThe \\textit{Fermi} LAT Collaboration acknowledges generous ongoing support\nfrom a number of agencies and institutes that have supported both the\ndevelopment and the operation of the LAT as well as scientific data analysis.\nThese include the National Aeronautics and Space Administration and the\nDepartment of Energy in the United States, the Commissariat \\`a l'Energie Atomique\nand the Centre National de la Recherche Scientifique \/ Institut National de Physique\nNucl\\'eaire et de Physique des Particules in France, the Agenzia Spaziale Italiana\nand the Istituto Nazionale di Fisica Nucleare in Italy, the Ministry of Education,\nCulture, Sports, Science and Technology (MEXT), High Energy Accelerator Research\nOrganization (KEK) and Japan Aerospace Exploration Agency (JAXA) in Japan, and\nthe K.~A.~Wallenberg Foundation, the Swedish Research Council and the\nSwedish National Space Board in Sweden.\n\nAdditional support for science analysis during the operations phase is gratefully\nacknowledged from the Istituto Nazionale di Astrofisica in Italy and the Centre National d'\\'Etudes Spatiales in France.\n\n\n\n\n\n\n\n\\begin{appendix}\n\n\n\\end{appendix}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}