diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzfywc" "b/data_all_eng_slimpj/shuffled/split2/finalzzfywc" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzfywc" @@ -0,0 +1,5 @@ +{"text":"\\section{Supplementary information}\n\\subsection{Extraction of the exciton population from measured data}\n\\label{population}\n\nThe exciton population created by a pump pulse tuned into the phonon sideband\ncan be extracted from our two-pulse spectra (see Fig.~2) in three different ways\nby analyzing the peak heights of the three transitions: \n$|0\\rangle \\rightarrow |X\\rangle$, $|0\\rangle \\rightarrow |\\bar X\\rangle$\nand $|X\\rangle \\rightarrow |2X\\rangle$. The relation\nbetween the photocurrent ($PC$) signal and the exciton population is as follows. The $PC$ measured in our experiment is determined by the number of\nelectron-hole pairs in the sample, including pairs in the quantum dot (QD) and pairs\nexcited in the surrounding material by the pump and probe pulses, according to:\n\\begin{equation}\nPC=\\alpha(C^{'}_X+C^{'}_{\\bar{X}})+2\\beta C^{'}_{2X}+\\gamma N^{'}_\\text{s},\n\\label{Eq:PC}\n\\end{equation}\nwhere $C^{'}_X$ and $C^{'}_{\\bar{X}}$ are the populations of the two exciton states \nwith opposite spins that are created in the QD \nafter circularly polarized pump and probe pulses, while $C^{'}_{2X}$ is the biexciton population. \n$N^{'}_\\text{s}$ is the number of electron-hole pairs created \nin the surrounding material which scales linearly with the laser intensity.\n$\\alpha$, $\\beta$ and $\\gamma$ are the detection efficiencies of our \nphotocurrent measurement for each type of charge complexes. \nSpecifically, for our pump-probe measurement, $PC$ can be further subdivided into two parts: \na reference level $PC_\\text{R}$ and the change of the photocurrent signal originating \nfrom the QD induced by the probe pulse $\\Delta PC$ according to:\n\n\\begin{subequations}\n\\begin{align}\nPC&=PC_\\text{R} + \\Delta PC,\\\\\nPC_\\text{R} &= (\\alpha(C_X+C_{\\bar{X}})+ 2\\beta C_{2X})e^{-\\tau_\\text{delay}\/T_1}\n+ \\gamma N^{'}_\\text{s},\\\\\n\\Delta PC &= \\alpha (\\Delta C_X +\\Delta C_{\\bar{X}})+2\\beta \\Delta C_{2X},\n\\end{align}\n\\label{Eq:C}\n\\end{subequations}\nwhere $C_{X,\\bar{X},2X}$ are the exciton and biexciton\npopulations created immediately after the pump pulse\nand $\\Delta C_{X,\\bar{X},2X}$ is the change of the exciton population induced\nby the probe pulse in the QD. Since the electron can\ntunnel out from the QD, the exciton and biexciton populations decay exponentially with\ntime. Thus, $C_{X,\\bar{X},2X}e^{(-\\tau_\\text{delay}\/T_1)}=C^{'}_{X,\\bar{X},2X}$ represent the pump-created exciton or biexciton\npopulations that have remained in the QD until the arrival of the probe pulse. \nHere, $\\tau_\\text{delay}$ is the delay time between the pump and probe\npulses and $T_1$ is the electron tunnelling time determined using inversion recovery measurement \\cite{Kolodka2007}. In the above discussion, we have neglected \nthe loss of the exciton population due to the radiative decay, since the radiative decay time\n($\\sim 600$~ps) is much longer than our typical delay times $\\tau_\\text{delay}$ (10 - 30ps). 
\n$PC_\text{R}$ is determined by measuring the photocurrent signal from \nan off-resonant measurement where the pump pulse is tuned into the phonon \nsideband and the probe frequency is far from any of the resonances of the dot or the surrounding material\nsuch that $\Delta PC$ becomes negligible and the photocurrent signal coincides with $PC_\text{R}$\nin this case. \n$\Delta PC$ is the differential photocurrent signal that is discussed in the paper, \nwhich is determined from our experiment by subtracting the $PC_\text{R}$ obtained from the\noff-resonant measurement from the total photocurrent signal $PC$. \nThe detection efficiency $\alpha$ can be extracted from the single $\pi$ pulse experiment. \nDenoting by $\Delta PC(\pi)$ the maximum of the differential photocurrent signal reached \nwith a single $\pi$ pulse we find:\n\begin{equation}\n\Delta PC(\pi)=\alpha C_X(\pi)=\alpha.\n\label{Eq:alpha}\n\end{equation}\nHere, we have used that according to our path-integral simulations,\nthe phonon-induced deviation of $C_X(\pi)$ from the ideal value of 1 \nis negligible at low temperatures.\n\n\nNow let us derive the relation between the differential PC signal and the exciton population for \nthe case when pump and probe pulse are co-circularly polarized and the probe pulse\nis resonant to the $|0\rangle \rightarrow |X\rangle$ transition.\nIn this case, firstly, the $\sigma^{+}$ polarized pump pulse creates a certain exciton population $C_X$\nand consequently the ground-state occupation after the pump pulse is given by $C_{0}=1-C_{X}$.\nThen, in the time interval until the probe pulse arrives, the exciton population\nis reduced to $C_Xe^{-\tau_\text{delay}\/T_1}$ due to the tunnelling. The ground-state occupation, on the other hand,\nis not affected by the tunnelling and thus stays at the value of $C_{0}=1-C_{X}$ until the arrival of the probe.\nFinally, the $\pi$-pulse $\sigma^{+}$ polarized probe exchanges the populations of the\nstates $|0\rangle$ and $|X\rangle$ resulting in an exciton population after the probe of $C^{'}_{X}=1-C_{X}$.\nTherefore, the change of the $X$-exciton occupation induced by the probe pulse with the energy of $\hbar \omega_X$ \nis $\Delta C_X = 1-(1+ e^{-\tau_\text{delay}\/T_1})\,C_{X}$. \nUsing this result \ntogether with the fact that for co-polarized $\sigma^{+}$-pulses the populations of the $\bar X$-exciton \nand the biexciton never build up, we find from Eq.~(\ref{Eq:C}c):\n\begin{equation}\n\Delta PC_{0-X}=\alpha[1-(1+ e^{-\tau_\text{delay}\/T_1})\,C_{X}]\n\label{Eq:PCX}\n\end{equation}\nand thus with the help of Eq.~(\ref{Eq:alpha}) we eventually end up with:\n\begin{equation}\nC_X=\dfrac{1}{1+e^{-\tau_\text{delay}\/T_1}}\left(1-\dfrac{\Delta PC_{0-X}}{\Delta PC(\pi)}\right).\n\end{equation}\n\nThe situation is different when cross-circularly polarized pulses are used. Let\nus first discuss the case where the $\bar X$ polarized probe pulse is resonant\nto the $|0\rangle \rightarrow |\bar X\rangle$ transition. After the action of\nthe circularly $\sigma^{+}$ polarized pump pulse, the QD again has a certain probability $C_X$\nto be in the $|X\rangle$ state and the probability to find the dot in the ground-state is $C_{0}=1-C_{X}$. The $\sigma^{-}$ polarized probe pulse induces transitions\nfrom the ground-state to the $\bar X$ exciton.
Since the probe pulse has a pulse\narea of $\pi$ and the occupation of $|\bar X\rangle$ is zero before the arrival\nof the probe, the probe pulse fully converts the occupation that was left in \nthe ground-state after the pump pulse \ninto an occupation of the $\bar X$ exciton, i.e., $C^{'}_{\bar X}=C_{0}=1-C_{X}$.\nAgain, the ground-state occupation is not affected by the electron tunnelling\nand therefore no correction involving the tunnelling time $T_{1}$ should be applied.\nSince the probe pulse is off-resonant to the $X-2X$ transition we can neglect the probe induced change\nof the $|X\rangle$ and $|2X\rangle$ occupations.\nRecalling that $|\bar X\rangle$ is unoccupied before the probe, we find for\nthe resulting differential photocurrent signal $\Delta PC_{0-\bar{X}}$ with the \nprobe pulse being in resonance to the exciton transition:\n\begin{equation}\n\Delta PC_{0-\bar{X}}=\alpha (1-C_X),\n\label{Eq:PCbarX}\n\end{equation}\nwhich yields:\n\begin{equation}\nC_X=1-\Delta PC_{0-\bar{X}}\/ \Delta PC(\pi).\n\label{Eq:CXbarX}\n\end{equation}\n\nBesides the data measured at the $|0\rangle \rightarrow |X\rangle$ \nand $|0\rangle \rightarrow |\bar X\rangle$ transitions, the exciton population\ncreated by the pump can also be extracted from the exciton to biexciton\ntransition. A $\sigma^{+}$ polarized pump pulse tuned into the high-energy phonon\nsideband of the neutral exciton transition again creates a certain exciton population\n$C_X$ which evolves into $C_Xe^{-\tau_\text{delay}\/T_1}$ until the arrival of the probe.\nBiexcitons are not created, i.e., we have $C_{2X}=0$. A cross-polarized\nprobe pulse with pulse area $\pi$ resonant to the \n$|X\rangle \rightarrow |2X\rangle$ transition\nconverts the $|X\rangle$ population completely into\na $|2X\rangle$ population, which gives $C^{'}_X=0$ and\n$C^{'}_{2X}=C_Xe^{-\tau_\text{delay}\/T_1}$. Since the probe is now off-resonant to the\n$|0\rangle \rightarrow |\bar X\rangle$ transition, the occupation of the\n$|\bar X\rangle$ exciton induced by the probe is negligible \nand the ground-state occupation is not affected. \nThus, the $\Delta PC$ signal resulting from a probe pulse \nin resonance to the $|X\rangle \rightarrow |2X\rangle$\ntransition is given by:\n\begin{equation}\n\Delta PC_{X-2X}=2\beta \Delta C_{2X}+\alpha \Delta C_X=(2\beta - \alpha)C_Xe^{-\tau_\text{delay}\/T_1}.\n\label{Eq:PCbiexciton}\n\end{equation}\n$\beta$ can be determined from a separate experiment in which the pump is a $\pi$ pulse resonant with $X$ and the probe is a $\pi$ pulse resonant with $2X$.
According to Eqs.~\eqref{Eq:alpha} and \eqref{Eq:PCbiexciton}, we have:\n\begin{equation}\n\beta=0.5(e^{\tau_\text{delay}\/T_1}\Delta PC_{X-2X}(\pi)+\Delta PC(\pi)).\n\label{Eq:beta}\n\end{equation}\n\nInserting Eqs.~\eqref{Eq:alpha} and \eqref{Eq:beta} into Eq.~\eqref{Eq:PCbiexciton} \nwe can extract the exciton population after the pump from:\n\begin{equation}\nC_X= \dfrac{\Delta PC_{X-2X}}{\Delta PC_{X-2X}(\pi)}.\n\end{equation}\n\n\subsection{Model}\n\label{Model}\n\nFor our calculations we used the same model for an optically driven strongly \nconfined quantum dot as in Ref.~\cite{Glassl2013},\nwhich is based on the Hamiltonian\n\begin{align}\n \label{eq:Hamiltonian}\n H = H_{\rm{QD-light}} + H_{\rm{QD-phonon}},\n\end{align}\nwhere \n\begin{align}\n H_{\rm{QD-light}} = \hbar\omega^{0}_{X}| X\rangle\langle X|\n+\frac{\hbar\Omega(t)}{2} \left[ | 0\rangle\langle X| + |X\rangle \langle 0| \right],\n\end{align}\nand\n\begin{align}\n H_{\rm{QD-phonon}} \!=\! \sum_{\bf q} \hbar\omega_{\bf q}\,b^\dag_{\bf q} b_{\bf q} \n\!+\! \sum_{\bf q} \hbar \big( \gamma_{\bf q} b_{\bf q} \!+\! \gamma^{\ast}_{\bf q} b^\dag_{\bf q}\n \big) |X \rangle\langle X|.\n\label{dot-ph}\n\end{align} \nThe ground-state $|0\rangle$ is chosen as the zero of the energy and\nthe phonon-free energy of the transition to the single exciton state $|X\rangle$ is denoted\nby $\hbar\omega^{0}_{X}$. The Rabi frequency $\Omega(t)$ is proportional to the\nelectric field envelope of a circularly polarized Gaussian laser pulse with\nfrequency $\omega_{L}$, which is detuned from the ground-state to exciton\ntransition by $\Delta = \omega_{L}-\omega_{X}$, where $\omega_{X}$ is the\nfrequency of the single exciton resonance which deviates from $\omega^{0}_{X}$\nby the polaron shift that results from the dot-phonon coupling\nin Eq.~(\ref{dot-ph}). The coupling to the laser field\nis treated in the common rotating wave and dipole approximations. The operator\n$b^\dag_{\bf q}$ creates a longitudinal acoustic (LA) bulk phonon with wave\nvector $\bf{q}$ and energy $\hbar \omega_{\bf{q}}$. We assume a linear\ndispersion relation $\omega_{\bf{q}} = c_{s} |\bf{q}|$, where $c_{s}$ denotes\nthe speed of sound. The phonons are coupled via the deformation potential only\nto the exciton state. This coupling is expressed by the exciton-phonon coupling\n$\gamma_{\bf{q}}=\frac{|\bf{q}|}{\sqrt{2V\rho \hbar \omega_{\bf{q}}}}\n\left(D_{\rm{e}} \Psi^{\rm{e}}({\bf q}) - D_{\rm{h}} \Psi^{\rm{h}}({\bf\nq})\right)$, where $\rho$ denotes the mass density of the crystal, $V$ the mode\nvolume, $D_{\rm{e\/h}}$ the deformation potential constants, and\n$\Psi^{\rm{e\/h}}(\bf{q})$ the formfactors of electron and hole, respectively. As\nexplained in the main article, we calculate the formfactors from the\nground-state wavefunctions of a spherically symmetric, parabolic confinement\npotential. It should be noted that, in the pure dephasing model for the\ndot-phonon coupling, no transitions between the bare electronic states can be\ninduced by the continuum of LA phonons, which can change the electronic\noccupations only in the presence of the laser field. We assume the system to be\ninitially in a product state of a thermal phonon distribution at the temperature\nof the cryostat and a pure ground-state of the electronic subsystem.
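For concreteness, we also note the closed form this choice of confinement implies (a standard result for harmonic confinement, quoted here as an illustration): a ground-state wavefunction $\Psi^{\rm{e\/h}}(\mathbf{r})\propto\exp[-r^{2}\/(2a_{\rm{e\/h}}^{2})]$ with localization length $a_{\rm{e\/h}}$ yields the Gaussian formfactor\n\begin{equation}\n\Psi^{\rm{e\/h}}(\mathbf{q})=\exp\!\left(-\frac{q^{2} a_{\rm{e\/h}}^{2}}{4}\right),\n\end{equation}\nso that larger dots (larger $a_{\rm{e\/h}}$) cut off the coupling to short-wavelength phonons more strongly.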
We use the\nmaterial parameters given in Ref.~\cite{Krummheuer2002} for GaAs, which are: $\rho =\n5370 \; \rm{kg}\/\rm{m}^3$, $c_{s} = 5110 \; \rm{m}\/\rm{s}$, $D_{\rm{e}} = 7.0 \;\n\rm{eV}$, and $D_{\rm{h}} = -3.5 \; \rm{eV}$.\n\nTo obtain the time evolution of the electronic density matrix elements predicted \nby this model, we make use of a numerically exact real-time path-integral\napproach, described in detail in Ref.~\cite{Vagov2011}. This gives us the\nopportunity to calculate the dynamics of the quantum dot with a high and\ncontrollable numerical precision and without further approximations to the\ngiven Hamiltonian. This includes taking into account all multi-phonon processes\nand non-Markovian effects.\n\n\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Conclusion and Future Work}\n\label{sec:conclusion}\n\nWe present Flexible BFT, a protocol that allows diverse clients with different assumptions to use the same ledger. Flexible BFT allows the clients to tolerate combined (Byzantine plus alive-but-corrupt\xspace) faults exceeding 1\/2 and 1\/3 for synchrony and partial synchrony, respectively. At a technical level, under synchrony, we show a protocol in which the replicas execute at network speed and only the commit rule uses the synchrony assumption. For partial synchrony, we introduce the notion of Flexible Byzantine Quorums by deconstructing existing BFT protocols to understand the role played by the different quorums. We combine the two to form Flexible BFT, which obtains the best of both worlds.\n\nOur liveness proof in Section~\ref{sec:proof}\nemploys a strong assumption that all clients have correct commit rules.\nThis is because our alive-but-corrupt\xspace fault model did not specify \nwhat these replicas would do if they can violate safety for some clients.\nIn particular, they may stop helping liveness. \nHowever, we believe this will not be a concern \nonce we move to a more realistic rational model.\nIn that case, the best strategy for alive-but-corrupt\xspace replicas\nis to attack the safety of clients with unsafe commit rules \nwhile preserving liveness for clients with correct commit rules. \nSuch an analysis in the rational fault model\nremains interesting future work.\nOur protocol also assumes that all replicas have clocks that advance at the same rate. It is interesting to explore whether our protocol can be modified to work with clock drifts.\n\n\n\section{Discussion}\n\label{sec:discussion}\n\nAs we have seen, three parameters \n${q_{r}}$, ${q_c}$, and $\Delta$ determine the protocol. \n${q_{r}}$ is the only parameter for the replicas\nand is picked by the service administrator.\nThe choice of ${q_{r}}$\ndetermines a set of client assumptions that can be supported. \n${q_c}$ and $\Delta$ are chosen by clients to commit blocks.\nIn this section, we first discuss the client assumptions supported by a\ngiven ${q_{r}}$ and then discuss the trade-offs \nbetween different choices of ${q_{r}}$.\n\n\subsection{Client Assumptions Supported by ${q_{r}}$}\n\label{sec:client-beliefs-given}\n\n\begin{figure}[tbp]\n \centering\n \includegraphics[width=0.45\textwidth]{single-curve.pdf}\n \caption{\textbf{Clients supported for ${q_{r}} = 2\/3$.}}\n \label{fig:clients-supported}\n\end{figure}\n\nFigure~\ref{fig:clients-supported} represents\nthe clients supported at ${q_{r}} = 2\/3$.
\nThe x-axis represents Byzantine faults\nand the y-axis represents total faults (Byzantine plus a-b-c\xspace). \nEach point on this graph \nrepresents a client fault assumption as a pair: (Byzantine faults, total faults).\nThe shaded gray area indicates an ``invalid area'' since we\ncannot have fewer total faults than Byzantine faults.\nA missing dimension\nin this figure is the choice of $\Delta$. \nThus, the synchrony guarantee shown in this figure is for clients\nthat choose a correct synchrony bound. \n\nClients with partial-synchrony assumptions\ncan get fault tolerance on (or below) the starred orange line. \nThe rightmost point on the line is $(1\/3, 1\/3)$, i.e., we\ntolerate less than a third of Byzantine replicas and no additional\na-b-c\xspace replicas. \nThis is the setting of existing partially synchronous consensus\n protocols~\cite{DLS88,castro1999practical,yin2018hotstuff}. \nFlexible BFT\xspace generalizes these protocols by giving clients the option of\nmoving up-left along the line, \ni.e., tolerating fewer Byzantine and more total faults.\nBy choosing ${q_c}>{q_{r}}$, a client tolerates \n$< {q_c}+{q_{r}}-1$ total faults for safety \nand $\leq 1-{q_c}$ Byzantine faults for liveness. \nIn other words, as a client moves left, \nfor every additional vote it requires,\nit tolerates one fewer Byzantine fault and gains overall\none higher total number of faults (i.e., two more a-b-c\xspace faults).\nThe leftmost point on this line is $(0, 2\/3)$, tolerating no \nByzantine replicas and the highest fraction of a-b-c\xspace replicas. \n\nMoreover, for clients who believe in synchrony,\nif their $\Delta$ assumption is correct,\nthey enjoy 1\/3 Byzantine tolerance and 2\/3 total tolerance\nrepresented by the green diamond.\nThis is because synchronous commit rules are not parameterized by\nthe number of votes received. \n\n\paragraph{How do clients pick their commit rules?}\nIn Figure~\ref{fig:clients-supported}, the shaded starred orange\nportion of the plot represents fault tolerance provided \nby the partially synchronous commit rule (CR1). \nSpecifically, choosing ${q_c}$ such that ${q_c}+{q_{r}}-1$ matches the required total fault \nfraction yields the necessary commit rule. On the other hand, if\na client's required fault tolerance lies in the circled green portion\nof the plot, then the synchronous commit rule (CR2) with an\nappropriate $\Delta$ picked by the client yields the necessary\ncommit rule. Finally, if a client's target fault tolerance corresponds to the\nwhite region of the plot, then \nit is not achievable with this ${q_{r}}$.\n(A sketch of this selection logic appears below.)\n\n\paragraph{Clients with incorrect assumptions and recovery.}\nIf a client has an incorrect assumption with respect to the fault\nthreshold or synchrony parameter $\Delta$, then it can lose\nsafety or liveness. If a client believing in synchrony\npicks too small a $\Delta$ and commits a value $b$, \nit is possible that a conflicting\nvalue $b'$ may also be certified. Replicas may\nchoose to extend the branch containing $b'$, effectively\nreverting $b$ and causing a safety violation. Whenever a client\ndetects such a safety violation, it may need to revert\nsome of its commits and increase $\Delta$ to recover.\n\nFor a client with partial-synchrony assumption, if it loses safety, \nit can update its fault model to move left along\nthe orange starred line, i.e., tolerate higher total faults but fewer\nByzantine. On the other hand, if it observes no progress as its\nthreshold ${q_c}$ is not met, then it moves towards the\nright.
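The commit-rule selection described above can be summarized in a short sketch (hypothetical helper code, not part of the protocol; quorum fractions are treated as exact reals, so the strict-inequality bookkeeping of discrete replica counts is elided):\n\begin{verbatim}
def pick_commit_rule(byz, total, q_r, believes_synchrony):
    # byz:   assumed fraction of Byzantine replicas
    # total: assumed fraction of Byzantine plus a-b-c replicas
    # q_r:   the replica quorum fixed by the service administrator
    # CR1 (partially synchronous): safety tolerates total < q_c + q_r - 1,
    # liveness tolerates byz <= 1 - q_c, and clients must use q_c >= q_r.
    q_c = max(q_r, total + (1 - q_r))  # smallest admissible commit quorum
    if byz <= 1 - q_c:
        return ("CR1", q_c)
    # CR2 (synchronous): safety tolerates total < q_r, liveness tolerates
    # byz <= 1 - q_r; the client must also pick a correct bound Delta.
    if believes_synchrony and total < q_r and byz <= 1 - q_r:
        return ("CR2", None)
    return None  # white region: not achievable with this q_r
\end{verbatim}\nFor instance, at ${q_{r}}=2\/3$ the sketch returns CR1 with ${q_c}=2\/3$ for a client assuming $(1\/3, 1\/3)$, returns CR1 with a larger ${q_c}$ as the client moves up-left along the starred line, and falls through to CR2 for assumptions in the circled green region.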
However, if the true fault model is in the circled green\nregion in Figure~\ref{fig:clients-supported}, then the client\ncannot find a partially synchronous commit rule that is both safe and live\nand eventually has to switch to using a synchronous commit rule.\n\nRecall that the goal of a-b-c\xspace replicas is to attack \nsafety. Thus, clients with\nincorrect assumptions may be exploited by\na-b-c\xspace replicas for their own gain (e.g., by double-spending). \nWhen a client updates to a correct assumption \nand recovers from unsafe commits, its subsequent commits\nwould be safe and final. This is remotely analogous to Bitcoin \n-- if a client commits to a transaction when it is a few blocks deep and a\npowerful adversary succeeds in creating an alternative longer fork, the\ncommit is reverted. \n\n\subsection{Comparing Different ${q_{r}}$ Choices}\n\label{sec:comp-diff-qmins}\n\begin{figure}[tbp]\n \centering\n \includegraphics[width=0.45\textwidth]{pbft.pdf}\n \caption{Clients supported by Flexible BFT\xspace at different\n ${q_{r}}$'s. The legend represents the different ${q_{r}}$ values.}\n \label{fig:varying-qmin}\n\end{figure}\nWe now look at the service administrator's choice in picking\n${q_{r}}$. \nIn general, the service administrator's goal is to tolerate a large number of Byzantine\nand a-b-c\xspace faults, i.e., move towards top and\/or right of the figure.\nFigure~\ref{fig:varying-qmin} shows the trade-offs\nin terms of clients supported by different ${q_{r}}$ values in Flexible BFT\xspace.\n\nFirst, it can be observed that for clients with partial-synchrony assumptions, \n${q_{r}} \geq 2\/3$ dominates ${q_{r}} < 2\/3$. \nObserve that the fraction of Byzantine\nreplicas $(B)$ is bounded by $B < {q_c}+{q_{r}}-1$ and $B \leq\n1-{q_c}$, so $B \leq {q_{r}}\/2$. \nThus, as ${q_{r}}$ decreases, Byzantine fault tolerance decreases.\nMoreover, since the total fault tolerance is \n${q_c} + {q_{r}} - 1$, a lower ${q_{r}}$ also tolerates a\nsmaller fraction of total faults for a fixed ${q_c}$.\n\nFor ${q_{r}} \geq 2\/3$ or for clients believing in synchrony, \nno value of ${q_{r}}$ is Pareto optimal. \nFor clients with partial-synchrony assumptions, as ${q_{r}}$ increases, \nthe total fault tolerance for safety increases.\nBut since ${q_c} \geq {q_{r}}$, we have $B \leq 1 - {q_{r}}$, and\nhence the Byzantine tolerance for liveness decreases.\nFor clients believing in synchrony, the total fault tolerance\nfor safety is $< {q_{r}}$ and the Byzantine fault tolerance for\nliveness is $1-{q_{r}}$. \nIn both cases, the choice of ${q_{r}}$ represents a safety-liveness\ntrade-off.\n\n\n\subsection{Separating Alive-but-corrupt\xspace Resilience from Diversity}\n\nSo far, we presented the Flexible BFT\xspace techniques and protocols \nto simultaneously provide diverse client support and stronger a-b-c\xspace fault tolerance.\nIndeed, we believe both properties are desirable and they strengthen each other.\nBut we remark that these two properties can be provided separately.\n\nIt is relatively straightforward to provide stronger fault tolerance in the a-b-c\xspace model in a classic uniform setting.
\nFor example, under partial-synchrony, one can simply use a larger quorum in PBFT (without the ${q_{r}}$\/${q_c}$ replica\/client quorum separation).\nBut we note that a higher total (a-b-c\xspace plus Byzantine) tolerance comes at the price of a lower Byzantine tolerance.\nIn a uniform setting, this means \emph{all} clients have to sacrifice some Byzantine tolerance.\nIn the diverse setting, Flexible BFT\xspace gives clients the freedom to choose the fault assumption they believe in,\nand a client can choose the classic Byzantine fault model.\n\nOn the flip side, if one hopes to support diverse clients in the classic Byzantine fault model (without a-b-c\xspace faults),\nthe ``dimension of diversity'' reduces.\nOne example is the network speed replica protocol in Section~\ref{sec:overv-synchr-flex},\nwhich supports clients that believe in different synchrony bounds.\nThat protocol can be further extended to support clients with a (uniform) partial-synchrony assumption.\nClients with partial-synchrony assumption are uniform since\nwe have not identified any type of ``diversity'' outside a-b-c\xspace faults for them. \n\n\n\n\section{Introduction}\n\label{sec:introduction}\n\nByzantine fault tolerant (BFT) protocols are used to build replicated services~\cite{PSL80,LSP82,schneider1990implementing}.\nRecently, they have received revived interest\nas the algorithmic foundation of what is known as decentralized ledgers, or blockchains. \n\nIn the classic approach to BFT protocol design,\na protocol designer or a service administrator first picks a set of assumptions \n(e.g., the fraction of Byzantine faults and certain timing assumptions)\nand then devises a protocol (or chooses an existing one) tailored for that particular setting.\nThe assumptions made by the protocol designer are imposed upon all parties involved --- \nevery replica maintaining the service as well as every client (also known as the\n``learner'' role) using the service.\nSuch a protocol collapses if deployed under settings that differ from the one it is designed for. \nIn particular, optimal-resilience partially synchronous solutions~\cite{DLS88,castro1999practical} \nbreak (lose safety and liveness) if the fraction of Byzantine faults exceeds\n$1\/3$. \nSimilarly, optimal-resilience synchronous solutions~\cite{abraham2018synchronous,hanke2018dfinity}\ndo not obtain safety or liveness if the fraction of Byzantine faults exceeds\n$1\/2$ or if the synchrony bound is violated.\n\nIn this work, we introduce a new approach for BFT protocol design called \emph{Flexible BFT\xspace}. \nOur approach offers advantages in the two aspects above.\nFirst, the Flexible BFT\xspace approach enables protocols \nthat tolerate more than $1\/3$ (resp.\ $1\/2$) corruption faults in the\npartial-synchrony (resp.\ synchrony) model\n--- provided that the number of Byzantine faults does not exceed the respective resilience bounds. \nSecond, the Flexible BFT\xspace approach allows a certain degree of separation between \nthe fault model and the protocol design.\nAs a result, Flexible BFT\xspace allows diverse clients with different \nfault assumptions and timing assumptions (synchrony or not)\nto participate in the same protocol.\nWe elaborate on these two aspects below.\n\n\paragraph{Stronger resilience.}\nWe introduce a mixed fault model with a new type of fault called\n\emph{alive-but-corrupt\xspace} (a-b-c\xspace for short) faults.
\nAlive-but-corrupt\xspace replicas actively try to prevent the system from maintaining a safe consensus decision \nand they might arbitrarily deviate from the protocol for this purpose. \nHowever, if they cannot break safety,\nthey will not try to prevent the system from reaching a (safe) decision.\nThe rationale for this new type of fault is that \nviolating safety may provide the attacker with gains (e.g., a double-spend attack)\nbut preventing liveness usually does not.\nIn fact, a-b-c\xspace replicas may gain rewards from keeping the replicated service\nlive, e.g., by collecting service fees.\nWe show a family of protocols \nthat tolerate a combination of Byzantine and a-b-c\xspace faults \nthat exceeds $1\/3$ in the partially synchronous model\nand exceeds $1\/2$ in the synchronous model.\nOur results do not violate existing resilience bounds \nbecause the fraction of Byzantine faults is always smaller than the respective bounds.\n\n\n\paragraph{Diversity.}\nThe Flexible BFT\xspace approach further provides certain separation between the fault model\nand the protocol. The design approach builds a protocol whose transcript can be\ninterpreted by external clients with diverse beliefs, who draw different consensus\ncommit decisions based on their beliefs.\nFlexible BFT\xspace guarantees safety and liveness as long as the\nclients' beliefs are correct; thus two clients with correct\nassumptions agree with each other.\nClients specify (i) the fault threshold they need to tolerate, \nand (ii) the message delay bound, if any, they believe in. \nFor example, one instance of Flexible BFT\xspace can support a client that requires tolerance against $1\/5$ Byzantine faults plus $3\/10$ a-b-c\xspace faults, \nwhile simultaneously supporting another client who requires tolerance against $1\/10$ Byzantine faults plus $1\/2$ a-b-c\xspace faults,\nand a third client who believes in synchrony and requires \n$3\/10$ Byzantine plus $2\/5$ a-b-c\xspace tolerance.\n\nThis novel separation of fault model from protocol design\ncan be useful in practice in several ways.\nFirst, different clients may naturally hold different assumptions about the system.\nSome clients may be more cautious and require a higher resilience than others;\nsome clients may believe in synchrony while others do not.\nMoreover, even the same client may assume a larger fraction of faults \nwhen dealing with a \$1M transaction compared to a \$5 one.
\nThe rationale is that more replicas may be willing to collude\nto double spend a high-value transaction.\nIn this case, the client can wait for more votes before committing the \\$1M transaction.\nLast but not least, a client may update its assumptions based on certain events it observes.\nFor example, if a client receives votes for conflicting values,\nwhich may indicate an attempt at attacking safety,\nit can start requiring more votes than usual;\nif a client who believes in synchrony notices abnormally long message delays,\nwhich may indicate an attack on network infrastructure,\nit can update its synchrony bound to be more conservative or switch to a partial-synchrony assumption.\n\n\nThe notion of ``commit'' needs to be clarified in our new model.\nClients in Flexible BFT\\xspace have different assumptions and hence different commit rules.\nIt is then possible and common that a value is committed by one client but not another.\nFlexible BFT\\xspace guarantees that any two clients whose assumptions are correct \n(but possibly different) commit to the same value.\nIf a client's assumption is incorrect, however,\nit may commit inconsistent values which may later be reverted.\nWhile this new notion of commit may sound radical at first, \nit is the implicit behavior of existing BFT protocols.\nIf the assumption made by the service administrator is violated in a classic BFT protocol\n(e.g., there are more Byzantine faults than provisioned),\nclients may commit to different values and they have no recourse.\nIn this sense, Flexible BFT\\xspace is a robust generalization of classic BFT protocols. \nIn Flexible BFT\\xspace, if a client performs conflicting commits,\nit should update its assumption to be more cautious\nand re-interpret what values are committed under its new assumption.\nIn fact, this ``recovery'' behavior is somewhat akin to Bitcoin.\nA client in Bitcoin decides how many confirmations are needed \n(i.e., how ``deeply buried'') to commit a block.\nIf the client commits but subsequently an alternative longer fork appears, its commit is reverted.\nGoing forward, the client may increase the number of confirmations it requires.\n\n\n\\paragraph{Key techniques.}\nFlexible BFT\\xspace centers around two new techniques.\nThe first one is a novel synchronous BFT protocol with replicas\n executing at \\emph{network speed}; \nthat is, the protocol run by the replicas does not assume synchrony.\nThis allows clients in the same protocol \nto assume different message delay bounds and commit at their own\npace. 
\nThe protocol thus separates timing assumptions of replicas from timing assumptions of clients.\nNote that this is only possible via Flexible BFT\xspace's separation of\nprotocol from the fault model:\nthe action of committing is only carried out by clients, not by replicas.\nThe other technique involves a breakdown of the different roles that \nquorums play in different steps of partially synchronous BFT protocols.\nOnce again, made possible by the separation in Flexible BFT\xspace,\nwe will use one quorum size for replicas to run a protocol,\nand let clients choose their own quorum sizes for committing in the protocol.\n\n\paragraph{Contributions.}\nTo summarize, our work has the following contributions.\n\n\begin{enumerate}[topsep=8pt,itemsep=8pt]\n\n\item \n\textbf{Alive-but-corrupt faults.} We introduce a new type of fault, called\nalive-but-corrupt\xspace fault, which attacks safety but not liveness.\n\n\item \textbf{Synchronous BFT with network speed replicas.} \n\tWe present a synchronous protocol in which \n\tonly the commit step requires synchrony.\n\tSince replicas no longer perform commits in our approach,\n\tthe protocol simultaneously supports clients assuming different synchrony bounds.\n\t\t\n\item \textbf{Flexible Byzantine Quorums.} \n\tWe deconstruct existing BFT protocols to understand the role played by different quorums \n\tand introduce the notion of Flexible Byzantine Quorums.\n\tA protocol based on Flexible Byzantine Quorums simultaneously supports clients assuming different fault models.\n\n\item \textbf{One BFT Consensus Solution for the Populace.} \n\tPutting the above together, we present a new approach for BFT design, Flexible BFT\xspace.\n\tOur approach has stronger resilience and diversity: Flexible BFT\xspace tolerates a fraction of combined (Byzantine plus a-b-c\xspace) faults \n\tbeyond existing resilience bounds, and clients with diverse fault\nand timing beliefs are supported in the same protocol. \n\n\end{enumerate}\n\n\paragraph{Organization.}\nThe rest of the paper is organized as follows.\nSection~\ref{sec:model} defines the Flexible BFT\xspace model where replicas and clients are separated.\nWe will describe in more detail our key techniques \nfor synchrony and partial-synchrony in\nSections~\ref{sec:overv-synchr-flex}~and~\ref{sec:overview-async}, respectively. \nSection~\ref{sec:protocol} puts these techniques together and presents the final protocol.\nSection~\ref{sec:discussion} discusses the results obtained by the\nFlexible BFT design and \nSection~\ref{sec:related-work} describes related work.\n\n\n\n\section*{Acknowledgement}\nWe thank Ittai Abraham and Ben Maurer for many useful discussions\non Flexible BFT. We thank Marcos Aguilera for many insightful\ncomments on an earlier draft of this work.\n\bibliographystyle{plain}\n\n\section{Modeling Flexible BFT\xspace}\n\label{sec:model}\n\nThe goal of Flexible BFT\xspace is to build a replicated service that takes requests from clients\nand provides clients an interface of a single non-faulty server,\ni.e., it provides clients with the same totally ordered sequence of values. \nInternally, the replicated service uses multiple servers,\nalso called replicas, to tolerate some number of faulty servers. \nThe total number of replicas is denoted by $n$.
In this paper, whenever we\nspeak about a set of replicas or messages, we denote the set size as its fraction over $n$.\nFor example, we refer to a set of $m$ replicas as ``$q$ replicas'' where $q=m\/n$.\n\nBorrowing notation from Lamport~\\cite{lamport2006fast}, \nsuch a replicated service has three logical actors: \n\\emph{proposers} capable of sending new values,\n\\emph{acceptors} who add these values to a totally ordered sequence (called a blockchain), \nand \\emph{learners} who decide on a sequence of values based on the transcript of the protocol and execute them on a state machine. \nExisting replication protocols provide the following two properties:\n\n\\begin{description}[topsep=8pt,itemsep=4pt]\n\\item[-] \\textbf{Safety.} Any two learners learn the same sequence of values.\n\\item[-] \\textbf{Liveness.} A value proposed by a proposer will\n eventually be executed by every learner.\n\\end{description}\n\nIn existing replication protocols, \nthe learners are assumed to be \\emph{uniform}, \ni.e., they interpret a transcript using the same rules \nand hence decide on the same sequence of values.\nIn Flexible BFT\\xspace, we consider diverse learners with different assumptions. \nBased on their own assumptions, \nthey may interpret the transcript of the protocol differently. \nWe show that so far as the assumptions \nof two different learners are both correct, \nthey will eventually learn the same sequence of values.\nA replication protocol in the Flexible BFT\\xspace approach\nsatisfies the following properties:\n\n\\begin{description}[topsep=8pt,itemsep=4pt]\n\\item[-] \\textbf{Safety for diverse learners.} Any two\n learners with correct but potentially different assumptions\n learn the same sequence of values.\n\\item[-] \\textbf{Liveness for diverse learners.} A value\n proposed by a proposer will eventually be executed by every learner\n with a correct assumption.\n\\end{description}\n\nIn a replicated service, clients act as proposers and learners, \nwhereas the replicas (replicated servers) are acceptors.\nThus, safety and liveness guarantees are defined with respect to clients.\n\n\\paragraph{Fault model.} \nWe assume two types of faults within the replicas: Byzantine and \\emph{alive-but-corrupt\\xspace} (a-b-c\\xspace for short). \nByzantine replicas behave arbitrarily. \nOn the other hand, the goal of a-b-c\\xspace replicas\nis to attack safety but to preserve liveness.\nThese replicas will take any actions that help them break safety of the protocol.\nHowever, if they cannot succeed in breaking safety, they will help provide liveness.\nConsequently, in this new fault model, \nthe safety proof should treat a-b-c\\xspace replicas similarly to Byzantine.\nThen, \\emph{once safety is proved},\nthe liveness proof can treat a-b-c\\xspace replicas similarly to honest.\nWe assume that the adversary is static, i.e., the adversary determines which\nreplicas are Byzantine and a-b-c\\xspace before the start of the protocol. \n\n\\paragraph{Other assumptions.} \nWe assume hash functions, digital signatures and a public-key infrastructure (PKI).\nWe use $\\sig{x}_R$ to denote a message $x$ signed by a replica $R$. \nWe assume pair-wise communication channels between replicas. \nWe assume that all replicas have clocks that advance at the same\nrate.\n\n\n\\section{Flexible Byzantine Quorums for Partial Synchrony - Overview}\n\\label{sec:overview-async}\n\nIn this section, we explain the high-level insights of Flexible\nByzantine Quorums in Flexible BFT\\xspace. 
\nAgain, for ease of exposition, we\nfocus on single-shot consensus and do not consider termination. \nWe start by reviewing the Byzantine Quorum\nSystems~\cite{Malkhi:1997:BQS:258533.258650} that underlie\nexisting partially synchronous protocols \nthat tolerate less than 1\/3 Byzantine faults (Section~\ref{sec:byz-quorums}). \nWe will illustrate that multiple uses of 2\/3-quorums \nactually serve different purposes in these protocols.\nWe then generalize these protocols to use \n\emph{Flexible Byzantine Quorums}~(Section~\ref{sec:flexible-byz-quorum}),\nthe key idea that enables more than 1\/3 fault tolerance \nand allows diverse clients with varying assumptions to co-exist. \n\n\subsection{Background: Quorums in PBFT}\n\label{sec:byz-quorums}\n\nExisting protocols for solving consensus in the partially synchronous setting \nwith optimal $1\/3$-resilience\nrevolve around voting by \emph{Byzantine quorums} of replicas. \nTwo properties of Byzantine quorums are utilized for achieving safety and liveness.\nFirst, any two quorums intersect in at least one honest replica -- quorum intersection.\nSecond, there exists a quorum that contains no Byzantine faulty replicas -- quorum availability.\nConcretely, when less than $1\/3$ of the replicas are Byzantine,\nquorums are set to size ${q_{r}}=2\/3$.\n(To be precise, ${q_{r}}$ is slightly larger than 2\/3, \ni.e., $2f+1$ out of $3f+1$ where $f$ is the number of faults,\nbut we will use ${q_{r}}=2\/3$ for ease of exposition.)\nThis guarantees an intersection of size at least $2{q_{r}}-1=1\/3$, \nhence at least one honest replica in the intersection.\nAs for availability, there exist ${q_{r}}=2\/3$ honest replicas to form\na quorum.\n\nTo dissect the use of quorums in BFT protocols, consider their use in\nPBFT~\cite{castro1999practical} for providing safety and liveness. \nPBFT operates in a view-by-view manner.\nEach view has a unique leader and consists of the following steps: \n\n\begin{itemize}\n\n\item[-] \textbf{Propose.} \n A leader $L$ proposes a value $b$.\n\n\item[-] \textbf{Vote 1.} \n On receiving the first value $b$ for a view $v$, a replica votes for $b$ \n if it is \emph{safe}, as determined by a locking mechanism described below.\n A set of ${q_{r}}$ votes forms a certificate $\mathcal{C}^{{q_{r}}}(b)$.\n\n\item[-] \textbf{Vote 2.} \n On collecting $\mathcal{C}^{{q_{r}}}(b)$,\n a replica ``locks'' on $b$ and votes for $\mathcal{C}^{{q_{r}}}(b)$.\n\n\item[-] \textbf{Commit.} \n On collecting ${q_{r}}$ votes for $\mathcal{C}^{{q_{r}}}(b)$, a client learns that proposal\n$b$ becomes a committed decision.
\n\n\\end{itemize}\nIf a replica locks on a value $b$ in a view, then it votes only for $b$ in subsequent views \nunless it ``unlocks'' from $b$.\nA replica ``unlocks'' from $b$ if it learns that ${q_{r}}$ replicas are \\emph{not}\nlocked on $b$ in that view or higher \n(they may be locked on other values or they may not be locked at all).\n\nThe properties of Byzantine quorums are harnessed in PBFT for safety and liveness as follows:\n\n\\begin{description}[topsep=8pt,itemsep=4pt]\n\n\\item[Quorum intersection within a view.]\nSafety within a view is ensured by the first round of votes.\nA replica votes only once per view.\nFor two distinct values to both obtain certificates, \none honest replica needs to vote for both, which cannot happen.\n\n\\item[Quorum intersection across views.]\nSafety across views is ensured by the locking mechanism.\nIf $b$ becomes a committed decision in a view, \nthen a quorum of replicas lock on $b$ in that view.\nFor an honest replica among them to unlock from $b$,\na quorum of replicas need to claim they are not locked on $b$.\nAt least one replica in the intersection is honest \nand would need to falsely claim it is not locked, which cannot happen.\n\n\\item[Quorum availability within a view.]\nLiveness within each view is guaranteed by having an honest quorum respond to a\nnon-faulty leader.\n\\end{description}\n\n\n\\subsection{Flexible Byzantine Quorums}\n\\label{sec:flexible-byz-quorum}\n\nOur Flexible BFT\\xspace approach separates the quorums used in \nBFT protocols for the replicas (acceptors) from the quorums used for learning when a decision becomes\ncommitted. \nMore specifically,\nwe denote the quorum used for forming certificates (locking) by ${q_\\text{lck}}$\nand the quorum used for unlocking by ${q_\\text{ulck}}$. \nWe denote the quorum employed by clients for learning certificate uniqueness by\n${q_\\text{unq}}$, and the quorum used for learning commit safety by ${q_\\text{cmt}}$. \nIn other words, clients mandate ${q_\\text{unq}}$ first-round votes and ${q_\\text{cmt}}$ second-round votes in order\nto commit a decision. \nBelow, we outline a modified PBFT-like protocol that uses these different quorum sizes instead of a single quorum size $q$.\nWe then introduce a new definition, Flexible Byzantine Quorums, that\ncapture the requirements needed for these quorums to provide\nsafety and liveness.\n\n\\begin{figure}[h]\n\\centering\n\\begin{boxedminipage}{\\columnwidth}\n\\begin{itemize}[itemsep=4pt,leftmargin=*]\n\n\\item[-] \\textbf{Propose.} \n A leader $L$ proposes a value $b$.\n\n\\item[-] \\textbf{Vote 1.} \n On receiving the first value $b$ for a view $v$, a replica votes for $b$ \n if it is \\emph{safe}, as determined by a locking mechanism described below.\n A set of ${q_\\text{lck}}$ votes forms a certificate $\\mathcal{C}^{{q_\\text{lck}}}(b)$.\n\n\\item[-] \\textbf{Vote 2.} \n On collecting $\\mathcal{C}^{{q_\\text{lck}}}(b)$,\n a replica ``locks'' on $b$ and votes for $\\mathcal{C}^{{q_\\text{lck}}}(b)$.\n\n\\item[-] \\textbf{Commit.} \n On collecting ${q_\\text{unq}}$ votes for $b$ and ${q_\\text{cmt}}$ votes for $\\mathcal{C}^{{q_\\text{lck}}}(b)$, \n a client learns that proposal $b$ becomes a committed decision. 
\n\n\end{itemize}\nIf a replica locks on a value $b$ in a view, then it votes only for $b$ in subsequent views \nunless it ``unlocks'' from $b$ by learning that ${q_\text{ulck}}$\n replicas are not locked on $b$.\n\end{boxedminipage}\n\end{figure}\n\n\begin{description}[topsep=8pt,itemsep=4pt]\n\item[Flexible quorum intersection (a) within a view.]\nContrary to PBFT, in Flexible BFT\xspace, a pair of ${q_\text{lck}}$ certificates need not necessarily\nintersect in an honest replica. \nIndeed, locking on a value does not preclude conflicting locks.\nIt only mandates that every ${q_\text{lck}}$ quorum\nintersects with every ${q_\text{unq}}$ quorum in at least one honest replica. \nFor safety, it is essential\nthat the fraction of faulty replicas is less than ${q_\text{lck}}+{q_\text{unq}}-1$.\n\n\item[Flexible quorum intersection (b) across views.]\nIf a client commits a value $b$ in a view, \n${q_\text{cmt}}$ replicas lock on $b$ in that view.\nFor an honest replica among them to unlock from $b$,\n${q_\text{ulck}}$ replicas need to claim they are not locked on $b$.\nThis property mandates that every ${q_\text{ulck}}$ quorum intersects\nwith every ${q_\text{cmt}}$ quorum in at least one honest replica.\nThus, for safety, it is essential that the fraction of faulty\nreplicas is less than ${q_\text{ulck}} + {q_\text{cmt}} - 1$. \n\n\item[Flexible quorum availability within a view.]\nFor liveness, Byzantine replicas cannot exceed \n{$1-\max({q_\text{unq}}, {q_\text{cmt}}, {q_\text{lck}}, {q_\text{ulck}})$} \nso that the aforementioned quorums can be formed at different stages of the protocol. \n\end{description}\n\nGiven the above analysis, Flexible BFT\xspace ensures safety \nif the fraction of faulty replicas is less than\n$\min({q_\text{unq}} + {q_\text{lck}} - 1, {q_\text{cmt}} + {q_\text{ulck}} - 1)$,\nand provides liveness if the fraction of Byzantine replicas \nis at most $1-\max({q_\text{unq}}, {q_\text{cmt}}, {q_\text{lck}}, {q_\text{ulck}})$.\nIt is optimal to use \emph{balanced quorum sizes}\nwhere ${q_\text{lck}} = {q_\text{ulck}}$ and ${q_\text{unq}} = {q_\text{cmt}}$.\nTo see this, first note that we should make sure \n${q_\text{unq}} + {q_\text{lck}} = {q_\text{cmt}} + {q_\text{ulck}}$;\notherwise, suppose the right-hand side is smaller, \nthen setting $({q_\text{cmt}},{q_\text{ulck}})$ to equal $({q_\text{unq}},{q_\text{lck}})$\nimproves safety tolerance without affecting liveness tolerance.\nNext, observe that if we have ${q_\text{unq}} + {q_\text{lck}} = {q_\text{cmt}} + {q_\text{ulck}}$\nbut ${q_\text{lck}} > {q_\text{ulck}}$ (and hence ${q_\text{unq}} < {q_\text{cmt}}$),\nthen once again setting $({q_\text{cmt}},{q_\text{ulck}})$ to equal $({q_\text{unq}},{q_\text{lck}})$\nimproves safety tolerance without affecting liveness tolerance.\n\nThus, in this paper, we set ${q_\text{lck}} = {q_{r}}$ and ${q_\text{unq}} =\n{q_\text{cmt}} = {q_c}$. Since replicas use ${q_{r}}$ votes to lock,\nthese votes can always be used by the clients to commit\n${q_\text{cmt}}$ quorums.
\nThus, ${q_c} \geq {q_{r}}$.\nThe Flexible Byzantine Quorum requirements collapse\ninto the following two conditions.\n\begin{description}\n\item[Flexible quorum intersection.]\nThe fraction of faulty replicas is $< {q_c} + {q_{r}} - 1$.\n\n\item[Flexible quorum availability.]\nThe fraction of Byzantine replicas is $\leq 1-{q_c}$.\n\end{description}\n\n\paragraph{Tolerating a-b-c\xspace faults.}\nIf all faults in the system are Byzantine faults, \nthen the best parameter choice is ${q_c}={q_{r}} \geq 2\/3$\nfor $<1\/3$ fault tolerance,\nand Flexible Byzantine Quorums degenerate to basic Byzantine quorums.\nHowever, in our model, a-b-c\xspace replicas are only interested in attacking safety but not liveness.\nThis allows us to tolerate up to \n${q_c} + {q_{r}} - 1$ total faults (Byzantine plus a-b-c\xspace), which can be more than $1\/3$. \nFor example, if we set ${q_{r}} = 0.7$ and ${q_c} = 0.8$, \nthen such a protocol can tolerate $0.2$ Byzantine faults plus\n$0.3$ a-b-c\xspace faults.\nWe discuss the choice for ${q_{r}}$ and ${q_c}$ and their rationale in Section~\ref{sec:discussion}.\n\n\paragraph{Separating client commit rules from the replica protocol.}\n\label{sec:client-coexist-partial}\nA key property of the Flexible Byzantine Quorum approach is that\nit decouples the BFT protocol from \nclient commit rules. \nThe decoupling allows \nclients assuming different fault models\nto utilize the same protocol.\nIn the above protocol, the propose and two voting steps \nare executed by the replicas and they are only parameterized by ${q_{r}}$.\nThe commit step can be carried out by different clients using different commit thresholds ${q_c}$. \nThus, a fixed ${q_{r}}$ determines a possible set of clients with\nvarying commit rules (in terms of Byzantine and a-b-c\xspace adversaries).\nRecall that a Byzantine adversary can behave arbitrarily and thus may not\nprovide liveness whereas an a-b-c\xspace adversary only\nintends to attack safety but not liveness.\nThus, a client who believes that a large fraction of\nthe adversary may attempt to break safety, not progress, can choose a larger ${q_c}$.\nBy doing so, it seeks stronger safety against dishonest replicas, while trading off\nliveness.\nConversely, a client that assumes that a large fraction of the adversary\nattacks liveness must choose a smaller ${q_c}$. \n\n\n\subsection{Safety and Liveness}\n\label{sec:proof}\n\nWe introduce the notion of \emph{direct} and \emph{indirect} commit to aid the proofs.\nWe say a block is committed \emph{directly} under \textbf{CR1} if\nthe block and its immediate successor both get ${q_c}$ votes in the same view.\nWe say a block is committed \emph{directly} under \textbf{CR2} if some honest replica\nreports an undisturbed-$2\Delta$ period after its successor block\nwas obtained.\nWe say a block is committed \emph{indirectly} if neither condition applies to it but\nit is committed as a result of a block extending it being committed directly.\nWe remark that the direct commit notion, especially for \textbf{CR2}, is merely a proof technique.\nA client cannot tell whether a replica is honest,\nand thus has no way of knowing whether a block is directly committed under \textbf{CR2}. \n\n\begin{lemma}\nIf a client directly commits a block $B_l$ in view $v$ using a correct commit rule,\nthen a certified block that ranks no lower than $\CommitCert_{v}(B_{l})$ must equal or extend $B_l$.
\n\label{lemma:unique-cert}\n\end{lemma}\n\n\begin{proof}\nTo elaborate on the lemma, \na certified block $\CommitCert_{v'}(B'_{l'})$ ranks no lower than $\CommitCert_{v}(B_{l})$\nif either (i) $v'=v$ and $l' \geq l$, or (ii) $v'>v$.\nWe need to show that if $B_l$ is directly committed, \nthen any certified block that ranks no lower either equals or extends $B_l$.\nWe consider the two commit rules separately. \nFor both commit rules, we will use induction on $v'$ to prove the lemma.\n\n\bigskip\nFor $\textbf{CR1}$ with parameter ${q_c}$ to be correct, \nflexible quorum intersection needs to hold, i.e.,\nthe fraction of faulty replicas must be less than ${q_c}+{q_{r}}-1$.\n$B_l$ being directly committed under $\textbf{CR1}$ with parameter ${q_c}$ implies that \nthere are ${q_c}$ votes in view $v$ for $B_l$ and $B_{l+1}$ where $B_{l+1}$ extends $B_l$. \n\nFor the base case, a block $B'_{l'}$ with $l'\geq l$ that does not extend $B_l$ cannot get certified in view $v$,\nbecause the ${q_c}$ votes for $B_l$ and the ${q_{r}}$ votes forming the certificate for $B'_{l'}$ would intersect in a ${q_c}+{q_{r}}-1$ fraction of the replicas, at least one of which is honest and would have to vote for two equivocating blocks in view $v$, which cannot happen.\n\nNext, we show the inductive step.\nNote that ${q_c}$ replicas voted for $B_{l+1}$ in view $v$,\nwhich contains $\CommitCert_{v}(B_{l})$.\nThus, they lock $B_l$ or a block extending $B_l$ by the end of view $v$.\nDue to the inductive hypothesis,\nany certified block that ranks equally or higher from view $v$ up to view $v'$ \neither equals or extends $B_l$. \nThus, by the end of view $v'$,\nthose ${q_c}$ replicas still lock $B_l$ or a block extending $B_l$.\nSince the total fraction of faults is less than ${q_c}+{q_{r}}-1$,\nthe status $\mathcal{S}$ shown by the leader of view $v'+1$ \nmust include a certificate for $B_l$ or a block extending it;\nmoreover, any certificate that ranks equal to or higher than $\CommitCert_{v}(B_{l})$\nis for a block that equals or extends $B_l$. \nThus, only a block that equals or extends $B_l$ can gather votes from those ${q_c}$ replicas in view $v'+1$\nand only a block that equals or extends $B_l$ can get certified in view $v'+1$.\n\n\bigskip\nFor $\textbf{CR2}$ with synchrony bound $\Delta$ to be correct, \n$\Delta$ must be an upper bound on the worst-case message delay, \nand the fraction of faulty replicas must be less than ${q_{r}}$. \n$B_l$ being directly committed under $\textbf{CR2}$ with $\Delta$-synchrony implies that\nat least one honest replica voted for $B_{l+1}$ extending $B_l$ in view $v$,\nand did not hear an equivocating block or view change within $2\Delta$ time after that.\nCall this replica $h$.\nSuppose $h$ voted for $B_{l+1}$ extending $B_l$ in view $v$ at time $t$,\nand did not hear an equivocating block or view change by time $t+2\Delta$.
\n\nWe first show the base case: a block $B'_{l'}$ with $l'\geq l$ certified in view $v$ must equal or extend $B_l$.\nObserve that if $B'_{l'}$ with $l'\geq l$ does not equal or extend $B_l$,\nthen it equivocates $B_l$.\nNo honest replica voted for $B'_{l'}$ before time $t+\Delta$,\nbecause otherwise $h$ would have received the vote for $B'_{l'}$ by time $t+2\Delta$.\nNo honest replica would vote for $B'_{l'}$ after time $t+\Delta$ either,\nbecause by then they would have received (from $h$) and voted for $B_l$.\nThus, $B'_{l'}$ cannot get certified in view $v$.\n\nWe then show the inductive step.\nBecause $h$ did not hear a view change by time $t+2\Delta$,\nall honest replicas are still in view $v$ at time $t+\Delta$,\nwhich means they all receive $B_{l+1}$ from $h$ by the end of view $v$.\nThus, they lock $B_l$ or a block extending $B_l$ by the end of view $v$.\nDue to the inductive hypothesis,\nany certified block that ranks equally or higher from view $v$ up to view $v'$ \neither equals or extends $B_l$. \nThus, by the end of view $v'$,\nall honest replicas still lock $B_l$ or a block extending $B_l$.\nSince the total fraction of faults is less than ${q_{r}}$,\nthe status $\mathcal{S}$ shown by the leader of view $v'+1$ \nmust include a certificate for $B_l$ or a block extending it;\nmoreover, any certificate that ranks equal to or higher than $\CommitCert_{v}(B_{l})$\nis for a block that equals or extends $B_l$. \nThus, only a block that equals or extends $B_l$ can gather honest votes in view $v'+1$\nand only a block that equals or extends $B_l$ can get certified in view $v'+1$.\n\end{proof}\n\n\n\begin{theorem}[Safety]\nTwo clients with correct commit rules commit the same block $B_k$ for each height $k$.\n\label{thm:safety}\n\end{theorem}\n\n\begin{proof}\nSuppose for contradiction that two distinct blocks \n$B_k$ and $B'_k$ are committed at height $k$.\nSuppose $B_k$ is committed as a result of $B_{l}$ being directly committed in view $v$\nand $B'_k$ is committed as a result of $B'_{l'}$ being directly committed in view $v'$.\nThis implies $B_l$ is or extends $B_k$;\nsimilarly, $B'_{l'}$ is or extends $B'_k$.\nWithout loss of generality, assume $v \leq v'$.\nIf $v=v'$, further assume $l \leq l'$ without loss of generality.\nBy Lemma~\ref{lemma:unique-cert},\nthe certified block $\CommitCert_{v'}(B'_{l'})$ must equal or extend $B_l$.\nThus, $B'_k=B_k$.\n\end{proof}\n\n\n\n\begin{theorem}[Liveness]\nIf all clients have correct commit rules, \nthey all keep committing new blocks. \n\label{thm:liveness}\n\end{theorem}\n\n\begin{proof}\nBy the definition of a-b-c\xspace faults,\nif they cannot violate safety, they will preserve liveness.\nTheorem~\ref{thm:safety} shows that if all clients have correct commit rules,\nthen safety is guaranteed \emph{even if a-b-c\xspace replicas behave arbitrarily}.\nThus, once we have proved safety, we can treat a-b-c\xspace replicas \nas honest when proving liveness. \n\nObserve that a correct commit rule tolerates at most $1-{q_{r}}$ Byzantine faults.\nIf a Byzantine leader prevents liveness, \nthere will be ${q_{r}}$ blame messages against it,\nand a view change will ensue to replace the leader.
\nEventually, a non-Byzantine (honest or a-b-c\xspace) replica becomes the leader \nand drives consensus at new heights.\nIf replicas use increasing timeouts,\neventually, all non-Byzantine replicas stay in the same view for sufficiently long.\nWhen both conditions occur, \nif a client's commit rule is correct (either \textbf{CR1} or \textbf{CR2}),\ndue to quorum availability,\nit will receive enough votes in the same view to commit. \n\end{proof}\n\n\n\section{Flexible BFT Protocol}\n\label{sec:protocol}\n\nIn this section, we combine the ideas presented in\nSections~\ref{sec:overv-synchr-flex} and \ref{sec:overview-async}\nto obtain a final protocol that supports both types of clients.\nA client can either assume partial synchrony, with freedom to choose ${q_c}$ as described\nin the previous section, or assume synchrony with its own choice of\n$\Delta$, as described in Section~\ref{sec:overv-synchr-flex}. \nReplicas execute a protocol at the network speed\nwith a parameter ${q_{r}}$.\nWe first give the protocol executed by the replicas \nand then discuss how clients commit depending on their assumptions. \nMoreover, inspired by Casper~\cite{DBLP:journals\/corr\/abs-1710-09437} and\nHotStuff~\cite{yin2018hotstuff}, we show a protocol where the\nrounds of voting can be pipelined. \n\n\n\subsection{Notation}\nBefore describing the protocol, we will first define some\ndata structures and terminologies that will aid presentation.\n\n\paragraph{Block format.} \nThe pipelined protocol forms a chain of values.\nWe use the term \emph{block} to refer to each value in the chain.\nWe refer to a block's position in the chain as its \emph{height}. \nA block $B_k$ at height $k$ has the following format\n$$B_k := (b_k, h_{k-1})$$ \nwhere $b_k$ denotes a proposed value at height $k$ \nand $h_{k-1} := H(B_{k-1})$ is a hash digest of the predecessor block.\nThe first block $B_1=(b_1, \bot)$ has no predecessor.\nEvery subsequent block $B_k$ must specify a predecessor block $B_{k-1}$ by including a hash of it.\nWe say a block is \emph{valid} if \n(i) its predecessor is valid or $\bot$, and \n(ii) its proposed value meets application-level validity conditions and is consistent with its chain of ancestors (e.g., does not double spend a transaction in one of its ancestor blocks). \n\n\paragraph{Block extension and equivocation.}\nWe say $B_l$ \emph{extends} $B_k$ \nif $B_k$ is an ancestor of $B_l$ ($l>k$). \nWe say two blocks $B_l$ and $B'_{l'}$ \emph{equivocate} one another \nif they are not equal and do not extend one another.\n\n\n\paragraph{Certificates and certified blocks.} \nIn the protocol, replicas vote for blocks by signing them.\nWe use $\CommitCert_{v}{(B_{k})}$ to denote a set of signatures \non $h_{k} = H(B_{k})$ by ${q_{r}}$ replicas in view $v$. \n${q_{r}}$ is a parameter fixed for the protocol instance. \nWe call $\CommitCert_{v}{(B_{k})}$ a certificate for $B_{k}$ from view $v$.\nCertified blocks are ranked \nfirst by the views in which they are certified and then by their heights.\nIn other words, a block $B_k$ certified in view $v$ \nis ranked \emph{higher} than a block $B_{k'}$ certified in view $v'$\nif either (i) $v > v'$ or (ii) $v = v'$ and $k>k'$.
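To make the notation concrete, the following sketch (illustrative only; the names are ours, and signatures are stubbed out as an opaque set) captures the block format and the certificate ranking just defined:\n\begin{verbatim}
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Block:
    value: str            # proposed value b_k
    parent_hash: bytes    # h_{k-1} = H(B_{k-1}); empty for the first block
    height: int           # position k in the chain

@dataclass(frozen=True)
class Certificate:        # C_v(B_k): q_r signatures on H(B_k) from view v
    block: Block
    view: int
    signatures: frozenset = field(default_factory=frozenset)

    def rank(self):
        # Ranked first by view, then by height: a certificate C ranks
        # higher than C' iff C.rank() > C'.rank().
        return (self.view, self.block.height)
\end{verbatim}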
\n\n\\paragraph{Locked blocks.} \nAt any time, a replica locks the highest certified block to its knowledge.\nDuring the protocol execution, each replica keeps track of all signatures for all blocks\nand keeps updating its locked block.\nLooking ahead, the notion of locked block will be\nused to guard the safety of a client commit.\n\n\\subsection{Replica Protocol}\n\nThe replica protocol progresses in a view-by-view fashion. Each view has a\ndesignated leader who is responsible for driving consensus on a\nsequence of blocks. Leaders can be chosen\nstatically, e.g., round robin, \nor randomly using more sophisticated techniques~\\cite{CKS05,Algorand}. \nIn our description, we assume a round robin selection of leaders,\ni.e., ($v$ {\\sf mod} $n$) is the leader of view $v$. \n\nAt a high level, the protocol does the following: \nThe leader proposes a block to all replicas. \nThe replicas vote on it if safe to do so. \nThe block becomes certified once ${q_{r}}$ replicas vote on it. \nThe leader will then propose another block extending the previous one, \nchaining blocks one after another at increasing heights.\nUnlike regular consensus protocols\nwhere replicas determine when a block is committed, in\nFlexible BFT\\xspace, replicas only certify blocks while committing is\noffloaded to the clients.\nIf at any time replicas detect malicious leader behavior or lack of\nprogress in a view, they blame the leader \nand engage in a view change protocol \nto replace the leader and move to the next view. \nThe new leader collects a status from different replicas and continues to propose\nblocks based on this status.\nWe explain the steady state and view change protocols in more detail below.\n\n\\begin{figure*}[tb]\n\\centering\n\\begin{boxedminipage}{\\textwidth}\n\nLet $v$ be the current view number and \nreplica $L$ be the leader in this view. \nPerform the following steps in an iteration.\n\n\\begin{enumerate}[topsep=8pt,itemsep=8pt,leftmargin=*]\n\\setlength\\itemsep{0.5em}\n\\item \\textbf{Propose. } \\label{step:propose} \n\\Comment{Executed by the leader of view $v$}\n\nThe leader $L$ broadcasts $\\sig{\\mathsf{propose}, B_k, v, \\CommitCert_{v'}(B_{k-1}), \\mathcal{S}}_L$.\nHere, $B_k := (b_k, h_{k-1})$ is the newly proposed block\nand it should extend the highest certified block known to $L$.\nIn the steady state, an honest leader $L$ would extend the\nprevious block it proposed,\nin which case $v'=v$ and $\\mathcal{S} = \\bot$. \nImmediately after a view change, \n$L$ determines the highest certified block from\nthe status $\\mathcal{S}$ received during the view change. \n\n\\item \\label{step:vote} \\textbf{Vote.} \n\\Comment{Executed by all replicas}\n\nWhen a replica $R$ receives a valid proposal \n$\\sig{\\mathsf{propose}, B_k, v, \\CommitCert_{v'}(B_{k-1}), \\mathcal{S}}_L$\nfrom the leader $L$, $R$ broadcasts the proposal and a vote $\\sig{\\mathsf{vote}, B_k, v}_R$ \nif (i) the proposal is the first one in view $v$,\nand it extends the highest certified block in $\\mathcal{S}$, \nor (ii) the proposal extends the last proposed block in the view. \n\n\nIn addition, replica $R$ records the following based on the messages it receives.\n\\begin{itemize}[topsep=8pt,itemsep=4pt]\n\\item[-] $R$ keeps track of the number of\n votes received for this block in this view as $q_{B_{k},v}$. 
\n\\item[-] If block $B_{k-1}$ has been proposed in view\n $v$, $R$ marks $B_{k-1}$ as a locked block and\n records the locked time as ${\\mathsf{t\\text{-}{lock}}}_{k-1,v}$.\n\\item[-] If a block equivocating $B_{k-1}$\n is proposed by $L$ in view $v$ (possibly received through a vote),\n $R$ records the time ${\\mathsf{t}\\text{-}\\mathsf{equiv}}_{k-1,v}$ at\n which the equivocating block is received. \n\\end{itemize}\n\nThe replica then enters the next iteration.\nIf the replica observes no progress or equivocating blocks in the same view $v$, \nit stops voting in view $v$ \nand sends $\\sig{\\mathsf{view\\text{-}change},v}_r$ message to all replicas. \n\\end{enumerate}\n\n\\end{boxedminipage}\n\\caption{Flexible BFT steady state protocol.}\n\\label{fig:steady}\n\\end{figure*}\n\n\\paragraph{Steady state protocol.}\nThe steady state protocol is described in Figure~\\ref{fig:steady}.\nIn the steady state, there is a unique leader who, in an iteration,\nproposes a block, waits for votes from ${q_{r}}$\nreplicas and moves to the next iteration.\nIn the steady state, an honest leader always extends the previous block it proposed.\nImmediately after a view change, \nsince the previous leaders could have been\nByzantine and may have proposed equivocating blocks, \nthe new leader needs to determine a safe block to propose. \nIt does so by collecting a status of\nlocked blocks from ${q_{r}}$ replicas denoted by $\\mathcal{S}$\n(described in the view change protocol).\n\nFor a replica $R$ in the steady state, on receiving a proposal for block\n$B_{k}$, a replica votes for it if \nit extends the previous proposed block in the view or \nif it extends the highest certified block in $\\mathcal{S}$.\nReplica $R$ can potentially receive blocks out of\norder and thus receive $B_{k}$ before its ancestor blocks. \nIn this case, replica $R$ waits until it receives the ancestor blocks, \nverifies the validity of those blocks and $B_k$ before voting for $B_k$.\nIn addition, replica $R$ records the following to aid a client commit:\n\\begin{itemize}\n\\item[-] \\textbf{Number of votes.} It records the number of votes received\n for $B_{k}$ in view $v$ as $q_{B_{k},\n v}$. Observe that votes are broadcast by \n all replicas and the number of votes for a block can be greater\n than ${q_{r}}$. $q_{B_{k}, v}$ will be updated each time the\n replica hears about a new vote in view $v$.\n\\item[-] \\textbf{Lock time.} If $B_{k-1}$ was proposed in the\n same view $v$, it locks $B_{k-1}$ and records the locked time as\n ${\\mathsf{t\\text{-}{lock}}}_{k-1, v}$.\n\\item[-] \\textbf{Equivocation time.} If the replica ever observes an\n equivocating block at height $k$ in view $v$ through a\n proposal or vote, \n it stores the time of equivocation as ${\\mathsf{t}\\text{-}\\mathsf{equiv}}_{k,v}$.\n\\end{itemize}\nLooking ahead, the locked time ${\\mathsf{t\\text{-}{lock}}}_{k-1,v}$ and\nequivocation time ${\\mathsf{t}\\text{-}\\mathsf{equiv}}_{k-1,v}$ will\nbe used by clients with synchrony assumptions to commit,\nand the number of votes $q_{B_{k}, v}$\nwill be used by clients with partial-synchrony assumptions to commit.\n\n\\paragraph{Leader monitoring.}\nIf a replica detects a lack of progress in view $v$ or\nobserves malicious leader behavior\nsuch as more than one height-$k$ blocks in the same view,\nit blames the leader of view $v$\nby broadcasting a $\\sig{\\mathsf{view\\text{-}change}, v}$ message. It quits\nview $v$ and stops voting and broadcasting blocks in\nview $v$. 
\n\n\paragraph{View change.}\nThe view change protocol is described in Figure~\ref{fig:vc}.\nIf a replica gathers ${q_{r}}$ $\sig{\mathsf{view\text{-}change}, v}$\nmessages from distinct replicas,\nit forwards them to all other replicas and enters a new view\n$v + 1$ (Step~\ref{step:new_view}). It records the time\nat which it received the blame certificate as ${\mathsf{t\text{-}{viewchange}}}_{v}$.\nUpon entering a new view,\na replica reports to the leader of the new view $L'$ its locked block\nand transitions to the steady state (Step~\ref{step:status}).\n${q_{r}}$ status messages form the status $\mathcal{S}$.\nThe first block $L'$ proposes in the new view\nshould extend the highest certified block among these ${q_{r}}$ status messages.\n\n\begin{figure*}[htbp]\n\centering\n\begin{boxedminipage}{\textwidth}\n\nLet $L$ and $L'$ be the leaders of views $v$ and $v+1$, respectively.\n\begin{enumerate}[topsep=8pt,itemsep=8pt,leftmargin=*,label=(\roman*)]\n\setlength\itemsep{0.5em}\n\item \label{step:new_view} \textbf{New-view.}\nUpon gathering ${q_{r}}$ $\sig{\mathsf{view\text{-}change}, v}$ messages,\nbroadcast them and enter view $v+1$. Record the time as ${\mathsf{t\text{-}{viewchange}}}_{v}$.\n\n\item \label{step:status} \textbf{Status.}\nSuppose $B_j$ is the block locked by the replica. Send a\nstatus of its locked block to the leader $L'$ using\n$\sig{\mathsf{status}, v, B_j, \CommitCert_{v'}(B_j)}$ and transition\nto the steady state. Here, $v'$ is the view in which $B_j$\nwas certified.\n\n\end{enumerate}\n\n\end{boxedminipage}\n\caption{Flexible BFT view change protocol.}\n\label{fig:vc}\n\end{figure*}
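\n\nAs a minimal sketch of the new-view step (our own naming; the status message format follows Figure~\ref{fig:vc}):\n\n\begin{verbatim}\ndef try_enter_new_view(blames, q_r_count, now, locked_block,\n                       locked_cert, view):\n    # blames: set of replicas that sent <view-change, view> messages\n    if len(blames) < q_r_count:\n        return None  # not enough blame messages yet\n    # Record t-viewchange_v, forward the blame certificate, enter\n    # view v+1, and report the locked block to the new leader L'.\n    t_viewchange = now\n    status = ('status', view, locked_block, locked_cert)\n    return view + 1, t_viewchange, status\n\end{verbatim}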
\n\n\n\n\subsection{Client Commit Rules}\n\label{sec:commit-rules}\n\n\n\begin{figure*}[htbp]\n \centering\n \begin{boxedminipage}{\textwidth}\n \begin{enumerate}[itemsep=8pt,leftmargin=0.2cm,label=]\n \n \item \textbf{(CR1) Partially-synchronous commit.} A\n block $B_k$ is committed under the partially synchronous rule with parameter ${q_c}$\n iff there exist $l \geq k$ and $v$ such that\n \begin{enumerate}[topsep=8pt,itemsep=4pt]\n\item $\CommitCert_{v}(B_l)$ and $\CommitCert_{v}(B_{l+1})$\n\t\texist where $B_{l+1}$ extends $B_l$ and $B_k$ (if $l = k$, $B_l = B_k$).\n \item $q_{B_{l}, v} \geq {q_c}$ and $q_{B_{l+1}, v} \geq {q_c}$.\n \end{enumerate}\n \n\t\t\item \textbf{(CR2) Synchronous commit.}\n A block $B_k$ is committed assuming $\Delta-$synchrony\n\t\tiff the following holds for ${q_{r}}$ replicas.\n\t\tThere exist $l \geq k$ and $v$ (possibly\n different across replicas) such that,\n \begin{enumerate}[topsep=8pt,itemsep=4pt]\n \item $\CommitCert_{v}(B_l)$ exists where $B_l$ extends $B_k$ (if $l = k$, $B_l = B_k$).\n \item An undisturbed-$2\Delta$ period is observed after\n $B_{l+1}$ is obtained, i.e., no equivocating block\n or view change of view $v$ was\n observed before $2\Delta$ time after $B_{l+1}$\n was obtained:\n $$\min({\mathsf{current}\text{-}\mathsf{time}}, {\mathsf{t}\text{-}\mathsf{equiv}}_{l,v},\n {\mathsf{t\text{-}{viewchange}}}_v) - {\mathsf{t\text{-}{lock}}}_{l,v}\n \geq 2\Delta$$\n \end{enumerate}\n \end{enumerate}\n \end{boxedminipage}\n \caption{Flexible BFT commit rules}\n \label{fig:commit-rules}\n\end{figure*}\n\nAs mentioned in the introduction, Flexible BFT\xspace supports clients with different assumptions.\nClients in Flexible BFT\xspace learn the\nstate of the protocol from the replicas and, based on their\nown assumptions, determine whether a block has been committed.\nBroadly, we support two types of clients:\nthose who believe in synchrony and those who believe in partial synchrony.\n\n\subsubsection{Clients with Partial-Synchrony Assumptions (CR1)}\nA client with partial-synchrony assumptions deduces whether a block has been committed\nbased on the number of votes received by a block.\nA block $B_l$ (together with its ancestors) is committed with parameter ${q_c}$\niff $B_l$ and its immediate\nsuccessor both receive $\geq {q_c}$ votes in the same view.\n\n\paragraph{Safety of CR1.}\nA CR1 commit based on ${q_c}$ votes is safe\nas long as fewer than ${q_c}+{q_{r}}-1$ replicas are faulty (Byzantine plus a-b-c\xspace).\nObserve that if $B_l$ gets ${q_c}$ votes in view $v$,\ndue to flexible quorum intersection,\na conflicting block cannot be certified in view $v$,\nunless $\geq {q_c}+{q_{r}}-1$ replicas are faulty.\nMoreover, $B_{l+1}$ extending $B_l$\nhas also received ${q_c}$ votes in view $v$.\nThus, ${q_c}$ replicas lock block $B_l$ in view $v$.\nIn subsequent views, honest replicas that have locked\n$B_l$ will only vote for a block\nthat equals or extends $B_l$ unless they unlock.\nHowever, due to flexible quorum intersection,\nthey will not unlock\nunless $\geq {q_c}+{q_{r}}-1$ replicas are faulty.\nThe proof of Lemma~\ref{lemma:unique-cert} formalizes this argument.\n\n\subsubsection{Client with Synchrony Assumptions (CR2)}\nIntuitively, a CR2 commit involves ${q_{r}}$ replicas collectively stating that\nno ``bad event'' happens within ``sufficient time'' in a view.\nHere, a bad event refers to either leader equivocation or view change\n(the latter indicates that sufficiently many replicas believe the leader is\nfaulty), and the ``sufficient time'' is $2\Delta$,\nwhere $\Delta$ is a synchrony bound chosen by the client.\nMore formally, a replica states that a synchronous commit for block $B_k$\nfor a given parameter $\Delta$ (set by a client) is satisfied iff the following holds.\nThere exists $B_{l+1}$ that extends $B_l$ and $B_k$,\nand the replica observes an undisturbed-$2\Delta$ period after\nobtaining $B_{l+1}$ during which (i) no equivocating block\nis observed, and (ii) no blame certificate\/view\nchange certificate for view $v$ was obtained, i.e.,\n$$\min({\mathsf{current}\text{-}\mathsf{time}}, {\mathsf{t}\text{-}\mathsf{equiv}}_{l,v},\n {\mathsf{t\text{-}{viewchange}}}_v) - {\mathsf{t\text{-}{lock}}}_{l,v}\n \geq 2\Delta$$\nwhere ${\mathsf{t}\text{-}\mathsf{equiv}}_{l,v}$ denotes the time at which\nequivocation for $B_l$ in view $v$ was\nobserved ($\infty$ if no equivocation), ${\mathsf{t\text{-}{viewchange}}}_v$\ndenotes the time at which view change happened from view $v$\nto $v + 1$ ($\infty$ if no view change has happened\nyet), and ${\mathsf{t\text{-}{lock}}}_{l,v}$ denotes the time at which\n$B_l$ was locked (or $B_{l+1}$ was proposed) in\nview $v$.\nNote that the client does not require the ${q_{r}}$ fraction of replicas\nto report the same height $l$ or view $v$.\n\n\paragraph{Safety of CR2.}\nA client believing in synchrony assumes that all messages between replicas\narrive within $\Delta$ time after they were sent.\nIf the client's chosen $\Delta$ is a correct upper bound on message delay,\nthen a CR2 commit is safe as long as fewer than ${q_{r}}$ replicas are faulty (Byzantine plus a-b-c\xspace),\nas we explain below.\nIf fewer than ${q_{r}}$ replicas are faulty, at least one honest replica reported\nan \emph{undisturbed-$2\Delta$} period.\nLet us call this honest replica $h$ and\nanalyze the situation from $h$'s perspective\nto explain why an undisturbed $2\Delta$ period ensures safety.\nObserve that replicas in Flexible BFT\xspace forward the proposal when voting.\nIf $\Delta$-synchrony holds, every other honest replica\nlearns about the proposal $B_l$ at most $\Delta$ time after $h$ learns about it.\nIf any honest replica voted for a conflicting block\nor quit view $v$, $h$ would have known within $2\Delta$ time.
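\n\nThe two commit rules can be expressed as the following checks, computed by a client from replica reports. This is a simplified sketch with our own naming, not the authors' implementation.\n\n\begin{verbatim}\nINF = float('inf')\n\ndef cr1_commit(votes_l, votes_l1, q_c):\n    # votes_l, votes_l1: votes observed for certified blocks B_l and\n    # B_{l+1} in the same view v, where B_{l+1} extends B_l and B_k.\n    return votes_l >= q_c and votes_l1 >= q_c\n\ndef cr2_replica_report(now, t_lock, t_equiv=INF,\n                       t_viewchange=INF, delta=1.0):\n    # One replica's undisturbed-2*delta check for B_l locked in view v.\n    return min(now, t_equiv, t_viewchange) - t_lock >= 2 * delta\n\ndef cr2_commit(reports, q_r_count):\n    # The client commits once q_r replicas report an undisturbed period.\n    return sum(reports) >= q_r_count\n\end{verbatim}\n\nFor instance, with ${q_{r}}={q_c}=2\/3$, a CR1 commit is safe against fewer than $2\/3+2\/3-1=1\/3$ faults, matching the PBFT point at $(1\/3, 1\/3)$ in Figure~\ref{fig:compare}.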
\n\n\input{proof}\n\n\subsection{Efficiency}\n\n\paragraph{Latency.}\nClients with a synchrony assumption incur a latency of\n$2\Delta$ plus a few network speed rounds. In terms of the\nmaximum network delay $\Delta$, this matches the\nstate-of-the-art synchronous\nprotocols~\cite{abraham2019sync}. The distinction, though, is that\n$\Delta$ now depends on the client assumption, and\nhence different clients may commit with different latencies.\nClients with partial-synchrony assumptions incur a latency of two\nrounds of voting; this matches PBFT~\cite{castro1999practical}.\n\n\paragraph{Communication.}\nEvery vote and new-view message is broadcast to all replicas,\nincurring $O(n^2)$ messages. This is the same\ncomplexity as PBFT~\cite{castro1999practical} and Sync\nHotStuff~\cite{abraham2019sync}.\n\n\n\section{Related Work}\n\label{sec:related-work}\n\n\begin{figure}[htbp]\n \centering\n \begin{subfigure}{0.45\textwidth}\n \includegraphics[width=1\linewidth]{compare.pdf}\n \end{subfigure}\n \n \begin{subfigure}{0.45\textwidth}\n \n \n \begin{tabular}{c l}\n $+$ & Partially Synchronous protocols~\cite{castro1999practical,Yin03,Martin06,Zyzzyva07,yin2018hotstuff,buchman2016tendermint}\\\n $\times$ & Synchronous Protocols~\cite{pass2018thunderella,hanke2018dfinity,abraham2019sync,abraham2018synchronous}\\\n $\blacktriangle$ & Thunderella, Sync HotStuff ($\vartriangle$: optimistic)~\cite{pass2018thunderella,abraham2019sync}\\\n $\blacklozenge$ & Zyzzyva, SBFT ($\lozenge$: optimistic)~\cite{Zyzzyva07}\n \end{tabular}\n \n \n \end{subfigure}\n \caption{\textbf{Comparing Flexible BFT\xspace to existing consensus\n protocols.} The legend represents different ${q_{r}}$ values.}\n \label{fig:compare}\n\end{figure}\n\nMost BFT protocols are designed with a uniform assumption about the\nsystem. The literature on BFT consensus is vast and largely beyond the scope of\nthis paper; we refer the reader to the standard\ntextbooks in distributed computing~\cite{lynch1996distributed,attiya2004distributed}.\n\n\n\paragraph{Resilience.} Figure~\ref{fig:compare} compares resilience in Flexible BFT\xspace\nwith some existing consensus protocols. The x axis represents a Byzantine\nresilience threshold, the y axis the total resilience against\ncorruption under the a-b-c\xspace fault model. The three\ndifferent colors (red, green, blue) represent three possible instantiations of\nFlexible BFT\xspace at different ${q_{r}}$'s.\n\nEach point in the figure represents an abstract ``client'' belief.
For the partial\nsynchrony model, client beliefs form lines, and for synchronous settings,\nclient beliefs are individual circles.\nThe locus of points of a given color represents all client\nassumptions supported for the corresponding ${q_{r}}$, illustrating the diversity of\nclients supported.\nThe figure depicts state-of-the-art resilience combinations of existing consensus solutions via\nuncolored shapes, $+, \times, \vartriangle, \blacktriangle, \lozenge, \blacklozenge$.\nPartially synchronous protocols~\cite{castro1999practical, yin2018hotstuff,buchman2016tendermint} that tolerate one-third Byzantine faults can all be represented by the `+' symbol at $(1\/3, 1\/3)$.\nSimilarly, synchronous\nprotocols~\cite{hanke2018dfinity,abraham2018dfinity,abraham2018synchronous}\nthat tolerate one-half Byzantine faults are represented by the\n`$\times$' symbol at $(1\/2, 1\/2)$.\nIt is worth noting that some of these works\nemploy two commit rules that differ in the number of votes\nor in synchrony assumptions~\cite{Martin06,Zyzzyva07,pass2018thunderella,abraham2019sync}.\nFor instance, Thunderella and Sync HotStuff optimistically commit\nin an asynchronous fashion based on quorums of size $\geq 3\/4$,\nas represented by a hollow triangle at $(1\/4, 1\/2)$.\nSimilarly, FaB~\cite{Martin06}, Zyzzyva~\cite{Zyzzyva07} and SBFT~\cite{gueta2018sbft}\noptimistically commit when they receive all votes but wait for two rounds of votes otherwise.\nThese are represented by two points in the figure.\nDespite the two commit rules,\nthese protocols do not have client diversity:\nall parties involved (replicas and clients) make the same assumptions\nand reach the same commit decisions.\n\n\paragraph{Diverse client beliefs.}\nA simple notion of client diversity exists in Bitcoin's probabilistic commit rule.\nOne client may consider a transaction committed after six confirmations while another may require only one confirmation.\nGenerally, the notion of client diversity has been discussed informally at public blockchain forums.\n\nAnother example of diversity is considered in the XFT protocol~\cite{XFT}.\nThe protocol supports two types of clients: clients that assume crash faults under partial synchrony,\nand clients that assume Byzantine faults but believe in synchrony.\nYet another notion of diversity is considered\nby the federated Byzantine consensus model and the Stellar\nprotocol~\cite{mazieres2015stellar}.\nThe Stellar protocol allows nodes to pick their own quorums.\nOur Flexible BFT\xspace approach instead considers diverse clients\nin terms of a-b-c\xspace adversaries and synchrony.\nThe model and techniques in~\cite{mazieres2015stellar} and our paper\nare largely orthogonal and complementary.\n\n\paragraph{Flexible Paxos.} Flexible Paxos by Howard et\nal.~\cite{DBLP:conf\/opodis\/HowardMS16} observes that\nPaxos may use non-intersecting quorums within a view\nbut an intersection is required across views.\nOur Flexible Quorum Intersection (b) can be viewed as\nits counterpart in the Byzantine and a-b-c\xspace setting.\nIn addition, Flexible BFT\xspace applies the flexible quorum idea\nto support diverse clients with different fault models and timing assumptions.\n\n\paragraph{Mixed fault model.}\nFault models that mix Byzantine and crash faults have been considered in various\nworks, e.g., FaB~\cite{Martin06} and SBFT~\cite{gueta2018sbft}. The a-b-c\xspace\nfaults are in a sense the opposite of crash faults, mixing Byzantine with\n``anti-crashes''.
\nOur a-b-c\\xspace adversary bears similarity to a rational adversary in the BAR\nmodel~\\cite{aiyer2005bar}, with several important differences.\nBAR assumes no collusion exists among rational replicas themselves and between\nrational and Byzantine replicas, whereas a-b-c\\xspace replicas have no such\nconstraint.\nBAR solutions are designed to expose cheating behavior and thus deter rational\nreplicas from cheating. The Flexible BFT\\xspace approach does not rely on deterrence for good\nbehavior, and breaks beyond the $1\/3$ ($1\/2$) corruption tolerance threshold in\nasynchronous (synchronous) systems. \nLast, BAR solutions address only the partial synchrony settings. \nAt the same time, BAR provides a game theoretic proof of rationality. \nMore generally, game theoretical modeling and analysis with collusion have been performed to other problems \nsuch as secret sharing and multiparty computation~\\cite{abraham2006distributed,lysyanskaya2006rationality,gordon2006rational,kol2008cryptography}. \nAnalyzing incentives for the a-b-c\\xspace model remains an open challenge.\n\n\n\n\\section{Synchronous BFT with Network Speed Replicas - Overview}\n\\label{sec:overv-synchr-flex}\n\\begin{figure*}[ht]\n\\begin{boxedminipage}{\\textwidth}\n\\paragraph{Protocol executed by the replicas.}\n\\begin{enumerate}\n\\setlength\\itemsep{0.5em}\n\n\\item \\textbf{Propose. } \\label{step:sync-propose} \n The leader $L$ of view $v$ \n proposes a value $b$.\n\\item \\textbf{Vote.} \\label{step:sync-vote}\n On receiving the first value $b$ in a view $v$, a replica broadcasts $b$ and votes for $b$ if it is \\emph{safe} to do so, as determined by a locking mechanism described later. \nThe replica records the following.\n\\begin{itemize}\n\\item[-] If the replica collects ${q_{r}}$ votes on $b$, \n\t\tdenoted as $\\mathcal{C}^{q_{r}}_v(b)$ and called a certificate of $b$ from view $v$, \n\t\tthen it ``locks'' on $b$ and records the lock time as ${\\mathsf{t\\text{-}{lock}}}_v$.\n\\item[-] If the replica observes an equivocating value signed by $L$ at any time after entering view $v$, it records the time of equivocation as ${\\mathsf{t}\\text{-}\\mathsf{equiv}}_v$. It blames the leader by broadcasting $\\sig{\\mathsf{view\\text{-}change}, v}$ and the equivocating values.\n\\item[-] If the replica does not receive a proposal for sufficient time in view $v$, it times out and broadcasts $\\sig{\\mathsf{view\\text{-}change}, v}$. \n\\item[-] If the replica collects a set of ${q_{r}}$ $\\sig{\\mathsf{view\\text{-}change}, v}$ messages, \nit records the time as ${\\mathsf{t\\text{-}{viewchange}}}_v$, broadcasts them and enters view $v+1$. 
\n\\end{itemize}\n\\end{enumerate}\n\nIf a replica locks on a value $b$ in a view, then it votes only for $b$ in subsequent views \nunless it ``unlocks'' from $b$ by learning that ${q_{r}}$ replicas\nare not locked on $b$ in that view or higher views (they may be locked on other values or they may not be locked at all).\n\n\\paragraph{Commit rules for clients.}\nA value $b$ is said to be committed by a client assuming $\\Delta$-synchrony \niff ${q_{r}}$ replicas\neach report that there exists a view $v$ such that, \n\\begin{enumerate}\n\\item $b$ is certified, i.e., $\\mathcal{C}^{q_{r}}_v(b)$ exists.\n\\item the replica observed an undisturbed-$2\\Delta$ period after certification, i.e., \n\t\tno equivocating value or view change was observed at a time before $2\\Delta$ after it was certified, \n\t\tor more formally, $\\min({\\mathsf{current}\\text{-}\\mathsf{time}}, {\\mathsf{t}\\text{-}\\mathsf{equiv}}_v, {\\mathsf{t\\text{-}{viewchange}}}_v) - {\\mathsf{t\\text{-}{lock}}}_v \\geq 2\\Delta$\n\\end{enumerate}\n\\end{boxedminipage}\n\\caption{Synchronous BFT with network speed replicas.}\n\\label{fig:sync-bft}\n\\end{figure*}\n\nEarly synchronous protocols~\\cite{DS83,katz2009expected,micali2017optimal} have relied on synchrony in two ways.\nFirst, the replicas assume a maximum network delay $\\Delta$ for communication between them. Second, they require a lock step execution, i.e., all replicas are in the same round at the same time.\nHanke et al. showed a synchronous protocol without lock step execution~\\cite{hanke2018dfinity}. \nTheir protocol still contains a synchronous step in which all replicas perform a blocking wait of $2\\Delta$ time before proceeding to subsequent steps. \nSync HotStuff~\\cite{abraham2019sync} improves on it further to remove replicas' blocking waits during good periods (when the leader is honest),\nbut blocking waits are still required by replicas during bad situations (view changes).\n\nIn this section, we show a synchronous protocol where the\nreplicas do not ever have blocking waits and execute at the network speed.\nIn other words, replicas run a partially synchronous protocol and do not rely on synchrony at any point. \nClients, on the other hand, rely on synchrony bounds to commit.\nThis separation is what allows our protocol to support clients with different assumptions on the value of $\\Delta$. \nTo the best of our knowledge, this is the first synchronous protocol to achieve such a separation.\nIn addition, the protocol tolerates a combined Byzantine plus a-b-c\\xspace fault ratio greater than a half (Byzantine fault tolerance is still less than half). \n\nFor simplicity, in this overview, we show a protocol for single shot consensus.\nIn our final protocol in Section~\\ref{sec:protocol}, we will consider a pipelined version of the protocol for consensus on a sequence of values. \nWe do not consider termination for the single-shot consensus protocol in this overview \nbecause our final replication protocol is supposed to run forever. \n\nThe protocol is shown in Figure~\\ref{fig:sync-bft}. It runs in a sequence of views. Each view has a designated leader who may be selected in a round robin order. The leader drives consensus in that view.\nIn each view, the protocol runs in two steps -- propose and vote. In the propose step, the leader proposes a value $b$. 
In the vote step, replicas vote for the value if it is \emph{safe} to do so.\nThe vote also acts as a \emph{re-proposal} of the value.\nIf a replica observes a set of ${q_{r}}$ votes on $b$, called a\ncertificate $\CommitCert(b)$, it ``locks'' on $b$.\nFor now, we assume ${q_{r}} = 1\/2$.\n(To be precise, ${q_{r}}$ is slightly larger than $1\/2$, e.g., $f+1$ out of $2f+1$.)\nWe will revisit the choice\nof ${q_{r}}$ in Section~\ref{sec:discussion}.\nIn subsequent views, a replica will not vote for a value other than $b$ unless it learns that ${q_{r}}$ replicas are not locked on $b$. In addition, the replicas switch views (i.e., change the leader) if they either observe an equivocation or do not receive a proposal from the leader within some timeout.\nA client commits $b$ if ${q_{r}}$ replicas state that there exists a view in which $b$ is certified\nand no equivocating value or view change was observed at a time before $2\Delta$ after it was certified.\nHere, $\Delta$ is the maximum network delay the client believes in.\n\nThe protocol ensures safety if there are fewer than ${q_{r}}$ faulty replicas.\nThe key argument for safety is the following:\nif an honest replica $h$ satisfies the commit condition for some value $b$ in a view, then\n(a) no other value can be certified and\n(b) all honest replicas are locked on $b$ at the end of that view.\nTo elaborate, satisfying the commit condition implies that some honest replica $h$ has observed an undisturbed-$2\Delta$ period after it locked on $b$, i.e., it did not observe an equivocation or a view change.\nSuppose the condition is satisfied at time $t$.\nThis implies that other replicas did not observe an equivocation or a view change before $t-\Delta$.\nThe two properties above hold if the quorum honesty conditions described below hold.\nFor liveness, if Byzantine leaders equivocate or do not propose a safe value, they will be blamed by both honest and a-b-c\xspace replicas and a view change will ensue.\nEventually there will be an honest or a-b-c\xspace leader to drive consensus if quorum availability holds.\n\n\begin{description}[topsep=8pt,itemsep=4pt]\n\item[Quorum honesty (a) within a view.]\nSince the undisturbed period starts after $b$ is certified,\n$h$ must have voted (and re-proposed) $b$ at a time earlier than $t-2\Delta$.\nEvery honest replica must have received $b$ before $t - \Delta$.\nSince they had not voted for an equivocating value by then, they must have voted for $b$.\nSince the number of faults is less than ${q_{r}}$,\nevery certificate needs to contain an honest replica's vote.\nThus, no certificate for any other value can be formed in this view.\n\n\item[Quorum honesty (b) across views.]\n$h$ sends $\mathcal{C}^{q_{r}}_v(b)$ at time $t-2\Delta$.\nAll honest replicas receive $\mathcal{C}^{q_{r}}_v(b)$ by time $t-\Delta$ and become locked on $b$.\nFor an honest replica to unlock from $b$ in subsequent views,\n${q_{r}}$ replicas need to claim that they are not locked on $b$.\nAt least one of them is honest\nand would need to falsely claim it is not locked, which cannot happen.\n\n\item[Quorum availability.] Byzantine replicas do not exceed $1-{q_{r}}$ so that ${q_{r}}$ replicas respond to the leader.\n\end{description}\n\n\n\paragraph{Tolerating a-b-c\xspace faults.}\nIf we have only honest and Byzantine replicas (and no a-b-c\xspace replicas), quorum honesty requires the fraction of Byzantine replicas $B < {q_{r}}$. Quorum availability requires $B \leq 1-{q_{r}}$. If we optimize for maximizing $B$, we obtain ${q_{r}} \geq 1\/2$.\nNow, suppose $P$ represents the fraction of a-b-c\xspace replicas. Quorum honesty requires $B + P < {q_{r}}$, and quorum availability requires $B \leq 1-{q_{r}}$. Thus, the protocol supports varying values of $B$ and $P$ at different values of ${q_{r}} > 1\/2$ such that safety and liveness are both preserved.
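\n\nAs a numerical illustration of these constraints (the specific numbers are our own, not from the protocol specification): setting ${q_{r}} = 0.7$ gives quorum availability as long as $B \leq 0.3$, while quorum honesty tolerates any mix with $B + P < 0.7$, e.g., $B = 0.3$ together with $P = 0.39$. Lowering ${q_{r}}$ to $0.55$ raises the pure Byzantine tolerance to $B \leq 0.45$ but caps the combined fault fraction at $B + P < 0.55$.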
\n\n\paragraph{Separating client synchrony assumption from the replica protocol.}\nThe most interesting aspect of this protocol is the separation of the client commit rule from\nthe protocol design. In particular, although this is a synchronous protocol, the replica protocol does not rely on any synchrony bound. This allows clients to choose their own message delay bounds.\nAny client that uses a correct message delay bound enjoys safety.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\nConvolutional neural networks (CNNs) have gained success in many fields and become the main method in pattern recognition and computer vision since AlexNet won the 2012 ImageNet competition \cite{krizhevsky2012imagenet,deng2009imagenet}. Nowadays, the technology has also been used in real-world applications, such as applications in smartphones, intelligent recommendations for products or music, target recognition and face detection \cite{lecun2015deep}.\n\nHowever, designing a high-performance architecture is still a challenging problem. It usually needs expertise and takes a long time of research by trial and error \cite{hanxiao}. Because of the numerous configurations with respect to the number of layers and the details in each layer, it is impossible to explore the whole architecture space. Hence, how to automatically generate a competitive architecture becomes an important problem \cite{bergstra2012random,miller1989designing,negrinho2017deeparchitect}.\n\nReinforcement learning and evolutionary algorithms are the two main methods used for CNN structure learning \cite{Baker,zhao,Zoph,Barret,zhao,Esteban}. Reinforcement learning uses models' performances on the validation dataset as rewards and uses an agent to maximize the cumulative rewards when interacting with an environment. Evolutionary algorithms simulate evolution processes to iteratively improve models' performances.\n\nHowever, the performance of the generated networks is limited by the architecture search space, which is determined by the algorithm's encoding system. If the encoding system cannot represent an architecture, that architecture can never be learned. For example, \cite{Lingxi} used a fixed number of stages to represent network structures. Different stages are separated by pooling layers. Each stage is a directed acyclic graph (DAG). In a DAG, nodes represent layers and edges represent operations such as 1$\times$1 convolution and 3$\times$3 convolution. Therefore, the architecture space is directly limited by the fixed number of stages and the pooling layers. Because of this, many network architectures are impossible to generate. For example, this encoding cannot generate a network where there is a skip connection between a layer in the first stage and another layer in the last stage. Some other encoding systems only represent plain architectures, which are composed of stacked layers. But plain networks usually suffer from learning problems such as the vanishing and exploding gradient problems. In this paper, we propose a new encoding system which has few limitations on the search space. For example, the order of layers is randomly set. In our encoding system, we use short DNA strands, which are composed of a series of DNA molecules (A, G, C, T), to represent layers and long DNA strands composed of short strands to represent architectures.\n\nIt is well known that network depth is of crucial importance because it can raise the ``levels'' of features. However, simply stacking layers causes the degradation problem. Skip-connections can solve the degradation problem when deepening networks \cite{srivastava2015highway,srivastava2015training,he2016deep}. ResNet \cite{he2016deep} uses skip-connections via residual blocks. The skip-connections in ResNet provide a shorter path between bottom layers and the target output layer compared with normal plain architectures. Parameters of bottom layers are thus easier to train. DenseNet \cite{huang2017densely} also adopts skip-connections by reusing feature maps. Because the skip-connections in ResNet and DenseNet ease learning problems and improve models' capacity, we explore neural network architectures with skip-connections.\n\nDNA computing is a method of computation using molecular biology techniques \cite{braich2000solution,adleman1994molecular,boneh1996computational,kari2000using}. DNA molecules (A, G, C, T) can be used to encode information just like binary strings. DNA strands are composed of DNA molecules and can be regarded as pieces of information. DNA computing uses DNA strands to carry information. It generates large numbers of short DNA strands and then puts them into a DNA soup. In the DNA soup, short DNA strands connect to each other to form longer DNA strands based on the base-pairing principle if provided a suitable reaction environment. After a period of time, the soup contains a set of candidate DNA strands that represent desired results. Then we can pick the DNA strands representing desired results from the soup. For example, the ``travelling salesman problem\" can be solved by DNA computing \cite{adleman1994molecular}. We can generate different DNA strands and use them to represent the cities that have to be visited. Each strand has a linkage with other strands. Within seconds, the strands form bigger ones that represent different travel routes. Then the DNA strands representing longer routes can be eliminated through chemical reactions. The remaining strands are the solutions. In the DNA soup, all connecting processes happen at the same time, so DNA computing can reduce reaction time.\n\nIn this paper, we use a DNA computing algorithm to generate neural network architectures. In the DNA computing algorithm, a short DNA strand, denoted as a Layer Strand, encodes one layer's architecture and a piece of skip-connection information which determines whether the layer has a skip-connection with one of its previous layers. Long strands, denoted as Architecture Strands, are composed of short Layer Strands via base pairing. Because each layer in a network has at most one skip-connection with one of its previous layers, the DNA computing algorithm explores networks with skip-connections. We place few limitations on the search space: we don't limit the number of pooling layers, and the depth of the architecture is only bounded by a maximum value. The skip-connections are also randomly set, so that any two layers can have a skip-connection. During the DNA computing algorithm, we use Layer Strands (representing layers) as our reaction units and learn Architecture Strands (representing architectures) via base pairing. After getting models (Architecture Strands) via the DNA computing algorithm, we train those models on the training data set and select one model according to their performance on the validation data set. We achieve 0.27\% test error on the MNIST data set and 4.9\% test error on the CIFAR-10 data set.
After getting models (Architecture Strands) via DNA computing algorithm, we train those models on training data set and select one model according to their performance on the validation data set. We achieve 0.27\\% test error on MINST data set and 4.9\\% test error on CIFAR-10 data set.\n\n\\section{Related Work}\nIn this section, we introduce convolutional neural networks firstly. Then we introduce reinforcement learning and evolutionary algorithms for structure learning of deep networks.\n\\subsection{Convolutional Neural Networks}\nConvolutional neural networks (CNN) \\cite{Alex,Simonyan} have achieved great success in various computer tasks \\cite{Backpropagation}. Convolution neural networks are usually composed of convolution layers, pooling layers and fully connected layers. By stacking convolution layers, pooling layers and fully connected layers, we can get plain architectures.\n\nSpecialists have tried a lot to improve neural network's capacity and find that increased depth can help a lot. For example, deep models, from depth of sixteen \\cite{simonyan2014very} to thirty \\cite{ioffe2015batch}, perform well on ImageNet dataset. However, vanishing gradients and exploding gradients prevent models from being deeper \\cite{he2016deep}. By normalized initialization \\cite{lecun1998efficient,glorot2010understanding,saxe2013exact,he2015delving} and intermediate normalization layers, the problem has been solved a lot and networks can extend to tens of layers. But degradation problem happens when the models become deeper and deeper. Degradation problems mean that models' performance degrades with increased depth.\n\nResNet \\cite{he2016deep} and DenseNet \\cite{huang2017densely} can solve degradation problems well via skip-connections. Both have gained good results in ImageNet and CIFAR-10. They can also be generalized to many other data sets. Skip-connections make bottom layers have shorter pathes to output layer which makes learning easily and enriches the features. ResNet uses residual blocks to form whole architecture. In each residual block, input layer is added to output layer which is called a skip-connection. So bottom layers in ResNet have a very short path to output layer. The gradients can thus easily and effectively flow to the bottom layers via skip-connections. DenseNet reuses feature maps and increases width of each layer with little increased parameters. The input layer becomes one part of the output layer which can also be called a skip-connection. \\\\\nAs the neural networks containing skip-connections perform well in many tasks, we explore neural network architectures with skip-connections. In our encoding system, we don't limit search space. we don't limit the number of pooling layer compared with \\cite{Lingxi}. Locations of convolution layers and pooling layers are all randomly set and skip-connections are also randomly set for that one layer can have a skip-connection with any one of its previous layers.\n\\subsection {Reinforcement Learning and Evolutionary Algorithm}\nEven though Resnet and DenseNet perform well in many data sets, network architectures still need to be carefully designed for specific data set. Therefore, how to design a convolutional neural network is a very worthwhile issue. The traditional neural network is designed based on a large number of experimental experience. 
\n\subsection {Reinforcement Learning and Evolutionary Algorithm}\nEven though ResNet and DenseNet perform well on many data sets, network architectures still need to be carefully designed for a specific data set. Therefore, how to design a convolutional neural network is a very worthwhile issue. Traditional neural networks are designed based on a large amount of experimental experience. Recently, more and more researchers have focused on automatically generating networks, which have been generated through reinforcement learning \cite{Baker} and genetic algorithms \cite{zhao}.\n\nReinforcement learning usually uses a meta-controller to determine neural network architectures. It uses architectures' performance on the validation data set as a reward to update the meta-controller. Thus, a neural network architecture can be treated as a training sample. Because deep learning is a data-driven technology, reinforcement learning needs a lot of samples to learn a high-performance meta-controller. However, training a model consumes huge computational resources. Evolutionary algorithms simulate the process of evolution, iteratively improving performance by operators such as mutation, crossover and selection. Evolutionary algorithms select models according to their performances on the validation data set and thus also consume huge computational resources and time.\n\nAs reinforcement learning and evolutionary algorithms are data-hungry methods that need huge computational resources, we aim to show that we can get a good model via the DNA computing algorithm from a high-quality search space while training only a few models.\n\begin{figure*}[ht]\n\centering\n\includegraphics[width=3in,height=3in]{DNA_strands.jpg} \n\caption{ Layer Strand 9 in one Architecture Strand whose maximum depth is 24 (not including fully connected layers). \textcircled{1}: Left: Head fragment AAACG (9) of Layer Strand 9; Right: Tail of Layer Strand 8. The tail pairs with the head to form a longer strand. \textcircled{2}: Layer type fragment AAGTC (30). Because the result of 30 mod 24 is 6 and 6 is not less than 4, the layer is a convolution layer. \textcircled{3}: Kernel size fragment AAAGC (6). 6 mod 4 is 2. Because the layer is a convolution layer, the number 2 represents a 5$\times$5 kernel size. \textcircled{4}: Channel number fragment AAGAC (18). The result of 18 mod 6 is 0, so the channel number is 32. \textcircled{5}: Skip-connection fragment AAAGC (6). 6 mod 9 is 6. So there is a skip-connection between layer 6 and layer 9. \textcircled{6}: Left: The head fragment of layer 10. Right: The tail fragment of layer 9.}\n\label{f1}\n\end{figure*}\n\section{Our Approach}\nIn this section, we first introduce how DNA strands encode neural network architectures. Then we introduce how to generate architectures using the DNA computing algorithm. After introducing model generation, we describe how to train the learned models.\n\subsection{Coding System}\label{codesys}\nOur DNA computing algorithm uses DNA strands to encode neural network architectures and generates the strands that represent architectures. In our DNA computing algorithm, we use short DNA strands denoted as Layer Strands to represent layer architectures (convolution layers and pooling layers) and long strands denoted as Architecture Strands to represent overall neural network architectures. Layer Strands form Architecture Strands via base pairing between the exposed heads and tails of different Layer Strands, which is like stacking layers to form architectures. The skip-connection information is encoded in each Layer Strand, and each layer can have at most one skip-connection with one of its previous layers. The architectures with skip-connections form the architecture space learned by the DNA computing algorithm.
We must emphasize that we impose few limits on the search space: we only set the maximum depth of the model. The number of pooling layers, the locations of convolution layers and the skip-connection of each layer are all random. The details of the encoding method are described below.\n\nBecause of the similarity between pooling layers and convolution layers, both are represented by Layer Strands that have the same construction. Each Layer Strand is composed of 6 DNA fragments representing the specific parameters of a pooling layer or a convolution layer, such as the kernel size or the channel number. These 6 DNA fragments are the head fragment, layer type fragment, kernel size fragment, channel number fragment, skip-connection fragment and tail fragment. The head fragment represents the layer number: some Layer Strands represent the first layer, some represent the second layer, and so on. If a Layer Strand represents the $i$th layer, it is called Layer Strand $i-1$, since layers are numbered from 0. The tail fragment is specially designed to pair with the head fragment of the next Layer Strand so that the two Layer Strands can form a longer strand by base pairing. The layer type fragment determines which kind of layer this strand represents, a convolution layer or a pooling layer. The channel number fragment determines the number of channels. The number of channels is chosen from \{32, 64, 96, 128, 160, 192\}. The channel number fragment of a pooling layer is unused because the channel number of a pooling layer is the same as that of its previous layer. The kernel size fragment determines the kernel size. For a convolution layer, the kernel size is chosen from \{1$\times$1, {3$\times$3}, {5$\times$5}, {7$\times$7}\}. For a pooling layer, the kernel size is chosen from \{2$\times$2, 3$\times$3\}. As for the skip-connection fragment, it determines whether this layer has a skip-connection with one of its previous layers. Each layer has at most one skip-connection. The six DNA fragments are arranged in the order head, layer type, kernel size, channel number, skip-connection, tail. In reality, DNA is a double-stranded structure with an exposed head and tail. So in addition to the head and tail fragments, the other four fragments belong to double-stranded structures. The head and tail fragments are exposed single-stranded structures. Thus different Layer Strands can be connected via base pairing between their exposed head and tail fragments. The six fragments compose a Layer Strand, which is the basic unit in our DNA computing.\n\nJust as Figure \ref{f1} shows, each fragment of a Layer Strand is composed of five pairs of molecules or five single molecules (head and tail). For double-stranded fragments, only the strand along the same side as the head and tail is used during decoding. Therefore, the length of a Layer Strand is 30. We use the molecules A, G, C, T to represent 0, 1, 2, 3 respectively. According to the quaternary decoding method, five molecules represent integers from 0 to 1023; thus one fragment can encode 1024 (4$^5$) kinds of information. There are redundant integers in each fragment, and we map groups of integers to parameter values in the four double-stranded fragments so that every permutation of five molecules can be utilized. The details are described below.
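\n\nAs a small illustration of the quaternary translation just described (the function name is ours), the integer values given in Figure~\ref{f1} can be reproduced as follows, reading the most significant digit first:\n\n\begin{verbatim}\n# Molecules map to quaternary digits: A=0, G=1, C=2, T=3.\nDIGIT = {'A': 0, 'G': 1, 'C': 2, 'T': 3}\n\ndef fragment_to_int(fragment):\n    # Five molecules encode an integer in [0, 1023],\n    # e.g. 'AAGTC' -> 30.\n    assert len(fragment) == 5\n    n = 0\n    for m in fragment:\n        n = n * 4 + DIGIT[m]\n    return n\n\nassert fragment_to_int('AAACG') == 9   # head of Layer Strand 9\nassert fragment_to_int('AAGTC') == 30  # layer type fragment\nassert fragment_to_int('AAGAC') == 18  # channel number fragment\n\end{verbatim}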
\n\nHyper-parameter N specifies the maximum number of layers in each neural network. Each fragment can be translated into a number, and the specific parameters are determined by that number. The head fragment can be translated into an integer n, and the integer n represents layer n. During generation, only Layer Strands representing layer 0 (the first layer) to layer N-1 can be generated. Thus, only head fragments representing layer 0 to layer N-1 can be generated. The Layer Strand representing layer i is denoted as Layer Strand i. In this way, we can define the maximum depth of the neural network. Because the tail fragment representing layer L needs to pair with the head fragment of layer L+1 to form a longer strand, the tail fragment of layer L is determined by the head fragment of layer L+1. For example, if the head fragment representing layer 1 is AAAAG (translated into the number 1), then the tail fragment of layer 0 must be TTTTC (A pairs with T and G pairs with C). Thus, the two Layer Strands can be connected by base pairing between the exposed AAAAG and TTTTC single-stranded fragments to form a long strand. So only N kinds of tail fragments, corresponding to the head fragments, can be generated. The other four kinds of fragments are not limited during generation; any permutation of five molecules can be generated.\n\nDuring decoding, each fragment is translated into a number n, and its concrete meaning is determined by the integer group that n belongs to. The layer type fragment is translated into n$_t$. If the result of n$_t$ mod N is less than 4, the layer is a pooling layer; otherwise, it is a convolution layer. That means we expect about 4 pooling layers among the N layers. The kernel size fragment is translated into a number n$_k$. If the layer is a convolution layer, the result of n$_k$ mod 4 determines the kernel size, and 0, 1, 2, 3 represent 1$\times$1, {3$\times$3}, {5$\times$5}, {7$\times$7} kernel sizes respectively. If the layer is a pooling layer, the result of n$_k$ mod 2 determines the kernel size, and 0, 1 represent 2$\times$2, 3$\times$3 kernel sizes respectively. The channel number fragment is translated into a number n$_c$. The result of n$_c$ mod 6 determines the channel number, and 0, 1, 2, 3, 4, 5 represent 32, 64, 96, 128, 160, 192 respectively. The skip-connection fragment is translated into a number n$_s$. If the layer number is L and the result of n$_s$ mod L is l, there is a skip-connection between layer L and layer l. l should be less than L-1; otherwise, the skip-connection fragment is unused.\n\begin{figure}[h]\n\centering\n\includegraphics[width=3in,height=5.8in]{1234.jpg} \n\caption{One of our models on CIFAR-10 that gains high accuracy. }\n\label{f3}\n\end{figure}\n\subsection{Generation via DNA Computing Algorithm}\nLayer Strands are the basic reaction units of our DNA computing algorithm. Hyper-parameter P defines the number of architectures to be generated. During generation, we generate P Layer Strands i for each i (0$\leq$i$\leq$N-1), obtaining P$\times$N Layer Strands in total. In each Layer Strand, only the head and tail fragments are generated specially, while the other fragments are randomly generated.\n\nWe then put all the Layer Strands into the DNA soup. In the DNA soup, Layer Strand i and Layer Strand i+1 are connected via base pairing between their exposed tail and head. The tail fragment of Layer Strand i pairs with the head fragment of Layer Strand i+1 to form a double-stranded fragment. Thus, the two strands form a longer strand. All the connection processes can happen at the same time.
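\n\nThe decoding rules above can be summarized in the following sketch, reusing \texttt{fragment\_to\_int} from the earlier sketch (all names are ours; the tail fragment is not needed for decoding):\n\n\begin{verbatim}\ndef decode_layer(strand, N):\n    # strand: 30 molecules, fragments in the order\n    # head | layer type | kernel size | channel number | skip | tail\n    L   = fragment_to_int(strand[0:5])    # layer number\n    n_t = fragment_to_int(strand[5:10])\n    n_k = fragment_to_int(strand[10:15])\n    n_c = fragment_to_int(strand[15:20])\n    n_s = fragment_to_int(strand[20:25])\n    pool = (n_t % N) < 4                  # about 4 pooling layers expected\n    kernel = [2, 3][n_k % 2] if pool else [1, 3, 5, 7][n_k % 4]\n    channels = None if pool else [32, 64, 96, 128, 160, 192][n_c % 6]\n    skip = n_s % L if L > 0 else None     # candidate source layer l\n    if skip is not None and skip >= L - 1:\n        skip = None                       # valid only if l < L - 1\n    return {'layer': L, 'pool': pool, 'kernel': kernel,\n            'channels': channels, 'skip_from': skip}\n\end{verbatim}\n\nApplied to Layer Strand 9 of Figure~\ref{f1} with N=24, this yields a 5$\times$5 convolution with 32 channels and a skip-connection from layer 6, matching the figure.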
Then we can select DNA strands from DNA soup and eliminate the strands whose start part is not Layer Strand 0. After we get Architecture Strands, we translate them into real architectures. In the translating, we add a fully connected layer at the end of each architecture. We train them on training data set and select one model based on their performances on validation data set. We simulate DNA computing algorithm via computer. Algorithm \\ref{alg:1} illustrates the process of DNA computing algorithm.\n\\begin{algorithm}[ht]\n\t\\renewcommand{\\algorithmicrequire}{\\textbf{Input:}}\n\t\\renewcommand{\\algorithmicensure}{\\textbf{Output:}}\n\t\\caption{DNA Computing Algorithm}\n\t\\label{alg:1}\n\t\\begin{algorithmic}[1]\n\t\t\\REQUIRE {Dataset, maximum number of layers in one architecture (N), number of Layer Strands representing one layer (P);\\\\}\n\t\t\\ENSURE The network structure with the highest accuracy on test data set.\n \\FOR { i=1 to N }\n \\FOR { j=1 to P }\n\t\t \\STATE {Generate one Layer Strand i randomly. }\n \\ENDFOR\n \\ENDFOR\n\t\t\\STATE{Put all the Layer Strands into DNA soup and provide proper reaction environment.}\\\\\n \\STATE{Select Architecture Strands from DNA soup. Count the number of Architecture Strands and get number Num. }\n\t\t\\FOR{ i=1 to Number}\n\t\t\\STATE {{S}=Generate real CNN network.}\n \\ENDFOR\n \\STATE {{G} = Randomly select 100 models.}\n\t\t\\FOR { i=1 to 100 }\n\t\t \\STATE { Training network i in {G} on training data set and record its network structure and final accuracy on validation data set}\n \\ENDFOR\n\t\t\\STATE {Select the model R from G that has highest validation accuracy.}\n \\STATE {Train model R on the whole training data set and validation data set and get its accuracy A on test data set.}\n\t\t\\STATE \\textbf{return} R, A\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Model Training}\nJust as algorithm \\ref{alg:1}, we train all the models on the training data set and get their accuracies on validation data set after getting model via DNA Computing Algorithm. We select the best model according to their performances on validation data set. After getting the best model, we merge the training data set and validation data set and train the model again. we then use the model's performance on test data set as our algorithm output. The training details are described below.\n\nTo compress the search space, we used a pre-activated convolution unit (PCC). That's to say, we use batch normalization(BN) \\cite{ioffe2015batch} and ReLU activation \\cite{Alex} before convolution operation. The stride of convolution layer is set as 1 while the stride of pooling layer is set as 2. As for skip-connections, if the layer i and layer j (i$\\leq$j) has a skip-connection. We then add layer i and layer j as output of layer j. If the two layers' channels don't map, we use 1$\\times$1 convolution with stride 1 to change channels. If the feature map size don't map, we use 1$\\times$1 convolution with stride 2 to down sample.\n\nAs for optimizer, we use momentum optimizer with momentum set to 0.9. The initial learning rate is 0.1 and the weight decay is 0.0001. The total training epochs is 60. In the tenth epoch, the learning rates is set to 0.01. In the thirtieth epoch, the learning rates is set to 0.001. \n\nThe models are trained for at most 60 epochs on training sets (CIFAR-10). As for MNIST, 10 epochs is enough for the models to be converged. Carefully designing total epochs reduces huge time. 
At training time, we find that with a fixed learning rate, the algorithm is converged around some epochs and had a little progress on the further epochs. If we do not decrease the learning rate, the accuracy increase little. So it's necessary to immediately decrease the learning rate after the accuracy increase slowly. If the learning rate is set properly, it reduce huge time. During the training process, we used L2 regularization.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=3in]{6.png} \n\\caption{The learning curve of MNIST and CIFAR10 is very similar and proves the early stop strategy can also be used on MNIST.}\n\\label{f6}\n\\end{figure}\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=3in]{5.png} \n\\caption{ We randomly chose eight of all our generated models without early stop strategy. As the figure shows, the competitive models perform well all the time, but the poor models are contrary. That demonstrates we can eliminate the poor model according their learning curve. }\n\\label{f5}\n\\end{figure}\n\\subsection{Early Stop Strategy}\nIn order to further reduce running time, we used early stop strategy. Just as shown in Figure \\ref{f5} and Figure \\ref{f6}, the epochs can be reduced by carefully design. We have two main discoveries. Firstly, all the models have similar learning curves. That indicates us we can eliminate models without finishing all the training epochs. Secondly, most of models in our search space perform similarly showing that we can get a high performance model with fewer times of training. We can stop training if the model can not perform well until the specified epoch. The early stop strategy eliminates poor performance models with high probability for that good models usually perform well on an early stage. This strategy reduces the huge time and has little impact on accuracy. We set three model performance thresholds on CIFAR-10 data set that are trained for at most 60 epochs. (1) 10th epoch, 80\\% test accuracy. (2) 20th epoch, 85\\% accuracy. (3) 45th epoch, 90\\% accuracy. If the model does not reach the specified accuracy after the there epochs respectively, the models will be eliminated.\n\n\n\n\\section{Results}\nIn this section, we introduce our results on CIFAR-10 and MNIST data sets. We simulate DNA computing algorithm by computer.\n\\begin{table}[h]\n \\centering\n \\setlength{\\tabcolsep}{0.8mm}\n \\scalebox{0.9}[0.9]{\n \\begin{tabular} {|l|l|l|l|}\n \\hline\n Model&MNIST\\\\\n \\hline\n Lecun\\emph{et.al}\\cite{lecun1998gradient}&0.7\\\\\n \\hline\n Lauer\\emph{et.al}\\cite{lauer2007trainable}&0.54\\\\\n \\hline\n Jarrett\\emph{et.al}\\cite{jarrett2009best}&0.53\\\\\n \\hline\n Ranzato\\emph{et.al}\\cite{poultney2007efficient}&0.39\\\\\n \\hline\n Cirecsan\\emph{et.al}\\cite{cirecsan2012multi}&0.23\\\\\n\n \\hline\n {Our Method}&{0.27}\\\\\n \\hline\n \\end{tabular}\n }\n \\caption{Comparison of the recognition error rate (\\%) on MNIST.}\n\\end{table}\n\n\\subsection{Results on the MNIST Data Set}\nThe MNIST database of handwritten digits has a training set of 60,000 examples, and a test set of 10,000 examples. We test our algorithm on MNIST data set in order to reduce time. We divide the training data set into two parts. 55,000 images are used as training set while 5,000 images are used as validation data set. After we get the best model according their performances on validation data set, the whole 60,000 images are used for training the model. 
\\section{Results}\nIn this section, we present our results on the CIFAR-10 and MNIST data sets. We simulate the DNA computing algorithm on a conventional computer.\n\\begin{table}[h]\n \\centering\n \\setlength{\\tabcolsep}{0.8mm}\n \\scalebox{0.9}[0.9]{\n \\begin{tabular} {|l|l|}\n \\hline\n Model&MNIST\\\\\n \\hline\n Lecun \\emph{et al.}\\cite{lecun1998gradient}&0.7\\\\\n \\hline\n Lauer \\emph{et al.}\\cite{lauer2007trainable}&0.54\\\\\n \\hline\n Jarrett \\emph{et al.}\\cite{jarrett2009best}&0.53\\\\\n \\hline\n Ranzato \\emph{et al.}\\cite{poultney2007efficient}&0.39\\\\\n \\hline\n Cirecsan \\emph{et al.}\\cite{cirecsan2012multi}&0.23\\\\\n \\hline\n {Our Method}&{0.27}\\\\\n \\hline\n \\end{tabular}\n }\n \\caption{Comparison of the recognition error rate (\\%) on MNIST.}\n\\end{table}\n\n\\subsection{Results on the MNIST Data Set}\nThe MNIST database of handwritten digits has a training set of 60,000 examples and a test set of 10,000 examples. We test our algorithm on MNIST in order to reduce running time. We divide the training data into two parts: 55,000 images are used as the training set, while 5,000 images are used as the validation set. After we obtain the best model according to its performance on the validation set, the whole 60,000 images are used to train that model. We then test the model on the test data set and use its accuracy as our method's final output.\n\nWe use the DNA computing algorithm to generate model architectures. Many of our models achieve test accuracies higher than 99.60\\%, and the highest test accuracy is 99.73\\%. We show the learning behaviour of one hundred models in Figure \\ref{f7}. Only a few of them perform poorly, indicating that our neural network architectures with random skip-connections compose a high-performance search space.\n\\begin{table}[h]\n \\centering\n \\setlength{\\tabcolsep}{1.0mm}\n\n \\scalebox{0.9}[0.9]{\n \\begin{tabular}{p{1.2cm}|p{6cm}|p{1.4cm}}\n \\hline\n &Model&CIFAR-10\\\\\n \\hline\n &Maxout \\cite{goodfellow2013maxout}&9.38\\\\\n human&ResNet(depth=110) \\cite{he2016deep}&6.61\\\\\n designed&ResNet(pre-activation)\\cite{he2016identity}&4.62\\\\\n &DenseNet(L=40,k=12)\\cite{huang2017densely}&5.24\\\\\n \\hline\n &GeNet\\#2(G-50) \\cite{Lingxi}&7.10\\\\\n auto&Large-Scale Evolution\\cite{Esteban}&5.40\\\\\n designed&NAS (depth=39)\\cite{Lee}&6.05\\\\\n &MetaQNN\\cite{Baker}&6.92\\\\\n &EAS\\cite{cai2018efficient}&4.23\\\\\n \\hline\n &{Our Method(depth=49)}& {4.9}\\\\\n \\hline\n \\end{tabular}\n }\n \\caption{Comparison of the recognition error rate (\\%) on CIFAR-10.}\n\\end{table}\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=3in]{7.png} \n\\caption{Test accuracy of one hundred models on MNIST. Most of the models achieve similar performance.}\n\\label{f7}\n\\end{figure}\n\n\\subsection{Results on the CIFAR-10 Data Set}\nThe CIFAR-10 data set consists of 60,000 32$\\times$32 colour images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images. The training data are divided into two parts: a new training set (55,000 images) and a validation set (5,000 images).\n\n\\begin{table}[h]\n \\centering\n \\setlength{\\tabcolsep}{0.8mm}\n \\scalebox{0.9}[0.9]{\n \\begin{tabular} {p{5.5cm}|p{2cm}}\n \\hline\n Model&CIFAR-10\\\\\n \\hline\n DNA computing algorithm (depth=13)&7.34\\\\\n \\hline\n DNA computing algorithm (depth=17)&6.7\\\\\n \\hline\n DNA computing algorithm (depth=21)&6.1\\\\\n \\hline\n DNA computing algorithm (depth=25)&5.65\\\\\n \\hline\n DNA computing algorithm (depth=49)&4.9\\\\\n \\hline\n \\end{tabular}\n }\n \\caption{Comparison of our models' recognition error rate (\\%) for different model depths on the CIFAR-10 data set.}\n\\end{table}\nOur best model gains 95.10\\% test accuracy with a 49-layer architecture composed of convolution layers, pooling layers and a fully connected layer. Note that we use data augmentation (flip, crop and data normalization). Even with few fully trained models, we still obtain high test accuracy. This shows that learning models from a carefully designed search space via the DNA computing algorithm can achieve high accuracy, and that it is possible for non-experts to obtain a high-accuracy model for a specific task. One of our high-performance models is shown in Figure \\ref{f3}.\n\n\\section{Conclusion}\nWe propose a DNA computing algorithm that learns neural networks from a well-defined architecture search space. Our search space is defined by architectures with skip-connections. During model training, we use an early stop strategy, which saves time and computational resources. We find that most models in our search space perform similarly and have similar learning curves. We show that learning neural networks via the DNA computing algorithm is feasible and achieves high accuracy. 
We also find that local minima are not a major obstacle during model training, and that the early stop strategy can eliminate models after only a few epochs of training. We evaluate the algorithm on two data sets (CIFAR-10 and MNIST) and obtain results competitive with evolutionary algorithms and reinforcement learning while training fewer models.\nWe simulate the DNA computing algorithm on a conventional computer. In future work, we plan to carry out biochemical experiments to verify the feasibility of the method. \n\\bibliographystyle{aaai}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nThis document serves as an example submission. It illustrates the format\nwe expect authors to follow when submitting a paper to ECCV. \nAt the same time, it gives details on various aspects of paper submission,\nincluding preservation of anonymity and how to deal with dual submissions,\nso we advise authors to read this document carefully.\n\n\\section{Initial Submission}\n\n\\subsection{Language}\n\nAll manuscripts must be in English.\n\n\\subsection{Paper length}\nPapers submitted for review should be complete. \nThe length should match that intended for final publication. \nPapers accepted for the conference will be allocated 14 pages (plus additional pages for references) in the proceedings. \nNote that the allocated 14 pages do not include the references. The reason for this policy\nis that we do not want authors to omit references for the sake of space limitations.\n\nPapers with more than 14 pages (excluding references) will be rejected without review.\nThis includes papers where the margins and\nformatting are deemed to have been significantly altered from those\nlaid down by this style guide. Do not use the TIMES, or any other font than the default. The reason such papers will not be reviewed is that there is no provision for supervised revisions of manuscripts. The reviewing process cannot determine the suitability of the paper for presentation in 14 pages if it is reviewed in 16.\n\n\\subsection{Paper ID}\n\nIt is imperative that the paper ID is mentioned on each page of the manuscript.\nThe paper ID is a number automatically assigned to your submission when \nregistering your paper submission on the submission site.\n\n\nAll lines should be numbered in the initial submission, as in this example document. This makes reviewing more efficient, because reviewers can refer to a line on a page. Line numbering is removed in the camera-ready.\n\n\n\\subsection{Mathematics}\n\nPlease number all of your sections and displayed equations. Again,\nthis makes reviewing more efficient, because reviewers can refer to a\nline on a page. Also, it is important for readers to be able to refer\nto any particular equation. Just because you didn't refer to it in\nthe text doesn't mean some future reader might not need to refer to\nit. It is cumbersome to have to use circumlocutions like ``the\nequation second from the top of page 3 column 1''. (Note that the\nline numbering will not be present in the final copy, so is not an\nalternative to equation numbers). Some authors might benefit from\nreading Mermin's description of how to write mathematics:\n\\url{www.pamitc.org\/documents\/mermin.pdf}.\n\\section{Policies}\nTo avoid confusion, in case of discrepancies between policies mentioned here and those in the ECCV 2022 webpage, the web page is the one that is updated regularly and its policies shall overrule those appearing here. 
\n\n\\subsection{Review Process}\nBy submitting a paper to ECCV, the authors agree to the review process and understand that papers are processed by the Toronto system to match each manuscript to the best possible chairs and reviewers.\n\\subsection{Confidentiality}\nThe review process of ECCV is confidential. Reviewers are volunteers not part of the ECCV organisation and their efforts are greatly appreciated. The standard practice of keeping all information confidential during the review is part of the standard communication to all reviewers. Misuse of confidential information is a severe professional failure and appropriate measures will be taken when brought to the attention of ECCV organizers. It should be noted, however, that the organisation of ECCV is not and cannot be held responsible for the consequences when reviewers break confidentiality.\n\nAccepted papers will be published by Springer (with appropriate copyrights) electronically up to three weeks prior to the main conference. Please make sure to discuss this issue with your legal advisors as it pertains to public disclosure of the contents of the papers submitted.\n\\subsection{Dual and Double Submissions}\nBy submitting a manuscript to ECCV 2022, authors acknowledge that it has not been previously published or accepted for publication in substantially similar form in any peer-reviewed venue including journal, conference, or workshop. Furthermore, no paper substantially similar in content has been or will be submitted to a journal, another conference or workshop during the review period (March 07, 2022 \u2013 July 3, 2022). The authors also attest that they did not submit substantially similar submissions to ECCV 2022. Violation of any of these conditions will lead to rejection and the violation will be reported to the other venue or journal, which will typically lead to rejection there as well. \n\nThe goals of the dual submission policy are (i) to have exciting new work be published for the first time at ECCV 2022, and (ii) to avoid duplicating the efforts of the reviewers.\nTherefore, all papers under review are checked for dual submissions and this is not allowed, independent of the page size of submissions. \n\nFor already published papers, our policy is based upon the following particular definition of ``publication''. A publication, for the purposes of the dual submission policy, is defined to be a written work longer than four pages that was submitted for review by peers for either acceptance or rejection, and, after review, was accepted. In particular, this definition of publication does not depend upon whether such an accepted written work appears in a formal proceedings or whether the organizers declare that such work ``counts as a publication''. \n\nAn arXiv.org paper does not count as a publication because it was not peer-reviewed for acceptance. The same is true for university technical reports. However, this definition of publication does include peer-reviewed workshop papers, even if they do not appear in a proceedings, if their length is more than 4 pages including citations. Given this definition, any submission to ECCV 2022 should not have substantial overlap with prior publications or other concurrent submissions. As a rule of thumb, the ECCV 2022 submission should contain no more than 20 percent of material from previous publications. 
\n\n\\subsection{Requirements for publication}\nPublication of the paper in the ECCV 2022 proceedings of Springer requires that at least one of the authors registers for the conference and present the paper there. It also requires that a camera-ready version that satisfies all formatting requirements is submitted before the camera-ready deadline. \n\\subsection{Double blind review}\n\\label{sec:blind}\nECCV reviewing is double blind, in that authors do not know the names of the area chair\/reviewers of their papers, and the area chairs\/reviewers cannot, beyond reasonable doubt, infer the names of the authors from the submission and the additional material. Avoid providing links to websites that identify the authors. Violation of any of these guidelines may lead to rejection without review. If you need to cite a different paper of yours that is being submitted concurrently to ECCV, the authors should (1) cite these papers, (2) argue in the body of your paper why your ECCV paper is non trivially different from these concurrent submissions, and (3) include anonymized versions of those papers in the supplemental material.\n\nMany authors misunderstand the concept of anonymizing for blind\nreview. Blind review does not mean that one must remove\ncitations to one's own work. In fact it is often impossible to\nreview a paper unless the previous citations are known and\navailable.\n\nBlind review means that you do not use the words ``my'' or ``our''\nwhen citing previous work. That is all. (But see below for\ntechnical reports).\n\nSaying ``this builds on the work of Lucy Smith [1]'' does not say\nthat you are Lucy Smith, it says that you are building on her\nwork. If you are Smith and Jones, do not say ``as we show in\n[7]'', say ``as Smith and Jones show in [7]'' and at the end of the\npaper, include reference 7 as you would any other cited work.\n\nAn example of a bad paper:\n\\begin{quote}\n\\begin{center}\n An analysis of the frobnicatable foo filter.\n\\end{center}\n\n In this paper we present a performance analysis of our\n previous paper [1], and show it to be inferior to all\n previously known methods. Why the previous paper was\n accepted without this analysis is beyond me.\n\n [1] Removed for blind review\n\\end{quote}\n\n\nAn example of an excellent paper:\n\n\\begin{quote}\n\\begin{center}\n An analysis of the frobnicatable foo filter.\n\\end{center}\n\n In this paper we present a performance analysis of the\n paper of Smith [1], and show it to be inferior to\n all previously known methods. Why the previous paper\n was accepted without this analysis is beyond me.\n\n [1] Smith, L. and Jones, C. ``The frobnicatable foo\n filter, a fundamental contribution to human knowledge''.\n Nature 381(12), 1-213.\n\\end{quote}\n\nIf you are making a submission to another conference at the same\ntime, which covers similar or overlapping material, you may need\nto refer to that submission in order to explain the differences,\njust as you would if you had previously published related work. In\nsuch cases, include the anonymized parallel\nsubmission~\\cite{Authors14} as additional material and cite it as\n\\begin{quote}\n1. Authors. ``The frobnicatable foo filter'', BMVC 2014 Submission\nID 324, Supplied as additional material {\\tt bmvc14.pdf}.\n\\end{quote}\n\nFinally, you may feel you need to tell the reader that more\ndetails can be found elsewhere, and refer them to a technical\nreport. 
For conference submissions, the paper must stand on its\nown, and not {\\em require} the reviewer to go to a techreport for\nfurther details. Thus, you may say in the body of the paper\n``further details may be found in~\\cite{Authors14b}''. Then\nsubmit the techreport as additional material. Again, you may not\nassume the reviewers will read this material.\n\nSometimes your paper is about a problem which you tested using a tool which\nis widely known to be restricted to a single institution. For example,\nlet's say it's 1969, you have solved a key problem on the Apollo lander,\nand you believe that the ECCV audience would like to hear about your\nsolution. The work is a development of your celebrated 1968 paper entitled\n``Zero-g frobnication: How being the only people in the world with access to\nthe Apollo lander source code makes us a wow at parties'', by Zeus.\n\nYou can handle this paper like any other. Don't write ``We show how to\nimprove our previous work [Anonymous, 1968]. This time we tested the\nalgorithm on a lunar lander [name of lander removed for blind review]''.\nThat would be silly, and would immediately identify the authors. Instead\nwrite the following:\n\\begin{quotation}\n\\noindent\n We describe a system for zero-g frobnication. This\n system is new because it handles the following cases:\n A, B. Previous systems [Zeus et al. 1968] didn't\n handle case B properly. Ours handles it by including\n a foo term in the bar integral.\n\n ...\n\n The proposed system was integrated with the Apollo\n lunar lander, and went all the way to the moon, don't\n you know. It displayed the following behaviours\n which show how well we solved cases A and B: ...\n\\end{quotation}\nAs you can see, the above text follows standard scientific convention,\nreads better than the first version, and does not explicitly name you as\nthe authors. A reviewer might think it likely that the new paper was\nwritten by Zeus, but cannot make any decision based on that guess.\nHe or she would have to be sure that no other authors could have been\ncontracted to solve problem B. \\\\\n\nFor sake of anonymity, it's recommended to omit acknowledgements\nin your review copy. They can be added later when you prepare the final copy.\n\n\\section{Manuscript Preparation}\n\nThis is an edited version of Springer LNCS instructions adapted\nfor ECCV 2022 first paper submission.\nYou are strongly encouraged to use \\LaTeX2$_\\varepsilon$ for the\npreparation of your\ncamera-ready manuscript together with the corresponding Springer\nclass file \\verb+llncs.cls+.\n\nWe would like to stress that the class\/style files and the template\nshould not be manipulated and that the guidelines regarding font sizes\nand format should be adhered to. This is to ensure that the end product\nis as homogeneous as possible.\n\n\\subsection{Printing Area}\nThe printing area is $122 \\; \\mbox{mm} \\times 193 \\;\n\\mbox{mm}$.\nThe text should be justified to occupy the full line width,\nso that the right margin is not ragged, with words hyphenated as\nappropriate. Please fill pages so that the length of the text\nis no less than 180~mm.\n\n\\subsection{Layout, Typeface, Font Sizes, and Numbering}\nUse 10-point type for the name(s) of the author(s) and 9-point type for\nthe address(es) and the abstract. For the main text, please use 10-point\ntype and single-line spacing.\nWe recommend using Computer Modern Roman (CM) fonts, which is the default font in this template.\nItalic type may be used to emphasize words in running text. 
Bold\ntype and underlining should be avoided.\nWith these sizes, the interline distance should be set so that some 45\nlines occur on a full-text page.\n\n\\subsubsection{Headings.}\n\nHeadings should be capitalized\n(i.e., nouns, verbs, and all other words\nexcept articles, prepositions, and conjunctions should be set with an\ninitial capital) and should,\nwith the exception of the title, be aligned to the left.\nWords joined by a hyphen are subject to a special rule. If the first\nword can stand alone, the second word should be capitalized.\nThe font sizes\nare given in Table~\\ref{table:headings}.\n\\setlength{\\tabcolsep}{4pt}\n\\begin{table}\n\\begin{center}\n\\caption{Font sizes of headings. Table captions should always be\npositioned {\\it above} the tables. The final sentence of a table\ncaption should end without a full stop}\n\\label{table:headings}\n\\begin{tabular}{lll}\n\\hline\\noalign{\\smallskip}\nHeading level & Example & Font size and style\\\\\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\nTitle (centered) & {\\Large \\bf Lecture Notes \\dots} & 14 point, bold\\\\\n1st-level heading & {\\large \\bf 1 Introduction} & 12 point, bold\\\\\n2nd-level heading & {\\bf 2.1 Printing Area} & 10 point, bold\\\\\n3rd-level heading & {\\bf Headings.} Text follows \\dots & 10 point, bold\n\\\\\n4th-level heading & {\\it Remark.} Text follows \\dots & 10 point,\nitalic\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\\setlength{\\tabcolsep}{1.4pt}\n\nHere are some examples of headings: ``Criteria to Disprove Context-Freeness of\nCollage Languages'', ``On Correcting the Intrusion of Tracing\nNon-deterministic Programs by Software'', ``A User-Friendly and\nExtendable Data Distribution System'', ``Multi-flip Networks:\nParallelizing GenSAT'', ``Self-determinations of Man''.\n\n\\subsubsection{Lemmas, Propositions, and Theorems.}\n\nThe numbers accorded to lemmas, propositions, and theorems etc. should\nappear in consecutive order, starting with the number 1, and not, for\nexample, with the number 11.\n\n\\subsection{Figures and Photographs}\n\\label{sect:figures}\n\nPlease produce your figures electronically and integrate\nthem into your text file. For \\LaTeX\\ users we recommend using package\n\\verb+graphicx+ or the style files \\verb+psfig+ or \\verb+epsf+.\n\nCheck that in line drawings, lines are not\ninterrupted and have constant width. Grids and details within the\nfigures must be clearly readable and may not be written one on top of\nthe other. Line drawings should have a resolution of at least 800 dpi\n(preferably 1200 dpi).\nFor digital halftones 300 dpi is usually sufficient.\nThe lettering in figures should have a height of 2~mm (10-point type).\nFigures should be scaled up or down accordingly.\nPlease do not use any absolute coordinates in figures.\n\nFigures should be numbered and should have a caption which should\nalways be positioned {\\it under} the figures, in contrast to the caption\nbelonging to a table, which should always appear {\\it above} the table.\nPlease center the captions between the margins and set them in\n9-point type\n(Fig.~\\ref{fig:example} shows an example).\nThe distance between text and figure should be about 8~mm, the\ndistance between figure and caption about 5~mm.\n\\begin{figure}\n\\centering\n\\includegraphics[height=6.5cm]{eijkel2}\n\\caption{One kernel at $x_s$ ({\\it dotted kernel}) or two kernels at\n$x_i$ and $x_j$ ({\\it left and right}) lead to the same summed estimate\nat $x_s$. 
This shows a figure consisting of different types of\nlines. Elements of the figure described in the caption should be set in\nitalics,\nin parentheses, as shown in this sample caption. The last\nsentence of a figure caption should generally end without a full stop}\n\\label{fig:example}\n\\end{figure}\n\nIf possible (e.g. if you use \\LaTeX) please define figures as floating\nobjects. \\LaTeX\\ users, please avoid using the location\nparameter ``h'' for ``here''. If you have to insert a pagebreak before a\nfigure, please ensure that the previous page is completely filled.\n\n\n\\subsection{Formulas}\n\nDisplayed equations or formulas are centered and set on a separate\nline (with an extra line or halfline space above and below). Displayed\nexpressions should be numbered for reference. The numbers should be\nconsecutive within the contribution,\nwith numbers enclosed in parentheses and set on the right margin.\nFor example,\n\\begin{align}\n \\psi (u) & = \\int_{0}^{T} \\left[\\frac{1}{2}\n \\left(\\Lambda_{0}^{-1} u,u\\right) + N^{\\ast} (-u)\\right] dt \\; \\\\\n& = 0 ?\n\\end{align}\n\nPlease punctuate a displayed equation in the same way as ordinary\ntext but with a small space before the end punctuation.\n\n\\subsection{Footnotes}\n\nThe superscript numeral used to refer to a footnote appears in the text\neither directly after the word to be discussed or, in relation to a\nphrase or a sentence, following the punctuation sign (comma,\nsemicolon, or full stop). Footnotes should appear at the bottom of\nthe\nnormal text area, with a line of about 2~cm in \\TeX\\ and about 5~cm in\nWord set\nimmediately above them.\\footnote{The footnote numeral is set flush left\nand the text follows with the usual word spacing. Second and subsequent\nlines are indented. Footnotes should end with a full stop.}\n\n\n\\subsection{Program Code}\n\nProgram listings or program commands in the text are normally set in\ntypewriter font, e.g., CMTT10 or Courier.\n\n\\noindent\n{\\it Example of a Computer Program}\n\\begin{verbatim}\nprogram Inflation (Output)\n {Assuming annual inflation rates of 7%, 8%, and 10%,...\n years};\n const\n MaxYears = 10;\n var\n Year: 0..MaxYears;\n Factor1, Factor2, Factor3: Real;\n begin\n Year := 0;\n Factor1 := 1.0; Factor2 := 1.0; Factor3 := 1.0;\n WriteLn('Year 7% 8% 10%');\n repeat\n Year := Year + 1;\n Factor1 := Factor1 * 1.07;\n Factor2 := Factor2 * 1.08;\n Factor3 := Factor3 * 1.10;\n WriteLn(Year:5,Factor1:7:3,Factor2:7:3,Factor3:7:3)\n until Year = MaxYears\nend.\n\\end{verbatim}\n\\noindent\n{\\small (Example from Jensen K., Wirth N. (1991) Pascal user manual and\nreport. Springer, New York)}\n\n\n\n\\subsection{Citations}\n\nThe list of references is headed ``References\" and is not assigned a\nnumber\nin the decimal system of headings. The list should be set in small print\nand placed at the end of your contribution, in front of the appendix,\nif one exists.\nPlease do not insert a pagebreak before the list of references if the\npage is not completely filled.\nAn example is given at the\nend of this information sheet. 
For citations in the text please use\nsquare brackets and consecutive numbers: \\cite{Alpher02},\n\\cite{Alpher03}, \\cite{Alpher04} \\dots\n\n\\section{Submitting a Camera-Ready for an Accepted Paper}\n\\subsection{Converting Initial Submission to Camera-Ready}\nTo convert a submission file into a camera-ready for an accepted paper:\n\\begin{enumerate}\n \\item First comment out \\begin{verbatim}\n \\usepackage{ruler}\n \\end{verbatim} and the line that follows it.\n \\item The anonymous title part should be removed or commented out, and a proper author block should be inserted, for which a skeleton is provided in a commented-out version. These are marked in the source file as \\begin{verbatim}\n \n \\end{verbatim} and \\begin{verbatim}\n \n \\end{verbatim}\n \\item Please write out author names in full in the paper, i.e. full given and family names. If any authors have names that can be parsed into FirstName LastName in multiple ways, please include the correct parsing in a comment to the editors, below the \\begin{verbatim}\\author{}\\end{verbatim} field.\n \\item Make sure you have inserted the proper Acknowledgments.\n \\end{enumerate} \n \n\\subsection{Preparing the Submission Package}\nWe need all the source files (LaTeX files, style files, special fonts, figures, bib-files) that are required to compile papers, as well as the camera ready PDF. For each paper, one ZIP-file called XXXX.ZIP (where XXXX is the zero-padded, four-digit paper ID) has to be prepared and submitted via the ECCV 2022 Submission Website, using the password you received with your initial registration on that site. The size of the ZIP-file may not exceed the limit of 60 MByte. The ZIP-file has to contain the following:\n \\begin{enumerate}\n \\item All source files, e.g. LaTeX2e files for the text, PS\/EPS or PDF\/JPG files for all figures.\n \\item PDF file named ``XXXX.pdf\" that has been produced by the submitted source, where XXXX is the four-digit paper ID (zero-padded if necessary). For example, if your paper ID is 24, the filename must be 0024.pdf. This PDF will be used as a reference and has to exactly match the output of the compilation.\n \\item PDF file named ``XXXX-copyright.PDF\": a scanned version of the signed copyright form (see ECCV 2022 Website, Camera Ready Guidelines for the correct form to use). \n \\item If you wish to provide supplementary material, the file name must be in the form XXXX-supp.pdf or XXXX-supp.zip, where XXXX is the zero-padded, four-digit paper ID as used in the previous step. Upload your supplemental file on the ``File Upload\" page as a single PDF or ZIP file of 100 MB in size or less. Only PDF and ZIP files are allowed for supplementary material. You can put anything in this file \u2013 movies, code, additional results, accompanying technical reports\u2013anything that may make your paper more useful to readers. If your supplementary material includes video or image data, you are advised to use common codecs and file formats. This will make the material viewable by the largest number of readers (a desirable outcome). ECCV encourages authors to submit videos using an MP4 codec such as DivX contained in an AVI. Also, please submit a README text file with each video specifying the exact codec used and a URL where the codec can be downloaded. 
Authors should refer to the contents of the supplementary material appropriately in the paper.\n \\end{enumerate}\n\nCheck that the upload of your file (or files) was successful either by matching the file length to that on your computer, or by using the download options that will appear after you have uploaded. Please ensure that you upload the correct camera-ready PDF\u2013renamed to XXXX.pdf as described in the previous step as your camera-ready submission. Every year there is at least one author who accidentally submits the wrong PDF as their camera-ready submission.\n\nFurther considerations for preparing the camera-ready package:\n \\begin{enumerate}\n \\item Make sure to include any further style files and fonts you may have used.\n \\item References are to be supplied as BBL files to avoid omission of data while conversion from BIB to BBL.\n \\item Please do not send any older versions of papers. There should be one set of source files and one XXXX.pdf file per paper. Our typesetters require the author-created pdfs in order to check the proper representation of symbols, figures, etc.\n \\item Please remove unnecessary files (such as eijkel2.pdf and eijkel2.eps) from the source folder. \n \\item You may use sub-directories.\n \\item Make sure to use relative paths for referencing files.\n \\item Make sure the source you submit compiles.\n\\end{enumerate}\n\nSpringer is the first publisher to implement the ORCID identifier for proceedings, ultimately providing authors with a digital identifier that distinguishes them from every other researcher. ORCID (Open Researcher and Contributor ID) hosts a registry of unique researcher identifiers and a transparent method of linking research activities to these identifiers. This is achieved through embedding ORCID identifiers in key workflows, such as research profile maintenance, manuscript submissions, grant applications and patent applications.\n\\subsection{Most Frequently Encountered Issues}\nPlease kindly use the checklist below to deal with some of the most frequently encountered issues in ECCV submissions.\n\n{\\bf FILES:}\n\\begin{itemize}\n \\item My submission package contains ONE compiled pdf file for the camera-ready version to go on Springerlink.\n\\item I have ensured that the submission package has all the additional files necessary for compiling the pdf on a standard LaTeX distribution.\n\\item I have used the correct copyright form (with editor names pre-printed), and a signed pdf is included in the zip file with the correct file name.\n\\end{itemize}\n\n{\\bf CONTENT:}\n\\begin{itemize}\n\\item I have removed all \\verb| \\vspace| and \\verb|\\hspace| commands from my paper.\n\\item I have not used \\verb|\\thanks| or \\verb|\\footnote| commands and symbols for corresponding authors in the title (which is processed with scripts) and (optionally) used an Acknowledgement section for all the acknowledgments, at the end of the paper.\n\\item I have not used \\verb|\\cite| command in the abstract.\n\\item I have read the Springer author guidelines, and complied with them, including the point on providing full information on editors and publishers for each reference in the paper (Author Guidelines \u2013 Section 2.8).\n\\item I have entered a correct \\verb|\\titlerunning{}| command and selected a meaningful short name for the paper.\n\\item I have entered \\verb|\\index{Lastname,Firstname}| commands for names that are longer than two words.\n\\item I have used the same name spelling in all my papers accepted to ECCV and ECCV 
Workshops.\n\\item I have inserted the ORCID identifiers of the authors in the paper header (see http:\/\/bit.ly\/2H5xBpN for more information).\n\\item I have not decreased the font size of any part of the paper (except tables) to fit into 14 pages, I understand Springer editors will remove such commands.\n\\end{itemize}\n{\\bf SUBMISSION:}\n\\begin{itemize}\n\\item All author names, titles, and contact author information are correctly entered in the submission site.\n\\item The corresponding author e-mail is given.\n\\item At least one author has registered by the camera ready deadline.\n\\end{itemize}\n\n\n\\section{Conclusions}\n\nThe paper ends with a conclusion. \n\n\n\\clearpage\\mbox{}Page \\thepage\\ of the manuscript.\n\\clearpage\\mbox{}Page \\thepage\\ of the manuscript.\n\nThis is the last page of the manuscript.\n\\par\\vfill\\par\nNow we have reached the maximum size of the ECCV 2022 submission (excluding references).\nReferences should start immediately after the main text, but can continue on p.15 if needed.\n\n\\clearpage\n\\bibliographystyle{splncs04}\n\n\n\\section{Conclusion}\nIn this paper, we addressed the problem of long-horizon exploration and planning by introducing a novel Long-HOT benchmark. Further, we proposed a modular hierarchical transport policy (HTP) that builds a topological graph of the scene to perform exploration with the help of weighted frontiers, and that simplifies navigation over long horizons through a combination of motion planning and an RL policy robust to imperfect hand-offs.\nOur sub-task policies are connected in novel ways, with different levels of hierarchical control requiring different state representations to perform object transport. We show that our approach leads to large improvements on the transport task, generalizes to harder long-horizon task settings while training only on simpler versions, and achieves state-of-the-art numbers on MultiON. \n\n\n\\section{Experiments}\n\\input{figures\/results}\n\nWe evaluate all approaches on two tasks set in photo-realistic Matterport3D scenes (MP3D)\\cite{Matterport3D} in Habitat~\\cite{habitat_iccv2019}: Long-HOT transport, and MultiON~\\cite{wani2020multion} object navigation. \n\n\n\\vspace{0.4cm}\n\\noindent \\textbf{Long-HOT:}\nWe split MP3D\\cite{Matterport3D} scenes into disjoint train, validation, and test splits~\\cite{anderson2018evaluation} with 61, 14, and 15 scenes, respectively. We generate 10,000 training task configurations among the training scenes, and 3,000 validation and 3,000 test configurations. \nEach task configuration consists of a specific configuration of objects, container, goal location, and agent starting location and pose. \nFirst, we sample a goal $(x,y)$ location in the map, then sample the four object locations. These object locations (1) lie in a specified range of distances from the goal (``goal-range''), (2) at a specified minimum distance (``obj-dist-min'') away from other objects, and (3) within a specified maximum distance (``obj-dist-max'') from at least one other object. Next, container and agent starting locations are also sampled to lie within the same goal-range as the objects. All distances are geodesic. These settings permit modulating complexity: for example, large goal distances lead to harder tasks that are more exploration-intensive and need a longer task horizon. A sketch of this sampling procedure is given below.\n\n\nTable \\ref{tab:dataset} shows the settings for the different task levels used in our experiments.\n\n
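Concretely, the constraint-based episode sampling described above can be sketched as follows (the helper names are ours; the navigable-point sampler and the geodesic-distance query are assumed to come from the simulator, and rejection sampling is simplified for clarity):\n\\begin{verbatim}\ndef sample_task(scene, geodesic, goal_range, obj_min, obj_max, n_obj=4):\n    # scene.sample_navigable_point() and geodesic(a, b) are assumed\n    # to be provided by the simulator (e.g. Habitat).\n    def in_band(p, goal):\n        return goal_range[0] <= geodesic(p, goal) <= goal_range[1]\n\n    def sample_in_band(goal):\n        while True:\n            p = scene.sample_navigable_point()\n            if in_band(p, goal):\n                return p\n\n    goal = scene.sample_navigable_point()\n    objects = []\n    while len(objects) < n_obj:\n        p = scene.sample_navigable_point()\n        if not in_band(p, goal):\n            continue   # (1) distance to goal inside the goal-range band\n        if any(geodesic(p, q) < obj_min for q in objects):\n            continue   # (2) at least obj-dist-min from every other object\n        if objects and min(geodesic(p, q) for q in objects) > obj_max:\n            continue   # (3) within obj-dist-max of at least one object\n        objects.append(p)\n\n    # Container and agent start lie in the same goal-range band.\n    return goal, objects, sample_in_band(goal), sample_in_band(goal)\n\\end{verbatim}\n\n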
We train all methods on \\emph{default}-level tasks on the 61 training scenes.\nAfter training, we first evaluate them on the 15 disjoint test scenes in the \\textit{default} setting and refer to this as the ``Standard Long-HOT Task''.\nWe then perform a more focused ``Large Long-HOT Task'' evaluation on \\textit{large} scenes that have at least one dimension $>40m$, sampling \\emph{default}, \\emph{hard} and \\emph{harder} level tasks from Table \\ref{tab:dataset}.\n\n\n\\input{tables\/dataset_table}\n\n\n\\vspace{0.4cm}\n\\noindent \\textbf{MultiOn Dataset:}\nMultiOn\\cite{wani2020multion} is a sequential multi-object navigation task where the agent is required to visit objects in a predefined sequence.\nWe adapt our proposed transport policy to the challenging MultiOn (3-ON) task using the object goal vector as input to the high-level controller (more details in the supplementary). \n\n\\input{tables\/results}\n\n\\vspace{0.4cm}\n\\noindent \\textbf{Baselines:} We compare our proposed transport policy with several baseline methods and ablations. \n\\begin{itemize}[leftmargin=*]\n\\item \\textbf{NoMap:} This baseline policy, trained using PPO\\cite{ppo}, maps the RGBD image $I$, hand state $O_h$ and goal state $O_g$ directly to low-level robot actions.\n\\item \\textbf{OracleMap:} This method improves on NoMap by assuming additional access to a ground-truth \n2D occupancy map of the $10m\\times 10m$ area centered on the agent in the overhead view. Similar to \\cite{wani2020multion}, we evaluate two versions of OracleMap: with occupancy alone (``Occ''), and with additionally annotated true locations of the task-relevant objects and container (``Occ+Obj''). \n\\item \\textbf{OracleMap-Waypoints:} This baseline represents a popular hierarchical approach in embodied navigation~\\cite{krantz2021waypoint,xia2021relmogen}: setting navigation waypoints for a motion planner, such as A\\text{*}~\\cite{astar}. It trains an RL policy to select discretized $(x, y)$ waypoints on the map. We provide access to OracleMap (Occ+Obj) for this baseline (see the supplementary for details).\n\n\\item \\textbf{MultiOn Baselines:} For Long-HOT object transport, we adapt the ProjNeuralMap baseline from \\cite{wani2020multion}, which projects perspective features into the top view, to our task. To this, we add hand-state and goal-state embeddings instead of the object goal embeddings used in MultiOn.\nFor MultiOn, we compare against the authors' baselines~\\cite{wani2020multion}, as well as the best-performing methods from the public leaderboard.\n\n\\item \\textbf{Exploration Ablations:} We study three variants of our method with different exploration strategies: NearestFrontier, CNN and GCN. HTP-NearestFrontier uses vanilla frontier exploration\\cite{frontier} and picks the frontier closest to the agent's location as the next exploration subgoal. HTP-CNN and HTP-GCN use our proposed CNN- and GCN-based exploration scores for weighted frontier exploration, explained in Sec.~\\ref{sec:score_pred}.\n\n\n\\end{itemize}\n\n\n\n\\subsection{Metrics}\nWe use standard evaluation metrics following previous works \\cite{Weihs_2021_CVPR,wani2020multion,gupta2019cognitive,jain2020cordial,chen2020soundspaces,anderson2018evaluation,habitat_iccv2019} and adapt a few other metrics to our task setting. \n\\vspace{-0.3cm}\n\\paragraph{\\bf \\%Success:} It measures the percentage of successful episodes across the test set. An episode is successful if the agent moves all $K$ objects to the goal location. 
\n\n\\vspace{-0.3cm}\n\\paragraph{\\bf \\%Progress:} It measures the percentage of target objects successfully transported to the goal location.\n\n\\vspace{-0.3cm}\n\\paragraph{\\bf SPL \\& PPL:} SPL is Success weighted by Path Length, and PPL is Progress weighted by Path Length. Since there are multiple ways in which one can complete this task, we substitute the optimal path length in the SPL and PPL calculations with a reference path length $G_{ref}$ (details in the supplementary). Any execution with path length $G_{pl} \\leq G_{ref}$ weights the success and progress values by $1.0$. Hence $\\texttt{SPL} = 1_{\\texttt{success}}\\times \\min(G_{ref}\/G_{pl},1.0)$ and $\\texttt{PPL} = \\texttt{Progress}\\times \\min(G_{ref}\/G_{pl},1.0)$. \n\n\\vspace{-0.3cm}\n\\paragraph{\\bf Episode Energy:} We adapt a similar metric from \\cite{Weihs_2021_CVPR} to our task setting. \nIt measures the amount of energy remaining to complete the episode and gives partial credit if the agent successfully moves objects closer to the goal. It is defined as $E = \\sum_{k=1}^K d_{g2t^k}\/ \\sum_{k=1}^K D_{g2t^k}$, where the numerator and denominator represent the sum of the geodesic distances of the target objects to the goal location at the end and at the start of the episode, respectively. \n\n\\vspace{-0.3cm}\n\\paragraph{\\bf \\% Picked:} This metric measures the percentage of target objects that are successfully picked up.\n\\input{tables\/largeLHOT_multion}\n\n
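For reference, these metrics can be computed as follows (a minimal sketch in our notation, with the per-episode statistics assumed given):\n\\begin{verbatim}\ndef spl(success, g_pl, g_ref):\n    # Success weighted by reference path length (1.0 when g_pl <= g_ref).\n    return float(success) * min(g_ref \/ g_pl, 1.0)\n\ndef ppl(progress, g_pl, g_ref):\n    # Progress weighted by reference path length.\n    return progress * min(g_ref \/ g_pl, 1.0)\n\ndef episode_energy(dists_end, dists_start):\n    # Remaining energy: final object-to-goal distances over initial ones.\n    return sum(dists_end) \/ sum(dists_start)\n\\end{verbatim}\n\n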
\\subsection{Results} \\label{ssec:quant_compare}\n\n\\paragraph{Standard Long-HOT Task:} \n\nTable \\ref{tab:standard} shows the results of the evaluation on the Standard Long-HOT task for 1000 test episodes generated using the \\emph{default} task level.\nAll variants of HTP clearly outperform NoMap and ProjNeuralMap on all six metrics. \nFig. \\ref{fig:example_results} visualizes an episode of HTP-CNN. Video results are provided in the supplementary.\n\n\\vspace{0.2cm}\n\\noindent\\emph{Are hierarchies good?} HTP-NearestFrontier already outperforms the non-hierarchical flat baselines by a large margin, showing the importance of our modular hierarchical approach involving separate policies for different task phases, coupled with a topological map. Interestingly, not all hierarchies are good: in particular, OracleMap-Waypoints, which sets waypoint subgoals for a motion planner, performs clearly worse than the flat OracleMap (``Occ+Obj''). Note that the OracleMap methods have access to ground-truth map information and are not directly comparable with HTP, but can be meaningfully compared among themselves. \n\n\\vspace{0.2cm}\n\\noindent\\emph{Does weighted frontier exploration work?} Among the HTP variants, both HTP-GCN and HTP-CNN, which use predicted scores for weighted frontier exploration, clearly outperform HTP-NearestFrontier. Between them, GCN and CNN are roughly equivalent in this setting.\n\n\\vspace{0.2cm}\n\\noindent\\emph{How important is good agent-centered occupancy and object location information?} On this test set, access to the ground-truth occupancy and object maps centered around the agent significantly improves performance, with OracleMap (Occ+Obj) performing best of all methods.\n\n\\vspace{-0.2cm}\n\\paragraph{Large Long-HOT Task:} We now evaluate the same trained policies in more challenging settings on large scenes, with more difficult transport task levels. This evaluates generalization and highlights the benefits of effective hierarchy and modularity. Note that the scenes used for testing in Large Long-HOT have some overlap with the training scenes, although the episodes are different; this is due to the limited number of large scenes available in the disjoint test set. Our observation of better generalization nevertheless stands, since all methods share the same advantage, yet the baselines suffer severe drops compared to ours.\n\nTable \\ref{tab:deep_transport} (left) shows the results. Flat end-to-end approaches like NoMap and OracleMap deteriorate catastrophically on \\textit{hard} and \\textit{harder} level tasks. \nNoMap's \\%Success drops from 39\\% on \\emph{default} to\na mere 5.2\\% and 2.8\\% on the \\emph{hard} and \\emph{harder} levels. \nHTP methods degrade more gracefully, achieving up to 33\\% and 22\\% on \\textit{hard} and \\textit{harder}. In fact, as task difficulty increases, HTP significantly closes the performance gap to OracleMap-Waypoints despite the oracle method's access to ground-truth map information. We believe this is because our modular approach with weighted frontier exploration generalizes better than the waypoint setting of OracleMap-Waypoints. Further, OracleMap-Waypoints performs better than OracleMap (Occ+Obj), confirming the benefits of subgoals in long-horizon settings. \nTo further study the dependence of HTP's task performance on its model components and its generalization to an increasing number of object goals, we conduct several ablations; the results are available in the supplementary. \n\nOverall, the large effect of small increments in spatial task scale at the \\textit{hard} and \\textit{harder} levels (Tab.~\\ref{tab:dataset}) shows how Long-HOT stress-tests planning, exploration and reasoning over long spatial and temporal horizons. This differs from prior efforts~\\cite{wani2020multion} that extend a task by adding new objects but pre-specify a sequence of single-object sub-tasks.\nOur HTP approach, which leverages hierarchical policies and topological maps, is a first step towards addressing the unique challenges induced by Long-HOT. \nNote that the difficulty of Long-HOT is expected to be considerably higher under the end criterion of MultiON; an analysis is provided in the supplementary.\n\n\\vspace{-0.2cm}\n\\paragraph{Results on MultiOn:} Finally, we also evaluate our proposed HTP framework on the MultiOn\\cite{wani2020multion} challenge. Table \\ref{tab:deep_transport} (right) shows that our method significantly outperforms the baselines from \\cite{wani2020multion}. Moreover, its performance is nearly on par with, or better than, the CVPR 2021 Embodied AI workshop challenge winner \\cite{marza2021teaching}. Note that the techniques proposed in \\cite{marza2021teaching} are complementary to ours. \n\n\n\n\\section{Introduction}\n\\input{figures\/teaser}\n\nA robot tasked with finding an object in a large environment or executing a complex maneuver must reason over a long horizon. Existing end-to-end reinforcement learning (RL) approaches often suffer in long-horizon embodied navigation tasks due to a combination of challenges: (a) inability to provide exploration guarantees when the point or object of interest is not visible, (b) difficulty in backtracking previously seen locations and (c) difficulty in planning over long horizons. 
In this work, we address these issues by proposing a novel long-horizon embodied transport task, as well as modular hierarchical methods for embodied transport and navigation.\n\nOur proposed long-horizon object transport task, Long-HOT, is designed to study modular approaches in the Habitat environment \\cite{habitat_iccv2019}. It requires an embodied agent to pick up objects placed at unknown locations in a large environment and drop them at a known goal location, while satisfying load constraints, which may be relaxed by picking up a special container object (Fig.~\\ref{fig:teaser}). While tasks like MultiON for sequential navigation also benefit from long-range planning \\cite{wani2020multion}, the proposed transport task requires more complex decision-making, such as choosing the order of pick-up and trading off exploration against exploitation when searching for the container. We abstract away the physical reasoning for pickup and drop actions since, unlike TDW-Transport \\cite{threedworld_transport}, our focus is on deeper exploration and long-horizon planning to find and transport objects in large scenes.\n\nWe argue that modularity is a crucial choice for tackling the above challenges: navigation and interaction policies can be decoupled through temporal and state abstractions that significantly reduce training cost and enhance semantic interpretability compared to end-to-end approaches. This is distinct from existing hierarchical methods for subgoal generation \\cite{krantz2021waypoint,xia2021relmogen} in long-horizon tasks, where the expressivity of subgoals is largely limited to goal reaching for embodied navigation and which still face scalability challenges when the task requires long trajectory demonstrations. \n\nOur modular approach for long-horizon embodied tasks comprises a topological-graph-based exploration framework and atomic policies that execute individual sub-tasks (Fig.~\\ref{fig:teaser}). The higher-level planner is a finite state machine that decides on the next sub-routine to execute from one of the \\texttt{\\small \\{Explore, Pickup, Drop\\}} actions. The topological map representation consists of nodes connected in the form of a graph, which serves to infuse geometric and odometry information to aid deeper exploration and facilitate backtracking. Unlike methods that utilize 360-degree panoramic images as input \\cite{chaplot2020neural,VGM,krantz2021waypoint}, we divide every node to aggregate representations from several directions in its vicinity. The representation within a specific node and direction consists of latent features $\\mathcal{F}_A$ from a pre-trained encoder, an exploration score $\\mathcal{F}_E$ that captures the likelihood of the agent finding an object if it explores a frontier in that direction, and an object closeness score $\\mathcal{F}_O$ that indicates the distance to objects within the agent's field of view. We also propose a novel weighted improvement of frontier exploration \\cite{frontier} using the predicted exploration scores.\n\nUnlike methods \\cite{xia2021relmogen,krantz2021waypoint,VGM,habitat2o} that rely completely on either motion planning algorithms \\cite{planningalgo,roboticsbook} or pure RL for low-level actions \\cite{VGM,wani2020multion}, our approach uses the best of both worlds, with motion planning for point goals within explored regions and RL policies to travel the last mile towards semantic targets at unknown locations. 
Indeed, on both Long-HOT and MultiON, we show that our proposed modular hierarchical approach performs significantly better, especially on longer horizons, compared to agents operating without a map or other hierarchical approaches that sample navigation subgoals for task completion. Moreover, it realizes a key benefit of modularity, namely adaptability to harder settings when trained on a simpler version of the task.\n\nIn summary, our contributions are the following:\n\\begin{tight_itemize}\n \\item A novel object transport task, Long-HOT, for evaluating embodied methods in complex long-horizon settings.\n \\item A modular transport framework that builds a topological graph and explores an unknown environment using weighted frontiers.\n \\item Strong empirical performance on object transport and navigation tasks, with adaptability to longer horizons. \n\\end{tight_itemize}\n\n
\\section{Hierarchical Transport Policy (HTP)}\nWe now describe a modular policy (Fig. \\ref{fig:framework}) that builds a topological map of the environment and operates at different levels of temporal abstraction. Our framework consists of three modules: {\\it a high-level controller}, {\\it an exploration module} and {\\it a pick-drop module}. The {\\it high-level controller} decides on the next high-level action to execute from one of the $\\mathcal{A}_H =$\\texttt{\\small \\{Explore, Pickup[Object], Drop\\}} actions. 
The appropriate sub-task module then takes over to perform the given high-level action. At any point, if the high-level controller decides to execute a different sub-task, the current execution is interrupted and control is passed to the module executing the next one. The modules in our framework are built from several components, which we describe briefly in Sec. \\ref{ssec:components} before detailing how they are connected in the overall framework.\n\\input{figures\/hierarchy}\n\n\\subsection{HTP Model Components}\\label{ssec:components}\nAn overview of our model components, along with their connections in the HTP framework, is shown in Fig. \\ref{fig:pipeline}. Our sub-task modules consist of the following components: a) a topological graph builder, b) an exploration score predictor, c) an object-closeness predictor, d) an object navigation policy and e) a low-level controller. We now describe each of them in detail. \n\\input{figures\/pipeline}\n\n\\subsubsection{Topological Graph Builder}\\label{ssec:graphupdate}\nThis component creates a topological map of the environment as a graph ${G} = ({V}, {E})$, where $V \\in \\mathbb{R}^{N_t\\times f_d}$ and $E$ represent spatial nodes and connecting edges, respectively. Here, $N_t$ is the number of nodes at timestep $t$ and $f_d$ is the length of the node features. The node features for a node $V_i \\in {V}$ consist of concatenated ``node-direction'' features $V_{i, \\theta}$ corresponding to $D=12$ directions $\\theta \\in \\{1,...,D\\}$ spanning 360 degrees, centered on the node.\nThese node-direction features $V_{i, \\theta}$ are in turn computed by encoding perspective RGB images through an encoder $\\mathcal{F}_A$, pretrained in an autoencoder (details in the supplementary).\n\nAt every timestep $t$, the graph builder updates the map as follows. It takes the pose information ($P_{\\phi xy}$) and the encoded egocentric image features from $\\mathcal{F}_A$ as input. The agent's location $P_{xy}$ is mapped to the nearest existing graph node $V_i$, and its heading angle $P_{\\phi}$ is mapped onto the nearest node-direction $\\theta$. The corresponding node-direction representation $V_{i,\\theta}$ is then updated to the current image feature vector. We add a new node $V_{i+1}$ if the agent is not within a distance threshold $l_{th} = 2m$ of any existing node, and store the corresponding node coordinate $P_{xy}$. When the agent transitions between two nodes in the graph, an edge is added between them. Similar to \\cite{VGM}, we also keep track of the last visited timestep for every node to provide additional context for downstream processing. A minimal sketch of this update rule is given below.\n\n
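The sketch below illustrates the graph update (the class and variable names are ours; the image features are assumed to come from the pretrained encoder $\\mathcal{F}_A$ and the poses from the simulator):\n\\begin{verbatim}\nimport math\n\nclass TopologicalGraph:\n    # Nodes store one feature slot per direction; edges link visited nodes.\n    def __init__(self, num_dirs=12, dist_thresh=2.0):\n        self.num_dirs, self.dist_thresh = num_dirs, dist_thresh\n        self.coords, self.feats = [], []\n        self.edges, self.last_visit = set(), []\n\n    def update(self, xy, heading, image_feat, t):\n        # Map the pose to the nearest node, or create a new node if the\n        # agent is more than dist_thresh away from all existing nodes.\n        dists = [math.dist(xy, c) for c in self.coords]\n        if not dists or min(dists) > self.dist_thresh:\n            self.coords.append(xy)\n            self.feats.append([None] * self.num_dirs)\n            self.last_visit.append(t)\n            i = len(self.coords) - 1\n        else:\n            i = dists.index(min(dists))\n        # Snap the heading onto the nearest of the D discrete directions.\n        theta = round(heading * self.num_dirs \/ (2 * math.pi)) % self.num_dirs\n        self.feats[i][theta] = image_feat   # update node-direction feature\n        self.last_visit[i] = t\n        return i\n\n    def add_edge(self, i, j):\n        # Called when the agent transitions between nodes i and j.\n        self.edges.add((min(i, j), max(i, j)))\n\\end{verbatim}\n\n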
\n\n\subsubsection{Exploration and Object Closeness Scores}\label{sec:score_pred}\label{ssec:scorepred}\nAt every node-direction, indexed by $(i, \theta)$, aside from the feature vector above, we store two score predictions: (a) an {\it exploration score} for frontiers \cite{frontier} that predicts the likelihood of finding an object by exploring that frontier and (b) an {\it object closeness score} that indicates the distance to various objects in the current field of view.\n\n\vspace{-0.2cm}\n\paragraph{Exploration Score Predictor ($\mathcal{F}_E$):}\label{para:exp_score_pred} \nWe explore two variants of this function: (1) a Q-learning based graph convolutional network (GCN) \cite{kipf2017semi} that reasons about frontiers over the entire topological graph and (2) a convolutional network (CNN) that predicts frontier scores on a per-frame basis.\n\n\vspace{0.2cm}\n\noindent{\it GCN Exploration Score:} This function operates on (1) \nthe current topological map\n$G = (V,E)$ with associated features as computed above, and (2) a binary mask $M \in \{0,1\}^{N_t \times D}$ indicating the availability of a frontier at every node-direction, computed using the method described in Sec. \ref{ssec:expmodule}.\nProvided these inputs, we train a graph convolutional network (GCN)~\cite{kipf2017semi} to produce reinforcement learning Q-values for each node-direction, representing future rewards for finding objects after visiting each frontier associated with that node-direction.\nThe object-finding reward function $r_t^e$ at every timestep $t$ is:\n\begin{equation}\n r_t^e = \mathbb{I}_{success} \cdot r^e_{success} + r^e_{slack} + \sum_o \mathbb{I}^o_{found} \cdot r^e_{found},\n\end{equation}\nwhere $\mathbb{I}^o_{found}$ indicates whether object $o$ was found at timestep $t$, $r^e_{found}$ is the reward for finding a new object, $r^e_{slack}$ is the time penalty for every step that encourages finding the objects faster, $\mathbb{I}_{success}$ indicates whether \textit{all} objects were found and $r^e_{success}$ is the associated success bonus. We consider an object to be found if it is in the agent's field of view at a distance less than a pre-defined maximum distance. \nOur GCN architecture consists of three graph convolution layers and a fully connected layer. \n\n\n\n\vspace{0.2cm}\n\noindent{\it CNN Exploration Score:} This variant of the exploration score is computed directly from the agent's current RGBD view.\nGiven this view, a CNN predicts three exploration scores:\neach score represents the chances of finding an object if the agent explores the farthest frontier available within a corresponding range of angles centered on the current agent heading: $-45^\circ$ to $-15^\circ$, $-15^\circ$ to $+15^\circ$, and $+15^\circ$ to $+45^\circ$ respectively for the three scores. This CNN is trained with labels set to $\max(\max_o(d_{a,o} - d_{f,o})\/5, 0)$ where $d_{a,o}, d_{f,o}$ represent the geodesic distance to object $o$ from the agent and from the frontier respectively. If a frontier is not available then the score is set to 0. These three scores are then stored at the three consecutive node-directions $\theta-1, \theta, \theta+1$, centered on the current direction $\theta$, at the current node $i$.
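\nThe following is a minimal sketch of this label computation, assuming the simulator provides geodesic distances and the per-sector frontier partition; all names are illustrative.\n\begin{verbatim}\nimport numpy as np\n\ndef exploration_labels(agent_dists, sector_frontier_dists):\n    # agent_dists: object -> geodesic distance d_{a,o} from the agent\n    # sector_frontier_dists: 3 entries (one per angular sector); each is\n    #   either None (no frontier available) or a dict mapping\n    #   object -> distance d_{f,o} from that sector's farthest frontier\n    labels = []\n    for sector in sector_frontier_dists:\n        if sector is None:\n            labels.append(0.0)            # no frontier -> score 0\n            continue\n        gain = max(agent_dists[o] - sector[o] for o in sector)\n        labels.append(max(gain \/ 5.0, 0.0))  # max(max_o(d_a-d_f)\/5, 0)\n    return np.asarray(labels, dtype=np.float32)\n\end{verbatim}\n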
\n\n\vspace{-0.2cm}\n\paragraph{Object Closeness Predictor ($\mathcal{F}_O$):} This CNN maps the current RGBD observation $I$ to a ``closeness score'' for every object. It is trained with supervised learning to predict target closeness labels for each object, which are set to $\max(1-d\/5, 0)$ where $d$ is the true distance to the object in meters. So, objects farther than $5m$ away (or invisible) have labels $0$, and very close objects have labels $\approx 1$. Each node-direction has an associated closeness score for each object.\n\n\subsubsection{Object Navigation Policy}\label{ssec:objectnav}\nNext, we discuss our object navigation policy. Given the RGBD observation $I$ and a one-hot encoding $k_o$ of a target object, the policy must select navigation actions from \texttt{\small \{FORWARD, TURN-LEFT, TURN-RIGHT\}} that take it closer towards the target. This policy is trained with the following reward $r^n_t$ \cite{wani2020multion} at each timestep $t$:\n\begin{equation}\nr^n_t = \mathbb{I}_{[reached-obj]} \cdot r^n_{obj} + r^n_{slack} + r^n_{d2o} + r^n_{collision},\n\end{equation}\nwhere $r^n_{obj}$ is the success reward if the agent comes closer than a threshold distance $d_{th}$ to the target object, $r^n_{slack}$ is a constant time penalty for every step, $r^n_{d2o} = (d_{t-1} - d_t)$ is the decrease in geodesic distance to the target object and $r^n_{collision}$ is the penalty for collisions with the environment. \n\nWe train this policy using the proximal policy optimization (PPO) \cite{ppo} reinforcement learning algorithm, for approximately 40M iterations using 24 simulator instances. We use a mini-batch size of 4 and perform 2 epochs in each PPO update. Other hyper-parameters are similar to \cite{wani2020multion}.\n\n\subsubsection{Low-Level Navigation Controller}\label{ssec:lc}\nOur final module is a low-level controller that takes as input a goal location (from within the explored regions)\nto be reached. \nIt then plans a path towards the specified goal location using the classical A* \cite{astar} planning algorithm on a pre-built occupancy map.\n\n\subsection{HTP Control Flow}\label{ssec:method} \nWe are now ready to describe how HTP manages the flow of control between these components to perform long-horizon transport tasks.\nNote that while we describe the HTP algorithm for object transport, we show in Sec.~\ref{ssec:quant_compare} that HTP also works for other embodied navigation tasks.\n\n\n\n\subsubsection{High-Level Controller}\label{ssec:highlevel}\nThe high-level controller ($\pi^H$) is a finite state machine. Based on the object closeness scores $\mathcal{F}_O$ (Sec. \ref{sec:score_pred}), hand state $O_h$, and goal state $O_g$,\nit selects one subtask from among $\mathcal{A}_H =$\texttt{\small \{Explore, Pickup[Object], Drop\}}. \nAt timestep $t$, if the next high-level action predicted by the controller is different from the current sub-task that is being executed, the controller interrupts the execution, and the agent performs the updated high-level action. For example, during exploration, if the agent finds an object with a closeness score higher than some threshold, it switches control from exploration to picking the object if the hand is not full or if it holds a container.
\n\n\n\n\subsubsection{Weighted Frontier Exploration}\label{ssec:expmodule}\n\nIf $\pi^H$ selects the $<$\texttt{\small Explore}$>$ sub-task, the exploration module is executed. For exploration, we introduce a weighted frontier technique based on the predicted exploration score function $\mathcal{F}_E$ (Sec. \ref{sec:score_pred}). At every timestep $t$, we calculate the set of frontiers ${\bf S}$ over the explored and unexplored regions using occupancy information \cite{frontier}.\nWhen a new frontier is identified, we assign a parent node-direction $Y_r = (i,\theta^n)_r$ for the $r^{th}$ frontier, where $(i,\theta)$ is the current localized node-direction and $\theta^n$ is calculated based on the angle made by the frontier with the agent. Here the agent's field of view is $90^\circ$, so $\theta^n$ for the newly found frontiers can assume one of the $\{\theta-1,\theta,\theta+1\}$ directions.\nFor all existing frontiers from timestep $t-1$, we copy the same parent node-direction from the previous timestep. Finally, we calculate a representative frontier $S^{(i,\theta)}$ for node-direction $(i,\theta)$ as $S^{(i,\theta)} = \argmin_{s_k:\, Y_k = (i,\theta)} \|s_k - X_c\|$, where $s_k \in {\bf S}$ and $X_c$ is the centroid of the frontiers associated with $Y_k=(i,\theta)$. \n\n\nAt each timestep during its execution, the exploration module selects the node-direction $(i,\theta)$ from the topological graph $G$ that has the highest exploration score $\mathcal{F}_E$. Its corresponding frontier $S^{(i,\theta)}$ is then set as the goal location for the agent's low-level controller, which begins to move towards this goal. \nThe highest-score goal frontier is recomputed at every timestep, and may switch as new views are observed during exploration.
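\nA minimal sketch of this frontier selection, assuming the frontier set, parent assignments and predicted scores are given; all names are illustrative.\n\begin{verbatim}\nimport numpy as np\n\ndef select_goal_frontier(frontiers, parents, exp_scores):\n    # frontiers: (K, 2) array of frontier positions s_k\n    # parents: list of K parent node-directions Y_k = (i, theta)\n    # exp_scores: dict (i, theta) -> predicted exploration score F_E\n    reps = {}\n    for key in set(parents):\n        idx = [k for k, y in enumerate(parents) if y == key]\n        pts = frontiers[idx]\n        centroid = pts.mean(axis=0)                   # X_c\n        reps[key] = pts[np.linalg.norm(pts - centroid, axis=1).argmin()]\n    # goal: representative frontier of the best-scoring node-direction\n    best = max(reps, key=lambda key: exp_scores.get(key, 0.0))\n    return reps[best]\n\end{verbatim}\n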
\n\n\subsubsection{Pick-Drop Module}\label{ssec:pickdrop}\nThis module performs the pick or drop actions in the object transport task when the controller $\pi^H$ selects an action $a_H \in$\ \{\texttt{\small Pickup[Object], DropAtGoal}\}. \nWhen called, this module first selects a node-direction $(i,\theta)$ from graph $G$ with the highest object closeness score $\mathcal{F}_O$ for the target object.\nIf the agent is not already at the selected $i^{th}$ node, then its location $P_{xy}(i)$ is set as the goal for the low-level controller. Once the agent is localized to the $i^{th}$ node, it orients in the direction of $\theta$. At this point, control is passed to the Object Navigation policy, targeting the object selected by $\pi^H$. \nThe module then selects the pickup or drop action whenever the object closeness score $\mathcal{F}_O$ for the target object, based on the current view, exceeds a threshold. \nThe sub-task is successful when the hand state or goal state is changed accordingly, after which the controller $\pi^H$ predicts the next high-level action to execute. We execute this module until it performs the pick\/drop or for a maximum of $T_p$ steps, after which control is given back to the high-level controller $\pi^H$.\n\n\n\n\n\n\section{Related Work}\n\paragraph{\bf Embodied intelligence.} \n\nThe community has developed several simulation environments \cite{habitat_iccv2019,ai2thor,shen2021igibson,threedworld_transport,chen2020soundspaces,xiazamirhe2018gibsonenv,RoboTHOR} and associated tasks to study embodied agents on problems like object goal navigation \cite{batra2020objectnav,chaplot2020object,wortsman2019learning,yang2018visual,wani2020multion}, point goal navigation \cite{anderson2018evaluation,habitat_iccv2019,wijmans2020ddppo,ramakrishnan2020occupancy}, rearrangement \cite{threedworld_transport,shridhar2020alfred,Weihs_2021_CVPR,habitat2o} and instruction following \cite{shridhar2020alfred,anderson2018visionandlanguage}.\nWhile there are a handful of previous works designed for navigation \cite{batra2020objectnav,chaplot2020object,wani2020multion} or rearrangement \cite{threedworld_transport,Weihs_2021_CVPR}, they do not extensively stress-test methods with increasing task complexities. We find that the flat policy architectures typically used in embodied AI tasks \cite{wani2020multion} fail completely when executing over longer horizons. Hence, we propose a new benchmark called Long-HOT that has the potential to serve as a testbed for, and accelerate the development of, novel architectures for planning, exploration, and reasoning over long spatial and temporal horizons. \n\n\nOur task builds on previous transport tasks defined in embodied intelligence \cite{Weihs_2021_CVPR,habitat2o,threedworld_transport,wani2020multion} but differs in that it requires deeper exploration and long-horizon planning. While previous works like \cite{Weihs_2021_CVPR} focus on identifying state changes using visual inputs to perform rearrangement and \cite{habitat2o} uses geometrically specified goals in single-apartment environments, these works operate in minimal exploration scenarios where the focus is shifted more towards perception or interaction with objects. \nOur task is closest to \cite{threedworld_transport}; however, while \cite{threedworld_transport} focuses on performing transport with physics-based simulation, we abstract our interactions and focus more on complex long-horizon planning.\nOur work extends \cite{wani2020multion}, but rather than focusing on navigation in a predefined sequential fashion, our task requires more complex decision making to determine the order of picking, and to decide whether to perform a greedy transport if the agent sees the goal or to explore more in hopes of finding the container for efficient transport.\n\n\n\n\n \n\n\n\paragraph{\bf Modular-Hierarchical Frameworks.} \nSolutions to long-horizon tasks typically involve hierarchical \cite{NIPS1997_hierarchy,subgoal_discovery,Sutton:1999,bacon2016optioncritic} policies in reinforcement learning. They provide better exploration behavior through long-term commitment to a particular subtask. \cite{xia2021relmogen,krantz2021waypoint} present one such approach where they sample navigation subgoals to be executed by the low-level controller. While these methods can temporally abstract navigation to an extent, we find their performance to drop significantly in longer-horizon settings. In HTP we show that modularity enables generalization while only training on the simpler versions of the task.
\n\n\nCloser to our work are modular approaches \cite{NMCdas2019,chaplot2020neural,krantz2021waypoint,xia2021relmogen} that provide an intuitive way to divide complex long-horizon tasks into a combination of individual sub-tasks that can be solved using existing approaches. Das et al. \cite{NMCdas2019} present a modular approach to solve embodied question answering \cite{das2017embodied} through a combination of several navigation policies, each for finding an object, finding a room or exiting one. \nThis can blow up with the number of sub-routines required to navigate across a building, or fail due to the inability of the agent to find a room of a particular type in large environments. Rather than navigating to individual rooms, our method proposes a weighted frontier technique that provides exploration guarantees.\nChaplot et al. \cite{chaplot2020neural} propose a method for image goal navigation by generating a topological map of the scene using 360-degree panoramic images. Our approach operates on perspective images and divides a node representation into segments across different directions.\n\nOur work also closely relates to the task and motion planning (TAMP) literature \cite{garrett2020integratedtamp,ffrob}. The closest work in this domain is \cite{yamada2020motion}, which proposes a motion-planner-augmented (MP) RL approach in manipulation settings, realizing large displacements of a manipulator through a MP. \nWhile \cite{yamada2020motion} tackles a simple manipulation domain for 2D block pushing where target objects are fully observable, we tackle a more complex navigation setting and propose to use a combination of MP and object navigation policies, where a motion planner first moves to a region with a high likelihood of object presence and then gives control to the navigation policy that takes the agent closer to the goal object.\n\n\n\n\n\n\n\section{Habitat Transport Task}\nWe propose a novel transport task for embodied agents that simulates object search, interaction, and transport in large indoor environments. A robot assistant might be expected to perform such tasks in a warehouse or a hospital.\nIn each episode, the agent must transport $K$ target objects to a specified object goal location in a large partially observed Habitat~\cite{habitat_iccv2019} 3D indoor environment with many rooms and obstacles.\nThe environment also contains a special ``container'' object (in yellow) that can be used to transport other objects, and another special goal object (in green) whose position is the goal location. In our setting, all objects are cylinders of various colors, placed into the environment.\nUnlike previously studied tasks such as~\cite{habitat2o}, the agent needs to explore the environment to find all objects, and does not have access to their geometric coordinates. \nAt each step, the agent can turn by angle $\alpha=30^{\circ}$ to the left or right, move forward by $x=0.25m$, or execute object ``Pickup'' or ``Drop'' actions. \n\n\nThe agent has access to standard egocentric perspective RGB and depth views. Fig.~\ref{fig:pipeline} (left) shows an example of the agent's view of a scene with a prominent red object. Aside from this, the agent has access to odometry ($P_{\phi xy}$),\nas well as the hand state $O_h$ and goal state $O_g$, which indicate whether a target\/container object is held by the agent or already at the goal, respectively.
\nFollowing \cite{wani2020multion,Weihs_2021_CVPR}, if the agent is within $R = 1.5m$ of any pick-able object and a Pickup action is called, the closest object is removed from the scene and the hand state $O_h$ is updated to include it. For the Drop action, any objects in the agent's hand are dropped near the agent's location. If the goal is within distance $R$, the goal state is updated to include the object.\nThe agent can hold a limited number of items at once, and is therefore constrained to carry at most two objects at a time unless it picks up the container, in which case any number of objects may be carried. Picking up the container requires the agent's hands to be empty. Each episode runs for a maximum of $T = 2500$ timesteps. These interaction rules are sketched in code below.\n\n\nThis transport task naturally entails additional complexity compared to previously proposed navigation settings, and has properties that are not emphasized in previous benchmark tasks for embodied agents~\cite{wani2020multion,habitat2o,habitat_iccv2019,xiazamirhe2018gibsonenv}. It includes multiple task phases (searching for, navigating to, and interacting with objects), reasoning about the environment at various scales (such as coarse room connectivity charts over the explored map for planning long trajectories, and fine-grained local occupancy maps for object interaction), and accounting for carrying-capacity constraints. It also involves dynamically selecting among various sub-task sequences under uncertainty: for example, having found an object far from the goal, should an agent immediately return to drop it off at the goal, or should it look for another object before returning for efficiency?
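\nA minimal sketch of these rules under the stated constants ($R=1.5m$, a capacity of two without the container); the distance inputs are assumed to come from the simulator and all names are illustrative.\n\begin{verbatim}\nR, CAPACITY = 1.5, 2\n\ndef pickup(dists, hand):\n    # dists: object -> distance to the agent; hand: held objects (O_h)\n    near = {o: d for o, d in dists.items() if d <= R}\n    if not near:\n        return False                    # no pick-able object in range\n    obj = min(near, key=near.get)       # the closest object is taken\n    if obj == 'container' and hand:\n        return False                    # container requires empty hands\n    if ('container' not in hand and obj != 'container'\n            and len(hand) >= CAPACITY):\n        return False                    # at most two objects otherwise\n    hand.append(obj)                    # hand state O_h is updated\n    return True\n\ndef drop(goal_dist, hand, goal_state):\n    for obj in list(hand):              # held objects land near the agent\n        hand.remove(obj)\n        if goal_dist <= R:\n            goal_state.add(obj)         # goal state O_g updated within R\n\end{verbatim}\n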
\n\n\section{Further Analysis for Long-HOT and HTP}\n\n\subsection{Ablation Study}\nHere, we study the contribution of various components of HTP to the overall task performance, as well as their generalizability to an increasing number of target objects for the transport task. To create a test episode, we sample additional targets with object colors that were used during training. \nNote that we only test our methods with an increasing number of object goals by training them on a \emph{default} task level with 4 objects. The ablations were conducted using the GCN variant of HTP on 250 test episodes with 15 scenes.\n\n\vspace{-0.3cm}\n\paragraph{\bf HTP w\/o graph:}\nWe perform an ablation of HTP without a graph memory for closeness scores. When the high-level action is one of the \texttt{\small\{Pickup, Drop\}} actions, the Pick-Drop module passes control directly to the object-nav policy, bypassing the low-level controller. Here, instead of reaching the node with the highest closeness score, we directly use the object-nav policy to move closer towards target objects. \n\n\input{figures\/ablation}\n\n\vspace{-0.3cm}\n\paragraph{\bf HTP - random closeness scores:} To study how the values of the closeness scores affect the task performance, we replace the closeness values with random numbers. Intuitively, this should affect the agent the most, as random closeness values will take the agent to a very different node location compared to the one closest to the target object. Further, it also makes the agent execute pick-drop operations at unintended locations, thereby affecting the task performance. \n\n\vspace{-0.3cm}\n\paragraph{\bf HTP - random exploration scores:} In this ablation, we replace the frontier exploration scores with random values, which affects the frontier selection and the exploration strategy.\n\n\vspace{-0.3cm}\n\paragraph{\bf HTP - random object navigation (50\%):} Here we replace the actions from the object navigation policy with random actions 50\% of the time. \n\n\nFig. \ref{fig:ablation} reports the success rate and SPL of the various HTP variants for an increasing number of target objects. As shown, the performance of HTP is considerably higher than all other ablations, demonstrating the contribution of each of these model components. \nHTP w\/o graph shows a significant drop in performance compared to HTP due to the long-horizon exploration and navigation required by object-nav for finding target objects. With an increasing number of target objects, the performance of HTP w\/o graph deteriorates even further in relative terms, indicating the importance of the topological graph. HTP with random closeness scores is affected the most due to their influence on pick-drop operations: incorrect values take the agent to a completely different node than the closest one, and the performance even drops close to 0\% as the number of target objects increases. \n\nHTP with random exploration scores yields low success rates, as incorrect frontier values make the agent switch between frontiers that are farther apart, rendering exploration less efficient: while switching, the agent spends a significant portion of its time traveling within already explored regions. \nHTP is not affected much even when perturbed with random actions for object navigation 50\% of the time, indicating the robustness of HTP. This could be due to HTP simplifying the navigation process through a combination of motion planning and RL, where we use motion planning for navigation within explored regions and RL policies for navigating towards semantic targets at unknown locations. \n\nWhile the performance of all HTP variants decreases as the number of target objects increases, HTP still provides good performance for 8-object transport while only being trained on the transport task involving 4 objects. This also shows HTP's ability to generalize to an increasing number of target objects.\n\n\n\n\input{tables\/standard_wtermination}\n\subsection{Episode termination for wrong pickup}\nTable \ref{tab:standard_wterminate} compares the performance of HTP and the baseline methods with an early termination criterion for a wrong pickup action. An episode is terminated if there are no objects within the agent's vicinity when a pickup action is called. The numbers indicate that Long-HOT's difficulty is much higher with an end criterion similar to MultiOn \cite{wani2020multion}. We relax those hard constraints, as training becomes more difficult with sparse rewards for long-horizon tasks, and rather focus on task completion for our agents. While the performance of the RL methods drops significantly in Table \ref{tab:standard_wterminate}, the proposed HTP is nearly unaffected. This can be attributed to the modular hierarchical design of HTP, where pick-drop actions are executed only when the object closeness score is higher than a threshold.\n\n\n\n\subsection{Long-HOT episode statistics}\nFig. \ref{fig:dataset_statistics} shows a histogram of geodesic distances with the fraction of total episodes in the corresponding histogram bin for the various datasets used in the Long-HOT experiments.
Note that the range of reference path lengths increases with the different task configurations in Large Long-HOT, indicating an increasing task complexity.\n\input{figures\/data_statistics}\n\n\vspace{-0.4cm}\n\subsection{Discussion and limitations}\nThe accompanying video shows some failure cases of HTP. These include situations where an agent is unable to move around an obstacle, where small frontier regions are ignored by the exploration module, and where the closeness scores for an object visible only from one particular node-direction are overwritten by values obtained from a different location localized to the same node-direction whose perspective image does not contain the object.\n\nThe work assumes noiseless odometry and depth for task completion, but earlier works like \cite{chaplot2020object} have shown that semantic mapping and navigation work well in the real world even with noisy pose and depth. Future works can relax these assumptions to build methods that work more robustly with different forms of noisy inputs.\n\n\n\section{Further Details on Implementation, Training and Metrics}\n\nIn this section, we provide additional implementation details of the baseline architectures and the proposed HTP.\n\n\subsection{No Map baseline}\n\nWe adapt an architecture similar to \cite{wani2020multion} for the No Map policy. \n\n\vspace{-0.4cm}\n\paragraph{Inputs and Outputs:} No Map takes an RGBD image of size $256\times256$ along with the hand state $O_h$ (size $5\times1$), goal state $O_g$ (size $4\times1$) and the previous action as inputs to the policy. It then predicts one of \texttt{\{\small FORWARD, TURN LEFT, TURN RIGHT, PICKUP, DROP\}} actions at every timestep.\n\n\vspace{-0.4cm}\n\paragraph{Architecture:} The RGBD image is passed through a sequence of three convolutional layers + ReLU \cite{relu} and a linear layer + ReLU that transforms the input into a feature vector of length $512$. The convolutional layers consist of kernel sizes $\{8,4,3\}$, strides $\{4,2,1\}$ and output channels $\{32,64,32\}$ respectively. The hand state $O_h$ and goal state $O_g$ are passed through dense layers to get the respective feature vectors (dim $32$). The previous action is embedded through an embedding layer of length $32$. Finally, the image features, hand-goal features, and previous action embedding are concatenated and passed through a recurrent unit to output features that are used to predict actions and the approximate value function.\n\n\n\vspace{-0.4cm}\n\paragraph{Rewards:} The following reward $r_t$ is provided at every timestep $t$ to train the No Map agent:\n\begin{equation}\n \begin{split}\n r_t = & \mathbb{I}_{success} \cdot r_{success} + \mathbb{I}_{pick} \cdot r_{pick} + r_{d2o} +r_{d2g} \\ & + \sum_o \mathbb{I}^o_{goal} \cdot r_{goal} + r_{collision} + r_{fpd} + r_{slack}\n \end{split}\n\end{equation}\nwhere $\mathbb{I}_{success}, \mathbb{I}_{pick}, \mathbb{I}^o_{goal}$ are functions that indicate successful completion of the episode, any target object picked for the first time, and target objects that are transported to the goal, respectively. $r_{success}, r_{pick}, r_{goal}$ are the rewards associated with $\mathbb{I}_{success}, \mathbb{I}_{pick}, \mathbb{I}^o_{goal}$. $r_{d2o} = (d^o_{t-1} - d^o_t)$ is the decrease in geodesic distance from the agent's position to the closest object. $r_{d2g} = \max_o (d^o_{t-1} - d^o_t)$ is the maximum decrease in geodesic distance of the target objects to the goal.
$r_{collision}$ is the collision penalty for the agent and $r_{slack}$ is the slack reward for every timestep the agent delays in completing the episode.\n\n\n\vspace{-0.4cm}\n\paragraph{Training:} The policy is trained using the proximal policy optimization (PPO) \cite{ppo} technique, for approximately 40M iterations using 24 simulator instances. The hyper-parameters used are similar to \cite{wani2020multion}.\n\n\subsection{OracleMap Baselines}\nWe first describe the OracleMap (Occ) and OracleMap (Occ+Obj) baselines and then provide details on the OracleMap-Waypoints policy.\n\n\subsubsection{OracleMap (Occ \/ Occ+Obj)}\nThe policy architecture for OracleMap (Occ \/ Occ + Obj) is similar to the No Map agent, with an additional map input that covers an area of $10m\times10m$. First, top-view map embeddings (dim. 16) are generated and then passed through a map encoder. The encoder consists of convolutions with kernel sizes $\{4,3,2\}$, strides $\{3,1,1\}$ and output channels $\{32,64,32\}$. The map encoder produces a feature vector of length $256$, which is concatenated as one of the inputs to the recurrent unit. The output action space and the rewards used to train OracleMap (Occ \/ Occ + Obj) are similar to the No Map baseline. \n\n\n\n\n\n\subsubsection{OracleMap-Waypoints}\nThe inputs to this baseline are the OracleMap (Occ+Obj), the hand state $O_h$ and the goal state $O_g$. It then predicts waypoints to be reached as $(x,y)$ locations on the map. The prediction is discretized into $M=100$ bins within a $5m\times5m$ range centered on the agent. The agent then selects a bin as one of its actions and uses A* \cite{astar} to reach its location. The predicted subgoal is also associated with one of the \texttt{\small \{Pickup, Drop\}} high-level actions, so the action space of the Waypoints policy contains $M\times2$ actions in total. Once the agent reaches the predicted subgoal, the corresponding high-level action \texttt{\small \{Pickup, Drop\}} is executed. The subgoals are updated every $t_k$ steps, irrespective of whether the agent has reached the previously assigned subgoal. The agent's prediction range is kept larger than the maximum distance traversable in $t_k$ steps, so that the agent can provide subgoals that avoid taking pickup or drop actions within $t_k$ timesteps. The policy architecture and rewards used are similar to the OracleMap (Occ+Obj) baselines.\n\n\n\n\n\n\subsection{Hierarchical Transport Policy}\n\n\paragraph{High-level Controller:} \nThe controller takes the hand state $O_h$, the goal state $O_g$ and the closeness scores $\mathcal{F}_O$ of the objects with respect to the nodes as inputs. The high-level action is assigned based on the following conditions. If the agent already holds the maximum number of objects in its hand and the goal object has been discovered (closeness score of the goal $> F_O^{th}$ in one of the nodes), a \texttt{\{Drop\}} action is executed; otherwise the agent executes \texttt{\{Explore\}} to find the goal object. If the agent has capacity to carry more objects, and some closeness scores for objects are greater than $F_O^{th}$ in the nodes, a \texttt{\{Pickup[Object]\}} action is executed for objects that are neither held by the agent nor transported already. The agent executes \texttt{\{Explore\}} if none of the objects satisfies the closeness score criteria. A \texttt{\{Pickup[Container]\}} action is only executed when the container has been discovered and the agent does not hold any object in its hand.
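\nA minimal sketch of these conditions follows; the threshold value is illustrative and not taken from the paper.\n\begin{verbatim}\nF_O_TH = 0.5   # illustrative closeness threshold\n\ndef high_level_action(hand, goal_found, candidates, capacity_left):\n    # hand: held objects; goal_found: goal closeness > F_O_TH somewhere\n    # candidates: objects with closeness > F_O_TH, neither held nor\n    #   transported already; capacity_left: agent can carry more\n    if not capacity_left:\n        return 'Drop' if goal_found else 'Explore'\n    if 'container' in candidates and not hand:\n        return 'Pickup[Container]'\n    if candidates:\n        return 'Pickup[' + candidates[0] + ']'\n    return 'Explore'\n\end{verbatim}\n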
\n\n\n\n\n\n\paragraph{Pre-trained Encoder $\mathcal{F}_A$:}\nLatent features from a pre-trained auto-encoder are used as node features in HTP. It consists of a ResNet-18 \cite{resnet18} style encoder-decoder architecture with latent features of $32$ dimensions. The auto-encoder is trained with a weighted MSE loss that weighs the pixels of target objects with a weight $\lambda=2.0$.\n\n\n\subsection{Reference Trajectory Calculation}\nA reference demonstration in the transport task first picks up the container and then picks up the consecutive closest objects from its previous location, to finally drop them at the goal. The sum of geodesic distances in executing this reference trajectory in an episode from the agent's starting location is used as the reference path length $G_{ref}$.\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\section{Introduction}\n\nHelioseismology -- the study of solar oscillations -- is a powerful probe of the structure and dynamics of the Sun which has provided great improvements in our understanding of stellar evolution and structure \citep[][ and references therein]{TurckChieze1993, JCD2002}. These successes have pushed the community to apply seismic techniques to other stars, opening the doors to asteroseismology, the study of stellar oscillations. These oscillations have already been observed from the ground and from space. The ground-based observations are limited by the day-night cycle, which introduces aliases in the observations, but allow the use of Doppler velocity measurements. They have provided data of sufficient quality to detect solar-like oscillations \citep[see][ and references therein]{BouchyCarrier2003, BeddingKjeldsen2003}. To reduce the aliases, multi-site campaigns have been carried out, but they are too short to reach a good frequency resolution. Space photometry missions and ground-based velocity networks must be used to provide observations of stellar oscillations without these limitations. With the current MOST\footnote{Microvariability and Oscillations of STars \citep{Matthews1998}} and WIRE\footnote{Wide-field Infra Red Explorer \citep{Buzasi2000}} satellites and the future COROT\footnote{Convection Rotation and planetary Transits \citep{Baglin2001}} mission, asteroseismology is blooming. However, we still have to deal -- in the case of solar-like oscillations -- with very small signal-to-noise ratio (hereafter $S\/N$) observations as a consequence of the weakness of the luminosity variations. Moreover, stars cannot be spatially resolved yet. Only global oscillation modes can be observed. In addition, we cannot have access to the rotation rates and the rotation-axis inclination separately. Without knowing these two key stellar properties, the tagging of the modes in terms of their properties ($\ell, m$) and successive $n$ may be extremely difficult. In fact, the main problem to face will not be to fit the peaks (``peak-bagging'') but to provide a good description of the model to be fitted after having put the correct labels on the modes (``peak tagging''). To do this, it has been proposed to use the echelle diagram, where the modes follow ridges depending on the stellar properties. To improve the $S\/N$ ratio, \citet{Bedding2004} proposed to filter this diagram by a vertical smoothing.
However, the smoothing works well only when the ridges are quasi-vertical, which requires a very good \textit{a priori} knowledge of the large spacing, and it is restricted to the asymptotic part of the spectrum. We propose here to follow a similar approach but using new mathematical denoising techniques better suited to the study of curved ridges.\n\nAt the end of the last decade, the application of mathematical transforms based on wavelets to analyze astronomical images was widely developed. The first wavelet algorithms were well adapted to treat images with isotropic elements. However, this description presented a limitation in the context of astrophysics, where objects such as filaments or spiral structures exhibit a highly anisotropic character (in shape and scale). New transforms, the ridgelet \citep{Candes1998} and curvelet transforms \citep{CandesDonoho1999, Starck2002}, were then developed to deal efficiently with such objects. Astrophysical applications (image denoising) of these techniques have been presented in \citet{Starck2003, Starck2004} to analyze images of gravitational arcs, Saturn's rings or the CMB (Cosmic Microwave Background) map.\n\nIn this paper we propose to use the curvelet transform to analyze asteroseismic observations (more precisely the stellar echelle diagrams), in order to improve the ``peak tagging'' of the oscillation modes and even the resultant ``peak bagging''. To illustrate the application of this denoising technique in the asteroseismic case, we have performed Monte Carlo simulations of ideal asteroseismic data contaminated by different levels of stochastic noise. We start in Sect.~2 with a quick reminder of the properties of stellar oscillation modes in the solar-like case and the construction of the echelle diagram. In Sect.~3 we introduce multiscale transforms, in particular the ridgelet and the curvelet transforms. In Sect.~4, the simulated data of a star with an oscillation spectrum similar to the Sun but with different rotation-axis inclinations and rotation rates are presented. In Sect.~5 we discuss the results obtained in the simulations.\n\n\section{Properties of solar-like oscillations}\n\n\begin{figure}\n\t\includegraphics[scale=0.35,angle=0]{Lambert2006_fig1a.eps}\n\t\includegraphics[scale=0.35,angle=90]{Lambert2006_fig1b.eps}\n\t\caption{Portion of the theoretical spectrum (top) and echelle diagram (bottom) for a sun spinning ten times faster than the Sun and seen under an angle of 50$\degr$. This is the ideal power spectrum used in the simulations described in Sect.~5.}\n\t\label{theorique}\n\end{figure}\n\nOnly low-degree stellar oscillation modes can be detected and observed with the present generation of instruments. The asymptotic theory of oscillation modes ($n\gg \ell$) is then adequate and can be used to study them. First order \citep{Tassoul1980} and second order developments \citep{Vorontsov1991, Lopes1994, Roxburgh2000a, Roxburgh2000b} have been made to describe solar and stellar oscillations.
In the case of solar-like stars, where p-modes predominate, the frequencies can be developed as:\n\begin{equation}\label{secondordre}\n\t\nu_{n,\ell} \approx \Delta\nu_0 \big( n+\frac{\ell}{2}+\frac{1}{4} +\alphaup(\nu) \big) + \frac{\Delta\nu_0}{4\pi^2\nu_{n,\ell}}\big((\ell + 1\/2)^2A + \psi\big). \n\end{equation}\nIn this expression, $\ell$ and $n$ are respectively the degree and the radial order of the modes, and \n\begin{eqnarray*}\n\tau_c\t\t\t&=&\t\int_{r_{in}}^{r_{out}}\frac{dr}{c_s}, \\\n\Delta\nu_0\t&=&\t\frac{1}{2\tau_c}, \\\nA\t\t\t\t&=&\t\frac{1}{4\pi^2\nu_{n,\ell}}\big(\frac{c_s(R_\star)}{R_\star} - \int_{r_{in}}^{r_{out}}\frac{dc_s}{dr}\frac{dr}{r}\big), \n\end{eqnarray*}\nwhere $c_s$ is the internal stellar sound speed, $\alphaup$ is a phase-shift term and $\psi$ is a function which takes into account the gravitational potential in the central region \citep{Lopes1994}. From the asymptotic approach, we can extract general properties of the modes and better understand the physics hidden in the behavior of the frequencies. The large frequency spacing, defined as $\Delta\nu_{n,\ell}=\nu_{n+1,\ell} - \nu_{n,\ell}$, tends asymptotically to $\Delta\nu_0$, related to the mass and radius of the star; the small frequency spacing, $\delta_{\ell,\ell+2}\nu=\nu_{n,\ell}-\nu_{n-1,\ell+2}$, can be approximated to first order by $(4\ell+6)\Delta\nu_0\/(4\pi^2\nu_{n,\ell})\int_0^{R_\star}\frac{dc_s}{dr}\frac{dr}{r}$. This variable is related to the derivative of the sound speed and enhances the effects coming from the central regions, providing constraints on the age of the star. Finally, the second difference is defined as $\delta_2\nu=\nu_{n+1,\ell}-2\nu_{n,\ell}+\nu_{n-1,\ell}$. Its variations provide information about the extent of the convective zone \citep{Monteiro2000, Ballot2004} or the helium abundance in the stellar envelope \citep{Basu2004}.\n\nUnder the effects of rotation, the azimuthal order $m$ ($-\ell \leqslant m \leqslant \ell$) is needed to characterize the oscillation spectrum. If the angular velocity $\Omega$ is uniform \citep{Ledoux1951}, the mode frequencies are asymptotically approximated by:\n\begin{equation}\label{unifangvel}\n\t\nu_{n,\ell,m}\approx\nu_{n,\ell}+m\Omega\/2\pi = \nu_{n,\ell}+m\delta\nu,\n\end{equation}\nwhere $\delta\nu$ is the rotational splitting. Equation~\ref{unifangvel} shows that the ($2\ell+1$)-fold degeneracy of the modes in the azimuthal order is lifted: a single peak in the spectrum becomes a multiplet. Its corresponding structure depends on the rotation rate, the rotation-axis inclination of the star and its stochastic excitation. The solar-like mode lifetimes (a few days) are expected to be much shorter than the length of the future space observations (a few months). In consequence, the relative amplitude ratios inside a multiplet will only depend, on average, on the inclination angle and the spacing between the different m-components \citep{GizonSolanki2003}. Thus, if the different m-components of a multiplet can be identified and tagged with the correct $(\ell,m)$, they can provide a good estimation of both the rotation-axis inclination $i$ and the rotational splitting $\delta\nu$, allowing a better mode parameter extraction through the fitting of the spectra. The effect of the stochastic excitation on an isolated mode can be minimized by averaging these parameters over several modes \citep[see for example the n-collapsogramme;][]{BallotYale2004}.
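\nA short illustration of these quantities, assuming frequency tables with aligned radial orders (values in $\mu$Hz); this is a sketch, not an analysis code.\n\begin{verbatim}\nimport numpy as np\n\ndef large_spacing(nu_l):\n    # nu_l: 1-D numpy array of frequencies for a fixed degree l\n    # Delta nu_{n,l} = nu_{n+1,l} - nu_{n,l}\n    return np.diff(nu_l)\n\ndef small_spacing(nu_l, nu_lp2):\n    # delta_{l,l+2} nu = nu_{n,l} - nu_{n-1,l+2}\n    # (assumes nu_l[k] and nu_lp2[k] share the same radial order)\n    return nu_l[1:] - nu_lp2[:-1]\n\ndef split_frequencies(nu_l, ell, delta_nu):\n    # Eq. (2): nu_{n,l,m} = nu_{n,l} + m * delta_nu, m = -l..l\n    m = np.arange(-ell, ell + 1)\n    return nu_l[:, None] + m[None, :] * delta_nu\n\end{verbatim}\n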
\nEquation~\ref{secondordre} shows that the even ($\ell=0,2$) and odd ($\ell=1,3$) modes have respectively almost the same frequency, separated only by the small spacing $\delta_{\ell,\ell+2}\nu$. In addition, they are separated regularly in frequency by the large spacing $\Delta \nu_{n,\ell}$. This property allows us to build the so-called echelle diagram \citep{Grec1983}, which is commonly used to identify modes for solar-like oscillations. It is a 2D representation of the power spectrum, where the spectrum is folded onto itself in units of the large spacing. In such a representation the modes appear as almost locally vertical ridges (see Fig.~\ref{theorique}). The echelle diagram is a powerful tool for ``peak tagging'' since assigning the correct $(\ell,m)$ values to the peaks is easier when the multiplet structure is well identified in this diagram. The successive $n$ values are obtained from each individual horizontal line.
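\nA minimal sketch of this folding, assuming a power spectrum sampled on a uniform frequency grid; the numerical values are those used later in this paper and are given only as an example.\n\begin{verbatim}\nimport numpy as np\n\ndef echelle_diagram(power, dnu_bins):\n    # fold the 1-D spectrum into rows of length dnu_bins, i.e. the\n    # large spacing expressed in frequency bins; each row then\n    # corresponds to a successive radial order n\n    n_rows = len(power) \/\/ dnu_bins\n    return power[:n_rows * dnu_bins].reshape(n_rows, dnu_bins)\n\n# example: 150-day resolution ~0.077 muHz, folding at 135.18 muHz\n# dnu_bins = int(round(135.18 \/ 0.077))\n\end{verbatim}\n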
\n\n\section{Multiscale Transforms}\n\subsection{The Wavelet Transform}\n\nThe wavelet transform provides a framework for decomposing images into their elementary constituents across scales by reducing the number of significant coefficients necessary to represent an image. The continuous wavelet transform of a 2D signal is defined as:\n\begin{eqnarray}\n W(a,b_i, b_j) = \frac{1}{\sqrt{a}}\int\!\!\!\int f(x,y)\psi^*\left(\frac{x-b_i}{a},\frac{y-b_j}{a}\right)dxdy,\n\end{eqnarray}\nwhere $W(a,b_i,b_j)$ are the wavelet coefficients of the function $f(x,y)$, $\psi^*$ is the conjugate of the analyzing wavelet, $a>0$ is the scale parameter and $(b_i,b_j)$ is the position parameter. The continuous wavelet transform is the sum over all positions of the signal $f(x,y)$ multiplied by the scaled and shifted versions of the wavelet $\psi((x-b_i) \/ a,(y-b_j) \/ a)$ (cf. Fig.~\ref{examples}, top panels). This process produces wavelet coefficients that are a function of scale and position.\n\nHowever, the classical wavelet transform only addresses a portion of the whole range of interesting phenomena: isotropic features at all scales and locations. One of the drawbacks of the two-dimensional wavelet transform is that it does not achieve an efficient analysis of images which present high anisotropy. For instance, the wavelet transform does not efficiently approximate 2D edges, since a large number of large wavelet coefficients, scale after scale, are required, making the analysis difficult. In order to solve this problem, two new mathematical transforms, namely the ridgelet transform and the curvelet transform, were introduced.\n\n\begin{figure}\n \centering\n \includegraphics[scale=0.245]{Lambert2006_fig2a.ps}\n \includegraphics[scale=0.245]{Lambert2006_fig2b.ps}\n \includegraphics[scale=0.245]{Lambert2006_fig2c.ps}\n \includegraphics[scale=0.245]{Lambert2006_fig2d.ps}\n \caption{Examples of 2D wavelets (top panels) and ridgelets (bottom panels). The top right wavelet has a greater scale parameter than the one on the left. The bottom right ridgelet has a different orientation and width than the left one.}\n \label{examples}\n\end{figure}\n\n\subsection{The Ridgelet transform}\n\nThe ridgelet transform was developed to process images including ridge elements \citep{Candes1998}. It provides a representation of perfectly straight edges. Given a function $f(x_1,x_2)$, the latter is represented as a superposition of elements of the form $a^{-1\/2}\psi((x_1\cos\theta+x_2\sin\theta-b)\/a)$, where $\psi$ is a wavelet, $a>0$ a scale parameter, $b$ a location parameter and $\theta$ an orientation parameter. The ridgelet is constant along the lines $x_1\cos\theta+x_2\sin\theta=\mathrm{const}$, and transverse to these ridges it is a wavelet. Thus, contrary to a unique wavelet transform, the ridgelet has two supplementary characteristics: a length, equal to that of the image, and an orientation, allowing the analysis of an image in every direction and thus exhibiting the edge structure. Fig.~\ref{examples} (bottom panels) shows two examples of ridgelets. The problem is that, in nature, edges are typically curved rather than straight, so ridgelets alone cannot yield an efficient representation.\n\n\subsection{The Curvelet transform}\n\subsubsection{Description}\n\n\begin{figure}\n \centering\n \includegraphics[scale=0.32]{Lambert2006_fig3.eps} \n \caption{Sketch illustrating the curvelet transform applied to an image. The image is decomposed into subbands followed by a spatial partitioning of each subband. The ridgelet transform is applied to each block. The finest details correspond to the highest frequencies.}\n \label{curveletgraphe}\n\end{figure}\n\nRidgelets can be adapted to represent objects with curved edges using an appropriate multiscale localization: at a sufficiently fine scale, a curved edge can be considered almost straight. \citet{CandesDonoho1999} developed the curvelet transform using ridgelets in this localized manner. Fig.~\ref{curveletgraphe} shows the different steps of the curvelet analysis of an image:\n\n\begin{enumerate}\n\t\item Image decomposition into subbands: the image is decomposed into a set of wavelet bands through a 2D isotropic wavelet transform. Each band corresponds to a different scale.\n\t\item Smooth partitioning: each subband is partitioned into squares -- blocks -- whose size is appropriate to each scale. The finer the scale, the smaller the blocks.\n\t\item Ridgelet analysis: the ridgelet transform is applied to each block.\n\end{enumerate}\n\nThe implementation of the curvelet transform offers an exact reconstruction and a low computational complexity. Like ridgelets, curvelets occur at all scales, locations and orientations. Moreover, contrary to ridgelets, which have a given length (the image size) and a variable width, curvelets also have a variable length (the block size) and consequently a variable anisotropy. The finer the scale, the more sensitive the analysis is to curvature. As a consequence, curved singularities can be well approximated with very few coefficients.\n\n\subsubsection{Denoising images: filtering curvelet coefficients}\n\nTo remove noise, a simple thresholding of the curvelet coefficients has been applied to select only the significant coefficients.
One possible thresholding of a noisy image consists in setting to $0$ all non-significant curvelet coefficients $\tilde c_{i,j,l}$, with $i$, $j$ and $l$ respectively the indexes of the row, the column and the scale: this is the so-called hard thresholding:\n\begin{eqnarray}\n\t\tilde c_{i,j,l} = \left\{ \begin{array}{ll} c_{i,j,l} & \mbox{if } c_{i,j,l} \mbox{ is significant,} \\ \n\t0 & \mbox{if } c_{i,j,l} \mbox{ is not significant.} \end{array} \right.\n\end{eqnarray}\nCommonly, $c_{i,j,l}$ is significant if the probability that the curvelet coefficient is due to noise is small, i.e., if the curvelet coefficient is greater than a given threshold. A basic problem remains: the choice of the threshold. Usually, this threshold is taken equal to $k\sigma_j$, where $\sigma_j$ is the noise standard deviation at the scale $j$ and $k$ is a constant taken equal to 5 in our filterings.\n\nSimple thresholding of the curvelet coefficients is very competitive \citep{Starck2002} with ``state of the art'' techniques based on wavelets, including thresholding of decimated or undecimated wavelet transforms.
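\nA minimal sketch of this $k\sigma_j$ hard thresholding, written for a generic list of per-scale coefficient arrays (an actual curvelet implementation would supply them); the noise estimate via the median absolute deviation is an assumption, not specified in the text.\n\begin{verbatim}\nimport numpy as np\n\ndef hard_threshold(coeffs, k=5.0):\n    # coeffs: list of 2-D coefficient arrays, one per scale j;\n    # coefficients below k * sigma_j are set to zero\n    out = []\n    for c in coeffs:\n        sigma = np.median(np.abs(c)) \/ 0.6745  # robust sigma_j estimate\n        out.append(np.where(np.abs(c) > k * sigma, c, 0.0))\n    return out\n\end{verbatim}\n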
\n\n\section{Simulation of data}\n\n\begin{figure*}\n \centering\n \includegraphics[scale=0.65]{Lambert2006_fig4_lowres.eps}\n \caption{Effect of the curvelet denoising on the mode visibility for $S\/N=5$. Each picture shows 120 realizations out of the 500 done in our Monte Carlo simulation. Each horizontal line corresponds to a single realization. The top panel is the raw spectra and the bottom is the curvelet filtered one. \label{montecarlo}}\n\end{figure*}\n\nTo characterize the curvelet denoising technique applied to asteroseismic data, we have simulated typical solar-like observations varying different parameters: $S\/N$ ratios, observational lengths, rotation-axis inclinations, rotation rates, etc. With this approach we know the input parameters in advance and we can evaluate the quality of the results given by the curvelet analysis and its limits.\n\nIn the simulations shown in this paper, we use the oscillation spectrum of a star similar to the Sun but seen under different conditions. Different rotation-axis inclinations ($i=50\degr$ and $90\degr$) and rotation rates ($\Omega= \Omega_{\sun}$, $5\Omega_{\sun}$, and $10\Omega_{\sun}$) have been considered. An ideal power spectrum was constructed first. Only the modes $\ell\le3$, $n=12$--$25$ were simulated. The mode parameters -- frequencies ($\nu$), amplitudes ($A$) and widths ($\Gamma$) -- were obtained from the analysis of GOLF (Global Oscillations at Low Frequency) data \citep{Garcia2004}. The amplitudes were corrected to take into account the difference between intensity and velocity observations. Modes were simulated with symmetrical Lorentzian profiles, as the asymmetry is expected to be at the level of the noise. Following the method described in \citet{FierryFraillon1998}, a multiplicative noise with a $\chi^2$ with 2 d.o.f. statistics has been introduced to reproduce the stochastic excitation of such modes \citep[see also][]{Anderson1990}. The $S\/N$ ratio of the ``resultant'' raw power spectrum was defined as the maximum of the bell-shaped p-mode power (i.e. the highest simulated p mode) divided by the noise dispersion. The simulated background is flat, assuming that it has been previously fitted and removed, as is usually done for the Sun \citep{Harvey1985}.\n\nSeveral Monte Carlo simulations have been performed for each ideal spectrum. Realistic $S\/N$ ratios, with values ranging from 5 to 15, have been used to cover a wide range of situations, compatible with what is expected \citep[see][]{Baglin2001}. In each realization of the Monte Carlo simulation the same level of noise has been randomly added to the corresponding ideal spectrum. Therefore all the realizations in a given Monte Carlo simulation have the same $S\/N$ ratio. The simulated spectra have been computed for two resolutions, $\approx0.38$ and $\approx0.077~\mu$Hz, corresponding respectively to 30-day and 150-day observations. The former are representative of MOST observations and the short CoRoT runs, while the latter have the same length as the long CoRoT runs.\n\nSimulations of other stars, like some potential main CoRoT targets, with different masses, ages and, in consequence, internal structures have been made. The results have already been presented and discussed during the CoRoT workshops \#8 and \#9, yielding the same qualitative results. For the sake of clarity, they are not shown here.
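\nA minimal sketch of this recipe: Lorentzian profiles on a flat background, multiplied by a multiplicative $\chi^2$ noise with 2 d.o.f. (an exponential distribution of unit mean); the mode parameters are placeholders for the GOLF-derived values.\n\begin{verbatim}\nimport numpy as np\n\ndef simulate_spectrum(freqs, nu0, amp, width, background=1.0, seed=0):\n    # ideal spectrum: sum of symmetrical Lorentzians + flat background\n    ideal = np.full_like(freqs, background)\n    for n, a, w in zip(nu0, amp, width):\n        ideal += a \/ (1.0 + ((freqs - n) \/ (w \/ 2.0)) ** 2)\n    rng = np.random.default_rng(seed)\n    # chi^2 with 2 d.o.f. multiplicative noise\n    return ideal * rng.exponential(1.0, size=freqs.size)\n\end{verbatim}\n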
\n\n\section{Discussion}\n\begin{figure*}\n \t\includegraphics[scale=0.35,angle=90.]{Lambert2006_fig5a.eps}\n \t\includegraphics[scale=0.35,angle=90.]{Lambert2006_fig5b.eps}\n \t\includegraphics[scale=0.35,angle=90.]{Lambert2006_fig5c.eps}\n \t\includegraphics[scale=0.35,angle=90.]{Lambert2006_fig5d.eps}\n \t\includegraphics[scale=0.35,angle=90.]{Lambert2006_fig5e.eps}\n\t\includegraphics[scale=0.35,angle=90.]{Lambert2006_fig5f.eps}\n\t\caption{Raw (left) and filtered (right) power spectra (top and middle panels) and echelle diagrams (bottom panels) for a $S\/N=5$ realization. The short dashed lines in the power spectra represent the positions of the theoretical frequencies. From left to right, the first three equidistant lines indicate the components $m=-1,0,1$ of the $\ell=1$ modes, the next two indicate the strongest components of $\ell=2$ ($m=-1$ and $1$), and the last indicates $\ell=0$. In this case only two components of the $\ell=1$ and the $\ell=0$ mode are slightly visible in the raw diagram. In the curvelet filtered one, the three $\ell=1$ components appear as well as the $\ell=0$ and the components $m=\pm 1$ of the $\ell=2$ modes.\label{diagechellesnr5}}\n\end{figure*}\n\nOnce the spectra have been computed, the echelle diagrams can be built with a fixed folding frequency. This frequency corresponds to the mean large frequency spacing $\Delta\nu_0$, identified either by computing the FFT, the autocorrelation of the spectra or any other technique \citep[see for example][]{Regulo2002}. The denoising based on the curvelet transform is then applied to these echelle diagrams. It is important to note that artifacts may appear in the filtered spectra at frequencies $\nu^*$=$\nu_0$+$k\Delta\nu_0$, with $k$ an integer, when random small structures appear in the echelle diagrams. However, their appearance and position strongly depend on the folding frequency and are very sensitive to its value. Therefore they can be easily identified. The artifacts can be reduced (in contrast to the regions containing signal) by building echelle diagrams with slightly different folding frequencies and averaging the resultant filtered spectra.\n\nIn order to present the results of the data analysis using the curvelet denoising method, we have selected the case of a sun-like star seen with an inclination angle $i=50\degr$ and with a rotation $\Omega=10 \Omega_{\sun}$. A portion of the ideal spectrum constructed for this star can be seen in Fig.~\ref{theorique} (top panel). Monte Carlo simulations were then performed, giving rise to different sets (each one with 500 realizations) of raw spectra with different $S\/N$ ratios. The echelle diagrams were constructed using a folding frequency of 135.18$~\mu$Hz, obtained by computing the FFT of the raw spectrum.\n\n\subsection{Peak tagging}\n\nIn the cases with a high $S\/N$ (typically 15), the mode structure is clearly visible in each raw spectrum and also in the echelle diagram. The different ridges can be easily identified and tagged. Although the filtering gives enhanced denoised diagrams and unfolded spectra, it does not contribute significantly to the mode identification. \n\nIn the lower $S\/N$ cases, however, the situation is different. Figure~\ref{montecarlo} shows some of the results of the Monte Carlo simulation for $S\/N$=5. \nThe upper panel corresponds to 120 realizations among the 500 computed for the raw spectra in the frequency range 2450--2920$~\mu$Hz. Each horizontal line corresponds to a single realization. Some patterns can hardly be seen. The lower panel represents the same spectra after applying the curvelet filtering. A series of vertical ridges clearly appears. From left to right on the panels, they can be identified as the ($\ell=2$; $m=\pm 1$), the $\ell=0$ (blended with the $\ell=2$; $m=+2$) and the ($\ell=1$; $m=-1,0,+1$) components. The improvement of the contrast is substantial in all the realizations and allows us to distinguish the different components of a mode, making the identification and the tagging easier. \n\nThe identification is harder when looking at each individual spectrum and requires the use of the echelle diagram. Fig.~\ref{diagechellesnr5} shows an example of raw (left) and filtered (right) 150-day observation power spectra (top and middle panels) and the corresponding echelle diagrams (bottom panels) for a $S\/N=5$ realization. The input frequencies are indicated by the short dashed lines above the spectra. The mode peaks can hardly be distinguished in the raw spectrum and can easily be confused with noise. In the range 2780--2920$~\mu$Hz, only a strong peak at 2900$~\mu$Hz can be considered not to be noise. In the region 3060--3180$~\mu$Hz the peaks are visible and we can attempt to identify the $\ell$=1 and $\ell$=0 modes, but it is still unclear. On the contrary, in the corresponding parts of the filtered spectrum, the structures of the $\ell$=1 mode with three components, the $\ell$=0 mode and even the strongest components of the $\ell$=2 mode are visible. \nThe raw echelle diagram gives no extra information because of the very weak ridges and the low contrast with the background. The weakest components can hardly be detected and no tagging can be done. The curvelet filtering provides a contrast enhancement of the ridges in the echelle diagram. Thus three almost equidistant strong ridges appear on the left of the diagram and one strong ridge with two weaker ones on the right.
The corresponding patterns can be seen in the filtered spectrum, matching the theoretical frequencies well. Since the $\ell=3$ modes are not visible, and according to the amplitude of the strongest peak on the left, we can suggest that the three strongest peaks correspond to an $\ell=1$ multiplet and the other ones to the $\ell=2$ and $\ell=0$ modes. \n\nConsequently, once the tagging is done, it is also easier to obtain a first estimation of both the mean rotational splitting and the rotation-axis inclination, since the visibility of the multiplet is increased. From the spacing between the components of the $\ell=1$ mode, a first estimation of the mean rotational splitting of the star can be obtained, as well as an estimation of the inclination angle, according to their relative amplitude ratios. We have selected the extraction of one parameter, the mean rotational splitting of the $\ell$=1 mode at low frequency (2540--2550$~\mu$Hz), to quantify the improvement brought by the curvelet filtering. This region is particularly interesting because the line width is still small and the modes, when they are visible, can be easily identified. Thus, in a sample of 100 realizations of the Monte Carlo simulation, we have obtained in 90 of them a better estimation of this parameter in the filtered spectra. In fact, in the raw spectra a good result was only exceptionally obtained. With the filtered spectra a mean rotational splitting of $\langle \delta\nu \rangle=4.05\pm0.30~\mu$Hz was found, which is very close to the actual splitting included in the ideal spectra, $\langle \delta\nu \rangle=4.0~\mu$Hz. In addition, specific methods can be applied to improve the extraction of these parameters by using different spectrum-fitting strategies, such as the ones developed by \citet{GizonSolanki2003} or \citet{Ballot2006}. \nIn the case of the 30-day observations, the curvelet filtered echelle diagram is still very noisy and does not help in recognizing the ridges. However, the corresponding denoised power spectrum is much better despite the lower resolution (5 times lower than in the long runs), even for small $S\/N$ ratios ($\sim5$). The modes $\ell=0,2$ and $\ell=1$ can be distinguished at the maximum power, while it is not obvious to do so in the raw spectra. Therefore, we consider that a 30-day run is the minimum length needed to obtain reliable results with the curvelet denoising technique.\n\n\citet{Garcia2005} analyzed the first available public MOST Procyon A data (a 32-day observation) using the curvelet technique. A previous analysis by \citet{Matthews2004} did not reveal the presence of any p-mode structure in this star. Therefore, due to its tiny $S\/N$ ratio, the results of the curvelet denoising should be taken with care. Nevertheless, an excess of power seems to appear in the region where it is expected and, taking the 15 most prominent peaks in this region, many are in agreement, inside the error bars, with modes previously tagged using ground-based velocity observations.\n\n\subsection{Extraction of p-mode parameters}\n\nOnce the mode identification and tagging are done, the extraction of the mode parameters can be performed. To illustrate how this extraction can be improved by using the denoised spectrum, we have extracted the central frequency of the modes in both the raw and the filtered spectra. To determine this parameter, the modes have been fitted with Lorentzian profiles using a maximum-likelihood estimator in the classical way: adjacent pairs of even ($\ell=0$ and $\ell=2$) modes are fitted together, while $\ell=1$ is fitted alone, due to the small amplitudes of the $\ell=3$ modes. For each multiplet, the fitted parameters are the central frequency $\tilde\nu_{n,\ell}$, the amplitude $\tilde A_{n,\ell}$, the linewidth $\tilde\Gamma_{n,\ell}$ and the background $b$. The amplitude ratios inside the multiplets and the rotational splittings have been fixed thanks to the preliminary estimation done in the previous section (cf. 5.1). The fitting procedure provides for each adjusted parameter $\tilde{X}$ an associated error $\sigma(\tilde{X})$ computed by Hessian-matrix inversion.
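\nA minimal sketch of such a fit for a single Lorentzian profile, using the standard negative log-likelihood for spectra with a $\chi^2$ 2 d.o.f. statistics \citep{Anderson1990}; a full implementation would fit the multiplets with the fixed splittings and amplitude ratios described above.\n\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import minimize\n\ndef neg_log_likelihood(params, freqs, power):\n    # -ln L = sum( ln S(nu) + P(nu)\/S(nu) ) for chi^2 2 d.o.f. spectra\n    nu0, amp, width, bkg = params\n    model = amp \/ (1.0 + ((freqs - nu0) \/ (width \/ 2.0)) ** 2) + bkg\n    return np.sum(np.log(model) + power \/ model)\n\n# guess = [nu0_init, amp_init, width_init, bkg_init]\n# fit = minimize(neg_log_likelihood, guess, args=(freqs, power),\n#                method='Nelder-Mead')\n\end{verbatim}\n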
In the case of the 30-day observations, the curvelet-filtered echelle diagram is
still very noisy and does not help in recognizing the ridges. However, the
corresponding denoised power spectrum is much improved despite the lower
resolution (5 times lower than in the long runs), even for small $S/N$ ratios
($\sim$5). The $\ell=0,2$ and $\ell=1$ modes can be distinguished around the
power maximum, while this is not obvious in the raw spectra. Therefore, we
consider a 30-day run to be the minimum length needed to obtain reliable results
with the curvelet denoising technique.

\citet{Garcia2005} analyzed the first available public MOST data on Procyon A
(a 32-day observation) using the curvelet technique. A previous analysis by
\citet{Matthews2004} did not reveal the presence of any p-mode structure in this
star. Therefore, owing to the tiny $S/N$ ratio, the results of the curvelet
denoising should be taken with care. Nevertheless, an excess of power seems to
appear in the region where it is expected and, taking the 15 most prominent
peaks in this region, many agree, within the error bars, with modes previously
tagged using ground-based velocity observations.

\subsection{Extraction of p-mode parameters}

Once the mode identification and tagging are done, the extraction of the mode
parameters can be performed. To illustrate how this extraction can be improved
by using the denoised spectrum, we have extracted the central frequencies of the
modes in both the raw and the filtered spectra. To determine this parameter, the
modes have been fitted by Lorentzian profiles using a maximum-likelihood
estimator in the classical way: adjacent pairs of even ($\ell=0$ and $\ell=2$)
modes are fitted together, while $\ell=1$ modes are fitted alone, the $\ell=3$
modes being too weak to be included. For each multiplet, the fitted parameters
are the central frequency $\tilde\nu_{n,\ell}$, the amplitude $\tilde
A_{n,\ell}$, the linewidth $\tilde\Gamma_{n,\ell}$ and the background $b$. The
amplitude ratios inside the multiplets and the rotational splittings have been
fixed to the preliminary estimates obtained in the previous section (cf.
Sect.~5.1). The fitting procedure provides for each adjusted parameter
$\tilde{X}$ an associated error $\sigma(\tilde{X})$ computed by Hessian-matrix
inversion.

The raw spectra follow $\chi^2$ statistics with 2 degrees of freedom (d.o.f.),
whereas the filtered spectra follow $\chi^2$ statistics with a higher number of
d.o.f. (approaching a Gaussian distribution, depending on the number of filtered
coefficients). According to \citet{Appourchaux2003}, it is possible to fit
spectra following $\chi^2$ statistics with more than 2 d.o.f. using a classical
procedure developed for $\chi^2$ statistics with 2 d.o.f.: the parameters are
correctly fitted, but the computed errors have to be adapted \textit{a
posteriori}. However, in our case the filtering correlates the points of the
filtered spectra (we have estimated that each point is correlated with
$\sim$10 neighbouring points). This correlation should, in principle, be taken
into account, but we have neglected its effect on the fitting procedure in the
present study; this assumption is validated by the Monte Carlo simulations.
Such global filtering also induces correlations between the different lines of
the echelle diagram. Thus, the errors on the parameters of different modes
(typically $(n,\ell)$ and $(n+1,\ell)$) can be correlated. These correlations
will have to be taken into account, especially when comparing the frequencies
extracted in this way with stellar models.
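To make the fitting procedure concrete, the following sketch performs such a
maximum-likelihood fit for a single $\ell=1$ multiplet under $\chi^2$,
2-d.o.f. statistics; the data arrays \texttt{freq} and \texttt{power}, the
starting guesses and the visibility ratios (those of an $i=50\degr$ star) are
illustrative assumptions:

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def multiplet(theta, freq, split, ratios):
    """One l=1 multiplet: three Lorentzians plus a flat background.
    The splitting and relative heights (ratios) are held fixed."""
    nu0, height, width, bkg = theta
    prof = np.full_like(freq, bkg)
    for m, eps in zip((-1.0, 0.0, 1.0), ratios):
        prof += eps * height / (1.0 +
                (2.0 * (freq - nu0 - m * split) / width) ** 2)
    return prof

def neg_log_like(theta, freq, power, split, ratios):
    """chi^2, 2-d.o.f. statistics: p(S|M) = exp(-S/M)/M, hence
    -ln L = sum(ln M + S/M), to be minimized."""
    if min(theta[1:]) <= 0.0:
        return np.inf                    # keep parameters physical
    m = multiplet(theta, freq, split, ratios)
    return np.sum(np.log(m) + power / m)

# illustrative starting guesses from the peak-tagging stage:
# nu0 (muHz), height, linewidth (muHz), background
theta0 = (2545.0, 10.0, 1.0, 1.0)
ratios = (0.29, 0.41, 0.29)              # l=1 visibilities, i = 50 deg
# res = minimize(neg_log_like, theta0, method="Nelder-Mead",
#                args=(freq, power, 4.05, ratios))
\end{verbatim}

The errors quoted above follow from inverting the Hessian of this negative
log-likelihood at its minimum.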
for\n$\\ell=0$, in the range $n=18$--$22$), it extends at higher and lower frequencies\n(e.g. for $\\ell=0$, the range is extended to $n=16$--$23$) in the filtered one. \n\n\\begin{figure*}\n \\centering\n \\includegraphics[scale=0.5,angle=90]{Lambert2006_fig6.eps}\n \\caption{Differences between the mean fitted frequencies $\\langle \\tilde\n\\nu_{n,\\ell} \\rangle$ and the input frequencies $\\nu_{in}$, for $\\ell=0,1,2$,\nfor the raw (dashed line with triangles) and filtered (full line with diamonds)\nspectra ($S\/N=5$, 150-day observation). The error bars correspond to the\ndispersion $\\sigma^*(\\tilde\\nu_{n,\\ell})$ of the frequency distribution. For\nclarity the values for the raw case are shifted by $20~\\mu$Hz towards the\nright.\\label{ecart}}\n\\end{figure*}\n\n\\section{Conclusions}\n\nThe application of a noise reduction technique based on the curvelet transform\nto echelle diagrams improves the identification -- ``peak tagging'' -- of\nstellar acoustic modes. In observations with a $S\/N$ ratio as small as 5 we are\nstill able to recover the mode pattern and extract reliable asteroseismic\ninformation in both small and long runs (30-day and 150-day observations\nrespectively). Below this S\/N and with shorter observations, the method\nefficiency is reduced drastically. The rotational splittings and the\nrotation-axis inclination can be better estimated using the filtered spectrum.\nIn particular, Monte Carlo simulations showed that a better extraction of the\nmean rotational splitting from modes at low frequency can be done in 90 out of\n100 realizations using the filtered spectra. The uncertainty on the extracted\nrotational splitting of a typical sun-like star seen with an inclination angle\n$i=50\\degr$ and with a rotation $\\Omega=10 \\Omega_{\\sun}$ is very small,\n$\\sim$0.30 $\\mu$Hz. These parameters can then be used to have a set of guesses\nor \\textit{a priori} values to perform individual fits of the spectra. We have\nalso shown that the range of the frequency extraction can be extended at higher\nand lower frequencies using the filtered spectra. Finally, simulations of the\nshort run observations have demonstrated that this method can also be applied to\nlower resolution spectra with good results.\n\n\\begin{acknowledgements}\nP. Lambert thanks Dr. D. Neuman for useful discussions. \n\\end{acknowledgements}\n\\bibliographystyle{aa}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}