diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzfkgz" "b/data_all_eng_slimpj/shuffled/split2/finalzzfkgz" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzfkgz" @@ -0,0 +1,5 @@ +{"text":"\\section{FORWARD PHYSICS AND FORWARD INSTRUMENTATION AT THE CMS INTERACTION POINT}\n\nForward physics at the LHC covers a wide range of diverse physics subjects which have in \ncommon that particles produced at small polar angles $\\theta$ and hence large \nvalues of rapidity provide a defining characteristic. This article concentrates\non their physics interest in $pp$ collisions.\n\nAt the Large Hadron Collider (LHC), where proton-proton collisions occur at \ncenter-of-mass energies of 14 TeV, the maximal possible rapidity is \n$y_{max} = \\ln{\\frac{\\sqrt{s}}{m_{\\pi}}}\\sim 11.5$. The two multi-purpose detectors ATLAS and \nCMS at the LHC are designed primarily for efficient detection of processes with large\npolar angles and hence\nhigh transverse momentum $p_T$. The coverage in pseudorapidity \n$\\eta = - \\ln{[\\tan{( \\theta \/ 2 )} ] }$ of\ntheir main components extends down to about $|\\theta| = 1^\\circ$ from the beam axis\nor $|\\eta| = 5$.\n\nFor the CMS detector, several subdetectors with coverage beyond $|\\eta| =5$ are \ncurrently under construction (CASTOR and ZDC sampling calorimeters) or in the proposal \nstage (FP420 proton taggers and fast timing detectors). \n\nFurthermore, a salient feature of the forward instrumentation around the \ninteraction point\n(IP) of CMS is the presence of TOTEM~\\cite{TOTEM}. TOTEM is an approved experiment at \nthe LHC for measuring the $pp$ elastic cross section as a function of the four-momentum\ntransfer squared, $t$, and for measuring the total cross section with a precision of\napproximately 1\\%. The TOTEM experiment uses the same IP as CMS and adds around \nthe CMS IP several tracking devices, located inside the volume \nof the main CMS detector, plus near-beam proton taggers at distances up to $\\pm 220$~m. \nThe CMS and TOTEM collaborations have described the considerable physics potential of\njoint data taking in a report to the LHCC \\cite{opus}. \n\nThe kinematic coverage of the combined CMS and TOTEM apparatus is unprecedented at a\nhadron collider. It would be even further enhanced by complementing CMS with the\ndetectors of the FP420 proposal, which would add forward physics to the portfolio of\npossible discovery processes at the LHC~\\cite{fp420}.\n\nAn overview of the forward instrumentation up to $\\pm 220$~m from the CMS IP is given in \nFig.~\\ref{fig:overview}. There are two suites of calorimeters with tracking detectors in front.\nThe CMS Hadron Forward (HF) calorimeter with the TOTEM telescope T1 in front \ncovers the region $3 < |\\eta | < 5$; the CMS CASTOR calorimeter with the TOTEM telescope \nT2 in front covers $5.2 < |\\eta| < 6.6$. The CMS ZDC calorimeters will be \ninstalled at the end of the straight LHC beam-line section, at a distance of \n$\\pm 140$~m from the IP. 
Near-beam proton taggers will be installed by TOTEM at \n$\\pm 147$~m and $\\pm 220$~m from the IP.\nFurther near-beam proton taggers in combination with very fast timing detectors to be\ninstalled at $\\pm 420$~m from the IP are part of the FP420 proposal.\n\n\\begin{figure}\n\\hspace*{-0.5cm}\n\\includegraphics[scale=0.32, angle = -90]{cms_totem_detectors_new.eps}\n\\caption{Layout of the forward detectors around the CMS interaction point.}\n\\label{fig:overview}\n\\end{figure}\n\n\n\n\n\\section{PHYSICS WITH FORWARD DETECTORS}\n\nIn the following, we describe the physics interest of the CMS CASTOR and ZDC \ncalorimeters~\\cite{PTDR1} \nand the TOTEM T1 and T2 telescopes~\\cite{TOTEM}. Of particular interest are \nQCD measurements at values of Bjorken-$x$ as low as $x \\sim 10^{-6}$ and the resulting \nsensitivity to non-DGLAP dynamics, \nas well as forward particle and energy flow measurements. These can play an important \nrole in tuning the Monte Carlo description of the underlying event and multiple interactions\nat the LHC and in constraining Monte Carlo generators used for cosmic ray studies.\n\n\\subsection{CMS CASTOR \\& ZDC calorimeters}\n\nThe two calorimeters are of interest for $pp$, $pA$ and $AA$ running at the LHC, \nwhere $A$ denotes a heavy ion. They are Cherenkov-light devices with electromagnetic \nand hadronic sections and will be present in \nthe first LHC $pp$ runs at luminosities where event pile-up should be low.\n\nThe CASTOR calorimeters are octagonal cylinders located at $\\sim 14$~m from the IP.\nThey are sampling calorimeters with tungsten plates as absorbers and fused silica quartz \nplates as active medium. The plates are inclined by $45^\\circ$ with respect to the \nbeam axis. Particles passing through the quartz emit Cherenkov\nphotons which are transmitted to photomultiplier tubes through air-core light guides.\nThe electromagnetic section is 22 radiation lengths $X_0$ deep\nwith 2 tungsten-quartz sandwiches; the hadronic section consists of 12 tungsten-quartz\nsandwiches. The total depth is 10.3 interaction lengths $\\lambda_l$. The calorimeters\nare read out in 16 azimuthal and 14 longitudinal segments. \nThey do not have any segmentation in $\\eta$. The CASTOR coverage of \n$5.2 < |\\eta| < 6.6$ hermetically closes the total CMS calorimetric pseudorapidity range, which then spans\n13 units. \n\nCurrently, funding is available only for a CASTOR calorimeter on one side of the IP.\nConstruction is advanced, with concluding beam tests foreseen for this summer and \ninstallation in time for the 2009 LHC data taking. \n\nThe CMS Zero Degree Calorimeters, ZDC, are located inside the TAN absorbers \nat the ends of the straight section of \nthe LHC beamline, between the LHC beampipes, at $\\pm 140$~m distance on each side of the \nIP. They are very radiation-hard sampling calorimeters \nwith tungsten plates as absorbers and quartz fibers as the active medium, read out via\nair-core light guides and photomultiplier tubes.\nThe electromagnetic part, $19 X_0$ deep, is segmented into 5 units horizontally; the \nhadronic part is segmented into 4 units in depth. The total depth is 6.5 $\\lambda_l$. The ZDC \ncalorimeters have 100\\% acceptance for neutral particles with $|\\eta|>8.4$ and can measure\n50~GeV photons with an energy resolution of about 10\\%. 
\n\nThe ZDC calorimeters are already installed and will be operational in 2008.\n\n\n\\subsection{TOTEM T1 \\& T2 telescopes}\n\nThe TOTEM T1 telescope consists of two arms symmetrically installed around the CMS IP \nin the endcaps of the\nCMS magnet, right in front of the CMS HF calorimeters and with $\\eta$ coverage similar to\nHF.\nEach arm consists of 5 planes of Cathode Strip Chambers (CSCs) which measure\n3 projections per plane, resulting in a spatial resolution of 0.36~mm in the radial and\n0.62~mm in the azimuthal coordinate in test beam measurements.\n\nThe two arms of the TOTEM T2 telescope are mounted right in front of the CASTOR \ncalorimeters, with similar $\\eta$ coverage. Each arm consists of 10 planes of 20\nsemi-circular modules of Gas Electron Multipliers (GEMs). The detector read-out is\norganized in strips and pads; resolutions of $115~\\mu $m for the radial coordinate and\nof $16~\\mu$rad in azimuthal angle were reached in prototype test beam measurements.\n\n\n\\subsection{Proton-proton collisions at low $x_{Bj}$}\n\nIn order to arrive at parton-parton interactions at very low $x_{Bj}$ values, several \nsteps in the QCD cascade initiated by the partons from the \nproton may occur before the final hard interaction takes place. Low-$x_{Bj}$ QCD hence offers\nideal conditions for studying the QCD parton evolution dynamics. Measurements at the\nHERA $ep$ collider have explored low-$x_{Bj}$ dynamics down to values of a few times $10^{-5}$.\nAt the LHC the minimum accessible $x$ decreases by a factor $\\sim 10$ for every\n2 units of rapidity. A process with a hard scale of $Q \\sim 10$~GeV and within the \nacceptance of T2\/CASTOR ($\\eta = 6$) can occur at $x$ values as low as \n$10^{-6}$.\n\n\n\\begin{figure}[htb]\n\\includegraphics[scale =0.4]{pierre_m2vsx2_bw.eps}\n\\caption{Acceptance of the T2\/CASTOR detectors for Drell-Yan electrons; see text.}\n\\label{fig:DYcoverage}\n\\end{figure}\n\n\nForward particles at the LHC can be produced in collisions between two partons with\n$x_1 \\gg x_2$, in which case the hard interaction system is boosted forward.\nAn example is Drell-Yan production of $e^+ e^-$ pairs, \n$ q \\bar{q} \\rightarrow \\gamma^\\star \\rightarrow e^+ e^-$, a process that probes primarily \nthe quark content of the proton. Figure~\\ref{fig:DYcoverage} shows the distribution of the invariant\nmass $M$ of the $e^+ e^-$ system versus the $x_{Bj}$ of one of the quarks, where\n$x_2$ is chosen such that $x_1 \\gg x_2$. The solid curve shows the kinematic limit\n$M^{max} = \\sqrt{x_2 s}$. The dotted lines indicate the acceptance window for both\nelectrons to be detectable in T2\/CASTOR.\nThe black points correspond to all Drell-Yan events generated \nwith Pythia; the green\/light grey (blue\/dark grey) points refer to those events in which at least one electron (both\nelectrons) lies within the T2\/CASTOR detector acceptance. For invariant masses of the $e^+ e^-$ \nsystem of $M> 10$~GeV, $x_{Bj}$ values down to $10^{-6}$ are accessible.\n\nThe rapid rise of the gluon density in the proton with decreasing values of $x_{Bj}$\nobserved by HERA in deep inelastic scattering cannot continue indefinitely without violating\nunitarity at some point. Hence, parton recombination within the proton must set in at low\nenough values of $x_{Bj}$, leading to non-linear terms in the QCD gluon evolution. 
\nFigure~\\ref{fig:saturation} compares, for \nDrell-Yan processes with both electrons within the T2\/CASTOR detector acceptance, the cross\nsection predicted by a PDF model without (CTEQ5L~\\cite{CTEQ}) and with (EHKQS~\\cite{EHKQS}) \nsaturation effects. A difference of a factor of 2 is visible in the predictions. Further details \ncan be found in~\\cite{opus}.\n\n\n\n\\begin{figure}[htb]\n\\includegraphics[scale =0.3]{pierre_dsigmadx2dy_bw.eps}\n\\caption{Comparison of the cross section prediction of a model without (CTEQ5L) and \nwith (EHKQS) saturation for Drell-Yan events in which both electrons are detected in T2\/CASTOR.}\n\\label{fig:saturation}\n\\end{figure}\n\n\nComplementary information on the QCD evolution at low $x_{Bj}$ can be gained from\nforward jets. The DGLAP evolution~\\cite{DGLAP} \nassumes that parton emission in the cascade is strongly \nordered in transverse momentum, while in the BFKL evolution~\\cite{BFKL}, \nno ordering in $k_t$ is assumed,\nbut strong ordering in $x$. At small $x_{Bj}$, the difference between the two approaches is\nexpected to be most pronounced for hard partons created at the beginning of the cascade, \nat pseudorapidities close to the proton, i.e. in the forward direction. Monte Carlo generator\nstudies indicate that the resulting excess of forward jets with high $p_T$, observed\nat HERA, might be measurable with T2\/CASTOR. Another observable sensitive to\nBFKL-like QCD evolution dynamics is dijet production with large rapidity separation, which \nenhances the available phase space for BFKL-like parton radiation between the jets.\nLikewise, dijets separated by a large rapidity gap are of interest since they indicate\na process in which no color flow occurs in the hard scatter but where, contrary to the \ntraditional picture of soft Pomeron exchange, a high transverse momentum transfer also \noccurs across the gap. \n\n\\subsection{Multiplicity \\& energy flow}\n\nThe forward detectors can be valuable tools for Monte Carlo tuning.\n\n\nThe hard scatter in hadron-hadron collisions takes place in a dynamic environment,\nreferred to as the ``underlying event'' (UE), where\nadditional soft or hard interactions between the partons and \ninitial and final state radiation occur. The effect of the UE cannot be disentangled on an\nevent-by-event basis and needs to be included by means of tuning Monte Carlo multiplicity \nand energy flow predictions to data. The predictive power of these tunes obtained \nfrom Tevatron data is very limited, and ways need to be found to constrain the UE at LHC \nenergies with LHC data. As shown in~\\cite{Borras}, the forward detectors are sensitive\nto features of the UE which central detector information alone cannot constrain.\n\n\\begin{figure}[!b]\n\\includegraphics[scale =0.55]{cosmics_Eflow.epsi}\n\\caption{Energy flow as predicted by Monte Carlo generators used for the description of \ncosmic ray induced air showers~\\cite{opus}.}\n\\label{fig:cosmics}\n\\end{figure}\n\nAnother area with high uncertainties is modelling the interaction of primary cosmic rays in \nthe PeV energy range with the atmosphere. Their rate of occurrence per year is too low for\nreliable quantitative analysis. The center-of-mass energy in $pp$ collisions at the LHC \ncorresponds to 100 PeV energy in a fixed target collision. Figure~\\ref{fig:cosmics} shows the \nenergy flow\nas a function of pseudorapidity as predicted by different Monte Carlo generators in use in the cosmic ray\ncommunity. 
Clear differences in the predictions are visible in the acceptance region of\nT2\/CASTOR and ZDC.\n\n\n\n\\section{PHYSICS WITH A VETO ON FORWARD DETECTORS}\n\nEvents of the type $pp \\rightarrow pXp$ or $pp \\rightarrow Xp$, where no color exchange\ntakes place between the proton(s) and the system $X$, can be caused by $\\gamma$ exchange,\nor by diffractive interactions. In both cases, the absence of color flow between the\nproton(s) and the system $X$ results in a large gap in the rapidity distribution of the\nhadronic final state. Such a gap can be detected by requiring the absence of a signal in \nthe forward detectors. In the following, we discuss three exemplary processes which are \ncharacterized by a large rapidity gap in their hadronic final state.\n\n\\subsection{Diffraction with a hard scale}\n\nDiffraction, traditionally thought of as a soft process and described in Regge theory, can also\noccur with a hard scale ($W$, dijets, heavy flavors), as\nhas been experimentally observed at UA8, HERA and the Tevatron. In the presence of a hard scale,\ndiffractive processes can be described in perturbative QCD (pQCD) and their cross sections\ncan be factorized into the cross section of the hard scatter and a diffractive parton \ndistribution function (dPDF). In diffractive hadron-hadron scattering, rescattering between\nspectator particles breaks the factorization. The so-called rapidity gap survival \nprobability quantifies this effect~\\cite{survival}. A measure of it can be obtained from the ratio of\ndiffractive to inclusive processes with the same hard scale. At the Tevatron, the ratio \nis found to be ${\\cal O}(1 \\%)$~\\cite{tevatron}.\nTheoretical expectations for the LHC vary from a fraction of a percent to as much as \n30\\%~\\cite{predLHC}. \n\nSingle diffractive $W$ production, $pp \\rightarrow pX$, where $X$ includes a $W$, \nis an example of diffraction with a hard scale at the LHC and is \nparticularly sensitive to the quark component of the proton dPDF in an as-yet unmeasured \nregion. In the absence of event pile-up, a selection is possible based on the requirement\nthat there be no activity above noise level in the CMS forward calorimeters HF and CASTOR.\n\n\\begin{figure}\n\\includegraphics[scale=0.4]{nHFvsnCASTOROnlyMinus_nTrkMax_1_100pb_new.eps}\n\\caption{Number of towers with activity above noise level in HF versus in CASTOR for\nsingle diffractive $W$ production and for an integrated luminosity of 100~${\\rm pb}^{-1}$~\\cite{SDW}.}\n\\label{fig:SDW}\n\\end{figure}\n\nFigure~\\ref{fig:SDW} shows the number of towers with activity above noise level in HF versus \nin CASTOR. The decay channel is $W \\rightarrow \\mu \\nu$ and a rapidity gap survival factor \nof 5\\% is assumed in the diffractive Monte Carlo sample (Pomwig). The number of events is\nnormalized to an integrated luminosity of 100~$\\rm pb^{-1}$ of single interactions (i.e. no\nevent pile-up). In the combined Pomwig + Pythia Monte Carlo sample, a clear excess in the \nbin [n(Castor), n(HF)] = [0,0] is visible, of ${\\cal O}(100)$ events. The ratio of diffractive \nto non-diffractive events in the [0,0] bin, approximately 20, demonstrates the feasibility of \nobserving single diffractive $W$ production at the LHC.\n\nThe study assumes that CASTOR will be available only on one side. A second CASTOR in the \nopposite hemisphere and the use of T1 and T2 will improve the observable excess \nfurther. 
\n\n\\subsection{Exclusive dilepton production}\n\nExclusive dimuon and dielectron production with no significant additional\nactivity in the CMS detector occurs with high cross section in\ngamma-mediated processes at the LHC, either as the pure QED process \n$\\gamma \\gamma \\rightarrow ll$ \nor in $\\Upsilon$ photoproduction~\\footnote{Photoproduction of $J\/\\psi$ mesons is also \npossible, but difficult to observe because of the trigger thresholds for leptons in CMS.}.\nA feasibility study to detect them with CMS was presented in this\nworkshop~\\cite{Hollar}. \n\nThe event selection is based on requiring that outside of the two leptons, no other \nsignificant activity is visible within the central CMS detector, neither in the calorimeter\nnor in the tracking system. In 100 $\\rm pb^{-1}$ of single interaction data, ${\\cal O} (700)$\nevents in the dimuon channel and ${\\cal O} (70)$ in the dielectron channel can be selected.\nEvents in which one of the protons in the process does not stay intact but dissociates \nare the dominant source of background and are comparable in statistics to the signal. \nThis background can be reduced by 2\/3 by means of a veto condition on activity in CASTOR\nand ZDC, in a configuration with a ZDC on each side and a CASTOR on only one side of the IP.\n\nThe theoretically very precisely known cross section of the almost pure QED process \n$pp \\rightarrow pllp$ via $\\gamma$ exchange is an ideal calibration channel. With\n$100 \\rm pb^{-1}$ of data, an absolute luminosity calibration with 4\\% precision is feasible.\nFurthermore, exclusive dimuon production is an ideal alignment channel with high statistics \nfor the proposed proton taggers at 420~m from the IP. Upsilon photoproduction can constrain \nQCD models of diffraction, as discussed in the next section. \nThe $\\gamma \\gamma \\rightarrow e^+ e^-$ process has recently been observed at the Tevatron~\\cite{exclTevatron}.\n\n\n\\subsection{Upsilon photoproduction} \n\nAssuming the STARLIGHT~\\cite{starlight} \nMonte Carlo cross section prediction, the 1S, 2S and 3S resonances\nwill be clearly visible in $100 \\rm pb^{-1}$ of single interaction data. With their average\n$\\gamma p$ center-of-mass energy of $\\simeq 2400$~GeV they will extend\nthe accessible range of the HERA measurement of the $W_{\\gamma p}$ dependence of \n$\\sigma (\\gamma p \\rightarrow \\Upsilon(1 S) p)$ by one order of magnitude. \n\n\n\\begin{figure}[htb]\n\\includegraphics[scale=0.4]{UpsilonSignalPAS.eps}\n\\caption{Invariant mass of exclusive dimuon production in the Upsilon mass region~\\cite{ExclDileptons}.}\n\\label{fig:Upsilon}\n\\end{figure}\n\nBy means of the $p_T^2$ value of the $\\Upsilon$ as an estimator of the transferred four-momentum \nsquared, $t$, at the proton vertex, it might be possible to measure the $t$ dependence of the \ncross section. This dependence is sensitive to the two-dimensional gluon distribution of the \nproton and would give access to the generalized parton distribution function (GPD) of the \nproton.\n\n\\section{PHYSICS WITH NEAR-BEAM PROTON TAGGERS}\n\nFor slightly off-momentum protons, the LHC beamline with its magnets is essentially a\nspectrometer. If a scattered proton is bent sufficiently, but little enough to remain within \nthe beam-pipe, it can be detected by means of detectors inserted into the beam-pipe that\napproach the beam envelope as closely as possible. 
At high luminosity at the LHC,\nlarge rapidity gaps typical of diffractive events or events with $\\gamma$ exchange tend to be \nfilled in by particles from overlaid pile-up events. Hence tagging the outgoing scattered \nproton(s) becomes the only means of detection at high luminosities.\n\n\\subsection{TOTEM and FP420 proton taggers}\n\nThe TOTEM proton taggers, located at $\\pm 147$~m and $\\pm 220$~m from the IP, each consist\nof silicon strip detectors housed in movable Roman Pots~\\cite{TOTEM}. \nThe detector\ndesign is such that the beam can be approached to a minimal distance of $10 \\sigma + 0.5$~mm.\nWith nominal LHC beam optics, scattered protons from the IP are within the acceptance of the \ntaggers at 220~m when their fractional momentum loss $\\xi$ satisfies $0.02 < \\xi < 0.2$. \n\n\n\\begin{figure}\n\\includegraphics[scale=0.35, angle =-90]{fp420coverage.eps}\n\\caption{Acceptance in $x_L = 1 - \\xi$, where $\\xi$ is the fractional momentum loss of the \nscattered proton, of the TOTEM and FP420 proton taggers. The data points shown are from \nZEUS~\\cite{zeus}.}\n\\label{fig:xiCoverage}\n\\end{figure}\n\nIn order to achieve acceptance at smaller values of $\\xi$ with nominal LHC beam optics, \ndetectors have to be located further away from the IP. Proton taggers at $\\pm 420$~m from the\nIP have an acceptance of $0.002 < \\xi < 0.02$, complementing taggers at 220~m, as shown\nin Figure~\\ref{fig:xiCoverage}. \nThe proposal~\\cite{fp420} of the FP420 R\\&D collaboration foresees employing 3-D silicon, a\nnovel, extremely radiation-hard silicon technology, for the proton taggers. Additional \nfast timing Cherenkov detectors will be capable of determining, within a resolution of a \nfew millimeters, whether the tagged proton came from the same vertex as the hard scatter visible\nin the central CMS detector. In order to comply with the space constraints of the location \nwithin the cryogenic region of the LHC, these detectors will be attached to a movable beam-pipe\nwith the help of which the detectors can approach the beam to within 3~mm.\n\nThe FP420 proposal is currently under scrutiny in CMS and ATLAS. If approved, installation could\nproceed in 2010, after the LHC start-up.\n\n\n\\subsection{Physics potential}\n\nForward proton tagging capabilities enhance the physics potential of CMS. They would\nmake possible a precise measurement of the mass and quantum numbers of the Higgs boson\nshould it be discovered by traditional searches. They also augment the CMS discovery reach\nfor Higgs production in the minimal supersymmetric extension (MSSM) of the Standard Model (SM)\nand for physics beyond the SM in $\\gamma p$ and $\\gamma \\gamma$ interactions.\n\nA case in point is the central exclusive production (CEP) process~\\cite{CEP}, \n$pp \\rightarrow p + \\phi + p$, where the plus sign denotes the absence of hadronic \nactivity between the outgoing protons, which survive the interaction intact, and the \nstate $\\phi$. The final state consists solely of the\nscattered protons, which may be detected in the forward proton taggers, and the decay \nproducts of $\\phi$, which can be detected in the central CMS detector. \nSelection rules force the produced state $\\phi$ to have $J^{PC} = n^{++}$ with $n = 0, 2, \\ldots$. \nThis process hence offers an experimentally very clean \nlaboratory for the discovery of any particle with these quantum numbers that couples \nstrongly to gluons. 
Additional advantages are the possibility to determine the mass of the state\n$\\phi$ with excellent resolution from the scattered protons alone, independent of its\ndecay products, and the possibility, unique at the LHC, to determine the quantum numbers of \n$\\phi$ directly from the azimuthal asymmetry between the scattered protons.\n\n\n\\begin{figure}[htb]\n\\includegraphics[angle=-90]{marek.eps}\n\\caption{Five $\\sigma$ discovery contours for central exclusive production of the \nheavier CP-even Higgs boson $H$~\\cite{Tasevsky}. See text for details.}\n\\label{fig:higgs}\n\\end{figure}\n\nIn the case of a SM Higgs boson with mass close to the current exclusion limit, which decays\npreferentially into $b \\bar{b}$, CEP\nimproves the achievable signal-to-background ratio dramatically, to \n${\\cal O}(1)$~\\cite{fp420,lightHiggs}. \nIn certain\nregions of the MSSM, generally known as the ``LHC wedge region'', the heavy MSSM Higgs bosons would \nescape detection at the LHC. \nThere, the preferred search channels at the LHC are not available \nbecause the \nheavy Higgs bosons decouple from gauge bosons while their couplings to $b \\bar{b}$ and \n$\\tau \\bar{\\tau}$ are enhanced at high $\\tan{\\beta}$. Figure~\\ref{fig:higgs} depicts\nthe 5~$\\sigma$ discovery contour for the $H \\rightarrow b \\bar{b}$ channel in CEP in \nthe $M_A - \\tan{\\beta}$ plane of the MSSM within the $M_h^{max}$ benchmark scenario\nwith $\\mu = +200$~GeV and for different integrated luminosities. \nThe values of the mass of the heavier CP-even Higgs boson, $M_H$, are indicated by \ncontour lines. The dark region corresponds to the parameter region excluded by LEP. \n\nForward proton tagging will also give access to a rich QCD program on hard diffraction\nat high luminosities, where event pile-up is significant and makes the gaps \nin the hadronic final state otherwise typical of diffraction undetectable. Detailed studies with high\nstatistical precision will be possible on skewed, unintegrated gluon \ndensities; Generalized Parton Distributions, which contain information on the correlations \nbetween partons in the proton; and the rapidity gap survival probability, a quantity closely \nlinked to soft rescattering effects and the features of the underlying event at the LHC.\n\nForward proton tagging also provides the possibility for precision studies of $\\gamma p$\nand $\\gamma \\gamma$ interactions at center-of-mass energies never reached before. Anomalous top\nproduction, anomalous gauge boson couplings, exclusive dilepton production and quarkonia \nproduction are possible topics, as was discussed in detail at this workshop.\n\n\n\\section{SUMMARY}\n\nForward physics in $pp$ collisions at the LHC covers a wide range of diverse physics subjects (low-$x_{Bj}$ QCD,\nhard diffraction, $\\gamma \\gamma$ and $\\gamma p$ interactions)\n which have in\ncommon that particles produced at large\nvalues of rapidity provide a defining characteristic. \nFor the CMS detector, several subdetectors with forward $\\eta$ coverage \nare currently under construction (CASTOR, ZDC) or in the proposal \nstage (FP420). The TOTEM experiment \nadds around the CMS IP several tracking devices and near-beam proton taggers at \ndistances up to $\\pm 220$~m. \nThe kinematic coverage of the combined CMS and TOTEM apparatus is unprecedented at a\nhadron collider. 
It would be even further enhanced by complementing CMS with the\ndetectors of the FP420 proposal which would add forward physics to the portfolio of\npossible discovery processes at the LHC.\n\n
","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{Acknowledgements}\nThis work was sponsored in part by the National Science Foundation under contract ECCS-1800812. \nThis material is based upon work supported by the Google Cloud Research Credits program with the award GCP19980904.\nWe would like to sincerely thank Dr. Hassan Hijazi at Los Alamos National Laboratory for sharing a copy of his GravityX AC-OPF solver with us for comparison of our results. \n\\section{Background}\n\\label{sec:background}\n\n\\subsection{Homotopy Methods}\nThe Newton-Raphson (N-R) method is often used to solve the underlying non-linear equations in an AC-OPF formulation, and its convergence can be sensitive to the choice of initial guess for the variables.\nIf the starting point is outside the basin of attraction for the problem, convergence can be very slow.\nA class of successive-relaxation methodologies, known as homotopy methods, was introduced to mitigate such issues.\nHomotopy is a numerical analysis technique for solving a non-linear system of equations that traverses the solution space by deforming the equations from a trivial problem to the original one.\nThe homotopy method initially defines a relaxation of the original problem which is trivial to solve, and proceeds to solve a sequence of deformations that ultimately leads back to the original problem.\nSuppose $\\mathcal{F}(x)=0$ is the set of non-linear equations that we aim to solve; we define a mapping to a trivial problem represented by $\\mathcal{G}(x)=0$.\nThe deformation from the trivial problem to the original non-linear problem is controlled by embedding a scalar homotopy factor $\\nu \\in [0,1]$ into the non-linear equations, thereby defining a sequence of problems given by (\\ref{eq:basic_homotopy}).\n\\begin{equation}\n\\label{eq:basic_homotopy}\n \\mathcal{H}(x, \\nu) = \\nu \\mathcal{G}(x) + (1-\\nu) \\mathcal{F}(x)=0,~ \\nu \\in [0,1]\n\\end{equation}\nHaving determined the solution $x^0$ of the trivial problem $\\mathcal{H}(x,1) = \\mathcal{G}(x) = 0$, we can iteratively decrease $\\nu$ to move the system closer to the original problem, using $x^0$ as the initial guess for the next sub-problem.\nBy incrementally decreasing $\\nu$ and solving the updated sub-problems, we traverse the solution space from the trivial problem to the original problem.\nFor this method to be effective, the solution of the previous sub-problem should lie within the basin of attraction of the current sub-problem's solution, in order to exploit the quadratic convergence of N-R.\nIt is often challenging to develop a general homotopy method that ensures a proper traversal of the solution space, where there exists a feasible solution for every sub-problem $\\mathcal{H}(x,\\nu)=0$ along the path from $\\nu = 1$ to $\\nu = 0$~\\cite{Allgower}. We present a homotopy method based on circuit-inspired intuition that ensures a feasible path. 
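\n\nA minimal sketch of the continuation loop implied by (\\ref{eq:basic_homotopy}) is given below; it is illustrative only, and the Newton-Raphson routine \\texttt{newton\\_solve} as well as the uniform step schedule are assumptions rather than details taken from the methods discussed here.\n\\begin{verbatim}\nimport numpy as np\n\ndef solve_homotopy(F, G, x0, newton_solve, n_steps=20):\n    # Sweep nu from 1 (trivial problem G) to 0 (original problem F),\n    # warm-starting each sub-problem from the previous solution.\n    x = x0\n    for nu in np.linspace(1.0, 0.0, n_steps + 1):\n        def H(x_, nu_=nu):\n            return nu_ * G(x_) + (1.0 - nu_) * F(x_)\n        x = newton_solve(H, x)  # previous x serves as the initial guess\n    return x\n\\end{verbatim}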
\n\n\\subsection{Homotopy Methods in Power System}\n\nA number of approaches have applied homotopy methods to solve power flow, AC-OPF, and other variants in recent years \\cite{Murray-homotopy,Pandey-IMB, Park-homotopy,Network-Stepping,Pandey-Tx-Stepping}.\nIn \\cite{Pandey-Tx-Stepping}, the authors present a circuit-theoretic homotopy to robustly solve the power flow equations that embeds a homotopy factor in the equivalent circuits of the grid components to solve power flow.\nThese methods are extended in the Incremental Model Building (IMB) framework \\cite{Pandey-IMB} to solve the AC-OPF optimization problem.\nThe idea behind the IMB framework is to build the grid from ground-up using an embedded homotopy factor in AC-OPF equations.\nIMB defines a relaxed problem, $\\mathcal{G}(x)$, where the buses are almost completely shorted to one another, nearly all of the loads are removed at $\\nu=1$, and $\\nu$ is embedded in the generator limits so that the generator injections can be initially close to zero while remaining feasible with respect to the inequality constraints.\nAs a result, the relaxed network has very little current flow, so nearly all of the buses have voltage magnitude and angles close to that of the reference bus, and a flat-start initial point can reliably be used as a trivial solution of the homotopy sub-problem at $\\nu=1$.\nTo satisfy the requirement that a feasible solution exist for every sub-problem along the homotopy path, fictitious slack current sources are introduced at each bus for sub-problems $\\nu \\neq 0$, and their injections are heavily penalized \\cite{Jereminov-feasibility}.\n\nWhile IMB shows an approach to solve AC-OPF without good initial conditions, it does not include discrete control variables in the formulation.\nEven with a continuous relaxation to these variables, their introduction can significantly increase the nonlinearity of the network flow constraint equations and make the problem very challenging to solve.\nIn this work, we present a framework that builds on the IMB framework to include discrete control devices to solve the \\ACOPFD robustly.\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nIn this paper we developed a two-stage homotopy algorithm to robustly solve real-world AC-OPF problems while incorporating discrete control devices. The proposed approach uses fundamentals from circuit-theory to design the mechanisms of the underlying homotopy methods and the approach does not depend on access to good initial conditions to solve the overall problem. To evaluate this approach, we ran a series of tests for four networks containing transformers and shunts with discrete settings.\nFor two of the cases, this method performed better than a state-of-the-art solver. Furthermore, we showed that by constructing different homotopy paths, we find different local optima solutions for the same problem, which had significantly different generation dispatch patterns. 
Therefore, we believe that the nonconvexity of the solution space of \\ACOPFDs problems warrant more investigation.\n\n\n\n\n\n\\section{AC-OPF with Discrete Variables Formulation \\ACOPFD}\n\\label{sec:formulation}\nThe paper solves the following AC-OPF with discrete control settings, which we will refer to as \\ACOPFD:\n\\begin{subequations}\n\\begin{gather}\n\\underset{x,\\xd}{\\text{minimize }}\nf_0(x, \\xd) \\\\\n\\text{subject to: }\n g(x, \\xd) =0 \\\\%, \\; i = 1, \\ldots, m.\n h(x, \\xd) \\leq 0 \\\\\n \\xd^i \\in \\mathcal{D}^i, \\; i = 1, \\ldots, n_d\n\\end{gather}\n\\end{subequations}\nThe vector $x$ consists of the continuous-valued variables of the optimization, including complex bus voltages, real and reactive power injections by generators, continuous shunts settings, and substitution variables to track transformer and line flows.\nThe vector $\\xd\\in R^{n_d}$ represents the discrete control settings that are limited to a finite set of discrete values. In this paper, these include transformer taps $\\tau$ and phase-shifters $\\phi$ and discrete shunts $B^{sh}$ but the method is not restricted to just these. Each element of $\\xd$, $\\xd^i\\in\\xd$, has an associated integrality constraint (1d) restricting each device setting to be a finite value within the set, $\\mathcal{D}^i$. The objective function (1a) is chosen depending on the purpose of the AC-OPF study, but typically represents the economic cost of power generation.\nConstraint (1b) represents the AC network constraints, which can be formulated either to enforce net zero power-mismatch or current-mismatch at all nodes.\nConstraint (1c) contains the bus voltage magnitude limits, branch and transformer thermal limits, and real and reactive power generation limits.\n\n\n\n\\section{Implementation and Evaluation}\n\\label{sec:implementation}\nTo test the effectiveness of the proposed algorithm, we run the \\ACOPFDs solver on four networks based on cases used in the ARPA-E Grid Optimization Challenge 2 \\cite{go2} that contain transformers with adjustable tap ratios or phase shifts, and discrete switched shunt banks.\nThe cases were modified to remove additional features of the GO formulation in order to focus on the efficacy of our approach for incorporating discrete control devices, and to make all generation costs linear.\nThe details of cases used are shown in Table I, and the files have been made available in a public Github repository \\cite{cases-github}.\n\n\\begin{table}[h]\n\\label{tab:case-info}\n\\caption{Properties of \\ACOPFD ~Cases Tested}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\textbf{Name} & \\textbf{Buses} & \\textbf{Generators} & \\textbf{Loads} & \\textbf{Lines} & \\textbf{Discrete Devices} \\\\ \\hline\nA & 3022 & 420 & 1625 & 3579 & 1384 \\\\ \\hline\nB & 6867 & 567 & 4618 & 7815 & 925 \\\\ \\hline\nC & 11152 & 1318 & 4661 & 16036 & 1030 \\\\ \\hline\nD & 16789 & 994 & 7846 & 23375 & 2722 \\\\ \\hline\n\\end{tabular}\n\\end{table}\nIn each evaluation, $\\xdbase$ values are set to the respective device settings listed in the .raw file, but to simulate running the \\ACOPFDs without any prior knowledge of settings, a copy of each case file is created where each transformer's initial setting is listed as its median available setting, and each switched shunt bank has all switches off (0 p.u.).\nThese cases with initial settings removed are denoted with a * superscript.\n\n\\subsection{Robustness and Scalability of IMB+D}\n\nTable \\ref{tab:big-results} shows a summary of results.\n$k_{adj}=0.1$ 
was applied equally across all devices in Stage 1, but normalized by the range of the individual device's settings ($\\xdupper^i - \\xdlower^i$)\nAs a comparison point, we also evaluated the same cases using the GravityX AC-OPF solver \\cite{Gravity}, a leading submission to the GO Challenge which utilizes the IPOPT non-linear optimization tool \\cite{IPOPT}.\n``\\% Adj'' indicates how many device settings in the solution differed from their initial value (the original .raw file values in the standard cases, and the simulated unknown in the * cases).\nFor each test, to evaluate the necessity of Stage II, a simple ``round and resolve'' approach is also attempted at the end of Stage I, in which the a feasible solution is sought immediately after rounding and fixing settings. \n\nThe proposed approach is able to find solutions for all the cases, even when initial device settings are removed.\nFor Cases A and D, evaluating from the simulated unknown settings actually produces a very slightly better objective value.\nGravityX produces a slightly better objective for both versions of cases B and C, but a less optimal solution for case A and does not converge for case D. \nObserve that Stage II is not strictly necessary for cases B, C, or D, but it is necessary for Case A to converge with discrete settings, which makes sense because in this case the tap steps are much further apart, so selecting discrete settings introduces a larger disturbance.\n\\begin{table}[]\n\\centering\n\\caption{Objective solutions and best $k_{adj}$ for tested cases}\n\\label{tab:big-results}\n\\begin{tabular}{|c|c|c|c|c|c|c|}\n\\hline\n & \\multicolumn{3}{c|}{IMB+D ($k_{adj}=0.1$)} & \\multicolumn{2}{c|}{GravityX+IP-OPT} \\\\ \\hline\nCase & Obj & \\% Adj & Need Stage II? & Obj & \\% Adj \\\\ \\hline\nA~ & 5.377e5 & 24.9\\% & Yes & 5.561e5 & 23.8\\% \\\\ \\hline\nA* & 5.357e5 & 25.6\\% & Yes & 5.561e5 & 17.6\\% \\\\ \\hline\nB~ & 1.216e5 & 90.8\\% & No & 1.215e5 & 74.5\\% \\\\ \\hline\nB* & 1.216e5 & 93.9\\% & No & 1.215e5 & 71.7\\% \\\\ \\hline\nC~ & 5.294e5 & 77.8 \\% & No & 5.292e5 & 80.29\\% \\\\ \\hline\nC* & 5.294e5 & 82.2\\% & No & 5.292e5 & 69.4\\% \\\\ \\hline\nD~ & 3.592e5 & 74.7\\% & No & No Solution & N\/A \\\\ \\hline\nD* & 3.591e5 & 89.2\\% & No & No Solution & N\/A \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\\subsection{Impact of Different Homotopy Paths}\nTo construct different homotopy paths for the problem, we vary the Homotopy Stage I's adjustment penalty $k_{adj}$ parameter. We solve cases D and D* across a sweep of $k_{adj}$ values, and also at $k_{adj}=0$, meaning a penalty term is never used in Stage I.\nRecall that the term is completely removed at $\\nu_1 = 0$, so \\textit{the same final problem is being solved}, regardless of choice of $k_{adj}$. \nEssentially, all that differs is the homotopy path to the original \\ACOPFD problem. \n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=2.6in]{figures\/test_figure1.png}\n \\caption{Changing $k_{adj}$ affects the homotopy path taken, and thus yields different solutions}\n \n \\label{fig:stage1-graphs}\n\\end{figure}\nFig. 
\\ref{fig:stage1-graphs} shows the effect of increasing $k_{adj}$ (i.e., varying the homotopy path) on both the final solution cost and the percentage of adjustments made to devices in the solution.\nThis plot shows how it is possible to find multiple different local optima by parameterizing the penalty terms and essentially varying the homotopy path.\nFirst, we obtain the best objective value with very low penalty factors.\nAdditionally, when knowledge of base settings is available, increasing $k_{adj}$ reduces the number of adjustments.\nSurprisingly, however, we observe that for the D* simulations, increasing $k_{adj}$ does not have a large impact on the number of discrete adjustments.\nWe hypothesize this is because some devices may require certain low or high settings for the network to be feasible, and so adjustments will be made regardless of the penalty value.\n\n\\subsection{Comparison of Local Solutions}\n\nTo further investigate the impact of different local solutions on the grid dispatch, we consider three solutions for the exact same problem as defined by Case A*: two solutions generated by IMB+D by taking separate homotopy paths through the choice of $k_{adj}$, and one generated by GravityX.\nThe three solutions have generation dispatch costs in the range of \\$5.36e5-\\$5.84e5.\nHowever, more interesting insights can be gathered by looking at the actual dispatch of six large generators in Case A* for these three distinct local optima, which are shown in Fig. \\ref{fig:gen_dispatch}.\nWe notice that the dispatches for these generators vary significantly across the three solutions.\nThis would imply that in the real world, the dispatch produced by an \\ACOPFDs study can have widely varying patterns depending on the method (e.g., GravityX vs. IMB+D) used to solve the non-convex problem, or even the choice of homotopy path within IMB+D based on the value of $k_{adj}$.\nIn the real world, this could make it harder for grid operators to justify the choice of one dispatch over another, as the global minimum may not be easily obtainable.\nToday, by running the convex DC-constraint based optimization with fixed settings, they are able to overcome the problem. 
\nHowever, due to the lack of granularity and accuracy, DC-based optimizations may be insufficient for future scenarios where adjustments to discrete devices are necessary in the optimization.\n\\begin{figure}\n \\centering\n \\includegraphics[width=2.5in]{figures\/generator_dispatch.png}\n \\caption{Changing the penalty scalar $k_{adj}$ on adjustments in Stage I causes a different homotopy path to be taken and can yield starkly different local optima in the final solution.}\n \\label{fig:gen_dispatch}\n\\end{figure}\n\\section{Introduction}\n\\label{sec:introduction}\n\nA critical framework for modeling and optimizing the efficiency of today's power grid is based on the Alternating-Current Optimal Power Flow (AC-OPF) problem.\nIn the AC-OPF problem, a user-specified objective function, typically the cost of power generation, is optimized subject to network and device constraints.\nThese constraints include AC network constraints defined by Kirchhoff's Voltage and Current Laws, as well inequality constraints representing operational limits, such as bounds on voltages, power generation, and power transfer, to ensure reliable operation of the power system.\nIt has been estimated that improved methods to model and run the US power grid could improve dispatch efficiency in the US electricity system leading to savings between \\$6 billion and \\$19 billion per year \\cite{Cain}.\nMoreover, improved AC-OPF solution techniques can improve the reliability and resiliency of the grid, which is under increasing duress from extreme weather events, such as California's ongoing wildfires and the aftermath of the winter 2021 storm in Texas.\n\nTraditionally, the AC-OPF problem is a non-convex nonlinear problem with only continuous variables.\nHowever, many devices deployed in the grid today have controls with discrete-valued settings, and are likely to become more widespread as the modernization of the grid continues.\nDevices such as switched shunt banks and adjustable transformers can assist in balancing power flows in the system, meeting resilience-focused operational constraints, and locating a more optimal or resilient operating point than one located using fixed components settings.\nThe increased flexibility from these discrete devices also allows operators to avoid or delay costly upgrades to the network while increasing resiliency during extreme events. \nGiven the significant potential benefits, a recent Grid Optimization (GO) Competition (Challenge 2), organized by ARPA-E, sought new robust approaches to AC-OPF where adjustments to discrete devices like tap changers, phase shifters, and switched shunt banks are included in the variable set \\cite{go2}.\n\nAlthough there are clear benefits to inclusion of discrete control devices in an AC-OPF study, doing so directly results in a mixed-integer non-linear program (MINLP) that is significantly harder to solve.\nIncluding these discrete settings in the variable space could lead to searching over a combinatorially-large solution space, which would be intractable to solve in practical time. \nIf prior settings are at least known, then these can be used a starting point to begin the search for new settings.\nHowever, prior settings may be not known in some situations like planning or policy studies. For instance, when engineers evaluate the feasibility of 50\\% renewable penetration in a future U.S. Eastern Interconnection \\cite{miso50}. 
\n\nOne approach \\cite{liu-linearization} to include these discrete variables relies on sequential linearization of the optimization problem and then handling the discrete variables using mixed-integer linear problem (MILP) techniques.\nUnfortunately, the underlying network constraints can be highly non-linear with respect to certain control settings such as transformer tap ratios and phase shifts. Therefore these methods can suffer from a significant loss in model fidelity.\nAdditionally, linear relaxations to the OPF problem can lead to physically infeasible solutions, which are extremely undesirable for a grid dispatch \\cite{Baker-DCOPF}.\n\nThe simplest approach to this obstacle is a two-stage rounding technique \\cite{Tinney} \\cite{Papalexopoulos}.\nIn this method, the discrete control settings are initially treated as continuous-valued, and a relaxed formulation of the AC-OPF problem is solved; then, these variables are fixed to their nearest respective discrete values, and the optimization is solved again with these variables held constant.\nThe first challenge with this approach is that for realistically-sized networks, solving the relaxed problem with continuous valued transformer taps, phase-shifters and switched shunts is computationally challenging, especially when good initial conditions are unavailable.\nThe second challenge is that the rounding step to map the continuous-valued settings to their nearest respective discrete values can result in a physically infeasible solution, which can be difficult to avoid when the discrete values are spaced far apart or when very many control variables are rounded at once. \nDirectly rounding to the nearest discrete value also creates a discontinuous jump in the solution space, which is problematic for any Newton-method based solver that relies on first-order derivative continuity.\n\nA number of methods have been introduced to address the two challenges above. \nRecent work \\cite{Coffrin} \\cite{Lavei} has pushed the state-of-the-art for solving AC-OPF for realistic networks, but these formulations did not include discrete variables. 
\nFor formulations that do include discrete control variables, several new approaches have been proposed \\cite{liu-penalty-function,Macfie-discrete-shunt,Capitanescu,Murray-discrete} to eliminate non-physical solutions and degradation of optimality due to rounding.\n\\cite{liu-penalty-function} proposes utilizing a penalty term in the objective function to push each relaxed control variable towards an available discrete value, so that the disturbance introduced by the rounding step is smaller.\nHowever, the use of penalty functions in optimizations with discrete variables can introduce stationary points and local minima \\cite{discrete-opt-overview}.\nAnother approach is to select subsets of relaxed discrete variables to round in an outer loop, while repeatedly solving the AC-OPF problem in between subsets until all of the settings have been fixed.\nIn \\cite{Macfie-discrete-shunt}, the authors present two methods for selecting which variables are rounded in each loop, and show these can reduce optimality degradation caused by rounding; however, unless only a single device is rounded at a time, this can introduce oscillations, since each rounding effectively adds a piecewise discontinuous function \\cite{Katzenelson}.\n\\cite{Capitanescu} uses sensitivities with respect to the discrete variables as metric to determine when to round settings, with the help of either a merit function or a MILP solver.\nIn \\cite{Murray-homotopy}, the authors point out that time constraints may make it impractical for grid operators to adjust a large number of control variable changes for a single dispatch.\nTo address this, they propose introducing a sparsity-inducing penalty term to the objective function along with a line-search of the discrete variable space to find a more limited number of control variable changes that can still improve optimality compared to holding all control variables constant.\nHowever, this method inherently assumes knowledge of good settings, which might not be available in some use cases for AC-OPF, such as planning studies.\n\nWe propose a two-stage homotopy algorithm for solving AC-OPF problems that explains how to incorporate discrete controls variables, can scale to large networks, and is robust to potential lack of knowledge of prior settings. \nOur approach builds off of the homotopy-based AC-OPF methodology presented in \\cite{Pandey-IMB}.\nIn the first stage,the discrete settings for the adjustable control devices are relaxed as continuous-valued variables and the optimization is solved using a robust homotopy technique.\nThe solution of the first stage is used to select discrete settings, and the respective variables are held constant thereafter. \nThen the errors induced by removing the relaxation are calculated and used in a second homotopy problem to locate a realistic solution.\n\nThe proposed novel approach can robustly determine a local optimum of any large real-world network with discrete controls without reliance on prior setting values. 
We also show that by choosing different homotopy paths, the proposed approach can obtain a variety of local minima solutions with significantly different generation dispatches.\nIn the results, we show that the method is not only novel in its approach but also more robust than another state-of-the-art optimization tool.\n\n\n\\section{Two Stage Homotopy Method for Solving \\ACOPFD}\n\\label{sec:methodology}\nWe propose a two-stage homotopy algorithm to solve the \\ACOPFDs problem described in Section \\ref{sec:formulation}.\nThe methods described are an extension of the IMB approach discussed above, and we term the overall approach IMB+D.\n\n\n\nThe first stage, Stage I, applies a relaxation to the \\ACOPFDs problem in which we treat the discrete variables as continuous-valued by removing the integrality constraints. \nWe refer to the resulting relaxed optimization as \\ACOPFC, which is solved in Stage I.\nWe present a modeling framework for incorporating each adjustable device into the \\ACOPFC~ homotopy formulation so as to preserve the IMB concept of slowly ``turning on'' the grid as the sequence of sub-problems is traversed.\nAfter convergence of Stage I, in Stage II, the relaxed solution is used to select the nearest feasible discrete settings, and a second homotopy problem is defined to solve this problem. The local optimum of this second stage yields an optimal solution for \\ACOPFD with the discrete value constraints satisfied. \n\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=2.8in]{figures\/Stage_One.png}\n \\caption{A simple network with discrete devices in blue (a). In the completely relaxed network (b), the relaxation elements are shown in red. At the end of Stage I (c), a solution is found with all relaxations removed except continuous settings.}\n \\label{fig:stageI_figure}\n\\end{figure}\n\\subsection{Homotopy Stage I: Solving the Relaxed \\ACOPFD}\n\\subsubsection{Embedding general network with homotopy}\nIn the constraints of an AC-OPF problem expressed using the current-voltage formulation, the current flows across lines are linear with respect to voltages, but the voltage and flow limit equations are quadratic, and the current injections from generators and loads are highly non-linear.\nEven without the introduction of additional control devices, Newton's method may fail to converge if the initial guess for the variables is not within the basin of attraction of a solution \\cite{Murray-homotopy}. Therefore, for large systems, without access to a reliable starting guess for Newton's method, we can ensure convergence using the IMB method presented in \\cite{Pandey-IMB}.\nBy starting with a deformed version of the network in IMB that has very little current flow, there exists a high voltage solution in which the bus voltages are close to one another, which is close to a flat-start guess.\nTo define a deformed network and have a smooth trajectory of intermediate deformations back to the original network, in Stage I the homotopy factor $\\nu_1$ is embedded into many of the parameters of the network's topology and devices: namely into high conductances in parallel with existing lines, the load factor, and the generation limits (see Fig. \\ref{fig:stageI_figure}.b).\n
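\nThe exact scalings used for this embedding are implementation details of the IMB framework; purely as an illustration of the idea, the following sketch parameterizes a shorting conductance, the load factor, and the generation limits by $\\nu_1$ so that the $\\nu_1=1$ network carries almost no current, while the original parameters are recovered at $\\nu_1=0$ (all numerical values are placeholders and the scalings are assumptions for this example, not the exact forms used in \\cite{Pandey-IMB}).\n\\begin{verbatim}\ndef relaxed_network_params(nu1, g_line, p_load, p_gen_max,\n                           g_short=1.0e3, p_gen_floor=1.0e-3):\n    # Illustrative deformation only; the actual IMB scalings may differ.\n    g_parallel = nu1 * g_short           # large shorting conductance, gone at nu1 = 0\n    load       = (1.0 - nu1) * p_load    # loads are almost removed at nu1 = 1\n    gen_limit  = (1.0 - nu1) * p_gen_max + nu1 * p_gen_floor\n    return g_line + g_parallel, load, gen_limit\n\n# Sweep nu1 from the trivial network to the original one (placeholder values).\nfor nu1 in (1.0, 0.5, 0.0):\n    print(relaxed_network_params(nu1, g_line=0.05, p_load=1.2, p_gen_max=2.0))\n\\end{verbatim}\n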
\n\\subsubsection{Continuous relaxation and separation of discrete settings}\nIn Stage I, we apply a relaxation to the discrete control setting variables and split them into two components in order to leverage the robust methods of the IMB framework. \nFirst, a relaxation removing the integrality constraints (1d) is applied to replace the discrete variables $\\xd$ with a continuous vector, $\\xdcont$.\nAs a result, the constraints (1d) are replaced by:\n\\begin{equation}\n \\xdlower^i \\leq \\xdcont^i \\leq \\xdupper^i \\text{, } i \\in 1...n_d\n\\end{equation}\nwhere $\\xdlower^i = \\min(\\mathcal{D}^i)$ and $\\xdupper^i = \\max(\\mathcal{D}^i)$, representing the minimum and maximum possible settings for a device.\n\nEven with this relaxation, adding devices that affect the flow of power across the network can significantly increase the nonlinearity of the constraint equations and the corresponding optimality conditions.\nFor example, introducing adjustable transformers causes the previously-linear transformer power flow model to become nonlinear with respect to the voltage at its terminals: adding phase shifters introduces trigonometric functions, and tap changers introduce $\\frac{1}{\\tau}$ and $\\frac{1}{\\tau^2}$ terms.\nTo introduce these highly nonlinear models into the IMB framework, such that we preserve its initial trivial form, where the entire grid is nearly shorted, and maintain a feasible homotopy path from $\\nu_1 = 1 \\rightarrow 0$, we design three measures.\n\nFirst, each relaxed setting variable $\\xdcont^i$ is separated into two components: a ``base'' value $\\xdbase^i$ and a continuous-valued ``adjustment'' variable $\\xdadjcont^i$:\n\\begin{equation}\n\\label{split}\n \\xdcont^i = \\xdbase^i + \\xdadjcont^i \n\\end{equation}\nSplitting $\\xdcont^i$ affects its initialization and bounding, but the total value still drives the respective device behavior. \nHowever, this step allows maintaining the solution space of the variable $\\xdadjcont$ around 0, which we have observed empirically improves the convergence in comparison to introducing $\\xdcont$ as a variable directly; we believe this could be due to improved search directions for Newton's method, as the partials depend on the value of the control variables.\n\nSecond, to maintain the trivial shorted form of the IMB method at $\\nu_1 = 1$, we make the base setting $\\xdbase^i$ homotopy dependent such that during the early stages of homotopy it has almost no impact on the network solution. For example, a tap ratio of 1.0 p.u. on a transformer would have no effect on the system as a whole. We define this smooth deformation of the base setting $\\xdbase^i$ through embedding a homotopy factor $\\nu_1$:\n\\begin{equation}\n \\xdbase^i = \\nu_1 \\xdtriv^i + (1-\\nu_1) \\xdbase_0^i\n\\end{equation}\n$\\xdtriv^i$ is chosen as a setting value that would ensure that the trivial solution for the $\\nu_1=1$ sub-problem is maintained. \n$\\xdbase_0^i$ must be chosen from within the feasible domain. A good prior setting can be used, but the median value can always be used if the user does not know one.\n\nLastly, we ensure the effective settings do not stray from their respective trivial values in the early sub-problems of Stage I by adding a homotopy-dependent penalty term to the objective function, parameterized by $\\nu_1$ and scaled by $k_{adj}$:\n\\begin{equation}\n \\label{modified_objective_kadj}\n f(x, \\xdadjcont) = f_0(x, \\xdadjcont) + \\nu_1 k_{adj} \\sum_{i=1}^{N_d} |\\xdadjcont^i|^2\n\\end{equation}\nIncluding this term encourages minimizing $|\\xdadjcont^i|$ more strongly in the early stages of homotopy. However, as $\\nu_1$ is decreased, the penalty weight is reduced so that the adjustment variables can move more freely as necessary to satisfy the physical network constraints and decrease the primary objective function.\nNote that all penalty terms have been removed from the objective function when the \\ACOPFC ~is solved at $\\nu_1=0$, so the form of the final \\ACOPFC~ problem is independent of the $k_{adj}$ value.\n
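\nTo make this staged relaxation of a single discrete setting concrete, the short sketch below evaluates the homotopy-dependent base value defined above and the penalty term of \\eqref{modified_objective_kadj} in plain Python; all numerical values are placeholders chosen only for illustration and are not taken from the test cases.\n\\begin{verbatim}\ndef effective_setting(nu1, x_triv, x_base0, dx):\n    # Homotopy-dependent base value: equal to the trivial setting x_triv\n    # at nu1 = 1 and to the user-supplied base x_base0 at nu1 = 0.\n    x_base = nu1 * x_triv + (1.0 - nu1) * x_base0\n    # Total relaxed setting (base + adjustment) that drives the device model.\n    return x_base + dx\n\ndef adjustment_penalty(nu1, k_adj, dx_list):\n    # Homotopy-weighted penalty added to the objective; it vanishes at\n    # nu1 = 0, so the final relaxed problem is unchanged by k_adj.\n    return nu1 * k_adj * sum(dx * dx for dx in dx_list)\n\n# Illustrative sweep for one transformer tap (placeholder values).\nfor nu1 in (1.0, 0.5, 0.0):\n    tap = effective_setting(nu1, x_triv=1.0, x_base0=1.025, dx=0.01)\n    print(nu1, tap, adjustment_penalty(nu1, k_adj=0.1, dx_list=[0.01]))\n\\end{verbatim}\n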
However, as $\\nu_1$ is decreased, the penalty weight is reduced so that adjustment variables can move more freely as necessary to satisfy physical network constraints and decrease the primary objective function.\nNote that all penalty terms have been removed from the objective function when the \\ACOPFC ~is solved at $\\nu_1=0$, so the form of the final \\ACOPFC~ problem is independent of $k_{adj}$ value.\n\n\nTo address the likelihood of infeasible sub-problems along the homotopy path ($c(\\nu_1), \\forall \\nu_1 \\in [0,1] $) where KCL cannot be satisfied without violating some variable limits, just as in IMB, homotopy dependent slack current injections (shown as red current sources in Fig. \\ref{fig:stageI_figure}.b) are defined at each node in the network to allow satisfaction of conservation of charge \\cite{Jereminov-feasibility}.\nThe magnitudes of the injection sources are penalized heavily in the objective function so that the sources only inject current if required for satisfaction of KCL, and the values are scaled by $\\nu_1$ so that the fictitious sources are removed entirely when $\\nu_1=0$.\n\n\\subsubsection{Solution of Stage I}\nFor each sub-problem defined in the homotopy path, the relaxed \\ACOPFC~is solved using the primal-dual interior point (PDIP) approach \\cite{Boyd}.\nThe Lagrangian for the sub-problem at any given $\\nu_1 \\in [0,1]$ is given by \n\\begin{equation}\n\\begin{aligned}\n \\label{eq:lagrangian}\n \\mathcal{L}^{\\nu_1}(\\xext, \\lambda,\\mu) = f_0^{\\nu_1}(\\xext) + \\lambda^T g^{\\nu_1}(\\xext) + \\mu^T h^{\\nu_1}(\\xext)\n\\end{aligned}\n\\end{equation}\nwhere $\\xext = [x, \\xdcont]^T$, $\\lambda$ is vector of dual variables for the equality constraints, and $\\mu$ is the vector of slack variables for the inequality constraints.\nA local minimizer, $\\theta^* = [\\xext^*, \\lambda^*, \\mu^*]$, is sought by using Newton's method to solve for the set of perturbed first order KKT conditions:\n\\begin{equation}\n\\label{eq:KKT}\n\\begin{split}\n& \\mathcal{F}(\\theta) = \\begin{bmatrix}\n\\nabla_{\\xext} f_0(\\xext) + \\nabla_{\\xext}^T g(\\xext) \\lambda + \\nabla_{\\xext}^T h(x_{ext}) \\mu \\\\\ng(\\xext) \\\\\n\\mu \\odot h(\\xext) + \\epsilon\n\\end{bmatrix} = 0 \\\\\n\\end{split}\n\\end{equation}\nwhere $\\odot$ is element-wise multiplication.\nIn order to facilitate convergence and primal-dual feasibility of the solution, heuristics based on diode-limiting from circuit simulation methods are applied \\cite{Pandey-Tx-Stepping}. \nThe located $\\theta^*$ is used as the initial guess for the next \\ACOPFC sub-problem once the homotopy parameter (and thus the network relaxation) has been updated. \nThis process is repeated until $\\nu_1=0$, at which point a solution has been found to \\ACOPFC (Fig. \\ref{fig:stageI_figure}.c).\n\n\\subsection{Homotopy Stage II: Discretization}\n\nAfter determining the optimal relaxed settings for the devices, we move onto the second stage of our approach, which solves for discrete value settings and the corresponding state of the grid using the relaxed solution from Stage I.\n\\subsubsection{Selection of discrete setting values}\nA practical discrete setting value must be chosen for each control device.\nFor each control variable, the nearest-neighbor discrete value to is selected. 
\n\\subsection{Homotopy Stage II: Discretization}\n\nAfter determining the optimal relaxed settings for the devices, we move on to the second stage of our approach, which solves for discrete setting values and the corresponding state of the grid using the relaxed solution from Stage I.\n\\subsubsection{Selection of discrete setting values}\nA practical discrete setting value must be chosen for each control device.\nFor each control variable, the discrete value nearest to its relaxed Stage I solution is selected. The chosen setting is evaluated to estimate whether snapping the variable to this value might result in an infeasibility that prevents convergence.\nThe sensitivities of the bus voltages with respect to the setting are calculated, and an approximate post-rounding voltage is obtained from the sensitivity vector via a first-order Taylor approximation.\nWith the predicted voltage values, we check the inequality constraints affected by the setting's perturbation.\nIf the chosen rounded value results in an infeasibility or a violation of bounds, then the second closest available setting is chosen and checked.\n
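\nOne way to picture this feasibility screen is the simplified sketch below, which checks only bus voltage magnitude limits; the sensitivity vector \\texttt{dv\\_dx} and the limit vectors are placeholders for quantities produced at the Stage I solution and are not taken from the paper.\n\\begin{verbatim}\nimport numpy as np\n\ndef pick_discrete_setting(x_relaxed, candidates, v0, dv_dx, v_min, v_max):\n    # Try candidate settings in order of increasing distance to the relaxed value.\n    for x_round in sorted(candidates, key=lambda c: abs(c - x_relaxed)):\n        # First-order Taylor prediction of the bus voltages after rounding.\n        v_pred = v0 + dv_dx * (x_round - x_relaxed)\n        # Keep the candidate only if the predicted voltages stay within bounds.\n        if np.all(v_pred >= v_min) and np.all(v_pred <= v_max):\n            return x_round\n    # Fall back to the nearest value if every candidate is predicted infeasible.\n    return min(candidates, key=lambda c: abs(c - x_relaxed))\n\\end{verbatim}\n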
\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=2.8in]{figures\/Stage_Two.png}\n \\caption{Process of using Stage I solution to discretize settings, formulate Stage II, and determine \\ACOPFD solution}\n \\label{fig:Stage-Two-Networks}\n\\end{figure}\n\\subsubsection{Stage II Homotopy: Error Injections}\nAt the termination of Stage I, we have a solution vector $\\theta^*$ that satisfies the perturbed KKT conditions for the continuous-valued OPF:\n\\begin{equation}\n \\label{eq:KKT-sol-1}\n \\mathcal{F}(\\theta^{*}) = 0\n\\end{equation}\nBy changing the setting variables from their converged continuous values to realistic setting values ($\\xdadjcont \\rightarrow \\xdadj$), the state vector is altered from $\\theta^*$ to $\\theta^\\prime$.\nBecause of these adjustments, evaluating the KKT conditions at $\\theta^\\prime$ will result in violations of the conditions, which we will refer to as the residual vector $R$:\n\\begin{equation}\n \\label{eq:KKT-sol-res}\n \\mathcal{F}(\\theta^\\prime) = R\n\\end{equation}\nIn the case of the primal variables, we can think of $R$ as a set of independent current sources that compensates for the current mismatch at each node due to rounding the discrete device settings.\nThis idea can be extended to the dual variables as well, as the underlying equations and nonlinearities have the same form \\cite{Network-Stepping}.\nTherefore, in the network disturbed by the rounding step, the power flow constraints can be satisfied immediately after rounding by adding a current source to each of these perturbed buses, with the current injection values defined by the current mismatches caused by rounding (see Fig. \\ref{fig:Stage-Two-Networks}.b).\nIf we define a relaxed problem that seeks a solution to the same network but with $R$ added to the respective equations, then we already know a solution to this problem: $\\theta^\\prime$.\n\\begin{equation}\n \\mathcal{G}(\\theta^\\prime) = \\mathcal{F}(\\theta^\\prime) - R = 0\n\\end{equation}\nTherefore, we propose a second homotopy stage to find a feasible solution to the \\ACOPFDs problem after the control variables have been rounded, in which the relaxed system of equations is the set of KKT conditions \\eqref{eq:KKT} after rounding but with $R$ added as error injections.\n\n\\begin{equation}\n\\begin{split}\n \\mathcal{H}_2(\\theta,\\nu_2) = (1-\\nu_2)\\mathcal{F}(\\theta) + \\nu_2(\\mathcal{F}(\\theta) - R) = 0\n\\end{split}\n\\end{equation}\nHere $\\nu_2$, the Stage II scalar homotopy factor, is used to gradually reduce the residual injections, tracing a continuous path to a feasible solution of $\\mathcal{F}(\\theta)=0$ with the rounded values.\nThis avoids taking a discontinuous jump between solving \\ACOPFC and \\ACOPFD.\nWhen this homotopy problem is solved at $\\nu_2=0$, the error injections have been removed, and we have located a realistic, feasible solution to the \\ACOPFD.\n\n\\subsection{Generalization beyond IMB}\nWhile the two-stage approach in this paper is described as an extension of the larger IMB framework \\cite{Pandey-IMB}, which assumes no knowledge of prior system settings, the two-stage homotopy algorithm can be applied to other approaches for solving AC-OPF without loss of generality. \nConsider the scenario in which good initial conditions for the general network are available but optimal settings for the discrete variables ($\\xd^*$) are unknown.\nSuch a use-case may occur if a grid planner wishes to re-evaluate existing settings for discrete devices (e.g., moving from a feasible setting $(\\xd^k)$ to an optimal feasible setting $(\\xd^*)$), or to explore the effects of upgrading fixed devices to adjustable devices. \nIn this situation, the grid planner may want to start from a feasible discrete setting but still explore whether a more optimal operating setting exists. In this scenario, we would still separate and relax each discrete setting $\\xd^i$ according to \\eqref{split}. \nBut here, we account for this knowledge of a good setting in Stage I of the algorithm by defining the base values using the previously known setting $\\xd^{k,i}$ such that $\\xdbase^{i} = \\xd^{k,i}$.\n\nTo ensure that finding a solution at $\\nu_1=1$ is simple, the objective function is modified according to \\eqref{modified_objective_kadj}, but a very high penalty $k_{adj}$ value is used.\nNow at $\\nu_1=1$, with access to good initial conditions and feasible initial discrete settings, a trivial solution is obtained first.\nHowever, as we traverse the homotopy path from $\\nu_1 = 1 \\rightarrow 0$, the adjustment values for the discrete settings take values according to the objective $f_0(x, \\xdadjcont)$, and eventually an optimal set of relaxed discrete settings is obtained at $\\nu_1 = 0$. \nTo find a feasible discrete setting, we perform Stage II as described in Section IV.B without modification. \n\n\\section{Appendix: Additional Results}\nTable III contains a larger set of results obtained using the \\ACOPFDs approach. For all these simulations, a $k_{adj}$ value of 0.1 was used. The case files can be found at \\cite{cases-github}. 
\n\\textit{Note: this table was omitted from the original PSCC submission due to space constraints.}\n\\bigbreak\n\n\\begin{table}[h]\n\\onecolumn\n\\begin{center}\n\\caption{Additional \\ACOPFDs results}\n\\begin{tabular}{|c|c|c|c|c|c|c|}\n\\hline\n\\textbf{Case} & \\textbf{Buses} & \\textbf{Tap Changers} & \\textbf{Phase Shifter} & \\textbf{Switched Shunt} & \\textbf{Objective} & \\textbf{\\% Adj} \\\\ \\hline\nA & 3022 & 981 & 4 & 399 & 5.377e5 & 24.9 \\% \\\\ \\hline\nA* & 3022 & 981 & 4 & 399 & 5.357e5 & 25.6 \\% \\\\ \\hline\nB & 6867 & 759 & 5 & 161 & 1.216e5 & 90.8 \\% \\\\ \\hline\nB* & 6867 & 759 & 5 & 161 & 1.216e5 & 93.9 \\% \\\\ \\hline\nC & 11152 & 530 & 0 & 500 & 5.294e5 & 77.8 \\% \\\\ \\hline\nC* & 11152 & 530 & 0 & 500 & 5.294e5 & 82.2 \\% \\\\ \\hline\nD & 16789 & 997 & 2 & 1723 & 3.592e5 & 74.7 \\% \\\\ \\hline\nD* & 16789 & 997 & 2 & 1723 & 3.591e5 & 89.2 \\% \\\\ \\hline\nE & 6549 & 1846 & 3 & 436 & 9.774e4 & 80.7 \\% \\\\ \\hline\nE* & 6549 & 1846 & 3 & 436 & 9.777e4 & 91.1 \\% \\\\ \\hline\nF & 14393 & 0 & 5 & 724 & 2.317e4 & 33.7 \\% \\\\ \\hline\nF* & 14393 & 0 & 5 & 724 & 2.312e4 & 33.2 \\% \\\\ \\hline\nG & 21849 & 997 & 2 & 1710 & 1.879e5 & 60.9 \\% \\\\ \\hline\nG* & 21849 & 997 & 2 & 1710 & 1.879e5 & 66.4 \\% \\\\ \\hline\nH & 31156 & 12 & 36 & 2451 & 2.009e5 & 76.3 \\% \\\\ \\hline\nH* & 31156 & 12 & 36 & 2451 & 2.009e5 & 63.9 \\% \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\end{document}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nIntelligent reflecting surface (IRS) is an artificial planar array consisting of numerous reconfigurable passive elements with the capability of manipulating the impinging electromagnetic signals and offering anomalous reflections \\cite{Tan2018SRA,cuiTJ2017metasurface,Larsson2020Twocritical,Garcia2020JSACIRS_gap_scatter_reflect}.\nMany recent studies have indicated that IRS is a promising solution to build a programmable wireless environment via steering the incident signal in fully customizable ways to enhance the spectral and energy efficiency of legacy systems \\cite{RenzoJSACposition,magzineWuqq,Liaskos2018magzineIRS,Renzo2019position}.\nMost contributions in this area focus on joint active and passive procoding design with various objectives and constraints \\cite{Wuq2019TWCprecoderIRS,HY2020TWCprecoderIRS,mux2020TWCprecoderIRS,ZhouG2020TSPIRSprecoderimperfectCE,LinS2021TWCprecoderIRS}.\nThe potential gains claimed by these works highly depend on the availability of accurate channel state information (CSI).\nHowever, channel estimation is a challenging task for the IRS-assisted system because there are no sensing elements or radio frequency chains, and thus there is no baseband processing capability in the IRS.\n\n\n\n\n\n\\begin{figure}\n[!t]\n\\centering\n\\includegraphics[width=.8\\columnwidth]{model_v2.eps}\n\\caption{The IRS-assisted multiuser MISO communication system.}\n\\label{IRS_system}\n\\end{figure}\n\n\\begin{figure}\n[!t]\n \\centering\n \\subfigure[Protocol for SU-MISO system]{\n \\label{protocol:a}\n \\includegraphics[width=.6\\columnwidth]{protocol_0.eps}}\n \\subfigure[Directly extended protocol for MU-MISO system]{\n \\label{protocol:b}\n \\includegraphics[width=.6\\columnwidth]{protocol_1.eps}}\n \\caption{The conventional uplink channel estimation protocols without exploiting the common-link structure.}\n \\label{protocol_tran}\n\\end{figure}\n\nSome early-attempted works \\cite{Jensen2020SUCE,zhangruiJSACSUCE,YouxhSPLCESU} estimate the 
uplink cascaded IRS channel for single user (SU) multiple-input-single-output (MISO) systems using the protocol shown in {\\figurename~\\ref{protocol:a}}.\nIn these works, the cascaded channel is equivalently represented as a traditional $M\\times N$ MIMO channel, where $M$ and $N$ is the base station (BS) array size and the IRS size, respectively, and the sensing matrix for channel reconstruction consists of the phase shifting vectors in consecutive training timeslots.\nMany works \\cite{Araujo2021JSAC_CE_PARAFAC,Mishra2019CEonoff,Elbir2020WCL_DL_CE,\nZhouzy2020decompositionCE,Kundu2021OJCSLMMSE_DFTGOOD,Alwazani2020OJCSLMMSE_DFT} directly extend the SU protocol to multi-user (MU) MISO systems, in which $K$ users transmit orthogonal pilot sequences in each training timeslot, as shown in {\\figurename~\\ref{protocol:b}}.\nBased on this protocol,\nthe on-off IRS state (amplitude) control strategy is proposed in \\cite{Araujo2021JSAC_CE_PARAFAC,Mishra2019CEonoff,Elbir2020WCL_DL_CE} to better decompose the MU cascaded channel coefficients for easier channel estimation of the cascaded channel for each user.\n\nIt is pointed out by \\cite{Liuliang_CE2020TWC} that direct application of the SU protocol on an MU-MISO system fails to exploit the structural property and results in substantially larger pilot overheads.\nIntuitively, all the cascaded channels share a common BS-IRS link and it is possible to reduce the pilot overhead since the number of independent variables is $MN+NK$ instead of $MNK$.\nOne algorithm is proposed in \\cite{DaiLLFullD} with the idea of sequentially estimating the BS-IRS channel and the IRS-user channels.\nHowever, it requires that the BS can work at full-duplex mode.\nFor the cascaded channel estimation,\na new channel estimation protocol is proposed in \\cite{Liuliang_CE2020TWC}.\nSpecifically, the cascaded channel of one reference user is firstly estimated based on the SU protocol, and then other users' channels are estimated by only estimating the ratios of their channel coefficients to the reference channel, which can be referred to as the relative channels.\nThe overall training overhead is reduced from $NK$ to $K+N+\\lceil \\frac{N}{M}\\rceil(K-1)$.\nHowever, there is an error propagation issue associated with this scheme since a low-accuracy estimation on the reference channel may jeopardize the estimations of the relative channels.\nMoreover, some IRS elements need to be switched off while estimating the relative channels for coefficients decomposition \\cite{Liuliang_CE2020TWC}.\n\nFor IRS design, the ``off'' state means no reflection (i.e., perfectly absorbing the incident signals), and hence it is difficult \\cite{Perfect_Absorption1,Perfect_Absorption2,Perfect_Absorption3} and also attracts additional implementation costs since this state is unnecessary for data transmission after the channel estimation.\nIn addition, switching off the IRS elements causes reflection power loss, which will lower the receive signal-to-noise ratio (SNR).\nSome recent works attempt to overcome this issue using ``always-ON'' training schemes.\nIn \\cite{Zhouzy2020decompositionCE,Kundu2021OJCSLMMSE_DFTGOOD,Alwazani2020OJCSLMMSE_DFT}, cascaded channel estimation algorithms based on tensor decomposition are proposed for MU-MISO systems, without requiring selected IRS elements to be off using the protocol in {\\figurename~\\ref{protocol:b}}.\nIn particular, the training phase shifts are optimized to minimize the mean squared error (MSE), and it has been verified that the 
discrete-Fourier-transform (DFT)-based training phase shifting configuration is optimal in this scenario.\nHowever, the pilot overhead is $NK$ since the protocol in {\\figurename~\\ref{protocol:b}} does not utilize the\ncommon-link property.\nIn \\cite{double_IRS}, an always-ON training scheme is proposed, which extends the protocol in \\cite{Liuliang_CE2020TWC} to the double-IRS aided system.\nHowever, the number of BS antennas needs to be equal to or larger than the number of IRS elements (i.e., $M\\geq N$) to guarantee a full-rank measurement matrix to estimate the relative channels between the reference user and the other users.\\footnote{\nAccording to the property ${\\rm{rank}}({\\bf A} \\otimes{\\bf B})={\\rm{rank}}({\\bf A} ){\\rm{rank}}({\\bf B} )$, the rank of the measurement matrix in equation (40) of \\cite{double_IRS} cannot be larger than $(K-1)M$ while the targeted rank is $(K-1)N$.\n}\nThis assumption is quite restrictive as the number of elements of the IRS ($N$) is usually larger than the number of antennas at the BS.\n\n\n\nAnother critical problem is the feasibility of utilizing channel statistical prior information to improve the channel estimation accuracy,\nalthough this is a common idea for conventional MIMO channel estimation.\nIn \\cite{Kundu2021OJCSLMMSE_DFTGOOD} and \\cite{Alwazani2020OJCSLMMSE_DFT}, statistical knowledge of the individual BS-IRS link and IRS-user links is required. However, such statistics are not available in practice since\nnone of the existing algorithms, to the best of the authors' knowledge, can reconstruct the individual channel coefficients when $M<N$ (i.e., when the number of BS antennas is smaller than the number of IRS elements).\nWe propose a holistic solution to address the aforementioned issues.\nIn particular, a novel always-ON training protocol is designed; meanwhile, the common-link structure is utilized to reduce the pilot overhead.\nFurthermore, an optimization-based cascaded channel estimation framework, which can flexibly utilize more practical channel statistical prior information, is proposed.\nThe following summarizes our key contributions.\n\\begin{itemize}\n\\item {\\bf Always-ON Channel Estimation Protocol Exploiting the Common-Link Structure}:\n We propose a novel channel estimation protocol without the need for on-off amplitude control to avoid the reflection power loss.\n Meanwhile, the common-link structure is exploited and the pilot overhead is reduced to $K+N+\\lceil \\frac{N}{M}\\rceil(K-1)$.\n \n\n In addition, the proposed protocol is applicable with any number of elements at the IRS (also including $N \\leq M$).\n Further, it does not need a ``reference user'', and as such, the estimation performance is enhanced owing to the multiuser diversity.\n\n\n\\item {\\bf Optimization-Based Cascaded Channel Estimation Framework}:\n Since there is no on-off amplitude control, the cascaded channel coefficients are highly coupled.\n In order to exploit the common-link structure, we decompose the cascaded channel coefficients into the product of the common-link variables and the user-specific variables, and then an optimization-based joint channel estimation problem is formulated based on the maximum a posteriori probability (MAP) rule. The proposed optimization-based approach is flexible to incorporate different kinds of channel statistical prior setups. 
Specifically, we utilize the combined statistical information of the cascaded channels, which is a weaker requirement compared to statistical knowledge of the individual BS-IRS and IRS-user channels.\n Then, a low-complexity alternating optimization algorithm is proposed to achieve a local optimum solution.\n Simulation results demonstrated that the optimization solution with proposed protocol achieves a more than $15$ dB gain compared to the benchmark.\n \n\n\n\\item {\\bf Training Phase Shifting Optimization for the Proposed Protocol}:\n The phase shifting configuration can substantially enhance the channel estimation performance of the cascaded IRS channel because the phase shifting vectors are important components in the measurement matrix for channel reconstruction.\n However, traditional solutions \\cite{Zhouzy2020decompositionCE,Kundu2021OJCSLMMSE_DFTGOOD,Alwazani2020OJCSLMMSE_DFT} of phase shifting optimization for SU cascaded IRS channel estimation cannot be directly applied to the MU case due to that the cascaded channel coefficients are highly coupled when the common-link structure is exploited.\n We propose a new formulation to optimize the phase shifting configuration, which maximizes the average reflection gain of the IRS.\n Simulation results further verify the proposed configuration achieves a more than $3$ dB gain compared to the state-of-the-art baselines.\n \n\n\n\\end{itemize}\n\n\n\n\n\\section{System Model}\\label{system model}\n\n\n\\subsection{System Model of MU-MISO IRS Systems}\nThis paper investigates the uplink channel estimation in a narrow-band IRS-aided MU-MISO communication system that consists of one BS with $M$ antennas, one IRS with $N$ elements, and $K$ single-antenna users,\\footnote{\n{We adopt the single-antenna-user setup here for ease of presentation. The signal model can be directly extended to the setup when users have multiple antennas by transmitting orthogonal uplink pilot sequences in different antennas.\n}}\nas illustrated in {\\figurename~\\ref{IRS_system}}.\nLet ${\\bf h}_{{\\rm d},k} \\in {\\mathbb C}^{M \\times 1}$ denote the BS-user channel (a.k.a., the direct channel) for user $k$, ${\\bf G} \\in {\\mathbb C}^{N \\times M}$ denote the common BS-IRS channel, and ${\\bf h}_{{\\rm r},k} \\in {\\mathbb C}^{N \\times 1}$ denote the IRS-user channel for user $k$.\nWe assume quasi-static block fading for all the channels such that the channel coefficients remain constant within one channel coherence interval, and they are independent and identically distributed (i.i.d.) between coherence intervals. Note that the quasi-static model considers the worse case scenarios where the temporal correlations between blocks are not exploited. 
In practice, the pilot overheads can be further reduced if one exploits temporal correlations of the channel blocks \\cite{kalman_filter_IRS2021TVT,kalman_filter_IRS2021chinacom}, but this is outside the scope of the paper.\n\nThe received baseband signal at the BS is given by\n\\begin{equation}\\label{equ:y_model1}\n\\begin{aligned}[b]\n{\\bf y}_{t}&=\\sum_{k=1}^K \\left({\\bf h}_{{\\rm d},k}+ {\\bf G}^{\\rm T} {\\bf \\Theta}_t {\\bf h}_{{\\rm r},k} \\right) x_{k,t}\n+{ {\\bf z}_t}\n,\n\\end{aligned}\n\\end{equation}\nwhere $t$ is the time index, $x_{k,t}$ is the transmit pilot symbol from user $k$, ${\\bf z}_t \\sim {\\cal{CN}}({\\bm 0},\\sigma_0^2 {\\bf I}_{M}) $ is the {additive white Gaussian noise} (AWGN), and ${\\bf \\Theta}_t \\in {\\mathbb C}^{N \\times N}$ is the IRS reflection coefficient matrix.\nIt is known that ${\\bf \\Theta}_t$ is a diagonal matrix such that ${\\bf \\Theta}_t={\\rm diag}({\\bm \\theta}_t)$, where ${\\bm \\theta}_t=[e^{\\jmath \\varphi_{t,1}},e^{\\jmath \\varphi_{t,2}},\\cdots,e^{\\jmath \\varphi_{t,N}}]^{\\rm T}$ is the phase shifting vector from the IRS.\\footnote{\n{In practice, the reflection efficiency cannot be 1, which is known as the reflection loss. However, for ease of presentation, this loss can be absorbed into the path loss of ${\\bf G}$ since it is a constant value.\n}}\n\n\n\n\n\n\n\n\\subsection{IRS Cascaded Channel Model}\\label{sec:model_cascaded}\n\nDenote the cascaded channel related to the BS's $m$-th antenna and the $k$-th user by\n\\begin{equation}\\label{equ:cascaded_channel_vector}\n\\begin{aligned}[b]\n{\\bf h}_{{\\rm I},k,m}={\\rm{diag}}({\\bf h}_{{\\rm r},k}) {\\bf g}_m,\n\\end{aligned}\n\\end{equation}\nwhere ${\\bf g}_m$ is the $m$-th column in ${\\bf G}=[{\\bf g}_1,\\cdots,{\\bf g}_M]$.\nThe cascaded channel over all BS antennas is given by ${\\bf H}_{{\\rm I},k}=[{\\bf h}_{{\\rm I},k,1}^{\\rm T},\\cdots,{\\bf h}_{{\\rm I},k,M}^{\\rm T}]^{\\rm T}$, and we have\n\\begin{equation}\\label{equ:cascaded_channel}\n\\begin{aligned}[b]\n{\\bf H}_{{\\rm I},k}={\\bf G}^{\\rm T} {\\rm{diag}}({\\bf h}_{{\\rm r},k}).\n\\end{aligned}\n\\end{equation}\nSubstituting \\eqref{equ:cascaded_channel} into \\eqref{equ:y_model1}, the received signal is given by\n\\begin{equation}\\label{equ:y_model_vn}\n\\begin{aligned}[b]\n{\\bf y}_t&=\\sum_{k=1}^K \\left({\\bf h}_{{\\rm d},k}+ {\\bf H}_{{\\rm I},k} {\\bm{\\theta}}_t \\right) x_{k,t}\n+{\\bf z}_t\n.\n\\end{aligned}\n\\end{equation}\n\nIn \\cite{Araujo2021JSAC_CE_PARAFAC,Mishra2019CEonoff,Elbir2020WCL_DL_CE} and\n\\cite{Zhouzy2020decompositionCE,Kundu2021OJCSLMMSE_DFTGOOD,Alwazani2020OJCSLMMSE_DFT}, ${\\bf H}_{{\\rm I},k}$ for all $k$ are estimated\nwithout exploiting the implicit common link structure behind the ${\\bf H}_{{\\rm I},k}$ for all $k$. As a result, there are $MNK$ variables to be estimated and this poses a heavy penalty on the required pilot overheads in the MU-MISO system.\nOn the other hand, from \\eqref{equ:cascaded_channel}, we can see that the cascaded channels $\\left\\{{\\bf H}_{{\\rm I},1}, {\\bf H}_{{\\rm I},2},\\cdots,{\\bf H}_{{\\rm I},K}\\right\\}$ all share a common BS-IRS link $\\bf G$.\nSpecifically,\n ${\\bf H}_{{\\rm I},k}$ is the multiplication of the common $\\bf G$ and the user-specific ${\\rm{diag}}({\\bf h}_{{\\rm r},k})$.\nIn other words, the cascaded channels are not independent variables. 
In fact, the common link structure $\\bf G$ should be exploited in the channel estimation.\nAs a result, the total number of independent variables\nis reduced to $MN+NK$.\n\n\nWe assume ${\\mathbb E}\\left[ {\\bf h}_{{\\rm I},k,m}\\right]={\\bf 0}$ for all $k$ and $m$.\\footnote{\n{The proposed algorithm in this paper is applicable for the case when an LoS link exists and ${\\mathbb E}\\left[ {\\bf h}_{{\\rm I},k,m}\\right]\\neq{\\bf 0}$. Simply substitute ${\\mathbb E}\\left[ {\\bf h}_{{\\rm I},k,m}\\right]$ into the prior distribution model in \\eqref{equ:prior}. Note that since the rank-1 LoS link usually is very strong and easier to be estimated, it will dominate the power of the channel coefficients, and the NMSE will be better than the NLoS scenario investigated in this paper.\n}}\nThe covariance of the cascaded channel ${\\bf h}_{{\\rm I},k,m}$ is given by\n\\begin{equation}\\label{equ:cascaded_channel_statisical}\n\\begin{aligned}[b]\n{\\bf C}_m^{(k)}={\\mathbb E}\\left[ {\\bf h}_{{\\rm I},k,m} {\\bf h}_{{\\rm I},k,m}^{\\rm H} \\right].\n\\end{aligned}\n\\end{equation}\nIn this paper, we focus on the case when ${\\bf C}_m^{(k)}$ is a full rank matrix for all $m$ and $k$, which is generally true in sub-6 GHz bands.\nWe design a channel estimation algorithm and phase shifting configuration scheme by utilizing knowledge of ${\\bf C}_m^{(k)}$.\nNote that in \\cite{Kundu2021OJCSLMMSE_DFTGOOD} and \\cite{Alwazani2020OJCSLMMSE_DFT}, the channel estimation algorithms for the cascaded channels require knowledge of the covariance of the IRS-BS link $\\bf G$ as well as the covariance of the IRS-user links ${\\bf h}_{{\\rm r},k}$. Also note that knowledge of the covariance of the cascaded channel ${\\bf C}_m^{(k)}$ is a weaker requirement compared to knowledge of the individual covariances $\\bf G$ and ${\\bf h}_{{\\rm r},k}$.\n\n\n\n\n\n\n\n\n\\section{Proposed Channel Estimation Protocol}\\label{dense_scheme}\n\n\n\n\n\\subsection{Overview of the Selected On-Off Channel Estimation Protocol}\\label{overview_onoff}\nThe selected on-off channel estimation protocol in \\cite{Liuliang_CE2020TWC} is illustrated in {\\figurename~\\ref{protocol:c}}, and consists of three stages.\nIn stage I, the BS-user channels are estimated by switching off all the IRS elements.\nIn stage II, a reference user is selected, which is indexed by user $1$, and its cascaded channel ${\\bf H}_{{\\rm I},1}$ is estimated using the algorithm for SU-MISO cases \\cite{zhangruiJSACSUCE}.\nIn stage III, the other $K-1$ users' cascaded channels are estimated by exploiting the common-link property.\n\n\\begin{figure}\n[!ht]\n\\centering\n\\includegraphics[width=.95\\columnwidth]{protocol_2.eps}\n\\caption{Selected on-off channel estimation protocol in \\cite{Liuliang_CE2020TWC}.}\n\\label{protocol:c}\n\\end{figure}\n\n\nWe focus on the estimation in stage III. 
Substituting ${\\bf H}_{{\\rm I},1}={\\bf G}^{\\rm T} {\\rm{diag}}({\\bf h}_{{\\rm r},1})$ into \\eqref{equ:y_model1}, the received signal is given by\n\\begin{equation}\\label{equ:y_model_LL}\n\\begin{aligned}[b]\n{\\bf y}_t&=\\sum_{k=1}^K {\\bf h}_{{\\rm d},k} x_{k,t}+\n{\\bf H}_{{\\rm I},1} {\\bm{\\theta}}_t x_{1,t}\\\\\n&\\qquad+\\sum_{k=2}^K {\\bf H}_{{\\rm I},1} {\\rm diag}({\\bm \\theta}_t) {\\bf h}_{{\\rm u},k} x_{k,t}\n+{ {\\bf z}_t}\n,\n\\end{aligned}\n\\end{equation}\nwhere ${\\bf h}_{{\\rm u},k} ={\\rm diag}({\\bf h}_{{\\rm r},1})^{-1}{\\bf h}_{{\\rm r},k}$ for all $k=2,3,\\cdots,K$, which are the user-specific variables to be estimated in this stage after exploiting ${\\bf H}_{{\\rm I},1}$ as the common-link variable.\nIn \\cite{Liuliang_CE2020TWC}, to estimate ${\\bf h}_{{\\rm u},k}$, only the $k$-th user sends $x_{k,t}=1$ and all the other users are inactive such that $x_{j,t}=0$ for all $j \\neq k$.\nThe received signal is given by\n\\begin{equation\n\\begin{aligned}[b]\n{\\bf y}_\n&={\\bf h}_{{\\rm d},k}+ {\\bf H}_{{{\\rm I},1}} {\\rm diag}({\\bm \\theta}_t) {\\bf h}_{{\\rm u},k}\n+ {\\bf z}_t\n.\n\\end{aligned}\n\\end{equation}\nBy only switching on the first $M$ IRS elements with $[\\theta_{t,1},\\theta_{t,2},\\cdots,\\theta_{t,M}]^{\\rm T}={\\bf 1}$, the first $M$ coefficients in ${\\bf h}_{{\\rm u},k}$ can be estimated by\n\\begin{equation\n\\begin{aligned}[b]\n\\begin{bmatrix}{h}_{{\\rm u},k,1}\\\\ \\vdots \\\\ { h}_{{\\rm u},k,M}\\end{bmatrix}\n=\n{\n\\begin{bmatrix}\n H_{{{\\rm I},1},11} & \\dots & H_{{{\\rm I},1},1M}\\\\\n \\vdots & \\ddots & \\vdots\\\\\n H_{{{\\rm I},1},M1} & \\dots & H_{{{\\rm I},1},MM}\n\\end{bmatrix}\n}^{-1}\n({\\bf y}_i-{\\bf h}_{{\\rm d},k})\n.\n\\end{aligned}\n\\end{equation}\nNote that one may adopt the LMMSE estimator to achieve better performance if the covariance of ${\\bf h}_{{\\rm u},k} ={\\rm diag}({\\bf h}_{{\\rm r},1})^{-1}{\\bf h}_{{\\rm r},k}$ is available \\cite[Section V]{Liuliang_CE2020TWC}.\nIn the next timeslot, the next $M$ IRS elements are switched on with $[\\theta_{t,M+1},\\theta_{t,M+2},\\cdots,\\theta_{t,2M}]^{\\rm T}={\\bf 1}$ while the other elements are switched off to estimate the next $M$ coefficients in ${\\bf h}_{{\\rm u},k}$. The estimation continues in this way until all the coefficients in ${\\bf h}_{{\\rm u},k} $ are estimated, which finally costs $I=\\lceil \\frac{N}{M}\\rceil$ timeslots.\nThe overall pilot overhead of the protocol in \\cite{Liuliang_CE2020TWC} is $K+N+\\lceil \\frac{N}{M}\\rceil(K-1)$.\n\n\n\\subsection{Always-ON Channel Estimation Protocol}\\label{proposed_protocol}\nWe propose a novel always-ON channel estimation protocol without switching off\nselected IRS elements. The proposed protocol consists of two stages, as illustrated in {\\figurename~\\ref{protocol:d}}.\nIn particular, stage I contains $L_1+1$ timeslots, where $L_1 = \\lceil \\frac{N}{M}\\rceil$, and each timeslot contains $K$ samples. Stage II contains $L_2=N-L_1$ timeslots, and each timeslot contains only one sample.\nTherefore, we have $K(L_1+1)+L_2$ received samples in total. 
As defined in \\eqref{equ:y_model1}, the $t$-th received sample is given by\n\\begin{equation}\\label{equ:y_model2}\n\\begin{aligned}[b]\n{\\bf y}_{t}&=\\sum_{k=1}^K \\left({\\bf h}_{{\\rm d},k}+ {\\bf G}^{\\rm T} {\\rm diag} ({\\bm \\theta}_t) {\\bf h}_{{\\rm r},k} \\right) x_{k,t}\n+{ {\\bf z}_t}\n,\n\\end{aligned}\n\\end{equation}\nwhere $t=1,2,\\cdots,K(L_1+1)+L_2$.\n\n\n\n\\begin{figure}\n[!ht]\n\\centering\n\\includegraphics[width=.8\\columnwidth]{protocol_3.eps}\n\\caption{Proposed always-ON protocol.}\n\\label{protocol:d}\n\\end{figure}\n\n\n\\subsubsection{Received Signal Samples in Each Training Timeslot}\nTo facilitate analysis, we introduce new notations on the received signal samples within one training timeslot indexed by $\\ell=0,1,\\cdots,N$.\n\\begin{itemize}\n\\item\n{{\\bf {Stage I}} (Users send orthogonal pilot sequences ${\\bf X}$)}:\nStage I consists of training timeslots $\\ell=0,1,\\cdots,L_1$.\nDenote by ${\\bf X} \\in {\\mathbb C}^{K \\times K}$, where ${\\bf X}^{\\rm H} {\\bf X}=K{\\bf I}_K$, the orthogonal pilot sequences consisting of unit-modulus elements.\nIn the $\\ell$-th timeslot, $K$ users transmit ${\\bf X}$ by $K$ samples, and the IRS is configured by the phase shifting vector ${\\bm \\theta}_\\ell$.\nThe $K$ received samples in the $\\ell$-th timeslot are ${\\bf y}_{K\\ell+1},{\\bf y}_{K\\ell+2},\\cdots,{\\bf y}_{K\\ell+K}$.\nLet ${{\\bf Y}}_{\\ell}=[{\\bf y}_{K\\ell+1},{\\bf y}_{K\\ell+2},\\cdots,{\\bf y}_{K\\ell+K}]$. We have\n\\begin{equation}\\label{equ:y_model_stage_I}\n\\begin{aligned}[b]\n{{\\bf Y}}_{\\ell}&=\\left({\\bf H}_{\\rm d} + {\\bf G}^{\\rm T} {\\rm diag} ({\\bm \\theta}_{\\ell}) {\\bf H}_{\\rm r} \\right) {\\bm X}\n+{{\\bf Z}}_{\\ell}\n,\n\\end{aligned}\n\\end{equation}\nwhere\n\\begin{align}\n{\\bf H}_{\\rm d}&=[{\\bf h}_{{\\rm d},1},\\cdots,{\\bf h}_{{\\rm d},K}] \\in {\\mathbb C}^{M \\times K},\\\\\n{\\bf H}_{\\rm r}&=[{\\bf h}_{{\\rm r},1},\\cdots,{\\bf h}_{{\\rm r},K}] \\in {\\mathbb C}^{N \\times K}\n\\end{align}\nare the stacked channel coefficient matrices,\n$\\ell$ is the timeslot index,\nand ${{\\bf Z}}_{\\ell}=[{\\bf z}_{K\\ell+1},{\\bf z}_{K\\ell+2},\\cdots,{\\bf z}_{K\\ell+K}]$ denotes the noise.\n\n\n\\item {{\\bf {Stage II}} (Users send pilot $\\bar{\\bf x}$, which is the first column of $\\bf X$)}:\nStage II consists of training timeslots $\\ell=L_1+1,L_1+2,\\cdots,N$.\nWe denote the first column of $\\bf X$ by\n\\begin{equation}\\label{equ:barx}\n\\bar{\\bf x}=[{\\bar x}_{1},\\cdots,{\\bar x}_{K}]^{\\rm T}.\n\\end{equation}\nIn the $\\ell$-th timeslot, the users transmit $\\bar{\\bf x}$, while the IRS is configured by ${\\bm \\theta}_\\ell$.\nThe received signal in stage II is denoted by\n\\begin{equation}\\label{equ:y_model_stage_III}\n\\begin{aligned}[b]\n\\bar{\\bf y}_{\\ell}\n&= \\left({\\bf H}_{\\rm d} + {\\bf G}^{\\rm T} {\\rm diag} ({\\bm \\theta}_\\ell) {\\bf H}_{\\rm r} \\right) \\bar{\\bf x}+{\\bf z}_{\\ell+(K-1)L_1+K}\n,\n\\end{aligned}\n\\end{equation}\nwhere $\\bar{\\bf y}_{\\ell}={\\bf y}_{\\ell+(K-1)L_1+K}$, and $\\ell=L_1+1,L_1+2,\\cdots,N$.\n\\end{itemize}\n\n$\\bar{\\bf y}_{\\ell}$ may extend to all the $N+1$ timeslots in the protocol by\n\\begin{equation}\\label{equ:bar_y}\n\\begin{aligned}[b]\n\\bar{\\bf y}_{\\ell}=\n\\begin{cases}\n{\\bf y}_{1+K \\ell}, \\; & {\\rm for} \\; \\ell=0,\\cdots,L_1,\\\\\n{\\bf y}_{\\ell+(K-1)L_1+K}, & {\\rm for} \\; \\ell=L_1+1,\\cdots,N.\n\\end{cases}\n\\end{aligned}\n\\end{equation}\n\nThe pilot overhead in stage I is $(\\lceil \\frac{N}{M}\\rceil+1) K$, and the 
overhead in stage II is $N-\\lceil \\frac{N}{M}\\rceil $. The overall pilot overhead of the proposed protocol is $K+N+\\lceil \\frac{N}{M}\\rceil(K-1)$.\nNote that the pilot overhead is significantly reduced by about $M$ times compared to the protocol in\n\\cite{Araujo2021JSAC_CE_PARAFAC,Mishra2019CEonoff,Elbir2020WCL_DL_CE} and\n\\cite{Zhouzy2020decompositionCE,Kundu2021OJCSLMMSE_DFTGOOD,Alwazani2020OJCSLMMSE_DFT}, which require $NK$ pilots.\n\n\n\\subsubsection{Signal Pre-processing}\nIn the proposed protocol, we set ${\\bm \\theta}_0=-{\\bm \\theta}_1$ to decouple the estimation on the BS-user channel\\footnote{\n The BS-user channel can be estimated based on ${{\\bf Y}}_0$ and ${{\\bf Y}}_1$ by the linear minimum mean squared error estimator, which is similar to the methods in existing works\n\\cite{Kundu2021OJCSLMMSE_DFTGOOD,Alwazani2020OJCSLMMSE_DFT,Liuliang_CE2020TWC} (See Appendix \\ref{app_Estimate_Hd}).} and on the cascaded IRS channel.\nTo facilitate the estimation of the cascaded IRS channel, signal pre-processing to remove the BS-user channel from the received signals is performed.\n\n\\begin{itemize}\n\\item \\emph{Pre-processing on ${{\\bf Y}}_\\ell$ for $\\ell=1,2,\\cdots,L_1$}:\nFor $\\ell=1$, we have\n\\begin{equation}\\label{equ:R1}\n\\begin{aligned}[b]\n{\\bf R}_{1} &= \\frac{1}{2}\\left({{\\bf Y}}_1-{{\\bf Y}}_0\\right) \\\\\n&={\\bf G}^{\\rm T} {\\rm diag} ({\\bm \\theta}_1) {\\bf H}_{\\rm r}{\\bf X}\n+\\tilde{\\bf Z}_1\n,\n\\end{aligned}\n\\end{equation}\nwhere the elements in $\\tilde{\\bf Z}_1$ follow i.i.d. ${\\cal{CN}}({ 0},\\frac{1}{2}\\sigma_0^2) $.\nFor $\\ell=2,3,\\cdots,L_1$, we have\n\\begin{equation}\\label{equ:R2L1}\n\\begin{aligned}[b]\n{\\bf R}_{\\ell} &= {{\\bf Y}}_{\\ell} -\\frac{1}{2}\\left({{\\bf Y}}_0+{{\\bf Y}}_1\\right)\\\\\n&={\\bf G}^{\\rm T} {\\rm diag} ({\\bm \\theta}_\\ell) {\\bf H}_{\\rm r} {\\bf X}\n+\\tilde{\\bf Z}_{\\ell}\n,\n\\end{aligned}\n\\end{equation}\nwhere the elements in $\\tilde{\\bf Z}_{\\ell}$ follow ${\\cal{CN}}({ 0},\\frac{3}{2}\\sigma_0^2) $.\nNote that the difference between \\eqref{equ:R1} and \\eqref{equ:R2L1} is that they have different noise variances.\n\n\\item \\emph{Pre-processing on $\\bar{\\bf y}_{\\ell}$ for $\\ell=1,2,\\cdots,N$}:\nIn the same manner, the BS-user channel is removed from $\\bar{\\bf y}_{\\ell}$\nfor $\\ell=1,2,\\cdots,N$:\n\\begin{equation}\\label{equ:rbar}\n\\begin{aligned}[b]\n\\bar{\\bf r}_{\\ell} &= \\bar{\\bf y}_{\\ell}-\\frac{1}{2}\\left(\\bar{\\bf y}_{0}+\\bar{\\bf y}_{1}\\right)\\\\\n&={\\bf G}^{\\rm T} {\\rm diag} ({\\bm \\theta}_\\ell) {\\bf H}_{\\rm r}\\bar{\\bf x}\n+\\bar{\\bf z}_{\\ell}\\\\\n&={\\bf G}^{\\rm T} {\\rm diag} ({\\bf H}_{\\rm r}\\bar{\\bf x}) {\\bm \\theta}_\\ell\n+\\bar{\\bf z}_{\\ell}\n,\n\\end{aligned}\n\\end{equation}\nwhere the elements in $\\bar{\\bf z}_{\\ell}$ follow ${\\cal{CN}}({ 0},\\frac{3}{2}\\sigma_0^2) $.\n\\end{itemize}\n\n\\begin{lemma}[Effectiveness of the proposed protocol]\\label{lemma0}\nThe cascaded channels ${\\bf H}_{{\\rm I},k}={\\bf G}^{\\rm T} {\\rm{diag}}({\\bf h}_{{\\rm r},k})$ ($k=1,2,\\cdots,K$) for all $K$ users can be perfectly recovered with probability one by adopting an orthogonal phase shifting configuration matrix ${\\bm \\Phi}=[{\\bm \\theta}_1,{\\bm \\theta}_2,\\cdots,{\\bm \\theta}_N]$ whose elements are all non-zero, if there is no noise, and\n${\\bf G}={\\bf F}_{\\rm R} {\\ddot{\\bf G}} {\\bf F}_{\\rm B}^{\\rm T}$ and ${\\bf h}_{{\\rm r},k}={\\bf F}_{\\rm R}{\\ddot{\\bf h}}_{{\\rm r},k}$ where ${\\bf F}_{\\rm B} \\in {\\mathbb 
C}^{M \\times M}$ and ${\\bf F}_{\\rm R} \\in {\\mathbb C}^{N \\times N}$ are the angular-domain bases for the BS antenna array and the IRS, respectively,\n${\\ddot{\\bf G}}=[{\\ddot{\\bf g}}_1,{\\ddot{\\bf g}}_2,\\cdots,{\\ddot{\\bf g}}_M]$, and ${\\ddot{\\bf g}}_m$ and ${\\ddot{\\bf h}}_{{\\rm r},k}$ for all $m$ and $k$ are pairwise independent following zero-mean multivariate normal distributions.\n\\end{lemma}\n\n\\begin{IEEEproof}\nSee Appendix \\ref{proof_lemma0}.\n\\end{IEEEproof}\n\nCompared with \\cite{Liuliang_CE2020TWC} and \\cite{double_IRS}, the proposed protocol has two main differences, which make it possible to keep all the IRS elements ON while still reducing the pilot overhead by utilizing the common-link structure.\nHere, we explain the intuition by supposing that a two-step estimation algorithm similar to those in \\cite{Liuliang_CE2020TWC} and \\cite{double_IRS} is adopted, i.e., first estimate the reference channel and then estimate the relative channels.\nFirstly, instead of requiring a specific reference user, we design a virtual reference channel ${\\bf H}_{\\rm v}={\\bf G}^{\\rm T} {\\rm diag} ({\\bf H}_{\\rm r}\\bar{\\bf x})$, which is fair for all users and can be reconstructed using the $N$ observations in \\eqref{equ:rbar}.\nSecondly, in our protocol, the relative channels can be estimated by using ${\\bf R}_{\\ell}$ in \\eqref{equ:R1} and \\eqref{equ:R2L1},\nresulting in the measurement matrix\n$[({\\bf H}_{\\rm v}{\\rm diag} ({\\bm \\theta}_1))^{\\rm T},\\cdots,({\\bf H}_{\\rm v}{\\rm diag} ({\\bm \\theta}_{L_1}))^{\\rm T}]^{\\rm T}$.\nOne can see that there are $L_1$ different ${\\bm \\theta}_{\\ell}$ in the measurement matrix instead of a fixed one as in \\cite{double_IRS}. When $L_1 \\geq \\lceil \\frac{N}{M}\\rceil$, the measurement matrix can attain rank $N$ under a proper phase shifting configuration, so that a reasonable estimate can be obtained.\\footnote{\nIt is seen that the selected on-off protocol in \\cite{Liuliang_CE2020TWC} can be treated as a special case of our protocol by setting $\\bar{\\bf x}=[1,0,\\cdots,0]^{\\rm T}$ and selecting different parts of the IRS elements to be ON and OFF to obtain $L_1$ different ${\\bm \\theta}_{\\ell}$, as introduced in Section \\ref{overview_onoff}.\n}\nNote that the above two-step channel estimation algorithm is only for explaining the intuition of the proposed approach; a numerical sketch of this two-step recovery is given below.
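\nTo make the intuition concrete, the following numerical sketch reproduces the two-step recovery in the noiseless case: the virtual reference channel ${\\bf H}_{\\rm v}$ is recovered from the $N$ observations in \\eqref{equ:rbar}, the relative channels are then recovered from the stacked stage-I measurements, and the cascaded channels are reconstructed, in agreement with Lemma \\ref{lemma0}. The dimensions, the DFT phase shifting matrix and the random unit-modulus pilots are illustrative assumptions only, not requirements of the protocol.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nM, N, K = 4, 16, 3                    # example dimensions only\nL1 = int(np.ceil(N / M))\n\n# Random channels and training quantities (noise omitted on purpose)\nG = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))\nH_r = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))\nPhi = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)\nx_bar = np.exp(2j * np.pi * rng.random(K))   # unit-modulus pilot vector\n\n# Step 1: virtual reference channel from the N pre-processed observations\nH_v_true = G.T @ np.diag(H_r @ x_bar)\nR_bar = H_v_true @ Phi                       # [r_bar_1, ..., r_bar_N]\nH_v = R_bar @ np.linalg.inv(Phi)\n\n# Step 2: relative channels from the stacked stage-I observations\nR_stack = np.vstack([G.T @ np.diag(Phi[:, l]) @ H_r for l in range(L1)])\nPsi = np.vstack([H_v @ np.diag(Phi[:, l]) for l in range(L1)])\nH_A = np.linalg.lstsq(Psi, R_stack, rcond=None)[0]\n\n# Reconstruct the cascaded channels H_{I,k} = H_v diag(h_{A,k})\nerr = max(np.linalg.norm(H_v @ np.diag(H_A[:, k]) - G.T @ np.diag(H_r[:, k]))\n          for k in range(K))\nprint(err)   # close to machine precision: perfect recovery without noise\n\\end{verbatim}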
In the next section, we will propose an optimization-based cascaded channel estimation algorithm that may achieve more reliable performance.\n\n\n\\section{Optimization-Based MU-Cascaded IRS Channel Estimation}\\label{sec:opt_est}\nIn this section, we propose an optimization-based channel estimation on the cascaded IRS channel ${\\bf h}_{{\\rm I},k,m}$ for all $m$ and $k$ based on the pre-processed observations $\\{{\\bf R}_{\\ell},\\bar{\\bf r}_{\\ell}\\}$ in \\eqref{equ:R1}, \\eqref{equ:R2L1} and \\eqref{equ:rbar}.\nWe consider a general decomposition on the cascaded channel, which is friendly in utilizing the channel prior knowledge\nand the common-link structure across the multiple users.\nSpecifically, we adopt the MAP\napproach to estimate\n the cascaded channel ${\\bf h}_{{\\rm I},k,m}$ for all $m$ and $k$\ngiven the pre-processed observations $\\{{\\bf R}_{\\ell},\\bar{\\bf r}_{\\ell}\\}$.\nAn alternating optimization algorithm with efficient initialization is further proposed to achieve a local optimum of the MAP problem.\n\n\n\n\n\\subsection{MAP Problem Formulation}\nAs shown in \\eqref{equ:cascaded_channel_vector}, the cascaded channel ${\\bf h}_{{\\rm I},k,m}$ can be decomposed by the common BS-IRS channel ${\\bf g}_m$ and the IRS-user channel ${\\bf h}_{{\\rm r},k}$ as follows:\n\\begin{equation}\\label{equ:cascaded_channel_vector_2v}\n\\begin{aligned}[b]\n{\\bf h}_{{\\rm I},k,m}={\\rm{diag}}({\\bf h}_{{\\rm r},k}) {\\bf g}_m.\n\\end{aligned}\n\\end{equation}\nHowever, the main challenge to estimate the individual ${\\bf h}_{{\\rm r},k}$ and ${\\bf g}_m$ is that in the MAP formulation, prior distributions of ${\\bf g}_m$ and ${\\bf h}_{{\\rm r},k}$ will be needed, but\nit is difficult to obtain individual covariances of ${\\bf g}_m$ and ${\\bf h}_{{\\rm r},k}$ based on the covariance of the cascaded channel.\\footnote{\nOne possible way to estimate the covariance of the cascaded channel ${\\bf C}_m^{(k)}$ is using the\n the maximum likelihood estimator $\\hat{\\bf C}_m^{(k)}=\\frac{1}{J} \\sum_{j=1}^J \\left[ \\hat{\\bf h}_{{\\rm I},k,m}(j) \\hat{\\bf h}_{{\\rm I},k,m}(j)^{\\rm H} \\right]$, where $\\{\\hat{\\bf h}_{{\\rm I},k,m}(j)\\}$ are the estimated historical cascaded channels in the past $j=1,2,\\cdots,J$ transmission frames.\n Note that similar covariances are also required by the LMMSE estimators for the selected on-off channel estimation protocol in \\cite{Liuliang_CE2020TWC} (See equations (72) and (86) in \\cite{Liuliang_CE2020TWC}).\n}\n\nTo address this issue, we consider\na more general auxiliary variable set for the cascaded channel decomposition:\n\\begin{equation}\\label{equ:set_opt}\n\\begin{aligned}[b]\n{\\cal A}=\\left\\{ \\left.\\left\\{{\\bf H}_{\\rm g},{\\bf H}_{\\rm u}\\right\\} \\right|\n{\\bf h}_{{\\rm I},k,m}={\\rm{diag}}({\\bf h}_{{\\rm u},k}) {\\bf h}_{{\\rm g},m}, \\forall k, \\forall m\n\\right\\},\n\\end{aligned}\n\\end{equation}\nwhere ${\\bf H}_{\\rm g}=[{\\bf h}_{{\\rm g},1},\\cdots,{\\bf h}_{{\\rm g},M}]$ is the common-link variable whose $m$-th column is ${\\bf h}_{{\\rm g},m}$, and ${\\bf H}_{\\rm u}=[{\\bf h}_{{\\rm u},1},\\cdots,{\\bf h}_{{\\rm u},K}]$ is the user-specific variable whose $k$-th column is ${\\bf h}_{{\\rm u},k}$.\nOne may verify that $\\{{\\bf G}, {\\bf H}_{\\rm r}\\}\\in {\\cal A}$ according to \\eqref{equ:cascaded_channel_vector_2v}.\nBased on this, we formulate an optimization problem on ${\\bf H}_{\\rm g}$ and ${\\bf H}_{\\rm u}$ using the MAP approach, which is given 
by\n\\begin{equation*\n\\begin{aligned}[b]\n&\n{\\mathcal{P}}{(\\text{A})}\n\\quad \\max_{ {\\bf H}_{\\rm g}, {\\bf H}_{\\rm u} } \\;\nf_{{\\rm A}}({\\bf H}_{\\rm g}, {\\bf H}_{\\rm u})\n,\n\\end{aligned}\n\\end{equation*}\nwhere the objective function is given by\\footnote{\nOne may substitute different prior setups to $p({\\bf h}_{{\\rm I},k,m} )$ in \\eqref{equ:prior}.\nIn addition, if channel prior knowledge is unavailable, one may simply remove $p({\\bf h}_{{\\rm I},k,m} )$ from \\eqref{equ:obj_f},\nand the optimization becomes the maximum likelihood approach.\n}\n\\begin{equation}\\label{equ:obj_f}\n\\begin{aligned}[b]\n&f_{{\\rm A}}({\\bf H}_{\\rm g}, {\\bf H}_{\\rm u})\n= \\sum_{m=1}^M \\sum_{k=1}^K \\ln p({\\bf h}_{{\\rm I},k,m} ) \\\\\n&\\quad+\\sum_{\\ell=1}^{L_1} \\ln p({\\tilde{\\bf R}}_{\\ell}| {\\bf H}_{\\rm g}, {\\bf H}_{\\rm u} )\n+\\sum_{\\ell=L_1+1}^{N} \\ln p( \\bar{\\bf r}_{\\ell} | {\\bf H}_{\\rm g}, {\\bf H}_{\\rm u} )\n,\n\\end{aligned}\n\\end{equation}\nand $\\tilde{\\bf R}_{\\ell}= {{\\bf R}}_{\\ell} {\\bf X}^{-1}$.\nNote that $f_{{\\rm A}}({\\bf H}_{\\rm g}, {\\bf H}_{\\rm u})$ in \\eqref{equ:obj_f} only requires the prior distribution of ${\\bf h}_{{\\rm I},k,m}$, which is given by\n\\begin{equation}\\label{equ:prior}\n\\begin{aligned}[b]\np({\\bf h}_{{\\rm I},k,m} )\\propto e^{\n- {\\bf h}_{{\\rm g},m}^{\\rm H} {\\rm{diag}}({\\bf h}_{{\\rm u},k})^{\\rm H} {{\\bf C}_m^{(k)}}^{-1} {\\rm{diag}}({\\bf h}_{{\\rm u},k}) {\\bf h}_{{\\rm g},m}\n},\n\\end{aligned}\n\\end{equation}\nwhere $\\propto$ denotes equality up to a\nscaling that is independent of the variables (i.e., ${\\bf h}_{{\\rm I},k,m}$ for \\eqref{equ:prior}).\nThe likelihood functions $p({\\tilde{\\bf R}}_{\\ell}| {\\bf H}_{\\rm g}, {\\bf H}_{\\rm u})$ and $p( \\bar{\\bf r}_{\\ell} | {\\bf H}_{\\rm g}, {\\bf H}_{\\rm u} )$ are given by\n\\begin{align}\np(\\tilde{{\\bf R}}_{\\ell}| {\\bf H}_{\\rm g}, {\\bf H}_{\\rm u} )\n&\\propto\ne^{-\\sigma_{\\ell}^{-2} \\left\\|\n\\tilde{{\\bf R}}_{\\ell}- {\\bf H}_{\\rm g}^{\\rm T} {\\rm diag} \\left({\\bm \\theta}_{\\ell}\\right) {\\bf H}_{{\\rm u}}\n\\right\\|^2_{\\rm F} }\n, \\label{equ:likelihood_R}\n\\\\\np( \\bar{\\bf r}_{\\ell} | {\\bf H}_{\\rm g}, {\\bf H}_{\\rm u} )\n&\\propto\ne^{-\\bar\\sigma_{\\ell}^{-2} \\left\\|\n\\bar{\\bf r}_{\\ell}- {\\bf H}_{\\rm g}^{\\rm T} {\\rm diag} \\left({\\bm \\theta}_{\\ell} \\right) {\\bf H}_{{\\rm u}} \\bar{\\bf x}\n\\right\\|^2_2 }\n, \\label{equ:likelihood_r}\n\\end{align}\nwhere $\\sigma_{1}^2=\\frac{1}{2K}\\sigma_0^2$, $\\sigma_{\\ell}^2=\\frac{3}{2K}\\sigma_0^2$ for $\\ell=2,3,\\cdots,L_1$,\nand $\\bar\\sigma_{\\ell}^2=\\frac{3}{2}\\sigma_0^2$ for $\\ell=L_1+1,L_1+2,\\cdots,N$.\nFinally, after dropping all the irrelevant constant terms, the objective function is equivalently written as\n\\begin{equation}\\label{equ:obj_f_eq}\n\\begin{aligned}[b]\n&f_{{\\rm A}}({\\bf H}_{\\rm g}, {\\bf H}_{\\rm u})\n=-\\sum_{\\ell=1}^{L_1}\n{\\frac{1}{\\sigma_{\\ell}^{2}} \\left\\|\n\\tilde{{\\bf R}}_{\\ell}- {\\bf H}_{\\rm g}^{\\rm T} {\\rm diag} \\left({\\bm \\theta}_{\\ell}\\right) {\\bf H}_{{\\rm u}}\n\\right\\|^2_{\\rm F} }\\\\\n&\\quad-\\sum_{\\ell=L_1+1}^{N}\n{ \\frac{1}{\\bar\\sigma_{\\ell}^{2}} \\left\\|\n\\bar{\\bf r}_{\\ell}-{\\bf H}_{\\rm g}^{\\rm T} {\\rm diag} \\left({\\bm \\theta}_{\\ell} \\right) {\\bf H}_{{\\rm u}} \\bar{\\bf x}\n\\right\\|^2_2 }\n\\\\\n&\\quad-\\sum_{m=1}^M \\sum_{k=1}^K {\\bf h}_{{\\rm g},m}^{\\rm H} {\\rm{diag}}({\\bf h}_{{\\rm u},k})^{\\rm H} {{\\bf C}_m^{(k)}}^{-1} {\\rm{diag}}({\\bf h}_{{\\rm 
u},k}) {\\bf h}_{{\\rm g},m} .\n\\end{aligned}\n\\end{equation}\nNote that ${\\mathcal{P}}{(\\text{A})}$ does not have a unique solution, but all the solutions are equivalent for the purpose of estimating the cascaded channel ${\\bf h}_{{\\rm I},k,m}$ for all $m$ and $k$.\n\\begin{lemma}[Equivalence of the solution of ${\\mathcal{P}}{(\\text{A})}$]\\label{eq_decomposite}\nIf ${\\bf H}_{\\rm g}^\\star$ and ${\\bf H}_{\\rm u}^\\star$ constitute an optimal solution of ${\\mathcal{P}}{(\\text{A})}$, then ${\\rm diag} \\left({\\bf a}\\right){\\bf H}_{\\rm g}^\\star$ and ${\\rm diag} \\left({\\bf a}\\right)^{-1}{\\bf H}_{\\rm u}^\\star$ also constitute an optimal solution of ${\\mathcal{P}}{(\\text{A})}$\nfor any coefficient vector ${\\bf a}=[a_1,a_2,\\cdots,a_N]^{\\rm T}$ with $|a_n|\\neq0$ for all $n=1,2,\\cdots,N$.\n\\end{lemma}\n\n\\begin{IEEEproof}\nSee Appendix \\ref{proof_lemma1}.\n\\end{IEEEproof}\n\n\nAs a result, there is ambiguity in estimating the individual channels from solving ${\\mathcal{P}}{(\\text{A})}$. Nevertheless, the cascaded channel is unique regardless of the coefficient vector ${\\bf a}$.\n\n\n\n\n\\subsection{Channel Estimation Algorithm Based on Alternating Optimization}\\label{sec:opt_est_AO}\n\nSolving ${\\mathcal{P}}{(\\text{A})}$ is difficult because the optimization variables are coupled in the likelihood functions \\eqref{equ:likelihood_R} and \\eqref{equ:likelihood_r}.\nFortunately, we will show that ${\\mathcal{P}}{(\\text{A})}$ is actually bi-convex (see Lemma \\ref{fu_convex} and Lemma \\ref{fg_convex}), and hence it can be solved by alternating optimization. In particular, we decompose ${\\mathcal{P}}{(\\text{A})}$ into two convex sub-problems, and\nthe optimal solutions for these sub-problems will be derived accordingly.\n\n\n\\subsubsection{Optimize ${\\bf H}_{\\rm u}$}\nWe investigate the optimization of ${\\bf H}_{\\rm u}$ while ${\\bf H}_{\\rm g}$ is fixed.\nAfter dropping all irrelevant terms, the sub-problem is given by\n\\begin{align*}\n{\\mathcal{P}}{({\\text A}_{{\\rm u}})} \\quad \\min_{ {\\bf H}_{\\rm u} }\\; f_{{\\rm u}}({\\bf H}_{\\rm u})\n,\n\\end{align*}\nwhere\n\\begin{equation}\\label{equ:obj_f_hrk}\n\\begin{aligned}[b]\n&f_{{\\rm u}}({\\bf H}_{\\rm u})\n=\\sum_{\\ell=1}^{L_1}\n{\\frac{1}{\\sigma_{\\ell}^{2}} \\left\\|\n\\tilde{{\\bf R}}_{\\ell}- {\\bf D}_{\\ell} {\\bf H}_{{\\rm u}}\n\\right\\|^2_{\\rm F} }\\\\\n&\\quad+\\sum_{\\ell=L_1+1}^{N}\n{ \\frac{1}{\\bar\\sigma_{\\ell}^{2}} \\left\\|\n\\bar{\\bf r}_{\\ell}-{\\bf D}_{\\ell} {\\bf H}_{{\\rm u}} \\bar{\\bf x}\n\\right\\|^2_2 }\n+\\sum_{k=1}^K {\\bf h}_{{\\rm u},k}^{\\rm H} {\\bf C}_{{\\rm u},k} {\\bf h}_{{\\rm u},k}\n,\n\\end{aligned}\n\\end{equation}\nand\n\\begin{align}\n{\\bf D}_{\\ell}&={\\bf H}_{\\rm g}^{\\rm T} {\\rm diag} \\left({\\bm \\theta}_{\\ell}\\right), \\; {\\text{for}} \\; \\ell=1,2,\\cdots,N,\\\\\n{\\bf C}_{{\\rm u},k}&=\\sum_{m=1}^M {\\rm{diag}}({\\bf h}_{{\\rm g},m})^{\\rm H} {{\\bf C}_m^{(k)}}^{-1} {\\rm{diag}}({\\bf h}_{{\\rm g},m}), \\quad \\forall k.\n\\end{align}\n\n\n\\begin{lemma}[Convexity of ${\\mathcal{P}}{({\\text A}_{{\\rm u}})}$]\\label{fu_convex}\nFor any fixed ${\\bf H}_{\\rm g}$, the objective function of ${\\mathcal{P}}{({\\text A}_{{\\rm u}})}$ is a convex quadratic function of the vectorization of ${\\bf H}_{\\rm u}$, which is denoted by ${\\rm{vec}}({\\bf H}_{\\rm u})$.\n\\end{lemma}\n\n\\begin{IEEEproof}\nSee Appendix \\ref{proof_lemmafu}.\n\\end{IEEEproof}\n\n\nBased on Lemma \\ref{fu_convex}, the optimal solution of ${\\mathcal{P}}{({\\text A}_{{\\rm u}})}$ is the root of the
first order derivative of $f_{{\\rm u}}({\\rm{vec}}({\\bf H}_{\\rm u}))$, which is given by\n\\begin{equation}\\label{equ:opt_hrk}\n\\begin{aligned}[b]\n{\\rm{vec}}({\\bf H}_{\\rm u}^\\star)\n&= {\\bm \\Lambda}_{{\\rm u}}^{-1} {\\bm \\nu}_{{\\rm u}}\n,\n\\end{aligned}\n\\end{equation}\nwhere\n\\begin{align}\n{\\bm \\Lambda}_{{\\rm u}}&=\n {\\bf C}_{{\\rm u}}\n+{\\bf I}_K \\otimes \\left(\\sum_{\\ell=1}^{L_1} \\frac{1}{\\sigma_{\\ell}^{2}} {\\bf D}_{\\ell}^{\\rm H} {\\bf D}_{\\ell}\\right) \\notag\\\\\n&\\quad + \\left(\\bar{\\bf x}^\\ast \\bar{\\bf x}^{\\rm T}\\right)\\otimes \\left(\\sum_{\\ell=L_1+1}^{N} \\frac{1}{\\bar\\sigma_{\\ell}^{2}} |\\bar{x}_k|^2 {\\bf D}_{\\ell}^{\\rm H} {\\bf D}_{\\ell}\\right)\n, \\label{equ:opt_hrk_e1}\\\\\n{\\bf C}_{{\\rm u}}&={\\rm{blkdiag}}({\\bf C}_{{\\rm u},1},{\\bf C}_{{\\rm u},2},\\cdots,{\\bf C}_{{\\rm u},K}),\n\\end{align}\nand\n\\begin{equation}\\label{equ:opt_hrk_e2}\n\\begin{aligned}[b]\n{\\bm \\nu}_{{\\rm u}}\n&={\\rm{vec}}\\left(\n\\sum_{\\ell=1}^{L_1} \\frac{1}{\\sigma_{\\ell}^{2}} {\\bf D}_{\\ell}^{\\rm H}\n\\tilde{{\\bf R}}_{\\ell}\n+ \\sum_{\\ell=L_1+1}^{N} \\frac{1}{\\bar\\sigma_{\\ell}^{2}}\n {\\bf D}_{\\ell}^{\\rm H} \\bar{\\bf r}_{\\ell} \\bar{\\bf x}^{\\rm H}\n\\right).\n\\end{aligned}\n\\end{equation}\n\n\n\n\n\\subsubsection{Optimize ${\\bf H}_{{\\rm g}}$}\nSimilarly, the sub-problem of optimizing ${\\bf H}_{{\\rm g}}$ is given by\n\\begin{align*}\n{\\mathcal{P}}{(\\text{A}_{{\\rm g}})} \\; \\min_{ {\\bf H}_{{\\rm g}} }\\; \\sum_{m=1}^M f_{{\\rm g},m}({\\bf h}_{{\\rm g},m})\n,\n\\end{align*}\nwhere\n\\begin{align}\n&f_{{\\rm g},m}({\\bf h}_{{\\rm g},m})\n=\n \\sum_{k=1}^K \\sum_{\\ell=1}^{L_1} \\frac{1}{\\sigma_{\\ell}^{2}} \\left\\|\n\\tilde{{r}}_{\\ell,m,k}-{\\bf h}_{{\\rm g},m}^{\\rm T} {\\bf b}_{\\ell,k}\n\\right\\|^2 \\notag\\\\\n&+ \\sum_{\\ell=L_1+1}^{N}\\frac{1}{\\bar\\sigma_{\\ell}^{2}} \\left\\|\n\\bar{r}_{\\ell,m}-\\sum_{k=1}^K \\bar{x}_k {\\bf h}_{{\\rm g},m}^{\\rm T} {\\bf b}_{\\ell,k}\n\\right\\|^2+{\\bf h}_{{\\rm g},m}^{\\rm H} {\\bf C}_{{\\rm g},m} {\\bf h}_{{\\rm g},m}\n, \\label{equ:obj_f_gm}\\\\\n&\\quad{\\bf b}_{\\ell,k}={\\rm diag}\\left({\\bm \\theta}_{\\ell}\\right) {\\bf h}_{{\\rm u},k}, \\; {\\text{for}} \\; \\ell=1,2,\\cdots,N,\\\\\n&\\quad{\\bf C}_{{\\rm g},m}=\\sum_{k=1}^K {\\rm{diag}}({\\bf h}_{{\\rm u},k})^{\\rm H} {{\\bf C}_m^{(k)}}^{-1} {\\rm{diag}}({\\bf h}_{{\\rm u},k}), \\quad \\forall m,\n\\end{align}\n$\\tilde{{r}}_{\\ell,m,k}$ denotes the entry in the $m$-th row and $k$-th column of $\\tilde{\\bf R}_{\\ell}$,\nand $\\bar{r}_{\\ell,m}$ denotes the $m$-th entry in $\\bar{\\bf r}_{\\ell}$.\n\n\n\\begin{lemma}[Convexity of ${\\mathcal{P}}{({\\text A}_{{\\rm g}})}$]\\label{fg_convex}\nFor any fixed ${\\bf H}_{\\rm u}$, the objective function of ${\\mathcal{P}}{({\\text A}_{{\\rm g}})}$ is a convex quadratic function of $\\{{\\bf h}_{{\\rm g},1},{\\bf h}_{{\\rm g},2},\\cdots,{\\bf h}_{{\\rm g},M}\\}$.\n\\end{lemma}\n\n\\begin{IEEEproof}\nSee Appendix \\ref{proof_lemmafg}.\n\\end{IEEEproof}\n\n\n\nBased on lemma \\ref{fg_convex}, the optimal ${\\bf h}_{{\\rm g},m}$ is the\nroot of the first order derivative of $f_{{\\rm g},m}({\\bf h}_{{\\rm g},m})$, which is given by\n\\begin{equation}\\label{equ:opt_gm}\n\\begin{aligned}[b]\n{\\bf h}_{{\\rm g},m}^\\star\n&= {\\bm \\Lambda}_{{\\rm g},m}^{-1} {\\bm \\nu}_{{\\rm g},m}\n,\n\\end{aligned}\n\\end{equation}\nfor $m=1,2,\\cdots,M$, where\n\\begin{equation}\\label{equ:opt_gm_e1}\n\\begin{aligned}[b]\n{\\bm \\Lambda}_{{\\rm g},m}&={\\bf C}_{{\\rm 
g},m}^{-1}\n+\\sum_{k=1}^K\\sum_{\\ell=1}^{L_1} \\frac{1}{\\sigma_{\\ell}^{2}} {\\bf b}_{\\ell,k}^\\ast {\\bf b}_{\\ell,k}^{\\rm T}\\\\\n&\\;+ \\sum_{\\ell=L_1+1}^{N}\\frac{1}{\\bar\\sigma_{\\ell}^{2}}\n\\left(\\sum_{k =1}^K \\bar{x}_k {\\bf b}_{\\ell,k}^{\\rm T}\\right)^{\\rm H}\n\\left(\\sum_{k =1}^K \\bar{x}_k {\\bf b}_{\\ell,k}^{\\rm T}\\right)\n,\n\\end{aligned}\n\\end{equation}\nand\n\\begin{equation}\\label{equ:opt_gm_e2}\n\\begin{aligned}[b]\n&{\\bm \\nu}_{{\\rm g},m}=\\sum_{k=1}^K\\sum_{{\\ell}=1}^{L_1} \\frac{\\tilde{{r}}_{\\ell,m,k}}{\\sigma_{\\ell}^{2}} {\\bf b}_{\\ell,k}^\\ast\n+ \\sum_{\\ell=L_1+1}^{N}\\frac{\\bar{r}_{\\ell,m}}{\\bar\\sigma_{\\ell}^{2}}\n\\left(\\sum_{k =1}^K \\bar{x}_k^{\\ast} {\\bf b}_{\\ell,k}^{\\ast}\\right)\n.\n\\end{aligned}\n\\end{equation}\n\n\n\\subsubsection{Initial Estimation on ${\\bf H}_{{\\rm g}}$}\n The quality of the solution obtained by the alternative optimization depends heavily on the initial point.\n Here, we propose\nan efficient estimator for ${\\bf H}_{{\\rm g}}$ to initialize the proposed alternative optimization algorithm.\nIn particular, we construct a special $\\left\\{{\\bf H}_{\\rm g},{\\bf H}_{\\rm u}\\right\\}$ pair whose elements are given by\n\\begin{align}\n{\\bf h}_{{\\rm g},m}&= {\\rm diag} ({\\bf H}_{\\rm r}\\bar{\\bf x}) {\\bf g}_m, \\label{equ:hg1}\\\\\n{\\bf h}_{{\\rm u},k}&={\\rm diag} ({\\bf H}_{\\rm r}\\bar{\\bf x})^{-1}{\\bf h}_{{\\rm r},k}. \\label{equ:hu1}\n\\end{align}\nSubstituting the above ${\\bf H}_{\\rm g}$ into \\eqref{equ:rbar}, we have\n\\begin{equation}\\label{equ:rbar_v2}\n\\begin{aligned}[b]\n\\bar{\\bf r}_{\\ell}\n&={\\bf H}_{\\rm g}^{\\rm T} {\\bm \\theta}_\\ell\n+\\bar{\\bf z}_{\\ell}\n,\n\\end{aligned}\n\\end{equation}\nfor $\\ell=1,2,\\cdots,N$.\nTherefore, ${\\bf H}_{\\rm g}$ can be initialized by the least squares (LS) estimator as follows:\n\\begin{equation}\\label{equ:est_Hc}\n\\begin{aligned}[b]\n\\hat{\\bf H}_{\\rm g}=\\left( \\left[\\bar{\\bf r}_1,\\cdots,\\bar{\\bf r}_N \\right]\n\\left[{\\bm \\theta}_1,\\cdots,{\\bm \\theta}_N \\right]^{-1} \\right)^{\\rm T}.\n\\end{aligned}\n\\end{equation}\nSince $\\hat{\\bf H}_{\\rm g}$ in \\eqref{equ:est_Hc} is unbiased and it has exploited most of the available observations in all the $N+1$ training timeslots,\nit will give a good initial point.\n\n\n\\subsection{The Overall Proposed Algorithm}\nThe overall proposed cascaded IRS channel estimation algorithm is summarized in Algorithm \\ref{alg:P1}.\nThe convergence of the proposed alternating optimization algorithm is analyzed in Lemma \\ref{convergence}.\n\n\\begin{algorithm}[!ht]\n\\caption{ Proposed alternating optimization algorithm for the cascaded IRS channel estimation}\n\\label{alg:P1}\n\\begin{algorithmic}[1]\n\\STATE {Initialize ${\\bf H}_{{\\rm g}}$ by \\eqref{equ:est_Hc}.}\\\\\n\\REPEAT\n\\STATE Update ${\\bf H}_{{\\rm u}}$ by \\eqref{equ:opt_hrk};\n\\STATE Update ${\\bf h}_{{\\rm g},m}$ by \\eqref{equ:opt_gm} for all $m$;\n\\UNTIL{ $f_{{\\rm A}}({\\bf H}_{\\rm g}, {\\bf H}_{\\rm u})$ in \\eqref{equ:obj_f_eq} converges;}\\\\\n\\STATE {Output the cascaded channel ${\\hat{\\bf h}}_{{\\rm I},k,m}={\\rm{diag}}({\\bf h}_{{\\rm u},k}) {\\bf h}_{{\\rm g},m}$ for all $k$ and $m$.}\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{lemma}[Convergence of the Proposed Alternating Optimization Algorithm]\\label{convergence}\nThe objective function $f_{{\\rm A}}({\\bf H}_{\\rm g}, {\\bf H}_{\\rm u})$ is non-increasing in every step when ${\\bf H}_{{\\rm u}}$ or ${\\bf H}_{{\\rm g}}$ are updated, and 
the optimization iterations in \\eqref{equ:opt_hrk} and \\eqref{equ:opt_gm} converge to a local optimum\nof ${\\mathcal{P}}{(\\text{A})}$.\n\\end{lemma}\n\n\\begin{IEEEproof}\nAs shown in Section \\ref{sec:opt_est_AO}, the original problem is decomposed into two unconstrained minimization problems whose objectives are convex quadratic functions, and each subproblem has a unique optimal solution, which is derived in \\eqref{equ:opt_hrk} and \\eqref{equ:opt_gm}.\nTherefore, the whole alternating optimization algorithm will converge to a local optimum of the original problem ${\\mathcal{P}}{(\\text{A})}$ \\cite{BCD}.\n\\end{IEEEproof}\n\n\n\\emph{Remark}:\nThe complexity for updating ${\\bf H}_{{\\rm u}}$ by \\eqref{equ:opt_hrk} is ${\\cal O}(K^3 N^3+KMN^2)$. The complexity for updating ${\\bf h}_{{\\rm g},m}$ by \\eqref{equ:opt_gm} is ${\\cal O}(KN^3)$, and thus the complexity to update ${\\bf H}_{{\\rm g}}$ is ${\\cal O}(KMN^3)$. Therefore, the overall complexity of the solution is ${\\cal O}(IK^3 N^3+IKMN^3)$, where $I$ denotes the number of iterations of the alternating optimization algorithm.\\footnote{We will show in simulation that the algorithm will converge quickly in about two or three iterations. In addition, the ${\\cal O}(N^3)$ complexity is costed by the matrix inversion operation. However, since the matrices required inversion operation are all Hermitian positive semi-definite matrices, they can be implemented very efficiently by advanced algorithms such as the Cholesky-decomposition-based algorithm \\cite{matrix_inverse}. }\n\n\n\\section{Training Phase Shifting Configuration}\\label{sec:Phaseshift_cofig}\n\\subsection{ Motivation of the Phase Shifting Configuration}\nThe IRS steers the incident signal to different directions by configuring different phase shifting vectors ${\\bm \\theta}_{\\ell}$, as illustrated in {\\figurename~\\ref{theta_opt}}.\nAccording to the protocol, the cascaded channel is scanned by $N$ spatial directions in $N$ training timeslots, and a proper design on $\\{ {\\bm \\theta}_1,{\\bm \\theta}_2,\\cdots,{\\bm \\theta}_N \\}$ guarantees that the whole channel information in all directions is contained by the received signals such that good channel estimation performance can be achieved.\n\n\n\\begin{figure}\n[!ht]\n\\centering\n\\includegraphics[width=.9\\columnwidth]{phase_shift_opt.eps}\n\\caption{Illustration of the impact of different phase shiftings.}\n\\label{theta_opt}\n\\end{figure}\n\n\nIn the SU-MISO scenario, the overall received measurements after removing the pilots and the BS-user channels is given by\n\\begin{equation}\\label{equ:y_su}\n\\begin{aligned}[b]\n{\\bf Y}_{\\rm{SU}}&= {\\bf H}_{{\\rm I},1} {\\bm \\Phi}\n+{\\bf Z}\n,\n\\end{aligned}\n\\end{equation}\nwhere ${\\bm \\Phi}=[{\\bm \\theta}_1,{\\bm \\theta}_2,\\cdots,{\\bm \\theta}_N]$. 
The LS estimator may be adopted as follows \\cite{Zhouzy2020decompositionCE}:\n\\begin{equation}\\label{equ:hatH_su}\n\\begin{aligned}[b]\n\\hat{\\bf H}_{{\\rm I},1}&= {\\bf Y}_{\\rm{SU}} {\\bm \\Phi}^{-1}\n,\n\\end{aligned}\n\\end{equation}\nwhere $\\hat{\\bf H}_{{\\rm I},1}$ denotes the estimated cascaded channel.\nThen ${\\bm \\Phi}$ is optimized by minimizing the MSE:\n\\begin{align*}\n\\min_{ {\\bm \\Phi} }\\; & {\\rm{tr}} \\left(\\left({\\bm \\Phi} {\\bm \\Phi}^{\\rm H}\\right)^{-1}\\right)\\\\\n{\\bf s.t.} \\;\n& |{\\bm \\Phi}_{i,j}|=1, \\quad \\forall i,j=1,2,\\cdots,N\n.\n\\end{align*}\nIt is proved in \\cite{Zhouzy2020decompositionCE} that the optimal value of the MSE is $1$, which can be achieved by the DFT matrix such that ${\\bm \\Phi}={\\bf F}$, where\n\\begin{equation}\\label{equ:DFT_F}\n\\begin{aligned}[b]\n{\\bf F}\n=\n\\begin{bmatrix}\n 1 & 1 & 1 & \\cdots & 1 \\\\\n 1 & e^{-\\jmath 2 \\pi \\frac{1}{N}} & e^{-\\jmath 2 \\pi \\frac{2}{N}} & \\cdots & e^{-\\jmath 2 \\pi \\frac{N-1}{N}} \\\\\n 1 & e^{-\\jmath 2 \\pi \\frac{2}{N}} & e^{-\\jmath 2 \\pi \\frac{4}{N}} & \\cdots & e^{-\\jmath 2 \\pi \\frac{2(N-1)}{N}} \\\\\n \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n 1 & e^{-\\jmath 2 \\pi \\frac{N-1}{N}} & e^{-\\jmath 2 \\pi \\frac{2(N-1)}{N}} & \\cdots & e^{-\\jmath 2 \\pi \\frac{(N-1)^2}{N}}\n\\end{bmatrix}\n.\n\\end{aligned}\n\\end{equation}\n\nFor the protocol extended from the SU case \\cite{Zhouzy2020decompositionCE,Kundu2021OJCSLMMSE_DFTGOOD,Alwazani2020OJCSLMMSE_DFT} shown in {\\figurename~\\ref{protocol_tran}}, the transmit signals from the users are the same in different timeslots.\nTherefore, the columns of ${\\bf F}$ may be permuted in any order, and the MSE will remain the same.\nHowever, in our proposed protocol in {\\figurename~\\ref{protocol:d}}, the transmit signals are different in stage I and stage II.\nIn particular, the received signals in stage I contribute to the estimation of both the common-link variable and the user-specific variables in the cascaded channels, while the signals in stage II contribute to the common-link variable only.\nHence, the phase shifting vectors $ {\\bm \\theta}_1, {\\bm \\theta}_2,\\cdots,{\\bm \\theta}_{L_1} $\nin stage I require additional design.\n\n\n\n\\subsection{Optimization Formulation of the Phase Shifting Configuration for MU-MISO IRS Systems}\n\nAs shown in \\eqref{equ:est_Hc}, the initial estimate of the common-link variable $\\hat{\\bf H}_{\\rm g}$ is almost the same as the estimator for the SU case shown in \\eqref{equ:hatH_su}.\nTherefore, we still adopt the DFT-based phase shifting configuration for all the $N$ timeslots. In addition, an extra steering direction ${\\bm{\\vartheta}} \\in {\\mathbb C}^{N \\times 1}$ is introduced for a more flexible design:\n\\begin{equation}\\label{equ:prop_phi}\n\\begin{aligned}[b]\n{\\bm \\Phi}&= {\\rm{diag}}({\\bm{\\vartheta}}){\\bf F}\n.\n\\end{aligned}\n\\end{equation}\nDenote by $\\vartheta_n$ the $n$-th element in ${\\bm{\\vartheta}}$. We have $|\\vartheta_n|=1$ for all $n=1,2,\\cdots,N$.\nOne can see that the value of ${\\rm{tr}} \\left(\\left({\\bm \\Phi} {\\bm \\Phi}^{\\rm H}\\right)^{-1}\\right)$ is kept at $1$ for any ${\\bm{\\vartheta}}$.\nDenote by ${\\bf f}_{\\ell}$ the $\\ell$-th column of $\\bf F$.
The training phase shifting vector in the $\\ell$-th timeslot is given by\n\\begin{equation}\\label{equ:prop_theta_ell}\n\\begin{aligned}[b]\n{\\bm \\theta}_\\ell&= {\\rm{diag}}({\\bm{\\vartheta}}) {\\bf f}_{\\ell}\n,\n\\end{aligned}\n\\end{equation}\nwhere $\\ell=1,2,\\cdots,N$.\nThe remaining task is to design ${\\bm{\\vartheta}}$.\n\nHowever, it is difficult to design a straightforward objective function to optimize ${\\bm{\\vartheta}}$ since the MSE of the estimated ${\\bf H}_{\\rm u}$ by the proposed algorithm is complicated.\nConsidering that we have the knowledge of the covariances ${\\bf C}_m^{(k)}$ of the cascaded channels for all $m$ and $k$,\nthe average received power of the effective IRS channel from user $k$ to the $m$-th BS antenna in timeslot $\\ell$ can be denoted by a function of ${\\bm{\\vartheta}}$, ${\\bf f}_{\\ell}$ and ${\\bf C}_m^{(k)}$:\n\\begin{equation}\\label{equ:channel_gain}\n\\begin{aligned}[b]\nQ_{\\ell,k,m} &= {\\mathbb E} \\left[ | {\\bf g}_m^{\\rm T} {\\rm{diag}}({\\bm \\theta}_\\ell) {\\bf h}_{{\\rm r},k} |^2 \\right]\\\\\n &= {\\mathbb E} \\left[ | {\\bf h}_{{\\rm I},k,m}^{\\rm T} {\\bm \\theta}_\\ell |^2 \\right]\\\\\n&= {\\mathbb E} \\left[ {\\bm \\theta}_\\ell^{\\rm H}\n\\left( {\\bf h}_{{\\rm I},k,m} {\\bf h}_{{\\rm I},k,m}^{\\rm H} \\right)^{\\ast}\n{\\bm \\theta}_\\ell \\right]\\\\\n&= {\\bm \\theta}_\\ell^{\\rm H}\n\\left( {\\bf C}_m^{(k)} \\right)^{\\ast}\n{\\bm \\theta}_\\ell\\\\\n&={\\bm{\\vartheta}}^{\\rm H} {\\rm{diag}}({\\bf f}_{\\ell})^{\\rm H}\n\\left( {\\bf C}_m^{(k)} \\right)^{\\ast}\n{\\rm{diag}}({\\bf f}_{\\ell}){\\bm{\\vartheta}}\n.\n\\end{aligned}\n\\end{equation}\nSince ${\\bf f}_{\\ell}$ and ${\\bf C}_m^{(k)}$ are known variables, the summation of $Q_{\\ell,k,m}$ over antennas $m=1,2,\\cdots,M$, users $k=1,2,\\cdots,K$ and timeslots $\\ell=1,2,\\cdots,L_1$ is a function of ${\\bm{\\vartheta}}$, which is given by\n\\begin{equation}\\label{equ:channel_gain_all}\n\\begin{aligned}[b]\nf_{\\rm B}({\\bm{\\vartheta}})&=\\sum_{\\ell=1}^{L_1} \\sum_{k=1}^K \\sum_{m=1}^M Q_{\\ell,k,m} \\\\\n&=\\sum_{\\ell=1}^{L_1} \\sum_{k=1}^K \\sum_{m=1}^M\n{\\bm{\\vartheta}}^{\\rm H} {\\rm{diag}}({\\bf f}_{\\ell})^{\\rm H}\n\\left( {\\bf C}_m^{(k)} \\right)^{\\ast}\n{\\rm{diag}}({\\bf f}_{\\ell}){\\bm{\\vartheta}} \\\\\n&={\\bm{\\vartheta}}^{\\rm H} \\left(\\sum_{\\ell=1}^{L_1} \\sum_{k=1}^K \\sum_{m=1}^M\n {\\rm{diag}}({\\bf f}_{\\ell})^{\\rm H}\n\\left( {\\bf C}_m^{(k)} \\right)^{\\ast}\n{\\rm{diag}}({\\bf f}_{\\ell})\\right){\\bm{\\vartheta}} .\n\\end{aligned}\n\\end{equation}\nWe define\n\\begin{equation}\n\\begin{aligned}[b]\n{\\bf E}=\\left(\\sum_{\\ell=1}^{L_1} \\sum_{k=1}^K \\sum_{m=1}^M\n {\\rm{diag}}({\\bf f}_{\\ell})^{\\rm H}\n\\left( {\\bf C}_m^{(k)} \\right)^{\\ast}\n{\\rm{diag}}({\\bf f}_{\\ell})\\right).\n\\end{aligned}\n\\end{equation}\nThe optimization problem on ${\\bm{\\vartheta}}$ is formulated to maximize $f_{\\rm B}({\\bm{\\vartheta}})$:\n\\begin{equation*\n\\begin{aligned}[b]\n{\\mathcal{P}}{(\\text{B})}\\quad \\max_{ {\\bm{\\vartheta}} } \\; &\nf_{\\rm B}({\\bm{\\vartheta}})={\\bm{\\vartheta}}^{\\rm H} {\\bf E} {\\bm{\\vartheta}}\\\\\n{\\bf s.t.} \\;\n& |\\vartheta_n|=1, \\quad \\forall n=1,2,\\cdots,N.\n\\end{aligned}\n\\end{equation*}\n\n\n\\subsection{Solution for ${\\mathcal{P}}{(\\text{B})}$}\n${\\mathcal{P}}{(\\text{B})}$ is a non-convex problem due to the maximizing of a convex objective function and the unit-modulus constraints.\nWe solve ${\\mathcal{P}}{(\\text{B})}$ by the successive convex approximation (SCA) 
algorithm.\nIn particular, a surrogate problem, shown as follows, is iteratively solved:\n\\begin{equation*\n\\begin{aligned}[b]\n{\\mathcal{P}}{({\\text{B}}_i)}\\quad \\max_{ {\\bm{\\vartheta}} } \\; &\n{f}_{\\rm B}^{(i)} ({\\bm{\\vartheta}},\\bar{\\bm{\\vartheta}})\\\\\n{\\bf s.t.} \\;\n& |\\vartheta_n|=1, \\quad \\forall n=1,2,\\cdots,N,\n\\end{aligned}\n\\end{equation*}\nwhere $i$ is the iteration index, $\\bar{\\bm{\\vartheta}}$ is the solution of the surrogate problem in the $(i-1)$-th iteration, and ${f}_{\\rm B}^{(i)}({\\bm{\\vartheta}},\\bar{\\bm{\\vartheta}})$ is the first-order approximation of ${f}_{\\rm B}({\\bm{\\vartheta}})$ at $\\bar{\\bm{\\vartheta}}$:\n\\begin{equation}\\label{equ:surrogate}\n\\begin{aligned}[b]\n{f}_{\\rm B}^{(i)} ({\\bm{\\vartheta}},\\bar{\\bm{\\vartheta}})\n&= 2 {\\rm Re} \\left\\{{\\bar{\\bm \\vartheta}}^{\\rm H} {\\bf E} {\\bm \\vartheta}\\right\\}-{\\bar{\\bm \\vartheta}}^{\\rm H} {\\bf E} {\\bar{\\bm \\vartheta}}.\n\\end{aligned}\n\\end{equation}\nOne can see that ${f}_{\\rm B}^{(i)}({\\bm{\\vartheta}},\\bar{\\bm{\\vartheta}})$ is a linear function of ${\\bm{\\vartheta}}$, and thus the optimal solution of ${\\mathcal{P}}{({\\text{B}}_i)}$ is given by\n\\begin{equation}\\label{equ:opt_theta}\n\\begin{aligned}[b]\n{\\bm{\\vartheta}}=e^{\\jmath \\angle ({\\bf E}{\\bar{\\bm \\vartheta}})}.\n\\end{aligned}\n\\end{equation}\nThe proof on the convergence of the SCA algorithm can be referred to in \\cite{SCA}.\n\n\n\n\\section{Numerical Examples}\\label{simulation}\n\\subsection{Simulation Setups}\nThis section evaluates the performance of the proposed cascaded channel estimation algorithm.\nIn particular, we consider the indoor femtocell network illustrated in {\\figurename~\\ref{indoor_8user}} in which $K=8$ users are randomly distributed in a 5 m $\\times$ 5 m square area and are served by one BS and one IRS.\nWe generate the channel coefficients according to the 3GPP ray-tracing model \\cite[Section 7.5]{3GPP} using the model parameters for the Indoor-Office scenario \\cite[Table 7.5-6]{3GPP}.\nThe system parameters for the simulations are summarized in Table \\ref{table_sim}, in which the path-loss is set according to the Indoor-Office pathloss model in \\cite[Table 7.4.1-1]{3GPP}.\n\n\n\\begin{figure}\n[!ht]\n\\centering\n\\includegraphics[width=.8\\columnwidth]{simulation_scena.eps}\n\\caption{The simulated IRS-aided $K$-user MISO communication scenario comprising of one $M$-antenna BS and one $N$-element IRS.}\n\\label{indoor_8user}\n\\end{figure}\n\n\\begin{table}[!ht]\n\\footnotesize\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{Simulation Parameters}\n\\label{tablepm}\n\\centering\n\\begin{tabular}{c|c}\n\\hline\nParameters & Values \\\\\n\\hline\nCarrier frequency & 2.4 GHz\\\\\n\\hline\n Transmission bandwidth & $200$ kHz\\\\\n\\hline\nNoise power spectral density & $-170$ dBm\/Hz\\\\\n\\hline\nPath-loss for BS-IRS and IRS-user links (dB)& $40 + 17.3 \\lg d$\\\\\n\\hline\nPath-loss for BS-user link (dB)& $30 + 31.9 \\lg d+\\zeta$\\\\\n\\hline\nPenetration loss $\\zeta$ due to obstacle & 20 dB \\\\\n\\hline\n Reflection efficiency of IRS & 0.8\\\\\n\\hline\nHeight of users & 1.5 m\\\\\n\\hline\nLocation of BS & (0, 0, 3m)\\\\\n\\hline\nLocation of IRS & (0, 10m, 3m)\\\\\n\\hline\n\\end{tabular}\n\\label{table_sim}\n\\end{table}\n\n\n\n\nIn the simulation, we consider two baseline schemes to benchmark the proposed scheme.\n\\begin{itemize}\n\\item {\\bf Baseline 1 [LMMSE using the protocol in {\\figurename~\\ref{protocol:b}} 
\\cite{Kundu2021OJCSLMMSE_DFTGOOD,Alwazani2020OJCSLMMSE_DFT}]}: This curve illustrates the performance of the LMMSE estimator proposed in \\cite{Kundu2021OJCSLMMSE_DFTGOOD} and \\cite{Alwazani2020OJCSLMMSE_DFT}. For simplicity, we assume that the BS-user channels have already been perfectly estimated by this scheme.\n The protocol illustrated in {\\figurename~\\ref{protocol:b}} is adopted.\n In additon, for fair comparison, the number of training timeslots is set as $\\lceil \\frac{N-1}{K}\\rceil+\\lceil \\frac{N}{M}\\rceil$ such that the total pilot overhead is just slightly higher than that of the proposed scheme.\n\\item {\\bf Baseline 2 [Bilinear alternating least squares (BALS) algorithm \\cite{Araujo2021JSAC_CE_PARAFAC}]}:\n In \\cite{Araujo2021JSAC_CE_PARAFAC}, an iterative algorithm is proposed to estimate ${\\bf G}$ and ${\\bf H}_{\\rm r}$ by utilizing the PARAFAC decomposition, which adopts the same channel estimation protocol as Baseline 1. Note that due to the ambiguity issue (see Lemma \\ref{eq_decomposite} or \\cite[Section IV]{Araujo2021JSAC_CE_PARAFAC}), the BALS also cannot exactly reconstruct ${\\bf G}$ and ${\\bf H}_{\\rm r}$, and the actually estimated variable is still the cascaded channel.\n\\item {\\bf Baseline 3 [MAP modification for the BALS in \\cite{Araujo2021JSAC_CE_PARAFAC}]}: In this baseline, we make a simple modification based on the MAP optimization in this paper to further enhance the performance of the BALS algorithm in \\cite{Araujo2021JSAC_CE_PARAFAC} by exploiting the prior knowledge of the cascaded channels.\n\\item {\\bf Baseline 4 [Selected On-off protocol \\cite{Liuliang_CE2020TWC}]}: This curve illustrates the performance of the estimation algorithm in \\cite{Liuliang_CE2020TWC} based on the selected on-off channel estimation protocol shown in {\\figurename~\\ref{protocol:c}}. We assume that the covariances of ${\\bf h}_{{\\rm u},k} ={\\rm diag}({\\bf h}_{{\\rm r},1})^{-1}{\\bf h}_{{\\rm r},k}$ for all $k=1,2,\\cdots,8$ are available, and the LMMSE estimator in \\cite[Section V]{Liuliang_CE2020TWC} is adopted. In addition, we always select user $1$ as the reference user.\n\\end{itemize}\nNote that the proposed scheme and Baselines 3 and 4 have the same pilot overhead, i.e., $K+N+\\lceil \\frac{N}{M}\\rceil(K-1)$.\nWe focus on the evaluation of the performance of cascaded channel estimation, and use the normalized MSE (NMSE) as the evaluation metric, which is given by\n\\begin{equation}\n{\\rm{NMSE}}=\\frac{\\sum_{k=1}^K\\sum_{m=1}^M{\\mathbb E} \\left[\\left|{\\bf h}_{{\\rm I},k,m}-{\\hat{\\bf h}}_{{\\rm I},k,m}\\right|_2^2\\right]}\n{\\sum_{k=1}^K\\sum_{m=1}^M {\\mathbb E} \\left[\\left|{\\bf h}_{{\\rm I},k,m}\\right|_2^2\\right]}\n.\n\\end{equation}\nIn addition, based on the proposed protocol, the BS-user channel estimation can be independent of the cascaded channel estimation by applying the signal pre-processing as shown in Section \\ref{proposed_protocol}. 
This signal pre-processing operation will provide a theoretical $3$ dB gain for the BS-user channel estimation compared to the conventional solution, which shuts down the IRS to estimate the BS-user channel (see Appendix \\ref{app_Estimate_Hd}), and thus we do not compare the estimation performance for the BS-user channel in the simulations.\n\n\n\\begin{figure}\n[!t]\n \\centering\n \\subfigure[$P_{\\rm T}$ vs NMSE]{\n \\label{nmse_vs_PT:a}\n \\includegraphics[width=1\\columnwidth]{nmse_vs_pt_v2.eps}}\n \n \\subfigure[Convergence behavior when $P_{\\rm T}=15$ dBm]{\n \\label{nmse_vs_PT:b}\n \\includegraphics[width=1\\columnwidth]{converge_v2.eps}}\n \\caption{The NMSE versus transmit power when $M=8$ and $N=32$.}\n \\label{nmse_vs_PT}\n\\end{figure}\n\n\\subsection{Simulation Results}\n{\\figurename~\\ref{nmse_vs_PT:a}} illustrates the NMSE of different schemes with respect to the transmit power of users, in which the BS adopts a $4\\times2$ uniform planar array (UPA), and the IRS adopts an $8\\times4$ UPA. Thus, we have $M=8$ and $N=32$.\nThe BALS algorithm in \\cite{Araujo2021JSAC_CE_PARAFAC} achieves the worst performance since it does not exploit the channel prior knowledge.\nThe performance of the LMMSE using the traditional protocol in \\cite{Kundu2021OJCSLMMSE_DFTGOOD} and \\cite{Alwazani2020OJCSLMMSE_DFT} does not vary with the increase of $P_{\\rm T}$ since the main bottleneck is that the number of training timeslots is smaller than $N$.\nMoreover, the performance of BALS-MAP is better than that of LMMSE at a low SNR, but worsens as $P_{\\rm T}$ increases since it will reduce to BALS when $P_{\\rm T}$ is infinite.\nBased on the above observations, we can draw a conclusion that the traditional channel estimation protocol shown in {\\figurename~\\ref{protocol:b}} is not effective for exploiting the common-link structure, and thus we do not consider Baselines 2 and 3 in the remaining simulations.\nOn the other hand, it is seen that the proposed protocol with the optimization-based channel estimation algorithm achieves significant gain compared to all the baselines.\nIn addition, the phase shifting configuration by solving ${\\mathcal{P}}{(\\text{B})} $ achieves a more than 3 dB gain by steering the reflected signals in Stage I to the direction with a higher SNR compared to the random configuration baseline.\nNext, in {\\figurename~\\ref{nmse_vs_PT:b}}, we fix the transmit power $P_{\\rm T}$ at 15 dBm and show the convergence behaviors of the proposed Algorithm \\ref{alg:P1} for ${\\mathcal{P}}{(\\text{A})}$.\nOne can see that the proposed algorithm converges quickly.\nNote that although the solution without phase-shift optimization achieves a higher objective value, this does not imply it will have better performance since the objective functions of the two curves are different due to them adopting different ${\\bm{\\vartheta}}$ for the training phase shifting configuration.\n\n\\begin{figure}\n[!t]\n\\centering\n\\includegraphics[width=1\\columnwidth]{change_M_v2.eps}\n\\caption{NMSE versus $M$, when $P_{\\rm T}=20$ dBm.}\n\\label{M_vs_NMSE}\n\\end{figure}\n\nIn {\\figurename~\\ref{M_vs_NMSE}}, we simulate the performance of different BS antenna numbers $M$ when the BS adopts the uniform linear array and the IRS is still $8\\times4$ UPA. It is seen that the NMSE of all curves increases as $M$ increases since the ratio of the channel unknowns to the training observations decreases as $M$ increases. 
Moreover, the performance gain achieved by the phase shifting configuration increases as $M$ increases. This is because when $M$ increases, the number of training timeslots in Stage I of the proposed protocol decreases, and the probability that the random configuration scheme steers to the highest SNR direction becomes lower.\n\n\n\n\n\n\\begin{figure}\n[!t]\n\\centering\n\\includegraphics[width=1\\columnwidth]{change_N_v2.eps}\n\\caption{NMSE versus $N$, when $P_{\\rm T}=20$ dBm.}\n\\label{N_vs_NMSE}\n\\end{figure}\n\n\\begin{figure}\n[!t]\n\\centering\n\\includegraphics[width=1\\columnwidth]{CCDF_v2.eps}\n\\caption{The CCDFs for random user locations.}\n\\label{CCDF_location}\n\\end{figure}\n\nIn {\\figurename~\\ref{N_vs_NMSE}}, we simulate the NMSE of different schemes for different IRS sizes $N$. The BS is $4\\times2$ UPA, and the IRS is $N_1 \\times 8$ UPA in which $N_1$ increases from $4$ to $10$. Note that as $N$ increases, the pilot overhead increases according to the proposed protocol but the ratio of the channel unknowns to the training observations is almost fixed.\nIt is seen that the NMSE of Baselines 1 and 4 varies only a little, while the NMSE of the proposed scheme decreases as $N$ increases. This is because the channel becomes more correlated as $N$ becomes large, and the proposed scheme has a better capability of exploiting the channel prior knowledge.\n\nFinally, we investigate the impact of user locations on the estimation performance.\nIn particular, we fix $N=8 \\times 4$ and $M=4 \\times 2$, and generate $100$ snapshots for random user locations. For each snapshot, we further generate $1000$ channel realizations with independent small-scale fading to reduce the impact of other system parameters.\n{\\figurename~\\ref{CCDF_location}} plots the complementary cumulative distribution functions (CCDFs) of the NMSE for different snapshots.\nOne can see that the performance gains of the proposed scheme are irrespective of user locations.\nIn addition, we further increase $P_{\\rm T}=40$ dBm for Baseline 4 (i.e., the selected-on-off-protocol-based scheme \\cite{Liuliang_CE2020TWC}) such that it achieves a similar average NMSE to the proposed scheme with $P_{\\rm T}=20$ dBm. However, Baseline 4 achieves a much worse outage performance.\nThis is because the performance of the selected-on-off-protocol-based scheme \\cite{Liuliang_CE2020TWC} highly depends on the channel quality of the reference user, while the proposed scheme is much more robust since it does not require selecting one reference user.\n\n\n\n\\section{Conclusion}\\label{conclusion}\nIn this paper, we proposed a novel always-ON channel estimation protocol for uplink cascaded channel\nestimation in IRS-assisted MU-MISO systems.\nIn contrast to the existing schemes, the pilot overhead required by the proposed protocol is greatly reduced\nby exploiting the common-link structure.\nBased on the protocol, we formulated an optimization-based joint channel estimation problem that utilizes the combined\nstatistical information of the cascaded channels, and then we proposed an alternating optimization algorithm to solve the problem with the local optimum solution.\nIn addition, we optimized the phase shifting configuration in the proposed protocol, which may further enhance the channel estimation performance.\nThe simulation results demonstrated that the proposed protocol using the optimization based joint channel estimation algorithm achieves a more than $15$ dB gain compared to the benchmark. 
In addition, the proposed optimized phase shifting configuration achieves a more than $3$ dB gain compared to the random configuration scheme.\n\n\n\n\\appendices\n\\section{Proof of Lemma \\ref{lemma0}}\\label{proof_lemma0}\nDefine the virtual reference channel by ${\\bf H}_{\\rm v}={\\bf G}^{\\rm T} {\\rm diag} ({\\bf H}_{\\rm r}\\bar{\\bf x})$. Based on \\eqref{equ:rbar}, we have\n\\begin{equation}\\label{equ:app0_Hv}\n\\begin{aligned}[b]\n\\left[\\bar{\\bf r}_1,\\cdots,\\bar{\\bf r}_N \\right]\n&={\\bf H}_{\\rm v} {\\bm \\Phi}\n.\n\\end{aligned}\n\\end{equation}\nThus ${\\bf H}_{\\rm v}$ can be perfectly estimated by:\n\\begin{equation}\\label{equ:app0_Hv2}\n\\begin{aligned}[b]\n{\\bf H}_{\\rm v}= \\left[\\bar{\\bf r}_1,\\cdots,\\bar{\\bf r}_N \\right]\n{\\bm \\Phi}^{-1} .\n\\end{aligned}\n\\end{equation}\nWe further define the $K$ relative channels by ${\\bf h}_{{\\rm A},k}={\\rm diag} ({\\bf H}_{\\rm r}\\bar{\\bf x})^{-1} {\\bf h}_{{\\rm r},k}$, and ${\\bf H}_{\\rm A}=[{\\bf h}_{{\\rm A},1},{\\bf h}_{{\\rm A},2},\\cdots,{\\bf h}_{{\\rm A},K}]$.\nThen the cascaded channels become ${\\bf H}_{{\\rm I},k}= {\\bf H}_{\\rm v}{\\rm diag}({\\bf h}_{{\\rm A},k})$. Therefore, the remaining task is to perfectly estimate ${\\bf H}_{\\rm A}$.\n\nBased on the assumption on $\\bf G$ and ${\\bf H}_{\\rm r}$, we have\n${\\bf H}_{\\rm v}={\\bf F}_{\\rm B} {\\ddot{\\bf G}}^{\\rm T} {\\bf F}_{\\rm R}^{\\rm T} {\\rm diag} ({\\bf F}_{\\rm R} {\\ddot{\\bf H}}_{\\rm r}\\bar{\\bf x})$ where ${\\ddot{\\bf H}}_{\\rm r}=[{\\ddot{\\bf h}}_{{\\rm r},1},\\cdots,{\\ddot{\\bf h}}_{{\\rm r},K}]$.\nDefine ${\\bf V}=[{\\bf v}_{1},{\\bf v}_{2},\\cdots,{\\bf v}_{M}]$ which is given by\n\\begin{equation}\\label{equ:app0_V}\n\\begin{aligned}[b]\n{\\bf v}_m={\\rm diag} ({\\bf F}_{\\rm R} {\\ddot{\\bf H}}_{\\rm r}\\bar{\\bf x}){\\bf F}_{\\rm R} {\\ddot{\\bf g}}_m,\n\\end{aligned}\n\\end{equation}\nand thus ${\\bf H}_{\\rm v}={\\bf F}_{\\rm B} {\\bf V}^{\\rm T}$.\nUsing the independence of ${\\ddot{\\bf g}}_m$ and ${\\ddot{\\bf h}}_{{\\rm r},k}$, we have ${\\mathbb E}[{\\bf v}_i {\\bf v}_j^{\\rm H}]={\\bm 0}$. 
Since all ${\\bf v}_{m}$ ($m=1,2,\\cdots,M$) follow joint multivariate normal distribution, ${\\bf v}_{m}$ for all $m$ are pairwise independent to each other.\nIn addition, since ${\\bf C}_m^{(k)}={\\mathbb E}\\left[ {\\bf h}_{{\\rm I},k,m} {\\bf h}_{{\\rm I},k,m}^{\\rm H} \\right]$ is full-rank for all $k$ and $m$, ${\\mathbb E}[{\\bf v}_m {\\bf v}_m^{\\rm H}]$ is also full-rank for all $m$ with properly-designed $\\bar{\\bf x}$.\n\nNext, using ${\\bf R}_{\\ell}$ in \\eqref{equ:R1} and \\eqref{equ:R2L1}, we have\n\\begin{equation}\\label{equ:app0_RL}\n\\begin{aligned}[b]\n\\tilde{\\bf R}_{\\ell} &= {\\bf F}_{\\rm B}^{-1} {\\bf R}_{\\ell} {\\bf X}^{-1}\\\\\n&={\\bf F}_{\\rm B}^{-1} {\\bf G}^{\\rm T} {\\rm diag} ({\\bm \\theta}_\\ell) {\\bf H}_{\\rm r}\\\\\n&={\\bf F}_{\\rm B}^{-1} {\\bf H}_{\\rm v} {\\rm diag} ({\\bm \\theta}_\\ell) {\\bf H}_{\\rm A}\\\\\n&={\\bf V}^{\\rm T} {\\rm diag} ({\\bm \\theta}_\\ell) {\\bf H}_{\\rm A}\n.\n\\end{aligned}\n\\end{equation}\nStacking all $\\tilde{\\bf R}_{\\ell}$, we have\n\\begin{equation}\\label{equ:app0_mesure}\n\\begin{aligned}[b]\n\\begin{bmatrix}\\tilde{\\bf R}_1\\\\ \\vdots \\\\ \\tilde{\\bf R}_{L_1}\\end{bmatrix}\n=\\begin{bmatrix}{\\bf V}^{\\rm T}{\\rm diag} ({\\bm \\theta}_1)\\\\ \\vdots \\\\ {\\bf V}^{\\rm T}{\\rm diag} ({\\bm \\theta}_{L_1})\\end{bmatrix}\n{\\bf H}_{\\rm A}.\n\\end{aligned}\n\\end{equation}\nDefine ${\\bm \\Psi}=[{\\rm diag} ({\\bm \\theta}_1){\\bf V},\\cdots,{\\rm diag} ({\\bm \\theta}_{L_1}){\\bf V}]^{\\rm T}$. Now we need to prove ${\\text{rank}}({\\bm \\Psi})={\\rm{min}}\\{N,ML_1\\}$ with probability one, and the whole proof is completed.\n\nBy permuting the columns of ${\\bm \\Psi}$, we have a new matrix\n$\\bar{\\bm \\Psi}=[{\\rm diag} ({\\bf v}_1)\\bar{\\bm \\Phi},\\cdots,{\\rm diag} ({\\bf v}_M)\\bar{\\bm \\Phi}]^{\\rm T}$\nwhere $\\bar{\\bm \\Phi}=[{\\bm \\theta}_1,\\cdots,{\\bm \\theta}_{L_1}]$.\nThen, it is equivalent to prove that ${\\text{rank}}(\\bar{\\bm \\Psi})={\\rm{min}}\\{N,ML_1\\}$ with probability one. 
We prove it by induction.\nDefine $\\bar{\\bm \\Psi}_m=[{\\rm diag} ({\\bf v}_1)\\bar{\\bm \\Phi},\\cdots,{\\rm diag} ({\\bf v}_m)\\bar{\\bm \\Phi}]^{\\rm T}$.\nSince $\\bar{\\bm \\Phi}$ is semi-orthogonal, ${\\text{rank}}(\\bar{\\bm \\Psi}_1)=L_1$ with probability one.\nAssume that ${\\text{rank}}(\\bar{\\bm \\Psi}_{m-1})={\\rm{min}}\\{N,(m-1)L_1\\}$ with probability one; the remaining task is to prove that ${\\text{rank}}(\\bar{\\bm \\Psi}_m)={\\rm{min}}\\{N,mL_1\\}$ with probability one.\nWe prove it by contradiction.\nConsider the case when ${\\text{rank}}(\\bar{\\bm \\Psi}_{m-1})=(m-1)L_1$, which is smaller than $N$, but ${\\text{rank}}(\\bar{\\bm \\Psi}_m)<{\\rm{min}}\\{N,mL_1\\}$.\nSince ${\\rm diag} ({\\bf v}_m) {\\bm \\theta}_i$ and ${\\rm diag} ({\\bf v}_m) {\\bm \\theta}_j$ are orthogonal for $i \\neq j$, there must exist $\\bf x$ satisfying\n\\begin{equation}\\label{equ:app0_x}\n\\begin{aligned}[b]\n[{\\rm diag} ({\\bf v}_1)\\bar{\\bm \\Phi},\\cdots,{\\rm diag} ({\\bf v}_{m-1})\\bar{\\bm \\Phi}]{\\bf x}={\\rm diag} ({\\bf v}_m) {\\bm \\theta}_\\ell,\n\\end{aligned}\n\\end{equation}\nfor some $1\\leq \\ell \\leq L_1$ such that ${\\text{rank}}(\\bar{\\bm \\Psi}_m)<{\\rm{min}}\\{N,mL_1\\}$ holds.\nHowever, since ${\\bf v}_m$ is independent of ${\\bf v}_1,{\\bf v}_2,\\cdots,{\\bf v}_{m-1}$ and has a full-rank covariance matrix, equation \\eqref{equ:app0_x} is inconsistent (i.e., it has no solution) with probability one, which completes the proof.\n\n\\section{Estimation of the BS-User Channel}\\label{app_Estimate_Hd}\nBased on ${\\bm \\theta}_0=-{\\bm \\theta}_1$, we have\n\\begin{equation}\\label{equ:r_direct}\n\\begin{aligned}[b]\n{\\bf R}_{0} &= \\frac{1}{2}\\left({{\\bf Y}}_0+{{\\bf Y}}_1\\right) {\\bf X}^{-1}\\\\\n&={\\bf H}_{\\rm d}+\\tilde{\\bf Z}_0\n,\n\\end{aligned}\n\\end{equation}\nwhere $\\tilde{\\bf Z}_0$ is the noise matrix consisting of $MK$ i.i.d.
complex Gaussian variables following ${\\cal{CN}}({ 0},\\frac{1}{2K}\\sigma_0^2) $.\nLet ${\\bf r}_{{0},k}$ be the $k$-th column of ${\\bf R}_{0}$.\nThe BS-user direct channel for the $k$-th user can be estimated by the\nLMMSE\nestimator \\cite{Kay1993statisticalSP}:\n\\begin{equation}\\label{equ:hat_direct}\n\\begin{aligned}[b]\n\\hat{\\bf h}_{{\\rm d},k}={\\bf C}_{{\\rm d},k} \\left({\\bf C}_{{\\rm d},k}+ \\frac{\\sigma_0^2}{2K} {\\bf I}_M\\right)^{-1} {\\bf r}_{0,k},\n\\end{aligned}\n\\end{equation}\nwhere ${\\bf C}_{{\\rm d},k}={\\mathbb E}\\left[ {\\bf h}_{{\\rm d},k} {\\bf h}_{{\\rm d},k}^{\\rm H} \\right]$\nis the covariance matrix of the direct channel from the BS to the $k$-th user.\nNote that, as shown in \\eqref{equ:r_direct} and \\eqref{equ:hat_direct}, the proposed protocol may achieve a $3$ dB performance gain compared to the existing works, \\cite{Kundu2021OJCSLMMSE_DFTGOOD,Alwazani2020OJCSLMMSE_DFT,Liuliang_CE2020TWC}, on the estimation of the BS-user channels since it exploits doubled observation samples.\n\n\\section{Proof of Lemma \\ref{eq_decomposite}}\\label{proof_lemma1}\nThe objective function of ${\\mathcal{P}}{(\\text{A})}$ can be denoted by the function of the cascaded channel coefficients $\\{{\\bf h}_{{\\rm I},k,m}\\}$, as follows:\n\\begin{equation}\\label{equ:obj_f_A2}\n\\begin{aligned}[b]\n&f_{{\\rm A}}({\\bf H}_{\\rm g}, {\\bf H}_{\\rm u})\n=f_{{\\rm A}}(\\{{\\bf h}_{{\\rm I},k,m}\\})\\\\\n&=-\\sum_{\\ell=1}^{L_1} \\sum_{k=1}^K\n{\\frac{1}{\\sigma_{\\ell}^{2}} \\left\\|\n\\tilde{{\\bf r}}_{\\ell,k}- \\sum_{m=1}^M {\\bf h}_{{\\rm I},k,m}^{\\rm T} {\\bm \\theta}_{\\ell}\n\\right\\|^2 }\\\\\n&\\quad-\\sum_{\\ell=L_1+1}^{N}\n{ \\frac{1}{\\bar\\sigma_{\\ell}^{2}} \\left\\|\n\\bar{\\bf r}_{\\ell}-\\sum_{k=1}^K \\sum_{m=1}^M \\bar{x}_k {\\bf h}_{{\\rm I},k,m}^{\\rm T} {\\bm \\theta}_{\\ell}\n\\right\\|^2 }\n\\\\\n&\\quad-\\sum_{m=1}^M \\sum_{k=1}^K {\\bf h}_{{\\rm I},k,m}^{\\rm H} {{\\bf C}_m^{(k)}}^{-1} {\\bf h}_{{\\rm I},k,m} .\n\\end{aligned}\n\\end{equation}\nTherefore, if one optimal solution $\\left\\{{\\bf H}_{\\rm g}^\\star,{\\bf H}_{\\rm u}^\\star \\right\\} \\in {\\cal A}$, any $\\left\\{{\\bf H}_{\\rm g},{\\bf H}_{\\rm u}\\right\\}$ pair in set ${\\cal A}$ is an optimal solution of ${\\mathcal{P}}{(\\text{A})}$, and the lemma is proved.\n\n\n\n\\section{Proof of Lemma \\ref{fu_convex}}\\label{proof_lemmafu}\nDenote $\\ddot{\\bf h}_{\\rm u}={\\rm{vec}}({\\bf H}_{\\rm u})$. 
The objective function $f_{{\\rm u}}$ in \\eqref{equ:obj_f_hrk} is given by\n\\begin{equation}\\label{equ:proof_fu_1}\n\\begin{aligned}[b]\n&f_{{\\rm u}}(\\ddot{\\bf h}_{\\rm u})\n=\\sum_{\\ell=1}^{L_1}\n{\\frac{1}{\\sigma_{\\ell}^{2}} \\left\\|\n{\\rm{vec}} (\\tilde{{\\bf R}}_{\\ell})- \\left({\\bf I}_K \\otimes {\\bf D}_{\\ell}\\right) \\ddot{\\bf h}_{\\rm u}\n\\right\\|^2_{2} }\\\\\n&\\quad+\\sum_{\\ell=L_1+1}^{N}\n{ \\frac{1}{\\bar\\sigma_{\\ell}^{2}} \\left\\|\n\\bar{\\bf r}_{\\ell}-\\left({\\bar{\\bf x}}^{\\rm T} \\otimes {\\bf D}_{\\ell}\\right) \\ddot{\\bf h}_{\\rm u}\n\\right\\|^2_2 }\n+ \\ddot{\\bf h}_{\\rm u}^{\\rm H} {\\bf C}_{{\\rm u}} \\ddot{\\bf h}_{\\rm u}\n,\n\\end{aligned}\n\\end{equation}\nwhere ${\\bf C}_{{\\rm u}}={\\rm{blkdiag}}({\\bf C}_{{\\rm u},1},{\\bf C}_{{\\rm u},2},\\cdots,{\\bf C}_{{\\rm u},K})$.\nThe second order derivative of $f_{{\\rm u}}(\\ddot{\\bf h}_{\\rm u})$ is given by\n\\begin{equation}\\label{equ:proof_fu_2}\n\\begin{aligned}[b]\n\\frac{ \\partial^2 f_{{\\rm u}}(\\ddot{\\bf h}_{\\rm u})}{\\partial \\ddot{\\bf h}_{\\rm u} \\partial \\ddot{\\bf h}_{\\rm u}^{\\rm H}}\n&=2{\\bf C}_{{\\rm u}}\n+2{\\bf I}_K \\otimes \\left(\\sum_{\\ell=1}^{L_1} \\frac{1}{\\sigma_{\\ell}^{2}} {\\bf D}_{\\ell}^{\\rm H} {\\bf D}_{\\ell}\\right) \\notag\\\\\n&\\quad + 2\\left(\\bar{\\bf x}^\\ast \\bar{\\bf x}^{\\rm T}\\right)\\otimes \\left(\\sum_{\\ell=L_1+1}^{N} \\frac{1}{\\bar\\sigma_{\\ell}^{2}} |\\bar{x}_k|^2 {\\bf D}_{\\ell}^{\\rm H} {\\bf D}_{\\ell}\\right)\n,\n\\end{aligned}\n\\end{equation}\nwhich is a Hermitian positive semi-definite matrix. Thus, the lemma is proved.\n\n\\section{Proof of Lemma \\ref{fg_convex}}\\label{proof_lemmafg}\nThe second order derivative of $f_{{\\rm g},m}({\\bf h}_{{\\rm g},m})$ in \\eqref{equ:obj_f_gm} is given by\n\\begin{equation}\\label{equ:proof_fg_1}\n\\begin{aligned}[b]\n&\\frac{ \\partial^2 f_{{\\rm g},m}({\\bf h}_{{\\rm g},m})}{\\partial {\\bf h}_{{\\rm g},m} \\partial {\\bf h}_{{\\rm g},m}^{\\rm H}}\n=2{\\bf C}_{{\\rm g},m}^{-1}\n+2\\sum_{k=1}^K\\sum_{\\ell=1}^{L_1} \\frac{1}{\\sigma_{\\ell}^{2}} {\\bf b}_{\\ell,k}^\\ast {\\bf b}_{\\ell,k}^{\\rm T}\\\\\n&\\quad+ 2\\sum_{\\ell=L_1+1}^{N}\\frac{1}{\\bar\\sigma_{\\ell}^{2}}\n\\left(\\sum_{k =1}^K \\bar{x}_k {\\bf b}_{\\ell,k}^{\\rm T}\\right)^{\\rm H}\n\\left(\\sum_{k =1}^K \\bar{x}_k {\\bf b}_{\\ell,k}^{\\rm T}\\right)\n,\n\\end{aligned}\n\\end{equation}\nwhich is a Hermitian positive semi-definite matrix. Thus, $f_{{\\rm g},m}({\\bf h}_{{\\rm g},m})$ is a convex quadratic function of ${\\bf h}_{{\\rm g},m}$. 
Then, the objective function $\\sum_{m=1}^M f_{{\\rm g},m}({\\bf h}_{{\\rm g},m})$ is a convex quadratic function of $\\{{\\bf h}_{{\\rm g},1},{\\bf h}_{{\\rm g},2},\\cdots,{\\bf h}_{{\\rm g},M}\\}$.\n\n\n\n\n\n\\bibliographystyle{IEEEtran\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n The stability of quantum motion in dynamical systems,\n measured by quantum Loschmidt echo \\cite{Peres84}, has attracted much attention\n in recent years.\n The echo is the overlap of the evolution of the\n same initial state under two Hamiltonians with slight difference in the classical limit,\n $ M(t) = |m(t) |^2 $, where\n \\begin{equation} m(t) = \\langle \\Psi_0|{\\rm exp}(iHt\/ \\hbar ) {\\rm exp}(-iH_0t \/ \\hbar) |\\Psi_0 \\rangle \\label{mat} \\end{equation}\n is the fidelity amplitude.\n Here $H_0$ and $H$ are the unperturbed and perturbed Hamiltonians, respectively,\n $ H=H_0 + \\epsilon H_1 $, with $\\epsilon $ a small quantity and $H_1$ a perturbation.\n This quantity $M(t)$ is called fidelity in the field of\n quantum information \\cite{nc-book}.\n\n\n Fidelity decay in quantum systems whose classical counterparts have\n strong chaos with exponential instability, has been studied well\n \\cite{JP01,JSB01,CLMPV02,JAB02,BC02,CT02,PZ02,WL02,VH03,STB03,WCL04,Vanicek04,WL05,WCLP05,GPSZ06}.\n Related to the perturbation strength, previous investigations show\n the existence of at least three regimes of fidelity decay:\n (i) In the perturbative regime in which the typical transition matrix element is smaller than the\n mean level spacing, the fidelity has a Gaussian decay.\n (ii) Above the perturbative regime, the fidelity has an exponential decay with a rate\n proportional to $\\epsilon^2$, usually called the Fermi-golden-rule (FGR) decay of fidelity.\n (iii) Above the FGR regime is the Lyapunov regime in which $M(t)$ has usually an\n approximate exponential decay with a perturbation-independent rate.\n\n\n Fidelity decay in regular systems with quasiperiodic motion in the\n classical limit has also attracted much attention\n \\cite{PZ02,JAB03,PZ03,SL03,Vanicek04,WH05,Comb05,HBSSR05,GPSZ06,WB06,pre07}.\n For single initial Gaussian wavepacket, the fidelity has\n been found to have initial Gaussian decay followed by power law decay\\cite{PZ02,WH05,pre07}.\n\n\n Meanwhile, there exists a class of system which lies between the two classes of system mentioned above,\n namely, between chaotic systems with exponential instability and regular systems\n with quasiperiodic motion.\n One example of this class of system is the triangle map proposed by Casati and Prosen \\cite{triangle}.\n The map has linear instability with vanishing Lyapunov exponent,\n but can be ergodic and mixing with power-law decay of correlations.\n The classical Loschmidt echo in the triangle map has been studied recently\n and found behaving differently from that in systems with exponential instability\n and in systems with quasiperiodic motion \\cite{c-fid-tri}.\n This suggests that the decaying behavior of fidelity in the quantum triangle map may be\n different from that in the other two classes of system as well.\n In this paper, we present numerical results which confirm this expectation.\n\n\n Specifically, like in systems possessing strong chaos,\n in the triangle map three regimes of fidelity decay are found\n with respect to the perturbation strength: weak, intermediate and strong.\n However, in each of the three regimes, the decaying law(s) for the fidelity in the triangle map has\n 
been found different from that in systems possessing strong chaos.\n In section II, we recall properties of the classical triangle map\n and discuss its quantization.\n Section III is devoted to numerical investigations for the laws of fidelity decay\n in the three regimes of perturbation strength.\n Conclusions are given in section IV.\n\n\n \\section{Triangle map}\n\n\n \\begin{figure}\n \\includegraphics[width=\\columnwidth]{s0001-t1p7-N8.EPS}\n \\caption{ (color online).\n Averaged fidelity at weak perturbation, $\\sigma =10^{-4}$(solid curve),\n with average taken over 50 initial point sources chosen randomly, $N=2^{12}=4096$.\n The dashed-dotted straight line has a slope 1.7, showing that $\\log_{10}\\overline M(t)$ is approximately\n a function of $t^{1.7}$.\n For comparison, we also show two straight lines (dashed and dotted) with\n slopes 1 and 2, respectively.\n } \\label{fig-s0001-t1p7}\n \\end{figure}\n\n\n\n On the torus $(r,p) \\in {T}^2 = [-\\pi ,\\pi ) \\times [-\\pi ,\\pi )$,\n the triangle map is\n \\begin{eqnarray} \\nonumber p_{n+1} = p_n + \\alpha \\ \\text{sgn} (r_n)+ \\beta , \\hspace{1cm} (\\text{mod} 2\\pi )\n \\\\ r_{n+1} = r_n + p_{n+1} , \\hspace{1cm} (\\text{mod} 2\\pi ) \\label{map} \\end{eqnarray}\n where $\\text{sgn}(r) = \\pm 1 $ is the sign of $r$ for $r \\ne 0$ and\n $\\text{sgn}(r) =0$ for $r=0$ \\cite{triangle}.\n Rich behaviors have been found in the map:\n For rational $\\alpha \/\\pi$ and $\\beta \/\\pi$, the system is pseudointegrable.\n With the choice of $\\alpha =0$ and irrational $\\beta \/ \\pi$, it is ergodic but not mixing.\n Interestingly, for incommensurate irrational values of $\\alpha \/\\pi$ and $\\beta \/\\pi$,\n the dynamics is ergodic and mixing.\n In our numerical calculations, we take $\\alpha = \\pi^2 $ and $\\beta = (\\sqrt{5} -1)\\pi \/2$,\n {for which $(\\beta \/ \\alpha )$ is an irrational number, the golden mean divided by $\\pi$,\n and the map is ergodic and mixing.}\n\n\n The triangle map (\\ref{map}) can be associated with the Hamiltonian\n \\begin{eqnarray} H = \\frac 12 \\widetilde p^2 + V(r) \\sum_{n=-\\infty }^{\\infty } \\delta (t-nT), \\label{H} \\end{eqnarray}\n where $ V(r) = - \\widetilde \\alpha |r| - \\widetilde \\beta r$ and $T$ is the period of kicking.\n It is easy to verify that the dynamics produced by this Hamiltonian gives the map (\\ref{map})\n with the replacement $p=T\\widetilde p, \\alpha = T\\widetilde \\alpha $, and $\\beta =T\\widetilde \\beta $.\n\n\n \\begin{figure}\n \\includegraphics[width=\\columnwidth]{s0001-001-N1.EPS}\n \\caption{ (color online).\n Averaged fidelity at three weak perturbation strengths, $\\sigma =10^{-4}$(thin solid curve), $10^{-3}$\n (dashed curve), and $10^{-2}$(thick solid curve),\n with average taken over 50 initial point sources chosen randomly, $N=2^{12}=4096$.\n The dashed-dotted straight line represents $M_1(t)$ in Eq.~(\\ref{ctgamma})\n with $\\gamma =1.7$ and $c$ as an adjusting parameter.\n Inset: Fidelity of $\\sigma =10^{-3}$ and $N=2^{n}$;\n the two curves are almost indistinguishable.\n } \\label{fig-s0001-tlog}\n \\end{figure}\n\n\n The classical map can be quantized by the method of quantization on torus\n \\cite{HB80-q-tori, FMR91,WB94,Haake}.\n Schr\\\"{o}dinger evolution under the Hamiltonian in Eq.~(\\ref{H})\n for one period of time is given by the Floquet operator\n \\begin{equation} \\label{U1} U = \\exp \\left [ -\\frac i2 ({\\hat{\\widetilde p}})^2T \\right ]\n \\exp [-i V( {\\hat r}) ], \\end{equation}\n where we set $\\hbar =1$ in 
Schr\\\"{o}dinger equation.\n In this quantization scheme, an effective Planck constant $\\hbar_{\\rm eff}=T$ is introduced.\n It has the following relation to the dimension $N$ of the Hilbert space,\n \\begin{equation} \\label{h} N h_{\\rm eff} =4\\pi^2, \\end{equation}\n hence, $\\hbar_{\\rm eff} = 2\\pi \/ N$.\n In what follows, for brevity, we will omit the subscript eff of $\\hbar_{\\rm eff}$.\n Eigenstates of $\\hat{r} $ and $\\hat p$ are discretized,\n $\\hat{r}|j\\rangle = j \\hbar |j\\rangle $ and $\\hat{p}|k\\rangle = k \\hbar |k\\rangle $,\n with $j,k =-N\/2,-N\/2+1,\\ldots ,0,1, \\ldots , (N\/2)-1$.\n Then,\n {making use of the above discussed relations among $\\widetilde p, p, T, \\widetilde \\alpha , \\alpha ,\n \\widetilde \\beta , \\beta $, in particular, $T=\\hbar $},\n the Floquet operator in Eq.~(\\ref{U1}) can be written as\n \\begin{equation} \\label{U} U = \\exp \\left [ -\\frac {i}{2\\hbar} ({\\hat{ p}})^2 \\right ]\n \\exp \\left [ \\frac{i}{\\hbar} (\\alpha |\\hat r| +\\beta \\hat r) \\right ] . \\end{equation}\n In numerical computation, the time evolution\n $ |\\psi (t)\\rangle = U^t |\\psi_0\\rangle $ is calculated by the fast Fourier transform (FFT) method.\n\n\n The fidelity in Eq.~(\\ref{mat}) involves two slightly different Hamiltonians,\n unperturbed and perturbed.\n In this paper, for an unperturbed system with parameters $\\alpha $ and $\\beta $,\n the perturbed system is given by\n \\begin{equation} \\alpha \\to \\alpha + \\epsilon \\ \\ \\ \\ \\beta \\to \\beta . \\end{equation}\n Without the loss of generality, we assume $\\epsilon \\ge 0$.\n The parameter $\\sigma =(\\epsilon \/ \\hbar )$ can be used to characterize the strength of quantum\n perturbation.\n\n\n\n \\begin{figure}\n \\includegraphics[width=\\columnwidth]{mt-s001-01-N2.EPS}\n \\caption{ (color online).\n Variation of the averaged fidelity with $\\sigma t$ for $\\sigma =0.01, 0.02$ and 0.1,\n with average taken over 100 initial point sources chosen randomly, $N=4096$.\n The solid straight line is drawn for a comparison with linear dependence on $\\sigma t$.\n For $\\sigma = 0.02$ and 0.1, $\\log_{10} \\overline M(t)$ is approximately a linear function of $ \\sigma t$,\n before it becomes close to the saturation value.\n Inset: The distribution $P(y)$ for the action difference $\\Delta S$ at $t=40$,\n where $y=(\\Delta S -\\langle \\Delta S \\rangle )\n \/ \\epsilon $ and $\\langle \\Delta S \\rangle$ is the average value of $\\Delta S$.\n It is calculated by taking randomly $10^7$ initial points in the phase space.\n $P(y)$ does not have a Gaussian shape.\n } \\label{fig-mt-s001-01}\n \\end{figure}\n\n\n \\section{Three regimes of fidelity decay}\n\n \\subsection{Weak perturbation regime}\n\n\n Let us first discuss weak perturbation.\n As mentioned in the introduction, in systems with strong chaos in the classical limit,\n the fidelity has a Gaussian decay under sufficiently weak perturbation.\n The Gaussian decay is derived by making use of the first order perturbation theory for eigensolutions\n of $H$ and $H_0$ and the random matrix\n theory for $\\Delta E_n \\equiv E_n-E^0_n$, where $E_n$ and $E^0_n$ are\n eigenenergies of $H$ and $H_0$, respectively.\n Numerical results in Ref.~\\cite{EKW05} show agreement of the spectral\n statistics in the triangle map with the prediction of random matrix theory,\n hence, at first sight, Gaussian decay might be expected for the fidelity decay\n in the weak perturbation regime of the triangle map.\n\n\n However, our numerical results show a 
non-Gaussian decay of fidelity for small perturbation.\n An example is given in Fig.~\\ref{fig-s0001-t1p7} for $\\sigma=10^{-4}$.\n To obtain relatively smooth curves for fidelity,\n average has been taken over 50 initial point sources (eigenstates of $\\hat r$) chosen randomly.\n This figure, plotted with $\\log_{10} \\left (-\\log_{10} \\overline M(t)\\right )$ versus $ \\log_{10} t $,\n shows clearly that $\\log_{10}\\overline M(t)$ is approximately proportional to $t^{1.7}$ (the\n dashed-dotted straight line), while is far from the Gaussian case of $t^2$ and the\n exponential case of $t$ represented by the dotted and dashed lines, respectively.\n\n\n Furthermore, we found that the averaged fidelity $\\overline M(t)$ can be fitted well by\n \\begin{equation} \\label{ctgamma} M_1(t) = \\exp (-c \\sigma^2 t^{\\gamma }) \\end{equation}\n with $\\gamma \\simeq 1.7$ and $c$ as a fitting parameter.\n In Fig.~\\ref{fig-s0001-tlog}, we show fidelity decay for three different values of $\\sigma $.\n With the horizontal axis scaling with $\\log_{10}\\sigma^2 t^{1.7 }$,\n the three curves corresponding to the three values of $\\sigma $\n are hardly distinguishable in their overlapping regions (except for long times).\n Note that, to show clearly the dashed-dotted straight line which represents\n $M_1(t)$ in Eq.~(\\ref{ctgamma}),\n we have deliberately adjusted a little the best-fitting value of $c$ such that the dashed-dotted line\n is a little above the curves of the fidelity.\n\n\n \\begin{figure}\n \\includegraphics[width=\\columnwidth]{s0p1-n11-n12-N3.EPS}\n \\caption{ (color online).\n Fidelity decay for $\\sigma =0.1$ and $N=2^{n}$,\n averaged over 100 initial point sources.\n } \\label{fig-s0p1-n11-n12}\n \\end{figure}\n\n\n\n In the inset of Fig.~\\ref{fig-s0001-tlog}, we show curves of fidelity for the same $\\sigma$\n but different values of $\\epsilon $ and $N$.\n The two curves are very close, supporting the assumption\n that $\\epsilon $ and $N$ appear in the form of the\n single variable $\\sigma $ as written on the right hand side of Eq.~(\\ref{ctgamma}).\n This dependence of $\\overline M(t)$ on the variable $\\sigma $ for sufficiently small $\\sigma $\n can be understood in a first-order perturbation treatment of fidelity,\n as shown in the following arguments.\n\n\n Let us consider a Hilbert space with sufficiently large dimension $N$\n and make use of arguments similar to those used in Ref.~\\cite{CT02}\n for deriving the Gaussian decay,\n but without assuming the applicability of the random matrix theory.\n It follows that, for times not very long, the averaged fidelity (averaged over initial states)\n is mainly determined by\n $\\langle \\exp (-i\\Delta \\omega_n t ) \\rangle$, where $\\Delta \\omega_n =\\omega_{n} -\\omega_n^0$\n and $\\langle \\ldots \\rangle $ indicates average over the quasi-spectrum.\n Here $\\omega_n^0$ is an eigen-frequency of the Floquet operator $U$ in Eq.~(\\ref{U})\n and $\\omega_n$ is the corresponding eigen-frequency of $(U e^{i\\sigma |r|})$.\n For large $N$, $\\langle \\exp (-i\\Delta \\omega_n t ) \\rangle$\n can be calculated by making use of the distribution of $\\Delta \\omega_n $.\n Since the two Floquet operators $U$ and $(U e^{i\\sigma |r|})$ differ by $e^{i\\sigma |r|}$,\n the distribution of $\\Delta \\omega_n $ is approximately a function of $\\sigma $.\n Then, $M(t)$ is approximately a function $\\sigma $.\n\n\n Finally, we give some remarks on the value of $\\gamma $.\n When $\\Delta \\omega_n$ has a Gaussian distribution, 
$\\overline M(t)$ has a Gaussian decay with $\\gamma =2$,\n as in the case of systems possessing strong chaos.\n In the triangle map, the non-Gaussian decay of fidelity discussed above implies\n that $\\Delta \\omega_n$ does not have a Gaussian distribution.\n Other types of distribution may predict values of $\\gamma $ different from 2, in particular,\n a L\\'{e}vy distribution would give $\\gamma <2$ in agreement with our numerical result.\n We also remark that the results here are not in confliction with numerical results of Ref.~\\cite{EKW05},\n in which only the statistics of $\\omega_n$ (not that of $\\Delta \\omega_n$) is found\n in agreement with the prediction of random matrix theory.\n\n\n \\subsection{Intermediate perturbation strength}\n\n\n \\begin{figure}\n \\includegraphics[width=\\columnwidth]{mt-s01-1-poi-st-N4.EPS}\n \\caption{ (color online).\n Averaged fidelity of $\\sigma$ from 0.1 to 1, with average taken over 1000 randomly chosen\n initial pointer sources, $N=2^{14}=16384$.\n For $\\sigma =0.2$ and above, the averaged fidelity obeys a decaying law which is different from\n that in Eq.~(\\ref{Mt-sigma-t}), in particular, it is not a function of $(\\sigma t)$.\n } \\label{fig-mt-s01-1-poi-st}\n \\end{figure}\n\n\n\n With increasing perturbation strength, exponential decay of $\\overline M(t)$ appears\n (see Fig.~\\ref{fig-mt-s001-01}).\n For $\\sigma $ from 0.02 to 0.1, after some initial times and before approaching\n its saturation value, the fidelity decays as\n \\begin{equation} M_2(t) = \\exp (- a \\sigma t), \\label{Mt-sigma-t} \\end{equation}\n with $a$ as a fitting parameter.\n Numerically, we found that $a \\approx 0.08$.\n The decay rate is proportional to $(\\sigma t)$, unlike in the FGR decay found in systems\n with strong chaos,\n \\begin{equation} M_{\\rm FGR}(t) \\sim \\exp (-2 \\sigma^2 K_E t), \\label{FGR} \\end{equation}\n where $K_E$ is the classical action diffusion constant \\cite{CT02}.\n The curves of $\\sigma =0.02$ and 0.1 in Fig.~\\ref{fig-mt-s001-01} are quite close,\n while that of $\\sigma =0.01$ has some deviation from the two.\n This implies that the $\\exp (- a \\sigma t)$ behavior of $\\overline M(t)$ appears\n between $\\sigma =0.01$ and 0.02.\n Note that vertical shifts have been made for the two curves of\n $\\sigma =0.02$ and 0.1 in Fig.~\\ref{fig-mt-s001-01} for better comparison.\n\n\n \\begin{figure}\n \\includegraphics[width=\\columnwidth]{lglgm-st2p5-s2-10-N5.EPS}\n \\caption{(color online).\n Averaged fidelity at strong perturbation,\n with average taken over 1000 randomly chosen initial Gaussian wavepackets,\n $N=2^{17}=131072$.\n $z=\\epsilon t^{2.5}\/\\hbar $ with $\\hbar $ fixed in this figure.\n The solid line represents a curve $\\exp (-c \\epsilon t^{2.5})$,\n where the fitting parameter $c$ is determined from comparison with the two curves\n of $\\sigma =2$ and 4 in the small-$z$ region.\n } \\label{fig-lglgm}\n \\end{figure}\n\n\n The origin of the non-FGR decay of fidelity\n in this regime of perturbation strength, may come from weak chaos.\n In fact, in another system which also possesses weak chaos in the classical limit,\n namely, the sawtooth map in some parameter regime, linear dependence of the decaying rate\n on $\\sigma$ has also been observed in the intermediate perturbation regime \\cite{WCL04,WL05,foot1}.\n In this regime of perturbation strength, the semiclassical theory predicts that,\n in the first order classical perturbation theory,\n the averaged fidelity is given by \\cite{WCL04}\n \\begin{eqnarray} 
\\overline M(t) \\simeq \\left | \\int d\\Delta S e^{i\\Delta S\/ \\hbar }\n P(\\Delta S)\\right |^2, \\label{Mp-ps} \\end{eqnarray}\n where\n $ \\Delta S( {\\bf p} _0 , {\\bf r} _0 ; t) = \\epsilon \\int_0^t dt' H_1[ {\\bf r} (t')]$\n is the action difference of the two classical trajectories starting at the same\n point $( {\\bf p} _0 , {\\bf r} _0)$ in the two systems,\n with $H_1$ evaluated along one of the two trajectories,\n and $P(\\Delta S)$ is the distribution of $ \\Delta S( {\\bf p} _0 , {\\bf r} _0 ; t)$.\n In systems possessing strong chaos, $P(\\Delta S)$ may have a Gaussian form, which implies\n the FGR decay for the fidelity.\n In the triangle map, $P(\\Delta S)$ is not a Gaussian distribution\n as shown in the inset of Fig.~\\ref{fig-mt-s001-01},\n hence, the fidelity does not have the FGR decay with a rate proportional to $\\sigma^2$.\n\n\n It is difficult to find an analytical expression for $P(\\Delta S)$,\n hence, we cannot derive Eq.~(\\ref{Mt-sigma-t}) analytically.\n However, a qualitative understanding of the $(\\sigma t)$-dependence of $\\overline M(t)$ can be\n gained, as shown in the following arguments.\n Equation (\\ref{Mp-ps}) shows that the time-dependence of fidelity decay comes mainly from\n the dependence of $P(\\Delta S)$ on time.\n In the case of strong chaos, $\\Delta S$ behaves like a random walk, hence,\n $P(\\Delta S)$ has a Gaussian form with a width increasing as $\\sqrt t$ \\cite{CT02}.\n Since $\\Delta S \\propto \\epsilon $, the width of $P(\\Delta S)$ is a function of $(\\epsilon \\sqrt t)$;\n then, Eq.~(\\ref{Mp-ps}) gives the FGR decay of $\\overline M(t)$ which depends on $(\\sigma^2t)$.\n In the case of the triangle map, due to the linear instability of the map, it may happen\n that the width of $P(\\Delta S)$ increases linearly with $ t$\n in some situations when $t$ is not very long.\n This implies that the width of $P(\\Delta S)$ may be a function of the variable $(\\epsilon t)$.\n Then, it is possible for $\\overline M(t)$ to be approximately a function of $(\\sigma t)$.\n\n\n Equation (\\ref{Mp-ps}) predicts that, up to the first order classical perturbation theory,\n the dependence of $\\overline M(t)$ on $\\epsilon $ and $\\hbar $\n takes the form of the single variable $\\sigma =\\epsilon \/ \\hbar $.\n Numerically we found that this is approximately correct, as shown in Fig.~\\ref{fig-s0p1-n11-n12}.\n Specifically, for fixed $\\sigma =0.1$, $\\overline M(t)$ of $N=2^{11}$ and of $N=2^{12}$ separate at about $t=15$.\n Indeed, for long times $t$, higher order contributions in the classical perturbation theory may\n need consideration and $\\overline M(t)$ may depend on $\\epsilon $ and $\\hbar $ in a different way.\n For larger $N$, hence smaller $\\hbar $, the agreement becomes better,\n e.g., $\\overline M(t)$ of $N=2^{12}$ is closer to $N=2^{13}$ than to $N=2^{11}$.\n\n\n\n When $\\sigma $ goes beyond 0.1,\n the exponential decay of $\\overline M(t)$ expressed in Eq.~(\\ref{Mt-sigma-t}) disappears,\n in particular, the dependence of $\\overline M(t)$ on $\\sigma $ and $t$ does not take the form of $(\\sigma t)$\n (see Fig.~\\ref{fig-mt-s01-1-poi-st}).\n Meanwhile, fluctuations of $\\overline M(t)$ become larger and larger with increasing $\\sigma $\n for initial point states.\n For example, Fig.~\\ref{fig-mt-s01-1-poi-st} shows that $\\overline M(t)$ of $\\sigma =1$\n has considerable fluctuations even after averaging over 1000 initial point sources.\n Taking initial Gaussian wavepackets, the fluctuations can be much suppressed.\n\n\n 
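As a side remark, the fidelity curves shown in this section are obtained by iterating the Floquet operator in Eq.~(\\ref{U}) for the unperturbed and the perturbed value of $\\alpha $ and taking the overlap of the two evolved states, using the FFT method mentioned in section II. The following Python sketch illustrates the procedure; the variable names, the initial state, and the number of time steps are illustrative choices only and are not meant to reproduce any particular figure.\n\\begin{verbatim}\nimport numpy as np\n\nN     = 2**12                   # Hilbert-space dimension\nhbar  = 2*np.pi\/N               # effective Planck constant\nalpha = np.pi**2\nbeta  = (np.sqrt(5.0)-1.0)*np.pi\/2\neps   = 1e-3*hbar               # perturbation, sigma = eps\/hbar\n\nj = np.fft.fftfreq(N, d=1.0\/N)  # 0,...,N\/2-1,-N\/2,...,-1\nr = hbar*j                      # position eigenvalues (FFT ordering)\np = hbar*j                      # momentum eigenvalues (FFT ordering)\n\ndef floquet_step(psi, a):\n    # kick phase in position space, then free rotation in momentum space\n    psi = np.exp(1j*(a*np.abs(r) + beta*r)\/hbar)*psi\n    phi = np.fft.fft(psi)\n    phi = np.exp(-1j*p**2\/(2.0*hbar))*phi\n    return np.fft.ifft(phi)\n\npsi0 = np.zeros(N, complex)\npsi0[np.random.randint(N)] = 1.0              # random initial point source\npsi_u, psi_p = psi0.copy(), psi0.copy()\nM = []\nfor t in range(200):\n    psi_u = floquet_step(psi_u, alpha)        # unperturbed evolution\n    psi_p = floquet_step(psi_p, alpha + eps)  # perturbed evolution\n    M.append(abs(np.vdot(psi_p, psi_u))**2)   # fidelity M(t)\n\\end{verbatim}\n Averaging $M(t)$ obtained in this way over many randomly chosen initial states yields the $\\overline M(t)$ curves discussed above.\n\n\n 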
\\begin{figure}\n \\includegraphics[width=\\columnwidth]{mt-sgt1-tavg-N6.EPS}\n \\caption{ (color online).\n Averaged Fidelity for strong perturbation, from top to bottom, $\\sigma =2,4$ and 10.\n The average is taken over 1000 initial Gaussian wavepackets chosen randomly\n and over time from $t-2$ to $t+2$. $N=2^{17}$.\n The time axis is plotted in the logarithm scale.\n It shows that the long time decay of fidelity is slower than power law decay.\n } \\label{fig-mt-sgt1-tavg}\n \\end{figure}\n\n\n\n \\subsection{Strong perturbation regime}\n\n\n The triangle map has vanishing Lyapunov exponent, hence, its fidelity may not have\n the perturbation-independent decay\n which has been observed at strong perturbation in systems possessing exponential\n instability in the classical limit\n \\cite{JP01,BC02,STB03,WCLP05}.\n To understand fidelity decay in the triangle map, it is helpful to recall results\n about the classical fidelity given in \\cite{c-fid-tri}.\n In the classical triangle map, the classical fidelity decays as\n $M_{cl}(t) \\sim \\exp (-c \\epsilon t^{2.5})$\n for initial times when $M_{cl}(t)$ remains close to one,\n and has an exponential decay $\\exp (-c' \\epsilon^{2\/5}t)$ for longer times.\n The interesting feature is that the classical fidelity depends\n on the same scaling variable $\\tau \\equiv \\epsilon t^{2.5}$ in different time regions.\n\n\n In the weak and intermediate perturbation regimes discussed in the previous sections,\n the dependence of fidelity on $\\epsilon $ and $t$ does not take the form of the single\n variable $\\tau$.\n This is not strange, because the classical limit is achieved in the limit\n $\\hbar \\to 0$, which implies $\\sigma \\to \\infty$ for whatever small but fixed $\\epsilon $.\n Therefore, it is the strong perturbation regime in which\n the decaying behavior of fidelity may have some relevance to the classical fidelity.\n Numerical results presented below indeed support this expectation.\n\n\n \\begin{figure}\n \\includegraphics[width=\\columnwidth]{mt-sgt1-tavg-140-1k-N7.EPS}\n \\caption{ (color online).\n The same as in Fig.~\\ref{fig-mt-sgt1-tavg},\n with a different scale for the horizontal axis and\n for the time interval $140 < t < 1000$.\nFor $\\sigma =4$ and 10, $\\log_{10}\\overline M(t)$ form two lines for each $\\sigma$.\nThe three solid lines represent $\\log_{10} M_3(t)$ given by Eq.~(\\ref{loglogt}),\nwith $b=9.6,9.3$, and 8.3 from top to bottom.\n } \\label{fig-mt-sgt1-tavg-140-1k}\n \\end{figure}\n\n\n Figure \\ref{fig-lglgm} shows variation of the averaged fidelity with $\\log_{10}\\epsilon t^{2.5}$,\n with average taken over 1000 initial Gaussian wavepackets chosen randomly.\n The initial decay of the fidelity of $\\sigma =2$ and 4\n are quite close to the classical prediction $\\exp (-c \\epsilon t^{2.5})$.\n For longer times, the fidelity of $\\sigma $ from 2 to 10 (with $\\hbar $ fixed)\n is approximately a function of $\\tau$, the scaling variable predicted in the classical case,\n but, the decaying behavior of fidelity\n is not the same as that of the classical fidelity, i.e., not an exponential decay.\n We found that the dependence of $\\overline M(t)$ on $\\hbar$ does not take the form of $\\tau \/ \\hbar$,\n i.e., $\\overline M(t)$ is not a function of the single variable $(\\tau \/ \\hbar )$.\n\n\n For long times, the fidelity has large fluctuations even after averaging over 1000\n initial Gaussian wavepackets.\n The fluctuations can be much suppressed, when a further average is taken for time $t$ .\n 
Specifically, for each time $t$, we average $\\overline M(t')$ over $t'$ from $t-2$ to $t+2$.\n The results are given in Fig.~\\ref{fig-mt-sgt1-tavg},\n which shows that the long time decay of fidelity is slower than power law decay.\n To characterize this slower-than-power-law decay,\n we compare it with the function\n \\begin{equation} M_3(t) = a(\\log_{10} t)^{-b}, \\label{loglogt} \\end{equation}\n with $a$ and $b$ as fitting parameters.\n In the time interval $140 < t < 1000$, the averaged fidelity can be fitted by this function,\n as shown in Fig.~\\ref{fig-mt-sgt1-tavg-140-1k}, where we plot $\\log_{10} M(t)$ versus\n $ \\log_{10} (\\log_{10} t)$.\n Further research work is needed to find analytical explanations for this slower-than-power-law decay\n of fidelity.\n\n\n\\vspace{1cm}\n\n \\section{Conclusions and Discussions}\n\n\n We present numerical results on fidelity decay in the triangle map with linear instability.\n Three regimes of fidelity decay have been found with respect to the perturbation strength:\n weak, intermediate and strong.\n At weak perturbation, the fidelity decays like $\\exp (-c \\sigma^2 t^{1.7})$.\n In the intermediate regime, the fidelity has an exponential decay\n which is approximately $\\exp (-c' \\sigma t)$.\n In the regime of strong perturbation, the fidelity is approximately a function of\n $\\epsilon t^{2.5}$\n and decays slower than power law decay for long times.\n\n\n These results show that the fidelity in the triangle map obeys decaying laws which are\n different from those in systems with strong chaos or with regular motion.\n The difference is closely related to the weak-chaos feature of the classical triangle map.\n In which way and to what extent does weak chaos influence the fidelity decay?\n This is still an open question.\n Indeed, common features of fidelity decay in systems with weak chaos, as well as\n their explanations, should be an interesting topic for future research work.\n In particular, one may note that stretched exponential decay of fidelity has also been observed\n for wave packets which initially reside on the border between chaotic\n and regular regions in mixed-type systems \\cite{WLT02}.\n\n\nACKNOWLEDGMENTS. The author is very grateful to G.~Casati and T.~Prosen\nfor valuable discussions and suggestions.\nThis work is partially supported by Natural Science Foundation of China Grant\nNo.~10775123 and the start-up funding of USTC.\n\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Acknowledgments}\n\nThis research is based upon work supported in part by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under Award Number DE-SC0021398. This paper was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. 
The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.\n\n\n\n\n\\section{Equivalence of Incremental and Terminal Information Gain in sOED} \n\\label{app:incre_terminal}\n\n\\begin{proof}[Proof of \\cref{prop:terminal_incremental}]\nUpon substituting \\cref{eq:terminal1,eq:terminal_info_gN} into \\cref{eq:expected_utility}, the expected utility for a given deterministic policy $\\pi$ using the terminal formulation is\n\\begin{align}\n U_T(\\pi)&=\\mathbb{E}_{y_0,...,y_{N-1}|\\pi,x_0}\\[\\int_{\\Theta} p(\\theta|I_N)\\ln{\\frac{p(\\theta|I_N)}{p(\\theta|I_0)}}\\,d\\theta \\] \\nonumber\\\\\n &= \\mathbb{E}_{I_1,\\dots,I_N|\\pi,x_0}\\[\\int_{\\Theta} p(\\theta|I_N)\\ln{\\frac{p(\\theta|I_N)}{p(\\theta|I_0)}}\\,d\\theta \\]\n \\label{e:app_UT}\n\\end{align}\nwhere recall $I_k=\\{ d_0,y_0,\\dots,d_{k-1},y_{k-1} \\}$ (and $I_0=\\emptyset$).\nSimilarly, substituting \\cref{eq:incremental1,eq:incremental2}, the expected utility for the same policy $\\pi$ using the incremental formulation is\n\\begin{align}\n U_I(\\pi)&=\\mathbb{E}_{y_0,...,y_{N-1}|\\pi,x_0}\\[\\sum_{k=1}^N \\int_{\\Theta} p(\\theta|I_k)\\ln{\\frac{p(\\theta|I_k)}{p(\\theta|I_{k-1})}}\\,d\\theta \\] \\nonumber\\\\\n &=\\mathbb{E}_{I_1,\\dots,I_N|\\pi,x_0}\\[\\sum_{k=1}^N \\int_{\\Theta} p(\\theta|I_k)\\ln{\\frac{p(\\theta|I_k)}{p(\\theta|I_{k-1})}}\\,d\\theta \\].\n \\label{e:app_UI}\n\\end{align}\nIn both cases, \n$\\mathbb{E}_{y_0,\\dots,y_{N-1}|\\pi,x_0}$ can be equivalently replaced by $\\mathbb{E}_{I_1,\\dots,I_N|\\pi,x_0}$ since\n\\begin{align*}\n \\mathbb{E}_{I_1,\\dots,I_N |\\pi,x_0} \\[\\cdots\\] &= \\mathbb{E}_{d_0,y_0,d_1,y_1,\\dots,d_{N-1},y_{N-1} | \\pi,x_0} \\[\\cdots\\] \\\\\n &= \\mathbb{E}_{d_0|\\pi} \\mathbb{E}_{y_0,d_1,y_1,\\dots,d_{N-1},y_{N-1}|\\pi,x_0,d_0} \\[\\cdots\\] \\\\\n &= \\mathbb{E}_{y_0,d_1,y_1,\\dots,d_{N-1},y_{N-1}|\\pi,x_0,\\mu_0(x_0)} \\[\\cdots\\] \\\\\n &= \\mathbb{E}_{y_0,d_1,y_1,\\dots,d_{N-1},y_{N-1}|\\pi,x_0} \\[\\cdots\\] \\\\\n &= \\mathbb{E}_{y_0|\\pi,x_0} \\mathbb{E}_{d_1|\\pi,x_0,y_0} \\mathbb{E}_{y_1,\\dots,d_{N-1},y_{N-1}|\\pi,x_0,y_0,d_1} \\[\\cdots\\] \\\\\n &= \\mathbb{E}_{y_0|\\pi,x_0} \\mathbb{E}_{y_1,\\dots,d_{N-1},y_{N-1}|\\pi,x_0,y_0,\\mu_1(x_1)} \\[\\cdots\\] \\\\\n &= \\mathbb{E}_{y_0|\\pi,x_0} \\mathbb{E}_{y_1,\\dots,d_{N-1},y_{N-1}|\\pi,x_0,y_0} \\[\\cdots\\] \\\\\n &= \\mathbb{E}_{y_0|\\pi,x_0} \\mathbb{E}_{y_1|\\pi,x_0,y_0} \\mathbb{E}_{d_2,\\dots,d_{N-1},y_{N-1}|\\pi,x_0,y_0,y_1} \\[\\cdots\\] \\\\\n & \\qquad\\vdots \\\\\n &= \\mathbb{E}_{y_0|\\pi,x_0} \\mathbb{E}_{y_1|\\pi,x_0,y_0} \\cdots \\mathbb{E}_{y_{N-1}|\\pi,x_0,y_0,y_1,\\dots,y_{N-2},\\mu_{N-1}(x_{N-1})} \\[\\cdots\\] \\\\\n &= \\mathbb{E}_{y_0|\\pi,x_0} \\mathbb{E}_{y_1|\\pi,x_0,y_0} \\cdots \\mathbb{E}_{y_{N-1}|\\pi,x_0,y_0,y_1,\\dots,y_{N-2}} \\[\\cdots\\] \\\\\n &= \\mathbb{E}_{y_0,\\dots,y_{N-1}|\\pi,x_0} \\[\\cdots\\],\n\\end{align*}\nwhere the third equality is due to the deterministic policy (Dirac delta function) $d_0=\\mu_0(x_0)$, the fourth equality is due to \n$\\mu_0(x_0)$ being known if $\\pi$ and $x_0$ are given. The seventh equality is due to $\\mu_1(x_1)$ being known if $\\pi$ and $x_1$ are given, and $x_1$ is known if $x_0$, $d_0=\\mu_0(x_0)$ and $y_0$ are given, and $\\mu_0(x_0)$ is known if $\\pi$ and $x_0$ are given, so overall $\\mu_1(x_1)$ is known if $\\pi$, $x_0$ and $y_0$ are given.\nThe eighth to second-to-last equalities all apply the same reasoning recursively. 
The last equality brings the expression back to a conditional joint expectation. \n\nTaking the difference between \\cref{e:app_UT} and \\cref{e:app_UI}, we obtain\n\\begin{align*}\n &U_I(\\pi) - U_T(\\pi)\\\\\n &=\\mathbb{E}_{I_1,\\dots,I_N|\\pi,x_0}\\[\\sum_{k=1}^N \\int_{\\Theta} p(\\theta|I_k)\\ln{\\frac{p(\\theta|I_k)}{p(\\theta|I_{k-1})}}\\,d\\theta - \\int_{\\Theta} p(\\theta|I_N)\\ln{\\frac{p(\\theta|I_N)}{p(\\theta|I_0)}}\\,d\\theta \\]\\\\\n &=\\int_{\\Theta}\\mathbb{E}_{I_1,\\dots,I_N|\\pi,x_0}\\[ \\sum_{k=1}^N p(\\theta|I_k)\\ln{\\frac{p(\\theta|I_k)}{p(\\theta|I_{k-1})}} - p(\\theta|I_N)\\ln{\\frac{p(\\theta|I_N)}{p(\\theta|I_0)}} \\]\\, d\\theta\\\\\n &=\\int_{\\Theta}\\mathbb{E}_{I_1,\\dots,I_N|\\pi,x_0}\\[ \\sum_{k=1}^{N-1} p(\\theta|I_k)\\ln{\\frac{p(\\theta|I_k)}{p(\\theta|I_{k-1})}} + p(\\theta|I_N)\\ln{\\frac{p(\\theta|I_0)}{p(\\theta|I_{N-1})}} \\]\\,d\\theta\\\\\n &=\\int_{\\Theta}\\mathbb{E}_{I_1,\\dots,I_{N-1}|\\pi,x_0} \\int_{I_N} p(I_N|I_{N-1},\\pi) \\[ \\sum_{k=1}^{N-1} p(\\theta|I_k)\\ln{\\frac{p(\\theta|I_k)}{p(\\theta|I_{k-1})}} + p(\\theta|I_N)\\ln{\\frac{p(\\theta|I_0)}{p(\\theta|I_{N-1})}} \\]\\,dI_N\\,d\\theta\\\\\n &=\\int_{\\Theta}\\mathbb{E}_{I_1,\\dots,I_{N-1}|\\pi,x_0} \\[ \\sum_{k=1}^{N-1} p(\\theta|I_k)\\ln{\\frac{p(\\theta|I_k)}{p(\\theta|I_{k-1})}} + \\int_{I_N} p(\\theta,I_N|I_{N-1},\\pi)\\ln{\\frac{p(\\theta|I_0)}{p(\\theta|I_{N-1})}}\\,dI_N \\]\\,d\\theta\\\\\n &=\\int_{\\Theta}\\mathbb{E}_{I_1,\\dots,I_{N-1}|\\pi,x_0} \\[ \\sum_{k=1}^{N-1} p(\\theta|I_k)\\ln{\\frac{p(\\theta|I_k)}{p(\\theta|I_{k-1})}} + p(\\theta|I_{N-1})\\ln{\\frac{p(\\theta|I_0)}{p(\\theta|I_{N-1})}} \\]\\,d\\theta\\\\\n &=\\int_{\\Theta}\\mathbb{E}_{I_1,\\dots,I_{N-1}|\\pi,x_0} \\[ \\sum_{k=1}^{N-2} p(\\theta|I_k)\\ln{\\frac{p(\\theta|I_k)}{p(\\theta|I_{k-1})}} + p(\\theta|I_{N-1})\\ln{\\frac{p(\\theta|I_0)}{p(\\theta|I_{N-2})}} \\]\\,d\\theta\\\\\n &=\\int_{\\Theta}\\mathbb{E}_{I_1,\\dots,I_{N-2}|\\pi,x_0} \\[ \\sum_{k=1}^{N-3} p(\\theta|I_k)\\ln{\\frac{p(\\theta|I_k)}{p(\\theta|I_{k-1})}} + p(\\theta|I_{N-2})\\ln{\\frac{p(\\theta|I_0)}{p(\\theta|I_{N-3})}} \\]\\,d\\theta\\\\\n &\\qquad \\vdots \\\\\n &=\\int_{\\Theta}\\mathbb{E}_{I_1|\\pi,x_0} \\[ p(\\theta|I_{1})\\ln{\\frac{p(\\theta|I_0)}{p(\\theta|I_0)}} \\]\\,d\\theta\\\\\n &=0,\n\\end{align*}\n\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\where the third equality takes the last term from the sigma-summation and combines it with the last term, the fourth equality expands the expectation and uses $p(I_N|I_1,\\ldots,I_{N-1},\\pi) = p(I_N|I_{N-1},\\pi)$, the fifth equality makes use of $p(\\theta|I_N)=p(\\theta|I_N,\\pi)$, and the seventh to second-to-last equalities repeat the same procedures recursively. \nHence, $U_T(\\pi)=U_I(\\pi)$.\n\\end{proof}\n\n\\section{Policy Gradient Expression}\n\\label{app:pg_derive}\n\nOur proof for \\cref{thm:PG} follows the proof given by \\cite{silver2014deterministic} for a general infinite-horizon MDP. 
\nBefore presenting our proof, we first introduce a shorthand notation for writing the state transition probability:\n\\begin{align}\np(x_k \\rightarrow x_{k+1}|\\pi_w)=p(x_{k+1}|x_k,\\mu_{k,w}(x_k)).\n\\end{align}\nWhen taking an expectation over consecutive state transitions, we further use the simplifying notation\n\\begin{align}\n&\\int_{x_{k+1}}p(x_k \\rightarrow x_{k+1}|\\pi_w) \\int_{x_{k+2}} p(x_{k+1} \\rightarrow x_{k+2}|\\pi_w) \\nonumber\\\\\n&\\qquad \\cdots \\int_{x_{k+m}} p(x_{k+(m-1)} \\rightarrow x_{k+m}|\\pi_w) \\[\\cdots\\] \\,dx_{k+1} \\, dx_{k+2} \\cdots \\, dx_{k+m} \\nonumber\\\\ \n&= \\int_{x_{k+m}} p(x_k \\rightarrow x_{k+m}|\\pi_w) \\[\\cdots\\] \\, dx_{k+m}\n\\\\ \n&= \\mathbb{E}_{x_{k+m} | \\pi_w, x_k} \\[\\cdots\\].\n\\end{align}\n\nTo avoid notation congestion, below we will also omit the subscript on $w$ and shorten $\\mu_{k,w_k}(x_k)$ to $\\mu_{k,w}(x_k)$, with the understanding that $w$ takes the same subscript as the $\\mu$ function. \n\n\\begin{proof}[Proof of \\cref{thm:PG}]\n\nWe begin by recognizing that the gradient of expected utility in \\cref{eq:expected_utility_w} can be written using the V-function:\n\\begin{align}\n \\nabla_w U(w) = \\nabla_w V^{\\pi_w}_0(x_0).\\label{e:gradU_derive}\n\\end{align}\nThe goal is then to derive the gradient expression for the V-functions. \n\nWe apply the definitions and recursive relations for the V- and Q-functions, and obtain a recursive relationship for the gradient of V-function:\n\\begin{align}\n \\nabla_w V^{\\pi_w}_k(x_k) \n &= \\nabla_w Q^{\\pi_w}_k(x_k,\\mu_{k,w}(x_k)) \n \\nonumber\\\\\n &= \\nabla_w \\Bigg[ \\int_{y_k} p(y_k|x_k,\\mu_{k,w}(x_k))g_k(x_k,\\mu_{k,w}(x_k),y_k)\\,dy_k \\nonumber\\\\\n &\\qquad\\qquad + \\int_{x_{k+1}} p(x_{k+1}|x_k,\\mu_{k,w}(x_k)) V^{\\pi_w}_{k+1}(x_{k+1}) \\,dx_{k+1} \\Bigg] \\nonumber\\\\\n &= \\nabla_w \\int_{y_k} p(y_k|x_k,\\mu_{k,w}(x_k))g_k(x_k,\\mu_{k,w}(x_k),y_k)\\,dy_k \\nonumber\\\\\n &\\qquad\\qquad + \\nabla_w \\int_{x_{k+1}} p(x_{k+1}|x_k,\\mu_{k,w}(x_k)) V^{\\pi_w}_{k+1}(x_{k+1}) \\,dx_{k+1} \\nonumber\\\\\n &= \\int_{y_k} \\nabla_w \\mu_{k,w}(x_k) \\nabla_{d_k} \\[ p(y_k|x_k,d_k) g_k(x_k,d_k,y_k) \\]\\Big|_{d_k=\\mu_{k,w}(x_k)} \\,dy_k \\nonumber\\\\\n &\\qquad\\qquad + \\int_{x_{k+1}} \\Big[ p(x_{k+1}|x_k,\\mu_{k,w}(x_k)) \\nabla_w V^{\\pi_w}_{k+1}(x_{k+1}) \n \\nonumber\\\\ \n &\\qquad\\qquad +\\nabla_w \\mu_{k,w}(x_k) \\nabla_{d_k} p(x_{k+1}|x_k,d_k)\\Big|_{d_k=\\mu_{k,w}(x_k)} V^{\\pi_w}_{k+1}(x_{k+1}) \\Big] \\,dx_{k+1} \\nonumber\\\\\n &= \\nabla_w \\mu_{k,w}(x_k) \\nabla_{d_k} \\Bigg[ \\int_{y_k} p(y_k|x_k,d_k) g_k(x_k,d_k,y_k) \\,dy_k \n \\nonumber\\\\\n &\\qquad\\qquad\\qquad\\qquad + \\int_{x_{k+1}} p(x_{k+1}|x_k,d_k)V^{\\pi_w}_{k+1}(x_{k+1})dx_{k+1} \\Bigg]\\Bigg\\vert_{d_k=\\mu_{k,w}(x_k)} \\nonumber\\\\\n &\\qquad\\qquad + \\int_{x_{k+1}} p(x_{k+1}|x_k,\\mu_{k,w}(x_k)) \\nabla_w V^{\\pi_w}_{k+1}(x_{k+1}) \\,dx_{k+1} \\nonumber\\\\\n &= \\nabla_w \\mu_{k,w}(x_k) \\nabla_{d_k} Q^{\\pi_w}_{k}(x_k,d_k)\\Big|_{d_k=\\mu_{k,w}(x_k)} \n \\label{e:gradV_recursive}\\\\\n &\\qquad\\qquad + \\int_{x_{k+1}} p(x_k \\rightarrow x_{k+1}|\\pi_w) \\nabla_w V^{\\pi_w}_{k+1}(x_{k+1}) \\,dx_{k+1}. 
\\nonumber\n\\end{align}\nApplying the recursive formula \\cref{e:gradV_recursive} to itself repeatedly and expanding out the overall expression,\nwe obtain\n\\begin{align}\n &\\nabla_w V^{\\pi_w}_k(x_k) \\nonumber\\\\\n &= \\nabla_w \\mu_{k,w}(x_k) \\nabla_{d_k} Q^{\\pi_w}_k(x_k,d_k)\\Big|_{d_k=\\mu_{k,w}(x_k)} \\nonumber\\\\\n &\\qquad + \\int_{x_{k+1}} p(x_k \\rightarrow x_{k+1}|\\pi_w) \\nabla_w \\mu_{k+1,w}(x_{k+1}) \\nabla_{d_{k+1}} Q^{\\pi_w}_{k+1}(x_{k+1},d_{k+1})\\Big|_{d_{k+1}=\\mu_{k+1,w}(x_{k+1})} \\,dx_{k+1} \\nonumber\\\\\n &\\qquad + \\int_{x_{k+1}} p(x_k \\rightarrow x_{k+1}|\\pi_w) \\int_{x_{k+2}} p(x_{k+1} \\rightarrow x_{k+2}|\\pi_w) \\nabla_w V^{\\pi_w}_{k+2}(x_{k+2}) \\,dx_{k+2} \\,dx_{k+1} \\nonumber\\\\\n &= \\nabla_w \\mu_{k,w}(x_k) \\nabla_{d_k} Q^{\\pi_w}_k(x_k,d_k)\\Big|_{d_k=\\mu_{k,w}(x_k)} \\nonumber\\\\\n &\\qquad + \\int_{x_{k+1}} p(x_k \\rightarrow x_{k+1}|\\pi_w) \\nabla_w \\mu_{k+1,w}(x_{k+1}) \\nabla_{d_{k+1}} Q^{\\pi_w}_{k+1}(x_{k+1},d_{k+1})\\Big|_{d_{k+1}=\\mu_{k+1,w}(x_{k+1})} \\,dx_{k+1} \\nonumber\\\\\n &\\qquad + \\int_{x_{k+2}} p(x_{k} \\rightarrow x_{k+2}|\\pi_w) \\nabla_w V^{\\pi_w}_{k+2}(x_{k+2}) \\,dx_{k+2} \\nonumber\\\\\n &= \\nabla_w \\mu_{k,w}(x_k) \\nabla_{d_k} Q^{\\pi_w}_k(x_k,d_k)\\Big|_{d_k=\\mu_{k,w}(x_k)} \\nonumber\\\\\n &\\qquad + \\int_{x_{k+1}} p(x_k \\rightarrow x_{k+1}|\\pi_w) \\nabla_w \\mu_{k+1,w}(x_{k+1}) \\nabla_{d_{k+1}} Q^{\\pi_w}_{k+1}(x_{k+1},d_{k+1})\\Big|_{d_{k+1}=\\mu_{k+1,w}(x_{k+1})} \\,dx_{k+1} \\nonumber\\\\\n &\\qquad + \\int_{x_{k+2}} p(x_k \\rightarrow x_{k+2}|\\pi_w) \\nabla_w \\mu_{k+2,w}(x_{k+2}) \\nabla_{d_{k+2}} Q^{\\pi_w}_{k+2}(x_{k+2},d_{k+2})\\Big|_{d_{k+2}=\\mu_{k+2,w}(x_{k+2})} \\,dx_{k+2}\\nonumber\\\\\n &\\hspace{2.5em}\\vdots \\nonumber\\\\\n &\\qquad + \\int_{x_{N}} p(x_{k} \\rightarrow x_{N}|\\pi_w) \\nabla_w V^{\\pi_w}_{N}(x_{N}) \\,dx_{N} \\nonumber\\\\\n &= \\sum_{l=k}^{N-1} \\int_{x_l} p(x_k \\rightarrow x_l|\\pi_w) \\nabla_w \\mu_{l,w}(x_l) \\nabla_{d_l} Q^{\\pi_w}_l(x_l,d_l)\\Big|_{d_l=\\mu_{l,w}(x_l)} \\,dx_l\\nonumber\\\\\n &= \\sum_{l=k}^{N-1} \\mathbb{E}_{x_l| \\pi_w, x_k} \\[\\nabla_w \\mu_{l,w}(x_l) \\nabla_{d_l} Q^{\\pi_w}_l(x_l,d_l)\\Big|_{d_l=\\mu_{l,w}(x_l)}\\] \\,dx_l,\\label{e:gradV_final}\n\\end{align}\nwhere for the second-to-last equality, we absorb the first term into the sigma-notation by using\n\\begin{align*}\n& \\nabla_w \\mu_{{k},w}(x_{k}) \\nabla_{d_{k}} Q^{\\pi_w}_{k}(x_{k},d_{k})\\Big|_{d_{k}=\\mu_{k,w}(x_{k})} \\nonumber\\\\\n& \\qquad = \\int_{x_{k}} p(x_k | x_{k},\\mu_{k,w}(x_k)) \\nabla_w \\mu_{{k},w}(x_{k}) \\nabla_{d_{k}} Q^{\\pi_w}_{k}(x_{k},d_{k})\\Big|_{d_{k}=\\mu_{k,w}(x_{k})} \\,dx_{k}\n\\nonumber\\\\\n& \\qquad = \\int_{x_{k}} p(x_k \\rightarrow x_{k}|\\pi_w) \\nabla_w \\mu_{{k},w}(x_{k}) \\nabla_{d_{k}} Q^{\\pi_w}_{k}(x_{k},d_{k})\\Big|_{d_{k}=\\mu_{k,w}(x_{k})} \\,dx_{k},\n\\end{align*}\nand we eliminate the last term in the summation since\n$\\nabla_w V^{\\pi_w}_{N}(x_{N})=\\nabla_w g_{N}(x_{N})=0$.\n\nAt last, substituting \\cref{e:gradV_final} into \n\\cref{e:gradU_derive}, we obtain the policy gradient expression:\n\\begin{align}\n \\nabla_w U(w) &= \\nabla_w V^{\\pi_w}_0(x_0) \\nonumber\\\\\n &= \\sum_{l=0}^{N-1} \\mathbb{E}_{x_l|\\pi_w,x_0} \\[ \\nabla_w \\mu_{l,w}(x_l) \\nabla_{d_l} Q^{\\pi_w}_l(x_l,d_l)\\Big|_{d_l=\\mu_{l,w}(x_l)} \\]. 
\\nonumber\n \\end{align}\nRenaming the iterator from $l$ to $k$ arrives at \\cref{eq:pg_theorem} in \\cref{thm:PG}, completing the proof.\n\\end{proof}\n\n\\section{Conclusions}\n\\label{sec:conclusions}\n\n\nThis paper presents a mathematical framework and computational methods to optimally design a finite number of sequential experiments (sOED); the code is available at \\url{https:\/\/github.com\/wgshen\/sOED}. \nWe formulate sOED as a finite-horizon POMDP. \nThis sOED form is provably optimal, incorporates both elements of feedback and lookahead, and generalizes the suboptimal batch (static) and greedy (myopic) design strategies. \nWe further structure the sOED problem in a fully Bayesian manner and with information-theoretic rewards (utilities), and prove the equivalence of incremental and terminal information gain setups. In particular, sOED can accommodate expensive nonlinear forward models with general non-Gaussian posteriors of continuous random variables. \n\n\n\n\nWe then introduce numerical methods for solving the sOED problem, which entails finding the optimal policy that maximizes the expected total reward.\nAt the core of our approach is PG, an actor-critic RL technique that parameterizes and learns both the policy and value functions in order to extract the gradient with respect to the policy parameters.\nWe derive and prove the PG expression for finite-horizon sOED, and propose an MC estimator. \nAccessing derivative information enables the use of gradient-based optimization algorithms to achieve efficient policy search. \nSpecifically, we parameterize the policy and value functions as DNNs, and detail architecture choices that accommodate a nonparametric representation of the Bayesian posterior belief states. Further combined with a terminal information gain formulation, the Bayesian inference becomes embedded in the design sequence, allowing us to sidestep the need for explicitly and numerically computing the Bayesian posteriors at intermediate experiments.\n\n\n\n\nWe apply the overall PG-sOED method to two different examples.\nThe first is a linear-Gaussian problem that offers a closed form solution, serving as a benchmark. We validate the PG-sOED policy against the analytic optimal policy, and observe orders-of-magnitude speedups of PG-sOED over an ADP-sOED baseline.\nThe second entails a problem of contaminant source inversion in a convection-diffusion field. Through multiple sub-cases, we illustrate the advantages of PG-sOED over greedy and batch designs, and provide insights to the value of feedback and lookahead in the context of time-dependent convection-diffusion processes. \nThis demonstration also illustrates the ability of PG-sOED to accommodate expensive forward models with nonlinear physics and dynamics. \n\n\nThe main limitation of the current PG-sOED method is its inability \nto handle high-dimensional settings. While the nonparametric representation sidesteps the need to compute intermediate posteriors, Bayesian inference is ultimately required in order to estimate the KL divergence in the terminal reward.\nThus, an important direction of future work is to improve scalability for high-dimensional inference, to go beyond the current gridding method. 
This may be approached by employing more general and approximate inference methods such as MCMC, variational inference, approximate Bayesian computation, and transport maps, perhaps in combination with dimension-reduction techniques.\n\nAnother fruitful area to explore is within advanced RL techniques\n(e.g., \\cite{mnih2015human,lillicrap2015continuous,mnih2013playing, \nschulman2017proximal}).\nFor example, replay buffer stores the experienced episodes, and training data can be sampled from this buffer to reduce sampling costs, control correlation among samples, and reach better convergence performance. \nOff-policy algorithms track two version of the policy network and Q-network---a behavior network for determining actions and a target network for learning---which have demonstrated improved sample efficiency.\nParameters of the policy and Q-networks may also be shared due to their similar features.\nFinally, adopting new utility measures, such as those reflecting goal-orientedness, robustness, and risk, would be of great interest to better capture the value of experiments and data in real-life and practical settings. \n\n\n\n\\section{Problem Formulation}\n\\label{sec:formulation}\n\n\n\\subsection{The Bayesian Paradigm}\n\nWe consider designing a finite\\footnote{In experimental design, the experiments are generally expensive and limited in number. Finite and small values of $N$ are therefore of interest. \nThis is in contrast to RL that often deals with infinite horizon.} number of $N$ experiments, indexed by integers $k=0,1,\\ldots,N-1$.\nWhile the decision of how many experiments to perform (i.e. choice of $N$) is important, it is \nnot considered\nin this paper; instead, we assume $N$ is given and fixed.\nFurthermore, let \n$\\theta\\in \\mathbb{R}^{N_{\\theta}}$ denote the unknown model parameter we seek to \nlearn\nfrom the experiments, $d_k \\in \\mathcal{D}_k\\subseteq \\mathbb{R}^{N_d}$ the experimental design variable for the $k$th experiment (e.g., \nexperiment conditions),\n$y_k \\in \\mathbb{R}^{N_y}$ the noisy observation from the $k$th experiment (i.e. experiment measurements), and $N_{\\theta}$, $N_{d}$, and $N_{y}$ respectively the dimensions of parameter, design, and observation spaces. We further consider continuous $\\theta$, $d_k$, and $y_k$, although discrete or mixed settings can be accommodated as well.\nFor simplicity,\nwe also let $N_d$ and $N_y$ be constant across all experiments, but this is not a requirement.\n\nA Bayesian approach treats $\\theta$ as a random variable. \nAfter performing the $k$th \nexperiment, its conditional probability density function (PDF) is described by Bayes' rule:\n\\begin{align}\n \\label{eq:bayes_rule}\n p(\\theta|d_k,y_k,I_k) = \\frac{p(y_k|\\theta,d_k,I_k)p(\\theta|I_k)}{p(y_k|d_k,I_k)}\n\\end{align}\nwhere $I_k=\\{ d_0,y_0,\\dots,d_{k-1},y_{k-1} \\}$ (and $I_0=\\emptyset$) is the information set collecting the design and observation records from all experiments prior to the $k$th experiment, $p(\\theta|I_k)$ is the prior PDF for the $k$th experiment,\n$p(y_k|\\theta,d_k,I_k)$ is the likelihood function,\n$p(y_k|d_k,I_k)$ is the model evidence (or marginal likelihood, which is constant with respect to $\\theta$),\nand $p(\\theta|d_k,y_k,I_k)$ is the posterior PDF. 
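\nAs a concrete illustration of \\cref{eq:bayes_rule} (a schematic sketch, not taken from the released implementation), consider a scalar $\\theta$ discretized on a grid with an additive Gaussian noise model; one Bayesian update then reduces to reweighting and renormalizing the prior probabilities, where the grid, the noise level, and the forward model are user-supplied stand-ins:\n\\begin{verbatim}\nimport numpy as np\n\ndef bayes_update(prior, theta_grid, d, y, forward, sigma):\n    # prior   : probabilities p(theta | I_k) on theta_grid\n    # forward : forward model G_k(theta, d), vectorized over theta_grid\n    # sigma   : std. dev. of the Gaussian observation noise\n    like = np.exp(-0.5*((y - forward(theta_grid, d))\/sigma)**2)  # likelihood\n    post = prior*like\n    return post\/post.sum()      # normalize by the discretized evidence\n\\end{verbatim}\n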
The prior is then a representation of the uncertainty about $\\theta$ before \nthe $k$th experiment, and the posterior describes the updated uncertainty about $\\theta$ after having observed the outcome from the $k$th experiment.\nIn \\cref{eq:bayes_rule}, we also simplify the prior $p(\\theta|d_k,I_k)=p(\\theta|I_{k})$, invoking a reasonable assumption that knowing only the design for $k$th experiment (but without knowing its outcome) would not affect the prior. \nThe likelihood function carries the relation between the hidden parameter $\\theta$ and the observable $y_k$, through a forward model $G_k$ that governs the underlying process for the $k$th experiment (e.g., constrained via a system of partial differential equations (PDEs)). For example, a common likelihood form is \n\\begin{align}\n y_k = G_k(\\theta, d_k; I_k) + \\epsilon_k,\n\\end{align}\nwhere $\\epsilon_k$ is a Gaussian random variable that describes the discrepancy between model prediction $G_k$ and observation $y_k$ due to, for instance, measurement noise. The inclusion of $I_k$ in $G_k$ signifies that model behavior may be affected by previous experiments. Each evaluation of the likelihood $p(y_k|\\theta,d_k,I_k) = p_{\\epsilon}(y_k-G_k(\\theta,d_k; I_k))$ thus involves a forward model solve, typically the most expensive part of the computation.\nLastly, \nthe posterior $p(\\theta|d_k,y_k,I_k)=p(\\theta|I_{k+1})$ becomes the prior for the $(k+1)$th experiment via the same form of \\cref{eq:bayes_rule}. Hence, Bayes' rule can be consistently and recursively applied for a sequence of multiple experiments. \n\n\\subsection{Sequential Optimal Experimental Design}\n\\label{sec:math_formulation}\n\nWe now present a general framework for sOED, posed as a POMDP.\nAn overview flowchart for sOED is presented in \\cref{fig:process} to accompany the definitions below.\n\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=0.95\\linewidth]{Figures\/process.jpg}\n \\caption{Flowchart of the process involved in a $N$-experiment sOED.}\n \\label{fig:process}\n\\end{figure}\n\n\\textbf{State.} We introduce the state variable $x_k=[x_{k,b},x_{k,p}] \\in \\mathcal{X}_k$\nto be the state prior to designing and performing the $k$th experiment. \nHence, \n$x_0,\\ldots,x_{N-1}$ denote the respective states prior to each of the $N$ experiments, and $x_1,\\ldots,x_N$ denote the respective states after each of the $N$ experiments.\nThe state is an entity that summarizes past information needed for making experimental design decisions in the future. \nIt is very general and can contain different quantities deemed to be decision-relevant. \nIn our case here, the state consists of a belief state $x_{k,b}$ reflecting our state of uncertainty about the hidden $\\theta$, and a physical state $x_{k,p}$ carrying other non-random variables pertinent to the design problem. \nSince $\\theta$ is not observable and can be only inferred from noisy and indirect observations $y_k$ through Bayes' rule in \\cref{eq:bayes_rule}, this setup can be viewed as a POMDP for $\\theta$ (or a MDP for $x_k$).\n\nConceptually, a \\emph{realization} of the belief state manifests as \nthe continuous posterior (conditional) random variable \n$(x_{k,b} = x'_{k,b}) = (\\theta|I_k=I_k')$, \nwhere the prime denotes realization. 
Such a random variable can be \nportrayed by, for example, its PDF, cumulative distribution function, or characteristic function\\footnote{\nIt is possible for $\\theta|I_k$'s with different $I_k'$'s to have the same PDF (or distribution or characteristic function), for example simply by exchanging the experiments. Hence, the mappings from $I_k$ to these portrayals (PDF, distribution, characteristic functions) are non-injective. This may be problematic when considering transition probabilities of the belief state, but avoided if we keep to our root definition of belief state based on $I_k$, which remains unique.}.\nAttempting to directly represent these infinite-dimensional quantities in practice would require some finite-dimensional approximation or discretization.\nAlternatively, one can adopt a nonparametric approach and track \n$I_k$\n(from a given initial $x_0$),\nwhich then yields a \nrepresentation of $x_{k}$ (both $x_{k,b}$ and $x_{k,p}$) without any approximation\\footnote{$I_k$ collects the complete history of experiments and their observations, therefore is a sufficient statistic for $x_k$ by definition. Hence, if $I_k$ is known, then the full state $x_k$ is equivalently represented. \nAll of these are conditioned on a given initial $x_0$ (which includes the prior on $\\theta$), but for simplicity we will omit this conditioning when writing the PDFs in this paper, with the understanding that it is always implied. }\nbut its dimension grows with $k$. However, the dimension is always bounded since the maximum number of experiments considered is finite (i.e. $k < N$).\nIn any case, the belief state space is uncountably infinite since $\\theta$ is a continuous random variable (i.e. the possible posteriors that can be realized is uncountably infinite).\nWe will further detail our numerical representation of the belief state in \\cref{sec:policy_net} and \\cref{sec:numerical_belief_state}.\n\n\\textbf{Design (action) and policy.} Sequential experimental design involves building policies mapping from the state space to the design space, $\\pi = \\{\\mu_k : \\mathcal{X}_k \\mapsto \\mathcal{D}_k, k=0,\\ldots,N-1\n\\}$, such that the design for the $k$th experiment is determined by the state via $d_k=\\mu_k(x_k)$. Thus, sequential design is inherently adaptive, computing designs based on the current state which depends on the previous experiments and their outcomes.\nWe focus on deterministic policies in this study, where policy functions $\\mu_k$ produce deterministic outputs. \n\n\\textbf{System dynamics (transition function).} The system dynamics, denoted by $x_{k+1}=\\mathcal{F}_k(x_k,d_k,y_k)$, describes the transition from state $x_k$ to state $x_{k+1}$ after carrying out the $k$th experiment with design $d_k$ and observation $y_k$. For the belief state, the prior $x_{k,b}$ can be updated to the posterior $x_{k+1,b}$ via Bayes' rule in \\cref{eq:bayes_rule}. The physical state, if present, evolves based on the relevant physical process.\nWhile the system dynamics described in \\cref{eq:bayes_rule} appears deterministic given a specific realization of $d_k$ and $y_k$, it is a stochastic transition since the observation $y_k$ is random. In particular, there exists an underlying transition probability\n\\begin{align}\np(x_{k+1}|x_{k},d_{k})=p(y_k|x_k,d_k)=p(I_{k+1}|d_{k},I_{k}) =p(y_{k}|d_{k},I_{k}) = \n\\int_{\\Theta} p(y_k|\\theta,d_k, I_k)p(\\theta|I_{k})\\,d\\theta,\n\\label{eq:transition}\n\\end{align}\nwhere we \nsimplify the prior with $p(\\theta|d_k,I_k)=p(\\theta|I_{k})$. 
The transition probability in \\cref{eq:transition} is intractable and does not have a closed form. However, we are able to generate samples of the next state by sampling from the prior and likelihood, as suggested by the last equality in \\cref{eq:transition}. Hence, we have a model-based (via a sampling model) setup.


\\textbf{Utility (reward).} We denote $g_k(x_k,d_k,y_k) \\in \\mathbb{R}$ to be the immediate reward from performing an experiment. Most generally, this quantity can depend on the state, design, and observation. For example, it may simply be the (negative) cost of the $k$th experiment.
Similarly, we define a terminal reward $g_N(x_N) \\in \\mathbb{R}$ containing any additional reward measure that reflects the benefit of reaching a certain final state, and that can only be computed after the entire set of experiments is completed. We will provide a specific example of reward functions pertaining to information measures in \\cref{sec:information_gain}.


\\textbf{sOED problem statement.} The sOED problem seeks the policy that solves the following optimization problem: 
from a given initial state $x_0$, 
\\begin{align}
    \\label{eq:optimal_policy}
    \\pi^\\ast = \\operatornamewithlimits{arg\\,max}_{\\pi=\\{\\mu_0,\\ldots,\\mu_{N-1}\\}}& \\qquad U(\\pi)\\\\
    \\text{s.t.}& 
    \\qquad d_k = \\mu_k(x_k) \\in \\mathcal{D}_k, \\nonumber\\\\
    &\\qquad x_{k+1}=\\mathcal{F}_k(x_k,d_k,y_k),
    \\hspace{3em} \\text{for}\\quad k=0,\\dots,N-1, \\nonumber
\\end{align}
where
\\begin{align}
    \\label{eq:expected_utility}
    U(\\pi) = \\mathbb{E}_{y_0,...,y_{N-1}|\\pi,x_0}\\[\\sum_{k=0}^{N-1}g_k(x_k,d_k,y_k)+g_N(x_N)\\]
\\end{align}
is the expected total utility functional. 
While here $x_0$ is fixed, this formulation can easily be adjusted to accommodate a stochastic $x_0$ as well, by including $x_0$ as a part of $I_k$ and taking another expectation over $x_0$ in \\cref{eq:expected_utility}. 

Overall, our sOED problem corresponds to a model-based planning problem of RL. It is challenging for several reasons: 
\\begin{itemize}
\\item finite horizon, where the policy functions $\\mu_k$ are different for each $k$ and need to be tracked and solved for separately; 
\\item partially and indirectly observed hidden $\\theta$ whose belief state space is uncountably infinite and also infinite-dimensional or nonparametric; 
\\item deterministic policy;
\\item continuous design (action) and observation spaces; 
\\item transition probability intractable to compute, and transition can only be sampled;
\\item each belief state transition involves a Bayesian inference, requiring many forward model evaluations; 
\\item reward functions are information measures for continuous random variables (discussed below), which are difficult to estimate.
\\end{itemize}

\\subsection{Information Measures as Experimental Design Rewards}
\\label{sec:information_gain}

We wish to adopt reward functions that reflect the degree of success of the experiments, not only the experiment costs. The appropriate choice of such a quantity depends on the experimental goals, e.g., to achieve inference, prediction, model discrimination, etc. One popular choice, corresponding to the goal of parameter inference, is to maximize a measure of the information gained on $\\theta$. 
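Continuing the grid-based sketch above, one such measure, namely the KL divergence from the prior to the posterior (formalized next), could be estimated numerically as follows; this is again purely illustrative and the helper name is hypothetical.
\\begin{verbatim}
import numpy as np

def information_gain(theta_grid, prior_pdf, post_pdf, eps=1e-300):
    """Estimate D_KL( p(theta|I_{k+1}) || p(theta|I_k) ) on a grid."""
    integrand = post_pdf * np.log((post_pdf + eps) / (prior_pdf + eps))
    return np.trapz(integrand, theta_grid)

# Continuing the earlier sketch, the information gained by that single
# sampled experiment:
gain = information_gain(grid, prior, belief1)
\\end{verbatim}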
Lindley's seminal paper~\\cite{Lindley1956} proposes to use the mutual information between the parameter and the observation as the expected utility, and Ginebra~\\cite{Ginebra2007} provides more general criteria for a proper measure of the information gained from an experiment. 
From the former, the mutual information is equal to the expected KL divergence from the prior to the posterior. The KL divergence provides an intuitive interpretation as it quantifies how far the posterior distribution has moved from the prior, and thus a larger divergence corresponds to a greater degree of belief update---and hence information gain---resulting from the experiment and its observation.

In this paper, we follow Lindley's approach and demonstrate the use of the KL divergence as sOED rewards,
and present two reasonable sequential design formulations that are in fact equivalent. The first, call it the \\emph{terminal formulation}, involves lumping the information gain from all $N$ experiments into the terminal reward (for clarity, we omit all other reward contributions common to the two formulations, although it would be trivial to show the equivalence for those cases too): 
\\begin{align}
    g_k(x_k, d_k, y_k) &= 0, \\qquad k=0,\\ldots,N-1 \\label{eq:terminal1}\\\\
    g_N(x_N) &= D_{\\mathrm{KL}}\\( p(\\cdot|I_N)\\,||\\,p(\\cdot|I_0) \\) \\nonumber\\\\ &= \\int_{\\Theta} p(\\theta|I_N) \\ln\\[\\frac{p(\\theta|I_N)}{p(\\theta|I_0)}\\]\\,d\\theta.\\label{eq:terminal_info_gN}
\\end{align}
The second, call it the \\emph{incremental formulation}, entails using the incremental information gain from each experiment as its respective immediate reward:
\\begin{align}
    g_k(x_k, d_k, y_k) &= D_{\\mathrm{KL}}\\( p(\\cdot|I_{k+1})\\,||\\,p(\\cdot|I_k) \\) \\nonumber\\\\&= \\int_{\\Theta} p(\\theta|I_{k+1}) \\ln\\[\\frac{p(\\theta|I_{k+1})}{p(\\theta|I_k)}\\]\\,d\\theta, \\qquad k=0,\\ldots,N-1\\label{eq:incremental1}\\\\
    g_N(x_N) &= 0. \\label{eq:incremental2}
\\end{align}

\\begin{theorem} 
\\label{prop:terminal_incremental}
Let $U_T(\\pi)$ be the sOED expected utility defined in \\cref{eq:expected_utility} subject to the constraints in \\cref{eq:optimal_policy} for a given policy $\\pi$ while using the terminal formulation \\cref{eq:terminal1,eq:terminal_info_gN}. Let $U_I(\\pi)$ be the same except using the incremental formulation \\cref{eq:incremental1,eq:incremental2}. Then $U_T(\\pi)=U_I(\\pi)$. 
\\end{theorem}

A proof is provided in \\cref{app:incre_terminal}.
As a result, the two formulations correspond to the same sOED problem. 

\\subsection{Generalization of Suboptimal Experimental Design Strategies}
\\label{sec:subopt_design}

We also make the connection between sOED and the commonly used batch design and greedy sequential design.
We illustrate below that both batch and greedy designs are, in general, suboptimal with respect to the expected utility \\cref{eq:expected_utility}. Thus, sOED generalizes these design strategies.

Batch OED designs all $N$ experiments together prior to performing any of those experiments. Consequently, it is non-adaptive, and cannot make use of new information acquired from any of the $N$ experiments to help adjust the design of the other experiments. 
Mathematically, batch design seeks static design values (instead of a policy) over the joint design space $\\mathcal{D}:=\\mathcal{D}_0 \\times \\mathcal{D}_1 \\times \\cdots \\times \\mathcal{D}_{N-1}$:
\\begin{align}
    (d_0^{\\mathrm{ba}},\\dots,d_{N-1}^{\\mathrm{ba}}) = \\operatornamewithlimits{arg\\,max}_{(d_0,\\dots,d_{N-1}) \\in \\mathcal{D}} \\mathbb{E}_{y_0,\\dots,y_{N-1}|d_0,\\dots,d_{N-1},x_0}\\[ \\sum_{k=0}^{N-1}g_k(x_k,d_k,y_k) + g_N(x_N) \\],\\label{eq:batch}
\\end{align}
subject to the system dynamics. In other words, the design $d_k$ is chosen independent of $x_k$ (for $k > 0$).
The suboptimality of batch design becomes clear once one realizes that \\cref{eq:batch} is equivalent to the sOED formulation in \\cref{eq:optimal_policy} with all $\\mu_k$ restricted to constant functions. Thus, $U(\\pi^{\\ast}) \\geq U(\\pi^{\\mathrm{ba}}=d^{\\mathrm{ba}})$. 

Greedy design is also a type of sequential experimental design and produces a policy. It optimizes only for the immediate reward at each experiment:
\\begin{align}
\\mu_k^{\\mathrm{gr}} = \\operatornamewithlimits{arg\\,max}_{\\mu_k} \\mathbb{E}_{y_k|x_k,\\mu_k(x_k)}\\[ g_k(x_k,\\mu_k(x_k),y_k) \\], \\qquad k=0,\\dots,N-1,\\label{eq:greedy}
\\end{align}
without being subject to the system dynamics since the policy functions $\\mu_k^{\\mathrm{gr}}$ are decoupled. $U(\\pi^{\\ast}) \\geq U(\\pi^{\\mathrm{gr}})$ follows trivially.
As a more specific example, when using the information measure utilities described in \\cref{sec:information_gain}, greedy design would only make sense under the incremental formulation (\\cref{eq:incremental1,eq:incremental2}).
Then, together with \\cref{prop:terminal_incremental}, we have 
$U_{T}(\\pi^{\\ast})=U_{I}(\\pi^{\\ast}) \\geq U_{I}(\\pi^{\\mathrm{gr}})$.


\\section{Introduction}
\\label{sec:intro}

Experiments are indispensable for scientific research. Carefully designed experiments can provide substantial savings for these often expensive data-acquisition opportunities. However, designs based on heuristics are usually not optimal, especially for complex systems with high dimensionality, nonlinear responses and dynamics, multiphysics, and uncertain and noisy environments. 
Optimal experimental design (OED), by leveraging a criterion based on a forward model that simulates the experiment process, systematically quantifies and maximizes the value of experiments. 

OED for linear models 
\\cite{Fedorov1972,Atkinson2007} 
uses criteria based on the information matrix derived from the model, which can be calculated analytically. Different operations on this matrix form the core of the well-known alphabetical designs, such as the $A$- (trace), $D$- (determinant), and $E$-optimal (largest eigenvalue) designs. 
Bayesian OED further incorporates the notion of prior and posterior distributions that reflect the uncertainty update resulting from the experiment data 
\\cite{Berger1985, Chaloner1995}.
In particular, the Bayesian $D$-optimal criterion generalizes to the nonlinear setting under an information-theoretic perspective \\cite{Lindley1956}, and is equivalent to the expected Kullback--Leibler (KL) divergence from the prior to the posterior. 
However, these OED criteria are generally intractable to compute for nonlinear models and must be approximated~\\cite{Box1959,Ford1989,Chaloner1995,Muller2005,Ryan2016}. 
With advances in computing power and a need to tackle bigger and more complex systems in engineering and science, there is a growing interest, urgency, and opportunity for the computational development of nonlinear OED methods~\\cite{Ryan2003,Terejanu2012,Huan2013,Long2015,Weaver2016,Alexanderian2016,Tsilifis2017,Overstall2017,Beck2018,Kleinegesse2019,Foster2019,Wu2020}. 

When designing multiple experiments, commonly used approaches are often suboptimal. The first is \\emph{batch} (or static) design: it rigidly designs all experiments together \\emph{a priori} using the aforementioned linear or nonlinear OED methods, and does not offer any opportunity to adapt when new information becomes available (i.e. no feedback). 
The second is \\emph{greedy} (or myopic) design 
\\cite{Box1992, Dror2008, Cavagnaro2010, Solonen2012, Drovandi2013, Drovandi2014, Kim2014,Hainy2016,Kleinegesse2021}:
it plans only for the \\emph{next} experiment, updates with its observation, and repeats the design process. While greedy design has feedback, it lacks consideration for future effects and consequences (i.e. no lookahead). Hence, greedy design does not see the big picture or plan for the future. It is easy to appreciate, even from everyday experience (e.g., driving a car, planning a project), that a lack of feedback (for adaptation) and lookahead (for foresight) can lead to suboptimal decision-making with undesirable consequences. 

A provably optimal formulation of sequential experimental design---which we refer to as sequential OED (sOED)~\\cite{Muller2007,VonToussaint2011,Huan2015,Huan2016}---needs both elements of feedback and lookahead, and generalizes the batch and greedy designs. The main features of sOED are twofold. First, sOED works with design \\emph{policies} (i.e. functions that can adaptively suggest what experiment to perform depending on the current situation) in contrast to static design values. Second, sOED always designs for all remaining experiments, thus capturing the effect on the entire future horizon when each design decision is made. 
Formally, the sOED problem can be formulated as a {partially observable Markov decision process} (POMDP). Under this agent-based view, the experimenter (agent) selects the experimental design (action) following a policy, and observes the experiment measurements (observation) in order to maximize the total utility (reward) that depends on the unknown model parameters (hidden state). 
A belief state can further be formed based on the Bayesian posterior that describes the uncertainty of the hidden state, thereby turning the POMDP into a belief Markov decision process (MDP) \\cite{littman1995learning}. 

The sOED problem targeted in our paper presents an atypical and challenging POMDP: finite horizon, continuous random variables, uncountably infinite belief state space, deterministic policy, continuous designs and observations, sampling-only transitions that each involve a Bayesian inference, and information measures as rewards. Thus, while there exists an extensive POMDP literature (e.g.,~\\cite{cassandra1994acting, littman1995efficient, cassandra1998survey, kurniawati2016online, igl2018deep}), off-the-shelf methods cannot be directly applied to this sOED problem. 
At the same time, attempts at sOED have been sparse, with examples~\\cite{Carlin1998,Gautier2000,Pronzato2002, Brockwell2003, Christen2003, Murphy2003, Wathen2006} 
focusing on discrete settings with special problem and solution forms, and either not using an information-based criterion or not adopting a Bayesian framework. 
More recent efforts for Bayesian sOED~\\cite{Huan2015,Huan2016} employ approximate dynamic programming (ADP) and transport maps, and illustrate the advantages of sOED over batch and greedy designs. However, this ADP-sOED method remains computationally expensive.

In this paper, we create new methods to solve the sOED problem in a computationally efficient manner, by drawing on the state of the art from reinforcement learning (RL) \\cite{watkins1992q, sutton2000policy, szepesvari2010algorithms, mnih2015human, schulman2015trust, silver2016mastering, silver2017mastering, li2017deep, sutton2018reinforcement}.
RL approaches are often categorized as value-based (learn value functions only)
\\cite{watkins1992q,mnih2015human,wang2016dueling,van2016deep}, policy-based (learn policy only)
\\cite{willianms1988toward, williams1992simple}, or actor-critic (learn policy and value functions together) \\cite{konda2000actor, peters2008natural, silver2014deterministic, lillicrap2015continuous}. 
ADP-sOED~\\cite{Huan2015,Huan2016} is thus value-based, where the policy is only implicitly expressed via the learnt value functions. Consequently, each policy evaluation involves optimizing the value functions on-the-fly, a costly calculation especially for continuous action spaces. 
Both policy-based and actor-critic methods are more efficient in this respect. 
Actor-critic methods have further been observed to produce lower solution variance and faster convergence \\cite{sutton2018reinforcement}. 

We adopt an actor-critic approach in this work. 
Representing and learning the policy explicitly further enables the use of policy gradient (PG) techniques \\cite{sutton2000policy, kakade2001natural, degris2012off, silver2014deterministic, lillicrap2015continuous, schulman2015trust, mnih2016asynchronous, schulman2017proximal, lowe2017multi, liu2017stein, barth2018distributed} that estimate the gradient with respect to the policy parameters, and in turn permits the use of gradient-based optimization algorithms.
Inspired by deep deterministic policy gradient (DDPG)~\\cite{lillicrap2015continuous}, we further employ deep neural networks (DNNs) to parameterize and approximate the policy and value functions. The use of DNNs can take advantage of the potentially large number of episode samples generated from the transition simulations, and compute gradients efficiently through back-propagation. 
Nevertheless, care needs to be taken in designing the DNNs and their hyperparameters in order to obtain stable and rapid convergence to a good sOED policy, which we will describe in this paper. 

The main contributions of our paper are as follows.
\\begin{itemize}
\\item We formulate the sOED problem as a finite-horizon POMDP under a Bayesian setting for continuous random variables, and illustrate its generalization over the batch and greedy designs.
\\item We present the PG-based sOED (which we call PG-sOED) algorithm, proving the key gradient expression and proposing its Monte Carlo estimator. 
We further present the DNN architectures for the policy and value functions, and detail the numerical setup of the overall method.
\\item We demonstrate the speed and optimality advantages of PG-sOED over ADP-sOED, batch, and greedy designs, on a benchmark and a problem of contaminant source inversion in a convection-diffusion field that involves an expensive forward model. 
\\item We make available our PG-sOED code at \\url{https:\/\/github.com\/wgshen\/sOED}. 
\\end{itemize}

This paper is organized as follows. \\Cref{sec:formulation} introduces the components needed in an sOED problem, culminating with the sOED problem statement. \\Cref{sec:method} describes the details of the entire PG-sOED method.
\\Cref{sec:results} presents two numerical examples, a linear-Gaussian benchmark and a problem of contaminant source inversion in a convection-diffusion field, to validate PG-sOED and demonstrate its advantages over other baselines.
Finally, \\cref{sec:conclusions} concludes the paper and provides an outlook for future work.


\\section{Policy Gradient for Sequential Optimal Experimental Design}
\\label{sec:method}

We approach the sOED problem by directly parameterizing the policy functions and representing them explicitly. We then develop the gradient expression with respect to the policy parameters, so as to enable gradient-based optimization for numerically identifying optimal or near-optimal policies. Such an approach is known as the PG method (e.g., \\cite{silver2014deterministic, lillicrap2015continuous}).
In addition to the policy, we also parameterize and learn the value functions, thus arriving at an actor-critic form. 
PG contrasts with previous ADP-sOED efforts~\\cite{Huan2015,Huan2016} that approximate only the value functions. In those works, the policy is represented implicitly, and requires solving a (stochastic) optimization problem each time the policy is evaluated. This renders both the offline training and online policy usage computationally expensive. As we will demonstrate, PG sidesteps this requirement.

In the following, we first derive the exact PG expression in \\cref{ss:PG_exact}. We then present numerical methods in \\cref{ss:PG_numerical} to estimate this exact PG expression. In particular, this requires adopting a parameterization of the policy functions; we will present the use of DNNs to achieve this parameterization. Once the policy parameterization is established, we can then compute the PG with respect to the parameters, and optimize them using a gradient ascent procedure.

\\subsection{Derivation of the Policy Gradient}
\\label{ss:PG_exact}

The PG approach to sOED (PG-sOED) involves parameterizing each policy function $\\mu_{k}$ with parameters $w_k$ ($k=0,\\ldots,N-1$), which we denote by the shorthand form $\\mu_{k,w_k}$. In turn, the policy $\\pi$ is parameterized by $w=\\{w_k, \\forall k\\} \\in \\mathbb{R}^{N_w}$ and denoted by $\\pi_{w}$, where $N_w$ is the dimension of the overall policy parameter vector. 
The sOED problem statement from \\cref{eq:optimal_policy,eq:expected_utility} then updates to: from a given initial state $x_0$,
\\begin{align}
    \\label{eq:PG_sOED}
    w^{\\ast} = \\operatornamewithlimits{arg\\,max}_{w}& \\qquad U(w)\\\\
    \\text{s.t.}& 
    \\qquad d_k = \\mu_{k,w_k}(x_k) \\in \\mathcal{D}_k, \\nonumber\\\\
    &\\qquad x_{k+1}=\\mathcal{F}_k(x_k,d_k,y_k), 
    \\hspace{3em} \\text{for}\\quad k=0,\\dots,N-1, \\nonumber
\\end{align}
where
\\begin{align}
    \\label{eq:expected_utility_w}
    U(w) = \\mathbb{E}_{y_0,...,y_{N-1}|\\pi_w,x_0}\\[\\sum_{k=0}^{N-1}g_k(x_k,d_k,y_k)+g_N(x_N)\\].
\\end{align}
We now aim to derive the gradient $\\nabla_{w} U(w)$.

Before presenting the gradient expression, we need to introduce the value functions. 
The \\emph{state-value function} (or \\emph{V-function}) following policy $\\pi_{w}$ and at the $k$th experiment is
\\begin{align}
V_k^{\\pi_{w}}(x_k)&=\\mathbb{E}_{y_k,\\dots,y_{N-1}|\\pi_{w},x_k}\\[\\sum_{t=k}^{N-1} g_t(x_t,\\mu_{t,w_t}(x_t),y_t) + g_N(x_N)\\] \\\\
 &= \\mathbb{E}_{y_k|\\pi_w,x_k} \\[ g_k(x_k,\\mu_{k,w_k}(x_k),y_k) + V^{\\pi_w}_{k+1}(x_{k+1}) \\] \\\\
 V_N^{\\pi_{w}}(x_N) &= g_N(x_N)
\\end{align}
for $k=0,\\ldots,N-1$, where $x_{k+1}=\\mathcal{F}_k(x_k,\\mu_{k,w_k}(x_k),y_k)$.
The V-function is the expected cumulative remaining reward starting from a given state $x_k$ and following policy $\\pi_{w}$ for all remaining experiments. 
The \\emph{action-value function} (or \\emph{Q-function}) following policy $\\pi_{w}$ and at the $k$th experiment is
\\begin{align}
\\label{eq:action_bellman}
Q_k^{\\pi_{w}}(x_k,d_k)&=\\mathbb{E}_{y_k,\\dots,y_{N-1}|\\pi_{w},x_k,d_k}\\[g_k(x_k,d_k,y_k) + \\sum_{t=k+1}^{N-1} g_t(x_t,\\mu_{t,w_t}(x_t),y_t) + g_N(x_N)\\]
\\\\
&=\\mathbb{E}_{y_k|x_k,d_k} \\[ g_k(x_k,d_k,y_k) + Q^{\\pi_w}_{k+1}(x_{k+1},\\mu_{k+1,w_{k+1}}(x_{k+1}))\\]
\\label{eq:action_bellman2}
\\\\
Q_{N}^{\\pi_{w}}(x_N,\\cdot) &= g_N(x_N)
\\end{align}
for $k=0,\\ldots,N-1$, where $x_{k+1}=\\mathcal{F}_k(x_k,d_k,y_k)$. 
The Q-function is the expected cumulative remaining reward for performing the $k$th experiment at the given design $d_k$ from a given state $x_k$ and thereafter following policy $\\pi_{w}$. The two functions are related via
\\begin{align}
V_k^{\\pi_{w}}(x_k)=Q_k^{\\pi_{w}}(x_k,\\mu_{k,w_k}(x_k)).
\\end{align}

\\begin{theorem}
\\label{thm:PG}
The gradient of the expected utility in \\cref{eq:expected_utility_w} with respect to the policy parameters (i.e. the policy gradient) is 
\\begin{align}
    \\nabla_w U(w) = \\sum_{k=0}^{N-1} \\mathbb{E}_{x_k|\\pi_w,x_0} \\[ \\nabla_w \\mu_{k,w_k}(x_k) \\nabla_{d_k} Q^{\\pi_w}_k(x_k,d_k)\\Big|_{d_k=\\mu_{k,w_k}(x_k)} \\].\\label{eq:pg_theorem}
\\end{align}
\\end{theorem}
We provide a proof in \\cref{app:pg_derive}, which follows the proof in \\cite{silver2014deterministic} for a general infinite-horizon MDP. 

\\subsection{Numerical Estimation of the Policy Gradient}
\\label{ss:PG_numerical}

The PG \\cref{eq:pg_theorem} generally cannot be evaluated in closed form, and needs to be approximated numerically. We propose a Monte Carlo (MC) estimator:
\\begin{align}
    \\label{eq:policy_grad}
    \\nabla_w U(w) \\approx \\frac{1}{M} \\sum_{i=1}^M \\sum_{k=0}^{N-1} \\nabla_w \\mu_{k,w_k}(x^{(i)}_k) \\nabla_{d^{(i)}_k} Q^{\\pi_w}_k(x^{(i)}_k,d^{(i)}_k)\\Big|_{d^{(i)}_k=\\mu_{k,w_k}(x^{(i)}_k)},
\\end{align}
where the superscript $(i)$ indicates the $i$th episode (i.e. trajectory instance) generated from MC sampling. 
Note that the \\emph{sampling} only requires a given policy and does not need any Q-function. Specifically, for the $i$th episode, we first sample a hypothetical ``true'' $\\theta^{(i)}$ from the prior belief state $x_{0,b}$ and freeze it for the remainder of this episode---that is, all subsequent $y_k^{(i)}$ will be generated from this $\\theta^{(i)}$.
We then compute $d_k^{(i)}$ from the current policy $\\pi_w$, sample $y_k^{(i)}$ from the likelihood $p(y_k|\\theta^{(i)},d_k^{(i)},I_k^{(i)})$, and repeat for all experiments $k=0,\\dots,N-1$. The same procedure is then repeated for all episodes $i=1,\\dots,M$. The value of $M$ can be selected based on indicators such as the MC standard error, the ratio of the noise level to the gradient magnitude, or the validation expected utility from sOED policies produced under different $M$. 
While we propose to employ a fixed sample $\\theta^{(i)}$ for the entire $i$th episode, one may also choose to resample $\\theta_k^{(i)}$ at each stage $k$ from the updated posterior belief state $x_{k,b}^{(i)}$.
These two approaches are in fact equivalent, since by factoring the expectations we have
\\begin{align}
    \\label{eq:equivalency_sample_theta}
    U(w) &= \\mathbb{E}_{y_0,...,y_{N-1}|\\pi_w,x_0}\\[\\sum_{k=0}^{N-1}g_k(x_k,d_k,y_k)+g_N(x_N)\\] \\nonumber \\\\
    &= \\mathbb{E}_{\\theta|x_{0,b}} \\mathbb{E}_{y_0|\\pi_w,\\theta,x_0} \\mathbb{E}_{y_1|\\pi_w,\\theta,x_0,y_0} \\cdots \\nonumber\\\\
    &\\qquad \\qquad \\qquad \\cdots \\mathbb{E}_{y_{N-1}|\\pi_w,\\theta,x_0,y_0,\\dots,y_{N-2}} \\[\\sum_{k=0}^{N-1}g_k(x_k,d_k,y_k)+g_N(x_N)\\] \\\\
    &= \\mathbb{E}_{\\theta_0|x_{0,b}} \\mathbb{E}_{y_0|\\pi_w,\\theta_0,x_0} \\mathbb{E}_{\\theta_1|x_{1,b}} \\mathbb{E}_{y_1|\\pi_w,\\theta_1,x_{1}} \\cdots \\nonumber\\\\
    & \\qquad \\qquad \\qquad \\cdots\\mathbb{E}_{\\theta_{N-1}|x_{N-1,b}} \\mathbb{E}_{y_{N-1}|\\pi_w,\\theta_{N-1},x_{N-1}} \\[\\sum_{k=0}^{N-1}g_k(x_k,d_k,y_k)+g_N(x_N)\\],
\\end{align}
where the second equality corresponds to the episode-fixed $\\theta^{(i)}$, and the last equality corresponds to the resampling of $\\theta_k^{(i)}$. The former, however, is computationally easier, since it does not require working with the intermediate posteriors.

From \\cref{eq:policy_grad}, the MC estimator for the PG entails computing the gradients $\\nabla_w \\mu_{k,w_k}(x^{(i)}_k)$ and $\\nabla_{d^{(i)}_k} Q^{\\pi_w}_k(x^{(i)}_k,d^{(i)}_k)$. While the former can be obtained through the parameterization of the policy functions, the latter typically requires a parameterization of the Q-functions as well. We thus parameterize both the policy and the Q-functions, arriving at an actor-critic method. Furthermore, we adopt the approaches from Deep Q-Network (DQN)~\\cite{mnih2015human} and 
DDPG~\\cite{lillicrap2015continuous}, and use DNNs to approximate the policy and Q-functions. We present these details next. 

\\subsubsection{Policy Network}
\\label{sec:policy_net}

Conceptually, we would need to construct individual DNNs $\\mu_{k,w_k}$ to approximate $\\mu_{k} : \\mathcal{X}_k \\mapsto \\mathcal{D}_k$ for each $k$. Instead, we choose to combine them into a single function $\\mu_{w}(k, x_k)$, which then requires only a single DNN for the entire policy at the cost of a higher input dimension. Subsequently, the $\\nabla_w \\mu_{k,w_k}(x^{(i)}_k)=\\nabla_w \\mu_{w}(k,x^{(i)}_k)$ term from \\cref{eq:policy_grad} can be obtained via back-propagation. 
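To illustrate how the two gradient factors in \\cref{eq:policy_grad} combine under such a single policy network, the following is a minimal sketch using PyTorch. It is not the architecture or hyperparameters used in this work; the horizon, dimensions, and the stand-in critic are all hypothetical, and the construction of the network input is detailed below.
\\begin{verbatim}
import torch
import torch.nn as nn

# A single policy network mu_w(k, x_k): its input concatenates a stage
# encoding with a state representation (details of this encoding follow).
N, N_d, N_y = 3, 1, 1                      # hypothetical horizon and dimensions
input_dim = N + (N - 1) * (N_d + N_y)      # stage encoding + padded history
policy = nn.Sequential(
    nn.Linear(input_dim, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, N_d))

# A stand-in critic for Q_k(x_k, d_k); in practice this would be a separate
# DNN trained from episode samples.
critic = nn.Sequential(nn.Linear(input_dim + N_d, 64), nn.ReLU(),
                       nn.Linear(64, 1))

# One stage of one episode in the MC policy-gradient estimator:
# back-propagating Q_k(x_k, mu_w(k, x_k)) through the policy automatically
# chains grad_d Q with grad_w mu.
s = torch.randn(1, input_dim)                 # a sampled (stage, state) input
d = policy(s)
q = critic(torch.cat([s, d], dim=-1)).sum()   # reduce to a scalar for backward
q.backward()                                  # gradients accumulate in policy params
grads = [p.grad for p in policy.parameters()]
\\end{verbatim}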
Below, we discuss the architecture design of such a DNN, with particular focus on its input layer.

For the first input component, i.e. the stage index $k$, instead of passing in the integer directly, we opt to use a one-hot encoding that takes the form of a unit vector:
\\begin{align}
    k \\qquad \\longrightarrow \\qquad e_{k+1}=[0,\\dots,0,\\underbrace{1}_{(k+1)\\rm{th}},0,\\dots,0]^T.
\\end{align}
We choose one-hot encoding because the stage index is an ordered categorical variable instead of a quantitative variable (i.e. it has a notion of ordering but no notion of metric). Furthermore, these unit vectors are always orthogonal, which we observed to offer good overall numerical performance of the policy network. The tradeoff is that the dimension of representing $k$ is increased from 1 to $N$.

For the second component, i.e. the state $x_k$ (including both $x_{k,b}$ and $x_{k,p}$), we represent it in a nonparametric manner as discussed in \\cref{sec:math_formulation}:
\\begin{align}
x_k \\qquad \\longrightarrow \\qquad I_k=\\{d_0,y_0,\\dots,d_{k-1},y_{k-1}\\}.
\\end{align}
To accommodate states up to stage $(N-1)$ (i.e. $x_{N-1}$), we use a fixed total dimension of $(N-1)(N_d+N_y)$ for this representation, where for $k < (N-1)$ the entries for $\\{d_l, y_l \\,|\\, l \\geq k\\}$ (experiments that have not happened yet) are padded with zeros (see \\cref{eq:NN_input}). 
In addition to providing a state representation without any approximation, another major advantage of such a nonparametric form can be seen under the terminal formulation in \\cref{eq:terminal_info_gN}, where now none of the intermediate belief states (i.e. $x_{k,b}$ for $k