\section{Introduction}
The origin of turbulence and enhanced angular momentum transport in accretion flows is a fascinating problem of considerable importance in astrophysics. It is commonly believed that the magneto-rotational instability (MRI), in one form or another, plays a fundamental role in destabilizing the basic quasi-Keplerian flow. When a net magnetic flux is present the MRI sets in as a classical linear instability with a well-defined growth rate and characteristic wavenumber \citep{Balbus91}; the turbulence then develops from the nonlinear evolution of this instability. When there is no net flux the problem is more complicated and the turbulence must develop from a nonlinear subcritical instability. In this case, the problem becomes fundamentally one of establishing what form of dynamo action can be sustained in a disc. Much of what is currently known about dynamo action in accretion flows is based on numerical studies formulated within the framework of the shearing-box approximation \citep{Hawley95}. The simplest setup, both conceptually and numerically, consists of an unstratified, isothermal shearing-box with periodic boundary conditions in the vertical direction. It is now well established that this configuration suffers from the so-called convergence problem. As the magnetic diffusivity decreases, or equivalently the resolution increases, the Maxwell stresses decrease, eventually to become negligible \citep[][see however Fromang 2010 for a different view]{Fromang07, Pessah07, Guan09, Simon09, Bodo11}. The cause of this ``non-convergence'' has been attributed to the lack of a characteristic outer scale in the periodic, unstratified problem \citep[for a discussion see][]{Bodo11}.
The next step towards more realistic simulations is to retain the shearing-box geometry but with the inclusion of vertical gravity and, consequently, stratification. This introduces a characteristic length--the scale height--that may help to remedy the convergence problem \citep{Davis10, Shi10, Oishi11}. Whether or not this is the case remains, at the moment, an open question. Certainly, in the stratified cases the solutions manifest a richness both in space and time that is absent in the unstratified cases \citep{Gressel10, Guan11, Simon12}. It is important to note that most of these studies adopt an isothermal equation of state; the resulting density distribution is correspondingly close to a Gaussian, with most of the mass concentrated near the mid-plane and tenuous, low-density regions above and below. This leads to very different dynamo processes operating in the mid-plane and in the overlying regions. Although an isothermal formulation is conceptually simple and easy to implement numerically, it neglects the possibly important process of turbulent heating by viscous and Ohmic dissipation. It can be argued that in an optically thin environment turbulent heating may not be important, since the energy can easily escape without substantially heating the ambient plasma. However, this is definitely not the case in an optically thick environment. In this case the plasma will be heated locally and the final thermal structure will be determined by a balance between energy deposition and energy transport. It is then possible that substantial departures from the isothermal state may develop that, in turn, may impact the operation of the dynamo.
Some of these issues have been addressed by Hirose and collaborators \citep{Hirose06, Hirose09, Blaes11}, who have considered radiation-dominated discs and have included a sophisticated treatment of the radiation field, and also by the works of Flaig and collaborators \citep{Flaig10, Flaig12}, whose models of proto-planetary discs include partial ionization, chemical networks and heat transport in the radiative conduction approximation. All these works indicate that turbulent heating can indeed be important. Here, we also address the problem of turbulent heating, but in the somewhat simpler case of a fully ionized, pressure-dominated disc. Our intention is to provide a bridge between the works of Hirose et al. and Flaig et al. and those based on the isothermal equation of state. To this end we consider a stratified shearing-box with a perfect gas equation of state and finite (constant) thermal diffusivity. The objective is to study how the basic state and the corresponding dynamo action change as the thermal diffusivity is varied. In this work we deliberately keep the formulation as simple as possible in order to highlight some of the basic underlying physical processes.

\section{Formulation} \label{formulation}
Our objective is to provide a simple model in which the effects of dissipative heating can be studied. In particular we want to assess how these processes, together with thermal transport, lead to departures from the more familiar isothermal cases. We assume that the plasma is optically thick and approximate the radiative transport by a diffusion process, which we model by a thermal conduction term in the energy equation.
In the spirit of keeping things as simple as possible, and in order to capture more easily the general properties of the solutions, we make further simplifications: we neglect the dependencies on density and temperature resulting from the diffusion approximation to the radiative transport equation, and assume a constant thermal diffusivity. A more realistic treatment of the radiation will be considered in future work.

We perform three-dimensional numerical simulations of a perfect gas with thermal conduction in a shearing box with vertical gravity. A detailed presentation of the shearing-box approximation can be found in \citet{Hawley95}. The magnetohydrodynamic (MHD) shearing-box equations, including vertical gravity and thermal conduction, can be written as:

\begin{equation}
\frac{\partial \rho}{\partial t} + \nabla \cdot \left( \rho {\mathbf v} \right) = 0,
\label{eq:mass}
\end{equation}

\begin{equation}
\frac{\partial {\mathbf v}}{\partial t} + {\mathbf v} \cdot \nabla {\mathbf v} + 2 {\mathbf \Omega} \times {\mathbf v} = \frac{{\mathbf B} \cdot \nabla {\mathbf B}}{4 \pi \rho} - \frac{1}{\rho} \nabla
\left( \frac{{\mathbf B}^2}{8 \pi} + P \right) - \nabla \left( 2 A \Omega x^2 + \frac{1}{2} \Omega^2 z^2 \right),
\label{eq:momentum}
\end{equation}

\begin{equation}
\frac{\partial {\mathbf B}}{\partial t} - \nabla \times \left( {\mathbf v} \times {\mathbf B} \right) = 0,
\label{eq:induction}
\end{equation}

\begin{equation}
\frac{\partial E}{\partial t} + \nabla \cdot \left[ (E + P_T) {\mathbf v} - \frac{({\mathbf v} \cdot {\mathbf B}) {\mathbf B}}{4 \pi} - k \nabla T \right] = 0 ,
\label{eq:energy}
\end{equation}
where ${\mathbf B}$, ${\mathbf v}$, $\rho$ and $P$ denote, respectively, the magnetic field intensity, the velocity, the density and the thermal pressure; $E$ is the total energy density, $P_T$ is the total (thermal plus magnetic) pressure and $k$ is the thermal conductivity.
The local angular velocity ${\mathbf \Omega} = \Omega {\mathbf e_z}$ and the shear rate
\begin{equation}
A \equiv \frac{R}{2} \frac{\partial \Omega}{\partial R}
\end{equation}
are assumed constant. For a Keplerian disc $A = -(3/4) \Omega$. The system is closed by the equation of state for a perfect gas:
\begin{equation}
P = \rho T
\end{equation}
where we have absorbed the perfect gas constant in the definition of the temperature.
The thermal conductivity can be written as
\begin{equation}
k = \frac{5}{2} \kappa \rho
\end{equation}
where $\kappa$ is the thermal diffusivity, which, as discussed above, we assume to be constant, and the factor of $5/2$ is appropriate for a gas with three degrees of freedom.

We start our simulations from a state with a uniform shear flow, $\vec{v} = -2 A x\hat{\mathbf{e}}_y$, and density and pressure distributions that satisfy vertical hydrostatic balance with constant temperature $T_0$. With these conditions the initial density has a Gaussian profile given by
\begin{equation}
\rho = \rho_0 \exp(- \Omega^2 z^2 / 2 T_0),
\end{equation}
where $\rho_0$ is the value of the density on the equatorial plane.

If the MRI develops to substantial amplitude, this initial state will be driven away from thermal equilibrium by the energy input from dissipative processes. The temperature in the equatorial regions will progressively increase and a thermal gradient will be established until a new equilibrium is reached whereby the energy input is balanced by thermal losses at the upper and lower boundaries. As we shall see, the new equilibrium can be quite different from the initial isothermal state and is determined self-consistently by the heating associated with the process of angular momentum transport by the MRI. We note here that, in our current formulation, we do not include viscous and Ohmic dissipation explicitly.
The heating of the fluid occurs because of numerical dissipation together with a conservative formulation of the total energy equation. The latter ensures that whatever kinetic or magnetic energy is lost through dissipative processes is re-introduced in the form of internal energy (heating).

The computational domain covers the region $L_x \times L_y \times L_z$, where $L_x = H$, $L_y = \pi H$ and $L_z = 6H$,
where
\begin{equation}
H = \frac{\sqrt{2 T_0}}{\Omega}
\end{equation}
is the pressure scale height in the initial isothermal state. In the vertical direction the box is symmetric with respect to the equatorial plane $z = 0$, where gravity changes sign. Numerically, the domain is covered by a grid of $32 \times 96 \times 192$ points.
A magnetic field of the form
\begin{equation}
{\mathbf B} = B_0 \sin \left( \frac{2 \pi x}{H} \right) \hat{\mathbf{e}}_z
\end{equation}
is imposed initially, where $B_0$ is chosen such that the ratio between thermal and magnetic pressure at the mid-plane, $\beta = 8 \pi P_0 / B_0^2$, has a value of $1600$. Clearly, there is no net magnetic flux threading the box. In addition we introduce random noise in the $y$ component of the velocity in order to destabilize the system.

Following common practice, we assume periodic boundary conditions in the $y$ direction and shear-periodic conditions in the $x$ direction. In the vertical direction, we assume that the upper and lower boundaries ($z = \pm 3H$) are impenetrable and stress free, giving $v_z = 0$, $\partial v_x / \partial z = \partial v_y / \partial z = 0$, and also that the magnetic field is purely vertical, giving $\partial B_z / \partial z = 0$, $B_x = B_y = 0$.
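As a concrete illustration, the initial condition described above can be checked with a few lines of code. The following Python sketch (an illustration only, not part of the PLUTO setup used in the paper) adopts code units with $\Omega = \rho_0 = 1$ and $T_0 = 1/2$, so that $H = 1$, and verifies that the Gaussian profile is hydrostatic and that the initial field carries no net vertical flux:

```python
import numpy as np

# Code units chosen so that Omega = rho0 = 1 and T0 = 1/2, giving H = 1.
Omega, rho0, T0 = 1.0, 1.0, 0.5
H = np.sqrt(2.0 * T0) / Omega

# Vertical profile of the initial isothermal state on the L_z = 6H domain.
z = np.linspace(-3.0 * H, 3.0 * H, 193)
rho = rho0 * np.exp(-Omega**2 * z**2 / (2.0 * T0))  # Gaussian density
P = rho * T0                                        # perfect gas, P = rho * T

# The profile satisfies vertical hydrostatic balance dP/dz = -rho Omega^2 z.
dPdz = np.gradient(P, z)
assert np.allclose(dPdz[1:-1], -rho[1:-1] * Omega**2 * z[1:-1], atol=2e-3)

# Initial vertical field B_z = B0 sin(2 pi x / H): zero net flux through the box.
beta = 1600.0
B0 = np.sqrt(8.0 * np.pi * P[96] / beta)            # beta = 8 pi P0 / B0^2 at z = 0
x = np.linspace(0.0, H, 32, endpoint=False)         # 32 zones across L_x = H
Bz = B0 * np.sin(2.0 * np.pi * x / H)
assert abs(Bz.mean()) < 1e-12 * B0                  # no net vertical flux
```

Since the sinusoidal $B_z$ averages to zero over the full period in $x$, any mean field that develops later must be generated by the dynamo rather than inherited from the initial condition.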
We should note that these conditions allow a net flux of magnetic helicity through the boundaries, with possibly important consequences for the dynamo processes \citep{VC01, Kapyla10}.
Finally, we assume that the boundaries are in hydrostatic balance, and that the temperature is constant and equal to $T_0$; thus
\begin{equation}
\frac{\partial P_T}{\partial z} = \mp 3 \rho \Omega^2 H, \qquad \qquad T=T_0.
\end{equation}
All simulations are carried out with the PLUTO code \citep{Mignone07}, with a second-order accurate scheme, the HLLD Riemann solver and an explicit treatment of thermal conduction.

\section{Results} \label{results}
We now describe the development of MRI-driven turbulence from an initially isothermal state. Hereinafter, and unless otherwise specified, when presenting the numerical results, we adopt $\Omega^{-1}$ as the unit of time, $H$ as the unit of length, and the mid-plane density in the initial isothermal state, $\rho_0$, as the unit of density; since $H$ is our unit of length, we have $T_0=1/2$.
Following the initial perturbations, a sub-critical instability sets in, leading to the generation of magnetic fields and the development of turbulence. Dissipative processes heat the plasma, driving the system away from the initial isothermal state. Eventually the system reaches a stationary state in which the heat generated by the turbulence is balanced by the heat lost through the upper and lower boundaries.
Locally, the balance is between the volumetric heat production and the divergence of the heat flux, which can arise both from thermal conduction and from turbulent transport. The relative importance of these two processes depends on the value of the thermal diffusivity, which, here, is expressed in units of the product of the scale height and the isothermal sound speed, i.e. it has the form of an inverse P\'eclet number. A typical evolution for a case with $\kappa = 2 \times 10^{-2}$ can be followed in Fig.
\ref{fig:maxwtime}, where we show the time history of the Maxwell stresses averaged over the entire computational domain. Clearly, there is a long adjustment phase lasting approximately 500 time units, after which the system settles into a stationary state in which the stresses remain strongly fluctuating but with a well-defined (time) average value. The corresponding thermal history can be assessed by inspection of Figs. \ref{fig:avt_time} and \ref{fig:avrho_time}, showing, respectively, the horizontally averaged temperature $\tilde T(z)$ and density $\tilde \rho(z)$ at several times. The asymptotic profiles in the stationary state (obtained by time averaging from $t = 500$ to the end of the simulation, $t = 2000$) are denoted by angle brackets. Clearly the increase in the Maxwell stresses is accompanied by the heating of the central regions, leading to the establishment of a nearly parabolic temperature profile. We note a corresponding dramatic change in the density distribution, which evolves from the initial Gaussian profile to an almost constant distribution at later times.

The development of a constant density state is somewhat remarkable, and deserves further investigation. At first sight it may appear to be the result of a fortuitous choice of thermal diffusivity. As we shall see presently, this is not entirely the case.
In the stationary state the average temperature and density are related by the condition of hydrostatic balance, which we write here in dimensional form; for simplicity we only consider $z > 0$:
\begin{equation}
\frac{1}{\rho} \frac{d \rho}{d z} = \frac{1}{T} \left( -\Omega^2 z - \frac{d T}{d z} \right) .
\label{eq:hydeq}
\end{equation}
Clearly, whether the density decreases upwards, increases upwards, or remains constant depends on the relative magnitude of the two terms in the brackets on the RHS of (\ref{eq:hydeq}). The first term is a fixed linear function of $z$.
The second--the temperature gradient--is negative since the layer is heated from within, but its magnitude depends on the balance between the local heat production rate and the local heat transport.
To a first approximation, one could assume that the energy production rate is independent of the thermal diffusivity $\kappa$. This is not unreasonable, since the production rate is driven by turbulent dissipation, which in turn depends solely on the efficiency of the MRI. This being the case, the magnitude of the temperature gradient could be made arbitrarily small by choosing a large value of $\kappa$. Clearly, if the thermal diffusivity is huge, thermal conduction can easily transport all the generated heat along very shallow gradients. The temperature will be nearly constant, and the density will rapidly decrease upward in accordance with (\ref{eq:hydeq}), resembling the isothermal distribution. By contrast, if $\kappa$ is tiny, the temperature gradients required to carry the heat will be huge (in absolute value), the RHS of (\ref{eq:hydeq}) will be positive and the density will rapidly increase with height. However, this configuration with a density inversion is strongly unstable to Rayleigh-Taylor type instabilities. The resulting overturning motions will both carry the heat more efficiently than thermal conduction, and homogenize the mass towards a constant density state. Thus we can conjecture the existence of a critical value $\kappa=\kappa_{crit}$ above which the transport is mostly conductive, the layers have a density decreasing with height, and the stratification approaches that of an isothermal layer in the limit of large $\kappa$ (conductive states). For $\kappa \ll \kappa_{crit}$, the heat transport is mostly advective, and the density is approximately constant (convective states).

Some of these ideas can be easily verified by considering a series of calculations with varying thermal diffusivity. The results are summarized in Figs.
\ref{fig:avt_z} and \ref{fig:avrho_z}, where we show the steady-state horizontally averaged temperature and density distributions for different values of $\kappa$. As expected, the temperature gradient is always negative (for $z>0$); its magnitude increases with decreasing $\kappa$, as does the overall temperature of the layer. For large values of $\kappa$ the temperature distributions have an approximately parabolic profile and the density decreases upwards.

For small values of $\kappa$ the temperature in the interior approaches a ``tent'' profile, with a parabolic shape near the equator, then a linear decrease over most of the domain and thin boundary layers at the edges. As $\kappa$ decreases the profiles move up, retaining their shape but producing progressively thinner boundary layers.
The corresponding density profiles confirm the establishment of a constant density state that becomes asymptotically independent of $\kappa$. From Fig. \ref{fig:avrho_z}, we can estimate that the critical value of $\kappa$ for the transition from the conductive to the convective regime, in this setup, satisfies
\begin{equation}
\kappa_{crit} \approx 2 \times 10^{-2}.
\label{eq:kappa-crit}
\end{equation}
Our conjecture that as $\kappa$ crosses its critical value the vertical heat transport changes from conductively dominated to advectively dominated can also be verified by considering the horizontally averaged conductive and advective fluxes, defined respectively as
\begin{equation}
F_c = - \frac{5}{2} \kappa \rho \frac{d \tilde T}{d z} ,
\end{equation}
and
\begin{equation}
F_T = \frac{1}{L_x L_y} \int \frac{5}{2} \rho v_z (T - \tilde T) \, dx \, dy .
\end{equation}
Their values in the stationary state for the two extreme values of $\kappa$ are shown in Fig. \ref{fig:th_fluxes}. The roles of the two types of flux practically reverse.
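The role of the bracketed terms in (\ref{eq:hydeq}) can be made concrete with a small numerical experiment. The following Python sketch (an illustration, not the simulation itself) integrates the hydrostatic balance for an imposed parabolic temperature profile $T(z) = T_c - a z^2/2$, where the steepness $a$ stands in for the conductively required gradient (small $a$ corresponding to large $\kappa$); the values of $T_c$ and $a$ here are arbitrary choices for the demonstration:

```python
import numpy as np

# Integrate (1/rho) drho/dz = (-Omega^2 z - dT/dz) / T for T(z) = Tc - a z^2 / 2,
# so that dT/dz = -a z.  Code units: Omega = 1, rho(0) = 1.
Omega, Tc = 1.0, 2.0
z = np.linspace(0.0, 1.2, 1201)

def density(a):
    T = Tc - 0.5 * a * z**2
    dlnrho = (a - Omega**2) * z / T                        # d(ln rho)/dz
    steps = 0.5 * (dlnrho[1:] + dlnrho[:-1]) * np.diff(z)  # trapezoidal rule
    return np.exp(np.concatenate(([0.0], np.cumsum(steps))))

rho_cond = density(0.5)  # shallow gradient (large kappa): density decreases upward
rho_flat = density(1.0)  # a = Omega^2: exactly constant density
rho_inv = density(1.5)   # steep gradient (small kappa): unstable density inversion

assert rho_cond[-1] < 1.0 < rho_inv[-1]
assert np.allclose(rho_flat, 1.0)
```

The three cases reproduce the trichotomy discussed above: the density decreases upward for $a < \Omega^2$, is exactly constant for $a = \Omega^2$, and develops a Rayleigh-Taylor unstable inversion for $a > \Omega^2$.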
For $\kappa=0.12$ the transport is entirely conductive and advection is negligible; for $\kappa=4\times 10^{-3}$ heat conduction is negligible, except in the boundary layers, and all of the flux is carried by advection. It is interesting to note that near the equator, where the advective flux is small--it is actually zero at the equator--the density displays a weak inversion. This is related to the vanishing of gravity at the equator, which is needed to drive Rayleigh-Taylor instabilities.

\begin{figure}[htbp]
 \centering
 \includegraphics[width=10cm]{fig1.eps}
 \caption{Time history of the Maxwell stresses averaged over the computational box for the case $\kappa = 2 \times 10^{-2}$. }
 \label{fig:maxwtime}
\end{figure}

\begin{figure}[htbp]
 \centering
 \includegraphics[width=10cm]{fig2.eps}
 \caption{Temperature averaged over horizontal planes, $\tilde T$, as a function of the vertical coordinate $z$ for the case $\kappa = 2 \times 10^{-2}$. The different curves refer to different times, as indicated by the labels; for comparison we also plot the time-averaged distribution $ \langle \tilde T \rangle$ in the steady state. }
 \label{fig:avt_time}
\end{figure}

\begin{figure}[htbp]
 \centering
 \includegraphics[width=10cm]{fig3.eps}
 \caption{Density averaged over horizontal planes, $\tilde \rho$, as a function of the vertical coordinate $z$ for the case $\kappa = 2 \times 10^{-2}$. The different curves refer to different times, as indicated by the labels; for comparison we also plot the time-averaged distribution $\langle \tilde \rho \rangle$ in the steady state. }
 \label{fig:avrho_time}
\end{figure}

\begin{figure}[htbp]
 \centering
 \includegraphics[width=10cm]{fig4.eps}
 \caption{Plot of $ \langle \tilde T \rangle $ as a function of $z$. The different curves refer to different values of $\kappa$, as shown in the legend.
For comparison we also plot the isothermal case.}
 \label{fig:avt_z}
\end{figure}

\begin{figure}[htbp]
 \centering
 \includegraphics[width=10cm]{fig5.eps}
 \caption{Plot of $ \langle \tilde \rho \rangle $ as a function of $z$. The different curves refer to different values of $\kappa$, as shown in the legend. For comparison we also plot the isothermal case.}
 \label{fig:avrho_z}
\end{figure}

\begin{figure}[htbp]
 \centering
 \includegraphics[width=10cm]{fig6.eps}
 \caption{Plot of $ \langle F_c \rangle $ and $ \langle F_T \rangle$ as functions of $z$. The solid curves show $\langle F_c \rangle $ while the dashed curves show $ \langle F_T \rangle $; the different colors refer to different values of $\kappa$, as indicated in the legend. }
 \label{fig:th_fluxes}
\end{figure}
The existence of two regimes, conductive and convective, with strikingly different vertical structures is likely to lead to correspondingly different dynamo action. A measure of these differences can be obtained by inspection of Fig. \ref{fig:maxw}, where the domain-averaged Maxwell stresses are shown as a function of time for different values of $\kappa$. The corresponding curve for an isothermal case is also included for comparison. Clearly, the angular momentum transport efficiency increases with decreasing $\kappa$ and eventually saturates in the convective regime. It is natural to assume that once the heat transport is mostly advective, further decreases in the thermal diffusivity will not make any difference. What is remarkable is the difference between the convective cases and the purely isothermal one, with the latter being strikingly smaller.

\begin{figure}[htbp]
 \centering
 \includegraphics[width=10cm]{fig7.eps}
 \caption{Plot of the average Maxwell stresses as a function of $z$ for two cases with different values of $\kappa$.
One case (solid line) is in the convective regime, the other (dashed-dotted line) is in the conductive regime. }
 \label{fig:av_maxw}
\end{figure}

\begin{figure}[htbp]
 \centering
 \includegraphics[width=15cm]{fig8.eps}
 \caption{Space-time diagrams of the average azimuthal field. The horizontally averaged value of $B_y$ is plotted as a function of $z$ and $t$.
The upper panel corresponds to a case in the convective regime; the lower, to one in the conductive regime. The corresponding values of $\kappa$ are as indicated. }
 \label{fig:byavg_z_t}
\end{figure}

Further evidence for two distinct types of dynamo action operating in the two regimes can be obtained by inspection of Fig. \ref{fig:av_maxw}. This shows the horizontally and time averaged Maxwell stresses as a function of $z$ for two cases with different values of $\kappa$, corresponding to the convective and conductive regimes. The curve for the conductive case follows the general trend of the more familiar isothermal calculations. The transport is largest in the denser central regions, steadily decreasing at higher values of $z$. This is in sharp contrast with the convective case, in which the stresses rapidly increase with distance from the mid-plane, reaching a sharp maximum near the boundaries. In both cases, the corresponding Reynolds stresses are small and decrease steadily away from the mid-plane. The spatio-temporal behavior of the dynamo is also remarkably different in the two regimes, as illustrated in Fig. \ref{fig:byavg_z_t}. The two panels show space-time diagrams of the horizontally averaged azimuthal magnetic field as a function of $z$ and time. The lower panel, corresponding to a conductive case, displays the characteristic patterns typical of the isothermal cases, signaling the presence of cyclic activity with magnetic structures propagating from the mid-plane to the boundaries. In the upper panel there is no evidence for cyclic activity or pattern propagation.
The magnetic structures form and vanish seemingly at random, with no apparent characteristic time between field reversals. Furthermore there are events in which coherent structures form that extend over the entire layer. Interestingly, at earlier times, when the layer is still close to isothermal, there is some evidence for pattern propagation. From these last two figures it is clear that both the transport efficiency and the amount of generated toroidal flux are much higher in the convective regime than in the conductive one.

A possible reason for this difference might be related to the influence of the magnetic boundary conditions. It is well known that in unstratified shearing boxes the boundary conditions make a big difference to the operation of the dynamo. Periodic boundary conditions, as was mentioned in the introduction, lead to small-scale dynamo action and to the convergence problem. On the other hand, ``vertical'' boundary conditions, like the ones imposed here, lead to a much more efficient dynamo that appears to scale with the system size rather than with the dissipation scale \citep{Kapyla10}. By contrast, the solutions in isothermal, stratified shearing boxes are largely insensitive to the boundary conditions \citep{Davis10, Shi10, Oishi11}. This is most likely because the boundaries are located in very tenuous regions characterized by low density and high Alfv\'en speed. In the convective cases described here, the density is nearly constant as a function of height, making the layer appear more ``unstratified''.
Partial support for this argument can be provided by looking at what type of dynamo is operating in the convective regime. Fig. \ref{fig:avby} shows the time history of the volume averaged value of $B_y$ (the azimuthal component) scaled in terms of the rms value of the fluctuations.
Two things are worth noting: the average field changes sign, and its magnitude is comparable with--and occasionally even exceeds--that of the fluctuations. This is strongly suggestive of the operation of a system-scale dynamo \citep{Tobias11} and should be contrasted with the corresponding isothermal case, in which the behavior depends on height and the ratio between the average and the fluctuations rises from about ten percent in the central region, where most of the transport takes place, to more substantial values in the upper and lower regions, where the transport strongly declines.

\begin{figure}[htbp]
 \centering
 \includegraphics[width=10cm]{fig9.eps}
 \caption{Time histories of the volume averaged Maxwell stresses for different values of the thermal diffusivity $\kappa$. The values of $\kappa$ are shown in the legend. }
 \label{fig:maxw}
\end{figure}

\begin{figure}[htbp]
 \centering
 \includegraphics[width=10cm]{fig10.eps}
 \caption{Time history of $\overline B_y$ (the volume averaged $B_y$) in units of the rms value of the fluctuations. For this case $\kappa = 4 \times 10^{-3}$.}
 \label{fig:avby}
\end{figure}

\section{Conclusions} \label{conclusions}
Our main objective has been to study numerically the effects of dissipative heating and finite heat transport in determining the thermal structure of the layer and the efficiency of angular momentum transport in stratified shearing boxes with zero magnetic flux. In particular we wanted to compare with the more commonly studied isothermal case. To this end we have considered the simple case of a fluid obeying the perfect gas law and with finite (constant) thermal diffusivity.

Our main result is to identify two distinct regimes: conductive and convective, corresponding respectively to large and small values of the thermal diffusivity.
In the conductive regime, the heat generated by dissipation is transported through the bulk of the layer by thermal conduction, and the temperature and density have close to parabolic profiles. This appears to be in agreement with the conclusions of the recent work by \citet{Uzdensky12}. The convective regime is dramatically different. In these cases the heat is transported almost entirely by overturning motions driven by Rayleigh-Taylor type instabilities. The density profile becomes flat, and the temperature develops a ``tent'' profile with thin boundary layers at the upper and lower boundaries. There is evidence that the ``tent'' profile for the temperature and the flat profile for the density are universal, in the sense that they depend solely on the properties of the turbulence and not on the values of the collisional transport coefficients.
This last property in particular may have important consequences for the dynamo processes. It appears that the dynamo can operate more efficiently in a layer with nearly constant density than in a corresponding layer with the same total mass and a Gaussian profile (isothermal case). This being the case, there is an interesting feedback effect. The dynamo drives the MRI turbulence that heats the layer, causing it to become Rayleigh-Taylor unstable; the overturning motions associated with the Rayleigh-Taylor instability homogenize the density, allowing a more efficient operation of the dynamo, and so on until the layer settles into a universal, convective, self-regulated state. At the moment it is not clear whether the Rayleigh-Taylor driven motions contribute directly to a more efficient working of the dynamo, or whether they contribute indirectly by maintaining the more beneficial constant density state. Some of the similarities between the dynamo properties observed here and those in the work of \citet{Kapyla10}, in which stratification is absent, suggest that it may be the constant density feature that is important.

Finally, we remark on the natural extensions of the present model. There are two avenues that immediately come to mind. One is to include a more realistic treatment of the thermal transport. The obvious next step is to consider thermal diffusivities that have power-law dependencies on density and temperature. We anticipate that this may have some impact on the stratification in the conductive regime, but hardly any in the convective regime, in which all the thermal transport is mediated by flows anyway. The other is to consider more realistic boundary conditions, such as those appropriate to black-body radiation. Preliminary results in this direction show that in the convective regime this choice leads to a change in the overall value of the temperature but not in its profile. Also, the constant density profile remains unchanged. These results, however, are preliminary and a more thorough study is needed. In addition, the assumption of impenetrable, stress-free boundary conditions should be replaced by more realistic conditions in which there is a thin transition layer across which the opacity changes dramatically and the fluid goes from being optically thick to optically thin. The problem is similar to that of matching a photosphere on top of a stellar convection zone. Numerically this is extremely challenging and will be considered in future work.
However, all these extensions are secondary to the issue of convergence. In a sense, if the dynamo ceases to operate efficiently at high magnetic Reynolds numbers all bets are off. There is some room for cautious optimism, since the evidence so far is that the dynamo operating here in the convective regime is more likely to be of the system-scale type than of the small-scale type. Preliminary calculations with twice the resolution indeed support this conjecture. However, in the end only a (very costly) convergence study will settle the issue.
\n\n\n\\section{Acknowledgment}\nThis work was supported in part by the National Science Foundation \nsponsored Center for Magnetic Self Organization at the University of Chicago.\nGB, AM and PR acknowledge support by INAF through an INAF-PRIN grant. \nWe acknowledge that the results in this paper have been achieved using the PRACE Research Infrastructure resource JUGENE based in Germany at the J\\\"ulich Supercomputing Center.\n\n\n\\section{Introduction}\n\\IEEEPARstart{M}{any} applications in wireless networks involve multicast communication, which can be defined as the transmission of identical information to multiple receivers. \nOne example is connected driving, where applications such as platooning can benefit from transmitting the same status or control information to a group of vehicles \\cite{zheng2015stability}.\nAnother example is the transmission of audio signals for live events, where each spectator can select from a variety of audio streams. \nBoth use cases can benefit considerably from physical layer precoders that ensure a given quality-of-service (QoS) level for the requested stream at each receiver while reusing the same time and frequency resources for all receivers.\n\nPhysical layer multicasting schemes have been extensively investigated in the last two decades. The authors of \\cite{sidiropoulos2006transmit} show that the performance of multicast transmission can be greatly improved by exploiting channel state information (CSI) at the transmitter. They consider two beamforming problems for single-group multicast beamforming, the max-min-fair (MMF) multicast beamforming problem and the QoS-constrained multicast beamforming problem. 
While the MMF formulation aims at maximizing the lowest signal-to-noise ratio (SNR) among a group of users subject to a unit power constraint on the beamforming vector, the objective of the QoS-constrained formulation is to minimize the transmit power subject to SNR constraints for the individual users. Moreover, the authors of \\cite{sidiropoulos2006transmit} show that the solutions to both problems are equivalent up to a scaling factor.\n\nThe more general case with multiple cochannel multicast groups is considered in \\cite{karipidis2008quality}. \nUnlike the single-group case, the QoS-constrained and MMF versions of the multi-group multicast beamforming problem are different in the sense that a solution to one version cannot generally be obtained by scaling a solution to the other. \nHowever, algorithms for the QoS-constrained formulation can be straightforwardly extended to approximate the MMF version by performing a bisection search over the target signal-to-interference-plus-noise ratio (SINR) values. In this paper, we will therefore restrict our attention to the QoS-constrained formulation.\n\n\n\nThe QoS-constrained multi-group multicast beamforming problem is a well-studied nonconvex quadratically constrained quadratic programming (QCQP) problem, for which various algorithmic approximations have been proposed. Existing approaches such as semidefinite relaxation with Gaussian randomization and successive convex approximation (SCA) algorithms -- also known as convex-concave procedures (CCP) -- involve solving a sequence of convex subproblems.\nSolutions to these subproblems can be approximated either using off-the-shelf interior-point methods or using first-order algorithms such as the alternating direction method of multipliers (ADMM). \nWhile the use of interior-point methods typically results in a high computational complexity, the ADMM can require a large number of iterations to achieve a certain accuracy. 
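The bisection reduction from the MMF problem to a sequence of QoS feasibility checks mentioned above can be sketched as follows; here `qos_feasible` is a hypothetical stand-in for any solver of the QoS-constrained problem, not an interface from the cited works.

```python
def mmf_by_bisection(qos_feasible, p_max, gamma_lo=0.0, gamma_hi=100.0, tol=1e-3):
    """Approximate the largest common SINR target that remains feasible
    under the power budget p_max via bisection."""
    while gamma_hi - gamma_lo > tol:
        gamma = 0.5 * (gamma_lo + gamma_hi)
        if qos_feasible(gamma, p_max):
            gamma_lo = gamma  # target achievable: search higher
        else:
            gamma_hi = gamma  # target infeasible: search lower
    return gamma_lo

# Toy oracle for illustration: feasible whenever the required power 2*gamma
# stays within the budget, so the optimum is gamma = p_max / 2.
print(mmf_by_bisection(lambda g, p: 2.0 * g <= p, p_max=10.0))
```

Each oracle call plays the role of solving one QoS-constrained instance; the monotonicity of feasibility in the common SINR target is what makes the bisection valid.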
\nRegardless of the algorithm used to approximate each subproblem, the CCP results in nested approximation loops. \nTerminating the inner iteration after a finite number of steps can compromise the feasibility of the iterates, which is required to ensure that the CCP converges.\nBy contrast, if we assume the singular value decomposition of a matrix to be computable,\\footnote{The convergence of algorithms for computing the singular value decomposition is well-studied (see, e.g., \\cite{van1983matrix}).} the algorithm proposed in this paper is free of nested optimization loops.\n\n\n\n\nIn cases where constrained minimization becomes too costly, the superiorization methodology (see, e.g., \\cite{herman2012superiorization}, \\cite{censor2015weak}) constitutes a promising alternative. \nWhereas the goal of constrained minimization is to find a feasible point (i.e., a point satisfying all constraints) for which the objective value is minimal, superiorization typically builds upon a simple fixed-point algorithm that produces a sequence of points that provably converges to a feasible point. This fixed-point algorithm serves as the so-called \\emph{basic algorithm}, which is then modified by adding small perturbations in each iteration with the intent to find a feasible point with a reduced (not necessarily minimal) objective value. By showing that the basic algorithm is bounded perturbation resilient, its convergence guarantee towards a feasible point can be extended to the modified algorithm, called a superiorized version of the basic algorithm. \n\n\nIn this paper, we consider the QoS-constrained multi-group multicast beamforming problem in \\cite{karipidis2008quality} with optional per-antenna power constraints as introduced in \\cite{chen2017admm}. We propose an algorithmic approximation based on superiorization of a bounded perturbation resilient fixed-point mapping.\nTo do so, we formulate the problem in a product Hilbert space composed of subspaces of Hermitian matrices. 
\nThis allows us to approximate a feasible point of the relaxed problem with the well-known projections onto convex sets (POCS) algorithm \\cite{stark1998vector}, which iteratively applies a fixed-point mapping comprised of the (relaxed) projections onto each constraint set.\nWe show that this operator is bounded perturbation resilient, which allows us to add small perturbations in each iteration with the intent to reduce the objective value and the distance to the nonconvex rank-one constraints.\nSimulations show that, compared to existing methods, the proposed approach can provide better approximations at a lower computational cost in many cases.\n\n\n\n\\subsection{Preliminaries and Notation}\\label{sec:notation}\nUnless specified otherwise, lowercase letters denote scalars, lowercase letters in bold typeface denote vectors, uppercase letters in bold typeface denote matrices, and letters in calligraphic font denote sets. The sets of nonnegative integers, nonnegative real numbers, real numbers, and complex numbers are denoted by ${\\mathbb N}$, ${\\mathbb R}_+$, ${\\mathbb R}$, and ${\\mathbb C}$, respectively.\nThe real part, imaginary part, and complex conjugate of a complex number $x\\in{\\mathbb C}$ are denoted by $\\mathrm{Re}\\{x\\}$, $\\mathrm{Im}\\{x\\}$, and $x^\\ast$, respectively. The nonnegative part of a real number $x\\in{\\mathbb R}$ is denoted by $\\relu[x]\\triangleq\\max(x,0)$.\n\nWe denote by ${\\mathrm{Id}}$ the identity operator and by ${\\mathbf I}_N$ the $N\\times N$-identity matrix. The all-zero vector or matrix is denoted by ${\\mathbf 0}$ and the $i$th Cartesian unit vector is denoted by ${\\mathbf e}_i$, where the dimension of the space will be clear from the context. 
The Euclidean norm of a real or complex column vector ${\\mathbf x}$ is denoted by $\\|{\\mathbf x}\\|_2=\\sqrt{{\\mathbf x}^H{\\mathbf x}}$.\nThe $i$th singular value of a matrix ${\\mathbf A}\\in{\\mathbb C}^{N\\times N}$ is denoted by $\\sigma_i({\\mathbf A})$, where the singular values are ordered such that $\\sigma_1({\\mathbf A})\\ge\\cdots\\ge\\sigma_N({\\mathbf A})$. \nFor square matrices ${\\mathbf A}$ we define $\\mathrm{diag}({\\mathbf A})$ to be the column vector composed of the diagonal of ${\\mathbf A}$, and for row or column vectors ${\\mathbf a}$ we define $\\mathrm{diag}({\\mathbf a})$ to be a square diagonal matrix having ${\\mathbf a}$ as its diagonal. \nWe write ${\\mathbf A}\\succcurlyeq {\\mathbf 0}$ for positive semidefinite (PSD) matrices ${\\mathbf A}$.\n\nThe distance between two points ${\\mathbf x},{\\mathbf y}\\in\\mathcal{H}$ in a real Hilbert space $(\\mathcal{H},\\langle\\cdot,\\cdot\\rangle)$ is $d({\\mathbf x},{\\mathbf y})=\\|{\\mathbf x}-{\\mathbf y}\\|$, where $\\|\\cdot\\|$ is the norm induced by the inner product $\\langle\\cdot,\\cdot\\rangle$. The distance between a point ${\\mathbf x}\\in\\mathcal{H}$ and a nonempty set $\\mathcal{C}\\subset\\mathcal{H}$ is defined as $d({\\mathbf x},\\mathcal{C})=\\inf_{{\\mathbf y}\\in\\mathcal{C}}\\|{\\mathbf x}-{\\mathbf y}\\|$.\nFollowing \\cite{bauschke2002phase}, we define the projection of a point ${\\mathbf x}\\in\\mathcal{H}$ onto a nonempty subset $\\mathcal{C}\\subset\\mathcal{H}$ as the set\n\\begin{equation*}\n\\Pi_\\mathcal{C}({\\mathbf x}) = \\left\\{{\\mathbf y}\\in\\mathcal{C}|~ d({\\mathbf x},{\\mathbf y}) = d({\\mathbf x},\\mathcal{C})\\right\\},\n\\end{equation*}\nand denote by $P_\\mathcal{C}:\\mathcal{H}\\to\\mathcal{C}$ an arbitrary but fixed selection of $\\Pi_\\mathcal{C}$, i.e., $(\\forall {\\mathbf x}\\in\\mathcal{H})$ $P_\\mathcal{C}({\\mathbf x})\\in\\Pi_\\mathcal{C}({\\mathbf x})$. 
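As a concrete illustration of such selections, two projectors of the kind that play a role later in the paper, onto a closed halfspace and onto the cone of positive semidefinite matrices, can be sketched as follows (our illustrative code under the stated conventions, not the authors' implementation):

```python
import numpy as np

def project_halfspace(x, a, b):
    """Projection onto the closed halfspace {x : <a, x> <= b} (real case)."""
    violation = max(0.0, (a @ x - b) / (a @ a))
    return x - violation * a

def project_psd(X):
    """Projection of a Hermitian matrix onto the PSD cone w.r.t. the
    Frobenius norm: clip the negative eigenvalues to zero."""
    w, V = np.linalg.eigh(X)
    return (V * np.maximum(w, 0.0)) @ V.conj().T
```

For points already in the respective set, both functions return their argument unchanged.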
If $\\mathcal{C}$ is nonempty, closed, and convex, the set $\\Pi_\\mathcal{C}({\\mathbf x})$ is a singleton for all ${\\mathbf x}\\in\\mathcal{H}$, so $\\Pi_\\mathcal{C}$ has a unique selection $P_\\mathcal{C}$, which itself is called a projector. For closed nonconvex sets $\\mathcal{C}\\neq\\emptyset$ in finite-dimensional Hilbert spaces, $\\Pi_\\mathcal{C}({\\mathbf x})$ is nonempty for all ${\\mathbf x}\\in\\mathcal{H}$, although it is not generally a singleton. Nevertheless, we will refer to the selection $P_\\mathcal{C}$ as the projector, as the distinction from the set-valued operator $\\Pi_\\mathcal{C}$ will always be clear. \n\n\nA fixed point of a mapping $T:\\mathcal{H}\\to\\mathcal{H}$ is a point ${\\mathbf x}\\in\\mathcal{H}$ satisfying $T({\\mathbf x})={\\mathbf x}$. The set $\\mathrm{Fix}(T)=\\{{\\mathbf x}\\in\\mathcal{H}~|~ T({\\mathbf x})={\\mathbf x}\\}$ is called the fixed point set of $T$ \\cite{yamada2005adaptive}. \nGiven two mappings $T_1, T_2:\\mathcal{H}\\to\\mathcal{H}$, we use the shorthand $T_1T_2:=T_1\\circ T_2$ to denote their concatenation, which is defined by the composition $(\\forall{\\mathbf x}\\in\\mathcal{H})$ $T_1T_2({\\mathbf x}):=(T_1\\circ T_2)({\\mathbf x}) = T_1\\left(T_2({\\mathbf x})\\right)$. \n\nFor the following statements, let $\\left(\\mathcal{H},\\langle\\cdot,\\cdot\\rangle\\right)$ be a real Hilbert space with induced norm $\\|\\cdot\\|$. 
\n\\begin{definition}\n\tA mapping $T:\\mathcal{H}\\to\\mathcal{H}$ is called \\emph{nonexpansive} if $(\\forall{\\mathbf x},{\\mathbf y}\\in\\mathcal{H})$ $\\|T({\\mathbf x})-T({\\mathbf y})\\|\\le \\|{\\mathbf x}-{\\mathbf y}\\|$ \\cite{yamada2005adaptive}.\n\\end{definition}\n\\begin{definition}\\label{def:alpha_avg_nonexpansive}\n\tA mapping $T: \\mathcal{H}\\rightarrow \\mathcal{H}$ is $\\alpha$-averaged nonexpansive if there exist $\\alpha\\in(0,1)$ and a nonexpansive operator $R: \\mathcal{H}\\rightarrow \\mathcal{H}$ such that $T = (1-\\alpha){\\mathrm{Id}} + \\alpha R$ \\cite[Definition 4.33]{bauschke2011convex}.\n\\end{definition}\n\\begin{fact}\\label{fact:nonexpansive_composition}\n\tLet $T_1,\\dots,T_L:\\mathcal{H}\\to\\mathcal{H}$ be (averaged) nonexpansive mappings with at least one common fixed point. Then the composition $T_1\\cdots T_L$ is also (averaged) nonexpansive and $\\mathrm{Fix}(T_1\\cdots T_L)=\\bigcap_{l\\in\\{1,\\dots,L\\}}\\mathrm{Fix}(T_l)$.\n\t\\cite[Fact~1]{yamada2005adaptive}, \\cite[Proposition~2.3]{he2017perturbation}\n\\end{fact}\n\\begin{fact}\\label{fact:mann_iteration}\n\tLet $T:\\mathcal{H}\\to\\mathcal{H}$ be a nonexpansive mapping with $\\mathrm{Fix}(T)\\neq\\emptyset$. 
Then for any initial point ${\\mathbf x}_0\\in\\mathcal{H}$ and $\\alpha\\in(0,1)$, the sequence $({\\mathbf x}_n)_{n\\in{\\mathbb N}}\\subset\\mathcal{H}$ generated by \n\t\\begin{equation*}\n\t{\\mathbf x}_{n+1} = (1-\\alpha){\\mathbf x}_n+\\alpha T({\\mathbf x}_n)\n\t\\end{equation*}\n\tconverges weakly\\footnote{In finite dimensional Hilbert spaces, weak convergence implies strong convergence \\cite{yamada2011minimizing}.} to an unspecified point in $\\mathrm{Fix}(T)$.\n\tThis fact is a special case of\t\n\t\\cite[Proposition~17.10b]{yamada2011minimizing}.\n\\end{fact}\n\\section{Problem Statement}\nIn Section~\\ref{sec:system_and_original_problem}, we define the system model and state the multi-group multicast beamforming problem with QoS- and per-antenna-power-constraints, and we reformulate it in terms of a nonconvex semidefinite program (SDP).\nA well-known approach to approximating solutions to such problems resorts to solving a convex relaxation: First, the original problem is relaxed and solved using, e.g., interior point methods. Subsequently, randomization techniques are applied to obtain candidate solutions to the original problem \\cite{karipidis2008quality}, \\cite{luo2010semidefinite}. \nHowever, in real-time applications, the complexity of interior point solvers becomes prohibitive as it grows very fast with the system size (i.e., the number of users and the number of antennas).\n\nTherefore, in Section~\\ref{sec:SDR_hilbert}, we formulate the problem in a real product Hilbert space composed of complex (Hermitian) matrices. 
This formulation makes the problem accessible by a variety of first-order algorithms with low complexity and provable convergence properties.\n\n\n\\subsection{System Model and Original Problem}\\label{sec:system_and_original_problem}\nFollowing the system model in \\cite{karipidis2008quality}, we consider the downlink in a network with a transmitter equipped with $N$ antenna elements, each of them represented by an element of the set $\\mathcal{N}\\triangleq\\{1,\\dots,N\\}$. Each user $k\\in\\mathcal{K}\\triangleq \\{1,\\dots,K\\}$ is equipped with a single receive antenna. The users are grouped into $M$ disjoint multicast groups $\\mathcal{G}_m\\subseteq\\mathcal{K}$ indexed by $m\\in\\mathcal{M}\\triangleq\\{1,\\dots,M\\}$, such that $\\bigcup_{m=1}^M\\mathcal{G}_m=\\mathcal{K}$. Each member of a multicast group $\\mathcal{G}_m$ is intended to receive the same information-bearing symbol $x_m\\in{\\mathbb C}$.\nThe receive signal for the $k$th user can be written as $y_k = \\sum_{m=1}^M {\\mathbf w}_m^H {\\mathbf h}_k x_m + n_k$, where ${\\mathbf w}_m\\in{\\mathbb C}^N$ is the beamforming vector for the $m$th multicast group, ${\\mathbf h}_k\\in{\\mathbb C}^N$ is the instantaneous channel to user $k$, and $n_k\\in{\\mathbb C}$ --- drawn independently from the distribution $\\mathcal{CN}(0,\\sigma_k^2)$ --- is the noise sample at the receiver. Consequently, the transmit power for group $\\mathcal{G}_m$ is proportional to $\\|{\\mathbf w}_m\\|_2^2$.\n\nIn this paper, we consider the multi-group multicast beamforming problem with QoS-constraints \\cite{karipidis2008quality}, which has the objective to minimize the total transmit power subject to constraints on the QoS expressed in terms of SINR requirements. 
We use the following problem formulation from \\cite{chen2017admm}, with an individual power-constraint for each transmit antenna:\n\\begin{subequations}\\label{eq:original_problem}\n\t\\begin{alignat}{3}\n\t\\underset{\\{{\\mathbf w}_m\\in{\\mathbb C}^N\\}_{m=1}^M}{\\mathrm{minimize}} &\\ \\sum\\limits_{m=1}^M\\|{\\mathbf w}_m\\|_2^2\\label{eq:original_prob_a}\\\\\n\t\\mathrm{s.t.}\\quad & (\\forall m \\in\\mathcal{M})(\\forall k \\in\\mathcal{G}_m)\\nonumber\\\\\n\t&\\frac{ |{\\mathbf w}_m^H{\\mathbf h}_k|^2}{\\sum_{l\\neq m}|{\\mathbf w}_l^H{\\mathbf h}_k|^2+\\sigma_k^2}\\ge \\gamma_k\\label{eq:original_prob_b}\\\\\n\t&(\\forall i\\in\\mathcal{N})\\ \\sum_{m=1}^M{\\mathbf w}_m^H\\e_i\\e^T_i{\\mathbf w}_m\\le p_i\\label{eq:original_prob_c}\n\n\t\\end{alignat}\n\\end{subequations}\nThe objective function in \\eqref{eq:original_prob_a} corresponds to the total transmit power. The inequalities in \\eqref{eq:original_prob_b} constitute the SINR-constraints, where $\\gamma_k$ is the SINR required by user $k$. The inequalities in \\eqref{eq:original_prob_c} correspond to the per-antenna power constraints, where ${\\mathbf e}_i\\in{\\mathbb R}^N$ is the $i$th Cartesian unit vector.\n\nThe problem in \\eqref{eq:original_problem} is a nonconvex QCQP, which is known to be NP-hard \\cite{sidiropoulos2006transmit}.\nA well-known strategy for approximating solutions to such problems is the semidefinite relaxation technique \\cite{karipidis2008quality}, \\cite{luo2010semidefinite}. 
By this technique, we obtain a convex relaxation of the original problem by reformulating it as a nonconvex semidefinite program and by dropping the nonconvex rank constraints.\nMore precisely, using the trace identity $\\mathrm{tr}({\\mathbf A}{\\mathbf B})=\\mathrm{tr}({\\mathbf B}{\\mathbf A})$ for matrices ${\\mathbf A},{\\mathbf B}$ of compatible dimensions, we can write $\\|{\\mathbf w}_m\\|_2^2={\\mathbf w}_m^H{\\mathbf w}_m=\\mathrm{tr}({\\mathbf w}_m^H{\\mathbf w}_m)=\\mathrm{tr}({\\mathbf w}_m{\\mathbf w}_m^H)$ and $|{\\mathbf w}_m^H{\\mathbf h}_k|^2={\\mathbf w}_m^H{\\mathbf h}_k({\\mathbf w}_m^H{\\mathbf h}_k)^*=\\mathrm{tr}({\\mathbf w}_m^H{\\mathbf h}_k{\\mathbf h}_k^H{\\mathbf w}_m)=\\mathrm{tr}({\\mathbf w}_m{\\mathbf w}_m^H{\\mathbf h}_k{\\mathbf h}_k^H)$. By defining $(\\forall k\\in\\mathcal{K})$ ${\\mathbf Q}_k={\\mathbf h}_k{\\mathbf h}_k^H$, and replacing the expression ${\\mathbf w}_m{\\mathbf w}_m^H$ by a positive semidefinite rank-one matrix ${\\mathbf X}_m\\in{\\mathbb C}^{N\\times N}$ for all $m\\in\\mathcal{M}$, we obtain the nonconvex semidefinite program\n\\begin{subequations}\\label{eq:SDP}\n\t\\begin{alignat}{5}\n\\underset{\\{{\\mathbf X}_m\\in{\\mathbb C}^{N\\times N}\\}_{m=1}^M}{\\mathrm{minimize}}\\ & \\sum\\limits_{m=1}^M\\mathrm{tr}({\\mathbf X}_m)\\label{eq:SDPa}\\\\\n\\mathrm{s.t.}\\quad & (\\forall m \\in\\mathcal{M}) (\\forall k \\in\\mathcal{G}_m)\\label{eq:SDPb}\\\\\n& \\mathrm{tr}({\\mathbf Q}_k{\\mathbf X}_m)\\ge \\gamma_k\\sum\\limits_{l\\neq m}\\mathrm{tr}({\\mathbf Q}_k{\\mathbf X}_l) + \\gamma_k\\sigma_k^2\\notag\\\\\n&(\\forall i\\in\\mathcal{N})\\ \\sum_{m=1}^M\\mathrm{tr}(\\e_i\\e^T_i{\\mathbf X}_m)\\le p_i\\label{eq:SDPc}\\\\\n&(\\forall m\\in\\mathcal{M})\\ {\\mathbf X}_m\\succcurlyeq{\\mathbf 0}\\label{eq:SDPd}\\\\\n&\\mathrm{rank}({\\mathbf X}_m)\\le 1,\\label{eq:SDPe}\n\\end{alignat}\n\\end{subequations}\nThis formulation is equivalent to \\eqref{eq:original_problem} in the sense that $\\{{\\mathbf 
X}_m={\\mathbf w}_m{\\mathbf w}_m^H\\}_{m=1}^M$ solves \\eqref{eq:SDP} if and only if $\\{{\\mathbf w}_m\\}_{m=1}^M$ solves \\eqref{eq:original_problem}.\n\nA convex relaxation of Problem~\\eqref{eq:SDP} can be obtained by simply dropping the rank constraints in \\eqref{eq:SDPe}. \nThe approach in \\cite{sidiropoulos2006transmit}, \\cite{karipidis2008quality} solves this relaxed problem and, subsequently, generates candidate approximations for Problem~\\eqref{eq:SDP} (and hence \\eqref{eq:original_problem}) using randomization techniques. A solution to the relaxed problem is typically found using general-purpose interior-point solvers, which results in a high computational cost for large-scale problems. In the multi-group setting \\cite{karipidis2008quality}, each randomization step involves solving an additional power control problem, which further increases the computational burden. \n\n\n\n\n\n\\subsection{Problem Formulation in a Real Hilbert Space}\\label{sec:SDR_hilbert}\nThe objective of this section is to show that Problem~\\eqref{eq:SDP} can be formulated in a real Hilbert space, which enables us to approach the problem by means of efficient projection-based methods. \nTo this end, we consider the \\emph{real} vector space $\\mathcal{V}\\triangleq {\\mathbb C}^{N\\times N}$ of complex $N\\times N$ matrices. More precisely, we define vector addition in the usual way, \nand we restrict scalar multiplication to real scalars $a\\in{\\mathbb R}$, where each coefficient of a vector ${\\mathbf X}\\in\\mathcal{V}$ is multiplied by $a$ to obtain the vector $a{\\mathbf X}\\in\\mathcal{V}$. 
In this way, $\\mathcal{V}$ is a real vector space, i.e., a vector space over the field ${\\mathbb R}$.\n\nIf we equip the space $\\mathcal{V}$ with a real inner product\\footnote{A proof that this function is in fact a real inner product can be found in Remark~\\ref{rem:real_inner_product} in the Appendix.}\n\\begin{equation}\\label{eq:innerProduct}\n(\\forall {\\mathbf X},{\\mathbf Y}\\in\\mathcal{V})\\quad \n\\left\\langle{\\mathbf X},{\\mathbf Y}\\right\\rangle\\triangleq\\mathrm{Re}\\left\\{\\mathrm{tr}\\left({\\mathbf X}^H{\\mathbf Y}\\right)\\right\\},\n\\end{equation}\nwhich induces the standard Frobenius norm \n\\begin{equation*}\n||{\\mathbf X}||=\\sqrt{\\left\\langle{\\mathbf X},{\\mathbf X}\\right\\rangle} = \\sqrt{\\mathrm{tr}\\left({\\mathbf X}^H{\\mathbf X}\\right)},\n\\end{equation*}\nwe obtain a \\emph{real} Hilbert space $\\left(\\mathcal{V},\\langle\\cdot,\\cdot\\rangle\\right)$. \n\nIn the remainder of this paper, we restrict our attention to the subspace ${\\mathcal H}\\triangleq \\{{\\mathbf X}\\in\\mathcal{V}~|~ {\\mathbf X}={\\mathbf X}^H\\}$ of Hermitian matrices.\nFollowing the notation in \\cite{stark1998vector}, we define a product space ${{\\mathcal H}^M}$ as the $M$-fold Cartesian product\n\\begin{equation*}\n{{\\mathcal H}^M} \\triangleq\\underset{M\\text{ times}}{\\underbrace{{\\mathcal H}\\times\\dots\\times{\\mathcal H}}}\n\\end{equation*}\n of ${\\mathcal H}$. 
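A quick numerical sanity check of the inner product in \\eqref{eq:innerProduct} (illustration only, with randomly generated test matrices) confirms that it is symmetric on Hermitian matrices and induces the Frobenius norm:

```python
import numpy as np

def inner(X, Y):
    """<X, Y> = Re tr(X^H Y)."""
    return np.real(np.trace(X.conj().T @ Y))

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
X, Y = (A + A.conj().T) / 2, (B + B.conj().T) / 2  # Hermitian test matrices

assert np.isclose(inner(X, Y), inner(Y, X))             # symmetry
assert np.isclose(inner(X, X), np.linalg.norm(X) ** 2)  # induced Frobenius norm
```

Note that for Hermitian arguments $\\mathrm{tr}({\\mathbf X}^H{\\mathbf Y})$ is already real, so taking the real part only matters for general matrices in $\\mathcal{V}$.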
\n In this vector space, the sum of two vectors ${\\mathbf X}=\\left({\\mathbf X}_1,\\dots,{\\mathbf X}_M\\right)$ and ${\\mathbf Y}=\\left({\\mathbf Y}_1,\\dots,{\\mathbf Y}_M\\right)\\in{{\\mathcal H}^M}$ is given by ${\\mathbf X}+{\\mathbf Y} := \\left({\\mathbf X}_1 + {\\mathbf Y}_1,\\dots,{\\mathbf X}_M +{\\mathbf Y}_M\\right)$ and scalar multiplication is restricted to real scalars $a\\in{\\mathbb R}$, where $a\\left({\\mathbf X}_1,\\dots,{\\mathbf X}_M\\right):=\\left(a{\\mathbf X}_1,\\dots,a{\\mathbf X}_M\\right)$.\n We equip the space ${{\\mathcal H}^M}$ with the inner product\n\\begin{equation}\\label{eq:InnerProduct}\n\\langle\\ipspacing\\langle {\\mathbf X},{\\mathbf Y} \\rangle\\ipspacing\\rangle \\triangleq\\sum\\limits_{m=1}^M \\langle {\\mathbf X}_m,{\\mathbf Y}_m\\rangle,\n\\end{equation}\nwhich induces the norm\n\\begin{equation*}\n\\Norm[{\\mathbf X}]^2=\\langle\\ipspacing\\langle{\\mathbf X},{\\mathbf X}\\rangle\\ipspacing\\rangle=\\sum\\limits_{m=1}^M \\|{\\mathbf X}_m\\|^2,\n\\end{equation*}\nwhere $(\\forall m\\in\\mathcal{M})$ ${\\mathbf X}_m\\in{\\mathcal H}$ and ${\\mathbf Y}_m\\in{\\mathcal H}$.\n Consequently, $\\left({{\\mathcal H}^M}, \\langle\\ipspacing\\langle\\cdot,\\cdot\\rangle\\ipspacing\\rangle\\right)$ is also a real Hilbert space.\n\nIn order to pose Problem~\\eqref{eq:SDP} in this Hilbert space, we express the objective function in \\eqref{eq:SDPa} and the constraints in \\eqref{eq:SDPb}--\\eqref{eq:SDPe} in terms of a convex function and closed sets in $\\left({{\\mathcal H}^M}, \\langle\\ipspacing\\langle\\cdot,\\cdot\\rangle\\ipspacing\\rangle\\right)$ as shown below:\n\n\\begin{enumerate}\n\t\\item The objective function in \\eqref{eq:SDPa} can be written as the following\n\tinner product:\n\t\\begin{equation}\\label{eq:hilber_objective}\n\t\\sum\\limits_{m=1}^M\\mathrm{tr}({\\mathbf X}_m) = \\langle\\ipspacing\\langle{\\mathbf J},{\\mathbf X}\\rangle\\ipspacing\\rangle,\n\t\\end{equation}\n\twhere ${\\mathbf 
J}=({\\mathbf I}_N,\\dots,{\\mathbf I}_N)$. This follows from \\eqref{eq:innerProduct}, \\eqref{eq:InnerProduct}, and the fact that $(\\forall{\\mathbf W}\\in{\\mathcal H})$ $\\mathrm{Im}\\{\\mathrm{tr}({\\mathbf W})\\}=0$.\n\t\n\t\\item \tThe SINR constraint for user $k\\in\\mathcal{K}$ in \\eqref{eq:SDPb} corresponds to the closed half-space\n\t\\begin{equation}\\label{eq:SINR_sets}\n\t{\\setQ_k}=\\left\\{\\left.{\\mathbf X}\\in{{\\mathcal H}^M}\\right|\\ \\langle\\ipspacing\\langle{\\mathbf X},{\\mathbf Z}^k\\rangle\\ipspacing\\rangle\\ge \\sigma_k^2 \\right\\},\n\t\\end{equation}\n\twhere $(\\forall k\\in\\mathcal{K})$ ${\\mathbf Z}^k\\in{{\\mathcal H}^M}$ is given by\n\t\\begin{equation*}\n\t{\\mathbf Z}^k =\\Big(\\underset{1,\\cdots,g_k-1}{\\underbrace{-{\\mathbf Q}_k,\\cdots,-{\\mathbf Q}_k}},\\underset{g_k}{\\underbrace{\\gamma_k^{-1}{\\mathbf Q}_k}} ,\\underset{g_k+1,\\cdots,M}{\\underbrace{-{\\mathbf Q}_k,\\cdots,-{\\mathbf Q}_k}}\\Big).\n\t\\end{equation*}\n\tHere, we introduced indices $\\{g_k\\}_{k\\in\\mathcal{K}}$ that assign to each receiver $k\\in\\mathcal{K}$ the multicast group $\\mathcal{G}_m$ to which it belongs (i.e., $g_k=m$, if $k\\in\\mathcal{G}_m$).\n\t\n\tIn order to verify that the set ${\\setQ_k}$ in \\eqref{eq:SINR_sets} indeed represents the SINR constraint for user $k$ in \\eqref{eq:SDPb}, we rearrange\\footnote{In the remainder of this paper, we use the convention that ${\\mathbf X}_m\\in{\\mathcal H}$ denotes the $m$th component matrix of an $M$-tuple ${\\mathbf X}\\in{{\\mathcal H}^M}$.}\n\t\\begin{equation*}\n\t\\langle\\ipspacing\\langle{\\mathbf X},{\\mathbf Z}^k\\rangle\\ipspacing\\rangle \n\t= \\frac{1}{\\gamma_k}\\langle {\\mathbf X}_{g_k},{\\mathbf Q}_k\\rangle - \\sum\\limits_{\\substack{l\\in\\mathcal{M}\\\\\n\t\t\tl\\neq g_k}}\\langle{\\mathbf X}_l, {\\mathbf Q}_k\\rangle.\n\t\\end{equation*}\n\t\n\tUsing the definition of the inner product in \\eqref{eq:innerProduct}, and the fact that $(\\forall{\\mathbf 
W}\\in{\\mathcal H})$ ${\\mathbf W}^H={\\mathbf W}$ and $\\mathrm{Im}\\{\\mathrm{tr}({\\mathbf W})\\}=0$, we can rewrite the constraint ${\\setQ_k}$ as\n\t\\begin{equation*}\n\t\\mathrm{tr}({\\mathbf X}_{g_k}{\\mathbf Q}_k) - \\gamma_k \\sum\\limits_{\\substack{l\\in\\mathcal{M}\\\\\n\t\t\tl\\neq g_k}}\\mathrm{tr}({\\mathbf X}_l{\\mathbf Q}_k) \\ge \\gamma_k\\sigma_k^2,\n\t\\end{equation*}\n\twhich corresponds to the $k$th SINR constraint in \\eqref{eq:SDPb}.\n\t\n\t\\item \tThe per-antenna power constraints in \\eqref{eq:SDPc} are expressed by the closed convex set\n\t\\begin{equation*}\n\t\\setP=\\left\\{{\\mathbf X}\\in{{\\mathcal H}^M}\\left|~(\\forall i\\in\\mathcal{N})~ \\langle\\ipspacing\\langle {\\mathbf D}^i ,{\\mathbf X}\\rangle\\ipspacing\\rangle \\le p_i \\right.\\right\\},\n\t\\end{equation*}\n\twhere \n\t\\begin{equation}\\label{eq:defDi}\n\t(\\forall i \\in\\mathcal{N})\\quad {\\mathbf D}^i\\triangleq(\\e_i\\e^T_i,\\dots,\\e_i\\e^T_i)\\in{{\\mathcal H}^M}.\n\t\\end{equation}\n\t\n\tThis follows immediately from \\eqref{eq:innerProduct} and \\eqref{eq:InnerProduct}.\n\t\n\t\\item \tThe PSD constraints in \\eqref{eq:SDPd} correspond to the closed convex cone ${\\setH^N_+}$ given by\n\t\\begin{equation*}\n\t{\\setH^N_+} = \\left\\{\\left.({\\mathbf X}_1,\\dots,{\\mathbf X}_M)\\in{{\\mathcal H}^M}\\right|~ (\\forall m\\in\\mathcal{M})\\ {\\mathbf X}_m\\succcurlyeq{\\mathbf 0} \\right\\}.\n\t\\end{equation*}\n\t\n\t\\item The rank constraints in \\eqref{eq:SDPe} can be represented by the nonconvex set\n\t\\begin{equation}\\label{eq:rank_constraint}\n\t\\mathcal{R}= \\left\\{{\\mathbf X}\\in{{\\mathcal H}^M}\\left|~ (\\forall m\\in\\mathcal{M})~ \\mathrm{rank}({\\mathbf X}_m) \\le 1\\right.\\right\\}.\n\t\\end{equation}\n\\end{enumerate}\n\n\n\n\nConsequently, we can pose Problem~\\eqref{eq:SDP} as \n\\begin{align}\\label{eq:SDP_hilbert}\n\\underset{{\\mathbf X}\\in{{\\mathcal H}^M}}{\\mathrm{minimize}}\\ &\\langle\\ipspacing\\langle{\\mathbf J},{\\mathbf 
X}\\rangle\\ipspacing\\rangle\\\\\\notag\n\\mathrm{s.t.}\\quad & (\\forall k \\in\\mathcal{K})~\t{\\mathbf X}\\in{\\setQ_k}\\\\\\notag\n&{\\mathbf X}\\in\\setP,\\quad\n{\\mathbf X}\\in{\\setH^N_+},\\quad\n{\\mathbf X}\\in\\mathcal{R}.\n\\end{align}\nThe problems in \\eqref{eq:SDP} and \\eqref{eq:SDP_hilbert} are equivalent in the sense that $\\{{\\mathbf X}_m\\in\\mathcal{V}\\}_{m\\in\\mathcal{M}}$ solves Problem~\\eqref{eq:SDP} if and only if $({\\mathbf X}_1,\\dots,{\\mathbf X}_M)\\in{{\\mathcal H}^M}$ solves Problem~\\eqref{eq:SDP_hilbert}.\nThe advantage of the formulation in \\eqref{eq:SDP_hilbert} is that it enables us\nto (i) streamline notation, (ii) express the updates of the algorithm proposed later in Section~\\ref{sec:algorithmic_solution} in terms of\nwell-known projections, and (iii) simplify proofs by using results in\noperator theory in Hilbert spaces, as we show in the following.\n\nIt is worth noting that all constraint sets described above are closed, so a projection onto each of the sets exists for any point ${\\mathbf X}\\in{{\\mathcal H}^M}$. This property is crucial to derive projection-based\nalgorithms, such as the proposed algorithm.\nIn particular, note that we cannot replace the inequality in \\eqref{eq:SDPe} with an equality, as commonly done in the\nliterature. The reason is that, with an equality, the corresponding set is not closed, as shown\nin Remarks~\\ref{rem:closed_rank} and \\ref{rem:non_closed_rank}, and the practical implication is that the projection may not exist everywhere. Specifically, this happens whenever ${\\mathbf X}=\\left({\\mathbf X}_1,\\dots,{\\mathbf X}_M\\right)$ satisfies ${\\mathbf X}_m={\\mathbf 0}$ for some $m\\in\\mathcal{M}$, which would leave the update\nrule at such points undefined in projection-based methods. 
This is illustrated for the case ${\\mathbf X}={\\mathbf 0}\\in{{\\mathcal H}^M}$ in Example~\\ref{ex:undefined_projection} below.\n\\begin{remark}\\label{rem:closed_rank}\n\tThe rank constraint set $\\mathcal{R}$ in \\eqref{eq:rank_constraint} is closed.\n\t\n\t\\emph{Proof:}\n\tLet $\\left({\\mathbf X}^{(n)}\\right)_{n\\in{\\mathbb N}}$ be a sequence of points in $\\mathcal{R}$ converging to a point ${\\mathbf X}^\\star=({\\mathbf X}^\\star_1,\\dots,{\\mathbf X}^\\star_M)\\in{{\\mathcal H}^M}$ and denote by $(\\forall m\\in\\mathcal{M})$\\allowbreak$(\\forall n\\in{\\mathbb N})$\n\t$\n\t{\\mathbf X}^{(n)}_m={\\mathbf U}_m^{(n)}{\\mathbf S}_m^{(n)}({\\mathbf V}_m^{(n)})^H\n\t$\n\tthe singular value decomposition of the $m$th component matrix of ${\\mathbf X}^{(n)}$. \n\tIt follows from ${\\mathbf X}^{(n)}\\in\\mathcal{R}$ that $(\\forall m\\in\\mathcal{M})$\n\t${\\mathbf S}_m^{(n)} = \\mathrm{diag}([s_m^{(n)},0,\\dots,0])$.\n\tSince the singular values of a matrix depend continuously on its entries, and a sequence of zeros can only converge to zero, the singular value decomposition ${\\mathbf X}^\\star_m={\\mathbf U}_m^\\star{\\mathbf S}_m^\\star({\\mathbf V}_m^\\star)^H$ of the $m$th component matrix of ${\\mathbf X}^\\star$ satisfies ${\\mathbf S}_m^\\star=\\mathrm{diag}([s_m^\\star,0,\\dots,0])$ for some $s^\\star_m\\in{\\mathbb R}_+$. 
Therefore $(\\forall m\\in\\mathcal{M})$ $\\mathrm{rank}({\\mathbf X}_m^\\star)\\le 1$, so ${\\mathbf X}^\\star\\in\\mathcal{R}$.\n\t\\pushQED{\\qed}\n\tThe above shows that $\\mathcal{R}$ contains all its limit points, so it is closed.\\qedhere\n\t\\popQED\n\\end{remark}\n\n\\begin{remark}\\label{rem:non_closed_rank}\nBy contrast,\n\\begin{equation*}\n\\mathcal{R}^\\prime= \\left\\{{\\mathbf X}\\in{{\\mathcal H}^M}\\left|~ (\\forall m\\in\\mathcal{M})~ \\mathrm{rank}({\\mathbf X}_m) = 1\\right.\\right\\}\n\\end{equation*} \nis not a closed set, since for all ${\\mathbf X}\\in\\mathcal{R}^\\prime$ and $\\alpha\\in(0,1)$, the sequence $\\left(\\alpha^n{\\mathbf X}\\right)_{n\\in{\\mathbb N}}$ in $\\mathcal{R}^\\prime$ converges to ${\\mathbf 0}\\notin\\mathcal{R}^\\prime$.\n\\end{remark}\n\n\\begin{example}\\label{ex:undefined_projection}\n\tThe set-valued projection of ${\\mathbf 0}\\in{{\\mathcal H}^M}$ onto the set $\\mathcal{R}^\\prime$ in Remark~\\ref{rem:non_closed_rank} is empty.\n\t\n\t\\emph{Proof:}\n\tSuppose that $\\Pi_{\\mathcal{R}^\\prime}({\\mathbf 0})\\neq\\emptyset$ and let ${\\mathbf Z}\\in\\Pi_{\\mathcal{R}^\\prime}({\\mathbf 0})$, i.e., ${\\mathbf Z}$ is any of the closest points of the set $\\mathcal{R}^\\prime$ to the zero vector ${\\mathbf 0}$. Since $\\Pi_{\\mathcal{R}^\\prime}({\\mathbf 0})\\subset\\mathcal{R}^\\prime$, $(\\forall m\\in\\mathcal{M})$ $\\mathrm{rank}({\\mathbf Z}_m)=1$, i.e., $(\\forall m\\in\\mathcal{M})$ $\\sigma_1({\\mathbf Z}_m)>0$.\n\tTherefore, for any $\\alpha\\in(0,1)$, $\\alpha{\\mathbf Z}\\in\\mathcal{R}^\\prime$ and $d({\\mathbf 0},\\alpha{\\mathbf Z}) < d({\\mathbf 0},{\\mathbf Z})$, which contradicts ${\\mathbf Z}\\in\\Pi_{\\mathcal{R}^\\prime}({\\mathbf 0})$. Hence $\\Pi_{\\mathcal{R}^\\prime}({\\mathbf 0})=\\emptyset$.\\qed\n\\end{example}\n\nBy \\eqref{eq:hilber_objective}, perturbations along the negative gradient of the linear objective take the form $-\\alpha{\\mathbf J}$, i.e., $-\\alpha{\\mathbf I}_N$ in each component matrix, for a step size $\\alpha>0$. These perturbations are problematic for the problem considered here because we are interested in solutions comprised of positive semidefinite rank-one matrices, and adding these perturbations to an iterate ${\\mathbf X}=({\\mathbf X}_1,\\dots,{\\mathbf X}_M)$ may result in indefinite full-rank component matrices ${\\mathbf X}_m-\\alpha{\\mathbf I}_N$. 
To avoid this problem, we introduce the function $\\sobjf:{{\\mathcal H}^M}\\to{\\mathbb R}_+$ given by\n\\begin{equation}\\label{eq:equiv_objective}\n\\sobjf({\\mathbf X}) \\triangleq \\sum_{m=1}^M\\|{\\mathbf X}_m\\|_\\ast,\n\\end{equation}\nwhere $\\|\\cdot\\|_\\ast$ is the nuclear norm. Since $\\mathcal{C}_\\star\\subset{\\setH^N_+}$ by \\eqref{eq:sdrCFP}, we have $(\\forall{\\mathbf X}\\in\\mathcal{C}_\\star)(\\forall m\\in\\mathcal{M})(\\forall i\\in\\mathcal{N})$ $\\sigma_i({\\mathbf X}_m)=\\lambda_i({\\mathbf X}_m)$, where $\\lambda_i({\\mathbf X}_m)$ and $\\sigma_i({\\mathbf X}_m)$ denote the $i$th eigenvalue and singular value of the $m$th component matrix of ${\\mathbf X}$, respectively. Hence we can write\n\\begin{align}\\label{eq:surrogate_obj}\n\\sobjf({\\mathbf X}) &= \\sum_{m=1}^M\\sum_{i=1}^N \\sigma_i({\\mathbf X}_m) \\\\\\notag\n&=\\sum_{m=1}^M\\sum_{i=1}^N \\lambda_i({\\mathbf X}_m) = \\sum_{m=1}^M\\mathrm{tr}({\\mathbf X}_m).\n\\end{align}\nTherefore, by \\eqref{eq:hilber_objective}, minimizing $\\sobjf$ over $\\mathcal{C}_\\star$ is equivalent to minimizing the linear objective function in \\eqref{eq:SDP_hilbert} (or \\eqref{eq:SDR_hilbert}) over $\\mathcal{C}_\\star$, in the sense that the solution sets to both formulations are the same. As we will show below, this surrogate objective function gives rise to power-reducing perturbations, which are guaranteed not to increase the rank of their arguments' component matrices (see Remark~\\ref{rem:rank_reduction}).\n\nThe power-reducing perturbations are designed according to two criteria. Firstly, they should decrease the value of the surrogate function $\\sobjf$. Secondly, they should not be too large in order to avoid slowing down convergence of the Basic Algorithm. 
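The equality $\\sobjf({\\mathbf X})=\\sum_{m=1}^M\\mathrm{tr}({\\mathbf X}_m)$ on the PSD cone used above can also be checked numerically. The following sketch (illustrative sizes and hypothetical data) verifies that the nuclear norm of a positive semidefinite matrix coincides with its trace:

```python
import numpy as np

# Numerical check that the nuclear norm of a positive semidefinite
# matrix equals its trace, so the surrogate objective reduces to the
# linear objective on the PSD cone.
rng = np.random.default_rng(1)

def random_psd(n):
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return a @ a.conj().T              # Hermitian, positive semidefinite

X = [random_psd(4) for _ in range(3)]  # M = 3 component matrices
nuclear = sum(np.linalg.norm(Xm, ord='nuc') for Xm in X)
trace = sum(np.trace(Xm).real for Xm in X)
assert abs(nuclear - trace) <= 1e-8 * trace
```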
\nFor a given point ${\\mathbf X}\\in{{\\mathcal H}^M}$ we derive a perturbation ${\\mathbf Y}_\\tau^\\star$ satisfying these two criteria by solving the problem\n\\begin{equation}\\label{eq:power_pert_problem}\n{\\mathbf Y}_\\tau^\\star:={\\mathbf Y}_\\tau^\\star({\\mathbf X})\\in \\underset{{\\mathbf Y}\\in{{\\mathcal H}^M}}{\\mathrm{arg\\ min}}~\\left( \\tau \\sobjf({\\mathbf X} + {\\mathbf Y}) + \\frac{1}{2} \\Norm[{\\mathbf Y}]^2\\right).\n\\end{equation}\nHere, $\\Norm[{\\mathbf Y}]^2$ acts as a regularization on the perturbations' magnitude, and the parameter $\\tau\\ge0$ balances the two design criteria. The next proposition shows that ${\\mathbf Y}_\\tau^\\star$ can be easily\ncomputed.\n\n\\begin{prop}\n\tThe unique solution to \\eqref{eq:power_pert_problem} is given by\n\t\\begin{equation}\\label{eq:power_pert_opt}\n\t(\\forall m\\in\\mathcal{M}) \\quad {\\mathbf Y}_\\tau^\\star|_m = \\mathcal{D}_\\tau({\\mathbf X}_m) - {\\mathbf X}_m,\n\t\\end{equation}\n\twhere $\\mathcal{D}_\\tau:{\\mathcal H}\\to{\\mathcal H}$ is the singular value shrinkage operator \\cite{cai2010singular}\n\t\\begin{align}\\label{eq:svd_shrink}\n\t\\mathcal{D}_\\tau({\\mathbf X}_m)&\\triangleq {\\mathbf U}_m\\mathcal{D}_\\tau({\\mathbf \\Sigma}_m){\\mathbf V}_m^H,\\\\\\notag \n\t\\mathcal{D}_\\tau({\\mathbf \\Sigma}_m)&=\\mathrm{diag}\\left(\\left\\{\\relu[{\\sigma_i({\\mathbf X}_m)-\\tau}]\\right\\}_{i\\in\\mathcal{N}}\\right),\n\t\\end{align}\n\tand $(\\forall m\\in\\mathcal{M})$ ${\\mathbf X}_m={\\mathbf U}_m{\\mathbf \\Sigma}_m{\\mathbf V}_m^H$ is the singular value decomposition of ${\\mathbf X}_m$ such that ${\\mathbf \\Sigma}_m=\\mathrm{diag}\\left(\\left\\{\\sigma_i({\\mathbf X}_m)\\right\\}_{i\\in\\mathcal{N}}\\right)$.\n\t\n\t\\emph{Proof:}\n\tDenote the perturbed point for a given choice of $\\tau$ by ${\\mathbf Z}_\\tau^\\star:={\\mathbf X}+{\\mathbf Y}_\\tau^\\star$. 
By substituting ${\\mathbf Y}={\\mathbf Z}-{\\mathbf X}$ in \\eqref{eq:power_pert_problem}, we can identify this point as ${\\mathbf Z}_\\tau^\\star={\\mathrm{prox}}_{\\tau \\sobjf}({\\mathbf X})$, where the proximal mapping is given by\n\t\\begin{equation}\\label{eq:proximal_op}\n\t{\\mathrm{prox}}_{\\tau \\sobjf}({\\mathbf X}) \\in \\underset{{\\mathbf Z}\\in{{\\mathcal H}^M}}{\\mathrm{arg\\ min}}~ \\left(\\tau \\sobjf({\\mathbf Z}) + \\frac{1}{2} \\Norm[{\\mathbf X}-{\\mathbf Z}]^2\\right).\n\t\\end{equation}\n\tNote that the function\n\t\\begin{equation*}\n\t\\tau \\sobjf({\\mathbf Z}) + \\frac{1}{2}\\Norm[{\\mathbf X}-{\\mathbf Z}]^2 = \\tau \\sum_{m=1}^M \\|{\\mathbf Z}_m\\|_\\ast + \\frac{1}{2}\\sum_{m=1}^M \\|{\\mathbf X}_m-{\\mathbf Z}_m\\|^2\n\t\\end{equation*}\n\tis separable over $m$. Consequently, we can compute the proximal mapping in $\\eqref{eq:proximal_op}$ by solving\n\t\\begin{equation}\\label{eq:prox_subspace}\n\t(\\forall m\\in\\mathcal{M})\\quad {\\mathbf Z}_\\tau^\\star|_m\\in\\underset{{\\mathbf Z}\\in{\\mathcal H}}{\\mathrm{arg\\ min}}~ \\tau \\|{\\mathbf Z}\\|_\\ast + \\frac{1}{2}\\|{\\mathbf X}_m-{\\mathbf Z}\\|^2.\n\t\\end{equation}\n\tAccording to \\cite[Thm. 2.1]{cai2010singular}, the unique solution to \\eqref{eq:prox_subspace} is given by ${\\mathbf Z}_\\tau^\\star|_m = \\mathcal{D}_\\tau({\\mathbf X}_m)$.\\footnote{The proof in \\cite{cai2010singular} is for real matrices. 
However, the generalization to complex matrices is straightforward.}\n\t\\pushQED{\\qed}\n\tSubstituting ${\\mathbf Y}_\\tau^\\star={\\mathbf Z}_\\tau^\\star-{\\mathbf X}$ yields \\eqref{eq:power_pert_opt}, which is the desired result.\\qedhere\n\\popQED\n\\end{prop}\n\n\n\t\n\nBy defining $(\\forall{\\mathbf X}\\in{{\\mathcal H}^M})$\n\\begin{equation}\n\\sigma_{\\max}({\\mathbf X})\\triangleq \\max_{\\substack{{m\\in\\mathcal{M}}\\\\{i\\in\\mathcal{N}}}}\\sigma_i({\\mathbf X}_m)\n\\end{equation}\nwe can express the power-reducing perturbation for a point ${\\mathbf X}\\in{{\\mathcal H}^M}$ as ${\\mathbf Y}=\\T[\\mathrm{P}]^\\alpha({\\mathbf X})-{\\mathbf X}$, where the mapping $\\T[\\mathrm{P}]^\\alpha\\triangleq{\\mathrm{prox}}_{\\alpha\\sigma_{\\max}({\\mathbf X})\\sobjf}$ is given component-wise by $(\\forall m \\in\\mathcal{M})$\n\\begin{equation}\\label{eq:op_tp}\n\\T[\\mathrm{P}]^{\\alpha}({\\mathbf X})|_m =\\mathcal{D}_\\tau({\\mathbf X}_m) \\quad \\text{with}\\quad \\tau= \\alpha\\sigma_{\\max}({\\mathbf X}).\n\\end{equation} \n\nNote that $\\T[\\mathrm{P}]^0({\\mathbf X}) = {\\mathbf X}$, and $(\\forall\\alpha\\ge 1)$ $\\T[\\mathrm{P}]^\\alpha({\\mathbf X})={\\mathbf 0}$.\nTherefore, the magnitude of the power-reducing perturbations can be controlled by choosing the parameter $\\alpha\\in[0,1]$. 
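A minimal numpy sketch of the mapping in \\eqref{eq:op_tp} (hypothetical helper names; real matrices and illustrative sizes for simplicity) confirms these two limiting cases:

```python
import numpy as np

# Sketch of the power-reducing mapping: shrink all singular values of
# every component matrix by tau = alpha * sigma_max(X).
def shrink(Xm, tau):
    U, s, Vh = np.linalg.svd(Xm, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vh

def T_P(X, alpha):
    sigma_max = max(np.linalg.svd(Xm, compute_uv=False)[0] for Xm in X)
    return [shrink(Xm, alpha * sigma_max) for Xm in X]

rng = np.random.default_rng(2)
X = [rng.standard_normal((4, 4)) for _ in range(3)]

# alpha = 0 leaves X unchanged, while alpha >= 1 maps X to zero.
assert all(np.allclose(Zm, Xm) for Zm, Xm in zip(T_P(X, 0.0), X))
assert all(np.allclose(Zm, 0.0) for Zm in T_P(X, 1.0))
```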
Moreover, in contrast to performing subgradient steps for the original cost function in \\eqref{eq:SDP_hilbert}, applying the perturbations in \\eqref{eq:op_tp} cannot increase the rank:\n\n\\begin{remark}\\label{rem:rank_reduction}\n\tFor all $\\alpha\\ge0$, $\\T[\\mathrm{P}]^\\alpha$ maps any point ${\\mathbf X}=({\\mathbf X}_m)_{m\\in\\mathcal{M}}\\in{\\setH^N_+}$ to a point ${\\mathbf Z}=({\\mathbf Z}_m)_{m\\in\\mathcal{M}}=\\T[\\mathrm{P}]^\\alpha({\\mathbf X})\\in{\\setH^N_+}$ satisfying $(\\forall m\\in\\mathcal{M})$ $\\mathrm{rank}({\\mathbf Z}_m)\\le\\mathrm{rank}({\\mathbf X}_m)$.\n\tThis follows immediately from \\eqref{eq:svd_shrink}.\n\\end{remark}\n\n\t\t\n\\subsubsection{Incorporating the Rank Constraints by Bounded Perturbations}\\label{sec:rank_perturbations}\nNext, we define perturbations that steer the iterate towards the rank constraint set $\\mathcal{R}$ in \\eqref{eq:rank_constraint}.\nWhile objective functions used for superiorization are usually convex, the function $\\sobjg:{{\\mathcal H}^M}\\to{\\mathbb R}_+$\n\\begin{equation}\\label{eq:rank_dist}\n\\sobjg({\\mathbf X})\\triangleq d({\\mathbf X},\\mathcal{R}),\n\\end{equation}\ni.e., the distance to the set $\\mathcal{R}$, constitutes a nonconvex superiorization objective, so our approach does not follow exactly the superiorization methodology in \\cite{censor2015weak} (but we can still prove convergence).\n\nAs the perturbations may steer the iterates away from the feasible set, their magnitude should not be unnecessarily large. Therefore, we choose the rank-reducing perturbations as $\\P[\\mathcal{R}]({\\mathbf X})-{\\mathbf X}$, where $\\P[\\mathcal{R}]({\\mathbf X})\\in\\Pi_\\mathcal{R}({\\mathbf X})$ denotes a (generalized) projection of a given point ${\\mathbf X}\\in{{\\mathcal H}^M}$ onto the closed nonconvex set $\\mathcal{R}$. 
\nSince $\\mathcal{R}$ is a closed set, the set-valued projection $\\Pi_\\mathcal{R}({\\mathbf X})$ is nonempty for all ${\\mathbf X}\\in{{\\mathcal H}^M}$. A projection onto $\\mathcal{R}$ can be computed by\ntruncating all but the largest singular value of each component matrix to zero. We formally state this fact below.\n\n\\begin{fact}\n\tLet ${\\mathbf X}_m={\\mathbf U}_m{\\mathbf \\Sigma}_m{\\mathbf V}_m^H\\in{\\mathcal H}$ be the singular value decomposition of the $m$th component matrix of ${\\mathbf X}$ with ${\\mathbf \\Sigma}_m=\\mathrm{diag}(\\sigma_1({\\mathbf X}_m),\\dots,\\sigma_N({\\mathbf X}_m))$.\n\n\tThen, $(\\forall {\\mathbf X}\\in{{\\mathcal H}^M})$ the $m$th component matrix of a point $\\P[\\mathcal{R}]({\\mathbf X})\\in\\Pi_\\mathcal{R}({\\mathbf X})$ is given by \\cite[Lemma~3.2]{luke2013prox}\n\t\\begin{equation}\\label{eq:proj_rank}\n\t\\P[\\mathcal{R}]({\\mathbf X})|_m = {\\mathbf U}_m\\mathrm{diag}\\left(\\sigma_1({\\mathbf X}_m),0,\\dots,0\\right){\\mathbf V}_m^H.\n\t\\end{equation}\n\\end{fact}\n\n\n\n\\subsubsection{Combining Power- and Rank Perturbations}\\label{sec:combined_perturbations}\nSince both $\\T[\\mathrm{P}]^\\alpha$ in \\eqref{eq:op_tp} and $\\P[\\mathcal{R}]$ in \\eqref{eq:proj_rank} operate on the singular values of the component matrices, their composition is given by $(\\forall m\\in\\mathcal{M})$\n\\begin{equation*}\n\\P[\\mathcal{R}]\\T[\\mathrm{P}]^\\alpha({\\mathbf X})|_m=\\relu[{\\sigma_1({\\mathbf X}_m)-\\alpha\\sigma_{\\max}({\\mathbf X})}]{\\mathbf u}_{m1} {\\mathbf v}_{m1}^H\\in{\\mathcal H},\n\\end{equation*}\nwhere $(\\forall m\\in\\mathcal{M})$ ${\\mathbf U}_m=[{\\mathbf u}_{m1},\\dots,{\\mathbf u}_{mN}]$ and ${\\mathbf V}_m=[{\\mathbf v}_{m1},\\dots,{\\mathbf v}_{mN}]$.\nMoreover, it is easy to verify that $(\\forall {\\mathbf X}\\in{{\\mathcal H}^M})$\\allowbreak$(\\forall \\alpha \\ge 0)$, $\\T[\\mathrm{P}]^\\alpha\\P[\\mathcal{R}]({\\mathbf X})=\\P[\\mathcal{R}]\\T[\\mathrm{P}]^\\alpha({\\mathbf 
X})$. \nWe will now use the composition of $\\T[\\mathrm{P}]^\\alpha$ and $\\P[\\mathcal{R}]$ to define a mapping $\\mathcal{Y}_\\alpha:{{\\mathcal H}^M}\\to{{\\mathcal H}^M}$ by $\\mathcal{Y}_\\alpha:=\\P[\\mathcal{R}]\\T[\\mathrm{P}]^\\alpha-{\\mathrm{Id}}$, i.e., $(\\forall {\\mathbf X}=({\\mathbf X}_m)_{m\\in\\mathcal{M}}\\in{{\\mathcal H}^M})$\\allowbreak$(\\forall m\\in\\mathcal{M})$\n\\begin{equation}\\label{eq:pert_mapping}\n\\mathcal{Y}_\\alpha({\\mathbf X})|_m= \\relu[{\\sigma_1({\\mathbf X}_m)-\\alpha\\sigma_{\\max}({\\mathbf X})}]{\\mathbf u}_{m1} {\\mathbf v}_{m1}^H - {\\mathbf X}_m.\n\\end{equation}\nFinally, we define the sequence $\\left(\\beta^{(n)}{\\mathbf Y}^{(n)}\\right)_{n\\in{\\mathbb N}}$ of perturbations in \\eqref{eq:superior_alg} by\n\\begin{equation}\\label{eq:seq_of_pert2}\n(\\forall n\\in{\\mathbb N})\\quad {\\mathbf Y}^{(n)}\\triangleq \\mathcal{Y}_{\\alpha^{(n)}}\\left({\\mathbf X}^{(n)}\\right),\n\\end{equation}\nwhere $\\left(\\alpha^{(n)}\\right)_{n\\in{\\mathbb N}}$ is a sequence in $[0,1]$ and $\\left(\\beta^{(n)}\\right)_{n\\in{\\mathbb N}}$ is a summable sequence in $[0,1]$. The following proposition shows that the perturbations in \\eqref{eq:seq_of_pert2} can simultaneously reduce the objective value and the distance to the rank constraint set.\n\n\n\\begin{prop}\\label{prop:nonincreasing}\n\tLet $\\alpha\\in{\\mathbb R}_+$ and $\\lambda\\in[0,1]$. Then each of the following holds for $\\mathcal{Y}_\\alpha:{{\\mathcal H}^M}\\to{{\\mathcal H}^M}$ in \\eqref{eq:pert_mapping}.\n\t\\begin{enumerate}\n\t\t\\item The perturbations cannot increase the distance to the set ${\\setH^N_+}$, i.e., $(\\forall{\\mathbf X}\\in{{\\mathcal H}^M})$ $d({\\mathbf X}+\\lambda\\mathcal{Y}_\\alpha({\\mathbf X}),{\\setH^N_+})\\le d({\\mathbf X},{\\setH^N_+})$. 
In particular, $(\\forall{\\mathbf X}\\in{{\\mathcal H}^M})$ ${\\mathbf X}\\in{\\setH^N_+}\\Rightarrow {\\mathbf X}+\\lambda\\mathcal{Y}_\\alpha({\\mathbf X})\\in{\\setH^N_+}$.\n\t\t\\item If $\\alpha>0$, the perturbations decrease the value of the function $\\sobjf$ in \\eqref{eq:surrogate_obj}: $\\left(\\forall {\\mathbf X}\\in{{\\mathcal H}^M}\\right)$ $\\sobjf\\left({\\mathbf X}+\\lambda\\mathcal{Y}_\\alpha({\\mathbf X})\\right)<\\sobjf({\\mathbf X})$ whenever $\\sobjf({\\mathbf X})>0$.\n\t\t\\item If $\\alpha>0$ and ${\\mathbf X}\\in{\\setH^N_+}$, then the perturbations decrease the objective value of Problem~\\eqref{eq:SDP_hilbert}, i.e., $\\langle\\ipspacing\\langle{\\mathbf J},{\\mathbf X}+\\lambda\\mathcal{Y}_\\alpha({\\mathbf X})\\rangle\\ipspacing\\rangle <\\langle\\ipspacing\\langle{\\mathbf J},{\\mathbf X}\\rangle\\ipspacing\\rangle$ whenever $\\langle\\ipspacing\\langle{\\mathbf J},{\\mathbf X}\\rangle\\ipspacing\\rangle>0$.\n\t\t\\item If $\\lambda > 0$, the perturbations decrease the distance to the rank constraint set $\\mathcal{R}$. More precisely, $\\left(\\forall {\\mathbf X}\\in{{\\mathcal H}^M}\\right)$ $\\sobjg\\left({\\mathbf X}+\\lambda\\mathcal{Y}_\\alpha({\\mathbf X})\\right)<\\sobjg({\\mathbf X})$ whenever $\\sobjg({\\mathbf X})>0$.\n\t\\end{enumerate}\n\n\t\\emph{Proof:}\n\t\\begin{enumerate}\n\t\t\\pushQED{\\qed} \n\t\t\\item This is an immediate consequence of \\eqref{eq:pert_mapping}.\t\t\n\t\t\\item It follows from \\eqref{eq:svd_shrink} and \\eqref{eq:op_tp} that $(\\forall {\\mathbf X}\\in{{\\mathcal H}^M})$\\allowbreak$(\\forall \\alpha>0)$ $\\sobjf({\\mathbf X})>0\\Rightarrow\\sobjf(\\T[\\mathrm{P}]^\\alpha({\\mathbf X}))<\\sobjf({\\mathbf X})$. Moreover, by \\eqref{eq:proj_rank} we have that $(\\forall \\lambda\\in[0,1])$ $\\sobjf((1-\\lambda){\\mathbf X}+\\lambda\\P[\\mathcal{R}]({\\mathbf X}))\\le\\sobjf({\\mathbf X})$. 
\n\t\tThis implies $\\sobjf({\\mathbf X}+\\lambda\\mathcal{Y}_\\alpha({\\mathbf X}))= \\sobjf\\left((1-\\lambda){\\mathbf X}+\\lambda\\P[\\mathcal{R}]\\T[\\mathrm{P}]^\\alpha({\\mathbf X})\\right)\\le\\sobjf\\left(\\T[\\mathrm{P}]^\\alpha({\\mathbf X})\\right)<\\sobjf({\\mathbf X})$ whenever $\\sobjf({\\mathbf X})>0$.\n\t\t\\item This result follows from 1) and 2), since $(\\forall {\\mathbf X}\\in{\\setH^N_+})$ $\\langle\\ipspacing\\langle{\\mathbf J},{\\mathbf X}\\rangle\\ipspacing\\rangle=\\sobjf({\\mathbf X})$ according to \\eqref{eq:surrogate_obj}.\n\t\t\\item Since $\\mathcal{R}$ is closed, we can write $\\sobjg({\\mathbf X})=d({\\mathbf X},\\mathcal{R})=\\Norm[{\\mathbf X} - P_\\mathcal{R}({\\mathbf X})]=\\sqrt{\\sum_{m\\in\\mathcal{M}}\\sum_{i=2}^N\\sigma_i^2({\\mathbf X}_m)}$.\n\t\tTherefore, it follows from \\eqref{eq:svd_shrink} that $(\\forall {\\mathbf X}\\in{{\\mathcal H}^M})$\\allowbreak$(\\forall \\alpha\\in{\\mathbb R}_+)$ $\\sobjg(\\T[\\mathrm{P}]^\\alpha({\\mathbf X}))\\le\\sobjg({\\mathbf X})$.\n\t\tMoreover, by \\eqref{eq:proj_rank}, $(\\forall \\lambda\\in(0,1])$ $\\sobjg({\\mathbf X})>0$ implies that $\\sobjg((1-\\lambda){\\mathbf X}+\\lambda\\P[\\mathcal{R}]({\\mathbf X}))<\\sobjg({\\mathbf X})$.\n\t\tThis in turn implies $\\sobjg({\\mathbf X}+\\lambda\\mathcal{Y}_\\alpha({\\mathbf X}))= \\sobjg\\left((1-\\lambda){\\mathbf X}+\\lambda\\P[\\mathcal{R}]\\T[\\mathrm{P}]^\\alpha({\\mathbf X})\\right)<\\sobjg\\left(\\T[\\mathrm{P}]^\\alpha({\\mathbf X})\\right)\\le\\sobjg({\\mathbf X})$ whenever $\\sobjg({\\mathbf X})>0$.\t\\qedhere\\popQED\t\n\t\\end{enumerate}\n\\end{prop}\nWith the perturbations defined in \\eqref{eq:seq_of_pert2}, the iteration in \\eqref{eq:superior_alg} yields the update rule \n\\begin{equation}\\label{eq:superior_alg2}\n(\\forall n \\in {\\mathbb N})\\quad \\nextit[{\\mathbf X}] = T_\\star\\left(\\current[{\\mathbf X}] + \\current[\\beta] \\mathcal{Y}_{\\alpha^{(n)}}\\left(\\current[{\\mathbf 
X}]\\right)\\right)\n\\end{equation}\nof the proposed algorithm, where ${\\mathbf X}^{(0)} \\in{{\\mathcal H}^M}$ is arbitrary, $\\left(\\alpha^{(n)}\\right)_{n\\in{\\mathbb N}}$ is a sequence in $[0,1]$, and $\\left(\\beta^{(n)}\\right)_{n\\in{\\mathbb N}}$ is a summable sequence in $[0,1]$.\n\n\n\n\\subsection{Convergence of the Proposed Algorithm}\\label{sec:boundedness}\nWe will now examine the convergence of the proposed algorithm in \\eqref{eq:superior_alg2}.\nFor this purpose, let $\\left(\\beta^{(n)}\\right)_{n\\in{\\mathbb N}}$ be a summable sequence in $[0,1]$, let $\\left(\\alpha^{(n)}\\right)_{n\\in{\\mathbb N}}$ be a sequence of nonnegative numbers, and denote by $\\left({\\mathbf Y}^{(n)}\\right)_{n\\in{\\mathbb N}}$ the sequence of perturbations according to \\eqref{eq:seq_of_pert2}. Then the sequence $\\left({\\mathbf X}^{(n)}\\right)_{n\\in{\\mathbb N}}$ produced by the algorithm in \\eqref{eq:superior_alg2} converges to a feasible point of Problem \\eqref{eq:SDR_hilbert} for all ${\\mathbf X}^{(0)}\\in{{\\mathcal H}^M}$. To show this, we prove the following facts.\n\\begin{enumerate}\n\t\\item The mapping $\\T[\\star]$ in \\eqref{eq:pocs} is bounded perturbation resilient.\n\t\\item The sequence $\\left({\\mathbf Y}^{(n)}\\right)_{n\\in{\\mathbb N}}$ is bounded, such that $\\left(\\beta^{(n)}{\\mathbf Y}^{(n)}\\right)_{n\\in{\\mathbb N}}$ is a sequence of bounded perturbations.\n\\end{enumerate}\n\n\\subsubsection{Bounded Perturbation Resilience of the Basic Algorithm}\nThe operator $T_\\star$ in \\eqref{eq:pocs} is known to be $\\alpha$-averaged (see, e.g., \\cite[Example 17.12(a)]{yamada2011minimizing}). 
We include this fact here for completeness:\n\\begin{remark}\\label{rem:alpha_avg}\n\tThe operator $T_\\star$ in \\eqref{eq:pocs} is $\\alpha$-averaged nonexpansive.\n\t\n\t\\emph{Proof:}\n\tNote that, for every nonempty subset $\\mathcal{C}\\subset{{\\mathcal H}^M}$, the reflector $R_\\mathcal{C}= {\\mathrm{Id}} + 2(P_\\mathcal{C}-{\\mathrm{Id}})$ is nonexpansive \\cite[Corollary~4.18]{bauschke2011convex}. Therefore, according to Definition~\\ref{def:alpha_avg_nonexpansive}, $\\left(\\forall \\mu\\in (0,2)\\right)$ the relaxed projector\n\t\\begin{equation*}\n\tT_\\mathcal{C}^\\mu= {\\mathrm{Id}} + \\mu(P_\\mathcal{C}-{\\mathrm{Id}}) = {\\mathrm{Id}} + \\frac{\\mu}{2}(R_\\mathcal{C} - {\\mathrm{Id}})\n\t\\end{equation*}\n\tis $\\mu\/2$-averaged. Further (see Fact~\\ref{fact:nonexpansive_composition}), the composite of finitely many averaged mappings is $\\alpha$-averaged for some\n\t\\pushQED{\\qed}\n\t$\\alpha\\in(0,1)$. \\qedhere\n\t\\popQED{}\n\\end{remark}\nConsequently, the bounded perturbation resilience of $\\T[\\star]$ follows directly from \\cite[Thm.~3.1]{he2017perturbation}. We summarize this fact in the following lemma.\n\\begin{lemma}\\cite{he2017perturbation}\\label{lem:bpr_of_pocs}\n\t The algorithm in \\eqref{eq:superior_alg} is guaranteed to converge to a point in the solution set $\\mathcal{C}_\\star$ of the feasibility problem in \\eqref{eq:sdrCFP} if $\\mathcal{C}_\\star\\neq\\emptyset$ and $\\left(\\current[\\beta]\\current[{\\mathbf Y}]\\right)_{n\\in{\\mathbb N}}$ is a sequence of bounded perturbations. \n\t\n\t\\emph{Proof:}\n\t\\pushQED{\\qed}\n\tThe authors of \\cite{he2017perturbation} have proved the bounded perturbation resilience of $\\alpha$-averaged nonexpansive mappings with nonempty fixed-point set in a real Hilbert space. \n\tConsequently, this lemma follows from Remark~\\ref{rem:alpha_avg} and \\cite[Thm. 
3.1]{he2017perturbation}.\\qedhere\\popQED\n\\end{lemma}\n\n\n\n\n\n\\subsubsection{Boundedness of the Perturbations}\nIt remains to show that the sequence $\\left({\\mathbf Y}^{(n)}\\right)_{n\\in{\\mathbb N}}$ \nis bounded for all sequences $\\left(\\alpha^{(n)}\\right)_{n\\in{\\mathbb N}}$ of nonnegative numbers and $\\left(\\beta^{(n)}\\right)_{n\\in{\\mathbb N}}$ in $[0,1]$ such that $\\sum_{n\\in{\\mathbb N}}\\beta^{(n)}<\\infty$, regardless of the choice of ${\\mathbf X}^{(0)}\\in{{\\mathcal H}^M}$.\n\nTo this end, we note that $(\\forall n \\in {\\mathbb N})$ $\\Norm[{\\mathbf Y}^{(n)}]\\le\\Norm[{\\mathbf X}^{(n)}]$ for any sequence $\\left(\\alpha^{(n)}\\right)_{n\\in{\\mathbb N}}$ of nonnegative numbers:\n\\begin{lemma}\\label{lem:decreasing_norm}\n\tThe mapping $\\mathcal{Y}_\\alpha$ in \\eqref{eq:pert_mapping} satisfies \n\t\\begin{equation}\\label{eq:rem2}\n\t\t\\left(\\forall{\\mathbf X}\\in{{\\mathcal H}^M}\\right)\\left(\\forall\\alpha\\in{\\mathbb R}_+\\right)\\quad \\Norm[\\mathcal{Y}_\\alpha({\\mathbf X})]^2\\le\\Norm[{\\mathbf X}]^2.\n\t\\end{equation}\n\t\n\t\\emph{Proof:}\n\tLet $(\\forall m\\in\\mathcal{M})$ ${\\mathbf X}_m={\\mathbf U}_m\\mathrm{diag}(\\{\\sigma_i({\\mathbf X}_m)\\}_{i\\in\\mathcal{N}}){\\mathbf V}_m^H$ denote the singular value decomposition of the $m$th component matrix of ${\\mathbf X}$.\n\tAccording to \\eqref{eq:pert_mapping}, the $m$th component matrix of $\\mathcal{Y}_\\alpha({\\mathbf X})$ is given by $\\mathcal{Y}_\\alpha({\\mathbf X})|_m=-{\\mathbf U}_m{\\mathbf S}_m{\\mathbf V}_m^H$, where $(\\forall m\\in\\mathcal{M})$\n\t\\begin{equation*}\n\t{\\mathbf S}_m = \\mathrm{diag}\\left(\\min(\\sigma_1({\\mathbf X}_m),\\tau),\\sigma_2({\\mathbf X}_m),\\dots,\\sigma_N({\\mathbf X}_m)\\right)\n\t\\end{equation*}\n\twith $\\tau= \\alpha\\sigma_{\\max}({\\mathbf X})$.\n\tSince $(\\forall {\\mathbf W}\\in{\\mathcal H})$ $\\|{\\mathbf W}\\|^2=\\sum_{i\\in\\mathcal{N}}\\sigma_i^2({\\mathbf W})$, we can 
write\n\t\\begin{align*}\n\t\\Norm[\\mathcal{Y}_\\alpha({\\mathbf X})]^2= \\sum_{m=1}^M\\|{\\mathbf S}_m\\|^2\\le \\sum_{m=1}^M\\|{\\mathbf X}_m\\|^2=\\Norm[{\\mathbf X}]^2,\n\t\\end{align*}\n\twhich concludes the proof. \\qed\n\\end{lemma}\nThe following known result, which is a special case of \\cite[Lemma~5.31]{bauschke2011convex}, will be used in Lemma~\\ref{lem:boundedness} to prove that the proposed perturbations are bounded:\n\\begin{fact}\\label{fact:quasi_fejer}\t\n\tLet $\\left(a^{(n)}\\right)_{n\\in{\\mathbb N}}$, $\\left(\\beta^{(n)}\\right)_{n\\in{\\mathbb N}}$, and $\\left(\\gamma^{(n)}\\right)_{n\\in{\\mathbb N}}$ be sequences in ${\\mathbb R}_+$ such that $\\sum_{n\\in{\\mathbb N}}\\beta^{(n)}<\\infty$, $\\sum_{n\\in{\\mathbb N}}\\gamma^{(n)}<\\infty$ and\n\t$\n\t(\\forall n\\in{\\mathbb N})\\quad \n\ta^{(n+1)} \\le (1+\\beta^{(n)})a^{(n)} + \\gamma^{(n)}.\n\t$\n\tThen the sequence $\\left(a^{(n)}\\right)_{n\\in{\\mathbb N}}$ converges.\n\\end{fact}\n\\begin{lemma}\\label{lem:boundedness}\n\tSuppose that $\\left(\\beta^{(n)}\\right)_{n\\in{\\mathbb N}}$ is a summable sequence in $[0,1]$ and that $(\\forall n\\in{\\mathbb N})$ $\\alpha^{(n)}\\ge0$. Then the sequence of perturbations $\\left(\\beta^{(n)}{\\mathbf Y}^{(n)}\\right)$ with ${\\mathbf Y}^{(n)}$ defined by \\eqref{eq:seq_of_pert2} is bounded.\n\t\n\t\\emph{Proof:}\n\t\t\tWe need to show that $(\\exists R\\in{\\mathbb R})(\\forall n\\in{\\mathbb N})\\ \\Norm[{\\mathbf Y}^{(n)}]\\le R$. 
To this end, observe that $\\left(\\forall {\\mathbf X}^{(n)}\\in {{\\mathcal H}^M}\\right)\\left(\\forall {\\mathbf Z}\\in\\mathrm{Fix}(T_\\star)\\right)$ it holds that\n\t\t\t\\begin{align*}\n\t\t\t\\Norm[{\\mathbf X}^{(n+1)}-{\\mathbf Z}] &= \\Norm[T_\\star\\left({\\mathbf X}^{(n)}+\\beta^{(n)}{\\mathbf Y}^{(n)}\\right)-{\\mathbf Z}]\\\\\n\t\t\t\t&\\overset{(a)}{\\le} \\Norm[{\\mathbf X}^{(n)}+\\beta^{(n)}{\\mathbf Y}^{(n)}-{\\mathbf Z}]\\\\\n\t\t\t\t&\\overset{(b)}{\\le} \\Norm[{\\mathbf X}^{(n)}-{\\mathbf Z}] + \\beta^{(n)}\\Norm[{\\mathbf Y}^{(n)}],\n\t\t\t\\end{align*}\n\t\t\twhere (a) follows from the nonexpansivity of $T_\\star$, and (b) is a consequence of the triangle inequality.\n\t\t\tBy Lemma~\\ref{lem:decreasing_norm}, the perturbations defined in \\eqref{eq:seq_of_pert2} satisfy $(\\forall n\\in{\\mathbb N})$ $\\Norm[{\\mathbf Y}^{(n)}]\\le\\Norm[{\\mathbf X}^{(n)}]$. Consequently, applying the triangle inequality again yields\n\t\t\t\\begin{align*}\\label{eq:bounded_proof1}\n\t\t\t\\Norm[{\\mathbf X}^{(n+1)}-{\\mathbf Z}] &\\le \\Norm[{\\mathbf X}^{(n)}-{\\mathbf Z}]\\\\\\notag\n\t\t\t&\\qquad + \\beta^{(n)}\\left(\\Norm[{\\mathbf X}^{(n)}-{\\mathbf Z}]+\\Norm[{\\mathbf Z}]\\right).\n\t\t\t\\end{align*}\t\t\t\n\t\t\t\n\t\t\tBy defining $(\\forall n\\in{\\mathbb N})$ $a^{(n)}=\\Norm[{\\mathbf X}^{(n)}-{\\mathbf Z}]$ and $\\gamma^{(n)}=\\beta^{(n)}\\Norm[{\\mathbf Z}]$, we can deduce from Fact~\\ref{fact:quasi_fejer} that the sequence $\\left(a^{(n)}\\right)_{n\\in{\\mathbb N}}$ converges. 
This implies that there exists $r\\in{\\mathbb R}$ such that $(\\forall n\\in{\\mathbb N})$ $\\Norm[{\\mathbf X}^{(n)}-{\\mathbf Z}]\\le r$.\n\t\t\t\n\t\t\tConsequently, we have \n\t\t\t\\begin{align*}\n\t\t\t(\\forall n\\in{\\mathbb N})\\quad \\Norm[{\\mathbf Y}^{(n)}] &\\overset{(a)}{\\le}\\Norm[{\\mathbf X}^{(n)}]\n\t\t\t\\overset{(b)}{\\le} \\Norm[{\\mathbf X}^{(n)}-{\\mathbf Z}] + \\Norm[{\\mathbf Z}]\\\\\n\t\t\t&\\overset{(c)}{\\le}r+\\Norm[{\\mathbf Z}]=:R\n\t\t\t\\end{align*}\n\t\t\t\\pushQED{\\qed}\n\t\t\twhere (a) follows from Lemma~\\ref{lem:decreasing_norm}, (b) follows from the triangle inequality, and (c) follows from Fact~\\ref{fact:quasi_fejer}.\\qedhere\n\t\t\t\\popQED\n\\end{lemma}\n\n\nCombining Lemmas~\\ref{lem:bpr_of_pocs} and \\ref{lem:boundedness} shows that the proposed algorithm converges to a feasible point of the relaxed semidefinite program in \\eqref{eq:SDR_hilbert}. This is summarized in the following proposition.\n\n\\begin{prop}\n\tThe sequence produced by the algorithm in \\eqref{eq:superior_alg} with perturbations given by \\eqref{eq:seq_of_pert2} is guaranteed to converge to a feasible point of Problem~\\eqref{eq:SDR_hilbert} for all ${\\mathbf X}^{(0)}\\in{{\\mathcal H}^M}$ if $\\left(\\beta^{(n)}\\right)_{n\\in{\\mathbb N}}$ is a summable sequence in $[0,1]$ and $\\left(\\alpha^{(n)}\\right)_{n\\in{\\mathbb N}}$ is a sequence in ${\\mathbb R}_+$.\n\t\n\t\\emph{Proof:}\n\tFollows immediately from Lemma~\\ref{lem:bpr_of_pocs} and Lemma~\\ref{lem:boundedness}.\n\\end{prop} \n\n\\subsection{Relation to the Superiorization Methodology}\\label{sec:relation_to_superiorization}\nThe authors of \\cite{censor2015weak} define superiorization as follows:\n\\begin{quote}\n\\textit{'The superiorization methodology works by taking an iterative algorithm, investigating its perturbation resilience, and then, using proactively such permitted perturbations, forcing the perturbed algorithm to do something useful in addition to what it is originally 
designed to do.'} \n\\end{quote}\nAlthough our proposed algorithm matches this informal definition,\nthere are some slight differences to the formal definition in \\cite{censor2015weak}, where the perturbations are required to be nonascending vectors for a convex superiorization objective function.\n\\begin{definition}[Nonascending Vectors \\cite{censor2015weak}]\\\nGiven a function $\\supObj:{\\mathbb R}^J\\to{\\mathbb R}$ and a point ${\\mathbf y}\\in{\\mathbb R}^J$, a vector $\\mathbf{d}\\in{\\mathbb R}^J$ is said to be nonascending for $\\supObj$ at ${\\mathbf y}$ iff $\\|\\mathbf{d}\\|\\le 1$ and there is a $\\delta>0$ such that for all $\\lambda\\in[0,\\delta]$ we have $\\supObj({\\mathbf y}+\\lambda\\mathbf{d}) \\le \\supObj({\\mathbf y})$.\n\\end{definition}\n\nIn our case, the goal of superiorization is two-fold, in the sense that it is expressed by two separate functions $\\sobjf:{{\\mathcal H}^M}\\to{\\mathbb R}$ and $\\sobjg:{{\\mathcal H}^M}\\to{\\mathbb R}$.\nWhile the function $\\sobjf$ in \\eqref{eq:equiv_objective} is convex, the function $\\sobjg$ in \\eqref{eq:rank_dist} (i.e., the distance to nonconvex rank constraint set $\\mathcal{R}$ in \\eqref{eq:rank_constraint}) is a nonconvex function.\nMoreover, we use perturbations that are not restricted to a unit ball, and therefore they are not necessarily nonascending vectors.\nHowever, as we have shown in Proposition~\\ref{prop:nonincreasing}, the proposed perturbations simultaneously reduce the values of $\\sobjf$ and $\\sobjg$.\nKeeping these slight distinctions in mind, we will refer to the proposed algorithm in \\eqref{eq:superior_alg} as \\emph{Superiorized Projections onto Convex Sets}. \t\n\n\\subsection{Summary of the Proposed Algorithm}\\label{sec:algorithm_summary}\nThe proposed multi-group multicast beamforming algorithm is summarized in Algorithm~\\ref{alg:spocs}. 
It is defined by the relaxation parameters $\\mu_1,\\dots,\\mu_{K+2}$ of the operator $\\T[\\star]$ in \\eqref{eq:pocs},\na scalar $a\\in(0,1)$ controlling the decay of the power-reducing perturbations, and a scalar $b\\in(0,1)$ controlling the decay of the sequence of perturbation scaling factors, i.e., $(\\forall n\\in{\\mathbb N})$ $\\alpha^{(n)}=a^n$ and $\\beta^{(n)}=b^n$. The stopping criterion is based on a tolerance value $\\epsilon>0$ and a maximum number $n_{\\max}$ of iterations.\n\nThe arguments of the algorithm are the indices $g_1,\\dots,g_K$ assigning a multicast group to each user, the channel vectors ${\\mathbf h}_1,\\dots,{\\mathbf h}_K\\in{\\mathbb C}^N$, the SINR requirements $\\gamma_1,\\dots,\\gamma_K$, and the noise powers $\\sigma_1,\\dots,\\sigma_K$ of all users, as well as the per-antenna power constraints $p_1,\\dots,p_N$.\nAt each step, the algorithm computes a perturbation according to \\eqref{eq:pert_mapping} and applies the feasibility-seeking operator $\\T[\\star]$ in \\eqref{eq:pocs}. 
It terminates when the relative variation of the estimate falls within the tolerance $\\epsilon$, or when the maximum number $n_{\\max}$ of iterations is reached.\nFinally, the beamforming vectors ${\\mathbf w}=\\{{\\mathbf w}_m\\}_{m\\in\\mathcal{M}}$ are computed by extracting the strongest principal component\n\\begin{equation}\\label{eq:principal}\n(\\forall m \\in\\mathcal{M})\\quad {\\mathbf w}_m=\\psi({\\mathbf X}_m)\\triangleq\\sqrt{\\sigma_1({\\mathbf X}_m)}{\\mathbf u}_{m1},\n\\end{equation}\nwhere $(\\forall m \\in\\mathcal{M})$ ${\\mathbf X}_m={\\mathbf U}_m{\\mathbf \\Sigma}_m{\\mathbf V}_m^H$, ${\\mathbf U}_m=[{\\mathbf u}_{m1},\\cdots,{\\mathbf u}_{mN}]$, and ${\\mathbf \\Sigma}_m=\\mathrm{diag}\\left(\\sigma_1({\\mathbf X}_m),\\dots,\\sigma_N({\\mathbf X}_m)\\right)$.\n\\begin{algorithm}[H]\n\\caption{Superiorized Projections onto Convex Sets}\\label{alg:spocs}\n\\begin{algorithmic}[1]\n\t\\State \\textbf{Parameters:}~ $\\{\\mu_k\\}_{k=1}^{K+2},~ a,b\\in(0,1),~ \\epsilon>0,~ n_{\\max}\\in{\\mathbb N}$\n\t\\State \\textbf{Input:}~ $\\{g_k\\}_{k\\in\\mathcal{K}}$, $\\{{\\mathbf h}_k\\}_{k\\in\\mathcal{K}}$, $\\{\\gamma_k\\}_{k\\in\\mathcal{K}}$, $\\{\\sigma_k\\}_{k\\in\\mathcal{K}}$, $\\{p_i\\}_{i\\in\\mathcal{N}}$\n\t\\State \\textbf{Output:}~ $\\{{\\mathbf w}_m\\in{\\mathbb C}^N\\}_{m\\in\\mathcal{M}}$\n\t\\State \\textbf{Initialization:}~ Choose arbitrary ${\\mathbf X}^{(0)}\\in{{\\mathcal H}^M}$\n\t\\For{$n=0,\\dots,n_{\\max}-1$}\t\n\t\\State ${\\mathbf Y}^{(n)}\\gets\\mathcal{Y}_{a^n}\\left({\\mathbf X}^{(n)}\\right)$\\Comment{Eq.~\\eqref{eq:pert_mapping}}\n\t\\State ${\\mathbf X}^{(n+1)} \\gets \\T[\\star]\\left({\\mathbf X}^{(n)} + b^n{\\mathbf Y}^{(n)}\\right)$\\Comment{Eq.~\\eqref{eq:pocs}}\n\t\\If{$\\Norm[{\\mathbf X}^{(n+1)}-{\\mathbf X}^{(n)}]<\\epsilon \\Norm[{\\mathbf X}^{(n+1)}]$}\n\t\\State \\textbf{break}\n\t\\EndIf\n\t\\EndFor\n\t\\State \\textbf{return} ${\\mathbf w}=\\left\\{\\psi\\left({\\mathbf 
X}_m^{(n+1)}\\right)\\right\\}_{m\\in\\mathcal{M}}$\\Comment{Eq.~\\eqref{eq:principal}}\n\\end{algorithmic}\n\\end{algorithm}\n\\section{Numerical Results}\nIn this section, we compare Algorithm~\\ref{alg:spocs} (\\texttt{S-POCS}) to several other methods from the literature. We choose identical noise levels and target SINRs for all users, i.e., $(\\forall k \\in\\mathcal{K})$ $\\sigma_k=\\sigma$ and $\\gamma_k=\\gamma$.\nFor each problem instance, we generate $K$ i.i.d. Rayleigh-fading channels $(\\forall k\\in\\mathcal{K})$ ${\\mathbf h}_k\\sim\\mathcal{CN}({\\mathbf 0},\\sigma^2{\\mathbf I}_N)$.\n\nIn the first simulation, we drop the per-antenna power constraints, i.e., we set $(\\forall i \\in\\mathcal{N})$ $p_i=\\infty$, and we consider the following algorithms:\n\\begin{itemize}\n\\item The proposed method summarized in Algorithm~\\ref{alg:spocs} (\\texttt{S-POCS})\n\\item Semidefinite relaxation with Gaussian randomization \\cite{karipidis2008quality} (\\texttt{SDR-GauRan})\n\\item The successive convex approximation algorithm from \\cite{mehanna2014feasible}, \\cite{christopoulos2015multicast} (\\texttt{FPP-SCA}{})\n\\item The ADMM-based convex-concave procedure from \\cite{chen2017admm} (\\texttt{CCP-ADMM})\n\\end{itemize} \nThe \\texttt{S-POCS}{} algorithm is as described in Algorithm~\\ref{alg:spocs}, with parameters $a=0.95$, $b=0.999$, $\\epsilon=10^{-6}$, $n_{\\max}=10^5$. For the QoS-constraint sets, we use relaxation parameters $(\\forall k\\in\\mathcal{K})$ $\\mu_k=1.9$, and for the per-antenna power constraint set $\\mathcal{P}$ and the PSD constraint ${\\setH^N_+}$, we use unrelaxed projections, i.e., $\\mu_{K+2}=\\mu_{K+1}=1$. We initialize the \\texttt{S-POCS}{} algorithm with ${\\mathbf X}^{(0)}={\\mathbf 0}$.\nThe convex optimization problems in the \\texttt{SDR-GauRan}{} and \\texttt{FPP-SCA}{} algorithms are solved with the interior point solver SDPT3 \\cite{toh1999sdpt3}. 
The parameters of the \\texttt{CCP-ADMM}{} algorithm are as specified in \\cite{chen2017admm}.\nAchieving a fair comparison between these methods is difficult because the structure of the respective algorithms is quite different. \n\nThe \\texttt{SDR-GauRan}{} algorithm begins by solving the relaxed problem in \\eqref{eq:SDR_hilbert}, and, subsequently, generates random candidate beamforming vectors using the \\texttt{RandA}{} method \\cite{sidiropoulos2006transmit}, \\cite{karipidis2008quality}. In the multi-group setting, where $M>1$, an additional convex optimization problem (multigroup multicast power control (MMPC), \\cite{karipidis2008quality}) needs to be solved for each candidate vector. If no feasible MMPC problem is found during the \\texttt{RandA}{} procedure, we define the output of the \\texttt{SDR-GauRan}{} algorithm to be $\\{\\psi({\\mathbf X}^\\star_m)\\}_{m\\in\\mathcal{M}}$, where ${\\mathbf X}^\\star\\in{{\\mathcal H}^M}$ is a solution to the relaxed SDP in \\eqref{eq:SDR_hilbert}.\n\nThe \\texttt{FPP-SCA}{} algorithm from \\cite{mehanna2014feasible} works by solving a sequence of convex subproblems. By introducing slack variables, the feasibility of each subproblem is ensured. This obviates the need for a feasible initialization point, which is typically required to ensure convergence of CCP\/SCA algorithms.\n\nThe \\texttt{CCP-ADMM}{} algorithm uses an ADMM algorithm to find a feasible starting point for the CCP. Subsequently, a similar ADMM algorithm is used to approximate each subproblem of the CCP.\nBecause the ADMM is a first-order method, the performance of \\texttt{CCP-ADMM}{} is heavily dependent on the stopping criteria of the inner ADMM algorithm.\n\nBy contrast, the \\texttt{S-POCS}{} algorithm does not require an initialization phase, and it works by iteratively applying a sequence of operators, which can be computed in a fixed number of steps.\nTherefore, we compare the performance based on computation time. 
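The iteration pattern of Algorithm~\ref{alg:spocs} can be illustrated with a minimal, generic sketch of a superiorized POCS loop. The two convex sets (a halfspace and a box), the norm-reducing perturbation, and all parameter values below are illustrative assumptions standing in for the QoS and power sets of the actual problem.

```python
import numpy as np

def superiorized_pocs(x0, projections, perturbation, a=0.95, b=0.999,
                      eps=1e-6, n_max=100000):
    """Superiorized POCS skeleton: add a geometrically damped perturbation,
    then apply the projections cyclically, until the relative change is small."""
    x = np.asarray(x0, dtype=float)
    for n in range(n_max):
        y = (a ** n) * perturbation(x)   # damped perturbation (cf. Eq. (pert_mapping))
        z = x + (b ** n) * y             # superiorization step
        for proj in projections:         # cyclic projections onto convex sets
            z = proj(z)
        if np.linalg.norm(z - x) < eps * np.linalg.norm(z):
            return z
        x = z
    return x

# Illustrative convex sets: halfspace {x : x[0] >= 1} and the box [-2, 2]^2.
def proj_halfspace(x):
    x = x.copy()
    x[0] = max(x[0], 1.0)
    return x

def proj_box(x):
    return np.clip(x, -2.0, 2.0)

# Perturbation steering toward a smaller Euclidean norm (objective reduction).
def perturb(x):
    return -x / (np.linalg.norm(x) + 1e-12)

x_final = superiorized_pocs(np.array([5.0, 3.0]),
                            [proj_halfspace, proj_box], perturb)
```

The returned point is feasible for both sets while the damped perturbations have driven its norm down, mirroring how Algorithm~\ref{alg:spocs} reduces transmit power while enforcing the constraint sets.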
Although we exclude the time required for evaluating the performance, we note that the computation time required by each of the methods strongly depends on the particular implementation.\n\nThe authors of \\cite{chen2017admm} assess the performance of the considered algorithms by comparing the transmit power achieved by the resulting beamformers. However, none of the methods considered here can guarantee feasibility of the beamforming vectors when the algorithms are terminated after a finite number of iterations. \nFurthermore, in the multi-group case, it may not be possible to scale an arbitrary candidate beamformer ${\\mathbf w}=\\{{\\mathbf w}_m\\in{\\mathbb C}^N\\}_{m\\in\\mathcal{M}}$ such that it satisfies all constraints in Problem~\\eqref{eq:original_problem}.\nIn principle, we could evaluate the performance by observing both the objective value (i.e., the transmit power of the beamformers) and a measure of constraint violation such as the normalized proximity function used in \\cite{censor2012effectiveness}. However, defining this measure of constraint violation is not straightforward, as the considered methods approach the problem in different spaces. Moreover, we are interested in expressing the quality of a beamforming vector by a single value to simplify the presentation.\nTherefore, we will compare the performance based on the minimal SINR achieved by the beamformer $\\sqrt{\\bfscale}\\cdot{\\mathbf w}$ with\n\\begin{equation*}\n\\bfscale = \\min\\left(\\frac{P_{\\mathrm{SDR}}^\\star }{\\sum_{m=1}^M{\\mathbf w}_m^H{\\mathbf w}_m}, \\min_{i\\in\\mathcal{N}}\\left( \\frac{p_i}{\\sum_{m=1}^M |w_{im}|^2}\\right)\\right).\n\\end{equation*}\nThe scaled vector $\\sqrt{\\bfscale}\\cdot{\\mathbf w}$ satisfies all power constraints, and its total power is bounded by the optimal objective value $P_{\\mathrm{SDR}}^\\star$ of the relaxed SDP in \\eqref{eq:SDR_hilbert}. 
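In code, the scaling $\bfscale$ and the resulting feasibility of the scaled beamformer can be sketched as follows. The dimensions, the SDP value $P_{\mathrm{SDR}}^\star$, and the power budgets are illustrative assumptions; the per-antenna power of antenna $i$ is taken as $\sum_m |w_{im}|^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 4, 2                                   # antennas, multicast groups (assumed)
W = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))  # columns w_m
P_sdr = 1.0                                   # optimal value of the relaxed SDP (assumed)
p = np.full(N, 0.5)                           # per-antenna power budgets (assumed)

total_power = np.sum(np.abs(W) ** 2)          # sum_m w_m^H w_m
per_antenna = np.sum(np.abs(W) ** 2, axis=1)  # sum_m |w_{im}|^2 for each antenna i
rho = min(P_sdr / total_power, np.min(p / per_antenna))

W_scaled = np.sqrt(rho) * W                   # satisfies all power constraints
```

By construction, the scaled beamformer respects both the total-power bound $P_{\mathrm{SDR}}^\star$ and every per-antenna budget $p_i$.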
More compactly, given a candidate beamformer ${\\mathbf w}=\\{{\\mathbf w}_m\\in{\\mathbb C}^N\\}_{m\\in\\mathcal{M}}$ for Problem~\\eqref{eq:original_problem}, we assess its performance based on the function\\footnote{For the sake of simplicity, we will refer to the \\emph{minimal SINR achieved by the scaled beamformer} $\\sqrt{\\bfscale}\\cdot{\\mathbf w}$ in \\eqref{eq:sinr_min} as \\emph{SINR} in the following.}\n\\begin{equation}\\label{eq:sinr_min}\n\\mathrm{SINR}^{\\min}_{\\rho}\\left({\\mathbf w}\\right)=\n\\underset{k\\in\\mathcal{K}}{\\min}~ \\frac{ |{\\mathbf w}_m^H{\\mathbf h}_k|^2}{\\sum_{l\\neq m}|{\\mathbf w}_l^H{\\mathbf h}_k|^2+\\frac{\\sigma_k^2 }{\\bfscale}},\n\\end{equation}\nwhere $m$ denotes the multicast group of user $k$.\nSince $P_{\\mathrm{SDR}}^\\star$ is a lower bound on the objective value of the original problem in \\eqref{eq:original_problem}, it holds $(\\forall \\{{\\mathbf w}_m\\in{\\mathbb C}^N\\}_{m\\in\\mathcal{M}})$ that $\\mathrm{SINR}^{\\min}_{\\rho}({\\mathbf w})\\le\\gamma$, where equality can only be achieved if the relaxed problem in \\eqref{eq:SDR_hilbert} has a solution composed of rank-one matrices.\n\n\\subsection{Performance vs. Computation Time}\nWe will now examine how the performance metric in \\eqref{eq:sinr_min} evolves over time for beamforming vectors produced by the respective algorithms.\nFigure~\\ref{fig:single_run} shows the performance comparison for an exemplary scenario with $N=20$ antennas and $K=20$ users split evenly into $M=2$ groups, where $\\sigma=1$, $\\gamma=1$, and $(\\forall i\\in\\mathcal{N})$ $p_i=\\infty$.\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=0.59]{figures\/timeseq_K=20_M=2_N=20_single_markers.pdf}\n\t\\caption{$\\mathrm{SINR}^{\\min}_{\\rho}({\\mathbf w}^{(t)})$ over time in a system with $N=20$ antennas and $K=20$ users split evenly into $M=2$ multicast groups. 
}\n\t\\label{fig:single_run}\n\\end{figure}\nIt can be seen that the \\texttt{S-POCS}{} algorithm quickly converges to a point achieving an SINR close to the specified target value $\\gamma$.\nThe discontinuities in the SINR curve for the \\texttt{CCP-ADMM}{} algorithm are due to the inner- and outer optimization loops. \nFor the \\texttt{SDR-GauRan}{} algorithm, the SINR increases whenever the randomization produces a beamformer with better performance than the previous one. \nThe SINR of the \\texttt{FPP-SCA}{} algorithm improves continuously, albeit more slowly than the \\texttt{S-POCS}{} and \\texttt{CCP-ADMM}{} algorithms.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=0.59]{figures\/sinr_percentiles_timeseq_replotted-compressed.pdf}\n\t\\caption{$\\mathrm{SINR}^{\\min}_{\\rho}({\\mathbf w}^{(t)})$ over time in a system with $N=20$ antennas and $K=20$ users split evenly into $M=2$ multicast groups. The shaded regions include the outcomes for $100\\%$, $75\\%$, $50\\%$, and $25\\%$ out of 100 problem instances, respectively,\n and the bold line represents the median.}\n\t\\label{fig:multi_group_vs_time}\n\\end{figure}\nNext, we evaluate the performance over 100 randomly generated problems. Since the SINR does not increase monotonically for all of the methods considered, we assume that each algorithm can keep track of the best beamformer produced so far. In this way, the oscillations in the SINR metric for the \\texttt{CCP-ADMM}{} algorithm do not have a negative impact on its average performance.\n\nFigure \\ref{fig:multi_group_vs_time} shows the performance of the beamforming vectors computed with the respective algorithms over time for a system with $N=20$ transmit antennas, and $K=20$ users split evenly into $M=2$ multicast groups.\nThe shaded regions correspond to the $100\\%$, $75\\%$, $50\\%$, and $25\\%$ quantiles over all randomly generated problems. 
More precisely, the margins of the shaded regions correspond to the 1st, 13th, 26th, 38th, 63rd, 75th, 88th, and 100th out of 100 sorted y-axis values. For each algorithm, the median is represented by a bold line.\nThe \\texttt{S-POCS}{} algorithm achieves the highest median SINR, while requiring the lowest computation time among all methods considered. Moreover, it can be seen that the variation around this median value is less pronounced than for the remaining approaches.\nPut differently, the time required for reaching a certain SINR varies much less for the \\texttt{S-POCS}{} algorithm than for the remaining methods.\nThis can be of particular interest in delay-sensitive applications, where a beamforming vector for a given channel realization must be computed within a fixed time period.\n\n\n\n\n\n\\subsection{Varying number of antennas}\nIn this subsection, we investigate the impact of the transmit antenna array size $N$ on the performance of the respective beamforming algorithms. To do so, we generate 100 random problem instances for each array size $N$ with $K=20$ users split evenly into $M=2$ multicast groups. We choose unit target SINR and unit noise power for all users, and unit per-antenna power constraints, i.e., $\\gamma=1$, $\\sigma=1$, and $(\\forall i \\in\\mathcal{N})$ $p_i=1$.\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=0.59]{figures\/swp_N_sinr.pdf}\n\t\\caption{$\\mathrm{SINR}^{\\min}_{\\rho}({\\mathbf w})$ for $K=20$ users split evenly into $M=2$ groups for varying antenna array sizes $N$.}\n\t\\label{fig:multi_group_swpN}\n\\end{figure}\nFor the \\texttt{SDR-GauRan}{} algorithm, we generate $200$ candidate beamforming vectors for each problem instance. We use the \\texttt{CCP-ADMM}{} algorithm with parameters as specified in \\cite{chen2017admm}. Since the inner ADMM iteration converges slowly for some problem instances, we set the maximal number of steps of the ADMM to $j_{\\max}=300$. 
For the outer CCP loop, we use the stopping criteria specified in \\cite{chen2017admm}, i.e., we stop the algorithm once the relative decrease of the objective value is below $10^{-3}$ or $t_{\\max}=30$ outer iterations are exceeded.\nFor the \\texttt{FPP-SCA}{} algorithm, we use a fixed number of $30$ successive convex approximation steps.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.59]{figures\/swp_N_time.pdf}\n\\caption{Computation time for $K=20$ users split evenly into $M=2$ groups for varying antenna array sizes $N$.}\n\\label{fig:multi_group_swpN_time}\n\\end{figure}\n\nFigure~\\ref{fig:multi_group_swpN} shows the performance metric in \\eqref{eq:sinr_min} for different numbers $N$ of transmit antennas, averaged over 100 random problem instances each. For all $N$, \\texttt{S-POCS}{} achieves the highest value for $\\mathrm{SINR}^{\\min}_{\\rho}(\\cdot)$, followed by the \\texttt{FPP-SCA}{}, \\texttt{CCP-ADMM}{}, and \\texttt{SDR-GauRan}{} algorithms. \nFor $N\\ge80$, the \\texttt{S-POCS}{} algorithm achieves an SINR of $\\mathrm{SINR}^{\\min}_{\\rho}({\\mathbf w}_{\\texttt{S-POCS}})\\ge \\SI{-0.05}{dB}$.\nBy contrast, the remaining methods do not exceed $\\mathrm{SINR}^{\\min}_{\\rho}({\\mathbf w}_{\\texttt{FPP-SCA}})=\\SI{-0.12}{dB}$, $\\mathrm{SINR}^{\\min}_{\\rho}({\\mathbf w}_{\\texttt{CCP-ADMM}})=\\SI{-0.15}{dB}$, and $\\mathrm{SINR}^{\\min}_{\\rho}({\\mathbf w}_{\\texttt{SDR-GauRan}})=\\SI{-1.18}{dB}$, respectively.\n\n\n\n\n\nThe corresponding average computation times are shown in Figure~\\ref{fig:multi_group_swpN_time}.\nThe \\texttt{S-POCS}{} algorithm requires\n\\SI{0.26}{\\%}--\\SI{2.38}{\\%} of the computation time required by \\texttt{SDR-GauRan}{},\n\\SI{0.95}{\\%}--\\SI{11.64}{\\%} of the computation time required by\n\\texttt{FPP-SCA}{}, and\n\\SI{6.49}{\\%}--\\SI{233.6}{\\%} of the computation time required by \\texttt{CCP-ADMM}{}.\nFor $N\\ge80$, the computation time of \\texttt{S-POCS}{} exceeds that of 
\\texttt{CCP-ADMM}{}.\n\n\n\n\\subsection{Varying number of users}\nIn the following simulation, we fix an array size of $N=50$ antenna elements, and we evaluate the performance of each method for $K\\in\\{4, 8, 16, 32, 48, 64\\}$ users split evenly into $M=4$ multicast groups. Figure~\\ref{fig:multi_group_swpK} shows the performance metric in \\eqref{eq:sinr_min} averaged over $100$ random problem instances for each $K$. As before, we choose $\\gamma=1$, $\\sigma=1$, and $(\\forall i\\in\\mathcal{N})$ $p_i=1$.\n\nWhile all algorithms achieve close to optimal performance for small numbers of users, the SINR in \\eqref{eq:sinr_min} decreases considerably faster for \\texttt{SDR-GauRan}{} than for the remaining methods. \nFor all values of $K$, \\texttt{S-POCS}{} achieves the highest value for $\\mathrm{SINR}^{\\min}_{\\rho}(\\cdot)$ among all methods.\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=0.59]{figures\/swp_K_sinr.pdf}\n\t\\caption{$\\mathrm{SINR}^{\\min}_{\\rho}({\\mathbf w})$ for a system with $N=50$ transmit antennas and a varying number of users split evenly into $M=4$ multicast groups.}\n\t\\label{fig:multi_group_swpK}\n\\end{figure}\n\n\n\nThe corresponding average computation times are shown in Figure~\\ref{fig:multi_group_swpK_time}. \n\\texttt{S-POCS}{} requires\n\\SI{1.76}{\\%}--\\SI{6.12}{\\%} of the computation time required by \\texttt{SDR-GauRan}{},\n\\SI{3.75}{\\%}--\\SI{5.41}{\\%} of the computation time required by \\texttt{FPP-SCA}{}, and\n\\SI{20.18}{\\%}--\\SI{1626}{\\%} of the computation time required by \\texttt{CCP-ADMM}{}.\nWhile the \\texttt{CCP-ADMM}{} takes only a fraction of the time required by \\texttt{S-POCS}{} for small $K$, it slows down considerably as $K$ increases. 
\nFor moderate and large numbers of users, \\texttt{S-POCS}{} outperforms the remaining methods in terms of both approximation gap and computation time.\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=0.59]{figures\/swp_K_time.pdf}\n\t\\caption{Computation time for a system with $N=50$ transmit antennas and a varying number of users split evenly into $M=4$ multicast groups.}\n\t\\label{fig:multi_group_swpK_time}\n\\end{figure}\n\n\n\\subsection{Varying Target SINR} \nIn the following simulation, we evaluate the impact of the target SINR on the respective algorithms in a system with $N=30$ antenna elements, $K=20$ users split evenly into $M=2$ multicast groups, and unit noise power $\\sigma=1$. Since the target SINR has a strong impact on the transmit power, we set $(\\forall i\\in\\mathcal{N})$ $p_i=\\infty$ to avoid generating infeasible instances of Problem~\\eqref{eq:original_problem}.\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.59]{figures\/swp_gamma_sinr.pdf}\n\\caption{$\\mathrm{SINR}^{\\min}_{\\rho}({\\mathbf w})$ for a system with $N=30$ transmit antennas and $K=20$ users split evenly into $M=2$ multicast groups.}\n\\label{fig:multi_group_swpgamma}\n\\end{figure}\nFigure~\\ref{fig:multi_group_swpgamma} shows the performance metric in \\eqref{eq:sinr_min} achieved by each method for the respective target SINR. Except for the \\texttt{SDR-GauRan}{} algorithm, which exhibits a gap of about \\SI{2}{dB} to the target SINR, all methods achieve close to optimal performance for each target SINR. \nFigure~\\ref{fig:multi_group_swpgamma_time} shows the computation time required by each algorithm for varying target SINR $\\gamma$. The average computation time of \\texttt{FPP-SCA}{} is almost constant. For \\texttt{SDR-GauRan}{} and \\texttt{CCP-ADMM}{}, the computation time decreases slightly with an increasing target SINR. 
While the proposed \\texttt{S-POCS}{} algorithm converges quickly for low target SINR levels, its computation time exceeds that of the \\texttt{CCP-ADMM}{} for target SINRs above \\SI{8}{dB}. This indicates that the best choice of first-order algorithm for multicast beamforming depends on the regime in which the system is operated.\n\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.59]{figures\/swp_gamma_time.pdf}\n\\caption{Computation time for a system with $N=30$ transmit antennas and $K=20$ users split evenly into $M=2$ multicast groups.}\n\\label{fig:multi_group_swpgamma_time}\n\\end{figure}\n\\section{Conclusion}\nIn this paper, we proposed an algorithm for multi-group multicast beamforming with per-antenna power constraints. We showed that the sequence produced by this algorithm is guaranteed to converge to a feasible point of the relaxed semidefinite program, while the perturbations added in each iteration reduce the objective value and the distance to the nonconvex rank constraints.\nNumerical comparisons show that the proposed method outperforms state-of-the-art algorithms in terms of both approximation gap and computation time in many cases.\nIts advantage over existing algorithms is particularly pronounced in the low target SINR regime as well as for large numbers of receivers. This makes the proposed method particularly relevant for low-energy or massive access applications. \n\nIn comparison to other techniques, the computation time of the proposed method varies less across different problem instances of the same dimension. In communication systems, which are typically subject to strict latency constraints, the iteration can be terminated after a fixed number of steps without suffering severe performance loss. Moreover, the simple structure of the proposed method allows for a straightforward implementation in real-world systems. 
\n\nThe applicability of the proposed algorithm is not restricted to the multicast beamforming problem considered here. A slight modification of the rank constraint naturally leads to an algorithm for the general rank multicast beamforming problem considered in \\cite{taleb2020general}. Future research could apply superiorized projections onto convex sets to other nonconvex QCQP problems such as MIMO detection or sensor network localization \\cite{luo2010semidefinite}.\n\n\\section{Appendix}\n\\begin{remark}\\label{rem:real_inner_product}\n\tThe function $\\langle\\cdot,\\cdot\\rangle$ defined in \\eqref{eq:innerProduct} is a real inner product.\n\t\n\t\\emph{Proof:}\n\tGiven a real vector space $\\mathcal{V}$, a real inner product is a function $\\langle\\cdot,\\cdot\\rangle:\\mathcal{V}\\times\\mathcal{V}\\to{\\mathbb R}$ satisfying \\cite{jain2005functional}\n\t\\begin{enumerate}\n\t\t\\item $(\\forall {\\mathbf x}\\in\\mathcal{V})$ $\\langle{\\mathbf x},{\\mathbf x}\\rangle\\ge0$ and $\\langle{\\mathbf x},{\\mathbf x}\\rangle=0\\iff {\\mathbf x}={\\mathbf 0}$ \\label{it:1}\n\t\t\\item $(\\forall{\\mathbf x},{\\mathbf y}\\in\\mathcal{V})$ $\\langle{\\mathbf x},{\\mathbf y}\\rangle= \\langle{\\mathbf y},{\\mathbf x}\\rangle$ \\label{it:2}\n\t\t\\item $(\\forall{\\mathbf x},{\\mathbf y}\\in\\mathcal{V})$\\allowbreak$(\\forall \\alpha\\in{\\mathbb R})$ $\\langle\\alpha{\\mathbf x},{\\mathbf y}\\rangle= \\alpha\\langle{\\mathbf x},{\\mathbf y}\\rangle$ \\label{it:3}\n\t\t\\item $(\\forall{\\mathbf x},{\\mathbf y},{\\mathbf z}\\in\\mathcal{V})$ $\\langle{\\mathbf x}+{\\mathbf y},{\\mathbf z}\\rangle= \\langle{\\mathbf x},{\\mathbf z}\\rangle + \\langle{\\mathbf y},{\\mathbf z}\\rangle$.\\label{it:4}\n\t\\end{enumerate}\n\nNote that $(\\forall{\\mathbf X}\\in\\mathcal{V})$ $\\mathrm{Re}\\{\\mathrm{tr}({\\mathbf X}^H{\\mathbf X})\\}=\\mathrm{tr}({\\mathbf X}^H{\\mathbf X})=\\|{\\mathbf X}\\|_\\mathrm{F}^2$, where $\\|\\cdot\\|_\\mathrm{F}$ is the standard Frobenius 
norm.\nConsequently, \\ref{it:1}) follows from the nonnegativity and positive-definiteness of a norm. The symmetry in \\ref{it:2}) follows from the fact that $\\mathrm{tr}({\\mathbf A}{\\mathbf B})=\\mathrm{tr}({\\mathbf B}{\\mathbf A})$ for matrices ${\\mathbf A},{\\mathbf B}$ with compatible dimensions, and $\\mathrm{Re}\\{\\mathrm{tr}({\\mathbf X})\\}=\\mathrm{Re}\\{\\mathrm{tr}({\\mathbf X}^H)\\}$ for ${\\mathbf X}\\in\\mathcal{V}$. Moreover, \\ref{it:3}) and\n\\ref{it:4}) follow from the linearity of $\\mathrm{Re}\\{\\cdot\\}$ and $\\mathrm{tr}(\\cdot)$.\\qed\n\\end{remark}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIt has been over a century since Einstein \\cite{1,2,3} formulated general relativity (GR) in 1915. He was aware that the gravitational field must interact with itself, but was unable to produce a symmetric tensor to properly describe the energy-momentum of the gravitational field. Instead, a non-covariant pseudo-tensor was introduced. However, the difficulties associated with this pseudo-tensor led to the problem of the localization of energy in GR. Over the decades, other pseudo-tensors were developed and different approaches to describe the energy-momentum of the gravitational field were investigated, \\cite{4,5,6}(and references therein) but the energy localization problem still exists today. Despite this deficiency, general relativity is one of the two cornerstones of physics.\\par GR was developed by Einstein on a four-dimensional Riemannian manifold with the understanding that spacetime was locally Minkowskian under free fall. Today, we more properly describe spacetime on a time-oriented Lorentzian manifold with metric. The Lorentzian metric can be associated with a Riemannian metric by using the line element field, $(\\bm{X},\\bm{-X})$ that exists on a non-compact paracompact Hausdorff manifold. 
A classical result in Riemannian geometry, namely the Berger-Ebin theorem \\cite{7}, can then be adapted to spacetime. This results in the Orthogonal Decomposition Theorem (ODT): an arbitrary second rank symmetric tensor on a time-oriented Lorentzian manifold with a torsionless and metric compatible connection can be orthogonally decomposed into a linear sum of divergenceless tensors and a new tensor, $\\varPhi_{\\alpha\\beta}$. It is a symmetric tensor constructed from the Lie derivative along $X$ of both the metric and a product of unit line element covectors. \\par The left hand side of Einstein's equation $G_{\\alpha\\beta}+\\Lambda g_{\\alpha\\beta}=\\frac{8\\pi G}{c^{4}}T_{\\alpha\\beta} $ involves symmetric divergenceless tensors. The right hand side is defined by the variation of the action functional for all matter fields with respect to the metric. This generates a divergenceless symmetric tensor that must describe all interactions of the gravitational field with the matter fields, and with the energy-momentum of the gravitational field itself; otherwise, it would not be locally conserved. However, there is nothing in this definition that deals explicitly with the energy-momentum of the gravitational field. If we define $ \\tilde{T}_{\\alpha\\beta} $ as a symmetric energy-momentum tensor generated from the matter fields without the requirement that it completely describes the energy-momentum of the gravitational field as well, it cannot be locally conserved and would not be divergenceless. Consequently, this second rank symmetric tensor can be set proportional to an arbitrary symmetric tensor $ w_{\\alpha\\beta} $, which is then orthogonally decomposed by the ODT into a linear sum of divergenceless tensors and $\\varPhi_{\\alpha\\beta} $. 
Lovelock's theorem \\cite{8} proves that in four dimensions, the divergenceless tensors composed from the metric and its first two derivatives can only consist of the metric and the tensor named after Einstein, $G_{\\alpha\\beta} $. Therefore, Einstein's equation in a four-dimensional Lorentzian spacetime should be expressed more completely by including the $\\varPhi_{\\alpha\\beta}$ term. \\par It will be proved that $G_{\\alpha\\beta}+\\Lambda g_{\\alpha\\beta}+\\varPhi_{\\alpha\\beta}=\\frac{8\\pi G}{c^{4}}\\tilde{T}_{\\alpha\\beta} $ and that the tensor $T_{\\alpha\\beta}=\\tilde{T}_{\\alpha\\beta}-\\frac{c^{4}}{8\\pi G}\\varPhi_{\\alpha\\beta}$ is divergenceless, which allows Einstein's equation to be recovered. In that sense, $\\varPhi_{\\alpha\\beta}$ is hidden in GR. Thus, general relativity is not complete; it is possible to construct a symmetric tensor from the metric and a regular vector field that is independent of the energy-momentum tensor of the matter fields and represents the energy-momentum of the gravitational field itself. \\par This differs from the presently and generally accepted belief that GR is complete. However, if that notion were true, GR should be able to describe particular features of dark matter. That unfortunately is not the case and is the reason why physicists invented the widely accepted theory of Lambda cold dark matter ($ \\Lambda $CDM) to explain, in particular, the flat rotation curves of some galaxies, while leaving GR intact. Modified general relativity can describe those and other galactic rotation curves as discussed in section 7. \\par Since Lie derivatives have the same form when expressed with covariant or partial derivatives, $\\varPhi_{\\alpha\\beta}$ does not vanish when the connection coefficients vanish. The metric can be locally Minkowskian, as in free fall, without forcing $\\varPhi_{\\alpha\\beta}$ to vanish. 
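This rests on the standard identity for the Lie derivative of the metric with a torsionless, metric-compatible connection:\n\\begin{equation*}\n\\pounds_{X}g_{\\alpha\\beta}=X^{\\lambda}\\partial_{\\lambda}g_{\\alpha\\beta}+g_{\\lambda\\beta}\\partial_{\\alpha}X^{\\lambda}+g_{\\alpha\\lambda}\\partial_{\\beta}X^{\\lambda}=\\nabla_{\\alpha}X_{\\beta}+\\nabla_{\\beta}X_{\\alpha},\n\\end{equation*}\nin which the connection coefficients cancel pairwise. At a point where the connection coefficients vanish, the partial-derivative and covariant forms coincide term by term, yet neither need vanish there.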
This contrasts with free fall in GR, where the connection coefficients vanish locally and the gravitational field locally disappears; hence the well-known conception that the energy-momentum of the gravitational field is not localizable \\cite{9}. $\\varPhi_{\\alpha\\beta}$ has the structure to describe local gravitational energy-momentum. In free fall, the effective force of gravity disappears locally but the self-energy of the gravitational field remains intact. \\par In section 2, the Orthogonal Decomposition Theorem is proved. In section 3, a modified equation of GR is derived by using the principle of least action, the ODT, and a fundamental postulate of GR. $\\varPhi_{\\alpha\\beta}$ appears naturally alongside the Einstein tensor and introduces $ X^{\\mu} $ from the line element field and its collinear unit vector $ u^{\\mu} $ as dynamical variables independent of the Riemannian metric. Variation of the action functional with respect to $ X^{\\mu} $ leads to the Lorentz invariant expression $ u_{\\mu}=\\frac{3}{\\Phi}\\partial_{\\mu}f$ where $ \\Phi $ is the trace of $ \\varPhi_{\\alpha\\beta} $ and $ f\\neq0 $ is the magnitude of $ X^{\\alpha} $. The myriad of possible line element covectors is restricted to those satisfying this condition. \\par Section 4 discusses the conservation equation for the divergenceless energy-momentum tensor $T_{\\alpha\\beta}=\\tilde{T}_{\\alpha\\beta}-\\frac{c^{4}}{8\\pi G}\\varPhi_{\\alpha\\beta} $ where $ \\tilde{T}_{\\alpha\\beta} $ is the total matter energy-momentum tensor describing all types of matter including baryonic and dark matter, massive neutrinos and any other possible particle, if dark matter particles exist. \n\\par In section 5, the cosmological constant is discussed. Using the global constraint $\\int \\Phi\\sqrt{-g}d^{4}x=0$, it is shown that the cosmological constant $\\Lambda$ is dynamically replaced with $ \\Phi $. 
\n\\par Section 6 is a discussion of the modified Einstein equation of GR in the Friedmann-Lema{\\^i}tre-Robertson-Walker (FLRW) metric, and dark energy. A gravitationally repulsive condition is described by $ \\Phi>-2\\Lambda_{d} $ where $\\Lambda_{d} $ is the dark energy density. $\\Phi$ defines dark energy. Dark energy describes the inflation of the universe immediately after the Big Bang when no matter of any type was present. The dark energy density then tends to the present value of the vacuum energy density. A cyclic universe is born with maximum and minimum values of the cosmological scale factor in the FLRW metric. Dark energy explains the small value of the vacuum energy density and why it now dominates the expansion and acceleration of the present universe.\\par Cyclic universes have been reported in the literature \\cite{10,11,12,13,14}. Dark energy has been described by various scalar theories such as the quintessential \\cite{15}, k-essence \\cite{16} (and references therein) phantom or quintom theories \\cite{17} (and further references therein). Dark energy in this article is not a scalar theory; it is the energy generated from the interaction of the gravitational field with its energy-momentum tensor.\\par The existence of a tensor that describes the energy-momentum of the gravitational field brings into question the subject of dark matter. Since Einstein's equation is incomplete without the tensor $ \\varPhi_{\\alpha\\beta} $, the plausibility of dark matter is questionable; its existence is based on the assumption that general relativity is a complete theory. Although the self-interactions in a weak gravitational field may be extremely small, in the gravitational field of a galaxy, they may be significant enough to explain dark matter.\\par In section 7, the modified equation of GR is calculated with a spheroidal metric in a region of spacetime outside of matter with the assumption that dark matter does not exist. 
Two additional terms appear in the modified Newtonian force equation that provide it with the flexibility to describe various types of galaxies. By balancing the dark energy force with the Newtonian force, the Tully-Fisher relation is established and the acceleration parameter in MOND is expressed in terms of the dark energy radial force parameter. \n\n\\section{Orthogonal Decomposition of Symmetric Tensors}\nThe Lorentzian spacetime is described on a four-dimensional time-oriented non-compact paracompact Hausdorff manifold with metric, $ (M,g_{\\alpha\\beta}) $. The connection on the manifold is torsionless and metric compatible, and the metric has a $ +2 $ signature. The manifold admits a smooth regular line element field $(\\bm{X},-\\bm{X}) $ and a unit vector $ \\bm{u} $ collinear with one of the pair of regular vectors in the line element field \\cite{18,19,23}. The spacetime is assumed to admit a Cauchy surface and is therefore globally hyperbolic. This forbids the presence of closed causal curves \\cite{23}. \\par \nThe orthogonal decomposition of symmetric tensors on Riemannian manifolds has been documented in the literature \\cite{7,20,21,22}. However, a decomposition of symmetric tensors on a time-oriented Lorentzian manifold is required. 
\n\\begin{theorem}\nAn arbitrary (0,2) symmetric tensor $ w_{\\alpha\\beta} $ in the symmetric cotangent bundle $ S^{2}T^{\\ast}M $ on an n-dimensional time-oriented Lorentzian manifold $ (M,g_{\\alpha\\beta}) $ with a torsionless and metric compatible connection can be orthogonally decomposed as\n\\begin{equation}\\label{ODT}\n\tw_{\\alpha\\beta}= v_{\\alpha\\beta}+ \\varPhi_{\\alpha\\beta} \n\\end{equation} where $v_{\\alpha\\beta} $ represents a linear sum of symmetric divergenceless (0,2) tensors and $\\varPhi_{\\alpha\\beta}=\\frac{1}{2}\\pounds_{X}g_{\\alpha\\beta}+\\pounds_{X}u_{\\alpha}u_{\\beta}$ where the unit vector $\\bm{u} $ is collinear to one of the pair of regular line element vectors $ (\\bm{X},-\\bm{X}) $.\n\\end{theorem}\n\\begin{proof}\nLet the Lorentzian manifold $ (M,g_{\\alpha\\beta}) $ be non-compact paracompact Hausdorff and orientable. A smooth regular line element field $(\\bm{X},\\bm{-X)}$ exists as does a unit vector $ \\bm{u} $ collinear with one of the pair of line element vectors. Let M be endowed with a smooth Riemannian metric $ g^{+}_{\\alpha\\beta} $. The smooth Lorentzian metric $ g_{\\alpha\\beta} $ is constructed \\cite{19,23} from $ g^{+}_{\\alpha\\beta} $ and the unit covectors $ u_{\\alpha}$ and $ u_{\\beta}$ by setting\n\\begin{equation}\\label{LRmet}\ng_{\\alpha\\beta}=g^{+}_{\\alpha\\beta}-2u_{\\alpha}u_{\\beta}.\t\n\\end{equation}\n Let $ w_{\\alpha\\beta} $ and $ v_{\\alpha\\beta} $ belong to $ S^{2}T^{\\ast}M $, the cotangent bundle of symmetric $(0,2)$ tensors on M. 
In the compact neighborhood of a point $p$ in an open subset of $ S^{2}T^{\\ast}M $ which contains $ g^{+}_{\\alpha\\beta} $, an arbitrary $ (0,2) $ symmetric tensor $ w_{\\alpha\\beta} $ can be orthogonally and uniquely decomposed by the Berger-Ebin theorem \\cite{7} according to \\begin{equation}\\label{key}\n\tw_{\\alpha\\beta}=v_{\\alpha\\beta}+\\frac{1}{2}\\pounds_{\\xi}g^{+}_{\\alpha\\beta}\n\\end{equation} where $ \\bm{\\xi} $ is an arbitrary vector and $v_{\\alpha\\beta} $ represents a linear sum of symmetric divergenceless (0,2) tensors: $ {\\nabla^{+}}^{\\alpha}v_{\\alpha\\beta}=0 $.\\par The divergence of $ v_{\\beta}^{\\alpha} $ in the mixed tensor bundle can be written as $ \\nabla_{\\alpha}v_{\\beta}^{\\alpha}=\\partial_{\\alpha}v_{\\beta}^{\\alpha}+\\frac{v_{\\beta}^{\\lambda}}{2g}\\partial_{\\lambda}g-\\frac{1}{2}v^{\\alpha\\lambda}\\partial_{\\beta}g_{\\alpha\\lambda} $. Since the determinant of $ g_{\\alpha\\beta} $, $ g $, is related to that of $ g^{+}_{\\alpha\\beta} $ by $ g=-g^{+} $ \n\\begin{equation}\\label{Dv}\n\t\\begin{split}\n\t\t\\nabla_{\\alpha}v_{\\beta}^{\\alpha}-\\nabla^{+}_{\\alpha}v_{\\beta}^{\\alpha}=v^{\\alpha\\lambda}\\partial_{\\beta}(u_{\\alpha}u_{\\lambda}).\n\t\\end{split}\n\\end{equation} The left hand side of (\\ref{Dv}) is a (0,1) tensor but the right hand side is not, which demands:\n\\begin{equation}\n\tv^{\\alpha\\lambda}\\partial_{\\beta}(u_{\\alpha}u_{\\lambda})=0\n\\end{equation}where $\\partial_{\\beta}u_{\\alpha}\\neq0 $. This guarantees $\\nabla^{\\alpha}v_{\\alpha\\beta}=0 $ because $\\nabla^{+\\alpha}v_{\\alpha\\beta}=0 $. Hence, \n\\begin{equation}\\label{decomp}\n\tw_{\\alpha\\beta}=v_{\\alpha\\beta}+\\frac{1}{2}\\pounds_{\\xi}g_{\\alpha\\beta}+\\pounds_{\\xi}u_{\\alpha}u_{\\beta}\n\\end{equation} where $ \\nabla^{\\alpha}v_{\\alpha\\beta}=0 $. $ \\xi^{\\lambda} $ is an arbitrary vector which can be chosen to be collinear to $ u^{\\lambda} $. 
Without loss of generality, $ \\xi^{\\lambda} $ can then be replaced by $ X^{\\lambda} $. Using $ X^{\\lambda}=fu^{\\lambda} $ where $f\\neq0 $ is the magnitude of $X^{\\lambda} $, the expression $X^{\\lambda}\\nabla_{\\lambda}(u_{\\alpha}u_{\\beta}) $ in the last term of (\\ref{decomp}) then vanishes in an affine parameterization and\n\\begin{equation}\\label{udecomp}\n\tw_{\\alpha\\beta}=v_{\\alpha\\beta}+\\varPhi_{\\alpha\\beta}\n\\end{equation}where\n\\begin{equation}\\label{Phiab}\n\t\\varPhi_{\\alpha\\beta}:=\\frac{1}{2}(\\nabla_{\\alpha}X_{\\beta}+\\nabla_{\\beta}X_{\\alpha})+u^{\\lambda}(u_{\\alpha}\\nabla_{\\beta}X_{\\lambda}+u_{\\beta}\\nabla_{\\alpha}X_{\\lambda}).\n\\end{equation} The decomposition is orthogonal: $ \\langle v_{\\alpha\\beta},\\varPhi^{\\alpha\\beta}\\rangle=0 $. \\\\\n\\end{proof}\n\\section{Derivation of the modified equation of general relativity}\nA modified equation of general relativity of the form $ C_{\\alpha\\beta}=0 $ is sought which contains a linear combination of symmetric tensors that define the Einstein equation, and a new tensor which can describe the energy-momentum of the gravitational field itself. This can be achieved by using the principle of least action, the Orthogonal Decomposition Theorem (\\ref{ODT}), and a fundamental postulate of GR. \\par First, the field equations contained in $C_{\\alpha\\beta} $, which are sought to describe general relativity and the energy-momentum of the gravitational field, must be derivable from the action functional \n\\begin{equation}\\label{S}\nS=S^{F}+S^{EH}+S^{G}\n\\end{equation} where $S^{F}$ and $L^{F}$ refer to the action and Lagrangian, respectively, for all types of matter fields including those of dark matter if dark matter particles exist. $S^{EH}$ is the Einstein-Hilbert action for general relativity and $S^{G}$ is the action for the energy-momentum of the gravitational field with Lagrangian $L^{G} $. 
The variation of $S^{F}$ with respect to $g^{\\alpha\\beta}$\\begin{equation}\n\\delta S^{F}=\\int (\\frac{\\delta L^{F}}{\\delta g^{\\alpha\\beta}}-\\frac{1}{2}L^{F}g_{\\alpha\\beta})\\delta g^{\\alpha\\beta}\\sqrt{-g}d^{4}x\n\\end{equation} generates the symmetric energy-momentum tensor $\\tilde{T}_{\\alpha\\beta}$ which represents the interaction of all types of matter fields and associated radiation in a gravitational field, but does not specifically include the energy-momentum of the gravitational field:\n\\begin{equation}\n\\tilde{T}_{\\alpha\\beta}:=-2c (\\frac{\\delta L^{F}}{\\delta g^{\\alpha\\beta}}-\\frac{1}{2}L^{F}g_{\\alpha\\beta}). \n\\end{equation}\n$C_{\\alpha\\beta}$ must then be expressed as\n\\begin{equation}\\label{key}\nC_{\\alpha\\beta}=\\frac{a}{c}\\tilde{T}_{\\alpha\\beta}+bw_{\\alpha\\beta}\n\\end{equation} where $ w_{\\alpha\\beta}$ is an unknown symmetric tensor independent of $ \\tilde{T}_{\\alpha\\beta} $; $a$ and $b$ are arbitrary constants.\\par Second, $w_{\\alpha\\beta}$ can be orthogonally decomposed by the ODT into\n\\begin{equation}\\label{key}\nw_{\\alpha\\beta}=v_{\\alpha\\beta}+\\varPhi_{\\alpha\\beta} \\end{equation} where $ \\varPhi_{\\alpha\\beta} $ is given by (\\ref{Phiab}) and $ \\nabla^{\\alpha}v_{\\alpha\\beta}=0 $. \\par Third, Einstein concluded \\cite{1} that the metric should describe both the geometry of spacetime and the gravitational field. He postulated the totality of the matter energy-momentum tensor and the energy-momentum of the gravitational field to be the source of the gravitational field. Adhering to this philosophy, the energy-momentum tensor $ T_{\\alpha\\beta} $ describing the totality of all types of matter and the energy-momentum of the gravitational field is postulated to be the source of the gravitational field.\n\\par $\\varPhi_{\\alpha\\beta}$ is independent of $\\tilde{T}_{\\alpha\\beta}$ and is not divergenceless. 
$ \\varPhi_{\\alpha\\beta} $ is therefore the sole candidate to describe the energy-momentum of the gravitational field. Thus,\n\\begin{equation}\\label{T}\nT_{\\alpha\\beta}=\\tilde{T}_{\\alpha\\beta}+\\frac{bc}{a}\\varPhi_{\\alpha\\beta}\n\\end{equation} and the interaction of the gravitational field with its energy-momentum tensor can be defined with the action\n\\begin{equation}\\label{SG}\nS^{G}:=-b\\int g_{\\alpha\\beta}\\varPhi^{\\alpha\\beta}\\sqrt{-g}d^{4}x.\n\\end{equation}\n\\par It was proved by Lovelock \\cite{8} that the only tensors in a four-dimensional spacetime which are symmetric, divergence free, and a concomitant of the metric tensor together with its first two derivatives are the metric and the Einstein tensor, $ G_{\\alpha\\beta}=R_{\\alpha\\beta}-\\frac{1}{2}g_{\\alpha\\beta}R $. $ v_{\\alpha\\beta} $ must therefore contain the Lovelock tensors. \\par $C_{\\alpha\\beta}$ is then formally decomposed as \\begin{equation}\\label{Psidecomp}\nC_{\\alpha\\beta}=\\frac{a}{c}T_{\\alpha\\beta}+b v_{\\alpha\\beta}\n\\end{equation}\nwith $\\nabla_{\\alpha}v^{\\alpha\\beta}=0$ and $v_{\\alpha\\beta}:=\nG_{\\alpha\\beta}+\\Lambda g_{\\alpha\\beta}$. $\\Lambda$ \\emph{is a global integration constant} (in hindsight identified as the cosmological constant). 
With the collection of tensors $ C_{\\alpha\\beta} $ defined to vanish, we obtain the modified Einstein equation of general relativity with cosmological constant $\\Lambda$ and the gravitational energy-momentum term $\\varPhi_{\\alpha\\beta}$ \\begin{equation}\\label{MEQ}\n-\\frac{8\\pi G}{c^{4}}\\tilde{T}_{\\alpha\\beta}+R_{\\alpha\\beta}-\\frac{1}{2}g_{\\alpha\\beta}R+{\\Lambda} g_{\\alpha\\beta}+\\varPhi_{\\alpha\\beta}=0\n\\end{equation} by setting $a=-\\frac{1}{2}$ and $b=\\frac{c^{3}}{16\\pi G}.$\n\\par Ma and Wang \\cite{22} obtained a similar result to (\\ref{MEQ}) with $ \\Lambda=0 $, but with an entirely different $ \\varPhi_{\\alpha\\beta}=\\nabla_{\\alpha}\\partial_{\\beta}\\phi $ for some scalar $ \\phi $ by using a decomposition of symmetric tensors on a Riemannian manifold. \n\\par Equation (\\ref{MEQ}) must be derived from the action functional (\\ref{S}). With (\\ref{SG}):\n\\begin{equation}\\label{Se}\nS=\\int L^{F}( A^{\\beta},\\nabla^{\\alpha} A^{\\beta},...,g^{\\alpha\\beta})\\sqrt{-g}d^{4}x\n+b \\int (R-2\\Lambda)\\sqrt{-g}d^{4}x-b\\int \\varPhi_{\\alpha\\beta} g^{\\alpha\\beta}\\sqrt{-g} d^{4}x.\n\\end{equation}\nTo calculate the variation of $ S^{G} $ with respect to the inverse metric $ g^{\\alpha\\beta} $, the following results are used: $g^{\\alpha\\beta}=g^{{+}{\\alpha\\beta}}-2u^{\\alpha}u^{\\beta} $ is the inverse of $ g_{\\alpha\\beta}$; $g^{{+}{\\alpha\\beta}} \\delta g_{\\alpha\\beta}=-g_{\\alpha\\beta}\\delta g^{{+}{\\alpha\\beta}} $; $\\delta g^{{+}{\\rho\\beta}}=-g^{{+}{\\alpha\\beta}}g^{\\lambda\\rho}\\delta g^{+}_{\\alpha\\lambda} $; $g_{\\alpha\\beta}\\delta(u^{\\alpha}u^{\\beta})=u_{\\alpha}u_{\\beta}\\delta g^{\\alpha\\beta} $; and $\\delta(u^{\\alpha}u^{\\beta})=u_{\\lambda}u^{\\beta}\\delta g^{\\lambda\\alpha} $.\nThe variation of S with respect to $g^{\\alpha\\beta}$ is then\n\\begin{multline}\n\\delta S =\\int [-\\frac{1}{2c}\\tilde{T}_{\\alpha\\beta} +b(R_{\\alpha\\beta}-\\frac{1}{2}g_{\\alpha\\beta}R)+b\\Lambda 
g_{\\alpha\\beta}+b(\\nabla_{\\alpha}X_{\\beta}\n+2u^{\\lambda}u_{\\beta}\\nabla_{\\alpha}X_{\\lambda}\\\\\n+\\nabla_{\\mu}X_\\nu(-u_{\\alpha}u_{\\beta} g^{\\mu\\nu}+u^{\\mu}u^{\\nu}g_{\\alpha\\beta})\n)]\\delta g^{\\alpha\\beta}\n\\sqrt{-g}\\enspace d^{4}x\\enspace+2\\int \\nabla_{\\alpha}(u^{\\alpha}u^{\\beta})\\delta X_{\\beta}\\sqrt{-g}d^{4}x\n\\end{multline}after calculating $ \\delta \\Gamma^{\\lambda}_{\\alpha\\beta} $ induced by the variations in the inverse metric, and integrating by parts several times.\nThe last term in the variation with respect to $ g^{\\alpha\\beta} $ vanishes, which follows by writing the tensor in brackets, $-u_{\\alpha}u_{\\beta}g^{\\mu\\nu}+u^{\\mu}u^{\\nu}g_{\\alpha\\beta}$, as its equivalent, $\\frac{1}{2}(g^{{+}{\\mu\\nu}}g_{\\alpha\\beta}-g^{+}_{\\alpha\\beta}g^{\\mu\\nu}) $; and choosing an orthonormal basis $(e_{\\alpha})$ at a point $p\\in M $ for $ g^{+} $ with $ e_{0}=u $. Then, $ u^{0}u_{0}=1 $, $ u^{i}u_{i}=0 $, $g^{+}_{\\alpha\\beta}=\\delta_{\\alpha\\beta}$, $ g^{00}=-g^{{+}00}$ and $g_{00}=-g^{+}_{00}$, with all other components of the metric $g$ equal to those of the metric $ g^{+}$. Since $ \\delta {g^{\\alpha\\beta}}$ is symmetric, the second-to-last term can be expressed as $ b\\varPhi_{\\alpha\\beta} $.\nWith $\\delta S=0$ and arbitrary variations for $\\delta g^{\\alpha\\beta}$ and $\\delta X_{\\beta} $, we have\n\\begin{equation}\n-\\frac{1}{2c}\\tilde{T}_{\\alpha\\beta}+b(R_{\\alpha\\beta}-\\frac{1}{2}g_{\\alpha\\beta}R)+b{\\Lambda} g_{\\alpha\\beta}+b\\varPhi_{\\alpha\\beta}=0\n\\end{equation} and \n\\begin{equation}\\label{nuab}\n\\nabla_{\\alpha}(u^{\\alpha}u^{\\beta})=0.\n\\end{equation}\nSetting $b=\\frac{c^{3}}{16\\pi G}$ yields the modified Einstein equation described in (\\ref{MEQ}). 
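The equivalence used above for the bracketed tensor can be checked directly from the inverse of (\\ref{LRmet}), $ g^{{+}{\\mu\\nu}}=g^{\\mu\\nu}+2u^{\\mu}u^{\\nu} $:\n\\begin{equation}\n\\frac{1}{2}(g^{{+}{\\mu\\nu}}g_{\\alpha\\beta}-g^{+}_{\\alpha\\beta}g^{\\mu\\nu})=\\frac{1}{2}[(g^{\\mu\\nu}+2u^{\\mu}u^{\\nu})g_{\\alpha\\beta}-(g_{\\alpha\\beta}+2u_{\\alpha}u_{\\beta})g^{\\mu\\nu}]=u^{\\mu}u^{\\nu}g_{\\alpha\\beta}-u_{\\alpha}u_{\\beta}g^{\\mu\\nu}.\n\\end{equation}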
\n\\subsection{Some properties of $ u_{\\mu} $ and $ \\varPhi_{\\alpha\\beta} $}\n$ X^{\\beta} $ from the line element field and its collinear vector $ u^{\\beta} $ are dynamical variables independent of the Riemannian metric in (\\ref{LRmet}). The dynamical properties of the line element fields are obtained by varying (\\ref{Se}) with respect to $ X^{\\mu} $. This yields the equation\n\\begin{equation}\\label{Xmu1}\nu^{\\alpha}\\nabla_{\\alpha}X_{\\mu}+4u^{\\lambda}u_{\\mu}u^{\\alpha}\\nabla_{\\alpha}X_{\\lambda}-3u^{\\lambda}\\nabla_{\\mu}X_{\\lambda}=\\Phi u_{\\mu}\n\\end{equation} using (\\ref{MEQ}). With $ X^{\\alpha}=fu^{\\alpha} $ where $ f\\neq0 $ is the magnitude of $ X^{\\alpha} $, the first two terms of (\\ref{Xmu1}) then form the geodesic equation in an affine parameterization and can be set to zero. Since\n\\begin{equation}\\label{key}\nX^{\\alpha}X_{\\alpha}=-f^{2},\n\\end{equation}it follows that\n\\begin{equation}\\label{Xmu}\nu_{\\mu}=\\frac{3}{\\Phi}\\partial_{\\mu}f. \n\\end{equation} \n\\par There is a myriad of regular vectors from the line element field for each Riemannian metric, and the associated Lorentzian metric is not unique. However, the variation of (\\ref{Se}) with respect to $X^{\\mu} $ restricts the line element fields to those given by (\\ref{Xmu}), which in turn restricts the Lorentzian metric. \\par $\\varPhi_{\\alpha\\beta} $ is expressed in terms of the Lie derivative of the metric and a product of unit line element covectors. Since Lie derivatives have the same form when expressed with covariant or partial derivatives, $\\varPhi_{\\alpha\\beta} $ does not vanish when the connection coefficients vanish. The metric can be locally Minkowskian, as in free fall, without affecting $ \\varPhi_{\\alpha\\beta} $. It has the structure to describe local gravitational energy-momentum. 
\n\\par Using (\\ref{nuab}), it is straightforward to calculate the coupling of the gravitational field with its energy-momentum tensor:\n\\begin{equation}\\label{0phi}\n\\int g_{\\alpha\\beta}\\varPhi^{\\alpha\\beta}\\sqrt{-g}d^{4}x=\\int \\Phi \\sqrt{-g}d^{4}x=0 \n\\end{equation} where $ \\Phi=\\nabla_{\\alpha}X_{\\beta}(g^{\\alpha\\beta}+2u^{\\alpha}u^{\\beta}) $. Equation (\\ref{0phi}) means the scalar $ \\Phi $ has local positive and negative values, all of which add to zero when integrated over the entire spacetime. $ \\Phi $ is globally conserved. Section 6 demonstrates that the positive values of $ \\Phi $ are attributed to the gravitationally repulsive properties of dark energy with the cosmological constant set to zero. The negative values represent the attractive part of the energy of the gravitational field interacting with its gravitational energy-momentum tensor. $ \\Phi $ is measurable; it can be expressed in terms of the density and pressure of total matter and the vacuum energy density as shown in section 6. The gravitational energy density is calculated from $ \\varPhi_{00} $ in section 7. The energy-momentum of the gravitational field is localizable and measurable. \n\\section{The conserved energy-momentum tensor $ T_{\\alpha\\beta} $} \nThe invariance of the action functional describing gravity, its self-energy-momentum and total matter fields under the symmetry of diffeomorphisms demands a symmetric divergenceless energy-momentum tensor\\begin{equation}\\label{Tab}\nT^{\\alpha\\beta}=\\tilde{T}^{\\alpha\\beta}-\\frac{c^{4}}{8\\pi G}\\varPhi^{\\alpha\\beta}.\n\\end{equation} This follows from an analysis of each term in the action functional $ S $ defined in (\\ref{S}). The action $ S^{EH} $ is independently invariant under a diffeomorphism. 
Variation of the action $ S^{F} $ with respect to the metric contains only $ \\tilde{T}^{\\alpha\\beta} $ because the variations of $ S^{F} $ with respect to each field and its derivatives vanish with the corresponding Euler-Lagrange equations. Variation of $ S^{G} $ with respect to the metric yields $ \\varPhi^{\\alpha\\beta} $. Therefore, we can write\n\\begin{equation}\\label{key}\n\\int (-\\frac{1}{2c}\\tilde{T}^{\\alpha\\beta}+b\\varPhi^{\\alpha\\beta})\\delta g_{\\alpha\\beta}\\sqrt{-g}d^{4}x=0\n\\end{equation} where $b=\\frac{c^{3}}{16\\pi G}$. Under a diffeomorphism, the Lie derivative of the metric along a regular vector $ Y^{\\beta} $ generates the infinitesimal change in the metric $\\delta g_{\\alpha\\beta}=\\nabla_{\\alpha}Y_{\\beta}+\\nabla_{\\beta}Y_{\\alpha}$. Integrating by parts then gives\n\\begin{equation}\\label{key}\n\\int \\nabla_{\\alpha}(-\\frac{1}{2c}\\tilde{T}^{\\alpha\\beta}+b\\varPhi^{\\alpha\\beta})Y_{\\beta}\\sqrt{-g}d^{4}x=0\n\\end{equation} which requires\n\\begin{equation}\\label{Tcons}\n\\nabla_{\\alpha} T^{\\alpha\\beta}=0 \n\\end{equation}\nfor diffeomorphisms generated by $ Y^{\\beta} $. \\par Equation (\\ref{Tcons}) is the local description of the conservation of energy and momentum in a modified theory of GR described by (\\ref{MEQ}). The gravitational field has an intrinsic energy-momentum which is attributed to $ \\varPhi_{\\alpha\\beta} $. Being independent of $ \\tilde{T}_{\\alpha\\beta} $, $\\frac{c^{4}}{8\\pi G}\\varPhi_{\\alpha\\beta} $ provides the additional self-energy-momentum of the gravitational field necessary to complete the source $ T_{\\alpha\\beta} $ of the geometry of spacetime. 
$\\varPhi_{\\alpha\\beta} $ completes the Einstein equation and leaves it intact in form:\n\\begin{equation}\\label{E}\nG_{\\alpha\\beta}+\\Lambda g_{\\alpha\\beta}=\\frac{8\\pi G}{c^{4}}T_{\\alpha\\beta}.\n\\end{equation}\n\\section{Cosmological Constant}\nThe cosmological constant $ \\Lambda $ appears alongside the metric as the simplest and most basic Lovelock tensor. With a torsionless connection, the covariant derivative of the metric vanishes. Adding the metric to the Einstein equation seems trivial with the associated constant playing the role of a global integration constant. $ \\Lambda $ can then be interpreted as a constant global energy density. That seems very restrictive as energy densities are generally dynamic and not constant.\\par The regular vector fields that exist in a Lorentzian spacetime provide a dynamical background from which the energy-momentum of the gravitational field is constructed. It is not possible for a constant global energy density to represent the dynamic interaction of the metric with the energy-momentum tensor of the gravitational field. $ \\Lambda $ must therefore be dynamically replaced by a scalar.\n\\begin{theorem}\n\tThe cosmological constant $ \\Lambda $ is dynamically replaced by the trace of $\\varPhi_{\\alpha\\beta} $. \n\\end{theorem}\n\\begin{proof}\nUsing (\\ref{0phi}), $S^{EH}$ with $ \\Lambda=0 $ can be written as\n\\begin{equation}\\label{key}\nS^{EHG}=\\frac{c^{3}}{16\\pi G} \\int (R-\\Phi) \\sqrt{-g}d^{4}x\n\\end{equation} which generates the modified Einstein equation with no cosmological constant from (\\ref{Se}). If $ \\Phi=2\\Lambda $ \\emph{locally}, the Einstein equation with the cosmological constant is obtained accordingly. 
The trace of the tensor describing the energy-momentum of the gravitational field dynamically replaces the cosmological constant but must obey the global equation (\\ref{0phi}).\\\\\n\\end{proof}\n\n\\section{Energy-momentum of the gravitational field in the FLRW metric: Dark energy}\n\\par Some properties of $\\varPhi_{\\alpha\\beta}$ in the Friedmann-Lema{\\^i}tre-Robertson-Walker metric are now investigated. The FLRW metric is typically used to describe a spatially maximally symmetric universe according to the cosmological principle \\cite{24} whereby the universe is homogeneous and isotropic when measured on a large scale. This metric is given by\n\\begin{equation}\nds^{2}=-c^{2}dt^{2}+a(t)^{2}[\\frac{1}{1-\\kappa r^{2}}dr^{2}+r^{2}(d\\theta^{2}+\\sin^{2}\\theta d\\varphi^{2})]\n\\end{equation} where $a(t)$ is the cosmological scale factor which satisfies $a>0 $ after the Big Bang at $ t=0 $. $\\kappa$ is a constant used to describe a particular spatial geometry. The connection components of the FLRW metric are\n\\begin{equation}\n\\Gamma^{i}_{j0}=\\frac{\\dot{a}}{ca}\\delta^{i}_{j},\\enspace \\Gamma^{0}_{ij}=\\frac{\\dot{a}}{ca}g_{ij},\\enspace \\Gamma^{\\mu}_{00}=0\n\\end{equation} where $i,j=1,2,3$. The Ricci tensor components are\n\\begin{equation}\nR_{00}=-3\\frac{\\ddot{a}}{ac^{2}},\\enspace R_{ij}=(\\frac{\\ddot{a}}{ac^{2}}+2\\frac{\\dot{a}^{2}}{a^{2}c^{2}}+2\\frac{\\kappa}{a^{2}})g_{ij}\n\\end{equation} and the Ricci scalar is\\begin{equation}\nR=\\frac{6}{a^{2}c^{2}}(a\\ddot{a}+\\dot{a}^{2}+\\kappa c^{2}).\n\\end{equation} \\par It was proved in \\cite{24} that a maximally spatial form invariant symmetric second-rank tensor $B_{\\alpha\\beta}$ has components in the form\n\\begin{equation}\nB_{00}=\\varrho(t),\\enspace B_{0j}=0,\\enspace B_{ij}=p(t)g_{ij}\n\\end{equation} where $\\varrho(t)$ and $p(t)$ are arbitrary functions of time. 
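As an independent cross-check of the quoted curvature components, the following sketch (not part of the derivation; it requires sympy and sets $ c=1 $, so that the factors of $ c^{2} $ above follow from $ x^{0}=ct $) recomputes the FLRW Christoffel symbols, Ricci tensor, and Ricci scalar:

```python
# Illustrative sympy verification (with c = 1) of the FLRW Christoffel,
# Ricci tensor, and Ricci scalar components quoted above.
import sympy as sp

t, r, th, ph, kappa = sp.symbols('t r theta phi kappa')
a = sp.Function('a')(t)
x = [t, r, th, ph]
g = sp.diag(-1, a**2/(1 - kappa*r**2), a**2*r**2, a**2*r**2*sp.sin(th)**2)
ginv = g.inv()

# Christoffel symbols Gamma^l_{ij} of the metric-compatible connection
Gam = [[[sp.simplify(sum(ginv[l, s]*(sp.diff(g[s, i], x[j])
         + sp.diff(g[s, j], x[i]) - sp.diff(g[i, j], x[s]))
         for s in range(4))/2) for j in range(4)] for i in range(4)]
       for l in range(4)]

def ricci(i, j):
    """R_ij = d_l G^l_ij - d_j G^l_il + G^l_lm G^m_ij - G^l_jm G^m_il."""
    return sp.simplify(sum(sp.diff(Gam[l][i][j], x[l]) - sp.diff(Gam[l][i][l], x[j])
                           + sum(Gam[l][l][m]*Gam[m][i][j] - Gam[l][j][m]*Gam[m][i][l]
                                 for m in range(4)) for l in range(4)))

R00, R11 = ricci(0, 0), ricci(1, 1)
Rscalar = sp.simplify(sum(ginv[i, i]*ricci(i, i) for i in range(4)))
```

With $ c=1 $ the expected results are $ R_{00}=-3\\ddot{a}\/a $, $ R_{ij}=(\\ddot{a}\/a+2\\dot{a}^{2}\/a^{2}+2\\kappa\/a^{2})g_{ij} $ and $ R=6(a\\ddot{a}+\\dot{a}^{2}+\\kappa)\/a^{2} $, matching the text.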
We therefore set\n\\begin{equation}\n\\tilde{T}_{00}=c^{2}\\varrho,\\enspace \\tilde{T}_{ij}=pg_{ij},\\enspace \\tilde{T}^{\\mu}_{\\mu}=-c^{2}\\varrho+3p\n\\end{equation} where $\\varrho(t)$ and $p(t)$ are designated as the mass density and pressure functions, respectively, of total matter including dark matter, if dark matter particles exist. Similarly,\n\\begin{equation}\n\\varPhi_{00}=\\Lambda_{d},\\enspace \\varPhi_{ij}=\\frac{P_{d}}{c^{2}}g_{ij},\\enspace \\varPhi_{\\mu}^{\\mu}=-\\Lambda_{d}+3\\frac{P_{d}}{c^{2}}\n\\end{equation} where $\\Lambda_{d}(t)$ and $P_{d}(t)$ refer to the energy density and pressure, respectively, of the tensor describing the energy-momentum of the gravitational field. \t\\par To obtain the Friedmann equations, we use the trace of the modified Einstein equation\n\\begin{equation}\n-\\frac{8\\pi G}{c^{4}}\\tilde{T}-R+\\Phi=0\n\\end{equation}\nto rewrite the modified Einstein equation as\n\\begin{equation}\nR_{\\alpha\\beta}=\\frac{8\\pi G}{c^{4}} (\\tilde{T}_{\\alpha\\beta}-\\frac{1}{2}g_{\\alpha\\beta}\\tilde{T})+\\frac{1}{2}g_{\\alpha\\beta}\\Phi -\\varPhi_{\\alpha\\beta}\n\\end{equation} from which we obtain\n\\begin{equation}\\label{frw1}\n3\\frac{\\ddot{a}}{a}=-4\\pi G(\\varrho+\\frac{3p}{c^{2}})+\\frac{1}{2}c^{2}\\Lambda_{d}+\\frac{3}{2}P_{d}\n\\end{equation} from the $R_{00}$ component. 
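The $R_{00}$ step is elementary algebra and can be checked mechanically. A minimal sketch (ours; the symbol names are illustrative, with $ g_{00}=-1 $ for $ x^{0}=ct $) verifying that the $ R_{00} $ component of the rewritten equation is equivalent to (\\ref{frw1}):

```python
# Illustrative algebraic check that the R_00 component of the rewritten
# field equation reproduces the first modified Friedmann equation (frw1).
import sympy as sp

G, c, rho, p, Ld, Pd, a, add = sp.symbols('G c rho p Lambda_d P_d a addot')
Tt00, Ttr = c**2*rho, -c**2*rho + 3*p        # T~_00 and the trace T~
Phi00, Phitr = Ld, -Ld + 3*Pd/c**2           # Phi_00 and the scalar Phi
g00 = -1                                     # x^0 = ct

# rewritten equation: R_00 = (8 pi G/c^4)(T~_00 - g_00 T~/2) + g_00 Phi/2 - Phi_00
rhs = 8*sp.pi*G/c**4*(Tt00 - sp.Rational(1, 2)*g00*Ttr) \
      + sp.Rational(1, 2)*g00*Phitr - Phi00
lhs = -3*add/(a*c**2)                        # R_00 in the FLRW metric

# residual (lhs - rhs) of (frw1): 3 addot/a = -4 pi G (rho + 3p/c^2) + c^2 Ld/2 + 3 Pd/2
r1 = 3*add/a - (-4*sp.pi*G*(rho + 3*p/c**2) + c**2*Ld/2 + sp.Rational(3, 2)*Pd)
```

Multiplying the component equation by $ -c^{2} $ should reproduce the residual of (\\ref{frw1}) exactly.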
The $R_{11}$ component gives\n\\begin{equation}\\label{frw2}\n\\frac{\\ddot{a}}{a}+2\\frac{\\dot{a}^{2}}{a^{2}}+\\frac{2\\kappa c^{2}}{a^{2}}=\\frac{c^{2}}{2}(-\\Lambda_{d}+\\frac{P_{d}}{c^{2}})+4\\pi G(\\varrho-\\frac{p}{c^{2}})\n\\end{equation} and the conservation law for $T^{\\alpha\\beta}$ yields \n\\begin{equation}\\label{frw3}\n\\dot{\\varrho}-\\frac{c^{2}}{8\\pi G}\\dot{\\Lambda_{d}}=-3\\frac{\\dot{a}}{a}(\\varrho+\\frac{p}{c^{2}}-\\frac{c^{2}}{8\\pi G}(\\Lambda_{d}+\\frac{P_{d}}{c^{2}})).\n\\end{equation}\nInserting (\\ref{frw1}) into (\\ref{frw2}) produces a simpler equation\n\\begin{equation}\\label{frw4}\n\\frac{\\dot{a}^{2}}{a^{2}}+\\frac{\\kappa c^{2}}{a^{2}}=\\frac{8\\pi G}{3} \\varrho-\\frac{1}{3}c^{2}\\Lambda_{d}. \n\\end{equation} Equations (\\ref{frw1}) and (\\ref{frw4}) are the Friedmann equations modified with $\\varPhi_{\\alpha\\beta}$. \\par From (\\ref{frw1}), we immediately see that $\\Phi+2\\Lambda_{d}=\\Lambda_{d}+\\frac{3}{c^{2}}P_{d}>0$ tends to accelerate the universe; while all types of matter combined, with a positive mass density and pressure, tend to decelerate the universe. $\\Phi>-2\\Lambda_{d} $ is a gravitationally repulsive condition which relates dark energy to $\\Lambda_{d} $. Hence, $\\Lambda_{d}$ is called the dark energy density and $P_{d}$ the dark energy pressure. $ \\Phi $ tends to accelerate or decelerate the universe but has a net zero effect on it. $\\varPhi_{\\alpha\\beta}$ and therefore $ \\Phi $, provide the flexibility to describe various eras in the evolution of the universe. The cosmological constant $\\Lambda$, on the other hand, can be expressed as a fixed negative energy density which would have tended to accelerate the universe during \\emph{all} epochs. \\par One of the recent challenges in cosmology has been to find a natural mechanism that describes a small but positive vacuum energy density to explain the observed acceleration of the present universe. 
Dark energy provides a natural explanation of this challenge without the need for a cosmological constant.\\par After Hubble's discovery in 1929 \\cite{25} that the universe was expanding, $\\Lambda$ was no longer required, since a static solution to the Einstein equations with a positive mass density was no longer sought. Since the cosmological constant was vastly smaller than any value predicted by particle theory, most particle theorists simply assumed that, for some unknown reason, this quantity was zero \\cite{26}. This was widely believed to be true until the discovery of the presently accelerating universe in 1998--99 \\cite{27,28}. $\\Lambda$ was then considered to be associated with the dark energy conundrum. However, it is just a global integration constant in the modified Einstein equation and is replaced by $ \\Phi $ as proved in theorem 5.1. This is readily verified by restricting the dark energy variables to the constant values $\\Lambda_{d}=-\\Lambda$ and $P_{d}=c^{2}\\Lambda$ in (\\ref{frw1}) and (\\ref{frw4}). The Friedmann equations with the cosmological constant $\\Lambda$ are then recovered in accordance with theorem 5.1. \\par The Friedmann equations are now considered with $\\kappa=1$ describing a closed universe:\n\\begin{equation}\\label{frw5}\n\\dot{a}^{2}=\\frac{8\\pi G}{3} \\varrho a^{2}-\\frac{c^{2}}{3}\\Lambda_{d}a^{2} -c^{2}\n\\end{equation} and\n\\begin{equation}\\label{frw6}\n\\ddot{a}=-\\frac{4\\pi G}{3}a(\\varrho+\\frac{3p}{c^{2}})+\\frac{ac^{2}}{6}(\\Lambda_{d}+\\frac{3}{c^{2}}P_{d}).\n\\end{equation} To avoid confusion with $\\Lambda$, we will denote the constant vacuum energy density as $\\Lambda_{v}$ with the property $ \\Lambda_{v}>0 $. In the present epoch, $ \\Lambda_{v}$ is measured to be $ \\approx1.1\\times10^{-52}\\,\\mathrm{m}^{-2} $. 
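The recovery of the standard $\\Lambda$-Friedmann equations noted above is a one-line substitution; a sketch (ours, with illustrative symbol names):

```python
# Illustrative check: freezing the dark energy variables at Lambda_d = -Lambda
# and P_d = c^2*Lambda turns the right-hand sides of (frw1) and (frw4) into
# those of the standard Friedmann equations with cosmological constant Lambda.
import sympy as sp

G, c, rho, p, Lam = sp.symbols('G c rho p Lambda')
Ld, Pd = -Lam, c**2*Lam                      # restricted dark energy variables

frw1_rhs = -4*sp.pi*G*(rho + 3*p/c**2) + c**2*Ld/2 + sp.Rational(3, 2)*Pd
frw4_rhs = sp.Rational(8, 3)*sp.pi*G*rho - c**2*Ld/3
```

The expected right-hand sides are $ -4\\pi G(\\varrho+3p\/c^{2})+\\Lambda c^{2} $ and $ \\frac{8\\pi G}{3}\\varrho+\\frac{\\Lambda c^{2}}{3} $, the standard forms.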
By defining\n\\begin{equation}\\label{frw7}\n\\tilde{\\varrho}=8\\pi G\\varrho-c^{2}\\Lambda_{d}\n\\end{equation} and\n\\begin{equation}\\label{frw8}\n\\tilde{p}=-\\frac{4\\pi G}{c^{2}}p+\\frac{1}{2}P_{d},\n\\end{equation} these equations can be simplified to\n\\begin{equation}\\label{frw9}\n\\dot{a}^{2}=\\frac{\\tilde{\\varrho}a^{2}}{3}-c^{2}, \n\\end{equation} and\n\\begin{equation}\\label{frw10}\n\\ddot{a}=a(-\\frac{\\tilde{\\varrho}}{6}+\\tilde{p})\n\\end{equation}\nwith the conservation equation\n\\begin{equation}\\label{frw11}\n\\dot{\\tilde{\\varrho}}=-3\\frac{\\dot{a}}{a}(\\tilde{\\varrho}-2\\tilde{p}).\n\\end{equation} Unless otherwise stated, $\\varrho>0$ and $ p>0 $. Equation (\\ref{frw9}) requires $\\tilde{\\varrho}>0$. \\par It is interesting to explore how the energy-momentum of the gravitational field can describe critical features of a Big Bang universe. Immediately after the event of the Big Bang, the universe violently accelerates and $ \\dot{a}>0 $. For a very short time, there is no matter; $\\varrho=0$ and $p=0 $. In this very early stage of the evolution of the universe, it is possible that the constant vacuum energy density developed. If we set $\\varrho=0$ in (\\ref{frw5}), the inequality\n\\begin{equation}\\label{Ld}\n\\Lambda_{d}<-\\frac{3}{a^{2}}\n\\end{equation}\nmust hold. From (\\ref{frw3}) and (\\ref{frw6}) with $ \\dot{a}\\neq0 $,\n\\begin{equation}\\label{aconst}\n\\frac{d}{da}\\Lambda_{d}=-\\frac{2\\Lambda_{d}}{a}-\\frac{6\\ddot{a}}{a^{2}c^{2}}.\n\\end{equation} If $\t\\Lambda_{d}\\longrightarrow-\\Lambda_{v}$ and $ \\frac{P_{d}}{c^{2}}\\longrightarrow\\Lambda_{v} $ just after the Big Bang, (\\ref{frw6}) requires $ \\frac{\\ddot{a}}{a} $ to be constant. With those assumptions, equation (\\ref{aconst}) has the solution\n\\begin{equation}\\label{Lamd}\n\\Lambda_{d}=\\frac{c_{1}}{a^{2}}-\\frac{3\\ddot{a}}{ac^{2}}\n\\end{equation} where $ c_{1} $ is an arbitrary constant. 
Setting $c_{1}=-3$ and $\\Lambda_{v}=\\frac{3\\ddot{a}}{ac^{2}} $,\n\\begin{equation}\\label{frw13}\n\\Lambda_{d}=-\\frac{3}{{a}^{2}}-\\Lambda_{v}\n\\end{equation} which satisfies (\\ref{Ld}) and tends to $-\\Lambda_{v}$ as the universe expands. Dark energy can generate $\\Lambda_{v}$ during this epoch of the universe. The expansion of the universe is then described by \\begin{equation}\\label{vace}\n{\\dot{a}}^{2}=\\frac{1}{3}a^{2}c^{2}\\Lambda_{v}.\n\\end{equation} \nThe pressure density of dark energy is $P_{d}=\\frac{c^{2}}{a^{2}}+\\Lambda_{v}c^{2}$ and the acceleration of the universe is\n\\begin{equation}\\label{vaca}\n\\ddot{a}=\\frac{ac^{2}\\Lambda_{v}}{3}.\n\\end{equation}\t \t \nThe scalar $ \\Phi=\\frac{6}{a^{2}}+4\\Lambda_{v} $ is positive. $ \\Phi>0 $ is the condition to be satisfied for an expanding and accelerating universe when no matter is present. Because this result depends entirely on dark energy, $ \\Phi>0 $ defines dark energy.\n\\par With all types of matter appearing after the initial inflation, $\\Lambda_{d}$ must obey the constraint \\begin{equation}\\label{key}\n\\Lambda_{d}<-\\frac{3}{a^{2}}+\\frac{8\\pi G\\varrho}{c^{2}}.\n\\end{equation}With constant total matter, the equation\n\\begin{equation}\\label{rhoconst}\n\\frac{d}{da}\\Lambda_{d}=-\\frac{2\\Lambda_{d}}{a}+\\frac{16\\pi G\\varrho }{ac^{2}}-\\frac{6\\ddot{a}}{a^{2}c^{2}}\n\\end{equation}is obtained from (\\ref{frw3}) and (\\ref{frw6}) with $ \\dot{a}\\neq0 $. Since $ \\frac{\\ddot{a}}{a}=\\frac{d}{dt}(\\frac{\\dot{a}}{a})+(\\frac{\\dot{a}}{a})^{2} $, a slowly varying non-zero Hubble parameter $ \\frac{\\dot{a}}{a} $ requires $ \\frac{\\ddot{a}}{a} $ to be approximately constant. 
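As a parenthetical verification of the vacuum-era results above, the following sketch (ours; $ k $ and $ c_{1} $ are illustrative names, with $ k:=\\ddot{a}\/a $ held constant) checks that (\\ref{Lamd}) solves (\\ref{aconst}), and that (\\ref{frw13}) yields the expansion law (\\ref{vace}) and $ \\Phi=\\frac{6}{a^{2}}+4\\Lambda_{v} $:

```python
# Illustrative checks of the vacuum-era dark energy solution: the ODE
# (aconst), the expansion law (vace), and the scalar Phi.
import sympy as sp

a = sp.symbols('a', positive=True)
c, c1, k = sp.symbols('c c_1 k', positive=True)   # k := addot/a, held constant
addot = k*a
Ld = c1/a**2 - 3*addot/(a*c**2)                   # trial solution (Lamd)
ode = sp.diff(Ld, a) + 2*Ld/a + 6*addot/(a**2*c**2)   # residual of (aconst)

Lv = 3*k/c**2                                     # vacuum energy density (Lv)
Ld_vac = Ld.subs(c1, -3)                          # equation (frw13)
adot2 = -c**2/3*Ld_vac*a**2 - c**2                # (frw5) with rho = 0
Pd = c**2/a**2 + Lv*c**2                          # dark energy pressure
Phi = -Ld_vac + 3*Pd/c**2                         # trace of varPhi
```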
With that assumption, equation (\\ref{rhoconst}) has the solution\n\\begin{equation}\\label{Ldrho}\n\\Lambda_{d}=-\\frac{3}{a^{2}}+\\frac{8\\pi G\\varrho}{c^{2}}-\\Lambda_{v}\n\\end{equation} with \n\\begin{equation}\\label{Lv}\n\\Lambda_{v}=\\frac{3\\ddot{a}}{ac^{2}}.\n\\end{equation}The dark energy pressure is\n\\begin{equation}\\label{pd}\nP_{d}=\\frac{c^{2}}{a^{2}}+\\frac{8\\pi Gp}{c^{2}}+c^{2}\\Lambda_{v}.\n\\end{equation}\nA pure dark energy effect returns (\\ref{vace}) and (\\ref{vaca}) as the expansion and acceleration, respectively. In a universe with essentially constant matter, which is assumed to be the case of the present era, this demonstrates why $\\Lambda_{v}$ is important. As expected, $ \\Phi=\\frac{6}{a^{2}}+4\\Lambda_{v}+\\frac{8\\pi G}{c^{2}}(-\\varrho+\\frac{3p}{c^{2}}) $ is positive or negative. \n\\par Riess et al. \\cite{29} used the Hubble telescope ``to provide the first conclusive evidence for cosmic deceleration that preceded the current epoch of cosmic acceleration''. Given the violent acceleration after the Big Bang, this observation points to the cyclic nature of the universe up to the present time. The cosmological scale factor must have had maximum and minimum values in the past because of the observed changes in sign of its second derivative; there were extrema at $ \\dot{a}=0 $. In general, this requires $ \\Lambda_{d}=-\\frac{3}{a^{2}}+\\frac{8\\pi G\\varrho}{c^{2}} $ from equation (\\ref{frw5}). At such an extremum the Hubble parameter vanishes, and (\\ref{Ldrho}) must change because (\\ref{Lv}) is no longer constant there. Dark energy in the amount of $ \\Lambda_{v} $ must be transferred to $ \\Lambda_{d} $ from the dark energy pressure; $ \\frac{P_{d}}{c^{2}} $ in (\\ref{pd}) decreases by $ \\Lambda_{v} $ with an offsetting change by that amount to $ \\Lambda_{d} $ in (\\ref{Ldrho}). This allows an extremum to occur while keeping $ \\Phi $ unchanged. 
Then, cosmic acceleration can change to a decelerating epoch, and conversely with the opposite exchange of dark energy. \\par The maxima or minima of the cosmological scale factor follow directly from equations (\\ref{frw9}) and (\\ref{frw10}). The second derivative of $ a $ must satisfy \n\\begin{equation}\\label{secdera}\n\\ddot{a}=a(-\\frac{c^2}{2a^{2}}+\\tilde{p})\n\\end{equation} when $\\dot{a}=0 $. The value of $ \\tilde{p} $ in equation (\\ref{secdera}) governs the condition for a maximum or minimum of $ a $. With $ -\\Lambda_{d} $ having a small fixed value of $\\Lambda_{v} $ determined early in the evolution of the universe, the variation in $ \\Phi $ is determined by $ P_{d} $. The constraint (\\ref{0phi}) on $ \\Phi $ can force $ P_{d} $ to change, which can change the sign of $ \\tilde{p} $. Near the end of an acceleration phase, if the dark energy pressure decreases so that $P_{d}\\leq\\frac{8\\pi Gp}{c^{2}}$, $ \\tilde{p} $ changes from positive to zero or negative, and the scale factor has a maximum value at $a_{max}$; $ \\tilde{p}\\leq0 $ is satisfied in (\\ref{secdera}). The acceleration phase ends and the universe undergoes a deceleration. The scale factor then decreases toward a minimum value $a_{min}$ at which the dark energy pressure increases enough to satisfy $\\tilde{p}>\\frac{c^2}{2a^{2}}$. The deceleration phase changes to that of an acceleration and the cyclic process continues indefinitely. $ \\Phi $, governed by (\\ref{0phi}), smoothly controls the maximum and minimum values that the cosmological scale factor can have. The global constraint on $ \\Phi $ keeps the universe gravitationally in balance. This model of the universe starts with the Big Bang and then cycles to eternity. It does not suffer the catastrophes of the Big Crunch or the Big Rip. \n\n\\par Although recent data and analysis \\cite{30} suggest the observable universe is flat, the data likely represents a small fraction of the presently unknown \\emph{entire} universe. 
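The extremum condition (\\ref{secdera}) quoted above follows from two substitutions; a minimal sketch (ours, with illustrative symbol names):

```python
# Illustrative check of (secdera): at an extremum of the scale factor
# (adot = 0), (frw9) fixes rho~ = 3c^2/a^2, and substituting this into
# (frw10) gives addot = a(-c^2/(2a^2) + p~).
import sympy as sp

a, c, ptilde, rhotilde = sp.symbols('a c ptilde rhotilde', positive=True)

rho_ext = sp.solve(sp.Eq(0, rhotilde*a**2/3 - c**2), rhotilde)[0]  # (frw9), adot = 0
addot_ext = a*(-rho_ext/6 + ptilde)                                # (frw10)
```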
If the entire universe has a positive curvature, a measurement of it will appear to be nearly flat if data from large enough distances is not available. Therefore, at this time, the conjecture of a flat universe which expands forever, based on observational evidence, is less likely than the cyclic universe described here and observed from the Big Bang into the present epoch.\\par Dark energy thus provides a natural explanation of why the vacuum energy density is minute, and why it dominates the present epoch of the universe.\n\\section{Energy-momentum of the gravitational field: Dark matter}\nThe $\\Lambda$CDM model describes the formation of galaxies after the Big Bang from cooled baryonic matter gravitationally attracted into a dark matter skeleton. Dark matter in the $\\Lambda$CDM model also provides the additional mass required to describe the flat rotation curves observed in many galaxies. However, no dark matter particles have been detected and there have been several attempts to explain the flat rotation curves without dark matter. \n\\par The leading candidate is a \\emph{phenomenological} model of Modified Newtonian dynamics (MOND) introduced by Milgrom \\cite{31}. The Newtonian force $F$ is modified according to\n\\begin{equation}\\label{key}\nF=m\\mu(\\frac{a}{A_{0}})a\n\\end{equation} where $ A_{0} $ is a fundamental acceleration $ \\approx1.2\\times10^{-10}m\/s^{2} $. $ \\mu $ is a function of the ratio of the acceleration to $ A_{0} $, which tends\nto one for $ a\\gg A_{0} $ and tends to $ \\frac{a}{A_{0}} $ for $ a\\ll A_{0} $. MOND successfully explains many, but not all, mass discrepancies observed in galactic data. However, it has no covariant roots in Einstein's equation or cosmological theory. MOND and $\\Lambda$CDM were thoroughly discussed by McGaugh in \\cite{32}.\\par Other alternatives to dark matter were reviewed by Mannheim in \\cite{33} with references therein. 
In particular, Moffat \\cite{34} used a nonsymmetric gravitational theory without dark matter to obtain the flat rotation curves of some galaxies. The bimetric theory of Milgrom \\cite{35} involved two metrics as independent degrees of freedom to obtain a relativistic formulation of MOND.\\par Different approaches to the missing matter problem include dipolar dark matter, which was introduced by Bernard, Blanchet and Heisenberg in \\cite{36} to solve the problems of cold dark matter at galactic scales and reproduce the phenomenology of MOND. The theory involves two different species of dark matter particles which are separately coupled to the two metrics of bigravity and are linked together by an internal vector field. In \\cite{37}, Verlinde introduced a theory of emergent gravity (EG), which claims a possible breakdown of general relativity and provides an explanation for Milgrom's phenomenological fitting formula in reproducing the flattening of rotation curves. Campigotto, Diaferio and Fatibene \\cite{38} showed that conformal gravity cannot describe galactic rotation curves without the aid of dark matter. On the other hand, a logical analysis based on observational data was presented by Kroupa in \\cite{39} to support the conjecture that dark matter does not exist.\\par The existence of dark matter is based on the assumption that general relativity is correct. However, Einstein's equation is incomplete without the tensor $ \\varPhi_{\\alpha\\beta} $ describing the energy-momentum of the gravitational field. The validity of modified general relativity is now tested with the attempt to describe the additional gravitational attraction in various galaxies without dark matter.\n\\subsection{Modified GR in a spheroidal spacetime}\nIt is assumed that dark matter does not exist and that baryonic matter and other possible sources of matter, such as neutrinos, produce the gravitational field. 
In a region of spacetime where there is no matter, $ \\tilde{T}_{\\alpha\\beta}=0 $ and the field equations must satisfy\n\\begin{equation}\\label{EF}\nG_{\\alpha\\beta}+\\varPhi_{\\alpha\\beta}=0.\n\\end{equation} Spheroidal solutions to these nonlinear equations are now investigated. The spheroidal behaviour of the metric is to be determined from a particular solution to (\\ref{EF}) in a spacetime described by a metric of the form\n\\begin{equation}\\label{g}\nds^{2}=-e^\\nu c^{2}dt^{2}+e^{\\lambda}dr^{2}+r^{2}(d\\theta^{2}+\\sin^{2}\\theta d\\varphi^{2})\n\\end{equation} where $ \\nu $ and $ \\lambda $ are functions of $ t $, $ r $ and $ \\theta $. The non-zero connection coefficients (Christoffel symbols) are:\n\\begin{flalign*}\n&\\Gamma^{0}_{00}=\\frac{1}{2}\\partial_{0}\\nu,\\enspace\\Gamma^{0}_{01}=\\frac{1}{2}\\partial_{1}\\nu,\\enspace\\Gamma^{0}_{02}=\\frac{1}{2}\\partial_{2}\\nu,\\enspace\n\\Gamma^{0}_{11}=\\frac{1}{2}\\partial_{0}\\lambda e^{\\lambda-\\nu},\\enspace\\Gamma^{1}_{00}=\\frac{1}{2}\\partial_{1}\\nu e^{\\nu-\\lambda},\\enspace\\Gamma^{1}_{01}=\\frac{1}{2}\\partial_{0}\\lambda,&\\\\ \n&\\Gamma^{1}_{11}=\\frac{1}{2}\\partial_{1}\\lambda,\\enspace \\Gamma^{1}_{12}=\\frac{1}{2}\\partial_{2}\\lambda,\\enspace\\Gamma^{1}_{22}=-re^{-\\lambda},\\enspace \\Gamma^{1}_{33}=-r\\sin^{2}\\theta e^{-\\lambda},\\enspace\\Gamma^{2}_{00}=\\frac{1}{2r^{2}}\\partial_{2}\\nu e^{\\nu},&\\\\\n&\\Gamma^{2}_{11}=-\\frac{1}{2r^{2}}\\partial_{2}\\lambda e^{\\lambda},\\enspace\\Gamma^{2}_{12}=\\frac{1}{r},\\enspace\n\\Gamma^{2}_{33}=-\\sin\\theta\\cos\\theta,\\enspace\n\\Gamma^{3}_{13}=\\frac{1}{r},\\enspace\\Gamma^{3}_{23}=\\cot\\theta.&\n\\end{flalign*}\nThe unit vectors $ u^{\\alpha} $ satisfy \n\\begin{equation}\\label{uu1}\nu^{\\alpha}u_{\\alpha}=-1.\n\\end{equation} As a first step in understanding this highly nonlinear set of equations given by (\\ref{EF}) with the property (\\ref{uu1}) in this metric, $ u_{3} $ is chosen to vanish. 
This requires\n\\begin{equation}\\label{X3}\nX_{3}=0 \n\\end{equation} \nbecause $ u_{\\alpha} $ is collinear with $ X_{\\alpha}$. All other components of $X_{\\alpha} $ are non-zero. \\par Static solutions to (\\ref{EF}) are sought which require the components of the line element field to satisfy \n\\begin{equation}\\label{xcomp}\n\\partial_{0}X_{\\alpha}=0\n\\end{equation} and from the metric,\n\\begin{equation}\\label{dlambdanu}\n\\partial_{0}\\lambda=0,\\enspace \\partial_{0}\\nu=0.\n\\end{equation}\\par The components of $ \\varPhi_{\\alpha\\beta} $ to be considered are then: \n\\begin{equation}\\label{key}\n\\varPhi_{00}=(1+2u_{0}u^{0})(-\\frac{1}{2}e^{\\nu-\\lambda}\\nu^{\\prime} X_{1}-\\frac{1}{2r^{2}}e^{\\nu}\\partial_{2}\\nu X_{2}),\n\\end{equation}\n\\begin{equation}\\label{key}\n\\varPhi_{11}=(1+2u_{1}u^{1} )({X_{1}}^{\\prime}-\\frac{1}{2}\\lambda^{\\prime}X_{1}+\\frac{1}{2r^{2}}e^{\\lambda}\\partial_{2}\\lambda X_{2}),\n\\end{equation} \n\\begin{equation}\\label{key}\n\\varPhi_{22}=(1+2u_{2}u^{2} )(\\partial_{2}X_{2}+re^{-\\lambda} X_{1}), \n\\end{equation} \n\\begin{equation}\\label{key}\n\\varPhi_{33}=r \\sin^{2}\\theta e^{-\\lambda}X_{1}+\\sin\\theta \\cos\\theta X_{2},\n\\end{equation} the Ricci scalar, which from (\\ref{EF}) equals $ \\Phi $, is\n\\begin{equation}\\label{R}\n\\begin{split}\nR=e^{-\\lambda}(-\\nu^{\\prime\\prime}-\\frac{1}{2}{\\nu^{\\prime}}^{2}+\\frac{1}{2}\\lambda^{\\prime}\\nu^{\\prime}-\\frac{2}{r}\\nu^{\\prime}+\\frac{2}{r}\\lambda^{\\prime}-\\frac{2}{r^{2}})+\\frac{1}{r^{2}}(-\\frac{1}{2}{\\partial_{2}\\nu}^{2}-\\frac{1}{2}{\\partial_{2}\\lambda}^{2}-\\frac{1}{2}\\partial_{2}\\nu\\partial_{2}\\lambda\\\\-\\partial_{2}\\partial_{2}\\nu-\\partial_{2}\\partial_{2}\\lambda-\\cot\\theta(\\partial_{2}\\nu+\\partial_{2}\\lambda)+2),\n\\end{split}\n\\end{equation}\nand the corresponding components of the Einstein tensor 
are:\n\\begin{equation}\\label{key}\nG_{00}=\\frac{1}{r^{2}}e^{\\nu-\\lambda}(r\\lambda^{\\prime}-1)+\\frac{e^{\\nu}}{2r^{2}}(-\\partial_{2}\\partial_{2}\\lambda-\\cot\\theta\\partial_{2}\\lambda-\\frac{1}{2}(\\partial_{2}\\lambda)^{2}+2),\n\\end{equation}\n\\begin{equation}\\label{key}\n\\begin{split}\nG_{11}=\\frac{1}{r^{2}}(1+r\\nu^{\\prime})+\\frac{e^{\\lambda}}{2r^{2}}[\\partial_{2}\\partial_{2}\\nu+\\frac{1}{2}({\\partial_{2}\\nu})^{2}+\\cot\\theta \\partial_{2}\\nu-2],\n\\end{split}\n\\end{equation}\n\\begin{equation}\\label{key}\n\\begin{split}\nG_{22}=\\frac{r^{2}e^{-\\lambda}}{2}[\\nu^{\\prime\\prime}+\\frac{1}{2}{\\nu^{\\prime}}^{2}+\\frac{\\nu^{\\prime}}{r}-\\frac{\\lambda^{\\prime}}{r}-\\frac{\\lambda^{\\prime}\\nu^{\\prime}}{2}]+\\frac{1}{4}{\\partial_{2}\\nu}\\partial_{2}\\lambda+\\frac{1}{2}\\cot\\theta(\\partial_{2}\\nu+\\partial_{2}\\lambda),\n\\end{split}\n\\end{equation}\n\\begin{equation}\\label{key}\n\\begin{split}\nG_{33}=\\sin^{2}\\theta[\\frac{r^{2}e^{-\\lambda}}{2}(\\nu^{\\prime \\prime}+\\frac{1}{2}{\\nu^{\\prime}}^{2}+\\frac{\\nu^{\\prime}}{r}-\\frac{\\lambda^{\\prime}}{r}-\\frac{\\lambda^{\\prime}\\nu^{\\prime}}{2})+\\frac{1}{4}({\\partial_{2}\\nu})^2+\\frac{1}{4}({\\partial_{2}\\lambda})^2+\\frac{1}{4}\\partial_{2}\\nu\\partial_{2}\\lambda+\\frac{1}{2}{\\partial_{2}\\partial_{2}\\nu}+\\frac{1}{2}{\\partial_{2}\\partial_{2}\\lambda}]\n\\end{split}\n\\end{equation} where the prime denotes $ \\partial_{1} $. \\par These equations are greatly simplified by setting\n\\begin{equation}\\label{lambdanu}\n\\nu=-\\lambda\n\\end{equation}which provides a potential correspondence to a Schwarzschild-like solution.\nThus, a class of static spheroidal solutions to (\\ref{EF}) is sought with the restrictions (\\ref{X3}), (\\ref{xcomp}), (\\ref{dlambdanu}) and (\\ref{lambdanu}). 
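For orientation, the choice (\\ref{lambdanu}) parallels the static Schwarzschild solution of GR, for which (a standard result, quoted here only for comparison)
\\begin{equation*}
e^{\\nu}=e^{-\\lambda}=1-\\frac{2GM}{c^{2}r},
\\end{equation*}
so the linear and logarithmic terms in the ansatz sought below can be read as generalizations of this profile.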
\\par Since $ e^{\\lambda-\\nu}(\\varPhi_{00}+G_{00})+\\varPhi_{11}+G_{11}=0 $ from (\\ref{EF}),\n\\begin{equation}\\label{phiG0011}\n\\begin{split}\n(\\frac{\\lambda^{\\prime}}{2}X_{1}+\\frac{e^{\\lambda}}{2r^{2}}\\partial_{2}\\lambda X_{2})(1+2u_{0}u^{0})+(X_{1}^{\\prime}-\\frac{\\lambda^{\\prime}}{2}X_{1}+\\frac{e^{\\lambda}}{2r^{2}}\\partial_{2}\\lambda X_{2})(1+2u_{1}u^{1})\\\\+\\frac{e^{\\lambda}}{r^{2}}(-\\partial_{2}\\partial_{2}\\lambda-\\cot\\theta\\partial_{2}\\lambda)=0.\n\\end{split}\n\\end{equation} \n\\par From (\\ref{EF}) $G_{22}+\\varPhi_{22}=0 $ gives \\begin{equation}\\label{phiG22}\n\\begin{split}\n-\\lambda^{\\prime\\prime}+\\lambda^{\\prime 2}-\\frac{2}{r}\\lambda^{\\prime}+\\frac{2e^{\\lambda}}{r^{2}}(-\\frac{1}{4}({\\partial_{2}\\lambda})^{2}+(\\partial_{2}X_{2}+re^{-\\lambda}X_{1})(1+2u_{2}u^{2}))=0\n\\end{split}\n\\end{equation}\n\nand $G_{33}+\\varPhi_{33}=0 $ in the interval $ 0<\\theta<\\pi $ yields\n\\begin{equation}\\label{phiG33}\n\\begin{split}\n-\\lambda^{\\prime\\prime}+\\lambda^{\\prime 2}-\\frac{2}{r}\\lambda^{\\prime}+\\frac{2e^{\\lambda}}{r^{2}}(re^{-\\lambda}X_{1}+\\frac{1}{4}({\\partial_{2}\\lambda})^{2}+X_{2}\\cot\\theta)=0.\n\\end{split}\n\\end{equation} Subtracting (\\ref{phiG22}) from (\\ref{phiG33}) yields\n\\begin{equation}\\label{C2}\n\\cot\\theta X_{2}-\\partial_{2}X_{2}+\\frac{1}{2}({\\partial_{2}\\lambda})^{2}-2u_{2}u^{2}(\\partial_{2}X_{2}+re^{-\\lambda}X_{1})=0.\n\\end{equation} \\par A solution to these equations, which depends on both $ r $ and $ \\theta $, would be useful in\nthe study of angular-dependent aspects of cosmology. However, before tackling that\nproblem, the spherically symmetric solution to (\\ref{phiG33}) with $\\partial_{2}\\lambda=0 $ must be obtained. That is accomplished by expressing the bracketed term in (\\ref{phiG33}), $ re^{-\\lambda}X_{1}+X_{2}\\cot\\theta $, as a power series in $ r $. 
To determine a meaningful expression for the power series, there are two physical requirements that can be invoked. Firstly, the Tully-Fisher relation should be obtainable to describe the flat rotation curves of some galaxies within a universe filled with dark energy. This condition requires a term of $ \\ln r $ in $ \\lambda $ which will yield a $ \\frac{1}{r} $ term in $ \\lambda^{\\prime} $. Secondly, the Newtonian gravitational energy density has a $ \\frac{1}{r^{4}} $ radial dependence. These conditions guide the construction of solutions to (\\ref{phiG0011}) and (\\ref{C2}). With the benefit of some hindsight, solutions of the form\n\\begin{equation}\\label{lambda}\n\\lambda=-\\ln(-Ar+2B\\ln r+\\frac{c_{1}}{r}+c_{2})\n\\end{equation}are sought where all of the parameters are arbitrary constants.\n\\par From (\\ref{Xmu}), the static condition requires $ \\partial_{0}f=0 $ so $ u_{0}=0 $ and $ u_{1}u^{1}+u_{2}u^{2}=-1 $. Equations (\\ref{phiG0011}) and (\\ref{C2}) can then be combined as\n\\begin{equation}\\label{C}\nX_2\\cot\\theta-\\partial_{2}X_{2}+2(\\frac{X_{1}^{\\prime}}{X_{1}\\lambda^{\\prime}}-1)(X_{2}\\cot\\theta+re^{-\\lambda}X_{1})=0.\n\\end{equation}This is the equation from which power series expressions for $ X_{1} $ and $ X_{2} $ are sought, which can then be used in (\\ref{phiG33}) to generate $ \\lambda $ of the form given by (\\ref{lambda}). \\par Firstly, an expression for $ X_{1}$ of the form $ X_{1}=e^{\\lambda}P $ is pursued where $ P $ is a polynomial in $ r $. By setting\n\\begin{equation}\\label{key}\nX_{1}^{\\prime}=X_{1}\\lambda^{\\prime}(1+P),\n\\end{equation}\nit follows that\n\\begin{equation}\\label{key}\nX_{1}=c_{3}e^{\\lambda}e^{\\int P\\lambda^{\\prime}dr}\n\\end{equation}where $ c_{3} $ is an arbitrary constant. 
By demanding $ P\\lambda^{\\prime}=\\frac{P^{\\prime}}{P} $, we get the Riccati equation\n\\begin{equation}\\label{key}\nP^{\\prime}=P^{2}\\lambda^{\\prime}\n\\end{equation}which leads to the expression\n\\begin{equation}\\label{key}\nX_{1}=c_{3}e^{\\lambda}P=\\frac{c_{3}e^{\\lambda}}{-\\lambda+c_{4}}\n\\end{equation}where $ c_{4} $ is an arbitrary constant. It is chosen to be an upper bound on $ \\lambda $: $ \\lambda\\ll c_{4} $. Then, \n\\begin{equation}\\label{key}\nX_{1}\\cong \\frac{c_{3}}{c_{4}^{2}}e^{\\lambda}(c_{4}+\\lambda).\n\\end{equation}By setting $ r=\\frac{1}{x+1} $ with $ |x|<1 $, the $\\ln r$ term in $ \\lambda $ of (\\ref{lambda}), and then $ \\lambda $, can be expanded to $O(r^{-2}) $, which results in the power series for $ X_{1}$: \n\\begin{equation}\\label{X11}\nX_{1}=e^{\\lambda}(a_{0}+\\frac{a_{1}}{r}+\\frac{a_{2}}{r^{2}})\n\\end{equation} where all of the parameters have been absorbed into the arbitrary constants $a_{0},a_{1} $ and $ a_{2} $ accordingly.\\par Equation (\\ref{C}), with $ X_{1} $ given by (\\ref{X11}), becomes \n\\begin{equation}\\label{key}\n\\partial_{2}X_{2}-X_{2}\\cot\\theta(1+\\frac{2P^{\\prime}}{\\lambda^{\\prime}P})=\\frac{2rP^{\\prime}}{\\lambda^{\\prime}}\n\\end{equation}which has a solution involving the hypergeometric function. However, a manageable approximate expression for $ X_{2} $ can be obtained by observing that $ \\frac{2P^{\\prime}}{\\lambda^{\\prime}P}\\simeq0 $ for large $ r $. Then \n\\begin{equation}\\label{key}\nX_{2}=c_{5} \\sin\\theta+N\\sin\\theta\\ln\\tan\\frac{\\theta}{2}\n\\end{equation}where $N:=\\frac{2rP^{\\prime}}{\\lambda^{\\prime}}\\simeq 2a_{1}+\\frac{4a_{2}}{r}$ and $ c_{5} $ is an arbitrary constant. Near the equatorial plane, the polar angle can be perturbed to $ \\frac{\\pi}{2}+2\\theta $ with $ \\theta\\ll1 $, so that $ \\ln\\tan(\\frac{\\pi}{4}+\\theta)\\simeq 2\\tan\\theta $. 
$ X_{2} $ then has the structure\n\\begin{equation}\\label{key}\nX_{2}=\\tan\\theta(b_{0}+\\frac{b_{1}}{r})\n\\end{equation}where $ c_{5}=0 $ and all parameters are absorbed into the arbitrary constants $ b_{0} $ and $ b_{1} $ accordingly. Equation $(\\ref{phiG33})$ can then be written as \n\\begin{equation}\\label{phiG33a}\n\\begin{split}\n-\\lambda^{\\prime\\prime}+\\lambda^{\\prime 2}-\\frac{2}{r}\\lambda^{\\prime}+\\frac{2e^{\\lambda}}{r^{2}}(a_{0}r+a_{1}+\\frac{a_{2}}{r}+b_{0}+\\frac{b_{1}}{r})=0.\n\\end{split}\n\\end{equation}\nBy demanding \n\\begin{equation}\\label{b1}\n b_{1}=-a_{2} \n\\end{equation}the undesirable term with $ \\frac{\\ln r}{r} $ in the solution for $ \\lambda $ can be avoided. Equation (\\ref{phiG33a}) then simplifies to\n \\begin{equation}\n \\begin{split}\n \t-\\lambda^{\\prime\\prime}+\\lambda^{\\prime 2}-\\frac{2}{r}\\lambda^{\\prime}+\\frac{2e^{\\lambda}}{r^{2}}(a_{0}r-b)=0\n \\end{split}\n \\end{equation}where\n \\begin{equation}\\label{key}\n -b=a_{1}+b_{0}.\n \\end{equation} This has the exact solution in the equatorial plane as desired:\n\\begin{equation}\\label{wow}\n\\begin{split}\n\\lambda=-\\ln(-a_{0}r+2b\\ln r+\\frac{c_{1}}{r}+c_{2}).\n\\end{split}\n\\end{equation}\nThe corresponding modified Newtonian force per unit mass is\n\\begin{equation}\\label{Newt}\nF=-\\frac{GM}{r^{2}}-\\frac{bc^{2}}{r}+\\frac{a_{0}c^{2}}{2}.\n\\end{equation}\nThe first term is the Newtonian attraction. The second term is attractive for $ b>0 $. It is the term that gives rise to the flat rotation curves. The third term is positive and repulsive if $ a_{0}>0 $. This describes the repulsive dark energy force in the present epoch. However, during a part of the previous decelerating epoch observed by Riess et al. \\cite{29}, $ a_{0}<0 $. They used the Hubble telescope to provide the first conclusive evidence for cosmic deceleration that preceded the current epoch of cosmic acceleration. \\par Assuming a circular orbit about a point mass, it follows that the orbital velocity of a star rotating in the galaxy satisfies\n\\begin{equation}\\label{V}\nv^{2}=v^{2}_{N}+bc^{2}-\\frac{a_{0}c^{2}}{2}r\n\\end{equation} where $v_{N}^{2}$ is the Newtonian term \n\\begin{equation}\\label{v2N}\nv_{N}^{2}=\\frac{GM}{r}. 
\n\\end{equation} Equation (\\ref{V}) demands an upper limit to $ r $, describing a large but finite galaxy. \n\\par Because $ a_{0}\\neq0$, it is possible for the Newtonian force to balance the dark energy force. This requires\n\\begin{equation}\\label{VF}\nv^{2}_{N}-\\frac{a_{0}c^{2}}{2}r=0\n\\end{equation} and\n\\begin{equation}\\label{key}\nv^{2}=bc^{2}\n\\end{equation} with $ b>0 $ describes a specific class of galaxies with a flat orbital rotation curve. From (\\ref{v2N}) and (\\ref{VF}), we obtain the Tully-Fisher relation\n\\begin{equation}\\label{TF}\nv^{4}_{N}=\\frac{GMc^{2}a_{0}}{2},\\enspace a_{0}>0.\n\\end{equation} This result holds for any finite $ r $, in contrast to EG, which holds only for large $ r $ as determined by Lelli, McGaugh and Schombert \\cite{41}. With $ \\frac{{c^{2}a_{0}}}{2}:=A_{0}$, the Tully-Fisher relation in MOND is evident.\n\\par The importance of the radial acceleration relative to the rotation curves of galaxies was discussed by Lelli, McGaugh, Schombert, and Pawlowski in \\cite{42}, where it was determined that late-type galaxies (spirals and irregulars), early-type galaxies (ellipticals and lenticulars), and the most luminous dwarf spheroidals follow a tight radial acceleration relation which correlates well with that due to the distribution\nof baryons. \\par Equation (\\ref{Newt}), which does not include dark matter in this analysis, is general enough to describe the rotation curves of many types of galaxies. For example, galaxy NGC4261 has a relatively flat rotation curve but starts to rise at larger radii, reaching velocities of 700 km s$^{-1}$ at 100 kpc \\cite{42}. That requires $ a_{0} $ in (\\ref{V}) to be negative, which was interpreted above. As another example, both $ a_{0}c^{2}r $ and $ bc^{2} $ could be small enough relative to $ \\frac{GM}{r} $, or those terms could cancel one another, so that the Newtonian term is dominant. Galaxies with no flat rotation curves have recently been observed by van Dokkum et al. \\cite{43}. 
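As an aside (our restatement of (\\ref{VF}) and (\\ref{v2N}), not an additional result of the model), the radius at which the Newtonian attraction and the dark energy repulsion cancel follows directly:
\\begin{equation*}
\\frac{GM}{r_{b}}=\\frac{a_{0}c^{2}}{2}r_{b}\\;\\Longrightarrow\\; r_{b}=\\sqrt{\\frac{2GM}{a_{0}c^{2}}},\\qquad a_{0}>0,
\\end{equation*}
so, for a given baryonic mass $ M $, the flat value $ v^{2}=bc^{2} $ of the rotation curve is attained at $ r_{b} $.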
\\par It should be remembered that the general equation (\\ref{EF}) provides additional variables that may explain even more aspects of cosmology now attributed to dark matter. However, it is still possible that dark matter particles may exist. As a part of (\\ref{MEQ}) in the total matter energy-momentum tensor $ \\tilde{T}_{\\alpha\\beta} $, they would contribute to the gravitational field outside of its source along with baryonic matter in equation (\\ref{EF}) and therefore in (\\ref{Newt}). But any dark matter contribution to the gravitational field would play a much lesser role because of the existence of $ \\varPhi_{\\alpha\\beta} $. \\par From the expression for $ \\lambda $ given by (\\ref{wow}), it is clear that in the absence of the line element field in GR, the Einstein tensor alone cannot represent the additional gravitational attraction attributed to dark matter. This explicitly shows why GR is incomplete and why $ \\Lambda $CDM was invented to describe dark matter.\n\\section{Conclusion}\nThe results in this article stem from the existence of the line element field in a Lorentzian spacetime. It is a fundamental part of the Lorentzian metric and provides the extra freedom to construct the symmetric tensor $\\varPhi_{\\alpha\\beta}$ from the Lie derivative along the line element vector of both the metric and the unit line element vectors. That tensor, which is absent in GR, solves the problem of the localization of gravitational energy-momentum. $ T_{\\alpha\\beta} $, the sum of the matter energy-momentum and $ \\varPhi_{\\alpha\\beta} $, is divergenceless. $ \\varPhi_{\\alpha\\beta} $ completes Einstein's equation and leaves it intact in form. \\par The line element field is a dynamical variable independent of the Riemannian metric. 
Variation of the action functional with respect to $ X^{\\mu} $ restricts the covectors that can be a part of the Lorentzian metric to those satisfying the Lorentz invariant expression $ u_{\\mu}=\\frac{3}{\\Phi}\\partial_{\\mu}f $.\\par The gravitational energy density is calculated from $ \\varPhi_{00} $. It is shown that the radial contribution is twice the Newtonian gravitational energy density calculated from the linearized field equations in GR. The energy-momentum of the gravitational field is localizable and measurable. \\par \n$ \\Phi $ has the global property $ \\int g^{\\alpha\\beta}\\varPhi_{\\alpha\\beta}\\sqrt{-g}d^{4}x=0 $. The cosmological constant $ \\Lambda $ is dynamically replaced by $ \\Phi $.\n\\par Important features attributed to dark energy result from the investigation of the modified Einstein equation in the FLRW metric. $\\Phi>0$ defines dark energy in the present epoch. The dark energy pressure explains the observed cyclic nature of the universe after the Big Bang. The dark energy density explains the initial inflation of the universe and provides a natural explanation of why the vacuum energy density is so small and why it now dominates the expansion and acceleration of the present universe.\n\\par The energy-momentum of the gravitational field is important in the description of dark matter. A static solution is obtained from the modified Einstein equation in a spheroidal metric describing the gravitational field outside of its source, which does not contain dark matter. The modified Newtonian force contains two additional terms: one represents the dark energy force, which depends on the parameter $ a_{0} $; and the other represents the $ \\frac{1}{r} $ ``dark matter'' force, which depends on the parameter $ b $. The baryonic Tully-Fisher relation is obtained by balancing the dark energy force with the Newtonian force. This condition describes the class of galaxies associated with MOND. 
The rotation curves for galaxies with no flat orbital curves, and those with rising rotation curves at large radii, are examples of the flexibility of the orbital rotation curve equation. The results obtained from the \\emph{complete} Einstein equation thus far are able to substantially describe the missing mass problem attributed to dark matter. Further mathematical and detailed numerical analyses to explore the ability of the energy-momentum tensor of the gravitational field to replace dark matter in cosmology are fully warranted. This rigorous analysis, with comparison to astronomical data, may still point to the existence of dark matter to some extent. But even if that is the case, the gravitational role of dark matter is substantially reduced by the impact of the energy-momentum tensor of the gravitational field.\n\\par Thus, $ \\varPhi_{\\alpha\\beta} $ represents the energy-momentum of the gravitational field itself and explains particular features of dark energy and dark matter. It is the symmetric tensor that Einstein sought many years ago. \n\\section*{Acknowledgements} \nI would like to thank the anonymous referee for his/her constructive comments. \n\n\\vskip2pc\n\n\\section{Introduction}\nDeveloping Reinforcement Learning (RL) algorithms that can make effective decisions using high dimensional observations such as images is quite challenging. In addition, it consumes a lot of time and energy. In recent months researchers have worked on developing sample-efficient, plug-and-play RL algorithms that can directly learn from pixels. Srinivas \\textit{et al.} incorporated Contrastive Learning into off-policy algorithms to learn relevant features from image-based inputs. 
Laskin \\textit{et al.} investigated developing data-efficient and generalizable algorithms by introducing a generic data augmentation module for RL algorithms; \\cite{laskin2020reinforcement, shang2021reinforcement, srinivas2020curl}.\n\nWhile a lot of work has been devoted to extracting positional information from image inputs, very little investigation has been done on learning from temporal information. Shang \\textit{et al.} performed experiments using DMControl (\\cite{tassa2018deepmind}) to highlight the importance of temporal information in RL. They compared two Soft-Actor-Critic (SAC) RL algorithms, wherein one had access to pose and temporal information and the other only had access to pose. It was found that the former algorithm swiftly learned the optimal policy, while the latter failed to do so. Furthermore, a recurring heuristic used by many papers is to stack sequential observations together before inputting them to a neural network; \\cite{mnih2015human}. This heuristic combines frames without processing them and can therefore be considered analogous to early fusion; \\cite{karpathy2014large}. Recently, Shang \\textit{et al.} approached this as a video classification problem. This is a lucid approach, as treating a DRL state as equivalent to a video will help improve the prediction capabilities of the underlying neural network. Successful video recognition architectures use late fusion, where all frames are processed using neural networks before they are combined; \\cite{shang2021reinforcement}, \\cite{laskin2020reinforcement}. \n\nMoreover, a video stream consists of both spatial and temporal aspects. The former contains information about the video frame including objects and its surroundings, while the movement of the frame and its associated objects can be learned from the temporal portion; \\cite{simonyan2014two}. 
While learning the spatial aspect is enough for image recognition, video recognition requires learning both spatial and temporal components. Enabling agents to extract temporal information from a given set of frames will result in the DRL agent making better Q-value predictions and therefore in improved data efficiency. Furthermore, it will contribute to the agent understanding the differences between seemingly similar actions, such as opening and closing objects; \\cite{lin2019tsm}, \\cite{shang2021reinforcement}. \n\nThere has been a plethora of work related to video recognition using 3D and standard 2D CNNs. 3D CNNs have the ability to simultaneously extract spatial and temporal features from videos. However, they are computationally costly, which makes them hard to implement in real-time situations. Incorporating similar architectures with vision-based DRL exacerbates this problem, as many applications require fast predictions during training, and latency is infeasible. Furthermore, the extra parameters could make the model more prone to overfitting without large amounts of data. This once again poses a roadblock to the development of sample-efficient RL; \\cite{lin2019tsm, tran2015learning, carreira2017quo}. 2D CNNs, although relatively efficient, fail to extract temporal information; \\cite{simonyan2014two}. \n\n\\cite{amiranashvili2018motion} incorporated optical flow in their RL algorithm, although their technique required state variables in addition to pixel observations during training. Modeling temporal information in RL using only pixel inputs was investigated by \\cite{shang2021reinforcement}, and it brought a new approach to efficiently reducing sample complexity in reinforcement learning. 
We intend to further optimize this technique by leveraging recent work in the field of video action prediction and therefore propose the Temporal Shift Reinforcement Learning (TSRL) algorithm.\n\nThe contributions of our work \\footnote{Our code is available at \\url{https://anonymous.4open.science/r/TSM_RL-85F5/README.md}} are presented here:\n\n1) We propose a plug-and-play architecture that works with any generic vision-based DRL algorithm.\n\n2) We augment a video recognition technique \\cite{lin2019tsm} that does not require any additional parameters to model temporal information in DRL.\n\n\\section{Related Work}\n\\subsection{Latent Flow} Simonyan \\textit{et al.} investigated the use of optical flow techniques to perform video classification and achieved SOTA performance, surpassing previous work in video classification by a significant margin. They developed a dual-stream architecture using ConvNets, consisting of spatial and temporal recognition components. The spatial stream was learned using a pre-trained ConvNet, wherein each frame was sent to the network as input. The input to the temporal stream was stacked optical flow displacement fields generated from consecutive frames. Movement among frames can be obtained from optical flow fields, thereby eliminating the need for the network to learn it. This technique achieved high accuracies without requiring a lot of data. More importantly, they established that training a temporal CNN on optical flow was a drastically better technique than training on a stacked bunch of images; \\cite{simonyan2014two, karpathy2014large}. The downside of this algorithm is that it is computationally costly both during inference and training and therefore cannot be combined with RL algorithms; \\cite{shang2021reinforcement}.\n\n\\subsection{Flow of Latents} Shang \\textit{et al.} looked for a computationally feasible technique to integrate RL with optical flow. 
They were inspired by late fusion techniques, wherein every frame is run through a CNN before fusion is applied. Late fusion provides improved performance with fewer parameters and also accommodates multi-modal data \\cite{jain2019learning, chebotar2017path}. They presented a structured late fusion architecture, wherein each image frame was encoded using a neural network. The encodings at each time step were subtracted from their prior, and this difference was fused with the latent encodings, which was then used by the RL algorithm. This technique was analogous to the work done by \\cite{simonyan2014two}. The optical flow was approximated using the difference in encodings, which provided temporal information. The spatial component was obtained by encoding each of the frames. This technique provided the CNN with a necessary inductive bias. They chose Rainbow DQN and RAD \\cite{laskin2020reinforcement} as their base algorithms and found that they outperformed SOTA algorithms in performance and sample efficiency. Also, they showed that their algorithm reached optimal performance in state-based RL despite only being provided positional state information and no state velocity.\n\nThey also separately investigated encoding frames and then stacking the encodings together instead of the raw images. This technique yielded sub-par results, and the authors hypothesized that stacking high dimensional image frames would allow CNNs to learn temporal information. However, by stacking latent frames, the temporal information was lost, thereby causing the difference in results. \n\n\\subsection{Temporal Shift Module}\nWhile working with video model activations of batch size $N$, with $C$ channels, $T$ frames, height $H$ and width $W$, \\\\$A \\in \\mathbb{R}^{N\\times C\\times T\\times H\\times W}$, 2D CNNs don't consider the temporal dimension $T$, thereby ignoring it. 
\\cite{lin2019tsm} addressed this by shifting channels, thereby mixing information from neighboring frames through the temporal dimension and referred to it as the Temporal Shift Module (TSM). Therefore the current frame contains information that was obtained from its surroundings. They leveraged the concepts of shifts and multiply-accumulate, which are the basic principles of a convolution operation. They extended it by shifting one step forward and backward along the temporal dimension. Furthermore, the multiply-accumulate was folded from the channel dimension to the temporal dimension. However, for online video recognition, only previous frames could be shifted forward and not the other way around. Therefore in such cases, a uni-directional TSM was implemented.\n\nWhile this process doesn't require extra parameters, they found that this technique had drawbacks - 1) The data movement generated due to the shift strategy was not efficient and would increase the latency, especially since 5D activation of videos results in large memory usage. This implied that moving all channels would result in inference latency and large memory footprint on the hardware hosting the model. 2) Moving channels directly across the temporal dimension, referred to as in-place shift, would affect the accuracy of model since the spatial model is distorted. This is because the current channel would have some of its frames (or feature maps) shifted, and therefore, the 2D CNN would lose that information during the classification process. The authors obtained a 2.6 \\% accuracy drop relative to their baseline; \\cite{wang2016temporal} while naively shifting channels. The former issue was mitigated by shifting only a partial number of channels, thereby reducing the amount of data movement and latency incurred. For the latter problem, the TSM module was inserted within the residual branch of a Res-Net, thereby enabling the 2D CNN to learn spatial features without degrading. 
The authors claimed that this method, namely residual shift, allows the information present within the original activation to be retained after channel shifting, owing to the identity mapping. The TSM module is therefore a simple modification to the 2D CNN: after encoding images, it shifts frames in the temporal dimension by +1, -1, and 0. However, shifting frames by -1, i.e. backward, is only possible for offline problems; for online problems, the frames are shifted by +1 only \\cite{lin2019tsm}.\n\nA major advantage of online TSM is that it enables multi-level temporal fusion, whereas other online methods are generally limited to late and mid-level temporal fusion. The authors found multi-level temporal fusion to significantly help on temporal problems \\cite{zhou2018temporal,lin2019tsm, zolfaghari2018eco}.\n \n\\subsection{Prioritized Deep Q Network}\n\\cite{mnih2015human} combined Q-learning with CNNs in order to obtain an approximation of the optimal Q values -\n\n$Q^{*}(s,a) = \\max_{\\pi}\\mathbb{E}[r_t + \\gamma r_{t+1} + \\gamma^2 r_{t+2} + \\cdots |s_t = s, a_t = a, \\pi]$\n\\begin{figure*}[htp]\n \\centering\n \\includegraphics[width=14cm]{TSRL_Schematic.png}\n \\caption{A schematic of the Temporal Shift Reinforcement Learning algorithm}\n \\label{fig:env}\n\\end{figure*}\n\nThe above expression maximizes the expected sum of discounted rewards $r$ for an agent following a policy $\\pi = P(a|s)$ with discount factor $\\gamma$ at every time step $t$. It was the first RL algorithm that could be applied to a variety of environments with raw pixels as inputs. The authors addressed the learning instabilities that RL presents when coupled with a deep neural network by using a replay buffer and a target network. They found that sequential observations were highly correlated with each other and that minimal changes to $Q$ could drastically affect the policy. The use of a replay buffer mitigated this issue by randomizing the data during the training process. 
This was done by storing the transitions as tuples $(s_t,a_t,s_{t+1}, r_{t+1})$ of state, action, next state and reward within a cyclic buffer. This provided a two-fold benefit. The replay buffer reduced the amount of environment interaction needed for the agent to learn, since the agent could always resample from the buffer. Furthermore, it reduced the variance during gradient descent since batches are sampled. The target network takes its weights from the current network but is updated only after a fixed duration of time. The target network's weights are then used to compute the TD error, which is the difference between the Q value and the TD target. If the parameters of the current network were used to estimate both of these values, they would become correlated and result in instability. \\cite{hasselt2010double} suggested using dual instead of single estimators to estimate the expected return, since the latter leads to over-estimated values, and introduced the Double Q-learning algorithm. A later investigation by \\cite{van2016deep} showed that, rather than learning a separate function, the target network could be used to obtain the second estimate, yielding the Double DQN (DDQN) \\cite{mnih2015human, arulkumaran2017brief}. \n\nIn addition, \\cite{schaul2015prioritized} modified the experience replay process so that, instead of the conventional uniform sampling process, important samples were given a higher priority. The Prioritized Experience Replay (PER) technique was found to double the learning speed and also achieve SOTA scores on Atari games.\n\n\\section{Approach}\n\nThe motivation behind TSRL was to introduce an efficient algorithm that did not require any additional parameters while leveraging the benefits of multi-level temporal fusion. The architecture developed by \\cite{lin2019tsm} for online Temporal Shift was modified and incorporated into a Double DQN with Prioritized Experience Replay (DDQN-PER). 
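The replay-buffer and TD-target machinery described above can be sketched as follows (a minimal illustration; PER's priority sampling is omitted and all names are ours):

```python
import random
from collections import deque

class ReplayBuffer:
    """Cyclic buffer of (s_t, a_t, s_{t+1}, r_{t+1}, done) transitions."""

    def __init__(self, capacity):
        # a deque with maxlen evicts the oldest transition automatically
        self.buffer = deque(maxlen=capacity)

    def push(self, s, a, s_next, r, done):
        self.buffer.append((s, a, s_next, r, done))

    def sample(self, batch_size):
        # uniform sampling breaks the correlation between consecutive observations
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

def td_target(r, q_next_target, gamma=0.99, done=False):
    """TD target r + gamma * max_a' Q_target(s', a'), using the frozen target network."""
    return r if done else r + gamma * max(q_next_target)
```

The TD error is then the difference between this target and the current network's Q estimate for the stored state-action pair.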
\\cite{lin2019tsm} used a ResNet model for their experiments; however, following the conventional CNN models used by the vision-based RL community, we used a shallow three-layer CNN. \n\nAlso, we used in-place shift instead of residual shift, wherein the channels are moved directly across the temporal dimension. We assumed that the accuracy improvements in predicting the Q values obtained by modeling the temporal aspect would compensate for the loss due to spatial degradation. Furthermore, the online TSM algorithm \\cite{lin2019tsm} cached the features in memory and then replaced them with those of the next time step; our approach was to directly roll the features across time steps. \n\nFinally, the authors of the TSM paper found that the highest accuracy for the online model was obtained by shifting 1\/8th of the channels in each layer of the neural network. However, while testing our algorithm, we found that the best results were obtained when we shifted around 1\/5 to 1\/3 of our channels. \n\nA schematic of our algorithm is given in Figure~\\ref{fig:env} and a PyTorch-based pseudocode for our algorithm is presented here -\n\\begin{algorithm}\n\\caption{TSRL}\\label{alg:cap}\n\n\\begin{minipage}{\\linewidth}\n\n\\begin{lstlisting}\nFor each step t do\n  For each convolution layer do\n    x = self.relu1(self.conv1(x))\n    n,c,h,w = x.shape\n    # group the batch into clips of T consecutive frames\n    x = x.reshape(n\/\/T, T, c, h, w)\n    copy = torch.clone(x)\n    # roll the first c\/\/8 channels one step forward in time\n    x[:,:, :c\/\/8, :, :] = torch.roll(x[:,:, :c\/\/8, :, :],\n        shifts = 1, dims = 1)\n    # undo the wrap-around: the first frame keeps its own channels\n    x[:,0, :c\/\/8, :, :] = copy[:,0, :c\/\/8, :, :]\n  End For\n  z_t = FullyConnected(x)\nEnd For\n\\end{lstlisting}\n\n\\end{minipage}\n\\end{algorithm}\n\n\\section{Experiments}\nWe tested our algorithm using OpenAI Gym Atari environments with visual images as input. An open-sourced implementation of DDQN (https:\/\/github.com\/higgsfield\/RL-Adventure) combined with PER was used. The images were converted to grayscale to speed up the learning process. 
To gauge the sample efficiency of TSRL, we compared it with a generic DDQN-PER receiving stacked images as input. We also used our own implementation of the algorithm developed by \\cite{shang2021reinforcement}, referred to as Flare, in order to compare against the state of the art. The number of stacked images was kept equal to the number of time steps considered by TSRL for both DDQN-PER and Flare. All algorithms were run for 1.4M time steps using 5 different trials. The performance of each algorithm was gauged by averaging over the trials and then summing all rewards obtained \\cite{brockman2016openai, bellemare2013arcade, mott1996stella}.\n\\begin{figure}[!htp]\n\\begin{subfigure}{.5\\textwidth}\n \\centering\n \\includegraphics[width=.8\\linewidth]{Freeway.png}\n \\caption{Freeway Atari Environment}\n \\label{fig:sfig1}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n \\centering\n \\includegraphics[width=.8\\linewidth]{asterix.png}\n \\caption{Asterix Atari Environment}\n \\label{fig:sfig2}\n\\end{subfigure}\n\\begin{subfigure}{.5\\textwidth}\n \\centering\n \\includegraphics[width=.8\\linewidth]{riverraid.png}\n \\caption{River Raid Atari Environment}\n \\label{fig:sfig3}\n\\end{subfigure}\n\\begin{subfigure}{.5\\textwidth}\n \\centering\n \\includegraphics[width=.8\\linewidth]{pong.png}\n \\caption{Pong Atari Environment}\n \\label{fig:sfig4}\n\\end{subfigure}\n\\caption{OpenAI Gym environments used for training}\n\\label{fig:fig}\n\\end{figure}\n\n\\subsection{Results}\nTable \\ref{table:1} shows the sum of average rewards obtained across the five runs for each environment. The shift parameter column, $s$, gives the reciprocal of the fraction of channels that were shifted: for instance, if $s = 3$, then the first 1\/3\\textsuperscript{rd} of the channels would be shifted across the temporal dimension in every layer of the CNN.\n\nFigure~\\ref{fig:plots} shows the reward obtained per episode. In some cases, an algorithm may use a relatively large number of time steps per episode early on. 
This would lead to a lower total number of episodes, and vice versa. \n\nTSRL outperforms both DDQN-PER and Flare in all environments except Asterix, where it outperforms only DDQN-PER.\n\n\n\\begin{figure}[!htp]\n\\begin{subfigure}{.5\\textwidth}\n \\centering\n \\includegraphics[width=.8\\linewidth]{freeway_result.png}\n\n \\label{fig:sfig1}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n \\centering\n \\includegraphics[width=.8\\linewidth]{asterix_result.png}\n\n \\label{fig:sfig2}\n\\end{subfigure}\n\n\\begin{subfigure}{.5\\textwidth}\n \\centering\n \\includegraphics[width=.8\\linewidth]{riverraid_result.png}\n\n \\label{fig:sfig3}\n\\end{subfigure}\n\\begin{subfigure}{.5\\textwidth}\n \\centering\n \\includegraphics[width=.8\\linewidth]{pong_result.png}\n\n \\label{fig:sfig4}\n\\end{subfigure}\n\\caption{Plots of episode vs reward for different Atari environments}\n\\label{fig:plots}\n\\end{figure}\n\n\\begin{table}[h!]\n\\centering\n\\caption{Sum of average rewards obtained.}\n\\label{table:1}\n\\begin{tabular}{||c c c c c||} \n \\hline\n Environment & Shift & TSRL & DDQN-PER & FLARE\\\\ [0.5ex] \n \\hline\\hline\n Freeway &3 & \\textbf{18291.5} & 17807.6 & 14686.19\\\\ [1ex] \n\\hline\n Asterix &5 & 22854.25 & 20702.0 & \\textbf{33496.93}\\\\ [1ex] \n\\hline\n Riverraid &5 & \\textbf{41850.3} & 34849.2 & 34966.0\\\\ [1ex] \n\\hline\nPong &5 & \\textbf{7892.17} & 7221.80 & -36528.20\\\\ [1ex] \n\\hline\n\\end{tabular}\n\\end{table}\n\n\\subsection{Discussion}\nA major difference between our algorithm and other RL algorithms that take temporal aspects into account is that we provide multi-level temporal fusion. Most RL algorithms implement early fusion \\cite{mnih2015human}, and more recent ones \\cite{amiranashvili2018motion, shang2021reinforcement} have experimented with late fusion. However, our approach enables RL to have temporal fusion across all levels. 
This type of fusion was found to significantly help difficult temporal modeling problems \\cite{lin2019tsm}. \n\nIt is interesting to note that, instead of a single shift hyperparameter being optimal for all tasks, the optimal value varies across environments. We hypothesize that this is caused by the trade-off between spatial and temporal learning. Some environments might not require a large number of feature maps and could therefore work with a lower shift hyperparameter, permitting a larger fraction of channels to be moved and leading to improved temporal learning. However, this might not be the case in complicated environments, and such situations might require the shift hyperparameter to be higher.\n\nFinally, we see that TSRL is able to beat the baseline and the SOTA for almost all the environments.\\footnote{We used our own implementation of the Flare algorithm.} Since Flare concatenates latent flow with features, we believe this increases the number of parameters and, therefore, the training time relative to TSRL. Furthermore, the latent flow is obtained by subtracting the current frame from the immediately preceding one while ignoring the frames before that, which might not provide much information when the difference between immediate frames is minute. This problem is mitigated by the multi-level fusion abilities of our algorithm.\n\\section{Conclusions}\nWe present a simple shifting technique for learning temporal features in DRL problems without requiring additional parameters. After testing our algorithm on OpenAI Atari environments, we find that it outperforms the commonly used frame-stacking heuristic.\n\nA major drawback of our algorithm is the need to find a suitable shift hyperparameter. 
Future work could include either learning the optimal value of this hyperparameter online or changing how the shift is performed (such as using residual shift \\cite{lin2019tsm}) so that the spatial features aren't disturbed.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{intro}\nA permutation $\\pi=\\pi_1\\pi_2\\cdots \\pi_n$ is called {\\em up-down} if $\\pi_1<\\pi_2>\\pi_3<\\pi_4>\\pi_5<\\cdots$. A permutation $\\pi=\\pi_1\\pi_2\\cdots \\pi_n$ is called {\\em down-up} if $\\pi_1>\\pi_2<\\pi_3>\\pi_4<\\pi_5>\\cdots$. A famous result of Andr\\'{e} says that if $E_n$ is the number of up-down (equivalently, down-up) permutations of $1,2,\\ldots,n$, then $$\\sum_{n\\geq 0}E_n\\frac{x^n}{n!}=\\sec x+\\tan x.$$ Some aspects of up-down and down-up permutations, also called {\\em reverse alternating} and {\\em alternating}, respectively, are surveyed in~\\cite{Stanley2010survey}. Slightly abusing these definitions, we refer to {\\em alternating permutations} as the union of up-down and down-up permutations. This union is known as the set of {\\em zigzag permutations}.\n\nIn this paper, we extend the study of alternating permutations to that of {\\em alternating words}. These words, also called {\\em zigzag words}, are the union of up-down and down-up words, which are defined analogously to up-down and down-up permutations, respectively. For example, $1214$, $2413$, $2424$ and $3434$ are examples of up-down words of length 4 over the alphabet $\\{1,2,3,4\\}$.\n\nSection~\\ref{en-alt-words} is dedicated to the enumeration of up-down words, which is equivalent to enumerating down-up words by applying the operation of {\\em complement}. For a word $w=w_1w_2\\cdots w_n$ over the alphabet $\\{1,2,\\ldots,k\\}$, its complement $w^c$ is the word $c_1c_2\\cdots c_n$, where for each $i=1,2,\\ldots,n$, $c_i=k+1-w_i$. 
For example, the complement of the word $24265$ over the alphabet $\\{1,2,\\ldots,6\\}$ is $53512$. Our enumeration in Section~\\ref{en-alt-words} is done by linking bijectively up-down words to order ideals of certain posets and using known results.\n\nA ({\\em permutation}) {\\em pattern} is a permutation $\\tau=\\tau_1\\tau_2\\cdots\\tau_k$. We say that a permutation $\\pi=\\pi_1\\pi_2\\cdots\\pi_n$ {\\em contains an occurrence} of $\\tau$ if there are $1\\leq i_1< i_2<\\cdots< i_k\\leq n$ such that $\\pi_{i_1}\\pi_{i_2}\\cdots \\pi_{i_k}$ is order-isomorphic to $\\tau$. If $\\pi$ does not contain an occurrence of $\\tau$, we say that $\\pi$ {\\em avoids}~$\\tau$. For example, the permutation 315267 contains several occurrences of the pattern 123, for example, the subsequences 356 and 157, while this permutation avoids the pattern 321. Occurrences of a pattern in words are defined similarly as subsequences order-isomorphic to a given word called pattern (the only difference with permutation patterns is that word patterns can contain repetitive letters, which is not in the scope of this paper).\n\nA comprehensive introduction to the theory of patterns in permutations and words can be found in~\\cite{Kitaev2011Patterns}. In particular, Section 6.1.8 in~\\cite{Kitaev2011Patterns} discusses known results on pattern-avoiding alternating permutations, and Section 7.1.6 discusses results on permutations avoiding patterns in a more general sense.\n\nIn this paper we initiate the study of pattern-avoiding alternating words. In Section~\\ref{123-up-down-sec} we enumerate up-down words over $k$-letter alphabet avoiding the pattern 123. In particular, we show that in the case of even length, the answer is given by the {\\em Narayana numbers} counting, for example, {\\em Dyck paths} with a specified number of peaks (see Theorem~\\ref{main-thm}). 
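The definitions of containment and avoidance above translate directly into a short brute-force check (our own illustration; the function names are not from the literature):

```python
from itertools import combinations

def order_isomorphic(u, v):
    """True if u and v compare the same way at every pair of positions."""
    return all((u[i] < u[j]) == (v[i] < v[j]) and (u[i] > u[j]) == (v[i] > v[j])
               for i, j in combinations(range(len(u)), 2))

def contains(word, pattern):
    """True if some subsequence of `word` is order-isomorphic to `pattern`."""
    return any(order_isomorphic(sub, pattern)
               for sub in combinations(word, len(pattern)))
```

Because equalities are compared as well, the same check works for word patterns with repeated letters, although only permutation patterns are considered in this paper.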
Interestingly, the number of 132-avoiding words over $k$-letter alphabet of even length is also given by the Narayana numbers, which we establish bijectively in Section~\\ref{bijection-sec}. In Section~\\ref{sec-132-av} we provide a (non-closed form) formula for the number of 132-avoiding words over $k$-letter alphabet of odd length. In Section~\\ref{312-up-down-sec} we show that the enumeration of 312-avoiding up-down words is equivalent to that of 123-avoiding up-down words. Further, a classification of all cases of avoiding a length 3 permutation pattern on up-down words is discussed in Section~\\ref{all-cases}. Finally, some concluding remarks are given in Section~\\ref{last-sec}.\n\nIn what follows, $[k]=\\{1,2,\\ldots,k\\}$.\n\n\\section{Enumeration of up-down words}\\label{en-alt-words}\n\nIn this section, we consider the enumeration of up-down words.\nWe shall show that this problem is the same as that of enumerating order ideals of a certain poset.\nSince up-down words are in one-to-one-correspondence with down-up words by using the complement operation, we consider only down-up words throughout this section.\n\nTable~\\ref{tab1} provides the number $N_{k,\\ell}$ of down-up words of length $\\ell$ over the alphabet $[k]$ for small values of $k$ and $\\ell$ indicating connections to the {\\em Online Encyclopedia of Integer Sequences} ({\\em OEIS})~\\cite{SloaneLine}.\n\n\\begin{table}[!htb]\n\\small\n\\begin{center}\n\\begin{tabular}{|c|lllllllllll|l|}\n\\hline\n\\diagbox{$k$}{$\\ell$}& 0 & 1 & 2 & 3 & 4 &5 &6 & 7 &8&9 &10& OEIS\\\\\n\\hline\n2& 1& 2&1& 1&1& 1&1& 1&1& 1&1& trivial \\\\\n\\hline\n3& 1& 3& 3& 5& 8& 13& 21& 34& 55& 89& 144 & A000045\\\\\n\\hline\n4& 1& 4& 6& 14& 31& 70& 157& 353& 793& 1782& 4004 & A006356\\\\\n\\hline\n5& 1& 5& 10& 30& 85& 246& 707& 2037& 5864& 16886& 48620& A006357\\\\\n\\hline\n6& 1& 6& 15& 55& 190& 671& 2353& 8272& 29056 & 102091 & 358671 & A006358\\\\\n\\hline\n7& 1& 7& 21& 91& 371& 1547& 6405& 26585& 110254 & 457379 & 
1897214 & A006359\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{The number $N_{k,\\ell}$ of down-up words on $[k]$ of length $\\ell$ for small values of $k$ and~$\\ell$.}\\label{tab1}\n\\end{table}\n\nWe assume the reader is familiar with the notion of a partially ordered set (poset) and some basic properties of posets; e.g. see \\cite{Stanley1997Enumerative}. A partially ordered set $P$ is a set together with a binary relation denoted by $\\leq_P$ that satisfies the properties of reflexivity, antisymmetry and transitivity. An order ideal of $P$ is a subset $I$ of $P$ such that if $x\\in I$ and $y\\leq x$ then $y\\in I$. We denote $J(P)$ the set of all order ideals of $P$.\n\nLet $\\mathbf{n}$ be the poset on $[n]$ with its usual order ($\\mathbf{n}$ is a linearly ordered set).\nThe {\\em $m$-element zigzag poset}, denoted $Z_m$, is shown schematically in Figure \\ref{Zm}. Note that the order $<_{Z_m}$ in $Z_m$ is $1<2>3<4>5<\\cdots.$ The definition of the order $\\leq_{Z_m}$ is self-explanatory.\n\n\\begin{figure}[!htb]\n\\small\n \\centering\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\begin{tikzpicture}[scale=1.4]\n\t\\draw (3,0)--(3.5,1)--(4,0)--(4.5,1)--(5,0);\n\t\\fill(3,0) circle(0.04cm); \\node [below] at (3,0) {$1$};\n\t\\fill(3.5,1) circle(0.04cm); \\node [above] at (3.5,1) {$2$};\n\t\\fill(4,0) circle(0.04cm); \\node [below] at (4,0) {$3$};\n\t\\fill(4.5,1) circle(0.04cm); \\node [above] at (4.5,1) {$4$};\n\t\\fill(5,0) circle(0.04cm); \\node [below] at (5,0) {$5$};\n\t\n\t\\fill(5.2,0.5) circle(0.02cm); \\fill(5.5,0.5) circle(0.02cm); \\fill(5.8,0.5) circle(0.02cm);\n\n\t\\draw (6,0)--(6.5,1);\n\t\\fill(6,0) circle(0.04cm); \\node [below] at (6,0){$m-1$};\n\t\\fill(6.5,1) circle(0.04cm); \\node [above] at (6.5,1){$m$};\n\t\n\t\\coordinate[label=below:$m$ even] (1) at (5,-0.5);\n \\end{tikzpicture}\n \\end{minipage}%\n \\begin{minipage}{0.5\\textwidth}\n \\centering\n \\begin{tikzpicture}[scale=1.4]\n\t\\draw 
(3,0)--(3.5,1)--(4,0)--(4.5,1)--(5,0);\n\n\t\\fill(3,0) circle(0.04cm); \\node [below] at (3,0) {$1$};\n\t\\fill(3.5,1) circle(0.04cm); \\node [above] at (3.5,1) {$2$};\n\t\\fill(4,0) circle(0.04cm); \\node [below] at (4,0) {$3$};\n\t\\fill(4.5,1) circle(0.04cm); \\node [above] at (4.5,1) {$4$};\n\t\\fill(5,0) circle(0.04cm); \\node [below] at (5,0) {$5$};\n\t\n\t\\fill(5.2,0.5) circle(0.02cm); \\fill(5.5,0.5) circle(0.02cm); \\fill(5.8,0.5) circle(0.02cm);\n\n\t\\draw (6,1)--(6.5,0);\n\t\\fill(6,1) circle(0.04cm); \\node [above] at (6,1){\\small $m-1$};\n\t\\fill(6.5,0) circle(0.04cm); \\node [below] at (6.5,0){\\small $m$};\n\t\\coordinate[label=below:$m$ odd] (1) at (5,-0.5);\n\t\\end{tikzpicture}\n \\end{minipage}\n \\caption{The zigzag poset $Z_{m}$.}\n \\label{Zm}\n\\end{figure}\n\nThe poset $Z_m \\times \\mathbf{n}$ is as shown in Figure~\\ref{Zmn}. Elements of $Z_m \\times \\mathbf{n}$ are pairs $(i,j)$, where $i \\in Z_m$ and $j\\in [n]$, and the order is defined as follows:\n$$(i,j)\\leq (k,\\ell) \\mbox{ if and only if } i\\leq_{Z_{m}}k\\mbox{ and } j\\leq \\ell.$$\n\n\n\\begin{figure}[!htb]\n\\tiny\n \\centering\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\begin{tikzpicture}[scale=1.5]\n \\draw (3,1)--(3.5,2)--(4,1)--(4.5,2)--(5,1);\n \\draw (3,-1)--(3.5,0)--(4,-1)--(4.5,0)--(5,-1);\n \\draw (3,-2)--(3.5,-1)--(4,-2)--(4.5,-1)--(5,-2);\n \\draw (3,1)--(3,0.7); \\draw (3,-0.3)--(3,-1)--(3,-2);\n \\draw (3.5,2)--(3.5,1.3); \\draw (3.5,0.3)--(3.5,0)--(3.5,-1);\n \\draw (4,1)--(4,0.7); \\draw (4,-0.3)--(4,-1)--(4,-2);\n \\draw (4.5,2)--(4.5,1.3); \\draw (4.5,0.3)--(4.5,0)--(4.5,-1);\n \\draw (5,1)--(5,0.7); \\draw (5,-0.3)--(5,-1)--(5,-2);\n \\fill(3,1) circle(0.04cm); \\node [right] at (3,1){ $(1,n)$};\n \\fill(3.5,2) circle(0.04cm); \\node [right] at (3.5,2){ $(2,n)$};\n \\fill(4,1) circle(0.04cm); \\node [right] at (4,1){ $(3,n)$};\n \\fill(4.5,2) circle(0.04cm); \\node [right] at (4.5,2){ $(4,n)$};\n \\fill(5,1) circle(0.04cm); \\node [right] at 
(5,1) { $(5,n)$};\n \\fill(3,-1) circle(0.04cm); \\node [right] at (3,-1) { $(1,2)$};\n \\fill(3.5,0) circle(0.04cm); \\node [right] at (3.5,0) { $(2,2)$};\n \\fill(4,-1) circle(0.04cm); \\node [right] at (4,-1) { $(3,2)$};\n \\fill(4.5,0) circle(0.04cm); \\node [right] at (4.5,0) { $(4,2)$};\n \\fill(5,-1) circle(0.04cm); \\node [right] at (5,-1) { $(5,2)$};\n \\fill(3,-2) circle(0.04cm); \\node [right] at (3,-2) { $(1,1)$};\n \\fill(3.5,-1) circle(0.04cm); \\node [right] at (3.5,-1) { $(2,1)$};\n \\fill(4,-2) circle(0.04cm); \\node [right] at (4,-2) { $(3,1)$};\n \\fill(4.5,-1) circle(0.04cm);\\node [right] at (4.5,-1) { $(4,1)$};\n \\fill(5,-2) circle(0.04cm); \\node [right] at(5,-2) { $(5,1)$};\n\n \\fill(3.7,0.5) circle(0.02cm); \\fill(4,0.5) circle(0.02cm); \\fill(4.3,0.5) circle(0.02cm);\n \\fill(5.2,1.5) circle(0.02cm); \\fill(5.5,1.5) circle(0.02cm); \\fill(5.8,1.5) circle(0.02cm);\n \\fill(5.2,0.5) circle(0.02cm); \\fill(5.5,0.5) circle(0.02cm); \\fill(5.8,0.5) circle(0.02cm);\n \\fill(5.2,-0.5) circle(0.02cm); \\fill(5.5,-0.5) circle(0.02cm); \\fill(5.8,-0.5) circle(0.02cm);\n \\fill(5.2,-1.5) circle(0.02cm); \\fill(5.5,-1.5) circle(0.02cm); \\fill(5.8,-1.5) circle(0.02cm);\n\n \\draw (6,1)--(6.5,2);\\draw (6,-1)--(6.5,0);\\draw (6,-2)--(6.5,-1);\n \\draw (6,1)--(6,0.7); \\draw (6,-0.3)--(6,-1)--(6,-2);\n \\draw (6.5,2)--(6.5,1.3); \\draw (6.5,0.3)--(6.5,0)--(6.5,-1);\n \\fill(6,1) circle(0.04cm); \\node [right] at (6,1) { $(m-1,n)$};\n \\fill(6.5,2) circle(0.04cm); \\node [right] at (6.5,2) { $(m,n)$};\n \\fill(6,-1) circle(0.04cm); \n \\fill(6.5,0) circle(0.04cm); \\node [right] at (6.5,0) { $(m,2)$};\n \\fill(6,-2) circle(0.04cm); \\node [right] at (6,-2) { $(m-1,1)$};\n \\fill(6.5,-1) circle(0.04cm); \\node [right] at (6.5,-1) { $(m,1)$};\n \\coordinate[label=below:\\small $m$ even] (1) at (5,-2.5);\n \\end{tikzpicture}\n \\end{minipage}%\n \\begin{minipage}{0.5\\textwidth}\n \\centering\n \\begin{tikzpicture}[scale=1.5]\n \\draw 
(3,1)--(3.5,2)--(4,1)--(4.5,2)--(5,1);\n \\draw (3,-1)--(3.5,0)--(4,-1)--(4.5,0)--(5,-1);\n \\draw (3,-2)--(3.5,-1)--(4,-2)--(4.5,-1)--(5,-2);\n \\draw (3,1)--(3,0.7); \\draw (3,-0.3)--(3,-1)--(3,-2);\n \\draw (3.5,2)--(3.5,1.3); \\draw (3.5,0.3)--(3.5,0)--(3.5,-1);\n \\draw (4,1)--(4,0.7); \\draw (4,-0.3)--(4,-1)--(4,-2);\n \\draw (4.5,2)--(4.5,1.3); \\draw (4.5,0.3)--(4.5,0)--(4.5,-1);\n \\draw (5,1)--(5,0.7); \\draw (5,-0.3)--(5,-1)--(5,-2);\n \\fill(3,1) circle(0.04cm); \\node [right] at (3,1){ $(1,n)$};\n \\fill(3.5,2) circle(0.04cm); \\node [right] at (3.5,2){ $(2,n)$};\n \\fill(4,1) circle(0.04cm); \\node [right] at (4,1){ $(3,n)$};\n \\fill(4.5,2) circle(0.04cm); \\node [right] at (4.5,2){ $(4,n)$};\n \\fill(5,1) circle(0.04cm); \\node [right] at (5,1) { $(5,n)$};\n \\fill(3,-1) circle(0.04cm); \\node [right] at (3,-1) { $(1,2)$};\n \\fill(3.5,0) circle(0.04cm); \\node [right] at (3.5,0) { $(2,2)$};\n \\fill(4,-1) circle(0.04cm); \\node [right] at (4,-1) { $(3,2)$};\n \\fill(4.5,0) circle(0.04cm); \\node [right] at (4.5,0) { $(4,2)$};\n \\fill(5,-1) circle(0.04cm); \\node [right] at (5,-1) { $(5,2)$};\n \\fill(3,-2) circle(0.04cm); \\node [right] at (3,-2) { $(1,1)$};\n \\fill(3.5,-1) circle(0.04cm); \\node [right] at (3.5,-1) { $(2,1)$};\n \\fill(4,-2) circle(0.04cm); \\node [right] at (4,-2) { $(3,1)$};\n \\fill(4.5,-1) circle(0.04cm);\\node [right] at (4.5,-1) { $(4,1)$};\n \\fill(5,-2) circle(0.04cm); \\node [right] at(5,-2) { $(5,1)$};\n\n \\fill(3.7,0.5) circle(0.02cm); \\fill(4,0.5) circle(0.02cm); \\fill(4.3,0.5) circle(0.02cm);\n \\fill(5.2,1.5) circle(0.02cm); \\fill(5.5,1.5) circle(0.02cm); \\fill(5.8,1.5) circle(0.02cm);\n \\fill(5.2,0.5) circle(0.02cm); \\fill(5.5,0.5) circle(0.02cm); \\fill(5.8,0.5) circle(0.02cm);\n \\fill(5.2,-0.5) circle(0.02cm); \\fill(5.5,-0.5) circle(0.02cm); \\fill(5.8,-0.5) circle(0.02cm);\n \\fill(5.2,-1.5) circle(0.02cm); \\fill(5.5,-1.5) circle(0.02cm); \\fill(5.8,-1.5) circle(0.02cm);\n\n \\draw 
(6,2)--(6.5,1);\\draw (6,0)--(6.5,-1);\\draw (6,-1)--(6.5,-2);\n \\draw (6,2)--(6,1.3); \\draw (6,0.3)--(6,0)--(6,-1);\n \\draw (6.5,1)--(6.5,0.7); \\draw (6.5,-0.3)--(6.5,-1)--(6.5,-2);\n \\fill(6,2) circle(0.04cm); \\node [right] at (6,2) { $(m-1,n)$};\n \\fill(6.5,1) circle(0.04cm); \\node [right] at (6.5,1) { $(m,n)$};\n \\fill(6,0) circle(0.04cm); \\node [right] at (6,0) { $(m-1,2)$};\n \\fill(6.5,-1) circle(0.04cm); \\node [right] at (6.5,-1) { $(m,2)$};\n \\fill(6,-1) circle(0.04cm);\n \\fill(6.5,-2) circle(0.04cm); \\node [right] at (6.5,-2) { $(m,1)$};\n \\coordinate[label=below:\\small $m$ odd] (1) at (5,-2.5);\n \\end{tikzpicture}\n \\end{minipage}\n \\caption{The poset $Z_{m} \\times \\mathbf{n}$.}\n \\label{Zmn}\n\\end{figure}\n\nIt is known that, for $m\\ge 2$, the size of $J(Z_m)$ equals to the Fibonacci number $F_{m+2}$, which is defined recursively as $F_1=F_2=1$ and $F_{n+1}=F_n+F_{n-1}$ for any $n\\geq 2$; see Stanley \\cite[Ch. 3 Ex. 23.a]{Stanley1997Enumerative}.\nThe enumeration of $J(Z_m \\times \\mathbf{n})$ was studied by Berman and K\\\"ohler \\cite{Berman1976Cardinalities}.\nThe following theorem reveals their connection with the enumeration of alternating words. We shall give two proofs of it here, a bijective proof and an enumerative proof.\n\n\\begin{thm}\\label{enum-down-up-words}\n For any $k \\ge 2$ and $\\ell \\ge 2$, the number $N_{k,\\ell}$ of down-up words over $[k]$ of length $\\ell$ is equal to the number of order ideals of $Z_\\ell \\times (\\mathbf{k-2})$.\n\\end{thm}\n\n\\noindent \\textbf{Bijective Proof.}\nLet $\\mathcal{W}_{k,\\ell}$ denote the set of down-up words over $[k]$ of length $\\ell$. We shall build a bijection between $\\mathcal{W}_{k,\\ell}$ and ${J}(Z_\\ell \\times (\\mathbf{k-2}))$.\n\n We first define a map $\\Phi:\\mathcal{W}_{k,\\ell}\\rightarrow {J}(Z_\\ell \\times (\\mathbf{k-2}))$. 
Given a down-up word $w=w_1 w_2 \\cdots w_{\\ell}$, we define the word $\\alpha=\\alpha_1 \\alpha_2 \\cdots \\alpha_{\\ell}$ as follows:\n $$\\alpha_i=\\left\\{\n\\begin{array}{ll}\nw_i-2,& \\mbox{ if } i \\mbox{ is odd},\\\\[3mm]\nw_i-1,& \\mbox{ if } i \\mbox{ is even},\n\\end{array}\\right.\n$$\nwhere $1\\leq i\\leq \\ell$.\nThen let $$\\Phi(w)=\\{(i,\\beta_j) : 1\\leq i\\leq \\ell , \\ 1 \\le \\beta_j \\le \\alpha_i\\}.$$\nFor example, let $k=4$ and $\\ell=7$, and consider the word $w=3241423$. Then, $\\alpha=1120211$ and thus $\\Phi(w)=\\{(1,1),(2,1),(3,1),(3,2),(5,1),(5,2),(6,1),\n(7,1)\\}$, which is an order ideal of $Z_7 \\times \\mathbf{2}$.\n\nWe need to show that this map is well defined.\nIt suffices to prove that $\\Phi(w)$ is an order ideal of $Z_\\ell \\times (\\mathbf{k-2})$, that is to say that,\nif $(i',j')\\leq (i,j)$ and $(i,j)\\in \\Phi(w)$ then $(i',j')\\in \\Phi(w)$.\nFrom the definition of the order of $Z_\\ell \\times (\\mathbf{k-2})$, we have that $i'\\le_{Z_{\\ell}} i$ and $j'\\le j$.\nNow, we divide the situation into two cases: $i'=i$ and $i'<_{Z_\\ell}i$.\nFor the case $i'=i$, the argument is obviously true from the construction of $\\Phi(w)$. We just need to consider the case $i'<_{Z_\\ell}i$.\nAt this time, $i$ must be even, and $i'$ can only be $i-1$ or $i+1$.\nSince $(i,j)\\in \\Phi(w)$, we have that $\\alpha_i\\geq j$ and thus $w_i\\geq j+1$.\nFrom the fact that $w$ is a down-up word, it follows that $w_{i'}> w_i$. Hence, $w_{i'} \\ge j+2$ and thus $\\alpha_{i'}\\geq j$.\nFrom the construction of $\\Phi(w)$, we obtain that $(i',j')\\in \\Phi(w)$ for all $j'\\leq j$, as desired.\n\n\n Next, we define a map $\\Psi: {J}(Z_\\ell \\times (\\mathbf{k-2})) \\rightarrow \\mathcal{W}_{k,\\ell}$.\n Given an order ideal $I$ of $Z_\\ell \\times (\\mathbf{k-2})$,\n we define a word $\\gamma=\\gamma_1 \\gamma_2 \\cdots \\gamma_{\\ell}$ as follows. 
For each $1\\leq i\\leq\\ell$, if there exists at least one $j$ such that $(i,j)\\in I$, then let $\\gamma_i$ be the maximum such $j$; otherwise, let $\\gamma_i=0$.\n The corresponding word $\\Psi(I)$ is defined as $(2+\\gamma_1)(1+\\gamma_2)(2+\\gamma_3)(1+\\gamma_4)\\cdots$.\n For example, if $I=\\{(1,1),(2,1),(3,1),(3,2),(5,1),(5,2),(6,1),\n(7,1)\\}$,\n then $\\gamma=1120211$ and thus $w=3241423$.\n\n It is easy to see that, for any even integer $i$, we have $\\gamma_i\\leq \\gamma_{i+1}$ and $\\gamma_i\\leq \\gamma_{i-1}$, since $I$ is an order ideal. From the construction of $\\Psi(I)$, we see that it is a down-up word.\n\nFinally, it is not difficult to prove that $\\Psi \\circ \\Phi = id$ and $\\Phi \\circ \\Psi = id$. Hence $\\Phi$ is a bijection. This completes our bijective proof.\\\\\n\n\\noindent{\\bf Enumerative Proof.}\nWe first prove that the numbers in question satisfy the following recurrence relation, for $k\\ge 3$ and $\\ell\\ge 2$,\n\\begin{align}\\label{eq-N-rec}\n N_{k,\\ell} = N_{k-1,\\ell} + \\sum_{i=0}^{\\lfloor \\frac{\\ell-1}{2}\\rfloor} N_{k-1,2i} N_{k,\\ell-2i-1} - \\delta_{\\ell \\text{ is even}} N_{k-1,\\ell-2},\n\\end{align}\nwith the initial conditions\n$N_{k, 0}=1$, $N_{k, 1}=k$ for $k\\ge 2$, and $N_{2,\\ell}=1$ for $\\ell \\ge 2$. To this end, we note that any down-up word $w$ over $[k]$ of length $\\ell$ belongs to one of the following two cases.\\\\\n\n\\noindent\n{\\bf Case 1:} $w$ does not contain the letter $k$. Then the number we count is that of down-up words over the alphabet $[k-1]$ of length $\\ell$, which is $N_{k-1,\\ell}$. This corresponds to the first term on the right-hand side of \\eqref{eq-N-rec}.\\\\\n\n\\noindent\n{\\bf Case 2:} $w$ is of the form $w_1kw_2$, where $w_1$ is a down-up word of even length over $[k-1]$, and $w_2$ is an up-down word over $[k]$. Note that the number of up-down words equals that of down-up words, as mentioned above. 
This corresponds to the second term on the right-hand side of \\eqref{eq-N-rec}.\nThe only exception occurs when the subword $w_2$ following the leftmost letter $k$ has length one: it must then be smaller than the preceding letter $k$, so there are only $k-1$ choices for it, whereas $N_{k, 1}=k$. The subtracted term $\\delta_{\\ell \\text{ is even}} N_{k-1,\\ell-2}$ corrects this over-count, once for each choice of $w_1$. In these cases, $\\ell-2i-1$ equals 1, which means that $\\ell$ is even. This completes the proof of \\eqref{eq-N-rec}.\n\nNow, let us denote the number of order ideals of $Z_\\ell \\times \\mathbf{k}$ by $M_{k,\\ell}$.\nWe note that Berman and K\\\"ohler \\cite[Example 2.3]{Berman1976Cardinalities} studied a similar recurrence for $M_{k,\\ell}$, namely, for $k\\ge 1$ and $\\ell\\ge 1$,\n\\begin{align*}\n M_{k,\\ell} = M_{k-1,\\ell} + \\sum_{i=0}^{\\lfloor \\frac{\\ell-1}{2}\\rfloor} M_{k-1,2i} M_{k,\\ell-2i-1},\n\\end{align*}\nwith the initial conditions\n$M_{k, 0}=1$ for $k\\ge0$ and $M_{0,\\ell}=1$ for $\\ell\\ge1$.\n\nOwing to the similarity of these recurrence relations, we make a minor change to the numbers $N_{k,\\ell}$ to complete the proof.\nWe let $\\widetilde{N}_{k,\\ell}$ be $N_{k,\\ell}$ except that $\\widetilde{N}_{k,1}=k-1$. One can easily check that,\nfor $k\\ge 3$ and $\\ell\\ge 1$,\n\\begin{align*}\n \\widetilde{N}_{k,\\ell} = \\widetilde{N}_{k-1,\\ell} + \\sum_{i=0}^{\\lfloor \\frac{\\ell-1}{2}\\rfloor} \\widetilde{N}_{k-1,2i} \\widetilde{N}_{k,\\ell-2i-1} ,\n\\end{align*}\nwith the initial conditions\n$\\widetilde{N}_{k, 0}=1$ for $k\\ge2$ and $\\widetilde{N}_{2,\\ell}=1$ for $\\ell\\ge1$.\nIt follows immediately that, for $k\\ge2$ and $\\ell\\ge0$, $$\\widetilde{N}_{k, \\ell} = M_{k-2,\\ell},$$ since they have the same initial conditions and recurrence relations.\nTogether with the fact that $N_{k, \\ell} =\\widetilde{N}_{k, \\ell}$ except for $\\ell=1$, we obtain that\n$$N_{k, \\ell}=M_{k-2,\\ell}$$\nfor $k\\ge2$ and $\\ell\\ge2$. This completes our enumerative proof.\n\\qed\n\nAs an immediate corollary of Theorem~\\ref{enum-down-up-words}, we have the following statement. 
\n\n\\begin{thm} For $k\\geq 3$ and $\\ell\\geq 2$, the numbers $N_{k,\\ell}$ of down-up (equivalently, up-down) words of length $\\ell$ over $[k]$ satisfy (\\ref{eq-N-rec}) with the initial conditions $N_{k,0}=1$, $N_{k,1}=k$ for $k\\geq 2$, and $N_{2,\\ell}=1$ for $\\ell\\geq 2$. \\end{thm} \n\nNote that the Fibonacci numbers have the following recurrence relations \\cite[pp. 5--6]{Vorobiev2002Fibonacci}:\n$$F_{2n}=\\sum_{i=0}^{n-1}F_{2i+1}, \\quad F_{2n+1}=1+\\sum_{i=1}^{n}F_{2i}.$$\nUsing \\eqref{eq-N-rec} and the fact that $N_{2,\\ell}=1$ for $\\ell\\geq 2$, one can prove the following statement.\n\n\\begin{thm} For $\\ell\\geq 2$, $N_{3,\\ell}=F_{\\ell+2}$, the $(\\ell+2)$th Fibonacci number. \\end{thm}\n\n\\section{Enumeration of 123-avoiding up-down words}\\label{123-up-down-sec}\n\nIn this section, we consider the enumeration of 123-avoiding up-down words. Denote by $A_{k,\\ell}$ the number of 123-avoiding up-down words of length $\\ell$ over the alphabet $[k]$, and by $A_{k, \\ell}^j$ the number of those words counted by $A_{k,\\ell}$ that end with $j$.\n\n\\subsection{Explicit enumeration}\n\nSince an up-down word of even length ends with a top letter, which is at least 2, it is easy to see that\n\\begin{align}\n A_{k,2i}=\\sum_{j=2}^{k} A_{k,2i}^j.\n\\end{align}\n\nNext, we deal with the enumeration of $A^j_{k,2i}$. 
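The recurrence \eqref{eq-N-rec} and the Fibonacci specialization above lend themselves to a quick computational check. The following Python sketch is our own illustration (not part of the original argument): it implements the recurrence directly and compares it against brute-force enumeration of down-up words.

```python
from functools import lru_cache
from itertools import product

def brute_force(k, length):
    """Count down-up words w_1 > w_2 < w_3 > ... over the alphabet [k]."""
    count = 0
    for w in product(range(1, k + 1), repeat=length):
        if all(w[j] > w[j + 1] if j % 2 == 0 else w[j] < w[j + 1]
               for j in range(length - 1)):
            count += 1
    return count

@lru_cache(maxsize=None)
def N(k, length):
    """N_{k,l}: number of down-up words of length l over [k], for k >= 2."""
    if length == 0:
        return 1
    if length == 1:
        return k
    if k == 2:
        return 1  # only the word 2121... remains
    total = N(k - 1, length)  # Case 1: words with no letter k
    # Case 2: words of the form w_1 k w_2 with |w_1| = 2i
    total += sum(N(k - 1, 2 * i) * N(k, length - 2 * i - 1)
                 for i in range((length - 1) // 2 + 1))
    if length % 2 == 0:  # correction term when w_2 has length 1
        total -= N(k - 1, length - 2)
    return total

# The recurrence agrees with brute force, and N_{3,l} = F_{l+2}.
fib = [0, 1]
while len(fib) < 15:
    fib.append(fib[-1] + fib[-2])
assert all(N(k, l) == brute_force(k, l) for k in range(2, 6) for l in range(7))
assert all(N(3, l) == fib[l + 2] for l in range(2, 12))
```

For instance, `N(3, 2) == 3` (the words 21, 31, 32) and `N(3, 3) == 5`, matching $F_4$ and $F_5$.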
In what follows, for a word $w$, we write $\\{w\\}^*=\\{\\epsilon, w,ww,www,\\ldots\\}$, where $\\epsilon$ is the empty word, and $\\{w\\}^+=\\{w,ww,www,\\ldots\\}$.\n\n\\begin{lem}\nFor $k\\ge 3$ and $2\\le j\\le k$, the numbers $A_{k,2i}^{j}$ satisfy the following recurrence relation,\n\\begin{align}\\label{rec}\nA_{k,2i}^{j}\n= \\sum_{i'=1}^{i} \\left( A_{k-1, 2i'}^{j-1} - A_{k-1, 2i'}^{j} + A_{k,2i'}^{j+1} \\right),\n\\end{align}\nwith the boundary condition $A_{k,2i}^{k}= \\binom{i+k-2}{i}$.\nFurthermore, an explicit formula for $A^j_{k,2i}$ is\n\\begin{align}\\label{A^j}\nA^j_{k,2i}=\\frac{j-1}{k-1} \\binom{i+k-2}{i} \\binom{i+k-j-1}{i-1}.\n\\end{align}\n\\end{lem}\n\n\n\\begin{proof}\nWe first check the boundary condition.\n When $j=k$, the words must be of the form $$\\{(k-1)k\\}^{*}\\{(k-2)k\\}^{*}\\ldots\\{2k\\}^{*}\\{1k\\}^{*}.$$\n This structure is dictated by the presence of the rightmost $k$; violating it would force an occurrence of the pattern $123$.\n Therefore, $A_{k,2i}^{k}=\\binom{i+k-2}{i}$, where we applied the well-known formula for the number of solutions of the equation $x_1+\\cdots +x_{k-1}=i$ in nonnegative integers.\n\nNow we proceed to deduce the recurrence relation \\eqref{rec}.\nAll the legal words of length $2i$ ending with $j$ can be divided into the following cases according to the occurrence of the letter $1$:\n\n\\noindent\n{\\bf Case 1:} For the legal words that contain the letter $1$, the letter $1$ must appear in the second-to-last position, since otherwise it would lead to a $123$ pattern. We now divide all the legal words ending with $1j$ into the following subcases:\n\\\\[4pt]\n \\indent {\\bf Case 1.1:} There is only one word of the form $\\{1j\\}^{+}$.\\\\[4pt]\n \\indent {\\bf Case 1.2:} We deal with the words of the form $w\\{j'j\\}^{+}\\{1j\\}^{+}$, where $w$ is a legal word and $2\\le j'\\le j-1$. Note that $w$ cannot contain the letter $1$, since a letter $1$ followed by an occurrence of $j'j$ would form the pattern $123$. 
Thus, we consider a legal word over the alphabet $[2,k]$ of even length which ends with $j'j$. By subtracting 1 from each letter of this word, we obtain a legal word over $[k-1]$ ending with $(j'-1)(j-1)$. Thus, the number of all words in this case is equal to that of all words over $[k-1]$ ending with $j-1$, which is $\\sum_{i'=1}^{i-1} A_{k-1, 2i'}^{j-1}$. \\\\[4pt]\n \\indent {\\bf Case 1.3:} The remaining words are of the form $w'\\{1j\\}^{+}$, where $w'$ is a legal word ending with a letter $j'\\ge j+1$. Clearly, the number of such words in this case is $\\sum_{i'=1}^{i-1}\\sum_{j'=j+1}^{k}A_{k,2i'}^{j'}$. \\\\[6pt]\n\\noindent\n{\\bf Case 2:} We next deal with the legal words ending with $j$ over the alphabet $[k]\\setminus \\{1\\}=\\{2,3,\\ldots,k\\}$.\nSubtracting 1 from each letter gives a bijection with the legal words over $[k-1]$ ending with $j-1$, so the number of such words is $A_{k-1,2i}^{j-1}$.\n\nThus, we have the following recurrence relation\n\\begin{align}\\label{formula-A-k-2i-j}\nA_{k,2i}^{j}\n= 1+\\sum_{i'=1}^{i} A_{k-1, 2i'}^{j-1} + \\sum_{i'=1}^{i-1}\\sum_{j'=j+1}^{k}A_{k,2i'}^{j'}.\n\\end{align}\nFrom (\\ref{formula-A-k-2i-j}), we have that\n\\begin{align*}\nA_{k,2i}^{j} - A_{k,2i}^{j+1}\n= \\sum_{i'=1}^{i} \\left( A_{k-1, 2i'}^{j-1}-A_{k-1, 2i'}^{j} \\right) + \\sum_{i'=1}^{i-1}A_{k,2i'}^{j+1},\n\\end{align*}\nand therefore the recurrence \\eqref{rec} follows.\n\nNow we deduce the formula \\eqref{A^j} for $A_{k,2i}^{j}$.\nLet $$A'(k,i,j) = \\frac{j-1}{k-1} \\binom{i+k-2}{i} \\binom{i+k-j-1}{i-1}.$$\nWe next prove that $A_{k,2i}^{j} = A'(k,i,j)$ by induction on $k-j$ and $k$. 
We shall show that these numbers have the same base case and satisfy the same recurrence.\nIndeed, for $k=j\\ge 2$, this fact is obviously true, since $A'(k,i,k)=A^k_{k,2i}$.\nWe will now check that the numbers $A'(k,i,j)$ satisfy the following recurrence relation:\n\\begin{align}\\label{rec-formula-Bs}\nA'(k,i,j)\n= \\sum_{i'=1}^{i} \\left( A'(k-1, i',j-1) - A'(k-1, i',j) +A'(k,i',j+1) \\right).\n\\end{align}\nIndeed, (\\ref{rec-formula-Bs}) is true if and only if\n\\begin{align*}\nA'(k,i,j)-A'(k,i-1,j)\n= A'(k-1, i, j-1) - A'(k-1, i, j) +A'(k, i, j+1),\n\\end{align*}\nand the latter equation is straightforward to verify.\nThis completes the proof.\n\\end{proof}\n\nFurther, the number of 123-avoiding up-down words of length $2i$ over $[k]$ is\n $$A_{k,2i}=\\sum_{j=2}^k\\frac{j-1}{k-1} \\binom{i+k-2}{i} \\binom{i+k-j-1}{i-1}=\\frac{1}{i+1} \\binom{i+k-2}{i} \\binom{i+k-1}{i}.$$\nThe last equality can be verified by Gosper's algorithm \\cite{Petkovsek1996AB}.\n\nNow we consider legal words of odd length. For any legal word of length $2i\\ (i\\ge 1)$ ending with $j\\ (2\\le j\\le k)$, we can adjoin any letter in $[j-1]$ at the end to form an up-down word of length $2i+1$ over $[k]$. 
In fact, such words are necessarily 123-avoiding.\nSo, we obtain that\n\\begin{align*}\nA_{k,2i+1}= & \\sum_{j=2}^k(j-1)A^j_{k,2i}\\\\\n=& \\sum_{j=2}^k\\frac{(j-1)^2}{k-1} \\binom{i+k-2}{i} \\binom{i+k-j-1}{i-1}\\\\\n= & \\frac{i+2k-2}{(i+1)(i+2)}\\binom{i+k-2}{i} \\binom{i+k-1}{i}.\n\\end{align*}\nAgain, the last equality can be verified by Gosper's algorithm \\cite{Petkovsek1996AB}.\n\n\nHence, we have proved the following theorem.\n\n\\begin{thm}\\label{main-thm} For $A_{k,\\ell}$, the number of $123$-avoiding up-down words of length $\\ell$ over $[k]$, we have $A_{k,0}=1$, $A_{k,1}=k$, and, for $\\ell\\geq2$,\n\\begin{align}\\label{main-formula-A-k-l}\n A_{k,\\ell}=\n\\begin{cases}\n \\frac{1}{i+1} \\binom{i+k-2}{i} \\binom{i+k-1}{i}, & \\mbox{ if } \\ell=2i,\\\\[6pt]\n \\frac{i+2k-2}{(i+1)(i+2)} \\binom{i+k-2}{i} \\binom{i+k-1}{i}, & \\mbox{ if } \\ell=2i+1.\n\\end{cases}\n\\end{align}\n\\end{thm}\n\n\\subsection{Generating functions}\n\n\nIn this subsection, we give an expression for the generating function of the numbers $A_{k,i}$ of 123-avoiding up-down words of length $i$ over $[k]$.\nWe use the {\\em Narayana polynomials}, which are defined as $N_0(x)=1$ and, for $n\\geq 1$,\n$$N_{n}(x)=\\sum_{i=0}^{n-1}\\frac{1}{i+1} \\binom{n}{i}\\binom{n-1}{i}x^i.$$\nBy results of Brenti \\cite{Brenti1989Unimodal} and of Reiner and Welker \\cite[Section 5.2]{Reiner2005Charney}, the generating function for $A_{k,2i}$ can be expressed as follows: \n\\begin{align}\\label{eq:Narayana}\n\\sum_{i\\geq 0}A_{k,2i}x^i=\\frac{N_{k-2}(x)}{(1-x)^{2k-3}}.\n\\end{align}\nOn the other hand, by Theorem \\ref{main-thm}, a routine computation leads to the following identity,\n\\begin{align}\\label{eq:123-odd}\n A_{k,2i-1}&=A_{k,2i}-A_{k-1,2i},\n\\end{align}\nfor all $i\\geq 2$.\n(Note that we shall also give a combinatorial interpretation of \\eqref{eq:123-odd} in Section \\ref{all-cases}.)\nThus, together with $A_{k,1}=k$, it follows that\n\\begin{align*}\n \\sum_{i\\geq 
1}A_{k,2i-1}x^i&= x+ \\sum_{i\\geq 1}A_{k,2i}x^i-\\sum_{i\\geq 1}A_{k-1,2i}x^i\\\\\n &=x+\\frac{N_{k-2}(x)}{(1-x)^{2k-3}}-\\frac{N_{k-3}(x)}\n {(1-x)^{2k-5}}\\\\\n &=x+\\frac{N_{k-2}(x)-(1-x)\n ^2N_{k-3}(x)}{(1-x)^{2k-3}}.\n\\end{align*}\nHence, we are ready to obtain the main result of this subsection,\n\\begin{align*}\n \\sum_{i\\geq 0}A_{k,i}x^i&=\\sum_{i\\geq0}A_{k,2i}x^{2i}\n + \\sum_{i\\geq 1}A_{k,2i-1}x^{2i-1}\\\\\n &=\\frac{N_{k-2}(x^2)}{(1-x^2)^{2k-3}}+x+ \\frac{N_{k-2}(x^2)-\n (1-x^2)^2N_{k-3}(x^2)}{x(1-x^2)^{2k-3}}\\\\\n &=x+\\frac{(1+x)N_{k-2}(x^2)-(1-x^2)^2N_{k-3}(x^2)}{x(1-x^2)^{2k-3}}.\n\\end{align*}\n\n\\section{A bijection between \\texorpdfstring{$S^{132}_{k,2i}$}{Lg} and \\texorpdfstring{$S^{123}_{k,2i}$}{Lg}}\\label{bijection-sec}\n\nLet $p$ be a pattern and $S^{p}_{k,\\ell}$ be the set of $p$-avoiding up-down words of length $\\ell$ over $[k]$.\nIn this section, we will build a bijection between $S^{132}_{k,2i}$ and $S^{123}_{k,2i}$.\n\nThe idea here is to introduce the notion of irreducible words and show that irreducible words in $S^{132}_{k,2i}$ can be mapped in a 1-to-1 way into irreducible words in $S^{123}_{k,2i}$, while reducible words in these sets can be mapped to each other as well.\n\n\\begin{definition} A word $w$ is {\\em reducible}, if $w=w_1w_2$ for some non-empty words $w_1$ and $w_2$, and each letter in $w_1$ is no less than every letter in $w_2$. 
The place between $w_1$ and $w_2$ in $w$ is called a {\\em cut-place}.\\end{definition}\n\nFor example, the word 242313 is irreducible, while the word 341312 is reducible (it can be cut into 34 and 1312).\n\nNote that in a reducible up-down word, if we have a cut-place and there are equal elements on both sides of it, then to the left such elements must be bottom elements, and to the right they must be top elements.\n\n\\begin{lem} A word $w$ in $S^{132}_{k,2i}$ is irreducible if and only if $w=w_1xy$, where $w_1$ is a word in $S^{132}_{k,2i-2}$, $x$ is the minimum letter in $w$ (possibly, there are other copies of $x$ in $w$) and $y$ is the maximum letter in $w$ (possibly, there are other copies of $y$ in $w$).\\end{lem}\n\n \\begin{proof}\n If $x$ is not the minimum letter in $w$, then the minimum letter of $w$, the letter right before $x$, and $x$ itself form the pattern 132.\n Since $w$ is irreducible, $y$ is forced to be greater than the minimum letter in $w_1$, since otherwise cutting $w$ right before $x$ would make $w$ reducible.\n On the other hand, if $y$ is not the maximum letter in $w$, then a letter of $w_1$ smaller than $y$, the maximum letter in $w_1$, and $y$ would form the pattern 132.\n This completes the proof.\n \\end{proof}\n\nNow, given a word $w$ in $S^{132}_{k,2i}$, we can count in how many ways it can be extended to an irreducible word in $S^{132}_{k,2i+2}$. Suppose that $a$ and $b$ are the minimum and the maximum letters in $w$, respectively. Then the number of extensions of $w$ in $S^{132}_{k,2i+2}$ is $a\\cdot(k-b+1)$, since there are $a$ choices of the next-to-last letter and $k-b+1$ choices of the last letter.\n\nNext, we discuss a procedure of turning any word $w$ in $S^{123}_{k,2i}$ into an irreducible word in $S^{123}_{k,2i+2}$. 
From this procedure, it will be clear that the number of choices is $a\\cdot(k-b+1)$, where $a$ and $b$ are the minimum and the maximum letters in $w$, respectively.\n\nSuppose that $w=b_1 t_1 b_2 t_2 \\cdots b_i t_i$, where $b_j$'s and $t_j$'s stand for bottom and top letters, respectively.\nTo obtain the desired word, we insert a new top letter $x$, shift the bottom letters one position to the left, and then insert one more bottom letter $y$.\nThen, the extension is of the form\n$$w'=b_1 x b_2 t_1 b_3 t_2 \\cdots b_i t_{i-1} y t_i,$$\nwhere $x\\ge b$ and $y\\le a$, and again, $a$ and $b$ are the minimum and the maximum letters in $w$, respectively.\nFor example, if $w = 242313\\in S^{123}_{5,6}$, $a=1$ and $b=4$, then $w'$ can be $24241313$ or $25241313$.\n\nTo see that the resulting word $w'$ is an up-down word, it is sufficient to show that $b_{j+1} < t_{j}$ for $1\\le j\\le i-1$ and $t_j > b_{j+2}$ for $1\\le j\\le i-2$. The first inequality follows from the fact that $w$ is an up-down word, while the second one holds because $w$ is 123-avoiding: if $t_j\\le b_{j+2}$, then $b_{j+1}<t_j\\le b_{j+2}<t_{j+2}$, and the three letters $b_{j+1}b_{j+2}t_{j+2}$ would form the pattern $123$ in $w$. Since the number of extensions is again $a\\cdot(k-b+1)$, matching the count for $S^{132}_{k,2i}$, we obtain the desired correspondence between irreducible words, and hence a bijection between $S^{132}_{k,2i}$ and $S^{123}_{k,2i}$.\n\nWe next turn to $312$-avoiding up-down words. Let $C_{k,\\ell}$ denote the number of $312$-avoiding up-down words of length $\\ell$ over $[k]$, and let $C_{k,2i}^j$ denote the number of those of length $2i$ that end with $j$. Observe that a word $w$ counted by $C_{k,2i}^j$ cannot contain a letter $j'>j$, since such a letter $j'$ in $w$ would lead to an occurrence of three letters $j'w_{2i-1}j$ forming the pattern $312$. Thus, we have that $$C_{k,2i}^j=C_{j,2i}^j,$$ where $2\\leq j\\leq k$.\n\nMoreover, for any word in $S^{312}_{j,2i}$ ending with $j$, we can remove $j$ to form a word of length $2i-1$, which is also $312$-avoiding. On the other hand, for any word in $S^{312}_{j,2i-1}$, we can adjoin a letter $j$ at the end to form a $312$-avoiding word of length $2i$. 
Thus, $$C_{j,2i}^j=C_{j,2i-1}.$$\nSo, we obtain that\n\\begin{align*}\n C_{k,2i} &= \\sum_{j=2}^{k}C_{j,2i-1}\\\\\n &=\\sum_{j=2}^{k}\\frac{i-3+2j}{i(i+1)}\\binom{i+j-3}{ i-1}\\binom{i+j-2}{i-1}\\\\\n &=\\frac{1}{i+1} \\binom{i+k-2}{i}\\binom{i+k-1}{i}\\\\\n &=A_{k,2i},\n\\end{align*}\nwhere the second-to-last equality can be verified by Gosper's algorithm \\cite{Petkovsek1996AB}.\n\nWe have thus obtained the main result of this section.\n\\begin{thm}\\label{123-312}\nThe sets of $312$-avoiding up-down words and $123$-avoiding up-down words are equinumerous, that is, $$C_{k,\\ell}=A_{k,\\ell}$$ for all $k\\geq 1$ and $\\ell\\geq 0.$\n\\end{thm}\n\n\n\n\\section{Enumeration of other pattern-avoiding up-down words}\\label{all-cases}\n\n\nIn this section, we consider the enumeration of other pattern-avoiding up-down words.\nIn order to avoid confusion, let $N_{k,\\ell}(p)$ denote the number of $p$-avoiding up-down words of length $\\ell$ over the alphabet $[k]$.\n\n\nWe first consider the avoidance of each of the six permutation patterns of length 3 on up-down words of odd length.\n\\begin{thm} For all $k\\geq 1$ and $i\\geq 0$, we have\n$$N_{k,2i+1}(123)=N_{k,2i+1}(312)=N_{k,2i+1}(213)\n=N_{k,2i+1}(321)$$\nand\n$$N_{k,2i+1}(132)=N_{k,2i+1}(231).$$\n\\end{thm}\n\n\n\\begin{proof}\n The reverse operation preserves up-down words of odd length and reverses patterns, so the following equations hold:\n$$N_{k,2i+1}(132)=N_{k,2i+1}(231),$$\n$$N_{k,2i+1}(123)=N_{k,2i+1}(321),$$ and\n$$N_{k,2i+1}(312)=N_{k,2i+1}(213).$$\nCombining this with Theorem \\ref{123-312}, the proof is complete.\n\\end{proof}\n\nNext, we obtain the following result for the case of even length.\n\\begin{thm}\nFor all $k\\geq 1$ and $i\\geq 1$, we have\n$$N_{k,2i}(123)=N_{k,2i}(132)=N_{k,2i}(312)=N_{k,2i}(213)\n=N_{k,2i}(231).$$\n\\end{thm}\n\\begin{proof}\n The composition of the complement and reverse operations preserves up-down words of even length, and it follows that \n$$N_{k,2i}(132)=N_{k,2i}(213),$$ and\n$$N_{k,2i}(231)=N_{k,2i}(312).$$\nFrom Section \\ref{bijection-sec}, we have 
that\n$$N_{k,2i}(132)=N_{k,2i}(123).$$\nTogether with Theorem \\ref{123-312}, this completes the proof.\n\\end{proof}\n\n\nIn the rest of this section, we deal with the only remaining case, 321-avoiding up-down words of even length. Our approach is based on deriving the desired count from an alternative enumeration of 123-avoiding up-down words.\n\n\nAll 123-avoiding up-down words of length $\\ell$ over $[k]$, for $\\ell \\geq 4$, can be divided into the following two cases:\n\\begin{itemize}\n\\item Legal words containing no $k$ in them. These words are counted by $A_{k-1, \\ell}$.\n\\item Legal words that contain at least one $k$. Such words $w=w_1w_2\\cdots w_{\\ell}$ are necessarily of the form $w_1kw_3\\cdots w_{\\ell}$, since otherwise $w_1w_2k$ would be an occurrence of the pattern $123$. \nClearly, $w_1\\geq w_3$ (otherwise, $w_1w_3w_4$ would form the pattern 123).\nWe let $w'$ be $k w_3\\cdots w_{\\ell}$ if $w_1=w_3$ and $w_1 w_3\\cdots w_{\\ell}$ if $w_1>w_3$.\nClearly, this is a reversible procedure and the obtained words $w'$ are 123-avoiding down-up words. 
By applying the complement operation, we obtain 321-avoiding up-down words over $[k]$ of length $\\ell-1$.\n\\end{itemize}\nIt follows that for $\\ell \\geq 4$, $$A_{k, \\ell}=A_{k-1, \\ell}+N_{k, \\ell-1}(321),$$\nand thus \n$$N_{k, \\ell-1}(321)= A_{k, \\ell}-A_{k-1, \\ell}.$$\nHence, by Theorem \\ref{main-thm}, we obtain an expression for $N_{k, \\ell}(321)$.\n\n\\begin{thm}\\label{thm-321} For the number of $321$-avoiding up-down words of length $\\ell$ over $[k]$, $N_{k,0}(321)=1$, $N_{k,1}(321)=k$, $N_{k,2}(321)=\\binom{k}{2}$, and for $\\ell\\geq 3$,\n\\begin{align*}\n N_{k, \\ell}(321)=\n\\begin{cases}\n\\frac{i (i+2 k-3) (i+2 k-2)+2 (k-2) (k-1)}{(i+1) (i+2) (k-2) (k-1)} \\binom{i+k-2}{i} \\binom{i+k-3}{i} , & \\mbox{ if } \\ell=2i,\\\\[6pt]\n \\frac{i+2k-2}{(i+1)(i+2)} \\binom{i+k-2}{i} \\binom{i+k-1}{i}, & \\mbox{ if } \\ell=2i+1.\n\\end{cases}\n\\end{align*}\n\\end{thm}\n\nSince $N_{k,2i+1}(123)=N_{k,2i+1}(321)$, this also provides an alternative derivation for $321$-avoiding up-down words of odd length.\n\n\n\\section{Concluding remarks}\\label{last-sec}\n\nIn this paper we initiated the study of (pattern-avoiding) alternating words. In particular, we have shown that 123-avoiding up-down words of even length are given by the Narayana numbers. Thus, alternating words can be used, for example, for encoding Dyck paths with a specified number of peaks \\cite{GaoZhang}. To our surprise, the enumeration of 123-avoiding up-down words turned out to be easier than that of 132-avoiding up-down words, as opposed to similar studies for permutations, where the structure of 132-avoiding permutations is easier than that of 123-avoiding permutations.\n\nAbove, we gave a complete classification of avoidance of permutation patterns of length~$3$ on alternating words. 
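All of the closed forms above (Theorem \ref{main-thm}, Theorem \ref{thm-321}, and the equality $C_{k,\ell}=A_{k,\ell}$ of Theorem \ref{123-312}) can be sanity-checked by brute force for small $k$ and $\ell$. The following Python sketch is our own illustration, not part of the original argument:

```python
from itertools import combinations, product
from math import comb

def occurs(w, pat):
    """Does w contain the length-3 permutation pattern pat as a subsequence?"""
    return any(len(set(t)) == 3 and
               tuple(sorted(t).index(x) + 1 for x in t) == pat
               for t in combinations(w, 3))

def brute(k, length, pat):
    """Brute-force count of pat-avoiding up-down words over [k]."""
    return sum(1 for w in product(range(1, k + 1), repeat=length)
               if all(w[j] < w[j + 1] if j % 2 == 0 else w[j] > w[j + 1]
                      for j in range(length - 1))
               and not occurs(w, pat))

def A(k, length):
    """Closed form for 123-avoiding up-down words (Theorem main-thm)."""
    if length == 1:
        return k
    i, odd = divmod(length, 2)
    c = comb(i + k - 2, i) * comb(i + k - 1, i)
    return c * (i + 2 * k - 2) // ((i + 1) * (i + 2)) if odd else c // (i + 1)

def N321(k, length):
    """Closed form for 321-avoiding up-down words (Theorem thm-321), k >= 3."""
    if length <= 2:
        return (1, k, comb(k, 2))[length]
    i, odd = divmod(length, 2)
    if odd:
        return A(k, length)  # odd lengths: same count as for 123-avoidance
    num = i * (i + 2 * k - 3) * (i + 2 * k - 2) + 2 * (k - 2) * (k - 1)
    return (num * comb(i + k - 2, i) * comb(i + k - 3, i)
            // ((i + 1) * (i + 2) * (k - 2) * (k - 1)))

for k in range(2, 5):
    for l in range(7):
        assert A(k, l) == brute(k, l, (1, 2, 3))          # Theorem main-thm
        assert A(k, l) == brute(k, l, (3, 1, 2))          # C_{k,l} = A_{k,l}
        if k >= 3:
            assert N321(k, l) == brute(k, l, (3, 2, 1))   # Theorem thm-321
```

For instance, for $k=3$ and $\ell=4$ there are eight up-down words, of which six avoid 123, six avoid 312, and all eight avoid 321, in agreement with the formulas.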
We leave it as an open direction of research to study the avoidance of longer patterns and\/or patterns of other types (see~\\cite{Kitaev2011Patterns}) on alternating (up-down or down-up) words.\n\n\\vskip 3mm\n\\noindent {\\bf Acknowledgments.} \nThis work was supported by the 973 Project, the PCSIRT Project of the Ministry of Education and the National Science Foundation of China.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}