arXiv: 2309.02462
Authors: Pol Suárez, Francisco Alcántara-Ávila, Arnau Miró, Jean Rabault, Bernat Font, Oriol Lehmkuhl, R. Vinuesa
Published: 2023-09-04T13:30:29Z
Link: http://arxiv.org/abs/2309.02462v1
# Active flow control for three-dimensional cylinders through deep reinforcement learning

###### Abstract

This paper presents for the first time successful results of active flow control with multiple independently controlled zero-net-mass-flux synthetic jets. The jets are placed on a three-dimensional cylinder along its span with the aim of reducing the drag coefficient. The method is based on a deep-reinforcement-learning framework that couples a computational-fluid-dynamics solver with an agent using the proximal-policy-optimization algorithm. We implement a multi-agent reinforcement-learning framework which offers numerous advantages: it exploits local invariants, makes the control adaptable to different geometries, facilitates transfer learning and cross-application of agents and results in significant training speedup. In this contribution we report significant drag reduction after applying the DRL-based control in three different configurations of the problem.

## 1 Introduction

Recent advances in the aerospace community demonstrate a growing interest in exploring new strategies for reducing emissions generated by the aviation industry. The implementation of active-flow-control systems, which aim to reduce drag, plays a vital role in the search for sustainable solutions that can effectively reduce fuel consumption, mitigate pollution and minimize vehicle transport emissions. Over the past decades, the industry has witnessed the deployment of flow-control techniques, encompassing both passive and active approaches. One common example of passive flow control is the use of winglets on aircraft. Winglets reduce lift-induced drag on the entire wing, resulting in improved fuel efficiency and overall drag reduction. On the other hand, active control involves dynamic strategies to manipulate the flow. Synthetic jet actuators are one example of active-flow-control devices.
Controlled bursts of air can lead to drag reduction, improved lift-to-drag ratios and enhanced aircraft performance. Machine-learning (ML) techniques have emerged as a valuable tool, offering the potential to uncover novel strategies highly relevant to the aerospace sector. Deep reinforcement learning (DRL) and neural networks have particularly demonstrated great promise, enabling the development of effective control strategies at a reasonable computational cost. Recent research leveraging DRL has been carried out for two-dimensional (2D) cylinders at low Reynolds numbers and known geometries, actuated by zero-net-mass-flux jets on the surface (Rabault et al. (2019), Tang et al. (2020)), and also at slightly higher Reynolds numbers (Varela et al. (2023)). For a more detailed overview of recent advances in flow control, we refer to, _e.g._, Vignon et al. (2023b) or Brunton & Noack (2015). DRL is based on maximizing a reward function (\(R\)), which is provided to an agent that interacts continuously with an environment through several action (\(A\)) inputs. The agent receives information about the environment state at each actuation step thanks to partial observations (\(O_{st}\)) of the system. Note that a sequence of consecutive actions is denoted an episode. When a batch of episodes is finished, the agent updates the neural-network weights in order to progressively determine a configuration that yields the maximum expected reward accumulated in time, for a given observation state. The primary objective of this study is to extend the knowledge gained from successful studies where DRL is applied to 2D cylinders (Varela et al. (2023)), turbulent channels (Guastoni et al. (2023)) and Rayleigh–Bénard convection problems (Vignon et al. (2023a)) to the scenario of three-dimensional (3D) cylinders equipped with multiple actuators on their surfaces.
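The interaction loop described above (observations \(O_{st}\), actions \(A\), and a reward \(R\) accumulated over an episode) can be sketched as follows. This is a minimal toy stand-in, not the CFD-coupled setup of the paper: `policy` replaces the PPO network and `env_step` replaces the flow solver advancing one actuation period.

```python
import numpy as np

def run_episode(policy, env_step, n_actions, obs0):
    """Roll out one episode: a sequence of consecutive actions.

    policy:   maps observation -> action (stand-in for the agent's network)
    env_step: maps (obs, action) -> (next_obs, reward); a toy stand-in
              for the CFD solver advancing one actuation period.
    """
    obs, total_reward = obs0, 0.0
    for _ in range(n_actions):
        action = policy(obs)
        obs, reward = env_step(obs, action)
        total_reward += reward
    return total_reward

# Toy example: reward is highest when the action exactly cancels the
# observation, so a cancelling policy accumulates zero penalty.
rng = np.random.default_rng(0)
policy = lambda o: -o
env_step = lambda o, a: (rng.standard_normal(), -abs(o + a))
best = run_episode(policy, env_step, n_actions=120, obs0=0.5)
```

The 120 actions per episode mirror the episode length used in the paper.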
In this new setting, the agent observes the transition from laminar to turbulent flow in the cylinder wake and devises strategies to exploit structures of different spanwise wavelengths. As a result, each interaction of the agent is associated with a high computational cost, because a larger numerical problem needs to be solved per interaction. Therefore, throughout this work, striking a balance between acceptable training times and optimal control performance remains a key consideration.

## 2 Methodology

**Problem configuration and numerical setup.** The problem consists of a 3D cylinder with a constant inlet-velocity boundary condition \(U_{\rm in}/U_{\infty}=1\). All lengths are normalized with the cylinder diameter \(D\). The fluid domain has a streamwise length of \(L_{x}=30D\) and a height of \(L_{y}=15D\), with the cylinder located at \([7.5,7.5]D\) in the \(xy\) plane. Regarding the spanwise length, two configurations are investigated: \(L_{z}=4D\) and \(10D\), see Table 1. We studied three different training setups, denoted W85, N85 and N255. The letter corresponds to the domain type: W for wide (\(L_{z}=10D\)) and N for narrow (\(L_{z}=4D\)). The number denotes the \(O_{st}\) size: 85 or 255 probes, _i.e._ without or with neighboring observations, as explained in detail below. Regarding the rest of the boundary conditions, the top, bottom and outflow surfaces (parallel to the \(xz\) plane) are defined as outlets with zero velocity gradient and constant pressure. No-slip conditions, \(U/U_{\infty}=0\), are imposed on the cylinder walls, and periodic boundary conditions are imposed in the spanwise direction. The coordinate-system origin is placed at the front-face left-bottom corner, as seen in the schematic representation of the domain in Figure 1. The cylinder has two synthetic jets, placed at the top and bottom, with an arc length of \(w=10^{\circ}\) each.
The actuator positions are chosen to avoid direct momentum injection, so that any drag reduction comes from effective actuation. The jet velocities \(V_{\rm jet}\) are a function of both the jet angle \(\Theta\) and the desired mass-flow rate \(Q\) determined by the DRL control. The jet velocity profile is defined as follows: \[V_{\rm jet}(Q,\Theta)=Q\frac{2\pi}{\omega D^{2}}\cos\left[\frac{\pi}{\omega}(\Theta-\Theta_{0})\right], \tag{1}\] where \(\omega\) is the jet width and \(\Theta_{0}\) corresponds to the angle at which the jet is centered (in this problem, \(90^{\circ}\) and \(270^{\circ}\)). The scaling factor ensures that the integral of the jet velocity corresponds to the mass-flow rate, and the cosine function ensures zero velocity at the boundaries with the cylinder. The flow profile within an individual actuator jet is constant along its spanwise length; no spatial smoothing is needed at the arc boundaries between adjacent actuators. The Reynolds numbers considered, \(Re=\rho U_{\rm in}D/\mu\) (where \(\rho\) is the density and \(\mu\) is the molecular viscosity), are 100, 200, 300 and 400. This range contains the transition from laminar flow to the emergence of three-dimensional instabilities in the cylinder wake (Williamson (1996) and Zhang et al. (1995)). The motivation of this work is to assess how the control is capable of tackling and exploiting the different wake structures in 3D. The numerical simulations are carried out by means of the numerical solver Alya (Vazquez et al. (2016)). The spatial discretization is based on the finite-element method (FEM), and the incompressible Navier-Stokes equations are considered. It is worth noting that, due to the large amount of training required for the DRL control considered here, the computational cost of the numerical simulations dominates the overall wall-clock time.
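Equation (1) can be checked numerically: the cosine factor makes the velocity vanish at the jet edges \(\Theta_{0}\pm\omega/2\), and the profile is linear in \(Q\), so opposite-signed jets cancel exactly. A small sketch (with \(D\) taken as the reference length, an assumption for illustration):

```python
import numpy as np

D = 1.0                          # reference cylinder diameter
omega = np.deg2rad(10.0)         # jet arc width w = 10 degrees

def v_jet(Q, theta, theta0):
    """Jet velocity profile of Eq. (1), centred at theta0."""
    return Q * (2.0 * np.pi / (omega * D**2)) * np.cos(np.pi * (theta - theta0) / omega)

theta0 = np.pi / 2               # top jet at 90 degrees
edge = theta0 + omega / 2        # boundary between jet and cylinder wall
```

A quick evaluation confirms zero velocity at both edges and a positive peak at the jet centre for positive \(Q\).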
When designing the mesh, a compromise between cost and accuracy has been made, ensuring that the chosen mesh effectively captures the primary structures and wavelengths in the cylinder wake. This provides the agent with the essential information for controlling them.

**Multi-agent reinforcement learning (MARL).** Previous work on 2D cylinders used single-agent reinforcement learning (SARL), where the whole set of actions is decided at once. Note that, as the action space grows, this becomes a much more challenging task, because the best policy must be found for a high-dimensional control. Therefore, SARL is not a viable option for 3D cylinders: the agent needs many more episodes to explore all possible combinations of the \(n\) jets located on the cylinder surface, and, since the computational cost per action is orders of magnitude higher than in 2D environments, the total wall-clock time required becomes excessive. On the other hand, the potential of MARL in these cases has recently been documented by Belus et al. (2019) and Vignon et al. (2023a). The MARL framework avoids the curse of dimensionality present in this particular setup. This approach, in contrast to SARL, trains locally on environment partitions, denoted pseudo environments, all of which share the same neural-network weights. In doing so, the high-dimensional control space becomes tractable and the agent is trained in smaller domains to maximize the local rewards; some additional features are added to ensure the pursuit of global reward maxima.

Figure 1: Non-dimensionalized configuration (reference cylinder diameter \(D\)), where \(w\) is the jet width and \(\Theta_{0}\) is the angular location of each jet. In green, we show the velocity condition for the inlet \(U_{\rm in}\) and the sinusoidal profile in the jets. This representation is not to scale.

The agent interacts with the numerical-simulation domain through three main channels.
The observation state \(O_{st}\), sent from the simulation to the agent, consists of partial pressure information from slices of 85 probes in the wake, centered on the corresponding pseudo-environment location in \(z\) (not shown here). In the present work, two configurations are considered, _i.e._ with or without observation of neighboring pressure values, as shown in Table 1. The neighboring configuration consists of adding one slice on each side (one set of 85 probes per side), so that the observation state becomes three slices of probes, 255 pressure values in total. The simulation also sends the agent the total reward \(R\) of the DRL environment, see Equation (2) below, defined as a weighted sum of the local and global rewards. The scalar \(R_{\rm n}\) scales the reward signal into the \([-1,1]\) range required by the Tensorforce library. A heuristic parameter \(\beta\) is introduced to balance the local and global rewards; in this work it is set to \(\beta=0.8\). The rewards \(r\), Equation (3), are computed as a function of the drag-coefficient reduction, \(\Delta C_{d}=C_{d}(t,i_{\rm jet})-C_{d_{\rm h}}\), where \(C_{d_{\rm h}}\) is the known uncontrolled baseline value. In addition, a lift contribution multiplied by \(\alpha\) acts as a penalty to avoid axis switching and to ensure a reduction only in the streamwise force component. The aerodynamic forces (\(C_{d}\) and \(C_{l}\)) are defined in Equation (4). The frontal area \(A_{f}\) corresponds to the local pseudo-environment surface for \(C_{d,l_{\rm local}}\) and to the whole cylinder surface for \(C_{d,l_{\rm global}}\). \[R(t,i_{\rm jet})=R_{\rm n}(\beta r_{\rm local}(t,i_{\rm jet})+(1-\beta)r_{\rm global}(t)), \tag{2}\] \[r(t,i_{\rm jet})=C_{d_{\rm h}}-C_{d}(t,i_{\rm jet})-\alpha|C_{l}(t,i_{\rm jet})|, \tag{3}\] \[\text{where}\quad C_{d}=\frac{2F_{x}}{\rho A_{f}v_{\infty}^{2}}\quad\text{and }\quad C_{l}=\frac{2F_{y}}{\rho A_{f}v_{\infty}^{2}}. 
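Equations (2) and (3) translate directly into code. The defaults below use \(\alpha=0.6\) (Table 1), \(\beta=0.8\) as stated in the text, and treat \(R_{\rm n}\) as a simple multiplicative scalar:

```python
def local_reward(cd, cl, cd_baseline, alpha=0.6):
    """Eq. (3): reward grows with drag reduction, penalized by |lift|."""
    return cd_baseline - cd - alpha * abs(cl)

def total_reward(r_local, r_global, beta=0.8, r_n=1.0):
    """Eq. (2): blend of local and global rewards, scaled by R_n."""
    return r_n * (beta * r_local + (1.0 - beta) * r_global)

# A drag coefficient below the baseline yields a positive reward,
# reduced by the lift penalty: (1.2 - 1.0) - 0.6 * 0.1 = 0.14.
r = local_reward(cd=1.0, cl=0.1, cd_baseline=1.2)
```

With \(\beta=0.8\) the local term dominates, which is what drives each pseudo environment toward its local optimum while still feeling the global drag.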
\tag{4}\] The action \(A\) is computed by the agent based on the state of the system. The DRL library employed here outputs this value in the range \([-1,1]\), so it is rescaled as \(Q=AQ_{\rm max}\) in order to avoid excessively large actuations. During training we observed that the \(Q_{\rm max}\) obtained in 2D studies was not adequate in the context of the present 3D cylinders; thus, \(Q_{\rm max_{\rm 3D}}=2Q_{\rm max_{\rm 2D}}=0.176\) was set to yield adequate results. Note that, based on Equation (1), \(Q\) is directly related to the mass-flow rate from the jet. For each pseudo environment, we impose opposite action values on the top and bottom jets, _i.e._ \(Q_{\rm 90^{\circ}}=-Q_{\rm 270^{\circ}}\), in order to ensure a global zero net mass-flow rate. Although the energy consumption of the actuator needs to be taken into account to calculate the net energy saving, this is highly dependent on the actual experimental setup. In the present numerical setting the cost of the control is negligible compared with the drag reduction (Guastoni et al. (2023)); this would not necessarily be true in an experiment. Every action \(A\) from the agent is applied to the system during \(T_{a}\) time units, and the jet boundary conditions are updated following Equation (1). The transition in time between actions, \(Q_{t}\to Q_{t+1}\), follows an exponential function of time. Some DRL setup parameters are closely related to the fluid-mechanics problem at hand. The duration of an episode is defined to contain 6 vortex-shedding periods (\(T_{k}=1/f_{k}\)). In this case, the Strouhal number, \(St=f_{k}D/U_{\infty}\), for the range of Reynolds numbers under consideration is around \(0.2\). Note that we set \(T_{a}<0.05T_{k}\), in agreement with the recommendations from previous publications. Consequently, a total of 120 actuations per episode is considered adequate to evaluate the accumulated reward.
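The rescaling and the antisymmetric top/bottom pairing can be sketched as follows. The paper only states that the transition between consecutive actions is exponential in time; the concrete blending form and the time scale `tau` below are assumptions for illustration:

```python
import numpy as np

Q_MAX_3D = 0.176                     # twice the 2D value of 0.088

def rescale_action(a):
    """Map agent output A in [-1, 1] to mass-flow rate Q = A * Q_max."""
    return float(np.clip(a, -1.0, 1.0)) * Q_MAX_3D

def transition(q_old, q_new, t, tau=0.1):
    """Exponential blend between consecutive actions Q_t -> Q_{t+1}.

    Assumed form: relaxes from q_old toward q_new with time scale tau.
    """
    return q_new + (q_old - q_new) * np.exp(-t / tau)

q_top = rescale_action(0.5)          # Q for the 90-degree jet
q_bottom = -q_top                    # opposite sign: zero net mass flux
```

Clipping the raw agent output before rescaling guards against out-of-range actions; the antisymmetric pairing enforces the zero-net-mass-flux constraint by construction.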
Also note that every episode starts from an uncontrolled converged state of the problem. The neural-network architecture consists of two dense layers of 512 neurons each. A proximal-policy-optimization (PPO) agent, based on a policy-gradient method, updates the neural-network weights. The open-source library Tensorforce is used (Kuhnle et al. (2017)). The batch size, _i.e._ the total number of experiences the PPO agent uses for each gradient-descent iteration, is set to 80, different from the standard size of 20 used in previous implementations. This was modified for computational-cost and multi-environment-synchronization purposes, providing an adequate configuration to run enough experiences simultaneously to update the neural networks efficiently. With 8 environments (independent simulations) of 10 pseudo environments each running at the same time, it is essential not to lose any information when the next \(10\times 8\) experiences begin; consequently, the next episodes do not start until the neural-network weights are updated.
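The weight sharing at the heart of the MARL setup can be sketched as follows; the linear policy here is a hypothetical stand-in for the actual two-layer, 512-neuron Tensorforce network, but it illustrates how every pseudo environment queries one and the same set of weights on its local 85-probe observation:

```python
import numpy as np

class SharedPolicy:
    """One set of weights, queried by every pseudo environment (MARL)."""

    def __init__(self, n_obs, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.standard_normal(n_obs) * 0.01   # shared weights

    def act(self, local_obs):
        # tanh keeps the action in [-1, 1], as the DRL library does
        return np.tanh(self.w @ local_obs)

policy = SharedPolicy(n_obs=85)      # 85 local pressure probes
# Ten pseudo environments (one per jet pair along the span), here with
# synthetic observations in place of the CFD pressure slices:
pseudo_envs = [np.random.default_rng(i).standard_normal(85) for i in range(10)]
actions = [policy.act(obs) for obs in pseudo_envs]
```

Because all pseudo environments share `policy.w`, every experience they generate contributes to updating the same network, which is what makes the 10x8 experiences per batch usable for a single gradient step.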
| Parameter | 2D | W85 | N85 | N255 |
| --- | --- | --- | --- | --- |
| Mesh size [elements] | 43600 | 6.2M | 2.6M | 2.6M |
| \(L_{x}/D\) | 30 | 30 | 30 | 30 |
| \(L_{y}/D\) | 15 | 15 | 15 | 15 |
| \(L_{z}/D\) | – | 10 | 4 | 4 |
| \(L_{\rm jet}/D\) | – | 1 | 0.4 | 0.4 |
| Probes in \(O_{st}\) | 85 | 85 | 85 | 255 |
| \(Q_{\rm max}\) | 0.088 | 0.176 | 0.176 | 0.176 |
| Jets per pseudo environment | 2 | 2 | 2 | 2 |
| \(T_{k}\) [TU] | 5 | 5 | 5 | 5 |
| \(T_{a}\) [TU] | 0.25 | 0.25 | 0.25 | 0.25 |
| Actions/episode | 100 | 120 | 120 | 120 |
| CPUs/environment | 1 | 128 | 128 | 128 |
| Parallel environments | 10 | 8 | 8 | 8 |
| Baseline duration [TU] | 150 | 150 | 150 | 150 |
| Lift penalty \(\alpha\) | 0.6 | 0.6 | 0.6 | 0.6 |
| Neurons (layers) | 512 (2) | 512 (2) | 512 (2) | 512 (2) |
| Reynolds numbers | 100, 200, 300, 400 | 100, 200, 300, 400 | 100, 200, 300, 400 | 100, 200, 300, 400 |
| Time smoothing | linear | exponential | exponential | exponential |

Table 1: Main parameters of the simulations for each training setup, compared with the 2D benchmark.

It is important to mention that there is an individual agent for each Reynolds number and case setup. Although transfer-learning techniques have shown good potential, they are not applied in the present work, because the focus here is to compare MARL setups and to define a benchmark assessing how the agents discover approaches to control wake instabilities.

## 3 Results and discussion

To the authors' knowledge, this study constitutes the first time that a 3D cylinder with a multiple-jet configuration is successfully trained using MARL. Figure 2 shows the rewards \(R\) of all pseudo environments, together with the pure drag reduction and the lift-bias penalization.
For instance, in the \(Re=200\) case, the lift contribution to the reward is close to zero in the later episodes, a fact that indicates that the agent has discovered a very good control approach. Also note that, even if the reward remains close to 0, the agent may be learning. This can be observed in the \(Re=400\) case, where a strategy with great drag reduction is achieved, although at the cost of inducing lift biases. The aerodynamic impact of such an effect needs to be accounted for when assessing the merit of the control approach. After finishing the various training cases, the performance of the agent is evaluated by running the obtained policy in deterministic mode. In this case, the policy is evaluated without any exploration: the agent computes the most probable value of the action (\(A\)) probability distribution, which ensures the maximum expected reward. Each case runs until the control converges to a periodic behaviour. All the cases lead to effective drag reduction. The drag-reduction rates reported for all the 3D cases in Table 2 differ slightly from those obtained in 2D. In the latter case the physics is significantly constrained and, as expected, the discrepancy between 2D and 3D results increases with \(Re\) (in both controlled and uncontrolled cases). When comparing the drag-coefficient signals (Figure 3), we observe that the performance of the 3D control strategies is more consistent for increasing \(Re\) than that of the 2D cases. Thus, while the performance of the 2D model at \(Re=400\) is degraded compared with the low-\(Re\) cases, the 3D case still exhibits excellent performance. Note that all results are presented in dimensionless units. As shown in Figure 4, the DRL control leads to an attenuation of the vortex-shedding strength, as illustrated by visualizing the vortical structures (Hunt (1987)).
Also, note that the control gives rise to vortex-street instabilities earlier at \(Re=200\) than in the uncontrolled cases (not shown here). The time series of the actions \(Q\) for the various cases are shown in Figure 5. We note that, for \(Re=200\), between \(t=150\) and \(200\) there is a noticeable change in the amplitude of the signal. This shows how the policy is able to exploit combinations of \(Q\) while avoiding the curse of dimensionality. In an overall analysis, the blowing intensity differs in 3D with respect to 2D because the physical system changes as we increase the Reynolds number. However, the actual control in most of the 3D cases presented seems to follow an "extruded" strategy: all jets blow in sync and can be approximated by a constant velocity profile along the cylinder span. The three-dimensional instabilities that appear may be too weak to dominate the near wake, and the actions may not exploit them yet because of the low-\(Re\) regime or the configuration studied here. Our results suggest that shorter jets in the spanwise direction may be better suited to deal with higher Reynolds numbers, although this point will be further assessed in future work. The cylinders studied here may also be too short to reveal larger-scale patterns in the control. Smaller \(L_{\rm jet}\), with the correspondingly more local observations, can be key to discovering non-"extruded" control that exploits the wake instabilities.

| \(Re\) | 2D benchmark | W85 | N85 | N255 |
| --- | --- | --- | --- | --- |
| 100 | 13.0 | 9.4 | 4.3 | 8.0 |
| 200 | 14.9 | 17.2 | 11.1 | 12.7 |
| 300 | 21.9 | 6.7 | 10.8 | 15.3 |
| 400 | 5.6 | 9.9 | 15.1 | 11.1 |

Table 2: Summary of the percentual drag reduction \([(1-C_{d_{\rm DRL}}/C_{d_{\rm h}})\times 100\%]\) obtained in the deterministic converged stage for each case.

Figure 2: Training curves showing the reward in the N255 case for \(Re=100\), \(200\), \(300\) and \(400\) (from top to bottom).
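The percentages in Table 2 follow directly from the formula in its caption. A one-line sketch, using a hypothetical baseline coefficient \(C_{d_{\rm h}}=1\) (the actual \(C_d\) values are not reported here):

```python
def drag_reduction_pct(cd_drl, cd_baseline):
    """Percentual drag reduction as defined in Table 2:
    (1 - Cd_DRL / Cd_h) * 100."""
    return (1.0 - cd_drl / cd_baseline) * 100.0

# Hypothetical example: a controlled Cd of 0.828 against a baseline of
# 1.0 corresponds to a 17.2% reduction (the W85, Re = 200 figure).
reduction = drag_reduction_pct(0.828, 1.0)
```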
Note that the narrower jets receive a more local observation and can in principle exploit the position of the wake structures (modes A and B, as described in the literature, Williamson (1996)) to develop policies leading to higher drag reduction. In future work we will investigate whether this "extruded" strategy is best in general or whether more sophisticated control emerges in different setup configurations. The recirculation bubble downstream of the cylinder is studied through the mean streamwise velocity in Figure 6. This figure indicates that the reattachment location is delayed in the controlled cases, which exhibit a higher velocity than the uncontrolled one for larger \(x/D\), a fact that indicates that the wake is less affected by the bluff body.

## 4 Conclusions

In this study, the MARL framework is coupled with the numerical solver Alya to find optimal drag-reduction strategies by controlling multiple jets placed along the spanwise direction of a 3D cylinder. Recent state-of-the-art studies of DRL control of 2D cylinders have been extended with new implementations to account for the three-dimensionality of the wake. This study is carried out in the transition regime where vortex-street instabilities emerge, which constitutes an additional challenge for DRL at the various Reynolds numbers. Our results indicate that MARL is essential to achieve learning in the cases under study, exploiting the underlying physics within pseudo environments and optimizing the global problem through multiple interactions in parallel. Further investigations will explore the size of the action space and the jet dimensions. One of the main advantages of MARL is the ability to deploy the trained agents on different cylinder lengths and actuator numbers while maintaining \(L_{\mathrm{jet}}\), since the training focuses on the symmetries and invariant structures along the spanwise direction.
This would not be possible with SARL, which is restricted to a certain number of actuators. Furthermore, MARL allows performing cheaper training sessions in smaller, under-resolved domains, speeding up the process, before tackling the control in high-fidelity simulations. The training results demonstrate effective control for \(Re=100\), 200, 300 and 400, achieving drag reductions of 9.4%, 17.2%, 6.7% and 9.9%, respectively, when using a jet length of \(L_{\mathrm{jet}}/D=1\). For a jet length of \(L_{\mathrm{jet}}/D=0.4\), the drag reduction is 4.3%, 11.1%, 10.8% and 15.1% with local observation, and 8.0%, 12.7%, 15.3% and 11.1% when extending the observation to spanwise neighbors. These findings highlight the effectiveness of the training process in achieving significant drag reduction across different cases with slightly different DRL configurations. Future work will leverage the present coupling between MARL and active-flow-control problems for more realistic cases, scaling up to turbulent regimes and more complex geometries. Furthermore, the present results provide new benchmarks for the DRL community, which may motivate its use in future applications.

Figure 3: Evolution of the drag coefficients as a function of time for all training cases (from top to bottom): \(Re=100\), \(200\), \(300\) and \(400\).

Figure 4: Instantaneous snapshots comparing the baseline (top) and controlled (bottom) cases. We show vortical motions (Hunt (1987)) defined by isosurfaces equal to (a) 0.5 and (b) 0.35, colored by streamwise velocity.

## Acknowledgments

Ricardo Vinuesa acknowledges funding by the ERC through Grant No. "2021-CoG-101043998, DEEPCONTROL".
arXiv: 2303.00548
Authors: H. J. Shashank, Yevgen Melikhov, Maria L. Ekiel-Jezewska
Published: 2023-03-01T14:43:15Z
Link: http://arxiv.org/abs/2303.00548v1
# Dynamics of Ball-Chains and Very Elastic Fibres Settling under Gravity in a Viscous Fluid

###### Abstract

We study experimentally the dynamics of one and two ball-chains settling under gravity in a very viscous fluid at a Reynolds number much smaller than unity. We demonstrate that single ball-chains in most cases do not tend to be planar and often rotate, not keeping the ends at the same horizontal level. Shorter ball-chains usually form shapes resembling a distorted U, and longer ones in the early stage of the evolution form a shape resembling a distorted W, and later deform non-symmetrically and significantly out of plane. This behaviour is reproduced in our numerical simulations for a single very elastic filament, with the use of the bead model and multipole expansion of the Stokes equations, corrected for lubrication and implemented in the precise Hydromultipole numerical codes. In our experiments, two ball-chains, initially one above the other, later move away from or approach each other, for a larger or smaller initial distance, respectively.

## 1 Introduction

Recently, there has been a lot of interest in studying the motion of flexible and rigid microfibres under external forces [1] or ambient flows [2, 3]. This research is guided by many potential applications to biological systems [4, 5], to medical diagnostic techniques [6] or to the design of new materials [7, 8]. In particular, it is important to study the effect of gravity on swimming or just settling deformable microorganisms [9, 10], and on flexible microobjects produced by innovative modern technologies [11]. Therefore, the sedimentation of single or multiple deformable objects of different types has been investigated for a wide range of bending stiffness and for different shapes [12, 13, 14, 15], also for non-negligible inertia effects [16].
The dynamics of one, two, or three elastic fibres settling under gravity in a viscous fluid at a Reynolds number much smaller than unity has been extensively investigated theoretically and numerically [12, 13, 17, 18, 19, 20, 21, 22, 23, 24, 25]. Different types of motion, hydrodynamic repulsion or attraction, and shape deformations have been found, depending on the value of the so-called elasto-gravitational number, equal to the ratio of gravitational to elastic forces, and also depending on the initial relative orientations and relative positions of the fibres. However, the number of experimental studies of sedimenting flexible fibres has, as far as we know, been much smaller, and limited to rather short and moderately elastic fibres that tend to reach a stable, stationary, 'V'- or 'U'-shaped configuration [23]. Therefore, the main goal of this paper is to explore experimentally the dynamics and shape evolution of very flexible elongated objects. After many trials, we decided to focus on investigating a single ball-chain, or two ball-chains close to each other, settling under gravity in a viscous fluid at a low Reynolds number. It is expected that the ball-chain dynamics are similar to those of very elastic filaments, with the number of beads determining the object's ability to bend, since the bending of each triplet of consecutive beads is limited by geometry. Using ball-chains with a moderate or relatively large number of beads, we can compare their observed behaviour with the dynamics predicted numerically for very flexible elastic filaments [12], which has not been done experimentally until now. In general, we aim to extract experimentally the basic features of the dynamics of very flexible elongated objects. Next, we perform numerical simulations to determine these features using a theoretical description.
One of the interesting goals is to investigate how the planar, vertical, symmetric U-shape, inherent to moderately flexible filaments, becomes unstable when their flexibility is increased. The paper is structured as follows. In Section 2, we present the experimental setup, materials and methods used, including the image-processing techniques. Section 3 contains the experimental results: the evolution of shapes and positions of a single ball-chain and of a pair of ball-chains, illustrated also in the supplementary videos. Then, in Section 4, we present numerical simulations for a single elastic filament: first, the theoretical description and its numerical implementation Hydromultipole, and next, the numerical results, which agree with the experiments remarkably well. Finally, we conclude in Section 5.

## 2 Experimental Techniques, Materials & Methods

### Experimental Arrangement

We conduct experiments within a glass tank of inner dimensions \(200\,mm\) width, \(200\,mm\) depth and \(500\,mm\) height, filled with highly viscous silicone oil (manufactured by _Silikony Polskie_) with kinematic viscosity \(\nu=5\times 10^{-3}\,m^{2}/s\) and density \(\rho=970\,kg/m^{3}\) at \(25^{\circ}C\). Into this tank, we drop flexible ball-chains. The motion of the ball-chains is viewed using two cameras placed in perpendicular orientations, corresponding to the front and side views, as shown in the schematic of the experimental setup in Fig. 1. Camera 2 views the glass tank directly, while Camera 1 views the glass tank via a first-surface mirror (manufactured by _First Surface Mirror LLC_, USA) placed at an angle of \(45^{\circ}\). These two views provide greater insight into the lateral movements of the ball-chains. A fluorescent lamp is placed behind each of the vertical faces of the tank opposite to where the cameras are located.
Camera 1 is illuminated by Fluorescent lamp 1, through the mirror, and Camera 2 is illuminated by Fluorescent lamp 2, as seen in the schematic in Fig. 1. This arrangement of the lamps ensures the best illumination of the tank, with the opaque sedimenting ball-chains clearly contrasted against the bright background. The cameras are two identical full-frame DSLRs (Canon 5D Mark IV) with a resolution of 30 megapixels, each equipped with a \(100\,mm\) prime lens. Both cameras are triggered externally so that photographs can be captured at the same time; we use an _Esper Triggerbox_ to trigger both cameras simultaneously, which maintains a trigger delay of less than \(20\,ms\) between the two cameras. The triggerbox itself is controlled by a laptop, and the photographs are captured at the maximum rate allowed by the DSLR cameras (1 photo per second). The exposure time of both cameras is set to \(1/125\,s\) to ensure that the motion of the ball-chains remains frozen for the entire duration of the exposure; the distance travelled by the ball-chains in this time is smaller than a pixel. The f-number is set to the highest available, f/32 (i.e. the smallest aperture), to ensure that the ball-chains remain in focus even as they meander out of the focal plane of the cameras. Finally, the ISO rating is kept at 400 to ensure sufficient image brightness while maintaining low to moderate noise levels.

Figure 1: Schematic of the experimental arrangement.

The cameras are placed in portrait orientation at about \(800\,mm\) from the front faces of the glass tank, according to their respective views. This gives a field of view of about \(300\,mm\) in the vertical direction and \(200\,mm\) in the horizontal direction.
We centre the cameras at the middle of the glass tank so that approximately \(100mm\) from the free surface and \(100mm\) from the bottom glass face are absent in the photographs, thereby reducing the effect of the free surface and the bottom glass wall on the sedimenting ball-chains.

### Ball-Chains

The ball-chains that are used in this work (manufactured by _Koniarscy S.C._) consist of metallic beads connected to each other through an inextensible string, with a possibility of a slight movement of the beads with respect to the string. The diameter of all metallic beads is the same, \(d=1.5mm\), and ball-chains of different lengths (i.e., different numbers of beads \(N\)) have been used, typically with \(N=12\) and \(N=20\). The filament length \(L\) is defined as \(L=Nd+(N-1)d_{b}\), where \(d_{b}\) is the length of the string that connects two consecutive beads. In our experiments, \(d_{b}\approx 0.3mm\) (on average, since the beads can slightly slide over the string). The ball-chains are denser than the silicon oil and settle down owing to gravity. Hydrodynamic interactions between the beads cause the whole ball-chain to bend while moving. The ball-chains do not have an inherent elasticity, and this allows them to bend at no energy cost. The bending angle \(\beta_{i}\) of a triplet of consecutive beads, \(i-1\), \(i\) and \(i+1\), is defined by the relation \[\cos\beta_{i}=\frac{(\mathbf{r}_{i}-\mathbf{r}_{i-1})\cdot(\mathbf{r}_{i+1}-\mathbf{r}_{i})}{ |\mathbf{r}_{i}-\mathbf{r}_{i-1}||\mathbf{r}_{i+1}-\mathbf{r}_{i}|}, \tag{1}\] where \(\mathbf{r}_{i}=(x_{i},y_{i},z_{i})\) is the position of the centre of bead \(i\) (see Fig. 2). The change of shape of a ball-chain is limited by: (a) the maximum bending angle of a bead triplet, and (b) the number of beads in the ball-chain. Naturally, a ball-chain that allows a larger bending angle of a bead triplet would allow for larger bending of the whole ball-chain.
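For concreteness, Eq. (1) can be evaluated directly from the bead-centre coordinates; the following is a minimal Python sketch (the helper name and the bead positions are illustrative, not part of the analysis code used in this work):

```python
import math

def bending_angle(r_prev, r, r_next):
    """Local bending angle beta_i of a bead triplet, Eq. (1), in degrees."""
    # Bond vectors (r_i - r_{i-1}) and (r_{i+1} - r_i)
    u = [r[k] - r_prev[k] for k in range(3)]
    v = [r_next[k] - r[k] for k in range(3)]
    dot = sum(u[k] * v[k] for k in range(3))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    # Clamp against round-off before taking the arccosine
    cosb = max(-1.0, min(1.0, dot / (nu * nv)))
    return math.degrees(math.acos(cosb))

print(bending_angle((0, 0, 0), (1, 0, 0), (2, 0, 0)))  # collinear triplet (beta close to 0)
print(bending_angle((0, 0, 0), (1, 0, 0), (1, 1, 0)))  # right-angle kink (beta close to 90)
```

The clamp on \(\cos\beta_i\) guards against floating-point values marginally outside \([-1,1]\) for nearly straight triplets.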
Furthermore, for a larger number of beads in the ball-chain, the shape can be more bent, too. We have measured the bending angles of bead triplets of the sedimenting ball-chains, as well as when the ball-chain is outside the fluid, free of gravitational effects. In the latter case, we find that the maximum bending angle \(\beta_{i}\) of a bead triplet is \(55^{\circ}\) when we force the ball-chain to bend to its maximum possible extent. On the other hand, the maximum bending angle of a bead triplet of a sedimenting ball-chain ranges from \(33^{\circ}\) to \(40^{\circ}\), indicating that the ball-chain does not bend to its full extent as it sediments in our experiments. In this paper, we study the sedimentation of a single type of ball-chain, thus ensuring that the maximum deformation of a ball-chain is a function of the number of beads alone. We will first investigate the time-dependent shape and velocity of a single ball-chain settling under gravity in the silicon oil. We will then present the interaction between two ball-chains that sediment close to each other, but do not touch.

### Experimental Methods & Analysis

The experiments are conducted by manually dropping one or two ball-chains approximately at the centre of the tank, in an approximately horizontal orientation. In the case of single ball-chain runs, a ball-chain is always inserted parallel to the plane of view of Camera 2. The experiments with two ball-chains sedimenting close to each other are performed by placing the ball-chains one by one, at a certain time separation, resulting in a configuration with one ball-chain above the other, in perpendicular orientations. The bottom ball-chain is always placed earlier and parallel to the plane of view of Camera 2 (perpendicular to the plane of view of Camera 1), and the top ball-chain is always placed later and parallel to the plane of view of Camera 1 (perpendicular to the plane of view of Camera 2).
The cameras are triggered at the moment when both ball-chains are already inside the tank, and photographs are acquired until the top ball-chain exits the field of view.

Figure 2: Schematic of a section of a ball-chain. A local bending angle \(\beta_{i}\) is shown.

We stress that, given the pliable nature of the ball-chains, it is not possible to ensure that the ball-chains are always exactly horizontal nor exactly straight once they are inserted. Moreover, the ball-chains bend transversely just after entering the fluid, and they are already significantly curved when they enter the camera field of view. We analyse the motion of the ball-chains by extracting relevant data from the photographic image sequences obtained from both cameras. The recorded photographs from each camera are imported into MATLAB, and image-processing techniques are applied to identify each of the ball-chains. The post-processing steps involve image thresholding, image binarisation and performing morphological operations on the binarised image. Following these steps, each ball-chain is uniquely identified in each frame and in each camera view. Once the ball-chains are uniquely identified, we then calculate various parameters of the ball-chains to better understand their sedimentation mechanics. Some of the characteristic parameters are illustrated in Fig. 3. (i) Following Refs. [18, 13], we determine the bending amplitude \(A\) as the absolute difference between the uppermost location \(y_{top}\) and the lowermost location \(y_{bot}\) of the ball-chain, \(A=|y_{top}-y_{bot}|\), and we divide it by the filament length \(L\) (defined in the previous section).
(ii) We also evaluate the vertical component \(v_{y}\) of the ball-chain time-dependent velocity as the ratio of the vertical distance between its lowermost locations at the times of two consecutive photographs (approximately equal to the distance travelled by the ball-chain between two consecutive photographs) to the time difference between those photographs, \(v_{y}=\frac{y_{bot}(t+\Delta t)-y_{bot}(t)}{\Delta t}\), with \(\Delta t=1s\). These quantities are first calculated separately from each camera view, and are later averaged across the two views. The fluid and the ball-chains are chosen in such a way that a typical Reynolds number in the experiments is much smaller than unity, with \(Re=7\cdot 10^{-4}-10^{-3}\) if based on the ball-chain width, and \(Re=8\cdot 10^{-3}-2\cdot 10^{-2}\) if based on the ball-chain length. We will now discuss the uncertainty of the measurements. The accuracy in the calculation of \(A\) and \(v_{y}\) depends on the accuracy of the identification of \(y_{bot}\) and \(y_{top}\) during post-processing. The high contrast between the back-lit image of the ball-chains and the bright background ensures that the errors in identifying the edges are minimal. The error of \(y_{bot}\) and \(y_{top}\) is around \(\pm 0.1\ mm\) (\(\pm\) 2 pixels), which leads to the bending amplitude \(A\) having an uncertainty of \(\pm 0.15\ mm\), and the vertical velocity \(v_{y}\) an uncertainty of \(\pm 0.19\ mm/s\). This yields a measurement uncertainty of about 10% for a single camera measurement. Moreover, during the combined measurement with both cameras, the proximity of the cameras to the glass tank, coupled with the tendency of the ball-chains to drift off the focal plane of the cameras, may cause a systematic error due to parallax. The parallax errors are most noticeable at the extreme ends of the field of view, _i.e._, at the top and bottom.
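The post-processing quantities defined in points (i) and (ii), together with the Reynolds-number estimate, can be sketched as follows; the tracked series and parameter values below are hypothetical, chosen only to match the orders of magnitude quoted above:

```python
def bending_amplitude(y_top, y_bot, L):
    """A/L per frame, with A = |y_top - y_bot| (point (i))."""
    return [abs(t - b) / L for t, b in zip(y_top, y_bot)]

def vertical_velocity(y_bot, dt=1.0):
    """v_y from consecutive lowermost locations (point (ii)); dt = 1 s here."""
    return [(y_bot[i + 1] - y_bot[i]) / dt for i in range(len(y_bot) - 1)]

def reynolds(v, length, nu=5e-3):
    """Re = v * length / nu, with nu in m^2/s, v in m/s, length in m."""
    return v * length / nu

# Hypothetical per-frame tracking data (mm), one sample per photograph
y_top = [10.0, 12.0, 15.0]
y_bot = [4.0, 5.0, 7.0]
print(bending_amplitude(y_top, y_bot, L=33.0))
print(vertical_velocity(y_bot))           # mm/s, since dt = 1 s
print(reynolds(v=2.3e-3, length=1.5e-3))  # Re based on the chain width d
```

With \(v\approx 2.3\ mm/s\) and the bead diameter \(d=1.5\ mm\), the last line reproduces the quoted width-based \(Re\approx 7\cdot 10^{-4}\).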
However, the influence of parallax is not significant for the measurement of local quantities, such as the bending amplitude \(A\) and the local velocity \(v_{y}\). Furthermore, the side walls also affect the dynamics. Long-range hydrodynamic interactions of the ball-chains with the walls slow down the motion and may influence their shapes, particularly if the ball-chains drift away from the centre of the tank.

Figure 3: Schematic describing the definition of parameters. (a) Bending amplitude \(A\), determined in single and multiple ball-chain experiments. (b) Vertical distance \(\Delta y\) between two ball-chains and their widths \(W\).

## 3 Experimental Results

### Sedimentation of a Single Ball-Chain

We recorded and analysed sedimentation of a single ball-chain in 5 experimental trials for 12-bead ball-chains and in 14 experimental trials for 20-bead ball-chains. (The shorter ball-chains did not show rich dynamics in contrast to the longer ones, hence the moderate difference in the number of experimental trials.) Snapshots from three trials for each type of the ball-chains are shown in Figs. 4 and 5, respectively, with the time interval between consecutive snapshots being \(10s\). The difference in shapes between the 12- and 20-bead ball-chains is evident from the snapshots. In each figure, we present three different trials that show distinct sedimentation dynamics. In addition to the snapshots, the evolution is quantified by the plots of the time-dependent vertical velocity \(v_{y}\) and bending amplitude \(A/L\) for the runs shown in the snapshots.

Figure 4: Snapshots of a single 12-bead ball-chain settling under gravity in a viscous oil, taken simultaneously by two cameras (Cam1 and Cam2) located at the same level, and with perpendicular lines of sight: (a) a trial without rotation; (b) a trial with rotation; (c) a trial with rotation and non-horizontal orientation of the end-to-end vector; (d) the sedimentation velocity \(v_{y}\) and the bending amplitude \(A/L\) vs. time \(t\), plotted with blue, red and green lines for the trials shown in (a), (b) and (c), respectively. Experimental movies that correspond to the trials (a), (b) and (c) are shown in the ESI† in Videos 4a, 4b and 4c, respectively.

Figure 5: Snapshots of a single 20-bead ball-chain settling under gravity in a viscous oil, taken simultaneously by two cameras (Cam1 and Cam2) located at the same level, and with perpendicular lines of sight: (a) a trial that begins with a W-shape; (b) a trial that begins with a U-shape; (c) a trial that begins with an asymmetric hook-shape; (d) the sedimentation velocity \(v_{y}\) and the bending amplitude \(A/L\) vs. time, plotted with blue, red and green lines for the trials shown in (a), (b) and (c), respectively. Experimental movies that correspond to the trials (a), (b) and (c) are shown in the ESI† in Videos 5a, 5b and 5c, respectively.

A typical evolution of the 12-bead ball-chain is illustrated in Fig. 4. The ball-chain consistently shows a slightly distorted U-shape during all the trials. In Fig. 4(a), it seems that the ball-chain remains close to a plane, similar to the previous numerical and experimental studies of relatively short elastic fibres of a moderate stiffness [18, 19, 20, 21, 12, 22, 23, 13, 24]. The existence of a stable stationary U-shaped vertical configuration of an elastic filament was predicted numerically, provided that the bending stiffness is above a certain threshold value [18, 21, 12, 22, 13, 24], and was confirmed experimentally [23]. Dynamics of ball-chains have not been studied numerically so far, but it could be expected that they are similar to the dynamics of very elastic fibres. Our experimental results indicate that shorter ball-chains indeed tend to a certain bent U-shape, with some small deviations. For example, in Fig.
4(a), the fibre is almost planar, and this plane is slightly inclined with respect to the vertical, and a small sideways drift is observed, as predicted by the numerical simulations of moderately elastic filaments performed in Ref. [12], and illustrated in the middle panel of their Fig. 1. We also observe behaviour that has not been reported in previous studies of elastic fibres. In Fig. 4(b), the ball-chain rotates around the gravity direction in a screw-like fashion as it sediments. Previous studies have shown the screw-like rotation of an elastic fibre when under the influence of a second fibre [12], but not in the case of a fibre sedimenting alone. The rotation is caused by small deviations from a planar vertical shape, such that the rotational-translational mobility is different from zero. In Fig. 4(c), one can also see, in addition to the screw-like rotation, that the shape clearly has no left-right symmetry, and the end points of the ball-chain are not horizontally aligned. For some of the trials, like those shown in Figs. 4(a) and (c), the deviation from the horizontal orientation of the end-to-end vector increases with time, contrary to the expectation that a flexible, relatively short filament would tend to a stable configuration with left-right symmetry. Such an attracting symmetric configuration was found numerically for relatively stiffer elastic fibres [18, 19, 20, 12, 13, 24]. For moderate stiffness, the out-of-plane shapes found in Ref. [12] also exhibit left-right symmetry. The difference between the dynamics observed in Figs. 4(a), 4(b) and 4(c) is likely due to the difference between the initial configurations when the ball-chains are inserted into the fluid. However, it is worth noting that the small differences in shapes, observed in Fig. 4(a)-(c), result in comparable values of the vertical sedimentation velocity \(v_{y}\) and the bending amplitude \(A/L\), shown in Fig. 4(d) (the differences are not greater than 10%).
In other words, the velocity of the ball-chain exhibiting rotation and a clear lack of left-right symmetry, with a pronounced end-to-end asymmetry, shown in Fig. 4(c), is very close to that of the ball-chain exhibiting only rotation and almost no end-to-end asymmetry, as seen in Fig. 4(b). The common feature of the dynamics of the shorter ball-chains is the lack of a uniquely defined stable configuration reached within a relatively short time of the evolution. Rotation and breaking of the left-right symmetry seem to be typical. The 20-bead ball-chains exhibit a distinctly wider range of different shapes than the 12-bead ball-chains, as seen in Fig. 5. We notice that, unlike in the 12-bead runs, the first observed shape of the 20-bead ball-chains is different in each run. From the point of view of Camera 2, we see that the shape of the ball-chain as it enters the field of view is: a bimodal W-shape, as in Fig. 5(a); a wide U-shape, as in Fig. 5(b); or an asymmetric hook-shape, as in Fig. 5(c). The initial shapes in Fig. 5(a-b) are slightly asymmetric, but the initial shape in Fig. 5(c) significantly breaks the left-right symmetry. Fig. 5 illustrates that the ball-chain trajectories and the evolution of their shapes are sensitive to the initial configuration. The perturbed bimodal W-shapes, observed in the first snapshots, deform with time, in agreement with the numerically observed instability of the W-shape of very elastic, relatively long filaments, reported in Ref. [18]. Hence, in Fig. 5(a), the initial W-shape evolves into the hook-shape before bending out of the plane and drifting away from the centre of the tank. On the other hand, in Fig. 5(c), the ball-chain is already in the hook-phase as it enters the field of view, and continues to bend out of plane and drift away from the centre of the tank. These two runs can thus be said to be similar to each other, but "shifted in time" with respect to each other. The first shape observed in Fig.
5(b) resembles a deformed U-shape rather than a deformed W-shape, and it changes in time differently from those in Figs. 5(a) and (c), but somewhat similarly to the evolution of the 12-bead ball-chain shown in Fig. 4(b). However, the observed shapes are significantly non-planar and with a pronounced asymmetry, in contrast to the shorter ball-chains. Moreover, they do not seem to converge to a stable planar and symmetric U-shape. This seems to be in agreement with the non-existence of a stable stationary configuration for sufficiently elastic filaments [18, 12, 13]. The wide range of ball-chain shapes, from the W- and hook-phases to perturbed rotating U-shapes, indicates that the shapes are sensitive to the initial configuration. It is thus tempting to attribute this variation to the uncertainty of keeping the ball-chains horizontal and straight as they are inserted into the tank. We have observed runs that, for instance, exhibit the hook-phase, but do not exhibit the significant out-of-plane bending, which can be related to only a small initial perturbation of the planar shape. It might be possible that the out-of-plane bending would have eventually been observed if the tank were higher, and a small unstable perturbation had enough time to grow. The behaviour of the 20-bead ball-chains is thus complex, and it is difficult to classify it, as was possible for the 12-bead ball-chains. It is remarkable that our observation of the subsequent W, hook, and out-of-plane bending phases agrees very well with the trajectory observed in previous numerical studies of very elastic (semi-flexible) fibres at large bending amplitudes, shown in the right panel of Fig. 1 in Ref. [12]. Even though our experimental observations of the trajectory span only a part of the time of the numerical simulation, the agreement is striking. The plots of the vertical sedimentation velocity \(v_{y}\) and the bending amplitude \(A\) for our experiments, shown in Fig. 5(a)-(c), are presented in Fig. 5(d).
A correlation between both parameters is visible. As \(A\) increases, so too does \(v_{y}\), and a reduction in \(A\) leads to a reduction in \(v_{y}\). The effect of the dynamic variation of the shape of the ball-chains on the vertical sedimentation velocity is evident from these snapshots, resulting in similar curves of \(A\) and \(v_{y}\).\(^{4}\)

Footnote 4: For some of the trials with 20-bead ball-chains, the correlation is not as evident, which is related to the more complex shapes of the ball-chains.

### Sedimentation of Two Interacting Ball-Chains

#### 3.2.1 Experimental Observations

In order to study the hydrodynamic interactions of two ball-chains as they settle under gravity in a viscous fluid, we choose an initial configuration as follows: the ball-chains are inserted horizontally one above the other, such that their centres are aligned vertically, with the bottom ball-chain being in the focal plane of Camera 1 (and perpendicular to Camera 2) and the top ball-chain being in the focal plane of Camera 2 (and perpendicular to Camera 1). The ball-chains are inserted manually at the centre of the tank. The motion of both ball-chains is recorded using the camera arrangement shown in Fig. 1. We performed a number of trials, i.e., 13 trials for the 12-bead ball-chains and 23 trials for the 20-bead ball-chains, to study the time-dependent configurations of the sedimenting ball-chains.

Figure 6: Results of two experimental trials for a pair of 12-bead ball-chains settling close to each other under gravity in a viscous oil, traced simultaneously by two cameras (Cam1 and Cam2) located at the same level, and with perpendicular lines of sight: (a) shows snapshots of a trial in which the ball-chains come closer to each other; (c) shows snapshots of a trial in which the ball-chains separate from each other; (b) and (d) show the sedimentation velocities \(v_{y}\) and the bending amplitudes \(A/L\) of both ball-chains vs. time, plotted for the trials in (a) and (c), respectively.
The blue colour represents the top ball-chain while the red colour represents the bottom ball-chain. Experimental movies that correspond to the trials shown in (a) and (c) are shown in the ESI† in Videos 6a and 6c, respectively.

We present the typical behaviour of a pair of ball-chains in Figs. 6 and 7 for the 12-bead and 20-bead ball-chains, respectively. In both Figs. 6 and 7, we present two individual trials, shown in panels (a)-(b) and (c)-(d), respectively. In addition to the snapshots visible in panels (a) and (c), we also show in panels (b) and (d) the corresponding bending amplitudes \(A\) and the sedimentation velocities \(v_{y}\) of the top and bottom fibres. In Fig. 6(a), we observe that the 12-bead ball-chains approach each other as they sediment (the term "attraction" is used to describe such a behaviour). It is shown in Fig. 6(b) that the top ball-chain (coloured blue) exhibits a larger bending amplitude, as well as a higher sedimentation velocity, than the bottom ball-chain (coloured red). In other words, the top ball-chain bends in such a way that its mobility is larger than the mobility of the bottom ball-chain. Previous studies of elastic and semiflexible fibres have also demonstrated the attraction between two fibres of low/moderate elasticity that are one above the other [26, 12]. In our experiments with the 12-bead ball-chains, we have consistently observed that the attraction of the ball-chains is correlated with the top ball-chain having a larger bending amplitude than the bottom one. This finding is in agreement with the results of numerical simulations, shown in Fig. 5 in Ref. [12], where \(A_{\text{top}}>A_{\text{bot}}\). Attraction of elastic fibres settling one above the other was also found in numerical simulations presented in Ref. [26].\(^{8}\)

Footnote 8: Numerically determined shapes of elastic fibres sedimenting within the same vertical plane, presented in Fig. 2 in Ref. [26], are different from those shown in Fig. 5 in Ref.
[12], probably owing to the use of a different, simplified theoretical model.

We also present, to the best of our knowledge, the first demonstration of a vertical separation of ball-chains that are one above the other. This behaviour has been observed in our experiments, as clearly visible in the snapshots in Fig. 6(c). The bottom ball-chain (coloured red) moves away from the top ball-chain (coloured blue) as both ball-chains sediment. The plots of the sedimentation velocities \(v_{y}\) and the bending amplitudes \(A\) of both fibres, presented in Fig. 6(d), corroborate the separation of the ball-chains, with the bottom ball-chain having a larger bending amplitude and, consequently, a larger sedimentation velocity than the top ball-chain. In our experiments, the shapes of the ball-chains typically do not have left-right symmetry, and the non-horizontal end-to-end line of the ball-chains seems to significantly increase their speed. On the other hand, deviations from the horizontal location of the ball-chain arms seem to be random, and for the top ball-chain, they can be both smaller and larger than for the bottom one. We have not observed a tendency of the shapes to become symmetric or even planar with time, nor any tendency towards a specific relative orientation of the ball-chains. The 20-bead ball-chains also exhibit the two distinct behaviours described for the 12-bead ball-chains: vertical attraction or repulsion. Fig. 7(a) shows snapshots from a trial where the ball-chains approach each other, while Fig. 7(c) shows snapshots from another trial where the ball-chains move away from each other. In most runs, we observe a direct correlation between the bending amplitude \(A\) and the sedimentation velocity \(v_{y}\) - the larger the bending amplitude, the faster the sedimentation velocity (see Fig.
7(b) and (d)), but there are exceptions. If the top ball-chain bends more than the bottom ball-chain, the ball-chains come together. If the bottom ball-chain bends more than the top ball-chain, a separation is observed.

Figure 7: Results of two experimental trials for a pair of 20-bead ball-chains, settling close to each other under gravity in a viscous oil, traced simultaneously by two cameras (Cam1 and Cam2) located at the same level, and with perpendicular lines of sight: (a) shows snapshots of a trial in which the ball-chains come closer to each other; (c) shows snapshots of a trial in which the ball-chains separate from each other; (b) and (d) show the sedimentation velocity \(v_{y}\) and the bending amplitude \(A/L\) vs. time, plotted for the trials in (a) and (c), respectively. The blue colour represents the top ball-chain while the red colour represents the bottom ball-chain. Experimental movies that correspond to the trials shown in (a) and (c) are shown in the ESI† in Videos 7a and 7c, respectively.

#### 3.2.2 How Much Do Hydrodynamic Interactions between Two Ball-Chains Change the Isolated Ball-Chain Dynamics?

Previous studies on the dynamics of two very elastic filaments in a top-down initial orientation suggest that at large values of \(B\), the dynamics of the filaments are dominated by the behaviour of a single filament, and not by hydrodynamic interactions between the two filaments, if the distance between the filaments is not small [12]. The shapes exhibited by the separating 20-bead ball-chains that are relatively far from each other in the experimental trial shown in Fig. 7(c) seem to agree with this assessment, since the sequence of the ball-chain shapes is close to the different stages of their individual sedimentation process - for instance, both ball-chains in the first frame of Fig. 7(c) are in the bimodal W-phase, but the bottom ball-chain is already beginning to enter the hook-phase.
Such a time shift can be easily understood, since the upper ball-chain was inserted into the fluid later than the lower one. The velocity of a hook-shaped ball-chain is larger than that of a W-shaped ball-chain, as discussed earlier and shown in Fig. 5 of Sec. 3.1. Furthermore, by closely observing the values of the sedimentation velocity \(v_{y}\) of both the top and bottom ball-chains (Fig. 7(d)), we see that the separating ball-chains exhibit a similar velocity range to that of an isolated ball-chain (Fig. 5(d)). It is thus very likely that the separation of the 20-bead ball-chains observed in the current experiments is merely a result of the bottom ball-chain being further along in its individual evolution than the top ball-chain. We also point out that, while it is possible to explain the separation of the 20-bead ball-chains based on their sequences of shapes, the limited range of shapes exhibited by the 12-bead ball-chains makes it difficult to explain the reason for their separation.

Figure 8: The vertical distance between the ball-chains, \(\Delta y\), normalised with the ball-chain length \(L\), as a function of time for: (a) 12 beads, (b) 20 beads. The orange curves represent ball-chains that come closer to each other and the green ones ball-chains that move away from each other.

Figure 9: Difference \((W_{\text{top}}-W_{\text{bot}})/L\) of the top and bottom ball-chain widths as a function of time, for: (a) 12 beads, (b) 20 beads. The orange curves represent ball-chains that approach each other, and the green ones represent ball-chains that move away from each other.

For the top and bottom ball-chains relatively close to each other, it is expected that their mutual hydrodynamic interactions would cause them to approach each other [12].
We have seen clear evidence of the attracting effect of the hydrodynamic interactions in a few trials where the top ball-chain, which is not directly above the bottom ball-chain but still coming closer to it, changes its trajectory and turns towards the bottom ball-chain once it is close enough (see Videos 6a and 7a in the ESI†). Direct evidence of hydrodynamic interactions can be found in Figs. 6(b) and 7(b) - we observe that both ball-chains sediment at a larger velocity when compared to the sedimentation velocity of an isolated ball-chain, shown in Figs. 4(d) and 5(d). It is well known that a particle sedimenting close to other particles moves faster than in the absence of any other particles. To further illustrate the relevance of hydrodynamic interactions at small distances, we present in Fig. 8 the time dependence of the vertical distance \(\Delta y\) between the two ball-chains as they traverse the height of the tank, for all the experimental runs performed (see Fig. 3 for the definition of \(\Delta y\)). We adopt the following colour code in the plots: the runs with a greater vertical separation distance between the ball-chains at the end of the run than at the beginning are assigned the colour green (moving apart, separating), whereas the runs with a smaller vertical separation distance at the end of the run than at the beginning are assigned the colour orange (moving closer, attracting). In other words, the green curves correspond to runs in which \(\Delta y_{t=end}>\Delta y_{t=0}\) and the orange curves correspond to runs in which \(\Delta y_{t=end}<\Delta y_{t=0}\). It is clear from the plots in Fig. 8 that the orange curves correspond to two ball-chains that are closer to each other at the beginning of the run. We observe this tendency for both the 12-bead as well as the 20-bead runs.
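The colour-coding rule described above reduces to a comparison of the first and last values of the \(\Delta y\) series of a run; a minimal sketch (the separation series are hypothetical):

```python
def classify_run(delta_y):
    """Colour code of Fig. 8: 'green' if the vertical separation grows over
    the run (separating), 'orange' if it shrinks (attracting)."""
    return "green" if delta_y[-1] > delta_y[0] else "orange"

# Hypothetical normalised separations Delta_y / L sampled once per photograph
print(classify_run([2.0, 2.4, 3.1]))  # separation grows  -> green
print(classify_run([1.5, 1.2, 0.9]))  # separation shrinks -> orange
```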
It is known that hydrodynamic interactions between two flexible objects (or groups of particles), one above the other, cause the lower object to become wider and to have a smaller vertical dimension [27, 12]. This mechanism can cause the lower ball-chain to move slower than the upper one. It is clear that the closer the ball-chains are to each other, the stronger they interact hydrodynamically. On the other hand, the green curves in Fig. 8 correspond to two ball-chains that are further from each other at the beginning of the run. Therefore, the influence of the isolated ball-chain dynamics on their evolution is more pronounced. As discussed before, the time shift between the moments of release of the two 20-bead ball-chains might explain why they move away from each other in the early stage of the runs. Indeed, as shown in Fig. 5(d), the velocity of an isolated ball-chain increases with time in the early stage of its evolution. However, it later reaches a maximum and then decreases, which might be responsible for the non-monotonicity of some green curves in Fig. 8(b), showing that the ball-chains initially move away from each other before eventually coming closer together. Dynamic changes of shape are also important for the 20-bead ball-chains. The 12-bead ball-chains do not change their shapes so dynamically, which seems to be related to the almost monotonic curves in Fig. 8(a). It might be interesting to compare the widths of the top and bottom ball-chains, \(W_{\text{top}}\) and \(W_{\text{bot}}\), as defined in Fig. 3(b). The ball-chains in the current experiments exhibit significant out-of-plane motion, such as rotation and bending. It is thus challenging to determine the width of the ball-chains since, in general, they are not in a plane that would coincide with either plane of the camera views. However, there exist a few runs in which both ball-chains do not exhibit significant rotation or out-of-plane bending.
We measure the ball-chain width \(W\) in such runs, but we only consider frames in which both the top and bottom ball-chains are planar, and their planes coincide with the perpendicular planes of the camera views. In other words, the top ball-chain needs to stay within the plane of view of Camera 1 and perpendicular to the plane of view of Camera 2. At the same time, the bottom ball-chain needs to stay within the plane of view of Camera 2 and perpendicular to the plane of view of Camera 1. Our analysis is thus reduced to only a few runs that satisfy these conditions. In Fig. 9, we plot the difference in widths of the top and bottom ball-chains, \(W_{\text{top}}-W_{\text{bot}}\), normalised with the ball-chain length \(L\), as a function of time \(t\). It is evident from the plot that \(W_{\text{top}}>W_{\text{bot}}\) for runs in which the ball-chains move away from each other, and \(W_{\text{top}}<W_{\text{bot}}\) for runs in which the ball-chains approach each other. The small number of curves shown in Fig. 9 is related to a generic feature of the dynamics observed in our experiments: the ball-chains tend to rotate and bend out of plane. There are only 5 runs (out of 13) for the 12-bead ball-chains, and only 3 runs (out of 23) for the 20-bead ball-chains, in which both ball-chain shapes remain in a single plane, and also only for a limited period of time at the early stage of the evolution, as shown in Fig. 9. The probability that the ball-chains remain in their original planes is very small. Our experiments indicate that the out-of-plane motion of very flexible elongated objects is ubiquitous, and that their purely planar motion would seldom be observed in practical situations.

## 4 Numerical Simulations

In the experiments, the ball-chains are fully flexible until a limit bending angle is reached. They are assumed to move and deform in a similar way to very elastic fibres. To confirm this, we compare the experiments with numerical simulations of elastic fibres.
Taking into account the sensitivity of the experimental results to the initial conditions, we focus our numerical analysis on a single sedimenting, very elastic fibre. The goal is to verify whether we can theoretically reproduce the basic features of the dynamics observed in our experiments, i.e., a non-stationary fibre shape which may exhibit rotation and out-of-plane deformation, and which may have a non-horizontal end-to-end vector connecting the first and the last beads.

### Theoretical Description

In order to model the dynamics of a single fibre settling under gravity in a viscous fluid at a Reynolds number much smaller than unity, we employ the bead-chain model in which the fibre is represented by \(N\) identical spherical beads of diameter \(d\) [28]. Centres of consecutive beads are connected by springs. The distance between the centres of consecutive beads (the bond length) at the elastic equilibrium is \(l_{0}\), chosen to be very close to the bead diameter, \(l_{0}=1.01d\). Thus the length of the fibre at the elastic equilibrium is \(L_{0}=(N-1)\cdot l_{0}+d\). To represent almost inextensible elastic interactions between the consecutive beads \(i\) and \(j=i+1\), with \(i=1,...,N-1\), we employ the finitely extensible nonlinear elastic (FENE) potential energy, which has the following form [29, 30]: \[U^{FENE}=-\frac{1}{2}k(l_{0}-d)^{2}\sum_{i=1}^{N-1}\ln\left[1-\left(\frac{r_{i,i+1}-l_{0}}{l_{0}-d}\right)^{2}\right]. \tag{2}\] Here, \(r_{i,j}=|\mathbf{r}_{i}-\mathbf{r}_{j}|\) is the distance between the centres of beads \(i\) and \(j\) (in particular, \(r_{i,i+1}\) is the length of bond \(i\)), with \(\mathbf{r}_{i}\) being the time-dependent position of bead \(i\), and \(k\) is the spring constant. With the FENE potential energy defined in Eq. 2, the distance between the surfaces of consecutive beads, initially equal to \(0.01d\), stays very small during settling, and does not exceed \(0.02d\).
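As a concrete illustration, the FENE energy of Eq. 2 can be evaluated directly from an array of bead positions. The following NumPy sketch is ours (not part of the Hydromultipole codes); it also guards against bonds stretched beyond the finite-extensibility limit:

```python
import numpy as np

def fene_energy(r, k, l0, d):
    """FENE stretching energy of a bead chain (Eq. 2).

    r  : (N, 3) array of bead centre positions
    k  : spring constant
    l0 : equilibrium bond length (close to the bead diameter d)
    """
    bond = np.linalg.norm(np.diff(r, axis=0), axis=1)  # bond lengths r_{i,i+1}
    x = (bond - l0) / (l0 - d)                         # dimensionless extension
    assert np.all(np.abs(x) < 1.0), "bond stretched beyond the FENE limit"
    return -0.5 * k * (l0 - d) ** 2 * np.sum(np.log(1.0 - x ** 2))
```

At the elastic equilibrium every bond has length \(l_0\), so the energy vanishes; any stretching or compression of a bond gives a positive contribution that diverges as the bond approaches the limit \(|r_{i,i+1}-l_0|=l_0-d\), which is what keeps the chain almost inextensible.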
For such a small gap, lubrication interactions with the fluid suppress spurious rotations of the beads [31]. At the elastic equilibrium, the fibre is straight. During its settling, it resists deformations by bending forces between triplets of consecutive beads. We employ the harmonic bending potential energy [32, 33, 13, 34, 35]: \[U^{b}=\frac{\mathcal{A}}{2l_{0}}\sum_{i=2}^{N-1}\beta_{i}^{2}. \tag{3}\] Here, \(\beta_{i}\) is the bending angle of the triplet of consecutive beads \(i-1,\ i,\ i+1\), defined in Eq. 1 and shown in Fig. 2, and \(\mathcal{A}\) is the bending stiffness. Based on the model of an elastic cylinder of diameter \(d\) [36], the expressions \[\begin{cases}k=E_{Y}\pi d^{2}/(4l_{0})\\ \mathcal{A}=E_{Y}\pi d^{4}/64\end{cases} \tag{4}\] allow identification of the spring constant \(k\) and the bending stiffness \(\mathcal{A}\) if the Young's modulus \(E_{Y}\) is known [13]. The total elastic potential energy of the fibre has the form \[U=U^{FENE}+U^{b}. \tag{5}\] The elastic force on bead \(i\) is \[\mathbf{F}_{i}^{e}=-\frac{\partial}{\partial\mathbf{r}_{i}}U. \tag{6}\] In addition, each bead is subject to the same constant external gravitational force, \(\mathbf{F}^{g}=-mg\cdot\mathbf{\hat{y}}\), where \(g\) is the gravitational acceleration, \(\mathbf{\hat{y}}\) is the unit vector along the \(y\) axis, and \(m\) is the bead mass corrected for buoyancy. The total external force on bead \(i\) reads \[\mathbf{F}_{i}=\mathbf{F}_{i}^{e}+\mathbf{F}^{g}. \tag{7}\] The dimensionless elasto-gravitational number \(B\) estimates the ratio of gravitational to bending forces in the following way [26, 18, 12, 23, 13, 24]: \[B=L_{0}^{2}Nmg/\mathcal{A}, \tag{8}\] with larger values of \(B\) corresponding to fibres that are more flexible. As the Reynolds number in our experiments is very small, \(Re\ll 1\), we assume that the fluid flow satisfies the Stokes equations.
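Similarly, the bending energy of Eq. 3 and the parameter relations of Eqs. 4 and 8 translate directly into code. In this sketch (our naming), \(\beta_i\) is computed as the angle between consecutive unit bond vectors:

```python
import numpy as np

def bending_energy(r, A, l0):
    """Harmonic bending energy of Eq. 3; beta_i is the angle between
    consecutive bond vectors at the interior bead i."""
    t = np.diff(r.astype(float), axis=0)
    t /= np.linalg.norm(t, axis=1, keepdims=True)          # unit bond vectors
    cosb = np.clip(np.sum(t[:-1] * t[1:], axis=1), -1.0, 1.0)
    beta = np.arccos(cosb)
    return A / (2.0 * l0) * np.sum(beta ** 2)

def elastic_constants(E_Y, d, l0):
    """Spring constant k and bending stiffness A of an elastic
    cylinder of diameter d (Eq. 4)."""
    k = E_Y * np.pi * d ** 2 / (4.0 * l0)
    A = E_Y * np.pi * d ** 4 / 64.0
    return k, A

def elasto_gravitational_number(L0, N, m, g, A):
    """B = L0^2 N m g / A (Eq. 8); larger B means a more flexible fibre."""
    return L0 ** 2 * N * m * g / A
```

A straight chain has all \(\beta_i=0\) and therefore zero bending energy, consistent with the straight elastic equilibrium described above.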
We use the multipole expansion of solutions to the Stokes equations [37, 38], and assume stick boundary conditions on the surfaces of the beads. Then, we introduce the lubrication correction to speed up the convergence of the multipole expansion [39, 40]. We obtain the following set of first-order ODEs for the time-dependent positions \(\mathbf{r}_{i}\) of the bead centres, \(i=1,...,N\), \[\dot{\mathbf{r}}_{i}=\sum_{j=1}^{N}\mathbf{\mu}_{ij}\cdot\mathbf{F}_{j}, \tag{9}\] with the mobility matrices \(\mathbf{\mu}_{ij}\) that depend on the positions of all the beads, and are evaluated from the multipole expansion by the precise numerical codes Hydromultipole [40, 41]. The Hydromultipole numerical codes are capable of evaluating the mobility matrices in the presence of interfaces [42, 43, 44, 45, 46, 47]. However, here we applied the model of an infinite fluid, since during the experimental observations the fibres were relatively far away from the container walls and the free surface. We use dimensionless variables, with \(d\) and \(\tau_{b}=\frac{\pi\eta d^{2}}{mg}\) as the length and time units, respectively, where \(\eta\) is the fluid dynamic viscosity. Therefore, the velocity unit is \(v_{b}=\frac{d}{\tau_{b}}=\frac{mg}{\pi\eta d}\).

### Results: Sedimentation of a Single Elastic Fibre

Numerical simulations of the dynamics of a single fibre were performed for different numbers \(N\) of beads. Here we present the results for either \(N=14\) (shorter fibre) or \(N=24\) (longer fibre). The aspect ratios of these fibres were chosen to be similar to the aspect ratios of the ball-chains in our experiments. Very flexible fibres were assumed in this study, with the elasto-gravitational number in the range \(4,000<B<10,000\). The choice of a very large \(B\) was natural for modelling the very flexible ball-chains in the experiment.
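Before turning to the results, the structure of the equations of motion (Eq. 9) can be illustrated in code. The Hydromultipole mobility matrices are beyond the scope of a short example, so this sketch uses the far-field Rotne–Prager–Yamakawa approximation as a simplified stand-in, valid only for well-separated beads and lacking the lubrication corrections used in the actual simulations:

```python
import numpy as np

def rpy_mobility(r, a, eta):
    """Rotne-Prager-Yamakawa mobility matrix (a far-field stand-in for the
    Hydromultipole matrices mu_ij of Eq. 9; requires bead gaps > 0)."""
    N = len(r)
    mu = np.zeros((N, 3, N, 3))
    I = np.eye(3)
    self_mob = 1.0 / (6.0 * np.pi * eta * a)       # Stokes mobility of one bead
    for i in range(N):
        mu[i, :, i, :] = self_mob * I
        for j in range(i + 1, N):
            rij = r[i] - r[j]
            dist = np.linalg.norm(rij)
            rhat = np.outer(rij, rij) / dist ** 2
            m = (1.0 / (8.0 * np.pi * eta * dist)) * (
                (1.0 + 2.0 * a ** 2 / (3.0 * dist ** 2)) * I
                + (1.0 - 2.0 * a ** 2 / dist ** 2) * rhat)
            mu[i, :, j, :] = mu[j, :, i, :] = m
    return mu.reshape(3 * N, 3 * N)

def step(r, F, a, eta, dt):
    """One explicit Euler step of Eq. 9: r_i_dot = sum_j mu_ij . F_j."""
    v = rpy_mobility(r, a, eta) @ F.ravel()
    return r + dt * v.reshape(-1, 3)
```

In the real computations, the mobility matrices are position-dependent and re-evaluated at every step of the integrator; the explicit Euler step above is only for illustration.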
The specific choice of values was guided by the requirement that the elasto-gravitational number is large enough to allow for the excitation of higher bending modes with out-of-plane deformations, and for the lack of stability of a U-shaped configuration, reported in [18, 12]. This condition was needed to match our experimental observations.

Figure 10: Simulation results for a single elastic fibre made of 14 beads settling under gravity. (a), (b) Snapshots taken at the time intervals \(\Delta t=14.5\tau_{b}\) for (a) \(B=8500\) and an initial C-shape, and (b) \(B=7000\) and an initial symmetric propeller shape. (c) The sedimentation velocity of the centre of mass \(v_{y,CM}/v_{b}\) and the bending amplitude \(A/L\) vs. the vertical component of the centre-of-mass position \(y_{CM}/d\), plotted with blue and red lines for the cases shown in (a) and (b), respectively.

Due to the experimental conditions, we are interested in relatively short evolution times that allow us to study only the initial stage of the settling dynamics. Before the beginning of the observations, while settling over a distance of 10 cm, i.e., \(67d\), the ball-chains already reach U-shaped, almost planar configurations. For very flexible elastic fibres, the characteristic times to bend are also very short [18]. The height of the observed part of our glass tank is about 300 mm, i.e., about \(200d\), and therefore the simulations should be able to cover about \(300d\) in the vertical direction. This distance is reached at about \(255\,\tau_{b}\) in the case of a shorter fibre with \(N=14\) beads, and at about \(235\,\tau_{b}\) for a longer one with \(N=24\) beads. Nevertheless, in either case, this time is usually smaller than the time required to reach one of the stationary or stable periodic modes of sedimentation. Moreover, the time needed to destabilise U-shaped or W-shaped configurations is very sensitive to out-of-plane perturbations.
Therefore, it is essential to choose proper initial conditions in the simulations, allowing for a fast destabilisation and triggering out-of-plane dynamics as early as possible, rather than matching precisely the initial configurations observed in the experiments. It is known that an initially straight fibre, horizontal or inclined, does not destabilise fast, even if a small random perturbation is added [18, 12], as discussed in Appendix A.1. Taking this into account, we searched for initial configurations close to elastic equilibrium but not contained in a vertical plane. We used the following configurations: an inclined planar C shape with a non-horizontal end-to-end vector (in short, C-shape), and a propeller shape, with a horizontal or non-horizontal end-to-end vector (in short, symmetric and asymmetric propeller shapes, respectively). These three types of initial configurations are specified in detail in Appendix A. It is essential that the initial configurations are only slightly disturbed in comparison to a straight fibre in elastic equilibrium, as shown in the first snapshots in Figs. 10a,b and Figs. 11a,b. The elastic fibres made of 14 beads, shown in Fig. 10, rotate. This is a generic feature seen in our numerical simulations, in agreement with the experimental observations presented in Figs. 4b, 4c and the corresponding Videos 4b, 4c. In the simulations, initial shapes with a slightly non-horizontal end-to-end position seem to keep one arm higher than the other for a long time, also in agreement with the experiments, as illustrated in Fig. 4. In addition, in the simulations such shapes significantly deform out of a vertical plane, as shown in Fig. 10a. Simulations of longer elastic fibres, made of 24 beads, also demonstrate an evolution pattern similar to that seen in the experiments (compare Fig. 11 to Fig. 5). The fibres form a W shape, then tilt it and form a hook.

Figure 11: Simulation results for a single elastic fibre made of 24 beads settling under gravity.
(a), (b) Snapshots taken at the time intervals \(\Delta t=17\,\tau_{b}\) for (a) \(B=8500\) and an initial C-shape, and (b) \(B=8500\) and an initial asymmetric propeller shape. (c) The sedimentation velocity of the centre of mass \(v_{y,CM}/v_{b}\) and the bending amplitude \(A/L\) vs. the vertical component of the centre-of-mass position \(y_{CM}/d\), plotted with blue and red lines for the cases shown in (a) and (b), respectively.

The fibres then deform out of a vertical plane while decreasing the difference between the vertical coordinates of their ends. It is interesting to point out that the formation of the W shape depends not only on the value of \(B\) [12], but also on the fibre aspect ratio \(N\). It does not take place for shorter fibres made up of 14 beads, while it is present for longer fibres made up of 24 beads with the same value of the elasto-gravitational number \(B\). In connection with the experimental findings, it is worth investigating in the simulations whether the initially imposed small left-right asymmetry of the fibre shape decreases or increases with time, oscillates, or stabilises. To see this, we evaluate at each time the difference \(\Delta A=y_{N}-y_{1}\) between the vertical coordinates, \(y_{N}\) and \(y_{1}\), of the positions of the centres of the last and the first bead of the fibre. Then, \(\Delta A/d\) is plotted in Fig. 12 as a function of the dimensionless instantaneous vertical coordinate \(y_{CM}/d\) of the fibre centre of mass, for the numerical trials shown in Figs. 10 and 11, with the same meaning of the colours. It is clear that the initially small (but non-zero) value of \(\Delta A=y_{N}-y_{1}\) increases as the fibre settles down and then oscillates for quite a long time. Therefore, a non-horizontal end-to-end position seems to be an inherent feature of the fibre dynamics on a relatively short time scale, given the usually random initial perturbation of the fibre shape. To estimate the rotation about the \(y\)-axis, we proceed as follows.
For each time, we evaluate the moment of inertia tensor and its eigenvectors and eigenvalues. The eigenvector corresponding to the largest eigenvalue determines a unit vector \(\mathbf{n}\). Using spherical coordinates with the vertical zenith direction, the vector \(\mathbf{n}\) can be characterised by the polar angle \(\theta\) it makes with the \(y\)-axis and the azimuthal angle \(\phi\) that its horizontal projection makes with the \(x\)-axis. We evaluate \(\phi\) at each time instant \(t\) and plot it in Fig. 12b as a function of the dimensionless instantaneous vertical coordinate \(y_{CM}/d\) of the fibre centre of mass, for the numerical trials shown in Figs. 10 and 11, with the same meaning of the colours. It is clear that the azimuthal angle \(\phi\) changes significantly with time, also in agreement with the experiments. In the simulations, we observe that the changes of \(\phi\) are often non-monotonic, which is related to shape deformations.

Figure 12: (a) Difference \(\Delta A/d\) between the vertical positions of the fibre ends, (b) azimuthal angle \(\phi\) and (c) non-flatness \(\sigma/d\), plotted as functions of the vertical component of the centre-of-mass position \(y_{CM}/d\) for the numerical trials shown in Figs. 10 and 11, with the same meaning of the colours.

In the simulations, we observe that very flexible fibres typically tend to deform out of plane. To describe this feature quantitatively, we determine how far the shapes are from "an average" plane that contains the centre of mass of the fibre and is spanned by the two eigenvectors that correspond to the smaller eigenvalues. In this plane the fibre would be positioned if its shape were flat. With this goal in mind, we evaluate the following time-dependent non-flatness parameter \(\sigma\): \[\sigma=\sqrt{\frac{1}{N}\sum_{i=1}^{N}h_{i}^{2}}, \tag{10}\] where \(h_{i}\) is the distance of the centre of bead \(i\) from the plane and the summation is over all the beads.
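The three diagnostics plotted in Fig. 12 — \(\Delta A\), the azimuthal angle \(\phi\), and the non-flatness \(\sigma\) of Eq. 10 — can all be computed from the bead positions. A NumPy sketch of our own (with \(y\) the vertical direction):

```python
import numpy as np

def shape_diagnostics(r):
    """Delta A = y_N - y_1, azimuthal angle phi (degrees) of the
    largest-eigenvalue inertia eigenvector n, and the non-flatness
    parameter sigma of Eq. 10, for bead positions r of shape (N, 3)."""
    dA = r[-1, 1] - r[0, 1]
    c = r - r.mean(axis=0)
    # moment of inertia tensor of unit-mass beads about the centre of mass
    I = np.sum(c ** 2) * np.eye(3) - c.T @ c
    w, v = np.linalg.eigh(I)                  # eigenvalues in ascending order
    n = v[:, -1]                              # largest eigenvalue -> normal n
    phi = np.degrees(np.arctan2(n[2], n[0]))  # azimuthal angle of n
    # rms distance from the "average" plane spanned by the other eigenvectors
    sigma = np.sqrt(np.mean((c @ n) ** 2))
    return dA, phi, sigma
```

For a planar shape, \(\mathbf{n}\) is the plane normal, all \(h_i = \mathbf{c}_i\cdot\mathbf{n}\) vanish, and \(\sigma=0\); any out-of-plane deformation makes \(\sigma>0\).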
Obviously, \(\sigma=0\) if the fibre is planar. In Fig. 12(c), we plot \(\sigma/d\) versus the vertical position of the fibre centre of mass, \(y_{CM}/d\), for the numerical trials shown in Figs. 10 and 11, with the same meaning of the colours. All the fibres, even if initially flat, become non-planar as they settle down, and their non-flatness is pronounced.

## 5 Conclusions

In this paper, we investigated the dynamics of very flexible filaments sedimenting in a viscous fluid at a Reynolds number much smaller than unity. In the experimental study, we used ball-chains and a silicone oil. We also performed numerical simulations of the evolution of very elastic fibres with the elasto-gravitational number \(B>4000\), using the precise Hydromultipole numerical codes, based on the multipole expansion of solutions to the Stokes equations. We have demonstrated that the dynamics of ball-chains and very elastic filaments are similar. We observed that the dynamics of shorter and longer ball-chains are different. In the early stage of the evolution, shorter ball-chains form shapes that could be approximately classified as vertically oriented planar U-shapes, while longer ball-chains form shapes close to vertically oriented planar W-shapes. These findings agree with numerical studies of sedimenting elastic filaments carried out earlier [18, 12, 25], and also in this paper. To the best of our knowledge, W-shapes of very flexible filaments have not been observed in experiments so far. However, we observed that vertically oriented planar U- and W-shapes do not seem to be stable, unlike those of stiffer elastic filaments [20, 19, 18, 12, 23]. In the experiments, we found that shorter ball-chains typically rotate; moreover, their end-to-end vectors are not horizontal and sometimes even increase their inclination angle with time. Longer ball-chains relatively quickly deform significantly and move out of a vertical plane.
These features are also present in our numerical simulations of very elastic fibres with a large value of \(B\), provided that initial out-of-plane configurations with a non-horizontal end-to-end vector are chosen. The experimental observations are limited to relatively short times, owing to the height of the container. Therefore, the numerical simulations presented here were also performed for a comparable range of times and vertical distances. The numerical study of the long-time behaviour of sedimenting very elastic fibres will be the subject of a separate article, with an emphasis on the identification of attractors of the dynamics: periodic motions or stationary configurations. Those results would extend the class of solutions shown in Ref. [12]. Finally, in the experiments, we also studied hydrodynamic interactions between two ball-chains, sedimenting one above the other, initially in approximately perpendicular vertical planes. We have shown that, depending on the initial conditions and length, they can approach each other (so-called attraction) or move away from each other (so-called repulsion). In previous studies, the attraction of elastic fibres in symmetric initial configurations one above the other has been found numerically [26, 12], but not the repulsion: our results here are new. In the literature, two very elastic fibres with large values of \(B\) have been observed numerically to always separate from each other if initially straight and in the same horizontal plane: collinear [26] or in a symmetric configuration [24]. For moderate values of \(B\) and symmetric initial configurations of two elastic fibres, an attracting stationary relative configuration has been found, with attraction at larger distances and repulsion at smaller ones [24, 48]. The examples discussed above illustrate the complexity of the sedimentation of multiple flexible objects. This work is just a step towards understanding its basic features.
## Conflicts of interest

There are no conflicts to declare.

## Appendix A Details of the numerical simulations

### Straight fibres with a very small random perturbation and their evolution

It is known from the literature [18, 12] that it takes a long time to observe an out-of-plane instability of a very elastic, almost straight fibre, and in this Appendix, we provide some estimates of how fast it grows. Assume that a straight elastic fibre at the elastic equilibrium is oriented horizontally or inclined at a certain angle \(\gamma\) with respect to the horizontal plane, and randomly perturb the positions of all the beads with a maximum amplitude of \(0.0001d\). Examples of the fibre dynamics for such initial configurations are described below. Let us start with a short fibre with \(N=14\) beads. In the case of _almost_ straight initial configurations, when the fibre is oriented horizontally, the fibre stays _almost_ in a vertical plane and _almost_ symmetric with respect to reflections in the perpendicular vertical plane for a relatively long time. The initial straight shape turns into a U-shape relatively quickly, but the fibre then keeps such a shape for a long time. Only after that does the fibre stop sedimenting purely vertically and start moving also sideways. For example, in the case of a relatively flexible fibre with the elasto-gravitational number \(B=8,000\), vertical sedimentation is observed until about \(230\tau_{b}\), with the fibre travelling to a depth of about \(290d\). Then the symmetry of the U-shape remains present, but the fibre starts drifting sideways, similar to the intermediate mode reported in Ref. [12]. The results of simulations for such initial conditions cannot qualitatively describe the experimental results presented in Fig. 4. The initial inclination of the _almost_ straight fibre does not lead to a rotation (a feature seen in the experimental trial (b) in Fig. 4), as the left-right symmetry of the U-shape is quickly restored.
A non-horizontal orientation of the end-to-end vector, seen in trial (c) in Fig. 4, does not occur in the simulations under these initial configurations either. A similar situation is observed for a longer fibre with \(N=24\) beads in an _almost_ straight initial configuration. For example, for \(B=8,000\), the initially straight and horizontal shape turns into a W-shape already at \(18\tau_{b}\), at a depth of \(20d\). It then retains an almost symmetric W-shape until \(190\tau_{b}\), at a depth of \(215d\). Then it turns into a hook shape, preserved until a depth of \(522d\) at \(380\tau_{b}\). Nevertheless, the fibre is still oriented _almost_ vertically all this time: the polar angle \(\theta\) of the unit vector \(\mathbf{n}\) starts to differ from \(90^{\circ}\) at \(\approx 200\tau_{b}\), but decreases only to \(89.7^{\circ}\) at \(380\tau_{b}\). This is similar to the initial stage of the sedimentation process presented in Ref. [12] in the case of larger values of \(B\). As for a short fibre, the results of simulations for almost straight configurations of longer fibres cannot qualitatively describe the features seen in the experimental results presented in Fig. 5. In brief, the evolution of tiny perturbations is too slow. We expect that in the experiment, perturbations are not tiny and not as random as, e.g., in Brownian motion. Therefore, in this work, we used different initial configurations, with a small (but not tiny) perturbation from a vertical plane. Details are given in the next section.

### C-shaped and propeller-shaped initial configurations

Ultimately, we performed simulations for the two families of initial configurations of fibres described below. I. The fibre has a planar **C-shape** constructed in three steps: (1) A shape of a circular arc, i.e.
\(C\)-planar shape, is created using the following equations, \[\begin{cases}x_{i}=r\cdot\sin\left(\left(i-\frac{N+1}{2}\right)\alpha\right)\\ y_{i}=0\\ z_{i}=r\cdot\cos\left(\left(i-\frac{N+1}{2}\right)\alpha\right)-r,\end{cases} \tag{11}\] where \(\alpha\) is the curvature angle, \(r=l_{0}/\left(2\cdot\sin\left(\alpha/2\right)\right)\) is the radius of the circular arc, and \(\mathbf{r}_{i}=(x_{i},y_{i},z_{i})\). (2) The whole fibre is then inclined out of the horizontal plane by a certain tilt angle \(\gamma\), via the rotation matrix \[\mathcal{R}=\begin{bmatrix}\cos\gamma&-\sin\gamma&0\\ \sin\gamma&\cos\gamma&0\\ 0&0&1\end{bmatrix}. \tag{12}\] (3) Positions of all the beads are randomly disturbed with a maximum amplitude of \(0.0001d\).

II. The fibre has a **propeller shape** constructed in five steps: (1) An \(S\)-planar shape is created using Eq. 13: \[\begin{cases}x_{i}=r\cdot\sin\left(\left(i-\frac{2J+1}{2}\right)\alpha\right)\\ y_{i}=0\\ z_{i}=\pm\left(r\cdot\cos\left(\left(i-\frac{2J+1}{2}\right)\alpha\right)-r\right).\end{cases} \tag{13}\] Here, the sign "minus" is taken for bead numbers \(i\) with \(i\!\leq\!J\), and the sign "plus" for the other beads. (2) A specific bead \(J\) is chosen that, together with the neighbouring bead \((J+1)\) in the case of an even \(N\), will initially serve as a tip of the propeller. (3) One arm of the fibre, formed by the beads with \(i\leq J\), is then inclined at a certain angle \(\gamma\). (4) The other arm of the fibre, formed by the beads with \(i\!\geq\!J+1\), is inclined at the angle \(-\gamma\). (5) Positions of the beads are randomly disturbed with a maximum amplitude of \(0.0001d\).

As the initial conditions in the simulations shown in Fig. 10, we took shapes with the following values of the parameters:

* C-shape with \(\alpha=1^{\circ}\) and \(\gamma=1^{\circ}\),
* symmetric propeller shape with equal arms (\(J\!=\!N/2\)), \(\alpha=1^{\circ}\) and \(\gamma=5.65^{\circ}\).
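For reference, the construction of Eqs. 11–13 can be written compactly in code. One detail is our assumption rather than a statement of the recipe above: the arm tilts of the propeller are implemented here with the same rotation matrix as Eq. 12, applied with \(\pm\gamma\):

```python
import numpy as np

def _rot(ang):
    """Rotation matrix of Eq. 12 (rotation by 'ang' in the x-y plane)."""
    c, s = np.cos(ang), np.sin(ang)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def c_shape(N, alpha, gamma, l0=1.01, eps=1e-4, rng=None):
    """Inclined planar C-shape (Eqs. 11-12): circular arc of curvature
    angle alpha (degrees), tilted by gamma, then randomly perturbed."""
    rng = np.random.default_rng() if rng is None else rng
    a, g = np.radians(alpha), np.radians(gamma)
    r_arc = l0 / (2.0 * np.sin(a / 2.0))
    i = np.arange(1, N + 1)
    phase = (i - (N + 1) / 2.0) * a
    r = np.stack([r_arc * np.sin(phase),
                  np.zeros(N),
                  r_arc * np.cos(phase) - r_arc], axis=1)
    r = r @ _rot(g).T
    return r + rng.uniform(-eps, eps, r.shape)

def propeller(N, J, alpha, gamma, l0=1.01, eps=1e-4, rng=None):
    """Propeller shape (Eq. 13): S-shaped arc whose two arms, split at
    bead J, are tilted by +gamma and -gamma (assumed Eq. 12 rotation)."""
    rng = np.random.default_rng() if rng is None else rng
    a, g = np.radians(alpha), np.radians(gamma)
    r_arc = l0 / (2.0 * np.sin(a / 2.0))
    i = np.arange(1, N + 1)
    phase = (i - (2 * J + 1) / 2.0) * a
    sign = np.where(i <= J, -1.0, 1.0)        # "minus" for i <= J (Eq. 13)
    r = np.stack([r_arc * np.sin(phase),
                  np.zeros(N),
                  sign * (r_arc * np.cos(phase) - r_arc)], axis=1)
    for arm, ang in ((i <= J, g), (i > J, -g)):
        r[arm] = r[arm] @ _rot(ang).T
    return r + rng.uniform(-eps, eps, r.shape)
```

By construction, consecutive beads of the C-shape lie on a circle with chord \(2r\sin(\alpha/2)=l_0\), so all bond lengths equal \(l_0\) before the random perturbation is applied.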
As the initial conditions in the simulations shown in Fig. 11, we took shapes with the following values of the parameters:

* C-shape with \(\alpha=1^{\circ}\) and \(\gamma=1^{\circ}\),
* asymmetric propeller shape with unequal arms (\(J\!=\!N/2-1\)), \(\alpha=1^{\circ}\) and \(\gamma=5.65^{\circ}\).

In Figs. 10a-b and Fig. 11a-b, the first rows of the snapshots show the projections of the initial shapes defined above.

### Evolution of the maximum bending angle of an elastic fibre

In the experiments, the ball-chain local bending angles \(\beta_{i}\) typically did not exceed \(33^{\circ}-40^{\circ}\), remaining smaller than the maximum bending angle out of the fluid (around \(55^{\circ}\)). In the numerical simulations of flexible fibres, the maximum bending angles were typically of a magnitude comparable to that in the experiments. In the simulations of a short fibre with \(N=14\) beads, for the blue case, the bending angle starts at \(1^{\circ}\) at \(y_{CM}=0d\), then the maximum bending angle of \(57^{\circ}\) is reached at a depth of \(y_{CM}\approx 60d\), and further down, at \(y_{CM}=300d\), it decreases to \(33^{\circ}\). For the red case, the bending angle starts at \(6^{\circ}\) at \(y_{CM}=0d\), then the maximum bending angle of \(44^{\circ}\) is reached at a depth of \(y_{CM}\approx 65d\), and further down, at \(y_{CM}=300d\), it decreases to \(39^{\circ}\). In the simulations of a long fibre with \(N=24\) beads, for the black case, the bending angle starts at \(1^{\circ}\) at \(y_{CM}=0d\), then the maximum bending angle of \(39^{\circ}\) is reached at a depth of \(y_{CM}\approx 100d\), and further down, at \(y_{CM}=300d\), it decreases to \(26^{\circ}\). For the green case, the bending angle starts at \(6^{\circ}\) at \(y_{CM}=0d\), then the maximum bending angle of \(35^{\circ}\) is reached at a depth of \(y_{CM}\approx 65d\), and further down, at \(y_{CM}=300d\), it decreases to \(29^{\circ}\).
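The local bending angles \(\beta_i\) quoted above can be extracted from simulated (or tracked) bead positions as follows. This is our own sketch, with \(\beta_i\) taken as the angle between consecutive unit bond vectors, consistent with the harmonic bending energy of Eq. 3:

```python
import numpy as np

def bending_angles(r):
    """Local bending angles beta_i (degrees) of a bead chain with
    positions r of shape (N, 3); beta_i = 0 for a straight triplet."""
    t = np.diff(r.astype(float), axis=0)
    t /= np.linalg.norm(t, axis=1, keepdims=True)      # unit bond vectors
    cosb = np.clip(np.sum(t[:-1] * t[1:], axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cosb))
```

The maximum bending angle reported for each case is then simply `bending_angles(r).max()` evaluated at every recorded time step.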
## Appendix B Experiment: description of the videos

A list of the experimental videos available in the Electronic Supplementary Information (ESI)\({}^{\dagger}\). The names of the video files refer to the corresponding figure numbers.

**Video 4a:** Settling of a single 12-bead ball-chain under gravity in silicone oil, recorded from two cameras with perpendicular lines of sight. Snapshots from this trial are shown in Fig. 4(a). The duration of the experiment in real time is 132 s.

**Video 4b:** Settling of a single 12-bead ball-chain under gravity in silicone oil, recorded from two cameras with perpendicular lines of sight. Snapshots from this trial are shown in Fig. 4(b). The ball-chain rotates. The duration of the experiment in real time is 128 s.

**Video 4c:** Settling of a single 12-bead ball-chain under gravity in silicone oil, recorded from two cameras with perpendicular lines of sight. Snapshots from this trial are shown in Fig. 4(c). Rotation and a non-horizontal orientation of the end-to-end vector are visible. The duration of the experiment in real time is 136 s.

**Video 5a:** Settling of a single 20-bead ball-chain under gravity in silicone oil, recorded from two cameras with perpendicular lines of sight. Snapshots from this trial are shown in Fig. 5(a). The formation of a W-shape in the early stage of the evolution is visible. The duration of the experiment in real time is 110 s.

**Video 5b:** Settling of a single 20-bead ball-chain under gravity in silicone oil, recorded from two cameras with perpendicular lines of sight. Snapshots from this trial are shown in Fig. 5(b). The formation of a wide, irregular U-shape in the early stage of the evolution is visible. The duration of the experiment in real time is 110 s.

**Video 5c:** Settling of a single 20-bead ball-chain under gravity in silicone oil, recorded from two cameras with perpendicular lines of sight. Snapshots from this trial are shown in Fig. 5(c). A hook shape is formed very early.
The duration of the experiment in real time is 110 s.

**Video 6a:** Settling of a pair of 12-bead ball-chains under gravity in silicone oil, recorded from two cameras with perpendicular lines of sight. The ball-chains were placed initially one above the other, in perpendicular orientations. Snapshots from this trial are shown in Fig. 6(a). The ball-chains approach each other. The duration of the experiment in real time is 104 s.

**Video 6c:** Settling of a pair of 12-bead ball-chains under gravity in silicone oil, recorded from two cameras with perpendicular lines of sight. The ball-chains were placed initially one above the other, in perpendicular orientations. Snapshots from this trial are shown in Fig. 6(c). The ball-chains move away from each other. The duration of the experiment in real time is 108 s.

**Video 7a:** Settling of a pair of 20-bead ball-chains under gravity in silicone oil, recorded from two cameras with perpendicular lines of sight. The ball-chains were placed initially one above the other, in perpendicular orientations. Snapshots from this trial are shown in Fig. 7(a). The ball-chains approach each other. The duration of the experiment in real time is 108 s.

**Video 7c:** Settling of a pair of 20-bead ball-chains under gravity in silicone oil, recorded from two cameras with perpendicular lines of sight. The ball-chains were placed initially one above the other, in perpendicular orientations. Snapshots from this trial are shown in Fig. 7(c). The ball-chains move away from each other. The duration of the experiment in real time is 140 s.

## Acknowledgements

This work was supported in part by the National Science Centre under grant UMO-2018/31/B/ST8/03640.
2306.03847
Learning Human Mesh Recovery in 3D Scenes
We present a novel method for recovering the absolute pose and shape of a human in a pre-scanned scene given a single image. Unlike previous methods that perform sceneaware mesh optimization, we propose to first estimate absolute position and dense scene contacts with a sparse 3D CNN, and later enhance a pretrained human mesh recovery network by cross-attention with the derived 3D scene cues. Joint learning on images and scene geometry enables our method to reduce the ambiguity caused by depth and occlusion, resulting in more reasonable global postures and contacts. Encoding scene-aware cues in the network also allows the proposed method to be optimization-free, and opens up the opportunity for real-time applications. The experiments show that the proposed network is capable of recovering accurate and physically-plausible meshes by a single forward pass and outperforms state-of-the-art methods in terms of both accuracy and speed.
Zehong Shen, Zhi Cen, Sida Peng, Qing Shuai, Hujun Bao, Xiaowei Zhou
2023-06-06T16:35:45Z
http://arxiv.org/abs/2306.03847v1
# Learning Human Mesh Recovery in 3D Scenes

###### Abstract

We present a novel method for recovering the absolute pose and shape of a human in a pre-scanned scene given a single image. Unlike previous methods that perform scene-aware mesh optimization, we propose to first estimate absolute position and dense scene contacts with a sparse 3D CNN, and later enhance a pretrained human mesh recovery network by cross-attention with the derived 3D scene cues. Joint learning on images and scene geometry enables our method to reduce the ambiguity caused by depth and occlusion, resulting in more reasonable global postures and contacts. Encoding scene-aware cues in the network also allows the proposed method to be optimization-free, and opens up the opportunity for real-time applications. The experiments show that the proposed network is capable of recovering accurate and physically-plausible meshes by a single forward pass and outperforms state-of-the-art methods in terms of both accuracy and speed. Code is available on our project page: [https://zju3dv.github.io/sahmr/](https://zju3dv.github.io/sahmr/).

## 1 Introduction

Monocular human mesh recovery (HMR), i.e., estimating the pose and shape parameters of a parametric human model from a single image, has gained significant attention in recent years. To better capture and understand human behaviors, many recent works [1, 2, 3, 4, 5] address the problem of scene-aware HMR, which involves human-scene interaction constraints when recovering human meshes, given the 3D geometry of the scene scanned by range sensors [5, 6, 7] as well as the camera pose of the input image relative to the scene. This setting may enable more applications in video surveillance, household robots, and motion analysis in gyms and clinics. Most existing methods propose using scene-aware optimization to fit the human mesh into a pre-scanned scene.
They optimize a parametric human model, e.g., SMPL [8], iteratively to minimize scene penetration, the chamfer distance of contact regions, and the 3D-2D re-projection error. However, optimization tends to be slow at inference time and is sensitive to initialization and hyperparameters, failing to respond in low-latency applications. As illustrated in Fig. 1, the optimization-based method PROX [5] takes 18.4 s to fit a human model into the scene, while incorrect positions and poses still occur. Recent works [9, 10, 11, 12, 13] propose to recover human meshes with neural networks trained on large-scale datasets [14, 15, 16, 17]. Specifically, the networks learn a mapping from an input image to a human mesh in canonical coordinates.

Figure 1: **Comparison between the optimization-based method and the proposed method.** Optimization-based methods typically fit a parametric human model iteratively by minimizing 2D reprojection error and scene conflicts. In contrast, the proposed method utilizes a single forward pass of the network to estimate the global position (blue ball), contact scene points (colored scene points), and a scene-aware human mesh. This design leads to improvements in both efficiency and accuracy.

Applying these methods in the scene-aware HMR task still requires post-processing optimization, where the global translation and the human poses are refined in accordance with the given scene. However, the monocular prediction is conditioned solely on the input image, omitting the joint distribution of human pose and scene geometry, and therefore tends to suffer from depth ambiguity and occlusion. As a result, the optimization-based post-processing can easily be deteriorated by erroneous initial poses and may even worsen the initial prediction. In this work, we propose a Scene-Aware Human Mesh Recovery network (SA-HMR), the first learning-based approach that predicts the absolute position and mesh of a human in the scene by a single forward pass.
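The core scene-fusion operation in such a design — tokens from an image-based branch attending to 3D scene features — can be sketched as single-head cross-attention. Everything below (shapes, projection matrices, the residual form) is our illustrative assumption, not the actual SA-HMR implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(tokens, scene_feats, Wq, Wk, Wv):
    """HMR-branch tokens (queries) attend to scene-point features
    (keys/values); the result is added back residually so the pretrained
    branch is only 'enhanced', not replaced."""
    Q, K, V = tokens @ Wq, scene_feats @ Wk, scene_feats @ Wv
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))   # (tokens, scene points)
    return tokens + attn @ V
```

The residual form means that with zero-initialized value projections the enhanced network initially reproduces the pretrained monocular predictions, which is a common choice when injecting a new modality into a pretrained branch.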
The overall pipeline is illustrated in Fig. 2. Given the input image and scene point cloud, we first use a sparse 3D CNN to estimate dense scene contacts and the absolute human position, where the scene contact estimation is treated as a point cloud labeling task, and the human position prediction is presented as a voting vector field refinement task. The predicted dense contact points are centered by the human position and passed to a scene network in the human mesh recovery step. Specifically, we enhance a pretrained monocular HMR network METRO [12] by cross-attention with the proposed scene network in parallel. In this way, SA-HMR learns a joint distribution of human pose and scene geometry, resulting in more reasonable postures, contacts, and global positions, as illustrated in Fig. 1. Learning scene-aware cues in the network also avoids scene-aware optimization as post-processing and achieves fast inference speed. We evaluate the proposed method on the RICH [6] and PROX [5] datasets of indoor and outdoor scenes. The experimental results show that SA-HMR is not only effective in recovering absolute positions and meshes that are in accordance with the given scene, but also significantly faster than the optimization-based baselines. In summary, we make the following contributions: * The first optimization-free framework for scene-aware human mesh recovery from a single image and a pre-scanned scene. * The cross-attention design for enhancing a pretrained HMR network with a parallel scene network, enabling joint learning on the human pose and scene geometry. * Superior performance compared to optimization-based baselines in terms of both accuracy and speed. ## 2 Related Work **Monocular Human Mesh Recovery.** Most existing approaches formulate the monocular HMR task as recovering the mesh of statistical human body models, e.g., SMPL and SMPL-X [18, 19, 8], where recent works can be divided into optimization-based and learning-based approaches.
The optimization-based approach iteratively fits a parametric human model by minimizing the 3D-2D re-projection error of body joints and energy terms of heuristic priors, as represented by SMPLify [20], which fits SMPL [8]. More recently, SMPLify-X [19] proposes a variational pose prior and fits the more expressive SMPL-X. Pose-NDF [21] proposes to represent the manifold of plausible human poses with a neural field. While optimization-based methods are general in their mathematical formulation, they are usually sensitive to hyperparameters and require considerable time for inference. The learning-based methods utilize deep neural networks to predict either the parameters [22, 9, 10, 23] or the mesh vertices [11, 12, 13] of the SMPL model. HMR [9] is the pioneering work in predicting SMPL parameters, and SPIN [10] improves upon it using an optimization loop. For predicting SMPL mesh vertices, GraphCMR [11] deforms a template human mesh using a graph neural network, while METRO [12] uses transformers, and [13] uses a graph hierarchy to further improve the performance. However, for scene-aware HMR, the vertices of the human and scene meshes in contact are close in Euclidean space, making methods that regress parameters unsuitable due to errors that accumulate along the kinematic chains. Therefore, the proposed method is built on the works that predict mesh vertices. More details can be found in Sec. 3. **Scene-aware Human Mesh Recovery.** PROX [5] is a seminal work that uses scene constraints to reduce the depth and occlusion ambiguity in monocular HMR. It achieves this by adding two energy terms of human-scene contact and penetration in the optimization process [19]. In addition, scene-aware pose generative models [24, 2] can also be used as prior terms in the scene-aware HMR task.
Other recent works in this area include MoCapDeform [4], which considers deformable scene objects, LEMO [1], which uses temporal information, and HULC [3], which uses consecutive frames and dense contact prediction on both the scene and human body. In contrast to these works, the proposed method is optimization-free and requires only a single forward pass. On the broader topic of capturing humans in a scene-aware manner, [25, 26, 27] propose using a simulator and a dynamics model, where a pre-defined agent is controlled to interact with the scene, and [28, 29] consider human-object arrangement by first predicting the human and object and then performing global optimization. **Attention in Transformers.** Attention is a key mechanism in Transformers [30]. It allows a set of query features to fuse the most relevant information from another set of key-value features. When query and key-value features come from the same source, it is called self-attention, otherwise cross-attention. In HMR, METRO [12] uses self-attention to reduce occlusion ambiguity by establishing non-local feature exchange between visible and invisible parts of a template human mesh. In feature matching, SuperGlue [31] uses cross-attention to make the corresponding image features more similar. Predator [32] uses cross-attention in matching two sets of point clouds. Inspired by feature matching, the proposed method uses cross-attention to potentially make the features of the human and scene that are in contact more similar, resulting in better contact and more reasonable postures. ## 3 Methods Given a calibrated image and a pre-scanned scene point cloud (Sec. 3.1), SA-HMR first estimates the absolute human root position and scene contacts (Sec. 3.2), and then recovers the human mesh with the contact points by enhancing a pretrained METRO network (Sec. 3.3). An overview of the proposed method is presented in Fig. 2. ### Preliminaries **Human Representation.** We use SMPL [8] as the human representation.
SMPL is a parametric model that uses the body joint rotations, root translation, and body shape coefficients to compute the body mesh. Following [11, 12], we directly predict the SMPL mesh vertices \(V\in\mathbb{R}^{6890\times 3}\), and use the H36M [14] joint regression matrix \(M\in\mathbb{R}^{14\times 6890}\) to compute 3D joints \(J\in\mathbb{R}^{14\times 3}\) from the vertices for quantitative evaluation, \(J=MV\). **Scene and Image Representation.** We assume that the scene is pre-scanned with range sensors, as in RICH [6] and PROX [5], and the image is calibrated and localized in the scene, i.e., with known intrinsic and extrinsic parameters \(\{(f,c_{x},c_{y}),(R_{c},c_{c})\}\). Following METRO [12], we detect a square bounding box around the target human and resize the cropped region as the input image \(I\in\mathbb{R}^{224\times 224\times 3}\). Based on the camera parameters and the bounding box, we select scene points that fall within the visual frustum as the input scene point cloud \(S\in\mathbb{R}^{N_{S}\times 3}\). **Human-Scene Contact.** Following PROX [5], we use 7 regions of the SMPL mesh that are most likely to be contacted. The details are provided in the supplementary material. Using these 7 categories and one for not being in contact, we perform a segmentation task on the scene point cloud. ### Human Root and Scene Contacts Given an image \(I\) bounding the human, we propose using a 2D convolutional neural network (CNN) to extract image features \(F\) and predict the initial human root \(r\). Based on the scene points \(S\), we unproject the image features \(F\) to 3D, resulting in \(\hat{F}\). Additionally, we calculate point-wise offset vectors \(O\) that point from a voxelized scene point cloud to the initial root. By taking \(\hat{F}\) and \(O\) as input, a sparse 3D CNN predicts the segmentation of scene contacts and the refined offsets, which are then converted to the refined human root \(r^{*}\).
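As a minimal sketch of the joint regression described above: the network predicts the SMPL mesh vertices \(V\) and a fixed regression matrix \(M\) maps them to 14 H36M-style joints via \(J=MV\). The shapes follow the paper; the vertex values and the regressor weights below are synthetic placeholders, not the real H36M regressor.

```python
import numpy as np

def regress_joints(vertices: np.ndarray, regressor: np.ndarray) -> np.ndarray:
    """Compute 3D joints J = M V (Sec. 3.1) from predicted mesh vertices."""
    return regressor @ vertices  # (14, 6890) @ (6890, 3) -> (14, 3)

rng = np.random.default_rng(0)
V = rng.normal(size=(6890, 3))           # stand-in for predicted SMPL vertices
# A real joint-regressor row is a sparse convex combination of a few vertices;
# here we fake one by normalizing non-negative random weights per joint.
M = np.abs(rng.normal(size=(14, 6890)))
M /= M.sum(axis=1, keepdims=True)
J = regress_joints(V, M)
print(J.shape)  # (14, 3)
```

Because the regression is linear, scaling the vertices scales the joints identically, which is why the metric can be computed on either representation.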
An overview of this process is presented in the left column of Fig. 2. **Initial Root.** We predict the initial root \(r=(X,Y,Z)\) in a 2.5D manner following SMAP [33]. Specifically, we use a CNN to predict the 2D heatmap and a normalized depth map of the root. Then, the 2D position \((x,y)\) is obtained by applying argmax to the heatmap, and the corresponding normalized depth value \(\tilde{Z}\) is retrieved from the depth map. Finally, using the intrinsic parameters \(f,c_{x},c_{y}\) and image size \(w\), the 3D root position is computed: \[Z=\tilde{Z}\frac{f}{w} \tag{1}\] \[X=\frac{x-c_{x}}{f}\cdot Z,\quad Y=\frac{y-c_{y}}{f}\cdot Z \tag{2}\] Figure 2: **Overview of the proposed SA-HMR. 1.** The human root and scene contact estimation module (Sec. 3.2) that first predicts the initial root and then refines the root with 3D scene cues using a sparse 3D CNN. The module also predicts contact labels [5] for each scene point. Please refer to Sec. 3.2 for a detailed definition of the 3D feature construction module. **2.** The scene-aware human mesh recovery module (Sec. 3.3) that enhances the pretrained METRO [12] network with a parallel scene network. The scene network takes the predicted contact scene points as input, and uses cross-attention to pass messages to the intermediate features of the METRO network. From our observation, estimating \((x,y)\) achieves good results with a mean squared error of less than two pixels across datasets. However, estimating \(\tilde{Z}\) is relatively challenging, possibly due to the variations in human shapes. **3D Feature Construction.** Based on the initial root \(r\) and image features \(F\), we construct the 3D features on the voxelized scene point cloud, which is illustrated in Fig. 3. First, we select regions of interest around the initial root \(r\) in the point cloud. Specifically, we treat \(r\) as an anchor and keep points within a radius \(\gamma_{1}\).
Since \(Z\) has more uncertainty than \((x,y)\), we sample two additional anchors along the z-axis, whose distance to \(r\) is \(\gamma_{2}\). Next, we construct a sparse volume \(\bar{S}\) by voxelizing these points with voxel size \(s_{vox}\), where the center of each voxel is denoted as \(\bar{s}_{i}\). For each voxel \(i\), the feature consists of the offset vector \(o_{i}\) and the unprojected image feature \(\hat{f}_{i}\). Specifically, \(o_{i}\) is a vector pointing from the voxel center to the human root: \[o_{i}=r-\bar{s}_{i} \tag{3}\] \(\hat{f}_{i}\) is computed by projecting the voxel center \(\bar{s}_{i}\) onto the image using the camera parameters and bilinearly sampling the image feature map \(F\). **Estimating Refined Root and Scene Contacts.** We use a sparse 3D CNN [34] to process the constructed 3D features and learn to improve the root estimation and predict the scene contacts. Specifically, the output of each voxel includes an updated offset vector \(o_{i}^{*}\), a confidence \(c_{i}\), and a segmentation indicating the contact category. We compute the refined root \(r^{*}\): \[r^{*}=\sum_{i}c_{i}\cdot(o_{i}^{*}+\bar{s}_{i}). \tag{4}\] There are 8 categories of contact points, including the 7 regions on the body most likely to be contacted [5] and 1 category for not being in contact. We take the category with the highest score as the prediction for each voxel and assign that category to the dense points belonging to the voxel. The contact points \(\hat{S}_{seg3d}\in\mathbb{R}^{\bar{N}_{S}\times 3}\) serve as the input for the mesh recovery module. ### Scene-aware Human Mesh Recovery Since the training data of scene-aware human mesh recovery is limited, we build our model upon a network named METRO [12] that is pre-trained on large-scale data of monocular human mesh recovery.
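The root estimation pipeline above can be sketched as follows: Eqs. (1)-(2) lift the heatmap argmax \((x,y)\) and the normalized depth \(\tilde{Z}\) to a metric 3D root using the intrinsics \((f,c_x,c_y)\) and image size \(w\); Eqs. (3)-(4) then refine the root as a confidence-weighted vote over voxel offsets. This is a hedged toy version: we assume the confidences \(c_i\) are normalized to sum to one (e.g. via softmax), which the paper does not state explicitly, and all input values below are synthetic.

```python
import numpy as np

def lift_root(heatmap, depth_map, f, cx, cy, w):
    """Lift heatmap peak + normalized depth to a 3D root, Eqs. (1)-(2)."""
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)  # row=y, col=x
    Z = depth_map[y, x] * f / w          # Eq. (1)
    X = (x - cx) / f * Z                 # Eq. (2)
    Y = (y - cy) / f * Z
    return np.array([X, Y, Z])

def refine_root(voxel_centers, refined_offsets, confidence_logits):
    """Confidence-weighted voting over per-voxel root estimates, Eq. (4)."""
    c = np.exp(confidence_logits - confidence_logits.max())
    c /= c.sum()                                  # assumed softmax normalization
    votes = refined_offsets + voxel_centers       # each voxel's root estimate
    return (c[:, None] * votes).sum(axis=0)

w = 224
heatmap = np.zeros((w, w))
heatmap[100, 150] = 1.0                  # synthetic peak at pixel (x=150, y=100)
depth_map = np.full((w, w), 2.0)         # constant normalized depth
r = lift_root(heatmap, depth_map, f=500.0, cx=112.0, cy=112.0, w=w)

centers = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [0.0, 2.0, 2.0]])
offsets = r - centers                    # ideal refined offsets, Eq. (3)
logits = np.array([0.0, 1.0, -1.0])
r_star = refine_root(centers, offsets, logits)  # recovers r when offsets are ideal
print(r, r_star)
```

With ideal offsets every voxel votes for the same point, so the weighted sum returns the root regardless of the confidence distribution; in practice the confidences let the network down-weight voxels with unreliable offsets.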
METRO processes features based on the self-attention mechanism, and our approach enhances METRO by adding a parallel scene network, which provides a cross-attention-based mechanism that enables METRO to notice important scene details and achieve scene-aware human mesh recovery. **METRO** consists of a CNN backbone and multiple Transformer encoders. It first extracts global CNN features, then attaches the features to the vertices of a zero-posed SMPL mesh, and finally predicts a posed and shaped mesh through the transformers. The transformer part is illustrated in the orange part of Fig. 4. **Enhancing METRO with Cross-Attention.** We improve METRO with a scene network, which encourages the predicted human vertices to be close to the corresponding contact scene points \(\hat{S}_{seg3d}\) (Sec. 3.2). As illustrated in Fig. 4, we add a parallel network, which has a similar architecture to the transformer of METRO, to extract features of scene contact points and output the point positions like an autoencoder. Specifically, we first use METRO's CNN backbone to extract the image feature and map it to a set of vertex tokens by a fully-connected layer. In METRO, these tokens are directly concatenated with the positions of initial human vertices. Figure 3: **3D feature construction.** We voxelize a scene point cloud to a sparse volume. The initial feature of each voxel consists of two parts, which are the offset vector pointing from the voxel center to the human root and the unprojected image features. Figure 4: **Enhancing METRO with a parallel network.** The orange parts are the original METRO [12] network, where the residual connection and positional encoding are omitted for simplicity. The blue parts are the proposed parallel network which takes predicted contact scene points as input. The yellow parts indicate feature interaction between METRO and the parallel network.
However, the number of scene contact points differs from the number of vertex tokens, and there is no one-to-one correspondence between them. To resolve this issue, we average-pool the vertex tokens based on the contact categories defined on the SMPL mesh vertices [5], resulting in 7 tokens. Then, we append to each scene contact point the corresponding aggregated token based on the category predicted in Sec. 3.2. Intuitively, this helps the cross-attention to focus on the semantically corresponding parts. To be invariant to the global translation, the scene contact points are zero-centered by the predicted root \(r^{*}\). Motivated by recent feature matching methods [31, 32], we propose to use cross-attention to pass features from scene contact points to human vertices. Cross-attention and self-attention share the same underlying mechanism: both first compute the similarity of query and key, and then use a weighted sum to fuse features. When the query and key come from the same source features, it is self-attention, and otherwise cross-attention. In practice, we use a linear attention operator [35] to improve efficiency. A visualization of cross-attention over human vertices and scene points is shown in Fig. 5. The detailed network architecture is provided in the supplementary material. Note that we use a regressor layer whose weights are shared with METRO, which regresses from point-wise features to point positions \((x,y,z)\). Therefore, to produce similar \((x,y,z)\), the input features of this layer should also be similar. This strategy implicitly aligns the features of human vertices and scene points when their final predictions are near in 3D space, thus facilitating the cross-attention to find correspondences between human vertices and scene points.
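The message passing described above can be sketched in a few lines: vertex tokens are average-pooled into 7 contact-category tokens, each scene contact point is paired with its category token, and cross-attention then fuses scene features into the vertex tokens. This is a toy numpy version: plain softmax attention stands in for the linear attention operator [35] used in the paper, projections, heads, and residual structure are omitted, and all feature values and counts are synthetic.

```python
import numpy as np

def pool_by_category(vertex_tokens, vertex_categories, num_categories=7):
    """Average-pool vertex tokens into one token per contact category."""
    return np.stack([vertex_tokens[vertex_categories == c].mean(axis=0)
                     for c in range(num_categories)])

def cross_attention(queries, keys, values):
    """Scaled dot-product attention: queries fuse the most similar key-values."""
    scores = queries @ keys.T / np.sqrt(queries.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)       # rows sum to 1
    return attn @ values

rng = np.random.default_rng(0)
vertex_tokens = rng.normal(size=(431, 32))       # synthetic human vertex tokens
vertex_categories = np.arange(431) % 7           # synthetic contact label per vertex
category_tokens = pool_by_category(vertex_tokens, vertex_categories)

scene_points = rng.normal(size=(50, 32))         # synthetic scene-point features
scene_categories = rng.integers(0, 7, size=50)   # predicted contact categories
scene_tokens = scene_points + category_tokens[scene_categories]

fused = vertex_tokens + cross_attention(vertex_tokens, scene_tokens, scene_tokens)
print(fused.shape)
```

Appending the pooled category token to each scene point biases the similarity scores toward semantically matching body regions, which is the intuition stated in the text.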
### Training Loss **Root and Contact.** The loss function \(L_{\text{RC}}\) for the root and contact estimation is defined as: \[L_{\text{RC}}=L_{\text{R2D}}+w_{\text{RZ}}\cdot L_{\text{RZ}}+L_{\text{ROV}}+L_{\text{R3D}}+L_{\text{C}} \tag{5}\] where \(L_{\text{R2D}}\) is the MSE loss on the root heatmap; \(L_{\text{RZ}}\), \(L_{\text{ROV}}\), and \(L_{\text{R3D}}\) are the L1 losses on the relative depth, offset vectors, and the 3D root, respectively; \(L_{\text{C}}\) is the cross-entropy loss for contact categories of voxel points, where we additionally train an auxiliary task of 2D contact segmentation analogous to the segmentation of voxel points. **Human Mesh Recovery.** The loss function \(L_{\text{HMR}}\) for the mesh recovery is defined as: \[L_{\text{HMR}}=L_{\text{V}}+L_{\text{J}}+L_{\text{CP}}+L_{\text{GV}} \tag{6}\] where \(L_{\text{V}}\), \(L_{\text{J}}\), \(L_{\text{CP}}\), and \(L_{\text{GV}}\) are the L1 losses on the translation-aligned human vertices, human joints, reconstructed contact points, and global human vertices, respectively. More details are in the supplementary material. ### Implementation Details We train the two modules separately. For the root and contact module, the CNN is HRNet-stride-4 [36] with METRO initialization, and the sparse 3D CNN is SPVCNN [34, 37] with random initialization. We use linear layers to align intermediate feature dimensions. \(\gamma_{1}\) is \(1.25m\), \(\gamma_{2}\) is \(0.5m\), \(s_{vox}\) is \(5^{3}cm^{3}\), and \(w_{\text{RZ}}\) is 10. The contact threshold is \(7cm\). We flip images for augmentation. The module is trained with an initial learning rate of 3.75e-5 and a batch size of 24. It converges after 30 epochs of training on one V100 GPU. For the mesh recovery module, the METRO network is initialized with pretrained weights and the scene network is randomly initialized. The initial learning rate is 7.5e-6 and the batch size is 24. It converges after 30 epochs of training.
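The loss compositions in Eqs. (5)-(6) can be sketched directly. Each term is stubbed as a precomputed scalar; only the weighting structure comes from the paper (\(w_{\text{RZ}}=10\), all other terms implicitly weighted 1), and the numeric values below are arbitrary.

```python
def loss_rc(l_r2d, l_rz, l_rov, l_r3d, l_c, w_rz=10.0):
    """Root-and-contact loss, Eq. (5): only the relative-depth term is reweighted."""
    return l_r2d + w_rz * l_rz + l_rov + l_r3d + l_c

def loss_hmr(l_v, l_j, l_cp, l_gv):
    """Mesh-recovery loss, Eq. (6): an unweighted sum of the four L1 terms."""
    return l_v + l_j + l_cp + l_gv

total_rc = loss_rc(l_r2d=0.1, l_rz=0.02, l_rov=0.3, l_r3d=0.05, l_c=0.7)
total_hmr = loss_hmr(l_v=1.0, l_j=0.5, l_cp=0.2, l_gv=0.8)
print(total_rc, total_hmr)
```

In a real training loop each stub would be replaced by the corresponding MSE, L1, or cross-entropy term computed on a batch.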
## 4 Experiments ### Datasets We train and evaluate the proposed method on the RICH [6] and PROX [5] datasets separately. **RICH [6]** captures multi-view video sequences in 6 outdoor and 2 indoor environments. It provides images, reconstructed bodies, scene scans, and human-scene contact labels annotated on SMPL vertices. We skip frames including multiple subjects, remove the first 45 frames of each video to avoid static starting poses, and skip frames where the subjects' 2D bounding boxes are not inside the images. Then we downsample the train / val / test splits to 2 / 1 / 1 fps, resulting in 15360 / 3823 / 3316 frames. **PROX [5]** captures monocular RGBD videos in 12 indoor environments. We use the RGB images and scene scans. It is a challenging dataset where severe occlusions exist in most frames. We use the qualitative set for training and the quantitative set for testing. To obtain better training annotations, we additionally use HuMoR [26], which utilizes a motion prior and optimizes over a sequence of frames. Then, we manually remove failed frames that are not consistent with the images and scenes. Finally, the training split contains 4852 frames. Figure 5: **Visualization of the cross-attention from a body vertex (blue point) to the predicted dense scene contacts (white points).** (a) The image and the predicted dense scene contacts. (b) The hand vertex is in contact with the scene according to the image, and its feature is similar to the nearby scene point features, enabling the final vertex prediction to be close to the corresponding scene surface. (c) The arm vertex is not in contact with the scene, where feature similarities tend to be evenly distributed. The feature similarities are normalized to the same range of 0\(\rightarrow\)1. ### Metrics We evaluate quantitatively in terms of human mesh recovery and human-scene contact.
**Human Mesh Recovery.** We report the Global Mean-Per-Joint-Position-Error (**G-MPJPE**) and Global Mean-Per-Vertex-Error (**G-MPVE**) in scene coordinates, which calculate the average L2 distances between predicted and ground truth joints/vertices. Additionally, we report the translation-aligned metrics **MPJPE** and **MPVE**. **Human-scene Contact.** We report the Penetration Error (**PenE**) and Contact Failure Error (**ConFE**). PenE measures the total distance that SMPL vertices penetrate the scene mesh: \[\text{PenE}=\sum_{i=1}^{V}\mathbb{1}_{x<0}[\mathit{sdf}(v_{i},\mathcal{S})]\cdot|\mathit{sdf}(v_{i},\mathcal{S})|, \tag{7}\] where \(V\) is the number of SMPL vertices, \(\mathit{sdf}(v_{i},\mathcal{S})\) is the signed distance of vertex \(v_{i}\) to scene \(\mathcal{S}\), and \(\mathbb{1}_{x<0}[\cdot]\) is an indicator function that returns 1 when the condition is met, and 0 otherwise. ConFE measures contact quality when the ground-truth contact label is available: \[\text{ConFE}=\sum_{i=1}^{V}\big(C_{gt}(v_{i})\cdot|\mathit{sdf}(v_{i},\mathcal{S})|+(1-C_{gt}(v_{i}))\cdot\mathbb{1}_{x<0}[\mathit{sdf}(v_{i},\mathcal{S})]\cdot|\mathit{sdf}(v_{i},\mathcal{S})|\big), \tag{8}\] where \(C_{gt}(v)\) equals 1 if \(v\) is labeled as in contact, and 0 otherwise. To obtain a low ConFE, the body vertices in contact should be near the scene surface, while vertices not in contact should avoid penetration. ### Main Results **Baselines.** Optimization-based: SMPLify-X [19] uses RGB only, PROX [5] extends it with losses of human-scene contact and penetration, POSA [2] and PLACE [38] extend PROX with scene-aware pose priors, and METRO [12]+SA-Opt stands for post-processing a finetuned METRO with scene-aware optimization, which will be explained later. Learning-based: METRO predicts canonical human mesh vertices and a weak-perspective camera.
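The contact metrics in Eqs. (7)-(8) can be sketched given per-vertex signed distances \(\mathit{sdf}(v_i,\mathcal{S})\) (negative meaning penetration) and binary ground-truth contact labels; the signed-distance values below are synthetic, not computed from a real scene mesh.

```python
import numpy as np

def pen_e(sdf):
    """Eq. (7): total penetration depth over all vertices."""
    penetrating = sdf < 0
    return np.sum(np.abs(sdf[penetrating]))

def con_fe(sdf, contact_gt):
    """Eq. (8): contact vertices should lie on the surface (small |sdf|),
    while non-contact vertices are only penalized when they penetrate."""
    in_contact = contact_gt.astype(bool)
    contact_term = np.sum(np.abs(sdf[in_contact]))
    free = ~in_contact
    penetration_term = np.sum(np.abs(sdf[free & (sdf < 0)]))
    return contact_term + penetration_term

sdf = np.array([-0.02, 0.05, 0.0, -0.10])   # synthetic signed distances (m)
contact_gt = np.array([1, 0, 1, 0])         # vertices 0 and 2 labeled in contact
print(pen_e(sdf))            # sums |-0.02| and |-0.10|
print(con_fe(sdf, contact_gt))
```

Here PenE counts vertices 0 and 3 (the penetrating ones), while ConFE penalizes vertex 0 for not lying on the surface despite its contact label and vertex 3 for penetrating without one.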
We solve the transformation from human to camera coordinates by minimizing the joint re-projection error with a PnP solver [39]. METRO\({}^{\dagger}\) is finetuned with the same training protocol as SA-HMR. Since METRO does not consider scenes, we additionally optimize the global pose and scale by minimizing scene-aware losses, including re-projection error, human-scene penetration, contact distance [6], and ordinal depth error [29], following the key ideas of PROX and PHOSA [29]. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline Method & Learning-based & Optimization & Scene-aware & G-MPJPE\(\downarrow\) & G-MPVE\(\downarrow\) & PenE\(\downarrow\) & ConFE\(\downarrow\) & MPJPE\(\downarrow\) & MPVE\(\downarrow\) \\ \hline Dataset GT [6] & & ✓ & ✓ & / & / & 9.8 & 10.8 & / & / \\ \hline SMPLify-X [19] & & ✓ & & 482.0 & 483.7 & 35.7 & 43.4 & 166.9 & 177.6 \\ PROX [5] & & ✓ & ✓ & **390.1** & **397.2** & 15.5 & 24.1 & 164.1 & 175.8 \\ POSA [2] & & ✓ & ✓ & 427.8 & 434.0 & 21.1 & 27.0 & 177.2 & 188.4 \\ PLACE [38] & & ✓ & ✓ & 395.9 & 403.0 & 16.1 & 24.8 & 163.8 & 175.4 \\ METRO [12]\({}^{\dagger}\) + SA-Opt [6, 29] & ✓ & ✓ & ✓ & 563.1 & 561.3 & **7.4** & **14.8** & **102.7** & **112.8** \\ \hline METRO [12] & ✓ & & & 678.6 & 679.4 & 52.2 & 56.9 & 129.6 & 134.5 \\ METRO [12]\({}^{\dagger}\) & ✓ & & & 511.7 & 509.7 & 33.6 & 37.6 & 98.8 & 107.9 \\ **SA-HMR** & ✓ & & ✓ & **264.6** & **272.7** & **14.9** & **19.0** & **93.9** & **103.0** \\ \hline \hline \end{tabular} \end{table} Table 1: **Evaluation on the RICH [6] dataset. METRO\({}^{\dagger}\) indicates that the model is finetuned on the dataset. SA-Opt indicates scene-aware optimization, with contact estimation from BSTRO [6] and loss formulation from PROX [5] and PHOSA [29]. The proposed SA-HMR achieves the overall best results and is significantly faster than the methods that require optimization.** \begin{table} \begin{tabular}{l c c c c c} \hline \hline Method & G-MPJPE\(\downarrow\) & G-MPVE\(\downarrow\) & PenE\(\downarrow\) & MPJPE\(\downarrow\) & MPVE\(\downarrow\) \\ \hline Dataset GT [5] & / & / & 9.6 & / & / \\ \hline SMPLify-X [19] & 216.0 & 222.6 & 49.3 & **100.7** & **112.8** \\ PROX [5] & 172.0 & 178.5 & **10.7** & 101.1 & 114.0 \\ POSA [2] & 172.3 & 180.9 & 16.6 & 108.5 & 119.4 \\ PLACE [38] & **168.1** & **176.7** & 12.3 & 100.8 & 113.7 \\ \hline METRO [12] & 283.2 & 277.7 & 62.4 & 137.0 & 147.2 \\ METRO [12]\({}^{\dagger}\) & 265.6 & 262.7 & 67.5 & 117.1 & 128.5 \\ **SA-HMR** & **150.4** & **160.0** & **26.9** & **111.1** & **122.5** \\ \hline \hline \end{tabular} \end{table} Table 2: **Evaluation on the PROX [5] dataset. The proposed method achieves the best performance in global metrics.** **Results.** For the RICH dataset, Tab. 1 shows that SA-HMR outperforms the other baselines in terms of G-MPJPE and G-MPVE by a significant margin, demonstrating the effectiveness of the proposed pipeline. The joint learning on both image and scene geometry also improves the metrics of local pose and human-scene contact. We use the open-sourced code for SMPLify-X and PROX, and implement POSA and PLACE upon PROX. Optimization-based methods cost approximately 18s for a single fitting, which is much slower than the 0.2s of SA-HMR. We provide qualitative results compared to the baselines in Fig. 6. For the PROX dataset, SA-HMR outperforms all baselines in terms of global accuracy, as illustrated in Tab. 2.
Since the pseudo ground truth for the PROX training set is still of limited quality, and a domain gap exists in the test set, where the subject wears a MoCap suit, our method falls slightly behind in local accuracy and scene penetration. We do not report ConFE, since the ground-truth contact labels are not available. Nevertheless, the clear improvement over the most relevant model METRO\({}^{\dagger}\) demonstrates the effectiveness of the proposed method. We also observe that while considering the scene geometry is critical for estimating the global position and improving physical plausibility, it may not fully resolve the ambiguity of the local pose, where multiple physically plausible solutions may still exist. For example, the RGB-only method SMPLify-X and the scene-aware method PROX perform similarly in MPJPE and MPVE. ### Ablation Study **Root and Contact Module.** Tab. 3 shows that the predicted human root position is improved progressively by the refinement and scene-aware HMR modules, where the initial prediction [33] is improved by 44%/52% on RICH, and by 64%/69% on PROX. The offset representation helps to correct erroneous initial root predictions that are not consistent with the scene surface. For scene contact estimation, the precision/recall is 0.57/0.53 on RICH, and 0.45/0.24 on PROX. We observe that the contacts are difficult to predict, which aligns with the conclusion of the recent work HULC [3]. More visualizations are presented in Fig. 7. **Mesh Recovery Module.** As shown in Tab. 4, the parallel network that uses cross-attention outperforms a variant that fuses the PointNet features of the contact points into the METRO network at an early stage. CErr indicates the error of contact mesh vertices in translation-aligned coordinates. In Tab. 5, we validate the upper bound of the mesh recovery module.
We replace the intermediate estimates of root and contact with ground truth, and find a steady improvement in pose and shape accuracy over the baseline. **Running Time.** SA-HMR runs at 170ms with a peak memory cost of 1852 MB for a \(224\times 224\) image and a scene point cloud of \(2cm\) resolution on a V100 GPU. Specifically, the root and contact module takes 92ms (CNN 50ms, SPVCNN 42ms), the mesh recovery module takes 75ms (CNN 49ms, Transformer 26ms), and the intermediate processing takes 3ms. ## 5 Conclusion This work addressed the challenge of estimating the human mesh from an RGB image with consideration of the scene geometry. Our key idea is to inject 3D scene cues into a monocular human mesh recovery network to recover the absolute human pose and shape in the scene. To this end, our approach first predicts the 3D human location and then uses a sparse 3D CNN to estimate dense human-scene contacts. We developed a transformer to extract features from contact scene points and fed them into the pose estimation network using the cross-attention scheme. Experiments demonstrated that our approach achieves state-of-the-art performance on the RICH and PROX datasets. **Acknowledgement.** This work was supported by NSFC (Grant 62172364) and the Information Technology Center and State Key Lab of CAD&CG, Zhejiang University. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Method & G-MPJPE & G-MPVE & MPJPE & MPVE & CErr\({}_{\downarrow}\) \\ \hline w/o parallel & 304.8 & 312.9 & 98.5 & 108.9 & 10.2 \\ Ours & **264.6** & **272.7** & **93.9** & **103.0** & **8.9** \\ \hline \hline \end{tabular} \end{table} Table 4: **Ablation study of the parallel network** on the RICH dataset. The compared variant fuses features of contact points at the early stage.
\begin{table} \begin{tabular}{l c c c} \hline \hline Dataset & Initial RErr & Refined RErr & Final RErr \\ \hline RICH & 510.8 & 284.7 & **246.5** \\ PROX & 364.2 & 132.3 & **111.8** \\ \hline \hline \end{tabular} \end{table} Table 3: **Ablation study of root estimation.** The human root position errors (RErr) in mm are reported. \begin{table} \begin{tabular}{c c c c c} \hline \hline Root & Contact & MPJPE & MPVE & CErr\({}_{\downarrow}\) \\ \hline / & / & 98.8 & 107.9 & 10.5 \\ Est. & Est. & 93.9 & 103.0 & 8.9 \\ GT & Est. & 89.2 & 98.1 & 7.9 \\ Est. & GT & 90.4 & 99.2 & 8.3 \\ GT & GT & **76.7** & **84.6** & **5.2** \\ \hline \hline \end{tabular} \end{table} Table 5: **Ablation study of the scene-aware mesh recovery module.** We validate the upper bound of the proposed method on the RICH dataset. Figure 6: **Qualitative results on the RICH [6] dataset.** We compare the proposed method to PROX [5], finetuned METRO [12], finetuned METRO with scene-aware optimization, and ground truth. The leftmost column shows the input images. The proposed method recovers the global positions and human-scene contact more accurately because of the 3D learning on human root refinement and dense scene contact labeling tasks. Figure 7: **Qualitative visualization of the estimated root locations and dense scene contacts.** In both examples, the estimated contact points provide accurate position and scene structure for the following step of mesh recovery. The reconstructed human mesh is in good contact with the corresponding scene regions.
2306.12792
BPM: Blended Piecewise Moebius Maps
We propose a novel Moebius interpolator that takes as an input a discrete map between the vertices of two planar triangle meshes, and outputs a smooth map on the input domain. The output map interpolates the discrete map, is continuous between triangles, and has low quasi-conformal distortion when the input map is discrete conformal. Our map leads to considerably smoother texture transfer compared to the alternatives, even on very coarse triangulations. Furthermore, our approach has a closed-form expression, is local, applicable to any discrete map, and leads to smooth results even for extreme deformations. Finally, by working with local intrinsic coordinates, our approach is easily generalizable to discrete maps between a surface triangle mesh and a planar mesh, i.e., a planar parameterization. We compare our method with existing approaches, and demonstrate better texture transfer results, and lower quasi-conformal errors.
Shir Rorberg, Amir Vaxman, Mirela Ben-Chen
2023-06-22T10:47:52Z
http://arxiv.org/abs/2306.12792v1
# BPM: Blended Piecewise Mobius Maps ###### Abstract We propose a novel Mobius interpolator that takes as input a discrete map between the vertices of two planar triangle meshes, and outputs a continuous map on the input domain. The output map interpolates the discrete map, is continuous between triangles, and has low quasi-conformal distortion when the input map is discrete conformal. Our map leads to considerably smoother texture transfer compared to the alternatives, even on very coarse triangulations. Furthermore, our approach has a closed-form expression, is local, applicable to any discrete map, and leads to smooth results even for extreme deformations. Finally, by working with local intrinsic coordinates, our approach is easily generalizable to discrete maps between a surface triangle mesh and a planar mesh, i.e., a planar parameterization. We compare our method with existing approaches, and demonstrate better texture transfer results, and lower quasi-conformal errors. ## 1 Introduction Given two triangle meshes with the same connectivity, a natural vertex-to-vertex map is induced by the shared connectivity. In addition, a natural triangle-to-triangle map is induced by the unique linear map between corresponding triangles. These _piecewise linear_ maps are used almost exclusively in graphics and geometry applications to transfer quantities such as texture between meshes with the same connectivity. While simple, piecewise linear maps lead to visible discontinuities when applied to coarse triangulations that undergo large deformations. Furthermore, even when the vertex-to-vertex map is _discrete conformal_ [25], the corresponding piecewise linear map can induce very large angular distortions (see Fig. 2). We propose an alternative triangle-to-triangle map, denoted _blended piecewise Mobius_ (BPM), which is based on Mobius transformations, and leads to considerably fewer artefacts.
First, when the vertex-to-vertex map is discrete conformal, BPM yields a low quasi-conformal distortion. Furthermore, BPM is equivariant to global Mobius transformations, and is Mobius transformation reproducing. This allows us to define BPM between surfaces and planar meshes, by defining the map _locally_. Finally, BPM is applicable to _any_ vertex-to-vertex map, and leads to smoother texture transfer compared to the alternatives. ### Related work There is a large number of works on computing conformal maps, whether approximated, e.g., [13, 14], under some definition of discrete conformality, e.g. [25], or defined smoothly on the domain e.g. [26, 27]. Our work, however, deals with the _interpolation_ of a given _discrete_ map, to a smooth map with different properties. To the best of our knowledge, there are very few such interpolators. Of course, one can use a smooth conformal [26] or quasi-conformal [26] map, and add constraints for the interpolated vertices. However, such an approach will often lead to over constrained systems, which either do not interpolate the constraints, or create double covers. In terms of _local_ interpolators, it is possible to use a piecewise-linear map; however, it leads to visible artefacts for coarse triangulations. Furthermore, our goal is to design an interpolator that commutes with Mobius transforms, and of course, a linear (or higher order) map will in general not have this property. Finally, it is possible to use a projective interpolation scheme [25, 26, 27]. This approach leads to nice results when applied to discrete conformal maps; however it is _discontinuous_ on general deformations. We note that some methods [13, 14] approached conformal mappings by designing a _discretized_, rather than _discrete_ (cf. [13]) field of rotations and scale factors that were integrated into a map which was conformal up to integrability. 
Specifically, [13] constructed a representation of this field in volumes that by itself construes an interpolation of Mobius maps. However, these works did not explicitly present a continuous and interpolating blend for triangle meshes as we do. ### Contributions Our main contributions are: * BPM: A vertex-interpolating, non-linear triangle-to-triangle map, which is smooth across triangles. * BPM is equivariant to Mobius transformations, and has low quasi-conformal distortion when the vertex-to-vertex map is discrete conformal. * BPM provides a smooth texture pullback, even for very coarse triangulations, and for _any_ vertex-to-vertex map. ## 2 Background We describe our method first as a plane-to-plane map in global planar coordinates, and show how it is easily generalizable to curved surfaces with local intrinsic coordinates in Section 4. ### Discrete and continuous maps Consider a triangle mesh \(\mathcal{M}=\{\mathcal{V},\mathcal{E},\mathcal{T}\}\), embedded in the complex plane \(\mathbb{C}\) without overlaps. We parameterize the embedding by the vertex coordinates, \(Z=\{z_{v}\in\mathbb{C}\,|\,v\in\mathcal{V}\}\). A map \(F:Z\to W\), which transforms the vertex positions by \(F(z_{v})=w_{v}\), is denoted _discrete_. We are mainly interested in computing an _interpolation_ of a discrete map \(F\) into a _continuous_ map \(f:\overline{Z}\to\mathbb{C}\), where \(\overline{Z}\) is the union of all the triangles defined by \(\mathcal{T}\) with vertex coordinates in \(Z\). Such a map is _interpolating_ when \(\forall v\in\mathcal{V}\), \(f(z_{v})=F(z_{v})\). We define the _interpolator_ as the operator \(o:(\overline{Z},F)\to\mathbb{C}\), such that: \[f(z)=o(z,F).\] For instance, barycentric interpolation is an interpolator that generates piecewise-linear functions. 
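As a concrete illustration of a discrete map and the piecewise-linear (barycentric) interpolator mentioned above, the following Python sketch uses complex vertex coordinates; the function names are our own and are not code from the paper:

```python
import numpy as np

def barycentric_coords(z, zi, zj, zk):
    """Barycentric coordinates of the complex point z in triangle (zi, zj, zk)."""
    # Solve z = a*zi + b*zj + c*zk subject to a + b + c = 1 (real unknowns).
    A = np.array([[zi.real, zj.real, zk.real],
                  [zi.imag, zj.imag, zk.imag],
                  [1.0, 1.0, 1.0]])
    rhs = np.array([z.real, z.imag, 1.0])
    return np.linalg.solve(A, rhs)

def piecewise_linear_map(z, tri_src, tri_dst):
    """Interpolate the discrete map tri_src -> tri_dst linearly at z."""
    a, b, c = barycentric_coords(z, *tri_src)
    wi, wj, wk = tri_dst
    return a * wi + b * wj + c * wk
```

By construction this interpolator reproduces the discrete map at the vertices, which is exactly the interpolation property required of \(o(z,F)\).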
### Holomorphic maps A differentiable map \(f:\mathbb{R}^{2}\to\mathbb{R}^{2}\), \(f=(u(x,y),v(x,y))\) with a Jacobian of the form \(\nabla f=\left(\begin{smallmatrix}a&b\\ -b&a\end{smallmatrix}\right)\), is _holomorphic_, when considered as a function on the complex plane, \(f\colon\mathbb{C}\to\mathbb{C}\), where \(f(x+iy)=u(x,y)+iv(x,y)\). Alternatively, this can be written as \(\frac{\partial f}{\partial\bar{z}}=0\), indicating a complex function that is independent of \(\bar{z}\). Being conformal, holomorphic maps preserve the angle between any two intersecting curves, and are therefore detail preserving and useful for texture mapping. A simple example of a holomorphic map \(f\colon\mathbb{C}\to\mathbb{C}\) is the complex affine map \(f(z)=az+b\), for some \(a,b\in\mathbb{C}\), which is a global similarity transformation (i.e., scale, rotation and translation). Such a map is uniquely defined by the transformation of two points. Perhaps the quintessential holomorphic map is the _Mobius transformation_ (defined on the extended complex plane \(\hat{\mathbb{C}}=\mathbb{C}\cup\infty\)), which has the form \(m(z)=\frac{az+b}{cz+d}\), for some \(a,b,c,d\in\mathbb{C}\) such that \(ad-bc\neq 0\). The parameters \(a,b,c,d\) are unique up to a multiplicative factor \(\alpha\in\mathbb{C}\). We therefore additionally assume the normalization \(ad-bc=1\), which leads to uniqueness of the parameters up to sign. By working with complex homogeneous coordinates, a Mobius transformation \(m(z)\) can also be represented as a matrix \(M=\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\in\mathbb{C}^{2\times 2}\) with determinant \(1\). Then, we have \(M[z;1]=[az+b;cz+d]\equiv[m(z);1]\). Figure 2: Piecewise-Linear map of a CETM vertex-to-vertex map [25]. The input vertex to vertex map (a) and the pullback of the texture (b). Note the large angular distortion. 
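The homogeneous-coordinate representation can be sketched as follows (an illustrative Python fragment, not code from the paper). It shows that applying \(M\) via \(m(z)=(az+b)/(cz+d)\) composes by matrix multiplication, and that dividing by \(\sqrt{\det M}\) yields a determinant-1 representative:

```python
import numpy as np

def mobius_apply(M, z):
    """Apply the Mobius map represented by the 2x2 complex matrix M = [[a, b], [c, d]]."""
    a, b, c, d = M[0, 0], M[0, 1], M[1, 0], M[1, 1]
    return (a * z + b) / (c * z + d)

def normalize(M):
    """Scale M so that det(M) = 1 (unique up to sign); assumes complex dtype
    so the square root of the determinant is well-defined."""
    return M / np.sqrt(np.linalg.det(M))
```

Composition as matrix product means `mobius_apply(M1 @ M2, z)` agrees with `mobius_apply(M1, mobius_apply(M2, z))`.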
The matrix representation of the composition of two Mobius maps \(m_{1}(m_{2}(z))\) is given by the multiplication of their matrix representations, i.e., by \(M_{1}M_{2}\). Similarly, the matrix representation of \(m^{-1}\) is \(M^{-1}\). Mobius maps include similarities and inversions in spheres, and are defined uniquely by the transformation of _three points_. Since both \(M\) and \(-M\) represent the same transformation \(m\), we use \(\equiv\) to denote matrix equality up to sign, i.e. \(M\equiv-M\). The choice of the sign is only required when taking a unique root or logarithm of a Mobius matrix, as elaborated in Sec. 3.2. Barycentric blends of complex affine maps have been used successfully for generating interpolators for _polygonal domains_[21], by blending the complex affine maps defined by the deformation of the polygon _edges_. We generalize this idea, and propose to use _blends of Mobius maps_ for generating an interpolator for a discrete map between two planar _triangle meshes_, by blending the Mobius maps defined by the deformation of the _triangles_. ### Piecewise-Compatible Mobius Maps We parameterize any discrete map \(F:Z\to W\) with a set of Mobius transformations \(\{m_{t}\,|\,t=(i,j,k)\in\mathcal{T}\}\) defined uniquely per triangle by the transformation of the vertices: \(m_{t}(z_{i})=w_{i},m_{t}(z_{j})=w_{j},m_{t}(z_{k})=w_{k}\). We denote by \(\{M_{t}\in\mathbb{C}^{2\times 2}\,|\,t\in\mathcal{T}\}\) the corresponding matrices, with components \(a_{t},b_{t},c_{t},d_{t}\in\mathbb{C}\). **Compatibility condition.** A set of transformations \(\{M_{t}\}\) is _compatible_ with a map \(F:Z\to W\) if the transformations of neighboring triangles agree on the map of their _common vertices_. Specifically, given two adjacent triangles \(t_{1}=(i,j,k),t_{2}=(j,i,l)\in\mathcal{T}\) with a shared edge \(e=(i,j)\), we have that \(w_{i}=M_{t_{1}}(z_{i})=M_{t_{2}}(z_{i})\) and similarly for \(z_{j}\). 
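The per-triangle matrix \(M_{t}\) determined by the three vertex correspondences can be computed by routing through the canonical triple \((0,1,\infty)\); this is a standard construction, sketched here in Python with our own helper names (not code from the paper):

```python
import numpy as np

def to_zero_one_inf(p1, p2, p3):
    """Matrix of the Mobius map sending (p1, p2, p3) to (0, 1, inf)."""
    return np.array([[p2 - p3, -p1 * (p2 - p3)],
                     [p2 - p1, -p3 * (p2 - p1)]])

def mobius_apply(M, z):
    """Apply the Mobius map represented by the 2x2 complex matrix M."""
    a, b, c, d = M.ravel()
    return (a * z + b) / (c * z + d)

def mobius_from_triangles(z_tri, w_tri):
    """Normalized matrix M_t of the unique Mobius map with m(z_i) = w_i,
    computed as S_w^{-1} S_z where S maps a triple to (0, 1, inf)."""
    Sz = to_zero_one_inf(*z_tri)
    Sw = to_zero_one_inf(*w_tri)
    M = np.linalg.inv(Sw) @ Sz
    return M / np.sqrt(np.linalg.det(M))   # det(M) = 1, up to sign
```

Running this per triangle yields the set \(\{M_{t}\}\); compatibility then holds automatically, since adjacent triangles share the images of their two common vertices.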
Given a triangle mesh \(\mathcal{M}\), a set of Mobius transformations \(\{M_{t}\}\) that fulfills the compatibility condition defines a _Piecewise-Compatible Mobius (PCM) Map_ [15]. It is advantageous to consider general deformations as PCMs (as opposed to, e.g., piecewise-affine maps) due to their natural connection to conformal and discrete conformal deformations. For example, PCM maps are closed under global (single) Mobius transformations. Namely, given a matrix representation \(M_{g}\) of a global Mobius transformation \(m_{g}\), we have that the set of transformations \(\{M_{t}M_{g}\}\) and \(\{M_{g}M_{t}\}\) are also PCM maps. In addition, discrete conformality (CETM) [20] has an elegant description in the PCM representation in terms of the _corner variables_ \(\{X_{t,i}\in\mathbb{C}\,|\,t\in\mathcal{T},v_{i}\in\mathcal{V},i\in t\}\), where \(X_{t,i}=(c_{t}z_{i}+d_{t})^{-1}\). Specifically, a PCM map is a discrete conformal equivalence if and only if \(|X_{t,i}|\) does not depend on \(t\). Then, \(|X_{t,i}|=e^{u_{i}/2}\), where \(u:\mathcal{V}\to\mathbb{R}\) is the conformal factor. Unfortunately, unlike the piecewise-affine interpolation, the trivial interpolation of a discrete PCM map, where the Mobius transformation \(M_{t}\) is applied to every point \(z\in t\), is not continuous between triangles. A simple way to see this is that a Mobius map is uniquely determined by 3 points. Therefore, the transformation of _all the points on the edge_ shared by two triangles is compatible by both triangles if and only if they are transformed by a single Mobius transformation, which means that the entire mesh is. Our challenge is then to find an _interpolator_ of PCM maps. ## 3 Blended Piecewise Mobius Maps ### Blended Maps Desiderata Given an input discrete map \(F:Z\to W\), denote by \(M(F)=\{M_{t}\,|\,t\in\mathcal{T}\}\) the PCM map (i.e., the Mobius matrices) induced by \(F\). 
We define a _map_ interpolator \(o(\overline{Z},F)\) using a continuous _Mobius matrix_ interpolator \(O:(\overline{Z},M(F))\to\mathbb{C}^{2\times 2}\), namely a Mobius transformation \(O(z,M(F))\) with spatially varying blended coefficients. We then define \(O\) and \(o\) such that: \[[o(z,F);1]\equiv O(z,M(F))[z;1]. \tag{1}\] Our requirements from the PCM interpolator \(O(z,M)\) of \(M\) are: 1. **Locality.** \(O(z,M)\) should depend only on the local neighborhood of \(z\). 2. **Identity reproduction.** \(O(z,\{M_{t}\equiv Id\})\equiv Id\). 3. **Continuity.** The resulting map \(o(z,F)\) should be at least \(C^{0}\)-continuous between neighboring triangles. 4. **Mobius equivariance.** The interpolator should commute with Mobius transformations. That is, for any global Mobius transformation \(M_{g}\) we have: \[O(z,\{M_{g}M_{t}\})\equiv M_{g}O(z,M),\qquad O(z,\{M_{t}M_{g}\})\equiv O(z,M)M_{g}. \tag{2}\] Namely, interpolating the discrete map and performing a global Mobius transformation can be done in any order for the same result. 5. **Mobius reproduction.** If all vertices are transformed by _the same_ Mobius transformation \(M_{g}\) then the interpolator \(O\) reproduces that Mobius transformation, i.e., \(O(z,M_{g})\equiv M_{g}\). This is a corollary of Properties (2) and (4). We note that Mobius equivariance is essential for the consistency of interpolating CETM maps; the set of CETM maps are closed under Mobius transformations; specifically, any global Mobius transformation induces a CETM map. Properties (4) and (5) then guarantee that this property carries over to our interpolator. We prove in Sec. 3.2.3 that our requirements are met by the interpolator that we define in Sec. 3.2. We further list objectives for the interpolator that we empirically witnessed in all our examples: 1. **CETM interpolation.** If the interpolator is applied to a CETM map \(M\), then the result should be a close approximation to a continuous conformal map. 2. 
**QC Errors are bounded.** The quasiconformal error of the interpolated \(M(z)\) for any \(z\in t\in\mathcal{T}\) is bounded above by the (discrete) quasiconformal error of \(t\) in \(M\). We list the above as objectives since we do not have explicit proofs that they are always true; nevertheless we provide ample empirical evidence in Sec. 5. ### Mobius Interpolator #### 3.2.1 The Mobius ratio Let \(M_{t},M_{u}\in\mathbb{C}^{2\times 2}\) be two normalized Mobius matrices representing transformations on two faces adjacent at edge \(e_{ij}\) (see Fig. 3). The _Mobius ratio_ \(\delta_{tu}\) is given by: \[\delta_{tu}=M_{t}M_{u}^{-1}. \tag{3}\] Intuitively, the Mobius ratio describes the difference between applying \(M_{u}\) and applying \(M_{t}\), in the sense that \(M_{t}=\delta_{tu}M_{u}\). It is easy to check that \(\delta_{tu}^{-1}\equiv\delta_{ut}\), and \(\delta_{tu}\equiv Id\) if and only if \(M_{t}\equiv M_{u}\). Furthermore, due to the PCM compatibility between \(M_{t}\) and \(M_{u}\), we have that \(F(z_{i})\) and \(F(z_{j})\) are _fixed points_ of the transformation \(\delta_{tu}\). We additionally define the _log Mobius ratio_, given by: \[\ell_{tu}=\log\left(\operatorname{Sign}(\operatorname{Tr}(\Re(\delta_{tu})))\cdot\delta_{tu}\right), \tag{4}\] where \(\Re()\) is the real part of a complex number, \(\operatorname{Tr}()\) is the trace operator, and \(\operatorname{Sign}()\) is the sign of a real number (outputting \(\pm 1\)). Thus, \(\ell_{tu}\) is the log of either \(\delta_{tu}\) or \(-\delta_{tu}\), whichever is closer to the identity in the Frobenius norm (see Appendix B). The square root of the Mobius ratio is correspondingly given by: \(\sqrt{\delta_{tu}}=\exp(\frac{1}{2}\ell_{tu})\). **Boundary edges.** If \(e_{ij}\) is a boundary edge, then we set its ratio to Id. That encodes the choice that the transformation "beyond" the edge is the same Mobius transformation of \(t\), which naturally adheres to our requirements. 
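A numerical sketch of the Mobius ratio, its sign-corrected log (Eq. (4)), and its square root; this is our own illustration (not code from the paper), and it assumes the ratio matrix is diagonalizable, using an eigen-decomposition in place of a general matrix logarithm:

```python
import numpy as np

def _eig_fun(M, fun):
    """Apply a scalar function to a diagonalizable 2x2 complex matrix via its
    eigen-decomposition (parabolic, non-diagonalizable ratios are not handled
    by this sketch)."""
    lam, V = np.linalg.eig(M)
    return V @ np.diag(fun(lam)) @ np.linalg.inv(V)

def log_mobius_ratio(M_t, M_u):
    """Log of the sign-corrected ratio delta_tu = M_t M_u^{-1}; the sign
    makes the log argument the matrix closer to the identity."""
    delta = M_t @ np.linalg.inv(M_u)
    s = 1.0 if np.trace(delta.real) >= 0 else -1.0
    return _eig_fun(s * delta, np.log)

def sqrt_ratio(M_t, M_u):
    """Square root of the Mobius ratio: exp(0.5 * log ratio)."""
    return _eig_fun(0.5 * log_mobius_ratio(M_t, M_u), np.exp)
```

One can check numerically that \(\sqrt{\delta_{ut}}M_{t}\) and \(\sqrt{\delta_{tu}}M_{u}\) agree (up to sign), which is the identity behind edge continuity.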
#### 3.2.2 Ratio interpolator Consider a face \(t=ijk\in\mathcal{T}\) and neighboring triangles \(u,v,w\in\mathcal{T}\) adjacent to the edges \(e_{ij},e_{jk},e_{ki}\in\mathcal{E}\), respectively (Fig. 3). Each face has a corresponding Mobius matrix \(M_{t},M_{u},M_{v},M_{w}\), and each edge has a corresponding log Mobius ratio of its neighboring triangles: \(\ell_{ut}\), \(\ell_{vt}\) and \(\ell_{wt}\). We define the _log ratio interpolator_ as: \[\ell_{t}(z,M)=\frac{B_{ij}(z)\ell_{ut}+B_{jk}(z)\ell_{vt}+B_{ki}(z)\ell_{wt}}{B_{ij}(z)+B_{jk}(z)+B_{ki}(z)}, \tag{5}\] for some _edge barycentric coordinates_ \(0\leq B_{e}(z)\leq 1\), with \(e\in\mathcal{E}_{t}=\{e_{ij},e_{jk},e_{ki}\}\). We require that for \(e,\tilde{e}\in\mathcal{E}_{t}\), and a non-vertex point \(z\in\tilde{e},z\notin\{z_{i},z_{j},z_{k}\}\), we have that \(B_{e}(z)/\sum_{\tilde{e}\in\mathcal{E}_{t}}B_{\tilde{e}}(z)=1\) if \(e=\tilde{e}\) and \(0\) otherwise. In addition, we require that the sum of the coordinates does not vanish. Specifically, we take \(B_{e}(z)=d(z,e)^{-1}\), where \(d(z,e)\) is the distance of \(z\) to the line the edge \(e\) lies on. See Appendix A for the implementation details. Finally, our _Mobius interpolator_ is given by: \[O(z\in t,M)=\exp\left(\frac{1}{2}\ell_{t}(z,M)\right)M_{t}=\sqrt{\delta_{t}(z,M)}M_{t}. \tag{6}\] **Discussion.** Our interpolator is similar in spirit to the rotation interpolant of Alexa [1], and is based on the general approach of interpolation in Lie groups [14]. By linearly interpolating the _log_ Mobius ratio, we guarantee that the blended matrix \(O(z,M)\) is normalized (i.e., has determinant 1) if the input matrices \(M\) are normalized. That is because the zero-trace property is invariant under a linear blend. #### 3.2.3 Properties Our interpolator is local (Req. (1)) since it is defined using a triangle and its 3 neighbors, and it is easy to check that it reproduces the identity (Req. (2)). 
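The blend of Eqs. (5)-(6) can be sketched as follows (illustrative Python with our own names; it assumes the arguments of the matrix exponential are diagonalizable, and \(d(z,e)\) is the point-to-line distance):

```python
import numpy as np

def expm2(M):
    """Matrix exponential of a diagonalizable 2x2 matrix via eigen-decomposition."""
    lam, V = np.linalg.eig(M)
    return V @ np.diag(np.exp(lam)) @ np.linalg.inv(V)

def line_dist(z, zi, zj):
    """Distance from z to the (infinite) line through zi and zj."""
    e = zj - zi
    return abs(((z - zi) * np.conj(e)).imag) / abs(e)

def blended_mobius(z, tri, M_t, edge_logs):
    """Blend the three edge log-ratios with weights B_e(z) = 1/d(z, e) and
    return O(z) = exp(0.5 * l_t(z)) M_t.  edge_logs = (log-ratio on e_ij,
    on e_jk, on e_ki); z must be strictly inside the triangle (on an edge
    the weight 1/d blows up, which is how the blend concentrates there)."""
    zi, zj, zk = tri
    B = np.array([1.0 / line_dist(z, zi, zj),
                  1.0 / line_dist(z, zj, zk),
                  1.0 / line_dist(z, zk, zi)])
    l = sum(b * L for b, L in zip(B, edge_logs)) / B.sum()
    return expm2(0.5 * l) @ M_t
```

With all edge log-ratios zero, the blend returns \(M_{t}\) everywhere, which is the identity-reproduction behavior of the ratios; near an edge the blend approaches the square root of that edge's ratio times \(M_{t}\).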
**Continuity on edges.** Without loss of generality, when \(z\in e_{ij},z\neq z_{i},z_{j}\), we have that \(\ell_{t}(z)=\ell_{ut}\) and \(\ell_{u}(z)=\ell_{tu}\), and thus our interpolation reduces to: \[O(z,M)=\sqrt{\delta_{ut}}\cdot M_{t}\equiv\sqrt{\delta_{tu}}\cdot M_{u},\quad \forall z\in e_{ij},z\neq z_{i},z_{j} \tag{7}\] Hence, the Mobius interpolator on the edge \(e_{ij}\) only depends on the two faces \(t,u\) adjacent to the edge, and it is symmetric in \(t,u\) (up to sign) leading to the same map \(O(z,M)\). Note that Eq. (7) is similar to SLERP interpolation for quaternions [10]. **Continuity on vertices.** Note that the barycentric coordinates are not continuous on a vertex (e.g. \(z_{i}\)), hence the ratio interpolant is also not continuous at the vertex. However, we have that \(F(z_{i})\) is a _fixed point_ of the Mobius ratios, and thus we interpolate the original PCM map at \(z_{i}\). This leads to continuity on vertices across different triangles, as needed by Req. (3). **Mobius equivariance.** We first note that the ratios \(\delta\) are invariant to right composition \(M_{t|u|v|w}M_{g}\) with a global Mobius transformation \(M_{g}\); thus, the interpolant \(O\) is trivially equivariant to right composition. For left composition \(M_{g}M_{t|u|v|w}\) (first PCM then global), we have a conjugated ratio \(M_{g}(M_{t}M_{u}^{-1})M_{g}^{-1}\). Since trace is invariant to conjugation, and since conjugation commutes with matrix logarithm and exponent, the entire interpolant becomes: \[O(z\in t,\{M_{g}M_{t}\})=M_{g}\sqrt{\delta_{t}(z,M)}M_{g}^{-1}\cdot M_{g}M_{t}=M_{g}O(z\in t,M). \tag{8}\] Thus, we also fulfill Req. (4), and with (2) we fulfill Req. (5). **Local injectivity.** Mobius transformations are locally injective in a region that does not contain poles. 
Specifically, if a single Mobius transformation \(m_{t}\) of a triangle \(t\) does not flip or degenerate the triangle edges, we have that \(m_{t}\) has a positive Jacobian anywhere inside. Nevertheless, for the _blended_ Mobius transformation we do not have such a guarantee. In practice, our maps are well behaved for the blending weights that we have chosen, however extreme cases may exist (see Figure 10). ## 4 Curved surfaces Our method is also applicable for mapping from curved surfaces to the plane. The discrete mapping is computed _locally_ for each triangle, by flattening it and its neighboring three triangles isometrically to the plane to generate the source triangles \(Z\). The continuous mapping is then computed by blending inside the triangle, using the same scheme as in the two-dimensional case, and pulling the resulting map back to the surface. Figure 3: Our notation. More formally, consider a triangle mesh \(\mathcal{M}=\{\mathcal{V},\mathcal{E},\mathcal{T}\}\), embedded in \(\mathbb{R}^{3}\). Let \(X=\{x_{v}\in\mathbb{R}^{3}\,|\,v\in\mathcal{V}\}\) be its vertex coordinates. The discrete map \(F:X\to W\) transforms the vertex positions by \(F(x_{v})=w_{v}\in\mathbb{C}\). We are interested in computing a continuous interpolating map \(f:\overline{X}\to\mathbb{C}\), where \(\overline{X}\) is the union of all the triangles defined by \(\mathcal{T}\) with vertex coordinates in \(\mathbb{R}^{3}\), and \(\forall v\in\mathcal{V}\), \(f(x_{v})=F(x_{v})\). We define for each \(t\in\mathcal{T}\), a _local_ discrete map \(\tilde{F}_{t}:\tilde{Z}_{t}\to W\) where \(\tilde{Z}_{t}\) is an isometric embedding in 2D of the face \(t\) and its neighboring faces \(u,v,w\). The corresponding Mobius matrices \(M(\tilde{F}_{t})=\{M_{t},M_{u},M_{v},M_{w}\}\) are defined as before, as is the matrix interpolator \(O(z,M(\tilde{F}_{t}))\), and correspondingly the interpolator \(o(z,\tilde{F}_{t})\). 
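The local isometric flattening of a surface triangle into the complex plane can be sketched as below (our own helper, not code from the paper; the three neighboring faces would be unfolded about their shared edges in the same fashion):

```python
import numpy as np

def flatten_triangle(p0, p1, p2):
    """Isometrically embed a 3D triangle into the complex plane:
    p0 -> 0, p1 -> |p1 - p0| on the real axis, p2 -> the upper half plane."""
    e1 = p1 - p0
    x = np.linalg.norm(e1)
    u = e1 / x                            # unit direction of the first edge
    d = p2 - p0
    a = np.dot(d, u)                      # component of p2 along the edge
    h = np.linalg.norm(d - a * u)         # height of p2 above the edge
    return 0j, complex(x, 0.0), complex(a, h)
```

Because the embedding is rigid within the triangle's plane, all three pairwise edge lengths are preserved, which is exactly the isometry needed for \(\tilde{Z}_{t}\).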
Let \(\tilde{z}_{t}\in\mathbb{C}\) be the planar point that corresponds to some point \(x\in t\) on the mesh under the local isometric embedding. The interpolator is defined \(\forall t\in\mathcal{T}\) as follows: \[f_{t}(x)=f_{t}(\tilde{z}_{t})=o(\tilde{z}_{t},\tilde{F}_{t}). \tag{9}\] ### Continuity We need to show that this definition is well-posed, since it is defined for each triangle separately. We get this since (1) Our interpolator is Mobius equivariant, (2) there exists a Mobius map between isometric embeddings, and (3) the map of points on the edge depends only on the Mobius matrices of its neighboring triangles. Formally, Let \(t,u\in\mathcal{T}\), be two triangles that share an edge \(e_{ij}\), and let \(\tilde{Z}_{t},\tilde{Z}_{u}\) be the corresponding (independent) isometric embeddings of each triangle and its neighboring faces. See Fig. 4 for our notation. Since the two embeddings map the triangles \(t,u\) isometrically to the plane, there exists a Mobius transformation \(m_{g}\) such that \(\forall x\in t\cup u\), its corresponding planar points \(\tilde{z}_{t}\in\tilde{Z}_{t}\) and \(\tilde{z}_{u}\in\tilde{Z}_{u}\) satisfy \(\tilde{z}_{u}=m_{g}(\tilde{z}_{t})\). We denote by \(M_{t}(\tilde{F}_{t}),M_{t}(\tilde{F}_{u})\) the Mobius matrices corresponding to \(t\) induced by \(\tilde{F}_{t},\tilde{F}_{u}\), respectively, and similarly for \(M_{u}(\tilde{F}_{t}),M_{u}(\tilde{F}_{u})\). By construction, we have that: \[M_{t}(\tilde{F}_{t})\equiv M_{t}(\tilde{F}_{u})M_{g},\qquad M_{u}(\tilde{F}_{ t})\equiv M_{u}(\tilde{F}_{u})M_{g}, \tag{10}\] where \(M_{g}\) is the Mobius matrix that corresponds to \(m_{g}\). Let \(x\in e_{ij}\) be a point on the mutual edge of \(t\) and \(u\), with the corresponding planar points \(\tilde{z}_{t},\tilde{z}_{u}\). The interpolator of a point on the edge depends _only_ on the Mobius matrices of its neighboring triangles, and is given by Equation (7). 
We have: \[\delta_{ut}(\tilde{F}_{t})=M_{u}(\tilde{F}_{t})M_{t}^{-1}(\tilde{F}_{t})=M_{u}(\tilde{F}_{u})M_{g}M_{g}^{-1}M_{t}^{-1}(\tilde{F}_{u})=\delta_{ut}(\tilde{F}_{u}). \tag{11}\] Thus, the matrix interpolator is given by \[O(\tilde{z}_{t},M(\tilde{F}_{t}))=\sqrt{\delta_{ut}(\tilde{F}_{t})}M_{t}(\tilde{F}_{t})=\sqrt{\delta_{ut}(\tilde{F}_{u})}M_{t}(\tilde{F}_{u})M_{g}=O(\tilde{z}_{u},M(\tilde{F}_{u}))M_{g}. \tag{12}\] Finally, we have: \[[o(\tilde{z}_{t},\tilde{F}_{t});1]=O(\tilde{z}_{t},M(\tilde{F}_{t}))[\tilde{z}_{t};1]=O(\tilde{z}_{u},M(\tilde{F}_{u}))M_{g}M_{g}^{-1}[\tilde{z}_{u};1]=[o(\tilde{z}_{u},\tilde{F}_{u});1]. \tag{13}\] Hence, we have that the map interpolation is consistent, as required. Note that this consistency generalizes to _any_ locally defined interpolator, as long as it is equivariant to maps between the local flattened patches. We present results in Fig. 1 and in Sec. 5. Note that our map is at least \(C^{0}\) continuous, but not \(C^{1}\) in general. We provide the pseudo code for our algorithm in Appendix C. ## 5 Experimental Results We use a variety of examples to demonstrate the effectiveness of our interpolators. For each example, we show the source and target meshes, and visualize the map by (1) pulling back a texture from the target mesh to the source mesh, as well as (2) pushing forward a texture from the source mesh to the target mesh. Note that on the target mesh, the edges are _curved_. While our interpolator is smooth and in closed-form, computing the resulting Quasi-conformal (QC) distortion introduces a complicated expression which varies non-linearly within the triangle. To facilitate its visualization, we simply approximate the resulting QC error by refining the source mesh using 4 levels of subdivision, applying the computed (continuous, non-linear) interpolator to the refined vertices, and computing the QC distortion of the linear map between the subdivided triangles. 
For a single subdivided triangle, the QC distortion is given by the ratio of the singular values of the linear map [20]. For the input discrete deformations we use different deformation/parameterization techniques. We use Conformal Equivalence of Triangle Meshes (CETM) [20] and Boundary-First Flattening (BFF) [18] for generating discrete conformal input maps. For pure planar deformations, we use As-Mobius-as-possible (AMAP) [14] for discrete maps with small QC and CETM distortion. Figure 4: Notation for 3D Framework. Figure 5: CETM as input. (left) The input CETM deformation. (right) The QC errors of the input discrete deformation and the BPM mapping. Note that the error of BPM is considerably lower than the input errors. We use Cauchy coordinates (CC) [20] to generate discrete deformations sampled from continuous conformal maps. We additionally use As-Killing-As-Possible shape deformation (AKVF) [1] to generate inputs that are far from conformal. For additional mappings of surfaces to the plane we use models from the recent parameterization dataset [21] in Figs. 16, 18. The parameterization method used is mentioned in each example. For comparison, we consider piecewise linear (PL) interpolation, and circumcircle preserving projective interpolation (PROJ) [21, 1]. ### Properties We first validate the two objectives mentioned in Sec. 3.1. **CETM as input.** When the discrete input map is a conformal equivalence, i.e., fulfills the CETM conditions, our interpolator leads to a low QC distortion, even when the QC distortion of the input map is quite large. We demonstrate this for two input deformations in Fig. 5. **Bounded QC Errors.** In all cases the QC error of our map is lower than the QC error of the input map. When the input deformation is close to conformal (Figs. 11, 12, 15), our method gives the best results. However, even for deformations far from conformal (Fig. 13), our mapping is smooth with small QC errors. 
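The per-triangle QC distortion (ratio of singular values of the linear map) can be computed as in the following sketch (illustrative Python, our own names):

```python
import numpy as np

def qc_distortion(src_tri, dst_tri):
    """Quasi-conformal distortion (largest over smallest singular value)
    of the linear map taking the 2D triangle src_tri onto dst_tri."""
    def edges(T):
        (x0, y0), (x1, y1), (x2, y2) = T
        return np.array([[x1 - x0, x2 - x0],
                         [y1 - y0, y2 - y0]])
    J = edges(dst_tri) @ np.linalg.inv(edges(src_tri))  # Jacobian of the map
    s = np.linalg.svd(J, compute_uv=False)              # descending order
    return s[0] / s[1]
```

A similarity (rotation plus uniform scale) gives distortion 1, while an anisotropic stretch by factor \(k\) gives distortion \(k\); applying this to the subdivided triangles yields the visualized QC error.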
### Robustness We demonstrate the robustness of our approach to different meshes. **Non-uniform triangulations.** We use a mesh whose left and right halves are meshed differently. We deform it using AKVF, and show the interpolation results in Fig. 6. Note that the texture deformed using our map looks similar on the left and right side of the mesh, thus our method is not sensitive to meshing. **Non simply connected.** Our method is applicable to meshes of any topology. We demonstrate it on a few non-simply connected meshes in Fig. 7. **Different resolutions.** We remesh a model to 4 different resolutions, and apply the same deformation by sampling the continuous Cauchy Coordinates, using the same source and target cages. We show the result in Fig. 8, and compare with piecewise-linear interpolation. Note that, unlike the PL map, our results are virtually indistinguishable across resolutions, despite the very different mesh resolutions. **Large deformations.** We assume that the discrete map is slowly varying between triangles, therefore \(\delta_{tu}\) is close to \(Id\) or \(-Id\), and the chosen logarithm branch will be the same for the 3 edges of the triangle. However, even if this is not the case, our interpolator is smooth, but may be more oscillatory. In this experiment, we demonstrate that our map is resilient to large changes in the deformation of neighboring triangles. In Fig. 9 we show a discrete map with very large deformations, where our map is still smooth. **Local injectivity.** As mentioned in Sec. 3.2, our interpolator is not formally guaranteed to be locally injective. In fact, as we demonstrate in Fig. 10, this might be the case even if the deformed triangles are not flipped. This happens when the ratios \(\delta\) are very different between the edges of the same triangle, which eventually results from a big variation in the Mobius transformation between neighboring triangles. 
Since parameterization algorithms try to avoid such variations with regularization, we do not expect this to occur often in practice. ### Comparisons **Interpolators on triangles.** We compare our approach to PL and projective interpolation, for inputs created with a variety of deformation methods (AMAP, CETM, BFF, AKVF, CC). The projective interpolation requires the computation of scaling factors per vertex, which we compute individually per triangle. Note that for meshes that are not CETM, the scaling factors do not agree between different triangles sharing vertices, and therefore the interpolation can be discontinuous. We show in Figs. 11, 12, 15, 13 the resulting texture maps, as well as the QC distortion for each example. Figure 6: Non-uniform triangulations. (a) The input triangulation, (b) the pullback and (c) push-forward of the texture shown in a black frame with our mapping. Figure 7: Non simply connected meshes. (a) The pull-back, and (b) the push-forward of the texture (shown in a black frame) using our mapping for two non simply connected meshes. Note that for discrete conformal maps (CETM), and for maps that are close to conformal (BFF, PCM), both the projective interpolation and our approach achieve a good result, though our QC error is lower. Furthermore, our method is applicable to _any_ discrete map, whereas projective interpolation is discontinuous for non-CETM maps. This is clearly visible for meshes deformed using AKVF, which can induce significant angle distortion (see Fig. 13). Compared to PL interpolation, our map is smoother even for very coarse triangulations (see also Fig. 8). **Continuous interpolators.** Instead of interpolating each triangle separately, or by blending, we attempt to use a continuous interpolator with constraints. Namely, we use a method for which the map is given on the full source triangulation domain (and not only on the vertices), and constrain the vertices to the locations prescribed by the discrete input map. 
We use Cauchy Coordinates as a smooth interpolator, as it is exactly holomorphic. Fig. 14 shows the result of the comparison. On a coarse mesh, if we use a small number of vertices for the cage, the constraints on the vertices cannot be achieved. If, on the other hand, we use a large number of cage vertices, the map generates poles and overlaps. Furthermore, deformation with Cauchy Coordinates is only feasible for a mesh with a small number of vertices, as it is a global approach that requires solving a linear system with a dense matrix. Hence, our local closed-form approach is a better alternative. ### Application to texture mapping Using the intrinsic formulation presented in Sec. 4 we interpolate the texture coordinates of 3D meshes, leading to considerably smoother textures compared to the alternatives (PL and projective). We demonstrate this in Figs. 16, 17, 18, where the inputs are generated using CETM, BFF, and designed by artists, respectively. For CETM, the results are comparable to the projective interpolation, yet our approach achieves lower QC errors, and somewhat smoother outputs. For BFF and artists' generated parameterizations, the projective interpolation is discontinuous, and our results are considerably smoother than both the linear and projective approaches. Figure 8: Multiple resolutions. Pull-back of our mapping, from left to right: increased mesh density. Note that our mapping of the coarse triangulation (bottom left) is comparable to the linear map on the much denser triangulation (top right). Figure 9: Even when the input map is far from conformal (here computed using AKVF), our interpolator leads to a smooth map. Figure 10: Example of a non-locally-injective transformation. (a): original triangles with part of \(e_{ij}\) in black and a parallel line inside \(t\) in blue. (b): an extreme deformation with matrix \(M\) of the bottom triangle \(u\) (while the rest are stationary) leads to edge ratios \(\delta_{ut}=M,\delta_{vt}=\delta_{wt}=Id\). However, the result is still locally injective. By the barycentric blending, any line originally parallel to \(e_{ij}\) in \(t\) is transformed by matrices \(M^{d}\), with varying \(d<\frac{1}{2}\), and thus closer to \(Id\) than the transformation \(M^{\frac{1}{2}}\) of \(e_{ij}\). In this case, \(e_{ij}\) would be more curved inwards than the other parallel lines within. Thus, in (c), when \(M\) is made even more extreme, the target black circular arc from edge \(e_{ij}\) and the less-curved blue curve transformed by \(M^{d}\) intersect, causing a loss of injectivity. ## 6 Conclusion and Future Work We presented a blending scheme (BPM) of Mobius transformations that interpolates a discrete map between triangulations to a continuous map on the input domain. Our scheme leads to small quasi-conformal errors when the input discrete map is close to conformal, and is applicable to _any_ discrete input map. We additionally showed that our blending scheme can be done _intrinsically_, thus allowing non-linear interpolation of the texture coordinates of a 3D mesh. In the future we plan to explore other applications for our interpolation scheme, such as surface to surface, spherical parameterization, etc. In addition, we plan to investigate _time interpolation_ in this setting, as well as generalizing our scheme to blends where the input map is _approximated_ instead of interpolated. Finally, we aim to derive theoretical bounds for the QC error of our blends, and classify the conditions under which the map is provably bijective. ## 7 Acknowledgments Mirela Ben-Chen acknowledges the support of the Israel Science Foundation (grant No. 1073/21).
2303.17034
Overcoming Challenges to Continuous Integration in HPC
Continuous integration (CI) has become a ubiquitous practice in modern software development, with major code hosting services offering free automation on popular platforms. CI offers major benefits, as it enables detecting bugs in code prior to committing changes. While high-performance computing (HPC) research relies heavily on software, HPC machines are not considered "common" platforms. This presents several challenges that hinder the adoption of CI in HPC environments, making it difficult to maintain bug-free HPC projects, and resulting in adverse effects on the research community. In this article, we explore the challenges that impede HPC CI, such as hardware diversity, security, isolation, administrative policies, and non-standard authentication, environments, and job submission mechanisms. We propose several solutions that could enhance the quality of HPC software and the experience of developers. Implementing these solutions would require significant changes at HPC centers, but if these changes are made, it would ultimately enable faster and better science.
Todd Gamblin, Daniel S. Katz
2023-03-29T21:35:52Z
http://arxiv.org/abs/2303.17034v1
# Overcoming Challenges to Continuous Integration in HPC ###### Abstract Continuous integration (CI) has become a ubiquitous practice in modern software development, with major code hosting services offering free automation on popular platforms. CI offers major benefits, as it enables detecting bugs in code prior to committing changes. While high-performance computing (HPC) research relies heavily on software, HPC machines are not considered "common" platforms. This presents several challenges that hinder the adoption of CI in HPC environments, making it difficult to maintain bug-free HPC projects, and resulting in adverse effects on the research community. In this article, we explore the challenges that impede HPC CI, such as hardware diversity, security, isolation, administrative policies, and non-standard authentication, environments, and job submission mechanisms. We propose several solutions that could enhance the quality of HPC software and the experience of developers. Implementing these solutions would require significant changes at HPC centers, but if these changes are made, it would ultimately enable faster and better science. High performance computing is a key enabler for developing scientific understanding and knowledge. "High performance" typically refers to computing that requires large-scale resources, e.g., those on the Top500 list of the world's fastest machines [1]. HPC sites range from universities with smaller clusters of commodity machines to large, GPU-accelerated supercomputers at national computing facilities. HPC systems need application software to be useful. Since around the 1940s, HPC applications have spanned the computational science domains: simulations and modeling in climate, physics, chemistry, engineering, etc. More recently, the field grew to include applications in data analysis and machine learning.
All of these applications rely on other software, from operating systems to libraries (e.g., for communications and math), and still more software is needed in the development process: compilation, testing, packaging, and distribution. Historically, staff at HPC sites developed their own applications, with the vendor of the HPC system providing the operating system, compilers, and math libraries. Export controls and other data sensitivity concerns limit access to a large number of HPC applications. Because of these and more general security concerns, HPC sites only grant access to a set of known account holders. However, a large fraction of today's software is developed on social coding platforms like GitHub and GitLab, which allow a community to perform collaborative planning, development, maintenance, and testing. These sites not only provide infrastructure for working on code, but offer a large number of free cloud CPU cycles for continuous integration (CI). Under the CI model, tests run when developers suggest changes, and the tests ensure that code is correct _before_ it is accepted. Developers can thus have high confidence that the code will work correctly. While continuous integration is standard practice for developers who can use common cloud environments, HPC environments introduce challenges to this practice. Technical, security, and political issues all make it extremely difficult to integrate externally developed open-source software with internal applications and machines. Even though many HPC software projects are developed in the open, they must run on closed HPC resources, and it is increasingly difficult to ensure that the vast majority of modern _open-source_ applications will run reliably on HPC systems. ## Modern Software is Complex Modern software applications are not monolithic; they integrate packages written by many authors on different project teams, and they rely heavily on publicly available open-source software.
Figure 1 shows the software packages used by ARES, a proprietary multi-physics application used on HPC machines at Lawrence Livermore National Laboratory (LLNL). These packages include core scientific libraries and utility libraries for logging, math, I/O, programming models, performance portability, memory management, and other purposes. A number of build-time dependencies like compilers and testing frameworks are also included. Even though 30 core components of ARES are LLNL-proprietary, the other 85 packages are open source. Of these, 12 are publicly developed by LLNL on GitHub, and the remaining 71 packages are open-source packages developed by others. This situation is not unusual; _most_ modern software leverages and depends on open-source components. Reimplementing all the capabilities provided by the modern open-source-software ecosystem would be impossible or at least impractically expensive for a single organization. Figure 1: Dependencies of the ARES multi-physics code: 31 are internal proprietary packages, 13 are open-source packages developed at LLNL, and together these rely on 72 external open-source software packages. Software reuse comes with a cost: integration complexity. In a project developed by a single team, developers commit code to a common repository, maintaining project consistency. In large integrated systems, however, different teams may work on individual components, and developers are responsible for ensuring that all versions of the components are compatible. Unfortunately, most open-source developers lack access to HPC resources, and even if they have it, many lack the time to manually test their packages in HPC-like environments. HPC developers who leverage open-source software must be prepared to perform extensive porting and integration testing to ensure that the open components work seamlessly on closed systems.
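To make integration complexity concrete, here is a minimal sketch (with hypothetical package names, not ARES's actual dependency data) of computing the transitive closure of a dependency graph, i.e., the full set of packages whose versions must all be kept mutually compatible:

```python
# Hypothetical dependency data: an internal app pulls in internal and
# open-source packages, which in turn pull in further external packages.
deps = {
    "app": ["internal-lib", "mpi", "hdf5"],
    "internal-lib": ["blas", "logging-util"],
    "hdf5": ["zlib"],
    "mpi": [],
    "blas": [],
    "logging-util": [],
    "zlib": [],
}

def closure(pkg, deps):
    """Transitive dependency closure via depth-first search."""
    seen = set()
    stack = [pkg]
    while stack:
        p = stack.pop()
        for d in deps.get(p, []):
            if d not in seen:
                seen.add(d)
                stack.append(d)
    return seen

print(sorted(closure("app", deps)))
# ['blas', 'hdf5', 'internal-lib', 'logging-util', 'mpi', 'zlib']
```

Even this toy application must integrate twice as many transitive dependencies as it declares directly; for a real code like ARES the closure runs to over a hundred packages.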
## HPC Systems Are Unique HPC systems are typically designed and built to meet specific local requirements, balancing expected workload characteristics, hardware options (e.g., number and type of CPUs, GPUs, and other accelerators; internal networking; storage), packaging, cooling, external networking, energy usage, cost, etc. Key components of the local software stack are often bespoke for each system. For example, proprietary MPI implementations like Cray MPI can only run on Cray systems. In this case, the Cray MPI license disallows inclusion in containers or other software distributions that can run in the cloud. The same is true for math libraries and compilers in the Cray environment. Moreover, filesystem organization is not standardized. Paths to tools, libraries, home, and temporary directories are system-dependent. Finally, authorization and access may be set by site policies that are often developed locally. ## ROADBLOCKS TO CI IN HPC Open-source developers are now accustomed to widely available compute cycles for continuous integration. Major code hosting sites (GitHub, GitLab, Bitbucket) as well as third-party paid services that integrate with these sites offer free CI services. Developers can attach workflows to their repositories that run tests concurrently across Linux, macOS, and Windows, and if they need to test in custom environments they can bring their _own_ containerized test environments. In HPC settings, however, a number of hurdles prevent adoption of automated code building and testing. ### HPC environments are hard to replicate Due to ubiquitous cloud computing, CI is the norm outside of HPC. It has never been easier to set up automated testing in widely-used software environments. But, as discussed above, HPC environments are, by definition, special. 
For example, it is seldom possible to reliably test _optimized_ CPU builds in cloud CI, as the fleet of test systems used by cloud virtual machines (VMs) is often heterogeneous and one cannot request that a test be run on a specific microarchitecture. So far, there is no (free) cloud-based CI for GPUs. Testing scientific workflow systems is even harder. Each workflow system is essentially a distributed application, and testing a workflow system requires access to the job submission interface. This access can include authentication and authorization from remote systems, local environments and configurations, batch scheduler parameters, etc. Because resource managers are used to run the CI system itself, it is difficult to vary and test system software and resource managers _within_ the CI system. In this case, we need to see how the system is set up in practice, and not completely isolate from it. We also need interfaces and abstractions that allow us to test that the system works across different schedulers and configurations. Without the ability to replicate the software environment of popular HPC systems, it is very difficult to ensure that open-source software will continue working on them. ### Security challenges HPC machines are large, shared computing systems like clouds, and one obvious way to replicate the HPC environment in cloud CI would be to offer cycles on a local HPC system to run CI jobs for sites like GitHub or GitLab. However, most HPC sites disallow users from running jobs on behalf of external systems. Consider the open-source CI model, where an unknown user (or at least a user who is unknown to the HPC center) submits a pull request (PR) to a project. The PR triggers jobs that build and test the changed code in cloud environments. Developers and maintainers often want to instead trigger jobs on a set of HPC platforms. 
While the cloud allows users to provision isolated virtual machines and even isolated virtual networks for CI jobs, most HPC systems lack this level of isolation. HPC sites implement security at the facility boundary, allowing only trusted users in. Once on the system, all users can access the shared filesystem and can connect to compute nodes over the cluster network. A privilege escalation in this environment could give a user access to other users' files, which may be export-controlled or otherwise sensitive. HPC security teams therefore disallow setting up CI to run arbitrary code, as it opens the site to such attacks. Running code from _protected_ branches, e.g., the maintainer-approved main branch of a popular open-source project, may be allowed, but doing this loses the benefits of testing changes _before_ they are integrated into the project. When the PR is still open, contributors are motivated to fix issues that come out in testing because they want their changes to be merged. If fixes are made after the fact, it can be very difficult to keep fast-moving projects working for HPC. ### Administrative and political challenges A number of administrative and political reasons can hinder progress on solutions to the above problems. First, HPC sites do not typically prioritize build or test cycles, because this is perceived to reduce cycles available for production science runs, which is typically their raison d'être. CI jobs tend to be small and numerous, as opposed to the more traditional larger, longer-running HPC jobs, and queuing policies that support this type of work are not well understood, especially for heavily utilized systems with mostly larger jobs. When asked how many cycles are needed for testing, users often reply with large numbers and the need to test at scale. Facilities are reluctant to provide any one project with a large testing allocation.
The tradeoff between using cycles for testing and saving cycles on production code that may fail is not easy to quantify, but if public CI systems are any benchmark, a large fraction of the benefit of CI can be realized through short-running builds and smoke tests. Cloud CI services impose strict limits on job runtimes and resource usage--typically just a few hours and one or two CPUs per job. All but the largest codes can be built and tested for correctness within this footprint, at least at a coarse granularity. Providing separate queues with similar policies on HPC systems would require only a small fraction of overall system CPU hours, and while this approach would not detect bugs that only appear at massive scale, it would still prevent many production cycles from being wasted. Because HPC sites are very focused on production jobs and production job performance, very little interest has emerged in the HPC community for compute or network virtualization. This is unfortunate, because these technologies would provide the type of resource flexibility needed to run isolated, secure CI jobs. Infiniband, the most popular HPC network, has very limited support for traffic isolation (8 or so isolated channels--not enough for thousands of users), and most HPC systems still run applications on bare metal instead of in VMs. Meanwhile, clouds have developed very lightweight, secure VM solutions (e.g., Amazon Web Service's Nitro hypervisor) with almost no virtualization overhead. Finally, HPC center leadership has limited understanding of modern development workflows. It is difficult to grasp the extent to which open-source software has spread throughout the scientific software ecosystem, the rate at which modern software is developed, and the interdependence of packages. 
The idea that key science applications rely on externally developed software, that helping external software projects test on HPC machines could be _beneficial_ to internal projects, and that many _internal_ projects are actually hosted and developed externally still needs socializing in order to broaden understanding of the needs of modern software developers. ## Potential Solutions Building and testing software on HPC systems has always been hard, but some solutions to the challenges presented above have recently begun to emerge. ### Wisdom of the crowds Systems like Spack [2] and EasyBuild [3] have made building on HPC systems easier by crowd-sourcing institutional build knowledge. These systems include curated repositories of build scripts that aggregate and preserve institutional knowledge of different machine environments and make HPC software easier to build. While the projects themselves require extensive CI, changes are only checked automatically with cloud CI, not on a diverse set of HPC resources. Without immediate, automated testing of contributions, builds still frequently break and tests still frequently fail in these environments. ### Jacamar and secure CI For _internal_, trusted projects at HPC centers, projects like Jacamar CI [4] solve some of the security problems. They allow users to run CI jobs _as themselves_ on HPC machines, preserving the OS-level security boundaries that HPC centers require users to adhere to. With Jacamar, one user cannot access and steal another user's data through the CI system. While internal projects _can_ pull in trusted versions of external software (e.g., recent releases), integration testing is still difficult. Internal teams cannot easily test changes from PRs, because the changes in a PR cannot be attributed to any trusted HPC center user. Without the ability to test PRs, incompatibilities or bugs can be introduced through dependencies.
Either the site must attribute every PR to a known user, which is often not possible, or it must isolate the untrusted code in its own environment. ### Separate resources for CI HPC sites are considering setting up separate resources for open-source CI. One of the authors has been involved in such an effort at LLNL, to set up an isolated cluster _without_ sensitive data, where public CI jobs can run with little risk to the main HPC resources. The challenge with this approach is that it duplicates effort: HPC system administrators are a scarce resource, and maintaining an additional machine in a different network zone requires redundant work. It is also difficult to ensure that the separate machine stays up to date with the main systems. ### Vendor and cloud support As customers have come to rely on an increasing volume of open-source software, HPC vendors have shown more interest in ensuring that this software works well on the platforms they offer. At the same time, more cloud vendors are producing their own HPC offerings, and users can easily set up clusters in the cloud to run HPC jobs. These clusters can even use a wide range of resource managers, like SLURM and PBS. Such environments are not free, of course. HPC vendors _may_ begin to provide free, public cloud CI resources that open-source developers could use to test their software. For large projects like Spack, cycles can be donated in one place, but scaling the approach to support the many smaller, independent HPC development projects that need CI is a much larger effort that requires more cooperation between major HPC vendors and cloud platforms. ### Containerized environments In lieu of hardware resources, HPC vendors could also begin to provide containerized versions of their software stack for building and testing in CI. Some vendors have begun testing this approach, for example with Cray's Containerized Programming Environment (CPE).
Unfortunately, the container is currently only licensed to run on HPC resources, so its versatility for build and test use cases is limited. It cannot be run in the cloud, where adequate isolation and cycles are available. For commodity clusters, it may be easier to provide containerized reproductions of the production HPC environment, as there are not as many licensing issues involved. Since most HPC sites are still administered very manually, HPC administrators will need to lean into a culture of automation. If the site can provision the production environment automatically, it can reliably provide a container with _exactly_ the same software that the main HPC site runs. The idea of building containers based on production HPC environments is not new; one of this paper's authors proposed it with his colleagues to NSF's XD solicitation [5] in 2008. The proposal ultimately became part of NSF's XSEDE environment, which operated between 2011 and 2022, but the containerization component did not make it into the final, funded project. More recently, NASA Ames [6] has successfully provisioned cloud-bursting capabilities allowing users to build, test, and run codes in small allocations in cloud environments before running them unmodified on the production Pleiades HPC cluster. They leverage portable container workflows and abstract differences between cloud and onsite resources through the MPI interface. This pioneering effort to create "reproducible" HPC infrastructure still has security limitations. Even in NASA's environment, the cloud resources are provisioned in the same logical network as HPC onsite resources. They cannot run untrusted code without risk to onsite data. ### More virtualization and IaaS The solution that likely makes the most sense for HPC CI is to move towards a less trusting, more isolated security model that would allow HPC systems to function more like clouds.
Flexible, isolated allocations for either internal or external CI jobs would eliminate the duplication of effort required by many of the potential approaches mentioned above. Isolated allocations also enable Infrastructure as a Service (IaaS) within the HPC site, which would allow sites to mock entire distributed resource manager environments and services. With this capability, developers could test entire workflow systems in much more realistic scenarios. We will need to work with vendors to develop and provide HPC environments with network and OS support for isolation. In general, the only integrators currently providing these capabilities widely are clouds, and there are _not_ good on-premises solutions. HPC sites will need to either start working with clouds more closely, or push HPC vendors to provide comparable capabilities for HPC centers. It will likely be a long time before truly "converged" infrastructure becomes widespread. ## Conclusion Continuous integration is indispensable for most software development and maintenance today, including for scientific software. However, CI is difficult to implement in HPC environments for many reasons. Limitations of on-premises infrastructure preclude many of the security isolation techniques used in modern cloud environments. HPC security policies must respect these limitations by restricting the automation needed for responsive CI. Current solutions require duplicated effort, either in provisioning dedicated resources for CI, or by duplicating deployment effort with containerized environments. The most promising solution is to move toward more automated, secure, flexible infrastructure, which will be neither quick nor easy to implement with the restrictions of today's HPC environment. ## Acknowledgment This article was inspired in part by a panel on Build, Integration and Testing for Sustainable Scientific Computing Software, chaired by Keita Teranishi and Roscoe A.
Bartlett, at the 2022 SIAM Conference on Parallel Processing for Scientific Computing. The authors thank the chairs for inviting us to the panel and creating the opportunity for discussion. Daniel S. Katz thanks Ben Clifford and other members of the Parsl and funcX teams for some of the ideas here. Part of this work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344. Lawrence Livermore National Security, LLC (LLNL-JRNL-846623).
2305.05966
Graph Neural Networks and 3-Dimensional Topology
We test the efficiency of applying Geometric Deep Learning to the problems in low-dimensional topology in a certain simple setting. Specifically, we consider the class of 3-manifolds described by plumbing graphs and use Graph Neural Networks (GNN) for the problem of deciding whether a pair of graphs give homeomorphic 3-manifolds. We use supervised learning to train a GNN that provides the answer to such a question with high accuracy. Moreover, we consider reinforcement learning by a GNN to find a sequence of Neumann moves that relates the pair of graphs if the answer is positive. The setting can be understood as a toy model of the problem of deciding whether a pair of Kirby diagrams give diffeomorphic 3- or 4-manifolds.
Pavel Putrov, Song Jin Ri
2023-05-10T08:18:10Z
http://arxiv.org/abs/2305.05966v2
# Graph Neural Networks and 3-Dimensional Topology ###### Abstract We test the efficiency of applying Geometric Deep Learning to the problems in low-dimensional topology in a certain simple setting. Specifically, we consider the class of 3-manifolds described by plumbing graphs and use Graph Neural Networks (GNN) for the problem of deciding whether a pair of graphs give homeomorphic 3-manifolds. We use supervised learning to train a GNN that provides the answer to such a question with high accuracy. Moreover, we consider reinforcement learning by a GNN to find a sequence of Neumann moves that relates the pair of graphs if the answer is positive. The setting can be understood as a toy model of the problem of deciding whether a pair of Kirby diagrams give diffeomorphic 3- or 4-manifolds. ## 1 Introduction and Summary Geometric Deep Learning (GDL) [1] is an area of Machine Learning (ML) that has been under very active development during the last few years. It combines various approaches to ML problems involving data that has some underlying geometric structure. The neural networks used in GDL are designed to naturally take into account the symmetries and the locality of the data. It has been successfully applied to problems involving computer vision, molecule properties, social or citation networks, particle physics, etc. (see [2] for a survey). It is natural to apply GDL techniques also to mathematical problems in topology. In general, ML has already been used in various problems in low-dimensional topology, knot theory in particular [3, 4, 5, 6, 7, 8, 9, 10, 11, 12], as well as various physics-related problems in geometry (for a recent survey see [13]). However, the neural-network models used were mostly not specific to GDL. The goal of this paper is to test the efficiency of GDL in a very simple setting in low-dimensional topology. Namely, we consider a special class of 3-manifolds known as plumbed, or graph, 3-manifolds.
Those are 3-manifolds that are specified by a choice of a graph with particular features assigned to edges and vertices. Such 3-manifolds are therefore very well suited for analysis by Graph Neural Networks (GNN). GNNs are among the most important and most widely used types of neural networks in GDL. In general, GNNs are designed to process data represented by graphs. In this paper, we use GNNs for the following problems involving plumbed 3-manifolds. Different (meaning not isomorphic) graphs can correspond to equivalent, i.e. homeomorphic, 3-manifolds. Note that in 3 dimensions (or less) any topological manifold has a unique smooth structure, so the notions of homeomorphism and diffeomorphism are equivalent. It is known that a pair of graphs that produce two equivalent 3-manifolds must be related by a sequence of certain _moves_, commonly known as Neumann moves [14]. These moves establish a certain equivalence relation on the graphs (in addition to the standard graph isomorphism). First, we consider a neural network whose input is a pair of plumbing graphs and whose output is a decision on whether the graphs correspond to homeomorphic 3-manifolds or not, i.e. whether the two graphs are equivalent in the sense described above. Supervised Learning (SL) is then used to train the network. The training dataset consists of randomly generated graph pairs, for which it is known whether the corresponding 3-manifolds are homeomorphic or not. The trained neural network, up until the very last layer, can be understood to produce an approximate topological invariant of plumbed 3-manifolds. Second, we consider a neural network for which the input is a plumbing graph and the output is a sequence of Neumann moves that "simplifies" the graph according to a certain criterion. The aim is to build a neural network such that if it is applied to equivalent graphs it simplifies them to the same graph.
If the result is successful, this can be used to provide an explicit demonstration that a given pair of graphs is equivalent. For both cases, SL and RL, we consider different architectures of the neural networks and compare their performance. Note that in principle there is an algorithm for determining whether two plumbing graphs give homeomorphic 3-manifolds or not, which was already presented in [14]. It involves bringing both graphs to a certain normal form (which is, in a sense, similar to the "simplification" process in the RL setup mentioned above) and then checking that the normal forms are the same (i.e. isomorphic graphs). However, just checking isomorphism of graphs is already not known to be achievable in polynomial time. The plumbing graphs can be considered as a particular class of the more general Kirby diagrams, which can be used to describe arbitrary closed oriented 3-manifolds, with Neumann moves being generalized to the so-called Kirby moves. Even in this case, there exists in principle an algorithm for checking whether two Kirby diagrams produce homeomorphic 3-manifolds or not [15]. There is also a version of Kirby diagrams and moves for smooth 4-manifolds. In this case, however, an algorithm for the recognition of diffeomorphic pairs does not exist. In 4 dimensions the notions of diffeomorphism and homeomorphism are not the same; in particular, there exist pairs of manifolds that are homeomorphic but not diffeomorphic. While the classification of 4-manifolds up to homeomorphisms (with certain assumptions on the fundamental group) is relatively straightforward, classification up to diffeomorphisms is an important open question. The setup with plumbed 3-manifolds that we consider in this paper can be understood as a toy model for the problem of recognition of diffeomorphic pairs of general 3- and 4-manifolds, for which one can try to apply neural networks with similar architecture in the future. The rest of the paper is organized as follows.
In Section 2 we review basic preliminaries about plumbed 3-manifolds and Graph Neural Networks needed for the analysis that follows. In Section 3 we consider various GNN architectures for supervised learning of whether a pair of plumbing graphs provide homeomorphic 3-manifolds or not. In Section 4 we consider reinforcement learning of the process of simplification of a plumbing graph representing a fixed (up to a homeomorphism) 3-manifold. Finally, we conclude with Section 5, where we discuss the obtained results and mention possible further directions. Appendix A contains some basic algorithms that are specific to the problems considered in this paper. ## 2 Preliminaries ### Plumbed 3-manifolds In this section we review basic facts about plumbed 3-manifolds, also known as graph 3-manifolds. For a more detailed exposition we refer to the original paper [14]. First, let us describe how to build a 3-manifold from a _plumbing graph_, or simply a _plumbing_. For convenience, we restrict ourselves to the case when the graph is a tree, i.e., the graph is connected and acyclic. We will also consider the case of genus zero plumbings only. In this setting, apart from the graph itself, the only additional information that one needs to specify is the set of integer _weights_ \(w(v)\in\mathbb{Z}\) labeling vertices \(v\in V\) (\(V\) denotes the set of all vertices of the graph), also referred to as _framings_ in the context of topology. A typical plumbing graph looks like the one shown in Figure 1. The weights \(w(v)\), together with the standard graph data, can be naturally encoded in an \(|V|\times|V|\) matrix \(a\) with integral elements \(a_{ij}\) as follows. Outside of the diagonal this matrix coincides with the standard adjacency matrix of the graph (i.e. \(a_{ij}=1\) if \(i\neq j\in V\) are connected by an edge, and \(a_{ij}=0\) otherwise). The diagonal elements are given by the weights: \(a_{ii}=w(i)\).
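As a concrete illustration of this encoding, the following sketch (ours, not from the paper) builds the matrix \(a\) for a small plumbing tree and checks that \(|\det a|\), which, when nonzero, gives the order of the first homology of the plumbed 3-manifold and is therefore unchanged for homeomorphic plumbings, agrees before and after a blow-up. The blow-up convention assumed here, attaching a new leaf of weight \(\varepsilon=\pm 1\) and shifting the neighboring weight by \(\varepsilon\), is one simple form of the moves; the full set of Neumann moves is described in [14].

```python
import numpy as np

def plumbing_matrix(weights, edges):
    """Build the |V| x |V| matrix a: adjacency matrix off the
    diagonal, vertex weights w(v) on the diagonal."""
    n = len(weights)
    a = np.zeros((n, n), dtype=int)
    for i, j in edges:
        a[i, j] = a[j, i] = 1
    for i, w in enumerate(weights):
        a[i, i] = w
    return a

# A two-vertex chain with weights -2 and -3.
a = plumbing_matrix([-2, -3], [(0, 1)])

# Assumed leaf blow-up at vertex 0 with eps = +1: new leaf of
# weight +1 attached to vertex 0, whose weight gains +1.
a_blown = plumbing_matrix([-1, -3, 1], [(0, 1), (0, 2)])

# |det a| presents the order of H_1 and should match.
print(abs(round(np.linalg.det(a))))        # 5
print(abs(round(np.linalg.det(a_blown))))  # 5
```

Matching determinants are of course only a necessary condition for two plumbings to give homeomorphic 3-manifolds, which is what makes the recognition problem studied here nontrivial.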
Figure 1: An example of a plumbing graph. One can build a 3-manifold corresponding to such a plumbing graph as follows. First, consider a graph containing a single vertex with weight \(p\in\mathbb{Z}\) and no edges. To such a one-vertex graph we associate the lens space 3-manifold \(L(|p|,\pm 1)\), where the sign \(\pm 1\) coincides with the sign of \(p\). It can be described as a quotient of the standard unit 3-sphere \(S^{3}=\left\{(z_{1},z_{2})\in\mathbb{C}^{2}\,:\,\left|z_{1}\right|^{2}+\left|z_{2}\right|^{2}=1\right\}\subset\mathbb{C}^{2}\cong\mathbb{R}^{4}\) with respect to the action of the cyclic group \(\mathbb{Z}_{\left|p\right|}\) of order \(\left|p\right|\), generated by the transformation \(\left(z_{1},z_{2}\right)\rightarrow\left(z_{1}e^{\frac{2\pi i}{\left|p\right|}},z_{2}e^{\frac{2\pi i}{\left|p\right|}}\right)\). This 3-manifold can be equivalently understood as a circle fibration over an \(S^{2}\) base with Euler number \(p\). More explicitly, it can be constructed as follows. Let us start with two copies of \(D^{2}\times S^{1}\) (where \(D^{2}\) denotes the 2-dimensional disk), which can be viewed as trivial circle fibrations over \(D^{2}\). We then glue the two \(D^{2}\)'s along the common boundary \(\partial D^{2}\cong S^{1}\) into \(S^{2}\) (so that each \(D^{2}\) can be understood as a hemisphere), with the \(S^{1}\) fibers along the two boundaries being glued with a relative rotation specified by a certain map \(f:\partial D^{2}\cong S^{1}\to SO(2)\cong S^{1}\). The homotopy class of such a map is completely determined by its "winding number", and the homeomorphism class of the resulting closed 3-manifold only depends on this number. To obtain the lens space \(L(p,1)\) one takes the winding number to be \(p\). Next, consider a vertex with weight \(p\) being a part of a general tree plumbing (as, for example, the one shown in Figure 1).
For each edge coming out of the vertex, we remove a single \(S^{1}\) fiber (over some generic point in the \(S^{2}\) base) together with its tubular neighborhood. The neighborhood can be chosen to be the restriction of the fibration to a small disk in the \(S^{2}\) base that contains the chosen point. Out of the original lens space \(L(p,1)\), such an operation produces a 3-manifold that has a boundary component \(\partial D^{2}\times S^{1}\cong S^{1}\times S^{1}=T^{2}\) for each edge coming out of the vertex. Having an edge between a pair of vertices in the graph then corresponds to gluing the two \(T^{2}\) boundary components in such a way that the two circles, the fiber \(S^{1}\) and the boundary \(S^{1}\) of the small disk in the base, are swapped (with the orientation of one of the circles reversed, so that the resulting 3-manifold is orientable). Performing such operations for all the vertices and edges of the graph, one obtains a 3-manifold without boundary components. This is the 3-manifold that one associates to the plumbing graph. Equivalently, to a plumbing graph one can associate a Dehn surgery diagram, in which each vertex \(v\in V\) corresponds to an unknot framed by \(w(v)\), and the presence of an edge between two vertices signifies that the corresponding unknots form a Hopf link. Applying the prescription described above to different graphs may result in homeomorphic 3-manifolds. In [14] it was proved that this happens if and only if the graphs can be related by a sequence of local graph transformations, or _moves_, now commonly known as _Neumann moves_, shown in Figure 2.

### Graph neural networks

Here we provide a brief review of some GNNs for later use. There are 3 main computational modules used to build a typical GNN architecture: propagation modules, sampling modules and pooling modules.
Since in this paper we will only use convolution operators, which are among the most frequently used propagation modules, we focus on those. For a broad review of the various modules, we refer the reader to [16]. Convolution operators are motivated by convolutional neural networks (CNNs), which have achieved notable progress in various areas. In general, the role of convolution operators can be described as \[\mathbf{x}_{i}^{(k)}=\gamma^{(k)}\left(\mathbf{x}_{i}^{(k-1)},\bigoplus_{j\in \mathcal{N}(i)}\phi^{(k)}\left(\mathbf{x}_{i}^{(k-1)},\mathbf{x}_{j}^{(k-1)}, \mathbf{e}_{j,i}\right)\right),\] where \(\mathbf{x}_{i}^{(k)}\in\mathbb{R}^{F}\) denotes the features of node \(i\) in the \(k\)-th layer and \(\mathbf{e}_{j,i}\in\mathbb{R}^{D}\) denotes the features of the edge connecting node \(j\) to node \(i\). We also note that \(\bigoplus\) over a neighborhood \(\mathcal{N}(i)\) of node \(i\) is a differentiable, permutation-invariant function such as sum, mean or max, and \(\gamma\) and \(\phi\) denote differentiable functions such as multi-layer perceptrons (MLPs).

Figure 2: There are 3 different types of Neumann moves, which preserve the resulting 3-manifold up to homeomorphism. Taking signs into account, one can count 8 Neumann moves. Among them, there are 5 blow-up moves (by which a new vertex is created) and 3 blow-down moves (by which one or two vertices are annihilated).

Among the various convolution operators existing in the literature, the following will appear in the next sections.

* Graph Embedding Network (GEN) [17] GEN is designed for deep graph similarity learning and embeds each graph into a vector, called a graph embedding.
More explicitly, it first computes initial node embeddings \(\mathbf{x}_{i}^{(1)}\) from the node features \(\mathbf{x}_{i}^{(0)}\) through an MLP \[\mathbf{x}_{i}^{(1)}=\text{MLP}\left(\mathbf{x}_{i}^{(0)}\right),\] then it executes a single message-propagation step to compute node embeddings \(\mathbf{x}_{i}^{(2)}\) from the information in the local neighbourhood \(\mathcal{N}(i)\)1 Footnote 1: It is also possible to apply a finite number of propagation steps iteratively, but we will only consider a single propagation here. \[\mathbf{x}_{i}^{(2)}=\text{MLP}\left(\mathbf{x}_{i}^{(1)},\sum_{j\in\mathcal{N }(i)}\text{MLP}\left(\mathbf{x}_{i}^{(1)},\mathbf{x}_{j}^{(1)}\right)\right).\] Once the node embeddings \(\mathbf{x}_{i}^{(2)}\) are computed, an aggregator computes a graph embedding by aggregating the set of node embeddings. In Section 3.1, we describe the details of the aggregator, which we will apply not only to GEN but also to the other models, GCN and GAT. * Graph Convolutional Network (GCN) GCN was introduced in [18] as a variant of convolutional neural networks for graphs. It operates according to the formula \[\mathbf{z}_{i}=\boldsymbol{\Theta}^{\intercal}\sum_{j\in\mathcal{N}(i)\cup \{i\}}\frac{1}{\sqrt{\hat{d}_{j}\hat{d}_{i}}}\mathbf{x}_{j},\] where \(\mathbf{z}_{i}\) is the output for the \(i\)-th node, \(\boldsymbol{\Theta}\) is a matrix of filter parameters, and \(\hat{d}_{i}\) is the degree of the \(i\)-th node. * Graph Attention Network (GAT) GAT was proposed in [19]; it incorporates the attention mechanism into the message propagation.
The mechanism of GAT can be formulated as \[\mathbf{x}_{i}^{\prime}=\alpha_{i,i}\boldsymbol{\Theta}\mathbf{x}_{i}+\sum_{ j\in\mathcal{N}(i)}\alpha_{i,j}\boldsymbol{\Theta}\mathbf{x}_{j}.\] Here the attention coefficients \(\alpha\) are given by \[\alpha_{i,j}=\frac{\exp\left(\text{LeakyReLU}(\mathbf{a}^{\intercal}[ \boldsymbol{\Theta}\mathbf{x}_{i}\|\boldsymbol{\Theta}\mathbf{x}_{j}])\right)} {\sum_{k\in\mathcal{N}(i)\cup\{i\}}\exp\left(\text{LeakyReLU}(\mathbf{a}^{ \intercal}[\boldsymbol{\Theta}\mathbf{x}_{i}\|\boldsymbol{\Theta}\mathbf{x}_{k }])\right)},\] where the attention mechanism \(\mathbf{a}\) is implemented by a single-layer feedforward neural network, and \(\|\) is the concatenation operator. All the neural networks, including GNNs, are implemented based on PyTorch [20] and PyTorch Geometric [21]. 2 Footnote 2: Python code is available on GitHub.

## 3 Supervised Learning

In this section we use supervised learning to decide whether or not two plumbing graphs represent the same plumbed 3-manifold. We build 3 models, GEN+GAT, GCN+GCN and GCN+GAT, and examine their performance on the task.3

### Models

All the models are designed to have two convolution operators, one aggregation layer and one classification layer. The models are named by concatenating the names of their two convolution operators. For a fair comparison, we use a common aggregation layer and classification layer, and all the layers have the same input and output dimensions. Since we have already reviewed the convolution operators in Section 2.2, let us now elaborate on the common aggregation layer and classification layer. The aggregator computes a graph embedding by aggregating all of the node embeddings passed from the convolution operators.
We use the aggregation layer proposed in [22], which is formulated by \[\mathbf{h}_{G}=\text{MLP}_{G}\left(\sum_{i\in V}\text{Softmax}(\text{MLP}_{ \text{gate}}(\mathbf{x}_{i}))\odot\text{MLP}(\mathbf{x}_{i})\right),\] where \(\mathbf{h}_{G}\) is a graph-level output and \(\odot\) denotes element-wise multiplication. The classification layer determines, for a given pair of plumbing graphs, whether or not they are equivalent. This layer takes the concatenation of two graph embeddings as its input and classifies it into two classes, class 0 and class 1. Here class 1 means that the two plumbing graphs are equivalent, while class 0 means they are inequivalent. We implement the classification layer using an MLP with two hidden layers. Detailed information on the architecture of the 3 models is presented in Table 1. For each layer in the table, the first element in the bracket denotes the dimension of the input vectors of the layer, while the second denotes the dimension of the output embedding.

### Experimental Settings

For training and validation, we put together datasets including 80,000 random pairs of plumbings generated by the algorithms presented in Appendix A. More explicitly, the datasets consist of

* 40,000 pairs of equivalent plumbings generated by EquivPair, Algorithm 3, with \(N_{\max}=40\). To generate a pair of equivalent plumbings, the algorithm starts with a random plumbing created by RandomPlumbing, Algorithm 1, and iteratively applies Neumann moves using RandomNeumannMove, Algorithm 2, up to \(N_{\max}\) times, to each plumbing in the pair.
* 30,000 pairs of inequivalent plumbings generated by InequivPair, Algorithm 4, with \(N_{\max}=40\).
It has a similar process to EquivPair, but it starts with a pair of inequivalent plumbings, each of which is separately generated by RandomPlumbing.4 Footnote 4: We note that two plumbings, generated by running RandomPlumbing twice, could accidentally be equivalent and this might affect the accuracy of the models in training. However, we will ignore this since it is statistically insignificant.

* 10,000 pairs of inequivalent plumbings generated by TweakPair, Algorithm 5, with \(N_{\max}=40\). This algorithm generates a pair of inequivalent plumbings, one of which is obtained by tweaking the other. Here, by tweaking a plumbing, we mean making a small change to the weight (or node feature) of a randomly chosen node in the plumbing. Since tweaking is different from a Neumann move, this process creates a plumbing inequivalent to the original one. After tweaking, it also applies RandomNeumannMove iteratively up to \(N_{\max}\) times. These pairs are added to the datasets in order to make the models' decision boundary more accurate, since for a pair generated by InequivPair, the two plumbings might be quite different due to the random generation.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline Layers & GEN+GAT & GCN+GAT & GCN+GCN \\ \hline First convolution & GEN(1, 128) & GCN(1, 128) & GCN(1, 128) \\ \hline Second convolution & GAT(128, 128) & GAT(128, 128) & GCN(128, 128) \\ \hline Aggregation & \multicolumn{3}{c|}{Aggregator(128, 32)} \\ \hline Classification & \multicolumn{3}{c|}{MLP(64, 2)} \\ \hline \end{tabular} \end{table} Table 1: The architecture of the 3 models with parameter values.

We divide the datasets into training and validation sets in the ratio 8:2. We train our models on training sets containing 64,000 pairs of plumbings for up to 150 epochs. For each model, we use cross-entropy loss as the loss function and Adam as the optimizer, with learning rate 0.001.

### Results

The comparison of the performance between the 3 models is plotted in Figure 3. We find that the GEN+GAT model significantly outperforms the other models, GCN+GAT and GCN+GCN.
The model GCN+GAT seems to outperform GCN+GCN by a few percent, but the performance difference is negligible. 5 Footnote 5: For the GCN+GAT and GCN+GCN models, we have checked that increasing weight dimensions and longer training phases did not lead to better performance. We have also tried other models such as GEN+GEN, GEN+GCN and GAT+GAT to figure out which convolution operator plays an important role. The model GEN+GCN shows similar performance to GEN+GAT, but slightly underperforms it, and the performance of GAT+GAT is somewhere between that of GEN+GAT and GCN+GAT. This means that GEN plays a significant role in evaluating equivalence or inequivalence for a pair of plumbing graphs. However, we found that GEN+GEN does not perform as well as GEN+GAT or GEN+GCN. We used the following datasets to test our models:

* Test set 1 It contains 5,000 pairs of equivalent plumbing graphs generated by EquivPair with \(N_{\max}=40\) and 5,000 pairs of inequivalent plumbings generated by InequivPair with \(N_{\max}=40\).
* Test set 2 This dataset is similar to Test set 1, but with \(N_{\max}=60\).
* Test set 3 This set is also similar to Test set 1, but with \(N_{\max}=80\).
* Test set 4 It contains 64 pairs of plumbings generated manually in such a way that, for each pair, the determinants of the adjacency matrices (with weights on the diagonal) of the two plumbings are the same. We use this test set to check that the graph embeddings from the models are not just functions of the determinant of the adjacency matrix of a plumbing. All types of Neumann moves preserve the determinant of the adjacency matrix of the plumbing, which is the order of the first homology group of the corresponding 3-manifold. We want the graph embeddings to depend not only on the determinant, but to be more sophisticated (approximate) invariants of plumbed 3-manifolds.

The results are depicted in Figure 4 and they highlight the following two points.
The first point is that the accuracy for Test set 2 and Test set 3 is at almost the same level as for Test set 1, even though Test sets 2 and 3 contain plumbing pairs with larger \(N_{\max}\) than Test set 1. It is perhaps surprising that such a somewhat counter-intuitive property holds even for the GCN+GAT and GCN+GCN models, which show lower training accuracy than GEN+GAT. The second point is that the GEN+GAT model still outperforms the others on Test set 4, and it can correctly distinguish even inequivalent pairs with the same determinants. Since GEN is designed for graph similarity learning and good generalization, we can see that the GEN+GAT model significantly outperforms the others, GCN+GCN and GCN+GAT, which are designed for general classification problems (with a relatively small number of classes), on the various test sets.

Figure 3: Overview of the performance and loss comparison between the GEN+GAT, GCN+GAT and GCN+GCN models.

## 4 Reinforcement Learning

In this section, we consider reinforcement learning of a neural network that, for a given pair of plumbings, allows us not only to recognize whether they are equivalent or not, but also to find their simplest representations.

### The environment

#### 4.1.1 State space

In our RL environment, the plumbing graph defines the state, and the state space is infinite. In order to handle the start and terminal states in an easy way, the start state for an episode is set to be a plumbing generated by RandomPlumbing, Algorithm 1, with 10 nodes, to which Neumann moves are then applied \(N=15\) times. Between equivalent plumbings, we define a relation as follows: for two equivalent states \(s_{1}\) and \(s_{2}\), one state is said to be _simpler_ than the other if \[f(s_{1})<f(s_{2}),\] where \(f(s)\) for a state \(s\) is defined by \[f(s):=5|V(s)|+\sum_{v\in V(s)}|w(v)|. \tag{4.1}\] It is easy to check that this relation is well-defined on the set of all equivalent plumbings.
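For illustration, (4.1) depends only on the number of vertices and their weights, so it can be sketched directly (a hypothetical helper, modeling a state by the list of its vertex weights):

```python
# Sketch of the simplicity measure (4.1): f(s) = 5|V(s)| + sum over v of |w(v)|,
# where a state is represented by the list of its vertex weights.
def simplicity(weights):
    return 5 * len(weights) + sum(abs(w) for w in weights)
```

A three-vertex state with weights \([-1,-2,-2]\) has \(f=20\), so it counts as less simple than an equivalent two-vertex state with weights \([-2,-3]\), for which \(f=15\).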
One might think that the number of nodes in a state is enough to decide which state is simpler. The reason why we add the sum of the absolute values of the node weights is to make the simplest state generically unique6. Footnote 6: There still could be specific examples with different plumbings in the same equivalence class that minimize \(f(s)\). However, as the results below suggest, such cases are statistically insignificant. For example, the two plumbings depicted in Figure 5 have the same number of nodes, and it is easy to check that they are equivalent by applying 2 Neumann moves. In this example, we say that the plumbing on the right-hand side is simpler than the other, in the sense of (4.1).

Figure 4: Performance comparison between the 3 models, GEN+GAT, GCN+GAT and GCN+GCN, on the various test sets. The error bars are not displayed in the figure since the standard errors on Test sets 1, 2, and 3 are too small (smaller than 0.7) to notice. The standard errors on Test set 4 are about 2.64, 6.25, and 6.20 for the GEN+GAT, GCN+GAT and GCN+GCN models, respectively.

Using this comparison relation, we set the terminal state to be any state equal to or simpler than the initial state of the episode. We also terminate each episode after 15 time steps.

#### 4.1.2 Action space

An action for the agent in a state is defined to be a Neumann move applied to one of the nodes. There are 8 possible Neumann moves: 5 blow-up moves and 3 blow-down moves. However, blow-down moves are not always available for all nodes, and this could raise the problem that there might be too many such _illegal_ actions. Therefore, we merge the 3 blow-down moves into one that applies an available blow-down if the corresponding node satisfies one of the three following conditions:

* the degree of the node is 2 and its weight is equal to \(\pm 1\),
* the degree of the node is 1 and its weight is equal to \(\pm 1\),
* the degree of the node is 1 and its weight is equal to 0.
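The three conditions amount to a simple legality test for the merged blow-down action (a sketch with hypothetical names; `degree` and `weight` are those of the candidate node):

```python
# Sketch: a node admits a blow-down exactly when one of the three listed
# conditions on its degree and weight holds.
def can_blow_down(degree, weight):
    return ((degree == 2 and abs(weight) == 1)
            or (degree == 1 and abs(weight) == 1)
            or (degree == 1 and weight == 0))
```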
Then, for a given state, the total number of possible actions is equal to 6 (5 blow-up moves and 1 blow-down move) times the number of nodes in the state. If the agent takes an illegal action, then the next state remains the same as the current state, and the agent is punished with a negative reward, on which we will elaborate soon.

#### 4.1.3 Rewards

Since the goal for the RL agent is to find the simplest representation of an initial state, it is natural to use \(-f(s^{\prime})\) as the reward (or \(+f(s^{\prime})\) as the punishment) for taking an action in the current state \(s\), where \(s^{\prime}\) denotes the next state obtained by taking the action in \(s\). Since all the rewards are negative and a simpler state is punished less, this helps the agent not only make the current representation as simple as possible, but also do so as fast as possible. It is also important to note that some states must acquire a new blow-up node in order to be simplified, which means the agent has to sacrifice the immediate reward at some time steps to maximize the total return. As we have seen previously, there are some illegal actions in the action space for each state. The reward for such illegal actions is set equal to \(-2f(s^{\prime})\) for the next state \(s^{\prime}\), which remains the same as the current state \(s\), as discussed above. We set the discount factor to \(\gamma=0.99\), very close to 1.

### The deep RL algorithm

We recall that the RL task is to obtain the simplest representation of a given initial state by using Neumann moves. To accomplish this task, we used Asynchronous Advantage Actor-Critic (A3C) [23] as the RL algorithm, which is the asynchronous version of Actor-Critic (AC) [24], with feedforward GNNs. A3C executes multiple local AC agents asynchronously in parallel to decorrelate the local agents' data into a more stationary process. It also provides the practical benefit of being able to use only a multi-core CPU, without having to rely on specialized hardware such as GPUs.

Figure 5: The two plumbings are equivalent and have the same number of nodes. The right-hand side plumbing is simpler than the left one in the sense of (4.1).

The Actor network defines the policy function \(\pi(a|s)\), whose output gives the probability of taking action \(a\) in state \(s\), while the Critic network approximates the value function \(V^{\pi}(s)\), which represents the expected return from state \(s\). Since the inputs of the Actor and Critic are plumbing graphs, in the context of GNNs the Actor network can be thought of as a GNN for a node-level action-selection problem, while the Critic is a GNN for a graph-level estimation problem. The architecture of the Actor is designed using two graph convolutional layers, GCN+GCN, and one single-layer feedforward neural network. The Critic has a similar structure, but with an extra aggregation layer, for which we used a simple mean function. We have also tried GEN+GAT and GEN+GCN for the convolutional layers in the Actor and Critic networks. They seemed to perform well, but they take somewhat longer to train than GCN+GCN. Since the results with GCN+GCN were already quite good, we ended up using GCN+GCN. We trained the agents for \(8\times 10^{4}\) episodes using 8 CPU cores and no GPU, which takes around 8 hours. We used the Adam optimizer with learning rate \(5\times 10^{-4}\). For comparison, we have also implemented Deep Q-Network (DQN) [25] with feedforward GNNs GCN+GCN, with the same settings as those for A3C.

### Results

Our RL agents can be used to find the simplest representative in the equivalence class of a given plumbing graph. Furthermore, they can also be used to check whether a pair of plumbing graphs represents the same 3-manifold or not.
For the latter purpose, we run the RL agents on a pair of plumbings to get the simplest representations of the two plumbings, and then compare these to decide whether the two plumbings are equivalent or not. This process provides the additional advantage that, given two equivalent plumbings, we can obtain a sequence of Neumann moves that turns one plumbing into the other, even though such a sequence of Neumann moves is not necessarily the optimal one between the two plumbings. From this perspective, we are going to check the performance of the RL agents by running them on pairs of plumbings that represent the same 3-manifolds. For the initial inputs of the agents, we generate 10,000 random pairs of plumbings by EquivPair, Algorithm 3, but with a fixed number of Neumann moves \(N\in\{20,40,60,80,100\}\). At each time step, the agents choose a Neumann move and apply it to each plumbing in the pair, yielding another pair of plumbings as the next input for the agents. After taking each action, we compare the two plumbings and check if they are isomorphic. If yes, we consider it a success in finding a sequence of Neumann moves connecting the two plumbings in the initial pair. Otherwise, we move on to the next step and repeat the process until the number of time steps exceeds \(5N\). We define the accuracy of the performance as the number of successes divided by the total number of episodes. An example of a pair of equivalent graphs with a successful result by the trained A3C agent is shown in Figure 9. The results of the RL agents are presented in Figure 6. The plot on the left in Figure 6 shows the accuracy comparison between A3C and DQN. The accuracy for A3C tends to decrease slightly as \(N\) gets larger, but it is around 93% for all pairs of plumbings. However, the accuracy for DQN drops significantly, from around 86% to 42%, as \(N\) increases from \(N=20\) to \(N=100\).
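The testing protocol just described can be sketched as follows (a minimal sketch; `agent_step` and `isomorphic` are hypothetical stand-ins for the trained policy acting on a plumbing and for a graph-isomorphism check):

```python
# Sketch of the evaluation loop: simplify both plumbings in parallel and
# count a success if they become isomorphic within 5N time steps.
def evaluate(pairs, agent_step, isomorphic, N):
    successes = 0
    for g1, g2 in pairs:
        for _ in range(5 * N):
            g1, g2 = agent_step(g1), agent_step(g2)
            if isomorphic(g1, g2):
                successes += 1
                break
    return successes / len(pairs)   # accuracy = successes / total episodes
```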
On the right in Figure 6, we show the average number of actions that the agent takes until obtaining a pair of exactly identical plumbings from an initial pair of equivalent plumbings. For the A3C agent, the average number of actions does not exceed around 1.35 times \(N\), which means the trained A3C agent is quite efficient at making a plumbing simpler. The DQN agent needs a similar number of actions to A3C for \(N=20\) and \(N=40\). However, it takes almost twice as many actions as A3C for larger \(N\).

Figure 6: Performance comparison between the A3C and DQN algorithms.

We have also studied the distribution of Neumann moves (or actions) that the A3C agent performs before and after training to simplify plumbings generated with \(N=100\). In Figure 7, we plot the number of each Neumann move taken by the agent divided by \(N\). In the plot, moves 1-5 denote blow-up moves and moves 6-8 denote blow-down moves. It is natural to observe that all blue dots in Figure 7 lie on the line \(y=0.125\), because the untrained agent takes each action equally often, from a uniform distribution. On the other hand, the red dots for the trained agent show that the agent takes blow-down moves (moves 6-8) with a probability of around 75% and blow-up moves (moves 1-5) with the remaining probability. This makes sense, since blow-down moves can actually make the plumbing simpler and receive less punishment than blow-up moves. In particular, we see that move 7, the blow-down move of type (b), is the most frequent action, and move 1, the blow-up move of type (a), is the least frequent action. This is explained by the fact that move 1 is not helpful for the agent to obtain a simpler plumbing. Before we move to the conclusion, it is interesting to check, with the simple example depicted in Figure 8, whether or not the trained A3C agent is indeed maximizing the total return instead of immediate rewards.
The left plumbing in Figure 8 is a standard representation of a 3-manifold known as the Brieskorn 3-sphere \(\overline{\Sigma(2,3,5)}\), while the plumbing on the right represents a homeomorphic 3-manifold which can also be regarded as the boundary of the \(E_{8}\) manifold. As one can see immediately, the plumbing on the right in Figure 8 has no nodes available for blow-down moves. Therefore, in order to get the left plumbing from the right one, the RL agent must first take appropriate blow-up moves, and only then the available blow-down moves. This is why we take this example for the test. We note that 6 actions are needed to turn one plumbing into the other in an optimal way. The trained A3C agent successfully simplifies the \(E_{8}\) plumbing to the plumbing for \(\overline{\Sigma(2,3,5)}\) by taking 16 actions, while the trained DQN agent does not find a solution before the number of actions exceeds 50. This test confirms that the A3C agent indeed pursues not short-term rewards, but the maximal long-term return.

Figure 7: Comparison of the number of Neumann moves taken by a trained A3C agent and an untrained A3C agent to simplify plumbings. The values shown are the total number of Neumann moves of a given type divided by the total number of actions performed, aggregated over multiple examples.

## 5 Conclusion and Future Work

### Conclusion

In this paper we have examined the GNN approach to problems in 3-dimensional topology that ask whether two given plumbing graphs represent the same 3-manifold or not, and whether it is possible to find a sequence of Neumann moves that connects two plumbings if they are equivalent. In Section 3, we used supervised learning to solve the binary classification of whether or not a pair of plumbings is equivalent. We built 3 models by combining the graph convolution operators GEN, GCN and GAT, together with a certain graph aggregation module and an MLP as a classifier.
We found that the GEN+GAT model outperformed the GCN+GCN and GCN+GAT models on randomly generated training datasets with a maximal number \(N_{\max}=40\) of applied Neumann moves. GEN+GAT achieved about 95% accuracy, while the accuracy of the others was below 80%. We also tested the 3 models on randomly generated test sets with larger \(N_{\max}=60\) and \(N_{\max}=80\). Even though the models were trained on training sets with \(N_{\max}=40\), it is an interesting point that, on such test sets, they still performed at a level similar to their training performance. In Section 4, we utilized reinforcement learning to find a sequence of Neumann moves that relates a given pair of equivalent plumbings. We trained the agent so that it could find the simplest representation of a plumbing by using Neumann moves as its actions. We defined simplicity as a certain linear combination of the number of nodes and the sum of the absolute values of the node features. We ran the trained agent on each of two equivalent plumbings until it arrived at two isomorphic plumbings. In this way, we can construct a sequence of Neumann moves connecting two equivalent plumbings. Using the A3C algorithm, we see that the agent can find such a sequence of Neumann moves for over 90% of randomly generated equivalent plumbing pairs, even with \(N_{\max}=100\). This outperforms the DQN agent by a factor of around 1.5 when \(N=60\), and by more than a factor of 2 when \(N=100\).

### Future work

In this paper we have used Geometric Deep Learning, GNNs in particular, for the problem of classification of 3-manifolds up to homeomorphism. We restricted ourselves to a special simple class of 3-manifolds corresponding to tree plumbing graphs. We hope to apply similar neural network models to more general 3-manifolds and also 4-manifolds in the future.
One direct generalization would be to consider 3-manifolds corresponding to the general plumbing graphs described in [14], possibly disconnected, with loops, and with non-trivial genera assigned to the vertices7. This, in particular, would involve considering extra features associated to the vertices and also to the edges of the graphs, as well as an additional set of moves relating equivalent graphs. A more interesting generalization would be to consider general Kirby diagrams for 3-manifolds. A Kirby diagram of a 3-manifold is a planar diagram of a link with an integer framing number assigned to each link component. The 3-manifold corresponding to the diagram is then obtained by performing Dehn surgery on this framed link. Two diagrams produce homeomorphic 3-manifolds if and only if they can be related by a sequence of Reidemeister moves (which do not change the isotopy class of the link) together with the so-called Kirby, or equivalently, Fenn-Rourke moves, which do change the link but not the resulting 3-manifold (up to homeomorphism).

Figure 8: Two equivalent representations of the plumbed 3-manifold \(\overline{\Sigma(2,3,5)}\).

Such a diagram can be understood as a 4-regular plane graph with additional data specifying the types of crossings in the link diagram and the framings of the link components. Alternatively, one can consider the Tait graph associated to a checkerboard coloring of the link diagram. For practical purposes, this presentation will most likely be more efficient. The Reidemeister as well as Kirby/Fenn-Rourke moves can then again be understood as certain local operations on the graphs associated with Kirby diagrams. The main new challenge would be incorporating the structure of the planar embedding of the graph into the GNN. This can be done, for example, by specifying the cyclic order of edges at each vertex, or the cyclic order of edges for each face of the plane graph. This additional structure should be taken into account in the layers of the network.
This is not considered in most standard GNN architectures. A further step would be the problem of recognizing whether a pair of Kirby diagrams for 4-manifolds produces diffeomorphic 4-manifolds. Such Kirby diagrams are again framed link diagrams that also contain special "dotted" link components. There is a corresponding set of local Kirby moves that relate diagrams realizing diffeomorphic 4-manifolds. For a comprehensive reference on Kirby diagrams of 3- and 4-manifolds we refer to [26].

## Acknowledgements

We would like to thank Sergei Gukov for useful comments and suggestions on the draft of the paper. We would also like to thank the anonymous referees who provided insightful and detailed comments and suggestions on an earlier version of the paper.

## Appendix A Algorithms

In this section, we provide details of the algorithms that have been used to generate datasets for training and testing both the SL and RL models in Sections 3 and 4.

* RandomPlumbing This algorithm generates a random plumbing tree by creating a random array of node features and building an adjacency matrix. It starts by choosing a random integer between 1 and 25 as the number of nodes. In general, there are \(N^{N-2}\) different plumbing trees with \(N\) nodes if we do not consider node features. Therefore, the upper limit of 25 is large enough to generate around \(10^{6}\) random plumbing trees with a statistically insignificant number of overlapping plumbings. The array of node features is also created by randomly choosing an integer in the interval \((-20,20)\) for each node. We then define the adjacency matrix for the plumbing tree, and the algorithm returns the pair consisting of the node-feature array and the adjacency matrix as data for the output plumbing. Note that all random processes use a uniform distribution.
* RandomNeumannMove The role of this algorithm is to apply a randomly chosen Neumann move to a random node of the input plumbing, and then return the resulting plumbing.
A random Neumann move is characterized by 3 variables, i.e., \(type\), \(updown\), and \(sign\). Here \(type\in\{1,2,3\}\) denotes the 3 types of Neumann moves depicted in Figure 2, \(updown\in\{1,-1\}\) indicates a blow-up (\(updown=1\)) or a blow-down (\(updown=-1\)), and \(sign\in\{1,-1\}\) denotes the sign of the new vertex for blow-up Neumann moves of types (b) and (c). Notice that the other moves do not require \(sign\). The algorithm first takes a random node of the input and fixes a random tuple \((type,updown,sign)\) from a uniform distribution. It then builds the new node-feature array and adjacency matrix for the plumbing obtained by applying the Neumann move to the chosen node. If the Neumann move determined by the tuple \((type,updown,sign)\) is an illegal move, the output plumbing is the same as the input. The algorithm also returns another variable, \(done\in\{\textsc{True},\textsc{False}\}\), which makes it possible to know whether the Neumann move to be applied is legal (\(done=\textsc{True}\)) or illegal (\(done=\textsc{False}\)). This variable \(done\) will be used to decide the rewards of actions in Section 4.
* EquivPair and InequivPair These are used to generate an equivalent plumbing pair (EquivPair) or an inequivalent plumbing pair (InequivPair). In the first step, EquivPair generates an initial pair of isomorphic plumbings, while InequivPair generates two inequivalent plumbings, using RandomPlumbing. They then follow the same process, in which they apply Neumann moves iteratively, up to \(N_{\max}\) times, to each plumbing in the initial pair. They then return the resulting pair as well as a variable, named \(label\), which will be used for the classification problem in Section 3. Notice that \(label=1\) for EquivPair and \(label=-1\) for InequivPair.
* TweakPair This algorithm generates an inequivalent pair of plumbings, but with the same graph structure.
One plumbing is generated by RandomPlumbing, and the other is obtained by tweaking a copy of the first plumbing, i.e., by making a small change to the feature of a randomly chosen node. These two plumbings form the initial pair. Since the adjacency matrices of the two plumbings are the same, they have the same graph structure; however, due to the small change, the two plumbings are inequivalent. The algorithm then has the same structure as EquivPair and InequivPair, applying random Neumann moves iteratively to each plumbing in the initial pair.

```
n ← a random integer between 1 and 25                ▷ number of nodes
x ← an array of n random integers between −20 and 20 ▷ node features
a ← an n × n matrix of zeros                         ▷ initialize the adjacency matrix
for i = 2 to n do                                    ▷ construct the adjacency matrix
    j ← a random integer between 1 and i − 1
    a[i,j], a[j,i] ← 1
endfor
G ← (x, a)                                           ▷ G defines the plumbing
return G
```
**Algorithm 1** RandomPlumbing

```
Require: a plumbing G
v ← a random node of G
type ← a random choice in {1, 2, 3}
updown ← a random choice in {1, −1}
if updown = 1 then                                   ▷ blow-up move
    if type = 1 then
        G′ ← the plumbing obtained by applying a blow-up move of type (a) to the node v
    else
        sign ← a random choice in {1, −1}
        G′ ← the plumbing obtained by applying the blow-up move determined by (type, sign)
    endif
    done ← True
else                                                 ▷ blow-down move
    if v can be removed by a blow-down move then
        G′ ← the plumbing obtained by applying a blow-down move to the node v
        done ← True
    else
        G′ ← G                                       ▷ return the input plumbing for a forbidden move
        done ← False
    endif
endif
return (done, G′)
```
**Algorithm 2** RandomNeumannMove

```
Require: N_max ∈ Z⁺
G ← a plumbing by RandomPlumbing
G1 ← G
n1 ← a random integer between 1 and N_max
for i = 1 to n1 do
    G1 ← RandomNeumannMove(G1)
endfor
G2 ← G
n2 ← a random integer between 1 and N_max
for j = 1 to n2 do
    G2 ← RandomNeumannMove(G2)
endfor
label ← 1
return G1, G2, label
```
**Algorithm 3** EquivPair

```
Require: N_max ∈ Z⁺
G1 ← a plumbing by RandomPlumbing
G2 ← a plumbing by RandomPlumbing, inequivalent to G1
n1 ← a random integer between 1 and N_max
for i = 1 to n1 do
    G1 ← RandomNeumannMove(G1)
endfor
n2 ← a random integer between 1 and N_max
for j = 1 to n2 do
    G2 ← RandomNeumannMove(G2)
endfor
label ← −1
return G1, G2, label
```
**Algorithm 4** InequivPair

```
Require: N_max ∈ Z⁺
G1 ← a plumbing by RandomPlumbing
G2 ← G1
v ← a random node in G2
t ← a random integer between −3 and 3, not 0
x ← node features of G2
a ← adjacency matrix of G2
x_v ← x_v + t
G2 ← the plumbing with (x, a)
n1 ← a random integer between 1 and N_max
for i = 1 to n1 do
    G1 ← RandomNeumannMove(G1)
endfor
n2 ← a random integer between 1 and N_max
for i = 1 to n2 do
    G2 ← RandomNeumannMove(G2)
endfor
label ← −1
return G1, G2, label
```
**Algorithm 5** TweakPair

Figure 9: An example of a pair of equivalent plumbing graphs generated by EquivPair with the number of Neumann moves fixed to \(N=40\). The graphs are successfully recognized as equivalent both by the RL agent trained with the A3C algorithm considered in Section 4 and by the GEN+GAT neural network considered in Section 3.
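For concreteness, the tree-generation step of Algorithm 1 can be written as a short NumPy routine. This is an illustrative translation, not the authors' actual code; the function and variable names are ours:

```python
import numpy as np

def random_plumbing(max_nodes=25, rng=None):
    """Sketch of Algorithm 1 (RandomPlumbing): sample a random plumbing tree
    as a pair (node_features, adjacency_matrix).  Attaching each node i to a
    uniformly chosen earlier node j < i guarantees the graph is a tree."""
    if rng is None:
        rng = np.random.default_rng()
    n = int(rng.integers(1, max_nodes + 1))   # number of nodes, 1..25
    x = rng.integers(-20, 21, size=n)         # node features (framing weights)
    a = np.zeros((n, n), dtype=int)           # adjacency matrix
    for i in range(1, n):
        j = int(rng.integers(0, i))           # earlier node to attach to
        a[i, j] = a[j, i] = 1
    return x, a

x, a = random_plumbing(rng=np.random.default_rng(0))
n = len(x)
print(a.sum() == 2 * (n - 1), (a == a.T).all())  # a tree has n-1 undirected edges
```

The tree property holds by construction: each new node contributes exactly one edge to an earlier node, so the graph on \(n\) nodes has \(n-1\) edges and is connected.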
2301.06740
Contributions of negative-energy states to the E2-M1 polarizability of the Sr clock
With the improvement of high-precision optical clocks, the higher-order multipolar interaction between atoms and light needs quantitative evaluation. However, for the Sr clock, the differential dynamic E2-M1 polarizability at the magic wavelength has contradictions among available theoretical and experimental results. Recently, the new experimental measurement of S. D\"{o}rscher {\em et al.} [arXiv:2210.14727] is consistent with the measurement of Ushijima {\em et al.}, which poses new challenges to theory and urgently calls for theoretical explanations. In the present work, we investigate the contributions of negative-energy states to the E2 and M1 polarizabilities. We find that for the M1 polarizability, the contribution from negative-energy states is crucial and dominant. Our new theoretical result for the E2-M1 polarizability difference is $-7.74(3.92)\times 10^{-5}$ a.u., which is in good agreement with the recent experiment of S. D\"{o}rscher et al., so the inconsistency problem of the E2-M1 polarizability in the Sr clock between theory and experiment is eliminated.
Fang-Fei Wu, Ting-Yun Shi, Li-Yan Tang
2023-01-17T07:51:19Z
http://arxiv.org/abs/2301.06740v2
# Contributions of negative-energy states to the E2-M1 polarizability difference of the Sr clock

###### Abstract

With the improvement of high-precision optical clocks, the higher-order multipolar interaction between atoms and light needs quantitative evaluation. However, for the Sr clock, the dynamic E2-M1 polarizability difference at the magic wavelength shows contradictions among the available results; in particular, the sign is strongly incompatible between theory and all experiments, which poses new challenges to theory. We investigate the contributions of negative-energy states to the E2 and M1 polarizabilities. We find that for the M1 polarizability, the contribution from negative-energy states is dominant. Our result for the E2-M1 polarizability difference is \(-7.74(3.92)\times 10^{-5}\) a.u., which has the same sign as all the experimental values. The present work solves the sign inconsistency for the E2-M1 polarizability difference in the Sr clock, and the importance of the negative-energy-state contribution extends directly to the evaluation of the multipolar optical frequency shift for other clocks.

pacs: 31.15.ac, 31.15.ap, 34.20.Cf

## I Introduction

High-precision optical clocks have extensive and important applications, such as redefining the unit of time [1; 2; 3], establishing quantum metrology [4; 5], testing variations of the fundamental constants [6; 7; 8], probing dark matter and dark energy [9; 10], and searching for new physics [11; 12; 13]. Strontium is a typical representative of optical lattice clocks, in which atoms are trapped in a magic-wavelength optical lattice [14; 15]: at the magic wavelength, the leading-order Stark shift, related to the dynamic electric-dipole (E1) polarizability, is eliminated, but the multipolar Stark shifts related to the dynamic electric-quadrupole (E2) and magnetic-dipole (M1) polarizabilities cannot be cancelled.
At present, the systematic uncertainty of the Sr clock has reached the \(10^{-18}\) level of precision [16; 17; 18; 19; 20; 21]. Aiming to develop and realize a new generation of higher-precision optical clocks with uncertainty and stability beyond \(10^{-18}\), the multipolar interaction between light and atoms, related to the E2 and M1 polarizabilities, needs to be quantitatively evaluated [22; 23; 24; 25; 26]. However, for the Sr clock, there is a strong sign incompatibility for the E2-M1 polarizability difference at the magic wavelength of \(813.4280(5)\) nm [27] between theory [22; 23; 24; 28] and experiment [29; 30], which limits the improvement of the precision of the Sr optical clock. At present, the results of two different theoretical methods are consistent with each other. One comes from the ab-initio calculations of Porsev _et al._ [24], who report a value of \(2.80(36)\times 10^{-5}\) a.u. using the configuration-interaction method combined with the linearized coupled-cluster approach (CI+all-order). The other result, \(2.68(94)\times 10^{-5}\) a.u. [28], obtained from the combined Dirac-Fock plus core polarization (DFCP) and relativistic configuration interaction (RCI) approaches, agrees well with the value of Porsev _et al._ But both theoretical results have the opposite sign to the measured value of \(-0.962(40)\) mHz of the RIKEN group [25]. Recently, two new experimental results for the E2-M1 polarizability difference have been reported: a value of \(-987^{+174}_{-223}\) \(\mu\)Hz measured by the PTB group [29], and a value of \(-1.24(5)\) mHz reported by the JILA group [30]. Both experiments have the same negative sign as the measurement of the RIKEN group. This further confirms that the sign incompatibility between theory and experiment remains unresolved, which poses a new challenge to theory.
Therefore, a new theoretical interpretation is urgently needed to solve the current contradiction of the E2-M1 polarizability in the Sr clock. From a theoretical perspective, it is crucial to keep the completeness of the intermediate states when using the sum-over-states method to calculate the multipolar polarizabilities. Negative-energy states are a product of the Dirac theory and are part of this completeness; their importance has been emphasized in calculations of the \(g\)-factor of atoms and ions [31; 32; 33; 34; 35; 36; 37; 38]. However, the contribution of negative-energy states to the multipolar polarizabilities of optical clocks has never been discussed before. In the present work, we take into account the negative-energy-state contributions to the dynamic multipolar polarizabilities of the Sr clock using an improved DFCP+RCI method. Different from available calculations, all the negative-energy states of the Sr\({}^{+}\) ion are included to construct the configurations of the Sr atom. In addition, the summation in the multipolar polarizabilities involves all the negative-energy and positive-energy states of the Sr atom. We find that for the M1 polarizability, the negative-energy-state contribution is larger than that of the positive-energy states by several orders of magnitude and completely changes the sign of the final result. The present work thus eliminates the sign inconsistency for the E2-M1 polarizability difference of the Sr clock between theory and experiment.

## II Theoretical Method

The combined DFCP+RCI method is effective in predicting the structural properties of multi-electron atoms and ions, and it obtains results consistent with other ab-initio methods. For example, for the E1 polarizability of the Sr, Mg, and Cd clocks, the values of the DFCP+RCI method agree with the results of the CI+all-order method [28; 39; 40] within 3%.
In the present work, an improved DFCP+RCI method has been developed by including all the positive- and negative-energy states of the monovalent-electron ion to construct the configurations of the divalent-electron atom. The detailed implementation to obtain the energies and wavefunctions of the Sr atom is as follows. Firstly, we solve the Dirac-Fock (DF) equation for the frozen Sr\({}^{2+}\) core to obtain the core-orbital wavefunctions \(\psi(\mathbf{r})\), which are used to construct the DF potential \(V_{DF}(\mathbf{r})\) between a valence electron and the nucleus. Secondly, we solve the DFCP equation to obtain the monovalent-electron wavefunctions \(\phi(\mathbf{r})\) of the Sr\({}^{+}\) ion, \[h_{\rm DFCP}(\mathbf{r})\phi(\mathbf{r})=\varepsilon\phi(\mathbf{r})\,, \tag{1}\] where \(h_{\rm DFCP}(\mathbf{r})\) is the DFCP Hamiltonian, \[h_{\rm DFCP}(\mathbf{r})=c\mathbf{\alpha}\cdot{\bf p}+(\beta-1)c^{2}+V_{N}(\mathbf{r})+V_{DF}(\mathbf{r})+V_{1}(\mathbf{r})\,, \tag{2}\] where \(\mathbf{\alpha}\) and \(\beta\) are the \(4\times 4\) Dirac matrices, \({\bf p}\) is the momentum operator for the valence electron, and \(V_{N}(\mathbf{r})\) is the Coulomb potential between a valence electron and the nucleus. \(V_{1}(\mathbf{r})\) is the one-body core-polarization potential [41], which is kept the same as in Ref. [28]. In this step, it is worth pointing out specially that we keep all the wavefunctions \(\phi(\mathbf{r})\) of positive- and negative-energy states for constructing the configuration wavefunctions \(|\Phi_{I}(\sigma\pi JM)\rangle\) in the following step. Thirdly, we perform the configuration-interaction calculation for the divalent-electron Sr atom, \[\bigg{[}\sum_{i}^{2}h_{\rm DFCP}(\mathbf{r}_{i})+\frac{1}{\mathbf{r}_{12}}+V_{2}(\mathbf{r}_{12})\bigg{]}|\Psi(\pi JM)\rangle=E|\Psi(\pi JM)\rangle\,, \tag{3}\] where \(V_{2}(\mathbf{r}_{ij})\) is the two-body core-polarization interaction [28; 42; 43].
The wave function \(|\Psi(\pi JM)\rangle\) with parity \(\pi\), angular momentum \(J\), and magnetic quantum number \(M\) of the system is expanded as a linear combination of the configuration-state wave functions, \[|\Psi(\pi JM)\rangle=\sum_{I}C_{I}|\Phi_{I}(\sigma\pi JM)\rangle\,, \tag{4}\] where \(C_{I}\) and \(\sigma\) are, respectively, the expansion coefficients and the additional quantum number that defines each configuration state uniquely. In this step, we obtain all the positive- and negative-energy states of the Sr atom. When an atom is exposed to a linearly polarized laser field of frequency \(\omega\), the general expressions of the dynamic M1 and E2 polarizabilities for the initial state \(|0\rangle\equiv|n_{0},J_{0}=0\rangle\) (where \(n_{0}\) represents all other quantum numbers) are written as [44] \[\alpha^{M1}(\omega) = \frac{2}{3}\sum_{n}\frac{\Delta E_{n0}|\langle 0\|T_{1}^{(0)}\|nJ_{n}\rangle|^{2}}{\Delta E_{n0}^{2}-\omega^{2}}\,, \tag{5}\] \[\alpha^{E2}(\omega) = \frac{1}{30}(\alpha\omega)^{2}\sum_{n}\frac{\Delta E_{n0}|\langle 0\|T_{2}^{(1)}\|nJ_{n}\rangle|^{2}}{\Delta E_{n0}^{2}-\omega^{2}}\,, \tag{6}\] where \(\alpha\) is the fine-structure constant, \(T_{\ell}^{(\lambda)}\) is the \(2^{\ell}\)-pole transition operator, with \(\lambda=0\) and \(\lambda=1\) representing the magnetic and electric transition operators, respectively, and \(\Delta E_{n0}\) represents the transition energy between the initial state \(|0\rangle\) and the intermediate state \(|nJ_{n}\rangle\).
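Once the transition energies and reduced matrix elements are tabulated, Eqs. (5) and (6) are simple weighted sums. The sketch below evaluates them with made-up placeholder numbers (not actual Sr data) and illustrates the paper's central mechanism: a single far-detuned term at \(\Delta E\approx-2c^{2}\) with a large matrix element can flip the sign of \(\alpha^{M1}\):

```python
import numpy as np

ALPHA_FS = 1.0 / 137.035999  # fine-structure constant (a.u.)

def alpha_m1(omega, dE, m1):
    """Eq. (5): (2/3) * sum_n dE_n |<0||T1||n>|^2 / (dE_n^2 - omega^2), in a.u."""
    dE, m1 = np.asarray(dE, float), np.asarray(m1, float)
    return (2.0 / 3.0) * np.sum(dE * m1**2 / (dE**2 - omega**2))

def alpha_e2(omega, dE, e2):
    """Eq. (6): (1/30)(alpha*omega)^2 * sum_n dE_n |<0||T2||n>|^2 / (dE_n^2 - omega^2)."""
    dE, e2 = np.asarray(dE, float), np.asarray(e2, float)
    return (ALPHA_FS * omega) ** 2 / 30.0 * np.sum(dE * e2**2 / (dE**2 - omega**2))

omega = 0.056  # roughly the 813 nm lattice photon energy in a.u.

# Positive-energy intermediate states (placeholder energies / matrix elements):
pos = alpha_m1(omega, dE=[0.10, 0.20], m1=[1e-4, 1e-4])   # tiny and positive
# One representative negative-energy term near -2c^2 ~ -37558 a.u.:
neg = alpha_m1(omega, dE=[-37558.0], m1=[0.5])            # negative, |dE| >> omega
print(pos > 0, pos + neg < 0)  # the negative-energy term flips the overall sign
```

Because \(|\Delta E|\gg\omega\) for the negative-energy terms, each contributes roughly \((2/3)\,|\langle 0\|T_{1}^{(0)}\|n\rangle|^{2}/\Delta E\), which is negative for \(\Delta E<0\); summing thousands of such terms gives the cumulative effect discussed below.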
The reduced matrix elements \(\langle 0\|T_{1}^{(0)}\|nJ_{n}\rangle\) and \(\langle 0\|T_{2}^{(1)}\|nJ_{n}\rangle\) can be expressed through the reduced matrix elements \(\langle i\|t_{1}^{(0)}\|k\rangle\) and \(\langle i\|t_{2}^{(1)}\|k\rangle\) of the monovalent-electron system [45], \[\langle i\|t_{1}^{(0)}\|j\rangle = \frac{\kappa_{i}+\kappa_{j}}{2}\langle-\kappa_{i}\|C^{1}\|\kappa_{j}\rangle\int r[P_{i}(r)Q_{j}(r)+Q_{i}(r)P_{j}(r)]dr\,, \tag{7}\] \[\langle i\|t_{2}^{(1)}\|j\rangle = \langle\kappa_{i}\|C^{2}\|\kappa_{j}\rangle\int r^{2}[P_{i}(r)P_{j}(r)+Q_{i}(r)Q_{j}(r)]dr\,, \tag{8}\] where \(P_{i}(r)\) and \(Q_{i}(r)\) are the large and small components of the wavefunctions of the monovalent-electron system. It is worth noting that the radial integrations of the electric and magnetic reduced matrix elements are different: the magnetic part in Eq. (7) involves the cross product of the large and small components of the wavefunctions. In addition, the summations for the M1 and E2 polarizabilities in Eqs. (5) and (6) involve all the intermediate states, including the negative-energy states. In the present work, we have tested the convergence of the multipolar polarizabilities as the number of B-spline basis functions \(N\) and the number of partial waves \(\ell\) are increased. We find that our results remain unchanged to at least 4 significant digits as \(N\) and \(\ell\) increase, so in the following section we list the results for the maximum basis set. The maximum number of B-spline basis functions in our calculation is 40, the maximum partial wave is \(\ell=5\), and the total number of configurations reaches 128781.

## III Results and Discussions

Using the improved DFCP+RCI method with negative-energy states included, we have performed calculations of the energies, reduced matrix elements, and multipolar polarizabilities of the Sr clock. We find that with the inclusion of the negative-energy states, the energy correction for low-lying states is less than 3 ppm.
The negative-energy states also have little effect on the E1 polarizability, which cannot be resolved at the current theoretical accuracy. For the multipolar polarizabilities of the clock states at the 813.4280(5) nm magic wavelength, Tables 1 and 2 list the itemized contributions to the E2 and M1 polarizabilities, respectively. It is seen that for the E2 polarizability, the main contribution to the ground state \(5s^{2}\,{}^{1}S_{0}\) comes from the positive-energy states \(5s4d\,^{1}D_{2}\) and \(5s5d\,^{1}D_{2}\), which contribute about 75% and 13% of the total E2 polarizability, respectively. For the excited state \(5s5p\,^{3}P_{0}^{o}\), the main contribution comes from the \(5d5p\,^{3}F_{2}^{o}\), \(5s6p\,^{3}P_{2}^{o}\) and \(5s4f\,^{3}F_{2}^{o}\) states; these three items together contribute about 60% of the total E2 polarizability. Notably, for both the \(5s^{2}\,{}^{1}S_{0}\) and \(5s5p\,^{3}P_{0}^{o}\) clock states, the contribution of negative-energy states is less than \(10^{-14}\) and can be safely ignored. However, different from the E2 polarizability, the influence of the negative-energy states on the dynamic M1 polarizability is obvious and dominant, as can be seen clearly from Table 2. For the \(5s^{2}\,{}^{1}S_{0}\) state, if the negative-energy states are not taken into account, the largest contribution comes from the \(5p^{2}\,{}^{3}P_{1}\) state. After including the negative-energy states, the final M1 polarizability at the 813.4280(5) nm magic wavelength changes from \(2.17\times 10^{-9}\) a.u. to \(-3.84\times 10^{-4}\) a.u.; the sign is changed completely. This is because the contribution of the negative-energy states is five orders of magnitude larger than that of the positive-energy states and has the opposite sign.
Similarly, for the \(5s5p\,^{3}P_{0}^{o}\) state, the contribution of the negative-energy states is two orders of magnitude larger than that of the positive-energy states, accounting for 99% of the final M1 polarizability. In order to further explore the reasons for the large and dominant negative-energy-state contribution, we have analyzed the itemized contributions of the negative-energy states.

Table 1: Itemized contributions (Contr.) to the dynamic E2 polarizability (in a.u.) for the \(5s^{2}\,{}^{1}S_{0}\) and \(5s5p\,^{3}P_{0}^{o}\) clock states at the 813.4280(5) nm magic wavelength. Tail represents the contribution from the other positive-energy states; \(\alpha^{E2+}\) and \(\alpha^{E2-}\) represent the total contributions from the positive-energy and negative-energy states, respectively. The numbers in square brackets denote powers of ten.

| Sub item (\(5s^{2}\,{}^{1}S_{0}\)) | Contr. | Sub item (\(5s5p\,^{3}P_{0}^{o}\)) | Contr. |
| --- | --- | --- | --- |
| \(5s4d\,^{3}D_{2}\) | 1.258[-7] | \(5s5p\,^{3}P_{2}^{o}\) | \(-2.805\)[-6] |
| \(5s4d\,^{1}D_{2}\) | 6.965[-5] | \(5d5p\,^{3}F_{2}^{o}\) | 3.095[-5] |
| \(5s5d\,^{1}D_{2}\) | 1.224[-5] | \(5d5p\,^{1}D_{2}^{o}\) | 3.149[-6] |
| \(5s5d\,^{3}D_{2}\) | 1.106[-8] | \(5s6p\,^{3}P_{2}^{o}\) | 1.741[-5] |
| \(5p^{2}\,{}^{3}P_{2}\) | 5.966[-8] | \(4d5p\,^{3}D_{2}^{o}\) | 3.603[-6] |
| \(5d^{2}\,{}^{1}D_{2}\) | 3.887[-8] | \(5d5p\,^{3}P_{2}^{o}\) | 2.139[-6] |
| \(5s6d\,^{3}D_{2}\) | 4.981[-10] | \(5s4f\,^{3}F_{2}^{o}\) | 2.644[-5] |
| \(5s6d\,^{1}D_{2}\) | 1.226[-7] | \(5s7p\,^{3}P_{2}^{o}\) | 2.601[-6] |
| \(5s7d\,^{1}D_{2}\) | 2.600[-6] | \(5s5f\,^{3}F_{2}^{o}\) | 8.768[-6] |
| Tail | 7.950[-6] | Tail | 3.214[-5] |
| \(\alpha^{E2+}\) | 9.28[-5] | \(\alpha^{E2+}\) | 12.44[-5] |
| \(\alpha^{E2-}\) | \(-8.64\)[-16] | \(\alpha^{E2-}\) | \(-1.10\)[-15] |
| Total | 9.28[-5] | Total | 12.44[-5] |

Table 2: Itemized contributions (Contr.) to the dynamic M1 polarizability (in a.u.) for the \(5s^{2}\,{}^{1}S_{0}\) and \(5s5p\,^{3}P_{0}^{o}\) clock states at the 813.4280(5) nm magic wavelength. Tail represents the contribution from the other positive-energy states; \(\alpha^{M1+}\) and \(\alpha^{M1-}\) represent the total contributions from the positive-energy and negative-energy states, respectively. The numbers in square brackets denote powers of ten.

| Sub item (\(5s^{2}\,{}^{1}S_{0}\)) | Contr. | Sub item (\(5s5p\,^{3}P_{0}^{o}\)) | Contr. |
| --- | --- | --- | --- |
| \(5s4d\,^{3}D_{1}\) | 1.483[-15] | \(5s5p\,^{3}P_{1}^{o}\) | \(-4.81\)[-6] |
| \(5s6s\,^{3}S_{1}\) | 4.098[-13] | \(5s5p\,^{1}P_{1}^{o}\) | \(-2.702\)[-7] |
| \(5s5d\,^{3}D_{1}\) | 1.273[-12] | \(5s6p\,^{3}P_{1}^{o}\) | 7.336[-10] |
| \(5p^{2}\,{}^{3}P_{1}\) | 1.539[-9] | \(5s6p\,^{1}P_{1}^{o}\) | 1.766[-8] |
| Tail | 5.81[-10] | Tail | 1.35[-8] |
| \(\alpha^{M1+}\) | 2.17[-9] | \(\alpha^{M1+}\) | \(-5.05\)[-6] |
| \(\alpha^{M1-}\) | \(-3.84\)[-4] | \(\alpha^{M1-}\) | \(-4.88\)[-4] |
| Total | \(-3.84\)[-4] | Total | \(-4.93\)[-4] |

We find that, different from the positive-energy-state contribution, the negative-energy-state contribution is not dominated by a few intermediate states, but is a cumulative effect of thousands of negative-energy states with energies in the range of \(-37558(1)\) a.u. (\(2mc^{2}\approx 37558\) a.u.). Although all these negative-energy states are far away from the initial state, the radial wavefunctions \(Q_{j}(r)\) of these states have a large overlap with the \(P_{i}(r)\) part of the initial-state wavefunction, which results in the large \(P_{i}(r)Q_{j}(r)\) product in Eq. (7).
In other words, it is a series of large M1 transition matrix elements between the negative-energy states and the initial state that leads to the dominant contribution of the negative-energy states to the M1 polarizability. Further, since the results of the DFCP+RCI method are consistent with those of ab-initio calculations within 3% [28; 39], we introduce a \(\pm 3\%\) fluctuation into all the reduced matrix elements to conservatively evaluate the uncertainty of the present E2 and M1 polarizabilities. The final values are summarized in Table 3, where a detailed comparison is also given. It can be clearly seen that for the E2 polarizability, the present values, which include the negative-energy-state contribution, are in good agreement with Refs. [24; 28], which include only the positive-energy states. This confirms again that the negative-energy-state contribution to the electric polarizability can be neglected. The obvious difference between the present work and the other calculations in Table 3 is the M1 polarizability. For the \(5s^{2}\,{}^{1}S_{0}\) state, the present value of \(-3.84(24)\times 10^{-4}\) a.u. has the opposite sign to the values of Refs. [24; 28], and its absolute value is five orders of magnitude larger. For the \(5s5p\,^{3}P_{0}^{o}\) state, the present value of \(-4.93(30)\times 10^{-4}\) a.u. is two orders of magnitude larger than the other values in Refs. [24; 28]. Adding \(\Delta\alpha^{E2}(\omega)\) and \(\Delta\alpha^{M1}(\omega)\) together, we obtain the final E2-M1 polarizability difference \(\Delta\alpha^{QM}(\omega)=-7.74(3.92)\times 10^{-5}\) a.u., which includes the negative-energy-state contribution of \(-1.04(38)\times 10^{-4}\) a.u. Compared with our previous value of \(2.68(94)\times 10^{-5}\) a.u. [28], the large uncertainty in the present work is due to the dominant differential M1 polarizability \(\Delta\alpha^{M1}(\omega)\).
Since the absolute value of \(\Delta\alpha^{M1}(\omega)=-1.09(38)\times 10^{-4}\) a.u. is an order of magnitude larger than the differential E2 polarizability \(\Delta\alpha^{E2}(\omega)=3.16(95)\times 10^{-5}\) a.u., the addition of the two terms causes a cancellation of significant figures. This is completely different from the other calculations of Refs. [24; 28], where \(\Delta\alpha^{M1}(\omega)\) is an order of magnitude smaller than \(\Delta\alpha^{E2}(\omega)\). Regarding the larger uncertainty of our present value of \(-7.74(3.92)\times 10^{-5}\) a.u., there is limited room to improve the accuracy of our DFCP+RCI method at present. Therefore, to further reduce the theoretical uncertainty in the future, it is necessary to develop high-accuracy theoretical methods for multi-electron atomic-structure calculations, such as the CI+all-order method.

To compare with experiments directly, we convert all the theoretical values of the E2-M1 polarizability difference from atomic units (a.u.) to Hz, as shown in Fig. 1, where \(\tilde{\alpha}^{QM}=\Delta\alpha^{QM}(\omega)E_{R}/\alpha^{E1}(\omega)\), \(\alpha^{E1}(\omega)=287(17)\) a.u. is the present dynamic E1 polarizability of the clock states at the 813 nm magic wavelength, and \(E_{R}\) is the lattice photon recoil energy [25]. It is clearly seen that our present value of \(-0.935(477)\) mHz, which includes the negative-energy-state contribution, agrees well with the three measured results of \(-0.962(40)\) [25], \(-0.987^{+0.174}_{-0.223}\) [29] and \(-1.24(5)\) mHz [30]. This illustrates that the negative-energy states are crucial to the calculation of the multipolar polarizabilities. In addition, Fig. 1 also shows that a discrepancy exists between the recent measurement of JILA [30] and the previous measurement of RIKEN [25].

Figure 1: (Color online) Comparison of the \(\tilde{\alpha}^{QM}/h\) (in mHz). The green lines represent experimental results, the blue line represents our present value, and the magenta lines denote other theoretical results.

If we add the present negative-energy-state contribution of \(-1.04(38)\times 10^{-4}\) a.u. to the CI+all-order value of \(2.80(36)\times 10^{-5}\) a.u. [24], and consider that the uncertainty of the CI+all-order method is about a factor of 1/3 of that of our DFCP+RCI method, we obtain an estimated value of \(-7.60(1.50)\times 10^{-5}\) a.u. (equal to \(-0.918(189)\) mHz), which is expected to adjudicate among the current experimental results once the negative-energy-state contribution is taken into account. Therefore, the development of high-accuracy theoretical methods with negative-energy states included is urgently needed to resolve the existing discrepancy among different measurements.

## IV Conclusions

Focusing on the obvious contradiction in sign for the E2-M1 polarizability difference between existing theory and experiment in the Sr clock, we develop the combined DFCP+RCI method with the inclusion of negative-energy states, and apply it to calculations of the dynamic M1 and E2 polarizabilities of the Sr clock. Our result for the E2-M1 polarizability difference is \(-7.74(3.92)\times 10^{-5}\) a.u., which has the same sign as all the measured values. Our work has resolved the sign inconsistency for the E2-M1 polarizability difference in the Sr clock. In the future, developing a high-accuracy theoretical method with the negative-energy states included is expected to resolve the discrepancy among different experiments. In addition, our work has revealed the importance of negative-energy states, which were lacking in all previous calculations of optical clocks; this can be extended to investigations of the multipolar interaction between atoms and light in the field of precision-measurement physics.

###### Acknowledgements.

We thank Yong-Hui Zhang for helpful discussions on the negative-energy states, and thank J. Chen, K.-L. Gao, and Z.-C. Yan for reading our paper. This work was supported by the National Natural Science Foundation of China under Grant Nos.
12174402 and 12004124, and by the Natural Science Foundation of Hubei Province under Grant Nos. 2019CFA058 and 2022CFA013.
2306.02581
Stability of Alexandrov-Fenchel Type Inequalities for Nearly Spherical Sets in Space Forms
In this paper, we first derive a quantitative quermassintegral inequality for nearly spherical sets in $\mathbb{H}^{n+1}$ and $\mathbb{S}^{n+1}$, which is a generalization of the quantitative Alexandrov-Fenchel inequality proved in $\mathbb{R}^{n+1}$ [22]. Then we use this method to derive the stability of some geometric inequalities involving weighted curvature integrals and quermassintegrals for nearly spherical sets in $\mathbb{R}^{n+1}$ and $\mathbb{H}^{n+1}$.
Rong Zhou, Tailong Zhou
2023-06-05T04:13:41Z
http://arxiv.org/abs/2306.02581v2
# Stability of Alexandrov-Fenchel type inequalities for nearly spherical sets in space forms ###### Abstract. In this paper, we first derive a quantitative quermassintegral inequality for nearly spherical sets in \(\mathbb{H}^{n+1}\) and \(\mathbb{S}^{n+1}\), which is a generalization of the quantitative Alexandrov-Fenchel inequality proved in \(\mathbb{R}^{n+1}\) [22]. Then we use this method to derive the stability of some geometric inequalities involving weighted curvature integrals and quermassintegrals for nearly spherical sets in \(\mathbb{R}^{n+1}\) and \(\mathbb{H}^{n+1}\). Key words and phrases: quermassintegrals, weighted curvature integrals, nearly spherical sets, isoperimetric deficit. 2020 Mathematics Subject Classification: 52A40; 53C42 ## 1. Introduction In this paper, we will discuss the stability of Alexandrov-Fenchel inequalities involving quermassintegrals and related weighted curvature integral inequalities for nearly spherical sets in space forms \(\mathbb{N}^{n+1}(K)\) with constant curvature \(K=1,-1,0\). Let \(\Omega\) be a bounded domain in \((\mathbb{N}^{n+1}(K),\overline{g})\), which is star-shaped with respect to the origin \(O\). Denote \(M=\partial\Omega\) and suppose that \(M\) can be parametrized as \(M=\{(\rho(1+u(x)),x):x\in\mathbb{S}^{n}\}\) in polar coordinates, where \(u:\mathbb{S}^{n}\to(-1,+\infty)\) is a \(C^{3}\) function and \(\rho>0\) is a constant. We say that \(M\) is a nearly spherical set if there exists a small constant \(\varepsilon>0\) such that \(\|u\|_{W^{2,\infty}(\mathbb{S}^{n})}<\varepsilon\). Fuglede [7, 8] studied the stability of the isoperimetric inequality for nearly spherical sets in Euclidean space with \(\rho=1\). Let \(\Omega\subset\mathbb{R}^{n+1}\) be a domain with nearly spherical boundary \(\partial\Omega=M=\{(1+u(x),x):x\in\mathbb{S}^{n}\}\), where \(\|u\|_{C^{1}(\mathbb{S}^{n})}<\varepsilon\).
Under the condition \[\mathrm{Vol}(\Omega)=\mathrm{Vol}(B),\ \mathrm{bar}(\Omega)=O, \tag{1.1}\] where \(B\) is the unit ball centered at \(O\) in \(\mathbb{R}^{n+1}\) and \(\mathrm{bar}(\Omega)=\dfrac{1}{\mathrm{Area}(\mathbb{S}^{n})}\int_{\mathbb{S}^{n}}(1+u)^{n+2}x\mathrm{d}A\) is the barycenter of \(\Omega\), he gave a lower bound for the isoperimetric deficit \[\bar{\delta}_{0,-1}(\Omega):=\dfrac{\mathscr{A}_{0}(\Omega)-\mathscr{A}_{0}(B)}{\mathscr{A}_{0}(B)}\] in terms of \(\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}\), which vanishes if and only if \(\Omega\) is the unit ball. Here \(\mathscr{A}_{0}(\cdot)\) is the \(0\)th quermassintegral of a domain, which measures the perimeter of the boundary (see section 2.3 for the definition of the \(k\)th quermassintegrals \(\mathscr{A}_{k}(\cdot),\ k=-1,\cdots,n\)). Specifically, there exists a constant \(C\) independent of \(\Omega\) such that \[\bar{\delta}_{0,-1}(\Omega)\geqslant C\|u\|_{W^{1,2}(\mathbb{S}^{n})}^{2}, \tag{1.2}\] and the sharp quantitative isoperimetric inequality follows. In [5], Cicalese and Leonardi proved a weak version of Fuglede's estimate (1.2) under the same condition (1.1) as Fuglede's. They estimated the upper bound of the Fraenkel asymmetry \(\bar{\alpha}(\Omega)\) in Euclidean space, where \[\bar{\alpha}(\Omega)=\inf\left\{\frac{\operatorname{Vol}\left(\Omega\Delta B_{\rho}(x)\right)}{\operatorname{Vol}(B_{\rho})}:x\in\mathbb{R}^{n+1},\operatorname{Vol}(\Omega)=\operatorname{Vol}(B_{\rho})\right\} \tag{1.3}\] measures the \(L^{1}\)-distance between the set \(\Omega\) and an optimal ball of the same volume. The quantitative isoperimetric inequality in \(\mathbb{R}^{n+1}\) asks if there exists a constant \(C(n)>0\) such that all Borel sets \(\Omega\) with finite measure satisfy \[\bar{\delta}_{0,-1}(\Omega)\geqslant C(n)\bar{\alpha}^{m}(\Omega)\] for some exponent \(m\).
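Fuglede's mechanism can be illustrated numerically in the planar analogue (\(n=1\)): for a curve \(r(\theta)=1+u(\theta)\) with \(u\) orthogonal to the constant and first-harmonic modes, the isoperimetric deficit is bounded below by a multiple of \(\|u\|_{W^{1,2}}^{2}\). The following sketch is purely illustrative; the perturbation \(u=a\cos 2\theta\) and the quadrature grid are our own choices, not from the paper:

```python
import numpy as np

a = 1e-2
theta = np.linspace(0.0, 2.0*np.pi, 4000, endpoint=False)
u = a*np.cos(2.0*theta)               # zero mean and zero first harmonic,
du = -2.0*a*np.sin(2.0*theta)         # so conditions of type (1.1) hold to leading order

r = 1.0 + u
integrate = lambda f: f.mean()*2.0*np.pi          # exact-for-periodic trapezoid rule
area = integrate(0.5*r**2)                        # enclosed area
length = integrate(np.sqrt(r**2 + du**2))         # perimeter
length_ball = 2.0*np.sqrt(np.pi*area)             # circle of the same area

deficit = (length - length_ball)/length_ball      # isoperimetric deficit
w12 = integrate(u**2 + du**2)                     # ||u||_{W^{1,2}}^2
print(f"deficit = {deficit:.3e}, deficit/||u||^2 = {deficit/w12:.3f}")
```

For \(u=a\cos 2\theta\) the leading-order deficit is \(\tfrac{3}{4}a^{2}\) while \(\|u\|_{W^{1,2}}^{2}=5\pi a^{2}\), so the printed ratio stabilizes near \(3/(20\pi)\approx 0.048\), a concrete instance of a Fuglede-type constant.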
If we write \(u=\sum\limits_{k=0}^{\infty}a_{k}Y_{k}\), where \(\{Y_{k}\}_{k=0}^{\infty}\) denotes the spherical harmonics, which form an orthonormal basis for \(L^{2}(\mathbb{S}^{n})\), then (1.1) implies \[a_{0}^{2}=O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2},\ a_{1}^{2}=O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}, \tag{1.4}\] and consequently the isoperimetric deficit \(\overline{\delta}_{0,-1}(\Omega)\) is bounded from below by \(\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}\). With this observation, the authors of [5] used the Fraenkel asymmetry \(\overline{\alpha}(\Omega)\) to characterize the stability of the isoperimetric inequality and proved that \[\bar{\delta}_{0,-1}(\Omega)\geqslant\left(C(n)+O(\varepsilon)\right)\bar{\alpha}^{2}(\Omega). \tag{1.5}\] The method of Fuglede's estimate can also be applied to prove the Minkowski inequality for nearly spherical sets in Euclidean space; interested readers can refer to Glaudo's work [13]. ### Quantitative Alexandrov-Fenchel inequalities in space forms Quantitative isoperimetric inequalities for nearly spherical sets can also be considered in hyperbolic space and on the sphere.
In a general space form \(\mathbb{N}^{n+1}(K),\ K=-1,1\), (1.3) and (1.8) are not invariant under scaling, so now the isoperimetric deficit \(\delta_{0,-1}(\Omega)\) for \(\Omega\) is defined as \[\delta_{0,-1}(\Omega)=\mathscr{A}_{0}(\Omega)-\mathscr{A}_{0}(\overline{B}_{\rho}), \tag{1.6}\] where \(\overline{B}_{\rho}\) is the geodesic ball with radius \(\rho\) centered at \(O\) in \(\mathbb{N}^{n+1}(K)\) (\(K=-1,1\)), and the definition of the Fraenkel asymmetry is adjusted in the following rescaled way: **Definition 1.1** ([1]).: _The Fraenkel asymmetry of \(\Omega\subset\mathbb{N}^{n+1}(K)\) \((K=-1,1)\), denoted by \(\alpha(\Omega)\), is defined as_ \[\alpha(\Omega)=\inf\left\{\operatorname{Vol}\left(\Omega\Delta\overline{B}_{\rho}(x)\right):x\in\mathbb{N}^{n+1}(K),\operatorname{Vol}(\Omega)=\operatorname{Vol}(\overline{B}_{\rho}(x))\right\}, \tag{1.7}\] _where \(\Delta\) is the symmetric difference between two sets._ In [1, 2], Bögelein, Duzaar, Scheven and Fusco generalized Fuglede's estimate to \[\frac{\delta_{0,-1}(\Omega)}{\mathscr{A}_{0}(\overline{B}_{\rho})}\geqslant C\left(\frac{\alpha(\Omega)}{\mathscr{A}_{-1}(\overline{B}_{\rho})}\right)^{2} \tag{1.8}\] for a nearly spherical domain \(\Omega\subset\mathbb{N}^{n+1}(K)\) enclosed by \(M=\{(\rho(1+u(x)),x):x\in\mathbb{S}^{n}\}\) in \(\mathbb{N}^{n+1}(K)\) \((K=-1,1)\). They defined the barycenter of a set in space forms, which is crucial to derive a technical Poincaré-type estimate. **Definition 1.2** (The barycenter of a set in \(\mathbb{N}^{n+1}(K)\) [1, 2]).: _The barycenter of a set \(\Omega\subset\mathbb{N}^{n+1}(K)\), denoted by \(\operatorname{bar}(\Omega)\), is defined as a minimizer \(p\in\mathbb{N}^{n+1}(K)\) of the function_ \[p\mapsto\int_{\Omega}d_{K}^{2}(y,p)\mathrm{d}\mu_{K}(y).
\tag{1.9}\] _where \(d_{K}(y,p)\) represents the geodesic distance between \(y\in\Omega\) and \(p\), and \(\mathrm{d}\mu_{K}(y)\) is the measure with respect to \(\mathbb{N}^{n+1}(K)\)._ In the 1930s, Alexandrov and Fenchel proved the following inequalities for a convex body \(\Omega\) in \(\mathbb{R}^{n+1}\): \[\frac{\mathscr{A}_{k}(\Omega)}{\binom{n}{k}\mathrm{Area}(\mathbb{S}^{n})}\leq\left(\frac{\mathscr{A}_{l}(\Omega)}{\binom{n}{l}\mathrm{Area}(\mathbb{S}^{n})}\right)^{\frac{n-k}{n-l}},\ -1\leq k<l\leq n. \tag{1.10}\] Equality holds if and only if \(\Omega\) is a ball. When \(k=-1,\ l=0\), (1.10) reduces to the classic isoperimetric inequality. Guan and Li [14] proved the Alexandrov-Fenchel inequalities (1.10) for all star-shaped and \(k\)-convex domains in \(\mathbb{R}^{n+1}\), using a curvature flow method. It is conjectured that for a domain \(\Omega\) in a space form \(\mathbb{N}^{n+1}(K),\ K=\pm 1\), satisfying suitable convexity assumptions, the Alexandrov-Fenchel inequalities take the form (see for instance [4]) \[\mathscr{A}_{k}(\Omega)\leq\left(\psi_{k}\circ\psi_{l}^{-1}\right)\left(\mathscr{A}_{l}(\Omega)\right),\ -1\leq k<l\leq n, \tag{1.11}\] where \(\psi_{m}(\rho):=\mathscr{A}_{m}(\bar{B}_{\rho})\). Our work is motivated by VanBlargan and Wang [22], who proved the quantitative Alexandrov-Fenchel inequalities for nearly spherical sets in \(\mathbb{R}^{n+1}\). For \(0\leqslant j<k\), \(1\leqslant k\leqslant n-1\), let \(\Omega\) be enclosed by a nearly spherical set \(M=\{(1+u(x),x):x\in\mathbb{S}^{n}\}\). Under the condition \[\mathscr{A}_{j}(\Omega)=\mathscr{A}_{j}(B),\quad\mathrm{bar}(\Omega)=O, \tag{1.12}\] they proved \[\mathscr{A}_{k}(\Omega)-\mathscr{A}_{k}(B)\geqslant C(n,k,j)\left((1+O(\varepsilon))\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+\left(\frac{1}{2}+O(\varepsilon)\right)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}\right), \tag{1.13}\] where \(C(n,k,j)=\binom{n}{k}\dfrac{(n-k)(k-j)}{2n}\).
Then the \((k,j)\)-isoperimetric deficit \[\bar{\delta}_{k,j}(\Omega)=\frac{\mathscr{A}_{k}(\Omega)-\mathscr{A}_{k}(B_{\Omega,j})}{\mathscr{A}_{k}(B_{\Omega,j})}, \tag{1.14}\] where \(B_{\Omega,j}\) is the ball centered at \(O\) such that \(\mathscr{A}_{j}(\Omega)=\mathscr{A}_{j}(B_{\Omega,j})\), is bounded from below by the Fraenkel asymmetry: \[\bar{\delta}_{k,j}(\Omega)\geqslant\left(\frac{n(n-k)(k-j)}{4(n+1)^{2}}+O(\varepsilon)\right)\overline{\alpha}^{2}(\Omega). \tag{1.15}\] In general space forms, we also consider the rescaled isoperimetric deficit in contrast to (1.14) in Euclidean space. **Definition 1.3** (The \((k,j)\)-isoperimetric deficit).: _Let \(\Omega\) be a domain in \(\mathbb{N}^{n+1}(K)\). For any given \(0\leqslant k\leqslant n\), \(-1\leqslant j<k\), define the \((k,j)\)-isoperimetric deficit for \(\Omega\), denoted by \(\delta_{k,j}(\Omega)\), as_ \[\delta_{k,j}(\Omega)=\mathscr{A}_{k}(\Omega)-\mathscr{A}_{k}(\overline{B}_{\Omega,j}), \tag{1.16}\] _where \(\overline{B}_{\Omega,j}\) is a geodesic ball centered at \(O\) in \(\mathbb{N}^{n+1}(K)\) which satisfies \(\mathscr{A}_{j}(\Omega)=\mathscr{A}_{j}(\overline{B}_{\Omega,j})\)._ The first goal of this paper is to give the quantitative version of the Alexandrov-Fenchel inequalities (1.11) in space forms under the same assumption (1.12) as in [1, 2, 22], as a generalization of (1.15): For \(\Omega\) enclosed by nearly spherical sets \(M\) in hyperbolic space \(\mathbb{H}^{n+1}\) and on the sphere \(\mathbb{S}^{n+1}\), \[\delta_{k,j}(\Omega)\geqslant C\alpha^{2}(\Omega), \tag{1.17}\] where \(C\) is a constant independent of \(\Omega\). We make an effort to derive the expression for \(\mathscr{A}_{k}(\Omega)-\mathscr{A}_{k}(\overline{B}_{\rho})\) \((-1\leqslant k\leqslant n)\). Our result is as follows: **Theorem 1.4**.: _Let \(M=\{(\rho(1+u(x)),x):x\in\mathbb{S}^{n}\}\) be a nearly spherical set in \(\mathbb{N}^{n+1}(K)\) \((K=-1,1)\), where \(u\in C^{3}(\mathbb{S}^{n})\).
\(\Omega\) is a bounded subset which is star-shaped with respect to \(O\) and is enclosed by \(M\). Suppose \(0\leqslant k\leqslant n-1\), \(-1\leqslant j<k\), if both of the following hold_ 1. \(\mathscr{A}_{j}(\Omega)=\mathscr{A}_{j}(\overline{B}_{\rho})\)_,_ 2. \(\mathrm{bar}(\Omega)=O\)_,_ _then for any \(\eta>0\), there exists \(\varepsilon>0\), such that when \(\|u\|_{W^{2,\infty}(\mathbb{S}^{n})}<\varepsilon\), we have_ \[\delta_{k,j}(\Omega)\geqslant\left(\frac{n(n-k)(k-j)}{4\mathrm{ Area}(\mathbb{S}^{n})}\binom{n}{k}\frac{\phi^{\prime k}(\rho)}{\phi^{n+k+2}( \rho)}-\eta\right)\alpha^{2}(\Omega), \tag{1.18}\] _where \(\phi\) is defined in (2.1)._ ### Quantitative weighted quermassintegral inequalities For a smooth, star-shaped and \(k\)-convex (i.e. \(\sigma_{i}(\kappa)\geqslant 0\)\((1\leqslant i\leqslant k)\)) hypersurface in \(\mathbb{R}^{n+1}\), geometric inequalities involving weighted curvature integrals, with \(\Phi\) (a radial function defined in (2.3)) as the weighting factor, have been studied in [12, 19, 24]. More precisely, if \(M\) is such a hypersurface enclosing a bounded domain \(\Omega\), then \[\int_{M}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g}\geqslant C(n,l,k )\mathscr{A}_{l}(\Omega)^{\frac{n+2-k}{n-l}}, \tag{1.19}\] where \(1\leqslant k\leqslant n\), \(-1\leqslant l<k\), and equality holds if and only if \(\Omega\) is a ball. Moreover, if \(M\) is h-convex (i.e. \(\kappa_{i}\geqslant 1\)\((1\leqslant i\leqslant n)\)) in \(\mathbb{H}^{n+1}\), Wei and the second author [24] proved \[\int_{M}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g}\geqslant\xi_{k,l} (\mathscr{A}_{l}(\Omega)), \tag{1.20}\] where \(\xi_{k,l}\) is the unique function such that equality holds in (1.20) when \(\Omega\) is a geodesic ball, \(1\leqslant k\leqslant n\), \(-1\leqslant l<k\) and equality holds if and only if \(\Omega\) is a geodesic ball. Besides \(\Phi\), another interesting weighting factor is \(\phi^{\prime}\) defined in (2.1). 
In [18], the authors compared the following weighted integral of the \(k\)th mean curvature of a smooth h-convex hypersurface to its \(l\)th quermassintegral: \[\int_{M}\phi^{\prime}\sigma_{k}(\kappa)\mathrm{d}\mu_{g}\geqslant\eta_{k,l}(\mathscr{A}_{l}(\Omega)), \tag{1.21}\] where \(\eta_{k,l}\) is the unique function such that equality holds in (1.21) when \(\Omega\) is a geodesic ball, \(1\leqslant k\leqslant n\), \(-1\leqslant l<k\), and equality holds if and only if \(\Omega\) is a geodesic ball. For \(k=1\), the weighted total mean curvature integral is strongly related to the quasilocal mass and the Riemannian Penrose inequality (see for instance [3]), and the corresponding weighted Minkowski inequality was proved in [6] for strictly mean convex and star-shaped hypersurfaces. Relevant studies for the inequality (1.21) can be found in [4, 11]. Our next goal is to derive the quantitative version of the inequalities (1.19) and (1.20) for nearly spherical sets \(M\) in \(\mathbb{R}^{n+1}\) and \(\mathbb{H}^{n+1}\) characterized by \(\alpha^{2}(\Omega)\) respectively, and to obtain the quantitative version of (1.21) for nearly spherical sets in \(\mathbb{H}^{n+1}\). Specifically, it states that for any fixed \(0\leqslant k\leqslant n\), if \(\mathscr{A}_{j}(\Omega)=\mathscr{A}_{j}(\overline{B}_{\rho})\) \((-1\leqslant j<k)\), then \[\int_{M}\Psi(r)\sigma_{k}(\kappa)\mathrm{d}\mu_{g}-\int_{\partial\overline{B}_{\rho}}\Psi(r)\sigma_{k}(\kappa)\mathrm{d}\mu_{g}\geqslant C\alpha^{2}(\Omega),\] where \(\Psi=\Phi\) or \(\phi^{\prime}\), \(\Omega\) is enclosed by \(M\) and \(C\) is a constant independent of \(\Omega\). In the following, we present the stability of (1.19) for nearly spherical sets in \(\mathbb{R}^{n+1}\).
Note that in this case, since the weighted curvature integrals are invariant under rescaling, we only consider the stability in a domain \(\Omega\) which is close to the unit ball \(B\) and characterize the stability by \(\overline{\alpha}^{2}(\Omega)\) defined in (1.3). **Theorem 1.5**.: _Let \(M=\{(1+u(x),x):x\in\mathbb{S}^{n}\}\) be a nearly spherical set in \(\mathbb{R}^{n+1}\), where \(u\in C^{3}(\mathbb{S}^{n})\). \(\Omega\subset\mathbb{R}^{n+1}\) is a bounded subset which is star-shaped with respect to \(O\) and is enclosed by \(M\). Suppose \(0\leqslant k\leqslant n\), \(-1\leqslant j<k\), if both of the following hold_ 1. \(\mathscr{A}_{j}(\Omega)=\mathscr{A}_{j}(B)\)_,_ 2. \(\mathrm{bar}(\Omega)=O\)_,_ _then for any \(\eta>0\), there exists \(\varepsilon>0\), such that when \(\|u\|_{W^{2,\infty}(\mathbb{S}^{n})}<\varepsilon\) holds, we have_ \[\frac{\int_{M}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g}-\int_{\partial B}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g}}{\int_{\partial B}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g}}\geqslant\left[\frac{n\,((n-k+2)(k-j)+2k-2)}{4(n+1)^{2}}-\eta\right]\overline{\alpha}^{2}(\Omega), \tag{1.22}\] _where \(\Phi\) is defined in (2.3)._ We also derive the stability of the inequalities (1.20) and (1.21) for nearly spherical sets in \(\mathbb{H}^{n+1}\). **Theorem 1.6**.: _Let \(M=\{(\rho(1+u(x)),x):x\in\mathbb{S}^{n}\}\) be a nearly spherical set in \(\mathbb{H}^{n+1}\), where \(u\in C^{3}(\mathbb{S}^{n})\). \(\Omega\subset\mathbb{H}^{n+1}\) is a bounded subset which is star-shaped with respect to \(O\) and is enclosed by \(M\). Suppose \(0\leqslant k\leqslant n-1\), \(-1\leqslant j<k\), if both of the following hold_ 1. \(\mathscr{A}_{j}(\Omega)=\mathscr{A}_{j}(\overline{B}_{\rho})\)_,_ 2.
\(\mathrm{bar}(\Omega)=O\)_,_ _then for any \(\eta>0\), there exists \(\varepsilon>0\), such that when \(\|u\|_{W^{2,\infty}(\mathbb{S}^{n})}<\varepsilon\) holds, we have_ \[\int_{M}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g}-\int_{\partial\overline{B}_{\rho}}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g} \tag{1.23}\] \[\geqslant \left(\frac{n(n-k)(k-j)}{4\mathrm{Area}(\mathbb{S}^{n})}\binom{n}{k}\frac{\phi^{\prime k-2}(\rho)\Phi(\rho)}{\phi^{n+k+2}(\rho)}-\eta\right)\alpha^{2}(\Omega),\] _and_ \[\int_{M}\phi^{\prime}\sigma_{k}(\kappa)\mathrm{d}\mu_{g}-\int_{\partial\overline{B}_{\rho}}\phi^{\prime}\sigma_{k}(\kappa)\mathrm{d}\mu_{g} \tag{1.24}\] \[\geqslant \left(\frac{n(n-k)(k-j)}{4\mathrm{Area}(\mathbb{S}^{n})}\binom{n}{k}\frac{\phi^{\prime k+1}(\rho)}{\phi^{n+k+2}(\rho)}-\eta\right)\alpha^{2}(\Omega),\] _where \(\phi(r)\) and \(\Phi(r)\) are defined in (2.1) and (2.3), respectively._ We should mention that curvature flow methods can be used to establish the quantitative quermassintegral inequalities in \(\mathbb{N}^{n+1}(K)\). Using the inverse curvature flow, VanBlargan and Wang [23] gave the inequality (1.13) a new proof; Scheuer [20] established a quantitative quermassintegral inequality for closed and star-shaped \(C^{2}\)-hypersurfaces in \(\mathbb{N}^{n+1}(K)\). For the study of other weighted geometric inequalities in quantitative form, we refer to [9, 10]. The paper is organized as follows. In Section 2, we present some preliminaries for nearly spherical sets in space forms and some useful identities about symmetric polynomials. In Section 3, we derive an explicit expression for \(\delta_{k,j}(\Omega)\) and the weighted curvature integrals \(\int_{M}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g}\) \((0\leqslant k\leqslant n)\) and \(\int_{M}\phi^{\prime}\sigma_{k}(\kappa)\mathrm{d}\mu_{g}\) \((0\leqslant k\leqslant n)\) under the condition \(\|u\|_{W^{2,\infty}(\mathbb{S}^{n})}<\varepsilon\).
In Section 4, we study the quantitative quermassintegral inequality and prove the first main result, Theorem 1.4. In Section 5, we discuss the weighted quermassintegral inequalities and prove Theorem 1.5 and Theorem 1.6. ## 2. Preliminaries Here we collect some properties of nearly spherical sets parametrized by a radial function, elementary symmetric polynomials and quermassintegrals for nearly spherical sets. ### Nearly spherical sets in space forms Let \((\mathbb{N}^{n+1}(K),\overline{g})\) (\(n\geqslant 2\)) be a space form of dimension \(n+1\) with sectional curvature \(K=0,1,-1\). Its warped product metric is \(\overline{g}=\mathrm{d}r^{2}+\phi^{2}(r)g_{\mathbb{S}^{n}}\), where \(r\) is the radial distance to the origin and \[\phi(r)=\left\{\begin{array}{ll}r,&r\in[0,+\infty),\ \ K=0,\\ \sin r,&r\in[0,\dfrac{\pi}{2}),\ \ \ \ K=1,\\ \sinh r,&r\in[0,+\infty),\ \ K=-1.\end{array}\right. \tag{2.1}\] Then we have \[(\phi^{\prime})^{2}+K\phi^{2}=1,\ \phi^{\prime\prime}=-K\phi. \tag{2.2}\] Define the radial function \(\Phi(r)=\int_{0}^{r}\phi(s)\mathrm{d}s\) on \(\mathbb{N}^{n+1}(K)\), that is \[\Phi(r)=\left\{\begin{array}{ll}\dfrac{1}{2}r^{2},&r\in[0,+\infty),\ \ K=0,\\ 1-\cos r,&r\in[0,\dfrac{\pi}{2}),\ \ \ \ K=1,\\ \cosh r-1,&r\in[0,+\infty),\ \ K=-1.\end{array}\right. \tag{2.3}\] It is well known that \(V=\nabla^{\mathbb{N}^{n+1}(K)}\Phi(r)=\phi(r)\dfrac{\partial}{\partial r}\) is a conformal Killing vector field. We refer to [16] for more details. Let \(\Omega\) be a bounded domain in \(\mathbb{N}^{n+1}(K)\) (\(K=-1,1\)) which is star-shaped with respect to the origin \(O\) and is enclosed by \(M\). Suppose that \(M=\{(\rho(1+u(x)),x):x\in\mathbb{S}^{n}\}\), where \(u:\mathbb{S}^{n}\rightarrow(-1,+\infty)\) is a \(C^{3}\) function satisfying \(\|u\|_{W^{2,\infty}(\mathbb{S}^{n})}<\varepsilon\). We present some geometric quantities for the hypersurface \(M\) under this radial parametrization.
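As a sanity check of the definitions (2.1)-(2.3), the identities (2.2) and \(\Phi^{\prime}=\phi\) can be verified symbolically for all three curvatures; the snippet below is only such a check, not part of the paper's argument:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
# K -> (phi, Phi) as in (2.1) and (2.3)
cases = {0: (r, r**2/2), 1: (sp.sin(r), 1 - sp.cos(r)), -1: (sp.sinh(r), sp.cosh(r) - 1)}

for K, (phi, Phi) in cases.items():
    assert sp.simplify(sp.diff(phi, r)**2 + K*phi**2 - 1) == 0   # (phi')^2 + K phi^2 = 1
    assert sp.simplify(sp.diff(phi, r, 2) + K*phi) == 0          # phi'' = -K phi
    assert sp.simplify(sp.diff(Phi, r) - phi) == 0               # Phi' = phi
print("identities (2.2) and Phi' = phi hold for K = 0, 1, -1")
```

These relations are used repeatedly below to trade \(\phi^{\prime\prime}\) and \((\phi^{\prime})^{2}\) for expressions in \(\phi\).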
In geodesic polar coordinates of \(\mathbb{N}^{n+1}(K)\), denote \(\left\{\dfrac{\partial}{\partial\theta_{1}},\frac{\partial}{\partial\theta_{2}},\cdots,\frac{\partial}{\partial\theta_{n}},\frac{\partial}{\partial r}\right\}\) as the tangent basis and \(s_{ij}\) as the canonical metric on the unit sphere \(\mathbb{S}^{n}\). Then \[\langle\dfrac{\partial}{\partial\theta_{i}},\dfrac{\partial}{\partial r}\rangle=0,\ \ \langle\dfrac{\partial}{\partial r},\dfrac{\partial}{\partial r}\rangle=1,\ \ \langle\dfrac{\partial}{\partial\theta_{i}},\dfrac{\partial}{\partial\theta_{j}}\rangle=\phi^{2}(r)s_{ij}. \tag{2.4}\] We also write \(u_{i}=\dfrac{\partial u}{\partial\theta_{i}}\); then \(\{e_{i}=\dfrac{\partial}{\partial\theta_{i}}+\rho u_{i}\dfrac{\partial}{\partial r}:i=1,2,\cdots,n\}\) form a tangent basis of \(M\). Let \(g_{ij}\), \(g^{ij}\), \(N\), \(h_{ij}\) denote the induced metric, the inverse metric matrix, the outward unit normal vector and the second fundamental form corresponding to \(M\) respectively. For convenience, we denote \[\phi=\phi(r),\ \phi^{\prime}=\phi^{\prime}(r),\ r=\rho(1+u), \tag{2.5}\] and \[D=\sqrt{\phi^{2}+\rho^{2}|\nabla u|^{2}}, \tag{2.6}\] where \(\nabla\) is the Levi-Civita connection on \(\mathbb{S}^{n}\). Then the induced metric of \(M\) is \[g_{ij}=\rho^{2}u_{i}u_{j}+\phi^{2}s_{ij}. \tag{2.7}\] Thus the area element \(\mathrm{d}\mu_{g}\) corresponding to the induced metric \(g\) is \[\mathrm{d}\mu_{g}=\sqrt{\det(g_{ij})}\mathrm{d}A=\phi^{n-1}D\mathrm{d}A, \tag{2.8}\] the inverse of \((g_{ij})\) is \[g^{ij}=\dfrac{s^{ij}}{\phi^{2}}-\dfrac{1}{\phi^{2}}\cdot\dfrac{\rho^{2}u_{k}u_{l}s^{ik}s^{jl}}{D^{2}}, \tag{2.9}\] and the outward unit normal vector corresponding to \(M\) is \[N=\frac{-\rho s^{ij}u_{i}\frac{\partial}{\partial\theta_{j}}+\phi^{2}\frac{\partial}{\partial r}}{\phi D}.
\tag{2.10}\] Since \(h_{ij}=-\langle\nabla_{e_{i}}^{\mathbb{N}^{n+1}(K)}e_{j},N\rangle\), the second fundamental form of \(M\) is \[h_{ij}=\frac{1}{D}\left[2\phi^{\prime}\rho^{2}u_{i}u_{j}+\phi^{2}\phi^{\prime}s_{ij}-\phi\rho u_{ij}\right], \tag{2.11}\] and the Weingarten tensor \(h_{j}^{i}=g^{ik}h_{kj}\) is \[h_{j}^{i}=\frac{\phi^{\prime}\delta_{j}^{i}}{D}-\frac{\rho u_{j}^{i}}{D\phi}+\frac{\phi^{\prime}\rho^{2}u^{i}u_{j}}{D^{3}}+\frac{\rho^{3}u^{i}u_{k}u_{j}^{k}}{D^{3}\phi}, \tag{2.12}\] where \(|\nabla u|^{2}=s^{ij}u_{i}u_{j}\). We also write \(u^{i}=s^{ij}u_{j}\), \(u_{j}^{k}=s^{ki}u_{ij}\). ### Properties of elementary symmetric functions Here we present some properties of elementary symmetric polynomials (see e.g. [15]). For \(\lambda=(\lambda_{1},\lambda_{2},\cdots,\lambda_{n})\in\mathbb{R}^{n}\), we denote \[\sigma_{k}(\lambda)=\sum_{1\leqslant i_{1}<i_{2}<\cdots<i_{k}\leqslant n}\lambda_{i_{1}}\lambda_{i_{2}}\cdots\lambda_{i_{k}} \tag{2.13}\] as the \(k\)th elementary symmetric polynomial of \(\lambda\in\mathbb{R}^{n}\) when \(1\leqslant k\leqslant n\), and have the convention that \(\sigma_{0}(\lambda)=1\). Generally, let \(A=(A_{j}^{i})_{n\times n}\) be a symmetric matrix, then for all \(1\leqslant k\leqslant n\), \[\sigma_{k}(A)=\frac{1}{k!}\delta_{i_{1}i_{2}\cdots i_{k}}^{j_{1}j_{2}\cdots j_{k}}A_{j_{1}}^{i_{1}}A_{j_{2}}^{i_{2}}\cdots A_{j_{k}}^{i_{k}}. \tag{2.14}\] Besides, we set \(\sigma_{0}(A)=1\). Let \(A_{1},\cdots,A_{k}\) be \(n\times n\) symmetric matrices; the \(k\)th Newton operator (\(1\leqslant k\leqslant n\)) for them is defined as \[[T_{k}]_{i}^{j}(A_{1},A_{2},\cdots,A_{k})=\frac{1}{k!}\delta_{ii_{1}i_{2}\cdots i_{k}}^{jj_{1}j_{2}\cdots j_{k}}(A_{1})_{j_{1}}^{i_{1}}(A_{2})_{j_{2}}^{i_{2}}\cdots(A_{k})_{j_{k}}^{i_{k}}. \tag{2.15}\] We also have the convention that \([T_{0}]_{i}^{j}=\delta_{i}^{j}\).
In particular, when \(A_{1}=A_{2}=\cdots=A_{k}=A\), we briefly denote \[[T_{k}]_{i}^{j}(A)=[T_{k}]_{i}^{j}(\underbrace{A,A,\cdots,A}_{k}). \tag{2.16}\] Since \([T_{k}]_{i}^{j}(A)=\frac{\partial\sigma_{k}(A)}{\partial A_{j}^{i}}\), it is well known that \[A_{s}^{j}[T_{m}]_{j}^{i}(A)=\delta_{s}^{i}\sigma_{m+1}(A)-[T_{m+1}]_{s}^{i}(A). \tag{2.17}\] By using the anti-symmetry property of the Kronecker delta \(\delta_{i_{1}i_{2}\cdots i_{k}}^{j_{1}j_{2}\cdots j_{k}}\), we have the following Lemma: **Lemma 2.1** ([22], Lemma 3.2).: _Suppose that \(w,v_{1},v_{2}\) are column vectors in \(\mathbb{R}^{n}\), then there holds_ \[\frac{1}{(k-1)!}\delta_{i_{1}i_{2}\cdots i_{k}}^{j_{1}j_{2}\cdots j_{k}}(wv_{1}^{t})_{j_{1}}^{i_{1}}(wv_{2}^{t})_{j_{2}}^{i_{2}}(A_{1})_{j_{3}}^{i_{3}}\cdots(A_{k-2})_{j_{k}}^{i_{k}}=0. \tag{2.18}\] ### Quermassintegrals and weighted curvature integrals of nearly spherical sets In a space form \(\mathbb{N}^{n+1}(K)\), the quermassintegrals of a compact convex domain \(\Omega\) are defined as (see [21]): \[\mathscr{A}_{k}(\Omega)=(n-k)\binom{n}{k}\frac{\omega_{k}\cdots\omega_{0}}{\omega_{n-1}\cdots\omega_{n-k-1}}\int_{\mathcal{L}_{k+1}}\chi(L_{k+1}\cap\Omega)\mathrm{d}L_{k+1} \tag{2.19}\] for \(k=0,1,\cdots,n-1\), where \(\omega_{k}=|\mathbb{S}^{k}|\) denotes the area of the \(k\)-dimensional unit sphere, \(\mathcal{L}_{k+1}\) is the space of \((k+1)\)-dimensional totally geodesic subspaces \(L_{k+1}\) in \(\mathbb{N}^{n+1}(K)\), and the function \(\chi\) is defined to be \(1\) if \(L_{k+1}\cap\Omega\neq\varnothing\) and to be \(0\) otherwise. In particular, we have \[\mathscr{A}_{-1}(\Omega)=\mathrm{Vol}(\Omega),\ \mathscr{A}_{0}(\Omega)=|\partial\Omega|,\ \mathscr{A}_{n}(\Omega)=\frac{\omega_{n}}{n+1}. \tag{2.20}\] For a domain \(\Omega\) with \(C^{2}\) boundary, we denote \(\kappa=(\kappa_{1},\kappa_{2},\cdots,\kappa_{n})\) as the principal curvature vector of the hypersurface \(M\).
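The symmetric-function formulas (2.14)-(2.17) are easy to test numerically before they are put to work in Section 3. The sketch below evaluates the generalized Kronecker delta by brute force over index tuples (our own helper, practical only for small \(n\)), recovers \(\sigma_{k}\) from (2.14), and checks the identity (2.17):

```python
import itertools
import math
import numpy as np

def gdelta(upper, lower):
    """Generalized Kronecker delta: sign of the permutation lower -> upper."""
    if len(set(upper)) < len(upper) or sorted(upper) != sorted(lower):
        return 0
    perm = [lower.index(j) for j in upper]
    inv = sum(perm[a] > perm[b] for a in range(len(perm)) for b in range(a + 1, len(perm)))
    return (-1)**inv

def sigma_delta(A, k):
    """sigma_k(A) via (2.14)."""
    n = A.shape[0]
    tot = 0.0
    for ii in itertools.product(range(n), repeat=k):
        for jj in itertools.product(range(n), repeat=k):
            d = gdelta(jj, ii)
            if d:
                tot += d*math.prod(A[ii[t], jj[t]] for t in range(k))
    return tot/math.factorial(k)

def newton_delta(A, k):
    """[T_k]_i^j(A) via (2.15)-(2.16), returned as N[i, j] = [T_k]_i^j."""
    n = A.shape[0]
    N = np.zeros((n, n))
    for i, j in itertools.product(range(n), repeat=2):
        for ii in itertools.product(range(n), repeat=k):
            for jj in itertools.product(range(n), repeat=k):
                d = gdelta((j,) + jj, (i,) + ii)
                if d:
                    N[i, j] += d*math.prod(A[ii[t], jj[t]] for t in range(k))
    return N/math.factorial(k)

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)); A = (A + A.T)/2
ev = np.linalg.eigvalsh(A)
for k in range(1, 4):
    ref = sum(math.prod(c) for c in itertools.combinations(ev, k))  # (2.13)
    assert abs(sigma_delta(A, k) - ref) < 1e-10
for m in range(3):
    # (2.17): A_s^j [T_m]_j^i = delta_s^i sigma_{m+1} - [T_{m+1}]_s^i
    lhs = A @ newton_delta(A, m)
    rhs = sigma_delta(A, m + 1)*np.eye(3) - newton_delta(A, m + 1)
    assert np.allclose(lhs, rhs)
print("(2.14) and (2.17) verified for a random symmetric 3x3 matrix")
```

The \(m=2\) case of the loop is Cayley-Hamilton in disguise: \([T_{3}]\) vanishes for \(n=3\), so \(A[T_{2}](A)=\sigma_{3}I\).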
Let \(\mathrm{d}\mu_{g}\) be the area element of \(M\) and \(\mathrm{d}A\) the area element of \(\mathbb{S}^{n}\); then the quermassintegrals can be defined (see [17]) and calculated as follows: \[\mathscr{A}_{-1}(\Omega)=\mathrm{Vol}(\Omega)=\int_{\mathbb{S}^{n}}\left(\int_{0}^{\rho(1+u)}\phi^{n}(r)\mathrm{d}r\right)\mathrm{d}A, \tag{2.21}\] \[\mathscr{A}_{0}(\Omega)=\int_{M}1\,\mathrm{d}\mu_{g}=\int_{\mathbb{S}^{n}}\phi^{n-1}(\rho(1+u))\sqrt{\phi^{2}(\rho(1+u))+\rho^{2}|\nabla u|^{2}}\,\mathrm{d}A, \tag{2.22}\] \[\mathscr{A}_{1}(\Omega)=\int_{M}\sigma_{1}(\kappa)\mathrm{d}\mu_{g}+Kn\mathrm{Vol}(\Omega), \tag{2.23}\] \[\mathscr{A}_{k}(\Omega)=\int_{M}\sigma_{k}(\kappa)\mathrm{d}\mu_{g}+K\frac{n-k+1}{k-1}\mathscr{A}_{k-2}(\Omega)\ (2\leqslant k\leqslant n). \tag{2.24}\] The weighted curvature integrals for \(\Omega\) enclosed by a nearly spherical set \(M=\{(\rho(1+u(x)),x):x\in\mathbb{S}^{n}\}\) are defined as \(\int_{M}\Psi(r)\sigma_{k}(\kappa)\mathrm{d}\mu_{g}\) (\(0\leqslant k\leqslant n\)), where \(\Psi\) equals \(\Phi\) or \(\phi^{\prime}\), defined in (2.3) and (2.1) respectively. ## 3. Derivation of the \((k,j)\)-isoperimetric deficit In this section, we aim to derive the specific expression for \(\mathscr{A}_{k}(\Omega)-\mathscr{A}_{k}(\overline{B}_{\rho})\) (\(-1\leqslant k\leqslant n\)) in terms of \(u\) and its derivatives. Firstly, we use (2.14) to calculate \(\sigma_{k}(\kappa)\) (\(1\leqslant k\leqslant n\)) of \(M\). Then, under the condition \(\|u\|_{W^{2,\infty}(\mathbb{S}^{n})}<\varepsilon\), we will compute the curvature integrals \(\int_{M}\sigma_{k}(\kappa)\mathrm{d}\mu_{g}\) (\(0\leqslant k\leqslant n\)) and the weighted curvature integrals \(\int_{M}\Psi\sigma_{k}(\kappa)\mathrm{d}\mu_{g}\) (\(0\leqslant k\leqslant n\)). Finally, one can use an iteration process to obtain the asymptotic expression for \(\mathscr{A}_{k}(\Omega)-\mathscr{A}_{k}(\overline{B}_{\rho})\) (\(-1\leqslant k\leqslant n\)).
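As a quick numerical check of (2.21)-(2.22), taking \(u\equiv 0\) in \(\mathbb{H}^{3}\) (\(n=2\), \(\phi=\sinh\)) must reproduce the closed-form volume and boundary area of a geodesic ball; the quadrature below is illustrative only:

```python
import numpy as np

rho, n = 0.8, 2                      # geodesic ball in H^3 (phi = sinh)
area_sn = 4.0*np.pi                  # Area(S^2)

# (2.21) with u = 0: Vol = Area(S^n) * int_0^rho phi^n(r) dr
r = np.linspace(0.0, rho, 20001)
vol = area_sn*np.trapz(np.sinh(r)**n, r)

# (2.22) with u = 0: A_0 = Area(S^n) * phi^n(rho)
a0 = area_sn*np.sinh(rho)**n

# closed forms in H^3: Vol = pi (sinh 2rho - 2rho), A_0 = 4 pi sinh^2(rho)
vol_exact = np.pi*(np.sinh(2.0*rho) - 2.0*rho)
a0_exact = 4.0*np.pi*np.sinh(rho)**2
print(vol, vol_exact, a0, a0_exact)
```

With a nonzero \(u\) the same two formulas require a quadrature over \(\mathbb{S}^{n}\) as well, but the radial structure is unchanged.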
### Computation of \(\sigma_{k}(\kappa)\) Using the properties of elementary symmetric polynomials that have been discussed in Section 2.2, we can calculate \(\sigma_{k}(\kappa)\) (\(1\leqslant k\leqslant n\)) as follows: **Lemma 3.1** (Expression for \(\sigma_{k}(\kappa)\) (\(1\leqslant k\leqslant n\))).: _Let \(\Omega\subset\mathbb{N}^{n+1}(K)\) (\(K=-1,1\)), \(M=\partial\Omega=\{(\rho(1+u(x)),x):x\in\mathbb{S}^{n}\}\), where \(u\in C^{2}(\mathbb{S}^{n})\). Denote_ \[\phi:=\phi(\rho(1+u)),\ \phi^{\prime}:=\phi^{\prime}(\rho(1+u)),\ D=\sqrt{\phi^{2}+ \rho^{2}|\nabla u|^{2}},\] _then for any \(1\leqslant k\leqslant n\), there holds_ \[\sigma_{k}(\kappa)=\sum_{m=0}^{k}\frac{(-1)^{m}\phi^{\prime k-m}}{D^{k+2}\phi ^{m}}\binom{n-m}{k-m}\rho^{m}\left[\phi^{2}\sigma_{m}(D^{2}u)+\frac{k+n-2m}{ n-m}\rho^{2}u^{i}u_{j}[T_{m}]_{i}^{j}(D^{2}u)\right]. \tag{3.1}\] Proof.: Note that \[\sigma_{k}(h^{i}_{j}) = \frac{1}{k!}\delta^{j_{1}j_{2}\cdots j_{k}}_{i_{1}i_{2}\cdots i_{k} }\left(\frac{\phi^{\prime}\delta^{i_{1}}_{j_{1}}}{D}-\frac{\rho u^{i_{1}}_{j_{1} }}{D\phi}+\frac{\phi^{\prime}\rho^{2}u^{i_{1}}u_{j_{1}}}{D^{3}}+\frac{\rho^{3} u^{i_{1}}u_{p}u^{p}_{j_{1}}}{D^{3}\phi}\right) \tag{3.2}\] \[\cdots\left(\frac{\phi^{\prime}\delta^{i_{k}}_{j_{k}}}{D}-\frac{ \rho u^{i_{k}}_{j_{k}}}{D\phi}+\frac{\phi^{\prime}\rho^{2}u^{i_{k}}u_{j_{k}}}{ D^{3}}+\frac{\rho^{3}u^{i_{k}}u_{p}u^{p}_{j_{k}}}{D^{3}\phi}\right).\] By (2.18), we know the term \(\frac{\phi^{\prime}\rho^{2}u^{i}u_{j}}{D^{3}}\) or \(\frac{\rho^{3}u^{i}u_{p}u^{p}_{j}}{D^{3}\phi}\) occurs at most once in each sum. Thus we calculate (3.2) in three cases. (1) \(m\geqslant 0\) times \(-\frac{\rho u^{i}_{j}}{D\phi}\), others are \(\frac{\phi^{\prime}\delta^{i}_{j}}{D}\). 
Note that \[\delta^{j_{1}j_{2}\cdots j_{m}j_{m+1}\cdots j_{k}}_{i_{1}i_{2}\cdots i_{m}i_{m+1}\cdots i_{k}}\delta^{i_{m+1}}_{j_{m+1}}\cdots\delta^{i_{k}}_{j_{k}}=\delta^{j_{1}j_{2}\cdots j_{m}}_{i_{1}i_{2}\cdots i_{m}}\binom{n-m}{k-m}(k-m)!, \tag{3.3}\] the sum is \[A_{m}=\binom{k}{m}\frac{1}{k!}\delta^{j_{1}j_{2}\cdots j_{k}}_{i_{1}i_{2}\cdots i_{k}}\frac{-\rho u^{i_{1}}_{j_{1}}}{D\phi}\cdots\frac{-\rho u^{i_{m}}_{j_{m}}}{D\phi}\frac{\phi^{\prime}\delta^{i_{m+1}}_{j_{m+1}}}{D}\cdots\frac{\phi^{\prime}\delta^{i_{k}}_{j_{k}}}{D}=\frac{(-1)^{m}\rho^{m}\phi^{\prime k-m}}{D^{k}\phi^{m}}\binom{n-m}{k-m}\sigma_{m}(D^{2}u). \tag{3.4}\] (2) Once \(\dfrac{\phi^{\prime}\rho^{2}u^{i}u_{j}}{D^{3}}\), \(m\geqslant 0\) times \(-\dfrac{\rho u^{i}_{j}}{D\phi}\), others are \(\dfrac{\phi^{\prime}\delta^{i}_{j}}{D}\). Note that \[\delta^{j_{1}j_{2}\cdots j_{k}}_{i_{1}i_{2}\cdots i_{k}}\delta^{i_{m+2}}_{j_{m+2}}\cdots\delta^{i_{k}}_{j_{k}}=\delta^{j_{1}j_{2}\cdots j_{m+1}}_{i_{1}i_{2}\cdots i_{m+1}}\binom{n-m-1}{k-m-1}(k-m-1)!, \tag{3.5}\] the sum is \[B_{m}=k\binom{k-1}{m}\frac{1}{k!}\delta^{j_{1}j_{2}\cdots j_{k}}_{i_{1}i_{2}\cdots i_{k}}\frac{\phi^{\prime}\rho^{2}u^{i_{1}}u_{j_{1}}}{D^{3}}\frac{-\rho u^{i_{2}}_{j_{2}}}{D\phi}\cdots\frac{-\rho u^{i_{m+1}}_{j_{m+1}}}{D\phi}\frac{\phi^{\prime}\delta^{i_{m+2}}_{j_{m+2}}}{D}\cdots\frac{\phi^{\prime}\delta^{i_{k}}_{j_{k}}}{D}=\frac{(-1)^{m}\rho^{m+2}\phi^{\prime k-m}}{D^{k+2}\phi^{m}}\binom{n-m-1}{k-m-1}u^{i}u_{j}[T_{m}]_{i}^{j}(D^{2}u). \tag{3.6}\] (3) Once \(\dfrac{\rho^{3}u^{i}u_{k}u_{j}^{k}}{D^{3}\phi}\), \(m\geqslant 0\) times \(-\dfrac{\rho u_{j}^{i}}{D\phi}\), others are \(\dfrac{\phi^{\prime}\delta_{j}^{i}}{D}\). 
By using (3.5) again, the sum is \[C_{m}= k\binom{k-1}{m}\dfrac{1}{k!}\delta_{i_{1}i_{2}\cdots i_{k}}^{j_{1}j_{2 }\cdots j_{k}}\dfrac{\rho^{3}u^{i_{1}}u_{p}u_{j_{1}}^{p}}{D^{3}\phi}\dfrac{- \rho u_{j_{2}}^{i_{2}}}{D\phi}\cdots\dfrac{-\rho u_{j_{m+1}}^{i_{m+1}}}{D\phi} \dfrac{\phi^{\prime}\delta_{j_{m+2}}^{i_{m+2}}}{D}\cdots\dfrac{\phi^{\prime} \delta_{j_{k}}^{i_{k}}}{D}\] \[= \binom{k-1}{m}\dfrac{(-1)^{m}\rho^{m+3}\phi^{\prime k-m-1}}{D^{k+2 }\phi^{m+1}}\dfrac{1}{(k-1)!}\delta_{i_{1}i_{2}\cdots i_{k}}^{j_{1}j_{2}\cdots j _{k}}u^{i_{1}}u_{p}u_{j_{1}}^{p}u_{j_{2}}^{i_{2}}\cdots u_{j_{m+1}}^{i_{m+1}} \delta_{j_{m+2}}^{i_{m+2}}\cdots\delta_{j_{k}}^{i_{k}}\] \[= \binom{k-1}{m}\dfrac{(-1)^{m}\rho^{m+3}(\phi^{\prime})^{k-m-1}}{D ^{k+2}\phi^{m+1}}\dfrac{1}{(k-1)!}\delta_{i_{1}i_{2}\cdots i_{m+1}}^{j_{1}j_{2} \cdots j_{m+1}}u^{i_{1}}u_{p}u_{j_{1}}^{p}u_{j_{2}}^{i_{2}}\cdots u_{j_{m+1}}^ {i_{m+1}} \tag{3.7}\] \[\cdot\binom{n-m-1}{k-m-1}(k-m-1)!\] \[= \dfrac{(-1)^{m}\rho^{m+3}\phi^{\prime k-m-1}}{D^{k+2}\phi^{m+1}} \binom{n-m-1}{k-m-1}u^{i}u_{p}u_{j}^{p}[T_{m}]_{i}^{j}(D^{2}u).\] (4) Other cases are all \(0\). Hence \[\sigma_{k}(h_{j}^{i}) = \sum_{m=0}^{k}A_{m}+B_{m}+C_{m}\] \[= \sum_{m=0}^{k}\dfrac{(-1)^{m}\rho^{m}\phi^{\prime k-m}}{D^{k+2} \phi^{m}}\binom{n-m}{k-m}\left[\phi^{2}\sigma_{m}(D^{2}u)+\rho^{2}|\nabla u|^{ 2}\sigma_{m}(D^{2}u)\right]\] \[+\sum_{m=0}^{k}\dfrac{(-1)^{m}\rho^{m+2}\phi^{\prime k-m}}{D^{k+2 }\phi^{m}}\binom{n-m}{k-m}\dfrac{k-m}{n-m}u^{i}u_{j}[T_{m}]_{i}^{j}(D^{2}u)\] \[+\sum_{m=1}^{k}\dfrac{(-1)^{m-1}\rho^{m+2}\phi^{\prime k-m}}{D^{k +2}\phi^{m}}\binom{n-m}{k-m}u^{i}u_{p}u_{j}^{p}[T_{m-1}]_{i}^{j}(D^{2}u).\] Using (2.17), we get the conclusion (3.1). 
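The contraction identity (3.3) used in the proof above can be sanity-checked numerically for small ranks by realizing the generalized Kronecker delta as a determinant of ordinary deltas. The following sketch (the helper names `gen_delta` and `contracted` are ours, not the paper's) verifies it for \(n=k=3\), \(m=1\):

```python
# Numerical check of the contraction identity (3.3) for the generalized
# Kronecker delta, over indices in {0,...,n-1} (a sketch; names are ours).
import itertools
import math
import numpy as np

def gen_delta(J, I):
    """Generalized Kronecker delta = det of the matrix of ordinary deltas."""
    k = len(J)
    if k == 0:
        return 1
    M = np.array([[1.0 if J[a] == I[b] else 0.0 for b in range(k)] for a in range(k)])
    return round(np.linalg.det(M))

def contracted(J, I, n, extra):
    """Sum gen_delta over `extra` pairs of repeated indices, as on the LHS of (3.3)."""
    total = 0
    for p in itertools.product(range(n), repeat=extra):
        total += gen_delta(tuple(J) + p, tuple(I) + p)
    return total

n, k, m = 3, 3, 1
factor = math.comb(n - m, k - m) * math.factorial(k - m)   # RHS factor in (3.3)
for J in itertools.product(range(n), repeat=m):
    for I in itertools.product(range(n), repeat=m):
        assert contracted(J, I, n, k - m) == factor * gen_delta(J, I)
print("identity (3.3) verified for n=3, k=3, m=1")
```

The same harness also exhibits the antisymmetry behind (2.18): a repeated upper or lower index produces two equal rows or columns, so the delta vanishes.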
### Computation of curvature integrals and weighted curvature integrals **Lemma 3.2** (Expression for \(\int_{M}\sigma_{k}(\kappa)\mathrm{d}\mu_{g}\)\((0\leqslant k\leqslant n)\)).: _Let \(\Omega\subset\mathbb{N}^{n+1}(K)\)\((K=-1,1)\), \(M=\partial\Omega=\{(\rho(1+u(x)),x):x\in\mathbb{S}^{n}\}\), where \(u\in C^{3}(\mathbb{S}^{n})\). Suppose \(\|u\|_{W^{2,\infty}(\mathbb{S}^{n})}<\varepsilon\), then for any \(0\leqslant k\leqslant n\), there holds_ \[\int_{M}\sigma_{k}(\kappa)\mathrm{d}\mu_{g}\] \[= \int_{\mathbb{S}^{n}}\binom{n}{k}\phi^{n-k}(\rho)\phi^{\prime k}( \rho)\mathrm{d}A\] \[+\int_{\mathbb{S}^{n}}\binom{n}{k}\left[(n-k)\phi^{n-k-1}(\rho) \phi^{\prime k+1}(\rho)-Kk\phi^{n-k+1}(\rho)\phi^{\prime k-1}(\rho)\right]\rho u \mathrm{d}A\] \[+\int_{\mathbb{S}^{n}}\binom{n}{k}\left[\frac{(n-k)(n-k-1)}{2} \phi^{n-k-2}(\rho)\phi^{\prime k+2}(\rho)\right.\] \[\left.+K(k^{2}-kn-\frac{n}{2})\phi^{n-k}(\rho)\phi^{\prime k}(\rho)\right.\] \[\left.+\left.\frac{k(k-1)}{2}\phi^{n-k+2}(\rho)\phi^{\prime k-2}( \rho)\right]\rho^{2}u^{2}\mathrm{d}A\] \[+O(\varepsilon)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}\ \ (m\geqslant 2), \tag{3.15}\] \[\int_{\mathbb{S}^{n}}u\sigma_{m}(D^{2}u)\mathrm{d}A = -\frac{m+1}{2m}\int_{\mathbb{S}^{n}}|\nabla u|^{2}\sigma_{m-1}(D^{2 }u)\mathrm{d}A \tag{3.16}\] \[+O(\varepsilon)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}\ \ (m\geqslant 1),\] \[\int_{\mathbb{S}^{n}}u^{2}\sigma_{m}(D^{2}u)\mathrm{d}A = O(\varepsilon)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}\ \ (m\geqslant 1), \tag{3.17}\] we can rewrite (3.9) as \[\int_{M}\sigma_{k}(\kappa)\mathrm{d}\mu_{g} \tag{3.19}\] \[= \int_{\mathbb{S}^{n}}\left[A_{0}^{0}+A_{1}^{0}u+A_{2}^{0}u^{2}+(A ^{0}+B^{0})|\nabla u|^{2}\right]\mathrm{d}A\] \[+\int_{\mathbb{S}^{n}}\left[-A_{1}^{1}|\nabla u|^{2}+(A^{1}+ \frac{3}{2}B^{1})|\nabla u|^{2}\sigma_{1}(D^{2}u)\right]\mathrm{d}A\] \[+\int_{\mathbb{S}^{n}}\sum_{m=2}^{k}\left[A_{0}^{m}\frac{n-m+1}{2 }|\nabla u|^{2}\sigma_{m-2}(D^{2}u)-A_{1}^{m}\frac{m+1}{2m}|\nabla u|^{2} 
\sigma_{m-1}(D^{2}u)\right.\] \[\qquad\qquad\left.+A^{m}|\nabla u|^{2}\sigma_{m}(D^{2}u)+B^{m} \frac{m+2}{2}|\nabla u|^{2}\sigma_{m}(D^{2}u)\right]\mathrm{d}A\] \[+O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon) \|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}\] \[= \int_{\mathbb{S}^{n}}\left[A_{0}^{0}+A_{1}^{0}u+A_{2}^{0}u^{2} \right]\mathrm{d}A\] \[+\sum_{m=0}^{k-2}\int_{\mathbb{S}^{n}}\left[A^{m}+B^{m}\frac{m+2} {2}-A_{1}^{m+1}\frac{m+2}{2(m+1)}+A_{0}^{m+2}\frac{n-m-1}{2}\right]|\nabla u| ^{2}\sigma_{m}(D^{2}u)\mathrm{d}A\] \[+\int_{\mathbb{S}^{n}}\left[A^{k-1}+B^{k-1}\frac{k+1}{2}-A_{1}^{ k}\frac{k+1}{2k}\right]|\nabla u|^{2}\sigma_{k-1}(D^{2}u)\mathrm{d}A\] \[+\int_{\mathbb{S}^{n}}\left[A^{k}+B^{k}\frac{k+2}{2}\right]| \nabla u|^{2}\sigma_{k}(D^{2}u)\mathrm{d}A\] \[+O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon) \|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}.\] Note that for \(1\leqslant k\leqslant n\), \[\sum_{m=1}^{k}\int_{\mathbb{S}^{n}}C(n,m,k,\rho)|\nabla u|^{2}\sigma_{m}(D^{2 }u)\mathrm{d}A=O(\varepsilon)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2},\] we have \[\int_{M}\sigma_{k}(\kappa)\mathrm{d}\mu_{g} = \int_{\mathbb{S}^{n}}\left[A_{0}^{0}+A_{1}^{0}u+A_{2}^{0}u^{2} \right]\mathrm{d}A+\int_{\mathbb{S}^{n}}\left[A^{0}+B^{0}-A_{1}^{1}+A_{0}^{2} \frac{n-1}{2}\right]|\nabla u|^{2}\mathrm{d}A \tag{3.20}\] \[+O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon) \|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}.\] Then by (3.10), (3.11), (3.12) and direct calculation, using (2.2), we get the formula (3.8). **Lemma 3.3** (Expression for \(\int_{M}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g}\)\((0\leqslant k\leqslant n)\)).: _Let \(M=\{(\rho(1+u(x)),x):x\in\mathbb{S}^{n}\}\) be a nearly spherical set in \(\mathbb{N}^{n+1}(K)\)\((K=1,0,-1)\), where \(u\in C^{3}(\mathbb{S}^{n})\). \(\Phi(r)\) is defined in (2.3). 
Suppose \(\|u\|_{W^{2,\infty}(\mathbb{S}^{n})}<\varepsilon\), then for any \(0\leqslant k\leqslant n\), there holds_ \[\int_{M}\Phi(r)\sigma_{k}(\kappa)\mathrm{d}\mu_{g} \tag{3.20}\] \[= \int_{\mathbb{S}^{n}}\binom{n}{k}\phi^{n-k}(\rho)\phi^{\prime k}( \rho)\Phi(\rho)\mathrm{d}A\] \[+\int_{\mathbb{S}^{n}}\binom{n}{k}\left\{\left[(n-k)\phi^{n-k-1}( \rho)\phi^{\prime k+1}(\rho)-Kk\phi^{n-k+1}(\rho)\phi^{\prime k-1}(\rho)\right] \Phi(\rho)\right.\] \[\left.+\phi^{n-k+1}(\rho)\phi^{\prime k}(\rho)\right\}\rho u \mathrm{d}A\] \[+\int_{\mathbb{S}^{n}}\binom{n}{k}\left\{\left[\frac{(n-k)(n-k-1 )}{2}\phi^{n-k-2}(\rho)\phi^{\prime k+2}(\rho)\right.\right.\] \[\left.\left.+K(k^{2}-kn-\frac{n}{2})\phi^{n-k}(\rho)\phi^{\prime k }(\rho)+\frac{k(k-1)}{2}\phi^{n-k+2}(\rho)\phi^{\prime k-2}(\rho)\right]\Phi(\rho)\right.\] \[\left.+(n-k+\frac{1}{2})\phi^{n-k}(\rho)\phi^{\prime k+1}(\rho)- Kk\phi^{n-k+2}(\rho)\phi^{\prime k-1}(\rho)\right\}\rho^{2}u^{2}\mathrm{d}A\] \[+\int_{\mathbb{S}^{n}}\binom{n}{k}\left\{\left[\frac{(n-k)(k+1) }{2n}\phi^{n-k-2}(\rho)\phi^{\prime k}(\rho)-K\frac{k(k-1)}{2n}\phi^{n-k}(\rho )\phi^{\prime k-2}(\rho)\right]\Phi(\rho)\right.\] \[\left.+\frac{k}{n}\phi^{n-k}(\rho)\phi^{\prime k-1}(\rho)\right\} \rho^{2}|\nabla u|^{2}\mathrm{d}A\] \[+O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon)\| \nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}.\] Proof.: Notice that \[\int_{M}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g} \tag{3.21}\] \[= \int_{\mathbb{S}^{n}}\sum_{m=0}^{k}\frac{(-1)^{m}\rho^{m}\phi^{ \prime k-m}(\rho(1+u))\phi^{n-m-1}(\rho(1+u))\Phi(\rho(1+u))}{(\phi^{2}+\rho^{ 2}|\nabla u|^{2})^{\frac{k+1}{2}}}\binom{n-m}{k-m}\] \[\times\left[\phi^{2}(\rho(1+u))\sigma_{m}(D^{2}u)+\frac{k+n-2m}{ n-m}\rho^{2}u^{i}u_{j}[T_{m}]_{i}^{j}(D^{2}u)\right]\mathrm{d}A,\] we expand \(\Phi(\rho(1+u))\) at \(u=0\) as \[\Phi(\rho(1+u))=\Phi(\rho)+\phi(\rho)\rho u+\frac{1}{2}\phi^{\prime}(\rho)\rho ^{2}u^{2}+o(u^{2}), \tag{3.22}\] and after a similar computation as for (3.8), we 
get the conclusion. **Lemma 3.4** (Expression for \(\int_{M}\phi^{\prime}\sigma_{k}(\kappa)\mathrm{d}\mu_{g}\)\((0\leqslant k\leqslant n)\)).: _Let \(M=\{(\rho(1+u(x)),x):x\in\mathbb{S}^{n}\}\) be a nearly spherical set in \(\mathbb{N}^{n+1}(K)\)\((K=1,-1)\), where \(u\in C^{3}(\mathbb{S}^{n})\). Suppose \(\|u\|_{W^{2,\infty}(\mathbb{S}^{n})}<\varepsilon\), then for any \(0\leqslant k\leqslant n\), there holds_ \[\int_{M}\phi^{\prime}(r)\sigma_{k}(\kappa)\mathrm{d}\mu_{g}\] \[= \int_{\mathbb{S}^{n}}\binom{n}{k}\phi^{n-k}(\rho)\phi^{\prime k+1} (\rho)\mathrm{d}A\] \[+\int_{\mathbb{S}^{n}}\binom{n}{k}\left[(n-k)\phi^{n-k-1}(\rho) \phi^{\prime k+2}(\rho)\right.\left.-K(k+1)\phi^{n-k+1}(\rho)\phi^{\prime k}( \rho)\right]\rho u\mathrm{d}A\] \[+\int_{\mathbb{S}^{n}}\binom{n}{k}\left[\frac{(n-k)(n-k-1)}{2}\phi^{n-k -2}(\rho)\phi^{\prime k+3}(\rho)\right.\] \[+\left.K(k^{2}-kn+k-\frac{3}{2}n-\frac{1}{2})\phi^{n-k}(\rho)\phi^{ \prime k+1}(\rho)\right.\] \[\left.+\frac{k(k+1)}{2}\phi^{n-k+2}(\rho)\phi^{\prime k-1}(\rho) \right]\rho^{2}u^{2}\mathrm{d}A\] \[+\int_{\mathbb{S}^{n}}\binom{n}{k}\left[\frac{(n-k)(k+1)}{2n}\phi ^{n-k-2}(\rho)\phi^{\prime k+1}(\rho)\right.\left.-K\frac{k(k+1)}{2n}\phi^{n-k }(\rho)\phi^{\prime k-1}(\rho)\right]\rho^{2}|\nabla u|^{2}\mathrm{d}A\] \[+O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon) \|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}. 
\tag{3.23}\] Proof.: Notice that \[\int_{M}\phi^{\prime}(r)\sigma_{k}(\kappa)\mathrm{d}\mu_{g} \tag{3.24}\] \[= \int_{\mathbb{S}^{n}}\sum_{m=0}^{k}\frac{(-1)^{m}\rho^{m}\phi^{ \prime k-m+1}(\rho(1+u))\phi^{n-m-1}(\rho(1+u))}{(\phi^{2}+\rho^{2}|\nabla u|^{ 2})^{\frac{k+1}{2}}}\binom{n-m}{k-m}\] \[\times\left[\phi^{2}(\rho(1+u))\sigma_{m}(D^{2}u)+\frac{k+n-2m}{n- m}\rho^{2}u^{i}u_{j}[T_{m}]_{i}^{j}(D^{2}u)\right]\mathrm{d}A,\] we expand \(\phi^{\prime}(\rho(1+u))\) at \(u=0\) as \[\phi^{\prime}(\rho(1+u))=\phi^{\prime}(\rho)-K\phi(\rho)\rho u-\frac{1}{2}K \phi^{\prime}(\rho)\rho^{2}u^{2}+o(u^{2}), \tag{3.25}\] and after a similar computation as for (3.8), we get the conclusion. ### Computation of \(\mathscr{A}_{k}(\Omega)-\mathscr{A}_{k}(\overline{B}_{\rho})\) First we compute the expression for \(\mathscr{A}_{k}(\Omega)-\mathscr{A}_{k}(\overline{B}_{\rho})\)\((-1\leqslant k\leqslant 1)\), then use iteration process to derive the expression for \(k\leqslant n\). Note that the principal curvatures of \(\partial\overline{B}_{\rho}\) are \(\kappa_{1}=\kappa_{2}=\cdots=\kappa_{n}=\frac{\phi^{\prime}(\rho)}{\phi(\rho)}\). 
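The Taylor expansions (3.22) and (3.25), together with the volume expansion entering (3.26) below, can be spot-checked with a computer algebra system by taking the concrete warped factors \(\phi=\sin\) (\(K=1\)) and \(\phi=\sinh\) (\(K=-1\)) and \(\Phi(r)=\int_{0}^{r}\phi\). A minimal sympy sketch (variable names are ours):

```python
# Spot-check of the expansions (3.22), (3.25) and of the second-order
# volume expansion used in (3.26), with concrete warping functions.
import sympy as sp

rho, u, r = sp.symbols('rho u r', positive=True)

cases = [(1, sp.sin, sp.cos, lambda s: 1 - sp.cos(s)),      # K = 1
         (-1, sp.sinh, sp.cosh, lambda s: sp.cosh(s) - 1)]  # K = -1
for K, phi, phip, Phi in cases:
    # (3.25): phi'(rho(1+u)) = phi'(rho) - K phi(rho) rho u - (K/2) phi'(rho) rho^2 u^2 + o(u^2)
    lhs = sp.series(phip(rho*(1 + u)), u, 0, 3).removeO()
    rhs = phip(rho) - K*phi(rho)*rho*u - sp.Rational(1, 2)*K*phip(rho)*rho**2*u**2
    assert sp.simplify(lhs - rhs) == 0
    # (3.22): Phi(rho(1+u)) = Phi(rho) + phi(rho) rho u + (1/2) phi'(rho) rho^2 u^2 + o(u^2)
    lhs = sp.series(Phi(rho*(1 + u)), u, 0, 3).removeO()
    rhs = Phi(rho) + phi(rho)*rho*u + sp.Rational(1, 2)*phip(rho)*rho**2*u**2
    assert sp.simplify(lhs - rhs) == 0

# Volume expansion for a concrete n (here n = 3, K = -1):
# int_rho^{rho(1+u)} phi^n dr = phi^n(rho) rho u + (n/2) phi^{n-1}(rho) phi'(rho) rho^2 u^2 + o(u^2)
n = 3
vol = sp.integrate(sp.sinh(r)**n, (r, rho, rho*(1 + u)))
lhs = sp.series(vol, u, 0, 3).removeO()
rhs = sp.sinh(rho)**n*rho*u + sp.Rational(n, 2)*sp.sinh(rho)**(n - 1)*sp.cosh(rho)*rho**2*u**2
assert sp.simplify((lhs - rhs).rewrite(sp.exp)) == 0
print("expansions verified for K = 1 and K = -1")
```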
From (2.21), we get \[\mathscr{A}_{-1}(\Omega)-\mathscr{A}_{-1}(\overline{B}_{\rho}) = \int_{\mathbb{S}^{n}}\left(\int_{\rho}^{\rho(1+u)}\phi^{n}(r) \mathrm{d}r\right)\mathrm{d}A \tag{3.26}\] \[= \int_{\mathbb{S}^{n}}\phi^{n}(\rho)\rho u\mathrm{d}A+\int_{ \mathbb{S}^{n}}\frac{n}{2}\phi^{n-1}(\rho)\phi^{\prime}(\rho)\rho^{2}u^{2} \mathrm{d}A\] \[+O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}.\] Since \(\mathscr{A}_{0}(\Omega)=\int_{M}\sigma_{0}(\kappa)\mathrm{d}\mu_{g}\), we can take \(k=0\) in (3.8) and get \[\mathscr{A}_{0}(\Omega)-\mathscr{A}_{0}(\overline{B}_{\rho}) = \int_{\mathbb{S}^{n}}n\phi^{n-1}(\rho)\phi^{\prime}(\rho)\rho u \mathrm{d}A \tag{3.27}\] \[+\int_{\mathbb{S}^{n}}\left[\frac{1}{2}\left(n(n-1)\phi^{n-2}( \rho)\phi^{\prime 2}(\rho)-nK\phi^{n}(\rho)\right)\right]\rho^{2}u^{2}\mathrm{d}A\] \[+\int_{\mathbb{S}^{n}}\frac{1}{2}\phi^{n-2}(\rho)\rho^{2}|\nabla u |^{2}\mathrm{d}A\] \[+O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon)\| \nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}.\] Combining (2.23), taking \(k=1\) in (3.8) and inserting (3.26), we get \[\mathscr{A}_{1}(\Omega)-\mathscr{A}_{1}(\overline{B}_{\rho}) \tag{3.28}\] \[= \int_{\mathbb{S}^{n}}\binom{n}{1}\left[(n-1)\phi^{n-2}(\rho)\phi^{ \prime 2}(\rho)-K\phi^{n}(\rho)\right]\rho u\mathrm{d}A\] \[+\int_{\mathbb{S}^{n}}\binom{n}{1}\left[\frac{(n-1)(n-2)}{2}\phi^ {n-3}(\rho)\phi^{\prime 3}(\rho)\right.\left.+K(1-\frac{3}{2}n)\phi^{n-1}(\rho) \phi^{\prime}(\rho)\right]\rho^{2}u^{2}\mathrm{d}A\] \[+\int_{\mathbb{S}^{n}}\binom{n}{1}\frac{n-1}{n}\phi^{n-3}(\rho) \phi^{\prime}(\rho)\rho^{2}|\nabla u|^{2}\mathrm{d}A+O(\varepsilon)\|u\|_{L^ {2}(\mathbb{S}^{n})}^{2}+O(\varepsilon)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}\] \[+Kn(\mathscr{A}_{-1}(\Omega)-\mathscr{A}_{-1}(\overline{B}_{\rho })).\] \[= \int_{\mathbb{S}^{n}}\binom{n}{1}(n-1)\phi^{n-2}(\rho)\phi^{\prime 2 }(\rho)\rho u\mathrm{d}A\] \[+\int_{\mathbb{S}^{n}}\binom{n}{1}\left[\frac{(n-1)(n-2)}{2}\phi^ {n-3}(\rho)\phi^{\prime 
3}(\rho)\right.+K(1-n)\phi^{n-1}(\rho)\phi^{\prime}( \rho)\right]\rho^{2}u^{2}\mathrm{d}A\] \[+\int_{\mathbb{S}^{n}}\binom{n}{1}\frac{n-1}{n}\phi^{n-3}(\rho) \phi^{\prime}(\rho)\rho^{2}|\nabla u|^{2}\mathrm{d}A\] \[+O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon)\| \nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}.\] **Lemma 3.5** (Expression for \(\mathscr{A}_{k}(\Omega)-\mathscr{A}_{k}(\overline{B}_{\rho})\)\((k\geqslant 0)\)).: _Let \(\Omega\subset\mathbb{N}^{n+1}(K)\)\((K=-1,1)\), \(M=\partial\Omega=\{(\rho(1+u(x)),x):x\in\mathbb{S}^{n}\}\), where \(u\in C^{3}(\mathbb{S}^{n})\). Suppose \(\|u\|_{W^{2,\infty}(\mathbb{S}^{n})}<\varepsilon\), then for any \(0\leqslant k\leqslant n\), there holds_ \[\mathscr{A}_{k}(\Omega)-\mathscr{A}_{k}(\overline{B}_{\rho}) = \int_{\mathbb{S}^{n}}\binom{n}{k}(n-k)\phi^{n-k-1}(\rho)\phi^{ \prime k+1}(\rho)\rho u\mathrm{d}A \tag{3.29}\] \[+\int_{\mathbb{S}^{n}}\binom{n}{k}\left[\frac{(n-k)(n-k-1)}{2} \phi^{n-k-2}(\rho)\phi^{\prime k+2}(\rho)\right.\] \[\qquad\qquad\left.-K\frac{(n-k)(k+1)}{2}\phi^{n-k}(\rho)\phi^{ \prime k}(\rho)\right]\rho^{2}u^{2}\mathrm{d}A\] \[+\int_{\mathbb{S}^{n}}\binom{n}{k}\frac{(n-k)(k+1)}{2n}\phi^{n-k- 2}(\rho)\phi^{\prime k}(\rho)\rho^{2}|\nabla u|^{2}\mathrm{d}A\] \[+O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon) \|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}.\] Proof.: By (3.26), (3.27) and (3.28), we have already checked the formulae for (3.29) with \(k=0,1\). It's easy to see that (3.29) also satisfies (2.24). Then by induction, we conclude our assertion. ## 4. Stability of Alexandrov-Fenchel inequalities In this section, we prove Theorem 1.4. For any fixed \(0\leqslant k\leqslant n-1\), assuming \(\mathscr{A}_{j}(\Omega)=\mathscr{A}_{j}(\overline{B}_{\rho})\)\((-1\leqslant j<k)\) and \(\mathrm{bar}(\Omega)=O\), we will get a Poincare-type estimate for our proof. Before that, the following estimate which was proved in [1, 2] is crucial. 
For the convenience of the reader, we present it here: **Lemma 4.1** ([1, 2]).: _Let \(\Omega\) be a nearly spherical domain enclosed by \(M=\{(\rho(1+u(x)),x):x\in\mathbb{S}^{n}\}\) in \(\mathbb{N}^{n+1}(K)\)\((K=-1,1)\), and suppose \(u\) can be represented as_ \[u=\sum_{k=0}^{\infty}a_{k}Y_{k},\] _where \(\{Y_{k}\}_{k=0}^{\infty}\) denotes the spherical harmonics, which form an orthonormal basis of \(L^{2}(\mathbb{S}^{n})\). If_ \[\mathrm{bar}(\Omega)=O,\] _then_ \[a_{1}^{2}=O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}. \tag{4.1}\] Proof.: Note that the eigenvalues of the Laplace-Beltrami operator \(\Delta\) on \(\mathbb{S}^{n}\) are \(\lambda_{k}=-k(n+k-1)\)\((k=0,1,\cdots)\). Let \(\{Y_{k}\}_{k=0,1,\cdots}\) be the eigenfunctions corresponding to \(\lambda_{k}\), which constitute an orthonormal basis of \(L^{2}(\mathbb{S}^{n})\). We can take \[Y_{0}=\frac{1}{\sqrt{\mathrm{Area}(\mathbb{S}^{n})}},\ \ Y_{1}=\frac{\sqrt{n+1}}{\sqrt{\mathrm{Area}(\mathbb{S}^{n})}}x\cdot v, \tag{4.2}\] where \(v=(v_{1},v_{2},\cdots,v_{n+1})\in\mathbb{S}^{n}\), \(x\cdot v=\sum\limits_{l=1}^{n+1}x_{l}v_{l}\). From Definition 1.2, \(\mathrm{bar}(\Omega)=O\) is characterized by the necessary condition \[0=\int_{\Omega}\nabla_{p}^{\mathbb{N}^{n+1}(K)}\left(d_{K}^{2}(y,p)\right)\Big{|}_{p=O}\,\mathrm{d}\mu_{K}(y)=\int_{\Omega}2d_{K}(y,O)\nabla_{p}^{\mathbb{N}^{n+1}(K)}[d_{K}(y,p)]\Big{|}_{p=O}\,\mathrm{d}\mu_{K}(y). \tag{4.3}\] By using the warped product metric for \(\mathbb{N}^{n+1}(K)\)\((K=-1,1)\), we have \[\int_{0}^{\rho}\int_{\mathbb{S}^{n}}r(1+u(x))^{2}\phi^{n}(r(1+u(x)))x_{l}\mathrm{d}A\mathrm{d}r=0,\ \ l=1,2,\cdots,n+1. 
\tag{4.4}\] By Taylor expansion of \(r(1+u)^{2}\phi^{n}(r(1+u))\) at \(u=0\), we have \[r(1+u)^{2}\phi^{n}(r(1+u))=r\phi^{n}(r)+\frac{\mathrm{d}}{\mathrm{d}r}\left(r ^{2}\phi^{n}(r)\right)u+o(u),\] then insert it into (4.4) and use \(\|u\|_{W^{2,\infty}(\mathbb{S}^{n})}<\varepsilon\), we get that for any \(1\leqslant l\leqslant n+1\), \[0=\int_{0}^{\rho}r\phi^{n}(r)\mathrm{d}r\int_{\mathbb{S}^{n}}x_{l}\mathrm{d} A+\rho^{2}\phi^{n}(\rho)\int_{\mathbb{S}^{n}}ux_{l}\mathrm{d}A+O(\varepsilon)\|u \|_{L^{2}(\mathbb{S}^{n})}. \tag{4.5}\] Note that \(\mathrm{bar}(\overline{B}_{\rho})=O\), by taking \(u=0\) in (4.4) we have \[\int_{\mathbb{S}^{n}}x_{l}\mathrm{d}A=0,\ \ l=1,2,\cdots,n+1. \tag{4.6}\] Combining (4.5) with (4.6), we know that \[\int_{\mathbb{S}^{n}}ux_{l}\mathrm{d}A=O(\varepsilon)\|u\|_{L^{2}(\mathbb{S} ^{n})},\ \ l=1,2,\cdots,n+1. \tag{4.7}\] Consequently, by (4.2), \[a_{1}=\int_{\mathbb{S}^{n}}uY_{1}\mathrm{d}A=\frac{\sqrt{n+1}}{\sqrt{\mathrm{ Area}(\mathbb{S}^{n})}}\sum\limits_{l=1}^{n+1}v_{l}\int_{\mathbb{S}^{n}}ux_{l} \mathrm{d}A=O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}. \tag{4.8}\] **Lemma 4.2** (Poincare-type Estimate).: _Let \(M=\{(\rho(1+u(x)),x):x\in\mathbb{S}^{n}\}\), where \(u\in C^{3}(\mathbb{S}^{n})\), and there exists small \(\varepsilon>0\) such that \(\|u\|_{W^{2,\infty}(\mathbb{S}^{n})}<\varepsilon\). Let \(\Omega\) be the domain enclosed by \(M\) in \(\mathbb{N}^{n+1}(K)\)\((K=-1,1)\) satisfying \(\mathrm{bar}(\Omega)=O\)._ 1. _If_ \(\mathscr{A}_{-1}(\Omega)=\mathscr{A}_{-1}(\overline{B}_{\rho})\)_, then_ \[\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}\geqslant 2(n+1)\|u\|_{L^{2}( \mathbb{S}^{n})}^{2}+O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}.\] (4.9) _._ 2. 
_If_ \(\mathscr{A}_{j}(\Omega)=\mathscr{A}_{j}(\overline{B}_{\rho})\)_, for any_ \(0\leqslant j<k\)_, where_ \(1\leqslant k\leqslant n\)_, then_ \[\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}\geqslant 2(n+1)\|u\|_{L^{2}(\mathbb{S}^{n}) }^{2}+O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon)\|\nabla u \|_{L^{2}(\mathbb{S}^{n})}^{2}.\] (4.10) Proof.: Consider the Fourier expansion \(u=\sum\limits_{k=0}^{\infty}a_{k}Y_{k}\) as in Lemma 4.1, then \[\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}=\sum\limits_{k=0}^{\infty}a_{k}^{2},\ \ \|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}=\sum\limits_{k=0}^{\infty}|\lambda_{k}| a_{k}^{2}.\] When \(k\geqslant 2\), we have that \(|\lambda_{k}|=k(n+k-1)\geqslant 2(n+1)\), then \[\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2} = \sum\limits_{k=0}^{\infty}|\lambda_{k}|a_{k}^{2}=\sum\limits_{k=2 }^{\infty}|\lambda_{k}|a_{k}^{2}+na_{1}^{2}\geqslant 2(n+1)\sum\limits_{k=2}^{ \infty}a_{k}^{2}+na_{1}^{2} \tag{4.11}\] \[= 2(n+1)\sum\limits_{k=0}^{\infty}a_{k}^{2}-(n+2)a_{1}^{2}-2(n+1)a _{0}^{2}\] \[= 2(n+1)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}-(n+2)a_{1}^{2}-2(n+1)a_{ 0}^{2}\] Now \(a_{1}^{2}\) has been estimated in Lemma 4.1. Note that \(a_{0}=\dfrac{1}{\sqrt{\operatorname{Area}(\mathbb{S}^{n})}}\int_{ \mathbb{S}^{n}}u\mathrm{d}A\), we estimate \(a_{0}^{2}\) in two cases. (1) Assume that \(\operatorname{Vol}(\Omega)=\operatorname{Vol}(\overline{B}_{\rho})\), by (3.26) we get \[\int_{\mathbb{S}^{n}}u\mathrm{d}A=-\dfrac{n}{2}\dfrac{\phi^{\prime}(\rho)}{ \phi(\rho)}\rho\int_{\mathbb{S}^{n}}u^{2}\mathrm{d}A+O(\varepsilon)\|u\|_{L^{ 2}(\mathbb{S}^{n})}^{2}, \tag{4.12}\] therefore, we get \[a_{0}^{2}=O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}. 
\tag{4.13}\] (2) Assume that \(\mathscr{A}_{j}(\Omega)=\mathscr{A}_{j}(\overline{B}_{\rho})\) (\(0\leqslant j<k\)), by (3.29), we have \[\int_{\mathbb{S}^{n}}u\mathrm{d}A = -\int_{\mathbb{S}^{n}}\left[\dfrac{n-j-1}{2}\dfrac{\phi^{\prime}(\rho)}{\phi(\rho)}-K\dfrac{j+1}{2}\dfrac{\phi(\rho)}{\phi^{\prime}(\rho)}\right]\rho u^{2}\mathrm{d}A \tag{4.14}\] \[-\int_{\mathbb{S}^{n}}\dfrac{j+1}{2n}\dfrac{1}{\phi(\rho)\phi^{\prime}(\rho)}\rho|\nabla u|^{2}\mathrm{d}A+O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}.\] Thus we have \[a_{0}^{2}=O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}. \tag{4.15}\] Combining (4.11), (4.1), (4.13) and (4.15), we obtain the estimates (4.9) and (4.10). Now, let us estimate \(\alpha^{2}(\Omega)\) from its definition. **Lemma 4.3** (Estimation for \(\alpha^{2}(\Omega)\)).: _Let \(\Omega\) be a nearly spherical domain in \(\mathbb{N}^{n+1}(K)\) (\(K=-1,1\)) which is enclosed by \(M=\{(\rho(1+u(x)),x):x\in\mathbb{S}^{n}\}\). Suppose \(u\in C^{3}(\mathbb{S}^{n})\) and there exists \(\varepsilon>0\) such that \(\|u\|_{W^{2,\infty}(\mathbb{S}^{n})}<\varepsilon\), then_ \[\alpha^{2}(\Omega)\leqslant\dfrac{1}{n^{2}}\mathrm{Area}(\mathbb{S}^{n})\phi^{2n}(\rho)\rho^{2}\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}. \tag{4.16}\] Proof.: Let \(\overline{B}_{\Omega}\) be a geodesic ball with geodesic radius \(R\) centered at \(O\) in \(\mathbb{N}^{n+1}(K)\) (\(K=-1,1\)) such that \[\operatorname{Vol}(\Omega)=\operatorname{Vol}(\overline{B}_{\Omega})=\mathrm{Area}(\mathbb{S}^{n})\int_{0}^{R}\phi^{n}(r)\mathrm{d}r. 
\tag{4.17}\] Thus, \[\mathrm{Vol}(\Omega\Delta\overline{B}_{\Omega}) \tag{4.18}\] \[= \int_{\mathbb{S}^{n}}\left|\int_{0}^{\rho(1+u)}\phi^{n}(r)\mathrm{d }r-\int_{0}^{R}\phi^{n}(r)\mathrm{d}r\right|\mathrm{d}A\] \[= \int_{\mathbb{S}^{n}}\left|\int_{0}^{\rho(1+u)}\phi^{n}(r)\mathrm{ d}r-\frac{1}{\mathrm{Area}(\mathbb{S}^{n})}\int_{\mathbb{S}^{n}}\left(\int_{0}^{ \rho(1+u)}\phi^{n}(r)\mathrm{d}r\right)\mathrm{d}A\right|\mathrm{d}A\] \[= \left\|\int_{0}^{\rho(1+u)}\phi^{n}(r)\mathrm{d}r-\frac{1}{ \mathrm{Area}(\mathbb{S}^{n})}\int_{\mathbb{S}^{n}}\left(\int_{0}^{\rho(1+u)} \phi^{n}(r)\mathrm{d}r\right)\mathrm{d}A\right\|_{L^{1}(\mathbb{S}^{n})}\] \[\leqslant (\mathrm{Area}(\mathbb{S}^{n}))^{\frac{1}{2}}\left\|\int_{0}^{ \rho(1+u)}\phi^{n}(r)\mathrm{d}r-\frac{1}{\mathrm{Area}(\mathbb{S}^{n})}\int_ {\mathbb{S}^{n}}\left(\int_{0}^{\rho(1+u)}\phi^{n}(r)\mathrm{d}r\right) \mathrm{d}A\right\|_{L^{2}(\mathbb{S}^{n})},\] where in (4.18) we used Holder's inequality. By Poincare's inequality, \[\left\|\int_{0}^{\rho(1+u)}\phi^{n}(r)\mathrm{d}r-\frac{1}{ \mathrm{Area}(\mathbb{S}^{n})}\int_{\mathbb{S}^{n}}\left(\int_{0}^{\rho(1+u)} \phi^{n}(r)\mathrm{d}r\right)\mathrm{d}A\right\|_{L^{2}(\mathbb{S}^{n})} \tag{4.19}\] \[\leqslant \frac{1}{n}\left\|\nabla\left(\int_{0}^{\rho(1+u)}\phi^{n}(r) \mathrm{d}r\right)\right\|_{L^{2}(\mathbb{S}^{n})}=\frac{1}{n}\left\|\phi^{n}( \rho(1+u))\rho\nabla u\right\|_{L^{2}(\mathbb{S}^{n})}\] \[\leqslant \frac{1}{n}\rho\|\phi^{n}(\rho(1+u))\|_{L^{\infty}(\mathbb{S}^{n })}\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}.\] Thus \[\mathrm{Vol}(\Omega\Delta\overline{B}_{\Omega})\leqslant\frac{1}{n}\rho\left( \mathrm{Area}(\mathbb{S}^{n})\right)^{\frac{1}{2}}\|\phi^{n}(\rho(1+u))\|_{L^{ \infty}(\mathbb{S}^{n})}\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}. 
\tag{4.20}\] By Taylor expansion of \(\phi^{n}(\rho(1+u))\) at \(u=0\), \[\phi^{n}(\rho(1+u)) = \phi^{n}(\rho)+n\phi^{n-1}(\rho)\phi^{\prime}(\rho)\rho u \tag{4.21}\] \[+\frac{1}{2}\left[n(n-1)\phi^{n-2}(\rho)\phi^{\prime 2}(\rho)- Kn\phi^{n}(\rho)\right]\rho^{2}u^{2}+o(u^{2}),\] and notice the condition \(\|u\|_{C^{2}(\mathbb{S}^{n})}<\varepsilon\), we have \[\frac{\|\phi^{n}(\rho(1+u))\|_{L^{\infty}(\mathbb{S}^{n})}}{\phi^{n}(\rho)}=1+ O(\varepsilon). \tag{4.22}\] Therefore, \[\mathrm{Vol}(\Omega\Delta\overline{B}_{\Omega})\leqslant\left(\frac{1}{n}\rho \left(\mathrm{Area}(\mathbb{S}^{n})\right)^{\frac{1}{2}}\phi^{n}(\rho)+O( \varepsilon)\right)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}. \tag{4.23}\] Thus \[\alpha^{2}(\Omega)\leqslant\mathrm{Vol}(\Omega\Delta\overline{B}_{\Omega})^{ 2}\leqslant\frac{1}{n^{2}}\rho^{2}\mathrm{Area}(\mathbb{S}^{n})\phi^{2n}(\rho) \|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon)\|\nabla u\|_{L^{2}( \mathbb{S}^{n})}^{2}. \tag{4.24}\] We get the conclusion. From this lemma, we know that \[\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}\geqslant\left(\frac{n^{2}}{\mathrm{ Area}(\mathbb{S}^{n})\phi^{2n}(\rho)\rho^{2}}+O(\varepsilon)\right)\alpha^{2}( \Omega). \tag{4.25}\] Now we prove Theorem 1.4. 
Proof of Theorem 1.4.: Suppose \(\mathscr{A}_{j}(\Omega)=\mathscr{A}_{j}(\overline{B}_{\rho})\) for a fixed \(-1\leqslant j<k\). When \(j=-1\) we substitute (4.12) into (3.29), and when \(0\leqslant j<k\) we substitute (4.14) into (3.29); after a direct computation using (2.2), we have \[\mathscr{A}_{k}(\Omega)-\mathscr{A}_{k}(\overline{B}_{\rho}) = -\int_{\mathbb{S}^{n}}{n\choose k}\frac{(n-k)(k-j)}{2}\phi^{n-k-2}(\rho)\phi^{\prime k}(\rho)\rho^{2}u^{2}\mathrm{d}A \tag{4.26}\] \[+\int_{\mathbb{S}^{n}}{n\choose k}\frac{(n-k)(k-j)}{2n}\phi^{n-k-2}(\rho)\phi^{\prime k}(\rho)\rho^{2}|\nabla u|^{2}\mathrm{d}A\] \[+O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}.\] Note that the coefficient of \(\int_{\mathbb{S}^{n}}|\nabla u|^{2}\mathrm{d}A\) is positive; by the estimates (4.9) and (4.10) for \(\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}\), we have \[\mathscr{A}_{k}(\Omega)-\mathscr{A}_{k}(\overline{B}_{\rho}) \geqslant \left({n\choose k}\frac{(n-k)(k-j)}{4n}\phi^{n-k-2}(\rho)\phi^{\prime k}(\rho)\rho^{2}+O(\varepsilon)\right)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2} \tag{4.27}\] \[+\left({n\choose k}\frac{(n-k)(k-j)}{2n}\phi^{n-k-2}(\rho)\phi^{\prime k}(\rho)\rho^{2}+O(\varepsilon)\right)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}\] \[+O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}\] \[\geqslant {n\choose k}\frac{(n-k)(k-j)}{4n}\phi^{n-k-2}(\rho)\phi^{\prime k}(\rho)\rho^{2}\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}.\] After combining (4.27) with (4.25), we finally conclude that \[\delta_{k,j}(\Omega) = \mathscr{A}_{k}(\Omega)-\mathscr{A}_{k}(\overline{B}_{\rho}) \tag{4.28}\] \[\geqslant \left({n\choose k}\frac{n(n-k)(k-j)}{4}\frac{\phi^{n-k-2}(\rho)\phi^{\prime k}(\rho)}{\phi^{2n}(\rho)\mathrm{Area}(\mathbb{S}^{n})}+O(\varepsilon)\right)\alpha^{2}(\Omega)\] \[= \left(\frac{n(n-k)(k-j)}{4\mathrm{Area}(\mathbb{S}^{n})}{n\choose k}\frac{\phi^{\prime
k}(\rho)}{\phi^{n+k+2}(\rho)}+O(\varepsilon)\right)\alpha^{2}(\Omega).\] ## 5. Stability of weighted quermassintegral inequalities In this section, we discuss the stability of geometric inequalities involving weighted curvature integrals and quermassintegrals for nearly spherical sets in \(\mathbb{R}^{n+1}\) and \(\mathbb{H}^{n+1}\). ### Stability of weighted quermassintegral inequalities in \(\mathbb{R}^{n+1}\) We establish the stability inequality in \(\mathbb{R}^{n+1}\), which states that for any fixed \(0\leqslant k\leqslant n\), if \(\mathscr{A}_{l}(\Omega)=\mathscr{A}_{l}(B)\) (\(-1\leqslant l<k\)), where \(B\) is the unit ball in \(\mathbb{R}^{n+1}\), then \[\frac{\int_{M}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g}-\int_{\partial B}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g}}{\int_{\partial B}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g}}\geqslant C\overline{\alpha}^{2}(\Omega),\] where \(C>0\) is a constant independent of \(\Omega\), \(\Phi(r)=\int_{0}^{r}s\,\mathrm{d}s=\frac{1}{2}r^{2}\) is defined in (2.3), and \(\overline{\alpha}(\Omega)\) is defined in (1.3). It is the quantitative version of (1.19). 
Proof of Theorem 1.5.: Taking \(\rho=1,\ K=0,\ \phi(r)=r\) in (3.20), we get that for any nearly spherical set \(M=\{((1+u(x)),x):x\in\mathbb{S}^{n}\}\) in \(\mathbb{R}^{n+1}\), \[\int_{M}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g} = \frac{1}{2}\int_{\mathbb{S}^{n}}{n\choose k}\mathrm{d}A+\frac{1}{2}\int_{\mathbb{S}^{n}}{n\choose k}(n-k+2)u\mathrm{d}A \tag{5.1}\] \[+\frac{1}{2}\int_{\mathbb{S}^{n}}{n\choose k}\frac{(n-k+2)(n-k+1)}{2}u^{2}\mathrm{d}A\] \[+\frac{1}{2}\int_{\mathbb{S}^{n}}{n\choose k}\frac{(n-k)(k+1)+4k}{2n}|\nabla u|^{2}\mathrm{d}A\] \[+O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}.\] From \(\mathscr{A}_{-1}(\Omega)=\mathscr{A}_{-1}(B)\), we have \[\int_{\mathbb{S}^{n}}u\mathrm{d}A=-\int_{\mathbb{S}^{n}}\frac{n}{2}u^{2}\mathrm{d}A+O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}, \tag{5.2}\] and from \(\mathscr{A}_{j}(\Omega)=\mathscr{A}_{j}(B)\) (\(0\leqslant j<k\)), we have \[\int_{\mathbb{S}^{n}}u\mathrm{d}A=-\int_{\mathbb{S}^{n}}\frac{n-j-1}{2}u^{2}\mathrm{d}A-\int_{\mathbb{S}^{n}}\frac{j+1}{2n}|\nabla u|^{2}\mathrm{d}A+O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}. \tag{5.3}\] These are proved in Proposition 4.3 and Lemma 5.2 of [22]. Then by substituting (5.2) in (5.1) when \(j=-1\) and substituting (5.3) in (5.1) when \(0\leqslant j<k\), \[\int_{M}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g}-\frac{1}{2}\int_{\mathbb{S}^{n}}{n\choose k}\mathrm{d}A = \frac{1}{2}\int_{\mathbb{S}^{n}}{n\choose k}\frac{(n-k+2)(j-k+2)}{2}u^{2}\mathrm{d}A \tag{5.4}\] \[+\frac{1}{2}\int_{\mathbb{S}^{n}}{n\choose k}\frac{(n-k+2)(k-j)+2k-2}{2n}|\nabla u|^{2}\mathrm{d}A\] \[+O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}.\] We remark that the coefficient of \(\int_{\mathbb{S}^{n}}|\nabla u|^{2}\mathrm{d}A\) in (5.4) is positive. 
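This positivity, and the coefficient algebra behind (5.4), can be checked symbolically. A short sympy sketch (variable names are ours) verifying that the \(|\nabla u|^{2}\) coefficient produced by the substitution of (5.3) into (5.1) matches the displayed one, and that it is positive for admissible \((n,k,j)\):

```python
# Symbolic check of the |grad u|^2 coefficient in (5.4) and a positivity
# scan over admissible (n, k, j) (a sketch; symbol names are ours).
import sympy as sp

n, k, j = sp.symbols('n k j')
combined = ((n - k)*(k + 1) + 4*k) - (n - k + 2)*(j + 1)   # from (5.1) minus the (5.3) term
displayed = (n - k + 2)*(k - j) + 2*k - 2                  # coefficient displayed in (5.4)
assert sp.expand(combined - displayed) == 0

# positivity scan over -1 <= j < k <= n for a range of dimensions
for nn in range(1, 12):
    for kk in range(0, nn + 1):
        for jj in range(-1, kk):
            assert (nn - kk + 2)*(kk - jj) + 2*kk - 2 > 0
print("coefficient identity and positivity verified")
```

Note that the \(j=-1\) case is covered by the same identity, since the \(|\nabla u|^{2}\) contribution \((j+1)/(2n)\) of (5.3) vanishes there, reducing it to (5.2).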
Therefore, under the condition \(\mathrm{bar}(\Omega)=O\), substituting \[\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}\geqslant 2(n+1)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}\] (when \(j=-1\)) and \[\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}\geqslant 2(n+1)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}\] (when \(0\leqslant j<k\)) into (5.4), we get \[\int_{M}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g}-\int_{\partial B}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g} \geqslant \binom{n}{k}\frac{n-k+2}{2}\|u\|_{L^{2}(\mathbb{S}^{n})}^{2} \tag{5.5}\] \[+\binom{n}{k}\frac{(n-k+2)(k-j)+2k-2}{8n}\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}\] \[+O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}\] \[\geqslant \binom{n}{k}\frac{(n-k+2)(k-j)+2k-2}{8n}\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}.\] Combining the estimate proved in Lemma 5.3 of [22]: \[\overline{\alpha}^{2}(\Omega)\leqslant\frac{(n+1)^{2}}{n^{2}\mathrm{Area}(\mathbb{S}^{n})}\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}, \tag{5.6}\] we obtain the inequality \[\frac{\int_{M}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g}-\int_{\partial B}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g}}{\int_{\partial B}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g}} \geqslant \frac{(n-k+2)(k-j)+2k-2}{4n\mathrm{Area}(\mathbb{S}^{n})}\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}\] \[\geqslant \left(\frac{n\left((n-k+2)(k-j)+2k-2\right)}{4(n+1)^{2}}+O(\varepsilon)\right)\overline{\alpha}^{2}(\Omega)\] as desired. 
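Both this proof and that of Theorem 1.4 ultimately rest on the spectral gap \(|\lambda_{k}|=k(n+k-1)\geqslant 2(n+1)\) for \(k\geqslant 2\) from Lemma 4.2. The deficit factors cleanly, as the following sympy sketch (symbol names are ours) confirms:

```python
# Symbolic check of the spectral-gap identity behind (4.9)/(4.10):
# k(n+k-1) - 2(n+1) = (k-2)(n+k+1), hence >= 0 for k >= 2, n >= 1.
import sympy as sp

n, k = sp.symbols('n k')
deficit = k*(n + k - 1) - 2*(n + 1)
assert sp.expand(deficit - (k - 2)*(n + k + 1)) == 0
print("gap identity k(n+k-1) - 2(n+1) = (k-2)(n+k+1) verified")
```

Equality holds exactly at \(k=2\), which is why the constant \(2(n+1)\) in the Poincaré-type estimate is sharp among the modes left after removing \(a_{0}\) and \(a_{1}\).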
### Stability of weighted quermassintegral inequalities in \(\mathbb{H}^{n+1}\) We are going to establish the stability inequality in \(\mathbb{H}^{n+1}\) which states that for any fixed \(0\leqslant k\leqslant n\), \(-1\leqslant j<k\), if \(\mathscr{A}_{j}(\Omega)=\mathscr{A}_{j}(\overline{B}_{\rho})\) holds, then \[\int_{M}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g}-\int_{\partial\overline{B}_{ \rho}}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g}\geqslant C\alpha^{2}(\Omega),\] where \(C>0\) is a constant independent of \(\Omega\), and \(\Phi\) is defined in (2.3). This is the quantitative version of (1.20). Proof of (1.23) in Theorem 1.6.: Substituting (4.12) (when \(j=-1\)) and (4.14) (when \(0\leqslant j<k\)) in (3.20), using (2.2) we get \[\int_{M}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g}-\int_{\partial \overline{B}_{\rho}}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g} \tag{5.7}\] \[= \int_{\mathbb{S}^{n}}\binom{n}{k}\left\{\left[\frac{(n-k)(j-k)}{ 2}\phi^{n-k-2}(\rho)\phi^{\prime k}(\rho)+K\frac{k(k-j-2)}{2}\phi^{n-k}(\rho) \phi^{\prime k-2}(\rho)\right]\Phi(\rho)\right.\] \[\left.+\frac{n+1}{2}\phi^{n-k}(\rho)\phi^{\prime k+1}(\rho)+( \frac{j+1}{2}-k)\phi^{n-k}(\rho)\phi^{\prime k-1}(\rho)\right\}\rho^{2}u^{2} \mathrm{d}A\] \[+\int_{\mathbb{S}^{n}}\binom{n}{k}\left\{\left[\frac{(n-k)(k-j)} {2n}\phi^{n-k-2}(\rho)\phi^{\prime k}(\rho)-K\frac{k(k-j-2)}{2n}\phi^{n-k}( \rho)\phi^{\prime k-2}(\rho)\right]\Phi(\rho)\right.\] \[\left.+\frac{2k-j-1}{2n}\phi^{n-k}(\rho)\phi^{\prime k-1}(\rho) \right\}\rho^{2}|\nabla u|^{2}\mathrm{d}A\] \[+O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon) \|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}\] \[= C_{1}(n,k,j,\rho)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+C_{2}(n,k,j, \rho)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon)\|u\|_{L^{2}( \mathbb{S}^{n})}^{2}\] \[+O(\varepsilon)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}.\] Notice that when \(K=-1\), \(\phi^{\prime}(\rho)=\cosh\rho\geqslant 1\), thus the coefficient of 
\(\int_{\mathbb{S}^{n}}|\nabla u|^{2}\mathrm{d}A\) is positive for \(-1\leqslant j<k\). Indeed, by using \(\Phi(\rho)=K(1-\phi^{\prime}(\rho))\), we can compute that \[C_{2}(n,k,j,\rho)=\binom{n}{k}\phi^{n-k-2}(\rho)\phi^{\prime k-2}(\rho)\left[\frac{(n-k)(k-j)}{2n}\Phi(\rho)+\phi^{2}(\rho)\left(\frac{n(k-j)-j-1}{2n}\phi^{\prime}(\rho)-\frac{n(k-j)-2k}{2n}\right)\right]\rho^{2}>0. \tag{5.8}\] After using the Poincaré-type estimates (4.9) and (4.10) in (5.7), we get \[\int_{M}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g}-\int_{\partial\overline{B}_{\rho}}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g}\] \[\geqslant \left(C_{1}(n,k,j,\rho)+C_{2}(n,k,j,\rho)n\right)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+\frac{1}{2}C_{2}(n,k,j,\rho)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}\] \[\geqslant \binom{n}{k}\frac{(n-k)(k-j)}{4n}\phi^{n-k-2}(\rho)\phi^{\prime k-2}(\rho)\Phi(\rho)\rho^{2}\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}, \tag{5.9}\] where we used the fact that \(C_{1}(n,k,j,\rho)+C_{2}(n,k,j,\rho)n\geqslant 0\) in the second inequality by a direct computation.
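For completeness, the positivity of \(C_{2}(n,k,j,\rho)\) claimed above can be checked term by term; the following short verification uses that \(K=-1\) gives \(\phi^{\prime}(\rho)=\cosh\rho\geqslant 1\) and \(\Phi(\rho)=K(1-\phi^{\prime}(\rho))=\cosh\rho-1\geqslant 0\). The first bracketed term satisfies \[\frac{(n-k)(k-j)}{2n}\Phi(\rho)\geqslant 0,\qquad\text{since }-1\leqslant j<k\leqslant n,\] and, using \(\phi^{\prime}(\rho)\geqslant 1\) together with \(n(k-j)-j-1=(n+1)(k-j-1)+(n-k)\geqslant 0\), the second satisfies \[\frac{n(k-j)-j-1}{2n}\phi^{\prime}(\rho)-\frac{n(k-j)-2k}{2n}\geqslant\frac{\left(n(k-j)-j-1\right)-\left(n(k-j)-2k\right)}{2n}=\frac{2k-j-1}{2n}\geqslant 0.\] Moreover, the two terms cannot vanish simultaneously for \(\rho>0\): \(2k-j-1=0\) forces \(k=0\) and \(j=-1\), in which case the first term equals \(\Phi(\rho)/2>0\). Hence \(C_{2}(n,k,j,\rho)>0\).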
Finally, combining (5.9) with the estimate (4.25) for \(\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}\), we obtain the inequality \[\int_{M}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g}-\int_{\partial \overline{B}_{\rho}}\Phi\sigma_{k}(\kappa)\mathrm{d}\mu_{g}\] \[\geqslant \left(\binom{n}{k}\frac{n(n-k)(k-j)}{4}\frac{\phi^{\prime k-2}( \rho)\Phi(\rho)}{\mathrm{Area}(\mathbb{S}^{n})\phi^{n+k+2}(\rho)}+O(\varepsilon )\right)\alpha^{2}(\Omega).\] We are now in the position to establish the stability of the second type of weighted inequality in \(\mathbb{H}^{n+1}\), which states that for any fixed \(0\leqslant k\leqslant n\), \(-1\leqslant j<k\), if \(\mathscr{A}_{j}(\Omega)=\mathscr{A}_{j}(\overline{B}_{\rho})\) holds, then \[\int_{M}\phi^{\prime}\sigma_{k}(\kappa)\mathrm{d}\mu_{g}-\int_{\partial \overline{B}_{\rho}}\phi^{\prime}\sigma_{k}(\kappa)\mathrm{d}\mu_{g} \geqslant C\alpha^{2}(\Omega),\] where \(C>0\) is a constant independent of \(\Omega\). This is the quantitative version of (1.21). Proof of (1.24) in Theorem 1.6.: Substituting (4.12) (when \(j=-1\)) and (4.14) (when \(0\leqslant j<k\)) in (3.23), using (2.2) we get \[\int_{M}\phi^{\prime}\sigma_{k}(\kappa)\mathrm{d}\mu_{g}-\int_{ \partial\overline{B}_{\rho}}\phi^{\prime}\sigma_{k}(\kappa)\mathrm{d}\mu_{g} \tag{5.10}\] \[= \int_{\mathbb{S}^{n}}\binom{n}{k}\left[\frac{(n-k)(j-k)}{2}\phi^{ n-k-2}(\rho)\phi^{\prime k+1}(\rho)-K\frac{1+n}{2}\phi^{n-k}(\rho)\phi^{\prime k +1}(\rho)\right.\] \[\left.+K\frac{(k+1)(k-j-1)}{2}\phi^{n-k}(\rho)\phi^{\prime k-1}( \rho)\right]\rho^{2}u^{2}\mathrm{d}A\] \[+\int_{\mathbb{S}^{n}}\binom{n}{k}\left[\frac{(n-k)(k-j)}{2n}\phi ^{n-k-2}(\rho)\phi^{\prime k+1}(\rho)\right.\] \[\left.-K\frac{(k+1)(k-j-1)}{2n}\phi^{n-k}(\rho)\phi^{\prime k-1}( \rho)\right]\rho^{2}|\nabla u|^{2}\mathrm{d}A\] \[+O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon)\| \nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}\] \[= C_{3}(n,k,j,\rho)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+C_{4}(n,k,j, \rho)\|\nabla 
u\|_{L^{2}(\mathbb{S}^{n})}^{2}\] \[+O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}.\] Note that when \(K=-1\), \(C_{4}(n,k,j,\rho)>0\), thus by the Poincaré-type estimates (4.9) and (4.10), we have \[\int_{M}\phi^{\prime}\sigma_{k}(\kappa)\mathrm{d}\mu_{g}-\int_{\partial\overline{B}_{\rho}}\phi^{\prime}\sigma_{k}(\kappa)\mathrm{d}\mu_{g}\] \[\geqslant (C_{3}(n,k,j,\rho)+C_{4}(n,k,j,\rho)n)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+\frac{1}{2}C_{4}(n,k,j,\rho)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}\] \[+O(\varepsilon)\|u\|_{L^{2}(\mathbb{S}^{n})}^{2}+O(\varepsilon)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}\] \[\geqslant \frac{1}{2}C_{4}(n,k,j,\rho)\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}\] \[\geqslant \binom{n}{k}\frac{(n-k)(k-j)}{4n}\phi^{n-k-2}(\rho)\phi^{\prime k+1}(\rho)\rho^{2}\|\nabla u\|_{L^{2}(\mathbb{S}^{n})}^{2}, \tag{5.11}\] where we used the fact that \(C_{3}(n,k,j,\rho)+C_{4}(n,k,j,\rho)n\geqslant 0\) in the second inequality by a direct computation. Finally, combining (5.11) with the estimate (4.25), we conclude that \[\int_{M}\phi^{\prime}\sigma_{k}(\kappa)\mathrm{d}\mu_{g}-\int_{\partial\overline{B}_{\rho}}\phi^{\prime}\sigma_{k}(\kappa)\mathrm{d}\mu_{g}\] \[\geqslant \left(\binom{n}{k}\frac{n(n-k)(k-j)}{4}\frac{\phi^{\prime k+1}(\rho)}{\mathrm{Area}(\mathbb{S}^{n})\phi^{n+k+2}(\rho)}+O(\varepsilon)\right)\alpha^{2}(\Omega).\] We remark that the quantitative weighted quermassintegral inequalities in \(\mathbb{S}^{n+1}\) can also be proved using this method under the same condition as in Theorem 1.6 for some special \(j<k\). However, we cannot confirm that the coefficient of \(\int_{\mathbb{S}^{n}}|\nabla u|^{2}\mathrm{d}A\) is positive for all \(-1\leqslant j<k\) (\(0\leqslant k\leqslant n\)) in the approximate expression of the curvature integral deficit. Nevertheless, we believe that the quantitative weighted quermassintegral inequalities hold for all \(j<k\) in \(\mathbb{S}^{n+1}\).
**Acknowledgments.** The authors would like to thank Professor Yong Wei for his helpful discussions and constant support. The authors were supported by the National Key Research and Development Program of China (Grant No. 2021YFA1001800).
2308.14897
Statistically Efficient Variance Reduction with Double Policy Estimation for Off-Policy Evaluation in Sequence-Modeled Reinforcement Learning
Offline reinforcement learning aims to utilize datasets of previously gathered environment-action interaction records to learn a policy without access to the real environment. Recent work has shown that offline reinforcement learning can be formulated as a sequence modeling problem and solved via supervised learning with approaches such as the decision transformer. While these sequence-based methods achieve competitive results over return-to-go methods, especially on tasks that require longer episodes or with scarce rewards, importance sampling is not considered to correct the policy bias when dealing with off-policy data, mainly due to the absence of the behavior policy and the use of deterministic evaluation policies. To this end, we propose DPE: an RL algorithm that blends offline sequence modeling and offline reinforcement learning with Double Policy Estimation (DPE) in a unified framework with statistically proven properties on variance reduction. We validate our method in multiple tasks of OpenAI Gym with D4RL benchmarks. Our method brings performance improvements over selected methods, outperforming SOTA baselines in several tasks, demonstrating the advantages of enabling double policy estimation for sequence-modeled reinforcement learning.
Hanhan Zhou, Tian Lan, Vaneet Aggarwal
2023-08-28T20:46:07Z
http://arxiv.org/abs/2308.14897v1
Statistically Efficient Variance Reduction with Double Policy Estimation for Off-Policy Evaluation in Sequence-Modeled Reinforcement Learning ###### Abstract Offline reinforcement learning aims to utilize datasets of previously gathered environment-action interaction records to learn a policy without access to the real environment. Recent work has shown that offline reinforcement learning can be formulated as a sequence modeling problem and solved via supervised learning with approaches such as the decision transformer. While these sequence-based methods achieve competitive results over return-to-go methods, especially on tasks that require longer episodes or with scarce rewards, importance sampling is not considered to correct the policy bias when dealing with off-policy data, mainly due to the absence of the behavior policy and the use of deterministic evaluation policies. To this end, we propose DPE: an RL algorithm that blends offline sequence modeling and offline reinforcement learning with Double Policy Estimation (DPE) in a unified framework with statistically proven properties on variance reduction. We validate our method in multiple tasks of OpenAI Gym with D4RL benchmarks. Our method brings performance improvements over selected methods, outperforming SOTA baselines in several tasks, demonstrating the advantages of enabling double policy estimation for sequence-modeled reinforcement learning. ## 1 Introduction Many real-world reinforcement learning (RL) problems, such as autonomous vehicle coordination and data-driven control problems [1; 2; 3], are naturally formulated as sequential decision-making problems in which the agent takes an action based on its observation, receives a reward from the environment, and then observes the next state [4; 5; 6], and so on [7; 8; 9].
Usually modeled as a Markov Decision Process (MDP), the agent takes an action based solely on its current state information (which summarizes the whole trajectory history), so a scheme where sequences are divided into individual steps and then solved with algorithms like Temporal-Difference learning (TD-learning) [10] is commonly adopted; this can be derived via the Bellman equations to solve RL problems mathematically. Recent advances in offline reinforcement learning (RL) algorithms provide a promising approach for sequential decision-making tasks without the need for online interactions with an environment [11; 12]. This approach is particularly appealing when online interactions are costly or when there is an abundance of offline experiences available. Recent works have demonstrated that generative models [13; 14; 15] that are widely used in language and vision tasks can be applied to maximize the likelihood of trajectories in an offline dataset without temporal difference learning [16]; a notable example is the Decision Transformer (DT) [17], which uses the transformer architecture [18] for decision-making. Such a training paradigm, in which RL is cast as supervised learning, is known as Reinforcement Learning via Supervised Learning (RvS) [19; 20; 21]. Instead of learning a value-based algorithm for decision-making, RvS-based methods often consider the learning task as a prediction problem: to predict an action that will lead to a certain outcome or reward when given a sequence of past states and actions (e.g., using causal transformer architectures). These methods have gained significant attention due to their algorithmic and implementation simplicity while delivering robust performance on several offline-RL benchmarks.
Learning an RvS policy \(\pi_{e}\) requires off-policy learning, since we need to estimate the expected return of the learned policy \(\pi_{e}\) during training from offline experiences/trajectories that are generated using a different behavior policy \(\pi_{b}\). We note that online policy evaluation is usually expensive, risky, or even unethical for many real-world problems [22]. When the actual environment is not accessible, the trajectories sampled by \(\pi_{b}\) can be used to evaluate \(\pi_{e}\), which is known as off-policy evaluation (OPE) [7]. An accurate OPE is crucial to evaluate and optimize a policy during training from offline datasets; the concept of importance sampling (IS) rectifies the discrepancy between the distributions of the behavior policy \(\pi_{b}\) and the evaluation policy \(\pi_{e}\) [23]. IS-based off-policy evaluation methods have also seen a lot of interest recently, especially for short-horizon problems [24; 25], including contextual bandits [26]. However, the application of IS to sequence-modeling-based RvS methods is difficult due to a number of challenges. The behavior policies for collecting experience/trajectory data are often not available, while the evaluation policies in RvS methods are typically deterministic, making the reweighting of different experiences/trajectories infeasible. Further, for long-horizon problems the variance of IS-based approaches tends to be too high to provide informative results, since the variance of the product of importance weights may grow exponentially as the horizon grows [27; 28]. Although it is intuitive to assume that replacing the behavior policy with its empirical estimate can harm the performance and increase the variance of a policy, recent works in several domains including multi-armed bandits [29] and off-policy evaluation [6; 30; 31] have shown that applying an estimate of the behavior policy can improve the mean squared error of importance-sampling policy evaluation [32].
In this paper, we study the following problem: given a dataset of trajectories sampled by a behavior policy and trajectories generated by a sequence-modeling-based evaluation policy (in this paper we select the Decision Transformer to demonstrate our approach), estimate both the behavior policy and the target policy and then compute the importance sampling estimate, which we call double-policy-estimation importance sampling. We further provide a theoretical analysis of the properties of such estimators and show that this double policy estimation reduces the variance of the learned target policy. Specifically, we propose to introduce an asymptotic estimate of both the behavior policy \(\pi_{b}\), which is used to sample and generate the dataset, and the target evaluation policy \(\pi_{t}\), which is the policy we attempt to learn and correct, as double policy estimation, to calculate the likelihood ratio for all state-action pairs in the off-policy data. Although it may seem that such an estimation would bring even worse performance as it introduces more uncertainty [33; 34], recent research in several domains including multi-armed bandits [32; 35], Monte Carlo integration [36], and causal inference [24] has shown that estimating the behavior policy could potentially improve the mean squared error of importance sampling policy evaluation, which partially motivates this design. Another direct motivation is that for many generative-model-based RvS methods like the decision transformer, a common scenario in offline reinforcement learning is that both \(\pi_{b}\) and \(\pi_{t}\) are inaccessible, which calls for a design with double policy estimation. We prove that DPE can statistically lower the mean squared error of importance sampling OPE with lower variance. We implement the proposed DPE on D4RL environments and compare DPE with SOTA baselines including DT [17], RvS [19], CQL [37], BEAR [38], UWAC [39], BC [40], and IQL [41].
We empirically find that double-policy-estimation-based importance sampling also brings an improvement to off-policy evaluation on the D4RL environments: DPE achieves better performance than the original decision transformer on almost all datasets and outperforms the state-of-the-art baselines on several datasets, with further analysis discussing the effects and properties of the proposed double policy estimator.

## 2 Background

### Markov Decision Process and Sequence-Based Method in Reinforcement Learning

We assume that the environment is a Markov decision process with a finite horizon and episodic nature, where the state space is denoted as \(\mathcal{S}\), the action space as \(\mathcal{A}\), and the environment possesses transition probabilities represented by \(P\), a reward function denoted as \(R\), a horizon length of \(H\), a discount factor of \(\gamma\), and an initial state distribution \(d_{0}\) [42; 32]. A policy, denoted as \(\pi\), is considered Markovian if it maps the current state to a probability distribution over actions. In contrast, a policy is classified as non-Markovian if its action distribution is dependent on past actions or states [43; 44]. We assume \(\mathcal{S}\) and \(\mathcal{A}\) are finite for simplicity and that probability distributions are probability mass functions. In off-policy policy evaluation, we are given a fixed evaluation policy, \(\pi_{e}\), and a data set of \(m\) trajectories together with the policies that generated them: \(\mathcal{D}=\{\omega^{i},\pi_{b}^{(i)}\}_{i=1}^{m}\), where \(\omega^{i}\sim\pi_{b}^{(i)}\). We assume that \(\forall\{\omega^{i},\pi_{b}^{(i)}\}\in\mathcal{D}\), \(\pi_{b}^{(i)}\) is Markovian, i.e., actions in \(\mathcal{D}\) are independent of past states and actions given the immediately preceding state [45].
Sequence-based methods in reinforcement learning, which are trained in the reinforcement-learning-via-supervised-learning (RvS) manner, such as the Decision Transformer, train a model using supervised learning on a dataset of trajectories to predict \(p_{\mathcal{D}}(a|s,R)\), i.e., given a cumulative reward \(R=\sum_{t}\gamma^{t}r_{t}\), to predict the probability of the next action conditioned on the current state. Then, at the deployment stage, the model takes actions conditioned on a desired target return value. Our goal is to design an off-policy estimator that takes \(\mathcal{D}\) as input and estimates both the behavior policy \(\pi_{b}\) and the evaluation policy \(\pi_{e}\), enabling importance sampling in sequence modeling methods. The Decision Transformer processes a trajectory \(\mathbf{\omega}\) as a sequence consisting of 3 types of input to be tokenized: the states, the actions selected, and the return-to-go [18; 17]. Specifically, it learns a deterministic model \(\pi_{\text{DT}}(a_{t}|a_{-K:t},s_{-K:t},\mathbf{\tau}_{-K:t})\), where \(-K\) denotes the past \(K\) timesteps, and is trained to predict the action token at timestep \(t\). During the evaluation, DT is given a desired reward \(g_{0}\) and the initial state \(s_{0}\) at the beginning and executes the action it generates. Once an action \(a_{t}\) is generated and then executed, the next state \(s_{t+1}\sim P(\cdot|s_{t},a_{t})\) and reward \(r_{t}=R(s_{t},a_{t})\) are observed, together with the return-to-go \(g_{t+1}=g_{t}-r_{t}\); this new sequence is appended to the previous input. The process is repeated until the terminal state. DT is then trained under the standard \(l_{2}\) loss \(\nabla_{\theta}J(\pi_{DT})=\frac{1}{K}\sum_{k}\nabla_{\theta_{DT}}(a_{k}-\hat{a}_{k})^{2}\) in a supervised learning way.
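The evaluation-time loop described above can be sketched as follows; `model` and `env` are illustrative stand-ins (assumptions for this sketch, not the actual DT implementation), and the code only tracks how tokens are appended and how the return-to-go is updated:

```python
def dt_rollout(model, env, g0, horizon):
    """Evaluation-time loop of a Decision-Transformer-style policy (sketch).

    `model(rtgs, states, actions)` stands in for the trained sequence model
    that predicts the next action token; `env.step(a)` returns
    (next_state, reward, done).  Both interfaces are assumed for
    illustration only.
    """
    s = env.reset()
    states, actions, rtgs, rewards = [s], [], [g0], []
    for _ in range(horizon):
        a = model(rtgs, states, actions)   # predict the next action token
        s, r, done = env.step(a)
        actions.append(a)
        rewards.append(r)
        rtgs.append(rtgs[-1] - r)          # g_{t+1} = g_t - r_t
        states.append(s)                   # append the new tokens to the input
        if done:
            break
    return rewards, rtgs


class _ConstEnv:
    """Toy environment that emits reward 1 at every step."""
    def reset(self):
        return 0.0
    def step(self, a):
        return 0.0, 1.0, False


rews, rtgs = dt_rollout(lambda g, s, a: 0.0, _ConstEnv(), g0=10.0, horizon=4)
# rtgs decreases by the observed reward at each step: [10.0, 9.0, 8.0, 7.0, 6.0]
```

The essential point is the third list: the target return is decremented by each observed reward, so the model is always conditioned on the return it still has to achieve.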
### Importance Sampling in Reinforcement Learning Importance Sampling (IS) is a method for reweighting returns generated by a behavior policy \(\pi_{b}\) to produce an unbiased estimate of the returns for the evaluation policy. To obtain a reliable numerical integration of \(f\) as \(\theta=\int f(x)dx\), assume there is a family of sampling distributions, \(p(x;\eta)\), with parameter \(\eta\), that generates a random trajectory \(\omega:=(s_{0},a_{0},r_{0},\cdots,s_{L-1},a_{L-1},r_{L-1})\) from \(p(x;\eta_{0})\), and let \(g(\omega):=\sum_{t=0}^{L-1}\gamma^{t}r_{t}\) be the discounted return, with \(\eta_{0}\) fixed in advance: an ordinary importance sampling (OIS) method provides an estimator of \(\theta\) in the form of \(\tilde{\theta}=\frac{1}{n}\sum_{i=1}^{n}\frac{f(x_{i})}{p(x_{i};\eta_{0})}\). Then \(\tilde{\theta}\) is an unbiased estimator of \(\theta\) and \(\tilde{\theta}\) is guaranteed to converge to \(\theta\) as \(n\) goes to infinity according to the strong law of large numbers [46]. In Monte Carlo problems with high-dimensional \(x\), the target density \(p(x)\) can be written in a chain-like decomposition as \(p(x)=p(x_{1})\prod_{t=2}^{d}p(x_{t}|x_{[1:t-1]})\), where \(x_{[1:t]}=(x_{1},\cdots,x_{t})\). With a set of \(m\) trajectories and the policy that generated each trajectory, the IS off-policy estimate of \(v(\pi_{e})\) is: \(\mathrm{IS}(\pi_{e},\mathcal{D})\coloneqq\frac{1}{m}\sum_{i=1}^{m}g(\omega^{(i)})\prod_{t=0}^{L-1}\frac{\pi_{e}(a_{t}^{(i)}|s_{t}^{(i)})}{\pi_{b}(a_{t}^{(i)}|s_{t}^{(i)})}\). We refer to this as the ordinary importance sampling (\(\mathrm{OIS}\)) estimator, which uses the true behavior policy, and refer to \(\frac{\pi_{e}(a|s)}{\pi_{b}(a|s)}\) as the OIS weight for action \(a\) in state \(s\). A standard approach to dealing with off-policy data is to correct the policy using importance sampling (IS) by applying cumulative density ratios \(\nu_{0:t}\) [47; 29].
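In code, the OIS estimate \(\mathrm{IS}(\pi_{e},\mathcal{D})\) is a short loop per trajectory; the `pi_e(a, s)` / `pi_b(a, s)` probability interfaces below are illustrative assumptions, not a particular library's API:

```python
def ois_estimate(trajectories, pi_e, pi_b, gamma=1.0):
    """Ordinary importance sampling: average of g(w) * prod_t pi_e/pi_b
    over trajectories drawn under the behavior policy pi_b.  Each
    trajectory is a list of (state, action, reward) tuples."""
    total = 0.0
    for traj in trajectories:
        g = sum(gamma**t * r for t, (_, _, r) in enumerate(traj))  # return g(w)
        w = 1.0
        for s, a, _ in traj:                                       # OIS weight
            w *= pi_e(a, s) / pi_b(a, s)
        total += w * g
    return total / len(trajectories)


# One-step bandit check: pi_b uniform, pi_e puts 0.8 on action 1, reward = a.
trajs = [[(0, 0, 0.0)], [(0, 1, 1.0)]]      # one sample of each action
pi_e = lambda a, s: 0.8 if a == 1 else 0.2
pi_b = lambda a, s: 0.5
v_hat = ois_estimate(trajs, pi_e, pi_b)     # (0.4*0 + 1.6*1)/2 = 0.8
```

On this toy data the estimate recovers the true value \(\mathbb{E}_{\pi_e}[r]=0.8\) exactly, since each action happens to appear exactly as often as under \(\pi_b\).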
Then the policy gradient \(Z(\theta)\) can be rewritten as an expectation over \(p_{\pi_{b}}\) and further estimated using an equivalent empirical expectation. The off-policy version of the classic REINFORCE algorithm [48] recognizes \(Z(\theta)=\mathbb{E}[\nu_{0:H}\sum_{t=0}^{H}r_{t}\sum_{t=0}^{H}g_{t}]\) (recall that \(E\) is understood as \(\mathbb{E}_{p_{\pi_{b}}}\)) and uses the estimated policy gradient given by replacing \(\mathbb{E}\) with \(\mathbb{E}_{n}\). Later works obtained a policy gradient in terms of the Q-function as \(Z(\theta)=\mathbb{E}[\sum_{t=0}^{H}\nu_{0:t}g_{t}q_{t}]\) [49].

## 3 Related Work

### Sequence-Based Method in Reinforcement Learning

Much recent progress has been made on formulating the offline decision-making procedure in offline reinforcement learning as a context-conditioned sequence modeling problem [16; 17]. Compared to temporal-difference methods, these works consider a paradigm that utilizes predictive models to generate desired actions from the observation sequence and the task specification, like a supervised learning problem [20; 21; 19], rather than learning a Q-function or policy gradients. Specifically, the Decision Transformer model [17] trains the transformer architecture [18] as a model-free context-conditioned policy that takes the encoded reward-to-go, state, and action sequence as input to predict the action for the next step, and the Trajectory Transformer [16] trains a transformer that first discretizes each dimension of the input sequence and shows that beam search can be used to improve upon the model-free performance. Various attempts have also been made to improve transformers in multi-agent RL [50; 51; 52; 53] and other areas including meta-RL [54; 5] and multi-task RL [55]. However, these works do not consider importance sampling for offline reinforcement learning.
Our work extends this area with the proposed double policy estimation and further improves upon the asymptotic variance of the ordinary method that uses the true sampling distribution.

### Importance Sampling in Reinforcement Learning

The use of off-policy samples within reinforcement learning is a popular research area [56; 57; 58]. Many of these methods rely on OIS or variants of OIS to correct for bias. The use of importance sampling ensures unbiased estimates, but at the cost of considerable variance, as quantified by the ESS measure [59]. The problem of sampling error applies to any variant of importance sampling using OIS weights, e.g., weighted importance sampling and per-decision importance sampling [23], the doubly robust estimator [22], and the MAGIC estimator [60]. On-policy Monte Carlo policy evaluation is also subject to sampling error, as it is a specific case of ordinary importance sampling where the behavior policy and the evaluation policy are identical. Among these importance sampling methods, [28] is the closest work, but it considers an estimated behavior policy whose estimate comes from the same set of data used to compute the importance sampling estimate; in contrast, we estimate the behavior policy in the training phase from the dataset and estimate the target policy from data generated by the target policy.

## 4 Methodology

In this section, we present the primary focus of our work: double policy estimation (DPE) importance sampling that corrects for sampling error in sequence-modeling-based reinforcement learning. The key idea is to obtain the maximum likelihood estimates of both the behavior and evaluation policies, \(\hat{\pi}_{b}^{\eta}\) and \(\hat{\pi}_{e}^{\psi}\), and use them for computing the DPE cumulative density ratio. We further analyze the theoretical properties of DPE and prove that it is guaranteed to reduce the asymptotic variance of the policy parameters. A table of key notations with explanations is summarized in the Appendix.
### DPE for Sequence-Modeling-Based Reinforcement Learning

Let \(\mathcal{D}\) be a set of off-policy trajectories of length \(H+1\) collected by a behavior policy \(\pi_{b}\), denoted by \(\mathcal{D}=\{\omega_{i},\ \forall i\}\), with each trajectory \(\omega_{i}=\{(s_{0}{}^{(i)},a_{0}{}^{(i)},r_{0}{}^{(i)},\cdots,s_{H}{}^{(i)},a_{H}{}^{(i)},r_{H}{}^{(i)})\}\). For a known behavior policy \(\pi_{b}\) and evaluation policy \(\pi_{e}^{\theta}\), OIS leverages the cumulative density ratio \(\nu_{0:t}=\prod_{k=0}^{t}v_{k}\) (with density ratio \(v_{k}=\pi_{e}^{\theta}(a_{k}|s_{k})/\pi_{b}(a_{k}|s_{k})\)) to reweight the policy scores \(g_{t}=\nabla_{\theta}\log\pi_{e}^{\theta}(a_{t}|s_{t})\), such that they are unbiased estimates for the evaluation policy \(\pi_{e}^{\theta}\). In the off-policy version of the classic REINFORCE algorithm [48], the policy gradient under OIS is recognized as \(Z(\theta)=\mathbb{E}[\nu_{0:H}q_{0:H}\sum_{t=0}^{H}g_{t}]\), where \(q_{t:H}=\sum_{s=t}^{H}r_{s}\) is the return-to-go from step \(t\) to step \(H\) in trajectory \(\omega\) (generated from the behavior policy \(\pi_{b}\)). OIS can be easily extended to its step-wise form [61; 49] with \(Z(\theta)=\mathbb{E}[\sum_{t=0}^{H}\nu_{0:t}q_{t:H}g_{t}]\). OIS has been commonly used in off-policy reinforcement learning. We note that when RL is recast as an offline sequence modeling problem (as in the Decision Transformer [17] and RvS [19]), it also relies on off-policy learning. However, there are three challenges preventing OIS from being directly applied to sequence-modeling-based RL. First, offline RL datasets often do not provide the actual behavior policy used for collecting trajectories, making it impossible to access \(\pi_{b}\) in importance sampling. Second, sequence-modeling-based RL is usually trained using a transformer structure to represent the evaluation policy and to generate deterministic action outputs [17].
We need to extend them to stochastic policies to obtain \(\pi_{e}\) in importance sampling. Finally, OIS is known to have a high variance [62], also known as a high sampling error in importance sampling [28]. Methods to reduce the importance-sampling variance are needed for sequence-modeling-based RL. To this end, we propose two maximum likelihood estimators of (stochastic) behavior and evaluation policies in sequence-modeling-based RL, denoted by \(\hat{\pi}_{b}^{\eta}\) and \(\hat{\pi}_{e}^{\psi}\). A baseline return \(b_{t}^{\xi}\) is further estimated (using a mean-square error loss) in sequence-modeling-based RL and is leveraged to mitigate the variance in policy learning. Given a set \(\mathcal{D}\) of \(m\) trajectories, the proposed DPE with respect to the off-policy version of the classic REINFORCE algorithm [48] is defined as: \[Z_{\text{DPE}}(\theta|\eta,\psi,\xi,\mathcal{D})=\mathbb{E}\left[\left(q_{0:H}-b_{0}^{\xi}\right)\prod_{t=0}^{H}\frac{\pi_{e}^{\psi}(a_{t}|s_{t})}{\pi_{b}^{\eta}(a_{t}|s_{t})}\left(\sum_{t=0}^{H}g_{t}\right)\right]. \tag{1}\] DPE can also be applied to the step-wise form [61; 49], by replacing the density ratio \(v_{k}\) with its estimator \(\hat{v}_{k}=\hat{\pi}_{e}^{\psi}(a_{k}|s_{k})/\hat{\pi}_{b}^{\eta}(a_{k}|s_{k})\) and by subtracting the return baseline \(b_{t}^{\xi}\), i.e., \[Z_{\text{DPE}}(\theta|\eta,\psi,\xi,\mathcal{D})=\mathbb{E}\left[\sum_{t=0}^{H}(q_{t:H}-b_{t}^{\xi})\hat{v}_{0:t}g_{t}\right]. \tag{2}\] The key idea of our DPE estimator for importance sampling is to leverage the maximum likelihood estimates of the behavior and evaluation policies, denoted by \(\hat{\pi}_{b}^{\eta}\) and \(\hat{\pi}_{e}^{\psi}\), respectively.
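A minimal numerical sketch of the per-step form in Eq. (2), under the simplifying assumption of scalar actions and a Gaussian evaluation policy \(\mathcal{N}(\theta s,\sigma^{2})\) so that the score \(g_{t}\) has the closed form \((a_{t}-\theta s_{t})s_{t}/\sigma^{2}\); the estimated policies and the baseline are passed in as illustrative callables:

```python
def dpe_step_gradient(trajs, theta, sigma, pi_e_hat, pi_b_hat, baseline):
    """Per-step DPE gradient estimate (sketch of Eq. (2)).

    trajs: list of trajectories, each a list of (s, a, r) tuples.
    pi_e_hat(a, s) / pi_b_hat(a, s): *estimated* evaluation / behavior
    densities (stand-ins for the fitted policies).  baseline(t): fitted
    return baseline b_t.  The evaluation policy is N(theta*s, sigma^2),
    whose score w.r.t. theta is (a - theta*s) * s / sigma^2.
    """
    grad = 0.0
    for traj in trajs:
        rewards = [r for _, _, r in traj]
        v = 1.0                                  # cumulative ratio v_{0:t}
        for t, (s, a, _) in enumerate(traj):
            v *= pi_e_hat(a, s) / pi_b_hat(a, s)
            q = sum(rewards[t:])                 # return-to-go q_{t:H}
            g = (a - theta * s) * s / sigma**2   # Gaussian score
            grad += (q - baseline(t)) * v * g
    return grad / len(trajs)
```

For example, with a single one-step trajectory `[(1.0, 1.0, 1.0)]`, `theta=0`, `sigma=1`, unit density ratios, and a zero baseline, the estimate reduces to `(q - b) * v * g = 1.0`.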
We introduce the proposed maximum likelihood estimators for \(\hat{\pi}_{b}^{\eta}\) and \(\hat{\pi}_{e}^{\psi}\) and the minimum-mean-square estimator for \(b^{\xi}\) as follows: Maximum likelihood estimator for the behavior policy \(\hat{\pi}_{b}^{\eta}\). We consider estimating \(\hat{\pi}_{b}\) with maximum likelihood as \(\hat{\pi}_{b}^{\eta}:=\operatorname*{argmax}_{\pi_{b}}\sum_{\omega\in\mathcal{D}}\sum_{t}\log\pi_{b}(a|\omega_{t-:t})\), so that it can provide a behavior-policy action-probability estimate during the training of DT. Specifically, in this work, for the policy-network estimator we consider learning \(\pi_{b}\) from \(\mathcal{D}\) as a Gaussian distribution over actions with mean and standard deviation estimated from a neural network. Maximum likelihood estimator for the target policy \(\hat{\pi}_{e}^{\psi}\). One key insight in this paper is that when assuming a Gaussian policy for the target-policy estimation, the estimator minimizes the mean-square error of the action predictions; thus it coincides with sequence-modeling-based RL like DT trained with an MSE loss, whose per-timestep MSE during training serves as the variance. When obtaining the target-policy estimator, although for the decision transformer \(\pi_{b}\) is often not directly available and \(\pi_{b}(a|s,R)\) cannot be used as this estimator, and estimating an ongoing learning method might be unstable and inefficient, we point out that this weight at a specific timestep \(t\) can be considered as a Gaussian distribution with a mean of \(\hat{a}_{t}\) and a variance equal to the corresponding MSE. We explain in detail why this can serve as the target-policy estimate later in the main theorem. Minimum-mean-square estimator for the baseline \(b^{\xi}\). The baseline \(b^{\xi}\) is trained to predict the return-to-go by minimizing the loss \(\sum_{i=1}^{m}\left[q_{t:H}-b_{t}^{\xi}\right]^{2}\), which can be easily incorporated into sequence-modeling-based reinforcement learning methods like the Decision Transformer.
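For the Gaussian behavior-policy estimator, the maximum-likelihood fit has a closed form when the mean is linear in the state; the following sketch (a deliberately simplified stand-in for the neural estimator described above) makes the MLE/MSE connection concrete:

```python
import numpy as np

def fit_gaussian_policy(states, actions):
    """MLE fit of pi_b(a|s) = N(w*s, sigma^2) with a scalar linear mean.

    For a Gaussian likelihood, the MLE of the mean parameter is the
    least-squares solution, and the MLE of sigma^2 is the mean squared
    residual -- the same MSE quantity that the text reuses as the
    target-policy variance."""
    s = np.asarray(states, dtype=float)
    a = np.asarray(actions, dtype=float)
    w = float(s @ a) / float(s @ s)            # least-squares slope
    sigma2 = float(np.mean((a - w * s) ** 2))  # MLE variance = MSE
    return w, sigma2
```

With `states=[1, 2, 3]` and `actions=[2, 4, 6]` this recovers a slope of 2 with zero residual variance; a neural network replaces the linear mean in practice but optimizes the same log-likelihood.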
Training sequence-modeling-based RL using DPE. We summarize the general architecture of the learning pipeline in Algorithm 1, applying DPE to the sequence-modeling-based target policy (Decision Transformer). We first obtain an empirical estimate of the behavior policy \(\pi_{b}\) prior to the training of the Decision Transformer in a warm-up phase. Then, during the training phase, we acquire the target-policy estimator as a Gaussian distribution \(\hat{a}_{t}^{\psi}\sim\mathcal{N}(\hat{a}_{t},\hat{\sigma}^{2})\), where \(\hat{a}_{t}\) is the mean generated by the decision transformer and \(\hat{\sigma}^{2}\) is the MSE from the loss calculated at the specific timestep, which serves as the variance. We present pseudocode of the DPE training procedure in the appendix. ### Problem Formulation and DPE Objective In offline sequence-modeling-based reinforcement learning, we are given a data set of \(m\) offline trajectories \(\omega=\{(s_{0},a_{0},r_{0},\ldots)\}\) and the behavior policy \(\pi_{b}\) that collected them. We denote the trajectories that are generated by the decision transformer as \(\hat{\omega}=\{(\hat{s}_{0},\hat{a}_{0},\hat{r}_{0},\ldots)\}\). We consider the following two joint objectives: \[\left\{\begin{aligned} H(\pi_{t})&=-\mathbb{E}\left[\sum_{t}\log(2\pi e\sigma_{t}^{2})\right],\\ L&=-E_{\pi_{t}}\log q(a_{t})\end{aligned}\right. \tag{3}\] where we minimize \(L-\beta H\) for \(\pi_{t}\): minimizing \(H(\pi_{t})\) makes the stochastic policy approximate the target-policy decision transformer, and \(L\) maximizes the likelihood of \(a_{t}\). We then choose \(\pi_{b}(\eta)\) to maximize the likelihood and \(b(\xi)\) to minimize the squared error \(\sum_{i=1}^{n}w^{2}\cdot(G_{i}-b(\xi))^{2}\).
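The two objectives in Eq. (3) can be evaluated directly for a Gaussian target policy; the sketch below is one illustrative scalar reading of the equation (using the Gaussian entropy \(\tfrac{1}{2}\log(2\pi e\sigma_{t}^{2})\) and the Gaussian negative log-likelihood, with an arbitrary `beta`), not the exact training code:

```python
import numpy as np

def gaussian_nll(a, mu, sigma2):
    """Negative log-likelihood of action a under N(mu, sigma2)."""
    return 0.5 * (np.log(2.0 * np.pi * sigma2) + (a - mu) ** 2 / sigma2)

def dpe_joint_objective(actions, mus, sigma2s, beta=0.1):
    """One reading of the joint objective L - beta*H from Eq. (3):
    L is the empirical NLL of the actions under the Gaussian target
    policy, H averages the per-step Gaussian entropies
    0.5*log(2*pi*e*sigma_t^2), and beta trades likelihood against
    entropy (beta=0.1 is an illustrative choice)."""
    L = float(np.mean([gaussian_nll(a, m, v)
                       for a, m, v in zip(actions, mus, sigma2s)]))
    H = float(np.mean([0.5 * np.log(2.0 * np.pi * np.e * v)
                       for v in sigma2s]))
    return L - beta * H
```

Driving the per-step variances \(\sigma_{t}^{2}\) toward zero shrinks the entropy term, recovering the deterministic decision-transformer policy as the limiting case.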
Note that the DPE objective can also be written as: \[\text{DPE}:=\frac{1}{m}\sum_{i=1}^{m}q(h_{t})\prod_{t=0}^{L-1} \frac{\hat{\pi}_{t}^{(i)}(a_{t}^{(i)}|s_{t}^{(i)})}{\hat{\pi}_{b}^{(i)}(a_{t}^ {(i)}|s_{t}^{(i)})}=\frac{1}{m}\sum_{i=1}^{m}\frac{\hat{w}_{\pi_{t}}(h_{t})}{ \hat{w}_{\pi_{b}}(h_{t})}q(h_{t}) \tag{4}\] The variance of \(\tilde{\theta}\) is given by \(\delta^{2}(f)/n\), where \(\delta^{2}=\delta^{2}(f)=\int\left\{\frac{f(x)}{p(x;\eta_{0})}-\theta\right\}^{2}p(x;\eta_{0})\,dx\); thus the distribution of \(\sqrt{n}(\tilde{\theta}-\theta)\) converges to the Normal distribution \(\mathcal{N}(0,\delta^{2})\) as \(n\) increases to infinity, by the central limit theorem. ### Theoretical Properties of DPE We analyze the asymptotic properties of the maximum likelihood estimator of the behavior policy \(\pi_{b}^{\hat{\eta}}\) (with optimal parameters \(\hat{\eta}\)), the maximum likelihood estimator of the target policy \(\pi_{t}^{\hat{\psi}}\) (with optimal parameters \(\hat{\psi}\)), and the minimum mean-square error estimator of the baseline \(b_{t}^{\xi}\) (with optimal parameters \(\hat{\xi}\)). We show that these estimators reduce the variance of the policy gradient estimate \(Z_{\text{DPE}}\). More precisely, for a given set of \(m\) off-policy trajectories \(\mathcal{D}=\{\omega_{i},\;\forall i\}\), we consider the gradient estimate \(Z_{\text{DPE}}\) with DPE (in both per-episode form as Eq. (1) and per-step form as Eq. (2)), i.e., \[Z_{\text{DPE}}=\frac{1}{m}\sum_{i=1}^{m}(q_{0:H}^{(i)}-b_{0}^{ \hat{\xi}})\hat{v}_{0:H}^{(i)}\left(\sum_{t=0}^{H}g_{t}^{(i)}\right)\text{ and }Z_{\text{DPE}}=\frac{1}{m}\sum_{i=1}^{m}\sum_{t=0}^{H}(q_{t:H}^{(i)}-b_{t}^{ \hat{\xi}})\hat{v}_{0:t}^{(i)}g_{t}^{(i)}. \tag{5}\] We show that the variance \(\text{Var}(Z_{\text{DPE}})\) using the optimal estimators \(\hat{\psi}\), \(\hat{\eta}\) and \(\hat{\xi}\) is lower than the variance \(\text{Var}(Z_{\text{OIS}})\) using the ground truth \(\psi_{0}\), \(\eta_{0}\) and \(\xi_{0}\). 
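The ratio-product form of Eq. (4) amounts to weighting each trajectory return by the product of per-step target/behavior likelihood ratios; a minimal sketch (array names are ours):

```python
import numpy as np

def dpe_estimate(returns, logp_target, logp_behavior):
    """Eq. (4): average of q(h) weighted by the product of per-step
    target/behavior likelihood ratios.

    returns:        (m,)   trajectory returns q(h)
    logp_target:    (m, T) log pi_t-hat(a_t | s_t) per step
    logp_behavior:  (m, T) log pi_b-hat(a_t | s_t) per step
    """
    # summing log-ratios over timesteps gives the log of the product ratio
    log_w = (logp_target - logp_behavior).sum(axis=1)
    return np.mean(np.exp(log_w) * returns)
```

As a sanity check, when the two estimated policies coincide, every weight is 1 and the estimate reduces to the plain average return.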
We begin by recognizing that both per-episode and per-step DPE can be consolidated into a general form: \[Z_{\text{DPE}}=\frac{1}{n}\sum_{i=1}^{n}\frac{f(\omega_{i};\hat{ \psi})[G(\omega_{i})-b(\hat{\xi})]}{P(\omega_{i};\hat{\eta})} \tag{6}\] Next, we show a few lemmas establishing properties of the estimators \(\hat{\psi}\), \(\hat{\eta}\), and \(\hat{\xi}\), and then prove the variance reduction result. **Lemma 1**.: _Let \(F_{\eta}=-\frac{1}{m}\sum_{i=1}^{m}\partial_{\eta}^{2}\log P(\omega_{i};\eta_{0})\) be the Fisher Information Matrix. We have_ \[\sqrt{m}(\hat{\eta}-\eta_{0})=\frac{1}{\sqrt{m}}F_{\eta}^{-1}\cdot \sum_{i=1}^{m}\partial_{\eta}\log P(\omega_{i};\eta_{0})+o_{P}(1) \tag{7}\] **Proof Sketch.** Since \(\hat{\eta}\) is the maximum likelihood estimator that optimizes \(P(\omega_{i};\eta)\), we have \(\partial_{\eta}\sum_{i=1}^{m}\log P(\omega_{i};\eta)=0\) at \(\eta=\hat{\eta}\). Expanding the left-hand side from \(\eta=\eta_{0}\) toward \(\eta=\hat{\eta}\), we have \(0=\sum_{i=1}^{m}\partial_{\eta}\log P(\omega_{i};\eta_{0})+\sum_{i=1}^{m} \partial_{\eta}^{2}\log P(\omega_{i};\eta_{0})\cdot(\hat{\eta}-\eta_{0})+ o(||\hat{\eta}-\eta_{0}||_{2})\), which yields the desired result after rearranging the terms and using the Fisher Information Matrix \(F_{\eta}\). **Lemma 2**.: _Let \(F_{\xi}=\frac{1}{m}\sum_{i=1}^{m}[\partial_{\xi}b(\xi)]^{T}\cdot\partial_{\xi}b(\xi)\). For linear baseline estimators \(b(\xi)\), we have_ \[\sqrt{m}(\hat{\xi}-\xi_{0})=\frac{1}{\sqrt{m}}F_{\xi}^{-1}\cdot\sum_{i=1}^{m} \left[G(\omega_{i})-b(\xi_{0})\right]\cdot\partial_{\xi}b(\xi_{0})+o_{P}(1) \tag{8}\] **Proof Sketch.** Since \(\hat{\xi}\) is the minimum mean-square-error estimator optimizing \(\sum_{i=1}^{m}\left[G(\omega_{i})-b(\xi)\right]^{2}\), we have \(\partial_{\xi}\sum_{i=1}^{m}\left[G(\omega_{i})-b(\xi)\right]^{2}=0\) at \(\xi=\hat{\xi}\). 
Expanding the left-hand side from \(\xi=\xi_{0}\) toward \(\xi=\hat{\xi}\), we have \(0=\partial_{\xi}\sum_{i=1}^{m}\left[G(\omega_{i})-b(\xi_{0})\right]^{2}+ \partial_{\xi}^{2}\sum_{i=1}^{m}\left[G(\omega_{i})-b(\xi)\right]^{2}(\hat{ \xi}-\xi_{0})+o(||\hat{\xi}-\xi_{0}||_{2})\). This yields the desired result using the fact that \(b(\xi)\) is linear (thus \(\partial_{\xi}^{2}b(\xi)=0\)) and the definition of \(F_{\xi}\). **Theorem 1**.: _The asymptotic variance of \(Z_{\mathrm{DPE}}\), using the optimal estimators \(\hat{\psi}\), \(\hat{\eta}\), and \(\hat{\xi}\), is always less than that of \(Z_{\mathrm{OIS}}\) using some \(\psi_{0}\), \(\eta_{0}\) and \(\xi_{0}\), i.e.,_ \[\text{var}(Z_{\mathrm{DPE}})=\text{var}(Z_{\mathrm{OIS}})-\text{var}(V_{A})- \text{var}(V_{B}) \tag{9}\] _where \(V_{A}\) and \(V_{B}\) are projections of \(\{\mu_{i}=f(\omega_{i};\hat{\psi})[G(\omega_{i})-b(\hat{\xi})]/P(\omega_{i}; \hat{\eta}),\ \forall i\}\) onto the row spaces of \(S_{\eta}=\partial_{\eta}\log P(\omega_{i};\eta_{0})\) and \(S_{\xi}=\partial_{\xi}b(\xi_{0})\), respectively._ **Proof Sketch.** We provide a sketch of the proof below and include the full proof in the appendix. Step 1: Define the auxiliary function \(\mu_{i}=\mu(\omega_{i};\eta,\psi,\xi)=\frac{f(\omega_{i};\psi)[G(\omega_{i})-b(\xi)]}{P(\omega_{i};\eta)}\), so that \(Z_{\mathrm{DPE}}\) (denoted \(\hat{\theta}\) below, with \(Z_{\mathrm{OIS}}\) denoted \(\theta\)) satisfies the estimating equation \(\frac{1}{n}\sum_{i=1}^{n}\mu(\omega_{i};\hat{\eta},\hat{\psi},\hat{\xi})-\hat{\theta}=0\). Then expand it from \(\eta_{0},\psi_{0},\xi_{0}\) to \(\hat{\eta},\hat{\psi},\hat{\xi}\) to obtain \(\sqrt{n}(\hat{\theta}-\theta)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\mu(\omega_{i}; \eta_{0},\psi_{0},\xi_{0})+E(\partial_{\eta}\mu)\sqrt{n}(\hat{\eta}-\eta_{0})+E(\partial_{ \xi}\mu)\sqrt{n}(\hat{\xi}-\xi_{0})+o_{P}(1)\). 
Step 2: Rearranging the terms, plugging in Lemma 1 and Lemma 2, and using the facts that \(\sum_{i=1}^{n}{S_{\eta}}^{\prime}F_{\eta}^{-1}S_{\eta}=1\) and \(\sum_{i=1}^{n}w_{i}^{2}{S_{\xi}}^{\prime}F_{\xi}^{-1}S_{\xi}=1\), we obtain the equation below, with \(S_{\eta}=\partial_{\eta}\log P(\omega_{i};\eta_{0})\) and \(S_{\xi}=\partial_{\xi}b(\xi_{0})\) as defined in Theorem 1. Note that we use weights \(w_{i}=1\) throughout the proof. Step 3: Recognize that \(S_{\xi}\) and \(S_{\eta}\) are orthogonal. The two subtracted terms below can be viewed as projections of \(\mu_{i}\) onto the orthogonal row spaces of \(S_{\eta}\) and \(S_{\xi}\), respectively; we define these as \(V_{A}\) and \(V_{B}\). The first term on the right-hand side of \[\sqrt{n}(\hat{\theta}-\theta)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\Big\{\mu_{i}- \underbrace{E(\mu_{i}{S_{\eta}}^{\prime})F_{\eta}^{-1}\cdot S_{\eta}}_{V_{A}} -\underbrace{E(\mu_{i}{S_{\xi}}^{\prime})\,w_{i}^{2}F_{\xi}^{-1}\cdot S_{\xi}}_{V_{B}} \Big\}+o_{P}(1)\] is indeed the OIS term, since for OIS \(\sqrt{n}(\hat{\theta}-\theta)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\mu_{i}\). From the Pythagorean relationship, we obtain \(\text{var}(Z_{\mathrm{DPE}})=\text{var}(Z_{\mathrm{OIS}})-\text{var}(V_{A})- \text{var}(V_{B})\). The theorem shows that the DPE estimator always reduces the asymptotic variance relative to the OIS estimator. ## 5 Experiments In this section, we present an empirical study applying the Double Policy Estimator to the Decision Transformer to verify the feasibility and effectiveness of our proposed method. We evaluate the performance of our proposed algorithm on continuous control tasks from the D4RL benchmark and compare it with several popular SOTA baselines. Furthermore, we analyze some critical properties to confirm the rationale behind our motivation. 
### Experiment Setup We empirically evaluate the performance of our proposed algorithm on **Gym Locomotion v2**: a series of continuous control tasks consisting of the HalfCheetah, Hopper, and Walker2d datasets from the D4RL offline reinforcement learning benchmark [63], with medium, medium-replay, and medium-expert datasets that include mixed and suboptimal trajectories. Specifically, the Medium dataset includes 1 million timesteps generated by a "medium" policy that achieves approximately one-third of the score of an expert policy; Medium-Replay includes 25k-400k timesteps gathered from the replay buffer of an agent trained to the performance of a medium policy; Medium-Expert includes 1 million timesteps generated by the medium policy concatenated with 1 million timesteps generated by an expert policy. ### Baseline Selection We compare our proposed algorithm to the following SOTA methods, which tackle the current challenges in offline reinforcement learning from different perspectives: Decision Transformer (DT) [17], reward-conditioned behavioral cloning (RvS) [19], Conservative Q-Learning (CQL) [37], BEAR [38], UWAC [39], behavior cloning (BC), and Implicit Q-Learning (IQL) [41]. CQL and IQL represent the state of the art in model-free offline RL; RvS and DT represent the state of the art in sequence-modeling-based supervised learning. ### DPE weights implementation Note that when proposing double policy estimation, we place no specific restriction on how \(\pi_{b}\) and \(\pi_{t}\) are estimated or how the DPE weights are calculated. In this empirical section, we consider the following as one possible implementation: (1) We first apply CQL to train a neural network that generates means and variances of Gaussian distributions as a maximum likelihood estimate, obtaining the estimated behavior policy \(\hat{\pi}_{b}\). 
(2) Then, for each trajectory \(\omega_{i}\), we calculate the estimated behavior weights as \(\hat{w}_{i}^{\pi_{b}}=\hat{\pi}_{b}(a_{i}|\omega_{i})\). (3) Next, we train DT using the \(l_{2}\) loss to update each timestep, but we record the MSE \((a_{i}-\hat{a_{i}})^{2}\) as the variance and \(\hat{a_{i}}\) as the mean of the Gaussian distribution, i.e. \(\mathcal{N}(\hat{a_{i}},(a_{i}-\hat{a_{i}})^{2})\), as the target policy estimate. (4) There are multiple ways to calculate the target weights from this distribution, e.g. via the cumulative distribution function (CDF), \(P(a_{i}-\beta<\hat{a_{i}}\leq a_{i}+\beta)\) where \(\beta\) is a probability offset, or via the probability density function (PDF). In these empirical results, we use the exponentiated clipped log-likelihood \(\exp(\ell(\hat{a_{i}},(a_{i}-\hat{a_{i}})^{2}))\), with the likelihood clipped at 0.05 and 0.995. ### General Performance We first evaluate and compare the performance of the proposed method with all selected baselines in terms of average reward in Table 1, where 0 represents a random policy and 100 represents an expert policy, with rewards normalized as in [63]. All results are averaged over 3 different seeds over the final 10 evaluations; we provide the full results, including error bars for all baselines, in the appendix. Overall, we find that DT with DPE achieves better performance than the original decision transformer on almost all datasets, and outperforms the state-of-the-art baselines on several datasets. In particular, on the 'medium-replay' datasets, which mix optimal and sub-optimal trajectories, our method brings a significant advancement in terms of reward. That our proposed method attains competitive results in direct contrast with the Decision Transformer emphasizes the improvements brought by applying double policy estimation. 
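The weight computation in steps (3)-(4) above can be sketched as follows; treating the clip range as acting on the Gaussian density value is our reading of the "exponentiated clipped log-likelihood", and the function names are ours:

```python
import math

def gaussian_pdf(a, mu, var):
    """Density of N(mu, var) at a."""
    var = max(var, 1e-8)  # guard against a degenerate (zero) MSE
    return math.exp(-0.5 * (a - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def target_weight(a, a_hat, lo=0.05, hi=0.995):
    """Step (3)-(4): evaluate N(a_hat, (a - a_hat)^2) at the dataset
    action a, then clip the density to [lo, hi]."""
    var = (a - a_hat) ** 2      # per-timestep MSE used as variance
    p = gaussian_pdf(a, a_hat, var)
    return min(max(p, lo), hi)
```

Note that with this construction the unclipped density scales as \(1/|a_{i}-\hat{a_{i}}|\): larger prediction errors automatically receive lower target weights before clipping.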
\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline Dataset & Environment & DPE & DT & RvS & CQL & BEAR & UWAC & BC & IQL \\ \hline \multirow{3}{*}{medium} & HalfCheetah & 45.4\(\pm\)0.3 & 42.6\(\pm\)0.1 & 41.6 & 44.4 & 41.7 & 42.2 & 43.1 & **47.4** \\ & Hopper & **69.8\(\pm\)1.9** & 67.6\(\pm\)1.0 & 60.2 & 58.8 & 52.1 & 50.9 & 63.9 & 66.3 \\ & Walker & 77.9\(\pm\)0.8 & 74.0\(\pm\)1.4 & 71.7 & **79.2** & 59.1 & 75.4 & 77.3 & 78.3 \\ \hline \multirow{3}{*}{medium-replay} & HalfCheetah & 40.5\(\pm\)1.5 & 36.6\(\pm\)0.8 & 38.0 & **46.2** & 38.6 & 35.9 & 4.3 & 44.2 \\ & Hopper & **94.6\(\pm\)0.7** & 79.4\(\pm\)7.0 & 73.5 & 48.6 & 33.7 & 25.3 & 30.9 & 94.5 \\ & Walker & **83.5\(\pm\)1.2** & 66.6\(\pm\)3.0 & 60.6 & 26.7 & 19.2 & 23.6 & 36.9 & 73.9 \\ \hline \multirow{3}{*}{medium-expert} & HalfCheetah & 82.5\(\pm\)5.8 & **87.8\(\pm\)2.6** & 92.2 & 62.4 & 53.4 & 42.7 & 59.9 & 86.7 \\ & Hopper & **108.2\(\pm\)1.6** & 107.6\(\pm\)1.8 & 101.7 & 104.6 & 96.3 & 44.9 & 79.6 & 91.5 \\ \cline{1-1} & Walker & 93.7\(\pm\)6.2 & 108.1\(\pm\)0.2 & 106.0 & 108.1 & 40.1 & 96.5 & 36.6 & **109.6** \\ \hline \hline \end{tabular} \end{table} Table 1: Overall performance of the normalized score of selected baselines on D4RL benchmark. All results are evaluated on ’v2’ environments and datasets. ### Discussions To demonstrate the actual effectiveness of the variance reduction, we also record the MSE from the final evaluation stage of both DPE and DT for off-policy evaluation in Fig. 1. The results show that using DPE weights yields a generally lower MSE than DT on all selected environments, validating the efficiency of our variance reduction. To visualize the source of effectiveness of the double importance weight estimation, we record the distributions of \(\pi_{b}\) and \(\pi_{t}\) on the 'hopper' environment and provide a kernel density estimate plot in Fig. 2. 
The drastic difference between the two distributions suggests that the estimated behavior policy acts as a correction weight, offsetting the sampling probability from the target policy distribution and leading to improved performance and reduced variance. As an example, an occasional sub-optimal trajectory that the target policy learned with high probability can be corrected by the low probability assigned by the estimated behavior policy, turning it into a low-weight trajectory to learn from. ### Ablation Studies According to the objective of DPE, the way probabilities are computed from the estimated \(\pi_{b}\) still determines the resulting importance weights. In this section, we evaluate and compare several different ways to calculate the exact probability generated from the estimated behavior distribution, denoted CDF \(\pm\)0.1, CDF \(\pm\)0.2, PDF, and clipped PDF, and report the results over the medium-replay datasets in terms of MSE in Figure 3. We see that, apart from a few cases, most settings yield similar MSEs, indicating that once a proper estimate of this Gaussian distribution is obtained, the method of computing the sampling probability is not a major concern. Nevertheless, we find that using a clipped PDF for the behavior probability yields the lowest MSE in general.

Figure 1: MSE comparison between DT and DPE.

Figure 2: Comparing kernel density estimates of the estimated \(\pi_{b}\) and \(\pi_{t}\) on the Hopper datasets.

Figure 3: Ablation results comparing different probability sampling methods on the estimated \(\pi_{b}\).

## 6 Limitations and Social Impact There are several opportunities for future work. First, our approach requires a warm-up phase prior to the training of the decision transformer to obtain the estimated behavior policy. In addition, as RvS methods perform poorly in stochastic environments, as pointed out in [64], the currently proposed method cannot resolve such issues. 
We believe this work will have positive social impact, as it helps avoid unexpected behaviors arising from occasional unwanted trajectories in the dataset and makes the trained model more stable. However, it could potentially be used to make harmful decisions when trained on specifically harmful or biased datasets. ## 7 Conclusion In this paper, we present DPE, a double policy estimation method for importance sampling that is proven statistically efficient for variance reduction in off-policy evaluation for sequence-modeled reinforcement learning. Computing both the behavior policy estimate and the target policy estimate from the same set of data allows DPE to correct the sampling error inherent to importance sampling with the true behavior policy of the offline dataset. We evaluated DPE applied to the decision transformer across several benchmarks against current SOTA works and showed that it achieves competitive performance while improving the evaluation results of the Decision Transformer, especially on datasets filled with sub-optimal trajectories, and we confirmed the variance-reduction effect through MSE comparisons. Finally, we studied a possible cause of these improvements by visualizing the densities of the estimated target and behavior policies.
2302.10472
Can Einstein (rings) surf Gravitational Waves?
How does the appearance of a strongly lensed system change if a gravitational wave is produced by the lens? In this work we address this question by considering a supermassive black hole binary at the center of the lens emitting gravitational waves propagating either colinearly or orthogonally to the line of sight. Specializing to an Einstein ring configuration (where the source, the lens and the observer are aligned), we show that the gravitational wave induces changes on the ring's angular size and on the optical path of photons. The changes are the same for a given pair of antipodal points on the ring, but maximally different for any pair separated by $90^{\circ}$. For realistic lenses and binaries, we find that the change in the angular size of the Einstein ring is dozens of orders of magnitude smaller than the precision of current experiments. On the other hand, the difference in the optical path induced on a photon by a gravitational wave propagating \textit{orthogonally} to the line of sight triggers, at peak strain, time delays in the range $\sim 0.01 - 1$ seconds, making the chance of their detection (and thus the use of Einstein rings as gravitational wave detectors) less hopeless.
Leonardo Giani, Cullan Howlett, Tamara M. Davis
2023-02-21T06:28:58Z
http://arxiv.org/abs/2302.10472v2
# Can Einstein (Rings) Surf Gravitational Waves? ###### Abstract How does the appearance of a strongly lensed system change if a gravitational wave is produced by the lens? In this work we address this question by considering a supermassive black hole binary at the center of the lens emitting gravitational waves propagating either colinearly or orthogonally to the line of sight. Specializing to an Einstein ring configuration (where the source, the lens and the observer are aligned), we show that the gravitational wave induces changes on the ring's angular size and on the optical path of photons. The changes are the same for a given pair of antipodal points on the ring, but maximally different for any pair separated by \(90^{\circ}\). For realistic lenses and binaries, we find that the change in the angular size of the Einstein ring is dozens of orders of magnitude smaller than the precision of current experiments. On the other hand, the difference in the optical path induced on a photon by a gravitational wave propagating _orthogonally_ to the line of sight triggers, at peak strain, time delays in the range \(\sim 0.01-1\) seconds, making the chance of their detection (and thus the use of Einstein rings as gravitational wave detectors) less hopeless. ## 1. Introduction Einstein rings are spectacular distortions of the images from distant sources induced by the gravitational field of massive structures on the line of sight between them and the observer. The underlying physical process behind these distortions is _gravitational lensing_, which describes how the trajectories of photons are deformed in presence of a gravitational field due to the universality of the gravitational interaction. Whilst the phenomenon can be qualitatively understood within Newtonian gravity [1], experiments performed over the last 100 years have shown that General Relativity (GR) is required to correctly describe its physics [2, 3, 4, 5, 6, 7]. 
From a historical point of view, gravitational lensing has indeed been one of the first testable predictions of Einstein's theory, and thus one of the first pieces of observational evidence in its favour. Another, recently proven, prediction is the existence of gravitational radiation emitted by time-varying quadrupolar sources of Energy-Momentum. It is straightforward to show that the Einstein field equations, linearized around a Minkowski background metric, become wave equations whose solutions are dubbed _Gravitational Waves_ (GW). Indirect observational evidence of these was found as far back as the 1970s [8, 9], but direct detection has been possible only over the last decade thanks to the immense effort of thousands of scientists [10]. Nowadays, GW astronomy is one of the most promising fields towards a better understanding of our Universe, both on astrophysical and cosmological scales [11, 12]. The intriguing idea of combining the two aforementioned gravitational phenomena has been the subject of in-depth investigations over the last decades. One clear possibility is to study how the propagation of gravitational radiation is affected by the matter distribution along its path. Just like its electromagnetic analog, a gravitational wave may be subject to gravitational lensing, and it is believed that the first strongly lensed gravitational wave will be detected in the near future [13, 14, 15, 16]. On the other hand, another possibility is to study how gravitational waves may influence the path of photons in a strongly lensed system. This topic has been explored in Refs. [17, 18, 19, 20, 21], with a focus on determining whether a strongly lensed system may be employed as a detector for very low frequency GWs of cosmological origin. Whilst the answer is in principle yes, it has been shown that an intrinsic degeneracy exists between the lensing configuration and the effects of the gravitational wave, making these two options indistinguishable. 
However, it seems that such degeneracy does not affect Einstein rings. The GW may also act as a lens for the propagation of photons, as discussed for example in Refs. [22, 23], which concluded that the probability of observing the phenomenon is extremely low. In this work we explore yet another possibility, where a strongly lensed system is perturbed by the propagation of a GW _generated by the lens itself_. Our goal is to quantify the impact of these lens-produced GWs on typical strong lensing observables, such as the angular separation between multiple images and their time delay, and assess the potential of strongly lensed systems as gravitational wave detectors. ## 2. Einstein Rings In a strongly lensed system, as long as the geometrical optics description is appropriate, the propagation of light rays from a source \(S\) is greatly modified by the gravitational field of a clumpy distribution of matter, the lens \(L\), located along the observer's line of sight. Whilst the presence of the lens makes it impossible to observe the source directly, the paths of a subset of the source photons (whose initial trajectories would have never reached the observer in the absence of the lens) are distorted in such a way that they eventually intersect the observer position \(O\). As a result, the observer will perceive these photons as belonging to spatially different, but otherwise identical sources. The angular separation between the source and its apparent images can be computed with the lens equation, which for a generic mass distribution in the thin lens approximation is \[\beta-\alpha=\nabla_{\theta}\psi\left(\beta\right)\;, \tag{1}\] where \(\beta=\left(\beta_{1},\beta_{2}\right)\) and \(\alpha=\left(\alpha_{1},\alpha_{2}\right)\) are the position in the sky of the image and the source respectively, measured with respect to the lens position, and \(\nabla_{\theta}\) is the two-dimensional angular gradient. The quantity \(\psi(\beta)\) appearing in Eq. 
(1) is the _lensing potential_ and reads \[\psi(\beta)\equiv\frac{2}{c^{2}}\frac{\mathcal{D}_{LS}}{\mathcal{D}_{L} \mathcal{D}_{S}}\int_{\beta}d\lambda\;\Phi\;. \tag{2}\] In the above equation \(c\) is the speed of light, and \(\Phi\) is the scalar gravitational potential in the Newtonian gauge, which is integrated along the path of the light ray parametrized by \(\lambda\), and depends on the angle \(\beta\). The \(\mathcal{D}_{i}\)'s are angular diameter distances, defined as \[\mathcal{D}_{i}(z_{i})=\frac{c}{1+z_{i}}\int_{0}^{z_{i}}\frac{dz}{H(z)}\;, \tag{3}\] where \(z_{i}\) is the cosmological redshift, \(H(z)\) is the Hubble parameter, and \(\mathcal{D}_{LS}\) is the angular diameter distance between the source and the lens, \(\mathcal{D}_{LS}=\mathcal{D}_{S}-\mathcal{D}_{L}\). Taking the divergence of Eq. (1), as long as the extent of the lens is small compared to cosmological distances, we can use the Poisson equation to relate the Laplacian of the lensing potential to the mass distribution of the lens \[\nabla_{\theta}^{2}\psi\left(\beta\right)=\frac{8\pi G_{N}}{c^{2}}\frac{ \mathcal{D}_{LS}}{\mathcal{D}_{L}\mathcal{D}_{S}}\Sigma(\beta)\;, \tag{4}\] where \(G_{N}\) is the Newtonian gravitational constant, and we have defined the surface mass density, \[\Sigma(\beta)\equiv\int_{\beta}d\lambda\;\rho_{L}\;, \tag{5}\] which depends on the mass distribution of the lens \(\rho_{L}\). For the assumptions behind Eqs. (1), (2), (4) and their derivation see, for example, Ref. [24]. Due to their modified paths, the photons' arrival time at the observer is delayed relative to what it would be in the absence of the lens. 
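As a concrete illustration, the distance integral in Eq. (3) can be evaluated numerically. The sketch below assumes a flat \(\Lambda\)CDM expansion history with illustrative parameter values (\(H_0=70\) km/s/Mpc, \(\Omega_m=0.3\)) that are not taken from the paper:

```python
import math

C_KMS = 299792.458  # speed of light [km/s]

def angular_diameter_distance(z, h0=70.0, om=0.3):
    """Eq. (3) evaluated by trapezoidal integration, assuming a flat
    LambdaCDM H(z); returns Mpc. H0 and Omega_m are illustrative."""
    def hubble(zz):  # H(z) in km/s/Mpc
        return h0 * math.sqrt(om * (1 + zz) ** 3 + (1 - om))
    n = 2000
    dz = z / n
    comoving = sum(0.5 * dz * (C_KMS / hubble(i * dz) + C_KMS / hubble((i + 1) * dz))
                   for i in range(n))
    return comoving / (1 + z)

d_l = angular_diameter_distance(0.5)   # lens at z = 0.5
d_s = angular_diameter_distance(2.0)   # source at z = 2.0
d_ls = d_s - d_l                       # D_LS = D_S - D_L, as in the text
```

At low redshift this reproduces the Hubble-law expectation \(\mathcal{D}\approx cz/H_{0}\), which is a quick sanity check on the integration.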
Furthermore, different images will experience different delays, and we can define the time delay between two images \(i,j\) as [25] \[\Delta_{ij}=\frac{D_{\Delta_{t}}}{c}\left(\frac{\left(\beta_{i}-\alpha\right) ^{2}}{2}-\frac{\left(\beta_{j}-\alpha\right)^{2}}{2}+\psi\left(\beta_{j} \right)-\psi\left(\beta_{i}\right)\right)\;, \tag{6}\] where the 'time-delay distance' is defined as \[D_{\Delta_{t}}\equiv\left(1+z_{L}\right)\frac{\mathcal{D}_{L}\mathcal{D}_{S} }{\mathcal{D}_{LS}}\;. \tag{7}\] It may happen that the gravitational field of the lens is symmetric in the directions orthogonal to the line of sight, and the observer, source and lens are perfectly aligned, i.e. \(\alpha=0\). In this case, instead of having multiple images of the same source, these will appear as a ring-shaped distribution of light, called an _Einstein Ring_. A diagrammatic description of the photon trajectories is given in Fig. 1. The angular size \(\theta_{E}\) of the ring is easily computed from Eq. (1), and it is straightforward to realize that the time delay \(\Delta_{ij}\) vanishes for any two points \(i,j\) on it since, by definition, \(\beta_{i}=\beta_{j}=\theta_{E}\). ## 3. Gravitational Waves Produced by the Lens We are interested in the effects that gravitational radiation emitted by the lens may have on the trajectories of strongly lensed photons. Gravitational waves perturb the space-time fabric, inducing periodic fluctuations in the spatial separation between test particles on the planes perpendicular to the wave's direction of propagation. GWs have in general two polarizations, and are produced whenever a mass distribution has a time-varying quadrupole moment [26]. A binary system of two massive bodies orbiting around each other, provided that the system is not spherically or rotationally symmetric, will emit gravitational radiation. 
Gravitational waves detected so far have been produced by the inspiral and merger phases of binary systems composed of either stellar-mass black holes, neutron stars, or both. Their typical frequencies and scalar amplitudes (_strain_) on Earth are of the order \(1-100\) Hz and \(10^{-21}\) respectively. As a result of the hierarchical formation of massive galaxies, supermassive black holes (SMBHs) are also expected to be found in binary systems, and hence are potential sources of yet-to-be-detected gravitational waves. However, the typical wavelength of the emitted radiation is orders of magnitude beyond the sensitivity of current ground-based interferometers, and the most promising avenue for detecting it is looking for correlation signatures between the pulse arrival times for a set of known pulsars, known as Pulsar Timing Arrays [27; 28]. In a typical strong lensing system the lens is a small and dense cluster of galaxies, which is therefore a suitable candidate to contain a binary system of supermassive black holes. The strain \(h\) and the frequency \(f_{\rm ISCO}\) of the gravitational wave at the innermost stable circular orbit (ISCO), felt at an angular distance \(\mathcal{D}_{z_{i}}\) from the binary, are [29] \[f_{\rm ISCO}=4.7\left[\frac{\left(m_{1}+m_{2}\right)\left(1+z\right)}{10^{3}M_{ \odot}}\right]^{-1}\,\mathrm{Hz}\,, \tag{8}\] \[h(f,z_{i})=\frac{8}{\sqrt{10}}\frac{(G\mathcal{M})^{\frac{5}{3}}}{c^{4}}\frac{\left( \frac{2\pi f_{\rm ISCO}}{1+z_{i}}\right)^{\frac{2}{3}}}{(1+z_{i})^{2}\mathcal{D }_{z_{i}}}\,, \tag{9}\] where \(\mathcal{M}\) is the chirp mass \[\mathcal{M}=\frac{(m_{1}m_{2})^{3/5}}{(m_{1}+m_{2})^{1/5}}\,, \tag{10}\] and \(m_{1},m_{2}\) are the masses of the black holes. In the following, we will assume that the lens hosts a SMBH binary, and (for simplicity) that the radiation emitted is linearly polarized. 
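Plugging representative numbers into Eqs. (8)-(10) gives a feel for the scales involved. The sketch below implements the formulas as written; the binary parameters (equal \(10^{8}M_{\odot}\) masses at \(z=0.5\), distance \(10^{3}\) Mpc) are our illustrative assumptions, not values from the paper:

```python
import math

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8        # speed of light [m/s]
M_SUN = 1.989e30   # solar mass [kg]
MPC = 3.086e22     # 1 Mpc [m]

def f_isco(m1_sun, m2_sun, z):
    """Eq. (8): ISCO frequency in Hz, masses in solar units."""
    return 4.7 / ((m1_sun + m2_sun) * (1 + z) / 1e3)

def chirp_mass(m1_sun, m2_sun):
    """Eq. (10), returned in kg."""
    mc_sun = (m1_sun * m2_sun) ** 0.6 / (m1_sun + m2_sun) ** 0.2
    return mc_sun * M_SUN

def strain(m1_sun, m2_sun, z, dist_mpc):
    """Eq. (9) evaluated at f_ISCO."""
    f = f_isco(m1_sun, m2_sun, z)
    mc = chirp_mass(m1_sun, m2_sun)
    d = dist_mpc * MPC
    return (8 / math.sqrt(10)) * (G * mc) ** (5 / 3) / C ** 4 \
        * (2 * math.pi * f / (1 + z)) ** (2 / 3) / ((1 + z) ** 2 * d)

h = strain(1e8, 1e8, 0.5, 1e3)  # of order 1e-16, well below ground-based bands
```

The resulting millihertz-to-nanohertz frequencies and \(\sim 10^{-16}\) strains illustrate why such binaries are Pulsar Timing Array targets rather than LIGO-band sources.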
We will consider two different configurations, one in which the GW propagates along the line of sight, and one where it propagates on the Lens plane, perpendicular to the line of sight, as depicted in Figs. 2 and 3 respectively. It is important to bear in mind that (to a very large degree) the light rays and the gravitational waves propagate with the same speed [30]. As a result, a single photon moving through a portion of space influenced by the gravitational wave will perceive a _constant_ amplitude \(h(t_{i})\) for the strain, where \(t_{i}\) is the time at which the photon's trajectory was first affected, and _not the strain we would observe on Earth for the same gravitational wave at the arrival time \(t_{f}\)_. Therefore, different photons influenced by the GW at different times will travel along different optical paths to the observer. ### GWs propagating along the line of sight Let us consider a GW propagating along the line of sight from the lens to the observer as depicted in Fig. 2. The gravitational wave will distort spatial displacements on the plane orthogonal to the direction of propagation, perturbing once more the optical path of the lensed photon. To assess quantitatively the impact of these perturbations, let us imagine an observer comoving with the GW emitting a photon \(\gamma^{\prime}\) at a point \(C\) perpendicular to the direction of propagation of the GW at a time \(t_{0}\). Eventually, this photon will intersect the trajectory of the one emitted by the source \(S\) at the point \(P^{\prime}\) at a time \(t_{1}\). 
A simple calculation in the TT gauge [26] shows that the distance traveled by the photon \(\gamma^{\prime}\) can be written \[\overline{CP^{\prime}}(t)\approx\overline{CP}\left(1-\frac{h_{P}}{2}\cos \left[\left(t+\frac{\overline{CP}}{2c}\right)\frac{2\pi c}{\lambda_{GW}}\right] \mathrm{sinc}\left(\frac{\pi\overline{CP}}{\lambda_{GW}}\right)\right)\,, \tag{11}\] where an 'overline' denotes the distance between two points (i.e., \(\overline{CP^{\prime}}\) is the distance between \(C\) and \(P^{\prime}\)), \(h_{P}\) is the strain of the gravitational wave felt at \(P\), which is inversely proportional to the distance from the GW source \(\overline{CL}\), and \(\lambda_{GW}\) is its wavelength.

Figure 1.— A schematic representation of the strong lensing configuration leading to an Einstein Ring. The observer \(O\), the source \(S\) (at angular diameter distance \(\mathcal{D}_{S}\)) and the lens \(L\) (at distance \(\mathcal{D}_{L}\)) are aligned, with \(\mathcal{D}_{LS}\) being the distance between the lens and the source. The yellow solid line represents the trajectory of a photon emitted by the source \(S\), bent by the gravitational field of the lens \(L\) in such a way that it will eventually meet the observer at \(O\). The trajectory that the photon would have without the lens is represented by the dotted yellow line. Because of the bending, the source will appear to the observer at the position \(S^{\prime}\), separated from its real location by an angle \(\theta_{E}\). 
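The suppression encoded by the sinc factor can be checked numerically; a minimal sketch of the distortion envelope \(\tfrac{h_{P}}{2}\,|\mathrm{sinc}(\pi\overline{CP}/\lambda_{GW})|\) from Eq. (11), with illustrative values of our choosing:

```python
import math

def distortion_envelope(cp, lam_gw, h_p):
    """Peak fractional distortion of CP' from Eq. (11):
    (h_P / 2) * |sinc(pi * CP / lam_GW)|, with sinc(x) = sin(x)/x."""
    x = math.pi * cp / lam_gw
    sinc = 1.0 if x == 0 else math.sin(x) / x
    return 0.5 * h_p * abs(sinc)

H_P = 1e-15  # illustrative strain near the lens
near = distortion_envelope(0.1, 1.0, H_P)   # CP = 0.1 lambda_GW
far = distortion_envelope(10.5, 1.0, H_P)   # CP = 10.5 lambda_GW
```

For a photon ten wavelengths away from the line of sight the envelope is suppressed by more than an order of magnitude relative to the near case, consistent with the decay discussed next.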
From this expression, since \(\mathrm{sinc}\left(x\right)\) quickly decays to \(0\) when \(x\gg 1\), we can recognise that the distortion \(\overline{PP^{\prime}}\) becomes appreciable only when the distance from the gravitational wave is of the order of the GW wavelength, \(\overline{CP}\sim\mathcal{O}\left(\lambda_{GW}\right)\), or smaller.1 Footnote 1: The global maximum and minimum of \(\mathrm{sinc}\left(x\right)\) are attained at \(x=0\) and \(x\approx 1.43\pi\), corresponding to \(\mathrm{sinc}\,0=1\) and \(\mathrm{sinc}\,1.43\pi\approx-0.22\) respectively. Therefore we will assume that the perturbations induced by the GW are negligible for \(\overline{CP}\gtrsim 1.43\lambda_{GW}\). As such, in what follows we will describe the perturbation induced by the GW effectively as an instantaneous shift of the source-emitted photon's position from \(P\) to \(P^{\prime}\). With reference to Fig. 2, this happens at a distance \(\mathcal{D}_{C}\) from the observer along the line of sight, such that \(\mathcal{D}_{C}\theta_{E}\approx\overline{PC}\sim\lambda_{GW}\), when \(\mathrm{sinc}\left(x\right)\) in Eq. (11) is of order 1. If \(h\geq 0\) (\(<0\)), the photon at \(E\) will propagate through a region of space which has been stretched (contracted) by an amount \(\overline{PP^{\prime}}\). Simple trigonometric considerations show that the observer would infer a slightly larger angular size for the Einstein ring, \(\theta_{E}+\delta\theta\), such that \[\frac{\delta\theta}{\theta_{E}}\approx h\;. \tag{12}\] The change in the optical path of the photon due to its interaction with the GW, \(\Delta\gamma\), can then be computed as \[\Delta\gamma=\overline{OP^{\prime}}-\overline{OP}\approx\overline{OP}\;h^{2}\;, \tag{13}\] which is clearly a second-order quantity since, at first order in the small-angle approximation, \(\sin\theta\approx\tan\theta\) and hence \(\overline{OP^{\prime}}=\overline{OP}=\mathcal{D}_{C}\). 
Because of the different optical path the photon will also be redshifted by a factor \[\Delta z\approx\frac{H(z_{C})}{c}\Delta\gamma\;, \tag{14}\] where \(H(z_{C})\) is the Hubble parameter at the redshift corresponding to the distance \(\mathcal{D}_{C}\) at which the photon starts to interact effectively with the gravitational wave. Figure 2.— A linearly polarized gravitational wave (in purple) is emitted by the lens \(L\) and propagates along the line of sight to the observer \(O\). A photon traveling from \(E\) to \(O\) will eventually propagate into a space-time region perturbed by the gravitational wave, where the spatial separation between two points is stretched or compressed by a factor \((1+h)\) in the directions perpendicular to the line of sight. This happens when the distance between the photon and the line of sight \(\overline{PC}\) is comparable with the wavelength \(\lambda_{GW}\) of the gravitational wave. From the point of view of the observer, this is equivalent to observing a photon coming from \(P^{\prime}\), so that the source will appear to the observer at a position \(S^{\prime\prime}\) on the sky, separated from \(S^{\prime}\) by an angle \(\delta\theta\). ### GW propagating along the Lens plane A GW propagating perpendicular to the line of sight will stretch and compress the spatial separations between points along the line of sight itself, perturbing the trajectories of the photons when their distance to the _lens plane_ becomes comparable with the wavelength \(\lambda_{GW}\), as shown in the previous section. Put another way, the distance from the observer to the source changes and becomes \(\overline{OS^{\prime}}=\overline{OS}+\Delta L\), where \(\Delta L\) is the stretch or compression induced by the GW.2 From Eq. (11) we can estimate \(\Delta L\) to be of order \(\Delta L\approx h\lambda_{GW}\).
Thanks to the symmetry of the problem, we can thus assume that \({\cal D}_{S}\to{\cal D}_{S}^{\prime}={\cal D}_{S}\left(1+\Delta L/{\cal D}_{S}\right)\), \({\cal D}_{L}\to{\cal D}_{L}^{\prime}={\cal D}_{L}\left(1+\Delta L/2{\cal D}_{L}\right)\), and \({\cal D}_{LS}\to{\cal D}_{LS}^{\prime}={\cal D}_{LS}\left(1+\Delta L/2{\cal D}_{LS}\right)\). We can thus rewrite the lensing equation as \[(\theta_{E}+\delta\theta)^{2}=4GM\frac{{\cal D}_{LS}^{\prime}}{{\cal D}_{L}^{\prime}{\cal D}_{S}^{\prime}}\;, \tag{15}\] where we have assumed for simplicity a point mass \(M\).3 Footnote 2: Notice that the spatial extension of the lens in the direction of the line of sight should also change because of the gravitational wave, which we however neglect since we are working within the thin-lens approximation. Footnote 3: In Eq. (15) we have neglected the impact of the GW on the lensing potential. This is justified because the gravitational potential associated with a propagating GW, felt at a distance \(L\), is of order \(\simeq L^{2}\lambda_{GW}^{-2}h\) (see for example Eq. (9.40) of Ref. (11)). The change induced by the latter on the lensing potential needs to be compared with the (surface) mass of the lens, and the relative change is orders of magnitude smaller than the effect induced by the distortion of the optical path, whose lower bound is of order \(h\lambda_{GW}/{\cal D}_{L}\) (which is \(\approx 10^{-19}\) for a lens at \(10^{3}\) Mpc). A more realistic lensing profile than a point-mass lens gives different, but qualitatively similar, results. The above equation reduces, at first order in \(\Delta L\), to
Using spheres of matter in isothermal equilibrium to model an extended lens, we obtain for the Einstein ring angular size (see for example Eq. (9.30) of Ref. [31]): \[\theta_{E}+\delta\theta=4\pi\left<v^{2}\right>\frac{{\cal D}_{LS}^{\prime }}{{\cal D}_{S}^{\prime}}\;, \tag{16}\] from which we obtain at first order \[\frac{\delta\theta}{\theta_{E}}\approx\frac{\Delta L}{2}\left[\frac{2{\cal D }_{L}}{{\cal D}_{S}-{\cal D}_{L}}-1\right]\;, \tag{17}\] which is qualitatively similar to Eq. (18). Due to the longer optical path traveled by the photon, we will also observe a delay in its arrival time of order \[\Delta t=\Delta\gamma/c\approx 10^{-15}\,{\rm s}\;. \tag{23}\] We consider all of the above to be completely immeasurable. However, in the configuration depicted in Fig. 3 the strain at the point \(E\) is instead of order \(h\sim 10^{-9}\), and we can compute \[\frac{\delta\theta}{\theta_{E}}\approx 10^{-19}\;,\qquad\Delta\gamma\approx 10^{ 9}\,{\rm cm}\;,\qquad\Delta z\approx 10^{-19}\;. \tag{24}\] Notice that in this case the photon trajectories intersect the gravitational wave at distances from the binary system \(\approx 10\) orders of magnitude smaller than in the previous case, where the strain is 5 orders of magnitude bigger. As a result, the optical path difference \(\Delta\gamma\) increases significantly, and induces a delay in the time of arrival of the photon of order \[\Delta\gamma/c\approx 0.01\,{\rm s}\;. \tag{25}\] As this is the most significant of the effects discussed so far, we studied how the latter prediction changes if we consider different lensing configurations or different binary systems. In Fig. 4 we show the induced time delay as a function of (the square root of) the product of the two masses \(\sqrt{m_{1}m_{2}}\) in the binary, where the angles \(\theta_{E}\) entering the calculation were estimated using the lens equation for a point-mass lens \(M_{Lens}\).
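These orders of magnitude are straightforward to reproduce; a minimal check in cgs units, taking the quoted \(\Delta\gamma\approx 10^{9}\,\mathrm{cm}\) and approximating \(H(z_{C})\) by today's Hubble rate (both illustrative simplifications):

```python
# Order-of-magnitude check of Eqs. (14) and (25), cgs units throughout.
c = 3.0e10       # speed of light [cm/s]
H0 = 2.3e-18     # Hubble rate today [1/s], roughly 70 km/s/Mpc
dgamma = 1.0e9   # quoted optical-path difference [cm]

dt = dgamma / c          # arrival-time delay, Eq. (25)
dz = (H0 / c) * dgamma   # induced redshift, Eq. (14) with H ~ H0

print(f"time delay ~ {dt:.0e} s")   # ~3e-02 s, the quoted 0.01 s order
print(f"redshift   ~ {dz:.0e}")     # ~8e-20, the quoted 1e-19 order

# Consistency with Delta_gamma ~ h * lambda_GW = h * c / f_GW, for a
# strain h ~ 1e-9 and a GW frequency at the low end of the quoted band:
h, f_gw = 1.0e-9, 1.0e-8
print(f"h * c / f_GW ~ {h * c / f_gw:.0e} cm")   # same order as 1e9 cm
```

The estimate in the last line also shows why the delay scales with the binary masses through the GW wavelength.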
The source is at redshift \(z=1\), and we consider two different distances from the observer to the lens and different lens masses. If we fix the masses of the black holes, the plots show that the time delay increases for lighter lenses. If we fix the lens mass instead, the time delay increases with heavier binaries. We found that, in the most optimistic scenario, time delays of the order of \(\sim 1\) s may be reached for black holes with masses \(m_{1},m_{2}\sim 10^{11}M_{\odot}\) within lenses with mass \(M_{Lens}\approx 10^{12}-10^{13}M_{\odot}\). Finally, the time delay is inversely proportional to \((1+z_{L})\), and hence increases when the lens is closer to the observer. Figure 3.— A linearly polarized gravitational wave (in purple) is emitted by the lens \(L\) and propagates in the lens plane towards \(E\). A photon approaching \(E\) will eventually propagate into a space-time region perturbed by the gravitational wave, where the spatial separation between two points is stretched by a factor \((1+h)\). This happens when the distance between the photon and \(E\) is comparable with the wavelength \(\lambda_{GW}\) of the gravitational wave. From the point of view of the observer, this is equivalent to observing a photon coming from \(E^{\prime}\), and the source will appear at the position \(S^{\prime\prime}\), separated from \(S^{\prime}\) by an angle \(\delta\theta\). ## 5. Discussion We computed the impact of the gravitational radiation emitted by two supermassive black holes on strong lensing observables in a typical configuration. Since the gravitational wave induces periodic fluctuations in the size of spatial displacements with dimension comparable to or smaller than its wavelength, the main effect on the system is to change the optical path of photons in their journey from the source to the observer.
This, consequently, affects: \(i)\) the angular separation of the multiple images of a gravitationally lensed source, \(ii)\) their apparent redshift, and \(iii)\) their time of arrival at the observer position. To get a qualitative understanding of the phenomenon, we considered two different configurations, portrayed in Figs. 2 and 3, with the simplifying assumptions that the gravitational wave is linearly polarized and propagates collinearly or orthogonally to the line of sight. In the former case the ring's shape becomes elliptical, with opposite periodic deformations along the \(x\) and \(y\) axes. In the latter case the ring is also distorted into an ellipse, but with the \(y\) axis fixed and the oscillations occurring only along the \(x\) direction. In this case we also find that the overall size (i.e., the Einstein radius) of the ring oscillates, as a result of the fluctuations along the \(z\) direction. It is also worth noticing that in alternative theories of gravity one usually also has a propagating scalar degree of freedom; see for example Refs. [32, 33, 34] for \(f(R)\) theories. In the configuration of Fig. 2, such a scalar mode would affect the ring by changing its radius only, whereas in the configuration of Fig. 3 the effects of the scalar mode would be degenerate with the tensor ones. We conclude that, if observable, the ringing of Einstein rings could distinguish between alternative theories of gravity. Given the typical sensitivity of current experiments, of order \(\sim 0.01^{\prime\prime}\) for the angular separation of the images [35] and of order \(\sim 10^{-6}\) for the redshifts [36], there is no hope of detecting \(i)\) or \(ii)\) in either of the configurations of Figs. 2 or 3. On the other hand, for the configuration of Fig. 3, the magnitude of \(iii)\) varies between \(\sim 10^{-2}-10^{1}\,\mathrm{s}\). The possibility of observing this, on the basis of qualitative order-of-magnitude considerations, seems less hopeless.
Let us briefly put in perspective the challenges involved with such a measurement. Since we are targeting a gravitational wave with frequency \(\sim 10^{-7}-10^{-8}\,\mathrm{Hz}\), the duration of the transient signal is of the order of \(10^{-1}-10^{0}\,\mathrm{yr}\). For an idealized Einstein ring, in the absence of a gravitational wave, there is no time delay between the arrival times of two photons coming from randomly chosen points on the ring. Let us choose two points \((A,B)\) separated by an angle \(\pi/2\) on the ring, lying on the principal axes \(x\) and \(y\) orthogonal to the direction of propagation of the gravitational wave. Because of the latter, lengths \(l\) on the \(x\) axis are increased by a factor \((1+h)\) whereas on the \(y\) axis they are decreased by a factor \((1-h)\). Therefore, the time delay will no longer vanish, and at peak strain, after a few months or a year, we should be able to measure a time delay between the light curves from the points \(A,B\) of the order of \(\sim 10^{-2}\,\)s for typical SMBH masses, and up to the order of \(10^{-1}-10^{1}\,\)s for exceptionally massive ones. Figure 4.— Plots of the time delay, in seconds, induced by a gravitational wave from a SMBH binary at peak strain as a function of the product of the black hole masses. The source is at redshift \(z_{S}=1\), and we considered two different lenses at redshift \(z_{L}=0.3\) and \(z_{L}=0.9\). The corresponding angular size of the Einstein ring is given in arcseconds. In the blue region the masses of the black holes constitute \(>10\%\) of the total mass of the lens, which we consider to be unrealistic. Of course, to detect a time delay one also needs a time-varying source, which adds to the difficulty of finding an appropriate lens system. A slightly more optimistic (but less realistic) scenario would involve two black holes with masses of order \(10^{11}M_{\odot}\) constituting a significant fraction (\(\geq 10\%\)) of the lens.
In this case, the time delay would be of the order of a few seconds. Of course, in this exercise we have made a number of simplifying assumptions which are unlikely to occur in real astrophysical systems, such as a perfect Einstein ring arising from an idealized lens profile, and an ideal orientation and polarization of the gravitational wave with respect to the line of sight. Relaxing these assumptions should not change the qualitative picture given here, but will likely introduce systematics and reduce the likelihood of any detection. Nevertheless, and surprisingly, the intrinsic magnitude of the effect is not outrageous considering typical human time-scales. We conclude that, even if difficult, indirect detection of gravitational waves produced by supermassive black holes through long-term monitoring of strongly lensed systems might be feasible in the future. ## Acknowledgements We are grateful to Paul Lasky, Riccardo Sturani and Oliver F. Piattella for useful comments and suggestions. The authors acknowledge support from the Australian Government through the Australian Research Council Laureate Fellowship grant FL180100168.
2310.09332
Interplay between Parton Distribution Functions and New Physics signals
The analysis and the interpretation of the LHC data require a precise determination of Parton Distribution Functions (PDFs) in order to detect reliably potential signs of new physics. I present a systematic study designed to assess the risk of absorbing, and thus missing, signals from heavy new physics in the PDFs parameterisation during the High-Luminosity LHC run. I discuss the consequences of such a PDF "contamination" and consider possible solutions to it, for example the inclusion of other experimental data probing large-x regions at low-energy.
Elie Hammou
2023-10-13T18:00:04Z
http://arxiv.org/abs/2310.09332v1
# Interplay between Parton Distribution Functions and New Physics signals ###### Abstract The analysis and the interpretation of the LHC data require a precise determination of Parton Distribution Functions (PDFs) in order to detect reliably potential signs of new physics. I present a systematic study designed to assess the risk of absorbing, and thus missing, signals from heavy new physics in the PDFs parameterisation during the High-Luminosity LHC run. I discuss the consequences of such a PDF "contamination" and consider possible solutions to it, for example the inclusion of other experimental data probing large-x regions at low-energy. ## 1 Introduction PDFs are used to compute all theoretical predictions at hadron colliders. They describe the repartition of the proton momentum among its constituent partons (quarks and gluons). Their dependence on the energy scale \(Q\) can be described theoretically with the DGLAP evolution equation. However, their dependence on the Bjorken \(x\) cannot be predicted perturbatively; it has to be fitted from data [1][2][3]. This study assesses whether these fits could absorb into the PDFs higher-energy signals associated with new physics. The considered scenario is one where the SM Lagrangian is extended by additional terms corresponding to a heavy BSM field. Such contributions would have an impact on the high-energy tails of some observables. If these data are used to perform a PDF fit assuming the SM in the theory predictions, the new physics could be "fitted away", provided the PDFs have sufficient degrees of freedom to adapt to the BSM-induced shift without worsening the data-theory agreement of the other datasets included in the fit. The complete study can be found in a recent publication [4]. ## 2 Strategy for assessing the risk of PDF contamination ### General methodology In practice, this study relies on the "closure test" method developed by the NNPDF collaboration [5].
In a nutshell, this is a three-step procedure. First, one chooses a PDF set that is assumed to be the "true" description of the proton structure; I will refer to it as the "initial PDFs". Second, one generates pseudodata with Monte Carlo methods by convoluting the initial PDFs with partonic cross sections computed perturbatively from a chosen Lagrangian. Third and last, one performs a PDF fit on the obtained pseudodata. The output PDFs are then compared to the initial ones. If they are compatible with each other, one can assume that the fitting method is sound. In this study, the method I just presented is modified slightly: two types of closure tests are done. In one, the Lagrangian used to produce the pseudodata in the second step is the SM one; I will refer to this procedure as the "baseline fit" and to its result as the "baseline PDFs". The other is performed with a Lagrangian containing additional new heavy fields, detailed below, in the second step. The fundamental point is that in the third step the PDFs are fitted assuming only the SM, as we do in real life with experimental data. This type of fit will be referred to as the "contaminated fit", outputting "contaminated PDFs". Furthermore, the comparison here is between the baseline and contaminated PDFs. If they are not compatible with each other, it means that the new physics has been fitted away in the contaminated PDFs. ### New heavy physics used for pseudodata generation To create the pseudodata, two new physics scenarios have been used, each involving a new heavy boson: one with a heavy \(Z^{\prime}\) charged under the gauge group \(U(1)_{Y}\), and the other with a heavy \(W^{\prime}\) charged under \(SU(2)_{L}\). Both these UV-complete models are matched to SM Effective Field Theory (SMEFT) Lagrangians with terms up to dimension six. This allows more flexibility in the choice of the parameter values. Then, the models are used to generate Drell-Yan (DY) pseudodata.
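A toy version of this closure-test logic can be sketched in a few lines; everything below (the two-parameter shape, the noise level, the \(x^4\) "BSM" enhancement) is invented for illustration and is not the NNPDF machinery:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Toy two-parameter stand-in for a PDF shape (illustrative only).
def pdf(x, a, b):
    return x**a * (1 - x)**b

true_a, true_b = -0.5, 3.0
x = np.linspace(0.05, 0.9, 30)
sigma = 0.02 * pdf(x, true_a, true_b)        # 2% pseudo-experimental errors

# Step 2: pseudodata. The "contaminated" set carries a BSM-like
# enhancement growing in the high-x tail, mimicking a heavy new boson.
sm_data = pdf(x, true_a, true_b) + rng.normal(0, sigma)
bsm_data = pdf(x, true_a, true_b) * (1 + 1.0 * x**4) + rng.normal(0, sigma)

# Step 3: fit both sets assuming the SM, i.e. the pure pdf shape.
(a_sm, b_sm), _ = curve_fit(pdf, x, sm_data, p0=(-0.4, 2.5), sigma=sigma)
(a_bsm, b_bsm), _ = curve_fit(pdf, x, bsm_data, p0=(-0.4, 2.5), sigma=sigma)

print(f"baseline fit:     a = {a_sm:+.3f}, b = {b_sm:+.3f}")
print(f"contaminated fit: a = {a_bsm:+.3f}, b = {b_bsm:+.3f}")
# The contaminated parameters drift from (-0.5, 3.0): part of the BSM
# shift has been silently absorbed into the fitted "PDF".
```

The baseline fit recovers the input parameters, while the contaminated fit hardens the large-x tail to accommodate the injected signal, which is the mechanism described above.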
The comparison of the predictions of the SM, the UV BSM model, the SMEFT with only linear corrections, and the SMEFT with both linear and quadratic corrections can be seen in Fig. 1. The middle panels show that in both cases the new physics introduced has a non-trivial effect on the high-energy tail of the distributions. The bottom panels show that in both cases the SMEFT models including only the linear corrections describe faithfully the UV physics up to 4 TeV. Those are the models that are used to generate the MC pseudodata which are added to the PDF fits. Figure 1: Predictions for Drell-Yan differential cross sections in dilepton invariant mass: \(p\bar{p}\to l^{-}\bar{\nu}\) with a \(W^{\prime}\) on the left and \(p\bar{p}\to l^{-}l^{+}\) with a \(Z^{\prime}\) on the right. ## 3 Contamination from Drell-Yan large invariant masses distributions ### Effect of new heavy bosons in PDF fits In the \(Z^{\prime}\) scenario, the addition of the contaminated DY pseudodata worsened the quality of the fit. The PDFs were sufficiently constrained by the rest of the datasets to be unable to adapt to the BSM shift, resulting in a poor data-theory agreement for the DY processes. Consequently, the contaminated pseudodata was dropped from the fit, the baseline and contaminated PDFs were compatible with one another, and no actual contamination occurred. However, in the \(W^{\prime}\) scenario the addition of the contaminated pseudodata did not lower the quality of the fit, allowing it to be properly included. The baseline and contaminated PDFs were not compatible with each other, suggesting that the impact of the \(W^{\prime}\) had been absorbed by the contaminated PDFs. This is due to the lack of constraints on the large-x antiquark distributions from the rest of the data included in the analysis. As a result, we would risk missing the new physics while analysing this data.
### Impact of contamination on other observables On top of missing the new physics, the PDF contamination has another consequence. The contaminated PDFs are not compatible with the initial PDFs. Thus, they might produce non-physical discrepancies if used to compute theory predictions, even outside the DY sector. In Fig. 2 the theory predictions for diboson production processes are plotted, and one can observe a systematic tension between the predictions made with the initial PDFs and the contaminated ones. These shifts are completely fictitious and entirely caused by the fact that the PDFs have been warped by the contamination. ## 4 Possible solutions to prevent contamination ### Ratios of observables A first obvious way to discriminate possible BSM contamination in a PDF fit is to consider two different processes which share the same parton channels. By taking the ratio of those observables, the importance of the PDFs is greatly diminished. Any shift from the SM predictions can then be attributed to one of the partonic cross sections. In Fig. 3, one can see the ratio of the diboson over the DY cross sections. One can observe a systematic deviation growing with the energy. This suggests the presence of new physics in one of the observables. Indeed, we know that the DY data has been generated with a theory featuring a new heavy \(W^{\prime}\). Without this prior knowledge, it would not be clear which of the two datasets is affected by the BSM signals. In this case only the DY pseudodata is included in the PDF fit, and excluding it would effectively prevent the contamination. ### Constraints from low-energy datasets on large-x sea quarks Another option would be to reduce the degrees of freedom available to the PDFs that allow them to adapt to the BSM signals. Practically, this corresponds to adding to the fit new datasets which would constrain the large-x sea quarks, where the PDF uncertainties are large.
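The cancellation exploited by such ratios can be illustrated with a toy calculation (all numbers below are invented; the point is only that a common parton-luminosity factor drops out of the ratio, while a shift in one partonic cross section does not):

```python
import numpy as np

m = np.array([1.0, 2.0, 3.0, 4.0])         # invariant-mass bins [TeV]
lumi_true = np.exp(-m)                     # "true" parton luminosity
lumi_cont = lumi_true * (1 + 0.2 * m / 4)  # warped by contamination

xsec_dy = 1.0 / m**2                       # toy SM partonic pieces
xsec_vv = 0.5 / m**2
bsm = 1 + 0.1 * m**2                       # heavy W'-like tail in DY only

# With the contaminated PDF, both absolute predictions shift...
dy_cont = lumi_cont * xsec_dy
vv_cont = lumi_cont * xsec_vv
# ...but their ratio is PDF-independent:
assert np.allclose(dy_cont / vv_cont, xsec_dy / xsec_vv)

# A BSM signal in the DY partonic cross section survives in the ratio,
# growing with energy:
ratio_bsm = (lumi_true * xsec_dy * bsm) / (lumi_true * xsec_vv)
print(ratio_bsm / (xsec_dy / xsec_vv))   # -> [1.1 1.4 1.9 2.6]
```

Any deviation of the measured ratio from its SM value then points at the partonic cross sections rather than at the PDFs, which is the diagnostic used in Fig. 3.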
Figure 2: Predictions for \(W^{+}H\) (on the left) and \(W^{+}W^{-}\) (on the right) at the HL-LHC using the initial PDFs (truth) and the contaminated ones (Theory). To be safe from heavy new physics contamination, one should consider low-energy observables. In our study we had included data from the fixed-target DY experiment SeaQuest [7]. The contamination of the PDFs worsened the \(\chi^{2}\) function measuring the data-theory agreement for this experiment, but not sufficiently to impact the fit and force the exclusion of the contaminated DY data. The strategy would be to increase the amount of data mapping this region. The EIC programme, for instance, will produce important inputs in this parameter space [8][9].
2302.09123
The $\mathbb C$-motivic Adams-Novikov spectral sequence for topological modular forms
We analyze the $\mathbb{C}$-motivic (and classical) Adams-Novikov spectral sequence for the $\mathbb{C}$-motivic modular forms spectrum $\mathit{mmf}$ (and for the classical topological modular forms spectrum $\mathit{tmf}$). We primarily use purely algebraic techniques, with a few exceptions. Along the way, we settle a previously unresolved detail about the multiplicative structure of the homotopy groups of $\mathit{tmf}$.
Daniel C. Isaksen, Hana Jia Kong, Guchuan Li, Yangyang Ruan, Heyi Zhu
2023-02-17T20:14:43Z
http://arxiv.org/abs/2302.09123v1
# The C-motivic Adams-Novikov spectral sequence for topological modular forms ###### Abstract. We analyze the C-motivic (and classical) Adams-Novikov spectral sequence for the C-motivic modular forms spectrum _mmf_ (and for the classical topological modular forms spectrum _tmf_). We primarily use purely algebraic techniques, with a few exceptions. Along the way, we settle a previously unresolved detail about the multiplicative structure of the homotopy groups of _tmf_. Key words and phrases:topological modular forms, motivic stable homotopy theory, Adams-Novikov spectral sequence, Adams spectral sequence, stable homotopy group 2010 Mathematics Subject Classification: Primary 14F42, 55Q10, 55T15; Secondary 55Q45 The first author was partially supported by National Science Foundation Grant DMS-2202267. The second author was supported by National Science Foundation grant DMS-1926686. The third author would like to thank the Max Planck Institute for Mathematics and the Hausdorff Research Institute for Mathematics for the hospitality. The goal of this manuscript is to carry out the Adams-Novikov spectral sequence computation for \(tmf\). In fact, we will work in the more general \(\mathbb{C}\)-motivic context and compute the motivic Adams-Novikov spectral sequence for the \(\mathbb{C}\)-motivic modular forms spectrum \(mmf\). The classical computation is easily recovered from the motivic computation by an algebraic localization. More specifically, there is a certain motivic element \(\tau\). Inverting \(\tau\) has the effect of collapsing \(\mathbb{C}\)-motivic computations to classical computations. In particular, \(\tau\)-torsion phenomena disappear in the classical context. Henceforth, we will work in the \(\mathbb{C}\)-motivic context. The interested reader can easily recover classical computations from our work by inverting \(\tau\).
From another perspective, we also compute the \(\mathbb{C}\)-motivic effective slice spectral sequence for \(mmf\), since it agrees with the Adams-Novikov spectral sequence over \(\mathbb{C}\). This identification of spectral sequences does not appear to be cleanly stated in the literature, but it is a computational consequence of the weight \(0\) result of [15, Theorem 1]. Our goal is not merely to record the details of the Adams-Novikov spectral sequence, which have previously appeared in [1]. More specifically, we have attempted to give proofs that are as algebraic as possible. Such algebraic proofs are less likely to contain subtle mistakes, and they are more easily verifiable by machine. The motivic context provides us with additional algebraic tools that are not accessible in the strictly classical context. We also correct a few oversights and minor mistakes in the analysis of [1]. ### Algebraic philosophy We do not use any information from the sphere spectrum as input for our computations. We do, however, assume full knowledge of the algebraic structure of the motivic Adams and motivic Adams-Novikov \(E_{2}\)-pages for \(mmf\). This is consistent with our goal of using algebraic techniques whenever possible. It is also consistent with our philosophy that the role of \(tmf\) is to inform us about the sphere spectrum. By comparison, in [1] it is necessary to import the relation \(\eta^{2}\kappa=0\) to \(tmf\) from previous knowledge of the sphere spectrum. Fortunately for us, we have the relation \(h_{1}^{2}d=0\) in the Adams-Novikov \(E_{2}\)-page for \(mmf\). Because there are no elements in higher filtration, the relation \(\eta^{2}\kappa=0\) therefore has an entirely algebraic proof. A computation involving the Adams or Adams-Novikov spectral sequence breaks into two main stages. The first stage is entirely algebraic and involves the computation of the \(E_{2}\)-page. In the modern era, this first stage is typically conducted by machine. 
The computation of the \(E_{2}\)-pages for \(tmf\) is not elementary, but it can be done manually with enough patience [1, Section 7][1][1, Section 18]. The second stage of the process involves the computation of differentials and hidden extensions. This stage typically requires input from topology, so it cannot be fully automated because it is not entirely algebraic. Our contribution is to recognize that much of this topological second stage actually can be carried out using only algebraic information. The key idea is to use the additional structure of the motivic context in order to pass back and forth between the Adams and Adams-Novikov spectral sequences. Each \(E_{2}\)-page tells us some things about the homotopy groups of \(tmf\). The information contained in these \(E_{2}\)-pages does overlap, but not perfectly. The union of the information in both \(E_{2}\)-pages is strictly greater than the information in either one of the \(E_{2}\)-pages. We give several concrete examples of information available in only one of the two \(E_{2}\)-pages. 1. In the classical Adams \(E_{2}\)-page for \(tmf\), we have the relation \(h_{1}^{4}=0\). This implies the relation \(\eta^{4}=0\) in homotopy. However, in the classical Adams-Novikov \(E_{2}\)-page, the element \(h_{1}^{4}\) is non-zero and is hit by an Adams-Novikov \(d_{3}\) differential. Thus, the relation \(\eta^{4}=0\) has an entirely algebraic proof, but only in the Adams spectral sequence. 2. In fact, the relation \(h_{1}^{4}=0\) is a consequence of the Massey product \(h_{1}^{2}=\langle h_{0},h_{1},h_{0}\rangle\) in the Adams \(E_{2}\)-page. In the classical Adams-Novikov \(E_{2}\)-page, the corresponding Massey product \(\langle 2,h_{1},2\rangle\) is zero. Consequently, the Toda bracket \(\eta^{2}=\langle 2,\eta,2\rangle\) has an entirely algebraic proof, but only in the Adams spectral sequence. 3. In the classical Adams-Novikov \(E_{2}\)-page for \(tmf\), we have the relation \(h_{2}^{3}=h_{1}c\). 
This implies the relation \(\nu^{3}=\eta\epsilon\). However, in the classical Adams \(E_{2}\)-page, we have \(h_{2}^{3}=0\). In fact, there is a hidden \(\nu\) extension from \(h_{2}^{2}\) to \(h_{1}c\) in the Adams spectral sequence. Thus, the relation \(\nu^{3}=\eta\epsilon\) has an entirely algebraic proof, but only in the Adams-Novikov spectral sequence. 4. In fact, the relation \(h_{2}^{3}=h_{1}c\) is a consequence of the Massey product \(c=\langle h_{2},h_{1},h_{2}\rangle\) in the Adams-Novikov \(E_{2}\)-page. In the classical Adams \(E_{2}\)-page, the corresponding Massey product is zero. Consequently, the Toda bracket \(\epsilon=\langle\nu,\eta,\nu\rangle\) has an entirely algebraic proof, but only in the Adams-Novikov spectral sequence. See Lemma 2.20 for more detail on this example. In order to obtain one key Adams-Novikov differential, we use Bruner's theorem on the interaction between algebraic Steenrod operations [11] and Adams differentials in the context of the Adams spectral sequence. We refer to [1, Theorem 2.2] for a precise readable statement; see also [1] and [13]. The practical implementation of Bruner's theorem requires only algebraic information in the form of algebraic Steenrod operations on Ext groups. These operations can be computed by machine, although not as effectively as the additive and multiplicative structure of the Ext groups. The algebraic Steenrod operations are additional structure on top of what topologists usually think of as "standard homological algebra". In the context of the Adams-Novikov spectral sequence, we also rely on the Leibniz rule in the form \(d_{r}(x^{k})=kx^{k-1}d_{r}(x)\). Philosophically, this formula is connected to Bruner's theorem, although we do not know how to make a precise connection. As in the case of Bruner's theorem, it feels like slightly more information than is usually considered in standard homological algebra. 
We also draw attention to Proposition 4.5, in which we establish a hidden 2 extension in the 110-stem. Here we use some information about the homotopy groups of \(mmf/\tau^{2}\). One might argue that this information is not entirely of an algebraic nature. By comparison, the corresponding 2 extension in the Adams spectral sequence is hidden, but not particularly difficult [1, Theorem 9.8(110)]. ### Techniques Section 2.10 describes a particularly powerful method for studying the \(\mathbb{C}\)-motivic Adams-Novikov spectral sequence in a way that has no classical analogue. There is a map \(q:mmf/\tau\to\Sigma^{1,-1}mmf\) that can be viewed as projection to the top cell of the 2-cell \(mmf\)-module \(mmf/\tau\). The homotopy of \(mmf/\tau\) is entirely understood in an algebraic sense since it is isomorphic to the classical Adams-Novikov \(E_{2}\)-page for \(tmf\). Moreover, the map \(q\) maps onto the homotopy of \(mmf\) that is annihilated by \(\tau\). Thus \(q\) can be used to detect structure in \(mmf\) that is related to classes that are annihilated by \(\tau\). In practice, many specific questions about hidden extensions do not directly involve elements that are annihilated by \(\tau\). Frequently, if we multiply these elements by a power of \(\tau\) and a power of \(g\), then we end up with elements that are annihilated by \(\tau\). We can use \(q\) to understand these latter elements, and finally deduce information about the original elements. Table 5 lists numerous specific examples of this process. The majority of hidden extensions can be handled very easily in this way, although a few extensions require more complicated arguments. We avoid the use of Toda brackets whenever possible, but occasionally they are inevitable. In those cases where we must compute a Toda bracket, we once again rely exclusively on algebraic techniques. Namely, our Toda brackets arise from corresponding Massey products in either the Adams or Adams-Novikov \(E_{2}\)-page. 
The Moss Convergence Theorem [12] says that such algebraic Massey products detect Toda brackets in "well-behaved" situations. In practice, all of the situations that we study are well-behaved. ### The differentials on \(\Delta^{k}\) Having carried out the entire analysis of the motivic Adams-Novikov spectral sequence for \(mmf\), we can see in hindsight that there are a few key steps from which all of the other miscellaneous computations follow. Our experience shows that the key steps involve the differentials on elements of the form \(2^{j}\Delta^{k}\). This is not particularly surprising; we expect the element \(\Delta\) to play a dominant role since it represents \(v_{2}\)-periodicity. First, we establish \(d_{5}(\Delta)=\tau^{2}h_{2}g\) in Proposition 3.8. This follows immediately by comparison to the Adams spectral sequence, in which \(\tau^{2}h_{2}g\) is already zero in the \(E_{2}\)-page. Thus, we have an algebraic proof for \(d_{5}(\Delta)\). Then the Leibniz rule implies that \(d_{5}(\Delta^{2})=2\tau^{2}\Delta h_{2}g\). The Leibniz rule also implies that \(d_{5}(\Delta^{4})=4\tau^{2}\Delta^{3}h_{2}g\). However, \(4\tau^{2}\Delta^{3}h_{2}g\) is zero in the Adams-Novikov \(E_{2}\)-page. Because of the hidden 2 extension from \(2\tau^{2}h_{2}\) to \(\tau^{3}h_{1}^{3}\), the element \(\tau^{3}\Delta^{3}h_{1}^{3}g\) ought to play the role of \(4\tau^{2}\Delta^{3}h_{2}g\). This strongly suggests that there should be a differential \(d_{7}(\Delta^{4})=\tau^{3}\Delta^{3}h_{1}^{3}g\). In fact, this formula is correct (see Proposition 3.14), but it requires some work to give a precise proof. Our solution, once again, is to play the Adams and Adams-Novikov spectral sequences against each other. We used the Adams \(E_{2}\)-page to obtain the Adams-Novikov differential \(d_{5}(\Delta)\). Then we used the Leibniz rule in the Adams-Novikov spectral sequence to obtain \(d_{5}(\Delta^{2})\). 
In turn, the Adams-Novikov differential \(d_{5}(\Delta^{2})\) implies an Adams differential \(d_{2}(\Delta^{2})\), or \(d_{2}(w_{2})\) in the notation of [1]. Next, we obtain an Adams differential \(d_{3}(\Delta^{4})\), or \(d_{3}(w_{2}^{2})\) in the notation of [1], by applying Bruner's theorem on the interaction between squaring operations and Adams differentials [1][2]. Finally, the Adams differential \(d_{3}(\Delta^{4})\) implies that there is an Adams-Novikov differential \(d_{7}(\Delta^{4})\). For more details, see Sections 3.3 and 3.4. Curiously, precise statements about the Adams-Novikov differential \(d_{7}(\Delta^{4})\) are missing from [1][1][1].

### Main results

Our main results are expressed in the charts in Section 7. For completeness, we express them in the form of a main theorem.

**Theorem 1.1**.: _The charts in Section 7 represent the \(\mathbb{C}\)-motivic Adams-Novikov spectral sequence for the motivic modular forms spectrum \(mmf\), including complete descriptions of_

* _the_ \(E_{2}\)_-page;_
* _all differentials;_
* _the_ \(E_{\infty}\)_-page;_
* _all hidden extensions by_ \(2\)_,_ \(\eta\)_, and_ \(\nu\)_._

The proof of Theorem 1.1 consists of the sum of a long list of miscellaneous computations, which are carried out throughout the manuscript. See especially the tables in Section 6. These tables summarize the main computational facts, and they give cross-references to more detailed proofs of each fact. Our work is not as complete as [1] because we have not completely analyzed the multiplicative structure. In principle, this could be done using the same techniques. We do study one family of multiplicative relations in more detail. Bruner and Rognes identify a family \(\nu_{k}\) of elements in the homotopy of \(tmf\). They mostly determine the products among these elements, but they leave one case unresolved. Our techniques settle this last detail about the \(2\)-primary multiplicative structure of the homotopy of \(tmf\).
**Theorem 1.2**.: _In the context of [1], \(\nu_{4}\nu_{6}=\nu\nu_{2}M\)._

Theorem 1.2 is proved later as Corollary 5.12. In fact, it is a consequence of the more general Theorem 5.10, which offers a graceful simultaneous analysis of the products \(\nu_{j}\nu_{k}\). Bruner and Rognes empirically observed the formula \[\nu_{i}\nu_{j}=(i+1)\nu\nu_{i+j}.\] Our proof shows that the coefficients \((i+1)\) arise naturally from the Leibniz rule \[d_{5}(\Delta^{i+1})=(i+1)\Delta^{i}d_{5}(\Delta).\]

### Future directions

Our work raises some questions that deserve further study.

**Problem 1.1**.: Compute the \(\overline{\kappa}\)-periodic \(\mathbb{C}\)-motivic spectrum \(mmf[\overline{\kappa}^{-1}]\).

Frequently, we detect elements and relations by first computing their products with various powers of \(g\) or \(\overline{\kappa}\). In other words, much of the structure of \(mmf\) is reflected in the \(\overline{\kappa}\)-periodic spectrum \(mmf[\overline{\kappa}^{-1}]\). This motivic spectrum is non-trivial, but its homotopy is entirely annihilated by \(\tau^{11}\). Consequently, its Betti realization is trivial, and it represents purely "exotic" motivic phenomena. We mention that [1] also studies \(g\)-periodic phenomena in \(tmf\), although not in a way that is particularly close to our perspective.

**Problem 1.2**.: Develop better technology to deduce the differential \(d_{7}(\Delta^{4})=\tau^{3}\Delta^{3}h_{1}^{3}g\) directly from the differential \(d_{5}(\Delta)=\tau^{2}h_{2}g\).

It is conceivable that \(d_{7}(\Delta^{4})\) could be deduced directly from \(d_{5}(\Delta)\) using a variant of Bruner's theorem that would apply in the Adams-Novikov spectral sequence, but we have not even formulated a precise statement of such a variant. There is a connection between Bruner's theorem and the Leibniz rule \(d_{r}(x^{2})=2xd_{r}(x)\), but the precise relationship is not clear to us.
Another possible approach to Problem 1.2 might involve an enriched \(E_{2}\)-page in which the \(2\) extension from \(2\tau^{2}h_{2}\) to \(\tau^{3}h_{1}^{3}\) is not hidden.

**Problem 1.3**.: Construct a spectral sequence whose \(E_{2}\)-page reflects the algebraic structure of both the Adams and Adams-Novikov \(E_{2}\)-pages.

We frequently pass back and forth between the Adams and Adams-Novikov spectral sequences. In order to facilitate these transitions, Section 2.5 introduces a notion of correspondence between elements of the Adams spectral sequence and elements of the Adams-Novikov spectral sequence. This setup feels like a preliminary attempt to describe a richer connection between the two spectral sequences. It would be much more convenient and effective to compute in just a single spectral sequence that reflects the algebraic structure of both the Adams and Adams-Novikov spectral sequences. There are some preliminary indications that "bisynthetic homotopy theory" (also known as \(H\mathbb{F}_{2}\)-synthetic \(BP\)-synthetic homotopy theory) provides a context for this.

### Outline

We begin in Section 2 with a discussion of tools that we will use to carry out our explicit computations. We describe both the motivic Adams and motivic Adams-Novikov spectral sequences for \(mmf\), and we establish notation for elements in these spectral sequences. We also establish notation for certain homotopy elements that we will use later. We draw particular attention to Sections 2.9 and 2.10, which establish a powerful tool for detecting hidden extensions. The basic idea is to use the motivic spectrum \(mmf/\tau\), whose homotopy is entirely algebraic. Our explicit computations begin in Section 3, where we establish all of the Adams-Novikov differentials. The propositions in this section are mostly in order of increasing length of differentials.
However, we make some exceptions to this general rule to preserve the logical order, so each result only depends on previously proved results. Once the Adams-Novikov differentials are computed, we proceed to compute all hidden extensions by \(2\), \(\eta\), and \(\nu\) in Section 4. Most of these extensions follow immediately by comparison to the homotopy of \(mmf/\tau\), but there are several cases with more difficult proofs. Finally, in Section 5, we consider an explicit family of products that are particularly interesting. Our results on these products fill a gap in the product structure of \(\pi_{*}\mathit{tmf}\), as described in [1].

### Conventions

We work exclusively at the prime \(2\). There are interesting aspects to the computation of \(tmf\) at the prime \(3\) ([1, Chapter 5], [13], [14], [15, Chapter 13]), but we do not address that topic. We use the motivic Adams-Novikov spectral sequence to compute the homotopy groups of the \(2\)-localization of \(mmf\). We also use the \(E_{2}\)-page of the motivic Adams spectral sequence, which actually converges to the homotopy groups of the \(2\)-completion of \(mmf\). The distinction between localization and completion is not essential since only finitely generated abelian groups appear in our work. For expository simplicity, these localizations or completions do not appear in our notation. For example, the symbol \(\mathbb{Z}\) refers to the integers localized at \(2\), or to the \(2\)-adic integers. Similarly, \(\pi_{*,*}mmf\) refers to the motivic stable homotopy groups of the \(2\)-localization (or \(2\)-completion) of \(mmf\). The adjective "motivic" always refers exclusively to the \(\mathbb{C}\)-motivic context. We consider no base fields other than \(\mathbb{C}\). Many of our explicit results are labelled with the degrees in which they occur. These degrees may help the reader navigate the overall computation, especially in finding the relevant elements on Adams-Novikov charts.
### Acknowledgements

We thank Tilman Bauer, Robert Bruner, and John Rognes for various discussions related to the production of this manuscript. We also appreciate stimulating discussions with the participants of the Winter 2023 eCHT reading seminar on the Adams spectral sequence for \(tmf\).

## 2. Background

In this section, we discuss the techniques that we will use later to carry out our computations.

### The \(\mathbb{C}\)-motivic modular forms spectrum \(mmf\)

There is a \(\mathbb{C}\)-motivic \(E_{\infty}\)-ring spectrum \(mmf\) that can be viewed as the analogue of the classical topological modular forms spectrum \(tmf\) [10]. The Betti realization of \(mmf\) is the classical spectrum \(tmf\). Moreover, the cohomology of \(mmf\) is \(A/\!\!/A(2)\), where \(A\) denotes the \(\mathbb{C}\)-motivic Steenrod algebra and \(A(2)\) is the subalgebra generated by \(\mathrm{Sq}^{1}\), \(\mathrm{Sq}^{2}\), and \(\mathrm{Sq}^{4}\).

### The \(\mathbb{C}\)-motivic Adams spectral sequence for \(mmf\)

We abbreviate the motivic Adams spectral sequence for \(mmf\) by mAss. The cohomology of \(\mathbb{C}\)-motivic \(A(2)\) is the \(E_{2}\)-page of the mAss. The manuscript [16] computes the cohomology of \(\mathbb{C}\)-motivic \(A(2)\) using the motivic May spectral sequence, and it gives a complete description of its ring structure. The mAss \(E_{2}\)-page consists entirely of algebraic information, which we take as given. We grade the mAss \(E_{2}\)-page in the form \((s,f,w)\), where \(s\) is the topological stem, \(f\) is the Adams filtration, and \(w\) is the motivic weight. The motivic Adams differentials are recorded in [18]. However, this manuscript does not depend on previous knowledge of any Adams differentials, neither classical nor motivic. For completeness, we provide self-contained proofs for two Adams differentials in Proposition 3.19. We adopt the notation of [18] and [18] for the mAss.
For the reader's convenience, Table 1 provides a concordance between our notation and the notation of [1]. Beware that the motivic generators \(u\) and \(\Delta u\) have no classical counterparts because they are annihilated by \(\tau\).

### The \(\mathbb{C}\)-motivic Adams-Novikov spectral sequence for \(mmf\)

The \(E_{2}\)-page of the classical Adams-Novikov spectral sequence for \(tmf\) is given by \(\operatorname{Ext}_{BP_{*}BP}^{**}(BP_{*},BP_{*}tmf)\), where \(BP\) denotes the Brown-Peterson spectrum. Analogously to the classical Adams-Novikov spectral sequence, one can construct a motivic Adams-Novikov spectral sequence by resolving with respect to the motivic Brown-Peterson spectrum. We abbreviate the motivic Adams-Novikov spectral sequence by mANss. We grade the mANss \(E_{2}\)-page in the form \((s,f,w)\), where \(s\) is the topological stem, \(f\) is the Adams-Novikov filtration, and \(w\) is the motivic weight. The mANss is easy to describe in classical terms. The motivic \(E_{2}\)-page can be obtained from its classical analogue by first assigning a third degree, called the weight, to be half of the total degree for each class, and then adjoining a polynomial generator \(\tau\) of degree \((0,0,-1)\) (see, e.g., [1][18]). More explicitly, a classical element \(x\) in degree \((s,f)\) corresponds to a family of elements \(\{\tau^{n}x\mid n\geq 0\}\) in the mANss, where the motivic element \(x\) has degree \(\left(s,f,\frac{s+f}{2}\right)\). The \(E_{2}\)-page of the mANss consists entirely of algebraic information, which we take as given. For our purposes, the best way to compute this \(E_{2}\)-page is by the algebraic Novikov spectral sequence, which is worked out in detail in [1].

**Remark 2.1**.: The \(E_{2}\)-page of the classical Adams-Novikov spectral sequence for \(tmf\) is the cohomology of a version of the elliptic curve Hopf algebroid ([1][1]).
\begin{table}
\begin{tabular}{l l l}
\hline \hline
\((s,f,w)\) & [18] & [18] \\
\hline
\((0,1,0)\) & \(h_{0}\) & \(h_{0}\) \\
\((1,1,1)\) & \(h_{1}\) & \(h_{1}\) \\
\((3,1,2)\) & \(h_{2}\) & \(h_{2}\) \\
\((8,3,5)\) & \(c\) & \(c_{0}\) \\
\((8,4,4)\) & \(P\) & \(w_{1}\) \\
\((11,3,7)\) & \(u\) & \\
\((12,3,6)\) & \(a\) or \(\alpha\) & \(\alpha\) \\
\((14,4,8)\) & \(d\) & \(d_{0}\) \\
\((15,3,8)\) & \(n\) or \(\nu\) & \(\beta\) \\
\((17,4,10)\) & \(e\) & \(e_{0}\) \\
\((20,4,12)\) & \(g\) & \(g\) \\
\((25,5,13)\) & \(\Delta h_{1}\) & \(\gamma\) \\
\((32,7,17)\) & \(\Delta c\) & \(\delta\) \\
\((35,7,19)\) & \(\Delta u\) & \\
\((48,8,24)\) & \(\Delta^{2}\) & \(w_{2}\) \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Generators of the motivic Adams \(E_{2}\)-page for \(mmf\)

By the change-of-rings theorem [1, Theorem 15.3], this is the same as the cohomology of the Hopf algebroid \((BP_{*}tmf,BP_{*}BP\otimes_{BP_{*}}BP_{*}tmf)\). See [22, Proposition 15.7 and Section 20] for more details. We do not rely on this perspective.

### Notation for the motivic Adams-Novikov spectral sequence

Table 2 lists the multiplicative generators for the mANss \(E_{2}\)-page for \(mmf\). These generators are the starting point of our computation. One must be slightly careful with the definitions of some of these elements because they belong to cyclic groups of order greater than \(2\). In these cases, there is more than one possible generator. Specifically, this issue arises for the elements \(h_{2}\), \(P\), \(4a\), \(g\), and \(\Delta\). For \(P\), \(4a\), and \(g\), we simply choose arbitrary generators.

**Remark 2.2**.: \((3,1,2)\) The choice of \(h_{2}\) makes little practical difference to us, as long as it is a generator of the mANss \(E_{2}\)-page in degree \((3,1,2)\).
For definiteness, we take \(h_{2}\) to represent the homotopy element \(\nu\), assuming an a priori definition of \(\nu\) (for example, by appealing to the homotopy of the sphere spectrum or by appealing to a geometric construction of \(\nu\) involving quaternionic multiplication). The choice of \(\Delta\) also makes little practical difference. We choose \(\Delta\) in such a way as to make our formulas easier to write. See Remark 3.9 and Remark 5.8 for more details. Note that the choice of \(\Delta\) depends on a previous choice of \(h_{2}\).

**Remark 2.3**.: \((12,0,6)\) The notation \(4a\) does not appear to be natural and deserves some explanation. There are two closely related reasons why we find this notation to be convenient. First, the element \(4a\) is detected in the algebraic Novikov spectral sequence [1] by an element \(h_{0}^{2}a\). Second, the element \(2\cdot 4a\) turns out to be a permanent cycle that detects an element in \(\pi_{12,6}mmf\). This same homotopy element is detected by \(h_{0}^{3}a\) in the Adams spectral sequence for \(mmf\).

The element \(g\) is a permanent cycle and therefore represents a homotopy class \(\overline{\kappa}\). Multiplication by \(g\) provides regular structure to the mANss for \(mmf\). We typically sort elements into families that are related by \(g\) multiplication. In other words, when we consider a particular element \(x\), we also typically consider the elements \(xg^{k}\) for all \(k\geq 0\) at the same time. Taken together, Figures 1 and 3 depict the \(E_{2}\)-page of the mANss for \(mmf\) graphically. The careful reader should superimpose these figures in order to obtain a full picture of the mANss. Figure 1 depicts a regular \(v_{1}\)-periodic pattern in the \(E_{2}\)-page, to be discussed in detail in Section 2.7. Figure 3 depicts the remaining classes.
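Since \(g\) lies in degree \((20,4,12)\) (see Table 2), sorting into \(g\)-multiplication families has a uniform effect on degrees: if \(x\) lies in degree \((s,f,w)\), then

\[
xg^{k}\in\left(s+20k,\;f+4k,\;w+12k\right)\qquad\text{for all }k\geq 0.
\]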
\begin{table}
\begin{tabular}{l l}
\hline \hline
\((s,f,w)\) & generator \\
\hline
\((0,0,-1)\) & \(\tau\) \\
\((1,1,1)\) & \(h_{1}\) \\
\((3,1,2)\) & \(h_{2}\) \\
\((5,1,3)\) & \(h_{1}v_{1}^{2}\) \\
\((8,0,4)\) & \(P\) \\
\((8,2,5)\) & \(c\) \\
\((12,0,6)\) & \(4a\) \\
\((14,2,8)\) & \(d\) \\
\((20,4,12)\) & \(g\) \\
\((24,0,12)\) & \(\Delta\) \\
\hline \hline
\end{tabular}
\end{table}
Table 2. Generators of the motivic Adams-Novikov \(E_{2}\)-page for \(mmf\)

### Comparison between the mANss and the mAss

**Definition 2.4**.: Let \(a\) be a permanent cycle in the mANss for \(mmf\), and let \(b\) be a permanent cycle in the mAss for \(mmf\). The elements \(a\) and \(b\) **correspond** if there exists a non-zero element in \(\pi_{*,*}mmf\) that is detected by \(a\) in the mANss for \(mmf\) and is detected by \(b\) in the mAss for \(mmf\).

**Remark 2.5**.: Beware that a permanent cycle may detect more than one element in \(\pi_{*,*}mmf\), depending on the presence of permanent cycles in higher filtration. We ask only that the cosets detected by \(a\) and \(b\) intersect; they need not coincide. We give an explicit example. The element \(P\) of the mANss \(E_{\infty}\)-page detects two elements of \(\pi_{8,4}mmf\) because of the presence of \(\tau c\) in higher filtration. On the other hand, the element \(P\) of the mAss \(E_{\infty}\)-page detects infinitely many elements (which differ only by a 2-adic unit factor) because of the presence of \(Ph_{0}^{k}\) in higher filtration for \(k\geq 1\). This is an example of a corresponding pair of elements that do not detect precisely the same coset of homotopy elements.

**Remark 2.6**.: It is possible that a single element of the mANss corresponds to two different elements of the mAss. For example, the element \(P\) of the mANss detects two elements of \(\pi_{8,4}mmf\) because of the presence of \(\tau c\) in higher filtration. These two homotopy elements are detected by \(\tau c\) and by \(P\) in the mAss.
Consequently, the mANss element \(P\) corresponds to the mAss element \(P\), and it also corresponds to the mAss element \(\tau c\). Fortunately, this kind of complication never arises for us in practice. For example, none of the correspondences listed in Table 4 exhibit this type of behavior.

**Remark 2.7**.: The element 2 of the mANss \(E_{\infty}\)-page detects a single element in homotopy since there are no elements in higher filtration. On the other hand, the element \(h_{0}\) of the mAss \(E_{\infty}\)-page detects infinitely many elements in homotopy, all of which differ by a 2-adic unit factor, because of the presence of \(h_{0}^{k}\) in higher filtration. Consequently, while 2 and \(h_{0}\) are a corresponding pair, they do not detect the same sets of homotopy elements. Rather, the homotopy elements detected by 2 form a subset of the homotopy elements detected by \(h_{0}\). Among the corresponding pairs listed in Table 4, the same phenomenon occurs for \(h_{2}\), \(g\), \(\Delta h_{1}\), and \(4\Delta^{2}\). In all of these cases, the homotopy elements detected by the mANss \(E_{\infty}\)-page element form a subset of the homotopy elements detected by the mAss \(E_{\infty}\)-page element.

Multiplicative structure respects corresponding pairs. The following proposition establishes this principle precisely.

**Proposition 2.8**.: _Let \(a\) and \(b\) be elements of the mANss \(E_{\infty}\)-page, and let \(a^{\prime}\) and \(b^{\prime}\) be elements of the mAss \(E_{\infty}\)-page. If \(a\) corresponds to \(a^{\prime}\), \(b\) corresponds to \(b^{\prime}\), and \(ab\) and \(a^{\prime}b^{\prime}\) are non-zero, then \(ab\) corresponds to \(a^{\prime}b^{\prime}\)._

Proof.: Let \(a\) and \(a^{\prime}\) detect a homotopy element \(\alpha\), and let \(b\) and \(b^{\prime}\) detect a homotopy element \(\beta\). Then \(ab\) and \(a^{\prime}b^{\prime}\) detect the product \(\alpha\beta\).
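A trivial illustration of Proposition 2.8: since \(2\) in the mANss corresponds to \(h_{0}\) in the mAss (Remark 2.7), and both products below are non-zero in the respective \(E_{\infty}\)-pages, we obtain

\[
2\leftrightarrow h_{0}\quad\Longrightarrow\quad 2\cdot 2=4\;\leftrightarrow\;h_{0}\cdot h_{0}=h_{0}^{2}.
\]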
**Remark 2.9**.: The motivic Thom reduction map \(BP\to H\mathbb{F}_{2}\) induces a map from the mANss for \(mmf\) to the mAss for \(mmf\). This map detects some corresponding pairs but not all of them. Namely, it detects the pairs involving \(h_{1}\), \(h_{2}\), and \(g\). These are the elements for which there is no filtration shift between the mANss and the mAss.

### Homotopy elements

Table 3 lists some notation that we use for elements in the homotopy of \(mmf\). We use the same symbols as in [1] for our motivic versions. Beware that some of our homotopy elements may not be exactly compatible under Betti realization with the ones in [1]. We discuss the details of these ambiguities in the following paragraphs. We define elements in homotopy by specifying the elements in the mANss \(E_{\infty}\)-page that detect them. In some cases, it is already easy to see that these detecting elements survive to the \(E_{\infty}\)-page. For example, there are no possible targets for differentials on \(h_{1}\) and \(h_{2}\); nor can they be hit by differentials. Beware that we do not yet know that some of these detecting elements actually survive to the \(E_{\infty}\)-page. This will only become apparent after our analysis of Adams-Novikov differentials. In some cases, there are \(E_{\infty}\)-page elements in higher filtration. When this occurs, the specified element in the \(E_{\infty}\)-page detects more than one element in homotopy. For example, the element \(\tau h_{1}^{3}\) lies in filtration higher than the filtration of \(h_{2}\). Therefore, \(h_{2}\) detects two distinct elements in homotopy. In Table 3, this ambiguity occurs only for \(\nu\), \(\kappa_{4}\), and the elements of the form \(\nu_{k}\). The choice of \(\nu\) is of little practical significance to us. For definiteness, we may use an a priori definition of \(\nu\), as discussed in Remark 2.2. The choices of \(\nu_{k}\) will be discussed later in Definition 5.4.
The choice of \(\kappa_{4}\) is immaterial for our purposes, so it can be an arbitrary generator of \(\pi_{110,56}mmf\).

**Remark 2.10**.: \((20,4,12)\) Bruner and Rognes choose \(\overline{\kappa}\) by reference to the unit map \(S\to tmf\), together with a prior choice of \(\overline{\kappa}\) in \(\pi_{20}S\). For our purposes, we only need that \(\overline{\kappa}\) is detected by \(g\) in the mANss \(E_{\infty}\)-page, so we may choose \(\overline{\kappa}\) to be compatible with the one in [1]. There is a slight complication with \(\overline{\kappa}\). In [11] and [12], the symbol \(\overline{\kappa}\) is used for an element of \(\pi_{20,11}S^{0,0}\) that is detected by \(\tau g\) in the motivic Adams spectral sequence. The point is that \(g\) does not survive the May spectral sequence, so it does not exist in the motivic Adams spectral sequence. Here, we use \(\overline{\kappa}\) for an element of \(\pi_{20,12}mmf\). This element is detected by \(g\) in the Adams spectral sequence for \(mmf\). The unit map \(S^{0,0}\to mmf\) takes \(\overline{\kappa}\) to \(\tau\overline{\kappa}\).

**Remark 2.11**.: Bruner and Rognes refer to the "edge homomorphism" in order to specify certain elements in \(\pi_{*}tmf\). From the perspective of the Adams-Novikov spectral sequence, this edge homomorphism takes a particularly convenient form that can be easily described as a surjection followed by an injection. The surjection takes \(\pi_{*}tmf\) onto its quotient by elements that are detected in strictly positive Adams-Novikov filtration.
\begin{table}
\begin{tabular}{l l l}
\hline
\((s,w)\) & name & detected by \\
\hline
\((1,1)\) & \(\eta\) & \(h_{1}\) \\
\((3,2)\) & \(\nu\) & \(h_{2}\) \\
\((8,5)\) & \(\epsilon\) & \(c\) \\
\((14,8)\) & \(\kappa\) & \(d\) \\
\((20,12)\) & \(\overline{\kappa}\) & \(g\) \\
\((25,13)\) & \(\eta_{1}\) & \(\Delta h_{1}\) \\
\((27,14)\) & \(\nu_{1}\) & \(2\Delta h_{2}\) \\
\((51,26)\) & \(\nu_{2}\) & \(\Delta^{2}h_{2}\) \\
\((96,48)\) & \(D_{4}\) & \(2\Delta^{4}\) \\
\((99,50)\) & \(\nu_{4}\) & \(\Delta^{4}h_{2}\) \\
\((110,56)\) & \(\kappa_{4}\) & \(\Delta^{4}d\) \\
\((123,62)\) & \(\nu_{5}\) & \(2\Delta^{5}h_{2}\) \\
\((147,74)\) & \(\nu_{6}\) & \(\Delta^{6}h_{2}\) \\
\((192,96)\) & \(M\) & \(\Delta^{8}\) \\
\hline
\end{tabular}
\end{table}
Table 3. Some elements of \(\pi_{*,*}mmf\)

In other words, the surjection maps \(\pi_{*}tmf\) onto the Adams-Novikov \(E_{\infty}\)-page in filtration \(0\). Then the injection is the inclusion of the Adams-Novikov \(E_{\infty}\)-page into the Adams-Novikov \(E_{2}\)-page in filtration \(0\). Altogether, the edge homomorphism detects the homotopy elements that are detected in Adams-Novikov filtration \(0\). This description of the edge homomorphism applies equally well in the setting of \(\pi_{*,*}mmf\) and the motivic Adams-Novikov spectral sequence. The edge homomorphism depends on the choice of \(\Delta\) (see Remark 3.9). Beware that our choice of \(\Delta\) does not guarantee that our edge homomorphism is identical to the one discussed in [1]. Consequently, our definitions of the homotopy elements \(D_{4}\) and \(M\) in Table 3 may not be the same as [1, Definition 9.22]. All possible choices of \(\Delta\) differ by multiples of \(2\), so \(\Delta^{k}\) is well-defined up to multiples of \(2^{k}\). Therefore, our choices of \(D_{4}\) and \(M\) agree with the Bruner-Rognes definitions up to multiples of \(16\) and \(256\) respectively.

### \(v_{1}\)-periodicity

Part of the mANss for \(mmf\) reflects \(v_{1}\)-periodic homotopy.
The pattern of differentials in this part is similar to the Adams-Novikov differentials for \(ko\) (see [1, page 31]). We consider this part separately and omit these elements from computations of higher differentials. Beware that we are not employing an intrinsic definition of \(v_{1}\)-periodic homotopy. Rather, we are simply observing some specific structure in the mANss for \(mmf\). In the mANss \(E_{2}\)-page, consider elements of the form \(\tau^{a}h_{1}^{b}P^{m}(4a)^{\epsilon}\Delta^{n}\), where \(\epsilon\) equals \(0\) or \(1\) and \(m+\epsilon>0\). We refer to these elements as the \(v_{1}\)-periodic classes. Note that \(1\) and \(\Delta^{n}\) (as well as their \(\tau\) multiples and \(h_{1}\) multiples) are excluded from this family of elements. The knowledgeable reader may observe that these powers of \(\Delta\) satisfy an intrinsic definition of \(v_{1}\)-periodicity. Our family is constructed for its practical convenience, not for its intrinsic properties. The \(v_{1}\)-periodic elements, as we have defined them, only interact with each other through the Adams-Novikov differentials. However, the powers of \(\Delta\) support Adams-Novikov differentials that take values outside of the \(v_{1}\)-periodic family. Consequently, we consider them in conjunction with the non-\(v_{1}\)-periodic elements. Figures 1 and 2 display the \(v_{1}\)-periodic portions of the mANss \(E_{2}\)-pages and \(E_{\infty}\)-pages respectively. Our other charts exclude the \(v_{1}\)-periodic family.

### The spectrum \(mmf/\tau\)

Consider the cofiber sequence \[\Sigma^{0,-1}mmf\xrightarrow{\tau}mmf\xrightarrow{i}mmf/\tau\xrightarrow{q}\Sigma^{1,-1}mmf \tag{2.12}\] of \(mmf\)-modules. The spectrum \(mmf/\tau\) is a \(2\)-cell \(mmf\)-module, in the sense that it is built from two copies of \(mmf\). We refer to \(i\) as inclusion of the bottom cell, and we refer to \(q\) as projection to the top cell. The mANss for \(mmf/\tau\) has a particularly simple algebraic form.
The \(E_{2}\)-page is isomorphic to the \(E_{2}\)-page of the classical Adams-Novikov spectral sequence for \(tmf\), except that it has a third degree. However, this additional degree carries no extra information since it equals half of the total degree, i.e., the sum of the stem and the Adams-Novikov filtration. Moreover, the mANss for \(mmf/\tau\) collapses. There are no differentials, so the \(E_{\infty}\)-page equals the \(E_{2}\)-page. Even better, there are no possible hidden extensions for degree reasons. Consequently, the homotopy of \(mmf/\tau\) is isomorphic to the classical Adams-Novikov \(E_{2}\)-page for \(tmf\). Therefore, we take the homotopy of \(mmf/\tau\) as given since it is entirely algebraic information. The results discussed in this paragraph are \(tmf\) versions of the results in [11, Section 6.2], which are stated for the sphere spectrum. We use the notation of Table 2 in order to describe homotopy elements in \(\pi_{*,*}mmf/\tau\). On the other hand, we need to be more careful about notation for elements in \(\pi_{*,*}mmf\). We can specify elements in \(\pi_{*,*}mmf\) by giving detecting elements in the mANss \(E_{\infty}\)-page, but this only specifies homotopy elements up to higher filtration. See Section 2.6 for more discussion of choices of elements in \(\pi_{*,*}mmf\). The mAss for \(mmf/\tau\) is isomorphic to the algebraic Novikov spectral sequence, for which we have complete information [1]. This is a \(tmf\) version of the results in [10], which are stated for the sphere spectrum.

### Inclusion and projection

We discuss the inclusion \(i\) and the projection \(q\) from Equation (2.12) in more detail. Many of these ideas first appeared in [14, Chapter 5] in more primitive forms. We already observed that both \(i\) and \(q\) are \(mmf\)-module maps. Note that the inclusion \(i\) is a ring map, but the projection \(q\) is not. They induce maps of motivic Adams-Novikov spectral sequences.
These spectral sequence maps are in fact module maps over the mANss for \(mmf\). Similarly, the induced maps of homotopy groups are \(\pi_{*,*}mmf\)-module maps. We describe the inclusion \(i:mmf\to mmf/\tau\) of the bottom cell in computational terms. If \(\alpha\) is a homotopy element that is not a multiple of \(\tau\), then \(i(\alpha)\) is an element of the mANss \(E_{2}\)-page that detects \(\alpha\). On the other hand, if \(\alpha\) is a multiple of \(\tau\), then \(i(\alpha)\) is zero. This fact is closely related to the observation that the motivic Adams-Novikov spectral sequence is the same as the \(\tau\)-Bockstein spectral sequence. Table 3 gives a number of values of \(i\). For example, we have \(i(\eta)=h_{1}\). In fact, we have defined the elements in the middle column of the table to have the appropriate values under \(i\). For later use, we describe the computational implication that \(q:mmf/\tau\to\Sigma^{1,-1}mmf\) is an \(mmf\)-module map. Let \(\alpha\) be an element of \(\pi_{*,*}mmf\), and let \(x\) be an element of \(\pi_{*,*}mmf/\tau\). The object \(mmf/\tau\) is a right \(mmf\)-module, and \[x\cdot\alpha=x\cdot i(\alpha),\] where the dot on the left side represents the module action and the dot on the right side represents the multiplication of the ring spectrum \(mmf/\tau\). Then we have that \[q(x)\cdot\alpha=q(x\cdot\alpha)=q(x\cdot i(\alpha)), \tag{2.13}\] where the dot on the left represents multiplication in \(mmf\); the dot in the center represents the \(mmf\)-module action on \(mmf/\tau\); and the dot on the right represents multiplication in \(mmf/\tau\). We need a precise statement about the values of \(q\). Our desired statement has essentially the same content as [1, Theorem 9.19(1c)], which we reformulate into a form that is more convenient for us.
**Proposition 2.14**.: _Let \(x\) be an element of the mANss \(E_{2}\)-page that is not divisible by \(\tau\), and suppose that there is a non-zero motivic Adams-Novikov differential \(d_{2r+1}(x)=\tau^{r}y\). If we consider \(x\) as an element of \(\pi_{*,*}mmf/\tau\), then the element \(q(x)\) of \(\pi_{*,*}mmf\) is detected by \(-\tau^{r-1}y\) in the mANss \(E_{\infty}\)-page._

Proof.: The proof is a chase of the right side of the diagram in which the rows are cofiber sequences. We start with the element \(x\) in \(\pi_{*,*}mmf/\tau\) in the bottom row. This element lifts to \(mmf/\tau^{r}\) in the middle row by [1, Theorem 9.19] because \(x\) survives to the \(E_{2r+1}\)-page. The map \(\beta\) is the "Bockstein" mentioned in [1, Theorem 9.19], so we have that \(\beta(x)\) equals \(-y\) in the upper right corner of the diagram. Then \(-y\) lifts to an element of \(\pi_{*,*}mmf\) in the middle row that is detected by \(-y\). Finally, multiply by \(\tau^{r-1}\) to obtain \(q(x)\).

**Remark 2.15**.: Proposition 2.14 requires that \(x\) supports a non-zero Adams-Novikov differential. On the other hand, suppose that \(x\) is a permanent cycle. Then \(x\) is in the image of \(i\), and \(q(x)=0\) since the composition \(qi\) is zero.

### Hidden extensions

We briefly review the notion of hidden extensions in spectral sequences. We adopt the following definition.

**Definition 2.16**.: [16, Definition 4.1.2] Let \(\alpha\) be an element in the target of a multiplicative spectral sequence, and suppose that \(\alpha\) is detected by an element \(a\) in the \(E_{\infty}\)-page of the spectral sequence. A hidden extension by \(\alpha\) is a pair of elements \(b\) and \(c\) of the \(E_{\infty}\)-page such that:

1. the product \(a\cdot b\) equals zero in the \(E_{\infty}\)-page.
2. the element \(b\) detects an element \(\beta\) in the target such that \(c\) detects the product \(\alpha\cdot\beta\).
3.
if there exists an element \(\beta^{\prime}\) of the target that is detected by \(b^{\prime}\) such that \(\alpha\cdot\beta^{\prime}\) is detected by \(c\), then the filtration of \(b^{\prime}\) is less than or equal to the filtration of \(b\).

We will use the projection \(q\) to simplify our analysis of hidden extensions. We shall show that two different products in \(\pi_{*,*}mmf\) are the image of the same element in \(\pi_{*,*}mmf/\tau\). Therefore, they are equal.

**Method 2.17**.: Suppose that \(\alpha\) is not divisible by \(\tau\), so \(i(\alpha)=a\), where \(a\) is an element of the mANss that detects \(\alpha\). Consider a possible hidden \(\alpha\) extension from \(b\) to \(c\) in the mANss for \(mmf\). If \(b\) and \(c\) detect classes \(\beta\) and \(\gamma\) that are annihilated by \(\tau\), then \(\beta\) and \(\gamma\) are in the image of the projection \(q\) to the top cell. Let \(\overline{b}\) and \(\overline{c}\) be their pre-images in \(\pi_{*,*}(mmf/\tau)\). Since this latter object is algebraic and completely known, we can determine whether \(\overline{b}\) and \(\overline{c}\) are related by an extension by mere inspection.

Equation (2.13) shows that \[q(\overline{b}\cdot a)=q(\overline{b}\cdot i(\alpha))=q(\overline{b})\cdot\alpha=\beta\cdot\alpha,\] where the first two dots represent multiplication in \(mmf/\tau\), while the last two dots represent multiplication in \(mmf\). If \(\overline{b}\cdot a\) equals \(\overline{c}\), then \(\beta\cdot\alpha\) equals \(q(\overline{c})=\gamma\), and there is a hidden \(\alpha\) extension from \(b\) to \(c\). On the other hand, if \(\overline{b}\cdot a\) equals zero, then \(\beta\cdot\alpha\) equals zero, and there is not a hidden \(\alpha\) extension from \(b\) to \(c\).

In practice, Method 2.17 is very effective for determining hidden extensions. The main restriction is that it only applies to extensions between classes that are annihilated by \(\tau\).
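Schematically, Method 2.17 amounts to the single computation
\[\beta\cdot\alpha=q(\overline{b})\cdot\alpha=q(\overline{b}\cdot a)=\begin{cases}q(\overline{c})=\gamma&\text{if }\overline{b}\cdot a=\overline{c},\\ 0&\text{if }\overline{b}\cdot a=0,\end{cases}\]
carried out in the completely known object \(\pi_{*,*}mmf/\tau\).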
**Example 2.18**.: \((54,2,28)\) We illustrate Method 2.17 with a concrete example of the hidden 2 extension from \(\Delta^{2}h_{2}^{2}\) to \(\tau^{4}dg^{2}\) in the 54-stem. In this example, we assume some knowledge of the relevant Adams-Novikov differentials (see Section 3). Consequently, one should view this example as a deduction of a hidden extension from previously determined differentials. First, multiply by \(\tau g\). If we establish a hidden 2 extension from \(\tau\Delta^{2}h_{2}^{2}g\) to \(\tau^{5}dg^{3}\) in the 74-stem, then we can immediately conclude the desired extension in the 54-stem. This step already requires motivic technology, since both \(\Delta^{2}h_{2}^{2}g\) and \(dg^{3}\) are hit by classical Adams-Novikov differentials. The key point is that the two elements under consideration in the 74-stem are non-zero but annihilated by \(\tau\). They are annihilated by \(\tau\) because of the differentials \(d_{5}(\Delta^{3}h_{2})=\tau^{2}\Delta^{2}h_{2}^{2}g\) and \(d_{13}(2\Delta^{3}h_{2})=\tau^{6}dg^{3}\), to be proved later in Propositions 3.8 and 3.16. The elements \(\tau\Delta^{2}h_{2}^{2}g\) and \(\tau^{5}dg^{3}\) represent classes in \(\pi_{74,39}mmf\) that are annihilated by \(\tau\). Therefore, these elements lie in the image of \(q:\pi_{75,38}mmf/\tau\to\pi_{74,39}mmf\). By Proposition 2.14, the preimages in \(\pi_{75,38}mmf/\tau\) are \(\Delta^{3}h_{2}\) and \(2\Delta^{3}h_{2}\) respectively. These two elements are connected by a 2 extension. Therefore, their images under \(q\) are also connected by a 2 extension. ### Toda brackets For background on Massey products and Toda brackets, including statements of the May convergence theorem and the Moss convergence theorem, we refer readers to [16], [17], [18] and also [19], [1]. Massey products in the \(E_{2}\)-page of an Adams or Adams-Novikov spectral sequence are algebraic information since they are part of the structure of Ext groups. 
Some Toda brackets in homotopy can be deduced directly from these Massey products using the Moss convergence theorem. In order to apply this theorem, one must establish the absence of crossing differentials. Whenever we apply the Moss convergence theorem, there will be no possible crossing differentials. In other words, the crossing differentials condition is satisfied for algebraic reasons. Thus, the Toda brackets that we use are algebraic in the sense that they can be deduced directly from the algebraic structure of Ext.

**Remark 2.19**.: In general, Massey products and Toda brackets are defined as sets, not elements. An equality of the form \(\langle\alpha,\beta,\gamma\rangle=\delta\) means that 1. \(\delta\) is contained in the bracket; 2. the bracket has zero indeterminacy.

The following lemma gives an explicit example of an algebraic deduction of a Toda bracket. See Table 3 for an explanation of the notation.

**Lemma 2.20**.: \((8,3,5)\) _The Toda bracket \(\langle\nu,\eta,\nu\rangle\) in \(\pi_{8,5}mmf\) is detected by \(c\) and has no indeterminacy._

Proof.: The proof follows several steps: 1. Establish the Massey product \(c=\langle h_{2},h_{1},h_{2}\rangle\) in the \(E_{2}\)-page of the mANss. 2. Check that there are no crossing differentials. 3. Check that the Toda bracket \(\langle\nu,\eta,\nu\rangle\) is well-defined and that it has no indeterminacy. 4. Apply the Moss convergence theorem to the Massey product and deduce the desired Toda bracket.

For step (1), we check the following statements: 1. The Massey product is well-defined because of the relation \(h_{1}h_{2}=0\) in the \(E_{2}\)-page of the mANss for \(mmf\) (see Figure 3). 2. The element \(c\) is contained in the Massey product [1, Equation (7.3)] [1]. 3. The indeterminacy is trivial by inspection. In more detail, the indeterminacy equals \(h_{2}\cdot E_{2}^{5,1,3}\). The only non-zero element of \(E_{2}^{5,1,3}\) is \(h_{1}v_{1}^{2}\), and \(h_{2}\cdot h_{1}v_{1}^{2}=0\).
This last relation holds already in the \(E_{2}\)-page of the motivic algebraic Novikov spectral sequence [1].

For step (2), we need to check for crossing differentials for the relation \(h_{1}h_{2}\) in degree \((4,2,3)\). We are looking for non-zero Adams-Novikov differentials in degrees \((5,f,3)\), where \(f<1\). There are no possible sources for such differentials (see Figure 3).

For step (3), we check that the Toda bracket is well-defined because \(\eta\nu\) is zero in \(\pi_{4,3}mmf\) for degree reasons. The indeterminacy equals \(\nu\cdot\pi_{5,3}mmf\), which is zero for degree reasons.

For step (4), we apply the Moss convergence theorem. The theorem implies that there exists an element in \(\langle h_{2},h_{1},h_{2}\rangle\) that is a permanent cycle and that detects an element in \(\langle\nu,\eta,\nu\rangle\). Since neither the Massey product nor the Toda bracket has any indeterminacy, the permanent cycle must be \(c\).

## 3. Differentials

In this section, we compute all differentials in the mANss for \(mmf\), proving hidden extensions and Toda brackets only as needed along the way. Our results are presented in logical order, so each proof only depends on earlier results. We return to a more exhaustive study of hidden extensions later in Section 4.

**Theorem 3.1**.: _Table 6 lists all of the non-zero differentials on all of the indecomposable elements of each mANss \(E_{r}\)-page._

Proof.: The differentials are proved in the various propositions later in this section. The last column of Table 6 indicates the specific proposition that proves each differential. Some indecomposables do not support differentials. In most cases, this follows for degree reasons, i.e., because there are no possible targets. Proposition 3.30 handles two slightly more difficult cases. All differentials follow from straightforward applications of the Leibniz rule to the ones listed in Table 6.
### \(d_{3}\) differentials **Proposition 3.2**.: \((5,1,3)\)_\(d_{3}(h_{1}v_{1}^{2})=\tau h_{1}^{4}.\)_ Proof.: In the mAss \(E_{2}\)-page, \(h_{1}^{4}\) is a non-zero element that is annihilated by \(\tau\). By inspection, \(h_{1}^{4}\) corresponds to the element of the same name in the mANss. Therefore, \(\tau h_{1}^{4}\) must be hit by an Adams-Novikov differential, and there is only one possibility. **Proposition 3.3**.: \((12,0,6)\)_\(d_{3}(4a)=\tau Ph_{1}^{3}.\)_ Proof.: For degree reasons, \(d_{3}(P)=0\). Thus Proposition 3.2 implies that \(d_{3}(P\cdot h_{1}v_{1}^{2})=\tau Ph_{1}^{4}.\) We have the relation \(P\cdot h_{1}v_{1}^{2}=h_{1}\cdot 4a\) in the Adams-Novikov \(E_{2}\)-page. Note that this relation arises from a hidden \(h_{1}\) extension from \(h_{0}^{2}a\) to \(\overline{Ph_{1}^{4}}\) in the algebraic Novikov spectral sequence [1]. Therefore, \(4a\) must also support a \(d_{3}\) differential, and there is only one possibility. The Leibniz rule, combined with Proposition 3.2 and Proposition 3.3, implies some additional \(d_{3}\) differentials. By inspection, the other multiplicative generators do not support \(d_{3}\) differentials. **Remark 3.4**.: All of the \(d_{3}\) differentials are \(h_{1}\)-periodic, in the sense that they can be computed in the localization of the mANss \(E_{2}\)-page in which \(h_{1}\) is inverted. This localized spectral sequence computes the homotopy of the \(\eta\)-periodic spectrum \(mmf[\eta^{-1}]\). See [1, Section 6.1] for a related discussion. ### Corresponding pairs Earlier in Section 2.5, we discussed the notion of elements from the mANss and from the mAss that correspond. Having computed the \(d_{3}\) differentials, we are now in a position to establish a number of corresponding pairs that will be used in later arguments. **Theorem 3.5**.: _Table 4 lists some pairs of elements that correspond._ Proof.: We discuss the correspondence between \(2\Delta h_{2}\) and \(an\) in detail. 
Most of the other corresponding pairs are established with essentially the same argument. Some slightly more difficult cases are established later in Lemmas 3.10 and 3.34.

For degree reasons, the element \(2\Delta h_{2}\) of the mANss for \(mmf\) cannot support an Adams-Novikov differential, nor can it be hit by an Adams-Novikov differential. (Beware that \(\Delta h_{2}\) does support a differential.) Therefore, \(2\Delta h_{2}\) detects some element \(\alpha\) in \(\pi_{27,14}mmf\). The inclusion \(i:mmf\to mmf/\tau\) induces a map (3.6) of motivic Adams spectral sequences. The spectral sequence on the right is identified with the algebraic Novikov spectral sequence that converges to the classical Adams-Novikov \(E_{2}\)-page for \(tmf\) [12]. The element \(\alpha\) in the lower left corner maps to \(2\Delta h_{2}\) in the lower right corner. This latter element is detected by \(an\) in filtration \(6\) in the upper right corner [1]. Therefore, \(\alpha\) is detected in the upper left corner in filtration at most \(6\). The only possible value is \(an\).

**Remark 3.7**.: Previous knowledge of the \(d_{3}\) differentials is required in order to conclude that \(2\Delta h_{2}\) (and other elements as well) does not support an Adams-Novikov differential. For example, it is conceivable that \(d_{25}(2\Delta h_{2})=\tau^{12}h_{1}^{26}\). However, we already know that \(\tau^{12}h_{1}^{26}\) is hit by the differential \(d_{3}(\tau^{11}h_{1}^{22}\cdot h_{1}v_{1}^{2})\).

### \(d_{5}\) differentials

Having determined all \(d_{3}\) differentials, one can mechanically compute the \(E_{4}\)-page. Through the \(22\)-stem, no additional differentials are possible for degree reasons, so the \(E_{4}\)-page equals the \(E_{\infty}\)-page in that range.
**Proposition 3.8**.: \((24,0,12)\) _There exists a generator \(\Delta\) of the mANss \(E_{2}\)-page in degree \((24,0,12)\) such that \(d_{5}(\Delta)=\tau^{2}h_{2}g\)._

Proof.: The mAss element \(h_{2}g\) is annihilated by \(\tau^{2}\) in the \(E_{2}\)-page. Moreover, \(\tau h_{2}g\) does not support a hidden \(\tau\) extension in the mAss because of the presence of \(\overline{\tau h_{2}g}\) in the homotopy of \(mmf/\tau\). More precisely, projection to the top cell takes \(\overline{\tau h_{2}g}\) to \(\tau h_{2}g\), so \(\tau h_{2}g\) must detect homotopy elements that are annihilated by \(\tau\). The mANss element \(h_{2}g\) corresponds to the mAss element \(h_{2}g\) because of Table 4 and Proposition 2.8. Therefore, \(\tau^{2}h_{2}g\) must be hit by an Adams-Novikov differential. The only possibility is a \(d_{5}\) differential whose source is in degree \((24,0,12)\). Since \(\tau^{2}h_{2}g\) is not a multiple of \(2\), the source of the differential must be a generator.

\begin{table}
\begin{tabular}{l l l l} \hline \hline mANss degree & mANss element & mAss element & mAss degree \\ \hline \((0,0,0)\) & \(2\) & \(h_{0}\) & \((0,1,0)\) \\ \((1,1,1)\) & \(h_{1}\) & \(h_{1}\) & \((1,1,1)\) \\ \((3,1,2)\) & \(h_{2}\) & \(h_{2}\) & \((3,1,2)\) \\ \((14,2,8)\) & \(d\) & \(d\) & \((14,4,8)\) \\ \((20,4,12)\) & \(g\) & \(g\) & \((20,4,12)\) \\ \((25,1,13)\) & \(\Delta h_{1}\) & \(\Delta h_{1}\) & \((25,5,13)\) \\ \((27,1,14)\) & \(2\Delta h_{2}\) & \(an\) & \((27,6,14)\) \\ \((48,0,24)\) & \(4\Delta^{2}\) & \(\Delta^{2}h_{0}^{2}\) & \((48,10,24)\) \\ \((110,2,56)\) & \(\Delta^{4}d\) & \(\Delta^{4}d\) & \((110,20,56)\) \\ \hline \hline \end{tabular}
\end{table}
Table 4. Some corresponding elements in the motivic Adams and motivic Adams-Novikov spectral sequences

**Remark 3.9**.: \((24,0,12)\) Proposition 3.8 does not uniquely specify \(\Delta\). Since \(4\tau^{2}h_{2}g\) is zero in the mANss \(E_{2}\)-page, \(\Delta\) is only well-defined up to multiples of \(4\).
Later in Remark 5.8 we will make a further refinement in the definition of \(\Delta\). Also note that the choice of \(\Delta\) depends on a previous choice of \(h_{2}\), as in Remark 2.2.

The Leibniz rule, together with Proposition 3.8, implies additional \(d_{5}\) differentials. The other multiplicative generators of the \(E_{5}\)-page do not support differentials. Of particular note is the differential \[d_{5}(\Delta^{2})=2\Delta d_{5}(\Delta)=2\tau^{2}\Delta h_{2}g.\] This easy computation is an Adams-Novikov version of Bruner's theorem on the interaction between Adams differentials and algebraic squaring operations [1][1]. However, its corresponding Adams differential \(d_{2}(\Delta^{2})=\tau^{2}ang\) is not as easy to obtain by direct analysis of the Adams spectral sequence [1]. The difficulty is that \(\Delta^{2}\) is not the value of an algebraic squaring operation since \(\Delta\) is not present in the Adams \(E_{2}\)-page. By "postponing" the differential that hits \(\tau^{2}h_{2}g\) from algebra to topology, we obtain an easier argument for the differential on \(\Delta^{2}\).

**Lemma 3.10**.: \((48,0,24)\) _The element \(4\Delta^{2}\) of the mANss for \(mmf\) corresponds to \(\Delta^{2}h_{0}^{2}\) in the mAss for \(mmf\)._

Proof.: Having established that \(d_{5}(\Delta^{2})=2\tau^{2}\Delta h_{2}g\) as a consequence of the Leibniz rule and Proposition 3.8, we conclude that \(4\Delta^{2}\) does not support an Adams-Novikov differential for degree reasons. (Beware that \(2\Delta^{2}\) does support a differential, but we do not need to know that already.) Note that \(4\Delta^{2}\) is detected in the algebraic Novikov spectral sequence by \(\Delta^{2}h_{0}^{2}\), which has filtration 10. Using the argument in the proof of Theorem 3.5, we conclude that \(4\Delta^{2}\) corresponds to an element in the mAss with filtration at most 10. A priori, there are three possibilities: \(\Delta^{2}\), \(\Delta^{2}h_{0}\), and \(\Delta^{2}h_{0}^{2}\).
The top horizontal map of Diagram (3.6) takes \(\Delta^{2}\) and \(\Delta^{2}h_{0}\) to elements of the same name. These elements detect \(\Delta^{2}\) and \(2\Delta^{2}\) in the Adams-Novikov \(E_{2}\)-page. This means that \(4\Delta^{2}\) cannot correspond to \(\Delta^{2}\) or \(\Delta^{2}h_{0}\).

### \(d_{7}\) differentials

The main goal of this section is to establish some \(d_{7}\) differentials in Proposition 3.14 and Proposition 3.21. In order to obtain these differentials, we will need some hidden extensions and some later differentials. We establish these other results first, in order to preserve strict logical order.

**Lemma 3.11**.: \((3,1,2)\) _There is a hidden \(2\) extension from \(2h_{2}\) to \(\tau h_{1}^{3}\)._

Proof.: According to Table 4 and Proposition 2.8, the mANss element \(2h_{2}\) corresponds to the mAss element \(h_{0}h_{2}\). The element \(h_{0}h_{2}\) supports an \(h_{0}\) extension in the mAss \(E_{2}\)-page, so \(2h_{2}\) must support a \(2\) extension in the mANss. There is only one possible target for this extension.

**Remark 3.12**.: The hidden extension of Lemma 3.11 is the first in an infinite family of similar hidden extensions from the elements \(2h_{2}g^{k}\) to the elements \(\tau h_{1}^{3}g^{k}\). For \(k\geq 1\), these extensions are "exotic" in the sense that they do not occur classically, since both \(2h_{2}g^{k}\) and \(h_{1}^{3}g^{k}\) are the targets of classical Adams-Novikov differentials.

**Lemma 3.13**.: \((27,1,14)\) _There is a hidden \(2\) extension from \(2\Delta h_{2}\) to \(\tau\Delta h_{1}^{3}\)._

Proof.: We already observed in Table 4 that \(2\Delta h_{2}\) and \(\Delta h_{1}\cdot h_{1}^{2}\) correspond to \(an\) and \(\Delta h_{1}^{3}\) in the mAss. In the mAss \(E_{2}\)-page, we have the relation \(h_{0}\cdot an=\tau\Delta h_{1}^{3}\). Therefore, there must be a hidden 2 extension between the corresponding Adams-Novikov elements.

**Proposition 3.14**.: 1.
\((24,0,12)\) \(d_{7}(4\Delta)=\tau^{3}h_{1}^{3}g\). 2. \((48,0,24)\) \(d_{7}(2\Delta^{2})=\tau^{3}\Delta h_{1}^{3}g\).

Proof.: Proposition 3.8 says that \(\tau^{2}h_{2}g\) is hit by an Adams-Novikov differential, so \(2\tau^{2}h_{2}g\) is also hit by an Adams-Novikov differential. Remark 3.12 says that there is a hidden 2 extension from \(2h_{2}g\) to \(\tau h_{1}^{3}g\). Therefore, \(\tau^{3}h_{1}^{3}g\) is hit by a differential, and there is just one possible source for this differential. The proof for the second differential is essentially the same. We need a hidden 2 extension from \(2\Delta h_{2}g\) to \(\tau\Delta h_{1}^{3}g\), which follows from Lemma 3.13 and multiplication by \(g\).

**Remark 3.15**.: Proposition 3.8 and Proposition 3.14 show that both \(2\tau h_{2}g^{k}\) and \(\tau^{2}h_{1}^{3}g^{k}\) are annihilated by \(\tau\). In hindsight, we can see that the hidden 2 extensions connecting them are examples of Method 2.17. Their pre-images in \(mmf/\tau\) are \(2\Delta g^{k-1}\) and \(4\Delta g^{k-1}\), which are related by 2 extensions. However, beware that we needed the hidden 2 extension from \(2h_{2}\) to \(\tau h_{1}^{3}\) in order to establish the differential \(d_{7}(4\Delta)\). An independent proof of Lemma 3.11 is necessary in order to avoid a circular argument.

Before finishing the analysis of the \(d_{7}\) differential in Proposition 3.21, we deduce some higher differentials.

**Proposition 3.16**.: \((75,1,38)\) \(d_{13}(2\Delta^{3}h_{2})=\tau^{6}dg^{3}\).

Proof.: We have the relation \(ang\cdot an=\tau^{4}dg^{3}\) in the mAss \(E_{2}\)-page because of the relations \(a^{2}n=\tau d\cdot\Delta h_{1}\) and \(\Delta h_{1}\cdot n=\tau^{3}g^{2}\) [18, Theorem 4.13]. According to Table 4 and Proposition 2.8, the mANss elements \(2\Delta h_{2}g\), \(2\Delta h_{2}\), \(d\), and \(g\) correspond to the mAss elements \(ang\), \(an\), \(d\), and \(g\).
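For the reader's convenience, the relation \(ang\cdot an=\tau^{4}dg^{3}\) unwinds as
\[ang\cdot an=a^{2}n\cdot ng=\tau d\cdot\Delta h_{1}\cdot ng=\tau d\cdot\tau^{3}g^{2}\cdot g=\tau^{4}dg^{3},\]
using the two cited relations in turn.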
This means that there is a hidden \(2\Delta h_{2}\) extension from \(2\Delta h_{2}g\) to \(\tau^{4}dg^{3}\) in the mANss. Using the Leibniz rule and Proposition 3.8, we already know that \(2\tau^{2}\Delta h_{2}g\) is hit by the differential \(d_{5}(\Delta^{2})\). Therefore, \(\tau^{6}dg^{3}\) must also be hit by a differential. There are two possibilities for this differential: \(d_{11}(\tau\Delta^{3}h_{1}^{3})\) and \(d_{13}(2\Delta^{3}h_{2})\). However, \(\tau\Delta^{3}h_{1}^{3}\) is a product \(\tau(\Delta h_{1})^{3}\) of permanent cycles, so it cannot support a differential.

**Remark 3.17**.: The proof of Proposition 3.16 contains an example of Method 2.17. There is a hidden \(2\Delta h_{2}\) extension from \(2\tau\Delta h_{2}g\) to \(\tau^{5}dg^{3}\). Both of these elements are annihilated by \(\tau\). Their pre-images under projection to the top cell of \(mmf/\tau\) are \(\Delta^{2}\) and \(2\Delta^{3}h_{2}\) respectively, which are related by a \(2\Delta h_{2}\) extension.

**Proposition 3.18**.: \((56,2,29)\) \(d_{9}(\Delta^{2}c)=\tau^{4}h_{1}dg^{2}\).

Proof.: Recall from Example 2.18 that there is a hidden 2 extension from \(\Delta^{2}h_{2}^{2}\) to \(\tau^{4}dg^{2}\). The argument for this hidden extension uses Proposition 3.8 and Proposition 3.16. Therefore, \(\tau^{4}h_{1}dg^{2}\) must be hit by a differential because \(2h_{1}=0\). There is only one possible differential.

**Proposition 3.19**.: _In the mAss for \(mmf\), we have the Adams differentials:_ 1. \((48,8,24)\) \(d_{2}(\Delta^{2})=\tau^{2}ang\). 2. \((96,16,48)\) \(d_{3}(\Delta^{4})=\tau^{8}ng^{4}\).

Proof.: We start with the Adams-Novikov differential \(d_{5}(\Delta^{2})=2\tau^{2}\Delta h_{2}g\). We know from Table 4 and Proposition 2.8 that \(2\Delta h_{2}g\) corresponds to the element \(ang\) in the mAss. Therefore, \(\tau^{2}ang\) must be hit by some Adams differential, and the only possibility is that \(d_{2}(\Delta^{2})\) equals \(\tau^{2}ang\).
Next, we apply Bruner's theorem on the interaction between Adams differentials and algebraic squaring operations. We refer to [1, Theorem 5.6] for a precise readable statement, although [1], [1] and [14] are preceding references. We apply Bruner's theorem with \(x=\Delta^{2}\), \(r=2\), and \(i=8\); so \(s=8\), \(t=56\), \(v=v(48)=1\), and \(\overline{a}=h_{0}\). We obtain that \[d_{*}\operatorname{Sq}^{8}(\Delta^{2})=\operatorname{Sq}^{9}d_{2}(\Delta^{2})+h_{0}\cdot\Delta^{2}\cdot d_{2}(\Delta^{2})=\operatorname{Sq}^{9}(\tau^{2}ang)+h_{0}\cdot\Delta^{2}\cdot\tau^{2}ang=\operatorname{Sq}^{9}(\tau^{2}ang).\] Next, we compute that \(\operatorname{Sq}^{9}(\tau^{2}ang)=\tau^{4}\cdot\tau\Delta h_{1}\cdot n^{2}\cdot g^{2}\), using the Cartan formula for algebraic squaring operations, as well as the formulas \(\operatorname{Sq}^{2}(a)=\tau\Delta h_{1}\), \(\operatorname{Sq}^{3}(n)=n^{2}\), and \(\operatorname{Sq}^{4}(g)=g^{2}\) [1, Theorem 1.20]. Finally, apply the relation \(\Delta h_{1}\cdot n=\tau^{3}g^{2}\) to obtain the Adams differential \(d_{3}(\Delta^{4})=\tau^{8}ng^{4}\).

**Remark 3.20**.: The careful reader may object to our use of a motivic version of Bruner's theorem in the proof of Proposition 3.19, while only the classical version of the theorem has a published proof. In fact, this concern is irrelevant here. One can use the classical Bruner's theorem to establish the classical Adams \(d_{3}\) differential and then deduce the motivic version of the differential.

**Proposition 3.21**.: \((96,0,48)\) \(d_{7}(\Delta^{4})=\tau^{3}\Delta^{3}h_{1}^{3}g\).

Proof.: Table 4 shows that the mANss element \(4\Delta^{2}\) corresponds to the mAss element \(\Delta^{2}h_{0}^{2}\). Therefore, Proposition 2.8 shows that the mANss element \(16\Delta^{4}\) corresponds to the mAss element \(\Delta^{4}h_{0}^{4}\). Proposition 3.19 shows that \(\Delta^{4}\) does not survive the mAss.
Therefore, \(\Delta^{4}h_{0}^{4}\) does not detect homotopy elements that are divisible by \(16\). Consequently, the corresponding element \(16\Delta^{4}\) in the mANss does not detect homotopy elements that are divisible by \(16\). This means that \(\Delta^{4}\) must support an Adams-Novikov differential. There are two possible values for this differential: \(\tau^{3}\Delta^{3}h_{1}^{3}g\) and \(\tau^{9}h_{1}dg^{4}\). However, Proposition 3.18 shows that the latter element is already hit by the differential \(d_{9}(\tau^{5}\Delta^{2}cg^{2})=\tau^{9}h_{1}dg^{4}\). ### \(d_{9}\) differentials At this point, we have determined all differentials \(d_{r}\) for \(r\leq 7\). It remains to study higher differentials, although some higher differentials have already been determined in earlier propositions. We continue to proceed roughly in order of increasing values of \(r\), although we occasionally need some Toda brackets, hidden extensions, and later differentials to preserve strict logical order. **Proposition 3.22**.: \((171,1,86)\)_\(d_{13}(2\Delta^{7}h_{2})=\tau^{6}\Delta^{4}dg^{3}\)._ Proof.: The argument is nearly identical to the proof of Proposition 3.16. The mAss \(E_{2}\)-page relation \(\Delta^{4}ang\cdot an=\tau^{4}\Delta^{4}dg^{3}\) implies that there is a hidden \(2\Delta h_{2}\) extension from \(2\Delta^{5}h_{2}g\) to \(\tau^{4}\Delta^{4}dg^{3}\) in the mANss. We already know that \(2\tau^{2}\Delta^{5}h_{2}g\) is hit by the differential \(d_{5}(\Delta^{6})\). Therefore, \(\tau^{6}\Delta^{4}dg^{3}\) must also be hit by a differential. There are two possibilities for this differential: \(d_{11}(\tau\Delta^{7}h_{1}^{3})\) and \(d_{13}(2\Delta^{7}h_{2})\). The former possibility is ruled out by the decomposition \(\tau\Delta^{6}h_{1}^{2}\cdot\Delta h_{1}\) and the observation that both \(\Delta^{6}h_{1}^{2}\) and \(\Delta h_{1}\) survive past the \(E_{11}\)-page for degree reasons. 
**Lemma 3.23**.: \((150,2,76)\) _There is a hidden \(2\) extension from \(\Delta^{6}h_{2}^{2}\) to \(\tau^{4}\Delta^{4}dg^{2}\)._

Proof.: The proof is similar to the argument in Example 2.18. We already know the differentials \(d_{5}(\Delta^{7}h_{2})=\tau^{2}\Delta^{6}h_{2}^{2}g\) and \(d_{13}(2\Delta^{7}h_{2})=\tau^{6}\Delta^{4}dg^{3}\) from Propositions 3.8 and 3.22. Therefore, projection to the top cell detects a hidden 2 extension from \(\tau\Delta^{6}h_{2}^{2}g\) to \(\tau^{5}\Delta^{4}dg^{3}\). Finally, use \(\tau g\) multiplication to deduce the hidden 2 extension on \(\Delta^{6}h_{2}^{2}\).

**Proposition 3.24**.: 1. \((80,2,41)\) \(d_{9}(\Delta^{3}c)=\tau^{4}\Delta h_{1}dg^{2}\). 2. \((176,2,89)\) \(d_{9}(\Delta^{7}c)=\tau^{4}\Delta^{5}h_{1}dg^{2}\).

Proof.: We saw in Example 2.18 that \(\tau^{4}dg^{2}\) detects a multiple of 2. Therefore, \(\Delta h_{1}\cdot\tau^{4}dg^{2}\) must detect zero since \(\Delta h_{1}\) does not support a 2 extension for degree reasons. Therefore, \(\tau^{4}\Delta h_{1}dg^{2}\) must be hit by a differential, and there is only one possibility.

The argument for the second differential is nearly identical. Lemma 3.23 shows that the element \(\tau^{4}\Delta^{4}dg^{2}\) detects a multiple of 2. Therefore, \(\Delta h_{1}\cdot\tau^{4}\Delta^{4}dg^{2}\) must detect zero, and there is only one differential that can hit it.

**Proposition 3.25**.: \((152,2,77)\) \(d_{9}(\Delta^{6}c)=\tau^{4}\Delta^{4}h_{1}dg^{2}\).

Proof.: The argument is similar to the proof of Proposition 3.18. Lemma 3.23 shows that \(\tau^{4}\Delta^{4}dg^{2}\) detects a multiple of 2. Therefore, \(\tau^{4}\Delta^{4}h_{1}dg^{2}\) must be hit by a differential because \(2h_{1}=0\). There is only one possible differential.
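In the notation of Proposition 2.14, the two differentials used in the proof of Lemma 3.23 give, up to sign, that \(q(\Delta^{7}h_{2})\) is detected by \(\tau\Delta^{6}h_{2}^{2}g\) and \(q(2\Delta^{7}h_{2})\) is detected by \(\tau^{5}\Delta^{4}dg^{3}\):
\[d_{5}(\Delta^{7}h_{2})=\tau^{2}\Delta^{6}h_{2}^{2}g\ (r=2),\qquad d_{13}(2\Delta^{7}h_{2})=\tau^{6}\Delta^{4}dg^{3}\ (r=6).\]
Thus the evident 2 extension from \(\Delta^{7}h_{2}\) to \(2\Delta^{7}h_{2}\) in \(\pi_{*,*}mmf/\tau\) projects to the hidden 2 extension from \(\tau\Delta^{6}h_{2}^{2}g\) to \(\tau^{5}\Delta^{4}dg^{3}\).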
**Lemma 3.26**.: \((25,1,13)\) _The Toda bracket \(\langle\eta,\nu,\tau^{2}\overline{\kappa}\rangle\) is detected by \(\Delta h_{1}\) and has indeterminacy detected by \(P^{3}h_{1}\)._

Proof.: By inspection, the Toda bracket is well-defined and has indeterminacy detected by \(P^{3}h_{1}\) (which is a \(v_{1}\)-periodic element). We use the Moss convergence theorem in the mAss for \(mmf\). By [11, Definition 4.4(1)], we have the Massey product \(\Delta h_{1}=\langle h_{1},h_{2},\tau^{2}g\rangle\) in the \(E_{2}\)-page of the mAss for \(mmf\). There are no possible crossing differentials in the mAss for \(mmf\). Finally, Table 4 implies that the mAss elements \(h_{1}\), \(h_{2}\), and \(\tau^{2}g\) detect \(\eta\), \(\nu\), and \(\tau^{2}\overline{\kappa}\) respectively (see also Table 3).

**Lemma 3.27**.: \((25,1,13)\) _There is a hidden \(\nu\) extension from \(\Delta h_{1}\) to \(\tau^{2}cg\)._

Proof.: Lemmas 2.20 and 3.26 show that the Toda brackets \(\langle\nu,\eta,\nu\rangle\) and \(\langle\eta,\nu,\tau^{2}\overline{\kappa}\rangle\) are detected by \(c\) and \(\Delta h_{1}\) respectively. The hidden \(\nu\) extension follows from the shuffling relation \[\nu\langle\eta,\nu,\tau^{2}\overline{\kappa}\rangle=\langle\nu,\eta,\nu\rangle\tau^{2}\overline{\kappa}.\]

**Lemma 3.28**.: \((27,1,14)\) _There is a hidden \(\eta\) extension from \(2\Delta h_{2}\) to \(\tau^{2}cg\)._

Proof.: As in the proof of Lemma 3.27, the element \(\tau^{2}cg\) detects \(\langle\eta,\nu,\tau^{2}\overline{\kappa}\rangle\nu\), which equals \(\eta\langle\nu,\tau^{2}\overline{\kappa},\nu\rangle\). Therefore, \(\tau^{2}cg\) is the target of a hidden \(\eta\) extension. There are two possible sources for such an extension: \(\tau\Delta h_{1}^{3}\) and \(2\Delta h_{2}\). The former possibility is ruled out by Lemma 3.13, which shows that \(\tau\Delta h_{1}^{3}\) is the target of a hidden 2 extension.
**Proposition 3.29**.: 1. \((49,1,25)\) \(d_{9}(\Delta^{2}h_{1})=\tau^{4}cg^{2}\). 2. \((73,1,37)\) \(d_{9}(\Delta^{3}h_{1})=\tau^{4}\Delta cg^{2}\). 3. \((145,1,73)\) \(d_{9}(\Delta^{6}h_{1})=\tau^{4}\Delta^{4}cg^{2}\). 4. \((169,1,85)\) \(d_{9}(\Delta^{7}h_{1})=\tau^{4}\Delta^{5}cg^{2}\).

Proof.: It follows from Lemma 3.28 that there is a hidden \(\eta\) extension from \(2\Delta h_{2}g\) to \(\tau^{2}cg^{2}\). Proposition 3.8 and the Leibniz rule imply that \(d_{5}(\Delta^{2})=2\tau^{2}\Delta h_{2}g\). Therefore, \(\tau^{4}cg^{2}\) must be hit by some differential, and there is only one possibility.

Having established the first differential, we can compute that \[d_{9}(\Delta^{3}h_{1}^{2})=\Delta h_{1}\cdot d_{9}(\Delta^{2}h_{1})=\tau^{4}\Delta h_{1}cg^{2}.\] Since \(\Delta^{3}h_{1}^{2}=\Delta^{3}h_{1}\cdot h_{1}\), it follows that \(d_{9}(\Delta^{3}h_{1})\) equals \(\tau^{4}\Delta cg^{2}\). Similarly, \[d_{9}(\Delta^{7}h_{1}^{2})=\Delta^{5}h_{1}\cdot d_{9}(\Delta^{2}h_{1})=\tau^{4}\Delta^{5}h_{1}cg^{2},\] from which it follows that \(d_{9}(\Delta^{7}h_{1})\) equals \(\tau^{4}\Delta^{5}cg^{2}\). However, we need to observe that \(d_{9}(\Delta^{5}h_{1})\) is zero. The only possible non-zero value for \(d_{9}(\Delta^{5}h_{1})\) is \(\tau^{4}\Delta^{3}cg^{2}\), but this is ruled out by the observation that \(\tau^{4}\Delta^{3}cg^{2}\) supports a \(d_{9}\) differential by Proposition 3.24.

Finally, note that \(d_{9}(\Delta^{7}h_{1}^{2})=\Delta h_{1}\cdot d_{9}(\Delta^{6}h_{1})\). The value of \(d_{9}(\Delta^{7}h_{1}^{2})\) was computed in the previous paragraph. It follows that \(d_{9}(\Delta^{6}h_{1})\) equals \(\tau^{4}\Delta^{4}cg^{2}\).

**Proposition 3.30**.: 1. \(d_{9}(\Delta^{4}c)=0\). 2. \(d_{9}(\Delta^{5}c)=0\).

Proof.: It follows from Proposition 3.29 that \(\tau^{4}\Delta^{4}cg^{2}\) and \(\tau^{4}\Delta^{5}cg^{2}\) are targets of \(d_{9}\) differentials, so they cannot support \(d_{9}\) differentials.
This implies that \(\Delta^{4}c\) and \(\Delta^{5}c\) cannot support \(d_{9}\) differentials.

The Leibniz rule, together with the differentials given in Propositions 3.24, 3.25, 3.29, and 3.30, determines all \(d_{9}\) differentials.

### \(d_{11}\) differentials

**Lemma 3.31**.: \((14,2,8)\) _There is a hidden \(\epsilon\) extension from \(d\) to \(\tau h_{1}^{2}g\)._

Proof.: We will show that there is a hidden \(\epsilon\) extension from \(h_{1}d\) to \(\tau h_{1}^{3}g\). The desired extension follows immediately. The relation \(h_{1}c=h_{2}^{3}\) in the mANss \(E_{2}\)-page implies that \(\eta\epsilon\) equals \(\nu^{3}\). Also, the relation \(h_{2}^{2}d=4g\) implies that \(\nu^{2}\kappa=4\overline{\kappa}\). Then \[\eta\epsilon\kappa=\nu^{3}\kappa=4\nu\overline{\kappa}=\tau\eta^{3}\overline{\kappa}.\] The last equality uses the hidden 2 extension from \(2h_{2}\) to \(\tau h_{1}^{3}\), as shown in Lemma 3.11.

**Lemma 3.32**.: \((39,3,21)\) _There is a hidden \(\nu\) extension from \(\Delta h_{1}d\) to \(\tau^{3}h_{1}^{2}g^{2}\)._

Proof.: The element \(\Delta h_{1}d\) detects the product \(\eta_{1}\cdot\kappa\). Lemma 3.27 implies that \(\nu\cdot\eta_{1}\cdot\kappa\) equals \(\tau^{2}\epsilon\kappa\overline{\kappa}\). Lemma 3.31 implies that this last product equals \(\tau^{3}\eta^{2}\overline{\kappa}^{2}\), which is detected by \(\tau^{3}h_{1}^{2}g^{2}\).

**Proposition 3.33**.: 1. \((62,2,32)\) \(d_{11}(\Delta^{2}d)=\tau^{5}h_{1}g^{3}\). 2. \((158,2,80)\) \(d_{11}(\Delta^{6}d)=\tau^{5}\Delta^{4}h_{1}g^{3}\).

Proof.: The element \(\tau^{5}h_{1}^{2}g^{3}\) detects \(\tau^{5}\eta^{2}\overline{\kappa}^{3}\). Lemma 3.32 implies that \(\tau^{5}\eta^{2}\overline{\kappa}^{3}\) equals \(\tau^{2}\nu\overline{\kappa}\cdot\eta_{1}\cdot\kappa\). Because of Proposition 3.8, we know that \(\tau^{2}\nu\overline{\kappa}\) is zero. Therefore, \(\tau^{5}h_{1}^{2}g^{3}\) is hit by some differential.
The only possibility is that \(d_{11}(\Delta^{2}h_{1}d)=\tau^{5}h_{1}^{2}g^{3}\). It follows that \(d_{11}(\Delta^{2}d)=\tau^{5}h_{1}g^{3}\). For the second formula, multiply by the permanent cycle \(\Delta^{4}h_{1}\) to see that \(d_{11}(\Delta^{6}h_{1}d)\) equals \(\tau^{5}\Delta^{4}h_{1}^{2}g^{3}\). It follows that \(d_{11}(\Delta^{6}d)\) equals \(\tau^{5}\Delta^{4}h_{1}g^{3}\). ### \(d_{13}\) differentials We have already established some \(d_{13}\) differentials in Propositions 3.16 and 3.22 because we needed those results in order to compute shorter differentials. We now finish the computation of the \(d_{13}\) differentials. **Lemma 3.34**.: \((110,2,56)\) _The element \(\Delta^{4}d\) of the mANss for mmf corresponds to the element of the same name in the mAss for mmf._ Proof.: We have already analyzed all possible Adams-Novikov differentials of length \(11\) or less, and there are no other possible values for a differential on \(\Delta^{4}d\). Therefore, \(\Delta^{4}d\) is a permanent cycle in the mANss for \(mmf\). Now the argument given in the proof of Theorem 3.5 applies. The mANss element \(\Delta^{4}d\) is detected in filtration \(20\) in the Adams \(E_{2}\)-page for \(mmf/\tau\). Therefore, \(\Delta^{4}d\) corresponds to an element of the mAss with Adams filtration at most \(20\). There is only one possible element in the mAss with sufficiently low filtration. **Lemma 3.35**.: __ 1. \((39,3,21)\) _There is a hidden_ \(\eta\) _extension from_ \(\Delta h_{1}d\) _to_ \(2\tau^{2}g^{2}\)_._ 2. \((135,3,69)\) _There is a hidden_ \(\eta\) _extension from_ \(\Delta^{5}h_{1}d\) _to_ \(2\tau^{2}\Delta^{4}g^{2}\)_._ Proof.: Table 4 shows that the elements \(\Delta h_{1}\) and \(d\) in the mANss for \(mmf\) correspond to elements of the same name in the mAss for \(mmf\). The product \(\Delta h_{1}\cdot h_{1}d\) is non-zero in the mAss \(E_{2}\)-page and also in the mAss \(E_{\infty}\)-page because there are no possible differentials that could hit it. 
(Note that this product is non-zero in the motivic context, but the corresponding classical product is zero in the \(E_{2}\)-page of the Adams spectral sequence for \(tmf\).) Therefore, \(\Delta h_{1}d\) must support a hidden \(\eta\) extension in the mANss for \(mmf\). There are three possible targets for this extension: \(\tau^{2}g^{2}\), \(2\tau^{2}g^{2}\), and \(3\tau^{2}g^{2}\). The first and last possibilities are ruled out by the relation \(2\eta=0\). The argument for the second extension is nearly identical. Table 4 and Proposition 2.8 imply that the mANss element \(\Delta^{5}h_{1}d\) corresponds to the mAss element \(\Delta^{4}\cdot\Delta h_{1}\cdot d\). The product \(\Delta^{4}\cdot\Delta h_{1}\cdot h_{1}d\) is non-zero in the mAss \(E_{\infty}\)-page, so \(\Delta^{5}h_{1}d\) must support a hidden \(\eta\) extension in the mANss. The only possible target for this extension is \(2\tau^{2}\Delta^{4}g^{2}\). **Proposition 3.36**.: __ 1. \((81,3,42)\)__\(d_{13}(\Delta^{3}h_{1}c)=2\tau^{6}g^{4}\)_._ 2. \((177,3,90)\)__\(d_{13}(\Delta^{7}h_{1}c)=2\tau^{6}\Delta^{4}g^{4}\)_._ Proof.: Lemma 3.35 implies that there is a hidden \(\eta\) extension from \(\Delta h_{1}dg^{2}\) to \(2\tau^{2}g^{4}\). Proposition 3.24 shows that \(\tau^{4}\Delta h_{1}dg^{2}\) is hit by a differential. Therefore, \(2\tau^{6}g^{4}\) must also be hit by a differential. There is only one possible source for this differential. The proof for the second formula is similar. There is a hidden \(\eta\) extension from \(\Delta^{5}h_{1}dg^{2}\) to \(2\tau^{2}\Delta^{4}g^{4}\). Since \(\tau^{4}\Delta^{5}h_{1}dg^{2}\) is hit by a differential, \(2\tau^{6}\Delta^{4}g^{4}\) must also be hit by a differential. ### \(d_{23}\) differentials **Lemma 3.37**.: \((75,3,38)\) _There is a hidden \(\eta_{1}\) extension from \(\tau\Delta^{3}h_{1}^{3}\) to \(\tau^{9}g^{5}\)._ Proof.: According to Table 4, the mANss elements \(\Delta h_{1}\) and \(g\) correspond to elements of the same name in the mAss. 
In the mAss \(E_{2}\)-page, the relations given in [11, Theorem 4.13] imply that \(\tau(\Delta h_{1})^{4}=\tau^{9}g^{5}\). Therefore, in the mANss, \(\tau^{9}g^{5}\) detects the product \(\tau\eta_{1}^{4}\). On the other hand, \(\tau\Delta^{3}h_{1}^{3}\) detects the product \(\tau\eta_{1}^{3}\) in the mANss. **Remark 3.38**.: \((75,3,39)\) _Beware that \(\Delta^{3}h_{1}^{3}\) does not support a hidden \(\eta_{1}\) extension. Rather, it supports a non-hidden extension since \(\Delta^{4}h_{1}^{4}\) is non-zero. However, \(\Delta^{4}h_{1}^{4}\) is annihilated by \(\tau\), which allows for the hidden extension on \(\tau\Delta^{3}h_{1}^{3}\)._ **Proposition 3.39**.: \((121,1,61)\ d_{23}(\Delta^{5}h_{1})=\tau^{11}g^{6}\)_._ Proof.: The hidden extension of Lemma 3.37 implies that there is a hidden \(\eta_{1}\) extension from \(\tau\Delta^{3}h_{1}^{3}g\) to \(\tau^{9}g^{6}\). We already know that \(\tau^{3}\Delta^{3}h_{1}^{3}g\) is zero because of the differential \(d_{7}(\Delta^{4})\) from Proposition 3.21. Therefore, \(\tau^{11}g^{6}\) must be the value of some differential, and there is only one possibility. ## 4. Hidden extensions In Section 3, we established several hidden extensions in the mANss for _mmf_ as steps towards computing differentials. In this section, we finish the analysis of all hidden extensions by \(2,\eta\), and \(\nu\). Our work does not completely determine the ring structure of \(\pi_{*,*}\)_mmf_ because there exist hidden extensions by other elements. Up to one minor uncertainty, the entire ring structure of \(\pi_{*}\)_tmf_ is determined in [1]. **Theorem 4.1**.: _Up to multiples of \(g\) and \(\Delta^{8}\), Tables 7, 8 and 9 list all hidden extensions by \(2,\eta\), and \(\nu\) in the mANss for mmf._ Proof.: Some of the non-zero hidden extensions are established in the previous results because we needed them to compute Adams-Novikov differentials. The remaining non-zero hidden extensions are proved in the following results. 
The last columns of the tables indicate the specific proofs for each extension. There are some possible hidden extensions that turn out not to occur. Most of these possibilities can be ruled out using Method 2.17. For example, consider the possible hidden \(\eta\) extension from \(\tau\Delta h_{1}^{3}\) to \(\tau^{2}cg\). Because of multiplication by \(\tau g\), we may instead consider the possible hidden \(\eta\) extension from \(\tau^{2}\Delta h_{1}^{3}g\) to \(\tau^{3}cg^{2}\). These last two elements are annihilated by \(\tau\), so they are in the image of projection to the top cell. By inspection, there is no \(\eta\) extension in the homotopy of \(mmf/\tau\) in the appropriate degree. A few miscellaneous cases remain, but their proofs are straightforward. For example, * \((65,3,34)\) there is no hidden \(2\) extension from \(\Delta^{2}h_{2}d\) to \(\tau^{3}\Delta h_{1}g^{2}\) because the latter element supports an \(h_{1}\) extension. * \((24,0,12)\) there is no hidden \(\nu\) extension from \(8\Delta\) to \(\tau\Delta h_{1}^{3}\) because the first element is annihilated by \(g\) while the second element is not. 
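Many of these exclusions are pure bookkeeping in the trigrading. As an illustrative check (added here for the reader; not part of the original argument), recall that a \(d_{r}\) differential shifts \((s,f,w)\) to \((s-1,f+r,w)\), and that the generator degrees can be read off from the \((s,f,w)\) labels used throughout: \(|\tau|=(0,0,-1)\), \(|h_{1}|=(1,1,1)\), \(|h_{2}|=(3,1,2)\), \(|c|=(8,2,5)\), \(|d|=(14,2,8)\), \(|g|=(20,4,12)\), and \(|\Delta|=(24,0,12)\). For instance, the first differential of Proposition 3.29 is consistent:

```latex
% Tridegree check for d_9(\Delta^2 h_1) = \tau^4 c g^2 (Proposition 3.29).
\begin{aligned}
|\Delta^{2}h_{1}| &= 2(24,0,12)+(1,1,1) = (49,1,25),\\
|\tau^{4}cg^{2}| &= 4(0,0,-1)+(8,2,5)+2(20,4,12) = (48,10,25),
\end{aligned}
```

so the target sits in degree \((49-1,\;1+9,\;25)\), exactly as a \(d_{9}\) requires.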
**Proposition 4.2**.: _Table 5 lists some hidden extensions in the mANss for mmf._ \begin{table} \begin{tabular}{l l l l l l} \hline \hline \((s,f,w)\) & source & type & target & reason & \\ \hline \((51,1,26)\) & \(2\Delta^{2}h_{2}\) & \(2\) & \(\tau\Delta^{2}h_{1}^{3}\) & \(d_{5}(2\Delta^{3})=2\tau^{2}\Delta^{2}h_{2}g\) & \(d_{7}(4\Delta^{3})=\tau^{3}\Delta^{2}h_{1}^{3}g\) \\ \((54,2,28)\) & \(\Delta^{2}h_{2}^{2}\) & \(2\) & \(\tau^{4}dg^{2}\) & \(d_{5}(\Delta^{3}h_{2})=\tau^{2}\Delta^{2}h_{2}^{2}g\) & \(d_{13}(2\Delta^{3}h_{2})=\tau^{6}dg^{3}\) \\ \((99,1,50)\) & \(2\Delta^{4}h_{2}\) & \(2\) & \(\tau\Delta^{4}h_{1}^{3}\) & \(d_{5}(2\Delta^{5})=2\tau^{2}\Delta^{4}h_{2}g\) & \(d_{7}(4\Delta^{5})=\tau^{3}\Delta^{4}h_{1}^{3}g\) \\ \((123,1,62)\) & \(2\Delta^{5}h_{2}\) & \(2\) & \(\tau\Delta^{5}h_{1}^{3}\) & \(d_{5}(\Delta^{6})=2\tau^{2}\Delta^{5}h_{2}g\) & \(d_{7}(2\Delta^{6})=\tau^{3}\Delta^{5}h_{1}^{3}g\) \\ \((147,1,74)\) & \(2\Delta^{6}h_{2}\) & \(2\) & \(\tau\Delta^{6}h_{1}^{3}\) & \(d_{5}(2\Delta^{7})=2\tau^{2}\Delta^{6}h_{2}g\) & \(d_{7}(4\Delta^{7})=\tau^{3}\Delta^{6}h_{1}^{3}g\) \\ \((51,1,26)\) & \(\Delta^{2}h_{2}\) & \(\eta\) & \(\tau^{2}\Delta cg\) & \(d_{5}(\Delta^{3})=\tau^{2}\Delta^{2}h_{2}g\) & \(d_{9}(\Delta^{3}h_{1})=\tau^{4}\Delta cg^{2}\) \\ \((99,1,50)\) & \(\Delta^{4}h_{2}\) & \(\eta\) & \(\tau^{9}g^{5}\) & \(d_{5}(\Delta^{5})=\tau^{2}\Delta^{4}h_{2}g\) & \(d_{23}(\Delta^{5}h_{1})=\tau^{11}g^{6}\) \\ \((123,1,62)\) & \(2\Delta^{5}h_{2}\) & \(\eta\) & \(\tau^{2}\Delta^{4}cg\) & \(d_{5}(\Delta^{6})=2\tau^{2}\Delta^{5}h_{2}g\) & \(d_{9}(\Delta^{6}h_{1})=\tau^{4}\Delta^{4}cg^{2}\) \\ \hline \hline \end{tabular} \end{table} Table 5. Some hidden extensions deduced from Method 2.17 Proof.: All of these extensions follow from Method 2.17, using the differentials in the last two columns of Table 5. To illustrate, we discuss the first extension in the table. 
In order to obtain the extension from \(2\Delta^{2}h_{2}\) to \(\tau\Delta^{2}h_{1}^{3}\), we can establish a hidden 2 extension from \(2\tau\Delta^{2}h_{2}g\) to \(\tau^{2}\Delta^{2}h_{1}^{3}g\). Then the desired extension follows immediately. The elements \(2\tau\Delta^{2}h_{2}g\) and \(\tau^{2}\Delta^{2}h_{1}^{3}g\) are annihilated by \(\tau\) in the \(E_{\infty}\)-page of the mANss for \(mmf\). Therefore, they detect elements in \(\pi_{71,37}mmf\) that are in the image of \(\pi_{72,36}mmf/\tau\) under projection to the top cell. By inspection, these preimages are \(2\Delta^{3}\) and \(4\Delta^{3}\). These latter elements are connected by a 2 extension, so their images are also connected by a 2 extension. The other extensions have essentially the same proof. First multiply by an appropriate power of \(g\). Then pull back to \(\pi_{*,*}mmf/\tau\), where the extension is visible by inspection. **Remark 4.3**.: \((124,6,63)\) _The hidden \(\eta\) extension from \(\tau^{2}\Delta^{4}cg\) to \(\tau^{9}\Delta h_{1}g^{5}\) in Table 5 deserves further discussion. Note that \(\Delta^{4}cg\) and \(\tau\Delta^{4}cg\) support \(\eta\) extensions that are not hidden. However, \(\tau^{2}\Delta^{4}h_{1}cg\) is zero, so \(\tau^{2}\Delta^{4}cg\) can support a hidden \(\eta\) extension. This explains why the \(E_{\infty}\)-page chart in Figure 5 shows both an \(h_{1}\) extension and a hidden \(\eta\) extension on the element \(\Delta^{4}cg\) in the 124-stem._ _The subtleties of this situation are illuminated by consideration of homotopy elements. Let \(\alpha\) be an element of \(\pi_{124,65}mmf\) that is detected by \(\Delta^{4}cg\). The element \(\tau^{2}\alpha\) is detected by \(\tau^{2}\Delta^{4}cg\). The hidden \(\eta\) extension implies that \(\tau^{2}\eta\alpha\) is detected by \(\tau^{9}\Delta h_{1}g^{5}\)._ _Now let \(\beta\) be an element in \(\pi_{122,64}mmf\) that is detected by \(\Delta^{4}h_{2}^{2}g\). 
Note that \(\tau^{2}\beta\) must be zero because \(\tau^{2}\Delta^{4}h_{2}^{2}g\) is zero and because there are no \(E_{\infty}\)-page elements in higher filtration. Then \(\nu\beta\) is detected by \(h_{2}\cdot\Delta^{4}h_{2}^{2}g\), which equals \(\Delta^{4}h_{1}cg\)._ _Both \(\eta\alpha\) and \(\nu\beta\) are detected by the same element of the \(E_{\infty}\)-page, but they are not equal. The first product is not annihilated by \(\tau^{2}\), while the latter product is annihilated by \(\tau^{2}\). In fact, the difference between \(\eta\alpha\) and \(\nu\beta\) is detected by \(\tau^{7}\Delta h_{1}g^{5}\). This phenomenon corresponds to the classical relation \(\nu^{2}\nu_{4}=\eta\epsilon_{4}+\eta_{1}\overline{\kappa}^{4}\)[1, Proposition 9.17]._ **Remark 4.4**.: \((65,3,34)\) _The chart in [1] shows a hidden \(\eta\) extension from \(\Delta^{2}h_{2}d\) to \(\Delta h_{1}^{2}g^{2}\) in the 66-stem. According to Definition 2.16, this is not a hidden extension because of the presence of \(\Delta h_{1}g^{2}\) in higher filtration._ \begin{table} \begin{tabular}{l l l l l l} \hline \hline \((s,f,w)\) & source & type & target & reason & \\ \hline \((124,6,63)\) & \(\tau^{2}\Delta^{4}cg\) & \(\eta\) & \(\tau^{9}\Delta h_{1}g^{5}\) & \(d_{9}(\Delta^{6}h_{1})=\tau^{4}\Delta^{4}cg^{2}\) & \(d_{23}(\Delta^{6}h_{1}^{2})=\tau^{11}\Delta h_{1}g^{6}\) \\ \((129,3,66)\) & \(\Delta^{5}h_{1}c\) & \(\eta\) & \(\tau^{7}\Delta^{2}h_{1}^{2}g^{4}\) & \(d_{9}(\Delta^{7}h_{1}^{2}){=}\tau^{4}\Delta^{5}h_{1}cg^{2}\) & \(d_{23}(\Delta^{7}h_{1}^{3})=\tau^{11}\Delta^{2}h_{1}^{2}g^{6}\) \\ \((147,1,74)\) & \(\Delta^{6}h_{2}\) & \(\eta\) & \(\tau^{2}\Delta^{5}cg\) & \(d_{5}(\Delta^{7})=\tau^{2}\Delta^{6}h_{2}g\) & \(d_{9}(\Delta^{7}h_{1})=\tau^{4}\Delta^{5}cg^{2}\) \\ \((161,3,82)\) & \(\Delta^{6}h_{2}d\) & \(\eta\) & \(\tau^{3}\Delta^{5}h_{1}^{2}g^{2}\) & \(d_{5}(\Delta^{7}d)=\tau^{2}\Delta^{6}h_{2}dg\) & \(d_{11}(\Delta^{7}h_{1}d){=}\tau^{5}\Delta^{5}h_{1}^{2}g^{3}\) \\ \((0,0,0)\) & \(4\) & 
\(\nu\) & \(\tau h_{1}^{3}\) & \(d_{5}(\Delta h_{2}d)=4\tau^{2}g^{2}\) & \(d_{7}(4\Delta g)=\tau^{3}h_{1}^{3}g^{2}\) \\ \((48,0,24)\) & \(4\Delta^{2}\) & \(\nu\) & \(\tau\Delta^{2}h_{1}^{3}\) & \(d_{5}(\Delta^{3}h_{2}d)=4\tau^{2}\Delta^{2}g^{2}\) & \(d_{7}(4\Delta^{3}g)=\tau^{3}\Delta^{2}h_{1}^{3}g^{2}\) \\ \((51,1,26)\) & \(2\Delta^{2}h_{2}\) & \(\nu\) & \(\tau^{4}dg^{2}\) & \(d_{5}(2\Delta^{3})=2\tau^{2}\Delta^{2}h_{2}g\) & \(d_{13}(2\Delta^{3}h_{2})=\tau^{6}dg^{3}\) \\ \((57,3,30)\) & \(\Delta^{2}h_{2}^{3}\) & \(\nu\) & \(2\tau^{4}g^{3}\) & \(d_{5}(\Delta^{3}h_{2}^{2})=\tau^{2}\Delta^{2}h_{2}^{3}g\) & \(d_{13}(\Delta^{3}h_{2}^{3})=2\tau^{6}g^{4}\) \\ \((96,0,48)\) & \(4\Delta^{4}\) & \(\nu\) & \(\tau\Delta^{4}h_{1}^{3}\) & \(d_{5}(\Delta^{5}h_{2}d)=4\tau^{2}\Delta^{4}g^{2}\) & \(d_{7}(4\Delta^{5}g)=\tau^{3}\Delta^{4}h_{1}^{3}g^{2}\) \\ \((144,0,72)\) & \(4\Delta^{6}\) & \(\nu\) & \(\tau\Delta^{6}h_{1}^{3}\) & \(d_{5}(\Delta^{7}h_{2}d)=4\tau^{2}\Delta^{6}g^{2}\) & \(d_{7}(4\Delta^{7}g)=\tau^{3}\Delta^{6}h_{1}^{3}g^{2}\) \\ \((147,1,74)\) & \(2\Delta^{6}h_{2}\) & \(\nu\) & \(\tau^{4}\Delta^{4}dg^{2}\) & \(d_{5}(2\Delta^{7})=2\tau^{2}\Delta^{6}h_{2}g\) & \(d_{13}(2\Delta^{7}h_{2})=\tau^{6}\Delta^{4}dg^{3}\) \\ \((153,3,78)\) & \(\Delta^{6}h_{2}^{3}\) & \(\nu\) & \(2\tau^{4}\Delta^{4}g^{3}\) & \(d_{5}(\Delta^{7}h_{2}^{2})=\tau^{2}\Delta^{6}h_{2}^{2}g\) & \(d_{13}(\Delta^{7}h_{2}^{3})=2\tau^{6}\Delta^{4}g^{4}\) \\ \hline \hline \end{tabular} \end{table} Table 5: Some hidden extensions deduced from Method 2.17 Nevertheless, there is a relevant point here about multiplicative structure. Because of the presence of \(\tau^{3}\Delta h_{1}g^{2}\) in higher filtration, the element \(\Delta^{2}h_{2}d\) detects two homotopy elements. One of these elements is annihilated by \(\eta\), and one is not. The product \(\nu_{2}\kappa\) is one of the two homotopy elements that are detected by \(\Delta^{2}h_{2}d\). 
In fact, \(\nu_{2}\kappa\) is the homotopy element that is not annihilated by \(\eta\). This follows from the hidden \(\eta\) extension from \(\Delta^{2}h_{2}\) to \(\tau^{2}\Delta cg\) and the hidden \(\kappa\) extension from \(\Delta cg\) to \(\tau\Delta h_{1}^{2}g^{2}\). **Proposition 4.5**.: \((110,2,56)\) _There is a hidden \(2\) extension from \(\Delta^{4}d\) to \(\tau^{6}\Delta^{2}h_{1}^{2}g^{3}\)._ Proof.: The proof is a variation on Method 2.17, in which we use the long exact sequence \[\begin{CD}\pi_{*,*}mmf@>{}>{}>\pi_{*,*}mmf/\tau^{2}@>{}>{}>\pi_{*-1,*+2}mmf@>{\tau^{2}}>{}>\pi_{*-1,*}mmf\end{CD}\] induced by the cofiber sequence \[\begin{CD}mmf@>{}>{}>mmf/\tau^{2}@>{}>{}>\Sigma^{1,-2}mmf@>{\tau^{2}}>{}>\Sigma^{1,0}mmf.\end{CD}\] We will show that there is a hidden \(2\) extension from \(\tau^{4}\Delta^{4}dg^{3}\) to \(\tau^{10}\Delta^{2}h_{1}^{2}g^{6}\). The desired \(2\) extension follows immediately by multiplication by \(\tau^{4}g^{3}\). Recall from Proposition 3.22 that there is a differential \(d_{13}(2\Delta^{7}h_{2})=\tau^{6}\Delta^{4}dg^{3}\). Also, it follows from Proposition 3.39 that there is a differential \(d_{23}(\Delta^{7}h_{1}^{3})=\tau^{11}\Delta^{2}h_{1}^{2}g^{6}\). Therefore, \(\tau^{4}\Delta^{4}dg^{3}\) and \(\tau^{10}\Delta^{2}h_{1}^{2}g^{6}\) detect elements in \(\pi_{170,88}mmf\) that are annihilated by \(\tau^{2}\). Hence they have preimages in \(\pi_{171,86}mmf/\tau^{2}\) under projection to the top cell. By inspection, these preimages are \(2\Delta^{7}h_{2}\) and \(\tau\Delta^{7}h_{1}^{3}\). In the mANss for \(mmf\), there is a differential \(d_{5}(\Delta^{7})=\tau^{2}\Delta^{6}h_{2}g\). However, in the mANss for \(mmf/\tau^{2}\), the element \(\tau^{2}\Delta^{6}h_{2}g\) is already zero in the \(E_{2}\)-page. Therefore, \(\Delta^{7}\) is a permanent cycle in the mANss for \(mmf/\tau^{2}\). Recall the hidden \(2\) extension from \(2h_{2}\) to \(\tau h_{1}^{3}\) established in Lemma 3.11. 
Multiplication by \(\Delta^{7}\) gives a hidden \(2\) extension in the mANss \(E_{\infty}\)-page for \(mmf/\tau^{2}\) from \(2\Delta^{7}h_{2}\) to \(\tau\Delta^{7}h_{1}^{3}\). Finally, apply projection to the top cell to obtain the hidden \(2\) extension from \(\tau^{4}\Delta^{4}dg^{3}\) to \(\tau^{10}\Delta^{2}h_{1}^{2}g^{6}\). **Proposition 4.6**.: \((50,2,26)\) _There is a hidden \(\nu\) extension from \(\Delta^{2}h_{1}^{2}\) to \(\tau^{2}\Delta h_{1}cg\)._ Proof.: This follows from \(\Delta h_{1}\) multiplication on the hidden extension from \(\Delta h_{1}\) to \(\tau^{2}cg\) established in Lemma 3.27. The next several lemmas establish some Toda brackets that we will use to deduce further hidden extensions. All of these Toda brackets are deduced from algebraic information, i.e., from Massey products in the mANss \(E_{2}\)-page. **Lemma 4.7**.: \((32,2,17)\) _The Toda bracket \(\langle\nu^{2},2,\eta_{1}\rangle\) is detected by \(\Delta c\) and has no indeterminacy._ Proof.: We have the Massey product \(c=\langle h_{2}^{2},h_{0},h_{1}\rangle\) in the motivic algebraic Novikov \(E_{2}\)-page [1]. The May convergence theorem [15, Theorem 4.16] implies that \(c=\langle h_{2}^{2},2,h_{1}\rangle\) in the mANss \(E_{2}\)-page. Multiply by \(\Delta\) to obtain \[\Delta c=\langle h_{2}^{2},2,h_{1}\rangle\Delta=\langle h_{2}^{2},2,\Delta h_{1}\rangle.\] The second equality holds because there is no indeterminacy by inspection. There are no crossing differentials, so the Moss convergence theorem [14, Theorem 1.2][15, Theorem 4.16] implies that \(\Delta c\) detects the Toda bracket. By inspection, the bracket has no indeterminacy. **Lemma 4.8**.: \((128,2,65)\) _The Toda bracket \(\left\langle\nu_{2}^{2},2,\eta_{1}\right\rangle\) is detected by \(\Delta^{5}c\) and has no indeterminacy._ Proof.: As in the proof of Lemma 4.7, we have the Massey product \(c=\left\langle h_{2}^{2},2,h_{1}\right\rangle\) in the mANss \(E_{2}\)-page. 
Multiply by \(\Delta^{5}\) to obtain \[\Delta^{5}c=\Delta^{4}\langle h_{2}^{2},2,h_{1}\rangle\Delta=\left\langle\Delta^{4}h_{2}^{2},2,\Delta h_{1}\right\rangle.\] The second equality holds because there is no indeterminacy by inspection. There are no crossing differentials, so the Moss convergence theorem [14, Theorem 1.2][10, Theorem 4.16] implies that \(\Delta^{5}c\) detects the Toda bracket. By inspection, the bracket has no indeterminacy. **Lemma 4.9**.: \((35,7,21)\) _The Toda bracket \(\left\langle\nu^{2},2,\epsilon\overline{\kappa}\right\rangle\) is detected by \(h_{1}dg\) and has no indeterminacy._ Proof.: We have the Massey product \(h_{1}dg=\langle h_{2}^{2},h_{0},cg\rangle\) in the motivic algebraic Novikov \(E_{2}\)-page [11]. The May convergence theorem [14][10, Theorem 4.16] implies that \(h_{1}dg=\langle h_{2}^{2},2,cg\rangle\) in the mANss \(E_{2}\)-page. There are no crossing differentials, so the Moss convergence theorem [14, Theorem 1.2][10, Theorem 4.16] implies that \(h_{1}dg\) detects the Toda bracket. By inspection, the bracket has no indeterminacy. **Lemma 4.10**.: \((131,7,69)\) _The Toda bracket \(\left\langle\nu_{2}^{2},2,\epsilon\overline{\kappa}\right\rangle\) is detected by \(\Delta^{4}h_{1}dg\) and has no indeterminacy._ Proof.: As in the proof of Lemma 4.9, we have the Massey product \(h_{1}dg=\langle h_{2}^{2},2,cg\rangle\) in the mANss \(E_{2}\)-page. Multiply by \(\Delta^{4}\) to obtain \[\Delta^{4}h_{1}dg=\Delta^{4}\langle h_{2}^{2},2,cg\rangle=\langle\Delta^{4}h_{2}^{2},2,cg\rangle.\] The second equality holds because there is no indeterminacy by inspection. There are no crossing differentials, so the Moss convergence theorem [14, Theorem 1.2][10, Theorem 4.16] implies that \(\Delta^{4}h_{1}dg\) detects the Toda bracket. By inspection, the bracket has no indeterminacy. **Proposition 4.11**.: _There are hidden \(\nu\) extensions:_ 1. \((32,2,17)\) _from_ \(\Delta c\) _to_ \(\tau^{2}h_{1}dg\)_._ 2. 
\((128,2,65)\) _from_ \(\Delta^{5}c\) _to_ \(\tau^{2}\Delta^{4}h_{1}dg\)_._ Proof.: Recall from Lemma 4.7 that the Toda bracket \(\left\langle\nu^{2},2,\eta_{1}\right\rangle\) is detected by \(\Delta c\). We have \[\langle\nu^{2},2,\eta_{1}\rangle\nu=\langle\nu^{2},2,\nu\cdot\eta_{1}\rangle=\langle\nu^{2},2,\tau^{2}\epsilon\overline{\kappa}\rangle.\] The first equality holds because there is no indeterminacy by inspection. The second equality follows from the hidden \(\nu\) extension of Lemma 3.27. Lemma 4.9 implies that \(\tau^{2}h_{1}dg\) detects the last Toda bracket. The proof for the second hidden extension is nearly identical. Consider the equalities \[\langle\nu_{2}^{2},2,\eta_{1}\rangle\nu=\langle\nu_{2}^{2},2,\nu\cdot\eta_{1}\rangle=\langle\nu_{2}^{2},2,\tau^{2}\epsilon\overline{\kappa}\rangle,\] and use Lemma 4.8 and Lemma 4.10. **Proposition 4.12**.: _There are hidden \(\nu\) extensions:_ 1. \((97,1,49)\) _from_ \(\Delta^{4}h_{1}\) _to_ \(\tau^{9}g^{5}\)_._ 2. \((122,2,62)\) _from_ \(\Delta^{5}h_{1}^{2}\) _to_ \(\tau^{9}\Delta h_{1}g^{5}\)_._ 3. \((147,3,75)\) _from_ \(\Delta^{6}h_{1}^{3}\) _to_ \(\tau^{9}\Delta^{2}h_{1}^{2}g^{5}\)_._ Proof.: We prove the third hidden extension. Then the first two hidden extensions follow from multiplication by \(\Delta h_{1}\). Proposition 4.5 and Lemma 3.23 imply that there is a hidden \(\epsilon\) extension from \(\Delta^{6}h_{2}\) to \(\tau^{10}\Delta^{2}h_{1}^{2}g^{5}\). We also have a hidden \(2\) extension from \(2\Delta^{6}h_{2}\) to \(\tau\Delta^{6}h_{1}^{3}\), as shown in Proposition 4.2. It follows that there must be a hidden \(\nu\) extension from \(\Delta^{6}h_{1}^{3}\) to \(\tau^{9}\Delta^{2}h_{1}^{2}g^{5}\). **Proposition 4.13**.: \((110,2,56)\) _There is a hidden \(\epsilon\) extension from \(\Delta^{4}d\) to \(\tau\Delta^{4}h_{1}^{2}g\)._ Proof.: We showed in Lemma 3.31 that there is a hidden \(\epsilon\) extension from \(d\) to \(\tau h_{1}^{2}g\). 
Multiply by \(\Delta^{4}h_{1}\) to obtain a hidden \(\epsilon\) extension from \(\Delta^{4}h_{1}d\) to \(\tau\Delta^{4}h_{1}^{2}g\). Finally, use \(h_{1}\) multiplication to obtain the hidden extension on \(\Delta^{4}d\). **Proposition 4.14**.: \((135,3,69)\) _There is a hidden \(\nu\) extension from \(\Delta^{5}h_{1}d\) to \(\tau^{3}\Delta^{4}h_{1}^{2}g^{2}\)._ Proof.: By Lemma 3.26, the element \(\Delta h_{1}\) detects the Toda bracket \(\langle\eta,\nu,\tau^{2}\bar{\kappa}\rangle\). Recall from Table 3 that \(\kappa_{4}\) is an element of \(\pi_{110,56}mmf\) that is detected by the permanent cycle \(\Delta^{4}d\). Then the element \(\Delta^{5}h_{1}d\) detects \(\langle\eta,\nu,\tau^{2}\bar{\kappa}\rangle\kappa_{4}\). Now shuffle to obtain \[\nu\langle\eta,\nu,\tau^{2}\bar{\kappa}\rangle\kappa_{4}=\langle\nu,\eta,\nu \rangle\tau^{2}\bar{\kappa}\cdot\kappa_{4}.\] Recall from Lemma 2.20 that \(\epsilon=\langle\nu,\eta,\nu\rangle\). Also recall from Proposition 4.13 that there is a hidden \(\epsilon\) extension from \(\Delta^{4}d\) to \(\tau\Delta^{4}h_{1}^{2}g\). We conclude that \(\epsilon\cdot\tau^{2}\bar{\kappa}\cdot\kappa_{4}\) is detected by \(\tau^{3}\Delta^{4}h_{1}^{2}g^{2}\). ## 5. The elements \(\nu_{k}\) The multiplicative structure of classical \(\pi_{*}tmf\) at the prime \(2\) has been completely computed, with one exception [1, p. 19]. We will use the mANss for \(mmf\) in order to resolve this last piece of \(2\)-primary multiplicative structure. As discussed in Remark 2.11, our choices of homotopy elements are not necessarily strictly compatible with the choices in [1]. However, our choices do agree up to multiples of certain powers of \(2\). Our computations below in Proposition 5.9, Theorem 5.10, Corollary 5.12, Proposition 5.13, and Proposition 5.15 lie in groups of order at most \(8\), so the possible discrepancies are irrelevant. 
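Throughout this section, stems and weights are read off from detecting \(E_{\infty}\)-page elements. As a small worked example of this bookkeeping (a check added here for the reader, using \(|\tau|=(0,0,-1)\) and \(|g|=(20,4,12)\)), the class \(\tau\overline{\kappa}\) lies in \(\pi_{20,11}mmf\) because \(\overline{\kappa}\) is detected by \(g\):

```latex
% \overline{\kappa} is detected by g in degree (20,4,12), so its stem and
% weight are (s,w) = (20,12). Multiplying by \tau lowers the weight by one:
(s,w)\bigl(\tau\overline{\kappa}\bigr) = (0,-1)+(20,12) = (20,11),
\qquad \tau\overline{\kappa}\in\pi_{20,11}mmf.
```

The same two-step check, reading \((s,w)\) from the detector and adjusting by powers of \(\tau\), applies to the elements \(\nu_{k}\) below.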
We will frequently multiply by the element \(\tau\bar{\kappa}\) in \(\pi_{20,11}mmf\) in order to detect elements and relations. Beware that multiplication by \(\tau\bar{\kappa}\) is not injective in general. However, in all degrees that we study, multiplication by \(\tau\bar{\kappa}\) is in fact an isomorphism. Recall the projection \(q:mmf/\tau\to mmf\) to the top cell that was discussed in detail in Section 2.9. We will rely heavily on this map in order to transfer the algebraic information in \(\pi_{*,*}mmf/\tau\) into homotopical information about \(\pi_{*,*}mmf\). **Lemma 5.1**.: _The element \(q(\Delta^{k+1})\) of \(\pi_{*,*}mmf\) is detected by \(-(k+1)\tau\Delta^{k}h_{2}g\) in Adams-Novikov filtration \(5\)._ Proof.: If \(k+1\) is not a multiple of \(4\), then we have the non-zero differential \(d_{5}(\Delta^{k+1})=(k+1)\tau^{2}\Delta^{k}h_{2}g\). Proposition 2.14 implies that \(q(\Delta^{k+1})\) is detected by \(-(k+1)\tau\Delta^{k}h_{2}g\). If \(k+1\) is congruent to \(4\) modulo \(8\), then we have the non-zero differential \(d_{7}(\Delta^{k+1})=\tau^{3}\Delta^{k}h_{1}^{3}g\). Proposition 2.14 implies that \(q(\Delta^{k+1})\) is detected by \(\tau^{2}\Delta^{k}h_{1}^{3}g\) in filtration \(7\). This implies that \(q(\Delta^{k+1})\) is detected by zero in filtration \(5\). If \(k+1\) is a multiple of \(8\), then \(\Delta^{k}\) is a permanent cycle, so \(q(\Delta^{k+1})\) equals zero. This implies that \(q(\Delta^{k+1})\) is detected by zero in filtration \(5\). **Remark 5.2**.: For uniformity, we have stated Lemma 5.1 for all values of \(k\). As shown in the proof of the lemma, there are in fact three cases, depending on the value of \(k\). If \(k+1\) is not a multiple of \(4\), then \(-(k+1)\tau\Delta^{k}h_{2}g\) is a non-zero element in the mANss \(E_{\infty}\)-page. On the other hand, if \(k+1\) is a multiple of \(4\), then \(-(k+1)\tau\Delta^{k}h_{2}g\) is zero in the \(E_{\infty}\)-page since \(\tau\Delta^{k}h_{2}g\) is an element of order \(4\). 
In these cases, the lemma says that \(q(\Delta^{k+1})\) is detected by zero in filtration \(5\). In other words, \(q(\Delta^{k+1})\) is detected in filtration strictly greater than \(5\), if it is non-zero. In fact, \(q(\Delta^{k+1})\) is detected by \(\tau^{2}\Delta^{k}h_{1}^{3}g\) in filtration \(7\) when \(k+1\) is congruent to \(4\) modulo \(8\). Also, \(q(\Delta^{k+1})\) is zero when \(k+1\) is a multiple of \(8\) because \(\Delta^{k+1}\) is a permanent cycle. **Lemma 5.3**.: _The element \(q(\Delta^{k+1})\) is a multiple of \(\tau\overline{\kappa}\)._ Proof.: Lemma 5.1 shows that \(q(\Delta^{k+1})\) is detected by \(-(k+1)\tau\Delta^{k}h_{2}g\). By inspection, all possible values of \(q(\Delta^{k+1})\) are multiples of \(\tau\overline{\kappa}\). **Definition 5.4**.: Let \(\nu_{k}\) be the element of \(\pi_{24k+3,12k+2}mmf\) such that \(q(\Delta^{k+1})\) equals \(-\tau\overline{\kappa}\cdot\nu_{k}\). Note that \(\nu_{k}\) exists because of Lemma 5.3. Multiplication by \(\tau\overline{\kappa}\) is an isomorphism in the relevant degrees, so \(\nu_{k}\) is specified uniquely. We choose a minus sign in the defining formula of Definition 5.4 for later convenience. **Remark 5.5**.: Bruner and Rognes consider \(\nu_{3}\) and \(\nu_{7}\) to be "honorary" members of the family of elements \(\nu_{k}\). They are not multiplicative generators; \(\nu_{3}\) is non-zero but decomposable, and \(\nu_{7}\) equals zero. Definition 5.4 also implies that \(\nu_{7}\) is zero. This follows from the observation that \(q(\Delta^{8})\) equals zero since \(\Delta^{8}\) is a permanent cycle. The careful reader will note that the elements \(\nu_{k}\) were already partially defined in Table 3 in Section 2.6. The following lemma shows that the two approaches to \(\nu_{k}\) are compatible. Table 3 leaves some ambiguity in the definition of \(\nu_{k}\), and Definition 5.4 resolves that ambiguity. 
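As a consistency check on Definition 5.4 (added here; it assumes that projection to the top cell induces maps \(\pi_{s,w}mmf/\tau\to\pi_{s-1,w+1}mmf\), in line with the preimages in \(\pi_{72,36}mmf/\tau\) of elements of \(\pi_{71,37}mmf\) used in the proof of Proposition 4.2), the bidegree of \(\nu_{k}\) works out as stated:

```latex
% \Delta^{k+1} has stem 24(k+1) and weight 12(k+1), so the top-cell
% projection gives
q(\Delta^{k+1}) \in \pi_{24k+23,\,12k+13}mmf.
% Dividing by \tau\overline{\kappa} \in \pi_{20,11}mmf then forces
\nu_{k} \in \pi_{24k+23-20,\,12k+13-11}mmf = \pi_{24k+3,\,12k+2}mmf.
```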
**Lemma 5.6**.: _The element \(\nu_{k}\) is detected by \((k+1)\Delta^{k}h_{2}\) in Adams-Novikov filtration \(1\)._ Proof.: Lemma 5.1 determines the mANss \(E_{\infty}\)-page elements that detect \(q(\Delta^{k+1})\). Then Definition 5.4 means that \(-\tau\overline{\kappa}\cdot\nu_{k}\) is detected by those same elements. Multiplication by \(\tau g\) is an isomorphism in the relevant degrees, so the detecting elements for \(\nu_{k}\) are then determined. **Remark 5.7**.: Similarly to Remark 5.2, Lemma 5.6 includes three cases. If \(k+1\) is not a multiple of \(4\), then \((k+1)\Delta^{k}h_{2}\) is a non-zero element of the mANss \(E_{\infty}\)-page. If \(k+1\) is a multiple of \(4\), then \((k+1)\Delta^{k}h_{2}\) is zero since \(\Delta^{k}h_{2}\) is an element of order \(4\). This means that \(\nu_{k}\) is detected in filtration strictly greater than \(1\), if it is non-zero. In fact, \(\nu_{k}\) is detected by \(\tau\Delta^{k}h_{1}^{3}\) in filtration \(3\) if \(k+1\) is congruent to \(4\) modulo \(8\), and \(\nu_{k}\) is zero if \(k+1\) is a multiple of \(8\). **Remark 5.8**.: Earlier in Remark 2.2, we chose \(h_{2}\) so that it detects the element \(\nu\). Lemma 5.6 shows that \(\nu_{0}\) is also detected by \(h_{2}\), but that does not guarantee that it equals \(\nu\) because of the presence of \(\tau h_{1}^{3}\) in higher filtration. We can only conclude that \(\nu\) and \(\nu_{0}\) are equal up to multiples of \(4\). If \(\nu\) equals \(5\nu_{0}\), then we compute that \[q(5\Delta)=-5\tau\overline{\kappa}\cdot\nu_{0}=-\tau\overline{\kappa}\cdot\nu.\] So we may replace \(\Delta\) by \(5\Delta\), if necessary, and assume without loss of generality that \(\nu_{0}\) equals \(\nu\). This replacement is compatible with our previous choice of \(\Delta\) in Remark 3.9, which specified \(\Delta\) only up to multiples of \(4\). 
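The detecting elements of Lemma 5.6 and Remark 5.7 can likewise be checked against the trigrading (a verification added here, with generator degrees \(|\tau|=(0,0,-1)\), \(|h_{1}|=(1,1,1)\), \(|h_{2}|=(3,1,2)\), and \(|\Delta|=(24,0,12)\) as read off from the \((s,f,w)\) labels in this paper): both candidates lie over the stem and weight of \(\nu_{k}\), differing only in filtration.

```latex
% Filtration-1 detector from Lemma 5.6:
|(k+1)\Delta^{k}h_{2}| = k(24,0,12)+(3,1,2) = (24k+3,\;1,\;12k+2).
% Filtration-3 alternative from Remark 5.7:
|\tau\Delta^{k}h_{1}^{3}| = (0,0,-1)+k(24,0,12)+3(1,1,1) = (24k+3,\;3,\;12k+2).
```

Both sit over \(\pi_{24k+3,12k+2}mmf\), which is exactly the bidegree of \(\nu_{k}\) from Definition 5.4.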
**Proposition 5.9**.: \(\nu_{k+8}=\nu_{k}\cdot M.\) Proof.: Using Equation (2.13), we have \[q(\Delta^{k+9})=q(\Delta^{k+1}\cdot\Delta^{8})=q(\Delta^{k+1}\cdot i(M))=q(\Delta ^{k+1})\cdot M=-\tau\overline{\kappa}\cdot\nu_{k}\cdot M.\] Here we are using that \(i(M)=\Delta^{8}\), which is equivalent to the definition that \(M\) is detected by \(\Delta^{8}\) (see Table 3). On the other hand, \(q(\Delta^{k+9})\) equals \(-\tau\overline{\kappa}\cdot\nu_{k+8}\) by Definition 5.4. Finally, multiplication by \(-\tau\overline{\kappa}\) is an isomorphism in the relevant degrees. Proposition 5.9 means that for practical purposes, we only need to consider the elements \(\nu_{k}\) for \(0\leq k\leq 7\). **Theorem 5.10**.: \[\nu_{j}\nu_{k}=(k+1)\nu_{j+k}\nu_{0}.\] Proof.: The proof splits into two cases, depending on whether \(k+1\) is a multiple of \(4\). First, we handle the (more interesting) situation when \(k+1\) is not a multiple of \(4\). We address the case when \(k+1\) is a multiple of \(4\) below in a separate Proposition 5.13. The proof techniques for the two cases are similar, but the details are somewhat different. Multiplication by \(\tau\overline{\kappa}\) is an isomorphism in the relevant degrees, so it suffices to establish our relation after multiplication by \(\tau\overline{\kappa}\). Using Equation (2.13), we have \[q((k+1)\Delta^{j+k+1}h_{2})=q(\Delta^{j+k+1}\cdot(k+1)h_{2})=q( \Delta^{j+k+1}\cdot i((k+1)\nu_{0}))=\] \[=q(\Delta^{j+k+1})\cdot(k+1)\nu_{0}=-\tau\overline{\kappa}\cdot \nu_{j+k}\cdot(k+1)\nu_{0}.\] Here we are using that \(i((k+1)\nu_{0})=(k+1)h_{2}\); in other words, \((k+1)\nu_{0}\) is detected by \((k+1)h_{2}\). This requires that \(k+1\) is not a multiple of \(4\). Otherwise, \((k+1)\nu_{0}\) is a multiple of \(\tau\), and \(i((k+1)\nu_{0})\) is zero. We will now compute \(q((k+1)\Delta^{j+k+1}h_{2})\) another way. 
We have \(i(\nu_{k})=(k+1)\Delta^{k}h_{2}\); in other words, \(\nu_{k}\) is detected by the non-zero element \((k+1)\Delta^{k}h_{2}\), as shown in Lemma 5.6. This requires that \(k+1\) is not a multiple of \(4\). Otherwise, \(\nu_{k}\) is a multiple of \(\tau\), and \(i(\nu_{k})\) is zero. Then we have \[q((k+1)\Delta^{j+k+1}h_{2})=q(\Delta^{j+1}\cdot(k+1)\Delta^{k}h_{2})=q(\Delta ^{j+1}\cdot i(\nu_{k}))=q(\Delta^{j+1})\cdot\nu_{k}=-\tau\overline{\kappa} \cdot\nu_{j}\cdot\nu_{k}.\] **Remark 5.11**.: The exact form of the equation in Theorem 5.10 is guided by the structure of our proof. One could also write \[\nu_{i}\nu_{j}=(i+1)\nu\nu_{i+j},\] which more closely aligns with the notation in [1]. All of the elements \(\nu_{k}\) are in odd stems, so they pairwise anti-commute. **Corollary 5.12**.: \((246,2,124)\)_\(\nu_{4}\nu_{6}=\nu\nu_{2}M\)._ Proof.: Theorem 5.10 implies that \(\nu_{4}\nu_{6}\) equals \(7\nu_{10}\nu_{0}\), which equals \(-7\nu_{0}\nu_{10}\) by graded commutativity. By Remark 5.8 and Proposition 5.9, the latter expression equals \(-7\nu\nu_{2}M\). Finally, \(\nu\nu_{2}M\) belongs to a group of order \(4\), so \(-7\nu\nu_{2}M\) equals \(\nu\nu_{2}M\). We now return to the case of Theorem 5.10 in which \(k+1\) is a multiple of \(4\). **Proposition 5.13**.: _If \(k+1\) is a multiple of \(4\), then \(\nu_{j}\cdot\nu_{k}=(k+1)\nu_{j+k}\nu_{0}\)._ Proof.: First, let \(k+1\) be a multiple of \(8\), so \(\nu_{k}\) is zero. The element \(\nu_{j+k}\nu_{0}\) belongs to a group whose order divides \(8\), so \((k+1)\nu_{j+k}\nu_{0}\) is zero. In other words, the equality holds because both sides are zero. Next, let \(k+1\) be congruent to \(4\) modulo \(8\). Let \(\alpha\) be an element of \(\pi_{*,*}mmf\) that is detected by \(\Delta^{k}h_{1}^{3}\). The element \(\nu_{k}\) is detected by \(\tau\Delta^{k}h_{1}^{3}\), according to Remark 5.7. Since there are no elements in higher filtration, we can conclude that \(\nu_{k}\) equals \(\tau\alpha\). 
We have \[q(\Delta^{j+k+1}h_{1}^{3})=q(\Delta^{j+1}\cdot\Delta^{k}h_{1}^{3})=q(\Delta^{ j+1}\cdot i(\alpha))=q(\Delta^{j+1})\cdot\alpha=-\tau\overline{\kappa}\cdot\nu_{j} \cdot\alpha=-\overline{\kappa}\cdot\nu_{j}\cdot\nu_{k}.\] Now we add the assumption that \(j+1\) is not congruent to \(4\) modulo \(8\). Given the assumption that \(k+1\) is congruent to \(4\) modulo \(8\), we get that \(j+k+1\) is not congruent to \(7\) modulo \(8\). Then \(\Delta^{j+k+1}h_{1}^{3}\) is a permanent cycle, so \(q(\Delta^{j+k+1}h_{1}^{3})\) is zero. Together with the computation in the previous paragraph, this implies that \(\nu_{j}\cdot\nu_{k}\) is zero since multiplication by \(\overline{\kappa}\) is an isomorphism in the relevant degrees. Note also that \((k+1)\nu_{j+k}\nu_{0}\) is zero because it belongs to a group whose order divides \(4\). Finally, we must consider the case when \(j+1\) is congruent to \(4\) modulo \(8\), i.e., that \(j+k+1\) is congruent to \(7\) modulo \(8\). Then \(q(\Delta^{j+k+1}h_{1}^{3})\) is detected by \(\tau^{10}\Delta^{j+k-4}h_{1}^{2}\varsigma^{6}\) because of Proposition 2.14 and the differential \(d_{23}(\Delta^{j+k+1}h_{1}^{3})=\tau^{11}\Delta^{j+k-4}h_{1}^{2}\varsigma^{6}\). This means that \(-\overline{\kappa}\cdot\nu_{j}\cdot\nu_{k}\) is detected by \(\tau^{10}\Delta^{j+k-4}h_{1}^{2}\varsigma^{6}\). It follows that \(\nu_{j}\cdot\nu_{k}\) is detected by \(\tau^{10}\Delta^{j+k-4}h_{1}^{2}\varsigma^{6}\). Finally, this latter element also detects \((k+1)\nu_{j+k}\nu_{0}\) because of the hidden \(2\) extensions in the \(150\)-stem and their multiples under \(\Delta^{8}\) multiplication (see Table 7). **Remark 5.14**.: As shown in the proof, most cases of Proposition 5.13 hold because both sides of the equation are zero. Both sides of the equation are non-zero precisely when \(j+1\) and \(k+1\) are congruent to \(4\) modulo \(8\). Bruner and Rognes establish some relations that reduce the ambiguity in their definitions of \(\nu_{k}\). 
Finally, we will show that our elements defined in Definition 5.4 satisfy those same relations. We have already discussed the choice of \(\nu_{0}\) in Remark 5.8. The only additional requirements are the relations \[\nu_{0}D_{4}=2\nu_{4}\] \[\nu_{1}\nu_{5}=2\nu_{0}\nu_{6}\] \[\nu_{2}\nu_{4}=3\nu_{0}\nu_{6}.\] The first formula is proved in Proposition 5.15, while the last two are specific instances of Theorem 5.10. **Proposition 5.15**.: \((99,1,50)\)_\(\nu_{0}D_{4}=2\nu_{4}\)._ Proof.: Because of Lemma 5.6, both products are detected by \(2\Delta^{4}h_{2}\). However, they are not necessarily equal because of the presence of \(\tau\Delta^{4}h_{1}^{3}\) in higher filtration. We will show that \(\tau\overline{\kappa}\cdot\nu D_{4}\) equals \(\tau\overline{\kappa}\cdot 2\nu_{4}\). Our desired relation follows immediately because multiplication by \(\tau\overline{\kappa}\) is an isomorphism in the relevant degree. Using Equation (2.13), we have \[q(2\Delta^{5})=q(\Delta\cdot 2\Delta^{4})=q(\Delta\cdot i(D_{4}))=q(\Delta) \cdot D_{4}=-\tau\overline{\kappa}\cdot\nu\cdot D_{4}.\] Here we are using that \(i(D_{4})=2\Delta^{4}\), which is equivalent to the definition that \(D_{4}\) is detected by \(2\Delta^{4}\) (see Table 3). 
On the other hand, we also have \[q(2\Delta^{5})=q(\Delta^{5}\cdot 2)=q(\Delta^{5}\cdot i(2))=q(\Delta^{5})\cdot 2=- \tau\overline{\kappa}\cdot\nu_{4}\cdot 2.\] ## 6 Tables \begin{table} \begin{tabular}{l l l l l l} \hline \hline \((s,f,w)\) & Toda bracket & detected by & indet & proof & used in \\ \hline \((8,2,5)\) & \(\langle\nu,\eta,\nu\rangle\) & \(c\) & \(0\) & Lemma 2.20 & 3.27, 4.14 \\ \((25,1,13)\) & \(\langle\eta,\nu,\tau^{2}\overline{\kappa}\rangle\) & \(\Delta h_{1}\) & \(P^{3}h_{1}\) & Lemma 3.26 & 3.27, 3.28, 4.14 \\ \((32,2,17)\) & \(\langle\nu^{2},2,\eta_{1}\rangle\) & \(\Delta c\) & \(0\) & Lemma 4.7 & 4.11 \\ \((128,2,65)\) & \(\langle\nu_{2}^{2},2,\eta_{1}\rangle\) & \(\Delta^{5}c\) & \(0\) & Lemma 4.8 & 4.11 \\ \((35,7,21)\) & \(\langle\nu^{2},2,e\overline{\kappa}\rangle\) & \(h_{1}dg\) & \(0\) & Lemma 4.9 & 4.11 \\ \((131,7,69)\) & \(\langle\nu_{2}^{2},2,e\overline{\kappa}\rangle\) & \(\Delta^{4}h_{1}dg\) & \(0\) & Lemma 4.10 & 4.11 \\ \hline \hline \end{tabular} \end{table} Table 10: Some Toda brackets \begin{table} \begin{tabular}{l l l l} \hline \hline \((s,f,w)\) & source & target & proof \\ \hline \((0,0,0)\) & \(4\) & \(\tau h_{1}^{3}\) & Proposition 4.2 \\ \((25,1,13)\) & \(\Delta h_{1}\) & \(\tau^{2}cg\) & Lemma 3.27 \\ \((32,2,17)\) & \(\Delta c\) & \(\tau^{2}h_{1}dg\) & Proposition 4.11 \\ \((39,3,21)\) & \(\Delta h_{1}d\) & \(\tau^{3}h_{1}^{2}g^{2}\) & Lemma 3.32 \\ \((48,0,24)\) & \(4\Delta^{2}\) & \(\tau\Delta^{2}h_{1}^{3}\) & Proposition 4.2 \\ \((50,2,26)\) & \(\Delta^{2}h_{1}^{2}\) & \(\tau^{2}\Delta h_{1}cg\) & Proposition 4.6 \\ \((51,1,26)\) & \(2\Delta^{2}h_{2}\) & \(\tau^{4}dg^{2}\) & Proposition 4.2 \\ \((57,3,30)\) & \(\Delta^{2}h_{2}^{3}\) & \(2\tau^{4}g^{3}\) & Proposition 4.2 \\ \((96,0,48)\) & \(4\Delta^{4}\) & \(\tau\Delta^{4}h_{1}^{3}\) & Proposition 4.2 \\ \((97,1,49)\) & \(\Delta^{4}h_{1}\) & \(\tau^{9}g^{5}\) & Proposition 4.12 \\ \((122,2,62)\) & \(\Delta^{5}h_{1}^{2}\) & \(\tau^{9}\Delta h_{1}g^{5}\) & 
Proposition 4.12 \\ \((128,2,65)\) & \(\Delta^{5}c\) & \(\tau^{2}\Delta^{4}h_{1}dg\) & Proposition 4.11 \\ \((135,3,69)\) & \(\Delta^{5}h_{1}d\) & \(\tau^{3}\Delta^{4}h_{1}^{2}g^{2}\) & Proposition 4.14 \\ \((144,0,72)\) & \(4\Delta^{6}\) & \(\tau\Delta^{6}h_{1}^{3}\) & Proposition 4.2 \\ \((147,1,74)\) & \(2\Delta^{6}h_{2}\) & \(\tau^{4}\Delta^{4}dg^{2}\) & Proposition 4.2 \\ \((147,3,75)\) & \(\Delta^{6}h_{1}^{3}\) & \(\tau^{9}\Delta^{2}h_{1}^{2}g^{5}\) & Proposition 4.12 \\ \((153,3,78)\) & \(\Delta^{6}h_{2}^{3}\) & \(2\tau^{4}\Delta^{4}g^{3}\) & Proposition 4.2 \\ \hline \hline \end{tabular} \end{table} Table 9: Hidden \(\nu\) extensions ## 7. Charts The following charts display the \(E_{2}\)-page, \(E_{9}\)-page, and \(E_{\infty}\)-page of the mANss for \(mmf\). Each of these pages is free as a module over \(\mathbb{Z}[\Delta^{8}]\), where \(\Delta^{8}\) is a class in the 192-stem. For legibility, we display the \(v_{1}\)-periodic elements on separate charts. See Section 2.7 for discussion of \(v_{1}\)-periodicity. To obtain the full \(E_{2}\)-page, one must superimpose Figures 1 and 3. To obtain the full \(E_{\infty}\)-page, one must superimpose Figures 2 and 5. We describe each chart in slightly more detail. * Figure 1 shows the \(v_{1}\)-periodic portion of the mANss \(E_{2}\)-page, together with all differentials that are supported by the displayed elements. * Figure 2 shows the \(v_{1}\)-periodic portion of the mANss \(E_{\infty}\)-page. * Figure 3 shows the non-\(v_{1}\)-periodic portion of the mANss \(E_{2}\)-page, together with all \(d_{3}\), \(d_{5}\), and \(d_{7}\) differentials that are supported by the displayed elements. * Figure 4 shows the non-\(v_{1}\)-periodic portion of the mANss \(E_{9}\)-page, together with all differentials that are supported by the displayed elements. * Figure 5 shows the non-\(v_{1}\)-periodic portion of the mANss \(E_{\infty}\)-page, together with all hidden extensions by \(2\), \(\eta\), and \(\nu\). 
### Elements

For each fixed stem and filtration, the mANss consists of a \(\mathbb{Z}[\tau]\)-module. We use a graphical notation to describe these modules. Our notation represents the associated graded object of a filtration that is related to the powers of \(2\).

* An open box \(\square\) indicates a copy of \(\mathbb{Z}[\tau]\) in the associated graded object.
* A solid gray dot \(\bullet\) indicates a copy of \(\mathbb{F}_{2}[\tau]\) in the associated graded object.
* A solid colored dot indicates a copy of \(\mathbb{F}_{2}[\tau]/\tau^{r}\) in the associated graded object. The value of \(r\) is encoded in the color of the dot, as shown in Table 11.
* Short vertical lines indicate extensions by \(2\).

Our graphical notation has the advantages of flexibility, compactness, and convenience. We illustrate with two examples.

**Example 7.1**.: In Figure 3 at degree \((48,0)\), one sees \(\bullet\). This notation indicates a copy of \(\mathbb{Z}[\tau]\). More precisely, it represents the filtration \(4\mathbb{Z}[\tau]\subseteq 2\mathbb{Z}[\tau]\subseteq\mathbb{Z}[\tau]\) whose filtration quotients are \(\mathbb{Z}[\tau]\), \(\mathbb{F}_{2}[\tau]\), and \(\mathbb{F}_{2}[\tau]\). This particular filtration is relevant for our mANss computation because \(2\mathbb{Z}[\tau]\) is the subgroup of \(d_{5}\) cycles, and \(4\mathbb{Z}[\tau]\) is the subgroup of \(d_{7}\) cycles.

**Example 7.2**.: In Figure 5 at degree \((120,24)\), one sees \(\bullet\). This notation indicates the \(\mathbb{Z}[\tau]\)-module \[\frac{\mathbb{Z}[\tau]}{8,4\tau^{2},2\tau^{6},\tau^{11}},\] which is somewhat cumbersome to describe in traditional notation.
More precisely, it represents the filtration \[\frac{4\mathbb{Z}[\tau]}{8,4\tau^{2}}\subseteq\frac{2\mathbb{Z}[\tau]}{8,4\tau^{2},2\tau^{6}}\subseteq\frac{\mathbb{Z}[\tau]}{8,4\tau^{2},2\tau^{6},\tau^{11}}\] whose filtration quotients are \(\mathbb{F}_{2}[\tau]/\tau^{2}\), \(\mathbb{F}_{2}[\tau]/\tau^{6}\), and \(\mathbb{F}_{2}[\tau]/\tau^{11}\). The blue, magenta, and orange dots correspond to these filtration quotients, as shown in Table 11.

### Differentials

Lines of negative slope indicate Adams-Novikov differentials. The differentials are colored according to their lengths, as described in Table 12. These color choices are compatible with our choice of colors for \(\tau\) torsion in Section 7.1, in the following sense. An Adams-Novikov \(d_{2r+1}\) differential always takes the form \(d_{2r+1}(x)=\tau^{r}y\), and it creates \(\tau^{r}\) torsion in the following page. We use matching colors for \(d_{2r+1}\) and for \(\tau^{r}\) torsion.

### Extensions

* Solid lines of slope 1 indicate \(h_{1}\) multiplications. The colors of these lines are determined by the \(\tau\) torsion of the targets.
* Arrows of slope 1 indicate infinite families of elements that are connected by \(h_{1}\) multiplications. The colors of the arrows reflect the \(\tau\) torsion of the elements.
* Solid lines of slope \(1/3\) indicate \(h_{2}\) multiplications. The colors of these lines are determined by the \(\tau\) torsion of the targets.
* Dashed lines indicate hidden extensions by \(2\), \(\eta\), and \(\nu\). Some of these lines are curved solely for the purpose of legibility.
* The colors of dashed lines indicate the \(\tau\) torsion of the targets of the extensions. For example, the vertical dashed line in the \(23\)-stem of Figure 5 is blue because its value \(\tau h_{1}^{3}g\) is annihilated by \(\tau^{2}\).

Figure 5 shows an \(h_{1}\) extension and also a hidden \(\eta\) extension on the element \(\Delta^{4}cg\) in degree \((124,6,65)\). See Remark 4.3 for an explanation.
\begin{table} \begin{tabular}{l l l} \hline \hline color & slope & \(d_{r}\) \\ \hline red & \(-3\) & \(d_{3}\) \\ blue & \(-5\) & \(d_{5}\) \\ green & \(-7\) & \(d_{7}\) \\ cyan & \(-9\) & \(d_{9}\) \\ brown & \(-11\) & \(d_{11}\) \\ magenta & \(-13\) & \(d_{13}\) \\ orange & \(-23\) & \(d_{23}\) \\ \hline \hline \end{tabular} \end{table} Table 12: Color interpretations for Adams-Novikov differentials

\begin{table} \begin{tabular}{l l} \hline \hline module & color \\ \hline \(\mathbb{F}_{2}[\tau]\) & \(\bullet\) gray \\ \(\mathbb{F}_{2}[\tau]/\tau\) & \(\bullet\) red \\ \(\mathbb{F}_{2}[\tau]/\tau^{2}\) & \(\bullet\) blue \\ \(\mathbb{F}_{2}[\tau]/\tau^{3}\) & \(\bullet\) green \\ \(\mathbb{F}_{2}[\tau]/\tau^{4}\) & \(\bullet\) cyan \\ \(\mathbb{F}_{2}[\tau]/\tau^{5}\) & \(\bullet\) brown \\ \(\mathbb{F}_{2}[\tau]/\tau^{6}\) & \(\bullet\) magenta \\ \(\mathbb{F}_{2}[\tau]/\tau^{11}\) & \(\bullet\) orange \\ \hline \hline \end{tabular} \end{table} Table 11: Color interpretations for elements

Figure 1: The \(v_{1}\)-periodic portion of the C-motivic Adams-Novikov \(E_{2}\)-page for _mmf_

Figure 2: The \(v_{1}\)-periodic portion of the C-motivic Adams-Novikov \(E_{\infty}\)-page for _mmf_
2303.01656
Feature Completion Transformer for Occluded Person Re-identification
Occluded person re-identification (Re-ID) is a challenging problem due to the destruction of occluders. Most existing methods focus on visible human body parts through some prior information. However, when complementary occlusions occur, features in occluded regions can interfere with matching, which affects performance severely. In this paper, different from most previous works that discard the occluded region, we propose a Feature Completion Transformer (FCFormer) to implicitly complement the semantic information of occluded parts in the feature space. Specifically, Occlusion Instance Augmentation (OIA) is proposed to simulates real and diverse occlusion situations on the holistic image. These augmented images not only enrich the amount of occlusion samples in the training set, but also form pairs with the holistic images. Subsequently, a dual-stream architecture with a shared encoder is proposed to learn paired discriminative features from pairs of inputs. Without additional semantic information, an occluded-holistic feature sample-label pair can be automatically created. Then, Feature Completion Decoder (FCD) is designed to complement the features of occluded regions by using learnable tokens to aggregate possible information from self-generated occluded features. Finally, we propose the Cross Hard Triplet (CHT) loss to further bridge the gap between complementing features and extracting features under the same ID. In addition, Feature Completion Consistency (FC$^2$) loss is introduced to help the generated completion feature distribution to be closer to the real holistic feature distribution. Extensive experiments over five challenging datasets demonstrate that the proposed FCFormer achieves superior performance and outperforms the state-of-the-art methods by significant margins on occluded datasets.
Tao Wang, Mengyuan Liu, Hong Liu, Wenhao Li, Miaoju Ban, Tuanyu Guo, Yidi Li
2023-03-03T01:12:57Z
http://arxiv.org/abs/2303.01656v2
# Feature Completion Transformer for Occluded Person Re-identification ###### Abstract Occluded person re-identification (Re-ID) is a challenging problem due to the destruction of occluders. Most existing methods focus on visible human body parts through some prior information (such as pose information, semantic segmentation, human body parsing, etc.). However, when complementary occlusions occur, features in occluded regions can interfere with matching, which affects performance severely. In this paper, different from most previous works that discard the occluded region, we propose a Feature Completion Transformer (FCFormer) to implicitly complement the semantic information of occluded parts in the feature space. Specifically, Occlusion Instance Augmentation (OIA) is proposed to simulate real and diverse occlusion situations on the holistic image. These augmented images not only enrich the amount of occlusion samples in the training set, but also form pairs with the holistic images. Subsequently, a dual-stream architecture with a shared encoder is proposed to learn paired discriminative features from pairs of inputs. Without additional semantic information, an occluded-holistic feature sample-label pair can be automatically created. Then, a Feature Completion Decoder (FCD) is designed to complement the features of occluded regions by using learnable tokens to aggregate possible information from self-generated occluded features. Finally, we propose the Cross Hard Triplet (CHT) loss to further bridge the gap between complementing features and extracting features under the same ID. In addition, the Feature Completion Consistency (FC\({}^{2}\)) loss is introduced to help the generated completion feature distribution to be closer to the real holistic feature distribution.
Extensive experiments over five challenging datasets demonstrate that the proposed FCFormer achieves superior performance and outperforms the state-of-the-art methods by significant margins on occluded datasets. Index Terms: Person Re-identification, Transformer, Occlusion, Feature Completion ## I Introduction Person Re-Identification (Re-ID) involves identifying a person-of-interest across multiple non-overlapping cameras [1]. This task has a wide range of applications in many fields, such as video surveillance, activity analysis, sports understanding, and tracking. In the past few years, most existing methods have mainly focused on the holistic person Re-ID problem, which assumes that the pedestrian's body is fully visible. However, in real-world scenarios, such as stations, airports, and shopping malls, person images from surveillance cameras can be easily occluded by obstacles, _e.g._, plants, umbrellas, cars, or other pedestrians, which poses a challenge for holistic Re-ID methods when identifying persons with incomplete and invisible body parts. Therefore, the task of occluded person re-identification [2] is of significant practical importance. Occluded person Re-ID is more challenging for the following three reasons: (1) The limited number of occlusion samples in the training dataset [3] makes the model sensitive to diverse occlusions. (2) Occlusions introduce a lot of noise information, which interferes with feature extraction. (3) Occlusion causes the loss of appearance information, making the extracted features less discriminative and causing incorrect semantic alignment for matching, as shown in Fig. 1(a) and 1(b). To address the above problems, many occluded Re-ID methods have been proposed. For the first problem, some studies [4, 5] employ occlusion-augmentation strategies to improve the robustness of the model. However, these occlusion-augmentation strategies still cannot simulate the real environment well.
Most of them focus on adding more occlusion samples for training, but do not improve the diversity of occlusion. For the second challenge, a large proportion of methods exploit additional cues (e.g., pose estimation, semantic parsing, or human masks) to indicate non-occluded body parts. For example, PGFA [3] directly utilizes pose information to indicate non-occluded body parts on the spatial feature map. PVPM [6] and HoReID [7] use graph-based approaches to model topology information by learning node-to-node or edge-to-edge correspondence to further mine the visible parts. These methods use external models directly at the inference stage to extract additional semantic information. However, such approaches are sensitive and error-prone when facing complex backgrounds or severe occlusions. For the third problem, some methods [8, 9] attempt to utilize GANs to predict the occluded part at the image level to restore the holistic pedestrian. However, the generated regions are still not convincing, resulting in limited performance. A recent work [10] performs feature completion on occluded regions by using occluded body-part landmarks and a region-based encoder-decoder architecture, which efficiently captures spatial information to recover invisible parts from neighboring features. But this method still needs extra key-point information and pre-defined regions in the feature space, which is not flexible enough. Currently, there is no effective approach that addresses all three of these issues simultaneously. To this end, we propose a _Feature Completion Transformer_ (FCFormer) for occluded person Re-ID to mitigate the impact of the three challenges.

Fig. 1: Illustration of part/complementary occlusion and our proposed feature completion paradigm. FCFormer represents an occluded person image by using the transformer decoder to implicitly recover the missing features in occluded regions.
Specifically, the proposed method could adaptively complement the occluded features, as shown in Figure 1. As Figure 3 shows, the encoder is responsible for extracting global and local information. On this basis, four designs are proposed to alleviate the above three problems. **Firstly**, to obtain rich occlusion samples, we build an _Occlusion Instance Library_ (OIL) that contains 17 classes of occlusion samples obtained from the COCO [11] and Occluded-Duke [3] training sets. Then we propose an _Occlusion Instance Augmentation_ (OIA) strategy that produces more diverse occluded training image pairs by pasting image patches from the OIL with a specific strategy. As Figure 2 shows, compared with the existing occlusion augmentation strategies, such as random erase [12] and OAMN augmentation [4], the introduction of rich occlusion samples can better simulate real-world occlusion scenarios. **Secondly**, a dual-stream encoder paradigm with shared weights is proposed, and the generated holistic-occluded image pairs are taken as input to obtain aligned feature pairs. In general, holistic pedestrian features contain more ID representations than occluded pedestrian features. **Thirdly**, based on the above prior information, a self-supervised _Feature Completion Decoder_ (FCD) module is proposed to complement occluded pedestrian features. FCD has a learnable completion embedding representation, which enables the model to automatically complement missing features under the supervision of holistic pedestrian features without pre-defining human body regions and additional labels. **Finally**, to bridge the large feature gap between the occluded scene and the holistic scene, we design a _Cross Hard Triplet Loss_ (CHT) for metric learning.
In addition, we propose a _Feature Completion Consistency Loss_ (FC\({}^{2}\)) to explicitly narrow the distribution discrepancy between completion features and holistic features in high-dimensional space, thereby ensuring that FCD could conduct feature completion and train in a self-supervised manner. The main contributions can be summarized as follows: **(1)**: We propose an Occlusion Instance Library and an Occlusion Instance Augmentation (OIA) strategy for the occluded Re-ID task, which brings more realistic image-level occlusion enhancement. **(2)**: We propose a Feature Completion Decoder to exploit learnable tokens for occluded feature completion. Compared to previous works, our method is more flexible without any pre-designed regions. **(3)**: We design a Cross Hard Triplet Loss and a Feature Completion Consistency Loss to improve the model's perception ability and feature completion ability, respectively. **(4)**: To prove the effectiveness of our method, we perform experiments on occluded and holistic Re-ID datasets. The results validate that the proposed method performs favorably against state-of-the-art methods. ## II Related Work ### _Occluded Person Re-Identification_ Existing methods can be roughly divided into three categories, including part-to-part matching based methods, extra-clue based methods, and feature recovery based methods. Part-to-part matching methods address the occlusion issue by evaluating the similarity between the aligned local features. Sun et al. [13] present a network called Part-based Convolution Baseline (PCB) that uniformly partitions the feature map and learns local features directly. Zhang et al. [14] align features by finding the shortest path among local features. Sun et al. [15] propose a Visibility-aware Part Model (VPM), which learns to perceive the visibility of regions by self-supervised learning. Jia et al.
[16] propose MoS, which measures the similarity between person images using the Jaccard similarity and formulates occluded person re-ID as a set matching problem without alignment. The second category is extra-clue based methods, which leverage external cues to locate the human body parts, such as segmentation, pose estimation, or body parsing. Song et al. [17] propose a mask-guided attention model to extract discriminative and robust features invariant to background clutter. Miao et al. [3] introduce Pose-Guided Feature Alignment (PGFA), which utilizes Gaussian pose heatmaps to mine discriminative parts without occlusion. Gao et al. [6] propose a Pose-guided Visible Part Matching (PVPM) model to learn discriminative local features with pose-guided attention. Wang et al. [7] propose HOReID, which utilizes a GCN to embed the high-order relation and human-topology information between various body joints. The last category is feature recovery based methods, which mainly focus on how to recover the features of occluded regions. Hou et al. [10] propose Spatial and Temporal Region Feature Completion (RFC) to recover the semantics of occluded regions in feature space for image and video occluded person Re-ID, respectively. Yu et al. show that occluded person image features can be reconstructed from their neighborhoods to tackle the problem of occluded person Re-ID. However, the above recovery-based methods all heavily rely on additional semantic information. Different from the above methods, our method simulates a variety of occlusions and introduces learnable completion tokens to perform feature completion of occluded areas in a "self-supervised" paradigm, which can be well adapted to various occlusion problems.

Fig. 2: Examples of occluded pedestrians and the introduced augmentations. (a) shows real-world occlusion scenarios. (b) and (c) show the augmentation strategies introduced by OAMN [4] and our proposed OIA, respectively.
### _Occlusion Augmentation_ Existing person re-identification models struggle to deal with the occlusion problem, and the interference caused by occlusion limits their robustness. One contributing factor is the limited number of occluded samples in the training set [18], which makes the model unable to learn the relationship between occlusions and pedestrians. An effective way to address this problem is the occlusion-augmentation strategy. Currently, occlusion-augmentation strategies can be divided into three categories: (1) Random erasing. Zhong et al. [12] propose randomly erasing pixel values directly on the image and replacing them with random values. This method is simple and helps to reduce the risk of overfitting, but generalizes poorly. (2) Random cropping and random pasting. Chen et al. [4] randomly crop a rectangular patch from the training images, then scale the cropped area and paste it randomly into one of four predefined regions. (3) Occluded-sample augmentation. Jia et al. [5] propose cropping different occlusions from the training set and randomly synthesizing occlusions for each training batch. Compared with strategy (2), the generated images better simulate real occlusion scenes, which enables the model to implicitly learn more robust features. However, only the occlusions in the Occluded-Duke training set are used for synthesis, so the categories and number of occlusions are limited, and these methods do not take full advantage of the occlusion semantics and paired features brought by augmentation. ## III Proposed Method In this section, we introduce the proposed Feature Completion Transformer (FCFormer) in detail. Firstly, in order to alleviate the model-sensitivity problem caused by the small number of occluded samples, we introduce an online data augmentation module named Occlusion Instance Augmentation (OIA) that produces image pairs and occlusion masks (see Section III-A for details).
Then, a shared dual-stream encoder is proposed to extract pairwise aligned features (see Section III-B for details), which could better learn the occlusion relationship from the paired images. A Feature Completion Decoder (FCD) is further proposed to complement human-body features in occluded regions (see Section III-C for details). The flowchart of our method is shown in Figure 3, and the overall FCFormer approach is outlined in Algorithm 1.

Fig. 3: Overall architecture of the Feature Completion Transformer (FCFormer). FCFormer consists of three parts (Occlusion Instance Augmentation, dual-stream architecture, and feature completion stream) and two losses (Cross Hard Triplet Loss and Feature Completion Consistency Loss). The holistic-occluded sample pairs generated by OIA are fed into the dual-stream architecture with a shared encoder. The non-shared parts of the dual architecture are used to train the specified tasks. Then FCD takes the learnable tokens and occluded features as input to recover holistic features. We propose the CHT Loss to allow the model to better perform metric learning among three different modal features (occluded, holistic, and completion features). At last, the FC\({}^{2}\) Loss is proposed to guide FCD to generate a completion feature similar enough to the holistic feature. In the test stage, the features from the three branches are utilized for retrieval.

### _Occlusion Instance Augmentation_ Most of the existing occlusion-augmentation strategies use random cropping to obtain occlusions from randomly sampled images, and stitch the randomly cropped pictures to form an occlusion scene. However, the cropped occlusion has no explicit semantic information and cannot well simulate occlusion in the real environment. To address the above issues, we propose a diverse occlusion instance augmentation strategy, which contains an Occlusion Instance Library (OIL) and an Occlusion Instance Augmentation (OIA) strategy.
**Occlusion Instance Library.** In order to better utilize occlusion augmentation to solve the occlusion problem, we propose a general occlusion dataset. By utilizing this dataset, more diverse and realistic occlusion situations can be simulated. We first merge the Occluded-Duke [3] training set and the COCO [11] training set, then utilize Mask R-CNN [19] to obtain the instance bounding boxes. In order to avoid the interference of noise information, we erase the pixels irrelevant to the instance. As shown in Figure 4, we manually selected 17 common classes (such as pedestrians, vehicles, bicycles, umbrellas, etc.) as occlusion samples, with a total of 1000 images. **Occlusion Instance Augmentation.** Empirically, some common occlusions have position priors in a detected person image (for example, as Fig. 2(a) shows, vehicles are generally in the lower half of the image and are unlikely to appear in other areas of the image). So we determine the augmentation position according to the category of the occlusion. As Table I shows, the OIL is divided into two sets: the strong position prior set \(O_{s}\) and the weak position prior set \(O_{w}\). For the strong position prior set, we align the bottom edge and place the samples randomly in the horizontal direction. However, for the weak position prior set, the location in the detection box can be relatively random. Specifically, given an image batch \(X\), for each \(x_{i}\in X\) with \(x_{i}\in\mathbb{R}^{H\times W\times C}\), where \(H,W,C\) denote the height, width, and channel dimensions respectively, our augmentation scheme has the following steps: (1) Randomly select an occlusion sample \(x_{ob}\) from the OIL. (2) Randomly scale the occlusion \(x_{ob}\) to 10%\(\sim\)70% of the image size \(H\times W\).
It can be described as \[\epsilon=\frac{\delta(H\times W)}{h_{o}\times w_{o}}, \tag{1}\] where \(h_{o}\times w_{o}\) denotes the size of the chosen occlusion sample \(x_{ob}\), \(\epsilon\) is the resulting scaling ratio applied to \(h_{o}\) and \(w_{o}\), \(\delta\sim\mathcal{U}(0.1,0.7)\) is the scaling factor, and \(\mathcal{U}\) denotes the uniform distribution. (3) Determine the augmentation area. If \(x_{ob}\in O_{s}\), we choose the augmentation location \((h,w)\), where \(h\in\{H-\epsilon h_{o},H\}\) and \(w\in\{0,W\}\). If \(x_{ob}\in O_{w}\), we randomly put the occlusion sample onto the training image \(x_{i}\). Finally, our OIA can be formulated as: \[x_{op}=\psi(x_{ob},\epsilon)+M\odot\rho(x_{i}), \tag{2}\] where \(x_{op}\) denotes the augmented image patch, \(\psi(\cdot,\cdot)\) is the resize operation, \(M\) denotes the occlusion binary mask sampled from the resized occlusion \(x_{ob}\), and \(\rho(\cdot)\) denotes the clipping operation. At last, \(x_{op}\) is pasted onto the holistic image \(x_{i}\). The overall process is shown in Figure 5. Following the above process, we obtain an occluded copy of each training image. The augmented image is denoted as \(x_{o}\).

### _Dual Stream Architecture_

Following the methods [20] and [21], we build our feature extractor based on ViT [22]. Given a pair of holistic-occluded images \(x_{h}\in\mathbb{R}^{H\times W\times C}\) and \(x_{o}\in\mathbb{R}^{H\times W\times C}\) from OIA, the shared encoder splits the input image into \(N\) non-overlapping patches by using a 2D convolution layer \(p(\cdot)\); then patch embeddings \(E_{p}\in\mathbb{R}^{N\times d}\) can be obtained, where \(d\) denotes the embedding dimension. In order to alleviate the impact of the camera perspective, we follow the method in [20] and set a learnable parameter \(E_{cm}\) to learn camera perspective information. At the same time, a learnable global embedding \(E_{g}\) is prepended to the patch embeddings.
\[E_{p}=p(x), \tag{3}\] \[E_{input}=Concat(E_{g},E_{p})+P_{E}+\lambda_{cm}E_{cm}, \tag{4}\] where \(\lambda_{cm}\) is the ratio of the camera embeddings and \(P_{E}\) is the learnable position embedding. Then the ViT takes \(E_{input}\) as input. The output feature of each stream is \(f\in\mathbb{R}^{(N+1)\times C}\), where \(N+1\) denotes the \(N\) image tokens plus one global token, and \(C\) is the channel dimension. The global tokens \(f_{og}\in\mathbb{R}^{1\times C}\) and \(f_{hg}\in\mathbb{R}^{1\times C}\) are treated as global features. Several methods [13, 23] have proved that fine-grained part features are effective for person re-identification tasks. Thus, the remaining image tokens \(f_{ot}\in\mathbb{R}^{N\times C}\) and \(f_{ht}\in\mathbb{R}^{N\times C}\) are fed into two parallel streams to generate local features. Following the methods [20] and [21], we split the image tokens into \(M_{n}\) parts and concatenate the corresponding global feature to each part. After obtaining the \(M_{n}\) part features, we feed them into a non-shared transformer layer and finally get the local feature representations \(f_{hp}\in\mathbb{R}^{M_{n}\times C}\) and \(f_{op}\in\mathbb{R}^{M_{n}\times C}\). Normalized features \(\hat{f}_{hp}\) and \(\hat{f}_{op}\) can be obtained through a BNNeck [24].

Fig. 4: Some occlusion samples from the Occlusion Instance Library.

Fig. 5: Schematic diagram of Occlusion Instance Augmentation.

\begin{table} \begin{tabular}{|c|c c|c c|} \hline \multirow{5}{*}{OIL} & \multicolumn{2}{c|}{Strong prior set \(O_{s}\)} & \multicolumn{2}{c|}{Weak prior set \(O_{w}\)} \\ \cline{2-5} & car & truck & umbrella & backpack \\ & bicycle & motorcycle & suitcase & road sign \\ & fire hydrant & table & kite & tennis racket \\ & pedestrian & chair & suitcase & billboard \\ & - & bench & - & - \\ \hline \end{tabular} \end{table} TABLE I: Classes in the Occlusion Instance Library.
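The part-token grouping described above — splitting the \(N\) image tokens into \(M_{n}\) contiguous groups and prepending the global token to each group before the non-shared transformer layer — can be sketched as follows. This is a simplified, list-based illustration (the actual model operates on batched tensors; the function name is ours):

```python
def build_part_inputs(global_token, image_tokens, m_n):
    """Split N image tokens into m_n contiguous groups and prepend the
    global token to each group, yielding the m_n inputs of the
    non-shared transformer layer that produces the local features."""
    n = len(image_tokens)
    assert n % m_n == 0, "token count must be divisible by the part number"
    size = n // m_n
    return [[global_token] + image_tokens[k * size:(k + 1) * size]
            for k in range(m_n)]
```

For example, with \(N=8\) tokens and \(M_{n}=4\), each part input holds the global token plus two image tokens; each part is then transformed independently into one local feature.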
In short, the dual-stream architecture consists of a shared ViT backbone and two non-shared transformer layers. The shared backbone extracts general features, while the non-shared transformer layers learn specific patterns for the occluded and holistic Re-ID tasks. _Loss:_ To guarantee that the features are related to the identity and to ensure their discriminability, we train with the following identity losses: \[\mathcal{L}_{id_{g}}=-\frac{1}{B}\sum_{i}\log\big(p(\hat{f}_{og}^{i})\,p(\hat{f}_{hg}^{i})\big), \tag{5}\] \[\mathcal{L}_{id_{h}}=-\frac{1}{B}\sum_{i}\log p(\hat{f}_{hp}^{i}), \tag{6}\] \[\mathcal{L}_{id_{o}}=-\frac{1}{B}\sum_{i}\log p(\hat{f}_{op}^{i}), \tag{7}\] \[\mathcal{L}_{id}=\mathcal{L}_{id_{g}}+\mathcal{L}_{id_{h}}+\mathcal{L}_{id_{o}}, \tag{8}\] where \(B\) denotes the batch size and \(p(\cdot)\) denotes the predicted probability of the ground-truth identity.

### _Feature Completion Stream_

Even though the occlusion augmentation strategy and the training of the shared dual-stream network force the model to better focus on unoccluded parts, the lack of human-body information caused by occlusion has not received much attention. Therefore, we propose a feature completion stream to recover the features of the occluded parts. The feature completion stream consists of a Feature Completion Decoder (FCD) and a transformer layer, as shown in Figure 3. The FCD is illustrated in Figure 6 and consists of a hybrid feature embedding, several transformer layers, and a training loss. _Hybrid Feature Embedding:_ we consider the completed pedestrian features to be composed of occlusion features and recovery features. Inspired by MAE [25], our approach reconstructs the holistic features from the latent occluded features. For each occluded input feature, learnable prototype completion tokens \(T_{c}\in\mathbb{R}^{1\times K\times C}\) are prepended to aggregate the missing feature information from \(f_{ot}\).
It is worth noting that the position of MAE's mask tokens is fixed within the batch, which means that each instance can use the same set of mask tokens for feature completion. However, unlike in MAE, the occlusion positions of the samples generated by OIA are randomly sampled, so we need to conduct completion on each instance individually. Thus, we fuse the learnable prototype tokens \(T_{c}\) with the occlusion features to obtain instance-level tokens. Formally, \[T_{b}=({W_{1}}^{T}f_{ot})T_{c}, \tag{9}\] where \(W_{1}\in\mathbb{R}^{N\times K}\) is a linear projection and \(T_{b}\in\mathbb{R}^{B\times K\times C}\) are the instance completion tokens. Here, \(K=\alpha N\), and \(\alpha\) is a hyper-parameter that controls the number of tokens. Then we map \(f_{ot}\) into \(L\) dimensions and concatenate it with \(T_{b}\). Formally, \[f_{r}=Concat(f_{og},T_{b},(W_{2}\,f_{ot})), \tag{10}\] where \(Concat(\cdot)\) is the concatenation operation, \(W_{2}\in\mathbb{R}^{N\times L}\) is a linear projection, and \(L=(1-\alpha)N\). \(f_{og}\) is prepended to provide global information. Finally, the reconstructed feature \(f_{r}\in\mathbb{R}^{B\times(N+1)\times C}\) is obtained. As mentioned above, due to the randomness of the position of the occluded region, the prepended tokens need to attend to different regions for each instance. However, the operation of prepending limits the position learning, so we use a learnable position embedding to assist the model in encoding the token positions of each instance. Formally, \[f_{r}=(W_{3}f_{r})P_{e}+f_{r}, \tag{11}\] where \(W_{3}\) is the parameter of a convolution and \(P_{e}\) is the positional embedding. _Transformer Layers:_ the consolidated features are sent to the transformer layers to complete the occluded body parts. Following the Transformer [26], the query, key, and value can be formulated as: \[Q=W_{q}f_{r},\ K=W_{k}f_{r},\ V=W_{v}f_{r}, \tag{12}\] where \(W_{q},W_{k},W_{v}\) are the weights of linear projections.
Through the attention mechanism, the final completed feature can be expressed as: \[f_{m}=softmax\Big(\frac{QK^{T}}{\sqrt{d_{k}}}\Big)V, \tag{13}\] \[f_{cp}=Wf_{m}+b. \tag{14}\] _Training Loss:_ Here we use the holistic feature \(f_{br}\) as the target, which is taken from the holistic branch of the dual-stream architecture, thus forming a self-supervised training scheme. The MSE loss function is utilized to drive the training of the FCD. The training loss can be defined as: \[\mathcal{L}_{fcd}=\|f_{cp}-f_{br}\|_{2}^{2}. \tag{15}\]

### _Overall Training Loss_

We propose a Cross Hard Triplet loss (CHT) to help the model better perform metric learning among three different modes of features (occluded features, holistic features, and completed features). Furthermore, in order to successfully complete the occluded pedestrian features, we propose a Feature Completion Consistency loss (FC\({}^{2}\)). It is worth noting that this method does not require any additional labels and can be trained in a self-supervised manner.

Fig. 6: Illustration of the proposed feature completion decoder.

_Cross Hard Triplet Loss:_ Here, we set the holistic feature \(f_{hp}\) as the anchor, and we want to ensure that \(f_{hp}\) is closer to all positive samples than to any negative sample, as in the original triplet loss [27]. However, most previous models only measure features in a single modality. We find the hardest pair of positive samples and the hardest pair of negative samples among the holistic features, occluded features, and completed features for optimization.
Formally, \[p_{1}=\operatorname*{arg\,max}_{j}\|f_{hp}^{i}-f_{op}^{j}\|_{+},\quad p_{2}=\operatorname*{arg\,max}_{j}\|f_{hp}^{i}-f_{c}^{j}\|_{+}, \tag{16}\] \[n_{1}=\operatorname*{arg\,min}_{j}\|f_{hp}^{i}-f_{op}^{j}\|_{-},\quad n_{2}=\operatorname*{arg\,min}_{j}\|f_{hp}^{i}-f_{c}^{j}\|_{-},\] (17) \[\mathcal{L}_{cht}=\sum_{i}^{B}\max\big(\|f_{hp}^{i}-f_{op}^{p_{1}}\|_{2}^{2}-\|f_{hp}^{i}-f_{op}^{n_{1}}\|_{2}^{2}+\alpha,\,0\big)+\sum_{i}^{B}\max\big(\|f_{hp}^{i}-f_{c}^{p_{2}}\|_{2}^{2}-\|f_{hp}^{i}-f_{c}^{n_{2}}\|_{2}^{2}+\alpha,\,0\big), \tag{18}\] where \(\|\cdot\|_{+}\) and \(\|\cdot\|_{-}\) denote distances computed over positive and negative samples respectively, \(p_{1}\) is the index of the hardest positive sample in \(f_{op}\), \(p_{2}\) is the index of the hardest positive sample in \(f_{c}\), \(n_{1}\) is the index of the hardest negative sample in \(f_{op}\), \(n_{2}\) is the index of the hardest negative sample in \(f_{c}\), and \(\alpha\) is a margin hyperparameter. _Feature Completion Consistency Loss:_ it is almost impossible to recover the ideal holistic feature distribution. So, in order to make the distribution of the completed features consistent with the distribution of the holistic features, our goal becomes minimizing the difference between the completed feature distribution and the holistic feature distribution: \[\mathcal{L}_{fc^{2}}=\frac{1}{B}\sum_{i}\big[p(\hat{f}_{c}^{i})\log p(\hat{f}_{c}^{i})-p(\hat{f}_{c}^{i})\log p(\hat{f}_{hp}^{i})\big], \tag{19}\] where \(\hat{f}_{c}\) denotes the feature embedding after the BNNeck and \(p(\cdot)\) denotes the predicted probability of the given features.

### _Training and Inference_

In the training stage, the dual-stream architecture and the feature completion decoder are trained together with the overall objective, which is formulated as Eq. (20): \[\mathcal{L}=\mathcal{L}_{id}+\mathcal{L}_{fcd}+\mathcal{L}_{cht}+\mathcal{L}_{fc^{2}}. \tag{20}\] During the inference stage, the model is relatively simple: a ViT encoder and a feature completion decoder can perform the inference.
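Eq. (19) is the KL divergence between the completed-feature and holistic-feature prediction distributions, averaged over the batch. A minimal sketch, assuming \(p(\cdot)\) is a softmax over ID logits (the helper names are ours, not the paper's):

```python
import math

def softmax(logits):
    # numerically stable softmax over a list of logits
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    z = sum(exps)
    return [e / z for e in exps]

def fc2_loss(completed_logits, holistic_logits):
    """Feature Completion Consistency loss (Eq. 19):
    mean_i KL( p(f_c^i) || p(f_hp^i) ) over the batch."""
    total = 0.0
    for fc, fh in zip(completed_logits, holistic_logits):
        pc, ph = softmax(fc), softmax(fh)
        total += sum(p * (math.log(p) - math.log(q))
                     for p, q in zip(pc, ph) if p > 0.0)
    return total / len(completed_logits)
```

The loss vanishes exactly when the completed features reproduce the holistic prediction distribution, which is the consistency objective the FCD is trained toward.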
## IV Experiments

### _Datasets and Evaluation Metrics_

To illustrate the effectiveness of our method, we evaluate it on five Re-ID datasets covering two tasks: occluded person Re-ID and holistic person Re-ID. **Occluded-Duke**[3] consists of 15,618 training images of 702 persons, 2,210 occluded query images of 519 identities, and 17,661 gallery images of 1,110 persons. It is a subset of DukeMTMC-reID [40] and is currently the most challenging dataset due to its complex scenes. **P-DukeMTMC**[21] is a subset of DukeMTMC-reID, which consists of 12,927 training images with 665 identities, 2,163 occluded images for the query and 9,053 images without occlusion for the gallery. **Occluded-REID**[2] was captured by mobile phones and consists of 2,000 images of 200 occluded persons. Each identity has five full-body person images and five occluded person images with different types of severe occlusions. **Market-1501**[41] contains 1,501 identities observed from 6 camera viewpoints, with 12,936 training images of 751 identities, 19,732 gallery images, and 3,368 queries of 750 persons. **DukeMTMC-reID**[40] contains 36,411 images of 1,404 identities captured from 8 camera viewpoints. It contains 16,522 training images, 17,661 gallery images, and 2,228 queries. **Evaluation Metrics.** We adopt Cumulative Matching Characteristic (CMC) curves and mean average precision (mAP) to evaluate the different Re-ID models.

### _Implementation Details_

Our encoder follows TransReID [20], which consists of 12 transformer layers. The initial weights of the encoder are pre-trained on ImageNet-21K and then finetuned on ImageNet-1K. During training and testing, input images are resized to 256 \(\times\) 128. The training images are augmented with random horizontal flipping and padding. The number of split tokens \(M_{n}\) is set to 4 for the Occluded-Duke dataset (details are described in Sec. IV-E). The number of decoder layers is set to 2.
The token-number ratio \(\alpha\) in the FCD is set to 0.7. The hidden dimension \(D\) is set to 768. The transformer decoder is the same as in [26]. The batch size is set to 64 with 4 images per ID. The learning rate is initialized at 0.008 with cosine learning-rate decay. We conduct all experiments on one RTX 3090 GPU.

### _Comparison with the State-of-the-Art Models_

We compare our method with state-of-the-art methods on five benchmarks covering occluded person Re-ID and holistic person Re-ID. **Results on Occluded datasets.** Table II shows the results on three occluded datasets, _i.e._, Occluded-Duke, P-DukeMTMC, and Occluded-REID. Two classes of methods are compared: CNN-based Re-ID methods [3, 4, 6, 7, 10, 13, 16, 29, 30, 31, 33, 34, 35, 36, 37, 38, 39, 42, 43, 44] and Transformer-based Re-ID methods [20, 21, 23, 28]. The results show that the proposed FCFormer consistently achieves competitive performance on the occluded datasets. It is worth noting that some previous methods adopt additional models; for example, PFD [21] and HOReID [7] both utilize skeleton topology information to cluster unoccluded features, thus achieving competitive results. In comparison, FCFormer achieves the best result with a Rank-1 accuracy of 71.3% and an mAP of 60.9% on the challenging Occluded-Duke dataset without any external clues, which outperforms the previous state-of-the-art methods by a large margin (at least +3.2% Rank-1/+0.8% mAP). On the P-DukeMTMC dataset, our FCFormer achieves 91.5% Rank-1 accuracy and 80.7% mAP, surpassing the state-of-the-art model QPM [39] by 2.1% and 6.3% in terms of Rank-1 accuracy and mAP respectively. In addition, FCFormer is flexible and scalable. We change the step of the sliding window and also introduce the re-ranking technique. As the table shows, FCFormer\({}^{\dagger}\) achieves 73.0%/63.1% Rank-1/mAP on the Occluded-Duke dataset by simply reducing the step size, surpassing the others by at least 4.9%/3% in terms of Rank-1 and mAP respectively.
Further, with the help of re-ranking, our model FCFormer\({}^{\dagger}\) + Re-ranking achieves the highest results by far, reaching 79.4% Rank-1 and 77.2% mAP. **Results on Holistic ReID datasets.** We conduct experiments on two holistic Re-ID datasets, Market-1501 and DukeMTMC-reID. Table III shows the results on the Market-1501 and DukeMTMC-reID datasets. Specifically, our method FCFormer achieves comparable performance (95.0%/89.7% Rank-1 accuracy and 86.8%/78.8% mAP, respectively) on the Market-1501 and DukeMTMC-reID datasets. As shown in the results, PFD achieves better results than ours on the holistic \begin{table} \begin{tabular}{|l|c|c|c|c c c|c c|c c|} \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{Backbone} & \multirow{2}{*}{Extra-clue} & \multicolumn{4}{c|}{Occluded-Duke} & \multicolumn{2}{c|}{P-DukeMTMC} & \multicolumn{2}{c|}{Occluded-REID} \\ \cline{4-13} & & & Rank-1 & Rank-5 & Rank-10 & mAP & Rank-1 & mAP & Rank-1 & mAP \\ \hline Part-Aligned (ICCV 17) [29] & GoogLeNet & ✗ & 28.8 & 44.6 & 51.0 & 44.6 & - & - & - & - \\ PCB (ECCV 18) [13] & ResNet50 & ✗ & 42.6 & 57.1 & 62.9 & 33.7 & 79.4 & 63.9 & 41.3 & 38.9 \\ Part Bilinear (ECCV 18) [30] & GoogLeNet & ✓ & 36.9 & - & - & - & - & - & - & - \\ FD-GAN (NIPS 18) [31] & ResNet50 & ✓ & 40.8 & - & - & - & - & - & - & - \\ DSR (CVPR 18) [32] & ResNet50 & ✗ & 40.8 & 58.2 & 65.2 & 30.4 & - & - & 72.8 & 62.8 \\ SFR (arXiv 18) [33] & FCN & ✗ & 42.3 & 60.3 & 67.3 & 32.0 & - & - & - & - \\ Ad-Occluded (CVPR 18) [34] & ResNet50 & ✗ & 44.5 & - & - & 32.2 & - & - & - \\ PGFA (ICCV 19) [3] & ResNet50 & ✓ & 51.4 & 68.6 & 74.9 & 37.3 & 85.7 & 72.4 & 80.7 & 70.3 \\ PVPM (CVPR 20) [6] & ResNet50 & ✓ & 47.0 & - & - & 37.7 & 85.1 & 69.9 & 70.4 & 61.2 \\ ISP (ECCV 20) [35] & ResNet50 & ✗ & 62.8 & 78.1 & 82.9 & 52.3 & 89.0 & 74.7 & - & - \\ HOReID (CVPR 20) [7] & ResNet50 & ✓ & 55.1 & - & - & 43.8 & - & - & 80.3 & 70.2 \\ SORN (TCSVT 21) [36] & ResNet50 & ✓ & 57.6 & 73.9 & 79.0 & 46.3 & - & - & - & - \\ MoS (AAAI 21)
[16] & ResNet50 & ✗ & 61.0 & 77.4 & 79.1 & 49.2 & - & - & - & - \\ OAMN (ICCV 21) [4] & ResNet50 & ✗ & 62.6 & 77.5 & - & 46.1 & - & - & - & - \\ RFCnet (TPAMI 21) [10] & ResNet50 & ✓ & 63.9 & 77.6 & 82.1 & 54.5 & - & - & - & - \\ Pirt (ACM MM 21) [37] & ResNet50-ibn & ✓ & 60.0 & - & - & 50.9 & - & - & - & - \\ PGFL-KD (ACM MM 21) [38] & ResNet50 & ✓ & 63.0 & - & - & 54.1 & - & 80.7 & 70.3 \\ QPM (TMM 22) [39] & ResNet50 & ✗ & 64.4 & 79.3 & 84.2 & 49.7 & 89.4 & 74.4 & - & - \\ \hline TransReID (ICCV 21) [20] & ViT-B & ✗ & 64.2 & - & - & 55.7 & - & - & 70.2\(\star\) & 67.3\(\star\) \\ PAT (CVPR 21) [23] & Hybrid & ✗ & 64.5 & - & - & 53.6 & - & - & 81.6 & 72.1 \\ DRL-Net (TMM 22) [5] & Hybrid & ✗ & 65.8 & 80.4 & 85.2 & 53.9 & - & - & - & - \\ PFD (AAAI 22) [21] & ViT-B & ✓ & 67.7 & 80.1 & 85.0 & 60.1 & - & 79.8 & 81.5 \\ FED (CVPR 22) [28] & ViT-B & ✗ & 68.1 & - & - & 56.4 & - & **86.3** & 79.3 \\ **FCFormer (Ours)** & ViT-B & ✗ & **71.3** & **84.1** & **87.1** & **60.9** & **91.5** & **80.7** & 84.9 & **86.2** \\ \hline TransReID\({}^{\dagger}\) (ICCV 21) [20] & ViT-B & ✗ & **66.4** & - & - & 59.2 & - & - & - \\ PFD\({}^{\dagger}\) (AAAI 22) [21] & ViT-B & ✓ & 69.5 & - & - & 61.8 & - & 81.5 & 83.0 \\ **FCFormer\({}^{\dagger}\) (Ours)** & ViT-B & ✗ & **73.0** & **84.9** & **88.6** & **63.1** & **92.4** & **82.5** & 83.6 & **85.7** \\ \hline PGFA [3] + Re-ranking & ResNet50 & ✓ & 52.4 & 68.6 & 74.9 & 46.8 & - & - & - \\ HOReID [7] + Re-ranking & ResNet50 & ✓ & 58.3 & - & - & 49.2 & - & - & - \\ Pirt [37] + Re-ranking & ResNet50-ibn & ✓ & 62.1 & - & - & 59.3 & - & - & - \\ PFD [21] + Re-ranking & ViT-B & ✓ & 71.7 & 79.4 & 82.3 & 71.7 & - & - & - \\ **FCFormer (Ours)** + Re-ranking & ViT-B & ✗ & **76.8** & **85.7** & **88.4** & **75.8** & 89.1 & **85.3** & 84.6 & **86.9** \\ **FCFormer\({}^{\dagger}\) (Ours)** + Re-ranking & ViT-B & ✗ & **79.4** & **86.7** & **89.3** & **77.2** & 90.6 & **87.1** & - & - \\ \hline \end{tabular} \end{table} TABLE II: Performance comparison with state-of-the-art methods on
Occluded-Duke and P-DukeMTMC. † means the encoder uses a small-step sliding-window setting. Hybrid denotes a ResNet50 + transformer encoder. datasets. This is mainly because PFD incorporates skeleton information during the encoding and decoding process. Moreover, PFD uses more parts for feature retrieval, and its feature dimension is larger, so its features are more fine-grained. In addition, since FCFormer applies a shared backbone, the occlusion branch in the dual-stream architecture affects the back-propagation of the holistic features. Although not designed for holistic person re-identification, our FCFormer still outperforms CNN-based methods without the help of external models and shows only a small gap with the current state-of-the-art models, illustrating the robustness of FCFormer.

### _Performance under Transfer setting_

In order to further verify the effectiveness of the method, we follow the methods [6, 7, 39] that adopt Market-1501 or MSMT17 as the training set and then directly evaluate on the occluded datasets (Occluded-REID and P-DukeMTMC). A number of methods [3, 6, 13, 54, 55, 56, 57, 58, 59, 60] are involved in the comparison. Tables II and IV show that FCFormer achieves competitive Rank-1 accuracy and mAP on the Occluded-REID and P-DukeMTMC datasets respectively. On the P-DukeMTMC dataset, our FCFormer outperforms QPM by 1.0% and 8.3% in terms of Rank-1 accuracy and mAP respectively. On the Occluded-REID dataset, FCFormer produces results comparable with FED [28], achieving 84.9% Rank-1. We fail to achieve the highest Rank-1 performance on Occluded-REID for the following reasons: first, the Transformer has poor cross-domain generalization ability on small datasets; secondly, we use the same backbone as TransReID, which contains some dataset-specific tokens, further limiting the generalization ability.
However, it is worth noting that our method achieves 86.2% mAP on Occluded-REID and 39.4% mAP on P-DukeMTMC, surpassing previous models by at least 4.7% and 8.3% on Occluded-REID and P-DukeMTMC respectively. The reason is that the completed feature participates in the feature distance calculation, which improves the average precision.

### _Ablation Study_

In this part, we perform ablation studies on the Occluded-Duke dataset to further analyze the effectiveness of each component. **Effectiveness of proposed Modules.** We present ablation studies of the dual-stream structure, Occlusion Instance Augmentation (OIA), and Feature Completion Decoder (FCD). Table V shows the experimental results. Index-1 denotes the baseline transformer structure [20]. (1) From index-1 and index-2, we can see that the OIA module improves the Rank-1 accuracy by 4.1% and the mAP by 3.2% over the baseline model. OIA successfully incorporates a wider range of occlusion ratios and a variety of occlusion samples into the training set. (2) From index-2 and index-3, adding the dual-stream structure improves performance by 3.1% Rank-1 accuracy and 2.5% mAP. This demonstrates that the dual-stream structure can effectively enable the shared part to learn a general occlusion and non-occlusion pattern, while the non-shared parts complete the training of the specific tasks. (3) The performance gain brought by the FCD can be demonstrated by index-6 and index-8. After adding the FCD to index-6, the performance increases by 1.9% and 1.7% in Rank-1 and mAP respectively. Comparing index-3 and index-5, the FCD increases the performance by 0.8% Rank-1 and 1.2% mAP. This shows that the features obtained by the holistic branch are more robust with the help of the CHT loss, making the supervision data of the FCD more precise and the completed features produced by the FCD more discriminative.
**Effectiveness of proposed Losses.** We also present ablation studies of the Cross Hard Triplet loss (CHT loss) and the Feature Completion Consistency loss (FC\({}^{2}\) loss). (1) Index-3 and index-6 show that the proposed CHT loss contributes to the network, improving the performance by 1.9% Rank-1 and 2.8% mAP. The CHT loss enables the model to find the hardest features of modalities different from the anchor's among the fused features. This is advantageous for attracting features with the same identity but distinct modalities in the feature space. (2) Both (Index-5, Index-7) and (Index-8, Index-9) demonstrate that the FC\({}^{2}\) loss can guide the FCD to bring the generated pedestrian feature distribution closer to the holistic feature distribution, enabling the FCD to complete pedestrian features from occluded inputs. \begin{table} \begin{tabular}{l|c c c c} \hline Methods & Rank-1 & Rank-5 & Rank-10 & mAP \\ \hline \hline IDE (CVPR 17) [56] & 36.0 & 49.3 & 55.2 & 19.7 \\ HACNN (CVPR 18) [54] & 30.4 & 42.1 & 49.0 & 17.0 \\ PCB (ECCV 18) [13] & 43.6 & 57.1 & 63.3 & 24.7 \\ OSNet (ICCV 19) [55] & 33.7 & 46.5 & 54.0 & 20.1 \\ Part Bilinear (ECCV 18) [30] & 39.2 & 50.6 & 56.4 & 25.4 \\ ISP* (ECCV 20) [35] & 46.3 & 55.9 & 60.8 & 26.4 \\ PGFA\({}^{*}\) (TNNLS 21) [3] & 48.2 & 59.6 & 65.8 & 26.8 \\ PVPM (CVPR 20) [6] & 51.5 & 64.4 & 69.6 & 29.2 \\ QPM (TMM 22) [39] & 57.3 & **69.9** & **75.5** & 31.1 \\ \hline **FCFormer** (Market \(\rightarrow\) P-Duke) & **58.3** & 69.7 & 74.4 & **39.4** \\ **FCFormer** (MSMT17 \(\rightarrow\) P-Duke) & **58.2** & 69.8 & 74.6 & **41.4** \\ \hline \end{tabular} \end{table} TABLE IV: Performance comparisons on P-DukeMTMC under the transfer setting.
\begin{table} \begin{tabular}{l|c c|c c} \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{Market-1501} & \multicolumn{2}{c}{DukeMTMC} \\ \cline{2-5} & Rank-1 & mAP & Rank-1 & mAP \\ \hline PCB (ECCV 18) [13] & 92.3 & 77.4 & 81.8 & 66.1 \\ DSR (CVPR 18) [32] & 83.6 & 64.3 & - & - \\ BOT (CVPRW 19) [45] & 94.1 & 85.7 & 86.4 & 76.4 \\ VPM (CVPR 19) [15] & 93.0 & 80.8 & 83.6 & 72.6 \\ MVPM (ICCV 19) [46] & 91.4 & 80.5 & 83.4 & 70.0 \\ SFT (ICCV 19) [47] & 93.4 & 82.7 & 86.9 & 73.2 \\ CAMA (CVPR 19) [48] & 94.7 & 84.5 & 85.8 & 72.9 \\ IANet (CVPR 19) [49] & 94.4 & 83.1 & 87.1 & 73.4 \\ Circle (CVPR 20) [50] & 94.2 & 84.9 & - & - \\ SPReID (CVPR 18) [51] & 92.5 & 81.3 & 84.4 & 70.1 \\ P\({}^{2}\)Net (ICCV 19) [52] & 95.2 & 85.6 & 86.5 & 73.1 \\ PGFA (CVPR 19) [3] & 91.2 & 76.8 & 82.6 & 65.5 \\ AANet (CVPR 19) [53] & 93.9 & 82.5 & 86.4 & 72.6 \\ HOReID (CVPR 20) [7] & 94.2 & 84.9 & 86.9 & 75.6 \\ Pirt (TMM 22) [37] & 94.1 & 86.3 & 88.9 & 77.6 \\ \hline TransReID (ICCV 21) [20] & 95.0 & 88.2 & 89.6 & 80.6 \\ PAT (CVPR 21) [23] & 95.4 & 88.0 & 88.8 & 78.2 \\ DRL-Net (TMM 22) [5] & 94.7 & 86.9 & 88.1 & 76.6 \\ PFD (AAAI 22) [21] & **95.5** & **89.6** & **90.6** & **82.2** \\ FED (CVPR 22) [28] & 95.0 & 86.3 & 89.4 & 78.0 \\ FCFormer (_Ours_) & 95.0 & 86.8 & 89.7 & 78.8 \\ \hline \end{tabular} \end{table} TABLE III: Performance comparison with state-of-the-art models on the Market-1501 and DukeMTMC-reID datasets. **Analysis of the number of part tokens \(M_{n}\).** The number of part tokens determines the granularity of the output features. Figure 7 shows how the Re-ID performance is influenced by the number of part tokens \(M_{n}\). As we can see, FCFormer achieves the best Rank-1/mAP when \(M_{n}=4\). The performance initially improves as \(M_{n}\) increases, which shows that an adequate number of part tokens can effectively help the model learn more robust occlusion features. However, increasing its value further leads to a performance decline.
We conclude from Figure 7 that redundant tokens increase the dimensionality of the features and make the model pay attention to more details, which weakens its robustness to noise. **Analysis of scaling ratio \(\delta\) in OIA.** Figure 8 depicts the experimental findings of FCFormer at various fixed occlusion ratios. We note that a scaling ratio of 0.4 allows FCFormer to perform at its best. Figure 9 shows that OIA may generate some undesired augmented images. For example, large portions of the human body's information are preserved in the augmented images produced by tiny occlusion ratios (Figure 9(a)). As a result, the network is unable to learn the relationship between the person and the occluder effectively. Another scenario with a small scaling ratio is that the occlusion sample does not hit the target person, resulting in an invalid augmentation (Figure 9(b)). On the contrary, a high occlusion ratio (\(\delta>0.8\)) (Figure 9(c)) results in a significant loss of pedestrian information, preventing the model from learning ID-related properties. Therefore, to imitate real occlusion situations and prevent people from being entirely obscured, we sample from a uniform distribution centered around \(\delta=0.4\) (\(\delta\sim\mathcal{U}(0.1,0.7)\)) instead of using a fixed scaling ratio to scale the occlusion samples in Sec. III-A. **Analysis of augmentation locations in OIA.** Table VI compares the augmentation-position strategies used by OIA. The first involves pasting images for every instance in a batch at one "Fixed" location, and the second involves pasting images for each instance in a batch at "Random" locations. As mentioned in Sec. III-C, our FCD adopts MAE's notion [25] of restoring holistic features from latent unoccluded features. According to the results, the "Random" augmentation approach performs better than the "Fixed" method.
This is because the "Fixed" method only selects a random position at the beginning, after which every instance in the batch follows this position, which reduces the diversity of the occlusion relationships. **Analysis of different backbones.** In this section, we conduct experiments to show how the encoder influences the performance. CNN-based and Transformer-based backbones are compared in Table VII. § means only the backbone is used for training. From the table, we can observe that the ResNet series and the Transformer series have a huge performance gap in occlusion scenarios. Here ResNet-50§ is similar to PCB [13] in that it directly learns local features.

\begin{table} \begin{tabular}{c|c c c c c c|c c c c} \hline Index & Baseline & Dual-structure & OIA & FCD & CHT loss & FC\({}^{2}\) loss & R-1 & R-5 & R-10 & mAP \\ \hline \hline 1 & ✓ & & & & & & 59.3 & 76.5 & 82.2 & 50.0 \\ 2 & ✓ & & ✓ & & & & 63.4(+4.1) & 77.1 & 82.6 & 53.2(+3.2) \\ 3 & ✓ & ✓ & ✓ & & & & 66.5(+7.2) & 78.6 & 83.6 & 55.7(+5.7) \\ 4 & ✓ & & & ✓ & ✓ & & 65.1(+5.8) & 77.9 & 83.1 & 54.8(+4.8) \\ 5 & ✓ & ✓ & ✓ & ✓ & & & 67.3(+8.0) & 79.5 & 84.1 & 56.9(+6.9) \\ 6 & ✓ & ✓ & ✓ & & ✓ & & 68.4(+9.1) & 81.7 & 85.7 & 58.5(+8.5) \\ 7 & ✓ & ✓ & ✓ & ✓ & & ✓ & 67.8(+8.5) & 80.9 & 85.0 & 57.8(+7.8) \\ 8 & ✓ & ✓ & ✓ & ✓ & ✓ & & 70.3(+11.0) & 82.6 & 86.8 & 60.2(+10.2) \\ 9 & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & 71.3(+12.0) & 84.1 & 87.1 & 60.9(+10.9) \\ \hline \hline \end{tabular} \end{table} TABLE V: Ablation study over Occluded-Duke.

\begin{table} \begin{tabular}{c|c c c c} \hline Augmentation & R-1 & R-5 & R-10 & mAP \\ \hline \hline Fixed & 67.8 & 83.2 & 86.1 & 57.4 \\ Random & 71.3 & 84.1 & 87.1 & 60.9 \\ \hline \end{tabular} \end{table} TABLE VI: Different augmentation methods within a batch.

Fig. 7: Parameter analysis for the number of local tokens.

Fig. 8: Parameter analysis for the scaling ratio \(\delta\) in OIA.

Fig. 9: The augmented image of (a) a small scaling ratio, (b) a failure case, (c) a large scaling ratio.
The second row shows the FCFormer framework with ResNet-50 as the backbone, surpassing ResNet-50§ by 5% Rank-1 accuracy and 1.9% mAP. This demonstrates that our proposed paradigm is effective. Unlike ResNet, the Transformer has a high performance starting point when dealing with occlusion problems. ViT-S achieves better performance with fewer parameters compared to ResNet-50. DeiT-B achieves results comparable to ViT-B; the reason is that the ViT models are pre-trained on ImageNet-21K and then finetuned on ImageNet-1K, whereas the DeiT models are only pre-trained on ImageNet-1K. The performance of ViT-L is lower than that of ViT-B, which shows that training large models on small datasets can easily lead to overfitting.

### _Qualitative Analysis_

In this section, we present qualitative experimental results and demonstrate the superiority of our proposed FCFormer. **Probability scores of part tokens.** We present the probability scores after the non-shared transformer layers for some occluded pedestrian images. As shown in Figure 11, occlusion heavily affects the probability scores of the tokens in the baseline model. However, the recovered feature tokens obtained from the FCD have significantly improved scores in occluded regions, which indicates that the feature completion decoder uses unoccluded information to compensate for the features of occluded regions. **Visualization of the Feature Completion Transformer.** In Figure 10, we present the retrieval results of RFCnet [10] \begin{table} \begin{tabular}{c c|c c c c} \hline Backbone & Param.
& R-1 & R-5 & R-10 & mAP \\ \hline \hline ResNet50§ & - & 42.6 & 57.1 & 62.9 & 33.7 \\ ResNet50 & 25.6M & 47.6 & 64.3 & 71.1 & 35.6 \\ ResNet101 & 44.7M & 47.7 & 62.5 & 68.7 & 35.6 \\ \hline ViT-B§ & - & 59.3 & 76.5 & 82.2 & 50.0 \\ ViT-S* & 22M & 63.7 & 77.9 & 82.9 & 53.3 \\ ViT-B* & 86M & **71.3** & **84.1** & **87.1** & **60.9** \\ ViT-L* & 307M & 68.8 & 79.9 & 83.9 & 57.9 \\ DeiT-S* & 22M & - & - & - & - \\ DeiT-B* & 86M & 69.5 & 82.9 & 86.1 & 59.7 \\ DeiT-L* & 307M & - & - & - & - \\ \hline \hline \end{tabular} \end{table} TABLE VII: Comparison of different backbones. \(*\) denotes that camera perspective information is introduced. Fig. 10: Retrieval results of RFCnet and our proposed FCFormer on the Occluded-DukeMTMC dataset. and our FCFormer. The retrieval results show that RFCnet tends to mix the information of the target person and obstacles, resulting in retrieving a wrong person with a similar obstacle or treating the person in front as the target. This is because pre-designing the regions still does not tackle the extreme occlusion problem well. Unlike RFCnet's completion approach, we implicitly learn the completion features from the occlusion features without encoding and decoding on the feature-space map. **Visualization of attention maps of the feature completion decoder.** We visualize the attention heatmaps of the feature completion decoder in Figure 12 by using Grad-CAM [58]. The model concentrates on body parts with significant information when the pedestrian is not occluded. On the contrary, if the target person is occluded, the FCD pays attention to the position of the human body at the occluder's edge, as this helps to complement missing features with nearby features.

## V Conclusion

In this paper, we propose a Feature Completion Transformer (FCFormer) framework to alleviate the occluded person re-identification problem.
The core idea is to construct a variety of occlusion sample pairs and train a decoder that can complement incomplete pedestrian features through known occlusion relationships without relying on an extra semantic model. Specifically, OIA provides a new and general occlusion augmentation scheme, producing a variety of holistic-occlusion sample pairs. Subsequently, a dual-stream structure uses the shared encoder to train two different branches for holistic and occluded images, respectively, with these holistic-occlusion sample pairs. Then we propose a feature completion decoder (FCD) to recover holistic features from occluded features. Extensive experiments on occluded and holistic datasets demonstrate the effectiveness of our proposed method. In the future, we intend to extend the proposed occlusion augmentation and feature completion paradigm to various computer vision tasks in order to achieve a unified solution to the occlusion problem.
2310.19121
Simple strategy for simulation of large area of axially symmetric metasurfaces
Metalenses are composed of nanostructures for focusing light and have been widely explored in many exciting applications. However, their expanding dimensions pose simulation challenges. We propose a method to simulate metalenses in a timely manner using vectorial wave and ray tracing models. We sample the metalens' radial phase gradient and locally approximate it by a linear phase response. Each sampling point is modeled as a binary blazed grating, employing the chosen nanostructure, to build a transfer function set. The metalens transmission or reflection is then obtained by applying the corresponding transfer function to the incoming field on the regions surrounding each sampling point. Fourier optics is used to calculate the scattered fields under arbitrary illumination for the vectorial wave method and a Monte Carlo algorithm is used in the ray tracing formalism. We validated our method against finite difference time domain simulations at 632 nm and we were able to simulate metalenses larger than 3000λ0 in diameter on a personal computer.
Augusto Martins, Achiles F. da Mota, Chris Stanford, Taylor Contreras, Justo Martin-Albo, Alexander Kish, Carlos Escobar, Adam Para, Roxanne Guenette
2023-10-29T19:16:06Z
http://arxiv.org/abs/2310.19121v1
# Simple strategy for simulation of large area of axially symmetric metasurfaces ###### Abstract Metalenses are composed of nanostructures for focusing light and have been widely explored in many exciting applications. However, their expanding dimensions pose simulation challenges. We propose a method to simulate metalenses in a timely manner using vectorial wave and ray tracing models. We sample the metalens' radial phase gradient and locally approximate it by a linear phase response. Each sampling point is modeled as a binary blazed grating, employing the chosen nanostructure, to build a transfer function set. The metalens transmission or reflection is then obtained by applying the corresponding transfer function to the incoming field on the regions surrounding each sampling point. Fourier optics is used to calculate the scattered fields under arbitrary illumination for the vectorial wave method and a Monte Carlo algorithm is used in the ray tracing formalism. We validated our method against finite difference time domain simulations at 632 nm and we were able to simulate metalenses larger than 3000\(\lambda_{0}\) in diameter on a personal computer. ## 1 Introduction Metalenses consist of subwavelength nanostructures that locally modify the phase profile of an incoming beam to focus it [1, 2, 3]. The unprecedented degree of light manipulation, compactness, and compatibility with standard nanofabrication processes make them attractive substitutes for conventional refractive optics systems. These cutting-edge devices have been designed for a wide range of wavelengths (from 50 nm to 3600 nm) and are now developed for industrial applications [3]. Diffraction limited focusing [4], wide field of view imaging [5, 6], achromatic focusing [7], and endoscopic imaging [8] are a few examples of the breadth of applications enabled by metalenses. Recently, mass-manufacturable metalenses with diameters on the order of a few centimeters have been demonstrated [9, 10, 11, 12]. 
Normally, metalenses are simulated under certain approximations or with advanced techniques that reduce the computational burden. Conventional rigorous numerical methods to solve Maxwell's equations, such as the finite difference time domain (FDTD) and finite element methods (FEM), are not suitable because they require enormous computational resources [13]. Several strategies to accelerate these methods have been proposed, including hardware acceleration for the FDTD method [14], augmented partial factorization (APF) [15], and low-overhead distribution on a GPU-based simulation [16]. These approaches have successfully simulated metalenses with dimensions as large as 600\(\lambda_{0}\) [16]. However, the most common approach to simulate even larger area metalenses is the local approximation method [17, 6, 18, 14], where the field transmitted by each nanopost is assumed to be constant and equal to its array response. This approach, however, does not account for the coupling among nanostructures [1], and cannot simulate metagratings. Here, we propose a method to accurately simulate large area metalenses in a timely manner. Our strategy is based on sampling the metalens phase gradient profile and modelling each sampling point as a binary blazed grating using the nanopost or metagrating design of choice. We then build a transfer function library for each blazed grating under plane wave incidence for different incident angles and polarizations. A similar approach has been recently demonstrated to simulate quantum emitters' response near periodically patterned hyperbolic metamaterials [19, 20]. Our model allows us to simulate the metalens response to an arbitrary field distribution under both vectorial wave and ray tracing models. The model expands the field and rays in terms of the diffraction orders of the metalens, allowing for an unprecedented analysis of the focused field. 
We compared our method against rigorous FDTD simulations and managed to reduce the simulation time and memory requirements by at least one order of magnitude while keeping the results accurate. We used our method to simulate metalenses with diameters larger than \(3000\lambda_{0}\) and to separate the field into the contributions of each diffraction order. ## 2 Metalens model rationale We propose a model based on a transfer-function approach that considers coupling among different posts. The metasurface phase gradient is sampled on \(N\) positions, as illustrated in Fig. 1. The \(i^{th}\) patch can be modelled as a blazed binary grating with period given by the grating equation as \[P_{i}=\frac{2\pi}{G(\vec{r}_{i})} \tag{1}\] where \(G(\vec{r}_{i})\equiv G_{i}=\partial\phi(r)/\partial r\) is the radial phase gradient value calculated at the \(i^{th}\) sampling point. Any radial phase profile can be sampled using our phase gradient library, as shown in Fig. 1. The blazed binary grating is modeled according to the nanostructure or metagrating design being used, such as rectangular or elliptical posts, among others, as shown in Fig. 1(c). This approximation is solely used to obtain the transmission and/or reflection coefficients of that region and does not account for the phase profile curvature. We address this issue by locally correcting the phase gradient according to the ideal phase profile, as will be discussed later. We calculate the blazed binary grating transmission and reflection coefficients under plane wave excitation at different angles of incidence and polarizations using the rigorous coupled-wave analysis (RCWA) method [21, 22]. The transfer function is defined as the field amplitudes of the transmitted/reflected waves as a function of the incoming in-plane wave-vector. Figure 1: (a) and (b) show the sampling of the gradient of an arbitrary radial phase profile and the regions they cover on the phase profile, respectively. (c) depicts the equivalent blazed binary grating approximation on the metalens. 
We solve Maxwell's equations for a supercell of the blazed binary grating on a region with a cross-section area given by \(d\times P_{i}\), where \(d\) is the unit cell size of a single post on the metalens design, as shown in Fig. 1(c). Note that depending on the metasurface nanopost unit cell size, we may have to use a supercell larger than \(P_{i}\) to fit an integer number of posts inside it. With this approach, the coupling among the posts is fully modeled, but the coupling between different regions is neglected. To generate each blazed binary grating, we perform the phase profile sampling as necessary. After calculating the transfer functions, they are used to obtain the scattering properties of the metalens. The transfer function calculation needs to be performed only once, and it can be hastened using accurate and faster methods [14, 15, 16]. To save computation effort, we calculate a single transfer function for each radial sector and later rotate it to the new coordinate system, taking advantage of the metalens rotational symmetry. ### Vectorial wave model The metalens is modeled as a spatially patched transfer function that modulates an arbitrary field distribution and relies on the angular spectrum formalism for the propagation in free space [23]. As explained in the previous section, we sample the metalens phase profile gradient with linear patches, as shown in Fig. 2(a). Moreover, we also split the metalens along the azimuthal direction, leveraging its rotational symmetry to use the same transfer function but properly spatially rotated, as shown in Figs. 2(b) and 2(c). That is, we approximate the metalens phase profile by a piece-wise linear phase profile at the points \(\vec{r}_{i,j}\), creating \(I\) sectors radially, and we split each sector into \(J_{i}\) regions, where \(i\) indicates a given radial sector, as represented in Fig. 2. 
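To make the sampling of Eq. (1) concrete, the sketch below differentiates a hyperbolic phase profile \(\phi(r)=-\frac{2\pi}{\lambda}(\sqrt{r^{2}+f^{2}}-f)\) and converts the gradient at a few radii into local grating periods. The profile choice, function names, and sampling radii are illustrative assumptions, not the authors' implementation.

```python
import math

def hyperbolic_phase_gradient(r, f, wavelength):
    """dphi/dr for phi(r) = -(2*pi/wavelength) * (sqrt(r**2 + f**2) - f)."""
    return -(2 * math.pi / wavelength) * r / math.sqrt(r**2 + f**2)

def local_grating_period(r, f, wavelength):
    """Eq. (1): P_i = 2*pi / |G(r_i)| at the i-th sampling radius."""
    return 2 * math.pi / abs(hyperbolic_phase_gradient(r, f, wavelength))

# Values taken from the paper's example lens: f = 200 um, 632 nm.
wavelength, f = 0.632e-6, 200e-6
for r in (10e-6, 25e-6, 50e-6):
    P = local_grating_period(r, f, wavelength)
    print(f"r = {r * 1e6:5.1f} um -> local period = {P * 1e6:.2f} um")
```

Note that the resulting period satisfies \(P=\lambda/\sin\theta\), with \(\sin\theta=r/\sqrt{r^{2}+f^{2}}\), i.e. the sampled gratings deflect toward the focus, so periods shrink toward the lens edge.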
The total transmitted or reflected fields by the metalens are calculated as the sum of the contributions of each region \[\vec{E}(\vec{r})=\sum_{i=1}^{I}\sum_{j=1}^{J_{i}}\vec{E}_{ij}(\vec{r})W_{ij}( \vec{r}) \tag{2}\] where \(\vec{E}_{ij}(\vec{r})\) is the electric field distribution calculated on region \(j\) of sector \(i\) and \(W_{ij}(\vec{r})\) is a window function, as shown in Fig. 2(c), that limits the region area and is given by Eq. (S2). \(\vec{r}\) is defined as a radial vector centered on the metalens (see Fig. 2(a)). If the incoming field distribution has a Fourier transform given by \(\vec{E}_{\textbf{0}}(\vec{k}_{\parallel})\) (bold symbols represent the Fourier-transformed quantities), the transmitted field spatial spectrum on region \(j\) of sector \(i\) (see Fig. 2) is given by \[\vec{E}_{ij}(\vec{k}_{\parallel})=\int d\kappa_{\parallel}^{2}\,\stackrel{{ \leftrightarrow}}{{T}}_{ij}(\vec{k}_{\parallel},\vec{\kappa}_{ \parallel})\cdot\vec{E}_{\textbf{0}}(\vec{\kappa}_{\parallel}) \tag{3}\] where \(\stackrel{{\leftrightarrow}}{{T}}_{ij}(\vec{k}_{\parallel},\vec{\kappa}_{\parallel})\) is the transfer function, defined as a tensor given by Eq. (S4). Eq. (3) assumes that the metalens acts as a linear operator on the incoming field. Such an ansatz is corroborated by the linear property of Maxwell's equations [19, 20, 23]. Figure 2: (a) shows how the different sectors are set up based on the gradient sampling. (b) shows the angular splitting for each sector, forming a given region. \(\vec{r}\) is defined as the position vector centered on the metalens and \(\vec{r}_{ij}\) is the vector pointing to the center of region ij. \(\hat{\rho}_{ij}\) and \(\hat{\phi}_{ij}\) are the blazed direction versor and the corresponding orthogonal versor of region ij, respectively. 
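The windowed superposition of Eq. (2) can be sketched as follows. The indicator windows below simply partition an annular grid into radial sectors and azimuthal regions, and the per-region fields are arbitrary stand-ins; the paper's actual \(W_{ij}\) is given by Eq. (S2).

```python
import numpy as np

N = 128
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
R, PHI = np.hypot(X, Y), np.arctan2(Y, X)

I_SEC, J_REG = 4, 8                                 # sectors / regions (illustrative)
r_edges = np.linspace(0.0, 1.0, I_SEC + 1)
phi_edges = np.linspace(-np.pi, np.pi, J_REG + 1)
phi_edges[-1] += 1e-9                               # include the branch-cut edge

def window(i, j):
    """Indicator window W_ij selecting radial sector i, azimuthal region j."""
    return ((R >= r_edges[i]) & (R < r_edges[i + 1]) &
            (PHI >= phi_edges[j]) & (PHI < phi_edges[j + 1])).astype(float)

E_in = np.exp(1j * 2 * np.pi * R**2)                # some incoming field (illustrative)
E_total = np.zeros_like(E_in)
for i in range(I_SEC):
    for j in range(J_REG):
        E_ij = E_in * np.exp(-1j * (i + 1) * R)     # stand-in for the patched response
        E_total += E_ij * window(i, j)              # Eq. (2)

# The windows tile the aperture: each point inside R < 1 lies in exactly
# one region, so the masks sum to one there.
W_sum = sum(window(i, j) for i in range(I_SEC) for j in range(J_REG))
print("windows tile the aperture:", bool(np.allclose(W_sum[R < 1.0], 1.0)))
```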
As shown in the SI, after operating the transfer function on the incoming field distribution, the transmitted field spectrum can be calculated as \[\mathbf{\tilde{E}}_{ij}(\widetilde{k}_{\parallel})=\sum_{g}\mathbf{\tilde{e}}_{ijg}( \widetilde{k}_{\parallel}-\widetilde{G}_{ijg}) \tag{4}\] where \(\mathbf{\tilde{e}}_{ijg}(\widetilde{k}_{\parallel})\) is the spectrum of the output field produced by the \(g\)-th diffraction order and is given by Eq. (S10). The reciprocal vectors are defined according to the versors \(\hat{\rho}_{ij}\) and \(\hat{\phi}_{ij}\), shown in Fig. 2(c). Finally, to obtain the field in real space, we simply perform an inverse Fourier transform on \(\mathbf{\tilde{e}}_{ijg}(\widetilde{k}_{\parallel})\). Applying the inverse Fourier transform and its shifting property to Eq. (4), we have \[\widetilde{E}_{ij}(\widetilde{r}_{\parallel})=\sum_{g}e^{-j\widetilde{G}_{ijg }\cdot\widetilde{r}_{\parallel}}\int\,dk_{\parallel}^{2}\mathbf{\tilde{e}}_{ijg}( \widetilde{k}_{\parallel})e^{j\widetilde{k}_{\parallel}\cdot\widetilde{r}_{ \parallel}} \tag{5}\] As seen in Eq. (5), the total field on the \((i,j)\) region is given by the linear superposition of the diffracted fields modulated by the linear phase of each order. Furthermore, the phase term \(e^{-j\widetilde{G}_{ijg}\cdot\widetilde{r}_{\parallel}}\) controls the central position of the diffracted field spectrum and creates a local linear phase profile distribution, which is a consequence of the linear patching approach. This effect induces aberrations on the wavefront and can be detrimental to the properties of the metalens. To eliminate this problem, we correct this phase term by substituting it with the original phase profile, \(\phi(\widetilde{r}_{\parallel})\), as discussed in the SI. 
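The free-space propagation step underlying the vectorial model (the angular spectrum formalism referenced above) can be sketched generically; this is a standard scalar implementation with illustrative parameters, not the authors' code. A converging spherical phase propagated to its focal distance should concentrate power on axis.

```python
import numpy as np

def angular_spectrum_propagate(E0, dx, wavelength, z):
    """Propagate a scalar field E0 (pitch dx) a distance z in free space
    using the angular-spectrum transfer function H = exp(1j * k_z * z)."""
    n = E0.shape[0]
    k0 = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(kx, kx)
    kz_sq = k0**2 - KX**2 - KY**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    H = np.exp(1j * kz * z) * (kz_sq > 0)           # suppress evanescent waves
    return np.fft.ifft2(np.fft.fft2(E0) * H)

# Illustrative lens: 30 um aperture radius, f = 50 um, 632 nm, 0.5 um pitch.
N, dx, wl, f = 256, 0.5e-6, 632e-9, 50e-6
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
aperture = np.hypot(X, Y) < 30e-6
E0 = aperture * np.exp(-1j * 2 * np.pi / wl * (np.sqrt(X**2 + Y**2 + f**2) - f))
Ef = angular_spectrum_propagate(E0, dx, wl, f)
peak = np.unravel_index(np.argmax(np.abs(Ef)), Ef.shape)
print("focal peak index:", peak)                    # expected near the grid centre
```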
### Ray tracing Although the vectorial approach describes the behavior of large-area metasurfaces precisely and quickly, the computational burden can be further reduced by applying the proposed method within a ray tracing formalism. The metalens is now treated as a phase discontinuity on the ray path [24], while also accounting for the diffraction efficiency of the metalens. Given a ray with in-plane momentum \(\vec{\kappa}_{\parallel}\) incoming at a point \(\vec{r}_{\parallel}\) on the metalens (see Fig. 2(a)), we can model the diffraction by treating it as a probabilistic event with a Monte-Carlo algorithm. That is, there are three possible outcomes for the ray in our model after it interacts with the metalens: it can either be diffracted back (reflection), forward (transmission), or absorbed if the structure is lossy. The probabilities are taken as the diffraction efficiencies of the patch approximation obtained by solving Maxwell's equations. For the scattering processes we define \(T_{G}(\widetilde{k}_{\parallel},\vec{P})\) and \(R_{G}(\widetilde{k}_{\parallel},\vec{P})\) as the probabilities of a ray with incoming in-plane momentum \(\widetilde{k}_{\parallel}\) and polarization state \(\vec{P}\) being diffracted in transmission and reflection, respectively, where \(G\in\mathbb{D}\) is a given diffraction order in the set of stored diffraction orders \(\mathbb{D}\). From energy conservation, we can define the probability of a ray being absorbed as \(A\), as shown in the SI. We define a cumulative probability distribution \(f\) as \[f[i]=\begin{cases}\sum_{j=1}^{i}V[j]&1\leq i\leq M+N+1\\ 0&i=0\end{cases} \tag{6}\] where \(V\equiv[T_{G_{1}},T_{G_{2}},\cdots,T_{G_{N}},R_{G_{1}},R_{G_{2}},\cdots,R_{G_{M}},A]\) and we omitted the region indices for clarity. Therefore, we can use \(f\) to define the intervals in which a given outcome might happen. 
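The cumulative distribution of Eq. (6) together with the interval search of Eq. (7) amounts to standard inverse-transform sampling over the event table \(V\). A minimal sketch, with an illustrative event table (two transmitted orders, one reflected order, absorption):

```python
import random

def sample_event(V, chi):
    """Eqs. (6)-(7): given V = [T_G1..T_GN, R_G1..R_GM, A] and a uniform
    random number chi in [0, 1], return the index i with f[i-1] < chi <= f[i]."""
    f = 0.0
    for i, p in enumerate(V, start=1):
        f += p
        if chi <= f:
            return i
    return len(V)                    # guard against floating-point round-off

V = [0.60, 0.25, 0.10, 0.05]         # probabilities sum to 1 (illustrative)
rng = random.Random(0)
counts = [0] * len(V)
for _ in range(100_000):
    counts[sample_event(V, rng.random()) - 1] += 1
freqs = [c / 100_000 for c in counts]
print("empirical frequencies:", freqs)   # should approach V
```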
The diffraction order can then be found by generating a random number and analysing in which interval it has fallen. Given the uniformly distributed random variable \(\chi\in[0,1]\), we can obtain \(i\), and consequently the outcome of the event from the sequence \(V\), by solving the following equation \[g(\chi)=i,\ \text{if}\ f[i-1]<\chi\leq f[i] \tag{7}\] where \(g\) returns the index \(i\) of the sequence \(V\), which is used to map the random draw to a given event. Once the scattering event is determined, the ray is either scattered or absorbed. If it is absorbed, then no other ray is produced. On the contrary, if the ray is reflected or transmitted, then a new ray is generated with momentum given by \[\vec{k}=\vec{\kappa}_{\parallel}+\vec{G}_{g}\pm\hat{n}k_{n} \tag{8}\] where \(\hat{n}\) is the normal vector to the metasurface, \(\vec{G}_{g}\) is the reciprocal vector of the \(g\)-th order, and \(k_{n}\) is the resulting ray wave-vector component along the normal direction, which can be found from the dispersion equation in the medium. Note that the reciprocal vector radial component is corrected locally according to the local phase gradient instead of using the blazed binary grating momentum. As shown in the SI, we can also obtain the polarization diffracted by the metalens. Each ray can be labelled according to the diffraction order it originated from, allowing a better understanding of the metalens behavior without the need to perform separate simulations; see Figs. S1 and S2 in the SI for more detail. Such discrimination is also possible in the vectorial wave method, but it would require storing a field distribution for each order, which is impractical. ## 3 Results ### Focusing profile comparison We compare the focusing profile of a metalens operating at 632 nm against FDTD simulations. 
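The momentum update of Eq. (8) can be sketched as follows; the function name and example grating period are illustrative assumptions. Orders whose in-plane momentum exceeds the medium's wave number are evanescent and produce no propagating ray.

```python
import math

def diffracted_ray_momentum(k_in_plane, G, wavelength, n_medium=1.0, transmitted=True):
    """Eq. (8): outgoing wave-vector k = k_par + G +/- n_hat * k_n, where
    k_n = sqrt((n*k0)^2 - |k_par + G|^2) follows from the dispersion relation.
    Inputs are 2-tuples (kx, ky); returns (kx, ky, kz) or None if evanescent."""
    k0 = 2 * math.pi / wavelength
    kx = k_in_plane[0] + G[0]
    ky = k_in_plane[1] + G[1]
    k2 = (n_medium * k0) ** 2 - kx**2 - ky**2
    if k2 <= 0:
        return None                       # evanescent: no propagating ray
    kn = math.sqrt(k2)
    return (kx, ky, kn if transmitted else -kn)

# Normal-incidence ray on a patch with local grating vector along x,
# |G| = 2*pi/P with P = 2.6 um (values only illustrative):
wl, P = 0.632e-6, 2.6e-6
k = diffracted_ray_momentum((0.0, 0.0), (-2 * math.pi / P, 0.0), wl)
theta = math.degrees(math.atan2(abs(k[0]), k[2]))
print(f"first-order deflection angle: {theta:.1f} deg")  # sin(theta) = wl / P
```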
The ray tracing map distribution is obtained by calculating the ray density crossing the plane \(y=0\), and the field distributions were normalized with respect to their peak values on the focal plane to highlight the field distribution. The simulation parameters are discussed in Section 3.3. We use 1200 nm tall glass-based nanoposts in our design [12, 25]. The metalens focal length is 200 \(\mu\)m with a diameter of 100 \(\mu\)m (NA = 0.25) and it encodes a hyperbolic phase profile. The vectorial wave based model qualitatively reproduces the FDTD field distribution with good fidelity, even accounting for the appearance of higher-order focal spots, as shown in Figs. 3(a) and 3(b), respectively, at normal incidence. The model can also reproduce the field distribution at 20\({}^{\circ}\) of incidence, as shown in Figs. 3(e) and 3(f), respectively. The ray tracing model also simulates the focusing field distribution and the high-order focal spots, as shown in Figs. 3(c) and 3(g). However, it accounts only for the power flow of the diffracted rays and idealizes the intensity distribution, as it does not model interference among the rays. The interference leads to the well-known diffraction limit in optics and can be visualized by the field intensity distribution on the focal plane, as shown in Figs. 3(d) and 3(h) at normal and oblique (\(20^{\circ}\)) incidence, respectively. Figure 3: Field intensity distribution focused by a metalens with a focal length of 200 \(\mu\)m and NA=0.25 calculated using different methods. The operating wavelength is 632 nm in all cases. (a)–(d) [(f)–(i)] show the longitudinal field amplitude distributions focused by the metalens at normal [20\({}^{\circ}\)] incidence calculated using the vectorial wave linear patch, FDTD, and ray tracing with linear patching. (e) and (j) show the transversal cuts of the field distributions on the focal plane at normal and 20\({}^{\circ}\) incidence, respectively. 
All plots were normalized to their own power flux and to the peak intensity of the ideal Airy disk distribution. The field distributions at the focal plane obtained by our method present good agreement with the corresponding FDTD distributions at normal and oblique incidences. The transversal cuts of the point spread function at oblique incidence are highly distorted due to off-axis aberration, mainly coma, as shown in Fig. 3(j). We also calculated the focusing efficiency on a circle with a diameter ten times larger than the PSF full width at half maximum (FWHM). At normal incidence, we obtained 73.5%, 72.3% and 71.5% with the FDTD, vectorial wave and ray tracing methods, respectively. Finally, the ray tracing model allows us to easily tag each ray according to the diffraction order it originated from. Fig. 4(a) shows the ray tracing of the simulated metalens at normal incidence. To avoid overcrowding, the plot is limited to 250 rays in total and discriminates the diffraction orders according to the ray color, limited to the \([-5,5]\) range. The negative orders give rise to additional shorter real focal spots, whereas the positive ones give rise to virtual focal spots. We can calculate the diffraction efficiencies of each order by tallying the number of rays directed into them. Fig. 4(b) shows the resulting normalized diffraction-order histograms produced at normal and oblique incidence, respectively. We also applied our model to simulate the performance of a doublet metalens based on [26], as shown in Fig. S1 of the SI. ### Diffraction efficiency calculation After qualitatively showing that our models can calculate the focusing field profile of a metalens, we assess the efficiency prediction of our model against numerical simulations performed using FDTD. We used a glass-based nanopost design described in [25] where metagratings are used to increase the metalens numerical aperture. 
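The tallying of tagged rays into normalized order histograms described above can be sketched as follows; the toy ray ensemble below is purely illustrative (dominated by the focusing \(m=-1\) order, as in the paper's Fig. 4).

```python
from collections import Counter

def order_histogram(tagged_orders):
    """Normalized histogram of diffraction-order tags carried by the rays.
    `tagged_orders` is a list of integers, one per ray."""
    counts = Counter(tagged_orders)
    total = sum(counts.values())
    return {m: c / total for m, c in sorted(counts.items())}

# Toy ray ensemble (illustrative): 1000 rays, mostly in the m = -1 order.
rays = [-1] * 800 + [0] * 120 + [-2] * 50 + [1] * 30
hist = order_histogram(rays)
print(hist)   # {-2: 0.05, -1: 0.8, 0: 0.12, 1: 0.03}
```

Dividing each count by the total turns the tally into the diffraction efficiency estimate for that order, exactly as done when building the histograms of Fig. 4(b).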
Here, we scan a beam across a metalens and calculate the diffraction efficiency of the first few orders as a function of its position from the metalens center. The metalens focal length is 7 mm with a diameter of 15 mm. We use an unpolarized Gaussian beam with a 100 \(\mu\)m (158 \(\lambda\)) waist. The simulation region for each scan point is 200 \(\mu\)m \(\times\) 200 \(\mu\)m. The simulation results at normal, \(10^{\circ}\) and \(20^{\circ}\) of incidence are shown in Figs. 5(a)-5(c), Figs. 5(d)-5(f) and Figs. 5(g)-5(i), respectively. The calculated diffraction efficiencies of the orders \(m=-2,-1,0,1\) and \(2\) correspond to the columns of Fig. 5, where the metalens focusing comes from the \(m=-1\) order. All methods agree for all angles of incidence and diffraction orders. Figure 4: Ray tracing of a hyperbolic metalens with \(f=200\mu m\) and NA=0.25. (a) and (c) show the resulting ray tracing for the metalens illuminated at normal and oblique (\(20^{\circ}\)) incidence, respectively. We only show 250 rays on this plot and limited the diffraction order to \(\pm 5\). (b) and (d) show the normalized diffraction-order histograms of the transmitted rays for normal and oblique incidences, respectively. In particular, the efficiency of the \(m=-1\) order is the highest at normal incidence, and it is almost constant throughout the whole scan, remaining higher than 60%, which is a good indication that the proposed design focuses light efficiently. These results highlight how accurately our model can simulate the metalens even at oblique incidence, when coupling among adjacent posts takes over. These results have also been compared against experimental data with a good match; see [25]. ### Computational resources In this section, we compare each method's simulation time and memory requirements against FDTD simulations. Note that this estimation does not account for the computational resources used in calculating the transfer functions. 
We simulate fused silica glass-based metalenses, operating at 632 nm [25], with NA = 0.5 and different focal lengths (f). The FDTD simulations are performed with the commercial software Lumerical using a uniform mesh with 20 nm sampling in all directions. The simulation volume is set to \(2R\times 2R\times 2\ \mu m\), where R is the metalens radius (\(R=f\tan(\arcsin(NA))\)). Moreover, the FDTD simulations are performed only to calculate the near field transmitted by the metalens, and the free space propagation can be obtained using the angular spectrum formalism [23]. The FDTD simulations are carried out on the FAS cluster with 128 CPUs distributed over 4 nodes. The proposed approaches are executed in serial on a personal laptop and have room for improvement if a parallelization scheme is used. One could easily distribute the simulation of each patch to different CPUs to hasten the simulation. The field matrices in the wave optics model are calculated using 200 nm of sampling. Finally, the ray tracing model is calculated using a ray density of 5000 rays/\(\mu m^{2}\) hitting the metalens. Figs. 6(a) and 6(b) show the memory required and the time consumed for each method as a function of the metalens radius (increasing focal lengths). Figure 5: Diffraction efficiencies of different orders scattered by the metalens when scanned radially by an unpolarized Gaussian beam with 100 \(\mu\)m of waist and different angles of incidence along the scanning line. From left to right, the columns show the diffraction efficiency of the -2, -1, 0, 1, and 2 orders, respectively. The first, second and third rows show, respectively, the results at normal incidence, 10\({}^{\circ}\) and 20\({}^{\circ}\) of incidence. The green, red, and blue lines show the ray tracing model, our vectorial wave model, and the FDTD model results. The operating wavelength is 632 nm. The metalens focal length is 7 mm with a diameter of 15 mm. 
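The quoted 20 nm mesh and \(2R\times 2R\times 2\ \mu m\) volume make it easy to see why direct FDTD becomes infeasible at the millimetre scale; a back-of-the-envelope sketch (the function name is ours, and any per-cell memory figure is an assumption rather than a solver specification):

```python
import math

def fdtd_cells(NA, f, mesh=20e-9, height=2e-6):
    """Yee-cell count of a uniform 2R x 2R x `height` FDTD volume, with the
    metalens radius R obtained from the NA and focal length."""
    R = f * math.tan(math.asin(NA))
    return (2 * R / mesh) ** 2 * (height / mesh), R

# NA = 0.75, f = 1 mm (the large-area lens discussed in the next section).
cells, R = fdtd_cells(NA=0.75, f=1e-3)
print(f"radius = {R * 1e3:.2f} mm, ~{cells:.1e} Yee cells")
# -> radius = 1.13 mm, ~1.3e+12 Yee cells
```

Multiplying \(\sim 10^{12}\) cells by any realistic per-cell storage (several field components plus material and update coefficients) quickly reaches hundreds of terabytes, which is why such lenses are out of reach for direct FDTD.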
Note that even though the models developed here run in serial, the time required for the simulation is considerably lower compared to the FDTD simulation. Furthermore, the proposed approach requires at least one order of magnitude less memory. ## 4 Large area metalens focusing profile simulations Our approach's low computational time and minimal memory usage enable the simulation of large-area metasurfaces. Here, we simulate the focusing profile of hyperbolic and quadratic metalenses [3, 6] operating at 632 nm, utilizing the same post design described in the preceding sections. The metalenses' focal length is 1 mm with a diameter of 2.26 mm (NA = 0.75), which amounts to \(\sim 3755\lambda_{0}\). The metalenses are excited by plane waves at normal and 20\({}^{\circ}\) of incidence. Figs. 7(a)-7(d) and 7(e)-7(h) show the longitudinal field intensity distribution focused by the metalenses calculated using the wave optics and ray tracing models, respectively. A metalens of this size would take over 100 hours and require over a petabyte of memory to simulate on an FDTD cluster with over 400 CPUs running in parallel, as shown in Fig. 6(b), only to obtain the near field. Our model requires approximately two orders of magnitude less time and memory to simulate the same metalens. Both models display similar focused field distributions, but the ray tracing model only accounts for the power flow without any interference effects. At normal incidence, the mirror-symmetric focusing profile of the hyperbolic lens is obtained in both models, and both feature a weak second-order focusing around z=500 \(\mu m\), as shown in Figs. 7(a) and 7(e). When illuminated at 20\({}^{\circ}\) of incidence, both models show a strongly aberrated focal profile due to coma, and a slightly stronger second-order focusing, as shown in Figs. 7(b) and 7(f). 
The focusing field distributions of the quadratic profile at normal incidence, obtained with the vectorial wave optics and ray tracing methods, are shown in Figs. 7(c) and 7(g), respectively, and do not have mirror symmetry around the focal plane, which is a manifestation of spherical aberration [6]. The quadratic field distribution remains almost the same at oblique incidence, as shown in Figs. 7(d) and 7(h), due to its wider field of view [27]. ## 5 Conclusion We propose a strategy to simulate large area metalenses by patching them into smaller parts that can be simulated faster using Maxwell's equations. The patching process is similar to the phase sampling used to design a metasurface profile. However, we sample the phase gradient instead, and a corresponding blazed binary grating models each sampling point. Maxwell's equations are then rigorously solved for each linear piece using a single supercell of the blazed binary grating to obtain a transfer function that depends on the angle of incidence and polarization. The same transfer function can be reused to model different metalenses or metasurfaces with radial phase profiles for a given metasurface post design. Thus, after the transfer function is obtained, the metalens transmitted or reflected fields can be simulated either using a ray tracing approach or a vectorial wave model, depending on the desired application. Figure 6: (a) and (b) show, respectively, the memory and time required to simulate glass-based metalenses with different radii (\(R\)) (dots) and the estimated values using polynomial interpolation from the computed data (dashed lines). The FDTD simulations are used only to calculate the transmitted near field and are performed using Lumerical on the FAS cluster with 128 CPUs distributed over 4 nodes using parallelization. The other models were run in serial and have room for improvement. All simulations are performed for a normally incident plane wave operating at 632 nm. 
We found very good agreement between our model, FDTD simulations, and experiment. This approach reduces the time and memory requirements by orders of magnitude, allowing the simulation of metalenses with diameters larger than 3755\(\lambda_{0}\) on a personal computer. Acknowledgments.Most computations in this paper were run on the FASRC cluster supported by the FAS Division of Science Research Computing Group at Harvard University. This document was prepared by using in part the resources of the Fermi National Accelerator Laboratory (Fermilab) and the Noble Liquid Test Facility (NLTF), a U.S. Department of Energy, Office of Science, Office of High Energy Physics HEP User Facility. Fermilab is managed by Fermi Research Alliance, LLC (FRA), acting under Contract No. DE-AC02-07CH11359. Disclosures.The authors declare no conflicts of interest. Data availability.Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. Supplemental document.See Supplement 1 for supporting content.
2304.05497
Revisiting Single-gated Mixtures of Experts
Mixture of Experts (MoE) are rising in popularity as a means to train extremely large-scale models, yet allowing for a reasonable computational cost at inference time. Recent state-of-the-art approaches usually assume a large number of experts, and require training all experts jointly, which often leads to training instabilities such as the router collapsing. In contrast, in this work, we propose to revisit the simple single-gate MoE, which allows for more practical training. Key to our work are (i) a base model branch acting both as an early-exit and an ensembling regularization scheme, (ii) a simple and efficient asynchronous training pipeline without router collapse issues, and finally (iii) a per-sample clustering-based initialization. We show experimentally that the proposed model obtains efficiency-to-accuracy trade-offs comparable with other more complex MoE, and outperforms non-mixture baselines. This showcases the merits of even a simple single-gate MoE, and motivates further exploration in this area.
Amelie Royer, Ilia Karmanov, Andrii Skliar, Babak Ehteshami Bejnordi, Tijmen Blankevoort
2023-04-11T21:07:59Z
http://arxiv.org/abs/2304.05497v1
# Revisiting Single-gated Mixtures of Experts ###### Abstract Mixture of Experts (MoE) are rising in popularity as a means to train extremely large-scale models, yet allowing for a reasonable computational cost at inference time. Recent state-of-the-art approaches usually assume a large number of experts, and require training all experts jointly, which often leads to training instabilities such as the router collapsing. In contrast, in this work, we propose to revisit simple single-gate MoE, which allows for more practical training. Key to our work are **(i)** a base model branch acting both as an early-exit and an _ensembling regularization_ scheme, **(ii)** a simple and efficient _asynchronous_ training pipeline without router collapse issues, and finally **(iii)** an automatic _per-sample_ clustering-based initialization. We show experimentally that the proposed model obtains efficiency-to-accuracy trade-offs comparable with other more complex MoE, and outperforms non-mixture baselines. This showcases the merits of even a simple single-gate MoE, and motivates further exploration in this area. ## 1 Introduction Neural networks are designed to extract a fixed set of exhaustive features for any given image. However, images exhibit varying levels of complexity, from simple cases such as single objects on a white background to images with clutter and difficult camera angles. Treating both of these cases equally can be wasteful from an efficiency perspective. This intuition has given rise to very active research in the fields of conditional computing and early-exiting. In conditional computing, subparts of the network are turned on or off dynamically based on the input image. This makes it possible to increase the network's capacity at training time without affecting the computational cost at inference. 
The same conditional behavior applies in early exiting but across the depth dimension: The prediction can be finalized early in the network for simple images, avoiding unnecessary further computing. In particular, Mixture of Experts (MoE) have gained a lot of traction in recent years for conditional computing. For instance, transformers with a massive number of parameters are now becoming the new normal for natural language processing. Similar models are also starting to emerge in computer vision, leveraging extremely large datasets and numerous routing decisions. The success of these large-scale conditional models raises the question of whether similar results are also achievable for datasets and architectures of a smaller scale, more commonly used by practitioners (e.g. ResNet-18 on ImageNet). In this work, we introduce three ingredients to make simple single-gate MoE competitive with other state-of-the-art MoE, across a variety of architectures and dataset sizes. In particular, our training pipeline remains efficient and stable in all cases, in contrast to more complex dynamic routings, and avoids introducing new ad-hoc losses. Specifically, we make the following contributions:

* In Section 2.2, we introduce a single-gate MoE that consistently outperforms its non-mixture counterparts on various architectures. A key improvement in our proposed model is a _base network_ branch whose features facilitate the initial expert selection. We also show that this base model acts as an excellent regularizer when ensembled with specialized experts, improving the overall performance.
* In Algorithm 1, we also formulate a _simple and efficient_ training procedure for the model, which is both stable and asynchronous. These results indicate that simple single-gate MoE is a promising direction to enable conditional computing for both small training and inference computational budgets.
* Finally, in Section 2.3, we propose and evaluate a simple threshold rule to dynamically adapt the computational budget at inference, without retraining, which combines early-exiting through the base model and selecting a dynamic number of experts per sample.

## 2 Proposed Model

### The Mixture of Experts Setup for Image Classification

A Mixture of Experts (MoE) consists of a set of \(K\) _experts_, \((e_{k})_{1\dots K}\), each outputting a distribution over the target classes; The execution of these experts is conditioned by the _gate_ \(g\) (or _router_), which outputs a probability distribution over the set of experts. The total likelihood of the model on the training dataset \(\mathcal{D}\), which we want to maximize, is expressed as: \[\mathcal{L}(D)=\mathop{\mathbb{E}}_{(x,y)\sim D}\left[\sum_{k=1}^{K}g(k|x)\ e_{k}(y|x)\right] \tag{1}\] A successful MoE relies on the gate learning a decomposition of the input space across \(K\) clusters, such that experts specialize on the resulting subsets; The key underlying assumption is that this compositional approach outperforms a single model trained on the entire dataset. At inference time, the gate is thresholded, such that only one - or few - experts are executed, to control the accuracy/efficiency trade-off. Unfortunately, the standard MoE suffers from three major issues: **(i)** Because the experts only have a local view of the training set, regulated by the gate, their mixture is more prone to overfitting than a single model trained on the whole dataset. **(ii)** Jointly training the gate and experts raises a chicken-and-egg problem: The gate has to route samples to the experts most likely to classify them successfully, but weaker experts need data to improve. This problem often leads to the gate collapsing, i.e., only feeding input samples to very few experts, which defeats the purpose of using an MoE in the first place.
**(iii)** The initial data subsets defined by the gate strongly influence the expert training. Thus, a naive random initialization may even further worsen the gate collapse issue if, for instance, an expert is heavily favored over others at initialization. In this work, we propose two key changes to the MoE framework to alleviate the aforementioned issues and improve the performance of simple single-gate MoE models. **First**, we introduce a novel generic knowledge branch, which we refer to as the _base model_: This module is trained on the whole dataset, and we use it **(i)** to tackle potential overfitting: It acts as a form of regularization for the experts by ensembling their outputs with this initial base prediction; **(ii)** to initialize the gate, using the feature space induced by the base model for clustering the training samples; and **(iii)** as an early-exiting branch that avoids executing any expert when not necessary, and is conditionally activated based on the input image. **Second**, we describe a simple, lightweight training scheme that first initializes the experts' subsets by clustering the base model's embeddings and then keeps the gate and experts independent during training in order to avoid the gate collapse issue. In Section 2.2, we describe the proposed model's key components and training scheme; The model architecture is summarized in Figure 1. Then, in Section 2.3 we describe a simple conditional computing mechanism to obtain even more computationally-efficient models.

### Model Summary

**Architecture.** The **base model** \(\phi\) is a simple, ideally lightweight, network trained on the whole dataset, and is executed for every input. Its purpose is multi-fold: First, it is ensembled with the selected expert. While previous works [2, 2] often ensemble specialized experts together in MoE, we show in our experiments that ensembling one expert with this non-specialist branch is consistently more beneficial.
Second, the base model acts as an early exit output at inference time, avoiding redundant expert computations for the easier samples (see Section 2.3). Finally, we reuse the early layers of the base model as inputs to the gate and the experts, which allows us to reduce computational load even further. The **gate** \(g\) is a simple linear layer taking as input the pre-logits of the base model. At training time, it outputs a probability distribution over experts, \(g(k|x)\), allowing for direct backpropagation through these weights. At inference, we only select and execute the most probable expert, i.e., \(g_{\text{test}}(k|x)=\mathbbm{1}\left(k=\arg\max_{k^{\prime}}g(k^{\prime}|x)\right)\); \(\mathbbm{1}\left(\cdot\right)\) being the indicator function. We also discuss in Section 2.3 how we can dynamically select the number of active experts, rather than always defaulting to the top-1 expert. **Experts** are neural networks whose input is an intermediate feature map of the base model. This design choice yields two benefits: **(i)** The experts' early features are shared and frozen, which reduces the number of trainable parameters and reduces the risk of experts overfitting (in particular when training on small datasets); and **(ii)** this allows the model to reuse computations from the base model at inference time, further improving efficiency.

Figure 1: _(left)_ MoE define a gate, \(g\), that selects which expert to execute based on the current representation of input \(x\). At inference, a unique expert is picked (in **bold**). _(right)_ Our proposed architecture maintains a full-depth base model, \(\phi\), which is (i) ensembled with the expert output, (ii) used as inputs to the experts and gate, and (iii) acts as an early exit at inference. Grad-CAM [2] visuals reveal that the selected expert focuses on fine-grained details, while the base model attends to general features. The other non-selected experts produce poorly focused activation maps.
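To make the routing concrete, here is a minimal, framework-free Python sketch of the gate just described: a linear layer on the base model's pre-logits, with soft routing \(g(k|x)\) at training time and hard top-1 selection at inference. The weight and bias values used below are illustrative placeholders, not trained parameters.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw gate scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def gate_train(prelogits, W, b):
    """Soft routing g(k|x): a linear layer on the base model's pre-logits,
    followed by a softmax over the K experts (differentiable)."""
    scores = [sum(w_i * x_i for w_i, x_i in zip(row, prelogits)) + b_k
              for row, b_k in zip(W, b)]
    return softmax(scores)

def gate_test(prelogits, W, b):
    """Hard top-1 routing at inference: a one-hot indicator of the most
    probable expert, so only that expert is executed."""
    probs = gate_train(prelogits, W, b)
    k_star = max(range(len(probs)), key=lambda k: probs[k])
    return [1.0 if k == k_star else 0.0 for k in range(len(probs))]
```

At inference, only the expert selected by `gate_test` needs to be loaded and run, which is what keeps the per-sample cost close to that of a single expert.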
Finally, **ensemblers** are shallow neural networks, one for each expert, combining outputs of the base model and the expert selected by the gate. We experiment with both stacking and bagging ensembling methods. In the text, we also refer to \(e_{k}^{\prime}(y|x)=d_{k}(\phi(y|x);e_{k}(y|x))\) as the classification output of the ensembler \(d_{k}\), which ensembles the \(k\)-th expert and base model \(\phi\).

**Training Procedure.** We summarize our asynchronous training scheme in Algorithm 1, in which the gate and experts are trained independently in parallel. This training procedure relies on three key insights: **First**, to avoid gate collapse, we keep the gate weights fixed while training the experts. This makes the model heavily dependent on the gate initialization. Thus, to define a meaningful initial gate \(g_{0}\), we cluster the pretrained base model's embeddings using \(K\)-means. A similar initialization scheme has been used in the hierarchical classification literature; In contrast, we do not restrict this initial clustering step to be a hard assignment to a unique expert, nor to be on a per-class basis. A **second** issue stems from uncalibrated outputs; Training an ensembler \(d_{k}\) jointly with its expert \(e_{k}\) often leads to \(d_{k}\) heavily favoring the base model, preventing the expert from specializing. This behavior is likely due to the base model being overly confident on many training samples: In fact, this is particularly apparent on small datasets where the base model is already close to perfectly fitting the training set, e.g., on CIFAR-100. To avoid this problem, we only start training \(d_{k}\) _after_ fully training the corresponding expert \(e_{k}\). **Thirdly**, because the experts are initialized with the base model's pretrained weights, but are then trained on a specialized subset of the data given by the gate, they might "forget" classes they never see. This is similar to _catastrophic forgetting_.
While the proposed ensemblers partially alleviate this issue by providing additional regularization, we find that it is often beneficial to also route non-assigned samples to the experts: Specifically, in steps 4 and 5 of Algorithm 1, the gate \(g_{0}\) is "smoothed" using the transformation: \(\Gamma:\cdot\mapsto\text{clip}(\cdot,\gamma,1.)\), where \(\gamma\) is a hyperparameter. We experimented with **(i)** using the smoothed gate weights to re-weight the loss of all samples, including negative ones (as portrayed in Algorithm 1) or **(ii)** using these gate weights as sampling probabilities when forming the training batch. Both yield similar results, and while (i) is simpler to implement, we find that (ii) is more practical for large datasets, as it often leads to faster convergence, hence reduced training times.

**An alternative joint training scheme.** We also consider extending the training scheme to handle joint end-to-end training of the gate and experts by using the Expectation Maximization (**EM**) algorithm to alleviate the "chicken-and-egg" problem. EM alternates between two steps: (**E**) computing new gate weights, updated based on the current experts' performance, and (**M**) separately training the experts according to this new assignment, while forcing the gate to match it. Taking into account training costs, it is beneficial to keep a low number of **E** steps (\(N_{E}\)), as every update of the posterior requires synchronization across all experts. In fact, one can show that Algorithm 1 is equivalent to setting \(N_{E}=0\). In practice, we observe that larger values of \(N_{E}\) can lead to higher accuracies (e.g., on tiny-ImageNet: **+1.37%** without ensemblers, and **+0.43%** with ensemblers). However, the improved performance is often not worth the higher training costs, hence we only report results using Algorithm 1 in our experiments section.
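As an illustration of the initialization and smoothing steps described above, the following sketch clusters base-model embeddings with plain \(K\)-means (with a deterministic initialization for brevity) and builds the smoothed per-sample gate weights \(\Gamma(g_{0})=\text{clip}(g_{0},\gamma,1)\). It is a simplified stand-in for those steps of Algorithm 1, not the authors' exact implementation.

```python
def kmeans(embeddings, K, iters=20):
    """Plain Lloyd's K-means on base-model embeddings.
    For brevity, centroids are initialized to the first K points."""
    centroids = [list(e) for e in embeddings[:K]]
    assign = [0] * len(embeddings)
    for _ in range(iters):
        # Assignment step: nearest centroid (squared Euclidean distance).
        for i, e in enumerate(embeddings):
            assign[i] = min(
                range(K),
                key=lambda k: sum((a - b) ** 2 for a, b in zip(e, centroids[k])),
            )
        # Update step: recompute each centroid as the mean of its members.
        for k in range(K):
            members = [embeddings[i] for i in range(len(embeddings)) if assign[i] == k]
            if members:
                centroids[k] = [sum(col) / len(members) for col in zip(*members)]
    return assign

def smoothed_gate_weights(assign, K, gamma=0.1):
    """Gamma-smoothing of the hard initial gate g0: clip(g0, gamma, 1),
    so every expert also sees non-assigned samples with a small weight."""
    return [[1.0 if assign[i] == k else gamma for k in range(K)]
            for i in range(len(assign))]
```

Used for loss re-weighting (variant (i)), each expert \(k\) weights sample \(i\)'s loss by `smoothed_gate_weights(...)[i][k]`; used for variant (ii), the same weights serve as (unnormalized) sampling probabilities when forming the training batch.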
We describe the derivation of the EM variant and experiments in more detail in the **supplemental material**.

### Anytime Inference via Early-Exiting and Dynamic Ensembling

By design, our framework integrates a straightforward option for early-exiting by directly outputting the base model's predictions in easy cases, to improve computational efficiency further. Following previous early-exiting literature [11, 12, 13, 14], our model decides whether to early-exit or to execute the gate-selected expert by thresholding the base model's confidence at inference time. We also consider other early-exit designs in the **supplemental material**. Additionally, it is clear from Equation 1 that MoE can be viewed as an ensemble of experts weighted by the gate, rather than using only the top-1 expert as is usually done for efficiency purposes. Similar to [11, 12], we propose to threshold the gate outputs to determine which experts to include dynamically at inference. In order to combine both the early-exiting and expert ensembling behaviors under a unique thresholding rule, we introduce the quantity \(\alpha_{k}(x)=g(k|x)\left(1-\max_{y}\phi(y|x)\right)\). From a probabilistic perspective, \(\alpha_{k}(x)\) can be interpreted as the joint probability that the sample \(x\) is not early-exited, _and_ that the gate routes \(x\) to the \(k\)-th expert. Intuitively, if this quantity is below a certain threshold for all experts, it means that the base model has a high confidence and the gate does not confidently route the sample to any expert; thus we should early exit.
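A small Python sketch of this rule (using plain lists of probabilities; not the authors' code) makes the two behaviors explicit: exit early through the base model when every \(\alpha_{k}(x)\) falls below the threshold, otherwise ensemble the experts whose \(\alpha_{k}(x)\) clears it.

```python
def anytime_predict(gate_probs, base_probs, expert_probs, tau):
    """Joint early-exit / dynamic-ensembling rule.
    alpha_k = g(k|x) * (1 - max_y base(y|x)).
    Early exit iff alpha_k < tau for every expert k; otherwise return the
    gate-weighted sum over the experts with alpha_k >= tau."""
    base_conf = max(base_probs)
    alphas = [g * (1.0 - base_conf) for g in gate_probs]
    if all(a < tau for a in alphas):
        return list(base_probs)  # early exit through the base model
    n_classes = len(base_probs)
    out = [0.0] * n_classes
    for k, (a, g) in enumerate(zip(alphas, gate_probs)):
        if a >= tau:  # only the experts the gate confidently routes to
            for y in range(n_classes):
                out[y] += g * expert_probs[k][y]
    return out
```

Note that a confident base model shrinks every \(\alpha_{k}\) at once, so a single threshold \(\tau\) controls both how often the model early-exits and how many experts are ensembled when it does not.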
Thus, given a trained gate \(g\) and experts (with their ensemblers) \(e^{\prime}_{k}\), we define the **anytime model** \(p^{\text{at-}\tau}\), which combines both early-exiting and dynamic experts ensembling, as follows: \[ee(x) =1\text{ iff }\forall k\in[1,K],\ \alpha_{k}(x)<\tau \tag{2}\] \[p^{\text{at-}\tau}(y|x) =ee(x)\phi(y|x)+(1-ee(x))\sum_{k=1}^{K}\ \mathbb{1}(\alpha_{k}(x)\geq\tau)\ g(k|x)\ e^{\prime}_{k}(y|x) \tag{3}\] where \(\mathbb{1}(\cdot)\) is the indicator function, and \(\tau\in[0,1]\) is a hyperparameter. We show in experiments that varying \(\tau\) allows the model to quickly achieve a wide range of computational budgets at inference time, without any retraining.

## 3 Related Work

**Conditional computing** aims to learn a sparse connectivity pattern conditioned on the input sample. To achieve such a pattern, many works add mixture-of-experts layers at several stages throughout the network [1, 2, 3, 4, 5, 6, 7]. Unlike single-gated MoE, the increased number of routing decisions incurs some practical drawbacks: (i) At inference, a new submodel has to be loaded in memory for every routing decision, which becomes increasingly cumbersome as the number of gates increases, and (ii) all gates and experts have to be trained synchronously, which leads to very large models and training instabilities. This often leads to complex training pipelines, e.g. relying on reinforcement learning [1, 2, 4] to learn the routing mechanism. More recently, [4] has proposed a simpler k-means-like routing mechanism that evolves during training via a moving average. However, they also report that the training pipeline requires large batch sizes, and is prone to mode collapse. Simpler **single-gate mixtures of experts** have also been successfully applied to neural networks for various applications such as image classification [1], detection [2], retrieval [1] and scene recognition [2].
More recently, [3] has shown that such models can achieve significant accuracy/efficiency gains in the large-scale regime. However, their training pipeline relies on several unclear heuristics. Orthogonal to these, hierarchical classification is a subclass of MoE, in which the routing is learned on a _per-class_ basis, aiming to route all samples of a ground-truth class to the same expert. Several works [1, 2, 3, 4] directly leverage an external class taxonomy (such as WordNet [4]). A follow-up line of thought extracts such information from a pretrained classifier [4, 2], or even learns the optimal taxonomy jointly with the image representations [1, 2]. Such models have been shown to improve the efficiency/accuracy trade-off in classification tasks. However, this class-based routing is a limiting assumption, and per-sample routing has been shown to outperform hierarchical classification models when correctly parametrized [1, 2]. Finally, MoE can be seen as an **ensembling** technique whose weights are learned by the gate. While it is common to assume each sample is routed to a unique expert to maximize efficiency, some works [1, 2, 4] have considered combining several experts to boost accuracy. In contrast, we show that combining one specialized expert with the generic knowledge base model using simple ensemble methods such as averaging or linear stacking [1] is generally more efficient than ensembling multiple specialized experts.

## 4 Experiments

We perform experiments on datasets of different scales: CIFAR-100 [3], tiny-ImageNet [1] (a downscaled subset of ImageNet with 200 classes and 110k images), and ILSVRC2012 [1]. We use ResNets [1] with different depths as our main backbone architecture. For CIFAR-100 and tiny-ImageNet, we use a modified variant of ResNets which eliminates the first two downscaling operations (strided convolution and max-pooling), commonly used in the literature [1, 2, 3]; We dub it "_tiny-ResNet_" or **tr** for short.
We follow previously established training pipelines to train our baseline models, specifically [1] for CIFAR-100 and [2] for tiny-ImageNet. For ImageNet, we additionally perform experiments on MobileNetv3-small [1], and use the standard checkpoints provided in torchvision's model zoo [1] as base models. In all of our experiments, we initialize the experts with pretrained weights from the base model and train them with the same hyperparameters and data augmentations as the baseline, although using fewer training iterations (200 epochs for CIFAR-100, 100 for tiny-ImageNet and 40 for ImageNet). The features of the base model are kept frozen. In this section, to assess the benefits of our proposed model and training scheme, we compare our proposed method to **(i)** the backbone models at different depths, **(ii)** an ensembling baseline with equivalent computational cost, **(iii)** hierarchical classification, and **(iv)** two recent dynamic routing works. We also report results of an ablation experiment on using different ensembling methods. Finally, we report detailed hyperparameters used for the experiments, and further ablations in the supplemental material.

### Results on Small and Medium-scale Datasets

We first report results on CIFAR-100 and tiny-ImageNet for tiny-ResNets of different depths in Table 1. All results are reported with 20 experts, branching off the base model after the third residual block, and _without any early-exiting or dynamic ensembling_. These results show that even simple single-gate MoE can significantly improve the efficiency/accuracy trade-off over standard CNNs: Our proposed method consistently outperforms the backbone network for an equivalent MAC count. The only downside is that MoE generally has a higher parameter count at training time. Nevertheless, our asynchronous training scheme allows us to train experts independently across multiple devices efficiently.
We also observe that using a deeper base model is often more beneficial than using deeper experts in terms of accuracy.

| Base model | Expert | top-1 acc | MACs (×1e9) | #params ×1e7 (inference) | #params ×1e7 (trainable) |
|---|---|---|---|---|---|
| tr18 baseline | – | 77.95 | 0.56 | 1.12 | 1.12 |
| tr10 | tr10 | 77.96 ± 0.20 | 0.37 | 0.96 | 9.29 |
| tr10 | tr18 | 78.78 ± 0.22 | 0.52 | 1.55 | 21.1 |
| tr34 baseline | – | 78.60 | 1.16 | 2.13 | 2.13 |
| tr18 | tr10 | 79.78 ± 0.05 | 0.67 | 1.58 | 9.29 |
| tr18 | tr18 | 79.90 ± 0.22 | 0.82 | 2.17 | 21.1 |
| tr50 baseline | – | 80.10 | 1.30 | 2.37 | 2.37 |
| tr34 | tr10 | 80.48 ± 0.17 | 1.28 | 2.59 | 9.29 |

Table 1: Main CIFAR-100 (_left_) and tiny-ImageNet (_right_) results. Each number is reported over three random seeds. All settings have 20 experts, whose first three blocks are the (frozen) layers of the base model. We report accuracy and efficiency metrics (number of operations and number of parameters) across various base and expert architectures. [Only the CIFAR-100 panel of Table 1 is recoverable.]

Figure 2: Accuracy vs MACs performance of our "anytime" variant models using a simple thresholding rule on CIFAR-100 (_left_) and tiny-ImageNet (_right_).

**Comparison to ensembling.** A natural baseline to compare to is to ensemble the base model with a unique "expert" trained on the whole dataset: The resulting model has the same cost as its MoE counterpart, minus the negligible cost of the linear layer gate.
We also analyze the impact of the number of experts on the model: Adding more experts leads to higher specialization, but also more potential routing errors; thus, it is not evident that increasing the number of experts benefits the model. We report the corresponding results in Table 2: All MoE outperform the ensembling baseline, which shows the benefit of specialized experts. However, the impact of going from 10 to 20 experts is minimal: The benefit of splitting the data into specialized subsets starts to fade above 10 experts for these datasets.

**Anytime Inference Models.** We explore the effect of the anytime inference model introduced in Section 2.3. To fully understand the scope of this dynamic behavior, we evaluate the accuracy of all models across various thresholds. We then plot the convex envelope of this set of curves, as shown in Figure 2. We observe that dynamically deciding for each sample whether to early exit through the base model or to use one or more experts consistently improves the overall accuracy/efficiency trade-off. Furthermore, this simple thresholding rule allows us to quickly adapt the model's computational budget without retraining.

### Results on ImageNet

**Main results.** We report results on ImageNet experiments for ResNet and MobileNet backbones in Table 3. Our previous observations still hold: The MoE model outperforms both the ensembling baseline and the backbone, and early-exiting based on the base model's confidences helps further reduce computations for a limited drop in accuracy. We also note that, unlike ResNet18, ResNet34's and MobileNetv3's performance starts to saturate and even slightly decreases with more experts. This behavior implies that the optimal number of experts is not only a property of the dataset but also of the architecture of both the experts and the base model.
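As a back-of-the-envelope illustration of how the threshold \(\tau\) trades compute for accuracy in the anytime models (using made-up \(\alpha_{k}\) values and costs, not the paper's measurements): every sample pays the base model's cost, plus one expert cost per expert whose \(\alpha_{k}\) clears \(\tau\), so the expected per-sample cost follows directly from the \(\alpha_{k}\) values.

```python
def expected_macs(alphas_per_sample, tau, base_macs, expert_macs):
    """Average per-sample cost at threshold tau.
    alphas_per_sample[i][k] is alpha_k(x_i); zero active experts means the
    sample early-exits and only pays the base model's cost."""
    total = 0.0
    for alphas in alphas_per_sample:
        n_active = sum(1 for a in alphas if a >= tau)
        total += base_macs + n_active * expert_macs
    return total / len(alphas_per_sample)
```

Evaluating this, together with the accuracy of the corresponding predictions, over a grid of thresholds and keeping the convex envelope of the resulting (MACs, accuracy) points yields curves of the kind shown in Figure 2.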
| tr18-tr18 | CIFAR-100 | tiny-ImageNet |
|---|---|---|
| base model | 77.95 | 60.42 |
| baseline (1 expert) | 78.99 ± 0.29 | 63.83 ± 0.08 |
| 5 experts | 79.67 ± 0.14 | 65.42 ± 0.15 |
| 10 experts | 79.84 ± 0.13 | 65.72 ± 0.17 |
| 20 experts | **79.90** ± 0.22 | **66.26** ± 0.05 |

Table 2: Impact of the number of experts on the model accuracy and comparison to the ensembling baseline (equivalent to using only one expert in our model).

| ResNet18 | None | \(\tau=0.75\) | \(\tau=0.5\) |
|---|---|---|---|
| baseline (1 expert) | 71.50 | 71.50 | 71.13 |
| 4 experts | 72.17 | 72.11 | 71.68 |
| 20 experts | **72.38** | **72.38** | **71.73** |
| MACs | 2.64e9 | 2.18e9 | 2.03e9 |

Table 3: ImageNet results, (a) ResNet18 base model. [The (b) ResNet34 and MobileNetv3-small panels of Table 3 are not recoverable; surviving fragments include MACs rows of 4.42e9 / 4.06e9 and 8.13e7 / 6.83e7 / 6.36e7, and the annotation "(73.31%, 3.66 GMACs)".]

**Comparison to Dynamic Routing baselines.** In this section, we compare our model to two recent dynamic routing works. In Table 4 (_left_) we compare to DeepMoE [12] on ImageNet: The model is a two-times wider ResNet-18 trained end-to-end with a sparsity constraint forcing the router in each layer to only activate roughly half of the channels for each sample.
In Table 4 (_right_), we compare to RMN [2] on tiny-ImageNet: Each residual block is replicated 8 times (a total of \(8^{4}\) different computational paths), and for each, a routing based on a moving average of initial k-means centroids is learned. We could not compare directly to RMN on ImageNet as [2] uses a modified ResNet architecture including additional Squeeze&Excite [12] layers. Nevertheless, in terms of relative accuracy improvement and MACs, we observe that our model yields comparable trade-offs, despite using a single gate, hence significantly fewer computational paths. Furthermore, in contrast to both approaches, we can easily reach various computational budgets without retraining via early-exiting.

### Ablation experiments

In the supplemental material, we report further ablation experiments on (i) different early-exiting procedures and (ii) an alternative joint training based on Expectation-Maximization.

**Per-sample vs per-class clustering initialization.** Hierarchical classifiers are single-gated models whose routing is learned on a strict _per-class_ basis. In the literature, the class taxonomy is either based on external knowledge [12, 13, 14, 15], clustering pretrained embeddings [16, 17], or jointly learned alongside the features [12]. We implement a per-class variant of our model, following the clustering process of [16, 17]. In Table 5, we show that per-sample routing always outperforms its per-class counterpart. Furthermore, we observe a significant accuracy gain if we re-evaluate the per-class model using the samples' ground-truth classes as an oracle for predicting the correct expert. This indicates that learning the per-class routing is the main bottleneck.
In fact, a sample \((x,y)\) incorrectly mapped to an expert that has never seen class \(y\) is very detrimental to accuracy, even if this is partially compensated by the effect of ensembling. In contrast, per-sample routing allows several experts to gain knowledge about the same class and introduces a more flexible notion of diversity among experts.

| ImageNet, ResNet-18 | gates | acc. | MACs (×1e9) | #params (train) |
|---|---|---|---|---|
| ours | 1 | 72.17 | 2.64 | 5.10 |
| + early-exit \(\tau=0.75\) | 1 | 72.11 | 2.18 | 5.10 |
| DeepMoE [12] | 17 | 70.95 | 1.81 | 7.02 |

| tiny-ImageNet | gates | base acc. | acc. | MACs | #params (train) |
|---|---|---|---|---|---|
| tr18 | 1 | 60.42 | 64.66 | 2.69e9 | 5.98e7 |
| + early-exit (\(\tau=0.75\)) | 1 | 60.42 | 64.58 | 2.44e9 | 5.98e7 |
| RMN (no SE) [12] | 5 | 61.78 | 64.30 | 2.22e9 | 9.02e7 |

Table 4: Comparison to recent dynamic routing literature. On the left, we compare our 4-experts ImageNet model with Wide-DeepMoE-18 from [12]. On the right, we compare our tr18-tr10 10 experts model with the tiny-ResNet18-based model of [12], excluding additional Squeeze&Excite layers. For [12], we also report the corresponding base model accuracy as we could not reproduce their baseline training results to use in our experiments.

| top-1 acc | w/o ensembling | w/ ensembling |
|---|---|---|
| per-sample (**ours**) | **63.11** ± 0.11 | **65.72** ± 0.10 |
| per-class | 62.48 ± 0.13 | 63.85 ± 0.08 |
| per-class + _oracle_ | 68.8 ± 0.10 | 67.99 ± 0.04 |

Table 5: Comparison to hierarchical classification.
We introduce a per-class variant of our model following [12, 13], and an oracle variant in which the gate follows the true class-to-expert distribution. Results are reported on tiny-ImageNet with 10 experts, in the tr18-tr18 configuration.

#### 4.4.2 Analysing the Ensemblers' performance

In Table 6, we compare results for different ensembling schemes with equivalent computational costs: Even without ensembling, MoE outperforms the base model. Nevertheless, all ensembling methods perform strictly better than trusting the selected expert only. Second, we observe that **stacking** and **bagging** always outperform ensembling the top-2 experts, which shows the benefit of ensembling with the base model rather than another specialized expert. We use **bagging** in our experiments as it is simpler, non-parametric, and does not require training.

#### 4.4.3 Qualitative Analysis of The Experts' Specialization Pattern

Finally, we qualitatively analyze the experts' behavior: For each sample, we record which expert reaches minimal cross-entropy loss on that given sample. We then display the resulting class distribution across experts of the whole training set (see Appendix 3): The experts do end up specializing to specific subsets of the data, although, in contrast to hierarchical classification, many classes are still clearly split across several experts. Furthermore, most of these specialization patterns are consistent across the number of experts. Finally, while some experts clearly account for more classes than others, no expert is ever fully inactive.
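The tallying procedure just described is simple to reproduce; the sketch below (illustrative, with toy probabilities) records, for each sample, the expert attaining minimal cross-entropy and accumulates a per-expert class histogram.

```python
import math
from collections import Counter

def specialization_histogram(expert_probs, labels, n_experts):
    """expert_probs[k][i][y]: expert k's probability of class y on sample i.
    Returns one Counter per expert, counting the classes of the samples on
    which that expert attains the minimal cross-entropy loss."""
    hist = [Counter() for _ in range(n_experts)]
    for i, y in enumerate(labels):
        # Cross-entropy of each expert on the true label of sample i.
        losses = [-math.log(max(expert_probs[k][i][y], 1e-12))
                  for k in range(n_experts)]
        best = min(range(n_experts), key=lambda k: losses[k])
        hist[best][y] += 1
    return hist
```

Plotting each expert's Counter over the training set yields the kind of specialization maps discussed above: clusters of classes dominated by one expert, with many classes shared across several.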
Looking more closely at the routed samples, we also see that the router uncovers natural intra-class variations, as illustrated for the class king penguin in Figure 3: The class king penguin is split across (i) close-up images where the orange beak is very visible, which end up being mapped to the same expert as oranges, bell peppers, etc., which it is often confused with, and (ii) far-away images, which are instead grouped with other animal classes.

## 5 Conclusions

In this work, we revisit the single-gate Mixture of Experts (MoE) for convolutional architectures. Specifically, we augment MoE with a novel ensembling scheme and a simple asynchronous and stable training pipeline leveraging a clustering-based initialization. Our model consistently reaches higher accuracy than hierarchical classifiers and a 1-expert ensembling baseline, revealing the benefits of training specialized experts with per-sample routing. Moreover, maintaining the base model as an independent branch allows us to further save computations at inference time using a simple threshold-based conditional rule. Finally, our model is competitive with recent multi-layer MoE dynamic routing works, despite a smaller number of routers and experts, and provides a more lightweight and stable training pipeline. In the future, we plan to further improve our model's training efficiency by investigating different sampling strategies based on the gate outputs.

| base - expert | tr10-tr10 | tr10-tr18 | tr18-tr18 |
|---|---|---|---|
| base model | 56.29 | 56.29 | 60.42 |
| no ensembling | 57.80 | 59.53 | 63.82 |
| top-2 experts | 58.69 | 60.55 | 64.19 |
| stacking | **60.84** | 64.63 | 66.29 |
| bagging | 60.54 | **64.77** | **66.32** |

Table 6: Impact of the ensembler design on accuracy (tiny-ImageNet, 20 experts)

Figure 3: Per-sample routing uncovers meaningful intra-class modes.
2306.10185
Spatial-SpinDrop: Spatial Dropout-based Binary Bayesian Neural Network with Spintronics Implementation
Recently, machine learning systems have gained prominence in real-time, critical decision-making domains, such as autonomous driving and industrial automation. Their implementations should avoid overconfident predictions through uncertainty estimation. Bayesian Neural Networks (BayNNs) are principled methods for estimating predictive uncertainty. However, their computational costs and power consumption hinder their widespread deployment in edge AI. Utilizing Dropout as an approximation of the posterior distribution, binarizing the parameters of BayNNs, and implementing them in spintronics-based computation-in-memory (CiM) hardware arrays can provide a viable solution. However, designing hardware Dropout modules for convolutional neural network (CNN) topologies is challenging and expensive, as they may require numerous Dropout modules and need to use spatial information to drop certain elements. In this paper, we introduce MC-SpatialDropout, a spatial dropout-based approximate BayNN with emerging spintronic devices. Our method utilizes the inherent stochasticity of spintronic devices for efficient implementation of the spatial dropout module compared to existing implementations. Furthermore, the number of dropout modules per network layer is reduced by a factor of $9\times$ and energy consumption by a factor of $94.11\times$, while still achieving comparable predictive performance and uncertainty estimates compared to related works.
Soyed Tuhin Ahmed, Kamal Danouchi, Michael Hefenbrock, Guillaume Prenat, Lorena Anghel, Mehdi B. Tahoori
2023-06-16T21:38:13Z
http://arxiv.org/abs/2306.10185v1
Spatial-SpinDrop: Spatial Dropout-based Binary Bayesian Neural Network with Spintronics Implementation ###### Abstract Recently, machine learning systems have gained prominence in real-time, critical decision-making domains, such as autonomous driving and industrial automation. Their implementations should avoid overconfident predictions through uncertainty estimation. Bayesian Neural Networks (BayNNs) are principled methods for estimating predictive uncertainty. However, their computational costs and power consumption hinder their widespread deployment in edge AI. Utilizing Dropout as an approximation of the posterior distribution, binarizing the parameters of BayNNs, and, further to that, implementing them in spintronics-based computation-in-memory (CiM) hardware arrays can be a viable solution. However, designing hardware Dropout modules for convolutional neural network (CNN) topologies is challenging and expensive, as they may require numerous Dropout modules and need to use spatial information to drop certain elements. In this paper, we introduce MC-SpatialDropout, a spatial-dropout-based approximate BayNN with emerging spintronic devices. Our method utilizes the inherent stochasticity of spintronic devices for efficient implementation of the spatial dropout module compared to existing implementations. Furthermore, the number of dropout modules per network layer is reduced by a factor of \(9\times\) and the energy consumption by a factor of \(94.11\times\), while still achieving comparable predictive performance and uncertainty estimates compared to related works. MC-Dropout, Spatial Dropout, Bayesian neural network, Uncertainty estimation, Spintronic ## I Introduction Neural networks (NNs) are brain-inspired computational methods that, in some cases, can even outperform human counterparts [1]. Consequently, applications of NNs have increased rapidly in recent years and have become the cornerstone of modern computing paradigms. 
Furthermore, NNs are commonly deployed in real-time safety-critical tasks such as computer-aided medical diagnostics, industrial robotics, and autonomous vehicles. Conventional (point-estimate) NNs typically learn a single point value for each parameter. However, they account neither for the uncertainty in the data nor for the uncertainty in the model, leading to overconfident predictions and in turn to safety violations. This is particularly true when the data generation process is noisy or the training data is either incomplete or insufficient to capture the complexity of the actual phenomenon being modelled. In safety-critical domains where machine learning systems make human-centered decisions, an uncertainty measure is essential for informed decision-making. On the other hand, Bayesian Neural Networks (BayNNs), which put prior distributions over the model parameters and learn the posterior distribution using approximation techniques (e.g., Monte Carlo (MC)-Dropout [2]), present a systematic method for training uncertainty-aware neural networks. However, the computational costs and high-performance requirements of BayNNs can be prohibitive for edge devices. Therefore, dedicated NN hardware accelerators such as Compute-in-Memory (CiM) architectures with emerging Non-Volatile resistive Memories (NVMs) have been explored. CiM architectures enable the Matrix-Vector Multiplication (MVM) operation of NNs to be carried out directly inside the memory, overcoming the memory limitations of traditional von-Neumann architectures. Among the NVM technologies, Spin-Transfer-Torque Magnetic Random Access Memory (STT-MRAM) is particularly appealing due to its nanosecond latency, high endurance (\(10^{12}\) cycles), and low switching energy (\(10\) fJ) [3]. Additionally, algorithmic approaches such as Binarization, which typically reduces the bit precision of NNs to \(1\)-bit, lead to smaller computational time and model size. 
Therefore, they are an attractive option for BayNNs to mitigate their inherent costs. Moreover, this approach allows for the direct mapping of BayNN parameters to STT-MRAM-based CiM hardware. Existing work [4, 5] proposed to binarize the parameters of BayNNs and implement them on STT-MRAM-based CiM hardware, resulting in a highly efficient solution. Although this approach can achieve high algorithmic performance and hardware efficiency compared to existing works, designing Dropout modules in the case of convolutional NN (CNN) topologies is challenging and expensive due to the nature of the implementation. In this paper, we present an algorithm-hardware co-design approach that not only solves the challenges of implementing the Dropout-based BayNN approach, but also reduces the number of Dropout modules required per layer. The main contributions of this paper are as follows: * We propose _MC-SpatialDropout_, which uses spatial Dropout for Bayesian approximation. Our method is mathematically equivalent to the MC-Dropout-based approach, enabling uncertainty-aware predictions. * We present an STT-MRAM-based CiM architecture for the proposed _MC-SpatialDropout_-based BayNNs. Our approach leverages the inherent stochasticity of STT-MRAM for the Dropout module and its deterministic behavior for parameter storage. This allows the reuse of the array designed for conventional binary NNs (BNNs), and only the peripheral circuitry is adapted for Bayesian inference. * We also propose a reliable and adaptable sensing scheme for stochastic STT-MRAM, specifically designed to implement the dropout concept for both linear and convolutional layers. Our method targets CNN topologies and reduces the number of Dropout modules in a layer by \(9\times\) and the energy consumption by \(94.11\times\), while maintaining comparable predictive performance and uncertainty estimates. 
The remainder of this paper is organized as follows: Section II provides the background for our work, Section III describes the proposed MC-SpatialDropout, Section IV presents both the algorithmic and hardware results for our approach, and finally, in Section V, we conclude the paper. ## II Background ### _Spintronics_ MRAM devices have gained significant attention due to their fast switching, high endurance, and CMOS compatibility [6]. The main component of MRAM devices is the Magnetic Tunnel Junction (MTJ), which comprises two ferromagnetic layers: the reference layer and the free layer, separated by a thin insulating layer. The magnetization of the reference layer is fixed in one direction, while the free layer can have its magnetization reversed between two stable positions: parallel or antiparallel to that of the reference layer. The resistance of the stack depends on the relative orientations of the layer magnetizations, with a high resistance state in the antiparallel configuration and a low resistance state in the parallel configuration. ### _Uncertainty in Deep Learning_ Uncertainty estimation is vital in deep learning, especially for safety-critical applications, as it provides insight into the model's confidence in its predictions, enhancing the trustworthiness of decision-making. There are two main types of uncertainty: epistemic, which results from the limitations of the model and can be reduced with more data or improved architectures, and aleatoric, which arises from noise in the data and cannot be mitigated. Obtaining uncertainty estimates bolsters robustness by identifying out-of-distribution (OOD) data points and avoiding overconfident predictions. OOD data refers to data whose distribution is completely different from the training (in-distribution (ID)) data. In this paper, we focus on aleatoric uncertainty estimation and evaluate the effectiveness of our method for OOD detection. 
### _Bayesian NNs_ BayNNs offer a principled approach to uncertainty estimation in neural networks. Several approximation methods exist for BayNNs, such as variational inference and Markov Chain Monte Carlo methods. One popular approximation technique is Monte Carlo Dropout (MC-Dropout), which leverages dropout for Bayesian inference. Dropout [7] is a common regularization technique used to reduce overfitting and neuron co-adaptation by randomly setting neuron outputs to zero during training. The dropout operation can be described as \(\hat{\mathbf{Z}}=\mathbf{M}\odot\mathbf{Z}\), where \(\mathbf{M}\) is a binary mask generated by sampling from a Bernoulli distribution, \(\odot\) represents element-wise multiplication, and \(\mathbf{Z}\) and \(\hat{\mathbf{Z}}\) are the intermediate activation and the dropped-out intermediate activation of a layer, respectively. MC-Dropout provides an approximation of the true posterior distribution with relatively low computational and memory overhead compared to other methods such as variational inference (VI) [8] and the ensemble approach [9]. This is because the ensemble approach requires inference in multiple NNs, and VI requires learning the parameters of the variational distribution, which require storage. Since the MC-Dropout method has the same number of parameters as conventional NNs, it leads to minimal additional computation and memory requirements, making it suitable for a wide range of applications, including those with limited resources. The optimization objective for MC-Dropout can be represented as \[\mathcal{L}(\boldsymbol{\theta})_{\text{MC-Dropout}}=\mathcal{L}(\boldsymbol{ \theta},\mathcal{D})+\lambda\sum_{l=1}^{L}||\boldsymbol{\theta}_{l}||_{2}^{2} \tag{1}\] where \(\mathcal{L}(\boldsymbol{\theta},\mathcal{D})\) represents the task-specific loss function, such as categorical cross-entropy for classification or mean squared error for regression, and \(||\boldsymbol{\theta}_{l}||_{2}^{2}\) is the regularization term. 
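As a concrete illustration (ours, not from the paper), the dropout operation \(\hat{\mathbf{Z}}=\mathbf{M}\odot\mathbf{Z}\) takes a few lines of NumPy; the function name `mc_dropout` and the tensor shapes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout(z, rho, rng):
    """Element-wise dropout, Z_hat = M * Z: each activation is zeroed
    independently with probability rho; the mask M is resampled on
    every forward pass (the 'MC' in MC-Dropout)."""
    m = rng.random(z.shape) >= rho  # keep-mask ~ Bernoulli(1 - rho)
    return m * z

z = rng.standard_normal((4, 16))
z_hat = mc_dropout(z, rho=0.15, rng=rng)
# Kept entries are unchanged, dropped entries are exactly zero.
kept = z_hat != 0
assert np.all(z_hat[kept] == z[kept])
```

Repeating the call with the same `z` draws a fresh mask each time, which is exactly what the Monte Carlo averaging in Eq. (3) relies on.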
Also, \(\boldsymbol{\theta}\) summarizes all learnable parameters, i.e., \(\boldsymbol{\theta}=\{\mathbf{W}_{l},\mathbf{b}_{l}\mid l=1,\cdots,L\}\), where \(\mathbf{W}_{l}\) denote the weight matrices and \(\mathbf{b}_{l}\) the biases for the layer \(l\). During inference, dropout is applied multiple times, and the outputs are averaged to obtain the predictive distribution. Hence, the posterior predictive distribution over the output \(\mathbf{y}\), i.e., \[p(\mathbf{y}|\mathbf{x},\mathcal{D})=\int p(\mathbf{y}|\mathbf{x},\boldsymbol{ \theta})p(\boldsymbol{\theta}|\mathcal{D})d\boldsymbol{\theta} \tag{2}\] is approximated by \[p(\mathbf{y}|\mathbf{x},\mathcal{D})\approx\frac{1}{T}\sum_{t=1}^{T}p(\mathbf{ y}|\mathbf{x},\boldsymbol{\theta},\mathbf{M}_{t})\quad\text{with}\quad\mathbf{M}_{t} \sim\mathcal{B}(\rho). \tag{3}\] Here, \(\mathcal{D}\) denotes the dataset, \(\mathbf{x}\) is the input, \(\mathbf{y}\) is the output, and the entries of \(\mathbf{M}_{t}\) are independently sampled from a Bernoulli distribution with (dropout) probability \(\rho\). ### _Mapping of Convolutional Layers to CiM Architecture_ To perform the computation inside the CiM architecture, a critical step is the mapping of the different layers of the NN to crossbar arrays. Standard NNs contain mainly Fully Connected (FC) layers and convolutional layers. While the mapping of FC layers onto a crossbar array is straightforward, as the weight matrices are 2D (\(\mathbb{R}^{m\times n}\)), mapping convolutional layers is challenging due to their 4D shapes (\(\mathbb{R}^{K\times K\times C_{in}\times C_{out}}\)). Here, \(K\) denotes the kernel size, and \(C_{in}\) represents the number of input channels. Implementing convolutional layers requires implementing multiple kernels with different shapes and sizes. Two popular strategies exist for mapping the convolutional layer. 
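To make the crossbar mapping concrete, the following NumPy sketch (our illustration, not the paper's code) implements mapping strategy 1: each \(K\times K\times C_{in}\) kernel is unrolled into one crossbar column, and each moving-window position becomes one input cycle. All shapes are hypothetical; the result is checked against a direct valid convolution:

```python
import numpy as np

rng = np.random.default_rng(1)
K, C_in, C_out, H = 3, 2, 4, 5  # kernel size, channels, feature-map size
x = rng.standard_normal((C_in, H, H))
w = rng.standard_normal((C_out, C_in, K, K))

# Mapping strategy 1: each kernel (K*K*C_in values) becomes one crossbar column.
crossbar = w.reshape(C_out, -1).T            # shape (K*K*C_in, C_out)

def window(x, i, j):
    """One moving-window position -> one flattened input cycle."""
    return x[:, i:i+K, j:j+K].reshape(-1)    # K*K*C_in vector

out = np.empty((C_out, H - K + 1, H - K + 1))
for i in range(H - K + 1):
    for j in range(H - K + 1):
        out[:, i, j] = window(x, i, j) @ crossbar  # one MVM per cycle

# Reference: direct valid convolution, stride 1.
ref = np.array([[[np.sum(x[:, i:i+K, j:j+K] * w[c])
                  for j in range(H - K + 1)]
                 for i in range(H - K + 1)]
                for c in range(C_out)])
assert np.allclose(out, ref)
```

Strategy 2 would instead split `w` into \(K\times K\) smaller \(C_{in}\times C_{out}\) matrices, one per kernel position, and sum their partial MVM results.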
In mapping strategy 1, each kernel of shape \(K\times K\times C_{in}\) is unrolled to a column of the crossbar [10]. On the other hand, in mapping strategy 2, each kernel is mapped to \(K\times K\) smaller crossbars with a shape of \(C_{in}\times C_{out}\)[11]. ## III Proposed Method ### _Problem Statement and Motivation_ The convolution operation is performed differently in CiM architectures compared to GPUs. In CiM architectures, moving windows (MWs) with a shape of \(K\times K\) are applied to each input feature map (IFM) in one cycle (see Fig. 1(a)). In the next cycle, the MWs "slide over" the IFMs with a topology-defined stride \(S\), for \(N\) cycles in total. Assuming \(K>S\), some of the elements in the MWs for the next \(K-S\) cycles will be the same as in the previous cycles, a concept known as weight sharing. This is illustrated by the green input feature (IF) in Fig. 1(a). (Fig. 1: a) input feature map of a convolutional layer; b) moving windows from all the input feature maps are flattened for the conventional mapping; c) weight sharing.) The Dropout module designed in [4, 5] drops each element of the MWs with a probability \(P\) in each cycle. Therefore, it essentially re-samples the dropout mask of each MW of the IFMs in each cycle. Consequently, the dropout masks of the shared elements in the MWs will change in each input cycle, leading to inconsistency. An ideal Dropout module should only generate dropout masks for new elements of the MWs. Designing a Dropout module that drops each element of the MWs depending on the spatial location of the MWs in the IFMs is challenging and may lead to complex circuit design. Additionally, the number of rows in the crossbars typically increases from one layer to another due to the larger \(C_{in}\). Consequently, the number of Dropout modules required will be significantly higher. Furthermore, the MWs are reshaped depending on the weight mapping discussed in Section II-D. 
For mapping strategy 1, the MWs from IFMs are flattened into a vector of length \(K\times K\times C_{in}\). However, for mapping strategy 2, IFMs are flattened into \(K\times K\) vectors of length \(C_{in}\), as depicted in Fig. 1(a) and (b). As a result, designing a generalizable Dropout model is challenging. ### _MC-SpatialDropout as Bayesian Approximation_ In an effort to improve the efficiency and accuracy of Bayesian approximation techniques, we propose the MC-SpatialDropout method. The proposed MC-SpatialDropout technique expands upon the MC-Dropout [2] and MC-SpinDrop [4, 5] methods by utilizing spatial dropout as a Bayesian approximation. Our approach drops an entire feature with a probability \(p\). This means that all the elements of a feature map in Fig. 1(a) are dropped together. However, each feature map is dropped independently of the others. As a result, the number of Dropout modules required for a layer will be significantly reduced, and the design effort of the dropout module will also be lessened. The primary objective of this approach is to address the shortcomings of MC-Dropout arising from its independent treatment of elements of the features. In contrast, MC-SpatialDropout exploits the spatial correlation of IFs, which is particularly advantageous for tasks involving image or spatial data. By doing so, it facilitates a more robust and contextually accurate approximation of the posterior distribution. This enables the model to capture more sophisticated representations and account for dependencies between features. In terms of the objective function for the MC-SpatialDropout, Soyed et al. [4, 5] showed that minimizing the objective function of MC-Dropout (see Equation (1)) is not beneficial for BNNs and suggested a BNN-specific regularization term. 
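A minimal sketch (ours, with hypothetical shapes) of the key difference from element-wise dropout: spatial dropout draws one Bernoulli bit per feature map and broadcasts it over all spatial positions, so a channel is either kept whole or zeroed whole:

```python
import numpy as np

rng = np.random.default_rng(2)

def spatial_dropout(z, rho, rng):
    """Drop entire feature maps: z has shape (C, H, W); one Bernoulli
    draw per channel, broadcast over all H*W spatial positions."""
    m = rng.random((z.shape[0], 1, 1)) >= rho   # one keep-bit per channel
    return m * z

z = rng.standard_normal((256, 8, 8))
z_hat = spatial_dropout(z, rho=0.15, rng=rng)
# Each channel is either kept untouched or zeroed entirely:
for c in range(z.shape[0]):
    assert np.all(z_hat[c] == 0) or np.all(z_hat[c] == z[c])
```

Note the random-bit count: 256 draws here versus 256·8·8 for element-wise dropout, which is the intuition behind the reduced number of hardware Dropout modules.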
In this paper, instead of defining a separate loss function for MC-SpatialDropout, we define the objective function as: \[\mathcal{L}(\mathbf{\theta})_{\text{MC-SpatialDropout}}=\mathcal{L}(\mathbf{\theta}, \mathcal{D})+\lambda\sum_{l=1}^{L}||\mathbf{W}_{l}||_{2}^{2}. \tag{4}\] Therefore, the objective function is equivalent to Equation (1) for MC-Dropout. However, the second part of the objective function is the regularization term applied to the (real-valued) "proxy" weights (\(\mathbf{W}_{l}\)) of the BNN instead of the binary weights. It encourages \(\mathbf{W}_{l}\) to be close to zero. By keeping a small value for \(\lambda\), it implicitly ensures that the distribution of weights is centered around zero. Also, we normalize the weights by \[\mathbf{\hat{W}}_{l}=\frac{\mathbf{W}_{l}-\mu_{l}^{\mathbf{W}}}{\sigma_{l}^{ \mathbf{W}}}, \tag{5}\] to ensure that the weight matrix has zero mean and unit variance before binarization, where \(\mu_{l}^{\mathbf{W}}\) and \(\sigma_{l}^{\mathbf{W}}\) are the mean and standard deviation of the weight matrix of the layer \(l\). This process allows applying L2 regularization in BNN training, and [12] showed that it improves inference accuracy by reducing the quantization error. Since our work targets BNNs, regularization is only applied to the weight matrices. The difference is that our method approximates Equation (2) by: \[p(\mathbf{y}|\mathbf{x},\mathcal{D})\approx\frac{1}{T}\sum_{t=1}^{T}p(\mathbf{ y}|\mathbf{x},\mathbf{\theta},\mathbf{\hat{M}}_{t})\quad\text{with}\quad\mathbf{\hat{M}}_{t }\sim\mathcal{B}(\rho). \tag{6}\] Here, during training and Bayesian inference, the dropout mask \(\mathbf{\hat{M}}_{t}\) is sampled in a spatially correlated manner for the output feature maps (OFMs) of each layer from a Bernoulli distribution with (dropout) probability \(\rho\). The dropout masks correspond to whether a certain spatial location in the OFMs (i.e., a certain unit) is dropped or not. 
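The standardization of Eq. (5) followed by binarization can be sketched as below. This is our illustration: the sign-based binarizer is a common BNN choice and stands in for the specific algorithm of [12]:

```python
import numpy as np

rng = np.random.default_rng(3)

def binarize(w):
    """Standardize the proxy weights per layer (Eq. (5)), then take the
    sign to obtain binary weights in {-1, +1}. Sign binarization is a
    generic stand-in for the training algorithm cited in the paper."""
    w_hat = (w - w.mean()) / w.std()   # zero mean, unit variance
    return np.where(w_hat >= 0, 1.0, -1.0)

w = rng.standard_normal((64, 32)) * 3.0 + 0.5   # badly scaled proxy weights
b = binarize(w)
assert set(np.unique(b)) <= {-1.0, 1.0}
```

Because standardization centers the weights at zero, roughly half of the binary weights land on each sign regardless of the original scale and offset.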
For Bayesian inference, we perform \(T\) Monte Carlo samplings to approximate the posterior distribution. Each Monte Carlo sample corresponds to forward passing the input \(\mathbf{x}\) through the NN with a unique spatial dropout mask \(\mathbf{\hat{M}}_{t}\), \(t=1,\cdots,T\), resulting in a diverse ensemble of networks. By averaging the predictions from the Monte Carlo samples, we effectively perform Bayesian model averaging to obtain the final prediction. Proper arrangement of layers is important for MC-SpatialDropout-based Bayesian inference. The Spatial Dropout layer can be applied before each convolutional layer in a layer-wise MC-SpatialDropout method. Additionally, the Spatial Dropout layer can be applied to the extracted features of a CNN topology in a topology-wise MC-SpatialDropout method. Fig. 2 shows the block diagram for both approaches. ### _Designing the Spatial-SpinDrop Module_ As mentioned earlier, in the proposed MC-SpatialDropout, feature maps can independently be dropped with a probability \(p\). Due to the nature of input application in CiM architectures, this implicitly means dropping different regions of the crossbars, depending on the mapping strategy. This creates several challenges in designing the Dropout module for the proposed MC-SpatialDropout-based BayNN. For mapping strategy 1, as depicted in Fig. 1(b), each \(K\times K\) subset of the input comes from one feature map. This means that if an input feature is dropped, the corresponding \(K\times K\) subset of the input should also be dropped for all \(C_{out}\) and all \(N\) cycles of inputs. This implies that dropping each \(K\times K\) group of rows of a crossbar together for \(N\) cycles is equivalent to applying spatial dropout. However, each group of rows should be dropped independently of one another. Additionally, their dropout mask should be sampled only in the first cycle. For the remaining \(N-1\) cycles of input, the dropout mask should remain consistent. 
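The \(T\)-sample Bayesian model averaging of Eq. (6) can be sketched as follows. This is a toy stand-in (our own) for the real binarized CNN: a single random linear layer with a per-feature dropout mask that mimics spatial dropout on a flattened feature vector:

```python
import numpy as np

rng = np.random.default_rng(4)
W = rng.standard_normal((32, 10))   # frozen weights of the toy "network"

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def stochastic_forward(x, rng, rho=0.15):
    """One MC forward pass: a fresh per-feature dropout mask is drawn
    each call, playing the role of the spatial mask M_hat_t."""
    m = rng.random((x.shape[-1], 1)) >= rho
    return softmax(x @ (m * W))

x = rng.standard_normal((1, 32))
T = 20
samples = np.stack([stochastic_forward(x, rng) for _ in range(T)])
probs = samples.mean(axis=0)        # Bayesian model averaging, Eq. (6)
uncertainty = samples.std(axis=0)   # spread across the MC ensemble
assert np.allclose(probs.sum(), 1.0)
```

The standard deviation across the \(T\) samples is one simple per-class uncertainty signal; the paper's OOD rule in Section IV uses a percentile of the same sample set instead.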
In contrast, in mapping strategy 2 (see Fig. 1(c)), the elements of a MW are applied in parallel to each \(K\times K\) crossbar at the same index. As a result, dropping an IF would lead to dropping the same index of rows in all the \(K\times K\) crossbars together. Similarly, each row of a crossbar is dropped independently of one another, and the dropout mask is sampled at the first input cycle and remains consistent for the remaining \(N-1\) cycles of input. (Fig. 2: Block diagram of the location of the proposed MC-SpatialDropout in a) a layer-wise fashion, b) a topology-specific fashion.) Furthermore, if the spatial dropout is applied to the extracted feature maps of a CNN (see Fig. 2), then the design of the Spin-SpatialDrop will differ depending on the usage of the adaptive average pool layer. If a CNN topology does not use an adaptive average pool layer, then \(H\times W\) groups of rows are dropped together. This is because the flattening operation essentially flattens each IF into a vector. These vectors are combined into a larger vector representing the input for the classifier layer. However, since the input for the FC layer is applied in one cycle only, there is no need to hold the dropout mask. The Spin-SpatialDrop module for mapping strategy 1 can be adjusted for this condition. Lastly, if a CNN topology does use an adaptive average pool layer, then the SpinDrop module proposed by [4, 5] can be used. This is because the adaptive average pool layer averages each IF to a single point, giving a vector with \(C_{out}\) elements in total. Therefore, the Dropout module for the proposed MC-SpatialDropout should be able to work in four different configurations. Consequently, we propose a novel spintronic-based spatial Dropout design, called _Spatial-SpinDrop_. The Spatial-SpinDrop module leverages the stochastic behavior of the MTJ for spatial dropout. The proposed scheme is depicted in Fig. 3. 
In order to generate a stochastic bitstream using the MTJ, the first step involves a writing scheme that enables the generation of a bidirectional current through the device. This writing circuit consists of four transistors, allocated to a "SET" and a "RESET" module. The "SET" operation facilitates the stochastic writing of the MTJ, with a probability corresponding to the required dropout probability. On the other hand, the "RESET" operation restores the MTJ to its original state. During the reading operation of the MTJ, the resistance of the device is compared to a reference element to determine its state. The reference resistance value is chosen such that it falls between the parallel and anti-parallel resistances of the MTJ. For the reading phase, a two-stage architecture is employed for better flexibility and better control of the reading phase in the different configurations discussed earlier. The module operates as follows: after a writing step in the MTJ, the signal \(V_{pol}\) allows a small current to flow through the MTJ and the reference cell (_REF_), if and only if the signal \(hold\) is activated. Thus, the difference in resistance is translated into a difference in voltages (\(V_{MTJ}\) and \(V_{ref}\)). The second stage of the amplifier utilizes a StrongARM latch structure [13] to provide a digital representation of the MTJ state. The _Ctrl_ signal works in two phases. When _Ctrl = 0_, \(\overline{Out}\) and \(Out\) are precharged at _VDD_. Later, when _Ctrl = 1_, the discharge begins, resulting in a differential current proportional to the gate voltages (\(V_{MTJ}\) and \(V_{ref}\)). The latch converts the difference of voltage into two opposite logic states in \(\overline{Out}\) and \(Out\). Once the information from the MTJ is captured and available at the output, the signal \(hold\) is deactivated to anticipate the next writing operation. 
To enable the dropout, a series of AND gates and transmission gates are added, allowing access either to the classical decoder or to the stochastic word-line (WL). As long as the \(hold\) signal is deactivated, no further reading operation is permitted. Such a mechanism allows the structure to maintain the same dropout configuration for a given time and is used during the \(N-1\) cycles of inputs to allow the dropping of the IF in strategies 1 and 2. In the first strategy, the AND gate receives as input \(K\times K\) WLs from the same decoder, see Fig. 4(a). In strategy 2, the AND gate receives one row per decoder, as presented in Fig. 4(b). For the last two configurations, the \(hold\) signal is activated for each reading operation, eliminating the need to maintain the dropout mask for \(N-1\) cycles. ### _MC-SpatialDropout-Based Bayesian Inference in CiM_ The proposed MC-SpatialDropout-based Bayesian inference can be leveraged on the two mapping strategies discussed in Section II-D. In both strategies, one or more crossbar arrays with MTJs at each crosspoint are employed in order to encode the binary weights into the resistive states of the MTJs. (Fig. 3: (a) writing and (b) reading schemes for the MTJ. Fig. 4: Crossbar design for the MC-SpatialDropout based on mapping strategies (a) 1 and (b) 2; in (b), only the Dropout module and WL decoder are shown, everything else is abstracted.) Specifically, for mapping strategy 1, we divide the WLs of the crossbar into \(K\times K\) groups and connect one dropout module to each group, as shown in Fig. 4(a). In Fig. 3(b), this strategy involves connecting \(K\times K\) WLs to an AND gate. The AND gate receives the signal delivered by the decoder as its input. This configuration allows for the selective activation or deactivation of a group of WLs. To facilitate the activation of multiple consecutive addresses in the array, an adapted WL decoder is utilized. 
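The timing behavior described above (sample the keep-bit via a stochastic MTJ write in the first cycle, then hold it while the moving windows sweep through the remaining \(N-1\) cycles) can be modeled behaviorally. This is our sketch of the protocol, not a circuit model; all names and sizes are hypothetical:

```python
import numpy as np

class SpatialSpinDropModel:
    """Behavioral model of the Spatial-SpinDrop timing: one Bernoulli
    keep-bit per K*K row group, written stochastically on the first
    cycle and held (the 'hold' signal) for the remaining N-1 cycles."""

    def __init__(self, n_groups, rho, rng):
        self.n_groups, self.rho, self.rng = n_groups, rho, rng
        self.mask = None

    def cycle(self, first):
        if first or self.mask is None:
            # Stochastic SET after RESET: keep with probability 1 - rho.
            self.mask = self.rng.random(self.n_groups) >= self.rho
        return self.mask   # unchanged while 'hold' stays active

rng = np.random.default_rng(5)
mod = SpatialSpinDropModel(n_groups=256, rho=0.15, rng=rng)
N = 30                     # input cycles of one moving-window sweep
masks = [mod.cycle(first=(t == 0)) for t in range(N)]
# The mask stays consistent across all N cycles of the sweep:
assert all(np.array_equal(masks[0], m) for m in masks[1:])
```

Calling `cycle(first=True)` again at the start of the next MC sample re-writes the MTJs and draws a fresh mask, matching the per-sample resampling of Eq. (6).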
The bit-line and source-line drivers were used to manage the analog input and output for the MVM operation. Also, a group-wise selection of WLs is performed concurrently, and the intermediate result of the MVM operation is accumulated into an accumulator block until all the WLs are selected for each layer. We utilized MUXes to select the different bit-lines that are sensed and converted by the ADC. The shift-adder modules are used to shift and accumulate the partial sums coming from the array. Finally, a digital comparator and an averaging block are used to implement the activation function. For the last layer, the average operation is performed with the averaging block. For mapping strategy 2, a similar architecture to strategy 1 is employed. The key distinction relies upon the utilization of \(K\times K\) crossbars in parallel to map the binary weights of a layer. Also, the dropout modules are connected to the same WL index in each of the crossbar arrays, as shown in Fig. 4(b). Here, the same AND gate in the Dropout module receives signals from different decoders, and the result is sent to the corresponding row of each of the \(K\times K\) crossbars. For instance, the first WL of each crossbar of a layer connects to the same Dropout module. All the WL decoders are connected to a dropout block (in gray in Fig. 4(b)) comprising \(C_{in}\) dropout modules. It is worth mentioning that the dropout is used during the reading phase only; therefore, the dropout module is deactivated during the writing operation and the WL decoders are used normally. ## IV Results ### _Simulation Setup_ We evaluated the predictive performance of the proposed MC-SpatialDropout using the VGG, ResNet-18, and ResNet-20 topologies on the CIFAR-10 dataset. All the models were trained with the SGD optimization algorithm, minimizing the proposed learning objective (4) with \(\lambda\) chosen between \(1\times 10^{-5}\) and \(1\times 10^{-7}\), and the binarization algorithm from [12] was used. 
Also, all the models are trained with \(\rho=15\%\) dropout probability. The validation dataset of CIFAR-10 is split 80:20, with 20% of the data used for cross-validation and 80% used for evaluation. To assess the effectiveness of our method in handling uncertainty, we generated six additional OOD datasets: 1) Gaussian noise (\(\hat{\mathcal{D}}_{1}\)): each pixel of the image is generated by sampling random noise from a unit Gaussian distribution, \(\mathbf{x}\sim\mathcal{N}(0,1)\); 2) Uniform noise (\(\hat{\mathcal{D}}_{2}\)): each pixel of the image is generated by sampling random noise from a uniform distribution, \(\mathbf{x}\sim\mathcal{U}(0,1)\); 3) CIFAR-10 with Gaussian noise (\(\hat{\mathcal{D}}_{3}\)): each pixel of the CIFAR-10 images is corrupted with Gaussian noise; 4) CIFAR-10 with uniform noise (\(\hat{\mathcal{D}}_{4}\)): each pixel of the CIFAR-10 images is corrupted with uniform noise; 5) SVHN (\(\hat{\mathcal{D}}_{5}\)): the Google street view house numbers dataset; and 6) STL10 (\(\hat{\mathcal{D}}_{6}\)): a dataset containing images from the popular ImageNet dataset. Each of these OOD datasets contains \(8000\) images, and the images have the same dimensions as the original CIFAR-10 dataset (\(32\times 32\) pixels). During the evaluation phase, an input is classified as OOD or ID as follows: \[\begin{cases}\text{OOD},&\text{if }\max\left(\mathcal{Q}\left(\frac{1}{T}\sum_{t =1}^{T}\mathbf{y}_{t}\right)\right)<0.9\\ \text{ID},&\text{otherwise}.\end{cases} \tag{7}\] Here, \(\mathbf{y}_{t}\) is the softmax output of the stochastic forward pass at MC run \(t\) out of \(T\) MC runs, the function \(\mathcal{Q}(\cdot)\) calculates the 10th percentile across a set of values, and the function \(\max(\cdot)\) determines the maximum confidence score across output classes. Overall, an input is flagged as OOD when the maximum value of the 10th percentile of the averaged outputs is less than 0.9, and as ID otherwise. 
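The decision rule of Eq. (7) can be sketched in NumPy. This is our interpretation: since the order of percentile and averaging is ambiguous in the equation, we follow the textual description and take the 10th percentile across the \(T\) MC runs for each class, then compare the maximum pessimistic score against the 0.9 threshold:

```python
import numpy as np

def is_ood(y_mc, tau=0.9, q=10):
    """y_mc: (T, n_classes) softmax outputs of T stochastic forward
    passes. Flag OOD when the max class score of the q-th percentile
    across runs falls below tau (Eq. (7), our reading)."""
    low = np.percentile(y_mc, q, axis=0)   # pessimistic per-class score
    return bool(low.max() < tau)

# Confident, low-variance runs -> in-distribution:
id_runs = np.tile([0.97, 0.01, 0.02], (20, 1))
assert not is_ood(id_runs)
# Scattered, low-confidence runs -> out-of-distribution:
rng = np.random.default_rng(6)
ood_runs = rng.dirichlet(np.ones(3), size=20)
assert is_ood(ood_runs)
```

Using a low percentile instead of the mean makes the rule pessimistic: a single confident run cannot mask a high-variance (and hence uncertain) MC ensemble.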
The intuition behind our OOD detection is that the majority of the confidence scores of the \(T\) MC runs are expected to be high and close to one another (low variance) for ID data, and vice versa for OOD data. The hardware-level simulations for the proposed method were conducted with the Cadence Virtuoso simulator using the 28 nm FD-SOI STMicroelectronics technology library for the respective network topologies and dataset configurations. ### _Predictive Performance and Uncertainty Estimation_ The predictive performance of the approach is close to that of existing conventional BNNs, as shown in Table I. Furthermore, in comparison to the Bayesian approaches [4, 5], our proposed approach is within \(1\%\) accuracy. Furthermore, the application of Spatial-SpinDrop before the convolutional layer and at the extracted feature maps can also achieve comparable performance (\(\sim 0.2\%\)), see Fig. 2. This demonstrates the capability of the proposed approach in achieving high predictive performance. However, note that applying Spatial-SpinDrop before all the convolutional layers can reduce the performance drastically, e.g., the accuracy reduces to \(75\%\) on VGG. This is because at shallower layers the number of OFMs is comparatively lower, leading to a high chance that most of the OFMs are omitted (dropped). Also, as shown by [4, 5], BNNs are more sensitive to the dropout rate. Therefore, a lower Dropout probability between \(10-20\%\) is suggested. In terms of OOD detection, our proposed method can achieve up to 100% OOD detection rate across various model architectures and six different OOD datasets (\(\hat{\mathcal{D}}_{1}\) through \(\hat{\mathcal{D}}_{6}\)), as depicted in Table II. There are some variations across different architectures and OOD datasets. 
However, even in these cases, our method consistently achieves a high OOD detection rate, with the lowest detection rate being \(64.39\%\) on the ResNet-18 model with the \(\hat{\mathcal{D}}_{4}\) dataset and Spatial-SpinDrop applied to the extracted feature maps. However, when the Spatial-SpinDrop is applied to the convolutional layers of the last residual block, the OOD detection rate on the \(\hat{\mathcal{D}}_{4}\) dataset improved to \(97.39\%\), a \(33.00\%\) improvement. Therefore, we suggest applying the Spatial-SpinDrop to the last convolutional layers to achieve a higher OOD detection rate at the cost of a small accuracy reduction. Consequently, the results suggest that the MC-SpatialDropout method is a robust and reliable approach to OOD detection across various model architectures and datasets. ### _Overhead Analysis_ The proposed Spatial-SpinDrop modules were evaluated for area, power consumption, and latency, as shown in Table III, and compared with the SpinDrop approach presented in [4, 5]. These evaluations were conducted using a crossbar array with dimensions of \(64\times 32\) and scaled for the VGG topology. In the layer-wise application of spatial Dropout, the Dropout modules are applied to the convolutional layers of the last VGG block. Also, for the topology-wise application of spatial Dropout, Dropout modules are applied to the extracted feature maps. In our evaluation, a configuration of \(C_{in}=256\), \(K=3\) and \(C_{out}=512\) is used. At first, in terms of area, the SpinDrop method requires one dropout module per row in the crossbar structure, while our method only requires one dropout module per \(K\times K\) group of rows. Therefore, the area and the power consumption of the dropout modules are reduced by a factor of 9. In terms of latency for the dropout modules, we achieve \(15ns\) in all cases. 
Indeed, to generate one bit for a given group of rows, the dropout module needs to be written; this latency can be further decreased by increasing the writing voltage of the MTJ. Furthermore, when the adaptive average-pooling layer is not used, the power consumption and area of the SpinDrop approach increase greatly (\(\times 9\)), while in the proposed approach the adaptive average-pooling layer does not impact the total energy and area, as mentioned in Section III-C and shown in Table III. Table IV compares the energy consumption of the proposed approach with State-Of-The-Art implementations based on the MNIST dataset. For the evaluation, we used NVSIM, and we estimated the total energy for a LeNet-5 architecture to be consistent with the approach presented in [4]. When compared to the SpinDrop approach in [4], our approach is \(2.94\times\) more energy efficient. Furthermore, when compared to RRAM technology, our solution is \(13.67\times\) more efficient. Finally, in comparison with a classic FPGA implementation, the proposed approach achieves substantial energy savings of up to \(94.11\times\). ## V Conclusion In this paper, we present MC-SpatialDropout, an efficient spatial-dropout-based approximation for Bayesian neural networks. The proposed method exploits the probabilistic nature of spintronic technology to enable Bayesian inference. Implemented on a spintronic-based Computation-in-Memory fabric with STT-MRAM, MC-SpatialDropout achieves improved computational efficiency and power consumption. ## Acknowledgments This work was supported by a joint ANR-DFG grant Neuspin Project ANR-21-FAI1-0008.
2308.07155
Full analysis of the scalar-induced gravitational waves for the curvature perturbation with local-type non-Gaussianities
Primordial black holes (PBHs) are supposed to form through the gravitational collapse of regions with large density fluctuations. The formation of PBHs inevitably leads to the emission of scalar-induced gravitational wave (SIGW) signals, offering a unique opportunity to test the hypothesis of PBHs as a constituent of dark matter (DM). Previous studies have calculated the energy spectrum of SIGWs in local-type non-Gaussian models, primarily considering the contributions from the $F_{\mathrm{NL}}$-order or the $G_{\mathrm{NL}}$-order while neglecting connected diagrams. In this study, we extend the previous work by (i) considering the full contribution of non-Gaussian diagrams up to the $G_{\mathrm{NL}}$-order; (ii) deriving the generic scaling of the SIGW energy spectrum in the infrared region. We derive semi-analytical results applicable to arbitrary primordial power spectra and numerically evaluate the energy spectrum of SIGWs for a log-normal power spectrum.
Chen Yuan, De-Shuang Meng, Qing-Guo Huang
2023-08-14T14:09:12Z
http://arxiv.org/abs/2308.07155v2
Full analysis of the scalar-induced gravitational waves for the curvature perturbation with local-type non-Gaussianities ###### Abstract Primordial black holes (PBHs) are supposed to form through the gravitational collapse of regions with large density fluctuations. The formation of PBHs inevitably leads to the emission of scalar-induced gravitational wave (SIGW) signals, offering a unique opportunity to test the hypothesis of PBHs as a constituent of dark matter (DM). Previous studies have calculated the energy spectrum of SIGWs in local-type non-Gaussian models, primarily considering the contributions from the \(F_{\rm NL}\)-order or the \(G_{\rm NL}\)-order while neglecting connected diagrams. In this study, we extend the previous work by (i) considering the full contribution of non-Gaussian diagrams up to the \(G_{\rm NL}\)-order; (ii) deriving the generic scaling of the SIGW energy spectrum in the infrared region. We derive semi-analytical results applicable to arbitrary primordial power spectra and numerically evaluate the energy spectrum of SIGWs for a log-normal power spectrum. ## I Introduction The nature of dark matter (DM) poses a fundamental enigma in astrophysics that has puzzled researchers for decades. Although its existence can be inferred from its gravitational effects, there remains a significant dearth of knowledge regarding its composition and properties. Among the potential DM candidates, primordial black holes (PBHs) have attracted considerable attention. PBHs are hypothesized to have formed through the gravitational collapse of over-dense regions during the radiation-dominated epoch, immediately after the corresponding perturbation mode entered the horizon [1; 2; 3; 4]. The mass of PBHs is related to the comoving wavelength of the perturbation mode. Numerous studies have been conducted to constrain the abundance of PBHs across a wide mass range [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19].
However, the question of whether PBHs within the mass range of \([10^{-16},10^{-14}]M_{\odot}\) and \([10^{-13},10^{-12}]M_{\odot}\) could account for the entirety of DM remains unresolved (see e.g., [20] for review of constraints on PBHs). Non-Gaussianity, characterized by deviations from Gaussian statistics, plays a significant role in the abundance of PBHs by affecting the tail of the probability density function (PDF) of curvature perturbations [21; 22; 23; 24; 25; 26; 27; 28; 29]. As a result, PBH formation might be significantly enhanced or suppressed by non-Gaussian effects. The recent detection of gravitational waves (GWs) from the merger of two black holes by the LIGO-Virgo Collaboration [30; 31] has inaugurated the era of GW astronomy and sparked renewed interest in the potential role of PBH as constituents of DM [32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42]. When the primordial scalar power spectrum experiences amplification on small scales, the quadratic terms of linear scalar perturbations give rise to a second-order tensor mode that can overwhelm the inflationary first-order tensor mode. This second-order tensor mode is known as scalar-induced gravitational waves (SIGWs) [43; 44]. 
The SIGWs generated during the formation of PBHs provide a new way to hunt for PBHs [42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90]. ## II The energy spectrum of SIGWs Let us begin with the perturbed FRW metric in the Newtonian gauge, namely \[\mathrm{d}s^{2}=a^{2}\left\{-(1+2\phi)\mathrm{d}\eta^{2}+\left[(1-2\psi)\delta_{ij}+\frac{h_{ij}}{2}\right]\mathrm{d}x^{i}\mathrm{d}x^{j}\right\}, \tag{1}\] where \(\phi\) and \(\psi\) are the scalar modes, \(h_{ij}\) is the transverse and traceless second-order tensor mode and \(a\) is the scale factor. During the radiation-dominated (RD) period, the stress tensor is described by a perfect fluid and \(\phi=\psi\) in the absence of anisotropies. The equation of motion for \(h_{ij}\) is given by the second-order perturbative Einstein equation. In momentum space, we have \[h^{\prime\prime}_{\lambda,\mathbf{k}}(\eta)+2\mathcal{H}h^{\prime}_{\lambda,\mathbf{k}}(\eta)+k^{2}h_{\lambda,\mathbf{k}}(\eta)=4S_{\lambda,\mathbf{k}}(\eta), \tag{2}\] where \(\mathcal{H}\equiv a^{\prime}/a\) is the conformal Hubble parameter and the prime stands for the derivative with respect to the conformal time \(\eta\). The subscript \(\lambda\) indicates the two different polarization modes of gravitational waves, which are represented by \(+\) and \(\times\). The source term \(S_{\lambda,\mathbf{k}}(\eta)\) in eq.
(2) reads [43; 44] \[S_{\lambda,\mathbf{k}}(\eta)=\int\frac{\mathrm{d}^{3}q}{(2\pi)^{3}}Q_{ \lambda}(\mathbf{k},\mathbf{q})F(q,|\mathbf{k}-\mathbf{q}|,\eta)\Phi_{q}\Phi_ {|\mathbf{k}-\mathbf{q}|}, \tag{3}\] where \(F(p,q,\eta)\) is given by \[F(p,q,\eta)=3T(p\eta)T(q\eta)+\frac{1}{\mathcal{H}}\left[T^{\prime}(p\eta)T(q \eta)+T(p\eta)T^{\prime}(q\eta)\right]+\frac{1}{\mathcal{H}^{2}}T^{\prime}(p \eta)T^{\prime}(q\eta), \tag{4}\] and \(T(k\eta)\) is the transfer function, encoding the linear evolution of the scalar mode \(\phi_{k}\) after re-entering the horizon following the end of inflation and is given by the first-order Einstein equation, namely \[\phi_{k}(\eta)\equiv\Phi_{k}T(k\eta)=\Phi_{k}\frac{9}{(k\eta)^{2}}\left(\frac {\sin(k\eta/\sqrt{3})}{k\eta/\sqrt{3}}-\cos(k\eta/\sqrt{3})\right), \tag{5}\] where \(\Phi_{k}\) represents the initial value of \(\phi_{k}\) when it enters the horizon and is also the value of \(\phi_{k}\) at the end of inflation because scalar perturbation remains conserved on super-horizon scales. Note that \(F(p,q,\eta)\) is symmetric for \(p\) and \(q\) and unbolded symbols represent the modulus of a vector, and the same convention applies below in this paper. The projection factor \(Q_{\lambda}(\mathbf{k},\mathbf{q})\) in Eq. (3) is defined by \[Q_{\lambda}(\mathbf{k},\mathbf{q})\equiv e_{ij}^{\lambda}(\mathbf{k})q_{i}q_{ j}, \tag{6}\] where the polarization tensors are defined as \(e_{ij}^{+}=(e_{i}e_{j}-\bar{e}_{i}\bar{e}_{j})/\sqrt{2}\) and \(e_{ij}^{\times}=(e_{i}\bar{e}_{j}+\bar{e}_{i}e_{j})/\sqrt{2}\) and \(e(\mathbf{k})\) and \(\bar{e}(\mathbf{k})\) are a pair of orthogonal basis vectors perpendicular to \(\mathbf{k}\). It obeys the following symmetries: \[Q_{\lambda}(\mathbf{k},\mathbf{q})=Q_{\lambda}(\mathbf{k},\mathbf{q}\pm \mathbf{k})=Q_{\lambda}(-\mathbf{k},\mathbf{q})=Q_{\lambda}(\mathbf{k},- \mathbf{q})=Q_{\lambda}(-\mathbf{k},-\mathbf{q}). 
\tag{7}\] Here we choose \(e=(1,0,0)\), \(\bar{e}=(0,1,0)\) and \(\mathbf{k}=(0,0,k)\), and we can express the vector \(\mathbf{q}\) explicitly as \[\mathbf{q}=q(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta). \tag{8}\] Then we can directly write out \[Q_{\lambda}(\mathbf{k},\mathbf{q})=\frac{q^{2}}{\sqrt{2}}\sin^{2}\theta\times\left\{\begin{array}{ll}\cos(2\phi),&\lambda=+\\ \sin(2\phi),&\lambda=\times\end{array}\right.. \tag{9}\] Eq. (2) can be solved with the Green's function: \[h_{\lambda,\mathbf{k}}(\eta)=\frac{4}{a(\eta)}\int_{0}^{\eta}g_{k}(\eta;\eta^{\prime})a(\eta^{\prime})S_{\lambda,\mathbf{k}}(\eta^{\prime})\mathrm{d}\eta^{\prime}, \tag{10}\] where the Green's function takes the form \(g_{k}(\eta;\eta^{\prime})=\frac{1}{k}\sin(k\eta-k\eta^{\prime})\) during RD. The power spectrum, \(P_{\lambda}\left(k,\eta\right)\), and the dimensionless power spectrum of GWs, \(\mathcal{P}_{\lambda}\left(k,\eta\right)\), are defined as \[\left\langle h_{\lambda,\mathbf{k}}(\eta)h_{\lambda^{\prime},\mathbf{k}^{\prime}}(\eta)\right\rangle=(2\pi)^{3}\delta^{3}\left(\mathbf{k}+\mathbf{k}^{\prime}\right)\delta^{\lambda\lambda^{\prime}}P_{\lambda}\left(k,\eta\right)=(2\pi)^{3}\delta^{3}\left(\mathbf{k}+\mathbf{k}^{\prime}\right)\delta^{\lambda\lambda^{\prime}}\frac{2\pi^{2}}{k^{3}}\mathcal{P}_{\lambda}\left(k,\eta\right). \tag{11}\] The energy density of GWs, \(\Omega_{\mathrm{GW}}(k,\eta)\), is an observable quantity, defined as the energy of GWs per logarithmic frequency interval normalized by the critical energy density, \(\rho_{c}(\eta)\), and takes the form \[\Omega_{\mathrm{GW}}(k,\eta)\equiv\frac{1}{\rho_{c}}\frac{\mathrm{d}\rho_{\mathrm{GW}}}{\mathrm{d}\ln k}=\frac{1}{48}\left(\frac{k}{\mathcal{H}}\right)^{2}\sum_{\lambda=+,\times}\overline{\mathcal{P}_{\lambda}\left(k,\eta\right)}, \tag{12}\] where the overbar denotes the time average.
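As a quick numerical sanity check of the transfer function in Eq. (5), a minimal sketch confirming that \(T(k\eta)\to 1\) on super-horizon scales and decays with oscillations deep inside the horizon:

```python
import numpy as np

def transfer(x):
    """RD transfer function T(k*eta) of Eq. (5), with x = k*eta."""
    y = x / np.sqrt(3.0)  # argument k*eta/sqrt(3) set by the sound speed
    return 9.0 / x**2 * (np.sin(y) / y - np.cos(y))

print(transfer(1e-3))   # close to 1 on super-horizon scales
print(transfer(100.0))  # suppressed as ~1/x^2, oscillating
```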
The density parameter at the matter-radiation equality is \(\Omega_{\mathrm{GW}}(k)\simeq\Omega_{\mathrm{GW}}(k,k\eta\to\infty)\) and the quantity that would be observed today can be obtained by \(\Omega_{\mathrm{GW},0}(k)=\Omega_{r}\times\Omega_{\mathrm{GW}}(k)\), where \(\Omega_{r}\) is the density parameter of radiation today. Using Eq. (3), Eq. (10), Eq. (11) and Eq. (12), \(\Omega_{\mathrm{GW}}(k)\) can be expressed by \[\Omega_{\mathrm{GW}}(k)=\frac{k^{3}}{6\pi^{2}}\left(\frac{k}{\mathcal{H}}\right)^{2}\sum_{\lambda=+,\times}\int\frac{\mathrm{d}^{3}q\mathrm{d}^{3}q^{\prime}}{(2\pi)^{6}}Q_{\lambda}\left(\mathbf{k},\mathbf{q}\right)Q_{\lambda}\left(\mathbf{k}^{\prime},\mathbf{q}^{\prime}\right)\overline{\tilde{I}\left(q,\left|\mathbf{k}-\mathbf{q}\right|,k\eta\to\infty\right)\tilde{I}\left(q^{\prime},\left|\mathbf{k}^{\prime}-\mathbf{q}^{\prime}\right|,k\eta\to\infty\right)}\times\left\langle\left\langle\Phi_{\mathbf{q}}\Phi_{\mathbf{k}-\mathbf{q}}\Phi_{\mathbf{q}^{\prime}}\Phi_{\mathbf{k}^{\prime}-\mathbf{q}^{\prime}}\right\rangle\right\rangle, \tag{13}\] where we define \(\left\langle\left\langle\Phi_{\mathbf{q}}\Phi_{\mathbf{k}-\mathbf{q}}\Phi_{\mathbf{q}^{\prime}}\Phi_{\mathbf{k}^{\prime}-\mathbf{q}^{\prime}}\right\rangle\right\rangle\) as the remaining part after extracting \((2\pi)^{3}\delta^{3}\left(\mathbf{k}+\mathbf{k}^{\prime}\right)\) from the terms containing \(\delta^{3}\left(\mathbf{k}+\mathbf{k}^{\prime}\right)\) in the four-point function \(\left\langle\Phi_{\mathbf{q}}\Phi_{\mathbf{k}-\mathbf{q}}\Phi_{\mathbf{q}^{\prime}}\Phi_{\mathbf{k}^{\prime}-\mathbf{q}^{\prime}}\right\rangle\), i.e.
\[\left\langle\Phi_{\mathbf{q}}\Phi_{\mathbf{k}-\mathbf{q}}\Phi_{\mathbf{q}^{\prime}}\Phi_{\mathbf{k }^{\prime}-\mathbf{q}^{\prime}}\right\rangle\equiv(2\pi)^{3}\delta^{3}\left(\mathbf{k }+\mathbf{k}^{\prime}\right)\left\langle\left\langle\Phi_{\mathbf{q}}\Phi_{\mathbf{k}-\bm {q}}\Phi_{\mathbf{q}^{\prime}}\Phi_{\mathbf{k}^{\prime}-\mathbf{q}^{\prime}}\right\rangle \right\rangle, \tag{14}\] and the kernel function \(\tilde{I}(p,q,\eta)\) is defined as \[\tilde{I}(p,q,\eta)\equiv\int\mathrm{d}\eta^{\prime}\frac{a(\eta^{\prime})}{ a(\eta)}g_{k}(\eta;\eta^{\prime})F(p,q,\eta^{\prime}), \tag{15}\] which contains all the time-dependent terms. By substituting Eq. (4), Eq. (5), Eq. (9) and Eq. (15) into Eq. (13), applying coordinate transformations \(u=q/k\), \(v=|\mathbf{k}-\mathbf{q}|/k\) and \(u^{\prime}=q^{\prime}/k\), \(v^{\prime}=|\mathbf{k}-\mathbf{q}^{\prime}|/k\), and then averaging over time, we obtain \(\Omega_{\mathrm{GW}}(k)\): \[\Omega_{\mathrm{GW}}(k)=\frac{k^{3}}{6\pi^{2}}\int\frac{\mathrm{d}^{3}q \mathrm{d}^{3}q^{\prime}}{(2\pi)^{6}}\cos 2(\phi-\phi^{\prime})I\left(u,v\right)I \left(u^{\prime},v^{\prime}\right)\left\langle\left\langle\zeta_{\mathbf{q}}\zeta_{ \mathbf{k}-\mathbf{q}}\zeta_{\mathbf{q}^{\prime}}\zeta_{\mathbf{k}^{\prime}-\mathbf{q}^{\prime}} \right\rangle\right\rangle, \tag{16}\] where we have transformed the scalar perturbation \(\Phi\) into the comoving curvature perturbation \(\zeta\) using the relation \(\Phi=(2/3)\zeta\) and absorbed the coefficient \(16/81\) and the remaining projection term after removing \(\cos 2(\phi-\phi^{\prime})\) into the kernel function. 
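The closed form of the projection factor in Eq. (9) can be verified directly against the definition \(Q_{\lambda}(\mathbf{k},\mathbf{q})=e_{ij}^{\lambda}(\mathbf{k})q_{i}q_{j}\) of Eq. (6), with \(\mathbf{q}\) parametrized as in Eq. (8); a short numerical sketch:

```python
import numpy as np

# Orthonormal basis perpendicular to k = (0, 0, k), as chosen in the text.
e, ebar = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
e_plus = (np.outer(e, e) - np.outer(ebar, ebar)) / np.sqrt(2)
e_cross = (np.outer(e, ebar) + np.outer(ebar, e)) / np.sqrt(2)

q, theta, phi = 1.7, 0.8, 2.3  # arbitrary test point
qv = q * np.array([np.sin(theta) * np.cos(phi),
                   np.sin(theta) * np.sin(phi),
                   np.cos(theta)])

Q_plus = qv @ e_plus @ qv    # e_ij^+ q_i q_j
Q_cross = qv @ e_cross @ qv  # e_ij^x q_i q_j

# Closed forms of Eq. (9)
expected_plus = q**2 / np.sqrt(2) * np.sin(theta)**2 * np.cos(2 * phi)
expected_cross = q**2 / np.sqrt(2) * np.sin(theta)**2 * np.sin(2 * phi)
print(np.isclose(Q_plus, expected_plus), np.isclose(Q_cross, expected_cross))
```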
The resulting new kernel function can be expressed as \[I\left(u,v\right)I\left(u^{\prime},v^{\prime}\right)= \frac{9\left(u^{2}+v^{2}-3\right)\left(u^{\prime 2}+v^{\prime 2}-3\right)}{1024u^{3}u^{\prime 3}v^{3}v^{\prime 3}}\left[4u^{2}-\left(u^{2}-v^{2}+1\right)^{2}\right]\left[4u^{\prime 2}-\left(u^{\prime 2}-v^{\prime 2}+1\right)^{2}\right] \tag{17}\] \[\times\Bigg\{\left[\left(u^{2}+v^{2}-3\right)\ln\left(\left|\frac{(u-v)^{2}-3}{(u+v)^{2}-3}\right|\right)+4uv\right]\left[\left(u^{\prime 2}+v^{\prime 2}-3\right)\ln\left(\left|\frac{(u^{\prime}-v^{\prime})^{2}-3}{(u^{\prime}+v^{\prime})^{2}-3}\right|\right)+4u^{\prime}v^{\prime}\right]\] \[+\pi^{2}\left(u^{2}+v^{2}-3\right)\left(u^{\prime 2}+v^{\prime 2}-3\right)\Theta\left(u+v-\sqrt{3}\right)\Theta\left(u^{\prime}+v^{\prime}-\sqrt{3}\right)\Bigg\},\] where \(\Theta\) is the Heaviside function. It is worth noting that in Eq. (16), we have retained the momentum dependence in the integral variables and the four-point function without making a change of variables. This is for the convenience of future calculations when dealing with non-Gaussianity. The following relations will be frequently used in subsequent calculations. The transformation relation for the integral variables is as follows: \[\int\mathrm{d}^{3}q\to\int_{0}^{\infty}\mathrm{d}u\int_{|1-u|}^{1+u}\mathrm{d}v\int_{0}^{2\pi}\mathrm{d}\phi\,uvk^{3}, \tag{18}\] and \(\cos\theta\) and \(\sin\theta\) can be expressed by \[\cos\theta=\frac{1+u^{2}-v^{2}}{2u},\quad\sin\theta=\sqrt{1-\frac{(1+u^{2}-v^{2})^{2}}{4u^{2}}}.
\tag{19}\] ## III GWs induced by local-type non-Gaussian curvature perturbations The local-type non-Gaussian curvature perturbation \(\zeta\) is expanded in terms of the Gaussian part \(\zeta_{g}\) in real space as \[\zeta\left(\zeta_{g}\right)=\zeta_{g}+F_{\rm NL}\left(\zeta_{g}^{2}-\left\langle \zeta_{g}^{2}\right\rangle\right)+G_{\rm NL}\zeta_{g}^{3}, \tag{20}\] where \(F_{\rm NL}\) and \(G_{\rm NL}\) are the dimensionless non-Gaussian parameters, related to the commonly used notations \(f_{\rm NL}\) and \(g_{\rm NL}\) by \(F_{\rm NL}\equiv 3/5f_{\rm NL}\) and \(G_{\rm NL}\equiv 9/25g_{\rm NL}\) respectively. In momentum space, the curvature perturbation is expanded by convolution of the Gaussian part \[\zeta_{\mathbf{k}}=\zeta_{g}(\mathbf{k})+F_{\rm NL}\int\frac{{\rm d}^{3}p}{(2\pi)^{3}} \zeta_{g}(\mathbf{p})\zeta_{g}(\mathbf{k}-\mathbf{p})+G_{\rm NL}\int\frac{{\rm d}^{3}p_{1} {\rm d}^{3}p_{2}}{(2\pi)^{6}}\zeta_{g}(\mathbf{p_{1}})\zeta_{g}(\mathbf{p_{2}})\zeta_{ g}(\mathbf{k}-\mathbf{p_{1}}-\mathbf{p_{2}}). \tag{21}\] Note that we neglect the Fourier transform of the constant term \(F_{\rm NL}\left\langle\zeta_{g}^{2}\right\rangle\) since this term leads to \(\delta(\mathbf{k})\) and does not contribute to the SIGW power spectrum in the following calculation. The power spectrum \(P_{g}(k)\) and the dimensionless power spectrum \(\mathcal{P}_{g}(k)\) of the Gaussian part curvature perturbation are defined as \[\left\langle\zeta_{g}\left(\mathbf{k}\right)\zeta_{g}\left(\mathbf{k}^{\prime}\right) \right\rangle=(2\pi)^{3}\delta^{3}\left(\mathbf{k}+\mathbf{k}^{\prime}\right)P_{g} \left(k\right)=(2\pi)^{3}\delta^{3}\left(\mathbf{k}+\mathbf{k}^{\prime}\right)\frac{2 \pi^{2}}{k^{3}}\mathcal{P}_{g}\left(k\right). \tag{22}\] The influence of non-Gaussianity in curvature perturbations on the GWs spectrum is manifested in the four-point function in Eq. (16). By substituting Eq. (21) into Eq. 
(16) and employing Wick's theorem, we can obtain the total GWs energy density spectrum up to the \(G_{\rm NL}\) order in the local non-Gaussian expansion. Next, we will decompose the GWs spectrum into different powers of \(F_{\rm NL}\) and \(G_{\rm NL}\). ### Gaussian part The leading order is the Gaussian part, in which case we have \[\left\langle\zeta_{\mathbf{q}}\zeta_{\mathbf{k}-\mathbf{q}}\zeta_{\mathbf{q}^{\prime}}\zeta_{\mathbf{k}^{\prime}-\mathbf{q}^{\prime}}\right\rangle_{g}=\left\langle\zeta_{g}\left(\mathbf{q}\right)\zeta_{g}\left(\mathbf{q}^{\prime}\right)\right\rangle\left\langle\zeta_{g}\left(\mathbf{k}-\mathbf{q}\right)\zeta_{g}\left(\mathbf{k}^{\prime}-\mathbf{q}^{\prime}\right)\right\rangle+\left\langle\zeta_{g}\left(\mathbf{q}\right)\zeta_{g}\left(\mathbf{k}^{\prime}-\mathbf{q}^{\prime}\right)\right\rangle\left\langle\zeta_{g}\left(\mathbf{k}-\mathbf{q}\right)\zeta_{g}\left(\mathbf{q}^{\prime}\right)\right\rangle+\left\langle\zeta_{g}\left(\mathbf{q}\right)\zeta_{g}\left(\mathbf{k}-\mathbf{q}\right)\right\rangle\left\langle\zeta_{g}\left(\mathbf{q}^{\prime}\right)\zeta_{g}\left(\mathbf{k}^{\prime}-\mathbf{q}^{\prime}\right)\right\rangle, \tag{23}\] where the third term on the right-hand side of the above equation is zero because it corresponds to a disconnected diagram that does not contribute to the physical mechanism, and it also does not contain the \(\delta^{3}\left(\mathbf{k}+\mathbf{k}^{\prime}\right)\) term. Due to symmetry, the contributions of the first two terms on the right-hand side of the above equation are equal. By substituting this equation into Eq. (16), we can obtain the Gaussian part of the GWs spectrum \[\Omega_{\rm GW}^{g}(k)=\frac{k^{3}}{6\pi^{2}}\int\frac{{\rm d}^{3}q}{(2\pi)^{3}}I^{2}\left(u,v\right)P_{g}\left(q\right)P_{g}\left(\left|\mathbf{k}-\mathbf{q}\right|\right)\times 2=\frac{1}{3}\int_{0}^{\infty}{\rm d}u\int_{\left|1-u\right|}^{1+u}{\rm d}v\,\frac{I^{2}(u,v)}{u^{2}v^{2}}\mathcal{P}_{g}\left(uk\right)\mathcal{P}_{g}\left(vk\right). \tag{24}\] ### \(F_{\rm NL}^{2}\) terms For terms containing \(F_{\rm NL}^{2}\), considering symmetry, the GWs spectrum can be expressed in the following form: \[\Omega_{\rm GW}^{F_{\rm NL}^{2}}(k)=\frac{F_{\rm NL}^{2}k^{3}}{6\pi^{2}}\int\frac{{\rm d}^{3}q{\rm d}^{3}q^{\prime}}{(2\pi)^{6}}\cos 2(\phi-\phi^{\prime})I\left(u,v\right)I\left(u^{\prime},v^{\prime}\right)\int\frac{{\rm d}^{3}p_{1}{\rm d}^{3}p_{2}}{(2\pi)^{6}}\Big[4\left\langle\left\langle\zeta_{g}(\mathbf{p_{1}})\zeta_{g}(\mathbf{q-p_{1}})\zeta_{g}(\mathbf{k-q})\zeta_{g}(\mathbf{p_{2}})\zeta_{g}(\mathbf{q^{\prime}-p_{2}})\zeta_{g}(\mathbf{k^{\prime}-q^{\prime}})\right\rangle\right\rangle+2\left\langle\left\langle\zeta_{g}(\mathbf{p_{1}})\zeta_{g}(\mathbf{q-p_{1}})\zeta_{g}(\mathbf{p_{2}})\zeta_{g}(\mathbf{k-q-p_{2}})\zeta_{g}(\mathbf{q^{\prime}})\zeta_{g}(\mathbf{k^{\prime}-q^{\prime}})\right\rangle\right\rangle\Big]. \tag{25}\] Performing Wick contraction on the above two six-point functions, there are three distinct non-zero contractions, denoted as the 'hybrid' term, the 'Z' term, and the 'C' term as named in ref. [114]. Then we have \[\Omega_{\rm GW}^{F_{\rm NL}^{2}}(k)=\Omega_{\rm GW}^{hybrid}(k)+\Omega_{\rm GW}^{C}(k)+\Omega_{\rm GW}^{Z}(k). \tag{26}\] Note that the 'hybrid' term is a disconnected term, while the 'C' term and the 'Z' term are connected terms. Ref. [82] omitted all disconnected terms. We now demonstrate each of these three parts in detail. For the 'hybrid' term, one example of the contraction is shown as follows: \[\left\langle\contraction{\zeta_{g}(\mathbf{p_{1}})\zeta_{g}(\mathbf{q-p_{1}})\zeta_{g}(\mathbf{k-q})\zeta_{g}(\mathbf{p_{2}})\zeta_{g}(\mathbf{q^{\prime}-p_{2}})\zeta_{g}(\mathbf{k^{\prime}-q^{\prime}})}{\zeta_{g}(\mathbf{k^{\prime}-q^{\prime}})}\right\rangle, \tag{27}\] which is commonly referred to as a disconnected diagram and the term \(\delta^{3}(\mathbf{q+q^{\prime}})\) is present. According to symmetry, there are two other contractions that yield the same result. Therefore, we need to multiply by a symmetry factor, which in this case is 2.
The calculation of a disconnected diagram is relatively straightforward, because in this case \(\cos 2(\phi-\phi^{\prime})=1\) and thus disappears in the integral. Expanding the correlation function and using the appearing delta functions to eliminate redundant integrals, we can obtain: \[\Omega_{\rm GW}^{hybrid}(k)= \frac{F_{\rm NL}^{2}k^{3}}{6\pi^{2}}\int\frac{{\rm d}^{3}q}{(2\pi) ^{3}}I^{2}\left(u,v\right)\int\frac{{\rm d}^{3}p_{1}}{(2\pi)^{3}}4P_{g}\left(p _{1}\right)P_{g}\left(\left|\mathbf{q-p_{1}}\right|\right)P_{g}\left(\left|\mathbf{k-q }\right|\right)\times 2 \tag{28}\] \[= \frac{2F_{\rm NL}^{2}}{3}\int_{0}^{\infty}{\rm d}u\int_{\left|1-u \right|}^{1+u}{\rm d}v\int_{0}^{\infty}{\rm d}u_{1}\int_{\left|1-u_{1}\right|}^ {1+u_{1}}{\rm d}v_{1}I^{2}(u,v)\frac{1}{u^{2}v^{2}u_{1}^{2}v_{1}^{2}}{\cal P}_ {g}\left(u_{1}uk\right){\cal P}_{g}\left(v_{1}uk\right){\cal P}_{g}\left(vk \right),\] where the second equality in the above equation is obtained by performing the coordinate transformation \(u_{1}=p_{1}/q\) and \(v_{1}=\left|\mathbf{q-p_{1}}\right|/q\). For the 'Z' term, one example of the contraction is shown as follows: \[\left\langle\contraction{\zeta_{g}(\mathbf{p_{1}})\zeta_{g}(\mathbf{q-p_{1}})\zeta_{g}(\bm {k-q})\zeta_{g}(\mathbf{p_{2}})\zeta_{g}(\mathbf{q^{\prime}-p_{2}})\zeta_{g}(\mathbf{k^{ \prime}-q^{\prime}})}{\zeta_{g}(\mathbf{k^{\prime}-q^{\prime}})}\right\rangle, \tag{29}\] which is commonly referred to as a connected diagram and the term \(\delta^{3}(\mathbf{q+q^{\prime}})\) is not satisfied. The symmetry factor in this case is 4. Then the calculation will be more complicated than the disconnected diagram because \(\cos 2(\phi-\phi^{\prime})\) will be retained in the integral. 
In this case, we have \[\Omega_{\rm GW}^{Z}(k)= \frac{F_{\rm NL}^{2}k^{3}}{6\pi^{2}}\int\frac{{\rm d}^{3}q{\rm d}^{3}q^{\prime}}{(2\pi)^{6}}\cos 2(\phi-\phi^{\prime})I\left(u,v\right)I\left(u^{\prime},v^{\prime}\right)4P_{g}\left(\left|\mathbf{k-q}\right|\right)P_{g}\left(\left|\mathbf{k-q^{\prime}}\right|\right)P_{g}\left(\left|\mathbf{k-q-q^{\prime}}\right|\right)\times 4\] \[= \frac{F_{\rm NL}^{2}}{3\pi^{2}}\int_{0}^{\infty}{\rm d}u\int_{\left|1-u\right|}^{1+u}{\rm d}v\int_{0}^{\infty}{\rm d}u^{\prime}\int_{\left|1-u^{\prime}\right|}^{1+u^{\prime}}{\rm d}v^{\prime}\int_{0}^{2\pi}{\rm d}\phi\int_{0}^{2\pi}{\rm d}\phi^{\prime}\cos 2(\phi-\phi^{\prime})I(u,v)I(u^{\prime},v^{\prime})\frac{uvu^{\prime}v^{\prime}}{v^{3}v^{\prime 3}w_{012}^{3}}\times{\cal P}_{g}\left(vk\right){\cal P}_{g}\left(v^{\prime}k\right){\cal P}_{g}\left(w_{012}k\right)\] \[= \frac{2F_{\rm NL}^{2}}{3\pi}\int_{0}^{\infty}{\rm d}u\int_{\left|1-u\right|}^{1+u}{\rm d}v\int_{0}^{\infty}{\rm d}u^{\prime}\int_{\left|1-u^{\prime}\right|}^{1+u^{\prime}}{\rm d}v^{\prime}\int_{0}^{2\pi}{\rm d}\varphi_{1}\cos 2\varphi_{1}I(u,v)I(u^{\prime},v^{\prime})\frac{uvu^{\prime}v^{\prime}}{v^{3}v^{\prime 3}w_{012}^{3}}\times{\cal P}_{g}\left(vk\right){\cal P}_{g}\left(v^{\prime}k\right){\cal P}_{g}\left(w_{012}k\right), \tag{30}\] where the third equality is obtained by performing a coordinate transformation \(\varphi_{1}=\phi-\phi^{\prime}\) and \(\varphi_{2}=\phi+\phi^{\prime}\), and then we have \[\int_{0}^{2\pi}{\rm d}\phi\int_{0}^{2\pi}{\rm d}\phi^{\prime}\to\frac{1}{2}\int_{0}^{2\pi}{\rm d}\varphi_{1}\int_{0}^{4\pi}{\rm d}\varphi_{2}=2\pi\int_{0}^{2\pi}{\rm d}\varphi_{1}, \tag{31}\] as \(\varphi_{2}\) does not appear in the integral.
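The reduction of the double azimuthal integral in Eq. (31) holds for any integrand that depends only on \(\phi-\phi^{\prime}\); a quick numerical check with the test function \(g(x)=\cos^{2}(2x)\), for which both sides equal \(2\pi^{2}\):

```python
import numpy as np

g = lambda x: np.cos(2 * x) ** 2  # any periodic function of phi - phi'

n = 400
phi = np.linspace(0, 2 * np.pi, n, endpoint=False)
dphi = 2 * np.pi / n

# Left side: double integral over phi and phi' (periodic rectangle rule).
lhs = sum(g(p - phi).sum() for p in phi) * dphi**2
# Right side: 2*pi times a single integral over varphi_1 = phi - phi'.
rhs = 2 * np.pi * g(phi).sum() * dphi

print(np.isclose(lhs, rhs), np.isclose(lhs, 2 * np.pi**2))
```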
Additionally, \(w_{012}\) is defined as follows: \[w_{012}^{2}=\frac{\left|\mathbf{k-q-q^{\prime}}\right|^{2}}{k^{2}}=1+u^{2}+u^{ \prime 2}+2uu^{\prime}(\sin\theta\sin\theta^{\prime}\cos\varphi_{1}+\cos\theta \cos\theta^{\prime})-2u\cos\theta-2u^{\prime}\cos\theta^{\prime}, \tag{32}\] where we have used the following relations \[\mathbf{q}\cdot\mathbf{q^{\prime}} =uu^{\prime}k^{2}\left[\sin\theta\sin\theta^{\prime}(\cos\phi\cos \phi^{\prime}+\sin\phi\sin\phi^{\prime})+\cos\theta\cos\theta^{\prime}\right], \tag{33}\] \[\mathbf{k}\cdot\mathbf{q} =uk^{2}\cos\theta,\] (34) \[\mathbf{k}\cdot\mathbf{q^{\prime}} =u^{\prime}k^{2}\cos\theta^{\prime}, \tag{35}\] and Eq. (19) to replace \(\sin\) and \(\cos\). As to the 'C' term, one example of the contraction is shown as follows: \[\left\langle\contraction{\zeta_{g}(\mathbf{p_{1}})\zeta_{g}(\mathbf{q-p_{1}})\zeta_{g}(\mathbf{p_ {2}})\zeta_{g}(\mathbf{k-q-p_{2}})\zeta_{g}(\mathbf{q^{\prime}})\zeta_{g}(\mathbf{k^{\prime }}-\mathbf{q^{\prime}})}{\zeta_{g}(\mathbf{k^{\prime}}-\mathbf{q^{\prime}})}\right\rangle, \tag{36}\] and the symmetry factor in this case is 8. 
Then we have \[\Omega_{\rm GW}^{C}(k)= \frac{F_{\rm NL}^{2}k^{3}}{6\pi^{2}}\int\frac{\mathrm{d}^{3}q \mathrm{d}^{3}q^{\prime}}{(2\pi)^{6}}\cos 2(\phi-\phi^{\prime})I\left(u,v\right)I \left(u^{\prime},v^{\prime}\right)2P_{g}\left(|\mathbf{q-q^{\prime}}|\right)P_{g} \left(|\mathbf{k-q^{\prime}}|\right)P_{g}\left(q^{\prime}\right)\times 8 \tag{37}\] \[= \frac{2F_{\rm NL}^{2}}{3\pi}\int_{0}^{\infty}\mathrm{d}u\int_{|1 -u|}^{1+u}\mathrm{d}v\int_{0}^{\infty}\mathrm{d}u^{\prime}\int_{|1-u^{\prime} |}^{1+u^{\prime}}\mathrm{d}v^{\prime}\int_{0}^{2\pi}\mathrm{d}\varphi_{1}\cos 2 \varphi_{1}I(u,v)I(u^{\prime},v^{\prime})\frac{uvu^{\prime}v^{\prime}}{u^{ \prime 3}v^{\prime 3}w_{12}^{3}}\] \[\times\mathcal{P}_{g}\left(u^{\prime}k\right)\mathcal{P}_{g} \left(v^{\prime}k\right)\mathcal{P}_{g}\left(w_{12}k\right),\] where \[w_{12}^{2}=\frac{|\mathbf{q-q^{\prime}}|^{2}}{k^{2}}=u^{2}+u^{\prime 2}-2uu^{ \prime}(\sin\theta\sin\theta^{\prime}\cos\varphi_{1}+\cos\theta\cos\theta^{ \prime}). \tag{38}\] ### \(F_{\rm NL}^{4}\) terms For terms containing \(F_{\rm NL}^{4}\), the GWs spectrum can be expressed in the following form: \[\Omega_{\rm GW}^{F_{\rm NL}^{4}}(k)= \frac{F_{\rm NL}^{4}k^{3}}{6\pi^{2}}\int\frac{\mathrm{d}^{3}q \mathrm{d}^{3}q^{\prime}}{(2\pi)^{6}}\cos 2(\phi-\phi^{\prime})I\left(u,v \right)I\left(u^{\prime},v^{\prime}\right)\int\frac{\mathrm{d}^{3}p_{1} \mathrm{d}^{3}p_{2}\mathrm{d}^{3}p_{3}\mathrm{d}^{3}p_{4}}{(2\pi)^{12}} \tag{39}\] \[\times\left\langle\left\langle\zeta_{g}(\mathbf{p_{1}})\zeta_{g}(\bm {q-p_{1}})\zeta_{g}(\mathbf{p_{2}})\zeta_{g}(\mathbf{k-q-p_{2}})\zeta_{g}(\mathbf{p_{3}}) \zeta_{g}(\mathbf{q^{\prime}-p_{3}})\zeta_{g}(\mathbf{p_{4}})\zeta_{g}(\mathbf{k^{\prime }-q^{\prime}-p_{4}})\right\rangle\right\rangle.\] Performing Wick contraction on the above eight-point function, there are three distinct non-zero contractions denoted as'reducible' term, 'planar' term, and 'non-planar' term as also named in ref.[114]. 
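Both \(w_{012}\) in Eq. (32) and \(w_{12}\) in Eq. (38) can be cross-checked against direct vector evaluations of \(|\mathbf{k}-\mathbf{q}-\mathbf{q^{\prime}}|/k\) and \(|\mathbf{q}-\mathbf{q^{\prime}}|/k\), with the polar angles taken from Eq. (19); a short numerical sketch (the test values of \(u,v,u^{\prime},v^{\prime}\) are arbitrary points inside the kinematic region):

```python
import numpy as np

def angles(u, v):
    """cos(theta) and sin(theta) from Eq. (19)."""
    ct = (1 + u**2 - v**2) / (2 * u)
    return ct, np.sqrt(1 - ct**2)

def vec(u, v, phi):
    """Momentum q/k in the frame of Eq. (8)."""
    ct, st = angles(u, v)
    return u * np.array([st * np.cos(phi), st * np.sin(phi), ct])

u, v, up, vp, phi, phip = 0.9, 1.3, 0.7, 1.1, 0.4, 1.9
k = np.array([0.0, 0.0, 1.0])
q, qp = vec(u, v, phi), vec(up, vp, phip)

ct, st = angles(u, v)
ctp, stp = angles(up, vp)
cross = st * stp * np.cos(phi - phip) + ct * ctp  # unit-vector dot product

w012 = np.sqrt(1 + u**2 + up**2 + 2 * u * up * cross - 2 * u * ct - 2 * up * ctp)
w12 = np.sqrt(u**2 + up**2 - 2 * u * up * cross)

print(np.isclose(w012, np.linalg.norm(k - q - qp)),
      np.isclose(w12, np.linalg.norm(q - qp)))
```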
Then we can write \[\Omega_{\rm GW}^{F_{\rm NL}^{4}}(k)=\Omega_{\rm GW}^{re}(k)+\Omega_{\rm GW}^{ planar}(k)+\Omega_{\rm GW}^{np}(k). \tag{40}\] We now demonstrate each of these three parts in detail. For the'reducible' term, one example of the contraction is shown as follows: \[\left\langle\contraction{\zeta_{g}(\mathbf{p_{1}})\zeta_{g}(\mathbf{q-p_{1}})\zeta_{g}(\bm {p_{2}})\zeta_{g}(\mathbf{k-q-p_{2}})\zeta_{g}(\mathbf{p_{3}})\zeta_{g}(\mathbf{q^{\prime }-p_{3}})\zeta_{g}(\mathbf{p_{4}})\zeta_{g}(\mathbf{k^{\prime}-q^{\prime}-p_{4}})}{ \zeta_{g}(\mathbf{k^{\prime}-q^{\prime}-p_{4}})}\right\rangle, \tag{41}\] which is a disconnected diagram and the symmetry factor is 8. Expanding the correlation function and using the appearing delta functions to eliminate redundant integrals, we can obtain: \[\Omega_{\rm GW}^{re}(k)= \frac{F_{\rm NL}^{4}k^{3}}{6\pi^{2}}\int\frac{\mathrm{d}^{3}q}{(2 \pi)^{3}}I^{2}\left(u,v\right)\int\frac{\mathrm{d}^{3}p_{1}\mathrm{d}^{3}p_{2}}{ (2\pi)^{6}}P_{g}\left(p_{1}\right)P_{g}\left(|\mathbf{q-p_{1}}|\right)P_{g}\left(p_ {2}\right)P_{g}\left(|\mathbf{k-q-p_{2}}|\right)\times 8 \tag{42}\] \[= \frac{F_{\rm NL}^{4}}{3}\int_{0}^{\infty}\mathrm{d}u\int_{|1-u|}^ {1+u}\mathrm{d}v\int_{0}^{\infty}\mathrm{d}u_{1}\int_{|1-u_{1}|}^{1+u_{1}} \mathrm{d}v_{1}\int_{0}^{\infty}\mathrm{d}u_{2}\int_{|1-u_{2}|}^{1+u_{2}} \mathrm{d}v_{2}I^{2}(u,v)\frac{1}{u^{2}v^{2}u_{1}^{2}v_{1}^{2}u_{2}^{2}v_{2}^{2}}\] \[\times\mathcal{P}_{g}\left(u_{1}uk\right)\mathcal{P}_{g}\left(v_ {1}uk\right)\mathcal{P}_{g}\left(u_{2}vk\right)\mathcal{P}_{g}\left(v_{2}vk \right),\] where the second equality in the above equation is obtained by performing the coordinate transformation \(u_{1}=p_{1}/q\), \(v_{1}=|\mathbf{q-p_{1}}|/q\) and \(u_{2}=p_{2}/|\mathbf{k-q}|\), \(v_{2}=|\mathbf{k-q-p_{2}}|/|\mathbf{k-q}|\). 
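These coordinate transformations are all instances of the measure in Eq. (18); as a sanity check, integrating that measure over \(0<u<R/k\) with the full kinematic range of \(v\) reproduces the volume \(4\pi R^{3}/3\) of a ball of radius \(R\):

```python
import numpy as np

k, R = 1.0, 2.5
n = 20000
u = (np.arange(n) + 0.5) * (R / k) / n  # midpoint rule in u
du = (R / k) / n

# Inner v-integral done in closed form: int_{|1-u|}^{1+u} v dv = 2u,
# and the trivial phi integral contributes 2*pi.
vol = 2 * np.pi * (u * (2 * u) * k**3).sum() * du

print(np.isclose(vol, 4 * np.pi * R**3 / 3, rtol=1e-4))
```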
For the 'planar' term and the 'non-planar' term, the contractions are connected diagrams, and the corresponding contributions \(\Omega_{\rm GW}^{planar}(k)\) and \(\Omega_{\rm GW}^{np}(k)\) are obtained in the same manner.

### \(G_{\rm NL}\) terms

For terms containing a single power of \(G_{\rm NL}\), the contribution is proportional to the Gaussian result \(\Omega_{\rm GW}^{g}(k)\). In particular, we normalize the power spectrum \(\mathcal{P}_{g}(p)\) to be \[A=\int\frac{\mathrm{d}p}{p}\mathcal{P}_{g}(p), \tag{51}\] then \(\Omega_{\mathrm{GW}}^{G_{\mathrm{NL}}}(k)=12AG_{\mathrm{NL}}\Omega_{\mathrm{GW }}^{g}(k)\) holds, where \(A\) represents the variance of the Gaussian part of the dimensionless curvature perturbation spectrum \(\mathcal{P}_{g}(p)\).
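The normalization in Eq. (51) is straightforward to verify numerically for a concrete spectrum; a sketch assuming an illustrative lognormal \(\mathcal{P}_{g}\) (the parameters are placeholders, not values from the text):

```python
import numpy as np
from scipy.integrate import quad

A, Delta, pstar = 0.01, 0.5, 1.0  # illustrative lognormal parameters (assumed)

def calP_g(p):
    return A / np.sqrt(2 * np.pi * Delta**2) * np.exp(-np.log(p / pstar)**2 / (2 * Delta**2))

# Eq. (51): A = int dp/p P_g(p); substitute x = ln p so the measure becomes dx
val, err = quad(lambda x: calP_g(np.exp(x)), -12, 12)
assert np.isclose(val, A)
```

For this shape the integral is exactly the area under a unit-normalized Gaussian in \(\ln p\), scaled by \(A\), which is why the check closes.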
### \(G_{\mathrm{NL}}^{2}\) terms For terms containing \(G_{\mathrm{NL}}^{2}\), considering symmetry, the GWs spectrum can be expressed in the following form: \[\Omega_{\mathrm{GW}}^{G_{\mathrm{NL}}^{2}}(k)= \frac{G_{\mathrm{NL}}^{2}k^{3}}{6\pi^{2}}\int\frac{\mathrm{d}^{3} q\mathrm{d}^{3}q^{\prime}}{(2\pi)^{6}}\cos 2(\phi-\phi^{\prime})I\left(u,v\right)I \left(u^{\prime},v^{\prime}\right)\int\frac{\mathrm{d}^{3}p_{1}\mathrm{d}^{3} p_{2}\mathrm{d}^{3}p_{3}\mathrm{d}^{3}p_{4}}{(2\pi)^{12}}\] \[\times\Bigg{[}2\left\langle\langle\zeta_{g}(\mathbf{p_{1}})\zeta_{g} (\mathbf{p_{2}})\zeta_{g}(\mathbf{q-p_{1}-p_{2}})\zeta_{g}(\mathbf{p_{3}})\zeta_{g}(\mathbf{p_{ 4}})\zeta_{g}(\mathbf{k-q-p_{3}-p_{4}})\zeta_{g}(\mathbf{q^{\prime}})\zeta_{g}(\mathbf{k^{ \prime}-q^{\prime}})\rangle\right\rangle\] \[\left.+4\left\langle\langle\zeta_{g}(\mathbf{p_{1}})\zeta_{g}(\mathbf{p_{ 2}})\zeta_{g}(\mathbf{q-p_{1}-p_{2}})\zeta_{g}(\mathbf{k-q})\zeta_{g}(\mathbf{p_{3}})\zeta_ {g}(\mathbf{p_{4}})\zeta_{g}(\mathbf{q^{\prime}-p_{3}-p_{4}})\zeta_{g}(\mathbf{k^{\prime}- q^{\prime}})\rangle\right\rangle\Bigg{]}. \tag{52}\] Performing Wick contraction on the above eight-point function, there are four distinct non-zero contractions and we name them as '2loop' term, 'tri' term, 'ring1' term, and 'ring2' term. Then we have \[\Omega_{\mathrm{GW}}^{G_{\mathrm{NL}}^{2}}(k)=\Omega_{\mathrm{GW}}^{2loop}(k) +\Omega_{\mathrm{GW}}^{tri}(k)+\Omega_{\mathrm{GW}}^{ring1}(k)+\Omega_{ \mathrm{GW}}^{ring2}(k). \tag{53}\] We now demonstrate each of these four parts in detail. The '2loop' term can be easily obtained as \[\Omega_{\mathrm{GW}}^{2loop}(k)=54G_{\mathrm{NL}}^{2}\int\frac{ \mathrm{d}^{3}p_{1}\mathrm{d}^{3}p_{2}}{(2\pi)^{6}}P_{g}(p_{1})P_{g}(p_{2}) \Omega_{\mathrm{GW}}^{g}(k)=54G_{\mathrm{NL}}^{2}\int\frac{\mathrm{d}p_{1} \mathrm{d}p_{2}}{p_{1}p_{2}}\mathcal{P}_{g}(p_{1})\mathcal{P}_{g}(p_{2}) \Omega_{\mathrm{GW}}^{g}(k), \tag{54}\] and for power spectrum satisfying Eq. 
(51), we have \(\Omega_{\mathrm{GW}}^{2loop}(k)=54A^{2}G_{\mathrm{NL}}^{2}\Omega_{\mathrm{ GW}}^{g}(k)\). For the 'tri' term, one example of the contraction is shown as follows: (55) which is a disconnected diagram and the symmetry factor in this case is 6. Expanding the correlation function and using the appearing delta functions to eliminate redundant integrals, we can obtain: \[\Omega_{\mathrm{GW}}^{tri}(k)= \frac{G_{\mathrm{NL}}^{2}k^{3}}{6\pi^{2}}\int\frac{\mathrm{d}^{3} q}{(2\pi)^{3}}I^{2}\left(u,v\right)\int\frac{\mathrm{d}^{3}p_{1}\mathrm{d}^{3} p_{2}}{(2\pi)^{6}}4P_{g}\left(p_{1}\right)P_{g}\left(p_{2}\right)P_{g} \left(|\mathbf{q-p_{1}-p_{2}}|\right)P_{g}\left(|\mathbf{k-q}|\right)\times 6 \tag{56}\] \[= G_{\mathrm{NL}}^{2}\int_{0}^{\infty}\mathrm{d}u\int_{|1-u|}^{1+u} \mathrm{d}v\int_{0}^{\infty}\mathrm{d}u_{1}\int_{|1-u_{1}|}^{1+u_{1}}\mathrm{d }v_{1}\int_{0}^{\infty}\mathrm{d}u_{2}\int_{|1-u_{2}|}^{1+u_{2}}\mathrm{d}v_{2 }I^{2}(u,v)\frac{1}{u^{2}v^{2}u_{1}^{2}v_{1}^{2}u_{2}^{2}v_{2}^{2}}\] \[\times\mathcal{P}_{g}\left(u_{1}uk\right)\mathcal{P}_{g}\left(u_{ 2}v_{1}uk\right)\mathcal{P}_{g}\left(v_{2}v_{1}uk\right)\mathcal{P}_{g}\left( vk\right),\] where the second equality in the above equation is obtained by performing the coordinate transformation \(u_{1}=p_{1}/q\), \(v_{1}=|\mathbf{q-p_{1}}|/q\) and \(u_{2}=p_{2}/|\mathbf{q-p_{1}}|\), \(v_{2}=|\mathbf{q-p_{1}-p_{2}}|/|\mathbf{q-p_{1}}|\). For the 'ring1' term, one example of the contraction is shown as follows: (57) and the symmetry factor in this case is 36. 
Then we have \[\Omega_{\rm GW}^{ring1}(k)= \frac{G_{\rm NL}^{2}k^{3}}{6\pi^{2}}\int\frac{{\rm d}^{3}q{\rm d}^{ 3}q^{\prime}}{(2\pi)^{6}}\cos 2(\phi-\phi^{\prime})I\left(u,v\right)I\left(u^{ \prime},v^{\prime}\right)\int\frac{{\rm d}^{3}p}{(2\pi)^{3}}2P_{g}\left(p \right)P_{g}\left(q^{\prime}\right)P_{g}\left(|\mathbf{k-q^{\prime}}|\right)P_{g} \left(|\mathbf{q+q^{\prime}+p}|\right)\times 36 \tag{58}\] \[= \frac{3G_{\rm NL}^{2}}{4\pi^{2}}\int_{0}^{\infty}{\rm d}u\int_{|1 -u|}^{1+u}{\rm d}v\int_{0}^{\infty}{\rm d}u^{\prime}\int_{|1-u^{\prime}|}^{1+u^ {\prime}}{\rm d}v^{\prime}\int_{0}^{\infty}{\rm d}u_{1}\int_{|1-u_{1}|}^{1+u_{ 1}}{\rm d}v_{1}\int_{0}^{2\pi}{\rm d}\varphi_{1}\int_{0}^{2\pi}{\rm d}\varphi_ {2}\] \[\times\cos 2\varphi_{1}I(u,v)I(u^{\prime},v^{\prime})\frac{uvu^{ \prime}v^{\prime}u_{1}v_{1}}{(u^{\prime}v^{\prime}u_{1}w_{123})^{3}}{\cal P}_{ g}\left(u^{\prime}k\right){\cal P}_{g}\left(v^{\prime}k\right){\cal P}_{g} \left(u_{1}k\right){\cal P}_{g}\left(w_{123}k\right),\] where we perform the coordinate transformation \(\varphi_{1}=\phi-\phi^{\prime}\), \(\varphi_{2}=\phi-\phi_{1}\), and \(\varphi_{3}=\phi+\phi^{\prime}\). Additionally, \(w_{123}\) is defined as follows: \[w_{123}^{2}= \frac{|\mathbf{q+q^{\prime}+p}|^{2}}{k^{2}} \tag{59}\] \[= u^{2}+{u^{\prime}}^{2}+u_{1}^{2}+2uu^{\prime}\left(\sin\theta \sin\theta^{\prime}\cos\varphi_{1}+\cos\theta\cos\theta^{\prime}\right)+2uu_{1 }(\sin\theta\sin\theta_{1}\cos\varphi_{2}+\cos\theta\cos\theta_{1})\] \[+2u^{\prime}u_{1}\left[\sin\theta^{\prime}\sin\theta_{1}\cos( \varphi_{1}-\varphi_{2})+\cos\theta^{\prime}\cos\theta_{1}\right],\] where \(\sin\) and \(\cos\) are replaced using Eq. (19).
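Equation (59) can be checked the same way as \(w_{12}\): build the three vectors from their moduli and angles with \(\mathbf{k}\) along the polar axis and compare \(|\mathbf{q+q^{\prime}+p}|^{2}/k^{2}\) against the angular expression. A sketch, with the azimuth assignments \(\varphi_{1}=\phi-\phi^{\prime}\) and \(\varphi_{2}=\phi-\phi_{1}\) taken from the text:

```python
import numpy as np

rng = np.random.default_rng(2)

def vec(u, theta, phi):
    # vector of modulus u (in units of k) with polar angle theta, azimuth phi
    return u * np.array([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])

u, up, u1 = rng.uniform(0.1, 2.0, 3)
th, thp, th1 = rng.uniform(0.0, np.pi, 3)
ph, php, ph1 = rng.uniform(0.0, 2 * np.pi, 3)
q, qp, p = vec(u, th, ph), vec(up, thp, php), vec(u1, th1, ph1)

phi1, phi2 = ph - php, ph - ph1   # azimuth differences entering Eq. (59)
w123_sq = (u**2 + up**2 + u1**2
           + 2 * u * up * (np.sin(th) * np.sin(thp) * np.cos(phi1) + np.cos(th) * np.cos(thp))
           + 2 * u * u1 * (np.sin(th) * np.sin(th1) * np.cos(phi2) + np.cos(th) * np.cos(th1))
           + 2 * up * u1 * (np.sin(thp) * np.sin(th1) * np.cos(phi1 - phi2)
                            + np.cos(thp) * np.cos(th1)))
assert np.isclose(w123_sq, np.dot(q + qp + p, q + qp + p))
```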
For the 'ring2' term, one example of the contraction is shown as follows: \[\left\langle\contraction{\zeta_{g}(\mathbf{p_{1}})\zeta_{g}(\mathbf{p_{2}})\zeta_{g}(\mathbf{q -p_{1}-p_{2}})\zeta_{g}(\mathbf{k-q})\zeta_{g}(\mathbf{p_{3}})\zeta_{g}(\mathbf{p_{4}}) \zeta_{g}(\mathbf{q^{\prime}-p_{3}-p_{4}})\zeta_{g}(\mathbf{k^{\prime}-q^{\prime}})} \right\rangle, \tag{60}\] and the symmetry factor in this case is 18. Then we have \[\Omega_{\rm GW}^{ring2}(k)= \frac{G_{\rm NL}^{2}k^{3}}{6\pi^{2}}\int\frac{{\rm d}^{3}q{\rm d}^ {3}q^{\prime}}{(2\pi)^{6}}\cos 2(\phi-\phi^{\prime})I\left(u,v\right)I\left(u^{ \prime},v^{\prime}\right)\int\frac{{\rm d}^{3}p}{(2\pi)^{3}} \tag{61}\] \[\times 4P_{g}\left(p\right)P_{g}\left(|\mathbf{k-q}|\right)P_{g} \left(|\mathbf{k-q^{\prime}}|\right)P_{g}\left(|\mathbf{k+p-q-q^{\prime}}|\right) \times 18\] \[= \frac{3G_{\rm NL}^{2}}{4\pi^{2}}\int_{0}^{\infty}{\rm d}u\int_{|1 -u|}^{1+u}{\rm d}v\int_{0}^{\infty}{\rm d}u^{\prime}\int_{|1-u^{\prime}|}^{1+u^ {\prime}}{\rm d}v^{\prime}\int_{0}^{\infty}{\rm d}u_{1}\int_{|1-u_{1}|}^{1+u_{ 1}}{\rm d}v_{1}\int_{0}^{2\pi}{\rm d}\varphi_{1}\int_{0}^{2\pi}{\rm d} \varphi_{2}\] \[\times\cos 2\varphi_{1}I(u,v)I(u^{\prime},v^{\prime})\frac{uvu^{ \prime}v^{\prime}u_{1}v_{1}}{(u_{1}vv^{\prime}w_{0123})^{3}}{\cal P}_{g} \left(u_{1}k\right){\cal P}_{g}\left(vk\right){\cal P}_{g}\left(v^{\prime}k \right){\cal P}_{g}\left(w_{0123}k\right),\] where \(w_{0123}\) is defined as follows: \[w_{0123}^{2}= \frac{|\mathbf{k+p-q-q^{\prime}}|^{2}}{k^{2}} \tag{62}\] \[= 1+u^{2}+u^{\prime 2}+u_{1}^{2}+2u_{1}\cos\theta_{1}-2u\cos\theta-2u ^{\prime}\cos\theta^{\prime}+2uu^{\prime}\left(\sin\theta\sin\theta^{\prime}\cos \varphi_{1}+\cos\theta\cos\theta^{\prime}\right)\] \[-2uu_{1}(\sin\theta\sin\theta_{1}\cos\varphi_{2}+\cos\theta\cos \theta_{1})-2u^{\prime}u_{1}\left[\sin\theta^{\prime}\sin\theta_{1}\cos(\varphi_ {1}-\varphi_{2})+\cos\theta^{\prime}\cos\theta_{1}\right].\]

### \(G_{\rm NL}^{3}\) terms

For terms containing
\(G_{\rm NL}^{3}\), considering symmetry, the GWs spectrum can be expressed in the following form: \[\Omega_{\rm GW}^{G_{\rm NL}^{3}}(k)= \frac{G_{\rm NL}^{3}k^{3}}{6\pi^{2}}\int\frac{{\rm d}^{3}q{\rm d}^ {3}q^{\prime}}{(2\pi)^{6}}\cos 2(\phi-\phi^{\prime})I\left(u,v\right)I\left(u^{\prime},v^{ \prime}\right)\int\frac{{\rm d}^{3}p_{1}{\rm d}^{3}p_{2}{\rm d}^{3}p_{3}{\rm d }^{3}p_{4}{\rm d}^{3}p_{5}{\rm d}^{3}p_{6}}{(2\pi)^{18}}4\langle\langle\zeta_{g}( \mathbf{p_{1}})\zeta_{g}(\mathbf{p_{2}}) \tag{63}\] \[\times\zeta_{g}(\mathbf{q-p_{1}-p_{2}})\zeta_{g}(\mathbf{p_{3}})\zeta_{g}( \mathbf{p_{4}})\zeta_{g}(\mathbf{k-q-p_{3}-p_{4}})\zeta_{g}(\mathbf{p_{5}})\zeta_{g}(\mathbf{p_{6 }})\zeta_{g}(\mathbf{q^{\prime}-p_{5}-p_{6}})\zeta_{g}(\mathbf{k^{\prime}-q^{\prime}}) \rangle\rangle.\] Performing Wick contraction on the above ten-point function, there are two distinct non-zero contractions and we name them as the '1loop' term and the '3loop' term. Then we have \[\Omega_{\rm GW}^{G_{\rm NL}^{3}}(k)=\Omega_{\rm GW}^{1loop}(k)+\Omega_{\rm GW }^{3loop}(k), \tag{64}\] and we can easily obtain \[\Omega_{\rm GW}^{1loop}(k)= 6G_{\rm NL}\int\frac{{\rm d}^{3}p}{(2\pi)^{3}}P_{g}(p)\left(\Omega _{\rm GW}^{tri}(k)+\Omega_{\rm GW}^{ring1}(k)+\Omega_{\rm GW}^{ring2}(k)\right)\] \[= 6G_{\rm NL}\int\frac{{\rm d}p}{p}\mathcal{P}_{g}(p)\left(\Omega _{\rm GW}^{tri}(k)+\Omega_{\rm GW}^{ring1}(k)+\Omega_{\rm GW}^{ring2}(k)\right), \tag{65}\] and for power spectrum satisfying Eq. (51), we have \(\Omega_{\rm GW}^{1loop}(k)=6AG_{\rm NL}\left(\Omega_{\rm GW}^{tri}(k)+\Omega _{\rm GW}^{ring1}(k)+\Omega_{\rm GW}^{ring2}(k)\right)\).
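The symmetry factors quoted throughout (8, 6, 36, 18, 648, ...) come from grouping the Wick pairings of a \(2n\)-point function into topologically distinct classes; the total number of pairings is \((2n-1)!!\), e.g. 105 for an eight-point and 945 for a ten-point function. A generic enumerator (a sketch of the combinatorics only, independent of the specific diagrams above):

```python
from math import prod

def pairings(items):
    """Yield all perfect matchings (Wick contractions) of an even-length label list."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for i, partner in enumerate(rest):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + tail

def double_factorial(n):
    return prod(range(n, 0, -2))

# the total number of Wick pairings of a 2n-point function is (2n-1)!!
for npts in (4, 6, 8, 10):
    assert sum(1 for _ in pairings(list(range(npts)))) == double_factorial(npts - 1)
```

Classifying each matching by the topology of the resulting diagram (and discarding those that vanish) would reproduce the symmetry factors quoted in the text.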
While the '3loop' term can be expressed as \[\Omega_{\rm GW}^{3loop}(k)= 108G_{\rm NL}^{3}\int\frac{{\rm d}^{3}p_{1}{\rm d}^{3}p_{2}{\rm d }^{3}p_{3}}{(2\pi)^{9}}P_{g}(p_{1})P_{g}(p_{2})P_{g}(p_{3})\Omega_{\rm GW}^{g} (k)\] \[= 108G_{\rm NL}^{3}\int\frac{{\rm d}p_{1}{\rm d}p_{2}{\rm d}p_{3}} {p_{1}p_{2}p_{3}}\mathcal{P}_{g}(p_{1})\mathcal{P}_{g}(p_{2})\mathcal{P}_{g}( p_{3})\Omega_{\rm GW}^{g}(k), \tag{66}\] and for power spectrum satisfying Eq. (51), we have \(\Omega_{\rm GW}^{3loop}(k)=108A^{3}G_{\rm NL}^{3}\Omega_{\rm GW}^{g}(k)\).

### \(G_{\rm NL}^{4}\) terms

For terms containing \(G_{\rm NL}^{4}\), considering symmetry, the GWs spectrum can be expressed in the following form: \[\Omega_{\rm GW}^{G_{\rm NL}^{4}}(k)= \frac{G_{\rm NL}^{4}k^{3}}{6\pi^{2}}\int\frac{{\rm d}^{3}q\ {\rm d}^{3}q^{\prime}}{(2\pi)^{6}}\cos 2 \left(\phi-\phi^{\prime}\right)I(u,v)I\left(u^{\prime},v^{\prime}\right)\int \frac{{\rm d}^{3}p_{1}\ {\rm d}^{3}p_{2}\ {\rm d}^{3}p_{3}\ {\rm d}^{3}p_{4}\ {\rm d}^{3}p_{5}\ {\rm d}^{3}p_{6}\ {\rm d}^{3}p_{7}\ {\rm d}^{3}p_{8}}{(2\pi)^{24}}\] \[\times\left\langle\left\langle\zeta_{g}\left(\mathbf{p_{1}}\right) \zeta_{g}\left(\mathbf{p_{2}}\right)\zeta_{g}\left(\mathbf{q-p_{1}-p_{2}}\right)\zeta_{ g}\left(\mathbf{p_{3}}\right)\zeta_{g}\left(\mathbf{p_{4}}\right)\zeta_{g}\left(\mathbf{k-q-p_{3}-p_{4}} \right)\zeta_{g}\left(\mathbf{p_{5}}\right)\zeta_{g}\left(\mathbf{p_{6}}\right)\zeta_ {g}\left(\mathbf{q^{\prime}-p_{5}-p_{6}}\right)\right.\right. \tag{67}\] \[\left.\left.\zeta_{g}\left(\mathbf{p_{7}}\right)\zeta_{g}\left(\mathbf{p_ {8}}\right)\zeta_{g}\left(\mathbf{k^{\prime}-q^{\prime}-p_{7}-p_{8}}\right)\right\rangle\right\rangle\] Performing Wick contraction on the above twelve-point function, there are 7 distinct non-zero contractions and we name them as the '2loops' term, the '4loop' term, the 'double' term, the 'bubble' term, the 'sand clock' (sc) term, the '2rings' term and the 'net' term.
Then we have \[\Omega_{\rm GW}^{G_{\rm NL}^{4}}(k)=\Omega_{\rm GW}^{2loops}(k)+\Omega_{\rm GW }^{4loop}(k)+\Omega_{\rm GW}^{double}(k)+\Omega_{\rm GW}^{bubble}(k)+\Omega_ {\rm GW}^{sc}(k)+\Omega_{\rm GW}^{2rings}(k)+\Omega_{\rm GW}^{net}(k), \tag{68}\] and we now demonstrate each of these 7 parts in detail. We can easily obtain \[\Omega_{\rm GW}^{2loops}(k)= 9G_{\rm NL}^{2}\int\frac{{\rm d}^{3}p_{1}{\rm d}^{3}p_{2}}{(2 \pi)^{6}}P_{g}(p_{1})P_{g}(p_{2})\left(\Omega_{\rm GW}^{tri}(k)+\Omega_{\rm GW }^{ring1}(k)+\Omega_{\rm GW}^{ring2}(k)\right) \tag{69}\] \[= 9G_{\rm NL}^{2}\int\frac{{\rm d}p_{1}{\rm d}p_{2}}{p_{1}p_{2}} \mathcal{P}_{g}(p_{1})\mathcal{P}_{g}(p_{2})\left(\Omega_{\rm GW}^{tri}(k)+ \Omega_{\rm GW}^{ring1}(k)+\Omega_{\rm GW}^{ring2}(k)\right),\] and for power spectrum satisfying Eq. (51), we have \(\Omega_{\rm GW}^{2loops}(k)=9A^{2}G_{\rm NL}^{2}\left(\Omega_{\rm GW}^{tri}(k )+\Omega_{\rm GW}^{ring1}(k)+\Omega_{\rm GW}^{ring2}(k)\right)\). The '4loop' term can also be easily obtained as \[\Omega_{\rm GW}^{4loop}(k)=81G_{\rm NL}^{4}\int\frac{{\rm d}p_{1}{\rm d}p_{2} {\rm d}p_{3}{\rm d}p_{4}}{p_{1}p_{2}p_{3}p_{4}}\mathcal{P}_{g}(p_{1})\mathcal{ P}_{g}(p_{2})\mathcal{P}_{g}(p_{3})\mathcal{P}_{g}(p_{4})\Omega_{\rm GW}^{g}(k), \tag{70}\] and for power spectrum satisfying Eq. (51), we have \(\Omega_{\rm GW}^{4loop}(k)=81A^{4}G_{\rm NL}^{4}\Omega_{\rm GW}^{g}(k)\). The 'double' term is a disconnected diagram and one example of the contraction is shown as follows: (71) where \(\mathbf{p_{9}}\equiv\mathbf{q-p_{1}-p_{2}}\), \(\mathbf{p_{10}}\equiv\mathbf{k-q-p_{3}-p_{4}}\), \(\mathbf{p_{11}}\equiv\mathbf{q^{\prime}-p_{5}-p_{6}}\) and \(\mathbf{p_{12}}\equiv\mathbf{k^{\prime}-q^{\prime}-p_{7}-p_{8}}\). The symmetry factor in this case is 72. 
Expanding the correlation function and using the appearing delta functions to eliminate redundant integrals, we can obtain: \[\Omega_{\rm GW}^{double}(k)= \frac{G_{\rm NL}^{4}k^{3}}{6\pi^{2}}\int\frac{{\rm d}^{3}q}{(2\pi)^ {3}}I^{2}\left(u,v\right)\int\frac{{\rm d}^{3}p_{1}{\rm d}^{3}p_{2}{\rm d}^{3}p _{3}{\rm d}^{3}p_{4}}{(2\pi)^{12}}P_{g}\left(p_{1}\right)P_{g}\left(p_{2}\right) P_{g}\left(|\mathbf{q-p_{1}-p_{2}}|\right)\] \[\times P_{g}\left(p_{3}\right)P_{g}\left(p_{4}\right)P_{g}\left(| \mathbf{k-q-p_{3}-p_{4}}|\right)\times 72 \tag{72}\] \[= \frac{3G_{\rm NL}^{4}}{4}\int_{0}^{\infty}{\rm d}u\int_{|1-u|}^{1+u }{\rm d}v\int_{0}^{\infty}{\rm d}u_{1}\int_{|1-u_{1}|}^{1+u_{1}}{\rm d}v_{1} \int_{0}^{\infty}{\rm d}u_{2}\int_{|1-u_{2}|}^{1+u_{2}}{\rm d}v_{2}\int_{0}^{ \infty}{\rm d}u_{3}\int_{|1-u_{3}|}^{1+u_{3}}{\rm d}v_{3}\int_{0}^{\infty}{ \rm d}u_{4}\int_{|1-u_{4}|}^{1+u_{4}}{\rm d}v_{4}\] \[\times I^{2}(u,v)\frac{1}{u^{2}v^{2}u_{1}^{2}v_{1}^{2}u_{2}^{2}v_{ 2}^{2}u_{3}^{2}v_{3}^{2}u_{4}^{2}v_{4}^{2}}\mathcal{P}_{g}\left(u_{1}uk\right) \mathcal{P}_{g}\left(u_{2}v_{1}uk\right)\mathcal{P}_{g}\left(v_{2}v_{1}uk \right)\mathcal{P}_{g}\left(u_{3}vk\right)\mathcal{P}_{g}\left(u_{4}v_{3}vk \right)\mathcal{P}_{g}\left(v_{4}v_{3}vk\right),\] where the second equality in the above equation is obtained by performing the coordinate transformation \(u_{1}=p_{1}/q\), \(v_{1}=|\mathbf{q-p_{1}}|/q\), \(u_{2}=p_{2}/|\mathbf{q-p_{1}}|\), \(v_{2}=|\mathbf{q-p_{1}-p_{2}}|/|\mathbf{q-p_{1}}|\), \(u_{3}=p_{3}/|\mathbf{k-q}|\), \(v_{3}=|\mathbf{k-q-p_{3}}|/|\mathbf{k-q}|\), \(u_{4}=p_{4}/|\mathbf{k-q-p_{3}}|\), \(v_{4}=|\mathbf{k-q-p_{3}-p_{4}}|/|\mathbf{k-q-p_{3}}|\).
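The integration limits \(|1-u_{i}|\le v_{i}\le 1+u_{i}\) appearing after each such coordinate transformation are just the triangle inequality for the rescaled momenta; this can be spot-checked by sampling random vectors (a sketch):

```python
import numpy as np

rng = np.random.default_rng(1)

def limits_hold(n_samples=1000):
    """Check |1-u1| <= v1 <= 1+u1 for u1 = p1/q, v1 = |q-p1|/q on random vectors."""
    for _ in range(n_samples):
        q, p1 = rng.normal(size=3), rng.normal(size=3)
        u1 = np.linalg.norm(p1) / np.linalg.norm(q)
        v1 = np.linalg.norm(q - p1) / np.linalg.norm(q)
        if not (abs(1 - u1) - 1e-12 <= v1 <= 1 + u1 + 1e-12):
            return False
    return True

print(limits_hold())
```

The same bound applies at every level of the nested transformations, since each pair \((u_{i},v_{i})\) is built from a momentum and its difference with the previous reference momentum.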
For the 'bubble' term, one example of the contraction is shown as follows: \[\left\langle\overbrace{\zeta_{g}(\mathbf{p_{1}})\zeta_{g}(\mathbf{p_{2}})\zeta_{g}( \mathbf{p_{9}})\zeta_{g}(\mathbf{p_{3}})\zeta_{g}(\mathbf{p_{4}})\zeta_{g}(\mathbf{p_{10}}) \zeta_{g}(\mathbf{p_{5}})\zeta_{g}(\mathbf{p_{6}})\zeta_{g}(\mathbf{p_{11}})\zeta_{g}(\bm {p_{7}})\zeta_{g}(\mathbf{p_{8}})\zeta_{g}(\mathbf{p_{12}})}\right\rangle, \tag{73}\] where \(\mathbf{p_{9}}\), \(\mathbf{p_{10}}\), \(\mathbf{p_{11}}\) and \(\mathbf{p_{12}}\) are defined the same as above and the symmetry factor in this case is 648. Then we have \[\Omega_{\rm GW}^{bubble}(k)= \frac{G_{\rm NL}^{4}k^{3}}{6\pi^{2}}\int\frac{{\rm d}^{3}q{\rm d}^ {3}q^{\prime}}{(2\pi)^{6}}\cos 2(\phi-\phi^{\prime})I\left(u,v\right)I\left(u^{\prime},v^{ \prime}\right)\int\frac{{\rm d}^{3}p_{1}{\rm d}^{3}p_{2}{\rm d}^{3}p_{3}}{(2\pi )^{9}} \tag{74}\] \[\times P_{g}\left(p_{1}\right)P_{g}\left(p_{2}\right)P_{g}\left(p_ {3}\right)P_{g}\left(|\mathbf{k-p_{1}-p_{2}-p_{3}}|\right)P_{g}\left(|\mathbf{q-p_{1 }-p_{2}}|\right)P_{g}\left(|\mathbf{q^{\prime}-p_{1}-p_{2}}|\right)\times 648\] \[= \frac{27G_{\rm NL}^{4}}{64\pi^{4}}\int_{0}^{\infty}{\rm d}u\int_{|1 -u|}^{1+u}{\rm d}v\int_{0}^{\infty}{\rm d}u^{\prime}\int_{|1-u^{\prime}|}^{1+u ^{\prime}}{\rm d}v^{\prime}\int_{0}^{\infty}{\rm d}u_{1}\int_{|1-u_{1}|}^{1+u_{ 1}}{\rm d}v_{1}\int_{0}^{\infty}{\rm d}u_{2}\int_{|1-u_{2}|}^{1+u_{2}}{\rm d}v _{2}\int_{0}^{\infty}{\rm d}u_{3}\int_{|1-u_{3}|}^{1+u_{3}}{\rm d}v_{3}\] \[\times\int_{0}^{2\pi}{\rm d}\varphi_{1}\int_{0}^{2\pi}{\rm d} \varphi_{2}\int_{0}^{2\pi}{\rm d}\varphi_{3}\int_{0}^{2\pi}{\rm d}\varphi_{4} \cos 2\varphi_{1}I(u,v)I(u^{\prime},v^{\prime})\frac{uvu^{\prime}v^{\prime}u_{1}v_{1} u_{2}v_{2}u_{3}v_{3}}{(u_{1}u_{2}u_{3}w_{0345}w_{134}w_{234})^{3}}\] \[\times\mathcal{P}_{g}\left(u_{1}k\right)\mathcal{P}_{g}\left(u_{2}k \right)\mathcal{P}_{g}\left(u_{3}k\right)\mathcal{P}_{g}\left(w_{0345}k\right)
\mathcal{P}_{g}\left(w_{134}k\right)\mathcal{P}_{g}\left(w_{234}k\right),\] where we perform the coordinate transformation \(\varphi_{1}=\phi-\phi^{\prime}\), \(\varphi_{2}=\phi-\phi_{1}\), \(\varphi_{3}=\phi-\phi_{2}\), \(\varphi_{4}=\phi-\phi_{3}\), and \(\varphi_{5}=\phi+\phi^{\prime}\). Additionally, \(w_{0345}\), \(w_{134}\), and \(w_{234}\) are defined as follows: \[w_{0345}^{2}= \frac{|\mathbf{k-p_{1}-p_{2}-p_{3}}|^{2}}{k^{2}} \tag{75}\] \[= 1+u_{1}^{2}+u_{2}^{2}+u_{3}^{2}-2u_{1}\cos\theta_{1}-2u_{2}\cos \theta_{2}-2u_{3}\cos\theta_{3}+2u_{1}u_{2}\left[\sin\theta_{1}\sin\theta_{2} \cos(\varphi_{2}-\varphi_{3})+\cos\theta_{1}\cos\theta_{2}\right]\] \[+2u_{1}u_{3}\left[\sin\theta_{1}\sin\theta_{3}\cos(\varphi_{2}- \varphi_{4})+\cos\theta_{1}\cos\theta_{3}\right]+2u_{2}u_{3}\left[\sin\theta_{2} \sin\theta_{3}\cos(\varphi_{3}-\varphi_{4})+\cos\theta_{2}\cos\theta_{3}\right],\] \[w_{134}^{2}= \frac{|\mathbf{q-p_{1}-p_{2}}|^{2}}{k^{2}}\] \[= u^{2}+u_{1}^{2}+u_{2}^{2}-2uu_{1}\left[\sin\theta\sin\theta_{1} \cos\varphi_{2}+\cos\theta\cos\theta_{1}\right]-2uu_{2}(\sin\theta\sin\theta_{2} \cos\varphi_{3}+\cos\theta\cos\theta_{2})\] \[+2u_{1}u_{2}(\sin\theta_{1}\sin\theta_{2}\cos(\varphi_{2}-\varphi_{3} )+\cos\theta_{1}\cos\theta_{2}),\] \[w_{234}^{2}= \frac{|\mathbf{q^{\prime}-p_{1}-p_{2}}|^{2}}{k^{2}} \tag{76}\] \[= u^{\prime 2}+u_{1}^{2}+u_{2}^{2}-2u^{\prime}u_{1}\left[\sin\theta^{ \prime}\sin\theta_{1}\cos(\varphi_{1}-\varphi_{2})+\cos\theta^{\prime}\cos \theta_{1}\right]-2u^{\prime}u_{2}(\sin\theta^{\prime}\sin\theta_{2}\cos( \varphi_{1}-\varphi_{3})+\cos\theta^{\prime}\cos\theta_{2})\] \[+2u_{1}u_{2}(\sin\theta_{1}\sin\theta_{2}\cos(\varphi_{2}-\varphi_{3})+\cos\theta_{1}\cos\theta_{2}).\] For the 'sc' term, the momenta \(\mathbf{p_{9}}\), \(\mathbf{p_{10}}\), \(\mathbf{p_{11}}\) and \(\mathbf{p_{12}}\) are defined the same as above and the symmetry factor in this case is 648.
Then we have \[\Omega_{\rm GW}^{sc}(k)= \frac{G_{\rm NL}^{4}k^{3}}{6\pi^{2}}\int\frac{{\rm d}^{3}q{\rm d}^{ 3}q^{\prime}}{(2\pi)^{6}}\cos 2(\phi-\phi^{\prime})I\left(u,v\right)I\left(u^{ \prime},v^{\prime}\right)\int\frac{{\rm d}^{3}p_{1}{\rm d}^{3}p_{2}{\rm d}^{3}p _{3}}{(2\pi)^{9}} \tag{79}\] \[\times P_{g}\left(p_{1}\right)P_{g}\left(p_{2}\right)P_{g}\left(p _{3}\right)P_{g}\left(|\mathbf{q-p_{1}-p_{2}}|\right)P_{g}\left(|\mathbf{q^{\prime}-p_ {1}-p_{2}}|\right)P_{g}\left(|\mathbf{k-q-q^{\prime}+p_{1}+p_{2}-p_{3}}|\right)\times 648\] \[= \frac{27G_{\rm NL}^{4}}{64\pi^{4}}\int_{0}^{\infty}{\rm d}u\int_{ |1-u|}^{1+u}{\rm d}v\int_{0}^{\infty}{\rm d}u^{\prime}\int_{|1-u^{\prime}|}^{1+ u^{\prime}}{\rm d}v^{\prime}\int_{0}^{\infty}{\rm d}u_{1}\int_{|1-u_{1}|}^{1+u_{1}}{\rm d }v_{1}\int_{0}^{\infty}{\rm d}u_{2}\int_{|1-u_{2}|}^{1+u_{2}}{\rm d}v_{2}\int_ {0}^{\infty}{\rm d}u_{3}\int_{|1-u_{3}|}^{1+u_{3}}{\rm d}v_{3}\] \[\times\int_{0}^{2\pi}{\rm d}\varphi_{1}\int_{0}^{2\pi}{\rm d} \varphi_{2}\int_{0}^{2\pi}{\rm d}\varphi_{3}\int_{0}^{2\pi}{\rm d}\varphi_{4} \cos 2\varphi_{1}I(u,v)I(u^{\prime},v^{\prime})\frac{uvu^{\prime}v^{\prime}u_{1}v_{ 1}u_{2}v_{2}u_{3}v_{3}}{(u_{1}u_{2}u_{3}w_{134}w_{234}w)^{3}}\] \[\times{\cal P}_{g}\left(u_{1}k\right){\cal P}_{g}\left(u_{2}k \right){\cal P}_{g}\left(u_{3}k\right){\cal P}_{g}\left(w_{134}k\right){\cal P }_{g}\left(w_{234}k\right){\cal P}_{g}\left(wk\right),\] where we perform the coordinate transformation the same as above.
Additionally, \(w\) is defined as follows: \[w^{2}= \frac{|\mathbf{k-q-q^{\prime}+p_{1}+p_{2}-p_{3}}|^{2}}{k^{2}} \tag{80}\] \[= 1+u^{2}+u^{\prime 2}+u_{1}^{2}+u_{2}^{2}+u_{3}^{2}-2u\cos\theta-2u^{ \prime}\cos\theta^{\prime}+2u_{1}\cos\theta_{1}+2u_{2}\cos\theta_{2}-2u_{3} \cos\theta_{3}\] \[+2uu^{\prime}\left[\sin\theta\sin\theta^{\prime}\cos\varphi_{1}+ \cos\theta\cos\theta^{\prime}\right]-2uu_{1}\left[\sin\theta\sin\theta_{1}\cos \varphi_{2}+\cos\theta\cos\theta_{1}\right]-2uu_{2}\left[\sin\theta\sin\theta_{ 2}\cos\varphi_{3}+\cos\theta\cos\theta_{2}\right]\] \[+2uu_{3}\left[\sin\theta\sin\theta_{3}\cos\varphi_{4}+\cos\theta \cos\theta_{3}\right]-2u^{\prime}u_{1}\left[\sin\theta^{\prime}\sin\theta_{1} \cos(\varphi_{1}-\varphi_{2})+\cos\theta^{\prime}\cos\theta_{1}\right]\] \[-2u^{\prime}u_{2}\left[\sin\theta^{\prime}\sin\theta_{2}\cos( \varphi_{1}-\varphi_{3})+\cos\theta^{\prime}\cos\theta_{2}\right]+2u^{\prime}u_ {3}\left[\sin\theta^{\prime}\sin\theta_{3}\cos(\varphi_{1}-\varphi_{4})+\cos \theta^{\prime}\cos\theta_{3}\right]\] \[+2u_{1}u_{2}\left[\sin\theta_{1}\sin\theta_{2}\cos(\varphi_{2}- \varphi_{3})+\cos\theta_{1}\cos\theta_{2}\right]-2u_{1}u_{3}\left[\sin\theta_{1} \sin\theta_{3}\cos(\varphi_{2}-\varphi_{4})+\cos\theta_{1}\cos\theta_{3}\right]\] \[-2u_{2}u_{3}\left[\sin\theta_{2}\sin\theta_{3}\cos(\varphi_{3}- \varphi_{4})+\cos\theta_{2}\cos\theta_{3}\right].\] For the '2rings' term, one example of the contraction is shown as follows: \[\left\langle\zeta_{g}(\mathbf{p_{1}})\zeta_{g}(\mathbf{p_{2}})\zeta_{g}(\mathbf{p_{9}}) \zeta_{g}(\mathbf{p_{3}})\zeta_{g}(\mathbf{p_{4}})\zeta_{g}(\mathbf{p_{10}})\zeta_{g}(\mathbf{ p_{5}})\zeta_{g}(\mathbf{p_{6}})\zeta_{g}(\mathbf{p_{11}})\zeta_{g}(\mathbf{p_{7}})\zeta_{g}(\mathbf{p_{8}}) \zeta_{g}(\mathbf{p_{12}})\right\rangle, \tag{81}\] where \(\mathbf{p_{9}}\), \(\mathbf{p_{10}}\), \(\mathbf{p_{11}}\) and \(\mathbf{p_{12}}\) are defined the same as above and the symmetry factor in this case is 648. 
Then we have \[\Omega_{\rm GW}^{2rings}(k)= \frac{G_{\rm NL}^{4}k^{3}}{6\pi^{2}}\int\frac{{\rm d}^{3}q{\rm d}^ {3}q^{\prime}}{(2\pi)^{6}}\cos 2(\phi-\phi^{\prime})I\left(u,v\right)I\left(u^{\prime},v^{ \prime}\right)\int\frac{{\rm d}^{3}p_{1}{\rm d}^{3}p_{2}{\rm d}^{3}p_{3}}{(2\pi)^ {9}} \tag{82}\] \[\times P_{g}\left(p_{1}\right)P_{g}\left(p_{2}\right)P_{g}\left(p _{3}\right)P_{g}\left(|\mathbf{q-p_{1}-p_{2}}|\right)P_{g}\left(|\mathbf{k-q+p_{1}+p_{2}} |\right)P_{g}\left(|\mathbf{k-q-q^{\prime}+p_{1}+p_{2}-p_{3}}|\right)\times 648\] \[= \frac{27G_{\rm NL}^{4}}{64\pi^{4}}\int_{0}^{\infty}{\rm d}u\int_{|1 -u|}^{1+u}{\rm d}v\int_{0}^{\infty}{\rm d}u^{\prime}\int_{|1-u^{\prime}|}^{1+u^{ \prime}}{\rm d}v^{\prime}\int_{0}^{\infty}{\rm d}u_{1}\int_{|1-u_{1}|}^{1+u_{1}}{ \rm d}v_{1}\int_{0}^{\infty}{\rm d}u_{2}\int_{|1-u_{2}|}^{1+u_{2}}{\rm d}v_{2} \int_{0}^{\infty}{\rm d}u_{3}\int_{|1-u_{3}|}^{1+u_{3}}{\rm d}v_{3}\] \[\times\int_{0}^{2\pi}{\rm d}\varphi_{1}\int_{0}^{2\pi}{\rm d} \varphi_{2}\int_{0}^{2\pi}{\rm d}\varphi_{3}\int_{0}^{2\pi}{\rm d}\varphi_{4} \cos 2\varphi_{1}I(u,v)I(u^{\prime},v^{\prime})\frac{uvu^{\prime}v^{\prime}u_{1}v_{ 1}u_{2}v_{2}u_{3}v_{3}}{(u_{1}u_{2}u_{3}w_{134}w_{0134}w)^{3}}\] \[\times{\cal P}_{g}\left(u_{1}k\right){\cal P}_{g}\left(u_{2}k \right){\cal P}_{g}\left(u_{3}k\right){\cal P}_{g}\left(w_{134}k\right){\cal P }_{g}\left(w_{0134}k\right){\cal P}_{g}\left(wk\right),\] where we perform the coordinate transformation the same as above.
Additionally, \(w_{0134}\) is defined as follows: \[w_{0134}^{2}= \frac{|\mathbf{k-q+p_{1}+p_{2}}|^{2}}{k^{2}} \tag{83}\] \[= 1+u^{2}+u_{1}^{2}+u_{2}^{2}-2u\cos\theta+2u_{1}\cos\theta_{1}+2u_ {2}\cos\theta_{2}-2uu_{1}\left[\sin\theta\sin\theta_{1}\cos\varphi_{2}+\cos \theta\cos\theta_{1}\right]\] \[-2uu_{2}\left[\sin\theta\sin\theta_{2}\cos\varphi_{3}+\cos\theta \cos\theta_{2}\right]+2u_{1}u_{2}\left[\sin\theta_{1}\sin\theta_{2}\cos( \varphi_{2}-\varphi_{3})+\cos\theta_{1}\cos\theta_{2}\right].\] For the 'net' term, with symmetry factor 1296, we have \[\Omega_{\rm GW}^{net}(k)= \frac{G_{\rm NL}^{4}k^{3}}{6\pi^{2}}\int\frac{{\rm d}^{3}q{\rm d}^{ 3}q^{\prime}}{(2\pi)^{6}}\cos 2(\phi-\phi^{\prime})I\left(u,v\right)I\left(u^{ \prime},v^{\prime}\right)\int\frac{{\rm d}^{3}p_{1}{\rm d}^{3}p_{2}{\rm d}^{3}p _{3}}{(2\pi)^{9}} \tag{85}\] \[\times P_{g}\left(p_{1}\right)P_{g}\left(p_{2}\right)P_{g}\left(p _{3}\right)P_{g}\left(\left|\mathbf{q-p_{1}-p_{2}}\right|\right)P_{g}\left(\left| \mathbf{q^{\prime}-p_{2}-p_{3}}\right|\right)P_{g}\left(\left|\mathbf{k-q+p_{1}-p_{3}}\right|\right) \times 1296\] \[= \frac{27G_{\rm NL}^{4}}{32\pi^{4}}\int_{0}^{\infty}{\rm d}u\int_{ \left|1-u\right|}^{1+u}{\rm d}v\int_{0}^{\infty}{\rm d}u^{\prime}\int_{\left| 1-u^{\prime}\right|}^{1+u^{\prime}}{\rm d}v^{\prime}\int_{0}^{\infty}{\rm d}u _{1}\int_{\left|1-u_{1}\right|}^{1+u_{1}}{\rm d}v_{1}\int_{0}^{\infty}{\rm d}u _{2}\int_{\left|1-u_{2}\right|}^{1+u_{2}}{\rm d}v_{2}\int_{0}^{\infty}{\rm d} u_{3}\int_{\left|1-u_{3}\right|}^{1+u_{3}}{\rm d}v_{3}\] \[\times\int_{0}^{2\pi}{\rm d}\varphi_{1}\int_{0}^{2\pi}{\rm d} \varphi_{2}\int_{0}^{2\pi}{\rm d}\varphi_{3}\int_{0}^{2\pi}{\rm d}\varphi_{4} \cos 2\varphi_{1}I(u,v)I(u^{\prime},v^{\prime})\frac{uvu^{\prime}v^{\prime}u_{1}v_{ 1}u_{2}v_{2}u_{3}v_{3}}{(u_{1}u_{2}u_{3}w_{134}w_{245}w_{0135})^{3}}\] \[\times\mathcal{P}_{g}\left(u_{1}k\right)\mathcal{P}_{g}\left(u_{2}k\right) \mathcal{P}_{g}\left(u_{3}k\right)\mathcal{P}_{g}\left(w_{134}k\right) \mathcal{P}_{g}\left(w_{245}k\right)\mathcal{P}_{g}\left(w_{0135}k\right),\] where we perform the coordinate transformation the same as above.
Additionally, \(w_{245}\) and \(w_{0135}\) are defined as follows: \[w_{245}^{2}= \frac{|\mathbf{q^{\prime}-p_{2}-p_{3}}|^{2}}{k^{2}} \tag{86}\] \[= u^{\prime 2}+u_{2}^{2}+u_{3}^{2}-2u^{\prime}u_{2}\left[\sin\theta^{ \prime}\sin\theta_{2}\cos(\varphi_{1}-\varphi_{3})+\cos\theta^{\prime}\cos \theta_{2}\right]-2u^{\prime}u_{3}(\sin\theta^{\prime}\sin\theta_{3}\cos( \varphi_{1}-\varphi_{4})+\cos\theta^{\prime}\cos\theta_{3})\] \[+2u_{2}u_{3}(\sin\theta_{2}\sin\theta_{3}\cos(\varphi_{3}-\varphi _{4})+\cos\theta_{2}\cos\theta_{3}),\] \[w_{0135}^{2}= \frac{|\mathbf{k-q+p_{1}-p_{3}}|^{2}}{k^{2}}\] (87) \[= 1+u^{2}+u_{1}^{2}+u_{3}^{2}-2u\cos\theta+2u_{1}\cos\theta_{1}-2u _{3}\cos\theta_{3}-2uu_{1}\left[\sin\theta\sin\theta_{1}\cos\varphi_{2}+\cos \theta\cos\theta_{1}\right]\] \[+2uu_{3}\left[\sin\theta\sin\theta_{3}\cos\varphi_{4}+\cos\theta \cos\theta_{3}\right]-2u_{1}u_{3}\left[\sin\theta_{1}\sin\theta_{3}\cos( \varphi_{2}-\varphi_{4})+\cos\theta_{1}\cos\theta_{3}\right].\] ### \(F_{\rm NL}^{2}G_{\rm NL}\) terms For terms containing \(F_{\rm NL}^{2}G_{\rm NL}\), considering symmetry, the GWs spectrum can be expressed in the following form: \[\Omega_{\rm GW}^{F_{\rm NL}^{2}G_{\rm NL}}(k)= \frac{F_{\rm NL}^{2}G_{\rm NL}k^{3}}{6\pi^{2}}\int\frac{{\rm d}^{3} q{\rm d}^{3}q^{\prime}}{(2\pi)^{6}}\cos 2(\phi-\phi^{\prime})I\left(u,v\right)I\left(u^{ \prime},v^{\prime}\right)\int\frac{{\rm d}^{3}p_{1}{\rm d}^{3}p_{2}{\rm d}^{3} p_{3}{\rm d}^{3}p_{4}}{(2\pi)^{12}} \tag{88}\] \[\times\Bigg{[}8\left\langle\langle\zeta_{g}(\mathbf{p_{1}})\zeta_{g} (\mathbf{q-p_{1}})\zeta_{g}(\mathbf{p_{2}})\zeta_{g}(\mathbf{p_{3}})\zeta_{g}(\mathbf{k-q-p_{2}- p_{3}})\zeta_{g}(\mathbf{p_{4}})\zeta_{g}(\mathbf{q^{\prime}-p_{4}})\zeta_{g}(\mathbf{k^{ \prime}-q^{\prime}})\rangle\right.\] \[\left.+4\left\langle\langle\zeta_{g}(\mathbf{p_{1}})\zeta_{g}(\mathbf{q-p _{1}})\zeta_{g}(\mathbf{p_{2}})\zeta_{g}(\mathbf{k-q-p_{2}})\zeta_{g}(\mathbf{p_{3}})\zeta_{ 
g}(\mathbf{p_{4}})\zeta_{g}(\mathbf{q^{\prime}-p_{3}-p_{4}})\zeta_{g}(\mathbf{k^{\prime}-q^{ \prime}})\rangle\rangle\Bigg{]}.\] Performing Wick contraction on the eight-point function, there are four distinct non-zero contractions and we name them as the 'loop' term, the '\(F^{2}G(1)\)' term, the '\(F^{2}G(2)\)' term, and the '\(F^{2}G(3)\)' term. Then we have \[\Omega_{\rm GW}^{F_{\rm NL}^{2}G_{\rm NL}}(k)=\Omega_{\rm GW}^{loop}(k)+\Omega_ {\rm GW}^{F^{2}G(1)}(k)+\Omega_{\rm GW}^{F^{2}G(2)}(k)+\Omega_{\rm GW}^{F^{2}G( 3)}(k), \tag{89}\] and we now demonstrate each of these four parts in detail. The 'loop' term can be easily obtained as \[\Omega_{\rm GW}^{loop}(k)=6G_{\rm NL}\int\frac{{\rm d}p}{p}\mathcal{P}_{g}(p) \Omega_{\rm GW}^{F_{\rm NL}^{2}}(k), \tag{90}\] and for power spectrum satisfying Eq. (51), we have \(\Omega_{\rm GW}^{loop}(k)=6AG_{\rm NL}\Omega_{\rm GW}^{F_{\rm NL}^{2}}(k)\). One example of the contraction of the '\(F^{2}G(1)\)' term is shown as follows: \[\left\langle\overbrace{\zeta_{g}(\mathbf{p_{1}})\zeta_{g}(\mathbf{q-p_{1}})\zeta_{g}(\mathbf{ p_{2}})\zeta_{g}(\mathbf{k-q-p_{2}})\zeta_{g}(\mathbf{p_{3}})\zeta_{g}(\mathbf{p_{4}})\zeta_{g}(\mathbf{q^{ \prime}-p_{3}-p_{4}})\zeta_{g}(\mathbf{k^{\prime}-q^{\prime}})}\right\rangle, \tag{91}\] and the symmetry factor in this case is 24.
Expanding the correlation function and using the appearing delta functions to eliminate redundant integrals, we can obtain: \[\Omega_{\rm GW}^{F^{2}G(1)}(k)= \frac{F_{\rm NL}^{2}G_{\rm NL}k^{3}}{6\pi^{2}}\int\frac{{\rm d}^{3} q{\rm d}^{3}q^{\prime}}{(2\pi)^{6}}\cos 2(\phi-\phi^{\prime})I\left(u,v\right)I\left(u^{ \prime},v^{\prime}\right)\int\frac{{\rm d}^{3}p}{(2\pi)^{3}}4P_{g}\left(p \right)P_{g}\left(\left|\mathbf{q-p}\right|\right)P_{g}\left(\left|\mathbf{q-q^{\prime }}\right|\right)P_{g}\left(\mathbf{k-q^{\prime}}\right)\times 24 \tag{92}\] \[= \frac{F_{\rm NL}^{2}G_{\rm NL}}{\pi^{2}}\int_{0}^{\infty}{\rm d}u \int_{\left|1-u\right|}^{1+u}{\rm d}v\int_{0}^{\infty}{\rm d}u^{\prime}\int_{ \left|1-u^{\prime}\right|}^{1+u^{\prime}}{\rm d}v^{\prime}\int_{0}^{\infty}{ \rm d}u_{1}\int_{\left|1-u_{1}\right|}^{1+u_{1}}{\rm d}v_{1}\int_{0}^{2\pi}{ \rm d}\varphi_{1}\int_{0}^{2\pi}{\rm d}\varphi_{2}\cos 2\varphi_{1}I(u,v)I(u^{ \prime},v^{\prime})\] \[\times\frac{uvu^{\prime}v^{\prime}u_{1}v_{1}}{(u_{1}v^{\prime}w_ {13}w_{12})^{3}}{\cal P}_{g}\left(u_{1}k\right){\cal P}_{g}\left(w_{13}k\right) {\cal P}_{g}\left(w_{12}k\right){\cal P}_{g}\left(v^{\prime}k\right),\] where \(w_{12}\) and \(w_{13}\) are defined the same as Eq. (38) and Eq. (45). One example of the contraction of the '\(F^{2}G(2)\)' term is shown as follows: \[\left\langle\contraction{\zeta_{g}(\mathbf{p_{1}})\zeta_{g}(\mathbf{q-p_{1}})\zeta_{g}(\mathbf{p_ {2}})\zeta_{g}(\mathbf{p_{3}})\zeta_{g}(\mathbf{k-q-p_{2}-p_{3}})\zeta_{g}(\mathbf{p_{4}}) \zeta_{g}(\mathbf{q^{\prime}-p_{4}})\zeta_{g}(\mathbf{k^{\prime}-q^{\prime}})}{\zeta_ {g}(\mathbf{k^{\prime}-q^{\prime}})}\right\rangle, \tag{93}\] and the symmetry factor in this case is 12. 
Expanding the correlation function and using the appearing delta functions to eliminate redundant integrals, we can obtain \(\Omega_{\rm GW}^{F^{2}G(2)}(k)\) in the same way, where \(w_{23}\) and \(w_{012}\) are defined the same as Eq. (46) and Eq. (32). One example of the contraction of the '\(F^{2}G(3)\)' term is shown as follows: \[\left\langle\contraction{\zeta_{g}(\mathbf{p_{1}})\zeta_{g}(\mathbf{q-p_{1}})\zeta_{g}(\mathbf{ p_{2}})\zeta_{g}(\mathbf{p_{3}})\zeta_{g}(\mathbf{k-q-p_{2}-p_{3}})\zeta_{g}(\mathbf{p_{4}}) \zeta_{g}(\mathbf{q^{\prime}-p_{4}})\zeta_{g}(\mathbf{k^{\prime}-q^{\prime}})}{\zeta_ {g}(\mathbf{k^{\prime}-q^{\prime}})}\right\rangle, \tag{95}\] and the symmetry factor in this case is 24.
Expanding the correlation function and using the appearing delta functions to eliminate redundant integrals, we can obtain: \[\Omega_{\rm GW}^{F^{2}G(3)}(k)= \frac{F_{\rm NL}^{2}G_{\rm NL}k^{3}}{6\pi^{2}}\int\frac{{\rm d}^{3}q{\rm d}^{3}q^{\prime}}{(2\pi)^{6}}\cos 2(\phi-\phi^{\prime})I\left(u,v\right)I\left(u^{\prime},v^{\prime}\right)\int\frac{{\rm d}^{3}p}{(2\pi)^{3}}8P_{g}\left(p\right)P_{g}\left(\left|\mathbf{q-p}\right|\right)P_{g}\left(\left|\mathbf{k-q^{\prime}}\right|\right) \tag{97}\] \[\times P_{g}\left(\mathbf{q-q^{\prime}-p}\right)\times 24\] \[= \frac{2F_{\rm NL}^{2}G_{\rm NL}}{\pi^{2}}\int_{0}^{\infty}{\rm d}u\int_{\left|1-u\right|}^{1+u}{\rm d}v\int_{0}^{\infty}{\rm d}u^{\prime}\int_{\left|1-u^{\prime}\right|}^{1+u^{\prime}}{\rm d}v^{\prime}\int_{0}^{\infty}{\rm d}u_{1}\int_{\left|1-u_{1}\right|}^{1+u_{1}}{\rm d}v_{1}\int_{0}^{2\pi}{\rm d}\varphi_{1}\int_{0}^{2\pi}{\rm d}\varphi_{2}\cos 2\varphi_{1}I(u,v)I(u^{\prime},v^{\prime})\] \[\times\frac{uvu^{\prime}v^{\prime}u_{1}v_{1}}{(u_{1}v^{\prime}w_{13}w_{123})^{3}}{\cal P}_{g}\left(u_{1}k\right){\cal P}_{g}\left(v^{\prime}k\right){\cal P}_{g}\left(w_{13}k\right){\cal P}_{g}\left(w_{123}k\right),\] where \(w_{13}\) is defined the same as Eq.
(45) and \(w_{123}\) is defined as follows: \[w_{123}^{2}= \frac{\left|\mathbf{q-q^{\prime}-p}\right|^{2}}{k^{2}} \tag{98}\] \[= u^{2}+u^{\prime 2}+u_{1}^{2}-2uu^{\prime}\left(\sin\theta\sin\theta^{ \prime}\cos\varphi_{1}+\cos\theta\cos\theta^{\prime}\right)-2uu_{1}(\sin\theta \sin\theta_{1}\cos\varphi_{2}+\cos\theta\cos\theta_{1})\] \[+2u^{\prime}u_{1}\left[\sin\theta^{\prime}\sin\theta_{1}\cos( \varphi_{1}-\varphi_{2})+\cos\theta^{\prime}\cos\theta_{1}\right].\] ### \(F_{\rm NL}^{2}G_{\rm NL}^{2}\) terms For terms containing \(F_{\rm NL}^{2}G_{\rm NL}^{2}\), considering symmetry, the GWs spectrum can be expressed in the following form: \[\Omega_{\rm GW}^{F_{\rm NL}^{2}G_{\rm NL}^{2}}(k)= \frac{F_{\rm NL}^{2}G_{\rm NL}^{2}k^{3}}{6\pi^{2}}\int\frac{{\rm d }^{3}q{\rm d}^{3}q^{\prime}}{(2\pi)^{6}}\cos 2(\phi-\phi^{\prime})I\left(u,v \right)I\left(u^{\prime},v^{\prime}\right)\int\frac{{\rm d}^{3}p_{1}{\rm d}^{ 3}p_{2}{\rm d}^{3}p_{3}{\rm d}^{3}p_{4}{\rm d}^{3}p_{5}{\rm d}^{3}p_{6}}{(2\pi )^{18}} \tag{98}\] \[\times\Bigg{[}4\langle\langle\zeta_{g}(\mathbf{p_{1}})\zeta_{g}(\mathbf{q }-\mathbf{p_{1}})\zeta_{g}(\mathbf{p_{2}})\zeta_{g}(\mathbf{p_{3}})\zeta_{g}(\mathbf{k}-\mathbf{q}- \mathbf{p_{2}}-\mathbf{p_{3}})\zeta_{g}(\mathbf{p_{4}})\zeta_{g}(\mathbf{q^{\prime}}-\mathbf{p_{4 }})\zeta_{g}(\mathbf{p_{5}})\zeta_{g}(\mathbf{p_{6}})\] \[\times\zeta_{g}(\mathbf{k^{\prime}}-\mathbf{q^{\prime}}-\mathbf{p_{5}}-\mathbf{p_ {6}}))\rangle+2\langle\langle\zeta_{g}(\mathbf{p_{1}})\zeta_{g}(\mathbf{q}-\mathbf{p_{1}}) \zeta_{g}(\mathbf{p_{2}})\zeta_{g}(\mathbf{k}-\mathbf{q}-\mathbf{p_{2}})\zeta_{g}(\mathbf{p_{3}}) \zeta_{g}(\mathbf{p_{4}})\] \[\times\zeta_{g}(\mathbf{q^{\prime}}-\mathbf{p_{3}}-\mathbf{p_{4}})\zeta_{g}( \mathbf{p_{5}})\zeta_{g}(\mathbf{p_{6}})\zeta_{g}(\mathbf{k^{\prime}}-\mathbf{q^{\prime}}-\bm {p_{5}}-\mathbf{p_{6}}))\rangle\Bigg{]}.\] Performing Wick contraction on the ten-point function, there are 9 distinct non-zero contractions and we name them as the 'loops' 
term, the '\(F^{2}G^{2}(1)\)' term, the '\(F^{2}G^{2}(2)\)' term, the '\(F^{2}G^{2}(3)\)' term, the '\(F^{2}G^{2}(4)\)' term, the '\(F^{2}G^{2}(5)\)' term, the '\(F^{2}G^{2}(6)\)' term, the '\(F^{2}G^{2}(7)\)' term and the '\(F^{2}G^{2}(8)\)' term. Then we have \[\Omega_{\rm GW}^{F_{\rm NL}^{2}G_{\rm NL}^{2}}(k)= \Omega_{\rm GW}^{loops}(k)+\Omega_{\rm GW}^{F^{2}G^{2}(1)}(k)+\Omega_{\rm GW}^{F^{2}G^{2}(2)}(k)+\Omega_{\rm GW}^{F^{2}G^{2}(3)}(k)+\Omega_{\rm GW}^{F^{2}G^{2}(4)}(k)+\Omega_{\rm GW}^{F^{2}G^{2}(5)}(k)+\Omega_{\rm GW}^{F^{2}G^{2}(6)}(k) \tag{99}\] \[+\Omega_{\rm GW}^{F^{2}G^{2}(7)}(k)+\Omega_{\rm GW}^{F^{2}G^{2}(8)}(k),\] and we now demonstrate each of these 9 parts in detail. The 'loops' term can be easily obtained as \[\Omega_{\rm GW}^{loops}(k)= 9G_{\rm NL}^{2}\int\frac{{\rm d}p_{1}{\rm d}p_{2}}{p_{1}p_{2}}{\cal P}_{g}(p_{1}){\cal P}_{g}(p_{2})\Omega_{\rm GW}^{F_{\rm NL}^{2}}(k)+3G_{\rm NL}\int\frac{{\rm d}p_{1}}{p_{1}}{\cal P}_{g}(p_{1})\left(\Omega_{\rm GW}^{F_{\rm NL}^{2}G_{\rm NL}}(k)-6G_{\rm NL}\int\frac{{\rm d}p_{2}}{p_{2}}{\cal P}_{g}(p_{2})\Omega_{\rm GW}^{F_{\rm NL}^{2}}(k)\right) \tag{100}\] \[= 3G_{\rm NL}\int\frac{{\rm d}p_{1}}{p_{1}}{\cal P}_{g}(p_{1})\left(\Omega_{\rm GW}^{F_{\rm NL}^{2}G_{\rm NL}}(k)-3G_{\rm NL}\int\frac{{\rm d}p_{2}}{p_{2}}{\cal P}_{g}(p_{2})\Omega_{\rm GW}^{F_{\rm NL}^{2}}(k)\right),\] and for a power spectrum satisfying Eq. (51), we have \(\Omega_{\rm GW}^{loops}(k)=3AG_{\rm NL}\Omega_{\rm GW}^{F_{\rm NL}^{2}G_{\rm NL}}(k)-9A^{2}G_{\rm NL}^{2}\Omega_{\rm GW}^{F_{\rm NL}^{2}}(k)\).
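The second equality in Eq. (100) holds because the \(9G_{\rm NL}^{2}\) double-integral term and the \(-18G_{\rm NL}^{2}\) cross term combine into \(-9G_{\rm NL}^{2}\); for a power spectrum satisfying Eq. (51) this gives the quoted closed form. A quick numerical check of the identity with placeholder values (all numbers below are illustrative, not taken from the paper):

```python
# Placeholder values for the integrated power spectrum and the two
# lower-order spectra; the identity must hold for any such values.
A = 0.37          # \int P_g(p) dp / p
G = 1.9           # G_NL
omega_F2 = 2.4    # Omega_GW^{F_NL^2}(k) at some fixed k
omega_F2G = 5.1   # Omega_GW^{F_NL^2 G_NL}(k) at the same k

# First line of Eq. (100): 9 G^2 A^2 W_F2 + 3 G A (W_F2G - 6 G A W_F2)
line1 = 9 * G**2 * A**2 * omega_F2 + 3 * G * A * (omega_F2G - 6 * G * A * omega_F2)
# Second line of Eq. (100): 3 G A (W_F2G - 3 G A W_F2)
line2 = 3 * G * A * (omega_F2G - 3 * G * A * omega_F2)
# Closed form quoted after Eq. (100)
final = 3 * A * G * omega_F2G - 9 * A**2 * G**2 * omega_F2

assert abs(line1 - line2) < 1e-9 and abs(line2 - final) < 1e-9
```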
The '\(F^{2}G^{2}(1)\)' term is a disconnected diagram and one example of the contraction is shown as follows: \[\left\langle\contraction{\zeta_{g}(\mathbf{p_{1}})\zeta_{g}(\mathbf{q}-\mathbf{p_{1}})\zeta_{g}( \mathbf{p_{2}})\zeta_{g}(\mathbf{p_{3}})\zeta_{g}(\mathbf{k}-\mathbf{q}-\mathbf{p_{2}}-\mathbf{p_{3}}) \zeta_{g}(\mathbf{p_{4}})\zeta_{g}(\mathbf{q^{\prime}}-\mathbf{p_{4}})\zeta_{g}(\mathbf{p_{5}}) \zeta_{g}(\mathbf{p_{6}})\zeta_{g}(\mathbf{k^{\prime}}-\mathbf{q^{\prime}}-\mathbf{p_{5}}-\mathbf{ p_{6}})}\right\rangle. \tag{101}\] The symmetry factor in this case is 12. Expanding the correlation function and using the appearing delta functions to eliminate redundant integrals, we can obtain: \[\Omega_{\rm GW}^{F^{2}G^{2}(1)}(k)= \frac{F_{\rm NL}^{2}G_{\rm NL}^{2}k^{3}}{6\pi^{2}}\int\frac{{\rm d }^{3}q}{(2\pi)^{3}}I^{2}\left(u,v\right)\int\frac{{\rm d}^{3}p_{1}{\rm d}^{3}p_ {2}{\rm d}^{3}p_{3}}{(2\pi)^{9}}4P_{g}\left(p_{1}\right)P_{g}\left(p_{2}\right) P_{g}\left(p_{3}\right)P_{g}\left(|\mathbf{q}-\mathbf{p_{1}}|\right)P_{g}\left(|\mathbf{k}-\mathbf{q}-\mathbf{p_{2}}- \mathbf{p_{3}}|\right)\times 12 \tag{102}\] \[= F_{\rm NL}^{2}G_{\rm NL}^{2}\int_{0}^{\infty}{\rm d}u\int_{|1-u|}^{ 1+u}{\rm d}v\int_{0}^{\infty}{\rm d}u_{1}\int_{|1-u_{1}|}^{1+u_{1}}{\rm d}v_{1} \int_{0}^{\infty}{\rm d}u_{2}\int_{|1-u_{2}|}^{1+u_{2}}{\rm d}v_{2}\int_{0}^{ \infty}{\rm d}u_{3}\int_{|1-u_{3}|}^{1+u_{3}}{\rm d}v_{3}I^{2}(u,v)\] \[\times\frac{1}{u^{2}v^{2}u_{1}^{2}v_{1}^{2}u_{2}^{2}v_{2}^{2}u_{3 }^{2}v_{3}^{2}}{\cal P}_{g}\left(u_{1}uk\right){\cal P}_{g}\left(v_{1}uk \right){\cal P}_{g}\left(u_{2}vk\right){\cal P}_{g}\left(u_{3}v_{2}vk\right){ \cal P}_{g}\left(v_{3}v_{2}vk\right),\] where the second equality in the above equation is obtained by performing the coordinate transformation \(u_{1}=p_{1}/q\), \(v_{1}=|\mathbf{q}-\mathbf{p_{1}}|/q\), \(u_{2}=p_{2}/|\mathbf{k}-\mathbf{q}|\), \(v_{2}=|\mathbf{k}-\mathbf{q}-\mathbf{p_{2}}|/|\mathbf{k}-\mathbf{q}|\), 
\(u_{3}=p_{3}/|\mathbf{k}-\mathbf{q}-\mathbf{p_{2}}|\), \(v_{3}=|\mathbf{k}-\mathbf{q}-\mathbf{p_{2}}-\mathbf{p_{3}}|/|\mathbf{k}-\mathbf{q}-\mathbf{p_{2}}|\). One example of the contraction of the '\(F^{2}G^{2}(2)\)' term is shown as follows: \[\left\langle\contraction{\zeta_{g}(\mathbf{p_{1}})\zeta_{g}(\mathbf{q}-\mathbf{p_{1}})\zeta_{g}( \mathbf{p_{2}})\zeta_{g}(\mathbf{p_{3}})\zeta_{g}(\mathbf{k}-\mathbf{q}-\mathbf{p_{2}}-\mathbf{p_{3}}) \zeta_{g}(\mathbf{p_{4}})\zeta_{g}(\mathbf{q^{\ and the symmetry factor in this case is 72. Expanding the correlation function and using the appearing delta functions to eliminate redundant integrals, we can obtain: \[\Omega_{\rm GW}^{F^{2}G^{2}(2)}(k)= \frac{F_{\rm NL}^{2}G_{\rm NL}^{2}k^{3}}{6\pi^{2}}\int\frac{{\rm d }^{3}q{\rm d}^{3}q^{\prime}}{(2\pi)^{6}}\cos 2(\phi-\phi^{\prime})I\left(u,v\right)I \left(u^{\prime},v^{\prime}\right)\int\frac{{\rm d}^{3}p_{1}{\rm d}^{3}p_{2}} {(2\pi)^{6}} \tag{104}\] \[\times 4P_{g}\left(p_{1}\right)P_{g}\left(p_{2}\right)P_{g}\left( \left|\mathbf{q}-\mathbf{p}_{1}\right|\right)P_{g}\left(\left|\mathbf{k}+\mathbf{p}_{1}+\mathbf{p} _{2}-\mathbf{q}\right|\right)P_{g}\left(\mathbf{q}+\mathbf{q}^{\prime}-\mathbf{p}_{1}\right) \times 72\] \[= \frac{3F_{\rm NL}^{2}G_{\rm NL}^{2}}{4\pi^{3}}\int_{0}^{\infty}{ \rm d}u\int_{\left|1-u^{\prime}\right|}^{1+u}{\rm d}v\int_{0}^{\infty}{\rm d }u^{\prime}\int_{\left|1-u^{\prime}\right|}^{1+u^{\prime}}{\rm d}v^{\prime} \int_{0}^{\infty}{\rm d}u_{1}\int_{\left|1-u_{1}\right|}^{1+u_{1}}{\rm d}v_{1} \int_{0}^{\infty}{\rm d}u_{2}\int_{\left|1-u_{2}\right|}^{1+u_{2}}{\rm d}v_{2}\] \[\times\int_{0}^{2\pi}{\rm d}\varphi_{1}\int_{0}^{2\pi}{\rm d} \varphi_{2}\int_{0}^{2\pi}{\rm d}\varphi_{3}\cos 2\varphi_{1}I(u,v)I(u^{ \prime},v^{\prime})\frac{uvu^{\prime}v^{\prime}u_{1}v_{1}u_{2}v_{2}}{(u_{1}u_{ 2}w_{13}w_{0134}w_{123})^{3}}{\cal P}_{g}\left(u_{1}k\right){\cal P}_{g}\left( u_{2}k\right)\] \[\times{\cal P}_{g}\left(w_{13k}\right){\cal P}_{g}\left(w_{0134}k 
\right){\cal P}_{g}\left(w_{123}k\right),\] where \(w_{13}\) is defined in Eq. (45) and \(w_{0134}\) and \(w_{123}\) are defined as follows: \[w_{0134}^{2}= \frac{|\mathbf{k}-\mathbf{q}+\mathbf{p_{1}}+\mathbf{p_{2}}|^{2}}{k^{2}} \tag{105}\] \[= 1+u^{2}+u_{1}^{2}+u_{2}^{2}-2u\cos\theta+2u_{1}\cos\theta_{1}+2u _{2}\cos\theta_{2}-2uu_{1}\left[\sin\theta\sin\theta_{1}\cos\varphi_{2}+\cos \theta\cos\theta_{1}\right]\] \[-2uu_{2}\left[\sin\theta\sin\theta_{2}\cos\varphi_{3}+\cos\theta \cos\theta_{3}\right]+2u_{1}u_{2}\left[\sin\theta_{1}\sin\theta_{2}\cos( \varphi_{2}-\varphi_{3})+\cos\theta_{1}\cos\theta_{2}\right],\] \[w_{123}^{2}= \frac{|\mathbf{q}+\mathbf{q}^{\prime}-\mathbf{p}_{1}|^{2}}{k^{2}}\] \[= u^{2}+u^{\prime 2}+u_{1}^{2}+2uu^{\prime}\left(\sin\theta\sin \theta^{\prime}\cos\varphi_{1}+\cos\theta\cos\theta^{\prime}\right)-2uu_{1} \left(\sin\theta\sin\theta_{1}\cos\varphi_{2}+\cos\theta\cos\theta_{1}\right)\] \[-2u^{\prime}u_{1}\left[\sin\theta^{\prime}\sin\theta_{1}\cos( \varphi_{1}-\varphi_{2})+\cos\theta^{\prime}\cos\theta_{1}\right].\] One example of the contraction of the '\(F^{2}G^{2}(3)\)' term is shown as follows: \[\left\langle\zeta_{g}(\mathbf{p_{1}})\zeta_{g}(\mathbf{q}-\mathbf{p_{1}})\zeta_{g}(\mathbf{p_{2 }})\zeta_{g}(\mathbf{p_{3}})\zeta_{g}(\mathbf{k}-\mathbf{q}-\mathbf{p_{2}}-\mathbf{p_{3}})\zeta_{g} (\mathbf{p_{4}})\zeta_{g}(\mathbf{q^{\prime}}-\mathbf{p_{4}})\zeta_{g}(\mathbf{p_{5}})\zeta_{g} (\mathbf{p_{6}})\zeta_{g}(\mathbf{k^{\prime}}-\mathbf{q^{\prime}}-\mathbf{p_{5}}-\mathbf{p_{6}}) \right\rangle. \tag{107}\] and the symmetry factor in this case is 72. 
Expanding the correlation function and using the appearing delta functions to eliminate redundant integrals, we can obtain: \[\Omega_{\rm GW}^{F^{2}G^{2}(3)}(k)= \frac{F_{\rm NL}^{2}G_{\rm NL}^{2}k^{3}}{6\pi^{2}}\int\frac{{\rm d }^{3}q{\rm d}^{3}q^{\prime}}{(2\pi)^{6}}\cos 2(\phi-\phi^{\prime})I\left(u,v\right)I\left(u^{ \prime},v^{\prime}\right)\int\frac{{\rm d}^{3}p_{1}{\rm d}^{3}p_{2}}{(2\pi)^{6}} \tag{108}\] \[\times{\rm d}P_{g}\left(p_{1}\right)P_{g}\left(p_{2}\right)P_{g} \left(\left|\mathbf{q}-\mathbf{p}_{1}\right|\right)P_{g}\left(\left|\mathbf{k}-\mathbf{q}-\mathbf{q ^{\prime}}+\mathbf{p_{1}}-\mathbf{p_{2}}\right|\right)\times 72\] \[= \frac{3F_{\rm NL}^{2}G_{\rm NL}^{2}}{4\pi^{3}}\int_{0}^{\infty}{ \rm d}u\int_{\left|1-u\right|}^{1+u}{\rm d}v\int_{0}^{\infty}{\rm d}u^{\prime} \int_{\left|1-u^{\prime}\right|}^{1+u^{\prime}}{\rm d}v^{\prime}\int_{0}^{ \infty}{\rm d}u_{1}\int_{\left|1-u_{1}\right|}^{1+u_{1}}{\rm d}v_{1}\int_{0}^{ \infty}{\rm d}u_{2}\int_{\left|1-u_{2}\right|}^{1+u_{2}}{\rm d}v_{2}\] \[\times\int_{0}^{2\pi}{\rm d}\varphi_{1}\int_{0}^{2\pi}{\rm d} \varphi_{2}\int_{0}^{2\pi}{\rm d}\varphi_{3}\cos 2\varphi_{1}I(u,v)I(u^{ \prime},v^{\prime})\frac{uvu^{\prime}v^{\prime}u_{1}v_{1}u_{2}v_{2}}{(u_{1}u_{2}w _{13}w_{23}w_{01234})^{3}}{\cal P}_{g}\left(u_{1}k\right){\cal P}_{g}\left(u_{2}k \right){\cal P}_{g}\left(w_{13}k\right)\] \[\times{\cal P}_{g}\left(w_{23}k\right){\cal P}_{g}\left(w_{01234}k \right),\] where \(w_{13}\) and \(w_{23}\) are defined in Eq. (45) and Eq. 
(46) and \(w_{01234}\) are defined as follows: \[w_{01234}^{2}= \frac{|\mathbf{k}-\mathbf{q}-\mathbf{q^{\prime}}+\mathbf{p_{1}}-\mathbf{p_{2}}|^{2}}{k^ {2}} \tag{109}\] \[= 1+u^{2}+u^{\prime 2}+u_{1}^{2}+u_{2}^{2}-2u\cos\theta-2u^{\prime} \cos\theta^{\prime}+2u_{1}\cos\theta_{1}-2u_{2}\cos\theta_{2}+2uu^{\prime}\left[ \sin\theta\sin\theta^{\prime}\cos\varphi_{1}+\cos\theta\cos\theta^{\prime}\right]\] \[-2uu_{1}\left[\sin\theta\sin\theta_{1}\cos\varphi_{2}+\cos\theta \cos\theta_{1}\right]+2uu_{2}\left[\sin\theta\sin\theta_{2}\cos\varphi_{3}+\cos \theta\cos\theta_{2}\right]\] \[-2u^{\prime}u_{1}\left[\sin\theta^{\prime}\sin\theta_{1}\cos( \varphi_{1}-\varphi_{2})+\cos\theta^{\prime}\cos\theta_{1}\right]+2u^{\prime}u_{2} \left[\sin\theta^{\prime}\sin\theta_{2}\cos(\varphi_{1}-\varphi_{3})+\cos\theta^{ \prime}\cos\theta_{2}\right]\] \[-2u_{1}u_{2}\left[\sin\theta_{1}\sin\theta_{2}\cos(\varphi_{2}- \varphi_{3})+\cos\theta_{1}\cos\theta_{2}\right].\] One example of the contraction of the '\(F^{2}G^{2}(4)\)' term is shown as follows: \[\left\langle\zeta_{g}(\mathbf{p and the symmetry factor in this case is 36. 
Expanding the correlation function and using the appearing delta functions to eliminate redundant integrals, we can obtain: \[\Omega_{\rm GW}^{F^{2}G^{2}(4)}(k)= \frac{F_{\rm NL}^{2}G_{\rm NL}^{2}k^{3}}{6\pi^{2}}\int\frac{{\rm d}^ {3}q{\rm d}^{3}q^{\prime}}{(2\pi)^{6}}\cos 2(\phi-\phi^{\prime})I\left(u,v\right)I \left(u^{\prime},v^{\prime}\right)\int\frac{{\rm d}^{3}p_{1}{\rm d}^{3}p_{2}} {(2\pi)^{6}} \tag{111}\] \[\times 4P_{g}\left(p_{1}\right)P_{g}\left(p_{2}\right)P_{g}\left(| \boldsymbol{q-p_{1}}|\right)P_{g}\left(|\boldsymbol{q^{\prime}-p_{2}}|\right)P _{g}\left(|\boldsymbol{k-q-q^{\prime}}|\right)\times 36\] \[= \frac{3F_{\rm NL}^{2}G_{\rm NL}^{2}}{8\pi^{3}}\int_{0}^{\infty}{ \rm d}u\int_{|1-u|}^{1+u}{\rm d}v\int_{0}^{\infty}{\rm d}u^{\prime}\int_{|1-u^ {\prime}|}^{1+u^{\prime}}{\rm d}v^{\prime}\int_{0}^{\infty}{\rm d}u_{1}\int_{| 1-u_{1}|}^{1+u_{1}}{\rm d}v_{1}\int_{0}^{\infty}{\rm d}u_{2}\int_{|1-u_{2}|}^{ 1+u_{2}}{\rm d}v_{2}\] \[\times\int_{0}^{2\pi}{\rm d}\varphi_{1}\int_{0}^{2\pi}{\rm d} \varphi_{2}\int_{0}^{2\pi}{\rm d}\varphi_{3}\cos 2\varphi_{1}I(u,v)I(u^{ \prime},v^{\prime})\frac{uvu^{\prime}v^{\prime}u_{1}u_{1}u_{2}v_{2}}{(u_{1}u_ {2}w_{13}w_{24}w_{012})^{3}}{\cal P}_{g}\left(u_{1}k\right){\cal P}_{g}\left(u _{2}k\right){\cal P}_{g}\left(w_{13}k\right)\] \[\times{\cal P}_{g}\left(w_{24}k\right){\cal P}_{g}\left(w_{012}k \right),\] where \(w_{13}\) and \(w_{012}\) are defined in Eq. (45) and Eq. (32) and \(w_{24}\) is defined as follows: \[w_{24}^{2} =\frac{|\boldsymbol{q^{\prime}-p_{2}}|^{2}}{k^{2}}=u^{\prime 2}+u_{2}^{ 2}-2u^{\prime}u_{2}\left[\sin\theta^{\prime}\sin\theta_{2}\cos(\varphi_{1}- \varphi_{3})+\cos\theta^{\prime}\cos\theta_{2}\right]. 
\tag{112}\] One example of the contraction of the '\(F^{2}G^{2}(5)\)' term is shown as follows: \[\left\langle\overline{\zeta_{g}(\boldsymbol{p_{1}})\overline{\zeta_{g}( \boldsymbol{q-p_{1}})}\zeta_{g}(\boldsymbol{p_{2}})\zeta_{g}(\boldsymbol{k-q- p_{2}})\zeta_{g}(\boldsymbol{p_{3}})\zeta_{g}(\boldsymbol{p_{4}})\overline{ \zeta_{g}(\boldsymbol{q^{\prime}-p_{3}-p_{4}})}\zeta_{g}(\boldsymbol{p_{5}}) \zeta_{g}(\boldsymbol{p_{6}})\zeta_{g}(\boldsymbol{k^{\prime}-q^{\prime}-p_{5 }-p_{6}})}\right\rangle. \tag{113}\] and the symmetry factor in this case is 144. Expanding the correlation function and using the appearing delta functions to eliminate redundant integrals, we can obtain: \[\Omega_{\rm GW}^{F^{2}G^{2}(5)}(k)= \frac{F_{\rm NL}^{2}G_{\rm NL}^{2}k^{3}}{6\pi^{2}}\int\frac{{\rm d }^{3}q{\rm d}^{3}q^{\prime}}{(2\pi)^{6}}\cos 2(\phi-\phi^{\prime})I\left(u,v\right)I \left(u^{\prime},v^{\prime}\right)\int\frac{{\rm d}^{3}p_{1}{\rm d}^{3}p_{2}} {(2\pi)^{6}} \tag{114}\] \[\times 2P_{g}\left(p_{1}\right)P_{g}\left(|\boldsymbol{q-p_{1}}| \right)P_{g}\left(|\boldsymbol{k-q+p_{1}}|\right)P_{g}\left(|\boldsymbol{q+q^{ \prime}-p_{1}-p_{2}}|\right)\times 144\] \[= \frac{3F_{\rm NL}^{2}G_{\rm NL}^{2}}{4\pi^{3}}\int_{0}^{\infty}{ \rm d}u\int_{|1-u|}^{1+u}{\rm d}v\int_{0}^{\infty}{\rm d}u^{\prime}\int_{|1-u ^{\prime}|}^{1+u^{\prime}}{\rm d}v^{\prime}\int_{0}^{\infty}{\rm d}u_{1}\int_{ |1-u_{1}|}^{1+u_{1}}{\rm d}v_{1}\int_{0}^{\infty}{\rm d}u_{2}\int_{|1-u_{2}|}^{ 1+u_{2}}{\rm d}v_{2}\] \[\times\int_{0}^{2\pi}{\rm d}\varphi_{1}\int_{0}^{2\pi}{\rm d} \varphi_{2}\int_{0}^{2\pi}{\rm d}\varphi_{3}\cos 2\varphi_{1}I(u,v)I(u^{ \prime},v^{\prime})\frac{uvu^{\prime}v^{\prime}u_{1}v_{1}u_{2}v_{2}}{(u_{1}u_{2 }w_{13}w_{013}w_{1234})^{3}}{\cal P}_{g}\left(u_{1}k\right){\cal P}_{g}\left(u _{2}k\right){\cal P}_{g}\left(w_{13}k\right)\] \[\times{\cal P}_{g}\left(w_{013}k\right){\cal P}_{g}\left(w_{1234}k \right),\] where \(w_{013}\) and \(w_{1234}\) are defined as follows: \[w_{013}^{2}= 
\frac{|\boldsymbol{k-q+p_{1}}|^{2}}{k^{2}}=1+u^{2}+u_{1}^{2}-2u\cos \theta+2u_{1}\cos\theta_{1}-2uu_{1}\left[\sin\theta\sin\theta_{1}\cos\varphi_{2}+ \cos\theta\cos\theta_{1}\right], \tag{115}\] \[w_{1234}^{2}= \frac{|\boldsymbol{q+q^{\prime}-p_{1}-p_{2}}|^{2}}{k^{2}}\] (116) \[= u^{2}+u^{\prime 2}+u_{1}^{2}+u_{2}^{2}+2uu^{\prime}\left[\sin\theta \sin\theta^{\prime}\cos\varphi_{1}+\cos\theta\cos\theta^{\prime}\right]-2uu_{1} \left[\sin\theta\sin\theta_{1}\cos\varphi_{2}+\cos\theta\cos\theta_{1}\right]\] \[-2uu_{2}\left[\sin\theta\sin\theta_{2}\cos\varphi_{3}+\cos\theta \cos\theta_{2}\right]-2u^{\prime}u_{1}\left[\sin\theta^{\prime}\sin\theta_{1} \cos(\varphi_{1}-\varphi_{2})+\cos\theta^{\prime}\cos\theta_{1}\right]\] \[-2u^{\prime}u_{2}\left[\sin\theta^{\prime}\sin\theta_{2}\cos( \varphi_{1}-\varphi_{3})+\cos\theta^{\prime}\cos\theta_{2}\right]+2u_{1}u_{2} \left[\sin\theta_{1}\sin\theta_{2}\cos(\varphi_{2}-\varphi_{3})+\cos\theta_{1} \cos\theta_{2}\right].\] One example of the contraction of the '\(F^{2}G^{2}(6)\)' term is shown as follows: \[\left\langle\overline{\zeta_{g}(\boldsymbol{p_{1}})\overline{\zeta_{g}( \boldsymbol{q-p_{1}})}\zeta_{g}(\boldsymbol{p_{2}})\zeta_{g}(\boldsymbol{k-q- p_{2}})\zeta_{g}(\boldsymbol{p_{3}})\zeta_{g}(\boldsymbol{p_{4}})\overline{\zeta_{g}( \boldsymbol{q^{\prime}-p_{3}-p_{4}})}\zeta_{g}(\boldsymbol{p_{5}})\zeta_{g}( \boldsymbol{p_{6}})\zeta_{g}(\boldsymbol{k^{\prime}-q^{\prime}-p_{5}-p_{6}})} \right\rangle. \tag{117}\] and the symmetry factor in this case is 72. 
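Expanded squared norms such as Eq. (116) are easy to get wrong by a single sign or index; they can be verified mechanically against direct vector arithmetic, placing \(\mathbf{q}\) at zero azimuth so that \(\varphi_{1},\varphi_{2},\varphi_{3}\) are the azimuths of \(\mathbf{q^{\prime}},\mathbf{p_{1}},\mathbf{p_{2}}\) relative to \(\mathbf{q}\). A Python sketch of such a check for \(w_{1234}^{2}\) (the helper names are ours):

```python
import math, random

def sph(r, theta, phi):
    """Cartesian vector from spherical coordinates (lengths in units of k)."""
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

def add(a, b): return tuple(x + y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def norm2(a):  return sum(x * x for x in a)

def w1234_sq(u, t, up, tp, u1, t1, u2, t2, f1, f2, f3):
    """Right-hand side of Eq. (116); azimuths measured relative to q."""
    return (u**2 + up**2 + u1**2 + u2**2
            + 2*u*up * (math.sin(t)*math.sin(tp)*math.cos(f1) + math.cos(t)*math.cos(tp))
            - 2*u*u1 * (math.sin(t)*math.sin(t1)*math.cos(f2) + math.cos(t)*math.cos(t1))
            - 2*u*u2 * (math.sin(t)*math.sin(t2)*math.cos(f3) + math.cos(t)*math.cos(t2))
            - 2*up*u1 * (math.sin(tp)*math.sin(t1)*math.cos(f1-f2) + math.cos(tp)*math.cos(t1))
            - 2*up*u2 * (math.sin(tp)*math.sin(t2)*math.cos(f1-f3) + math.cos(tp)*math.cos(t2))
            + 2*u1*u2 * (math.sin(t1)*math.sin(t2)*math.cos(f2-f3) + math.cos(t1)*math.cos(t2)))

random.seed(0)
for _ in range(100):
    u, up, u1, u2 = (random.uniform(0.1, 3) for _ in range(4))
    t, tp, t1, t2 = (random.uniform(0, math.pi) for _ in range(4))
    f1, f2, f3 = (random.uniform(0, 2 * math.pi) for _ in range(3))
    q  = sph(u, t, 0.0)    # q taken at zero azimuth
    qp = sph(up, tp, f1)
    p1 = sph(u1, t1, f2)
    p2 = sph(u2, t2, f3)
    direct = norm2(sub(sub(add(q, qp), p1), p2))  # |q + q' - p1 - p2|^2
    assert math.isclose(direct,
                        w1234_sq(u, t, up, tp, u1, t1, u2, t2, f1, f2, f3),
                        rel_tol=1e-9, abs_tol=1e-9)
```

The same pattern of comparison applies to the other \(w\) variables defined in this section.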
Expanding the correlation function and using the appearing delta functions to eliminate redundant integrals, we can obtain: \[\Omega_{\rm GW}^{F^{2}G^{2}(6)}(k)= \frac{F_{\rm NL}^{2}G_{\rm NL}^{2}k^{3}}{6\pi^{2}}\int\frac{{\rm d}^ {3}q{\rm d}^{3}q^{\prime}}{(2\pi)^{6}}\cos 2(\phi-\phi^{\prime})I\left(u,v\right)I \left(u^{\prime},v^{\prime}\right)\int\frac{{\rm d}^{3}p_{1}{\rm d}^{3}p_{2}}{(2 \pi)^{6}} \tag{118}\] \[\times 2P_{g}\left(p_{1}\right)P_{g}\left(p_{2}\right)P_{g}\left( \left|\mathbf{q-p_{1}}\right|\right)P_{g}\left(\left|\mathbf{k-q-p_{2}}\right|\right)P_ {g}\left(\left|\mathbf{q-q^{\prime}}\right|\right)\times 72\] \[= \frac{3F_{\rm NL}^{2}G_{\rm NL}^{2}}{8\pi^{3}}\int_{0}^{\infty}{ \rm d}u\int_{\left|1-u\right|}^{1+u}{\rm d}v\int_{0}^{\infty}{\rm d}u^{\prime }\int_{\left|1-u^{\prime}\right|}^{1+u^{\prime}}{\rm d}v^{\prime}\int_{0}^{ \infty}{\rm d}u_{1}\int_{\left|1-u_{1}\right|}^{1+u_{1}}{\rm d}v_{1}\int_{0}^ {\infty}{\rm d}u_{2}\int_{\left|1-u_{2}\right|}^{1+u_{2}}{\rm d}v_{2}\] \[\times\int_{0}^{2\pi}{\rm d}\varphi_{1}\int_{0}^{2\pi}{\rm d} \varphi_{2}\int_{0}^{2\pi}{\rm d}\varphi_{3}\cos 2\varphi_{1}I(u,v)I(u^{ \prime},v^{\prime})\frac{uvu^{\prime}v^{\prime}u_{1}v_{1}u_{2}v_{2}}{(u_{1}u_{2 }w_{13}w_{014}w_{12})^{3}}{\cal P}_{g}\left(u_{1}k\right){\cal P}_{g}\left(u_{ 2}k\right){\cal P}_{g}\left(w_{13}k\right)\] \[\times{\cal P}_{g}\left(w_{014}k\right){\cal P}_{g}\left(w_{12}k \right),\] where \(w_{014}\) is defined as follows: \[w_{014}^{2}= \frac{\left|\mathbf{k-q-p_{2}}\right|^{2}}{k^{2}}=1+u^{2}+u_{2}^{2}-2u \cos\theta-2u_{2}\cos\theta_{2}+2uu_{2}\left[\sin\theta\sin\theta_{2}\cos \varphi_{3}+\cos\theta\cos\theta_{2}\right]. 
\tag{119}\] One example of the contraction of the '\(F^{2}G^{2}(7)\)' term is shown as follows: \[\left\langle\contraction{\zeta_{g}(\mathbf{p_{1}})\zeta_{g}(\mathbf{q-p_{1}})\zeta_{g}(\mathbf{ p_{2}})\zeta_{g}(\mathbf{k-q-p_{2}})\zeta_{g}(\mathbf{p_{3}})\zeta_{g}(\mathbf{p_{4}})\zeta_{g}( \mathbf{q^{\prime}-p_{3}-p_{4}})\zeta_{g}(\mathbf{p_{5}})\zeta_{g}(\mathbf{p_{6}})\zeta_{g }(\mathbf{k^{\prime}-q^{\prime}-p_{5}-p_{6}})}{}\right\rangle. \tag{120}\] and the symmetry factor in this case is 144. Expanding the correlation function and using the appearing delta functions to eliminate redundant integrals, we can obtain: \[\Omega_{\rm GW}^{F^{2}G^{2}(7)}(k)= \frac{F_{\rm NL}^{2}G_{\rm NL}^{2}k^{3}}{6\pi^{2}}\int\frac{{\rm d }^{3}q{\rm d}^{3}q^{\prime}}{(2\pi)^{6}}\cos 2(\phi-\phi^{\prime})I\left(u,v\right)I \left(u^{\prime},v^{\prime}\right)\int\frac{{\rm d}^{3}p_{1}{\rm d}^{3}p_{2}}{ (2\pi)^{6}} \tag{121}\] \[\times 2P_{g}\left(p_{1}\right)P_{g}\left(p_{2}\right)P_{g}\left( \left|\mathbf{q-p_{1}}\right|\right)P_{g}\left(\left|\mathbf{k-q-p_{2}}\right|\right)P_ {g}\left(\left|\mathbf{q^{\prime}-p_{1}-p_{2}}\right|\right)\times 144\] \[= \frac{3F_{\rm NL}^{2}G_{\rm NL}^{2}}{4\pi^{3}}\int_{0}^{\infty}{ \rm d}u\int_{\left|1-u\right|}^{1+u}{\rm d}v\int_{0}^{\infty}{\rm d}u^{\prime }\int_{\left|1-u^{\prime}\right|}^{1+u^{\prime}}{\rm d}v^{\prime}\int_{0}^{ \infty}{\rm d}u_{1}\int_{\left|1-u_{1}\right|}^{1+u_{1}}{\rm d}v_{1}\int_{0}^{ \infty}{\rm d}u_{2}\int_{\left|1-u_{2}\right|}^{1+u_{2}}{\rm d}v_{2}\] \[\times\int_{0}^{2\pi}{\rm d}\varphi_{1}\int_{0}^{2\pi}{\rm d} \varphi_{2}\int_{0}^{2\pi}{\rm d}\varphi_{3}\cos 2\varphi_{1}I(u,v)I(u^{ \prime},v^{\prime})\frac{uvu^{\prime}v^{\prime}u_{1}v_{1}u_{2}v_{2}}{(u_{1}u_{2 }w_{13}w_{014}w_{234})^{3}}{\cal P}_{g}\left(u_{1}k\right){\cal P}_{g}\left(u _{2}k\right){\cal P}_{g}\left(w_{13}k\right)\] \[\times{\cal P}_{g}\left(w_{014}k\right){\cal P}_{g}\left(w_{234}k \right),\] where \(w_{234}\) is defined as follows: \[w_{234}^{2}= 
\frac{\left|\mathbf{q^{\prime}-p_{1}-p_{2}}\right|^{2}}{k^{2}} \tag{122}\] \[= u^{\prime 2}+u_{1}^{2}+u_{2}^{2}-2u^{\prime}u_{1}\left[\sin\theta^{ \prime}\sin\theta_{1}\cos(\varphi_{1}-\varphi_{2})+\cos\theta^{\prime}\cos\theta_ {1}\right]-2u^{\prime}u_{2}\left[\sin\theta^{\prime}\sin\theta_{2}\cos(\varphi_{1} -\varphi_{3})+\cos\theta^{\prime}\cos\theta_{2}\right]\] \[+2u_{1}u_{2}\left[\sin\theta_{1}\sin\theta_{2}\cos(\varphi_{2}- \varphi_{3})+\cos\theta_{1}\cos\theta_{2}\right].\] One example of the contraction of the '\(F^{2}G^{2}(8)\)' term is shown as follows: \[\left\langle\contraction{\zeta_{g}(\mathbf{p_{1}})\zeta_{g}(\mathbf{q-p_{1}})\zeta_{g}(\mathbf{p_{2 }})\zeta_{g}(\mathbf{p_{3}})\zeta_{g}(\mathbf{k-q-p_{2}-p_{3}})\zeta_{g}(\mathbf{p_{4}}) \zeta_{g}(\mathbf{q^{\prime}-p_{4}})\zeta_{g}(\mathbf{p_{5}})\zeta_{g}(\mathbf{p_{6}}) \zeta_{g}(\mathbf{k^{\prime}-q^{\prime}-p_{5}-p_{6}})}{}\right\rangle. \tag{123}\] and the symmetry factor in this case is 144. Expanding the correlation function and using the appearing delta functions to eliminate redundant integrals, we can obtain: \[\Omega_{\rm GW}^{F^{2}G^{2}(8)}(k)= \frac{F_{\rm NL}^{2}G_{\rm NL}^{2}k^{3}}{6\pi^{2}}\int\frac{{\rm d }^{3}q{\rm d}^{3}q^{\prime}}{(2\pi)^{6}}\cos 2(\phi-\phi^{\prime})I\left(u,v\right)I \left(u^{\prime},v^{\prime}\right)\int\frac{{\rm d}^{3}p_{1}{\rm d}^{3}p_{2}}{(2 \pi)^{6}}\] (124) \[\times 4P_{g}\left(p_{1}\right)P_{g}\left(p_{2}\right)P_{g}\left( \left|\mathbf{q-p_{1}}\right|\right)P_{g}\left(\left|\mathbf{k-p_{1}-p_{2}}\right| \right)P_{g}\left(\left|\mathbf{k+q^{\prime}-p_{1}-p_{2}}\right|\right)\times 144\] \[= \frac{3F_{\rm NL}^{2}G_{\rm NL}^{2}}{2\pi^{3}}\int_{0}^{\infty}{ \rm d}u\int_{\left|1-u\right|}^{1+u}{\rm d}v\int_{0}^{\infty}{\rm d}u^{\prime }\int_{\left|1-u^{\prime}\right|}^{1+u^{\prime}}{\rm d}v^{\prime}\int_{0}^{ \infty}{\rm d}u_{1}\int_{\left|1-u_{1}\right|}^{1+u_{1}}{\rm d}v_{1}\int_ where \(w_{13}\) is defined in Eq. 
(45) and \(w_{034}\) and \(w_{0234}\) are defined as follows: \[w_{034}^{2} = \frac{|\mathbf{k}-\mathbf{p_{1}}-\mathbf{p_{2}}|^{2}}{k^{2}} \tag{125}\] \[= 1+u_{1}^{2}+u_{2}^{2}-2u_{1}\cos\theta_{1}-2u_{2}\cos\theta_{2}+2u_{1}u_{2}\left[\sin\theta_{1}\sin\theta_{2}\cos(\varphi_{2}-\varphi_{3})+\cos\theta_{1}\cos\theta_{2}\right],\] \[w_{0234}^{2} = \frac{|\mathbf{k}+\mathbf{q^{\prime}}-\mathbf{p_{1}}-\mathbf{p_{2}}|^{2}}{k^{2}} \tag{126}\] \[= 1+u^{\prime 2}+u_{1}^{2}+u_{2}^{2}+2u^{\prime}\cos\theta^{\prime}-2u_{1}\cos\theta_{1}-2u_{2}\cos\theta_{2}-2u^{\prime}u_{1}\left[\sin\theta^{\prime}\sin\theta_{1}\cos(\varphi_{1}-\varphi_{2})+\cos\theta^{\prime}\cos\theta_{1}\right]\] \[-2u^{\prime}u_{2}\left[\sin\theta^{\prime}\sin\theta_{2}\cos(\varphi_{1}-\varphi_{3})+\cos\theta^{\prime}\cos\theta_{2}\right]+2u_{1}u_{2}\left[\sin\theta_{1}\sin\theta_{2}\cos(\varphi_{2}-\varphi_{3})+\cos\theta_{1}\cos\theta_{2}\right].\]

## IV Log-dependent behavior in the infrared region

In this section, we will demonstrate that all the non-Gaussian diagrams have a similar scaling in the infrared region, characterized by the following logarithmic dependence: \[\Omega_{\rm GW}\propto\left(\frac{k}{k_{\star}}\right)^{3}\ln^{2}\left(\frac{4k_{\star}^{2}}{3k^{2}}\right), \tag{127}\] where \(k_{\star}\) is a reference scale, which we will discuss below, and the slope index is given by: \[n_{\rm GW}\equiv\frac{{\rm d}\ln\Omega_{\rm GW}}{{\rm d}\ln k}=3-\frac{4}{\ln\frac{4k_{\star}^{2}}{3k^{2}}}. \tag{128}\] This logarithmic scaling law was initially investigated in [75] for the Gaussian case, where the authors considered a generic power spectrum with a peak at \(k_{\star}\). More recently, in [114], the authors also identified logarithmic scaling for \(F_{\rm NL}^{2}\) terms and \(F_{\rm NL}^{4}\) terms. In this study, we provide a proof for the "tri" term as an example using the methodology outlined in [75].
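The slope index in Eq. (128) follows from Eq. (127) by differentiation: writing \(L(k)=\ln(4k_{\star}^{2}/3k^{2})\), we have \({\rm d}L/{\rm d}\ln k=-2\), so \({\rm d}\ln\Omega_{\rm GW}/{\rm d}\ln k=3-4/L\). A minimal numerical cross-check via central finite differences (a sketch; the value of \(k_{\star}\) and the sample points are arbitrary):

```python
import math

k_star = 1.0

def omega_ir(k):
    """Infrared scaling of Eq. (127), up to an overall amplitude."""
    return (k / k_star) ** 3 * math.log(4 * k_star**2 / (3 * k**2)) ** 2

def slope_numeric(k, eps=1e-6):
    """Central finite difference of d ln(Omega) / d ln(k)."""
    up, down = k * math.exp(eps), k * math.exp(-eps)
    return (math.log(omega_ir(up)) - math.log(omega_ir(down))) / (2 * eps)

def slope_analytic(k):
    """Eq. (128): n_GW = 3 - 4 / ln(4 k_star^2 / 3 k^2)."""
    return 3 - 4 / math.log(4 * k_star**2 / (3 * k**2))

# Deep in the infrared the two agree to high accuracy.
for k in (1e-4, 1e-3, 1e-2):
    assert abs(slope_numeric(k) - slope_analytic(k)) < 1e-5
```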
First of all, we rewrite the "tri" term as follows: \[\Omega_{\rm GW}^{tri}(k)= \frac{G_{\rm NL}^{2}}{4\pi^{2}}\int_{0}^{\infty}{\rm d}u\int_{|1-u|}^{1+u}\!{\rm d}v\int_{0}^{\infty}{\rm d}u_{1}\int_{|1-u_{1}|}^{1+u_{1}}\!{\rm d}v_{1}\int_{0}^{\infty}{\rm d}u_{2}\int_{|1-u_{2}|}^{1+u_{2}}\!{\rm d}v_{2}\int_{0}^{2\pi}{\rm d}\varphi_{2}\int_{0}^{2\pi}{\rm d}\varphi_{3} \tag{129}\] \[\times I^{2}(u,v)\frac{uvu_{1}v_{1}u_{2}v_{2}}{(u_{1}u_{2}w_{0134}v)^{3}}\mathcal{P}_{g}\left(u_{1}k\right)\mathcal{P}_{g}\left(u_{2}k\right)\mathcal{P}_{g}\left(w_{0134}k\right)\mathcal{P}_{g}\left(uk\right).\] To effectively analyze the scaling, we consider a generic power spectrum with a peak at \(k_{\star}\) and introduce two parameters, \(k_{-}\) and \(k_{+}\), in such a way that the power spectrum is mainly distributed in \(k\in[k_{-},k_{+}]\) and we neglect the portion beyond this range. Since the integral involves terms of the form \(\mathcal{P}_{g}(u_{1}k)\mathcal{P}_{g}(u_{2}k)\mathcal{P}_{g}(uk)\), it follows that \(u_{1}k\), \(u_{2}k\), and \(uk\) are constrained within the range \([k_{-},k_{+}]\). Consequently, this imposes lower and upper limits on the variables \(u_{1}\), \(u_{2}\), and \(u\), namely \[\Omega_{\rm GW}^{tri}(k)= \frac{G_{\rm NL}^{2}}{4\pi^{2}}\int_{k_{-}/k}^{k_{+}/k}{\rm d}u\int_{|1-u|}^{1+u}\!{\rm d}v\int_{k_{-}/k}^{k_{+}/k}{\rm d}u_{1}\int_{|1-u_{1}|}^{1+u_{1}}\!{\rm d}v_{1}\int_{k_{-}/k}^{k_{+}/k}{\rm d}u_{2}\int_{|1-u_{2}|}^{1+u_{2}}\!{\rm d}v_{2}\int_{0}^{2\pi}{\rm d}\varphi_{2}\int_{0}^{2\pi}{\rm d}\varphi_{3} \tag{130}\] \[\times I^{2}(u,v)\frac{uvu_{1}v_{1}u_{2}v_{2}}{(u_{1}u_{2}w_{0134}v)^{3}}\mathcal{P}_{g}\left(u_{1}k\right)\mathcal{P}_{g}\left(u_{2}k\right)\mathcal{P}_{g}\left(w_{0134}k\right)\mathcal{P}_{g}\left(uk\right).\] Since we are interested in the infrared region where \(k\ll k_{\star}\), it follows that \(u\), \(u_{1}\), and \(u_{2}\) are much greater than 1.
Consequently, we can simplify the above equation employing the first mean value theorem for definite integrals: \[\Omega_{\rm GW}^{tri}(k)= \frac{2G_{\rm NL}^{2}}{\pi^{2}}\left(\frac{k_{+}-k_{-}}{k}\right)^{3}I^{2}(u^{\star},v^{\star})\frac{u^{\star}v^{\star}u_{1}^{\star}v_{1}^{\star}u_{2}^{\star}v_{2}^{\star}}{(u_{1}^{\star}u_{2}^{\star}w_{0134}^{\star}v^{\star})^{3}}\mathcal{P}_{g}\left(u_{1}^{\star}k\right)\mathcal{P}_{g}\left(u_{2}^{\star}k\right)\mathcal{P}_{g}\left(w_{0134}^{\star}k\right)\mathcal{P}_{g}\left(u^{\star}k\right), \tag{131}\] where \(u^{\star},u_{1}^{\star},u_{2}^{\star}\in[k_{-}/k,k_{+}/k]\) and \(v^{\star},v_{1}^{\star},v_{2}^{\star}\) are in the ranges \([u^{\star}-1,u^{\star}+1]\), \([u_{1}^{\star}-1,u_{1}^{\star}+1]\) and \([u_{2}^{\star}-1,u_{2}^{\star}+1]\), respectively. \(w_{0134}^{\star}\) is defined by replacing \(u,v,u_{1},v_{1},u_{2},v_{2},\varphi_{2},\varphi_{3}\) in Eq. (83) with \(u^{\star},v^{\star},u_{1}^{\star},v_{1}^{\star},u_{2}^{\star},v_{2}^{\star},\varphi_{2}^{\star},\varphi_{3}^{\star}\), where \(\varphi_{2}^{\star},\varphi_{3}^{\star}\in[0,2\pi]\). By expanding \(u^{\star},u_{1}^{\star},u_{2}^{\star}\) at \(k_{\star}/k\) to leading order, where \(k_{\star}\in[k_{-},k_{+}]\) is a reference scale, we obtain \[\Omega_{\rm GW}^{tri}(k)\propto\left(\frac{k}{k_{\star}}\right)^{3}I^{2}\left(\frac{k_{\star}}{k},\frac{k_{\star}}{k}\right). \tag{132}\] Using the following asymptotic behavior for \(u\gg 1\): \[I^{2}(u,u)\simeq\frac{9}{4}\ln^{2}\left(\frac{4u^{2}}{3}\right), \tag{133}\] we finally get Eq. (127). The scalings of all the non-Gaussian diagrams in the infrared region can be obtained in a similar way. It has been argued in [75] that this log-dependent scaling could be a smoking gun for SIGW. However, the mean value theorem does not give the exact value of the reference scale \(k_{\star}\), and one should treat \(k_{\star}\) as a free parameter in GW data analysis.
Moreover, the value of \(k_{\star}\) differs for different power spectra and different non-Gaussian diagrams. Next, we will show that the scaling of the total energy spectrum also follows Eq. (127). First of all, we write down the total energy spectrum in the infrared region in a generic form as follows: \[\Omega_{\rm GW}(k)=\sum_{i}A_{i}\left(\frac{k}{k_{\star i}}\right)^{3}\ln^{2}\left(\frac{4k_{\star i}^{2}}{3k^{2}}\right), \tag{134}\] where \(A_{i}\) denotes the amplitude of the \(i\)-th non-Gaussian energy spectrum and \(k_{\star i}\) is the reference scale obtained using the mean value theorem for the \(i\)-th non-Gaussian energy spectrum. The above equation can be re-written as \[\Omega_{\rm GW}(k)=k^{3}\sum_{i}\frac{A_{i}}{k_{\star i}^{3}}\left(c_{i}^{2}+2c_{i}\ln\left(\frac{4k_{\star}^{2}}{3k^{2}}\right)+\ln^{2}\left(\frac{4k_{\star}^{2}}{3k^{2}}\right)\right), \tag{135}\] where we introduce \(c_{i}\equiv\ln\left(\frac{k_{\star i}^{2}}{k_{\star}^{2}}\right)\) and \(k_{\star}\) is a reference scale. Note that \(k_{\star i}\) is obtained using the mean value theorem and lies in the range \([k_{-},k_{+}]\); we choose \(k_{\star}\) to lie in the same range. Then we have \[\frac{k_{-}}{k_{+}}\lesssim\frac{k_{\star}}{k_{\star i}}\lesssim\frac{k_{+}}{k_{-}}. \tag{136}\] In the infrared region where \(k\ll k_{-}\), we obtain the relation \[c_{i}\ll\ln\left(\frac{4k_{\star}^{2}}{3k^{2}}\right),\quad\text{if}\ \frac{k}{k_{\star}}\ll\frac{k_{-}}{k_{+}}. \tag{137}\] This indicates that, in the infrared region where \(k/k_{\star}\ll k_{-}/k_{+}\), the \(c_{i}\) terms in Eq. (135) are negligible and we can obtain \[\Omega_{\rm GW}(k)\simeq k^{3}\ln^{2}\left(\frac{4k_{\star}^{2}}{3k^{2}}\right)\sum_{i}\frac{A_{i}}{k_{\star i}^{3}}. \tag{138}\] This finally leads us to Eq. (127), and the slope index is still given by Eq. (128).
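The collapse of Eq. (135) onto Eq. (138) can be illustrated numerically: summing components with different reference scales \(k_{\star i}\) and comparing against the single-log form, the ratio tends to unity as \(k/k_{\star}\) drops below \(k_{-}/k_{+}\). A sketch with two illustrative components (the amplitudes and scales below are placeholders, not from the paper):

```python
import math

# Two illustrative components: (amplitude A_i, reference scale k_star_i)
components = [(1.0, 0.8), (0.5, 1.3)]
k_ref = 1.0  # common reference scale k_star, also within [k_-, k_+]

def omega_total(k):
    """Eq. (134): sum of per-diagram infrared log laws."""
    return sum(a * (k / ks) ** 3 * math.log(4 * ks**2 / (3 * k**2)) ** 2
               for a, ks in components)

def omega_collapsed(k):
    """Eq. (138): a single log law with the amplitudes summed."""
    coeff = sum(a / ks**3 for a, ks in components)
    return k**3 * math.log(4 * k_ref**2 / (3 * k**2)) ** 2 * coeff

# The ratio approaches 1 as k -> 0 (i.e. k/k_ref << k_-/k_+).
ratios = [omega_total(k) / omega_collapsed(k) for k in (1e-2, 1e-4, 1e-8)]
```

The convergence is slow (the neglected \(c_{i}\) terms fall off only as \(1/\ln k\)), which is why the condition \(k/k_{\star}\ll k_{-}/k_{+}\) matters for wide spectra.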
Note that the region where \(k/k_{\star}\ll k_{-}/k_{+}\) depends on the width of the power spectrum through \(k_{-}/k_{+}\). For a narrow spectrum, \(k_{-}/k_{+}\simeq 1\), while for a wide spectrum, where \(k_{+}\) is several orders of magnitude larger than \(k_{-}\), Eq. (128) is a good approximation only for sufficiently small \(k\). Our results show that the SIGWs from PBH formation will also exhibit a log-dependent scaling in the infrared region, regardless of the specific shape of the power spectrum, even in the non-Gaussian case. This log-dependent scaling comes from the oscillating behavior of the scalar perturbations during RD, and could be a smoking gun for detecting SIGW from PBHs. To gain deeper insight into the influence of non-Gaussianities on the energy spectrum of SIGWs, we consider a log-normal spectrum, which is widely used when studying the SIGW spectrum (e.g., [66; 112; 115; 116; 117]), namely \[\mathcal{P}_{g}(k)=\frac{A}{\sqrt{2\pi\sigma_{*}^{2}}}\exp\left(-\frac{\ln^{2}\left(k/k_{\star}\right)}{2\sigma_{*}^{2}}\right), \tag{139}\] where the dimensionless parameter \(\sigma_{*}\) is related to the width of the spectrum, and we normalize the power spectrum in such a way that \(\int\mathcal{P}_{g}(k)\,{\rm d}\ln k=A\). Given that analytical results for the non-Gaussian diagrams are challenging to obtain, we use the Cuba.jl package [118; 119] to present numerical results. Fig. 1 illustrates the energy spectrum of SIGW for each non-Gaussian term. As anticipated, the spectra exhibit peaks centered around \(k_{\star}\). Furthermore, the spectra decrease sharply for \(k/k_{*}\gtrsim 2\), with the non-Gaussian terms displaying a larger drop-off wavelength than the Gaussian term due to momentum conservation. Moreover, the non-Gaussian energy spectra exhibit the log-dependent scaling described by Eq. (127) in the infrared region. In Fig.
2, we present the total energy spectrum of SIGW by today for some representative values of \(F_{\rm NL}\), \(G_{\rm NL}\) and \(A\). As shown in Fig. 2, the non-Gaussian corrections can change both the amplitude and the shape of the energy spectrum. The shape of the energy spectrum is modified mainly around \(k_{*}\), but in the infrared region it exhibits a log-dependent scaling given by Eq. (128).

Figure 1: The unscaled (setting \(A=1\), \(F_{\rm NL}=1\) and \(G_{\rm NL}=1\)) energy spectrum of SIGW generated by a log-normal power spectrum described by Eq. (139) with \(\sigma_{*}=0.2\).

Figure 2: The total energy spectrum of SIGW generated by a log-normal power spectrum described by Eq. (139) with \(\sigma_{*}=0.2\). The brown line in each panel denotes the power-law sensitivity curve of LISA, assuming a 4 year detection time. We set \(A=10^{-3}\) for both panels. Left panel: The energy spectrum in the absence of \(G_{\rm NL}\). Right panel: The energy spectrum in the absence of \(F_{\rm NL}\).

## V Conclusion

In this paper, we study the full impact of local-type non-Gaussianities up to \(G_{\rm NL}\) order on SIGWs and derive semi-analytical results for an arbitrary primordial power spectrum. All the non-Gaussian contributions to the energy spectrum of SIGW exhibit a log-dependent scaling in the infrared region. This log-dependent scaling distinguishes SIGWs from other GW energy spectra generated by currently known physical processes, making it a smoking gun for detecting the SIGW. Recently, the NANOGrav collaboration searched their data for SIGWs generated by various power spectra and claimed that the NANOGrav 15-year data is well fit by the low-frequency tail of the SIGW spectrum [103], indicating the significance of the log-dependent scaling in searching for SIGW signals. _Acknowledgments._ D-S.M. would like to thank Guang-shang Chen for his enthusiastic help with computers and programming. 
The work is supported by the National Key Research and Development Program of China Grant No.2020YFC2201502, grants from NSFC (grant No. 11975019, 11991052, 12047503), Key Research Program of Frontier Sciences, CAS, Grant NO. ZDBS-LY-7009, CAS Project for Young Scientists in Basic Research YSBR-006, the Key Research Program of the Chinese Academy of Sciences (Grant NO. XDPB15). We acknowledge the use of HPC Cluster of ITP-CAS. C.Y. acknowledges financial support provided under the European Union's H2020 ERC Advanced Grant "Black holes: gravitational engines of discovery" grant agreement no. Gravitas-101052587. Views and opinions expressed are however those of the author only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 101007855.
2308.08266
A recurrence relation for generalised connection coefficients
We formulate and prove a general recurrence relation that applies to integrals involving orthogonal polynomials and similar functions. A special case are connection coefficients between two sets of orthonormal polynomials, another example is integrals of products of Legendre functions.
Jing Gao, Arieh Iserles
2023-08-16T10:05:17Z
http://arxiv.org/abs/2308.08266v1
# A recurrence relation for generalised connection coefficients ###### Abstract We formulate and prove a general recurrence relation that applies to integrals involving orthogonal polynomials and similar functions. A special case is connection coefficients between two sets of orthonormal polynomials; another example is integrals of products of Legendre functions. ###### Contents * 1 A universal recurrence for connection coefficients * 2 Few examples * 2.1 A pair of Legendre weights * 2.2 Connection coefficients * 2.3 Matrices \(\mathscr{D}\) that occur in the analysis of orthonormal systems * 3 Associated Legendre functions * 4 Conclusions ## 1 A universal recurrence for connection coefficients Let two Borel measures, \(\,\mathrm{d}\mu\) and \(\,\mathrm{d}\nu\), be given. We denote by \(\mathscr{P}=\{p_{n}\}_{n\in\mathbb{Z}_{+}}\) and \(\mathscr{Q}=\{q_{n}\}_{n\in\mathbb{Z}_{+}}\) respectively the orthonormal polynomials with respect to \(\,\mathrm{d}\mu\) and \(\,\mathrm{d}\nu\). It is elementary that both \(\mathscr{P}\) and \(\mathscr{Q}\) obey three-term recurrence relations of the form \[b_{n}p_{n+1}(x) =(x-a_{n})p_{n}(x)-b_{n-1}p_{n-1}(x),\qquad n\in\mathbb{Z}_{+}, \tag{1.1}\] \[d_{n}q_{n+1}(x) =(x-c_{n})q_{n}(x)-d_{n-1}q_{n-1}(x), \tag{1.2}\] (Chihara 1978), where \(b_{-1}=d_{-1}=0\) and \(b_{n},d_{n}>0\) for \(n\in\mathbb{Z}_{+}\). The purpose of this short paper is to investigate the elements of the infinite matrix \(\mathscr{D}\), where \[\mathscr{D}_{m,n}=\int_{-\infty}^{\infty}p_{m}(x)q_{n}(x)\,\mathrm{d}\eta(x), \qquad m,n\in\mathbb{Z}_{+}, \tag{1.3}\] where \(\,{\rm d}\eta\) is yet another, possibly signed, Borel measure - note that \(\,{\rm d}\mu,\,\,{\rm d}\nu\) and \(\,{\rm d}\eta\) need not be distinct. Indeed, the case \(\,{\rm d}\nu=\,{\rm d}\eta\) is classical and corresponds to _connection coefficients_ (Ismail 2005, p. 253), expressing one set of orthonormal polynomials in terms of another. 
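To make (1.1) concrete: for the Legendre weight \(\,\mathrm{d}\mu(x)=\chi_{(-1,1)}(x)\,\mathrm{d}x\) the recurrence coefficients are \(a_{n}=0\) and \(b_{n}=(n+1)/\sqrt{(2n+1)(2n+3)}\) (they reappear in Section 2.1). The sketch below (Python with NumPy, as an illustration) generates the orthonormal \(p_{n}\) directly from the recurrence and verifies orthonormality by Gauss-Legendre quadrature:

```python
import numpy as np

def b(n):
    # recurrence coefficients of the orthonormal Legendre polynomials
    return (n + 1)/np.sqrt((2*n + 1)*(2*n + 3))

def orthonormal_legendre(nmax, x):
    # generate p_0, ..., p_nmax from the three-term recurrence (1.1), a_n = 0
    p = [np.full_like(x, 1/np.sqrt(2.0))]     # p_0 = 1/sqrt(2)
    p.append(x*p[0]/b(0))                     # b_{-1} = 0
    for n in range(1, nmax):
        p.append((x*p[n] - b(n - 1)*p[n - 1])/b(n))
    return p

x, w = np.polynomial.legendre.leggauss(20)    # exact for polynomial degree <= 39
p = orthonormal_legendre(6, x)
gram = np.array([[w @ (pi*pj) for pj in p] for pi in p])
assert np.allclose(gram, np.eye(7), atol=1e-13)
```

The Gram matrix of the generated polynomials is the identity to machine precision, confirming the normalization used throughout.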
Expressions of the form (1.3) feature in (Iserles 2023) and in forthcoming publications of the current authors. It is our contention in this paper that they all obey a very helpful recurrence relation. While both the recurrence and its proof are elementary, they are (to the best of our knowledge) new and they have significant ramifications, some of which are described in the sequel. **Lemma 1**: _It is true that_ \[b_{m-1}{\mathscr{D}}_{m-1,n}+a_{m}{\mathscr{D}}_{m,n}+b_{m}{\mathscr{D}}_{m+1, n}=d_{n-1}{\mathscr{D}}_{m,n-1}+c_{n}{\mathscr{D}}_{m,n}+d_{n}{\mathscr{D}}_{m,n+1} \tag{1.4}\] _for all \(m,n\in{\mathbb{Z}}_{+}\)._ _Proof_ It follows from (1.1) and \(b_{m}>0\) that \[p_{m+1}(x)=\frac{1}{b_{m}}[(x-a_{m})p_{m}(x)-b_{m-1}p_{m-1}(x)]\] and substitution in (1.3) results in \[{\mathscr{D}}_{m+1,n} =\frac{1}{b_{m}}\int_{-\infty}^{\infty}[(x-a_{m})p_{m}(x)-b_{m-1} p_{m-1}(x)]q_{n}(x)\,{\rm d}\eta(x)\] \[=\frac{1}{b_{m}}\int_{-\infty}^{\infty}xp_{m}(x)q_{n}(x)\,{\rm d} \eta(x)-\frac{a_{m}}{b_{m}}{\mathscr{D}}_{m,n}-\frac{b_{m-1}}{b_{m}}{\mathscr{D }}_{m-1,n}.\] However, it is a consequence of (1.2) that \[xq_{n}(x)=d_{n}q_{n+1}(x)+c_{n}q_{n}(x)+d_{n-1}q_{n-1}(x)\] and substitution within the integral yields (1.4) after elementary algebra. \(\Box\) Let us ponder briefly the meaning of (1.4). It is essentially a 'cross rule', relating the five entries of \(\mathscr{D}\) in a plus-shaped stencil, \[\begin{array}{c|ccc} & n-1 & n & n+1\\ \hline m-1 & & \bullet & \\ m & \bullet & \bullet & \bullet\\ m+1 & & \bullet & \end{array}\] Once the boundary conditions \({\mathscr{D}}_{m,0}\) and \({\mathscr{D}}_{0,n}\) are provided and because \({\mathscr{D}}_{m,-1}={\mathscr{D}}_{-1,n}=0\),1 they allow us in tandem with (1.4) to fill in the remainder of \({\mathscr{D}}\) in a recursive manner. Footnote 1: Since \(p_{-1},q_{-1}\equiv 0\). Our narrative is focussed on orthonormal polynomials but it can be easily generalised to other normalisations, e.g. 
to monic orthogonal polynomials. In Section 3 we apply similar construction to Legendre functions, which in general are not polynomial. ## 2 Few examples ### A pair of Legendre weights The trivial case \(\,\mathrm{d}\mu=\,\mathrm{d}\nu=\,\mathrm{d}\eta\) results in a unit matrix, as it of course should. As a somewhat less elementary example let us consider \(\,\mathrm{d}\mu(x)=\,\mathrm{d}\nu(x)=\chi_{(-1,1)}(x)\,\mathrm{d}x\) and \(\,\mathrm{d}\eta(x)=\chi_{(-1,1)}(x)x^{2}\,\mathrm{d}x\). Thus, \(p_{n}=q_{n}=\sqrt{n+\frac{1}{2}}\mathrm{P}_{n}\), normalised standard Legendre polynomials, and one can prove easily from the standard three-term recurrence for Legendre polynomials that \[a_{n},c_{n}\equiv 0,\qquad b_{n}=d_{n}=\frac{n+1}{\sqrt{(2n+1)(2n+3)}}.\] Therefore it follows from (1.4) that \[\frac{m}{\sqrt{(2m-1)(2m+1)}}\mathcal{D}_{m-1,n}+\frac{m+1}{\sqrt {(2m+1)(2m+3)}}\mathcal{D}_{m+1,n}\] \[=\frac{n}{\sqrt{(2n-1)(2n+1)}}\mathcal{D}_{m,n-1}+\frac{n+1}{ \sqrt{(2n+1)(2n+3)}}\mathcal{D}_{m,n+1}.\] Finally, \(\mathcal{D}\) is symmetric, \(\mathcal{D}_{0,n}=0\) for \(n\geq 3\) (because \(x^{2}p_{0}\) is then orthogonal to \(p_{n}\)) and \[\mathcal{D}_{0,0}=\frac{1}{3},\qquad\mathcal{D}_{0,1}=0,\qquad\mathcal{D}_{0, 2}=\frac{2\sqrt{5}}{15}.\] In general, \(\mathcal{D}_{m,n}=0\) for \(m+n\) odd and for \(|m-n|\geq 3\), while \[\mathcal{D}_{n,n}=\frac{2n^{2}+2n-1}{(2n-1)(2n+3)},\qquad\mathcal{D}_{n+2,n}= \mathcal{D}_{n,n+2}=\frac{(n+1)(n+2)}{(2n+3)\sqrt{(2n+1)(2n+5)}}.\] This can be verified at once from the above recurrence relation. ### Connection coefficients \(\mathcal{Q}\) is a basis of the space of polynomials, hence each \(p_{m}\) can be expanded as a linear combination of \(q_{0},q_{1},\ldots,q_{m}\): the coefficients are \(\mathcal{D}_{m,n}\) with \(\,\mathrm{d}\eta=\,\mathrm{d}\nu\). These _connection coefficients_ (Ismail 2005, p. 255-261) are of importance in the theory of orthogonal polynomials and they are known explicitly in some situations, e.g. 
\[\mathrm{P}_{m}^{(\gamma,\delta)}(x)=\sum_{n=0}^{m}\mathcal{D}_{m,n}\mathrm{P}_ {n}^{(\alpha,\beta)}(x),\qquad\alpha,\beta,\gamma,\delta>-1,\] where \[\mathcal{D}_{m,n} =\frac{(\gamma+n+1)_{m-n}(m+\gamma+\delta+1)_{n}}{(m-n)!\Gamma( \alpha+\beta+2n+1)}\Gamma(\alpha+\beta+n+1)\] \[\quad\times{}_{3}F_{2}\bigg{[}\begin{matrix}-m+n,m+n+\gamma+ \delta+1,\alpha+n+1;\\ \gamma+n+1,\alpha+\beta+2n+2;\end{matrix}1\bigg{]}\] (Ismail 2005, Thm. 9.1.1).2 Footnote 2: Jacobi polynomials, of course, are not orthonormal but the formula can be easily transformed to our setting. The matrix \(\mathcal{D}\) of connection coefficients is lower triangular, invertible and \(\mathcal{D}^{-1}\) yields the connection coefficients expressing the \(q_{m}\)s in terms of \(p_{0},p_{1},\ldots,p_{m}\). As an example (which we did not find in the literature) let us 'connect' two kinds of Laguerre polynomials: express \(\mathrm{L}_{m}^{(\alpha)}\) in terms of \(\mathrm{L}_{n}^{(\beta)}\) for \(n\leq m\), where \(\alpha,\beta>0\). We have \[\mathrm{L}_{n+1}^{(\beta)}(x)=\biggl{(}-\frac{x}{n+1}+\frac{2n+\beta+1}{n+1} \biggr{)}\mathrm{L}_{n}^{(\beta)}(x)-\frac{n+\beta}{n+1}\mathrm{L}_{n-1}^{( \beta)}(x),\] while the orthonormal Laguerre polynomials (with a positive multiple of \(x^{n}\)) are \[\tilde{\mathrm{L}}_{n}^{(\beta)}(x)=(-1)^{n}\sqrt{\frac{n!}{\Gamma(n+\beta+1) }}\mathrm{L}_{n}^{(\beta)}(x)\] and they obey the three-term recurrence relation with \[b_{n}=\sqrt{(n+1)(n+1+\beta)},\quad a_{n}=2n+1+\beta,\qquad n\in\mathbb{Z}_{+}.\] The recurrence (1.4) yields \[\sqrt{m(m+\alpha)}\mathcal{D}_{m-1,n}+(2m+1+\alpha)\mathcal{D}_{m,n}+\sqrt{(m+1)(m+1+\alpha)}\mathcal{D}_{m+1,n}\] \[=\sqrt{n(n+\beta)}\mathcal{D}_{m,n-1}+(2n+1+\beta)\mathcal{D}_{m,n }+\sqrt{(n+1)(n+1+\beta)}\mathcal{D}_{m,n+1}, \tag{2.1}\] where \[\mathcal{D}_{m,n}=\int_{0}^{\infty}\tilde{L}_{m}^{(\alpha)}(x)\tilde{L}_{n}^{ (\beta)}(x)x^{\beta}\mathrm{e}^{-x}\,\mathrm{d}x.\] This needs to be accompanied by boundary conditions. 
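Before deriving the boundary conditions, the recurrence (2.1) can be sanity-checked numerically: evaluate the integrals \(\mathcal{D}_{m,n}\) directly by Gauss-Laguerre quadrature and substitute. A sketch using SciPy (the values \(\alpha=2\), \(\beta=1\) are arbitrary test choices):

```python
import math
import numpy as np
from scipy.special import eval_genlaguerre

alpha, beta = 2.0, 1.0                        # arbitrary test parameters, both > 0
x, w = np.polynomial.laguerre.laggauss(60)    # nodes/weights for the weight e^{-x}

def Ltilde(n, a, t):
    # orthonormal Laguerre polynomials (positive leading coefficient)
    return (-1)**n*math.sqrt(math.factorial(n)/math.gamma(n + a + 1)) \
        *eval_genlaguerre(n, a, t)

def D(m, n):
    # D_{m,n} = int_0^oo Ltilde_m^{(alpha)}(x) Ltilde_n^{(beta)}(x) x^beta e^{-x} dx
    return w @ (Ltilde(m, alpha, x)*Ltilde(n, beta, x)*x**beta)

for m, n in [(1, 1), (2, 1), (3, 2), (4, 3)]:
    lhs = (math.sqrt(m*(m + alpha))*D(m - 1, n) + (2*m + 1 + alpha)*D(m, n)
           + math.sqrt((m + 1)*(m + 1 + alpha))*D(m + 1, n))
    rhs = (math.sqrt(n*(n + beta))*D(m, n - 1) + (2*n + 1 + beta)*D(m, n)
           + math.sqrt((n + 1)*(n + 1 + beta))*D(m, n + 1))
    assert abs(lhs - rhs) < 1e-10
```

For integer \(\beta\) the integrands are polynomial, so the quadrature is exact up to rounding and both sides of (2.1) agree to machine precision.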
It is clear that \(\mathcal{D}_{0,n}\equiv 0\) for \(n\in\mathbb{N}\) (recall that \(\mathcal{D}\) is lower triangular!) while, since \(\tilde{\mathrm{L}}_{0}^{(\beta)}\equiv 1/\sqrt{\Gamma(1+\beta)}\), we have \[\mathcal{D}_{m,0} =\frac{1}{\sqrt{\Gamma(1+\beta)}}\int_{0}^{\infty}\tilde{L}_{m}^{ (\alpha)}(x)x^{\beta}\mathrm{e}^{-x}\,\mathrm{d}x\] \[=(-1)^{m}\sqrt{\frac{m!}{\Gamma(m+\alpha+1)\Gamma(1+\beta)}}\int_ {0}^{\infty}\mathrm{L}_{m}^{(\alpha)}(x)x^{\beta}\mathrm{e}^{-x}\,\mathrm{d}x.\] To derive \(\mathfrak{d}_{m}=\int_{0}^{\infty}\mathrm{L}_{m}^{(\alpha)}(x)x^{\beta} \mathrm{e}^{-x}\,\mathrm{d}x\) we use the standard generating function for Laguerre polynomials (Olver, Lozier, Boisvert & Clark 2010, 18.12.13), \[\sum_{m=0}^{\infty}\mathrm{L}_{m}^{(\alpha)}(x)z^{m}=\frac{1}{(1-z)^{1+\alpha }}\exp\biggl{(}-\frac{xz}{1-z}\biggr{)},\qquad 0<z<1.\] Therefore \[\sum_{m=0}^{\infty}\mathfrak{d}_{m}z^{m}=\frac{1}{(1-z)^{1+\alpha}}\int_{0}^ {\infty}\exp\biggl{(}-\frac{x}{1-z}\biggr{)}x^{\beta}\,\mathrm{d}x=\Gamma(1+ \beta)(1-z)^{\beta-\alpha}.\] Expanding the binomial function we obtain \[\mathscr{D}_{m,0}=(-1)^{m}\sqrt{\frac{\Gamma(1+\beta)}{m!\Gamma(m+1+\alpha)}}( \alpha-\beta)_{m},\qquad m\in\mathbb{Z}_{+}.\] Substitution in (2.1) confirms that the explicit formula for Laguerre-Laguerre connection coefficients is \[\mathscr{D}_{m,n}=(-1)^{m-n}\frac{(\alpha-\beta)_{m-n}}{(m-n)!}\sqrt{\frac{m! \Gamma(n+1+\beta)}{n!\Gamma(m+1+\alpha)}},\qquad 0\leq n\leq m.\] ### Matrices \(\mathscr{D}\) that occur in the analysis of orthonormal systems The recurrence (2.1) is independent of the choice of \(\,\mathrm{d}\eta\). A particular choice of a _signed_ measure \(\,\mathrm{d}\eta\), together with the choice \(\alpha=\beta\), features in (Iserles 2023) in the analysis of certain systems orthonormal in \(\mathrm{L}_{2}(a,b)\). It has been to a large extent the original motivation to consider expressions of the form (1.3). 
Specifically, requiring \(\alpha>0\), the functions \[\varphi_{n}(x)=\sqrt{\frac{n!}{\Gamma(n+1+\alpha)}}x^{\alpha/2}\mathrm{e}^{-x /2}\mathrm{L}_{n}^{(\alpha)}(x),\qquad n\in\mathbb{Z}_{+},\] form an orthonormal set in \(\mathrm{L}_{2}[0,\infty)\) and the _differentiation matrix_ \[\mathscr{E}_{m,n}=\int_{0}^{\infty}\,\frac{\mathrm{d}\varphi_{m}(x)}{\, \mathrm{d}x}\varphi_{n}(x)\,\mathrm{d}x,\qquad m,n\in\mathbb{Z}_{+},\] is skew symmetric. The evaluation of the entries of \(\mathscr{E}\) led in (Iserles 2023) to the evaluation of expressions of the form \[\int_{0}^{\infty}\tilde{\mathrm{L}}_{m}^{(\alpha)}(x)\tilde{\mathrm{L}}_{n}^{ (\alpha)}(x)\frac{\,\mathrm{d}x^{\alpha}\mathrm{e}^{-x}}{\,\mathrm{d}x}\, \mathrm{d}x,\qquad m,n\in\mathbb{Z}_{+},\] hence (1.3) with Laguerre weights \(\,\mathrm{d}\mu(x)=\,\mathrm{d}\nu(x)=\chi_{(0,\infty)}(x)x^{\alpha}\mathrm{e} ^{-x}\,\mathrm{d}x\) and the _signed_ measure \[\,\mathrm{d}\eta(x)=\frac{\,\mathrm{d}x^{\alpha}\mathrm{e}^{-x}}{\,\mathrm{d} x}\,\mathrm{d}x=(\alpha-x)x^{\alpha-1}\,\mathrm{d}x,\qquad x>0.\] (As a matter of fact, because of orthogonality, we might replace this by \(\,\mathrm{d}\eta(x)=\alpha x^{\alpha-1}\,\mathrm{d}x\).) The recurrence (2.1) remains valid (with \(\beta=\alpha\)). The \(\mathscr{E}_{m,n}\)s have been derived explicitly in (Iserles 2023) by long algebra - instead we can verify in a straightforward manner that \[\mathscr{E}_{m,n}=-\frac{1}{2}\sqrt{\frac{m!\Gamma(n+1+\alpha)}{\Gamma(m+1+ \alpha)n!}},\qquad m\geq n+1,\] with skew-symmetric completion for \(m<n\), is consistent with (2.1) and obeys the boundary conditions for \(n=0\) and \(m=0\). In addition, \(\mathscr{E}_{m,m}=0\). A similar expression, again originating in (Iserles 2023), is \[\mathcal{G}_{m,n}=\int_{-1}^{1}(1-x^{2})^{\alpha-1}\mathrm{P}_{m}^{(\alpha, \alpha)}(x)\mathrm{P}_{n}^{(\alpha,\alpha)}(x)\,\mathrm{d}x,\qquad m,n\in\mathbb{ Z}_{+},\] where \(\alpha>0\) and \(\mathrm{P}_{n}^{(\alpha,\alpha)}\) is an ultraspherical polynomial. 
Thus, \(\,\mathrm{d}\mu(x)=\,\mathrm{d}\nu(x)=\chi_{(-1,1)}(x)(1-x^{2})^{\alpha}\, \mathrm{d}x\) and \(\,\mathrm{d}\eta(x)=\chi_{(-1,1)}(x)(1-x^{2})^{\alpha-1}\,\mathrm{d}x\). It is enough to consider the case \(m+n\) even, otherwise \(\mathcal{G}_{m,n}\) vanishes. Using exactly the same approach as for Laguerre weights above (or a much longer and more convoluted algebra in (Iserles 2023)) we can prove that \[\mathcal{G}_{m,n}=\frac{4^{\alpha}}{\alpha}\frac{\Gamma(m+1+\alpha)\Gamma(n+1 +\alpha)}{n!\Gamma(m+1+2\alpha)},\qquad m\geq n,\quad m+n\text{ even},\] with symmetric completion for \(m<n\). ## 3 Associated Legendre functions The solutions of the differential equation \[(1-x^{2})y^{\prime\prime}-2xy^{\prime}+\bigg{[}\nu(\nu+1)-\frac{\mu^{2}}{1-x^ {2}}\bigg{]}y=0,\qquad x\in(-1,1),\] where \(\mu,\nu\in\mathbb{C}\), are _Legendre functions_\(\mathrm{P}_{\nu}^{\mu}\) and _associated Legendre functions_\(\mathrm{Q}_{\nu}^{\mu}\). These functions feature in a wide range of applications, not least as a main component of _spherical harmonics._ They admit a number of confusingly different normalisations (Abramowitz & Stegun 1966, Arfken 1966, Courant & Hilbert 1953, Olver et al. 2010) yet, in this paper, being interested in \(\nu\in\mathbb{Z}_{+}\) and \(\mu\in\{0,1,\ldots,\nu\}\) and restricting our attention to the Legendre functions \(\mathrm{P}_{n}^{m}\), we use the expression \[\mathrm{P}_{n}^{m}(x)=\frac{(-n)_{m}(1+n)_{m}}{m!}\left(\frac{1-x}{1+x} \right)^{m/2}{}_{2}F_{1}\bigg{[}\begin{matrix}-n,n+1;\\ m+1;\end{matrix}\;\frac{1-x}{2}\bigg{]}, \tag{3.1}\] where \((z)_{k}=z(z+1)\cdots(z+k-1)\) is the familiar _Pochhammer symbol_ and \({}_{2}F_{1}\) is a _hypergeometric function_. Note that the \(\mathrm{P}_{n}^{m}\)s are, in general, non-polynomial. Thus, while \(\mathrm{P}_{n}^{0}=\mathrm{P}_{n}\), the standard Legendre polynomial, \(\mathrm{P}_{n}^{1}(x)=-\sqrt{1-x^{2}}\mathrm{P}_{n}^{\prime}(x)\) - cf. (Olver et al. 
2010, 14.6.1) for explicit expressions for \(\mathrm{P}_{n}^{m}\). In our forthcoming work on multivariate expansions we have encountered the problem of calculating expressions of the form \[g_{\ell,n}^{m}=g_{n,\ell}^{m}=\int_{-1}^{1}\frac{\mathrm{P}_{\ell}^{m}(x) \mathrm{P}_{n}^{m}(x)}{\sqrt{1-x^{2}}}\,\mathrm{d}x,\qquad m\leq\min\{\ell,n\}. \tag{3.2}\] Since it follows readily from (3.1) that \(\mathrm{P}_{\ell}^{m}(-x)=(-1)^{m+\ell}\mathrm{P}_{\ell}^{m}(x)\), we deduce that \(g_{\ell,n}^{m}=0\) when \(\ell+n\) is odd. Our starting point is the three-term recurrence relation \[(n-m+1)\mathrm{P}_{n+1}^{m}(x)-(2n+1)x\mathrm{P}_{n}^{m}(x)+(m+n)\mathrm{P}_{ n-1}^{m}(x)=0, \tag{3.3}\] which can be easily derived from (Olver et al. 2010, 14.10.3). We proceed similarly to Section 1. Combining (3.2) and (3.3), we have \[\int_{-1}^{1}\frac{\mathrm{P}_{\ell}^{m}(x)}{\sqrt{1-x^{2}}}\bigg{[} \frac{n-m+1}{2n+1}\mathrm{P}_{n+1}^{m}(x)+\frac{m+n}{2n+1}\mathrm{P}_{n-1}^{m}( x)\bigg{]}\,\mathrm{d}x=\int_{-1}^{1}\frac{x\mathrm{P}_{\ell}^{m}(x)\mathrm{P}_{n}^{ m}(x)}{\sqrt{1-x^{2}}}\,\mathrm{d}x\] \[=\int_{-1}^{1}\bigg{[}\frac{\ell-m+1}{2\ell+1}\mathrm{P}_{\ell+1 }^{m}(x)+\frac{m+\ell}{2\ell+1}\mathrm{P}_{\ell-1}^{m}(x)\bigg{]}\frac{\mathrm{P}_{n }^{m}(x)}{\sqrt{1-x^{2}}}\,\mathrm{d}x\] and deduce the recurrence \[\frac{n-m+1}{2n+1}g_{n+1,\ell}^{m}+\frac{m+n}{2n+1}g_{n-1,\ell}^{m}=\frac{ \ell-m+1}{2\ell+1}g_{n,\ell+1}^{m}+\frac{m+\ell}{2\ell+1}g_{n,\ell-1}^{m}, \tag{3.4}\] valid for all \(m\leq\min\{\ell,n\}\). When \(n+\ell\) is even, all the terms vanish, but not so for an odd \(n+\ell\). For completeness, the recurrence (3.4) requires boundary conditions. Given \(m\in\mathbb{Z}_{+}\), we need \(g_{n,m}^{m}=g_{m,n}^{m}\), \(n\geq m\). In addition we need \(g_{n,m-1}^{m}=g_{m-1,n}^{m}=0\), \(m\in\mathbb{N}\): this follows at once from (3.1), since \(\mathrm{P}_{m-1}^{m}\equiv 0\). 
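The recurrence (3.4) is easy to check numerically. Substituting \(x=\cos\theta\) removes the endpoint singularity in (3.2), and SciPy's `lpmv` evaluates the \(\mathrm{P}_{n}^{m}\) (its Condon-Shortley phase cancels in the product). A sketch:

```python
import math
from scipy.integrate import quad
from scipy.special import lpmv

def g(l, n, m):
    # g^m_{l,n} of (3.2) with x = cos(theta): the weight 1/sqrt(1-x^2)
    # turns into d(theta), removing the endpoint singularity
    val, _ = quad(lambda t: lpmv(m, l, math.cos(t))*lpmv(m, n, math.cos(t)),
                  0.0, math.pi)
    return val

# check (3.4) for a few admissible triples (m, n, l) with n + l odd
for m, n, l in [(1, 2, 3), (0, 3, 4), (2, 3, 4)]:
    lhs = (n - m + 1)/(2*n + 1)*g(l, n + 1, m) + (m + n)/(2*n + 1)*g(l, n - 1, m)
    rhs = (l - m + 1)/(2*l + 1)*g(l + 1, n, m) + (m + l)/(2*l + 1)*g(l - 1, n, m)
    assert abs(lhs - rhs) < 1e-7
```

For triples with \(n+\ell\) even both sides vanish identically, in line with the parity argument above.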
It is an immediate consequence of (3.1) that \[\mathrm{P}_{m}^{m}(x)=(-1)^{m}\frac{(2m)!}{2^{m}m!}(1-x^{2})^{m/2},\qquad m\in \mathbb{Z}_{+},\] therefore \[g_{m,n}^{m}=(-1)^{m}\frac{(2m)!}{2^{m}m!}\int_{-1}^{1}(1-x^{2})^{(m-1)/2} \mathrm{P}_{n}^{m}(x)\,\mathrm{d}x,\qquad n\geq m.\] Of course, our interest is restricted to even values of \(m+n\), otherwise \(g_{m,n}^{m}\) vanishes. Let \[G_{n}^{m}=\int_{-1}^{1}(1-x^{2})^{(m-1)/2}\mathrm{P}_{n}^{m}(x)\,\mathrm{d}x, \qquad H_{n}^{m}=\int_{-1}^{1}x(1-x^{2})^{(m-1)/2}\mathrm{P}_{n}^{m}(x)\, \mathrm{d}x \tag{3.5}\] for \(0\leq m\leq n\), \(m+n\) even. Thus, \(g_{m,n}^{m}=(-1)^{m}(2m)!/(2^{m}m!)\,G_{n}^{m}\). It follows from (3.3), though, that \[(n-m+1)G_{n+1}^{m}+(m+n)G_{n-1}^{m}=(2n+1)H_{n}^{m}.\] Because of (Olver et al. 2010, 14.10.5) it is true that \[(1-x^{2})\frac{\mathrm{dP}_{n}^{m}(x)}{\mathrm{d}x}=(m+n)\mathrm{P}_{n-1}^{m} (x)-nx\mathrm{P}_{n}^{m}(x)\] and it follows that \[H_{n}^{m}=\frac{1}{m+1}[(m+n)G_{n-1}^{m}-nH_{n}^{m}]\] - we deduce that \[H_{n}^{m}=\frac{m+n}{m+n+1}G_{n-1}^{m}.\] It now follows from (3.5) that \[G_{n+1}^{m}=\frac{n^{2}-m^{2}}{(n+1)^{2}-m^{2}}G_{n-1}^{m}\] and \[g_{m,n+1}^{m}=\frac{n^{2}-m^{2}}{(n+1)^{2}-m^{2}}g_{m,n-1}^{m},\qquad m\leq n-1.\] (Of course, all this makes sense only for an odd \(m+n\).) To start the recursion we need \(g_{m,m}^{m}\); using the expression for \(\mathrm{P}_{m}^{m}\) above, direct integration yields \[g_{m,m}^{m}=\pi(2m)!\bigg{[}\frac{1}{4^{m}}{2m\choose m}\bigg{]}^{2}\,,\qquad m \in\mathbb{Z}_{+}.\] It is now a straightforward (yet tedious) calculation that \[g_{m,m+2k}^{m}=g_{m+2k,m}^{m}=\pi\frac{(2m)!}{4^{2m+2k}}{2k\choose k}{2m\choose m }{2m+2k\choose m+k},\qquad k\in\mathbb{Z}_{+}. 
\tag{3.6}\] Using the recurrence (3.4) we derive explicitly, by long yet straightforward algebra, \[g_{m+1,m+2k+1}^{m}=g_{m+2k+1,m+1}^{m}=\frac{2\pi(m+1)}{4^{2m+2k+1}}(2m)!{2k\choose k }{2m+1\choose m}{2m+2k+1\choose m+k} \tag{3.7}\] for all \(m,k\in\mathbb{Z}_{+}\). Remaining values of \(g_{n,\ell}^{m}\), \(m\leq\min\{n,\ell\}\), can be filled in using the recurrence (3.4). Bearing in mind formulae (3.6) and (3.7), we may expect that the remaining values of \(g_{n,\ell}^{m}\) would be similarly neat and regular. Indeed, \[g_{m+2,m}^{m}=g_{m,m+2}^{m}=\frac{\pi(2m)!}{4^{2m+1}}{2m\choose m}{2m+1\choose m },\qquad m\in\mathbb{Z}_{+},\] vindicating this expectation. However, \[g_{m+2,m+2}^{m}=\frac{\pi(2m+1)!}{2\cdot 4^{2m+2}}\frac{(8m^{2}+20m+11)}{2m+3} {2m\choose m}{2m+3\choose m+1},\qquad m\in\mathbb{Z}_{+},\] and expressions do not become nicer for larger values of \(k\). Yet, for practical uses in approximation algorithms, all we need is numerical values of the \(g_{n,\ell}^{m}\)s, and these can be easily produced by the recurrence (3.4). ## 4 Conclusions Our point of departure, Lemma 1, is an exceedingly simple result with a trivial proof. Yet, its ramifications are far and wide, since it allows for simple evaluation of connection coefficients and their generalisations and applies whenever a three-term recurrence relation is available for a set of functions: this is obviously true for orthogonal polynomials but also, for example, for Legendre functions \(\mathrm{P}_{n}^{m}\) which in general are neither polynomial nor orthogonal. There is no free lunch: to use the recurrence (1.4) or its generalisations we need first to determine boundary conditions. Yet, this is typically much easier than computing \(\mathcal{D}_{m,n}\) for all \(m,n\in\mathbb{Z}_{+}\) and it often suffices to do so numerically.
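The closing remark can be made concrete on the Legendre-pair example of Section 2.1: starting from the boundary row \(\mathcal{D}_{0,n}\), a sweep of the cross rule (1.4) (here \(a_{n}=c_{n}=0\) and \(b_{n}=d_{n}\)) fills the table and reproduces the closed-form entries. A sketch (the truncation \(N=12\) is arbitrary):

```python
import numpy as np

def b(n):
    # recurrence coefficients of the orthonormal Legendre polynomials
    return (n + 1)/np.sqrt((2*n + 1)*(2*n + 3))

N = 12
D = np.zeros((N + 1, N + 1))
D[0, 0], D[0, 2] = 1/3, 2*np.sqrt(5)/15   # boundary row; D[0, n] = 0 otherwise

# sweep the cross rule (1.4), solved for D[m+1, n]
for m in range(N):
    for n in range(N - m):
        left = b(n - 1)*D[m, n - 1] if n >= 1 else 0.0
        up = b(m - 1)*D[m - 1, n] if m >= 1 else 0.0
        D[m + 1, n] = (left + b(n)*D[m, n + 1] - up)/b(m)

# compare with the closed forms of Section 2.1
for n in range(6):
    assert abs(D[n, n] - (2*n*n + 2*n - 1)/((2*n - 1)*(2*n + 3))) < 1e-12
    assert abs(D[n + 2, n]
               - (n + 1)*(n + 2)/((2*n + 3)*np.sqrt((2*n + 1)*(2*n + 5)))) < 1e-12
```

In this example the single boundary row suffices because \(d_{-1}=0\) closes the \(n=0\) case of the rule; the recursive fill agrees with the explicit \(\mathcal{D}_{n,n}\) and \(\mathcal{D}_{n+2,n}\) to machine precision.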
2304.10924
Short incompressible graphs and $2$-free groups
Consider a finite connected $2$-complex $X$ endowed with a piecewise Riemannian metric and whose fundamental group is freely indecomposable, of rank at least $3$, and in which every $2$-generated subgroup is free. In this paper we show that we can always find a connected graph $\Gamma\subset X$ such that $\pi_1 \Gamma\simeq {\mathbb F}_2 \hookrightarrow\pi_1 X$ (in short, a $2$-incompressible graph) whose length satisfies the following curvature-free inequality: $\ell(\Gamma)\leq 4\sqrt{2\text{Area}(X)}$. This generalizes a previous inequality proved by Gromov for closed Riemannian surfaces with negative Euler characteristic. As a consequence we obtain that the volume entropy of such $2$-complexes with unit area is always bounded away from zero.
Florent Balacheff, Wolfgang Pitsch
2023-04-21T12:47:39Z
http://arxiv.org/abs/2304.10924v2
# Short incompressible graphs and \(2\)-free groups ###### Abstract. Consider a finite connected \(2\)-complex \(X\) endowed with a piecewise Riemannian metric and whose fundamental group is freely indecomposable, of rank at least \(3\), and in which every \(2\)-generated subgroup is free. In this paper we show that we can always find a connected graph \(\Gamma\subset X\) such that \(\pi_{1}\Gamma\simeq\mathbb{F}_{2}\hookrightarrow\pi_{1}X\) (in short, a \(2\)-incompressible graph) whose length satisfies the following curvature-free inequality: \(\ell(\Gamma)\leq 4\sqrt{2\operatorname{Area}(X)}\). This generalizes a previous inequality proved by Gromov for closed Riemannian surfaces with negative Euler characteristic. As a consequence we obtain that the volume entropy of such \(2\)-complexes with unit area is always bounded away from zero. Key words and phrases: Incompressible graphs, \(2\)-free groups, systolic area, volume entropy 2020 Mathematics Subject Classification: Primary: 53C23; Secondary: 20F05, 20F34 The first author acknowledges support by the FSE/AEI/MICINN grant RYC-2016-19334 and by the FEDER/AEI/MICINN grant PID2021-125625NB-I00. The second author acknowledges support by the FSE/AEI/MICINN grant PID2020-116481GB-I00. Both authors acknowledge support by the AGAUR grant 2021-SGR-01015. and (2) the induced map \(i_{*}:\pi_{1}\Gamma\to\pi_{1}X\) is injective. We then define \[L_{2}(X):=\inf_{\Gamma}\ell(\Gamma)\] where the infimum is taken over all \(2\)-incompressible graphs \(\Gamma\) and \(\ell(\Gamma)\) denotes the total length of \(\Gamma\) for the length metric induced by \(X\). This is a metric invariant closely related to the _Margulis constant_\(\mu(X)\) which is by definition the largest number \(L\) such that at any point \(x\) the subgroup of \(\pi_{1}X\) generated by loops based at \(x\) with length less than \(L\) is cyclic, see [2, Definition 4.1]. In fact it can be easily checked that \[\mu(X)\leq L_{2}(X)\leq 2\mu(X). 
\tag{1.1}\] The natural metric invariant \(L_{2}\) belongs to a larger family of invariants defined as follows. For any finite connected \(2\)-complex \(X\) endowed with a piecewise Riemannian metric define the increasing sequence of positive numbers \(\{L_{k}(X)\}_{k\geq 1}\) by setting \(L_{k}(X):=\inf_{\Gamma}\ell(\Gamma)\) where the infimum is taken over graphs which are \(k\)-incompressible (that is, such that \(\pi_{1}\Gamma\simeq\mathbb{F}_{k}\hookrightarrow\pi_{1}X\)). These numbers are well defined without any particular assumption on the fundamental group of \(X\) by setting \(L_{k}(X)=\infty\) if \(X\) does not admit any \(k\)-incompressible graph. Observe that \(L_{1}(X)\) is nothing but the _systole_ of \(X\) (the shortest length of a non-contractible loop) in the case where the fundamental group of \(X\) is \(1\)-free. So the higher invariants \(L_{k}(X)\) can be thought of as a generalization of the systole. In this context it is natural to define for any finitely presented group \(G\) its _\(k\)-free systolic area_ by the formula \[\mathfrak{S}_{k}(G):=\inf_{\pi_{1}X=G}\operatorname{Area}(X)/L_{k}^{2}(X)\] where the infimum is taken over the set of finite connected \(2\)-complexes \(X\) with given fundamental group \(G\) and endowed with a piecewise Riemannian metric. Obviously \(\mathfrak{S}_{k}(G)=0\) for any \(k\geq 1\) if \(G\) is free. For a \(1\)-free group \(G\), the invariant \(\mathfrak{S}_{1}(G)\) coincides with the notion of systolic area as defined in [1, p.337]. According to [1, Theorem 6.7.A], any \(1\)-free group \(G\) which is not free satisfies the following inequality: \[\mathfrak{S}_{1}(G)\geq 1/100.\] The current best lower bound known is \(\pi/16\), see [20]. The main purpose of this article is to prove the following analog for \(2\)-free groups. 
**Theorem 1.1**.: _Any \(2\)-free group \(G\) which is freely indecomposable and of rank at least \(3\) satisfies the following inequality:_ \[\mathfrak{S}_{2}(G)\geq 1/32.\] Therefore the new invariant \(\mathfrak{S}_{2}\) is non-trivial for a large natural class of groups. Theorem 1.1 can be restated as follows: _any finite connected \(2\)-complex \(X\) endowed with a piecewise Riemannian metric whose fundamental group is \(2\)-free and freely indecomposable, but not cyclic, satisfies the following estimate:_ \[L_{2}(X)\leq 4\sqrt{2\operatorname{Area}(X)}.\] So Theorem 1.1 generalizes the result [1, Theorem 5.4.A] that any Riemannian closed orientable surface \(S\) of genus at least \(2\) satisfies \(L_{2}(S)\leq 2\sqrt{2\operatorname{Area}(S)}\). Observe that here the assumption on the genus ensures that the fundamental group \(\pi_{1}(S)\) is \(2\)-free. See also [Gromov 1983, Theorem 6.6.C] for a higher-dimensional generalization of this last inequality. Combined with inequality (1.1), Theorem 1.1 also provides an analog in the context of \(2\)-complexes of a curvature-free inequality between the volume and the Margulis constant obtained for Riemannian manifolds whose fundamental group is \(2\)-free, see [Sabourau 2017, Theorem 4.5, item (1)]. For now we do not see how to adapt our strategy to prove an analog of Theorem 1.1 for \(k>2\), but it seems reasonable to conjecture that for each such \(k\) the invariant \(\mathfrak{S}_{k}\) is uniformly bounded from below for any \(k\)-free group freely indecomposable with rank at least \(k+1\). Lastly, Theorem 1.1 implies the following curvature-free inequality relating the volume entropy and the area. 
Recall that the volume entropy \(h(Y)\) of a finite connected complex \(Y\) (of any dimension) endowed with a piecewise Riemannian metric is defined as the exponential growth rate of the number of homotopy classes with length at most \(L\), namely \[h(Y)=\lim_{L\to\infty}\frac{1}{L}\cdot\log\left(\text{card}\{[\gamma]\in\pi_{ 1}Y\mid\gamma\text{ based loop of length at most }L\}\right).\] This definition does not depend on the chosen point where loops are based. As a consequence of Theorem 1.1 we get the following. **Corollary 1.2**.: _Any finite connected \(2\)-complex \(X\) endowed with a piecewise Riemannian metric whose fundamental group is \(2\)-free, freely indecomposable and of rank at least \(3\), satisfies the following estimate:_ \[h(X)\cdot\sqrt{\operatorname{Area}(X)}\geq 3\log 2/(4\sqrt{2}).\] There is no reason for the above constant to be optimal, but this result generalizes the following (sharp) estimate [Katok 1982]: for \(S\) an orientable closed surface whose fundamental group is \(2\)-free, the inequality \(h(S)\cdot\sqrt{\operatorname{Area}(S)}\geq 2\sqrt{\pi}\) is always satisfied. This corollary also improves a previous result, due to Babenko and privately communicated to the authors, proving an analogous lower bound with a worse constant but valid without the freely indecomposable assumption. **Acknowledgements.** We would like to thank I. Babenko and S. Sabourau for valuable exchanges. ## 2. Topology of small balls in piecewise flat \(2\)-complexes Consider a finite connected \(2\)-complex \(X\) endowed with a piecewise flat metric, and fix a point \(x\) in \(X\). In this section we focus on the topology of closed balls \[B(x,r):=\{y\in X\ |\ d(y,x)\leq r\}\] and their boundary spheres \[\partial B(x,r):=\{y\in X\ |\ d(y,x)=r\}\] for relatively small radius \(r>0\). 
Our starting point is the following result proved in [Katz, Rudyak & Sabourau 2006, Corollary 6.8]: **Proposition 2.1**.: _For any \(r>0\), the triangulation of \(X\) can be refined in such a way that both \(B(x,r)\) and \(\partial B(x,r)\) are CW-subspaces of \(X\)._ As a direct consequence we find that **Proposition 2.2**.: _For any \(r>0\) and any \(x\in X\), the fundamental group of \(B(x,r)\) is finitely presented._ Proof.: According to Proposition 2.1 choose a refinement of the triangulation of \(X\) such that \(B(x,r)\) is a CW-subspace of \(X\). Since \(X\) is compact, any triangulation contains finitely many simplices, as does the triangulation of the closed ball \(B(x,r)\). Hence its fundamental group is finitely presented. We now turn to the boundary spheres and show that they generically admit trivial tubular neighborhoods. **Proposition 2.3**.: _For all but finitely many values of \(r>0\), the boundary sphere \(\partial B(x,r)\) is a finite graph, and for each connected component \(C\) of \(\partial B(x,r)\), there exists an open neighborhood of \(C\) in \(X\) homeomorphic to \(C{\times}]0,1[\)._ Proof.: Denote by \(f=d(x,\cdot):X\to\mathbb{R}_{+}\) the function _distance to the point \(x\)_. Recall that the Reeb space \(R(f)\) is the quotient of \(X\) by the relation that identifies two points \(y_{0}\) and \(y_{1}\) if and only if \(d(x,y_{0})=d(x,y_{1})\) and both points belong to the same connected component of the level set \(f^{-1}(f(y_{0}))\). The space \(R(f)\) admits a length structure induced from \(X\). By construction we have a canonical projection map \(p:X\to R(f)\) which is \(1\)-Lipschitz. We argue as in [Katz, Rudyak & Sabourau 2006, Section 4]: the function \(f\) is a semi-algebraic function, then standard arguments show that \(R(f)\) is a finite graph and that \(R(f)\) admits a finite subdivision such that the natural map \(p\) yields a trivial bundle over the interior of each edge. 
For all distances \(r\) but the finitely many ones corresponding to the vertices of the subdivision, if \(C\) is a connected component of \(f^{-1}(r)\), then by triviality of the bundle the connected component of \(f^{-1}(]r-\varepsilon,r+\varepsilon[)\) containing \(C\) is an open neighborhood of \(C\) of the desired form, provided \(\varepsilon\) is small enough. More precisely, \(\varepsilon\) has to be chosen at most equal to the shortest distance from \(p(C)\) to one of the two ends of the edge containing it. In the last part of this section we focus on the image in \(X\) of the fundamental group of small metric balls. Consider the map \(i_{*}:\pi_{1}(B(x,r),x)\to\pi_{1}(X,x)\) induced by the inclusion \(B(x,r)\subset X\). According to [Rudyak & Sabourau 2008, Proposition 3.2] (see also [Katz, Rudyak & Sabourau 2006]), if \(\pi_{1}X\) is \(1\)-free, \(\operatorname{Im}i_{*}\) is trivial whenever the radius satisfies \(r<L_{1}(X)/2\). The last result of this section describes how \(\operatorname{Im}i_{*}\) remains simple under a weaker assumption on the radius. **Proposition 2.4**.: _Suppose that \(\pi_{1}X\) is a \(2\)-free group and fix \(r\in(0,L_{2}(X)/4)\)._ _Then the image of the map \(i_{*}:\pi_{1}(B(x,r),x)\to\pi_{1}(X,x)\) induced by the inclusion \(B(x,r)\subset X\) is either trivial, or isomorphic to \(\mathbb{Z}\)._ Proof.: Suppose that \(\operatorname{Im}i_{*}\) is not trivial. We first prove that \(\operatorname{Im}i_{*}\) is locally cyclic, that is, every pair of elements in the group generates a cyclic group. For this let \(\gamma_{1}\), \(\gamma_{2}\) be two non-contractible loops of \(X\) contained in \(B(x,r)\) and based at \(x\). As \(\pi_{1}(X,x)\) is \(2\)-free, these loops span in \(\pi_{1}(X,x)\) a free subgroup \(H(\gamma_{1},\gamma_{2})\) of rank at most \(2\). Fix \(\delta>0\) such that \(2r+\delta<L_{2}(X)/2\). We first decompose each \(\gamma_{i}\) into segments of length at most \(\delta\). 
Then for \(i=1,2\) write \(\gamma_{i}\) as a concatenation of loops \(c_{i,1}*\ldots*c_{i,n_{i}}\) based at \(x\) where each \(c_{i,k}\) is made of the union of one of these small segments together with two shortest paths from its extremal points to \(x\). Any of these loops \(c_{i,k}\) based at \(x\) lies by construction in \(B(x,r)\) and has length at most \(2r+\delta<L_{2}(X)/2\). So a graph made of the union of any two of these loops has total length \(<L_{2}(X)\), hence the subgroup of \(\pi_{1}(X,x)\) generated by any such pair of loops is cyclic (if not trivial). Then the subgroup \(H(\{c_{i,j}\})\) generated by all the homotopy classes of the loops \(\{c_{i,j}\}\) is abelian as its generators pairwise commute. In particular there exists some positive \(k\) such that \(H(\{c_{i,j}\})\simeq\mathbb{Z}^{k}\) as \(\pi_{1}X\) is torsion-free. But \(\pi_{1}X\) is also \(2\)-free so that \(k=1\). This implies that \(H(\gamma_{1},\gamma_{2})\simeq\mathbb{Z}\) and hence \(\operatorname{Im}i_{*}\) is locally cyclic. As \(\operatorname{Im}i_{*}\) is also finitely generated according to Proposition 2.2, we deduce that it is cyclic. Furthermore, \(\operatorname{Im}i_{*}\) has no torsion, since \(\pi_{1}(X)\) is torsion-free, so it is isomorphic to \(\mathbb{Z}\). ## 3. Geometry of small balls in piecewise flat \(2\)-complexes In this section we prove the central technical result of this paper. Consider a finite connected \(2\)-complex \(X\) endowed with a piecewise flat metric and whose fundamental group is \(2\)-free, freely indecomposable and of rank at least \(3\). By standard compactness arguments there always exists a minimal graph \(\Gamma\) which is \(2\)-incompressible and whose length satisfies \(\ell(\Gamma)=L_{2}(X)\). Fix any point \(x\) on \(\Gamma\). 
**Theorem 3.1**.: _For all but finitely many values of \(r\in(0,L_{2}(X)/4)\), the following inequality holds true:_ \[\ell(\partial B(x,r))\geq r.\] In particular, using the coarea formula we derive the following lower bound, which implies Theorem 1.1: \[\operatorname{area}(B(x,L_{2}(X)/4))\geq\int_{0}^{L_{2}(X)/4}\ell(\partial B(x,r))\,dr\geq\int_{0}^{L_{2}(X)/4}r\,dr=L_{2}(X)^{2}/32.\] Proof.: Fix \(r\in(0,L_{2}(X)/4)\) so that Proposition 2.3 applies and set \(B:=B(x,r)\). Denote by \(X_{1},\ldots,X_{k}\) the path connected components of \(X\setminus\operatorname{int}(B)\) with non-empty interior, and by \(C_{1},\ldots,C_{n}\) the essential boundary connected components of \(B\) defined as the maximal connected subgraphs of \(\partial B\) such that the following decomposition holds true: \[B\cap(X_{1}\sqcup\ldots\sqcup X_{k})=C_{1}\sqcup\ldots\sqcup C_{n}.\] According to Proposition 2.3, each \(C_{i}\) is a connected finite graph, and there exists an open neighbourhood \(U\) of \(C_{1}\sqcup\ldots\sqcup C_{n}\) in \(X\) such that \[U\stackrel{{\text{hom}}}{{\simeq}}(C_{1}\times]0,1[)\sqcup\ldots \sqcup(C_{n}\times]0,1[).\] According to Proposition 2.4, the inclusion \(i:B\hookrightarrow X\) induces a homomorphism of fundamental groups of rank at most \(1\). So each graph \(C_{i}\) satisfies either \(i_{*}(\pi_{1}C_{i})=0\) or \(i_{*}(\pi_{1}C_{i})=\mathbb{Z}\). Furthermore, if \(\operatorname{rank}i_{*}(\pi_{1}C_{i})=\operatorname{rank}i_{*}(\pi_{1}C_{j})=1\), then the subgroup generated by both these subgroups is a subgroup of \(i_{*}(\pi_{1}(B))=\mathbb{Z}\) and hence is again isomorphic to \(\mathbb{Z}\). In particular elements in \(i_{*}(\pi_{1}C_{i})\) commute with those in \(i_{*}(\pi_{1}C_{j})\). Let \(Y=(X_{1}\sqcup\ldots\sqcup X_{k})/\sim\) where \(x\sim y\) if and only if \(x\) and \(y\) belong to the same connected component \(C_{i}\) for some \(i\in\{1,\ldots,n\}\). 
Denote by \(a_{1},\ldots,a_{n}\) the points in \(Y\) that are images of the boundary graphs \(C_{1},\ldots,C_{n}\) under the projection map \[f:X_{1}\sqcup\ldots\sqcup X_{k}\to Y.\] The space \(Y\) decomposes into a disjoint union \[Y_{1}\sqcup\ldots\sqcup Y_{k}\] of path-connected components \(Y_{1},\ldots,Y_{k}\) such that \(X_{j}=f^{-1}(Y_{j})\). Define for each \(j=1,\ldots,k\) the subset \(I_{j}\subset\{1,\ldots,n\}\) such that \(a_{l}\in Y_{j}\Leftrightarrow l\in I_{j}\). Therefore \(\{1,\ldots,n\}=I_{1}\sqcup\ldots\sqcup I_{k}\), and \[B\cap X_{j}=\sqcup_{l\in I_{j}}C_{l}.\] If \(k=n\), we may assume, up to reindexing the boundary graphs, that \(a_{j}\in Y_{j}\) for each \(j=1,\ldots,n\) (or equivalently that \(I_{j}=\{j\}\)). If \(k<n\) then \(|I_{j}|\geq 2\) for some \(j\in\{1,\ldots,k\}\) and the following holds true. **Lemma 3.2**.: _Assume that \(|I_{j}|\geq 2\). Then \(i_{*}\left(\pi_{1}C_{l}\right)=\mathbb{Z}\) for all \(l\in I_{j}\)._ Proof.: By contradiction, let \(l\in I_{j}\) be such that \(i_{*}\left(\pi_{1}C_{l}\right)=0\) and fix a neighborhood \(U_{l}\) of \(C_{l}\) such that \(U_{l}\simeq C_{l}\times]0,1[\). By construction \(U_{l}\) is connected, \(X=U_{l}\cup(X\setminus C_{l})\), and because \(|I_{j}|\geq 2\) the open set \(X\setminus C_{l}\) is also connected. Observe that \(A_{l}:=U_{l}\cap(X\setminus C_{l})\) has exactly two connected components, and choose points \(x_{1}\) and \(x_{2}\), one in each of them. Fix a path \(\beta\) in \(U_{l}\) and a path \(\gamma\) in \(X\setminus C_{l}\), both joining \(x_{1}\) and \(x_{2}\). 
We denote by \(\varphi_{1}:\pi_{1}(A_{l},x_{1})\to\pi_{1}(U_{l},x_{1})\) and \(\psi_{1}:\pi_{1}(A_{l},x_{1})\to\pi_{1}(X\setminus C_{l},x_{1})\) the homomorphisms induced by the respective inclusion maps, and we define two homomorphisms \(\varphi_{2}:\pi_{1}(A_{l},x_{2})\to\pi_{1}(U_{l},x_{1})\) and \(\psi_{2}:\pi_{1}(A_{l},x_{2})\to\pi_{1}(X\setminus C_{l},x_{1})\) by setting \[\varphi_{2}(\alpha)=\beta^{-1}\alpha\beta\text{ and }\psi_{2}(\alpha)=\gamma^{-1} \alpha\gamma.\] We also define a homomorphism \(\mu:\mathbb{Z}\simeq\langle a\rangle\to\pi_{1}(X,x_{1})\) by setting \[\mu(a)=\beta^{-1}\gamma.\] By the Van Kampen theorem [Bourbaki 2016, p.422, Proposition 2], there exists a unique surjective homomorphism \[M:\pi_{1}(U_{l},x_{1})*\pi_{1}(X\setminus C_{l},x_{1})*\mathbb{Z}\to\pi_{1}(X,x_ {1})\] which coincides with \(\mu\) on the factor \(\mathbb{Z}\) and with the homomorphisms induced by the respective natural inclusions on the two factors \(\pi_{1}(U_{l},x_{1})\) and \(\pi_{1}(X\setminus C_{l},x_{1})\), and whose kernel is normally generated by the elements of the form 1. \(\varphi_{2}(v)a\psi_{2}(v)^{-1}a^{-1}\) for \(v\in\pi_{1}(A_{l},x_{2})\); 2. \(\varphi_{1}(v)\psi_{1}(v)^{-1}\) for \(v\in\pi_{1}(A_{l},x_{1})\). As the image of \(\pi_{1}(C_{l})\simeq\pi_{1}(U_{l},x_{1})\) is trivial in \(\pi_{1}(X,x_{1})\), the homomorphisms \(\varphi_{1}\) and \(\varphi_{2}\) are trivial, and consequently the surjective homomorphism \(M\) factorizes as \[M:\pi_{1}(X\setminus C_{l},x_{1})*\mathbb{Z}\to\pi_{1}(X,x_{1})\] with kernel normally generated by the elements of the form 1. \(\psi_{2}(v)\) for \(v\in\pi_{1}(A_{l},x_{2})\); 2. \(\psi_{1}(v)\) for \(v\in\pi_{1}(A_{l},x_{1})\). By definition all these relations are written in the group \(\pi_{1}(X\setminus C_{l},x_{1})\). 
So if we denote by \(H\) the quotient of \(\pi_{1}(X\setminus C_{l},x_{1})\) by these relations, \(M\) induces an isomorphism \[\overline{M}:H*\mathbb{Z}\to\pi_{1}(X,x_{1})\] contradicting the fact that the fundamental group of \(X\) is freely indecomposable. We will assume that the minimal \(2\)-incompressible graph \(\Gamma\) in \(X\) satisfies the following properties: (1) \(\Gamma\) is transverse to \(C_{1}\sqcup\ldots\sqcup C_{n}\), (2) our chosen point \(x\in\Gamma\) is the only possible vertex of degree \(1\) (deleting, if necessary, the other vertices of degree one together with their incident edges), and (3) all remaining vertices have valence \(3\). If not, we could approximate the minimal graph \(\Gamma\) by a family \(\{\Gamma_{i}\}\) of \(2\)-incompressible graphs obeying these assumptions and such that \(\ell(\Gamma_{i})\to L_{2}(X)\) when \(i\to\infty\), and argue in the sequel in a similar way, just replacing \(\ell(\Gamma)\) by \(\ell(\Gamma_{i})\). So \(\Gamma\) is made of an arc connecting \(x\) to one of the two trivalent connected graphs with cyclic number equal to \(2\), namely the theta graph or the dumbbell graph. It may happen that the above mentioned arc is reduced to the point \(x\) (or equivalently, that \(x\) belongs to this graph). As the graph \(\Gamma\) is \(2\)-incompressible, the subgraph \(\Gamma\cap B\) has cyclic number at most \(1\) according to Proposition 2.4, and the graph \(\Gamma\) escapes from \(B\) and so necessarily traverses the essential boundary \(C_{1}\sqcup\ldots\sqcup C_{n}\). Set \(\Gamma_{j}:=\Gamma\cap X_{j}\) and observe that some of these graphs may be empty (but not all). 
Furthermore let \(\Gamma_{0}=\Gamma\cap B\) be the remaining part of the graph \(\Gamma\) which completes the decomposition as follows: \[\Gamma=\Gamma_{0}\cup\Gamma_{1}\cup\ldots\cup\Gamma_{k}.\] Now construct a new graph \(\overline{\Gamma}\) starting from \(\Gamma\), and obtained by deleting \(\Gamma_{0}\) and pasting all the boundary graphs as follows: \[\overline{\Gamma}:=(\Gamma\setminus\Gamma_{0})\cup(C_{1}\cup\ldots\cup C_{n}).\] We shall see that we can always extract from \(\overline{\Gamma}\) a \(2\)-incompressible graph \(\Gamma^{\prime}\), and this implies the desired lower bound. Indeed the \(2\)-incompressible graph \(\Gamma^{\prime}\) will satisfy \(\ell(\Gamma^{\prime})\geq L_{2}(X)\) as well as \(\ell(\Gamma^{\prime})\leq\ell(\Gamma)-r+\sum_{j=1}^{n}\ell(C_{j})\) as \(\ell(\Gamma_{0})\geq r\). Given that \(\ell(\Gamma)=L_{2}(X)\) we get the announced lower bound \[\ell(\partial B)\geq\sum_{j=1}^{n}\ell(C_{j})\geq r.\] So it remains to prove that we can always extract a \(2\)-incompressible \(\Gamma^{\prime}\) from \(\overline{\Gamma}\). We argue as follows. _Suppose first that the inclusion \(B\subset X\) induces the zero morphism: \(i_{*}(\pi_{1}B)=0\)._ In particular any boundary component \(C\) satisfies \(i_{*}(\pi_{1}C)=0\) as its fundamental group factors through \(i_{*}(\pi_{1}B)\). Thus Lemma 3.2 implies that \(k=n\). The key point is that there exists a unique \(j\in\{1,\ldots,n\}\) such that \(i_{*}(\pi_{1}X_{j})\neq 0\). Indeed, given that \(i_{*}(\pi_{1}B)=0\) and applying the Van Kampen theorem to the covering \(\{B,X_{1},\ldots,X_{n}\}\) of \(X\), we get that \(\pi_{1}(X)\simeq\pi_{1}X_{1}*\ldots*\pi_{1}(X_{n})\). As \(\pi_{1}X\) is freely indecomposable, only one of these free factors is non-trivial. So the \(2\)-incompressible graph \(\Gamma\), which has cyclic number \(2\), must intersect the boundary graph \(C_{j}\) of the non-trivial piece \(X_{j}\). 
Fix two homotopically independent loops \(c_{1}\) and \(c_{2}\) of \(\Gamma\) based at the same point, say \(p\), of the boundary graph \(C_{j}\). If they are not entirely contained in \(X_{j}\), and as \(\pi_{1}(B\cup(\cup_{l\neq j}X_{l}))=0\), we can for each of the \(c_{i}\)'s homotop each of their subarcs lying outside \(X_{j}\) into a subarc of \(C_{j}\) without moving their respective endpoints. Therefore we can homotop \(c_{1}\) and \(c_{2}\) into two new homotopically independent loops still based at \(p\) and lying in \(\Gamma_{j}\cup C_{j}\subset\overline{\Gamma}\). Thus, as wanted, we can extract a \(2\)-incompressible subgraph from \(\overline{\Gamma}\). _Suppose now that the inclusion \(B\subset X\) induces a morphism of rank \(1\): \(i_{*}(\pi_{1}B)=\mathbb{Z}\)._ Fix an element \(a\) of \(\pi_{1}B\) whose image generates \(i_{*}(\pi_{1}B)=\mathbb{Z}\) and a closed curve \(c\) of \(\Gamma\) based at \(x\) and homotopically independent from \(a\). The loop \(c\) necessarily escapes from \(B\). Denote by \(p_{1},\ldots,p_{N}\) the intersection points along \(c\) with \(\partial B\) (it may happen that \(p_{i}=p_{i+1}\) for some \(i\)). Fix for \(i=1,\ldots,N\) a path \(\delta_{i}\) contained in \(B\) from \(x\) to \(p_{i}\). We can decompose the loop \(c\) into a concatenation of loops \(c_{i}\) based at \(x\), each one being made by first following \(\delta_{i}\), then the portion denoted by \(\eta_{i}\) of \(c\) from \(p_{i}\) to the next intersection point \(p_{i+1}\), and then going back to \(x\) using \(\delta_{i+1}^{-1}\). One of these loops must be homotopically independent from \(a\): the loop \(c\) does not homotopically commute with \(a\), and thus at least one of the \(c_{i}\)'s does not homotopically commute with \(a\) either. Again, this loop that we denote simply by \(c_{i}\) necessarily escapes from \(B\) and the corresponding portion \(\eta_{i}\) lies outside \(\operatorname{int}(B)\). 
Let \(X_{j}\) be the path connected component of \(X\setminus\operatorname{int}(B)\) that contains \(\eta_{i}\). _If \(X_{j}\) has more than one boundary component_, then by Lemma 3.2 all boundary components are homotopically non-trivial in \(B\) and in \(X\), and we argue as follows. Suppose first that the endpoints of \(\eta_{i}\) belong to two distinct boundary graphs \(C_{k}\) and \(C_{l}\) for some \(k\neq l\). First observe that \(k\) and \(l\) both necessarily belong to the same subset \(I_{j}\) as \(\eta_{i}\subset X_{j}\). Moreover \(i_{*}(\pi_{1}C_{k})=\mathbb{Z}\) and \(i_{*}(\pi_{1}C_{l})=\mathbb{Z}\) as already observed. Fix two non-trivial loops \(b_{k}\in C_{k}\) and \(b_{l}\in C_{l}\) respectively based at \(p_{i}\) and \(p_{i+1}\). Set \(\delta=\delta_{i}^{-1}*\delta_{i+1}\). Observe that the homotopy classes of \(\eta_{i}*b_{l}*\eta_{i}^{-1}\) and \(c_{i}*(\delta*b_{l}*\delta^{-1})*c_{i}^{-1}\) (where \(c_{i}\) is viewed as a loop based at \(p_{i}\)) coincide. If the loop \(\eta_{i}*b_{l}*\eta_{i}^{-1}\) were not homotopically independent from \(b_{k}\), it would imply that their homotopy classes satisfy the equality \([c_{i}]\cdot a^{n}\cdot[c_{i}^{-1}]=a^{m}\) for some \(m,n\in\mathbb{Z}\setminus\{0\}\). But this is impossible as \(c_{i}\) was chosen homotopically independent from the class \(a\). So the two loops \(\eta_{i}*b_{l}*\eta_{i}^{-1}\) and \(b_{k}\) based at \(p_{i}\) are homotopically independent and both contained in \(\Gamma_{j}\cup C_{k}\cup C_{l}\subset\overline{\Gamma}\). So their union forms a \(2\)-incompressible graph \(\Gamma^{\prime}\subset\overline{\Gamma}\). Now suppose that both endpoints of \(\eta_{i}\) belong to the same connected boundary component \(C_{l}\), and fix some subarc \(\alpha\) in \(C_{l}\) from \(p_{i}\) to \(p_{i+1}\). 
The closed curve \(c_{i}\) (viewed as a loop based at \(p_{i}\)) is homotopic to the concatenation of the loop \(\eta_{i}*\alpha^{-1}\) with the loop \(\alpha*\delta_{i+1}^{-1}*\delta_{i}\). The second loop is included in \(B\) and therefore its homotopy class \([\alpha*\delta_{i+1}^{-1}*\delta_{i}]\) is equal to \(a^{k}\) for some \(k\in\mathbb{Z}\). Hence the first loop \(\eta_{i}*\alpha^{-1}\) is homotopically independent from \(a\). Now choose a loop \(\gamma\) in \(C_{l}\) that is homotopically non-trivial in \(\pi_{1}X\) (and therefore whose homotopy class lies in the cyclic subgroup \(\langle a\rangle\)), and a path \(\tau\) in \(C_{l}\) connecting \(\gamma\) to \(p_{i}\). Then \[\gamma\cup\tau\cup\eta_{i}*\alpha^{-1}\subset\overline{\Gamma}\] is the desired \(2\)-incompressible subgraph. _If \(X_{j}\) has a unique boundary component \(C_{l}\)_, observe that \(i_{*}(\pi_{1}(C_{l}))\neq 0\). For if it were trivial, by applying the Van Kampen theorem to the covering of \(X\) by the open set \(X\setminus X_{j}\) and its complement \(X_{j}\) slightly enlarged so that these two open sets overlap along a half-tubular neighborhood \(U\simeq C_{l}\times]0,1[\) of \(C_{l}\), we would get a free decomposition \(\pi_{1}X\simeq\pi_{1}X_{j}*\pi_{1}(X\setminus X_{j})\) in which both factors are non-trivial: a contradiction. Finally, because the loop \(c_{i}\) is homotopically independent from the class \(a\), we can extract a 2-incompressible subgraph from \(C_{l}\cup\eta_{i}\subset\overline{\Gamma}\). ## 4. A universal bound for the volume entropy We conclude by explaining how to derive Corollary 1.2 from Theorem 1.1. Proof of Corollary 1.2.: Let \(X\) be a finite connected 2-complex endowed with a piecewise Riemannian metric whose fundamental group is 2-free, freely indecomposable and of rank at least 3. According to Theorem 1.1 we can find a 2-incompressible graph \(\Gamma\hookrightarrow X\) with induced length at most \(4\sqrt{2}\sqrt{\operatorname{area}X}\). 
The fact that \(\pi_{1}\Gamma\simeq\mathbb{F}_{2}\) implies by [Kapovich & Nagnibeda 2007] (see also [Lim 2008]) that \[\ell(\Gamma)\cdot h(\Gamma)\geq 3\log 2\] where \(h(\Gamma)\) denotes the volume entropy of the finite connected 1-dimensional complex \(\Gamma\) for the piecewise Riemannian metric induced by \(X\). The injection \(\pi_{1}\Gamma\hookrightarrow\pi_{1}X\) ensures that \(h(X)\geq h(\Gamma)\), from which we derive the desired lower bound: \[h(X)\cdot\sqrt{\operatorname{area}X}\geq\frac{1}{4\sqrt{2}}\,h(\Gamma)\cdot \ell(\Gamma)\geq\frac{3\log 2}{4\sqrt{2}}.\]
2305.18089
Inverse Protein Folding Using Deep Bayesian Optimization
Inverse protein folding -- the task of predicting a protein sequence from its backbone atom coordinates -- has surfaced as an important problem in the "top down", de novo design of proteins. Contemporary approaches have cast this problem as a conditional generative modelling problem, where a large generative model over protein sequences is conditioned on the backbone. While these generative models very rapidly produce promising sequences, independent draws from generative models may fail to produce sequences that reliably fold to the correct backbone. Furthermore, it is challenging to adapt pure generative approaches to other settings, e.g., when constraints exist. In this paper, we cast the problem of improving generated inverse folds as an optimization problem that we solve using recent advances in "deep" or "latent space" Bayesian optimization. Our approach consistently produces protein sequences with greatly reduced structural error to the target backbone structure as measured by TM score and RMSD while using fewer computational resources. Additionally, we demonstrate other advantages of an optimization-based approach to the problem, such as the ability to handle constraints.
Natalie Maus, Yimeng Zeng, Daniel Allen Anderson, Phillip Maffettone, Aaron Solomon, Peyton Greenside, Osbert Bastani, Jacob R. Gardner
2023-05-25T02:15:25Z
http://arxiv.org/abs/2305.18089v1
# Inverse Protein Folding Using Deep Bayesian Optimization

###### Abstract

Inverse protein folding--the task of predicting a protein sequence from its backbone atom coordinates--has surfaced as an important problem in the "top down", _de novo_ design of proteins. Contemporary approaches have cast this problem as a conditional generative modelling problem, where a large generative model over protein sequences is conditioned on the backbone. While these generative models very rapidly produce promising sequences, independent draws from generative models may fail to produce sequences that reliably fold to the correct backbone. Furthermore, it is challenging to adapt pure generative approaches to other settings, e.g., when constraints exist. In this paper, we cast the problem of improving generated inverse folds as an optimization problem that we solve using recent advances in "deep" or "latent space" Bayesian optimization. Our approach consistently produces protein sequences with greatly reduced structural error to the target backbone structure as measured by TM score and RMSD while using fewer computational resources. Additionally, we demonstrate other advantages of an optimization-based approach to the problem, such as the ability to handle constraints.

## 1 Introduction

_De novo_ protein design, the design of amino acid sequences that will fold into protein structures that achieve a set of desired biochemical or functional properties, is one of the central bioengineering challenges of the 21st century [31; 12; 55; 40]. Recent advances in computational methods for protein folding [37; 43; 17; 5] have led to the rise of these "top-down" approaches to protein design, where one directly starts with protein structures that achieve some goal, and seeks amino acid sequences that achieve that structure. This approach is especially promising when coupled with recent work like RFdiffusion [73], which _directly generates_ protein structures that achieve desired properties. 
This approach to protein design requires a solution to the _inverse folding problem_[77; 1; 35] of designing amino acid sequences that fold into a given backbone structure. Recent inverse folding approaches using large language modelling have surged in accuracy given the new wealth of accurate, computationally-determined protein structure data available, and now achieve state of the art performance [29; 32; 64; 3; 36; 4; 72]. However, these approaches have tangible disadvantages in practice. In protein engineering, it's common to devote significant resources to solving a specific, focused design problem, often under numerous developability and design constraints. Generative and one-shot prediction approaches, however, are more suitable for "wide" results, demonstrating the success of one-shot or few-shot attempts to inverse fold large databases of structures, rather than deep, focused efforts to inverse fold a handful of specific structures with high accuracy using iterated feedback. The focused setting, where one seeks to inverse fold specific target structures very well rather than achieve good performance on average, is arguably more important in protein engineering. In this paper, rather than solving inverse folding through pure generation, we develop a blackbox optimization-focused pipeline that leverages recent rapid advancements in "latent space" or "deep" Bayesian optimization [45; 63; 21; 48; 75; 58; 22; 24; 25], where generation provides initial solutions that are iteratively improved. Given an amino acid sequence \(\mathbf{x}\) and an objective function \(f(\mathbf{x})\) that measures (for example) the similarity between the computationally folded structure of \(\mathbf{x}\) and the target structure, we seek to maximize \(f(\mathbf{x})\). Concretely, we make the following contributions: 1. In contrast to recent work on inverse protein folding, we cast the task as an optimization problem. 
Rather than solving inverse folding as a one-shot prediction task, this enables us to iteratively refine the design of a sequence, resulting in sequences that fold computationally to significantly better matches. Furthermore, our approach uses a significantly smaller generative model, enabling it to use fewer computational resources overall.
2. We develop an optimization pipeline, BO-IF, leveraging recent work on latent space Bayesian optimization. Our approach deploys more accessible computational resources, using a "small" transformer language model with only 47M parameters, trainable in a few days on a single RTX A6000. We use this model in concert with Bayesian optimization to solve inverse protein folding problems, and make our pipeline publicly available using standard software libraries like BoTorch [6] at [https://github.com/nataliemaus/bo-if](https://github.com/nataliemaus/bo-if).
3. We demonstrate that our method has substantial advantages over pure generation:
   1. **Accuracy.** Our approach **reduces the structural error** of sequences produced by ESM-IF [29] relative to the target **by 48\(\%\) on average** as measured by 1-TM score, and **by 28\(\%\)** as measured by RMSD.
   2. **Efficiency.** On a single GPU, the end-to-end optimization time considering and folding 150,000 sequences _sequentially_ is roughly equivalent to parallel generation and folding due to the size of the generative models used, with both approaches taking approximately 50 GPU hours.
4. We extend our method to additional settings, using extensions of Bayesian optimization (BO) from the literature. In particular:
   1. We demonstrate our approach's ability to optimize under blackbox constraints by adapting constrained BO to this setting.
   2. We demonstrate that we are able to design _diverse sets_ of sequences that fold to the target structure by leveraging recent work on using BO for diverse generation [46]. 
## 2 Background and Related Work

**Computational protein folding.** Utilizing a Convolutional Neural Network (CNN) backbone and starting from features derived from multiple sequence alignments (MSAs), AlphaFold demonstrated the feasibility of learning a protein-specific potential and predicting a protein's structure from its sequence through potential minimization via gradient descent [60]. However, analogous to earlier approaches, its performance declines in the absence of homologous structures [78; 37]. AlphaFold2 further improved upon this by incorporating an SE(3)-equivariant Transformer network for refining atomic coordinates, thereby enhancing generalization capabilities to structures without homologs [37]. Subsequent advancements, including RosettaFold and AlphaFold Multimer, have further refined these methodologies, enabling the generation of accurate models of protein-protein complexes [17; 5]. ESM-Fold tackled this problem from a sequence-only perspective, building on a 15-billion parameter protein language model [43; 56], achieving near parity with alignment-based methods while being orders of magnitude faster. This significant acceleration in folding is what facilitates our exploration of the inverse folding problem through optimization and sampling in our current work.

**Designing sequences for target functionality.** Across the many subdomains of bioengineering, the three dimensional structure and topology of proteins determine their function [11, 68, 9]. The field of _de novo_ protein design attempts to generate new amino acid sequences--with no necessary relationship to those found in nature--that achieve a desired complex fold, and thus produce desired function. Over the last several decades, many approaches have been developed to solve the task of computational protein design, with most focus on statistical approaches [61, 59] and expensive biophysical simulations [1, 10]. 
Only recently have deep learning methods been shown to take advantage of the wealth of known [8] or predicted [37] protein structures. [51] describe "protein design as an optimization problem: given a user-defined structure and function, find one or a few low-energy amino acid sequences stably adopting the desired structure and performing the targeted function." Some of the largest barriers to computational protein design have only been addressed recently. For example, generating a reliable protein backbone _de novo_ has been made more accessible by RFDiffusion [73]. Nonetheless, it remains difficult to confidently describe the sequence or set of sequences that may be optimal for a specified structure [76], to ensure those sequences meet necessary design constraints [30], and to fully validate the function of designed sequences with meaningful functional readouts [66]. While existing protein design methods are able to output unconstrained amino acid sequences that fold into a specific structure [73, 51, 29], real-world protein design applications require that those proposed sequences both satisfy the inverse-folding challenge _and_ conform to arbitrary constraints. These additional constraints enable the production and characterization of those proteins or ensure their safe and effective use in research, industrial, diagnostic or therapeutic settings. Specifically, a high-throughput protein engineering platform could process tens to hundreds or more candidate sequences in a single batch [14, 71]. However, the designed proteins would require specific sequence constraints to ensure sufficient protein yield, stability and solubility [2, 69]. Alternatively, for downstream applications like therapeutic development, constraints would be needed to avoid unnecessary immunogenicity via humanization [52] and/or minimization of aggregation propensity [53]. 
Figure 1: Different target backbones (blue) inverse folded by ESM-IF (yellow) and Bayesian optimization (pink). Our method, BO-IF, consistently finds proteins that better match the target structure as evidenced visually by the better alignment and by the higher TM-scores achieved. Arrows indicate example regions of mismatch.

**Black-box and Bayesian optimization.** In black-box optimization, we seek to find the minimizer of a function, \(\operatorname*{arg\,min}_{\mathbf{x}}f(\mathbf{x})\). Commonly, \(f(\mathbf{x})\) is assumed to be expensive to evaluate and unknown (i.e., a "black box"). Classically, this problem has been considered in the setting where the search space is continuous. However, many applications across the natural sciences and engineering require optimizing over discrete and structured input spaces, such as amino acid sequences. Bayesian optimization (BO) [50, 47, 62] is an approach to sample-efficient black-box optimization. At iteration \(t\) of Bayesian optimization, one has access to a dataset \(\mathcal{D}=[(\mathbf{x}_{i},y_{i})]_{i=1}^{t}\), where \(y_{i}\) denotes the (possibly noisy) objective value of the input \(\mathbf{x}_{i}\). This data is used to build a probabilistic _surrogate model_--commonly a Gaussian process (GP) [54]--of the objective function. This supervised surrogate informs a policy--often called an _acquisition function_--about what sample to evaluate next. After the objective function is evaluated on a candidate, this observation is added to the dataset \(\mathcal{D}\) and the surrogate model is updated in iteration \(t+1\). BO then proceeds by collecting observations of the objective function in an iterated sequential fashion, successively building a larger dataset \(\mathcal{D}\) and therefore a better supervised model of the objective, which results in a better policy. 
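As a concrete illustration of this loop, the sketch below runs GP-based BO on a toy one-dimensional function. All choices here (RBF kernel, lower-confidence-bound acquisition, a grid of candidates, and the toy objective itself) are ours for the sketch and are not the surrogate or acquisition used in the paper:

```python
import numpy as np

def objective(x):
    # toy expensive black-box function to minimize
    return np.sin(3 * x) + 0.5 * x**2

def rbf(a, b, lengthscale=0.5):
    # squared-exponential kernel between two 1D point sets
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X, y, X_query, noise=1e-4):
    # exact GP posterior mean and standard deviation at query points
    K = rbf(X, X) + noise * np.eye(len(X))
    K_s = rbf(X, X_query)
    mu = K_s.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(K_s * np.linalg.solve(K, K_s), axis=0)
    return mu, np.sqrt(np.clip(var, 0.0, None))

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=3)      # initial design D
y = objective(X)
grid = np.linspace(-2, 2, 401)      # candidate pool

for _ in range(20):                 # sequential BO iterations
    mu, sd = gp_posterior(X, y, grid)
    acq = mu - 2.0 * sd             # lower confidence bound (minimization)
    x_next = grid[np.argmin(acq)]   # acquisition proposes the next sample
    X = np.append(X, x_next)        # evaluate the black box and grow D
    y = np.append(y, objective(x_next))
```

Each iteration refits the surrogate on the growing dataset and queries the acquisition function for the next evaluation, mirroring the sequential procedure described above.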
**Bayesian optimization for biological discovery.** Bayesian Optimization (BO) has become a powerful tool for biological discovery due to the vast space of potential biological sequences and structures. Several methods have been proposed that apply BO over discrete spaces [49; 7]. Notably, a stream of research has used BO to discover new molecules within the latent spaces of Variational Autoencoders [41; 23]. These methods either directly encode and decode from string representations [45; 63], or utilize more complex graphical or grammatical structures [34; 42]. Concurrently, BO has been applied to optimize protein sequences for desired functionalities. Techniques such as Khan et al. [39] and Romero et al. [57] have explored the protein fitness landscape and designed new antibodies using BO, while Stanton et al. [63] explored Bayesian optimization for sequence design without the need for a large pretraining corpus. These advances demonstrate the potential and effectiveness of BO in the field of biological discovery. ## 3 Inverse Folding as Optimization Inverse folding seeks to design amino acid sequences that fold to a desired structure. Commonly, one focuses on backbone structure alone, ignoring side chains [29]. Formally, we are given as input a sequence of spatial coordinates \(\mathbf{x}=(x_{1},...,x_{n})\), with \(x_{i}\in\mathbb{R}^{3k}\) representing the 3D spatial coordinates of \(k\) backbone atoms per amino acid in the structure. The goal is to produce a sequence of amino acids \(\mathbf{y}=(y_{1},...,y_{n})\) that folds to a structure with backbone coordinates determined by \(\mathbf{x}\). ### Bayesian Optimization Strategy **Inverse folding as generation.** In a generative approach to inverse folding, Hsu et al. 
[29] train an auto-regressive model of the conditional distribution, \[\Psi(\mathbf{y}\mid\mathbf{x})=\prod_{i=1}^{n}p(y_{i}\mid y_{1:i-1},\mathbf{x }),\] where a supervised dataset of structure and native sequence pairs \([(\mathbf{x}_{i},\mathbf{y}_{i})]_{i=1}^{n}\) is collected using a combination of experimentally determined structures and high-confidence structures produced using computational folding techniques [37; 43]. Hsu et al. [29] train \(\Psi\) as a GVP-GNN [35] followed by a generic encoder-decoder Transformer [70]. Given a new target structure \(\mathbf{x}^{+}\), amino acid sequences may be sampled from the trained autoregressive model above, \(\mathbf{y}^{+}\sim\Psi(\mathbf{y}\mid\mathbf{x}^{+})\).

**Inverse folding as optimization.** To cast the inverse folding task as an optimization problem, we define an objective function that measures how closely the sequence \(\mathbf{y}\) folds to the structure \(\mathbf{x}^{+}\), and then optimize it. Letting \(\mathcal{F}:\mathcal{Y}\rightarrow\mathcal{X}\) denote a transformation that maps sequences to backbone atom coordinates using a computational folding model like AlphaFold2 or ESMFold, we seek to solve optimization problems of the form \[\mathbf{y}^{+}=\operatorname*{arg\,min}_{\mathbf{y}}\mathcal{E}(\mathcal{F}( \mathbf{y}),\mathbf{x}^{+}), \tag{1}\] where \(\mathcal{E}(\mathbf{x},\mathbf{x}^{\prime})\) measures the structural error between \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\). For example, \(\mathcal{E}\) could be taken to be \(1-\)TM score [79], or RMSD. We first focus on solving the unconstrained optimization problem as presented above before discussing extensions, including constraints and finding diverse solutions.

Even with access to computational folding models, this optimization problem is challenging because it is over the discrete space of \(20^{n}\) possible amino acid sequences of length \(n\). In the experiments we conduct, \(n\) ranges from 100 to 150.
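For instance, \(\mathcal{E}\) could be RMSD after optimal rigid superposition. A self-contained numpy sketch of that error term using the Kabsch algorithm follows; note the experiments below compute TM score with TM-align and RMSD from full folding output, so this is only an illustration of one possible \(\mathcal{E}\):

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (n, 3) coordinate arrays after optimal rigid
    superposition of P onto Q (Kabsch algorithm)."""
    P = P - P.mean(axis=0)                 # remove translation
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                            # cross-covariance
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T                     # optimal rotation
    P_rot = P @ R.T
    return float(np.sqrt(((P_rot - Q) ** 2).sum() / len(P)))
```

A rotated and translated copy of a structure should score (numerically) zero error under this metric, while genuinely different folds score higher.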
Furthermore, fast computational folding models like ESMFold [43] are sufficiently large that folding a sequence takes on the order of seconds on average, even when batched to the maximum capacity of an RTX A6000. We are therefore moderately computationally limited, and exploring a few hundred thousand sequences takes on the order of a few GPU days.

**Latent space Bayesian optimization.** To solve the optimization problem in Equation 1, we utilize recently developed latent space Bayesian optimization techniques that adapt Bayesian optimization from continuous black-box optimization problems to discrete and structured ones [67; 13; 23; 26; 15; 38; 45; 63]. Latent space Bayesian optimization seeks to leverage the representation-learning capabilities of deep generative models, most commonly variational autoencoders (VAEs) [41], to aid in optimization. Briefly, a VAE consists of an encoder \(\Phi(\mathbf{z}\mid\mathbf{y}):\mathcal{Y}\rightarrow\mathcal{P}(\mathcal{Z})\) mapping from amino acid sequences \(\mathcal{Y}\) to a distribution over a continuous latent space \(\mathcal{Z}\), and a decoder \(\Gamma(\mathbf{y}\mid\mathbf{z}):\mathcal{Z}\rightarrow\mathcal{P}(\mathcal{Y})\) that (probabilistically) reverses this process. At a high level, the idea is to perform optimization over the latent space \(\mathcal{Z}\) of the VAE, rather than the discrete and structured space of amino acid sequences \(\mathcal{Y}\). This constrains the optimizer to sequences in \(\mathcal{Y}\) that the VAE can generate, but simplifies the optimization problem considerably in return.
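Concretely, the latent objective composes the decoder \(\Gamma\), the folding model \(\mathcal{F}\), and the structural error \(\mathcal{E}\). A toy sketch of this composition follows; the decoder, folder, and error here are all hypothetical stand-ins for illustration (the real decoder is an autoregressive Transformer and the real folder is ESMFold), not the paper's components:

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def toy_decode(z):
    # Hypothetical stand-in for the decoder Γ: treat each block of 20
    # latent dimensions as logits over the amino acid alphabet and
    # take the argmax.
    logits = np.asarray(z).reshape(-1, len(AMINO_ACIDS))
    return "".join(AMINO_ACIDS[i] for i in logits.argmax(axis=1))

def latent_objective(z, fold, error, x_target):
    # f(z) = E(F(Γ(z)), x+): decode a sequence from the latent code,
    # fold it, and score the folded structure against the target.
    y = toy_decode(z)
    return error(fold(y), x_target)

# Toy stand-ins for F and E so the composition runs end to end.
toy_fold = lambda y: np.array(
    [[float(AMINO_ACIDS.index(c)), 0.0, 0.0] for c in y])
toy_error = lambda x, x_target: float(np.abs(x - x_target).mean())
```

The point of the sketch is only the shape of the pipeline: a continuous latent code goes in, a discrete sequence is produced internally, and a scalar structural error comes out, which is what the GP surrogate models.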
Given a trained VAE, the optimization problem in Equation 1 becomes continuous: \[\mathbf{y}^{+}\approx\Gamma(\mathbf{z}^{+})\quad\text{where}\quad\mathbf{z}^{+ }=\operatorname*{arg\,min}_{\mathbf{z}\in\mathcal{Z}}\mathcal{E}(\mathcal{F}( \Gamma(\mathbf{z})),\mathbf{x}^{+}).\] Here, we abuse notation and use \(\Gamma(\mathbf{z}):=\operatorname*{arg\,max}_{\mathbf{y}}\Gamma(\mathbf{y} \mid\mathbf{z})\) to denote the most likely decoding of the latent vector \(\mathbf{z}\)--note that in this setting, we are generally uninterested in the expected behavior of the objective over the _distribution_ \(\Gamma(\mathbf{y}\mid\mathbf{z})\), because we are ultimately interested in the final sequence \(\mathbf{y}^{+}\). This objective function takes a latent code \(\mathbf{z}\) and generates a sequence \(\mathbf{y}\) via the decoder, which is then folded by \(\mathcal{F}\) and evaluated. Standard Bayesian optimization can now be directly applied to the minimization over \(\mathbf{z}\). We use LOL-BO [45] as our base Bayesian optimization routine, although with larger VAE and surrogate models than used in the original small-molecule setting.

### Model Architectures

**Transformer VAE.** In this work, we pretrain an autoregressive VAE encoder-decoder Transformer architecture [70] with 6 encoder layers and 6 decoder layers, totaling 47 million parameters, on a randomly selected subset of 1.5 million protein sequences with lengths of 100-300 amino acids from Uniref [65]. The encoder maps from amino acid sequences down to a total latent dimensionality of 1024; each amino acid is a separate token. The model was trained using the standard VAE ELBO with the KL divergence term multiplied by a factor of \(10^{-4}\).

**Surrogate model.** In order to support Bayesian optimization with hundreds of thousands of queries, we use sparse variational Gaussian process approximations [27], and in particular the parametric predictive Gaussian process regressor (PPGPR) model of [33].
This surrogate model is trained on pairs of latent codes \(\mathbf{z}\) and corresponding quality observations \(\mathcal{E}(\mathcal{F}(\Gamma(\mathbf{z})),\mathbf{x}^{+})\). Because of the high dimensionality of the latent space and the sometimes poor performance of kernel methods on high-dimensional data, we use a small deep kernel [74] with two fully connected hidden layers of dimensionality 256 to reduce down to 256 dimensions.

### Extensions

**Constrained optimization.** Protein engineering is almost always done under numerous constraints that seek to limit cost, immunogenicity, and other risk factors. Extending Equation 1 to handle one or more constraints enables searching for amino acid sequence solutions that achieve desired design properties, for example high likelihood under a language model of "natural" proteins or high similarity to human-like proteins. We consider optimizing subject to constraints on the output sequence: \[\mathbf{z}^{+}=\operatorname*{arg\,min}_{\mathbf{z}\in\mathcal{Z}}\mathcal{E} (\mathcal{F}(\Gamma(\mathbf{z})),\mathbf{x}^{+})\text{ s.t. }\forall i\,c_{i}(\Gamma(\mathbf{z}))\leq 0,\] where \(c_{i}(\cdot)\) are black-box constraints that might operate directly on the sequence \(\mathbf{y}:=\Gamma(\mathbf{z})\) or even on the computationally determined structure \(\mathcal{F}(\mathbf{y})\). In constrained Bayesian optimization (e.g., [18; 20; 28; 16]), additional surrogate models are trained to model the constraint functions \(c_{i}(\cdot)\), and information from these additional surrogates is incorporated into the acquisition process. The most straightforward adaptation is to use SCBO [16] as the optimization routine, which is the constrained analog of the TuRBO algorithm used by LOL-BO [45].

**Joint training of constraint surrogates.** Maus et al. [45] found that end-to-end joint variational training of the VAE and GP surrogate model significantly improves optimization performance.
Adapting this idea to the constrained setting, we train the constraint surrogate models end-to-end jointly with the objective surrogate and VAE. This involves optimizing the following ELBO, derived for the joint model over the \(k\) GPs involved (one for the objective and one for each of the \(k-1\) constraints): \[\mathcal{L}_{\text{joint}}(\theta_{\Phi},\theta_{\Gamma},\theta_{\text{GP}_{1: k}})=\mathbb{E}_{\Phi(\mathbf{z}|\mathbf{y})}\left[\sum_{i=1}^{k}\mathcal{L}_{ \text{GP}_{i}}\left(\theta_{\text{GP}_{i}},\theta_{\Phi};\mathbf{z},\mathbf{q }\right)\right]+\mathcal{L}_{\text{VAE}}(\theta_{\Phi},\theta_{\Gamma}; \mathbf{y}),\] where \(\mathbf{q}\) denotes the set of acquired structure errors obtained so far during optimization.

**Finding diverse solutions.** A common need in protein engineering is to produce diverse sequence solutions in order to spread the risk of a given design. This is particularly true in therapeutic development, where each sequence carries unique liabilities into drug development. ROBOT [46] is an extension of the BO framework that enforces sequence diversity during optimization. We use edit distance \(\delta(\mathbf{y},\mathbf{y}^{\prime})\) as a notion of diversity between two sequences, and solve the following series of optimization problems: \[\mathbf{y}_{1}^{+} =\operatorname*{arg\,min}_{\mathbf{y}}\mathcal{E}(\mathcal{F}( \mathbf{y}),\mathbf{x}^{+})\] \[\mathbf{y}_{i}^{+} =\operatorname*{arg\,min}_{\mathbf{y}}\mathcal{E}(\mathcal{F}( \mathbf{y}),\mathbf{x}^{+})\text{ s.t. }\delta(\mathbf{y},\mathbf{y}_{j}^{+})\geq\tau\text{ for }j=1,...,i-1.\] This procedure produces a set of low-error sequences \(\mathbf{y}_{1}^{+},...,\mathbf{y}_{i}^{+}\) such that each pair of sequences is separated by an edit distance of at least \(\tau\).

## 4 Experiments

We separate the evaluation of our method, which we call BO-IF, into two phases. First, we evaluate the optimization performance of applying our pipeline to backbone structures.
Second, we evaluate extensions of our approach to the constrained setting and to finding diverse sets of high-scoring sequences. We additionally evaluate the relative computational costs of both approaches.

**Implementation details and hyperparameters.** We implement our pipeline leveraging BoTorch [6] and GPyTorch [19], with code available at [https://github.com/nataliemaus/bo-if](https://github.com/nataliemaus/bo-if). Other than the model architecture details specifically described previously in the paper, all hyperparameters for all Bayesian optimization methods used are set to the defaults used by their respective papers. Each optimization run is initialized with 1,000 sequences sampled from the ESM-IF model.

**Baseline.** Throughout this section, we use the GVP Transformer model of Hsu et al. [29] (also known as ESM-IF) as a recent, "gold standard" approach that (1) still achieves state-of-the-art performance for inverse folding via generation, and (2) maintains publicly available open-source software. We initially evaluated performance with both the low and high temperature settings recommended by the authors (\(T=10^{-6}\) and \(T=1.0\)). We found that repeated sampling with the lower temperature failed to improve TM score beyond the first handful of samples. We therefore report results against the higher temperature value of 1.0. We emphasize that repeated sampling is reasonably standard usage of these models; e.g., it is the recommended approach of Watson et al. [73] for finding an inverse fold to generated structures.

### Model Statistics and Efficiency

We report statistics on pretraining and optimization runtime for both our approach and ESM-IF in Table 1. For an input protein backbone structure during inference, both optimization and sampling were performed using a single RTX A6000 with 48 GB of VRAM. In order to fully utilize GPU resources, both approaches are capable of batch evaluation.
We draw 20 samples from the ESM-IF decoder in batch parallel, and for Bayesian optimization, we leverage the batch acquisition capabilities of Thompson sampling to evaluate batches of 10 samples in parallel. These results suggest that iterative batch optimization using BO and decoding from a large language model have roughly comparable overhead for generating candidate sequences. Finally, we observe that roughly \(83\%\) of the total running time for both methods is spent computationally folding candidate sequences using ESMFold [43].

### Unconditional Inverse Folding of Protein Backbones

Next, we evaluate whether using optimization to make focused, concerted efforts to inverse fold structures leads to improved structural similarity. Ideally, this evaluation should be done on roughly random protein structures that are unseen by both ESM-IF and our pretrained model. To accomplish this, we utilize a recently proposed generative model over protein structures, RFdiffusion [73]. RFdiffusion unconditionally generates backbone structures of protein monomers. Successfully optimizing sequences that fold to generated structures has the additional benefit of suggesting a path forward for a full end-to-end, top-down protein design pipeline, where desirable structures are conditionally generated with RFdiffusion and then inverse folded with BO-IF.

We generate 24 protein backbones using RFdiffusion, each ranging between 100 and 150 amino acids in length. We perform focused evaluations on particular structures, evaluating a total of 150,000 sequences for each backbone, requiring roughly a GPU day per structure for both methods. We believe these samples serve as a challenging and unbiased benchmark for assessing the performance of both conventional inverse folding methods and our optimization-focused approach. As described before, we directly target the (computationally determined) structural similarity between inverse folded sequences and the target structure as an objective.
To measure structural similarity, we computationally fold a sequence and compute similarity to the target structure according to two metrics: (1) TM score, as computed by TM-align [80], and (2) RMSD. Results across all proteins are plotted in Figure 2. Optimization produces sequences whose computationally determined structures have on average **48%** lower structural error as measured by \(1-\text{TM Score}\), and it improves TM score for every structure. This corresponds to a reduction in RMSD of **28%** on average. In Figure 3, we find that ESM-IF and BO produce structures with comparable fold confidence as measured by pLDDT, with most proteins folding reasonably confidently on average. If higher scores are desired, this might be achievable, for example, by using pLDDT as a constraint.

### Optimization with constraints

In this section, we present a case study on leveraging constrained Bayesian optimization. Protein engineering is often subject to diverse sequence constraints--discrete and continuous--to enable effective production, testing, and validation of designed proteins in target applications. As an example constraint, immunogenicity describes the liability of a therapeutic molecule to produce an unwanted immune response in a patient, resulting in rapid drug clearance, adverse events, and limitations of the drug's effectiveness. The humanness of a sequence--how similar it is to sequences found in the human proteome--is inversely correlated with the likelihood of immunogenicity, and is thus a common constraint used in drug development. We design inverse folds for \(9\) protein structures subject to the constraint that the humanness of the resulting sequence is at least \(80\%\). To measure humanness, we fine-tune a classifier from the 150M-parameter ESM2 [44] model using 47,000 human and 64,000 non-human protein sequences downloaded from Uniref [65]. Our classifier achieves a test accuracy of 93%.
Across all structures, we note that only 0.2% of sequences generated using ESM-IF satisfy this constraint while also achieving TM score > 0.8. In Figure 5 **(Left)**, we show results comparing sequence optimization under this constraint to sequences generated from ESM-IF. For generated sequences, we discard sequences that fail to meet the humanness threshold. On average, optimization reduces structural error by 48.364% (standard error \(\pm\) 9.480%) and RMSD by 34.454% (standard error \(\pm\) 7.082%) compared to ESM-IF.

Table 1: Details on model statistics and training

|  | Bayesian Optimization | GVP Transformer |
| --- | --- | --- |
| Model Parameters | 47M | 142M |
| GPUs | 1\(\times\) RTX A6000 | 32\(\times\) RTX 8000 |
| Pretraining Time (GPU days) | 6 | 653 |
| Optimization Runtime (GPU hours @ 150k evaluations) | 50 | 48 |

### Finding Diverse Sequences

We apply ROBOT [46] to seek a diverse set of 5 sequences as described in subsection 3.3. We define two sequences to be diverse if they have a minimum edit distance of 20 (i.e., \(\tau=20\)). Figure 5 **(Right)** shows results comparing TM scores achieved by ROBOT and ESM-IF for multiple target backbones. For each backbone, bar height denotes the average \(1-\)TM score achieved across the 5 sequences found for that structure, and error bars represent the range of TM scores (i.e., min and max). For comparison, we also plot the best sequence found by BO. Taking the average across these mean TM scores, optimization reduces structural error by 48.640% (standard error \(\pm\) 8.003%) and RMSD by 33.617% (standard error \(\pm\) 5.520%) compared to ESM-IF.
In addition to summary statistics, we display an example multiple sequence alignment for the diverse inverse folds found in Figure 4. See the appendix for additional results seeking a diverse set of 10 sequences (rather than only 5), as well as multiple sequence alignments for all diverse sets of inverse folds optimized.

Figure 2: Error in backbone structures computationally folded from inverse folds, as measured by 1-TM score (computed by TM-align) across 24 target protein backbones. On average, optimization reduces structural error by **48%** (standard error \(\pm\) 0.69%). This corresponds to an RMSD reduction of **28%** (standard error \(\pm\) 0.81%) on average.

Figure 3: Despite lower structural error, sequences produced by Bayesian optimization (BO-IF) and ESM-IF display similar fold confidences on all protein backbones, with BO achieving a slightly higher mean pLDDT score of 0.732 compared to 0.710 with ESM-IF.

## 5 Discussion and Limitations

Generative modeling for protein structures is an exciting new technology. However, translating generated structures into physical proteins demands a solution to inverse folding: knowing what a final product should look like is not the same as knowing how to make it. We have demonstrated that Bayesian optimization can be an effective strategy for focused inverse folding of particular structures.

**Limitations.** While there is clearly promise in optimization for focused inverse folding, we clarify a few important limitations of our method. First, we are considering on the order of tens or hundreds of thousands of sequences per structure, rather than one-shot predictions. While our approach yields substantial improvements for particular target structures, considering hundreds of thousands of sequences for each of a large dataset of proteins isn't feasible. Perhaps most crucially, the objective functions we consider in this paper depend directly on computational folding.
If AlphaFold or ESMFold are inaccurate, so are our structures. This is notably in contrast to other metrics like native sequence recovery, where by definition recovering the "known" sequence achieves the correct fold. That is not to say that native sequence recovery is without its own problems: notably, it can only be used for structures for which native sequences are available (i.e., not generated structures), and as a metric it discourages sequence diversity. Nevertheless, our results clearly demonstrate that inverse folding through Bayesian optimization is a compelling and efficient alternative to one-shot prediction in situations where the goal is to develop sequences that best match an individual target structure. Furthermore, leveraging the optimization literature brings with it considerable advantages, such as the ability to handle constraints, multiple objectives, etc.

Figure 4: Example multiple sequence alignment produced by optimization finding 5 diverse inverse folds for an input structure, achieving an average TM score of \(0.95\pm 0.002\). Colors represent diversity in physicochemical properties.

Figure 5: **(Left)** Inverse folding performed subject to the constraint that humanness is at least 0.8 (see text). **(Right)** Finding diverse sets of \(5\) inverse folds with pairwise edit distance \(\geq 20\). Bar heights denote the average value for each structure, with error bands denoting min and max values. The best single result achieved by standard BO-IF is included for comparison. See the appendix for additional results finding larger diverse sets of \(10\) inverse folds. Both plots display a random subset of our generated structures.
2306.01088
Environmental Dependence of Type Ia Supernovae in Low-Redshift Galaxy Clusters
We present an analysis of 102 type Ia supernovae (SNe Ia) in nearby (z < 0.1), x-ray selected galaxy clusters. This is the largest such sample to date and is based on archival data primarily from ZTF and ATLAS. We divide our SNe Ia into an inner cluster sample projected within $r_{500}$ of the cluster center and an outer cluster sample projected between $r_{500}$ and $2\,r_{500}$. We compare these to field samples of SNe Ia at similar redshifts in both quiescent and star-forming host galaxies. Based on SALT3 fits to the light curves, we find that the inner cluster SNe Ia have a higher fraction of fast-evolving objects (SALT3 $x_1 < -1$) than the outer cluster or field quiescent samples. This implies an intrinsically different population of SNe Ia occurs in inner cluster environments, beyond known correlations based on host galaxy alone. Our cluster samples show a strongly bimodal $x_1$ distribution with a fast-evolving component that dominates the inner cluster objects ($\gtrsim$ 75%) but is just a small fraction of SNe Ia in field star-forming galaxies ($\lesssim$ 10%). We do not see strong evidence for variations in the color (SALT3 $c$) distributions among the samples and find only minor differences in SN Ia standardization parameters and Hubble residuals. We suggest that the age of the stellar population drives the observed distributions, with the oldest populations nearly exclusively producing fast-evolving SNe Ia.
Conor Larison, Saurabh W. Jha, Lindsey A. Kwok, Yssavo Camacho-Neves
2023-06-01T19:02:46Z
http://arxiv.org/abs/2306.01088v2
# Environmental Dependence of Type Ia Supernovae in Low-Redshift Galaxy Clusters

Conor Larison†, Saurabh W. Jha, Lindsey A. Kwok, Yssavo Camacho-Neves

† NSF Graduate Research Fellow

###### Abstract

We present an analysis of 102 type Ia supernovae (SNe Ia) in nearby (\(z<0.1\)), x-ray selected galaxy clusters. This is the largest such sample to date and is based on archival data primarily from ZTF and ATLAS. We divide our SNe Ia into an inner cluster sample projected within \(r_{500}\) of the cluster center and an outer cluster sample projected between \(r_{500}\) and \(2\,r_{500}\). We compare these to field samples of SNe Ia at similar redshifts in both quiescent and star-forming host galaxies. Based on SALT3 fits to the light curves, we find that the inner cluster SNe Ia have a higher fraction of fast-evolving objects (SALT3 \(x_{1}<-1\)) than the outer cluster or field quiescent samples. This implies an intrinsically different population of SNe Ia occurs in inner cluster environments, beyond known correlations based on host galaxy alone. Our cluster samples show a strongly bimodal \(x_{1}\) distribution with a fast-evolving component that dominates the inner cluster objects (\(\gtrsim 75\%\)) but is just a small fraction of SNe Ia in field star-forming galaxies (\(\lesssim 10\%\)). We do not see strong evidence for variations in the color (SALT3 \(c\)) distributions among the samples and find only minor differences in SN Ia standardization parameters and Hubble residuals. We suggest that the age of the stellar population drives the observed distributions, with the oldest populations nearly exclusively producing fast-evolving SNe Ia.

Keywords: Type Ia supernovae (1728), Light curves (918), Galaxy clusters (584), Field galaxies (533), Cosmological parameters (339)

## 1 Introduction

Due to their high and standardizable luminosity (Phillips, 1993), type Ia supernovae (SNe Ia) are a key part of the cosmic distance ladder. Measurements of SN Ia distances led to the discovery of the accelerating expansion of the Universe (Riess et al., 1998; Perlmutter et al., 1999) and are used to determine the local value of the Hubble constant (Riess et al., 2009, 2011, 2016, 2022; Burns et al., 2018; Dhawan et al., 2018; Freedman et al., 2019). SNe Ia also contribute to the chemical enrichment of galaxies and are the dominant source of iron-group elements (Nomoto et al., 2013). Despite their great importance, the fundamental astrophysics of SNe Ia, including their progenitor channels and explosion mechanisms, is not well understood. Currently, the only consensus is that SNe Ia result from exploding carbon-oxygen white dwarfs (for reviews, see e.g., Jha et al., 2019; Liu et al., 2023).

The environments of supernovae provide important clues to their astrophysics. For example, the association of core-collapse supernovae with recent star formation points to a massive-star origin. SNe Ia, in contrast, occur in every type of host galaxy, though they are most common in star-forming galaxies. Because star formation is correlated with other galaxy properties, this also means SNe Ia occur more frequently in bluer galaxies, morphologically late-type galaxies, and lower-mass galaxies (van den Bergh, 1990; Mannucci et al., 2005; Sullivan et al., 2006; Brown et al., 2019). Not only is the SN Ia rate higher in certain types of host galaxies; the light curve properties of the SNe Ia are also connected to their environment (Hamuy et al., 1996, 2000; Branch et al., 1996; Sullivan et al., 2006; Rigault et al., 2013). This in turn means that SN Ia environments are linked to their standardization, and may impact the use of SNe Ia as cosmological probes, because host galaxy properties vary with redshift.
The relationship between environment and SN Ia light curve properties also has important implications for progenitor and explosion models. Significant evidence has accumulated for an environmental dependence of SN Ia luminosity, even after light-curve standardization. The first indications of a Hubble residual correlated with host environment were based on the global stellar mass or the star-formation rate of the host galaxy (Kelly et al., 2010; Sullivan et al., 2010; Lampeitl et al., 2010). Often a "mass-step" is now applied in cosmological analyses to correct for this (Sullivan et al., 2011; Betoule et al., 2014; Scolnic et al., 2018; Smith et al., 2020). Correlations with SN Ia Hubble residual have also been found using other host-galaxy environmental attributes, including projected separation from the host nucleus, host-galaxy metallicity, and host-galaxy dust content (D'Andrea et al., 2011; Galbany et al., 2012; Meldorf et al., 2022). In addition to "global" host properties, SN Ia luminosity has also been correlated with "local" measurements of stellar mass, star-formation rate, and specific star-formation rate (sSFR; Rigault et al., 2013; Roman et al., 2018; Jones et al., 2018; Rose et al., 2019; Rigault et al., 2020; Briday et al., 2022).

Of key importance is understanding the causation behind these correlations. The light curve or luminosity of a SN Ia is surely not directly influenced by its host-galaxy stellar mass, for example. Instead, presumably the local or host-galaxy environment is indirectly related to the kinds of white dwarf progenitor systems available to explode. The distributions of metallicity or age of the progenitor population may be the intermediaries that link the environment with the supernova explosion.
There are already indications that the age of the stellar population (distinctly correlated with the host-galaxy properties described above) may be the dominant factor in shaping the kinds of SNe Ia that occur (Rose et al., 2019, 2020; Lee et al., 2020; Kang et al., 2020; Wiseman et al., 2021, 2023). While the physical causal mechanism may not yet be conclusively known, the empirical correlations between SNe Ia and their host-galaxy (or smaller-scale) environments are well established. It is intriguing to ask, then, whether these empirical correlations hold at even larger scales.

Here we revisit the nature of SNe Ia found in clusters of galaxies. Much work has been done on the rate of SNe Ia in cluster host galaxies, i.e., the number of SNe Ia that occur in a galaxy normalized by the galaxy's stellar mass. Early studies of SNe Ia in low-redshift galaxy cluster members show that the SN Ia rate in elliptical hosts is similar to, or perhaps elevated compared to, the rate in the field (Sharon et al., 2007; Mannucci et al., 2008; Dilday et al., 2010; Maoz et al., 2010; Sand et al., 2012). Higher-redshift galaxy cluster studies have also measured the SN Ia rate in clusters and found similar trends (Sharon et al., 2010; Barbary et al., 2012; Freundlich & Maoz, 2021; Toy et al., 2023). The uncertainties in many of these studies have been dominated by small-number statistics, however.

Beyond merely the rate of SNe Ia in galaxy clusters, it is interesting to compare their light-curve properties and standardized luminosities with SNe Ia in the field. Meyers et al. (2012), using Hubble Space Telescope (HST) data of high-redshift (\(z\simeq 1\)) cluster SNe Ia, found no significant differences with field SNe Ia, but with only a small sample size of six cluster SNe Ia with elliptical hosts. Xavier et al.
(2013) found evidence that SNe Ia in galaxy clusters (with a sample size of 48 objects) at intermediate redshift (\(z\simeq 0.1\)-0.5) had faster-evolving light curves than those in the field, even when restricting both samples to passive galaxies. They ascribed this effect to the older average age of cluster passive galaxies compared to field passive galaxies. Recently, Toy et al. (2023) used a larger sample of 70 cluster SNe Ia at redshifts \(z\simeq 0.1\)-0.9 and also found evidence for faster decline rates compared to field SNe Ia (though not specifically restricting to only passive galaxies). This trend appears to continue down to low redshift, but published samples are sparse (Germany et al., 2004).

Our analysis investigates the properties of nearby (\(z<0.1\)) SNe Ia in x-ray selected galaxy clusters. The x-ray selection assures a higher-fidelity cluster sample than the typically optically selected clusters at higher redshift. Our total sample includes 102 cluster SNe Ia and hundreds of field objects for comparison, also improving statistics compared to previous work. The advent of large-area time-domain surveys, e.g., PTF (Law et al., 2009), ASAS-SN (Holoien et al., 2017), ATLAS (Tonry et al., 2018), and ZTF (Bellm et al., 2019), allows us to build a large sample of nearby cluster SNe Ia with multicolor light curves through archival research, rather than requiring a dedicated observing program (e.g., Reiss et al., 1998; Gal-Yam et al., 2008).

## 2 Data and Methods

### Galaxy cluster catalog

For this study, we use the Meta Catalog of X-ray Detected Clusters of Galaxies (MCXC; Piffaretti et al., 2011) to select a sample of 663 rich galaxy clusters within our redshift range of interest, \(z<0.1\). The catalog also includes information about the cluster sizes, x-ray luminosities, and inferred masses.
Specifically, we rely on the catalog \(r_{500}\) measurement, the radius of the cluster within which the mean mass density is 500 times the critical density of the Universe at the cluster redshift. The MCXC \(r_{500}\) values depend on assumptions about cluster scaling relations that are detailed by Piffaretti et al. (2011), and the catalog adopts a flat \(\Lambda\)CDM cosmology with \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{M}=0.3\), and \(\Omega_{\Lambda}=0.7\). We adopt this cosmological model in our analysis for consistency.

### Supernova samples

To build our SN Ia samples, we select SNe projected within \(2r_{500}\) of each cluster, converting the MCXC tabulated \(r_{500}\) from a physical to an angular size using the angular diameter distance appropriate for the cluster redshift. We split the cluster SN Ia sample into an inner cluster sample, for SNe Ia within \(r_{500}\), and an outer cluster sample between \(r_{500}\) and \(2r_{500}\). The inner cluster sample probes the centers of our clusters, typically including the extent of observed x-ray emission, and populated mainly by early-type, quiescent galaxies (Giovanelli & Haynes, 1985). The outer cluster sample includes SNe Ia that extend out to approximately the virial radius of the clusters (Reiprich et al., 2013; Walker et al., 2019). An example of one of the clusters in our sample, hosting two SNe Ia, is shown in Figure 1. We reiterate that our sample division is based on the projected separation; we discuss how we estimate cluster membership and contamination below and in Section 3.1.

We identify our supernova samples by querying the Transient Name Server (for objects discovered after 2016) and the IAU List of Supernovae for older objects. We restrict our sample to SNe that have been spectroscopically classified as regular SNe Ia, and we check the classification by manually inspecting light curves (see Section 2.3).
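The physical-to-angular conversion of \(r_{500}\) described above can be sketched numerically under the adopted flat \(\Lambda\)CDM cosmology. This is a simplified, library-free illustration (in practice one would use a package such as astropy.cosmology):

```python
import math

# Flat ΛCDM parameters adopted from the MCXC catalog.
C_KMS, H0, OM, OL = 299792.458, 70.0, 0.3, 0.7

def angular_diameter_distance(z, n=2048):
    """D_A in Mpc: trapezoidal integral of the comoving distance
    D_C = (c/H0) ∫ dz'/E(z'), then D_A = D_C/(1+z) for flat geometry."""
    zs = [i * z / n for i in range(n + 1)]
    inv_e = [1.0 / math.sqrt(OM * (1 + zz) ** 3 + OL) for zz in zs]
    dc = (C_KMS / H0) * (z / n) * (sum(inv_e) - 0.5 * (inv_e[0] + inv_e[-1]))
    return dc / (1.0 + z)

def r500_angular_size_arcmin(r500_mpc, z):
    """Projected angular radius of r500 on the sky, in arcminutes."""
    theta_rad = r500_mpc / angular_diameter_distance(z)
    return math.degrees(theta_rad) * 60.0
```

For example, at the redshift of the cluster shown in Figure 1 (\(z=0.0424\)), a 1 Mpc \(r_{500}\) subtends roughly 20 arcminutes.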
Footnote 1: [https://www.wis-tns.org/](https://www.wis-tns.org/)

Footnote 2: [http://www.cbat.eps.harvard.edu/lists/Supernovae.html](http://www.cbat.eps.harvard.edu/lists/Supernovae.html)

We visually associate each potential cluster SN Ia with a host galaxy matched against the NASA Extragalactic Database (NED)3 or SIMBAD (Wenger et al., 2000). In some cases the supernova host galaxy is ambiguous, but if there is a large, bright galaxy near the SN position, we take this to be the host (e.g., SN 2008bf is identified with NGC 4055). In our inner cluster sample, there were two supernovae, SN 2020wcj and SN 2020yji, that did not have identifiable hosts, while in our outer cluster sample, there were three such supernovae: SN 2020ags, SN 2020vnr, and SN 2022rdt.

Footnote 3: The NASA/IPAC Extragalactic Database (NED) is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.

Each SN Ia that had an identifiable host was associated with a NED source: either the host galaxy itself, or absent that, a WISE source (Cutri et al., 2021), from which we collated photometry. We adopted the NED host-galaxy spectroscopic redshift if available (the majority of objects), or else we used the redshift from the SN spectrum as reported in the supernova discovery or classification. Following Carr et al. (2022), we adopt redshift uncertainties of \(\sigma_{z}=0.0001\) or \(\sigma_{z}=0.005\) for host-galaxy spectroscopic redshifts or SN spectrum redshifts, respectively. If only a host photometric redshift was available, we adopt \(\sigma_{z}=0.01\). We use these redshifts to verify that each SN host galaxy is a member of its cluster. We follow Xavier et al.
(2013) and calculate the membership probability with \[p=\frac{1}{\sqrt{2\pi\left(\sigma_{\rm SN}^{2}+\sigma_{\rm CL}^{2}\right)}} \int_{-z_{d}}^{+z_{d}}\exp\left[-\frac{(z-[z_{\rm SN}-z_{\rm CL}])^{2}}{2\left(\sigma_{\rm SN}^{2}+\sigma_{\rm CL}^{2}\right)}\right]\,dz \tag{1}\] where \(z_{\rm SN}\) and \(\sigma_{\rm SN}\) are the redshift and redshift uncertainty of the supernova (given by \(\sigma_{z}\) above), \(z_{\rm CL}\) and \(\sigma_{\rm CL}\) are the redshift and redshift uncertainty of the cluster, and \(z_{d}\) is three times the velocity dispersion of the cluster in redshift space. We adopt the cluster redshifts as tabulated in the MCXC catalog and set \(\sigma_{\rm CL}=0\) as this uncertainty is negligible compared to \(\sigma_{\rm SN}\). We use the cluster scaling relation given by Zhang et al. (2011) to map the catalogued \(r_{500}\) to a cluster velocity dispersion that is used to calculate \(z_{d}\). We assume cluster membership for any supernova that yields \(p>0.5\). In order to compare our cluster supernova samples to the field, we construct samples of SNe Ia in quiescent field galaxies and star-forming field galaxies. For our purposes, the "field" includes any \(0.01<z<0.1\) galaxies outside of our rich x-ray clusters: we do not attempt to eliminate galaxies in groups, in poorer clusters (e.g., optically selected), or in otherwise overdense environments. To identify quiescent and star-forming field galaxies, we use the catalog from Chang et al. (2015), which contains star formation rates (SFRs) and stellar masses for around 850,000 galaxies based on Sloan Digital Sky Survey (SDSS) + WISE photometry (York et al., 2000; Wright et al., 2010). We base our quiescent or star-forming classification on Chang et al. (2015), except with slightly stricter criteria4 to avoid the ambiguity of galaxies that lie in the "green valley" of star formation (Salim, 2014).

Figure 1: SDSS optical (color) and ROSAT PSPC x-ray (contours) image of the galaxy cluster MCXC J2310.4+0734 at \(z=0.0424\). The inner white circle corresponds to \(r_{500}=0.73\) Mpc (\(0.24^{\circ}\) radius) and the outer circle is twice that radius, roughly the virial radius of the cluster. North is up and east is to the left. The position of the type-Ia SN 2020acwj, part of our outer cluster sample, is shown in green and the position of SN 2021wy, a fast-declining SN Ia in our inner cluster sample, is shown near the center of the cluster in pink.

Footnote 4: For galaxies with \((r-z)_{\rm rest}<0.625\), we classify those with \((u-r)_{\rm rest}\geq 2.1\) as quiescent, and those with \((u-r)_{\rm rest}\leq 1.9\) as star-forming. For galaxies with \((r-z)_{\rm rest}\geq 0.625\), our quiescent galaxies have \((u-r)_{\rm rest}\geq 1.6\,(r-z)_{\rm rest}+1.1\) and star-forming galaxies have \((u-r)_{\rm rest}\leq 1.6\,(r-z)_{\rm rest}+0.9\). See Figure 2 of Chang et al. (2015).

While we can create large enough field supernova samples even restricting the sky area to the SDSS footprint, our cluster SNe Ia cover the whole sky. For the cluster SN host galaxies with SDSS photometry (just under half of the sample; Chang et al., 2015), we use the same quiescent/star-forming classification as the field galaxies above.5 For the cluster host galaxies without SDSS photometry, we rely on WISE colors alone, classifying such galaxies as star-forming if \(W2-W3=[4.6\mu]-[12\mu]>1.4\), and quiescent otherwise (see Figure 12 of Wright et al., 2010).

Footnote 5: One supernova in our outer cluster sample, SN 2020ny, has a host that falls in the green valley between our SDSS color regions, so we do not include its host in either the cluster quiescent or cluster star-forming samples.

### Supernova photometry

We use archival photometry for our cluster and field supernova samples.
The bulk of these data are drawn from the Zwicky Transient Facility (ZTF; Bellm et al., 2019) and the Asteroid Terrestrial-impact Last Alert System (ATLAS; Tonry et al., 2018). For the ZTF data we used the forced photometry service (Masci et al., 2019) to obtain _gri_ magnitudes6. We also used the ATLAS forced photometry service (Smith et al., 2020; Shingles et al., 2021) to gather \(oc\) (the wide orange and cyan ATLAS passbands) supernova light curves, and we made extensive use of the ALeRCE broker (Forster et al., 2021) to examine light curves and compare photometric data. For supernovae with both ZTF and ATLAS photometry, we confirmed that the SN Ia light curve fits (see Section 2.4) were consistent with those from ZTF data alone. Because we have a large field supernova sample, we restrict it to exclusively use photometry from ZTF, ATLAS, or both.

Footnote 6: ZTF \(i\)-band data are initially proprietary, so we only used the \(i\)-band photometry through mid-2021, publicly released in ZTF DR16.

Our cluster samples included 10 supernovae with PanSTARRS1 (Tonry et al., 2012) photometric data from the Young Supernova Experiment (YSE; Jones et al., 2021) first light-curve data release (Aleo et al., 2022). We included this _gri_ photometry in our analysis (but did not include the \(z\) band). For cluster SNe Ia that predated these surveys (before 2016), we retrieved available Johnson-Cousins _BVRI_ (Bessell, 1990) and SDSS _gri_ (York et al., 2000) photometry from varied sources via the Open Supernova Catalog (Guillochon et al., 2017).

### Supernova light curve fitting

We employ the SALT3 model to fit our SN Ia light curves (Guy et al., 2007; Kenworthy et al., 2021), combined with Tripp (1998) standardization, using the SNCosmo package (Barbary et al., 2016). Recent work has shown that the switch to SALT3 over SALT2 causes negligible difference in cosmological parameter estimation but reduces calibration errors (Taylor et al., 2023).
SALT3 fits a multicolor SN Ia light curve with three parameters: \(x_{0}\), which captures the peak flux in the \(B\) band; \(x_{1}\), which parameterizes the light-curve decline (and rise) rate; and \(c\), which measures the supernova color (corresponding approximately to \(B\)\(-\)\(V\)). A smaller \(x_{1}\) indicates a faster-evolving light curve and a larger \(c\) denotes a redder color. From the SALT3 fits we can define a peak \(B\) magnitude \[m_{B}=-2.5\,\log{(x_{0})}+10.5 \tag{2}\] where by convention \(m_{B}=10.5\) corresponds to \(x_{0}=1\) (Kenworthy et al., 2021). We can then derive a standardized magnitude and distance modulus with a light-curve width and color correction: \[\mu_{\rm obs}=m_{B}+\alpha x_{1}-\beta\,c-M_{B} \tag{3}\] where \(\mu_{\rm obs}\) represents the inferred distance modulus, and \(\alpha\), \(\beta\), and \(M_{B}\) are fit parameters that we describe in Section 3.2. We exclude any SNe that have fewer than five photometric measurements in total. We correct for effects of Milky Way dust extinction in our SALT3 model fits, with an assumed Milky Way \(R_{V}=3.1\) and \(E(B-V)\) values along the line of sight to our SNe from the dust maps of Schlegel et al. (1998), recalibrated in Schlafly & Finkbeiner (2011). We make use of the NED extinction calculator tool through an existing Python script.7

Figure 2: Redshift histogram for SNe in our cluster and field samples. The cluster sample redshift values are the host cluster redshifts, while the field sample redshifts are from the host galaxy or the supernova. The counts for the field samples have been divided by 5 to bring them on the same scale as the smaller cluster samples.

Footnote 7: [https://github.com/mmechtley/ned_extinction_calc](https://github.com/mmechtley/ned_extinction_calc)

To create our final supernova samples, we apply light curve quality and fit parameter cuts.
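Equations (2) and (3) amount to a two-line computation. A minimal sketch (function names are ours; \(\alpha\), \(\beta\), and \(M_{B}\) are left as explicit arguments since they are only determined by the fits of Section 3.2):

```python
import math

def salt3_mB(x0):
    """Eq. (2): peak B magnitude from the SALT3 amplitude x0 (base-10 log)."""
    return -2.5 * math.log10(x0) + 10.5

def tripp_mu(x0, x1, c, alpha, beta, M_B):
    """Eq. (3): Tripp-standardized distance modulus from SALT3 parameters."""
    return salt3_mB(x0) + alpha * x1 - beta * c - M_B
```

For illustration, `tripp_mu(1.0, -2.0, 0.0, 0.137, 2.575, -19.249)` evaluates the distance modulus of a fast-declining, zero-color SN using the inner-cluster best-fit coefficients later reported in Table 2.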
As is typical in cosmological analyses, we require SALT3 fits with \(|x_{1}|<3.0\) and \(|c|<0.3\), and uncertainties \(\sigma(x_{1})<1.0\) and \(\sigma(c)<0.2\). We also require a fit uncertainty on the time of maximum light \(\sigma(t_{0})<0.5\) days. For our cluster samples, we further manually inspect the light curve fits and demand that the light curves have both pre-maximum and post-maximum data.

## 3 Results

### Light-Curve Properties

In our inner cluster sample, we have 54 SNe Ia with adequate light curves and SALT3 parameter values that fall within our cutoff ranges, and our outer cluster sample contains 48 SNe that pass these cuts. Our field quiescent and field star-forming samples have 372 and 405 SNe, respectively, that pass the cuts. Figure 2 shows a histogram of the redshifts for these SN Ia samples. The median redshifts for the field quiescent, field star-forming, inner cluster, and outer cluster samples are 0.062, 0.060, 0.044, and 0.045, respectively. This lower-redshift cluster sample may be a result of the x-ray selection. We explore potential effects of the slightly different redshift distributions below. The cosmic age difference between the field quiescent sample median redshift (\(z=0.062\)) and the inner cluster sample median (\(z=0.044\)) is about 230 Myr for our adopted cosmology. Figure 3 shows the distributions of the SALT3 \(x_{1}\) and \(c\) parameters for our inner cluster and outer cluster samples, compared with the field quiescent and field star-forming samples. The most striking differences are seen in \(x_{1}\). In SALT2 (and SALT3) model training, this light-curve shape parameter is constructed to have zero mean and unit standard deviation (Guy et al., 2007; Kenworthy et al., 2021) across the training set. However, here we see a strong environmental dependence in the \(x_{1}\) distribution. A statistical summary of these distributions is given in the "unimodal" columns of Table 1.
While the field star-forming sample is not far from a mean of zero and standard deviation of one, the other samples are markedly different. Fast-declining (lower \(x_{1}\)) SNe Ia have long been known to preferentially occur in quiescent galaxies (Hamuy et al., 1996, 2000; Branch et al., 1996), and this is borne out in comparing our field quiescent and field star-forming samples. Moreover, fast-declining SNe Ia _dominate_ the inner cluster sample, where the \(x_{1}\) distribution is strongly peaked approximately two standard deviations lower than the mean of the training data. To better understand the environmental dependence in supernova properties, it is useful to control for the host galaxy type. In Figure 4 we limit the cluster samples to quiescent host galaxies only and compare these with the field quiescent sample. Our quiescent inner cluster and outer cluster samples consist of 45 SNe Ia and 29 SNe Ia, respectively. The quiescent outer cluster \(x_{1}\) distribution is similar to the field quiescent one, whereas the quiescent inner cluster distribution is even more strongly peaked with fast-declining SNe. To quantitatively compare these distributions, we employ a two-sample Anderson-Darling (A-D) test, which tests the null hypothesis that two empirical samples are drawn from the same distribution (Pettitt, 1976). Calculating the test statistic between our quiescent inner cluster and outer cluster \(x_{1}\) samples, we find \(p<0.001\), indicating a clear difference in these populations. We similarly find \(p<0.001\) for the quiescent inner cluster and field quiescent samples, but for the quiescent outer cluster and field quiescent samples we do not find evidence for different \(x_{1}\) distributions, \(p>0.25\). This suggests the inner cluster sample is the standout among quiescent host galaxies.
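The A-D comparisons above can be reproduced with `scipy.stats.anderson_ksamp` (whose returned significance level is capped at the 0.001 and 0.25 limits quoted here). As a dependency-free illustration of the same kind of two-sample comparison, the sketch below computes the simpler two-sample Kolmogorov-Smirnov statistic, the maximum distance between the two empirical CDFs; this is a stand-in for illustration, not the A-D statistic used in the paper:

```python
def ks_two_sample(a, b):
    """Two-sample KS statistic: max distance between empirical CDFs."""
    a, b = sorted(a), sorted(b)
    na, nb = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < na and j < nb:
        x = min(a[i], b[j])
        # advance both pointers past all values <= x (handles ties)
        while i < na and a[i] <= x:
            i += 1
        while j < nb and b[j] <= x:
            j += 1
        d = max(d, abs(i / na - j / nb))
    return d
```

Two identical \(x_{1}\) samples give a statistic of 0, while completely separated samples (e.g., all near \(-2\) versus all near \(+0.3\)) give 1.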
\begin{table} \begin{tabular}{|c|c|c c c c|c c c c c|} \hline & & \multicolumn{4}{c|}{unimodal \(x_{1}\)} & \multicolumn{4}{c|}{bimodal \(x_{1}\)} \\ \hline \hline Sample & \(N_{\rm SN}\) & Mean & Std Dev. & Median & MAD & \(f_{1}\) & \(\mu_{1}\) & \(\sigma_{1}\) & \(\mu_{2}\) & \(\sigma_{2}\) \\ \hline \hline inner cluster & 54 & \(-1.49\) & 1.15 & \(-1.82\) & 0.55 & \(0.76^{+0.06}_{-0.06}\) & \(-2.06^{+0.08}_{-0.08}\) & \(0.48^{+0.07}_{-0.06}\) & \(+0.40^{+0.18}_{-0.10}\) & \(0.56^{+0.21}_{-0.14}\) \\ \hline outer cluster & 48 & \(-0.62\) & 1.19 & \(-0.44\) & 1.15 & \(0.40^{+0.08}_{-0.07}\) & \(-1.91^{+0.08}_{-0.08}\) & \(0.32^{+0.08}_{-0.06}\) & \(+0.25^{+0.12}_{-0.12}\) & \(0.60^{+0.12}_{-0.09}\) \\ \hline full cluster & 102 & \(-1.08\) & 1.24 & \(-1.53\) & 0.91 & \(0.59^{+0.05}_{-0.05}\) & \(-2.01^{+0.06}_{-0.06}\) & \(0.43^{+0.05}_{-0.04}\) & \(+0.28^{+0.10}_{-0.11}\) & \(0.59^{+0.09}_{-0.07}\) \\ \hline field quiescent & 372 & \(-0.76\) & 1.13 & \(-0.80\) & 0.92 & \(0.33^{+0.12}_{-0.09}\) & \(-1.93^{+0.18}_{-0.16}\) & \(0.50^{+0.10}_{-0.10}\) & \(-0.20^{+0.21}_{-0.11}\) & \(0.86^{+0.10}_{-0.11}\) \\ \hline field star-forming & 405 & \(+0.19\) & 0.97 & \(+0.27\) & 0.56 & \(0.07^{+0.02}_{-0.02}\) & \(-1.90^{+0.21}_{-0.09}\) & \(0.45^{+0.19}_{-0.12}\) & \(0.35^{+0.06}_{-0.05}\) & \(0.77^{+0.04}_{-0.04}\) \\ \hline inner cluster quiescent & 45 & \(-1.74\) & 0.97 & \(-2.04\) & 0.43 & \(0.85^{+0.05}_{-0.06}\) & \(-2.08^{+0.09}_{-0.09}\) & \(0.49^{+0.08}_{-0.07}\) & \(+0.36^{+0.20}_{-0.19}\) & \(0.45^{+0.29}_{-0.16}\) \\ \hline outer cluster quiescent & 29 & \(-0.77\) & 1.05 & \(-0.52\) & 0.98 & \(0.42^{+0.10}_{-0.09}\) & \(-1.89^{+0.09}_{-0.09}\) & \(0.27^{+0.09}_{-0.06}\) & \(+0.11^{+0.11}_{-0.08}\) & \(0.52^{+0.14}_{-0.09}\) \\ \hline full cluster quiescent & 74 & \(-1.36\) & 1.11 & \(-1.76\) & 0.61 & \(0.68^{+0.05}_{-0.06}\) & \(-2.04^{+0.07}_{-0.07}\) & \(0.44^{+0.06}_{-0.05}\) & \(+0.14^{+0.11}_{-0.09}\) & \(0.53^{+0.12}_{-0.09}\) \\ \hline \end{tabular} 
\end{table} Table 1: Number of supernovae for each sample, as well as the mean, standard deviation, median, and median absolute deviation (MAD) for the SALT3 \(x_{1}\) parameter distributions for each sample (unimodal) and double Gaussian fits (bimodal) to the \(x_{1}\) distribution, with \(f_{1}\) indicating the fraction in the fast-declining population.

Performing a similar analysis for star-forming host galaxies is hampered by small number statistics. Our star-forming inner cluster and outer cluster samples consist of only 7 and 15 SNe Ia, respectively. If we combine these to form a cluster star-forming host sample, we find \(p=0.024\) for the A-D test between the cluster star-forming and field star-forming \(x_{1}\) distributions. There is thus only marginal evidence for a population difference between cluster and field SNe Ia in star-forming hosts. In contrast to the \(x_{1}\) distributions, the right panels of Figure 3 show relatively similar SALT3 \(c\) values across all of our cluster and field samples. A-D tests confirm this impression; we find no evidence for significant population differences in the color distributions of these SNe. Distinct from the field star-forming sample, the \(x_{1}\) distributions for the quiescent hosts (field or cluster) look bimodal. Two-population models for SNe Ia, driven by their \(x_{1}\) distributions, have been explored before (e.g., recently by Wojtak et al. 2023, who fit such a model to a full sample of SNe Ia; see Section 4), but isolating objects in quiescent hosts (and especially our inner cluster sample) brings out the bimodality clearly.

Figure 3: _Top:_ Histograms of the full SALT3 \(x_{1}\) and \(c\) parameter distributions for our cluster SN Ia samples. _Bottom:_ Histograms showing the same distributions as above, but for our field quiescent and field star-forming samples.

Figure 4: Histogram of \(x_{1}\) parameter values for SNe in our inner cluster and outer cluster samples, restricted to quiescent host galaxies. These are compared to the field quiescent host galaxy sample (whose counts are scaled down by 5 to ease comparison).

We investigate a two-population \(x_{1}\) distribution by running a Markov-Chain Monte Carlo (MCMC) fit to a double Gaussian model. The fit parameters are \(\mu_{1}\) and \(\mu_{2}\), the \(x_{1}\) means of the faster- and slower-declining populations, respectively; \(\sigma_{1}\) and \(\sigma_{2}\), the widths of the two populations; and \(f_{1}\), the fraction of the sample in the faster-declining population (so that the fraction of the slower-declining population is \(1-f_{1}\)). The results of these fits are summarized in Table 1. In all samples we find a fast-declining population centered at \(x_{1}\simeq-2\) that is narrower in width8 than a broader, slower-declining population centered at \(x_{1}\simeq+0.3\), with slight variations between samples. There is a strong environmental variation in the fraction of objects in the fast-declining population, from approximately 75% in the inner cluster sample (and 85% if we restrict to quiescent inner cluster hosts) all the way down to just 7% in the field star-forming sample.

Footnote 8: We note that part of this narrower width may be ascribed to the truncation at \(x_{1}>-3\).

We illustrate these results visually in Figure 5. Unlike in Table 1, in Figure 5 we fix the two population Gaussians (\(\mu_{1}\), \(\sigma_{1}\), \(\mu_{2}\), \(\sigma_{2}\)) as fit to the _full cluster_ sample (upper left panel). Then for the other three samples displayed (inner cluster, field quiescent, and field star-forming), we re-fit only \(f_{1}\), to better isolate the changing fraction of fast-declining supernovae. We obtain \(f_{1}\) values of \(75\pm 6\)%, \(43\pm 3\)%, and \(8\pm 2\)% for these three samples, respectively.
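The paper fits the double Gaussian with an MCMC; as a lightweight alternative for point estimates, the sketch below uses expectation-maximization (EM) for a two-component 1-D mixture. This is our own illustration, not the paper's code, with initial guesses placed near the populations reported in Table 1:

```python
import math
import random

def gauss_pdf(x, mu, sig):
    return math.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * math.sqrt(2.0 * math.pi))

def fit_two_gaussians(xs, mu1=-2.0, sig1=0.5, mu2=0.3, sig2=0.6, f1=0.5, iters=200):
    """EM point estimate for a two-component 1-D Gaussian mixture."""
    for _ in range(iters):
        # E-step: responsibility of the fast-declining component for each x1
        r = []
        for x in xs:
            p1 = f1 * gauss_pdf(x, mu1, sig1)
            p2 = (1.0 - f1) * gauss_pdf(x, mu2, sig2)
            r.append(p1 / (p1 + p2))
        # M-step: update fraction, means, and widths
        n1 = sum(r)
        n2 = len(xs) - n1
        f1 = n1 / len(xs)
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / n1
        mu2 = sum((1.0 - ri) * x for ri, x in zip(r, xs)) / n2
        sig1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, xs)) / n1)
        sig2 = math.sqrt(sum((1.0 - ri) * (x - mu2) ** 2 for ri, x in zip(r, xs)) / n2)
    return f1, mu1, sig1, mu2, sig2

# Synthetic check: 75% fast-declining around x1 = -2, 25% slower around +0.3
random.seed(42)
xs = [random.gauss(-2.0, 0.45) for _ in range(300)] + \
     [random.gauss(0.3, 0.6) for _ in range(100)]
f1, mu1, sig1, mu2, sig2 = fit_two_gaussians(xs)
```

On this synthetic, inner-cluster-like sample the EM estimate recovers a fast-declining fraction near 0.75 with means near \(-2\) and \(+0.3\).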
Not only is there a vast difference compared to the field star-forming sample, there is even a nearly 5\(\sigma\) difference in \(f_{1}\) between the inner cluster and field quiescent samples. Clearly, the inner cluster environment produces a different population of SNe Ia than would be predicted for similar host galaxies in the field. Some of the differences between the full double Gaussian fits in Table 1 and the model fixed to the full cluster sample can also be seen in Figure 5. The inner cluster sample fast-declining population is even slightly faster than the full cluster sample. The peaks in the field quiescent data are broader and not as well separated as in the full cluster sample, and the slower-declining peak in the field star-forming sample is also somewhat broader than the corresponding population in the cluster samples. The difference in the \(x_{1}\) distribution between our inner cluster SNe Ia sample and the outer cluster leads us to examine how this varies as a function of the projected distance from the center of the cluster. In Figure 6, we see a clear paucity of slowly-declining SNe Ia near the cluster centers. There is also a hint that the fast-declining population may become slightly slower-declining in the outskirts of the clusters. We note that we only observe the projected separation of the supernova and its host galaxy within the cluster, so some of our inner cluster sample objects may physically be part of our outer cluster sample (or even further out). We use a cluster galaxy number density model (Carlberg et al., 1997) to estimate that up to \(\sim\)28% of our inner cluster sample could be contaminants. For 54 total inner cluster SNe Ia, this means up to \(\sim\)15 could be projected from further out. If those objects follow the outer cluster \(x_{1}\) distribution (Table 1), approximately 60% (9) of those should be from the slowly-declining population, with about 6 in the fast-declining population. 
Subtracting these out of our inner cluster sample would leave \(41-6=35\) fast-decliners (\(x_{1}<-1\)) and just \(13-9=4\) slower-decliners (\(x_{1}>-1\)), corresponding to a projection-corrected inner cluster \(f_{1}\) approaching 90% (35 out of 39)! If we apply a similar projection correction for just the quiescent inner cluster galaxies, we would find that our \(f_{1}\) value would go from \(\sim\)85% to \(\sim\)97%, with only one (estimated projection-corrected) slow decliner! While these would imply an extreme population skew for inner cluster SNe Ia, we can rule out the possibility that _all_ inner cluster objects are fast-decliners: SN 2018bgs is in the brightest cluster galaxy (BCG) and is in the slower-declining population. We further caution that our inner cluster and outer cluster separation is a simplification based on assuming a spherical geometry can adequately describe the clusters.

Figure 5: Bimodal fits to the \(x_{1}\) distributions in different samples. Here we fix the double Gaussian parameters (\(\mu_{1}\), \(\sigma_{1}\), \(\mu_{2}\), \(\sigma_{2}\)) to fit the full cluster sample (upper left). The fast-declining population is shown in coral and the slower-declining population is shown in blue. Using this fixed model, in each of the other panels we fit only \(f_{1}\), the fraction of objects in the fast-declining population, for the inner cluster (upper right), field quiescent (lower left), and field star-forming (lower right) samples.

Though we have constructed a nearby, \(z<0.1\), sample, we can still investigate trends with redshift. In Figure 7 we show the quiescent inner cluster sample \(x_{1}\) distribution as a function of redshift, comparing it to the field quiescent sample. Note that at redshifts \(z<0.06\), there is only one SN with \(x_{1}>0\) in the quiescent inner cluster sample: SN 2008bf. All of the other slower-declining SNe in this sample are at higher redshifts.
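The projection-correction bookkeeping above can be checked directly (the counts are taken from the text; the use of `round` is our convention):

```python
# Projection-correction estimate for the inner cluster sample
n_inner = 54
n_proj = round(0.28 * n_inner)          # up to ~28% interlopers -> 15
n_proj_slow = round(0.60 * n_proj)      # ~60% of interlopers slow-declining -> 9
n_proj_fast = n_proj - n_proj_slow      # -> 6

fast_obs, slow_obs = 41, 13             # observed split at x1 = -1
fast_corr = fast_obs - n_proj_fast      # 41 - 6 = 35
slow_corr = slow_obs - n_proj_slow      # 13 - 9 = 4
f1_corr = fast_corr / (fast_corr + slow_corr)   # 35/39, approaching 90%
```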
Both the inner cluster quiescent sample and the field quiescent sample in Figure 7 show a trend towards larger \(x_{1}\) as redshift increases, even in the faster-declining population. This could be a result of Malmquist bias, as slower-evolving SNe Ia tend to be more luminous before standardization. Such a luminosity bias could not be used to explain the lack of slower-declining SNe Ia at low redshift in the quiescent inner cluster sample, however, as these brighter SNe should be most easily detected, and they are clearly present in the field quiescent sample. ### Standardization and Cosmological Distances We now turn our attention to examining whether these environmental differences among the samples persist through SN Ia standardization and inferred distances for cosmology. As mentioned in Section 2.4, in order to do a cosmological analysis with the Tripp (1998) standardization, we need to fit for the fit parameters \(\alpha\), \(\beta\), and \(M_{B}\). We also fit for \(\sigma_{\rm int}\), a measure of the intrinsic scatter that exists within our SN samples. For our fits, we use an MCMC implemented through the emcee package (Foreman-Mackey et al., 2013). 
\begin{table}
\begin{tabular}{c|c c c c c}
\hline Sample & \(\alpha\) & \(\beta\) & \(M_{B}\) & \(\sigma_{\rm int}\) (mag) & RMS (mag) \\ \hline \hline
inner cluster & \(0.137^{+0.023}_{-0.023}\) & \(2.575^{+0.311}_{-0.307}\) & \(-19.249^{+0.043}_{-0.042}\) & \(0.165^{+0.022}_{-0.019}\) & 0.171 \\ \hline
outer cluster & \(0.132^{+0.025}_{-0.005}\) & \(2.371^{+0.322}_{-0.325}\) & \(-19.331^{+0.035}_{-0.014}\) & \(0.177^{+0.025}_{-0.021}\) & 0.175 \\ \hline
field quiescent & \(0.155^{+0.007}_{-0.007}\) & \(2.131^{+0.077}_{-0.077}\) & \(-19.308^{+0.010}_{-0.010}\) & \(0.134^{+0.006}_{-0.006}\) & 0.142 \\ \hline
field quiescent (\(z<0.06\)) & \(0.162^{+0.014}_{-0.014}\) & \(2.381^{+0.126}_{-0.127}\) & \(-19.285^{+0.022}_{-0.022}\) & \(0.166^{+0.011}_{-0.010}\) & 0.177 \\ \hline
field star-forming & \(0.111^{+0.088}_{-0.008}\) & \(2.567^{+0.076}_{-0.076}\) & \(-19.255^{+0.007}_{-0.007}\) & \(0.127^{+0.005}_{-0.005}\) & 0.134 \\ \hline
full field & \(0.129^{+0.005}_{-0.005}\) & \(2.370^{+0.065}_{-0.056}\) & \(-19.266^{+0.006}_{-0.006}\) & \(0.135^{+0.004}_{-0.004}\) & 0.142 \\ \hline
\end{tabular}
\end{table}

Table 2: Fit parameters obtained through our cosmological MCMC procedure for each sample.

Figure 6: _Top:_ histogram of our full quiescent cluster sample in gray, with the fit bimodal populations overlayed in the coral and blue colors respectively. The vertical dashed lines represent the fit means for each population distribution for this sample. _Bottom:_ the \(x_{1}\) parameter values of our cluster SNe Ia in quiescent hosts as a function of their projected distance from their cluster center. The inner cluster portion of the sample is represented by black diamonds. The outer cluster portion of the sample is represented by green circles. We can see that SNe closer to the center of the cluster tend to have much faster-evolving light curves.
Figure 7: Trends in the \(x_{1}\) parameter values of our field quiescent and quiescent inner cluster samples as a function of redshift. The black and orange points are the binned median values for our quiescent inner cluster and field quiescent samples, respectively. The errors on each point are the standard error for the bin. The points are positioned at the centers of each bin.

Our log-likelihood function, \(\ln\mathcal{L}\), is defined via the relation \[-2\ln\mathcal{L}=\sum_{i}\ln\left(2\pi[\sigma_{\mathrm{obs},i}^{2}+\sigma_{\mathrm{int}}^{2}]\right)+\frac{(\mu_{\mathrm{obs},i}-\mu_{\mathrm{cosmo},i})^{2}}{\sigma_{\mathrm{obs},i}^{2}+\sigma_{\mathrm{int}}^{2}}, \tag{4}\] where \(\mu_{\mathrm{cosmo}}\) is the distance modulus derived from our assumed cosmology, \[\mu_{\mathrm{cosmo}}=5\log\left(d_{L}/\mathrm{Mpc}\right)+25 \tag{5}\] \[d_{L}=\frac{c(1+z)}{H_{0}}\int_{0}^{z}\frac{dz^{\prime}}{\sqrt{\Omega_{M}(1+z^{\prime})^{3}+\Omega_{\Lambda}}}, \tag{6}\] and \(\sigma_{\mathrm{obs}}\) is the distance modulus uncertainty for each SN. This uncertainty comprises the variances and covariances of the SALT3 fit parameters, redshift uncertainty, and a negligible contribution from lensing effects (given our low-redshift sample; Jonsson et al., 2010). For our cluster samples, we use the cluster redshifts as the cosmological redshifts, converted to the cosmic microwave background (CMB) frame. We make no correction for cluster peculiar velocities and include a 300 km s\({}^{-1}\) peculiar velocity contribution to the redshift uncertainty (Legel et al., 2018), and further restrict the sample to \(z>0.01\). For our field samples, we convert host redshifts to the CMB frame and also correct for peculiar velocities, following Peterson et al. (2022) and Carr et al. (2022), and using the velocity fields of Carrick et al. (2015) and Said et al. (2020). We assume a peculiar velocity uncertainty of 150 km s\({}^{-1}\) for our field objects.
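Equations (4)-(6) can be sketched numerically; here `residuals` stands for \(\mu_{\mathrm{obs},i}-\mu_{\mathrm{cosmo},i}\), the integral in Eq. (6) is done by trapezoidal quadrature, and the function names are ours:

```python
import math

C_KMS, H0, OM, OL = 299792.458, 70.0, 0.3, 0.7  # assumed cosmology (Section 2.1)

def mu_cosmo(z, n=2000):
    """Eqs. (5)-(6): distance modulus in flat LCDM."""
    E = lambda zp: math.sqrt(OM * (1.0 + zp) ** 3 + OL)
    dz = z / n
    integral = dz * (0.5 / E(0.0) + 0.5 / E(z) + sum(1.0 / E(i * dz) for i in range(1, n)))
    d_l = (C_KMS / H0) * (1.0 + z) * integral    # luminosity distance [Mpc]
    return 5.0 * math.log10(d_l) + 25.0

def neg2lnL(residuals, sigmas, sigma_int):
    """Eq. (4): -2 ln L, summing the normalization and chi-square terms."""
    return sum(
        math.log(2.0 * math.pi * (s * s + sigma_int * sigma_int))
        + r * r / (s * s + sigma_int * sigma_int)
        for r, s in zip(residuals, sigmas)
    )
```

An MCMC sampler such as emcee then varies \(\alpha\), \(\beta\), \(M_{B}\), and \(\sigma_{\rm int}\) (which enter through the residuals and the denominator) to minimize this quantity.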
For the fit parameters, we adopt uniform priors on \(\alpha\), \(\beta\), and \(M_{B}\), and a logarithmic prior on \(\sigma_{\mathrm{int}}\) with \(p\propto 1/\sigma_{\mathrm{int}}\). We iterate our fit twice, removing \(2\sigma\) outliers in Hubble residual (\(\mu_{\mathrm{obs}}-\mu_{\mathrm{cosmo}}\)) after the first pass and rerunning to obtain our final values. The results of this analysis for all of our samples are summarized in Table 2. In Figure 8, we show the corner plot for the inner cluster and outer cluster distributions from our MCMC analysis. The results between the two samples are largely consistent, though there is a hint of a \(1.5\sigma\) offset in \(M_{B}\): \(0.082\pm 0.055\) mag. The inner cluster sample and, to a lesser extent, the outer cluster sample also show covariance between \(M_{B}\) and \(\alpha\) that can largely be ascribed to the \(x_{1}\) distributions in these samples. For the inner cluster sample especially, the average \(x_{1}\) is far from the \(x_{1}=0\) that defines \(M_{B}\), inducing a correlation with the slope \(\alpha\).

Figure 8: Corner plot for our inner cluster (black) and outer cluster (green) sample fit parameters. The best fit values and errors are on top of each corresponding column, with the inner cluster sample values on top and the outer cluster values below them.

In Figure 9, we show the inferred distances to our cluster SNe Ia compared to the assumed cosmological model. The SN distance moduli and their uncertainties depend upon the fit parameters (\(\alpha\), \(\beta\), \(M_{B}\), and \(\sigma_{\rm int}\)) and the individual SN light-curve parameters. The redshifts are taken to be the CMB-frame cluster redshifts. Though the inner cluster and outer cluster samples have different light curve properties, there are not major differences in the inferred distances. Both cluster samples give residual RMS of approximately 0.17 mag, matching, for example, the RMS seen in a ZTF sample (Dhawan et al., 2022).

Figure 9: Inferred distances for the inner cluster and outer cluster samples, adopting best-fit values for the parameters \(\alpha\), \(\beta\), \(M_{B}\), and \(\sigma_{\mathrm{int}}\) for each sample separately. The blue curve shows the predicted distance moduli from our assumed cosmology. The bottom panel shows the Hubble residuals, \(\mu_{\mathrm{obs}}-\mu_{\mathrm{cosmo}}\). The points represent the samples after the \(2\sigma\) outlier removal.

Comparing our cosmological fits from the cluster samples to the field samples, we see in Table 2 that the field sample RMS is slightly lower than that of the cluster SNe Ia, with RMS of 0.142 mag for the field quiescent sample and 0.134 mag for the field star-forming sample. Based on Figure 7, we noted the possibility of Malmquist bias affecting the field quiescent sample at \(z>0.06\). If we restrict the field quiescent sample to \(z<0.06\), Table 2 shows a higher RMS residual of 0.177 mag, comparable to the cluster samples. The small sizes of our cluster samples mean that the best-fit parameters \(\alpha\), \(\beta\), and \(M_{B}\) are uncertain enough to be consistent with both the field quiescent and field star-forming samples. However, the best-fit values diverge between the field quiescent and star-forming samples themselves, with \(\sim\)3\(\sigma\) differences for each parameter. The field host star-formation rate is highly correlated with host stellar mass. In Figure 10 we show the Hubble residual for the field samples as a function of stellar mass, using fit parameters from the combined "full field" sample tabulated in Table 2, and color-coding the galaxies as either quiescent or star-forming. Dividing the sample at a stellar mass \(\log(M_{\star}/M_{\odot})=10.5\), we see that nearly all of the field quiescent hosts are in the higher-mass bin.
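Comparing Hubble residuals between two such bins reduces to a difference of inverse-variance-weighted means. A minimal sketch with hypothetical inputs (helper names are ours):

```python
import math

def weighted_mean(vals, errs):
    """Inverse-variance-weighted mean and its uncertainty."""
    w = [1.0 / e ** 2 for e in errs]
    mean = sum(wi * v for wi, v in zip(w, vals)) / sum(w)
    return mean, math.sqrt(1.0 / sum(w))

def residual_step(res_a, err_a, res_b, err_b):
    """Step between the weighted-mean Hubble residuals of two bins."""
    ma, sa = weighted_mean(res_a, err_a)
    mb, sb = weighted_mean(res_b, err_b)
    return ma - mb, math.sqrt(sa ** 2 + sb ** 2)
```

Feeding in the low-mass and high-mass (or star-forming and quiescent) residual lists with their per-SN uncertainties yields a step and its error of the kind quoted below.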
The differences in the population parameters \(\alpha\), \(\beta\), and \(M_{B}\) manifest themselves as a Hubble residual offset. We recover a mass step of \(0.050\pm 0.011\) mag, consistent with other low-redshift measurements (Betoule et al., 2014; Jones et al., 2018), including recent investigations using SALT3 (Jones et al., 2022). Alternatively, we could derive a "star-formation step" of \(0.051\pm 0.011\) mag between the field quiescent and star-forming host SNe Ia in Figure 10. If we allow for different \(\alpha\), \(\beta\), and \(\sigma_{\rm int}\) for these two samples and marginalize over them, the star-formation step is \(0.053\pm 0.012\) mag, given by the \(M_{B}\) offset between the field quiescent and field star-forming galaxies in Table 2. It is reassuring that these slightly different approaches give a robust estimate of the step. Figure 10 also shows the weighted mean Hubble residual of our inner cluster sample with a yellow star. Unfortunately we do not have the requisite data to estimate stellar masses for all of our cluster host galaxies, so for display we assume the mean to be the same as the field quiescent sample. Interestingly, the inner cluster sample weighted mean Hubble residual is more similar to the field star-forming sample, and differs from the field quiescent sample by \(0.061\pm 0.023\) mag, a 2.7\(\sigma\) offset.

## 4 Discussion

We have identified clear differences between samples of low-redshift SNe Ia in cluster and field environments. It is important to interrogate to what extent our sample selection affects our results. For example, we have only included "normal" SNe Ia, uniformly excluding any objects that were spectroscopically classified as 1991bg-like or 1991T/99aa-like from all samples. These would contribute to the fast-declining and slow-declining populations, respectively, and are further correlated with quiescent and star-forming environments.
SN Ia classifiers do not always provide this level of granularity; the quality and phase of the classification spectrum can affect whether a subtype designation can be determined, and there is a continuum between normal SNe Ia and these subtypes. Nevertheless, investigating the objects excluded by this selection, we find only one or two objects of each subclass for our cluster sample. Including them would not significantly alter our conclusions. Similarly, in our analysis we also uniformly excluded objects with \(|x_{1}|>3\), out of range of the SALT3 model. The numbers of objects rejected with \(x_{1}<-3\) or \(x_{1}>+3\) are 3/0, 0/0, 21/7, and 0/5, for our inner cluster, outer cluster, field quiescent, and field star-forming samples, respectively. If we assume the too-fast-declining objects are part of the fast-declining population (and conversely for the too-slow objects), our estimates of the fraction in the fast-declining population \(f_{1}\) (see Figure 5) would change by +1.2%, 0.0%, +3.0%, and \(-0.2\)% for those four samples. Standardization of fast-declining objects, especially relevant to the inner cluster sample, may benefit from using other light-curve fitting tools like MLCS2k2 or SNooPy (Jha et al., 2007; Burns et al., 2011, 2014) that can better handle faster-evolving SNe Ia.

Figure 10: Hubble residuals for our field quiescent and star-forming host samples versus host-galaxy stellar mass. The green line represents our adopted dividing line of \(10^{10.5}\)\(M_{\odot}\) for the low-mass and high-mass samples. The black points are the weighted average of the mass bins, while the gold star with the black outline represents the weighted average of our full inner cluster sample, assuming the same average host-galaxy stellar mass as the field quiescent sample. Outliers beyond 2\(\sigma\) have been removed from the sample.
The \(x_{1}\) distributions of the inner cluster and outer cluster samples are clearly bimodal (Figure 5), and this bimodality is also strongly suggested in the field quiescent population. Recently, Wojtak et al. (2023) used a hierarchical Bayesian model to similarly identify two populations of SNe Ia in the parameter space of (\(x_{1},c,m_{B}-\mu\)). As in our analysis, they find the greatest separation between the two populations in the \(x_{1}\) distributions and also note the correlation with host-galaxy properties. In their model the two populations furthermore have slightly different color (\(c\)) distributions, interpreted as arising from different intrinsic colors and dust reddening. We do not find conclusive evidence for differences in the \(c\) distributions among our samples (Figure 3), but this should be explored further, especially as our cluster (and even field quiescent) samples bring the two populations into much sharper relief compared to a full sample covering all environments. Our field quiescent \(x_{1}\) distribution is consistent with other nearby samples of SNe Ia in quiescent host galaxies (Rigault et al., 2013; Kim et al., 2019), showing similar bimodality. Measurements at higher redshift tend to have more unimodal \(x_{1}\) distributions (Xavier et al., 2013; Chen et al., 2022), typically with a higher mean \(x_{1}\) than we find here (though perhaps excepting Lampeitl et al., 2010). This could be a result of redshift evolution in quiescent galaxies and the SNe Ia they host, but Malmquist bias may also be playing a role. Comparing our cluster SNe Ia samples to those at higher redshift (Xavier et al., 2013; Toy et al., 2023), we confirm the tendency of the cluster SNe Ia to be faster evolving than their field counterparts. We also confirm with higher statistics the suggestions by Xavier et al. (2013) that 1. SNe Ia closer to the cluster center have a higher fraction of fast-declining objects than farther out, 2. 
cluster passive galaxies have a higher fraction of faster-declining objects than field passive galaxies, and 3. the fast-declining cluster SNe Ia are slightly more extreme (even lower \(x_{1}\)) than field quiescent SNe Ia (Table 1; Figure 5). There are a few possibilities why SNe Ia from inner cluster host galaxies may have different properties than SNe Ia in the field, even restricting the samples to quiescent galaxies only. We note that whatever the cause, it must be intrinsic to the supernovae. Extrinsic factors like host-galaxy dust may affect the brightness or color of the SNe Ia, but cannot alter the light curve shape (\(x_{1}\)) distribution to the large extent we see. Metallicity has been suggested as a factor in SN Ia variation (D'Andrea et al., 2011; Childress et al., 2013), and the deep gravitational potential well at the centers of galaxy clusters should retain metals better than field quiescent galaxies. However, studies of low-redshift cluster galaxies show they are only slightly more metal rich (\(\lesssim\) 0.05 dex) than field counterparts (Ellison et al., 2009; Lara-Lopez et al., 2022). Observations of the intracluster medium do show a metal enhancement near low-\(z\) cluster centers (Lovisari and Reiprich, 2019), but the metals escape the galaxies (and subsequent generations of stars) similarly to field quiescent galaxies. It is unlikely, then, that differences in progenitor stellar metallicities are driving the differences seen in our cluster SNe Ia. The most likely explanation for the properties of the cluster SNe Ia is the age of the stellar population from which they arise. Quiescent massive galaxies host fast-declining SNe Ia, and these galaxies by definition will have preferentially older stars. 
A correlation between low \(x_{1}\) and mean stellar age is expected and observed in field samples (Gupta et al., 2011; Kang et al., 2020) and age is implicated as the chief driver behind supernova standardization differences with host-galaxy properties (recently in, e.g., Briday et al., 2022; Lee et al., 2022; Wang et al., 2023; Wiseman et al., 2023). Turning to galaxy clusters, Xavier et al. (2013) analyzed the ages of cluster SN Ia host galaxies and found that these hosts were older on average than field host galaxies. This is in accord with results that early type galaxies in nearby galaxy clusters are older than similar early type galaxies in the field by 1-2 Gyr (van Dokkum and Stanford, 2003; Thomas et al., 2005; Renzini, 2006), an effect that has also been seen at higher redshifts (Webb et al., 2020). It is further intriguing that the strong shift to a fast-declining population in clusters is seen most clearly in our nearby sample, with a less pronounced trend at high redshift (Toy et al., 2023). This suggests that the fast-evolving population is tracing the oldest SN Ia progenitors. The bimodality in the \(x_{1}\) distribution may be a hint that there is a qualitative difference between objects in the fast-evolving population and others. A more gradual evolution in the progenitor population might be more compatible with a gradual shift in a unimodal \(x_{1}\) distribution, but that is not what we observe. Given the wide range of possible SN Ia progenitors and explosion mechanisms, it is enticing to speculate the \(x_{1}\) bimodality is a signal of the emergence of a different SN Ia progenitor scenario in the oldest stellar populations. In a single-degenerate model with a Chandrasekhar-mass C/O white dwarf, for instance, the oldest SN Ia have red giant companions (see Maoz et al., 2014, for a review). 
Conversely, the delay times in typical double-degenerate SN Ia models reflect the initial separation distribution for binary white dwarfs; if this is a power law as conventionally assumed, the qualitatively different behavior we observe for the oldest SN Ia might be unexpected. Our results strengthen the case that older stellar populations produce atypical SNe Ia. The bimodality of the \(x_{1}\) distribution also suggests that caution is warranted in assuming SNe Ia from old populations (in passive galaxies, for example) are continuously connected to those from younger populations. Though we do not find evidence for large differences in SN Ia standardized luminosity that could depend on age (especially in our cluster objects; Figure 10), deriving age-dependent corrections from a passive galaxy sample (or potentially disregarding cluster versus field distinctions) may lead to results that are not applicable to the majority of SNe Ia in star-forming galaxies or otherwise younger environments (Kim et al., 2019; Rose et al., 2019, 2020; Kang et al., 2020; Lee et al., 2020, 2022; Murakami et al., 2021; Wang et al., 2023).

## 5 Summary and Conclusions

Using archival data, we have constructed the largest sample to date of SNe Ia that occur within rich, nearby (\(z<0.1\)), X-ray-selected clusters of galaxies. We divide them into inner cluster (projected \(r/r_{500}<1\)) and outer cluster (\(1<r/r_{500}<2\)) samples and compare these to samples of SNe Ia in field quiescent and star-forming galaxies. With SALT3 light-curve fits to archival optical photometry, the cluster samples show a strongly bimodal distribution in light curve shape (SALT3 \(x_{1}\)) and we find a significant difference in the population of fast-evolving (low \(x_{1}<-1\)) SNe Ia in the clusters compared to field galaxies. Our inner cluster sample contains a much higher fraction of fast-evolving objects compared to the outer cluster sample or even a sample of field quiescent galaxies.
These in turn have a higher fast-evolving fraction than field star-forming galaxies. We find no strong evidence of differences in the color (SALT3 \(c\)) distribution between the samples, and relatively small differences in standardization parameters (\(\alpha,\beta,M_{B},\sigma_{\rm int}\)) and standardized luminosities (Hubble residual). A key takeaway is that environmental correlations in SN Ia properties extend beyond galactic scales: the inner cluster sample of SNe Ia is intrinsically different from outer cluster objects and this difference persists even when comparing inner cluster quiescent host galaxies with outer cluster or field quiescent hosts. We suggest that the age of the stellar population is the more direct explanatory cause of these results, with the oldest stellar populations producing almost exclusively fast-evolving SNe Ia. Future work can better clarify the galactic and local differences between low-redshift inner cluster SNe Ia and other samples. Direct measurements and comparison of stellar ages (and perhaps metallicities) at the positions of these inner cluster supernovae and for their host galaxies should yield insight. We have shown that large samples of cluster SNe Ia provide a unique window into SN Ia populations and encourage further such observations, including extending to higher redshift. Upcoming large sky-area surveys, like the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST), will be transformative, allowing an orders-of-magnitude increase in sample size and higher-significance determinations of sample differences. Moreover, a continued focus on nearby cluster SNe Ia will be important, not only because these can be the best studied, but also because they crucially come from the oldest populations. Such objects will likely be the key to unveiling the causal mechanism at work (e.g., different progenitor scenarios).
While we do not see strong trends affecting Hubble residuals in our cluster samples, our results nevertheless have implications for SN Ia cosmology. Our cluster SNe Ia show somewhat higher scatter on the Hubble diagram than the field star-forming sample (Table 2), so excluding these (few in number) cluster objects would slightly improve cosmological samples. More worrisome are systematic uncertainties, especially if older stellar populations systematically produce different SNe Ia. The oldest supernovae at any redshift are those with delay times approaching the age of the Universe at that redshift, a clearly evolving quantity. Conversely, the youngest SNe Ia should have similar ages at all redshifts. Isolating these supernovae, by restricting samples to star-forming host galaxies, for instance, may prove a helpful strategy to reduce both statistical and systematic uncertainties for cosmology. ## Acknowledgements We thank Andrew Baker, Yu-Yen Chang, Ryan Foley, and Jack Hughes for helpful discussions. We are grateful to Erik Peterson for help with the peculiar velocity maps used in our analysis. C.L. acknowledges support from the National Science Foundation Graduate Research Fellowship under grant No. DGE-2233066. S.W.J. is grateful for support of ground-based supernova cosmology research at Rutgers University through DOE award DE-SC0010008. L.A.K. acknowledges support by NASA FINESST fellowship 80NSSC22K1599. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. The ZTF forced-photometry service was funded under the Heising-Simons Foundation grant #12540303 (PI: Graham). 
Jupyter (Beg et al., 2021), Astropy (Astropy Collaboration et al., 2013, 2018), Matplotlib (Hunter, 2007), NumPy (Harris et al., 2020), pandas (McKinney, 2010; Pandas development team, 2020), SciPy (Virtanen et al., 2020), emcee (Foreman-Mackey et al., 2013), corner (Foreman-Mackey, 2016)
2305.11913
Machine learning for phase-resolved reconstruction of nonlinear ocean wave surface elevations from sparse remote sensing data
Accurate short-term predictions of phase-resolved water wave conditions are crucial for decision-making in ocean engineering. However, the initialization of remote-sensing-based wave prediction models first requires a reconstruction of wave surfaces from sparse measurements like radar. Existing reconstruction methods either rely on computationally intensive optimization procedures or simplistic modelling assumptions that compromise the real-time capability or accuracy of the subsequent prediction process. We therefore address these issues by proposing a novel approach for phase-resolved wave surface reconstruction using neural networks based on the U-Net and Fourier neural operator (FNO) architectures. Our approach utilizes synthetic yet highly realistic training data on uniform one-dimensional grids, that is generated by the high-order spectral method for wave simulation and a geometric radar modelling approach. The investigation reveals that both models deliver accurate wave reconstruction results and show good generalization for different sea states when trained with spatio-temporal radar data containing multiple historic radar snapshots in each input. Notably, the FNO demonstrates superior performance in handling the data structure imposed by wave physics due to its global approach to learn the mapping between input and output in Fourier space.
Svenja Ehlers, Marco Klein, Alexander Heinlein, Mathies Wedler, Nicolas Desmars, Norbert Hoffmann, Merten Stender
2023-05-18T12:30:26Z
http://arxiv.org/abs/2305.11913v2
Machine learning for phase-resolved reconstruction of nonlinear ocean wave surface elevations from sparse remote sensing data

###### Abstract

Accurate short-term prediction of phase-resolved water wave conditions is crucial for decision-making in ocean engineering. However, the initialization of remote-sensing-based wave prediction models first requires a reconstruction of wave surfaces from sparse measurements like radar. Existing reconstruction methods either rely on computationally intensive optimization procedures or simplistic modeling assumptions that compromise real-time capability or accuracy of the entire prediction process. We therefore address these issues by proposing a novel approach for phase-resolved wave surface reconstruction using neural networks based on the U-Net and Fourier neural operator (FNO) architectures. Our approach utilizes synthetic yet highly realistic training data on uniform one-dimensional grids, that is generated by the high-order spectral method for wave simulation and a geometric radar modeling approach. The investigation reveals that both models deliver accurate wave reconstruction results and show good generalization for different sea states when trained with spatio-temporal radar data containing multiple historic radar snapshots in each input. Notably, the FNO-based network performs better in handling the data structure imposed by wave physics due to its global approach to learn the mapping between input and desired output in Fourier space.

keywords: deep operator learning, Fourier neural operator, nonlinear ocean waves, phase-resolved surface reconstruction, X-band radar images, radar inversion

+ Footnote †: journal: Ocean Engineering

## 1 Introduction

Offshore installations and vessels are strongly impacted by the dynamics of the surrounding ocean waves. Thus, accurate predictions of future wave conditions are desirable for enhancing their safe and efficient operation.
For this purpose, several numerical wave prediction methods have been developed, involving two fundamental steps: the _assimilation_ of initial wave conditions from wave measurement data, followed by the _forecast_ of future wave evolution. While one line of research focuses on predicting simplified phase-averaged wave quantities based on statistical parameters, marine applications such as wind turbine installation, helicopter landings, or controlling wave energy converters require phase-resolved spatio-temporal wave information \(\eta(x,t)\) to identify periods of low wave conditions or enable extreme event warning. The X-band radar is a remote sensing device that can obtain such phase-resolved wave information. However, the radar backscatter is affected by the geometrical mechanism of tilt and shadowing modulation, creating a nonlinear and sparse relationship between radar measurement intensities \(\xi(x,t)\) and the actual ocean wave surface elevation \(\eta(x,t)\). This makes a reconstruction of wave information from radar information necessary in the assimilation step, which is also referred to as _radar inversion_ and is graphically exemplified in Figure 1. Contemporary phase-resolved wave prediction methods face a trade-off between accuracy and real-time capability. To achieve computationally efficient predictions, linear wave theory (LWT) is commonly employed during the forecast step (cf. Morris et al., 1998; Naaijen and Wijaya, 2014; Hilmer and Thornhill, 2015), along with prior spectral- or texture-analysis-based reconstruction of initial wave conditions from radar data (Borge et al., 2004; Dankert and Rosenthal, 2004). However, these reconstruction methods necessitate additional calibration by wave buoys or rely on simplified assumptions concerning the radar backscatter. 
Furthermore, the accuracy of the linear approach decreases remarkably for larger temporal horizons of prediction and increasing wave steepness (Lunser et al., 2022), necessitating wave prediction using nonlinear wave models, especially for capturing safety-critical events such as rogue waves (Ducrozet et al., 2007; Kharif et al., 2009). Comparative studies on phase-resolved nonlinear ocean wave prediction have demonstrated that the high-order spectral (HOS) method, introduced by West et al. (1987) and Dommermuth and Yue (1987), provides the best forecast accuracy (Klein et al., 2020). While the HOS forecast step itself is also numerically efficient, the assimilation step currently represents the weakest part of the entire process (Kollisch et al., 2018): the inversion of initial conditions relies on an optimization procedure of the wave model parameters for the subsequent forecast (Wu, 2004; Blondel-Couprie, 2009), which decreases the possible forecast horizon of the entire prediction and has so far hindered real-time capability (Desmars, 2020). Even though the alternative HOS inversion proposed by Kollisch et al. (2018) is able to improve the real-time capability, this method instead assumes an unrealistic radar snapshot data rate \(\Delta t_{\rm r}\), making it unsuitable for real-world applications (Desmars, 2020). The aforementioned shortcomings of conventional ocean wave reconstruction and forecasting methods have motivated the exploration of alternatives based on machine learning (ML) techniques. For instance, ML methods are able to forecast simple phase-averaged wave quantities such as significant wave height \(H_{\rm s}\), peak period \(T_{\rm p}\) or mean wave direction (cf. Deo et al., 2001; Asma et al., 2012; James et al., 2018; Wu et al., 2020; Yevnin and Toledo, 2022).
Recent advancements have also allowed for the more complex task of forecasting the spatio-temporal evolution of phase-resolved wave fields, achieved by training multilayer perceptrons (MLPs) (Desouky and Abdelkhalik, 2019; Law et al., 2020; Duan et al., 2020; Zhang et al., 2022), recurrent neural networks (RNNs) (Kagemoto, 2020; Mohaghegh et al., 2021; Liu et al., 2022), or convolutional neural networks (CNNs) (Klein et al., 2022; Wedler et al., 2023) on synthetic or experimental one-dimensional elevation data. However, these studies presuppose that either temporal sequences of wave elevations can be solely measured at a single point in space by buoys \(\eta(x=x_{\rm p},t)\) or snapshots of initial wave conditions are available throughout the entire space domain \(\eta(x,t=t_{\rm s})\). In practice, neither of these assumptions is feasible due to the lack of directional wave information of single-point measurements and the fact that the acquisition of spatial snapshots using remote sensing systems such as radars leads to sparse and unscaled observations \(\xi(x,t=t_{\rm s})\), requiring a reconstruction of wave surface elevations first.

Figure 1: Graphical illustration of the phase-resolved reconstruction task of ocean wave surfaces \(\eta\) from sparse radar intensity surfaces \(\xi\) for the case of long-crested waves travelling in one spatial dimension. The radar measurement is a snapshot acquired at time instant \(t_{\rm s}\), which is utilized for reconstructing the wave surface elevation at the same time instant. This reconstruction serves as the initial condition for forecasting the further wave evolution at times \(t>t_{\rm s}\).

Consequently, it would be advantageous to employ ML methods also for the phase-resolved reconstruction of wave elevations \(\eta(x,t)\) from X-band radar data \(\xi(x,t)\). However, as far as the authors are aware, this topic has not yet been addressed.
Prior studies have solely focused on reconstructing phase-averaged statistical parameters of the prevailing sea state from radar data. For instance, Vicen-Bueno et al. (2012) and Salcedo-Sanz et al. (2015) improved the estimation of \(H_{\mathrm{s}}\) by extracting scalar features from sequences of radar images \(\xi(x,t)\) in a preprocessing step, which in turn were employed to train MLPs and support vector regression models. In contrast, Yang et al. (2021) extracted features from each of the consecutive radar images itself for improved \(H_{s}\) estimation at the current time instant. While these methods rely on handcrafted features acquired during a preprocessing step, end-to-end approaches that automatically extract important features from their input have also been proposed. For instance, Duan et al. (2020) and Chen and Huang (2022) used CNN-based methods to estimate \(H_{\mathrm{s}}\) and \(T_{\mathrm{p}}\) from radar images. Although there seems to be no relevant investigation into the ML-based reconstruction of phase-resolved wave surfaces from sparse X-Band radar data, we hypothesize that ML offers a valuable alternative for the radar inversion task (Hypothesis 1), as it shares many similarities to typical inverse problems in imaging (Bertero et al., 2022; Ongie et al., 2020) such as inpainting (Pathak et al., 2016) and restoration (Zhang et al., 2017). Two neural network architectures, with network components involving either a local or global approach of data processing, are investigated in detail for their performance in our task. Specifically, we will adapt the U-Net proposed by Ronneberger et al. (2015), a fully convolutional neural network that employs a mapping approach in Euclidean space, and the Fourier neural operator (FNO) proposed by Li et al. (2020), which is designed to learn a more global mapping in Fourier space. 
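To make the contrast between the two mapping approaches concrete, the core FNO building block (a spectral convolution) can be sketched in a few lines of NumPy: transform the input to Fourier space, scale the lowest Fourier modes by complex weights, truncate the rest, and transform back. This is an illustrative sketch with made-up weights, not the paper's implementation; a trainable version (e.g., in PyTorch) would treat `weights` as learnable network parameters.

```python
import numpy as np

def fourier_layer(x, weights, modes):
    """One 1-D spectral convolution as used in FNO layers (illustrative).

    x       : real-valued signal on a uniform grid, shape (n,)
    weights : complex weights for the lowest `modes` Fourier modes
    modes   : number of low-frequency modes kept; higher modes are truncated
    """
    x_hat = np.fft.rfft(x)                        # to Fourier space
    out_hat = np.zeros_like(x_hat)
    out_hat[:modes] = weights * x_hat[:modes]     # global filter on low modes
    return np.fft.irfft(out_hat, n=x.size)        # back to physical space

rng = np.random.default_rng(0)
x = rng.standard_normal(128)                      # toy input signal
w = rng.standard_normal(16) + 1j * rng.standard_normal(16)
y = fourier_layer(x, w, modes=16)
print(y.shape)  # (128,)
```

Because every output point depends on all input points through the Fourier transform, a single such layer already has a global receptive field, in contrast to the local kernels aggregated over many layers in a convolutional U-Net.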
Despite the success of CNN-based approaches in imaging problems, we hypothesize that FNO models may be better suited for handling the complex and dynamic nature of ocean waves (Hypothesis 2), since we can assume that the wave features are already explicitly encoded in the network structure. In contrast, the U-Net needs to learn these wave features by aggregating information from multiple layers. Lastly, we expect that incorporating historical context via spatio-temporal radar data will enhance the reconstruction quality of both ML architectures (Hypothesis 3). In general, the fast inference capabilities of trained ML models make them ideal for maintaining real-time capability of the entire prediction process (Criterion 1) due to the rapid surface reconstruction without particular data preprocessing. Besides real-time capability, ensuring high reconstruction accuracy is crucial to prevent initial reconstruction errors that will accumulate and deteriorate the subsequent wave forecast. Hence, we strive for an empirical reference value for the surface similarity parameter (SSP) error metric (Perlin and Bustamante, 2014) of SSP \(\leq\) 0.10 between ground truth and reconstructed wave surfaces (Criterion 2), which is a commonly used error threshold in ocean wave research (Klein et al., 2020; Lunser et al., 2022). In addition, the proposed ML methods must be capable of handling real-world measurement conditions of radar snapshots taken at intervals of \(\Delta t_{\mathrm{r}}=[1,\,2]\,\mathrm{s}\) (Criterion 3), a common X-band radar revolution period (Neill and Hashemi, 2018). To summarize, the objective of this work is to develop an ML-based approach for phase-resolved radar inversion.
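The SSP used in Criterion 2 is a normalized error between two surfaces evaluated in Fourier space, equal to 0 for identical surfaces and 1 for perfect disagreement. A minimal one-dimensional sketch following the definition of Perlin and Bustamante (2014), with illustrative test signals of our own:

```python
import numpy as np

def ssp(eta_true, eta_pred):
    """Surface similarity parameter (Perlin & Bustamante, 2014) for 1-D surfaces:
    SSP = ||F1 - F2|| / (||F1|| + ||F2||), with F the Fourier transform."""
    f1 = np.fft.rfft(eta_true)
    f2 = np.fft.rfft(eta_pred)
    num = np.sqrt(np.sum(np.abs(f1 - f2) ** 2))
    den = np.sqrt(np.sum(np.abs(f1) ** 2)) + np.sqrt(np.sum(np.abs(f2) ** 2))
    return num / den

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
eta = np.sin(x)
print(ssp(eta, eta))   # 0.0 (identical surfaces)
print(ssp(eta, -eta))  # 1.0 (perfectly out of phase)
```

Because both amplitude and phase errors enter through the complex Fourier coefficients, a reconstruction with the right wave heights but shifted crests is still penalized, which is exactly what a phase-resolved criterion requires.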
This involves training ML models to learn mapping functions \(\mathcal{M}\) that are able to reconstruct a spatial wave elevation snapshot \(\eta(x,t=t_{\mathrm{s}})\) from one or \(n_{\mathrm{s}}\) consecutive historical radar snapshots \(\xi(x,t_{i})\), where \(t_{i}=\{t_{\mathrm{s}}-i\Delta t_{\mathrm{r}}\}_{i=0,\ldots,n_{\mathrm{s}}-1}\). As obtaining ground truth wave surface elevation data for large spatial domains in real ocean conditions is almost impractical, in Section 2, we first generate synthetic yet highly realistic one-dimensional spatio-temporal wave surfaces \(\eta(x,t)\) using the high-order spectral method for different sea states. The corresponding X-band radar surfaces \(\xi(x,t)\) are generated using a geometric approach and incorporate tilt- and shadowing modulations. In Section 3, two neural network architectures are introduced, a U-Net-based and an FNO-based network, which are investigated for their suitability for radar inversion. In Section 4, we discuss the computational results. In particular, we first compare the wave reconstruction performance of the U-Net-based models \(\mathcal{M}_{\mathrm{U},n_{\mathrm{s}}}\) and the FNO-based models \(\mathcal{M}_{\mathrm{F},n_{\mathrm{s}}}\), each trained using either \(n_{\mathrm{s}}=1\) radar snapshot in each input or spatio-temporal input data, meaning that multiple consecutive radar images \(n_{\mathrm{s}}\) are provided. Afterwards, the observations are generalized for the entire data set and discussed. Finally, in Section 5, we draw conclusions based on these results and suggest future research directions.

## 2 Data generation and preparation

This section briefly introduces the generation of long-crested nonlinear synthetic wave data \(\eta(x,t)\) using the high-order spectral method, followed by the generation of synthetic radar data \(\xi(x,t)\) that accounts for the tilt- and shadowing modulation mechanisms.
The final step involves extracting a number of \(N\) input-output \((\mathbf{x}_{i},\!\mathbf{y}_{i}),i=1,\ldots,N\) data samples from the synthetic radar and wave data, which we employ to train the supervised ML models in the subsequent steps of this work.

### Nonlinear synthetic wave data

To generate synthetic one-dimensional wave data, the water-wave problem can be expressed by potential flow theory. Assuming a Newtonian fluid that is incompressible, inviscid, and irrotational, the underlying wave model is described by a velocity potential \(\Phi(x,z,t)\) satisfying the _Laplace equation_ \[\nabla^{2}\Phi=\frac{\partial^{2}\Phi}{\partial x^{2}}+\frac{\partial^{2}\Phi}{\partial z^{2}}=0 \tag{1}\] within the fluid domain, where \(z=0\,\mathrm{m}\) is the mean free surface with \(z\) pointing in upward direction. The domain is bounded by the _kinematic_ and _dynamic boundary conditions_ at the free surface \(\eta(x,t)\) and the _bottom boundary condition_ at the seabed at depth \(d\) \[\eta_{t}+\eta_{x}\Phi_{x}-\Phi_{z}=0\quad\text{on }z=\eta(x,t), \tag{2}\] \[\Phi_{t}+g\eta+\frac{1}{2}\left(\Phi_{x}^{2}+\Phi_{z}^{2}\right)=0\quad\text{on }z=\eta(x,t),\] \[\Phi_{z}=0\quad\text{on }z=-d.\] Solving this system of equations is challenging due to the nonlinear terms in the boundary conditions, which must be satisfied additionally at the unknown free surface \(\eta(x,t)\). Even though linear wave theory (Airy, 1849) provides adequate approximations for certain engineering applications, capturing realistic ocean wave effects requires modeling of nonlinear behaviour of surface gravity waves. Thus, we employ the HOS method, as formulated by West et al. (1987), which transforms the boundary conditions to the free surface and expresses them as a perturbation series of nonlinear order \(M\) around \(z=0\). In practice, an order of \(M\leq 5\) is sufficient for capturing the nonlinear wave effects of interest (Desmars, 2020; Lunser et al., 2022).
The HOS simulation is linearly initialized by spatial wave surface elevation snapshots \(\eta(x,t_{\mathrm{s}}=0)\) sampled from the JONSWAP spectrum for finite water depth (Hasselmann et al., 1973; Bouws et al., 1985). The corresponding initial potential is linearly approximated. Subsequently the initial elevation and potential are propagated nonlinearly in time with the chosen HOS order \(M\). The referred JONSWAP spectrum attains its maximum at a peak frequency \(\omega_{\mathrm{p}}\), whereas the peak enhancement factor \(\gamma\) determines the energy distribution around \(\omega_{\mathrm{p}}\). The wave frequencies \(\omega\) are linked to the wavenumbers \(k\) by the linear dispersion relation \(\omega=\sqrt{gk\cdot\tanh\left(kd\right)}\). The relations \(\omega=\nicefrac{{2\pi}}{{T}}\) and \(k=\nicefrac{{2\pi}}{{L}}\) allow for substituting the peak frequency with a peak period \(T_{\mathrm{p}}\), peak wavelength \(L_{\mathrm{p}}\), or peak wavenumber \(k_{\mathrm{p}}\). Moreover, a dimensionless wave steepness parameter \(\epsilon=k_{\mathrm{p}}\cdot\nicefrac{{H_{\mathrm{s}}}}{{2}}\) is defined based on the significant wave height \(H_{\mathrm{s}}\). For more details on the HOS simulation, consider the work of Wedler et al. (2023) or Lunser et al. (2022), for example. In this study, we select a wave domain length of \(4000\,\mathrm{m}\), discretized by \(n_{x}=1024\) grid points, resulting in \(\Delta x=3.906\,\mathrm{m}\). A peak enhancement factor of \(\gamma=3\) is employed to emulate North Sea conditions. The water depth is \(d=500\,\mathrm{m}\) and the sea state parameters peak wavelength \(L_{\mathrm{p}}\) and steepness \(\epsilon\) are varied systematically over \(L_{\mathrm{p}}\in\{80,90,\ldots,190,200\}\,\mathrm{m}\) and \(\epsilon\in\{0.01,0.02,\ldots,0.09,0.10\}\), resulting in 130 possible \(L_{\mathrm{p}}\)-\(\epsilon\)-combinations. 
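The spectral relations above are straightforward to evaluate numerically. The sketch below (variable names are ours) inverts the linear dispersion relation for the wavenumber by fixed-point iteration starting from the deep-water guess, and derives \(T_{\mathrm{p}}\) and \(H_{\mathrm{s}}\) for one of the simulated sea states:

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def wavenumber(omega, d, n_iter=200):
    """Invert the linear dispersion relation omega^2 = g*k*tanh(k*d) for k
    by fixed-point iteration, starting from the deep-water limit k = omega^2/g."""
    k = omega**2 / G
    for _ in range(n_iter):
        k = omega**2 / (G * np.tanh(k * d))
    return k

d = 500.0                                      # water depth [m]
Lp = 200.0                                     # peak wavelength [m]
eps = 0.10                                     # steepness eps = kp * Hs / 2
kp = 2.0 * np.pi / Lp                          # peak wavenumber [rad/m]
omega_p = np.sqrt(G * kp * np.tanh(kp * d))    # peak frequency [rad/s]
Tp = 2.0 * np.pi / omega_p                     # peak period [s]
Hs = 2.0 * eps / kp                            # significant wave height [m]
print(f"Tp = {Tp:.2f} s, Hs = {Hs:.2f} m")
```

For \(L_{\mathrm{p}}=200\,\mathrm{m}\) and \(d=500\,\mathrm{m}\) we have \(k_{\mathrm{p}}d\approx 15.7\), i.e., effectively deep water, giving \(T_{\mathrm{p}}\approx 11.3\,\mathrm{s}\) and \(H_{\mathrm{s}}\approx 6.4\,\mathrm{m}\) for the steepest simulated sea state.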
For each \(L_{\mathrm{p}}\)-\(\epsilon\)-combination, we generate four different initial surfaces \(\eta(x,t_{\mathrm{s}}=0)\) by superimposing the wave components of the JONSWAP spectrum with random phase shifts. The subsequent wave evolution \(\eta(x,t>0)\) for \(t=0,\ldots,50\,\mathrm{s}\) with \(\Delta t_{\mathrm{save}}=0.1\,\mathrm{s}\) is performed considering the nonlinearities imposed by HOS order \(M=4\). As a result, we generate a total of 520 unique spatio-temporal HOS wave data arrays, each of shape \(E_{\mathrm{HOS}}\in\mathbb{R}^{1024\times 500}\), where \((E_{\mathrm{HOS}})_{ij}=\eta(x_{i},t_{j})\) with \(x_{i}=i\cdot\Delta x\) and \(t_{j}=j\cdot\Delta t_{\mathrm{save}}\).

### Corresponding synthetic radar data

As X-band radar systems are often pre-installed on marine structures for object detection purposes, they have also gained attention for observing ocean surface elevations (Borge et al., 1999). The system antenna rotates with a device-specific revolution time \(\Delta t_{\mathrm{r}}\) of between \(1-2\,\mathrm{s}\) (Neill and Hashemi, 2018) while emitting radar beams along a range \(r\), which interact with short-scale capillary waves distributed on large-scale ocean surface waves by the Bragg resonance phenomenon, resulting in backscatter to the antenna (Valenzuela, 1978). This procedure provides measurement data \(\xi(r,t)\) as a proxy of wave surface elevations \(\eta(r,t)\), which are not directly relatable to each other due to the different modulation mechanisms. Most influential are assumed to be _tilt modulation_ (Dankert and Rosenthal, 2004), _shadowing modulation_ (Borge et al., 2004; Wijaya et al., 2015) or a combination of both (Salcedo-Sanz et al., 2015). In order to generate synthetic radar snapshots for this work, the modulation mechanisms are simulated according to Salcedo-Sanz et al. (2015) and Borge et al. (2004), as illustrated in Figure 2.
_Tilt modulation_ refers to the variation in radar backscatter intensity depending on the local incidence angle \(\tilde{\Theta}(r,t)\) between the unit normal vector \(\mathbf{n}(r,t)\) perpendicular to the illuminated wave facet \(\eta(r,t)\) and the unit vector \(\mathbf{u}(r,t)\) pointing towards the antenna. As the backscatter cannot reach the antenna when the dot product \(\mathbf{n}\cdot\mathbf{u}\) becomes negative for \(|\tilde{\Theta}|>\frac{\pi}{2}\), the tilt modulation \(\mathcal{T}\) is simulated by \[\mathcal{T}(r,t)=\mathbf{n}(r,t)\cdot\mathbf{u}(r,t)=\begin{cases}\cos\tilde{\Theta}(r,t)&\text{if }\,|\tilde{\Theta}(r,t)|\leq\frac{\pi}{2},\\ 0&\text{otherwise}.\end{cases} \tag{3}\] The _shadowing modulation_ instead occurs when high waves located closer to the antenna obstruct waves at greater distances. Shadowing depends on the nominal incidence angle \(\Theta(r,t)\) of a wave facet \(\eta(r,t)\) with horizontal distance \(R(r)\) from the antenna at height \(z_{\mathrm{a}}\) above the mean sea level, geometrically expressed as \[\Theta(r,t)=\tan^{-1}\left[\frac{R(r)}{z_{\mathrm{a}}-\eta(r,t)}\right]. \tag{4}\] At a specific time instance \(t\), a wave facet \(\eta(r,t)\) at point \(r\) is shadowed if there is another facet \(\eta^{\prime}=\eta(r^{\prime},t)\) closer to the radar, \(R^{\prime}=R(r^{\prime})<R(r)\), that satisfies the condition \(\Theta^{\prime}=\Theta(r^{\prime},t)\geq\Theta(r,t)\). The shadowing-illumination mask \(\mathcal{S}\) can be constructed from this condition as follows: \[\mathcal{S}(r,t)=\begin{cases}0&\quad\text{if }\,R(r^{\prime})<R(r)\text{ and }\Theta(r^{\prime},t)\geq\Theta(r,t),\\ 1&\quad\text{otherwise}.\end{cases} \tag{5}\] Assuming that both tilt and shadowing modulation contribute to the radar imaging process, the image intensity is proportional to the local radar cross section, that is, \(\xi(r,t)\sim\mathcal{T}(r,t)\cdot\mathcal{S}(r,t)\).
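A minimal NumPy sketch of the modulation mechanisms of Eqs. (3) to (5), assuming the antenna is located at range \(r=0\) and approximating the facet normals from finite-difference slopes; the actual data generation follows Salcedo-Sanz et al. (2015) and Borge et al. (2004):

```python
import numpy as np

def radar_modulation(eta, r, z_a=18.0):
    """Sketch of tilt modulation (Eq. 3), incidence angle (Eq. 4) and
    shadowing mask (Eq. 5) for one wave snapshot eta(r).

    eta, r : 1D arrays (surface elevation, range from the antenna)
    z_a    : antenna height above the mean sea level
    Returns (tilt, shadow, xi) with xi ~ T * S (uncalibrated proxy).
    """
    # Tilt modulation: T = n . u = cos(local incidence angle), clipped to 0
    # where the facet is tilted away from the antenna (|angle| > pi/2).
    slope = np.gradient(eta, r)                 # d(eta)/dr
    n = np.stack([-slope, np.ones_like(eta)])   # facet normal (unnormalized)
    u = np.stack([-r, z_a - eta])               # facet -> antenna direction
    tilt = (n * u).sum(axis=0)
    tilt /= np.linalg.norm(n, axis=0) * np.linalg.norm(u, axis=0)
    tilt = np.clip(tilt, 0.0, None)

    # Shadowing: a facet is visible only if its nominal incidence angle
    # exceeds that of every facet closer to the antenna.
    theta = np.arctan2(r, z_a - eta)            # Eq. (4), robust for eta > z_a
    prev_max = np.concatenate(([-np.inf], np.maximum.accumulate(theta)[:-1]))
    shadow = (theta > prev_max).astype(float)   # Eq. (5)

    return tilt, shadow, tilt * shadow
```

For a flat sea, the shadowing mask is all ones; a single tall crest close to the antenna shadows the facets behind it.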
As marine radars are not calibrated, the received backscatter \(\xi(r,t)\) may be normalized to a user-dependent range of intensity values. In this work, we assume an X-band radar system with an antenna installation height of \(z_{\mathrm{a}}=18\,\mathrm{m}\). Around this antenna, the system's dead range is estimated to be \(r_{\mathrm{min}}=100\,\mathrm{m}\), and it scans the wave surface with a spatial range resolution of \(\Delta r=3.5\,\mathrm{m}\) at \(n_{r}=512\) grid points. Thus, the maximum observation range is computed as \(r_{\mathrm{max}}=1892\,\mathrm{m}\). The radar revolution period is chosen according to Criterion 3 as one snapshot every \(\Delta t_{\mathrm{r}}=1.3\,\mathrm{s}\), i.e., \(n_{t}=38\) radar snapshots for \(50\,\mathrm{s}\) of simulation time. Using these definitions, we first transform the 520 wave data arrays \(E_{\mathrm{HOS}}\in\mathbb{R}^{1024\times 500}\) from their HOS grid to the radar system's grid, yielding \(E_{\mathrm{sys}}\in\mathbb{R}^{512\times 38}\), where \((E_{\mathrm{sys}})_{ij}=\eta(r_{i},t_{j})\) with \(r_{i}=i\cdot\Delta r\) and \(t_{j}=j\cdot\Delta t_{\mathrm{r}}\). To obtain highly realistic corresponding radar observations, we model the tilt modulation \(\mathcal{T}(r,t)\) and the shadowing modulation \(\mathcal{S}(r,t)\), resulting in 520 radar data arrays, each denoted as \(Z_{\mathrm{sys}}\in\mathbb{R}^{512\times 38}\) with \((Z_{\mathrm{sys}})_{ij}=\xi(r_{i},t_{j})\).

Figure 2: Geometric display of tilt and shadowing modulation. Tilt modulation \(\mathcal{T}(r,t)\) is characterized by the local incidence angle \(\tilde{\Theta}\) between the surface normal vector \(\mathbf{n}\) and the antenna vector \(\mathbf{u}\), while shadowing modulation \(\mathcal{S}(r,t)\) of a wave facet occurs if another wave closer to the radar system obstructs the radar beams.

### Preparation of data for machine learning

To train a supervised learning algorithm, labeled input-output data pairs are required.
As visualized in Figure 3, from each of the 520 generated radar-wave array pairs we extract six radar input snapshots \(\mathbf{x}\) from \((Z_{\mathrm{sys}})_{ij}=\xi(r_{i},t_{j})\) and six wave output snapshots \(\mathbf{y}\) from \((E_{\mathrm{sys}})_{ij}=\eta(r_{i},t_{j})\) at six distinct time instances \(t_{\mathrm{s}}\) with the largest possible temporal distance. Each output \(\mathbf{y}\in\mathbb{R}^{512\times 1}\) contains a single snapshot at time \(t_{\mathrm{s}}\), while each input \(\mathbf{x}\in\mathbb{R}^{512\times n_{\mathrm{s}}}\) incorporates a number of \(n_{\mathrm{s}}\) historical radar snapshots at discrete times \(\{t_{\mathrm{s}}-i\cdot\Delta t_{\mathrm{r}}\}_{i=0,\ldots,n_{\mathrm{s}}-1}\). A single snapshot (\(n_{\mathrm{s}}=1\)) at time \(t_{\mathrm{s}}\) can be used as input; however, as stated in Hypothesis 3, a larger temporal context may enhance the quality of a network's reconstruction \(\mathbf{\hat{y}}\). The optimal value of \(n_{\mathrm{s}}\) is subject to investigation, as discussed in Sections 4.1.2 and 4.2.2. In total, \(N=6\cdot 520=3120\) data samples are generated, each corresponding to a single \(L_{\mathrm{p}}\)-\(\epsilon\)-combination. The data set thus has shape \(\mathbf{X}\in\mathbb{R}^{3120\times 512\times n_{\mathrm{s}}}\) and \(\mathbf{Y}\in\mathbb{R}^{3120\times 512\times 1}\) and is split into 60% training, 20% validation, and 20% test data using a stratified data split w.r.t. the sea-state parameters \((L_{\mathrm{p}},\epsilon)\). This ensures an equal representation of each wave characteristic in the resulting subsets, as described in detail in Appendix A.

## 3 Machine learning methodology

The U-Net (Ronneberger et al., 2015) and the Fourier neural operator (FNO) (Li et al., 2020) are neural network architectures for data with grid-like structures, such as our radar and wave surface elevation snapshots.
Their fundamental difference is the inductive bias encoded by each architecture, which refers to prior assumptions about either the solution space or the underlying data-generating process (Mitchell, 1980; Battaglia et al., 2018). The U-Net is a special type of CNN (LeCun et al., 1989) and imposes an inductive bias by assuming that adjacent data points in Euclidean space are semantically related; it learns local mappings between input patches and output features in each layer. This local information is aggregated into more global features through the use of multiple downsampling and convolutional layers. In contrast, the FNO operates under the assumption that the data can be meaningfully represented in Fourier space. It employs multiple Fourier transformations to learn a mapping between the spectral representation of the input and the desired output, directly providing a global understanding of the underlying patterns in the data. This section presents the U-Net- and FNO-based network architectures used in our study for radar inversion. In addition, suitable loss and metric functions are introduced for assessing the models' performance.

Figure 3: Schematic representation of the ML training sample extraction process. The left-hand side illustrates one of the raw radar and wave surface simulations \((Z_{\mathrm{sys}},E_{\mathrm{sys}}\in\mathbb{R}^{512\times 38})\), which are utilized to extract the input-output samples shown on the right-hand side. Each input \(\mathbf{x}\) consists of \(n_{\mathrm{s}}\) radar snapshots acquired at intervals of \(\Delta t_{\mathrm{r}}=1.3\,\mathrm{s}\), while each output \(\mathbf{y}\) represents a single-snapshot wave surface elevation at time instant \(t_{\mathrm{s}}\). In total, \(N=6\cdot 520=3120\) data samples are generated.

### U-Net-based network architecture

We first adopt the U-Net concept, originally developed for medical image segmentation by Ronneberger et al.
(2015), which has since been applied to a variety of image-to-image translation and surrogate-modeling problems, for instance by Isola et al. (2016); Liu et al. (2018); Stoian et al. (2019); Wang et al. (2020); Eichinger et al. (2022); Niekamp et al. (2023) and Stender et al. (2023). The mirrored image dimensions in a fully convolutional autoencoder network allow for the U-Net's key property, that is, the use of skip-connections for concatenating the output features from the encoding path with the inputs in the decoding path. This enables the reuse of data information at different spatial scales that would otherwise be lost during downsampling, and assists the optimizer in finding the minimum more efficiently (Li et al., 2018). Our proposed encoder-decoder architecture is the result of a four-fold cross-validated hyperparameter study, documented in Table 2 in the appendix. As depicted in Figure 4, the adapted U-Net architecture has a depth of \(n_{\mathrm{d}}=5\) consecutive encoder blocks followed by the same number of consecutive decoder blocks, with skip-connections between them. Each encoder block is composed of a 1D convolutional layer with \(n_{\mathrm{k}}=32\) kernels of size \(s_{\mathrm{k}}=5\), which are responsible for identifying specific features in the input by shifting the smaller-sized kernels, containing the network's trainable weights, across the larger input feature maps in a step-wise manner. Each convolutional layer is followed by a GeLU activation function \(\sigma\) (Hendrycks and Gimpel, 2016) and an average-pooling downsampling layer of size 2. To summarize, in the encoding path each input sample \(\mathbf{x}\in\mathbb{R}^{n_{r}\times n_{\mathrm{s}}}\) is transformed by the first convolutional layer, resulting in \(v_{\mathrm{c1}}\in\mathbb{R}^{n_{r}\times n_{\mathrm{k}}}\), with \(n_{r}=512\) being the number of spatial grid points and \(n_{\mathrm{s}}\) the number of historical snapshots in the radar input.
Subsequently, this intermediate output is sent through \(\sigma\), before the pooling layer reduces the spatial dimension to \(v_{\mathrm{p1}}\in\mathbb{R}^{\frac{1}{2}n_{r}\times n_{\mathrm{k}}}\). This process is repeated until the final encoding block's output \(v_{\mathrm{p5}}\in\mathbb{R}^{\frac{1}{32}n_{r}\times n_{\mathrm{k}}}\). Next, the decoding blocks are applied, each consisting of a convolutional layer with again \(n_{\mathrm{k}}=32\) kernels of size \(s_{\mathrm{k}}=5\), followed by GeLU activation. Afterwards, the feature maps' spatial dimensions are upsampled using transpose convolutional layers with linear activation. The resulting feature maps are then concatenated with the output of the corresponding stage in the encoding path via skip-connections, before the next convolution is applied. This process is repeated until the final output \(\mathbf{\hat{y}}\in\mathbb{R}^{n_{r}\times 1}\) is calculated using a convolutional layer with a single kernel and linear activation. As indicated above, the U-Net architecture assumes local connections between neighboring data points, which is accomplished through two mechanisms. Firstly, the convolutional layers use kernels with a receptive field of \(s_{\mathrm{k}}=5\) pixels to process different local parts of the larger input feature maps. This is referred to as weight sharing, causing a property called _translational equivariance_: each patch of the input is processed by the same kernels.

Figure 4: Fully convolutional encoder-decoder architecture based on the U-Net (Ronneberger et al., 2015). Each input \(\mathbf{x}\) is processed by \(n_{\mathrm{d}}=5\) alternating convolutional, activation and average-pooling layers in the encoding path. The decoding path contains convolutional, activation and transpose convolutional layers for gradual upsampling to calculate the output \(\mathbf{\hat{y}}\). Moreover, the outputs of the encoding stages are transferred to the decoding path via skip-connections.
Secondly, the pooling layers induce locality by assuming that meaningful summaries of information from small local regions in the intermediate feature maps can be made, which creates a property referred to as _translational invariance_ (Goodfellow et al., 2016).

### FNO-based network architecture

In the second step, we explore a neural network based on the FNO (Li et al., 2020). While a CNN is limited to mappings between finite-dimensional spaces, neural operators are additionally capable of learning nonlinear mappings between a more general class of function spaces. This makes the FNO well-suited for capturing the spatio-temporal patterns that govern the dynamics of various physical problems obeying partial differential equations, provided the solutions are well represented in Fourier space. FNO variants have been applied, for example, to fluid dynamics (Peng et al., 2022; Li et al., 2022), simulation of multiphase flow (Yan et al., 2022; Wen et al., 2022), weather forecasting (Pathak et al., 2022), material modeling (Rashid et al., 2022; You et al., 2022), and image classification (Williamson et al., 2022). The FNO-based iterative architecture (\(\mathbf{x}\to v_{0}\to v_{1}\rightarrow\ldots\rightarrow\mathbf{\hat{y}}\)) applied in this work is illustrated in Figure 5, while Table 3 in the appendix summarizes the determination of the model hyperparameters by four-fold cross-validation. Our FNO transforms the radar input data \(\mathbf{x}\in\mathbb{R}^{n_{r}\times n_{\mathrm{s}}}\) into a higher-dimensional latent representation \(v_{0}\in\mathbb{R}^{n_{r}\times n_{\mathrm{w}}}\) of channel width \(n_{\mathrm{w}}=32\), using a linear neural network layer \(P\) with \(n_{\mathrm{w}}\) nodes. Subsequently, the latent representation passes through \(n_{\mathrm{f}}=3\) Fourier layers, each consisting of two paths. In the upper path, a global convolution operator defined in Fourier space is applied to each channel of \(v_{0}\) separately, utilizing the discrete Fourier transform \(F\).
A linear transformation \(R_{0}\) is then applied to the lower-order Fourier modes after truncating the Fourier series at a maximum number of \(n_{\mathrm{m}}=64\) modes. Subsequently, this scaled and filtered content is transformed back to the spatial domain using the inverse discrete Fourier transform \(F^{-1}\). In the lower path, a linear transformation \(W_{0}\) in the spatial domain is applied to the input \(v_{0}\) to account for non-periodic boundary conditions and the higher-order modes neglected in the upper path of the Fourier layer. The outputs of the upper and lower paths are added, and the sum is passed through a nonlinear GeLU activation \(\sigma\), resulting in \(v_{1}\in\mathbb{R}^{n_{r}\times n_{\mathrm{w}}}\), before entering the next Fourier layer. In summary, the output of the \((i+1)\)-th Fourier layer is defined as \[v_{i+1}=\sigma\left(F^{-1}\left(R_{i}\cdot F(v_{i})\right)+W_{i}\cdot v_{i}\right). \tag{6}\] Finally, the output \(v_{3}\) of the last Fourier layer is transferred to the target output dimension \(\mathbf{\hat{y}}\in\mathbb{R}^{n_{r}\times 1}\) using another linear layer \(Q\). Altogether, the FNO's weights correspond to \(P\in\mathbb{R}^{n_{\mathrm{s}}\times n_{\mathrm{w}}}\), \(Q\in\mathbb{R}^{n_{\mathrm{w}}\times 1}\), and all \(R_{i}\in\mathbb{C}^{n_{\mathrm{w}}\times n_{\mathrm{w}}\times n_{\mathrm{m}}}\) and \(W_{i}\in\mathbb{R}^{n_{\mathrm{w}}\times n_{\mathrm{w}}}\). As the \(R_{i}\)-matrices contain the main portion of the total number of weights, most parameters are learned in Fourier space rather than in the original data space. As previously noted, the FNO architecture incorporates a global inductive bias that assumes the input data exhibit approximately periodic properties and can be effectively represented in Fourier space.
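The update of Eq. (6) can be sketched in a few lines. The snippet below is a NumPy illustration with random weights, purely to show the data flow; the trained models use a PyTorch implementation of the FNO, and the tanh-based GeLU approximation is our choice for self-containedness.

```python
import numpy as np

rng = np.random.default_rng(0)
n_r, n_w, n_m = 512, 32, 64        # grid points, channel width, retained modes

def fourier_layer(v, R, W):
    """One Fourier layer, Eq. (6): v_next = GeLU(F^-1(R . F(v)) + W . v)."""
    v_hat = np.fft.rfft(v, axis=0)                    # F along the spatial axis
    out_hat = np.zeros_like(v_hat)
    out_hat[:n_m] = np.einsum("kio,ki->ko", R, v_hat[:n_m])  # truncate + mix channels
    spectral = np.fft.irfft(out_hat, n=v.shape[0], axis=0)   # F^-1
    s = spectral + v @ W                               # add the bypass path W . v
    return 0.5 * s * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (s + 0.044715 * s**3)))

v = rng.standard_normal((n_r, n_w))                    # latent representation v_i
R = rng.standard_normal((n_m, n_w, n_w)) + 1j * rng.standard_normal((n_m, n_w, n_w))
W = rng.standard_normal((n_w, n_w)) / n_w
v_next = fourier_layer(v, R, W)                        # shape preserved: (512, 32)
```

Because only the first \(n_{\mathrm{m}}\) modes are mixed in the spectral path, the layer preserves the spatial resolution while learning a global mapping.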
Furthermore, the FNO's design presupposes that the Fourier spectrum of the input data is smooth, enabling its frequency components to be represented by a limited number of low-wavenumber Fourier coefficients, as the \(R_{i}\) matrices, which are responsible for the global mapping, truncate higher-frequency modes.

Figure 5: Network architecture based on the Fourier neural operator (Li et al., 2020). Each input \(\mathbf{x}\) is lifted to a higher-dimensional representation \(v_{0}\) of channel width \(n_{\mathrm{w}}\) by a neural network \(P\). Afterwards, \(n_{\mathrm{f}}=3\) Fourier layers are applied to each channel. Finally, \(v_{3}\) is transferred back to the target dimension of the output \(\mathbf{\hat{y}}\) by another neural network \(Q\). More specifically, each Fourier layer is composed of two paths that are added in the end. The upper one learns a mapping in Fourier space by adapting \(R_{i}\) for scaling and truncating the Fourier series after \(n_{\mathrm{m}}\) modes, while the lower one learns a local linear transform \(W_{i}\).

### Training and evaluation

Both the U-Net- and the FNO-based architecture are implemented using the PyTorch library (Paszke et al., 2019). To enable a fair comparison, both networks employ the same loss function for training. To account for wave training data of varying spatial scales, the relative L2-norm of the error is utilized as the loss function, where \(\mathbf{y}\) and \(\mathbf{\hat{y}}\in\mathbb{R}^{512\times 1}\) represent the true and predicted wave surfaces for one training sample: \[\mathcal{L}(\mathbf{y},\mathbf{\hat{y}})=\text{nL2}(\mathbf{y},\mathbf{\hat{y}})=\frac{\|\mathbf{\hat{y}}-\mathbf{y}\|_{2}}{\|\mathbf{y}\|_{2}}. \tag{7}\] To minimize the loss, we use the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001. The training is executed for 800 epochs on an NVIDIA GeForce RTX 3050 Ti Laptop GPU.
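In NumPy form, the loss of Eq. (7) amounts to a one-line function (training itself uses an equivalent PyTorch expression):

```python
import numpy as np

def nl2(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Relative L2 error of Eq. (7): ||y_pred - y_true||_2 / ||y_true||_2."""
    return float(np.linalg.norm(y_pred - y_true) / np.linalg.norm(y_true))
```

A uniform 10% amplitude overestimation, for instance, yields nL2 = 0.1, independent of the wave's absolute scale.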
For both the U-Net-based models \(\mathcal{M}_{\text{U},n_{\text{s}}}\) and the FNO-based models \(\mathcal{M}_{\text{F},n_{\text{s}}}\), only the model with the lowest test loss within the 800 epochs is stored for performance evaluation and visualization. Established machine learning metrics based on Euclidean distances treat deviations between two surfaces in frequency or phase as amplitude errors (Wedler et al., 2022). Therefore, we introduce the surface similarity parameter (SSP) proposed by Perlin and Bustamante (2014) as an additional performance metric \[\text{SSP}(\mathbf{y},\mathbf{\hat{y}})=\frac{\sqrt{\int|F_{\mathbf{y}}(k)-F_{\mathbf{\hat{y}}}(k)|^{2}dk}}{\sqrt{\int|F_{\mathbf{y}}(k)|^{2}dk}+\sqrt{\int|F_{\mathbf{\hat{y}}}(k)|^{2}dk}}\in[0,1], \tag{8}\] where \(k\) denotes the wavenumber vector and \(F_{\mathbf{y}}\) denotes the discrete Fourier transform of a surface \(\mathbf{y}\). The SSP is a normalized error metric, with \(\text{SSP}=0\) indicating perfect agreement and \(\text{SSP}=1\) a comparison of phase-inverted surfaces. As the SSP combines phase, amplitude, and frequency errors in a single normalized quantity, it is used in recent ocean wave prediction studies by Klein et al. (2020, 2022), Wedler et al. (2022, 2023), Desmars et al. (2021, 2022) and Lunser et al. (2022). While metrics such as the nL2 or SSP evaluate the average reconstruction quality of each \(\mathbf{\hat{y}}\in\mathbb{R}^{n_{r}\times 1}\) across the entire spatial domain with \(n_{r}=512\) grid points, it is important to consider the potential imbalance in reconstruction error between areas where the radar input was shadowed and where it was visible. This imbalance ratio can be quantified by \(\frac{\text{nL2}_{\text{shad}}}{\text{nL2}_{\text{vis}}}\).
Here, \(\text{nL2}_{\text{shad}}=\text{nL2}(\mathbf{y}_{\text{shad}},\mathbf{\hat{y}}_{\text{shad}})\) and \(\text{nL2}_{\text{vis}}=\text{nL2}(\mathbf{y}_{\text{vis}},\mathbf{\hat{y}}_{\text{vis}})\) are the nL2 norms of the output wave elevations in the shadowed and visible areas, respectively. We separate the visible and shadowed parts using the shadowing mask \(\mathcal{S}\) introduced in Eq. (5), where \(\mathbf{y}_{\text{vis}}=\mathcal{S}\cdot\mathbf{y}\) and \(\mathbf{y}_{\text{shad}}=(1-\mathcal{S})\cdot\mathbf{y}\). Afterwards, all cells with zero entries are removed from the output arrays, such that the numbers of visible and invisible data points are \(n_{\text{vis}}\) and \(n_{\text{shad}}\), respectively, and \(\mathbf{y}_{\text{vis}},\mathbf{\hat{y}}_{\text{vis}}\in\mathbb{R}^{n_{\text{vis}}\times 1}\) and \(\mathbf{y}_{\text{shad}},\mathbf{\hat{y}}_{\text{shad}}\in\mathbb{R}^{n_{\text{shad}}\times 1}\) satisfy \(n_{\text{vis}}+n_{\text{shad}}=n_{r}=512\). To conclude, a high value of the \(\frac{\text{nL2}_{\text{shad}}}{\text{nL2}_{\text{vis}}}\)-ratio indicates that the reconstruction in areas that were shadowed in the input is much worse than in the visible areas. We thus strive not only for low nL2 values across the entire spatial domain, but also for a low \(\frac{\text{nL2}_{\text{shad}}}{\text{nL2}_{\text{vis}}}\) to achieve uniform reconstructions. We base this ratio metric on the Euclidean-distance-based nL2 rather than on the SSP, as small sections of \(\mathbf{y}\) and \(\mathbf{\hat{y}}\) cannot be meaningfully considered in Fourier space.

## 4 Results

This work explores the potential of machine learning for the reconstruction of one-dimensional ocean wave surfaces \(\eta(x,t=t_{\text{s}})\) from radar measurement surfaces \(\xi(x,t_{i})\) with \(t_{i}=\{t_{\text{s}}-i\Delta t_{r}\}_{i=0,\ldots,n_{\text{s}}-1}\).
Therefore, each radar input sample \(\mathbf{x}\in\mathbb{R}^{512\times n_{\text{s}}}\), acquired according to Section 2, is to be mapped to the desired wave surface output \(\mathbf{y}\in\mathbb{R}^{512\times 1}\) via an ML model \(\mathcal{M}:\mathbf{x}\rightarrow\mathbf{y}\). We examine the impact of the number of historical radar snapshots \(n_{\text{s}}\) as well as the impact of the inductive bias of the U-Net-based models \(\mathcal{M}_{\text{U},n_{\text{s}}}\) and the FNO-based models \(\mathcal{M}_{\text{F},n_{\text{s}}}\) proposed in Section 3. We train the models using a total of \(N_{\text{train}}+N_{\text{val}}=2496\) samples and evaluate their performance using the previously excluded test set of \(N_{\text{test}}=624\) samples. The results are summarized in Table 1 and are reviewed and discussed with respect to the pre-stated Hypotheses 1 to 3 and Criteria 1 to 3 in the subsequent subsections.

### Performance of the U-Net-based model

In the first step of our investigation, we examine the ability of U-Net-based models \(\mathcal{M}_{\mathrm{U},n_{\mathrm{s}}}\) to reconstruct wave surfaces using single-snapshot radar input data \(\mathbf{x}\) for training. In the second step, we utilize the same architecture to determine the number of historical snapshots \(n_{\mathrm{s}}\) in each radar input that achieves the best reconstruction performance. To evaluate the effectiveness of the trained models, we also visually compare the predicted wave elevations \(\mathbf{\hat{y}}\) of selected test samples with their corresponding true elevations \(\mathbf{y}\).

#### 4.1.1 U-Net using single-snapshot radar data

Mapping of single-snapshot radar data refers to mapping radar snapshots \(\mathbf{x}\in\mathbb{R}^{512\times 1}\) to wave snapshots \(\mathbf{y}\in\mathbb{R}^{512\times 1}\) recorded at the same time instant \(t_{\mathrm{s}}\).
According to Table 1, the trained U-Net-based model \(\mathcal{M}_{\mathrm{U},1}\) achieves a mean loss value of \(\mathrm{nL2}=0.329\) across all test-set samples after 150 epochs of training. Afterwards, the model tends to overfit the training data, as shown in the loss curve in Figure 14a in the appendix. The observed error corresponds to a mean value of \(\mathrm{SSP}=0.171\) on the test set, which fails to satisfy the predefined Criterion 2 of reconstruction errors \(\mathrm{SSP}\leq 0.1\). To identify the origin of the reconstruction errors, we employ model \(\mathcal{M}_{\mathrm{U},1}\) to predict reconstructions \(\mathbf{\hat{y}}\) of individual samples from the test set. Despite the stratified data split ensuring an equal distribution of sea-state parameter combinations \((L_{\mathrm{p}},\epsilon)\) in the training and test set, the errors are unevenly distributed across individual samples, as illustrated in Figure 6. The sample in Figure 6a corresponds to a peak wavelength of \(L_{\mathrm{p}}=180\,\mathrm{m}\) and small amplitudes caused by a small steepness of \(\epsilon=0.01\). It exhibits a minor impact of the shadowing modulation mechanism, which affects only 9.4% of the total radar-illuminated surface \(\mathbf{x}\) in the top panel. The corresponding predicted surface reconstruction \(\mathbf{\hat{y}}\) in the bottom panel closely approximates the true wave elevation \(\mathbf{y}\), as evidenced by the sample-specific error of \(\mathrm{nL2}=0.152\) or \(\mathrm{SSP}=0.076\). In contrast, the sample in Figure 6b, with the same \(L_{\mathrm{p}}=180\,\mathrm{m}\) but increased \(\epsilon=0.10\), shows 71.5% of the spatial \(r\)-domain being affected by shadowing modulation causing zero-valued intensities. This results in a high reconstruction error of \(\mathrm{nL2}=0.541\) or \(\mathrm{SSP}=0.311\).
In particular, the shadowed areas seem to contribute to the poor reconstruction, as their error is 2.96 times higher than in the visible areas, as indicated by \(\frac{\mathrm{nL2}_{\mathrm{shad}}}{\mathrm{nL2}_{\mathrm{vis}}}\).

#### 4.1.2 U-Net using spatio-temporal radar data

To improve the reconstruction quality of the U-Net-based architecture, especially for high wave steepness, we took inspiration from classical spectral-analysis- and optimization-based approaches (cf. Borge et al., 2004; Wu, 2004). These approaches use spatio-temporal radar data by considering temporal sequences of \(n_{\mathrm{s}}\) historical radar snapshots for reconstruction. Thus, we use multiple historical radar snapshots satisfying Criterion 3 with \(\Delta t_{\mathrm{r}}=1.3\,\mathrm{s}\) for each input sample \(\mathbf{x}\in\mathbb{R}^{512\times n_{\mathrm{s}}}\), while the output remains a single snapshot \(\mathbf{y}\in\mathbb{R}^{512\times 1}\) at the respective last time instant \(t_{\mathrm{s}}\). We conducted 14 additional training runs of the same architecture to determine the best number of input snapshots \(n_{\mathrm{s}}\). The boxplot in Figure 7 shows that the model's mean performance across the entire test set improves significantly up to a value of \(n_{\mathrm{s}}=10\), confirming Hypothesis 3, as the reconstruction quality improves by incorporating multiple radar snapshots in the input. Moreover, the individual samples' error values become less scattered around the mean value.
The model \(\mathcal{M}_{\mathrm{U},10}\) determined by the boxplot analysis achieves a final mean performance of \(\mathrm{nL2}=0.123\) or \(\mathrm{SSP}=0.061\) on the test set, as shown in Table 1, now satisfying Criterion 2 of \(\mathrm{SSP}\leq 0.10\) and thus confirming Hypothesis 1. In addition, it yields a lower ratio of \(\frac{\text{nL2}_{\text{shad}}}{\text{nL2}_{\text{vis}}}=1.755\), compared to \(2.679\) for model \(\mathcal{M}_{\text{U},1}\), indicating a more balanced reconstruction between shadowed and visible areas. Moreover, the model no longer exhibits early overfitting, achieving its best performance after \(592\) epochs, as shown in Figure 14b in the appendix. Figure 8 further confirms the improvement of the reconstruction by \(\mathcal{M}_{\text{U},10}\) for the same two test-set samples reconstructed by \(\mathcal{M}_{\text{U},1}\) in Figure 6.

\begin{table} \begin{tabular}{l l c c c c c c} \hline \hline \multicolumn{5}{c}{model} & \multicolumn{3}{c}{mean errors on test set} \\ \hline name & architecture & \(n_{\mathrm{s}}\) & epochs & investigated in & nL2 & \(\frac{\mathrm{nL2}_{\mathrm{shad}}}{\mathrm{nL2}_{\mathrm{vis}}}\) & SSP \\ \hline \hline \(\mathcal{M}_{\mathrm{U},1}\) & U-Net-based & 1 & 150 & Sec. 4.1.1 & 0.329 & 2.679 & 0.171 \\ \(\mathcal{M}_{\mathrm{U},10}\) & U-Net-based & 10 & 592 & Sec. 4.1.2 & 0.123 & 1.755 & 0.061 \\ \(\mathcal{M}_{\mathrm{F},1}\) & FNO-based & 1 & 721 & Sec. 4.2.1 & 0.242 & 1.886 & 0.123 \\ \(\mathcal{M}_{\mathrm{F},9}\) & FNO-based & 9 & 776 & Sec. 4.2.2 & 0.153 & 1.381 & 0.077 \\ \hline \hline \end{tabular} \end{table}

Table 1: Reconstruction results averaged across the entire test set, evaluated with different metrics, for the investigated U-Net-based models \(\mathcal{M}_{\mathrm{U},n_{\mathrm{s}}}\) and FNO-based models \(\mathcal{M}_{\mathrm{F},n_{\mathrm{s}}}\) trained with either one or multiple radar snapshots \(n_{\mathrm{s}}\) in each sample's input.
The top panels display the most recent radar snapshot \(t_{\text{s}}\) in the darkest shading and preceding snapshots in increasingly lighter shades. The sample with small \(\epsilon=0.01\) in Figure 8a experiences only a slight reduction in reconstruction error, while the sample with \(\epsilon=0.10\) in Figure 8b exhibits a substantial reduction, to around one-third of the previous nL2 and SSP values. The improved performance seems mainly attributable to the enhanced reconstruction of the shadowed areas.

Figure 6: Two samples from the test set described by the same wavelength \(L_{\text{p}}=180\,\text{m}\) but different wave steepness \(\epsilon\), reconstructed by the U-Net-based architecture \(\mathcal{M}_{\text{U},1}\). (a) A small \(\epsilon\) causes a minor impact of the shadowing modulation in the radar input and allows accurate predictions. (b) A larger \(\epsilon\) creates more extensive shadowed areas and causes higher prediction errors.

Figure 7: Boxplot depicting the error distribution on the test set, depending on the number of historical radar snapshots \(n_{\text{s}}\) provided to train the U-Net-based architectures \(\mathcal{M}_{\text{U},n_{\text{s}}}\). The best model performance is achieved for \(n_{\text{s}}=10\).

### Performance of the FNO-based model

The U-Net-based model \(\mathcal{M}_{\mathrm{U,10}}\) already supported Hypothesis 1 and Hypothesis 3 by demonstrating the general potential to reconstruct wave surface elevations from radar data and by improving the reconstruction quality when additional historical radar data are included. However, we also hypothesized that the FNO-based architecture may outperform CNN-based methods such as the U-Net due to its global inductive bias (Hypothesis 2), which may be beneficial for the wave data structure.
To investigate this, we first train FNO-based models \(\mathcal{M}_{\mathrm{F},n_{\mathrm{s}}}\) with \(n_{\mathrm{s}}=1\) radar snapshot in each input \(\mathbf{x}\) and subsequently determine the number \(n_{\mathrm{s}}\) that achieves the best performance. We again visually compare true and predicted wave elevations \(\mathbf{y}\) and \(\mathbf{\hat{y}}\).

#### 4.2.1 FNO using single-snapshot radar data

The FNO-based model \(\mathcal{M}_{\mathrm{F},1}\), trained with \(n_{\mathrm{s}}=1\) snapshot in each input, attains its best performance of \(\mathrm{nL2}=0.242\) after 721 training epochs, as shown in Table 1 and demonstrated by the loss curve in the appendix. Although the corresponding SSP \(=0.123\) does not meet Criterion 2, it still presents a notable improvement over the SSP value of 0.171 previously obtained by the U-Net-based model \(\mathcal{M}_{\mathrm{U,1}}\). Moreover, \(\mathcal{M}_{\mathrm{F,1}}\) not only reduces the error across the entire spatial \(r\)-domain in terms of nL2 or SSP, but also reconstructs the waves more uniformly between shadowed and visible areas compared to \(\mathcal{M}_{\mathrm{U,1}}\). This is evident from the decrease in the mean \(\frac{\mathrm{nL2}_{\mathrm{shad}}}{\mathrm{nL2}_{\mathrm{vis}}}\)-ratio from 2.679 to 1.886. The improved wave reconstruction can be illustrated by comparing the reconstructions of the same two test-set samples generated by \(\mathcal{M}_{\mathrm{F,1}}\) in Figure 9 with those of \(\mathcal{M}_{\mathrm{U,1}}\) in Figure 6. As depicted in Figure 9a, the overall nL2 and SSP metrics of this sample improve only slightly, but the ratio \(\frac{\mathrm{nL2}_{\mathrm{shad}}}{\mathrm{nL2}_{\mathrm{vis}}}\) achieved by \(\mathcal{M}_{\mathrm{F,1}}\) is substantially smaller than that observed for \(\mathcal{M}_{\mathrm{U,1}}\). These observations are even more pronounced for the sample with high \(\epsilon\) in Figure 9b.
\(\mathcal{M}_{\mathrm{F,1}}\) reduces the error over the entire range in terms of nL2 and SSP by almost half and also produces a more uniform reconstruction between shadowed and visible areas.

Figure 8: Two samples from the test set described by the same wavelength \(L_{\mathrm{p}}=180\,\mathrm{m}\) but different wave steepness \(\epsilon\), reconstructed by the U-Net-based architecture trained with \(n_{\mathrm{s}}=10\) historical snapshots in the radar input, \(\mathcal{M}_{\mathrm{U,10}}\). Compared to \(\mathcal{M}_{\mathrm{U,1}}\), a strong reconstruction improvement is observed, especially for the sample with high \(\epsilon=0.10\) in (b).

#### 4.2.2 FNO using spatio-temporal radar data

Although the FNO-based model \(\mathcal{M}_{\mathrm{F},1}\) outperforms the U-Net-based model \(\mathcal{M}_{\mathrm{U},1}\), it does not achieve the desired reconstruction quality of SSP \(\leq 0.10\) (Criterion 2). To enhance the model performance, we analyze the effect of including multiple historical snapshots in the input \(\mathbf{x}\in\mathbb{R}^{512\times n_{\mathrm{s}}}\). Again, 14 additional training runs were conducted, each with an increasing number of snapshots \(n_{\mathrm{s}}\). The results, depicted in Figure 10, demonstrate an initial improvement in performance for the models \(\mathcal{M}_{\mathrm{F},n_{\mathrm{s}}}\), which is slightly less pronounced than that observed for the U-Net-based models \(\mathcal{M}_{\mathrm{U},n_{\mathrm{s}}}\) in Figure 7. The FNO-based models achieve their best performance for \(n_{\mathrm{s}}=9\) input snapshots, beyond which the mean error slightly increases. According to Table 1, the model \(\mathcal{M}_{\mathrm{F},9}\) attains a mean performance of \(\mathrm{nL2}=0.153\) on the test set after 776 training epochs, as depicted by the loss curve in the appendix.
This corresponds to SSP \(=0.076\), fulfilling Criterion 2 of SSP \(\leq 0.10\). However, in comparison to the U-Net-based model \(\mathcal{M}_{\mathrm{U,10}}\), which achieved a final mean value of SSP \(=0.061\), the performance of \(\mathcal{M}_{\mathrm{F,9}}\) measured across the entire \(r\)-domain is slightly inferior, even though in the single-snapshot case \(\mathcal{M}_{\mathrm{F,1}}\) outperformed \(\mathcal{M}_{\mathrm{U,1}}\). Nevertheless, compared to all investigated models, \(\mathcal{M}_{\mathrm{F,9}}\) on average achieves the best reconstruction uniformity between shadowed and visible areas, indicated by a mean \(\frac{\mathrm{nL2}_{\mathrm{shad}}}{\mathrm{nL2}_{\mathrm{vis}}}=1.381\) on the test set. Figure 11 shows the reconstruction of the same two samples from the test set using \(\mathcal{M}_{\mathrm{F,9}}\). Compared to \(\mathcal{M}_{\mathrm{F,1}}\) in Figure 9, both samples experience a similar increase in reconstruction quality measured in terms of the individual SSP and nL2 errors. In addition, these values are comparable to those achieved by \(\mathcal{M}_{\mathrm{U,10}}\) in Figure 8.

Figure 10: Boxplot depicting the error distribution on the test set, depending on the number of historical radar snapshots \(n_{\mathrm{s}}\) provided to train the FNO-based architectures \(\mathcal{M}_{\mathrm{F},n_{\mathrm{s}}}\). The best model performance is achieved for \(n_{\mathrm{s}}=9\).

Figure 9: Two samples from the test set described by the same wavelength \(L_{\mathrm{p}}=180\,\mathrm{m}\), but different wave steepness \(\epsilon\), reconstructed by the FNO-based architecture \(\mathcal{M}_{\mathrm{F},1}\). The \(\mathcal{M}_{\mathrm{F},1}\) outperforms the \(\mathcal{M}_{\mathrm{U},1}\) in reconstructing the shadowed areas, especially noticeable for the sample with large \(\epsilon=0.10\) in (b).
However, for the sample with small \(\epsilon=0.01\) in Figure 11a, \(\mathcal{M}_{\mathrm{F,9}}\) generates a more balanced reconstruction than \(\mathcal{M}_{\mathrm{U,10}}\), as reflected by the reduction of \(\frac{\mathrm{nL2}_{\mathrm{shad}}}{\mathrm{nL2}_{\mathrm{vis}}}\) from 2.201 to 1.665 for this individual sample. For the higher-steepness sample in Figure 11b, the increase in reconstruction uniformity is less significant but still present.

### 4.3 Comparative discussion

The aforementioned visual observations have been limited to the examination of only two samples from the test set, both described by peak wavelength \(L_{\mathrm{p}}=180\,\mathrm{m}\) and either steepness \(\epsilon=0.01\) or \(\epsilon=0.10\). To rule out incidental observations, the generalization of the error values needs to be examined. This can be achieved by plotting the individual error values against their combination of peak wavelength \(L_{\mathrm{p}}\) and steepness \(\epsilon\) for all 624 test set samples reconstructed using the U-Net-based models \(\mathcal{M}_{\mathrm{U,1}}\) and \(\mathcal{M}_{\mathrm{U,10}}\) or the FNO-based models \(\mathcal{M}_{\mathrm{F,1}}\) and \(\mathcal{M}_{\mathrm{F,9}}\).

Figure 11: Two samples from the test set described by the same wavelength \(L_{\mathrm{p}}=180\,\mathrm{m}\), but different wave steepness \(\epsilon\), reconstructed by the FNO-based architecture trained with \(n_{\mathrm{s}}=9\) historical snapshots in the radar input \(\mathcal{M}_{\mathrm{F,9}}\). Compared to \(\mathcal{M}_{\mathrm{F,1}}\), a reconstruction improvement is visible for both samples. Moreover, for these two samples the reconstruction quality on the entire \(r\)-domain is almost equivalent to the results of \(\mathcal{M}_{\mathrm{U,10}}\), but especially for the small-steepness sample in (a) the error ratio between shadowed and visible areas is remarkably smaller using \(\mathcal{M}_{\mathrm{F,9}}\), which indicates the potential of a more uniform reconstruction.
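The error measures compared throughout this section can be made concrete with a short sketch. The exact definitions appear earlier in the paper (not in this excerpt); below we assume the usual relative L2 norm for nL2 and the spectral form of the surface similarity parameter for SSP, applied to a toy reconstruction and shadow mask:

```python
import numpy as np

def nL2(y_pred, y_true):
    # relative L2 error over the full r-domain
    return np.linalg.norm(y_pred - y_true) / np.linalg.norm(y_true)

def nL2_masked(y_pred, y_true, mask):
    # relative L2 error restricted to shadowed (or visible) grid points
    return np.linalg.norm((y_pred - y_true)[mask]) / np.linalg.norm(y_true[mask])

def ssp(y_pred, y_true):
    # surface similarity parameter: 0 = identical surfaces, 1 = out of phase
    F_pred, F_true = np.fft.rfft(y_pred), np.fft.rfft(y_true)
    return np.linalg.norm(F_pred - F_true) / (np.linalg.norm(F_pred) + np.linalg.norm(F_true))

r = np.linspace(0.0, 512 * 3.5, 512, endpoint=False)   # range axis, dr = 3.5 m
y_true = np.sin(2 * np.pi * r / 180.0)                 # wave with L_p = 180 m
y_pred = 0.9 * y_true                                  # a mildly damped reconstruction
shadow = np.zeros(512, dtype=bool); shadow[::3] = True # toy shadow mask

err = nL2(y_pred, y_true)
ratio = nL2_masked(y_pred, y_true, shadow) / nL2_masked(y_pred, y_true, ~shadow)
```

With these forms, a reconstruction uniformly scaled by 0.9 yields nL2 = 0.1 and a shadowed-to-visible ratio of exactly 1, since the error is evenly distributed over the domain.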
#### 4.3.1 Discussion of overall reconstruction quality

Figure 12 illustrates the reconstruction error as the mean nL2 value across the 4-5 test samples available for each specific \(L_{\mathrm{p}}\)-\(\epsilon\)-combination. Additionally, red dots in the cell centers indicate the combinations that achieved a mean \(\mathrm{SSP}\leq 0.10\) (Criterion 2).

Figure 12: Error surfaces generalizing the previous observations for the four investigated models \(\mathcal{M}\) depending on the \(L_{\mathrm{p}}\)-\(\epsilon\)-combination of the samples from the test set. Red dots indicate parameter combinations that meet Criterion 2 of reconstruction errors \(\mathrm{SSP}\leq 0.10\). The upper subplots illustrate the results of (a) the U-Net-based model and (b) the FNO-based model, both trained with only one radar snapshot (\(n_{\mathrm{s}}=1\)) in each input. The same architectures were trained with multiple historical radar snapshots in each input, as demonstrated in the lower subplots. Specifically, (c) shows the U-Net-based model trained with \(n_{\mathrm{s}}=10\) and (d) the FNO-based model trained with \(n_{\mathrm{s}}=9\).

Figure 12a confirms the findings presented in Section 4.1.1 for the U-Net-based model \(\mathcal{M}_{\mathrm{U},1}\) trained with one radar snapshot in each input (\(n_{\mathrm{s}}=1\)). The reconstruction errors increase with increasing steepness \(\epsilon\) and thus with increasing wave height. Moreover, we now observe that this effect occurs almost independently of the peak wavelength \(L_{\mathrm{p}}\). For samples described by \(\epsilon>0.02\), \(\mathcal{M}_{\mathrm{U},1}\) fails to meet Criterion 2, which is attributable to the geometrical radar imaging problem demonstrated in Figure 2, showing that the increase in wave height, caused by increased \(\epsilon\), results in more and larger shadowed areas. Figure 13 demonstrates that the occurrence of shadowing mainly increases with increasing \(\epsilon\) and is less influenced by \(L_{\mathrm{p}}\). While \(\epsilon=0.01\) on average causes only around \(10\%\), \(\epsilon=0.10\) instead causes approximately \(70-75\%\) of each input \(\mathbf{x}\) to be affected by shadowing modulation. This results in zero-valued intensities that complicate the radar inversion task. Understanding the challenges faced by model \(\mathcal{M}_{\mathrm{U},1}\) in reconstructing shadowed areas requires revisiting the U-Net's local mode of operation, outlined in Section 3.1, and the exemplary radar input depicted in the upper panel of Figure (b)b. Due to shadowing, numerous local areas exhibit zero intensities covering up to approximately \(200\,\mathrm{m}\), especially at greater distances from the radar system. However, the kernels in the first convolutional layer with a kernel size of \(s_{\mathrm{k}}=5\) only cover a domain of \(s_{\mathrm{k}}\cdot\Delta r=17.5\,\mathrm{m}\) while being shifted across the input feature map in a step-wise manner. While the U-Net's translational equivariance property is useful for translating radar intensities to wave surface elevations regardless of their spatial location, it thus also causes kernels to be shifted across large areas with zero input only, which cannot be processed in a meaningful way. Although the pooling layers subsequently reduce the dimension of the feature maps, resulting in an increased ratio of kernel size to feature size, the problem of radar inversion can be assumed to be based on the mapping of individual pixel values, known as low-level features. These features are learned in the early layers of a CNN-based network (Zeiler and Fergus, 2014). Accordingly, the initial stages of the U-Net-based architecture are more important for our task than for its original purpose of image segmentation (Ronneberger et al., 2015), which is based on mid- to high-level features extracted in the later layers.
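The mismatch between first-layer kernel coverage and shadow extent can be quantified directly. A minimal sketch, assuming the range resolution \(\Delta r = 3.5\,\mathrm{m}\) implied by \(s_{\mathrm{k}}\cdot\Delta r=17.5\,\mathrm{m}\) and a toy input with one roughly 200 m shadowed stretch:

```python
import numpy as np

dr, sk = 3.5, 5                      # range resolution [m] and kernel size (s_k * dr = 17.5 m)
first_layer_coverage = sk * dr       # physical extent seen by one first-layer kernel

def longest_zero_run(x):
    # length of the longest contiguous shadowed (zero-intensity) stretch
    best = run = 0
    for v in x:
        run = run + 1 if v == 0 else 0
        best = max(best, run)
    return best

# toy radar input: a 57-bin shadowed stretch (~200 m) far from the radar
radar = np.ones(512)
radar[400:457] = 0.0
shadow_extent = longest_zero_run(radar) * dr   # 199.5 m
kernels_blind = shadow_extent / first_layer_coverage
```

A single shadowed stretch of this size spans more than eleven first-layer kernel footprints, so every kernel inside it receives zeros only.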
We therefore face problems applying \(\mathcal{M}_{\mathrm{U},1}\) for reconstruction, as important kernels in the early layers receive a significant amount of sparse, uninformative content. Although increasing the kernel size \(s_{\mathrm{k}}\) is a theoretically possible solution, doing so would compromise the U-Net's key property of locality. Moreover, when processing 2D surfaces with 2D convolutional kernels in future research, it would result in a quadratic increase in the number of weights, leading to computational issues. For this reason, the approach of providing \(n_{\mathrm{s}}=10\) consecutive radar snapshots, governed according to Criterion 3, for the training of the U-Net-based model \(\mathcal{M}_{\mathrm{U},10}\) in Section 4.1.2 more effectively accounts for the sparsity in the input data. The upper panel of Figure (b)b demonstrated the presence of input information across the majority of the \(r\)-domain. The wave surfaces undergo shape variations while traveling towards the radar due to the differing phase velocities of their components caused by dispersion. This results in a different part of the wave surface being shadowed or visible at each time step and seems to allow the model to capture more information about the wave on average, as the reconstruction quality significantly improves compared to \(\mathcal{M}_{\mathrm{U},1}\). Therefore, we infer that the spatial and temporal shifts of the additional radar intensities acquired at \(t_{i}=t_{\mathrm{s}}-i\,\Delta t_{r}\) for \(i=0,\ldots,n_{\mathrm{s}}-1\) can be compensated successfully. This may be attributed to the fact that each kernel applied to the input has its own channel for each snapshot, allowing for separate processing to counterbalance the shift first, followed by the addition of the results to one feature map utilized as part of the input for the next layer. The improved reconstruction observed for \(\mathcal{M}_{\mathrm{U},10}\) is further supported by its performance generalization shown in Figure 12c.
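The stacking of historical snapshots just described can be sketched as follows; the revisit time \(\Delta t_r\) and the toy history buffer are placeholders, not values from the paper:

```python
import numpy as np

n_r, n_s, dt_r = 512, 9, 1.0        # range bins, snapshots, radar revisit time [s] (assumed)
t_s = 100.0                          # reconstruction instant

# acquisition times of the snapshots stacked into one input, newest first
t_i = np.array([t_s - i * dt_r for i in range(n_s)])

# toy history buffer: history[j] is the radar image acquired at time t_s - j*dt_r
history = [np.full(n_r, t_s - j * dt_r) for j in range(20)]

# input tensor x with one channel per snapshot, as used by both architectures
x = np.stack(history[:n_s], axis=-1)   # shape (n_r, n_s)
```

Each of the \(n_{\mathrm{s}}\) channels thus carries the same range axis observed at a slightly earlier time, which is what lets the network counterbalance the dispersion-induced shifts.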
Compared to Figure 12a, the mean nL2 error is substantially smaller and the reconstruction errors are more evenly distributed across the \(L_{\mathrm{p}}\)-\(\epsilon\)-space, resulting in a satisfactory SSP value (Criterion 2) for almost all samples. Although there is still a slight increase in the error for cells with higher \(L_{\mathrm{p}}\) and \(\epsilon\), the proposed model \(\mathcal{M}_{\mathrm{U},10}\) can accurately reconstruct samples with varying wave characteristics and degrees of shadowing, thus supporting Hypothesis 1 and Hypothesis 3 of this work.

Figure 13: Graphs visualizing the average proportion of each input \(\mathbf{x}\) affected by shadowing modulation as a function of the sample wave steepness values \(\epsilon=0.01-0.10\) for the shortest, one medium, and the longest peak wavelength \(L_{\mathrm{p}}\) occurring in the test set.

Motivated by the inherent patterns in wave data and the successful application of the Fourier neural operator (FNO) to systems exhibiting certain periodic properties, we conducted a comparative analysis of the global inductive bias of this network architecture with the local inductive bias of the CNN-based U-Net. As discussed in Section 4.2.1, our observations indicate that the FNO-based model \(\mathcal{M}_{\mathrm{F},1}\), trained with only one snapshot (\(n_{\mathrm{s}}=1\)), outperforms the U-Net-based \(\mathcal{M}_{\mathrm{U},1}\) in reconstructing shadowed areas in the input, as evidenced, e.g., by comparing the reconstruction in Figure 9b to Figure 6b. This observation generalizes to the entire test data set, as shown in Figure 12b. Although the errors in the FNO error surface still increase with higher steepness \(\epsilon\) and, consequently, with an increase in the percentage of shadowing according to Figure 13, the increase is much less severe than that obtained by \(\mathcal{M}_{\mathrm{U},1}\) shown in Figure 12a.
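The shadowing fractions of Figure 13 follow from the geometric imaging model: a surface point is shadowed when a nearer point subtends a larger elevation angle as seen from the antenna. A minimal sketch for a monochromatic surface, assuming a hypothetical antenna height of 10 m above the mean surface:

```python
import numpy as np

def shadow_mask(r, eta, z_a=10.0):
    # geometric shadowing: a point is invisible if any nearer point subtends a
    # larger elevation angle from the antenna at height z_a (assumed value)
    slope = (eta - z_a) / r                  # tangent of the line-of-sight angle
    return slope < np.maximum.accumulate(slope)

dr = 3.5
r = dr * np.arange(1, 513)                   # range axis; radar antenna at r = 0
k_p = 2 * np.pi / 180.0                      # L_p = 180 m
eta_low = (0.01 / k_p) * np.sin(k_p * r)     # steepness eps = k_p * amplitude
eta_high = (0.10 / k_p) * np.sin(k_p * r)

frac_low = shadow_mask(r, eta_low).mean()    # fraction of shadowed range bins
frac_high = shadow_mask(r, eta_high).mean()
```

Even this toy geometry reproduces the qualitative trend discussed above: the shadowed fraction of the range axis grows strongly with the steepness \(\epsilon\).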
The improved ability of the FNO-based model \(\mathcal{M}_{\mathrm{F},1}\) to reconstruct shadowed areas from a single-snapshot input can be attributed to its mode of operation outlined in Section 3.2. Although the latent representation \(v_{0}\) in Figure 5 is usually not explicitly known, we can infer that the layer \(P\) with \(n_{\mathrm{s}}=1\) input nodes and \(n_{\mathrm{w}}\) output nodes only applies linear transformations to each radar input \(\mathbf{x}\). As the radar inputs exhibit kinks at the transitions from visible to shadowed areas, \(v_{0}\) will consequently have similar characteristics along the range direction. These transitions result in peaks at specific wavenumbers \(k\) in the spectrum \(F(k)\). However, the desired wave outputs \(\mathbf{y}\) of the training data possess smooth periodic properties, without peaks at the kink-related wavenumbers in \(F_{\mathbf{y}}(k)\). Since the \(R_{i}\) matrices in the Fourier layers scale the radar input spectrum to the wave output spectrum, they learn small coefficients for the corresponding entries to reduce the peaks. Therefore, the FNO's global inductive bias, combined with the data structure of wave surfaces, can efficiently correct sparse, shadowed regions in spectral space, resolving the issue of insufficient local information for reconstruction that arises with the U-Net-based model \(\mathcal{M}_{\mathrm{U},1}\) in Euclidean space. Thus it can also be stated that the FNO explicitly hard-encodes prior knowledge about physical wave properties through its network structure and can therefore be assumed to be a _physics-guided design of architecture_ (cf. Willard et al., 2022; Wang and Yu, 2023) for our problem.
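The mode of operation described above — lifting by \(P\) followed by spectral scaling with the \(R_{i}\) matrices restricted to the lowest \(n_{\mathrm{m}}\) modes — can be sketched as a minimal 1D Fourier layer. The random weights are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_r, n_w, n_m = 512, 4, 64           # grid points, channel width, retained modes

def fourier_layer(v, R):
    # v: (n_r, n_w) real feature map; R: (n_m, n_w, n_w) complex mode weights
    F = np.fft.rfft(v, axis=0)                       # spectrum along the range axis
    out = np.zeros_like(F)
    out[:n_m] = np.einsum('kij,kj->ki', R, F[:n_m])  # scale/mix the lowest n_m modes
    return np.fft.irfft(out, n=n_r, axis=0)          # back to physical space

P = rng.standard_normal((1, n_w))                    # lifting layer (n_s = 1 input channel)
R = rng.standard_normal((n_m, n_w, n_w)) + 1j * rng.standard_normal((n_m, n_w, n_w))

x = rng.standard_normal((n_r, 1))                    # one radar snapshot
v0 = x @ P                                           # latent representation v_0
v1 = fourier_layer(v0, R)
```

By construction, the output contains no energy above mode \(n_{\mathrm{m}}\), which mirrors the mechanism by which the learned \(R_{i}\) entries suppress the kink-related wavenumbers introduced by shadowing.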
Despite the better performance of the FNO-based model \(\mathcal{M}_{\mathrm{F},1}\) compared to \(\mathcal{M}_{\mathrm{U},1}\) in reconstructing shadowed radar inputs, which already supports Hypothesis 2, Figure 12b still reveals that most of the test set samples fail to meet Criterion 2 of SSP \(\leq 0.10\). However, this issue was resolved by training an FNO-based model \(\mathcal{M}_{\mathrm{F},9}\) with \(n_{\mathrm{s}}=9\) historical radar snapshots in each input. This was demonstrated for the two test set examples in Figure 11 and is generalized in Figure 12d. We observe from that figure that the slightly higher mean error of \(\mathcal{M}_{\mathrm{F},9}\) across the entire test set compared to \(\mathcal{M}_{\mathrm{U},10}\) is primarily caused by samples with low steepness \(\epsilon\) or short wavelengths \(L_{\mathrm{p}}\). It is worth noting that the observed minimal increase in errors for short wavelengths cannot be attributed to a truncation at an insufficient number of Fourier series modes \(n_{\mathrm{m}}\) in the Fourier layers. In this work, \(n_{\mathrm{m}}\) is set to 64 and the spectral representation is discretized by \(\Delta k=\frac{2\pi}{n_{\mathrm{r}}\cdot\Delta r}=0.00351\,\mathrm{m}^{-1}\). The highest peak wavenumber of \(k_{\mathrm{p}}=0.0785\,\mathrm{m}^{-1}\) in our data set is reached for samples with \(L_{\mathrm{p}}=80\,\mathrm{m}\). The spectral density around \(k_{\mathrm{p}}\) has decayed almost completely at \(k_{\mathrm{filt}}=n_{\mathrm{m}}\cdot\Delta k=0.2246\,\mathrm{m}^{-1}\), such that no important wave components are filtered out, as visualized in Figure C.17 in the appendix. Therefore, the small unequal tendency in the error distribution achieved by \(\mathcal{M}_{\mathrm{F},9}\) in Figure 12d for samples described by different \(L_{\mathrm{p}}\)-\(\epsilon\)-combinations is likely caused by factors other than an unsuitable network hyperparameter \(n_{\mathrm{m}}\).
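The quoted spectral quantities can be reproduced directly from \(n_{\mathrm{r}}=512\), \(\Delta r=3.5\,\mathrm{m}\), and \(n_{\mathrm{m}}=64\):

```python
import math

n_r, dr, n_m = 512, 3.5, 64
dk = 2 * math.pi / (n_r * dr)        # spectral resolution of the 1792 m range window
k_filt = n_m * dk                    # highest wavenumber retained by the Fourier layers
k_p_max = 2 * math.pi / 80.0         # peak wavenumber of the shortest waves (L_p = 80 m)
```

The retained band thus extends to roughly 2.9 times the largest peak wavenumber occurring in the data set.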
Moreover, we observed in the loss curve shown in Figure B.16b in the appendix that further training for more than 800 epochs could potentially improve the model's performance, whereas the best performance on the test set for \(\mathcal{M}_{\mathrm{U},10}\) seems to have been reached already, as the model begins to overfit the training data, as depicted in Figure B.15a.

#### 4.3.2 Discussion of reconstruction uniformity

So far, the generalization of the reconstruction quality has been evaluated based on nL2 or SSP values across the entire \(r\)-domain only. However, Table 1 indicates that the FNO-based model \(\mathcal{M}_{\mathrm{F},9}\) achieves a more uniform reconstruction between shadowed and visible areas. This is demonstrated by the mean ratio of \(\frac{\mathrm{nL2}_{\mathrm{shad}}}{\mathrm{nL2}_{\mathrm{vis}}}=1.381\) across all samples in the test set, while the U-Net-based model \(\mathcal{M}_{\mathrm{U},10}\) still struggles with reconstructing shadowed areas, as inferred from its \(\frac{\mathrm{nL2}_{\mathrm{shad}}}{\mathrm{nL2}_{\mathrm{vis}}}=1.755\). The distribution of the \(\frac{\mathrm{nL2}_{\mathrm{shad}}}{\mathrm{nL2}_{\mathrm{vis}}}\)-ratio is displayed in Figure 14 for each test set sample, based on their \(L_{\mathrm{p}}\)-\(\epsilon\)-combination. The model \(\mathcal{M}_{\mathrm{U},10}\) generates the error surface shown in Figure 14a, which exhibits broadly varying levels of uniformity in the reconstruction, even for neighboring \(L_{\mathrm{p}}\)-\(\epsilon\)-combinations. In some cases, the reconstruction errors in shadowed areas exceed those in visible areas by more than 2.5 times. This undesired effect is much less pronounced for the FNO-based model \(\mathcal{M}_{\mathrm{F},9}\), as a comparison with Figure 14b reveals.
#### 4.3.3 Final comparison

For a final evaluation, either the general reconstruction quality nL2 on the entire \(r\)-domain can be chosen as the main performance criterion, which in our case would argue for the selection of the U-Net-based model \(\mathcal{M}_{\text{U},10}\), or the uniformity of the reconstruction indicated by \(\frac{\text{nL2}_{\text{shad}}}{\text{nL2}_{\text{vis}}}\), which would argue for the FNO-based model \(\mathcal{M}_{\text{F},9}\). This decision should be made based on the application case. If the ML-reconstructed wave surface is intended to be used as an initial condition for subsequent forecasting with the HOS method, we would expect a more uniform reconstruction to represent a more physical result, and consequently the FNO-based reconstruction to be less likely to affect the entire prediction in a negative way. Moreover, we observed that the FNO-based model's global approach allows for a considerably more meaningful reconstruction of shadowed areas even with fewer historical radar snapshots \(n_{\text{s}}\), which is not necessarily the case for the U-Net-based models. Besides, the trained FNO-based model \(\mathcal{M}_{\text{F},9}\) in this work allows for a much faster inference speed than the U-Net-based \(\mathcal{M}_{\text{U},10}\), even though \(\mathcal{M}_{\text{F},9}\) is constructed as a custom implementation and contains more weights than \(\mathcal{M}_{\text{U},10}\), which uses standard layers from the PyTorch library that are presumably optimized. More specifically, using the hardware specifications outlined in Section 3.3, our \(\mathcal{M}_{\text{F},9}\) is able to generate predictions for a new input sample in an average time of \(1.9\cdot 10^{-5}\,\text{s}\), which is approximately 20 times faster than the average time of \(3.7\cdot 10^{-4}\,\text{s}\) required by \(\mathcal{M}_{\text{U},10}\) for the same task.
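The per-sample inference times above were measured on the hardware of Section 3.3; a generic harness of the kind used for such measurements can be sketched as follows, with trivial stand-in functions instead of the trained networks:

```python
import time

def time_model(model, sample, n_runs=50):
    # average wall-clock inference time per sample [s]
    t0 = time.perf_counter()
    for _ in range(n_runs):
        model(sample)
    return (time.perf_counter() - t0) / n_runs

# trivial stand-ins for the trained networks, which map a 512-bin radar
# input to a 512-point elevation estimate
fast_model = lambda x: [2.0 * v for v in x]
slow_model = lambda x: [sum(x[max(0, i - 2):i + 3]) for i in range(len(x))]

sample = [0.1] * 512
t_fast = time_model(fast_model, sample)
t_slow = time_model(slow_model, sample)
```

Averaging over many repeated calls, as done here, is what makes the microsecond-scale per-sample differences between the two architectures measurable.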
## 5 Conclusion

This work introduces a novel machine learning-based approach for the phase-resolved reconstruction of ocean wave surface elevations from sparse radar measurements. To evaluate the performance of our approach, we generate synthetic nonlinear wave surface data for a wide range of sea states and corresponding radar surface data by incorporating both tilt and shadowing modulation mechanisms. Two neural network architectures, based on the U-Net and the Fourier neural operator, are trained, both provided with varying amounts of spatio-temporal radar surface measurement input. Our results and discussion indicate that both models are capable of producing high-quality wave surface reconstructions with average errors of SSP \(\leq 0.10\) when trained with a sufficient number of consecutive radar snapshots (\(n_{\rm s}=10\) or 9, respectively). Furthermore, both models generalize well across different sea states. On average, the U-Net-based model achieves slightly smaller errors across the entire spatial domain of each reconstructed wave sample, while the FNO-based model produces a more uniform wave reconstruction between areas that were shadowed and visible in the corresponding radar input. This observation is further confirmed by the edge case of instantaneous inversion, i.e., if the networks are trained with only a single radar snapshot in each input.

Figure 14: Error surfaces depicting the ratio \(\frac{\text{nL2}_{\text{shad}}}{\text{nL2}_{\text{vis}}}\) between the reconstruction quality achieved on shadowed and visible areas depending on the specific \(L_{\text{p}}\)-\(\epsilon\)-combination of the samples from the test set. The individual cell entries display the mean ratio across the 4-5 samples available for each specific parameter combination. The uniformity of the reconstructions achieved by the U-Net-based model \(\mathcal{M}_{\text{U},10}\) in (a) is thus compared to the one achieved by the FNO-based model \(\mathcal{M}_{\text{F},9}\) in (b).
The weakness of the U-Net-based model in the reconstruction of shadowing-affected areas can be attributed to the local operation of the network architecture, where its small convolutional kernels do not receive processable information when shifted across shadowed input areas containing zero intensities only. The problem can be circumvented by using the FNO-based network, which learns a global mapping between radar input and wave output in Fourier space. Its network structure already encodes prior physical knowledge about the periodic data structure apparent in ocean waves and is therefore possibly better suited for our use case. Our findings suggest that the FNO-based network may provide additional advantages, for example concerning smaller training data sets, noisy input radar data, or the reconstruction of two-dimensional ocean wave surfaces, which requires the additional reconstruction of wave direction components. Future investigations may explore these potentials.

## Appendix A Influence of neural network hyperparameters

To mitigate the high cost of obtaining a larger data set, a four-fold cross-validation approach with an independent test set was utilized for finding the network hyperparameters, as recommended for example by Raschka (2018). The data set of \(N=3120\) samples was divided into a fixed and independent test set comprising \(20\%\) or \(N_{\text{test}}=624\) samples, with the remaining \(2496\) samples partitioned into four equal-sized parts based on the governing sea state parameters \((L_{\text{p}},\epsilon)\) using a stratified data split technique to ensure equal representation of each wave characteristic in the resulting subsets. During each cross-validation step, one part with \(N_{\text{val}}=624\) samples was used as the validation set, and the remaining three parts with \(N_{\text{train}}=1872\) samples constituted the training set.
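The split described above can be sketched as follows. The per-combination allocation (24 samples per \((L_{\text{p}},\epsilon)\) pair, 4-5 of which go to the fixed test set) is inferred from the stated totals; the exact assignment used in the paper may differ:

```python
import random

# rebuild the data split described above: N = 3120 samples over 13 x 10
# (L_p, eps) combinations, a fixed 20% test set (624 samples), and the
# remaining 2496 samples distributed over four stratified folds
random.seed(0)
L_p_values = [80 + 10 * i for i in range(13)]             # 80 ... 200 m
eps_values = [round(0.01 * j, 2) for j in range(1, 11)]   # 0.01 ... 0.10

combos = [(lp, eps) for lp in L_p_values for eps in eps_values]
samples = [(lp, eps, j) for (lp, eps) in combos for j in range(24)]

test_set, folds, idx = [], [[] for _ in range(4)], 0
for c, (lp, eps) in enumerate(combos):
    group = [s for s in samples if s[:2] == (lp, eps)]
    random.shuffle(group)
    n_test = 4 if c % 5 == 0 else 5        # 4-5 test samples per combination
    test_set += group[:n_test]
    for s in group[n_test:]:               # round-robin over the four folds
        folds[idx % 4].append(s)
        idx += 1
```

This reproduces the stated sizes: 624 test samples and four folds of 624 samples each, with every \((L_{\text{p}},\epsilon)\) combination represented in each subset.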
Tables 2 and 3 present the results of the four-fold cross-validation hyperparameter studies for the U-Net- and FNO-based architectures. For both network types, the same fixed test set was excluded from this investigation. The metrics (nL2, \(\frac{\mathrm{nL2}_{\mathrm{shad}}}{\mathrm{nL2}_{\mathrm{vis}}}\), SSP) and the number of epochs necessary to attain the best performance represent average values across all four folds.

Table 3: Four-fold cross-validation hyperparameter study for the FNO-based architecture.

| layers \(n_{\mathrm{f}}\) | modes \(n_{\mathrm{m}}\) | width \(n_{\mathrm{w}}\) | #weights | epochs | nL2 train | nL2 val | \(\frac{\mathrm{nL2}_{\mathrm{shad}}}{\mathrm{nL2}_{\mathrm{vis}}}\) train | \(\frac{\mathrm{nL2}_{\mathrm{shad}}}{\mathrm{nL2}_{\mathrm{vis}}}\) val | SSP train | SSP val |
|---|---|---|---|---|---|---|---|---|---|---|
| 3 | 32 | 16 | 52,321 | 793 | 0.316 | 0.334 | 1.380 | 1.437 | 0.163 | 0.173 |
| 3 | 32 | 32 | 204,225 | 790 | 0.262 | 0.313 | 1.338 | 1.520 | 0.134 | 0.160 |
| 3 | 32 | 48 | 455,969 | 540 | 0.216 | 0.296 | 1.371 | 1.685 | 0.110 | 0.150 |
| 3 | 32 | 64 | 807,553 | 500 | 0.214 | 0.296 | 1.362 | 1.681 | 0.109 | 0.151 |
| 3 | 40 | 16 | 64,609 | 791 | 0.284 | 0.306 | 1.469 | 1.547 | 0.145 | 0.157 |
| 3 | 40 | 32 | 253,377 | 703 | 0.229 | 0.290 | 1.402 | 1.654 | 0.116 | 0.148 |
| 3 | 40 | 48 | 566,561 | 480 | 0.212 | 0.291 | 1.380 | 1.701 | 0.108 | 0.148 |
| 3 | 40 | 64 | 1,004,161 | 395 | 0.207 | 0.288 | 1.390 | 1.719 | 0.105 | 0.146 |
| 3 | 48 | 16 | 76,897 | 788 | 0.272 | 0.296 | 1.481 | 1.573 | 0.139 | 0.152 |
| 3 | 48 | 32 | 302,529 | 605 | 0.219 | 0.279 | 1.419 | 1.675 | 0.111 | 0.142 |
| 3 | 48 | 48 | 677,153 | 342 | 0.216 | 0.285 | 1.373 | 1.661 | 0.110 | 0.145 |
| 3 | 48 | 64 | 1,200,769 | 258 | 0.214 | 0.286 | 1.360 | 1.642 | 0.109 | 0.146 |
| 3 | 56 | 16 | 89,185 | 787 | 0.243 | 0.270 | 1.602 | 1.704 | 0.124 | 0.137 |
| 3 | 56 | 32 | 351,681 | 714 | 0.202 | 0.265 | 1.499 | 1.817 | 0.102 | 0.135 |
| 3 | 56 | 48 | 787,745 | 397 | 0.208 | 0.265 | 1.511 | 1.767 | 0.105 | 0.134 |
| 3 | 56 | 64 | 1,397,377 | 307 | 0.204 | 0.267 | 1.475 | 1.753 | 0.104 | 0.136 |
| 3 | 64 | 16 | 101,473 | 798 | 0.237 | 0.265 | 1.644 | 1.767 | 0.102 | 0.135 |
| 3 | 64 | 32 | 400,833 | 534 | 0.199 | 0.256 | 1.561 | 1.837 | **0.101** | **0.130** |
| 3 | 64 | 48 | 898,337 | 349 | 0.199 | 0.257 | 1.560 | 1.844 | 0.101 | 0.130 |
| 3 | 64 | 64 | 1,593,985 | 276 | 0.190 | 0.261 | 1.501 | 1.822 | 0.096 | 0.133 |
| 3 | 72 | 16 | 113,761 | 742 | 0.234 | 0.267 | 1.588 | 1.750 | 0.119 | 0.136 |
| 3 | 72 | 32 | 449,985 | 600 | 0.197 | 0.257 | 1.575 | 1.887 | 0.100 | 0.131 |
| 3 | 72 | 48 | 1,008,929 | 367 | 0.189 | 0.258 | 1.568 | 1.889 | 0.096 | 0.132 |
| 3 | 72 | 64 | 1,790,593 | 244 | 0.187 | 0.258 | 1.523 | 1.851 | 0.095 | 0.131 |
| 4 | 32 | 16 | 68,977 | 789 | 0.281 | 0.316 | 1.386 | 1.520 | 0.144 | 0.162 |
| 4 | 32 | 32 | 270,817 | 520 | 0.220 | 0.296 | 1.355 | 1.668 | 0.112 | 0.151 |
| 4 | 32 | 48 | 605,777 | 394 | 0.190 | 0.289 | 1.358 | 1.760 | 0.097 | 0.147 |
| 4 | 32 | 64 | 1,073,857 | 233 | 0.185 | 0.292 | 1.303 | 1.711 | 0.094 | 0.148 |
| 4 | 40 | 16 | 85,361 | 787 | 0.250 | 0.291 | 1.469 | 1.658 | 0.128 | 0.148 |
| 4 | 40 | 32 | 336,353 | 608 | 0.193 | 0.281 | 1.397 | 1.817 | 0.098 | 0.143 |
| 4 | 40 | 48 | 753,233 | 331 | 0.188 | 0.283 | 1.360 | 1.771 | 0.095 | 0.144 |
| 4 | 40 | 64 | 1,336,001 | 167 | 0.206 | 0.293 | 1.314 | 1.638 | 0.105 | 0.149 |
| 4 | 48 | 16 | 101,745 | 720 | 0.245 | 0.289 | 1.408 | 1.612 | 0.125 | 0.148 |
| 4 | 48 | 32 | 401,889 | 327 | 0.212 | 0.282 | 1.365 | 1.662 | 0.103 | 0.143 |
| 4 | 48 | 48 | 900,689 | 185 | 0.205 | 0.282 | 1.355 | 1.676 | 0.104 | 0.144 |
| 4 | 48 | 64 | 1,598,145 | 431 | 0.144 | 0.282 | 1.324 | 1.785 | 0.073 | 0.142 |
| 4 | 56 | 16 | 118,129 | 703 | 0.216 | 0.265 | 1.533 | 1.767 | 0.110 | 0.135 |
| 4 | 56 | 32 | 467,425 | 518 | 0.172 | 0.269 | 1.420 | 1.873 | 0.087 | 0.136 |
| 4 | 56 | 48 | 1,048,145 | 285 | 0.171 | 0.269 | 1.406 | 1.834 | 0.087 | 0.137 |
| 4 | 56 | 64 | 1,860,289 | 156 | 0.177 | 0.269 | 1.385 | 1.778 | 0.089 | 0.136 |
| 4 | 64 | 16 | 134,513 | 784 | 0.204 | 0.258 | 1.555 | 1.843 | 0.103 | 0.131 |
| 4 | 64 | 32 | 532,961 | 270 | 0.186 | 0.258 | 1.497 | 1.846 | 0.095 | 0.131 |
| 4 | 64 | 48 | 1,195,601 | 149 | 0.187 | 0.260 | 1.476 | 1.831 | 0.094 | 0.132 |
| 4 | 64 | 64 | 2,122,433 | 114 | 0.176 | 0.259 | 1.450 | 1.826 | 0.089 | 0.131 |
| 4 | 72 | 16 | 150,897 | 740 | 0.189 | 0.256 | … | … | … | … |

## Appendix B Loss curves

After determining appropriate hyperparameters for the U-Net-based and FNO-based models in Appendix A, the train and validation data from the four-fold cross-validation were merged. This combined data set was then used to train the models \(\mathcal{M}_{\text{U},n_{\text{s}}}\) and \(\mathcal{M}_{\text{F},n_{\text{s}}}\), with one radar snapshot in each sample's input (\(n_{\text{s}}=1\)) or either \(n_{\text{s}}=9\) or \(n_{\text{s}}=10\) historical radar snapshots in each input. The performance evaluation of these models was conducted on the previously excluded test set of \(N_{\text{test}}=624\) samples. The loss curves depicted in Figures B.15a-B.16b illustrate the model performance and the impact of different values of \(n_{\text{s}}\) throughout the training epochs. Deviation between the train and test loss curves indicates overfitting, characterized by excessive adaptation to the training data, resulting in poor generalization to new samples. Consequently, the best models \(\mathcal{M}\) were selected based on the lowest test loss within the 800 training epochs.

Figure 16: Loss curves for the training of the FNO-based model. Subfigure (a) depicts the loss of model \(\mathcal{M}_{\text{F},1}\) trained with one snapshot (\(n_{\text{s}}=1\)) in the radar input, where the best performance of \(\text{nL2}=0.242\) on the test set for model evaluation is reached after 721 epochs. Compared to the U-Net-based model \(\mathcal{M}_{\text{U},1}\), \(\mathcal{M}_{\text{F},1}\) does not seem to be susceptible to overfitting.
Subfigure (b) depicts model \(\mathcal{M}_{\text{F},9}\) trained with \(n_{\text{s}}=9\) instead, which increases the performance, resulting in \(\text{nL2}=0.153\) after 776 epochs of training. It can be expected that training beyond 800 epochs would further slightly improve the best performance on the test set.

Figure 15: Loss curves for the training of the U-Net-based model. Subfigure (a) depicts the loss of model \(\mathcal{M}_{\text{U},1}\) trained with one snapshot (\(n_{\text{s}}=1\)) in the radar input, where the best performance of \(\text{nL2}=0.329\) on the test set for model evaluation is reached after 150 epochs. Afterwards, the model tends to overfit the training data. Subfigure (b) depicts model \(\mathcal{M}_{\text{U},10}\) trained with \(n_{\text{s}}=10\) instead, which strongly increases the performance, resulting in \(\text{nL2}=0.123\) after 592 epochs of training.

## Appendix C Visualization of spectral representation

During the investigations of the FNO models (see Figure 5), a concern arose regarding the chosen number of Fourier series modes \(n_{\mathrm{m}}=64\) in the \(R_{i}\)-matrices, which might lead to the omission of significant frequency components in the wave data. To address this concern, we visualized the JONSWAP spectra employed to initialize the HOS wave simulations for one specific steepness value (\(\epsilon=0.08\)), since different \(\epsilon\) values only scale the amplitude of the spectral density, and for all peak wavelengths \(L_{\mathrm{p}}\in\{80,90,\ldots,190,200\}\) m, each corresponding to a specific \(\omega_{\mathrm{p}}\) and \(k_{\mathrm{p}}\). Based on the findings depicted and explained in Figure 17, we conclude that this concern is unfounded.
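The decay argument can be checked with the JONSWAP form itself. A sketch, assuming the standard JONSWAP parameterization with an illustrative \(\alpha\), the common \(\gamma=3.3\), and deep-water dispersion \(\omega^{2}=gk\):

```python
import math

def jonswap(omega, omega_p, alpha=0.01, gamma=3.3, g=9.81):
    # standard JONSWAP spectral density S(omega); alpha is illustrative here
    sigma = 0.07 if omega <= omega_p else 0.09
    b = math.exp(-((omega - omega_p) ** 2) / (2 * sigma ** 2 * omega_p ** 2))
    return (alpha * g ** 2 / omega ** 5
            * math.exp(-1.25 * (omega_p / omega) ** 4) * gamma ** b)

g = 9.81
k_p = 2 * math.pi / 80.0               # highest peak wavenumber (L_p = 80 m)
k_filt = 64 * 0.00351                  # wavenumber cutoff of the Fourier layers
omega_p = math.sqrt(g * k_p)           # deep-water dispersion omega^2 = g * k
omega_filt = math.sqrt(g * k_filt)

# spectral density at the cutoff relative to the peak
decay = jonswap(omega_filt, omega_p) / jonswap(omega_p, omega_p)
```

Even for the shortest waves in the data set, the spectral density at the cutoff has dropped to a few percent of its peak value, consistent with the conclusion that no important wave components are filtered out.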
## Acknowledgements

This work was supported by the Deutsche Forschungsgemeinschaft (DFG - German Research Foundation) [project number 277972093: Excitability of Ocean Rogue Waves].

## Declaration of interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

## Declaration of Generative AI and AI-assisted technologies in the writing process

The manuscript was completely written by the authors. Once the manuscript was completed, the authors used ChatGPT in order to improve its grammar and readability. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.

Figure 17: JONSWAP spectra used in the data generation for one exemplary steepness value \(\epsilon\), but varying peak wavelengths \(L_{\mathrm{p}}=80-200\) m. The shortest peak wavelength of \(L_{\mathrm{p}}=80\) m corresponds to the highest peak wavenumber of \(k_{\mathrm{p}}=0.079\) m\({}^{-1}\). The filtering wavenumber of \(k_{\mathrm{filt}}=n_{\mathrm{m}}\cdot\Delta k=64\cdot 0.00351\) m\({}^{-1}=0.2246\) m\({}^{-1}\), indicated by the dotted red line and defined by the Fourier layers in this work, consequently does not truncate important wave components in our data set-up.
2308.09178
Testing gravity with gauge-invariant polarization states of gravitational waves: Theory and pulsar timing sensitivity
The determination of the polarization modes of gravitational waves (GWs) and their dispersion relations is a crucial task for scrutinizing the viability of extended theories of gravity. A tool to investigate the polarization states of GWs is the well-known formalism developed by Eardley, Lee, and Lightman (ELL) [Phys. Rev. D 8, 3308 (1973)] which uses the Newman-Penrose (NP) coefficients to determine the polarization content of GWs in metric theories of gravity. However, if the speed of GWs is smaller than the speed of light, the number of NP coefficients is greater than the number of polarizations. To overcome this inconvenience we use the Bardeen formalism to describe the six possible polarization modes of GWs considering general dispersion relations for the modes. The definition of a new gauge-invariant quantity enables an unambiguous description of the scalar longitudinal polarization mode. We apply the formalism to General Relativity, scalar-tensor theories, $f(R)$-gravity, and a wide class of quadratic gravity. We derive an explicit relation between a physical observable (the derivative of the frequency shift of an electromagnetic signal), and the gauge-invariant variables. Then we find an analytical formula for the pulsar timing rms response to each polarization mode. To estimate the sensitivity of a single pulsar timing we focus on the case of a dispersion relation of a massive particle. The sensitivity curves of the scalar longitudinal and vector polarization modes change significantly depending on the value of the effective mass. The detection (or absence of detection) of the polarization modes using the pulsar timing technique has decisive implications for alternative theories of gravity. Finally, investigating a cutoff frequency in the pulsar timing band can lead to a more stringent bound on the graviton mass than that presented by ground-based interferometers.
Márcio E. S. Alves
2023-08-17T20:33:08Z
http://arxiv.org/abs/2308.09178v3
# Testing gravity with gauge-invariant polarization states of gravitational waves

###### Abstract

The determination of the polarization modes of gravitational waves (GWs), and of their dispersion relations, is decisive to scrutinize the viability of extended theories of gravity. A tool to investigate the polarization states of GWs is the Newman-Penrose (NP) formalism. However, if the speed of GWs is smaller than the speed of light, the number of NP variables is greater than the number of polarizations. To overcome this inconvenience we use the Bardeen formalism to describe the six possible polarization modes of GWs considering general dispersion relations for the modes. The definition of a new gauge-invariant quantity enables an unambiguous description of the scalar longitudinal polarization mode. We apply the formalism to General Relativity, scalar-tensor theories, and \(f(R)\)-gravity. To build a bridge between theory and experiment, we derive an explicit relation between a physical observable (the derivative of the frequency shift of an electromagnetic signal) and the gauge-invariant variables. From this relation, we find an analytical formula for the Pulsar Timing rms response to each polarization mode. To estimate the sensitivity of a single Pulsar Timing we focus on the case of the dispersion relation of a massive particle. The sensitivity curves of the scalar longitudinal and vector polarization modes change significantly depending on the value of the effective mass. The detection (or absence of detection) of the polarization modes using the Pulsar Timing technique has decisive implications for alternative theories of gravity. Finally, the investigation of a cutoff frequency in the Pulsar Timing band can lead to a more stringent bound on the graviton mass than that presented by ground-based interferometers.
August 2023

## 1 Introduction

The gravitational wave (GW) events detected so far by the Advanced LIGO and Advanced Virgo interferometers have shown their ability to impact our knowledge of physics and astrophysics. These observations offer a unique opportunity to test General Relativity (GR) in the dynamical regime. All the extensions to Einstein's theory predict modifications to the conventional GW signal due to one or more of three aspects, namely, changes in the waveform due to particularities in the generation mechanism, changes in the propagation due to new dispersion relations or differences in the interaction of the wave with the background geometry, and the number of independent polarization states of GWs. Considering these effects, tests of gravity performed with the data of the three observing runs of Advanced LIGO and Advanced Virgo have shown that all the observed events are consistent with GR [1, 2, 3]. However, the planned increase in the sensitivity of the detectors, the new generations of interferometers, the pulsar timing technique, and future space-based GW detectors such as LISA will be able to provide stringent tests of GR. In the case of the polarization states of GWs, a detection indicating the presence of a polarization mode beyond the usual plus and cross polarizations would imply a violation of Einstein's theory. In general, an alternative theory of gravity in four dimensions can predict up to six polarization modes of GWs, namely, two tensor, two vector, one scalar transversal, and one scalar longitudinal [4, 5]. To check the presence or absence of such modes in a specific theory it is appropriate to evaluate gauge-invariant quantities, which guarantees that they are related to truly physical observables. The most common strategy is to use the Newman-Penrose (NP) formalism [4, 5]. Within the NP formalism, one can describe the irreducible parts of the Riemann tensor in the linearized regime.
In such a framework, two real and two complex variables describe the polarization states of GWs in any four-dimensional metric theory of gravity. The NP formalism has been applied in the scope of several theories to reveal the polarization properties of GWs (see, e.g., [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]). In recent years, however, some criticisms have been raised in the literature regarding the use of the original NP formalism in theories that present one or more massive modes in the linearized regime [20, 21, 22, 23]. This is the case of a huge class of alternative theories, including the massive version of the Brans-Dicke theory [8], Horndeski theory [22] and \(f(R)\)-gravity [7, 12, 20]. The main criticism is related to the fact that if GWs travel at a speed different from the speed of light, then other NP quantities, beyond the original four, would be non-null. As a consequence, the original NP formalism would be incomplete and could result in misleading conclusions for some theories. Thus, to have a complete description of the polarizations one needs new NP variables in this case. In fact, to describe six polarizations there are not four but nine variables, representing fourteen components of the Riemann tensor [23]. Certainly, some variables could be more important than others depending on the dispersion relation and frequency. However, there is an alternative gauge-invariant formalism to identify the polarization modes in a metric theory of gravity. It consists in decomposing the metric into irreducible components according to their properties under spatial rotations [(3+1) decomposition] and then constructing gauge-invariant combinations of the metric perturbations. In the scope of cosmological perturbation theory such quantities are known as Bardeen variables [24, 25]. Recently, Wagle et al.
[17] used these gauge-invariant variables to study the polarizations of GWs in the context of two theories, namely, the dynamical Chern-Simons gravity and the Einstein-dilaton-Gauss-Bonnet gravity. They have found that the NP formalism and the (3+1) decomposition lead to the same result in both cases. The formalism of Bardeen variables has the advantage that the same set of variables applies to the description of the polarization modes of GWs at any frequency. In the present work, we review the Bardeen formalism and show how it can be applied to the identification of the polarization modes of GWs for any metric theory of gravity. We show that, in contrast with the case of the NP quantities, we have six variables to describe six polarizations, i.e., the number of variables is not tied to the dispersion relation of GWs. In order to describe the scalar longitudinal polarization mode we define a new gauge-invariant variable as a combination of the usual Bardeen scalar variables. We also obtain the frequency shift of a light signal induced by GWs in terms of the Bardeen variables in such a way that the final result is valid for any dispersion relation. This frequency shift is the elementary observable of interferometric detectors and of the Pulsar Timing technique. Finally, we derive the sensitivity of a single Pulsar Timing for each polarization mode considering the dispersion relation of a massive particle. The article is organized as follows. In Section (2) we give a short overview of the NP formalism. In Section (3) we describe the Bardeen formalism and show how it can be applied in the description of the polarization modes of GWs for any theory of gravity. We apply the formalism to the case of GR, scalar-tensor theories of gravity, and \(f(R)\)-gravity. The relation between the gauge-invariant variables and the one-way response to GWs, as well as the Pulsar Timing sensitivity, are obtained in Section (4). Finally, we conclude the article with Section (5).
Throughout the article, we use the metric signature \((-,+,+,+)\) and units such that \(c=\hbar=1\) unless otherwise mentioned.

## 2 An overview of the NP formalism

In the original NP formalism for the determination of polarization modes of GWs, Eardley _et al._ [4, 5] considered GWs propagating in the \(+z\) direction at the speed of light and defined a null complex tetrad \((\mathbf{k},\mathbf{l},\mathbf{m},\mathbf{\bar{m}})\). This tetrad is related to the Cartesian tetrad \((\mathbf{e}_{t},\mathbf{e}_{x},\mathbf{e}_{y},\mathbf{e}_{z})\) by \[\mathbf{k}=\frac{1}{\sqrt{2}}(\mathbf{e}_{t}+\mathbf{e}_{z}), \tag{1}\] \[\mathbf{l}=\frac{1}{\sqrt{2}}(\mathbf{e}_{t}-\mathbf{e}_{z}), \tag{2}\] \[\mathbf{m}=\frac{1}{\sqrt{2}}(\mathbf{e}_{x}+i\mathbf{e}_{y}), \tag{3}\] \[\mathbf{\bar{m}}=\frac{1}{\sqrt{2}}(\mathbf{e}_{x}-i\mathbf{e}_{y}). \tag{4}\] It is easy to verify that the tetrad vectors obey the relations: \[-\mathbf{k}\cdot\mathbf{l}=\mathbf{m}\cdot\mathbf{\bar{m}}=1, \tag{5}\] \[\mathbf{k}\cdot\mathbf{m}=\mathbf{k}\cdot\mathbf{\bar{m}}=\mathbf{l}\cdot \mathbf{m}=\mathbf{l}\cdot\mathbf{\bar{m}}=0. \tag{6}\] To denote components of tensors with respect to the null tetrad basis we use Roman subscripts, that is, \(P_{abc\dots}\equiv P_{\alpha\beta\gamma\dots}a^{\alpha}b^{\beta}c^{\gamma}\dots\), where \((a,b,c,\dots)\) run over \(({\bf k},{\bf l},{\bf m},\bar{\bf m})\) and \((\alpha,\beta,\gamma,\dots)\) run over \((t,x,y,z)\). The Riemann curvature tensor \(R_{\alpha\beta\gamma\delta}\) can be split into the irreducible parts: the Weyl tensor, the traceless Ricci tensor and the curvature scalar, whose tetrad components are named, respectively, \(\Psi_{A}\), \(\Phi_{AB}\) and \(\Lambda\), following the notation of Newman and Penrose [26, 27]. In general, in a four-dimensional space we have ten \(\Psi\)'s, nine \(\Phi\)'s and one \(\Lambda\), which are all algebraically independent.
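As a quick sanity check, the orthonormality relations (5)-(6) of the null tetrad can be verified numerically. The following is a minimal sketch with NumPy, using the \((-,+,+,+)\) Minkowski metric adopted in this article and the plain bilinear (not Hermitian) product implied by Eqs. (5)-(6):

```python
import numpy as np

# Minkowski metric with signature (-,+,+,+), as adopted in the article
eta = np.diag([-1.0, 1.0, 1.0, 1.0]).astype(complex)
e_t, e_x, e_y, e_z = np.eye(4, dtype=complex)

k  = (e_t + e_z) / np.sqrt(2)         # Eq. (1)
l  = (e_t - e_z) / np.sqrt(2)         # Eq. (2)
m  = (e_x + 1j * e_y) / np.sqrt(2)    # Eq. (3)
mb = (e_x - 1j * e_y) / np.sqrt(2)    # Eq. (4)

dot = lambda a, b: a @ eta @ b        # bilinear inner product (no conjugation)

# Eq. (5): -k.l = m.mbar = 1
assert np.isclose(-dot(k, l), 1.0) and np.isclose(dot(m, mb), 1.0)
# Eq. (6): all mixed products vanish
assert all(np.isclose(dot(a, b), 0.0)
           for a, b in [(k, m), (k, mb), (l, m), (l, mb)])
```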
However, when we restrict ourselves to null plane waves, we find that the differential and algebraic properties of \(R_{\alpha\beta\gamma\delta}\) reduce the number of independent components to six. In the above tetrad, we can choose the following quantities to represent these components [4, 5] \[\Psi_{2}=-\frac{1}{6}R_{lklk}, \tag{7}\] \[\Psi_{3}=-\frac{1}{2}R_{lkl\bar{m}}, \tag{8}\] \[\Psi_{4}=-R_{l\bar{m}l\bar{m}}, \tag{9}\] \[\Phi_{22}=-R_{lml\bar{m}}. \tag{10}\] Notice that since \(\Psi_{3}\) and \(\Psi_{4}\) are complex, each represents two independent polarizations. Of these six components, three are transverse to the direction of propagation, with two representing quadrupolar deformations [Re(\(\Psi_{4}\)) and Im(\(\Psi_{4}\))] and one monopolar deformation (\(\Phi_{22}\)). Three modes are longitudinal, with one axially symmetric stretching mode in the propagation direction (\(\Psi_{2}\)), and one quadrupolar mode in each of the two orthogonal planes containing the propagation direction [Re(\(\Psi_{3}\)) and Im(\(\Psi_{3}\))]. The above formalism is still accurate if the speed of GWs is close to the speed of light. In fact, corrections to the NP formalism are of the order \(O(\epsilon{\cal R})\) for a nearly null GW [28], where \(\epsilon=(c/v_{\rm gw})^{2}-1\), \(v_{\rm gw}\) is the speed of GWs and \({\cal R}\) is some component of the Riemann tensor. Therefore, considering the current upper bound for the graviton mass from the observations of binary black hole mergers (\(m_{g}\leq 1.27\times 10^{-23}\) eV/\(c^{2}\)) [3], we find \(\epsilon\sim 10^{-21}\) for the frequency 0.1 kHz (considering the dispersion relation of a massive graviton). For the frequency of 1 mHz we obtain \(\epsilon\sim 10^{-11}\). Thus, in these cases \(O(\epsilon{\cal R})\) is several orders of magnitude smaller than the NP quantities for null waves [which are of the order \(O({\cal R})\)].
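These orders of magnitude can be reproduced with a few lines of arithmetic. The sketch below (helper name `epsilon` is ours) evaluates \(\epsilon=(c/v_{\rm gw})^{2}-1\) for the massive dispersion relation, for which \((v_{\rm gw}/c)^{2}=1-(m_{g}c^{2}/hf)^{2}\), using the graviton-mass bound quoted above:

```python
# Order-of-magnitude check of epsilon = (c/v_gw)^2 - 1 for a massive graviton.
H_EV = 4.135667696e-15   # Planck constant [eV s]
M_G  = 1.27e-23          # graviton mass upper bound [eV/c^2], quoted from Ref. [3]

def epsilon(f_hz):
    x2 = (M_G / (H_EV * f_hz)) ** 2   # (m_g c^2 / h f)^2 = (m/omega)^2
    return x2 / (1.0 - x2)            # exact value; ~ x2 when x2 << 1

print(f"f = 0.1 kHz : eps ~ {epsilon(1e2):.0e}")    # of order 1e-21
print(f"f = 1 mHz   : eps ~ {epsilon(1e-3):.0e}")   # of order 1e-11
# At nanohertz frequencies (m/omega)^2 is of order one: the wave is close to
# its cutoff and epsilon is no longer a small correction.
```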
We conclude that such corrections are undetectable in the frequency band of ground-based and space-based interferometers. On the other hand, they can become important at lower frequencies. For instance, in the band of Pulsar Timing Arrays, we find \(\epsilon\sim 1\) for the above mass of the graviton and considering frequencies of the order of nanohertz. To have a complete description of the polarizations one needs new NP variables in this case. Considering plane waves propagating at a speed \(v_{\rm gw}\), Hyun _et al._ [23] have expressed the polarizations of GWs in terms of nine NP scalars, namely, \(\Psi_{0}\), \(\Psi_{1}\), \(\Psi_{2}\), \(\Psi_{3}\), \(\Psi_{4}\), \(\Phi_{00}\), \(\Phi_{02}\), \(\Phi_{22}\), \(\Lambda\). Since \(\Psi_{0}\), \(\Psi_{1}\), \(\Psi_{3}\), \(\Psi_{4}\), \(\Phi_{02}\) are complex, the nine scalars represent fourteen components of the Riemann curvature tensor needed to describe six polarization modes of GWs. To overcome this inconvenience one can use the formalism of Bardeen variables, which we describe in the next section.

## 3 Describing the polarization states of GWs within the Bardeen framework

### Helicity decomposition and gauge invariant perturbations

In this section, we introduce the helicity decomposition of the metric perturbation and define the gauge invariant variables that can be computed from them. Let us start by expanding the metric around flat space \(g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}\) with \(|h_{\mu\nu}|\ll 1\).
The perturbation \(h_{\mu\nu}\) can be decomposed considering the behavior of its components under spatial rotations as follows \[h_{00}=2\psi, \tag{11}\] \[h_{0i}=\beta_{i}+\partial_{i}\gamma, \tag{12}\] \[h_{ij}=-2\phi\delta_{ij}+\left(\partial_{i}\partial_{j}-\frac{1}{3}\delta_{ ij}\nabla^{2}\right)\lambda+\frac{1}{2}(\partial_{i}\epsilon_{j}+\partial_{j} \epsilon_{i})+h_{ij}^{\rm TT}, \tag{13}\] where the vector and tensor quantities are subject to the following constraints \[\partial_{i}\beta^{i}=0,\ \ \partial_{i}\epsilon^{i}=0, \tag{14}\] \[\partial^{j}h_{ij}^{\rm TT}=0,\ \ \delta^{ij}h_{ij}^{\rm TT}=0. \tag{15}\] Therefore, from the 10 degrees of freedom of \(h_{\mu\nu}\) we have four scalar degrees of freedom \(\{\psi,\phi,\gamma,\lambda\}\), four degrees of freedom in the two transverse vectors \(\{\beta_{i},\epsilon_{i}\}\) and two degrees of freedom in the transverse-traceless (TT) spatial tensor \(h_{ij}^{\rm TT}\). The gauge transformation of the metric perturbation \[h_{\mu\nu}\to h_{\mu\nu}-(\partial_{\mu}\xi_{\nu}+\partial_{\nu}\xi_{\mu}), \tag{16}\] with \(|\partial_{\mu}\xi_{\nu}|\) small preserves \(|h_{\mu\nu}|\ll 1\), thus it is a symmetry of the linearized theory in general. To understand how the harmonic variables behave under this gauge transformation, notice that the 4-vector \(\xi_{\mu}\) can also be decomposed as \[\xi_{0}=A, \tag{17}\] \[\xi_{i}=B_{i}+\partial_{i}C, \tag{18}\] with \[\partial_{i}B^{i}=0. \tag{19}\] Therefore, considering Eq. (16) and following the symmetry of the transformations under spatial rotations, we find that the gauge transformations of the scalar harmonic variables read \[\psi \rightarrow \psi-\dot{A},\] \[\phi \rightarrow \phi+\frac{1}{3}\nabla^{2}C,\] \[\lambda \rightarrow \lambda-2C,\] \[\gamma \rightarrow \gamma-\dot{C}-A. 
\tag{20}\] The transformations of the vectors are \[\beta_{i} \rightarrow \beta_{i}-\dot{B}_{i},\] \[\epsilon_{i} \rightarrow \epsilon_{i}-2B_{i}, \tag{21}\] while \(h_{ij}^{TT}\) is gauge-invariant \[h_{ij}^{TT}\to h_{ij}^{TT}. \tag{22}\] The Riemann tensor and, correspondingly, the Einstein tensor are gauge-invariant quantities. Therefore, one possible way of dealing with this gauge freedom is to impose gauge conditions on the metric perturbations. This is the usual way in GW physics. Several gauges are possible; some of the most common gauge choices are the synchronous gauge and the Newtonian gauge. The latter fixes the gauge uniquely; however, the conditions imposed by the former leave a residual gauge freedom. This ambiguity implies the existence of unphysical modes when the gravitational equations are solved. In particular, this can lead to an ambiguity in the determination of truly propagating GW modes in alternative theories of gravity. In Bardeen's words, "_only gauge-invariant quantities have any inherent physical meaning_" [24]. In this sense, Bardeen [24] constructed gauge-invariant quantities from combinations of the above scalar and vector variables to deal with perturbations in a FLRW background spacetime. In this article, we consider solely the Minkowski metric for the background. From the transformations (20) we see that we can obtain the following two gauge-invariant scalar combinations \[\Phi = -\phi-\frac{1}{6}\nabla^{2}\lambda, \tag{23}\] \[\Psi = -\psi+\dot{\gamma}-\frac{1}{2}\ddot{\lambda}. \tag{24}\] In the same way, from the transformations (21) we obtain one gauge-invariant transverse spatial vector \[\Xi_{i}=\beta_{i}-\frac{1}{2}\dot{\epsilon}_{i},\ \ \partial_{i}\Xi^{i}=0. \tag{25}\] Thus, we have six gauge-invariant degrees of freedom: two scalars, \(\Psi\) and \(\Phi\), two degrees of freedom in the spatial vector \(\Xi_{i}\) and two degrees of freedom in the transverse-traceless spatial tensor \(h_{ij}^{TT}\).
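The gauge invariance of the combinations (23)-(25) under the transformations (20)-(21) can be checked symbolically. Below is a minimal SymPy sketch, with all fields taken to depend on \((t,z)\) only (so \(\nabla^{2}\to\partial_{z}^{2}\)) and one Cartesian component of each vector treated as a scalar function for the algebraic check:

```python
import sympy as sp

t, z = sp.symbols('t z')
psi, phi, lam, gam, beta, eps, A, B, C = [
    sp.Function(n)(t, z) for n in
    ('psi', 'phi', 'lam', 'gam', 'beta', 'eps', 'A', 'B', 'C')]
d = sp.diff
lap = lambda f: d(f, z, 2)   # Laplacian for fields depending on (t, z) only

Phi = lambda ph, la: -ph - lap(la) / 6                       # Eq. (23)
Psi = lambda ps, ga, la: -ps + d(ga, t) - d(la, t, 2) / 2    # Eq. (24)
Xi  = lambda be, ep: be - d(ep, t) / 2                       # Eq. (25)

# gauge-transformed variables, Eqs. (20)-(21)
psi_n, phi_n  = psi - d(A, t), phi + lap(C) / 3
lam_n, gam_n  = lam - 2 * C, gam - d(C, t) - A
beta_n, eps_n = beta - d(B, t), eps - 2 * B

assert sp.simplify(Phi(phi_n, lam_n) - Phi(phi, lam)) == 0
assert sp.simplify(Psi(psi_n, gam_n, lam_n) - Psi(psi, gam, lam)) == 0
assert sp.simplify(Xi(beta_n, eps_n) - Xi(beta, eps)) == 0
```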
These gauge-invariant variables are the flat background version of the well-known Bardeen variables. We can now write the electric components of the Riemann tensor using the (3+1) decomposition of the metric perturbations \[R_{i0j0}=\partial_{i}\partial_{j}\Psi-\ddot{\Phi}\delta_{ij}+\frac{1}{2}( \partial_{i}\dot{\Xi}_{j}+\partial_{j}\dot{\Xi}_{i})-\frac{1}{2}\ddot{h}_{ij}^ {\rm TT}. \tag{26}\] As expected, the Riemann tensor depends only on the gauge-invariant variables \(\Phi\), \(\Psi\), \(\Xi_{i}\), and \(h_{ij}^{\rm TT}\). It will also be useful to know the components of the Ricci tensor \[R_{00}=\nabla^{2}\Psi-3\ddot{\Phi}, \tag{27}\] \[R_{0i}=-2\partial_{i}\dot{\Phi}-\frac{1}{2}\nabla^{2}\Xi_{i}, \tag{28}\] \[R_{ij}=-\partial_{i}\partial_{j}(\Phi+\Psi)-\delta_{ij}(-\ddot{\Phi}+\nabla^{2}\Phi)-\frac{1}{2}(\partial_{i}\dot{\Xi}_{j}+\partial_{j}\dot{\Xi}_{i})-\frac{1}{2}\Box h_{ij}^{\rm TT}, \tag{29}\] and the curvature scalar \[R=-2\nabla^{2}\Psi+6\ddot{\Phi}-4\nabla^{2}\Phi. \tag{30}\] From the above expressions we find the components of the Einstein tensor \[G_{00}=-2\nabla^{2}\Phi,\quad G_{0i}=R_{0i}, \tag{31}\] \[G_{ij}=R_{ij}-\frac{1}{2}\delta_{ij}R. \tag{32}\] If \(\dot{\Psi}\neq 0\) it will also prove useful to define the following gauge-invariant variable \[\Theta\equiv\eta_{\Psi}^{2}\Psi-\Phi, \tag{33}\] where \[\eta_{\Psi}\equiv\left|\frac{\nabla\Psi}{\dot{\Psi}}\right|. \tag{34}\] The physical meaning of \(\Theta\) will be clear in the next section.

### Description of the polarization states of GWs with Bardeen variables

The six degrees of freedom encoded in the four gauge-invariant variables defined above can be radiative or non-radiative depending on the underlying theory of gravity. It is well known that the transverse-traceless tensor \(h_{ij}^{\rm TT}\) is the only radiative quantity in the GR theory.
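Equation (26) can be verified directly from the standard linearized Riemann tensor, \(R_{\mu\nu\rho\sigma}=\frac{1}{2}(\partial_{\nu}\partial_{\rho}h_{\mu\sigma}+\partial_{\mu}\partial_{\sigma}h_{\nu\rho}-\partial_{\nu}\partial_{\sigma}h_{\mu\rho}-\partial_{\mu}\partial_{\rho}h_{\nu\sigma})\). The sketch below checks the scalar sector of the \(zz\) component, for fields depending on \((t,z)\) only:

```python
import sympy as sp

t, z = sp.symbols('t z')
psi, phi, lam, gam = [sp.Function(n)(t, z) for n in ('psi', 'phi', 'lam', 'gam')]
d = sp.diff

# Scalar sector of Eqs. (11)-(13) for fields depending on (t, z) only
h00 = 2 * psi
h0z = d(gam, z)
hzz = -2 * phi + sp.Rational(2, 3) * d(lam, z, 2)   # (d_z^2 - nabla^2/3) lam

# Linearized Riemann tensor, component R_{z0z0}
R_z0z0 = d(h0z, t, z) - d(hzz, t, 2) / 2 - d(h00, z, 2) / 2

# Gauge-invariant form predicted by Eq. (26): d_z^2 Psi - Phi-double-dot (delta_zz = 1)
Phi = -phi - d(lam, z, 2) / 6
Psi = -psi + d(gam, t) - d(lam, t, 2) / 2
assert sp.simplify(R_z0z0 - (d(Psi, z, 2) - d(Phi, t, 2))) == 0
```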
Moreover, for those theories of gravity predicting spin-0 polarization modes, it is expected that the scalars \(\Phi\) and \(\Psi\) are related through the field equations. This issue will be analyzed in the next section by means of examples in the context of scalar-tensor theories of gravity and \(f(R)\) gravity. In the present section, we describe in a general way the six polarization states of GWs by means of the Bardeen variables. To this end, we suppose that all the gauge-invariant quantities are radiative and independent. Since the variables are radiative, they are functions of the retarded time \[u\equiv t-\frac{\vec{k}\cdot\vec{r}}{\omega}, \tag{35}\] where \(\vec{k}\) is the GW wave vector and \(\omega\) is the angular frequency. Consider, for instance, the scalar \(\Psi=\Psi(u)\). In terms of the coordinates \(t\) and \(\vec{r}\) it obeys \[\nabla\Psi=-\frac{\vec{k}_{\Psi}}{\omega}\dot{\Psi}. \tag{36}\] Using this equation in the definition (34) we identify \(\eta_{\Psi}\) as the dispersion relation \[\eta_{\Psi}(\omega)=\frac{k_{\Psi}(\omega)}{\omega}, \tag{37}\] with \(k_{\Psi}=|\vec{k}_{\Psi}|\). Since each variable can have a different dispersion relation, we express them as functions of four different retarded times \(u_{A}=t-\eta_{A}\hat{k}\cdot\vec{r}\), and the four dispersion relations are expressed by the functions \(\eta_{A}(\omega)=k_{A}(\omega)/\omega\) with \(A=\Phi\), \(\Psi\), \(V\) and \(T\). Hence, using the definition of the variable \(\Theta\) (33) the electric components of the Riemann tensor (26) now read \[R_{0i0j}=\hat{k}_{i}\hat{k}_{j}\Theta^{\prime\prime}-(\delta_{ij}-\hat{k}_{i} \hat{k}_{j})\Phi^{\prime\prime}-\frac{1}{2}\eta_{V}(\hat{k}_{i}\Xi_{j}^{\prime \prime}+\hat{k}_{j}\Xi_{i}^{\prime\prime})-\frac{1}{2}h_{ij}^{\rm TT\prime \prime}, \tag{38}\] where primes denote derivatives with respect to the retarded times and \(\hat{k}_{i}\) are components of the unit wave vector. 
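The identification (37) follows from the chain rule; Eq. (36) can be checked symbolically in one line for a wave propagating along \(z\):

```python
import sympy as sp

t, z = sp.symbols('t z')
k, w = sp.symbols('k omega', positive=True)
Psi = sp.Function('Psi')

field = Psi(t - (k / w) * z)   # Psi(u), the retarded time of Eq. (35) along z

# Eq. (36): the z-component of nabla Psi equals -(k/omega) times dPsi/dt
assert sp.simplify(sp.diff(field, z) + (k / w) * sp.diff(field, t)) == 0
```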
The transverse conditions can be written as \(\hat{k}^{i}\Xi_{i}=0\) and \(\hat{k}^{i}h_{ij}^{\rm TT}=0\). The two tensor polarization states are described by \(h_{ij}^{\rm TT}\) as usual. The two spin-1 polarization states are described by the vector \(\Xi_{i}\). Although \(\Xi_{i}\) is transverse to the direction of propagation of the GW, notice that it enters \(R_{0i0j}\) with a term that is proportional to \(\hat{k}_{i}\), which comes from the spatial derivative of \(\Xi_{i}\). Therefore, we arrive at the known result that vector polarization affects the curvature in the transverse and longitudinal directions. In the second term on the right-hand side of Eq. (38), we recognize the quantity multiplying the scalar Bardeen variable \(\Phi\) as the projection operator \(P_{ij}=(\delta_{ij}-\hat{k}_{i}\hat{k}_{j})\), which has the property \(\hat{k}^{i}P_{ij}=0\). This operator projects any spatial vector on the subspace orthogonal to the direction of propagation of the GW. Thus, this term represents the scalar-transverse polarization mode. Finally, the term \(\hat{k}_{i}\hat{k}_{j}\Theta^{\prime\prime}\) expresses the overall longitudinal effect from the scalar variables, and thus \(\Theta\) is the degree of freedom responsible for describing the scalar-longitudinal polarization mode. To summarize, the six possible polarization modes of GWs can be described by the six degrees of freedom present in the gauge-invariant variables \(h_{ij}^{\rm TT}\), \(\Xi_{i}\), \(\Phi\), and \(\Theta\). Since this result is valid for any dispersion relation, it turns out that the Bardeen formalism is much simpler than the NP formalism in the determination of the polarization modes.

### Lorentz transformations of the gauge-invariant variables

The infinitesimal Lorentz transformations of the Bardeen variables were previously studied in Ref. [29]. Here we just recall the results found there and relate them to the polarization states of GWs.
Under an infinitesimal Lorentz transformation \(x^{\mu}\to x^{\prime\mu}=\Lambda^{\mu}{}_{\nu}x^{\nu}\) with \(\Lambda^{\mu}{}_{\nu}=\delta^{\mu}{}_{\nu}+\omega^{\mu}{}_{\nu}\), the metric perturbation changes according to \[h^{\prime}_{\mu\nu}=h_{\mu\nu}+\delta h_{\mu\nu}, \tag{39}\] where \[\delta h_{\mu\nu}=\omega_{\mu}{}^{\rho}h_{\rho\nu}+\omega_{\nu}{}^{\rho}h_{\mu \rho}. \tag{40}\] Using this transformation and remembering the decomposition of the metric perturbation in terms of the harmonic variables given by Eqs. (11), (12) and (13), one can find the change of each variable due to infinitesimal Lorentz transformations. If one restricts to boosts (\(\omega_{ij}=0\)), the changes in the gauge-invariant variables read [29] (note that our definition of the variable \(\Psi\) differs from that of Ref. [29] by a minus sign, i.e., one should change \(\Psi\to-\Psi\) in order to compare the equations presented in both articles) \[\delta\Phi=\frac{1}{2}\omega_{0}{}^{i}\Xi_{i}, \tag{41}\] \[\delta\Psi=-2\omega_{0}{}^{i}\nabla^{-2}\partial_{i}(\dot{\Phi}-\dot{\Psi})+ \omega_{0}{}^{i}\Xi_{i}-\frac{3}{2}\omega_{0}{}^{i}\nabla^{-2}\ddot{\Xi}_{i}, \tag{42}\] \[\delta\Xi_{i}=\omega_{0}{}^{j}\nabla^{-2}\left[\Box h^{TT}_{ij}-(\partial_{i} \dot{\Xi}_{j}+\partial_{j}\dot{\Xi}_{i})-2(\partial_{i}\partial_{j}-\delta_{ ij}\nabla^{2})(\Phi-\Psi)\right], \tag{43}\] \[\delta h^{TT}_{ij}=\omega_{0i}\Xi_{j}+\omega_{0j}\Xi_{i}-\delta_{ij}\omega_{0 }{}^{k}\Xi_{k}+\omega_{0}{}^{k}\nabla^{-2}\left[\partial_{i}\partial_{j}\Xi_{ k}-\partial_{i}\partial_{k}\Xi_{j}-\partial_{j}\partial_{k}\Xi_{i}-(\partial_{i}\dot{h}^{TT} _{jk}+\partial_{j}\dot{h}^{TT}_{ik})\right]. \tag{44}\] Therefore, as a consequence of the decomposition scheme, the Bardeen variables transform among themselves under boosts.
In the analysis of the linearized vacuum field equations of a theory of gravity, one can find the governing equations for the Bardeen variables and also possible relations between the scalars \(\Phi\) and \(\Psi\). Although the variables are not Lorentz invariant in general, it is interesting to notice some particular cases:

1. \(h^{TT}_{ij}\neq 0,\ \Box h^{TT}_{ij}=0,\ \Xi_{i}=0,\ \Phi=\Psi\). The variables \(\Phi\), \(\Psi\) and \(\Xi_{i}\) are Lorentz invariant.
2. \(h^{TT}_{ij}\neq 0,\ \Box h^{TT}_{ij}\neq 0,\ \Xi_{i}=0,\ \Phi=\Psi\). The variables \(\Phi\) and \(\Psi\) are Lorentz invariant.
3. \(h^{TT}_{ij}\neq 0,\ \Xi_{i}=0,\ \Phi\neq\Psi\). The variable \(\Phi\) is the only Lorentz invariant.

As a consequence of cases (i) and (ii), the variable \(\Theta\) is also Lorentz invariant, and thus all the Lorentz observers measure the same scalar-longitudinal mode. The scalar-transversal polarization mode is Lorentz invariant in the three cases. On the other hand, the vector polarization mode is invariant only in the special case (i), for which it is null for all Lorentz observers. If \(\Xi_{i}\neq 0\) and/or \(h^{TT}_{ij}\neq 0\), the vector and tensor modes are not Lorentz invariant in general.

### Polarization states of GWs in some theories of gravity

The procedure for the determination of the polarization modes in an alternative theory of gravity starts, as usual, with the linearization of the vacuum field equations of the theory. The equations for the metric perturbations should be written in terms of the Bardeen variables. Finally, from these equations, it will be possible to determine which variables represent truly radiative modes, the number of independent degrees of freedom, and the dispersion relation related to each polarization mode. In this section, we illustrate this procedure by evaluating the polarization modes of GWs for General Relativity, scalar-tensor theories of gravity, and \(f(R)\)-gravity.
#### 3.4.1 General Relativity

Let us consider the Einstein-Hilbert action in the absence of matter sources and with a vanishing cosmological constant \[I=\frac{1}{16\pi G}\int d^{4}x\sqrt{-g}R. \tag{45}\] The vacuum Einstein equations are obtained by varying this action with respect to the metric \(g_{\mu\nu}\). They are \[G_{\mu\nu}=0. \tag{46}\] Let us expand the metric about the Minkowski spacetime \(g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}\) and write the Einstein equations to first order in \(h_{\mu\nu}\). If we use the gauge-invariant quantities described previously, we obtain the components of the Einstein tensor as given by Eqs. (31) and (32). From the \(00\) component, we obtain \[\nabla^{2}\Phi=0. \tag{47}\] With this result and using Eq. (30) in the trace equation \(g^{\mu\nu}G_{\mu\nu}=-R=0\) we have \[\ddot{\Phi}-\frac{1}{3}\nabla^{2}\Psi=0. \tag{48}\] From Eq. (47) we conclude that in the absence of matter \[\Phi=0, \tag{49}\] and with this result in Eq. (48) we have \(\nabla^{2}\Psi=0\) and then \[\Psi=0. \tag{50}\] Therefore, the two gauge-invariant scalars are non-propagating degrees of freedom in Einstein theory and vanish in the absence of matter fields. If we use this result in the \(0i\) components of Eq. (46) along with Eq. (31), we obtain a similar result for the vector modes \[\nabla^{2}\Xi_{i}=0\ \Rightarrow\ \Xi_{i}=0. \tag{51}\] Applying the above results and Eq. (32) in the spatial components of the field equations \(G_{ij}=0\), we find the following equation for the gauge-invariant tensor perturbation \[\Box h^{TT}_{ij}=0. \tag{52}\] Therefore, we see that only the tensor degrees of freedom are radiative, since they obey a wave equation. In the absence of matter, \(\Phi=\Psi=0\) and \(\Xi_{i}=0\), and we have only two polarization states represented by \(h^{TT}_{ij}\), which are the usual \(+\) and \(\times\) polarizations. Since the tensor modes propagate at the speed of light, we have \(\eta_{T}=1\) for GR.
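As a minimal check that the tensor sector of GR propagates with \(\eta_{T}=1\), one can substitute a plane wave into Eq. (52); the sketch below verifies that the flat d'Alembertian enforces \(\omega^{2}=k^{2}\):

```python
import sympy as sp

t, z = sp.symbols('t z')
w, k = sp.symbols('omega k', positive=True)

# One TT component as a plane wave propagating along z
h = sp.exp(sp.I * (w * t - k * z))

# Flat-space d'Alembertian with signature (-,+,+,+): box = -d_t^2 + d_z^2
box_h = -sp.diff(h, t, 2) + sp.diff(h, z, 2)

# Eq. (52), box h = 0, enforces omega^2 = k^2, i.e. eta_T = k/omega = 1
assert sp.simplify(box_h / h - (w**2 - k**2)) == 0
```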
From the Lorentz transformations of the Bardeen variables given by Eqs. (41), (42), (43) and (44), we see that the absence of scalar and vector polarization modes is a Lorentz-invariant statement in the GR case.

#### 3.4.2 Scalar-tensor theories of gravity

For simplicity, we restrict ourselves to scalar-tensor theories of gravity whose action can be written in the following form in the absence of matter [30, 31] \[I=\frac{1}{16\pi}\int d^{4}x\sqrt{-g}\left[\varphi R-\frac{\varpi(\varphi)}{ \varphi}\nabla^{\mu}\varphi\nabla_{\mu}\varphi+V(\varphi)\right]. \tag{53}\] However, as will become clear at the end of this subsection, the results we will find are valid for more general scenarios encompassed by the Horndeski theory. In the theory described by (53), gravity is mediated not only by the metric but also by a scalar field \(\varphi\); there is a coupling function \(\varpi(\varphi)\), and \(V(\varphi)\) is a generic scalar field potential. Varying this action with respect to the metric and the scalar field we obtain \[G_{\mu\nu}=\frac{1}{2}\varphi^{-1}V(\varphi)g_{\mu\nu}+\varpi(\varphi) \varphi^{-2}\Big{(}\nabla_{\mu}\varphi\nabla_{\nu}\varphi-\frac{1}{2}g_{\mu\nu}\nabla_{\alpha}\varphi\nabla^{\alpha} \varphi\Big{)}+\varphi^{-1}(\nabla_{\mu}\nabla_{\nu}\varphi-g_{\mu\nu}\Box \varphi), \tag{54}\] and \[\Box\varphi+\frac{\varphi V^{\prime}(\varphi)-2V(\varphi)}{3+2\varpi(\varphi) }=-\frac{\varpi^{\prime}(\varphi)\nabla_{\alpha}\varphi\nabla^{\alpha}\varphi} {3+2\varpi(\varphi)}, \tag{55}\] where a prime denotes the derivative with respect to \(\varphi\). In the weak-field limit, we can expand the metric about the Minkowski background as in the GR case, while the scalar field is expanded as \[\varphi=\varphi_{0}+\delta\varphi,\ \ \delta\varphi\ll\varphi_{0}, \tag{56}\] where \(\varphi_{0}\) is a constant background value of \(\varphi\) determined from cosmology.
Expanding the potential and the coupling function about \(\varphi_{0}\) up to the second order we have \[V(\varphi) =V(\varphi_{0})+V^{\prime}(\varphi_{0})\delta\varphi+\frac{1}{2}V ^{\prime\prime}(\varphi_{0})\delta\varphi^{2}+\mathcal{O}(\delta\varphi^{3}), \tag{57}\] \[\varpi(\varphi) =\varpi(\varphi_{0})+\varpi^{\prime}(\varphi_{0})\delta\varphi+ \frac{1}{2}\varpi^{\prime\prime}(\varphi_{0})\delta\varphi^{2}+\mathcal{O}( \delta\varphi^{3}). \tag{58}\] To ensure asymptotic flatness we impose that \(V(\varphi_{0})=V^{\prime}(\varphi_{0})=0\). In this limit, the equation for the scalar field (55) reads \[\left(\square-m^{2}\right)\delta\varphi=0, \tag{59}\] where now \(\square\) is the D'Alembertian operator for the Minkowski background metric and the mass of the scalar field is defined by \[m^{2}\equiv-\frac{\varphi_{0}V^{\prime\prime}(\varphi_{0})}{3+2\varpi_{0}}, \tag{60}\] where \(\varpi_{0}=\varpi(\varphi_{0})\). The Eqs. (54) become \[G_{\mu\nu}[h]+\left(\eta_{\mu\nu}\square-\partial_{\mu}\partial_{\nu}\right) \frac{\delta\varphi}{\varphi_{0}}=0, \tag{61}\] where \(G_{\mu\nu}[h]\) is the linearized Einstein tensor. Replacing the Eq. (31) in the 00 component of this equation we find \[\nabla^{2}\left(\Phi+\frac{1}{2}\frac{\delta\varphi}{\varphi_{0}}\right)=0. \tag{62}\] Hence, in the absence of matter, we conclude that the gauge-invariant scalar \(\Phi\) is proportional to the scalar field perturbation \[\Phi=-\frac{1}{2}\frac{\delta\varphi}{\varphi_{0}}. \tag{63}\] Moreover, using the trace of Eqs. (61), the Eq. (59) and the perturbed expression of the Ricci scalar (30), it is easy to show that \[\Psi=\Phi=-\frac{1}{2}\frac{\delta\varphi}{\varphi_{0}}. \tag{64}\] The \(0i\) components of Eqs. (61) together with Eq. (31) lead to the following equation for the gauge-invariant vector perturbation \[\nabla^{2}\Xi_{i}=0, \tag{65}\] which, in the absence of matter, gives \[\Xi_{i}=0. \tag{66}\] Finally, using Eqs. 
(32), (64) and (66), the spatial components of Eqs. (61) yield \[\square h^{TT}_{ij}=0. \tag{67}\] Therefore, we conclude that in the scalar-tensor theory of gravity, there are three radiative degrees of freedom. The two tensor degrees of freedom obey the wave equation (67) in the same manner as in the Einstein theory. They propagate at the speed of light and, therefore, \(\eta_{T}=1\). On the other hand, there is a scalar degree of freedom \(\Phi=\Psi\) which obeys a Klein-Gordon type equation \[\left(\Box-m^{2}\right)\Phi=0. \tag{68}\] This equation has a solution \(\Phi\propto e^{ik_{\alpha}x^{\alpha}}\) with the wave 4-vector \(k^{\alpha}\equiv(\omega,\vec{k})\) respecting the dispersion relation \[\omega^{2}=k^{2}+m^{2}, \tag{69}\] and, therefore, the function \(\eta_{A}(\omega)=k_{A}(\omega)/\omega\) is given by \[\eta_{\Phi}(\omega)=\eta_{\Psi}(\omega)=\sqrt{1-\left(\frac{m}{\omega}\right)^{2}}. \tag{70}\] Notice that this is a propagating mode provided that \(\omega>m\). Hence, \(m\) is a cutoff frequency for the massive scalar degree of freedom. Evaluating the scalar longitudinal gauge-invariant variable defined by Eq. (33) we find \[\Theta=-\left(\frac{m}{\omega}\right)^{2}\Phi. \tag{71}\] Thus, there is a non-zero contribution to the Riemann tensor in the direction of propagation of the scalar GW. The meaning of this result is that although one has only one scalar degree of freedom, it generates the effects of the two scalar polarization states, the scalar transversal and the scalar longitudinal modes. If \(m=0\), then \(\Theta=0\): the longitudinal effect vanishes and one recovers the result of the original massless Brans-Dicke theory, for which there is only the scalar transversal polarization mode [4, 5]. Notice that GWs in the scalar-tensor theories of gravity enter in case (i) discussed in Section 3.3. Therefore, the variables \(\Phi\), \(\Theta\) and \(\Xi_{i}\) are Lorentz invariant quantities. 
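As a quick numerical illustration of Eqs. (69)-(71) (a sketch of ours, not part of the original analysis; natural units, with the mass \(m\) an assumed free parameter), the snippet below evaluates the dispersion function \(\eta(\omega)\) and the longitudinal-to-transversal amplitude ratio:

```python
import math

def eta(omega, m):
    """Dispersion function eta = k/omega of Eq. (70) for a mode of mass m."""
    if omega <= m:
        raise ValueError("no propagation: omega <= cutoff frequency m")
    return math.sqrt(1.0 - (m / omega) ** 2)

def theta_over_phi(omega, m):
    """Eq. (71): ratio of the longitudinal to the transversal scalar amplitude."""
    return -(m / omega) ** 2

m = 1.0  # cutoff mass in natural units (free parameter of this sketch)
assert abs(eta(100 * m, m) - 1.0) < 1e-3              # far above cutoff: luminal
assert eta(1.001 * m, m) < 0.05                       # just above cutoff: eta -> 0
assert abs(theta_over_phi(10 * m, m) + 0.01) < 1e-12  # suppressed by (m/omega)^2
```

The assertions reflect the limits discussed in the text: well above the cutoff the mode propagates at essentially the speed of light, while the longitudinal effect is suppressed by \((m/\omega)^{2}\) and disappears in the massless Brans-Dicke limit.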
Although in the present derivation we have considered the action (53), the results are valid for the Horndeski theory [32], which is the most general scalar-tensor theory of gravity with second-order equations of motion. This is because our results depend essentially on the weak field equations (59) and (61). The linearized field equations of Horndeski theory acquire exactly these forms, with a redefinition of the mass \(m\) (see Eqs. (17) and (18) of the Ref. [22]). In Ref. [22], there is a similar discussion about the polarization states in scalar-tensor theories, though a different approach has been used. #### 3.4.3 \(f(R)\)-gravity The action for the \(f(R)\)-gravity is defined as an extension of the Einstein-Hilbert action which, in the absence of matter, has the following form \[I=\frac{1}{16\pi G}\int d^{4}x\sqrt{-g}f(R), \tag{72}\] where \(f(R)\) is an arbitrary function of the Ricci scalar. If we vary this action with respect to the metric \(g_{\mu\nu}\) we obtain the vacuum field equations \[f^{\prime}R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}f-\nabla_{\mu}\nabla_{\nu}f^{ \prime}+g_{\mu\nu}\Box f^{\prime}=0, \tag{73}\] where in this section we use a prime to denote the derivative with respect to \(R\). Additionally, the trace of Eq. (73) gives \[\square f^{\prime}+\frac{Rf^{\prime}-2f}{3}=0. \tag{74}\] It is well known that the \(f(R)\) gravity is equivalent to a scalar-tensor theory of gravity [33]. Therefore, we expect to find the same results for the polarization modes as obtained in the previous section. We show this equivalence by directly solving the equations in the weak-field approximation. First of all, notice that the de Sitter spacetime is a vacuum solution of the theory. Therefore, different from Einstein's gravity, in order to study vacuum GWs in \(f(R)\)-gravity we should expand the metric around a non-flat background metric [34] \[g_{\mu\nu}=g_{\mu\nu}^{\rm(d)}+h_{\mu\nu}, \tag{75}\] where \(g_{\mu\nu}^{\rm(d)}\) is the de Sitter metric. 
In this sense, the perturbed Ricci scalar and the perturbed function \(f(R)\) become \[R = R_{\rm d}+\delta R+\mathcal{O}(h^{2}), \tag{76}\] \[f(R) = f(R_{\rm d})+f^{\prime}(R_{\rm d})\delta R+\frac{1}{2}f^{\prime \prime}(R_{d})\delta R^{2}+\mathcal{O}(h^{3}), \tag{77}\] where \(R_{\rm d}\) is the constant background curvature scalar of the de Sitter spacetime. With this expansion in the Eq. (74) we obtain \[\left(\square-m^{2}\right)\delta R=0, \tag{78}\] where \[m^{2}\equiv\frac{1}{3}\left(\frac{f_{\rm d}^{\prime}}{f_{\rm d}^{\prime\prime }}-R_{\rm d}\right), \tag{79}\] and \(f_{\rm d}=f(R_{\rm d})\). Notice that now all the covariant derivatives are evaluated using the de Sitter metric. Moreover, from the Eq. (73) we obtain the following equation for the perturbation of the Ricci tensor [34] \[\delta R_{\mu\nu}+\left(\frac{f_{\rm d}^{\prime\prime}}{f_{\rm d}^{\prime}}R_ {\mu\nu}^{\rm d}-\frac{1}{2}g_{\mu\nu}^{\rm d}\right)\delta R-\frac{1}{2}\frac {f_{\rm d}}{f_{\rm d}^{\prime}}\delta g_{\mu\nu}+\frac{f_{\rm d}^{\prime\prime }}{f_{\rm d}^{\prime}}\left(g_{\mu\nu}^{\rm(d)}\square-\nabla_{\mu}\nabla_{ \nu}\right)\delta R=0. \tag{80}\] At the scale size of the GW detectors, one can assume a nearly Minkowski background metric \(g_{\mu\nu}^{\rm(d)}\approx\eta_{\mu\nu}\) and \(R_{\mu\nu}^{\rm d}\approx 0\). Let us assume \(f(R)\) models for which \(f(R_{\rm d})\approx 0\) at this limit, but in general \(f_{\rm d}^{\prime}\neq 0\) and \(f_{\rm d}^{\prime\prime}\neq 0\). This is the case of some \(f(R)\) models which are viable alternatives to explain the accelerated expansion of the Universe [34]. In this limit the d'Alembertian operator in Eq. (78) is \(\square=\eta^{\mu\nu}\partial_{\mu}\partial_{\nu}\), and the Eq. (80) simplifies to \[G_{\mu\nu}[h]+\frac{1}{3m^{2}}\left(\eta_{\mu\nu}\square-\partial_{\mu} \partial_{\nu}\right)R[h]=0, \tag{81}\] where from Eq. (76) we have identified \(\delta R\approx R[h]\) in the flat space limit. 
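As an illustrative check of Eq. (79) (our worked example, not taken from the text), consider the quadratic model \(f(R)=R+\alpha R^{2}\), for which \(f^{\prime}(R)=1+2\alpha R\) and \(f^{\prime\prime}(R)=2\alpha\):

```latex
% Illustrative example: f(R) = R + \alpha R^2 (a Starobinsky-type model)
m^{2} = \frac{1}{3}\left(\frac{f^{\prime}_{\rm d}}{f^{\prime\prime}_{\rm d}}
        - R_{\rm d}\right)
      = \frac{1}{3}\left(\frac{1+2\alpha R_{\rm d}}{2\alpha} - R_{\rm d}\right)
      = \frac{1}{3}\left(\frac{1}{2\alpha} + R_{\rm d} - R_{\rm d}\right)
      = \frac{1}{6\alpha}
```

Note that the dependence on \(R_{\rm d}\) cancels exactly for this model, so the scalar degree of freedom becomes heavy as \(\alpha\to 0\) and decouples, recovering GR.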
Now, comparing this equation with Eq. (61) we can make the identification \(\delta\varphi/\varphi_{0}\to R/(3m^{2})\). Therefore, we can follow the same procedure as in the case of scalar-tensor theories to find the equations for the gauge invariants in \(f(R)\)-gravity \[\left(\Box-m^{2}\right)\Phi=0, \tag{82}\] \[\nabla^{2}\Xi_{i}=0, \tag{83}\] \[\Box h_{ij}^{TT}=0, \tag{84}\] where \[\Psi=\Phi=-\frac{R}{6m^{2}}, \tag{85}\] and again we conclude that \[\Xi_{i}=0. \tag{86}\] Thus, as in the case of scalar-tensor theories of gravity, we conclude that \(f(R)\)-gravity presents three propagating degrees of freedom: the usual two tensor modes propagating at the speed of light and one scalar degree of freedom with transversal and longitudinal behavior. An analogous discussion about the degrees of freedom and the polarization states of GWs in \(f(R)\)-gravity can be found, e.g., in Ref. [20], where the Lorentz gauge has been used. In terms of Lorentz transformations, GWs in the scope of \(f(R)\)-gravity also enter in case (i) of Section 3.3. #### 3.4.4 Number of radiative degrees of freedom versus number of polarization modes A note is in order about the distinction between the number of radiative degrees of freedom and the number of polarization modes of GWs in gravity theories. This discussion, together with some criticisms, appears in several recent works, e.g., [20] and [22]. In the language of [4, 5], the number of polarization modes corresponds to the way GWs interact with a sphere of test particles. It has nothing to do with the number of independent dynamical degrees of freedom of the linearized theory, which can be smaller or larger than the number of polarization modes. In the case of a massless scalar polarization mode, for instance, relative acceleration between the particles in the sphere is observed in the direction orthogonal to the wave vector \(\vec{k}\). 
On the other hand, for a massive scalar mode relative accelerations are also generated for particles located in the direction of propagation of the wave. In the latter case, we say that the theory presents two scalar polarization modes, though these modes are not independent. Therefore, GWs in the \(f(R)\)-gravity and the scalar-tensor theories present four polarization modes if \(m\neq 0\), with three independent radiative degrees of freedom. For these theories, the number of polarization modes and independent degrees of freedom agrees only if \(m=0\). In the present work, we have defined a gauge-invariant variable \(\Theta\) in Eq. (33) which enables an unambiguous determination of the existence of the scalar longitudinal GW mode. ## 4 Pulsar Timing sensitivity The elementary observable for interferometric detectors and for the Pulsar Timing technique is the "one-way" fractional frequency shift \(y(t)=[\nu(t)-\nu_{0}]/\nu_{0}\), where \(\nu(t)\) is the frequency of an electromagnetic signal at the time of reception \(t\) and \(\nu_{0}\) is the unperturbed frequency. A full gauge-invariant derivation of \(y\) was obtained by Koop and Finn [35]. Their derivation is quite general and includes the GW effect and all possible contributions from the background curvature (e.g., in the case of pulsar timing, Romer delay, aberration, Shapiro time delay, and other effects appear naturally). Furthermore, no assumptions were made about the size of the detectors compared to the GW wavelength, nor about the dispersion relation of GWs. For the purposes of the present article, it suffices to consider the special case of a Minkowski background. 
If in addition, we consider that the source and the receiver of the electromagnetic signal are at rest in the same global Lorentz frame, the equation for \(y\) can be written as [35] \[\frac{dy(t)}{dt}=-\int_{0}^{\lambda_{R}}R_{0i0j}n^{i}n^{j}d\lambda, \tag{87}\] where \(n^{i}\) is the spatial unit vector in the direction of the link between the source and the receiver, and \(\lambda\) is an affine parameter along this link. Therefore, the GW contribution to the time derivative of the frequency shift \(y\) is given by the projection of the Riemann tensor integrated along the unperturbed null geodesic linking the source and the receiver of the electromagnetic signal. Recently, Blaut [36] found the same result though using a quite different approach. Notice that in the Koop and Finn derivation, there is no specification of the field equations of the underlying theory of gravity. Only the curvature properties of the spacetime were taken into account such as the validity of the geodesic deviation equation. Therefore, Eq. (87) is valid for all theories of gravity with these properties, presenting any number of polarization modes. Thus, this gauge-invariant equation is appropriate for obtaining the sensitivity of interferometers and particularly of the Pulsar Timing technique to the polarization modes of GWs in alternative theories of gravity. To evaluate the one-way response let us consider the emitter of a light signal located at point 1 at a distance \(L\) from the receiver. The receiver is located at point 2 at the origin of the coordinate system. The trajectory of the light signal can be parametrized as \[t=t_{2}-(L-\lambda),\ \ \vec{r}=(L-\lambda)\hat{n}, \tag{88}\] with \(\lambda\in[0,\ L]\); \(t_{1}\) is the time of emission, \(t_{2}=t_{1}+L\) is the time of reception and \(\hat{n}\) is the unit vector pointing from 1 to 2. 
Within this parametrization, the retarded times are given by \[u_{A}=t_{2}-(1+\eta_{A}\mu)(L-\lambda),\ \ \lambda\in[0,\ L], \tag{89}\] where we have defined \(\mu\equiv\hat{k}\cdot\hat{n}\). Now, changing the variable of integration to the retarded time \(u_{A}\), the integration along the unperturbed trajectory of the light signal in Eq. (87) can be performed. Using the Eq. (38) in (87) we find \[\frac{dy}{dt}= -\left(\frac{\mu^{2}}{1+\eta_{S}\mu}\right)\left[\Theta^{\prime}(t )-\Theta^{\prime}\left(t-(1+\eta_{S}\mu)L\right)\right]\] \[+\left(\frac{1-\mu^{2}}{1+\eta_{S}\mu}\right)\left[\Phi^{\prime}( t)-\Phi^{\prime}\left(t-(1+\eta_{S}\mu)L\right)\right]\] \[+\left(\frac{\eta_{V}\mu\ n^{i}}{1+\eta_{V}\mu}\right)\left[\Xi_ {i}^{\prime}(t)-\Xi_{i}^{\prime}\left(t-(1+\eta_{V}\mu)L\right)\right]\] \[+\frac{1}{2}\left(\frac{n^{i}n^{j}}{1+\eta_{T}\mu}\right)\left[h_ {ij}^{\mathrm{TT}\prime}(t)-h_{ij}^{\mathrm{TT}\prime}\left(t-(1+\eta_{T}\mu )L\right)\right]. \tag{90}\] In the final expression, we have replaced the time of reception \(t_{2}\to t\), a prime denotes derivative with respect to the retarded time and, for simplicity, we have considered \(\eta_{\Psi}=\eta_{\Phi}=\eta_{S}\). In the synchronous gauge and for \(\eta_{A}=1\), the above equation coincides with the frequency shift derived in Refs. [37, 38]. However, in the present form, we have obtained an explicit relation between a physical observable (the time derivative of the frequency shift) with gauge-invariant quantities that describe the six possible polarization states of GWs. Furthermore, we have not made any hypotheses regarding the four dispersion relations except the equality of the dispersion relations of the scalar modes. To derive the Pulsar Timing response, let us consider the Earth located at the origin of a Cartesian system of coordinates with unit vectors \((\hat{i},\hat{j},\hat{k})\) oriented in the \(x\), \(y\) and \(z\) directions respectively. 
The GW wave vector is in the direction of \(\hat{k}\) and the vector \(L\hat{n}\) locates the Pulsar, where \(\hat{n}\) is the unit vector pointing from Earth to the Pulsar. The Pulsar is emitting electromagnetic signals continuously which are detected on Earth. With the help of Eq. (90) we can obtain the Fourier transform of the frequency shift induced by GWs on the signal emitted by the Pulsar. Using the property of the time derivative of the Fourier transform and the relation between the time \(t\) and the retarded time we find \[\tilde{y}(f)=\tilde{y}_{SL}(f)+\tilde{y}_{ST}(f)+\tilde{y}_{V}(f)+\tilde{y}_{ T}(f), \tag{91}\] where the Fourier transforms of the induced frequency shifts due to each gauge-invariant variable are given by \[\tilde{y}_{SL}(f) \equiv-\left(\frac{\mu^{2}}{1+\eta_{S}\mu}\right)H_{SL}(f)\left[ 1-e^{i2\pi fL(1+\eta_{S}\mu)}\right], \tag{92}\] \[\tilde{y}_{ST}(f) \equiv\left(\frac{1-\mu^{2}}{1+\eta_{S}\mu}\right)H_{ST}(f)\left[ 1-e^{i2\pi fL(1+\eta_{S}\mu)}\right],\] (93) \[\tilde{y}_{V}(f) \equiv\left(\frac{\eta_{V}\mu\ n^{i}}{1+\eta_{V}\mu}\right) \tilde{\Xi}_{i}(f)\left[1-e^{i2\pi fL(1+\eta_{V}\mu)}\right],\] (94) \[\tilde{y}_{T}(f) \equiv\frac{1}{2}\left(\frac{n^{i}n^{j}}{1+\eta_{T}\mu}\right) \tilde{h}_{ij}^{TT}(f)\left[1-e^{i2\pi fL(1+\eta_{T}\mu)}\right], \tag{95}\] where \(H_{SL}(f)\equiv\tilde{\Theta}(f)\) and \(H_{ST}(f)\equiv\tilde{\Phi}(f)\) are the frequency-dependent wave amplitude for the scalar longitudinal and scalar transversal polarizations respectively. For the polarization mode '\(A\)' we define the angular response \(R_{A}\) of a single Pulsar Timing as \(R_{A}^{2}\equiv|\tilde{y}_{A}(f)|^{2}/H_{A}^{2}(f)\). It follows \[R_{SL}^{2}=2\left(\frac{\mu^{2}}{1+\eta_{S}\mu}\right)^{2}\Big{[}1-\cos\big{(} 2\pi fL(1+\eta_{S}\mu)\big{)}\Big{]}, \tag{96}\] and \[R_{ST}^{2}=2\left(\frac{1-\mu^{2}}{1+\eta_{S}\mu}\right)^{2}\Big{[}1-\cos\big{(} 2\pi fL(1+\eta_{S}\mu)\big{)}\Big{]}. 
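The complementary angular behavior of the two scalar responses in Eqs. (96) and (97) can be verified numerically. The sketch below is ours, not from the paper; it assumes the example values \(fL\approx 319\) (i.e. \(f=3.1\) nHz and \(L=1\) kpc expressed as a light-travel time, \(c=1\)) and a luminal mode \(\eta_{S}=1\):

```python
import math

def r2_sl(mu, fL, eta=1.0):
    """Eq. (96): squared angular response to the scalar longitudinal mode."""
    return (2 * (mu**2 / (1 + eta * mu))**2
            * (1 - math.cos(2 * math.pi * fL * (1 + eta * mu))))

def r2_st(mu, fL, eta=1.0):
    """Eq. (97): squared angular response to the scalar transversal mode."""
    return (2 * ((1 - mu**2) / (1 + eta * mu))**2
            * (1 - math.cos(2 * math.pi * fL * (1 + eta * mu))))

fL = 319.0  # assumed example: f = 3.1 nHz, L = 1 kpc, c = 1
assert r2_st(1.0, fL) == 0.0   # transversal response vanishes along the wave vector
assert r2_sl(0.0, fL) == 0.0   # longitudinal response vanishes orthogonally
# Strong enhancement for GWs traveling parallel to the light ray
# (mu -> -1, i.e. theta -> pi), as discussed later in the text:
assert r2_sl(-0.999, fL) > 1e3 * r2_sl(0.999, fL)
```

The last assertion previews the enhancement effect for longitudinal modes discussed below: the factor \((1+\eta_{S}\mu)^{-2}\) blows up as \(\mu\to-1\).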
\tag{97}\] In the case of vector and tensor polarization modes, we assume an elliptically polarized wave and then we average over the polarizations in order to find the response. For an elliptically polarized vector wave, we have \[\tilde{\Xi}_{i}(f)=H_{V}(f)\left(e^{i\varphi}\sin\Gamma\epsilon_{i}^{(1)}+\cos \Gamma\epsilon_{i}^{(2)}\right), \tag{98}\] where \(H_{V}(f)\) is the vector wave amplitude, \(\varphi\) and \(\Gamma\) are polarization parameters, \(\epsilon_{i}^{(1)}\) and \(\epsilon_{i}^{(2)}\) are two orthogonal unit polarization vectors and both are orthogonal to \(\hat{k}\). An analogous expression can be written for the usual tensor GWs \[\tilde{h}_{ij}^{TT}(f)=H_{T}(f)\left(e^{i\varphi}\sin\Gamma\varepsilon_{ij}^{+ }+\cos\Gamma\varepsilon_{ij}^{\times}\right), \tag{99}\] where we can use the pair of orthogonal vectors \((\hat{\mathbf{\epsilon}}^{(1)},\hat{\mathbf{\epsilon}}^{(2)})\) to define the two polarization tensors \[\varepsilon_{ij}^{+} = \epsilon_{i}^{(1)}\epsilon_{j}^{(1)}-\epsilon_{i}^{(2)} \epsilon_{j}^{(2)}, \tag{100}\] \[\varepsilon_{ij}^{\times} = \epsilon_{i}^{(1)}\epsilon_{j}^{(2)}+\epsilon_{i}^{(2)}\epsilon_{ j}^{(1)}. \tag{101}\] In our Cartesian coordinate system we chose \(\hat{\mathbf{\epsilon}}^{(1)}\) and \(\hat{\mathbf{\epsilon}}^{(2)}\) to coincide with \(\hat{i}\) and \(\hat{j}\) respectively. Moreover, let us consider the usual spherical coordinates \((\theta,\phi)\) associated with the vector \(L\hat{n}\) that locates the Pulsar. Notice that in this coordinate system \(\mu=\hat{n}\cdot\hat{k}=\cos\theta\). 
Then, in the case of vector and tensor waves, we can perform an average over the polarization angles \((\varphi,\Gamma)\) to find the angular Pulsar Timing response \[R_{V}^{2}=\frac{1}{3}\left(\frac{\eta_{V}\mu\sqrt{1-\mu^{2}}}{1+\eta_{V}\mu} \right)^{2}(1+\cos^{2}\phi)\Big{[}1-\cos\big{(}2\pi fL(1+\eta_{V}\mu)\big{)} \Big{]}, \tag{102}\] and \[R_{T}^{2}=\frac{1}{12}\left(\frac{1-\mu^{2}}{1+\eta_{T}\mu}\right)^{2}(1+\cos ^{2}2\phi)\Big{[}1-\cos\big{(}2\pi fL(1+\eta_{T}\mu)\big{)}\Big{]}, \tag{103}\] where we have included a factor \(1/2\) in both expressions since there are two vector polarizations in the first case and two tensor polarizations in the second case. We evaluate the rms of the response by performing an average over sources uniformly distributed over the celestial sphere. The final Pulsar Timing rms response to each polarization mode has the form \[R_{A}^{\rm rms}(f)=\frac{2\pi fL}{\sqrt{\eta_{A}}}\left[\sum_{n=1}^{5}\frac{a_{n} ^{A}(f)}{(2\pi fL)^{n}}I_{n}^{A}(f)\right]^{1/2}, \tag{104}\] where \(I_{n}(f)\) is a set of five elementary integrals \[I_{n}^{A}(f)=-\int_{2\pi fL(1-\eta_{A})}^{2\pi fL(1+\eta_{A})}x^{n-3}(\cos x-1)dx, \tag{105}\] where \(n=1,2,3,4,5\). Therefore, the rms of the responses differ only in the dispersion relations \(\eta_{A}\) and in the coefficients \(a_{n}^{A}\) given in the Appendix. These quantities can be functions of the frequency if the speed of propagation of GWs is different from the speed of light except in the case of the scalar longitudinal polarization for which the coefficients \(a_{n}^{SL}\) are independent of frequency for any speed. Notice that we have not considered any specific form for the dispersion relation so far. Therefore, the analytical expression for the response (104) is a completely general result. 
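The elementary integrals of Eq. (105) can be cross-checked numerically. For \(n=3\) the integrand is simply \(\cos x-1\), with antiderivative \(\sin x - x\), so \(I_{3}=(b-\sin b)-(a-\sin a)\) for limits \(a,b\). The snippet below (our illustrative verification, with arbitrary example values of \(fL\) and \(\eta\)) compares a midpoint-rule evaluation against this closed form:

```python
import math

def I_n(n, fL, eta):
    """Eq. (105) evaluated by a midpoint rule (illustrative accuracy only)."""
    a = 2 * math.pi * fL * (1 - eta)
    b = 2 * math.pi * fL * (1 + eta)
    N = 200_000
    h = (b - a) / N
    s = sum((a + (i + 0.5) * h) ** (n - 3) * (math.cos(a + (i + 0.5) * h) - 1.0)
            for i in range(N))
    return -s * h

fL, eta = 5.0, 0.8  # arbitrary example values for this check
a = 2 * math.pi * fL * (1 - eta)
b = 2 * math.pi * fL * (1 + eta)
# Closed form for n = 3: I_3 = (b - sin b) - (a - sin a)
exact = (b - math.sin(b)) - (a - math.sin(a))
assert abs(I_n(3, fL, eta) - exact) < 1e-6 * abs(exact)
```

The same routine evaluates the remaining \(n=1,2,4,5\) integrals, which enter the rms response of Eq. (104) weighted by the coefficients \(a_{n}^{A}\) given in the Appendix.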
In order to evaluate the effect of the dispersion relation on the GW response, henceforth we consider that each mode \(A\) has an effective mass \(m_{A}\) which results in \(\eta_{A}(f)=\sqrt{1-(m_{A}/2\pi f)^{2}}\). In Fig. 1 we show the Pulsar Timing rms response for the scalar longitudinal, scalar transversal, vector, and tensor polarization modes for a typical distance \(L=1\) kpc. Figure 1: We show the rms of the Pulsar Timing response for all the gauge-invariant variables of the polarization modes. For all cases, we have used the massive dispersion relation \(\eta_{A}(f)=\sqrt{1-(m_{A}/2\pi f)^{2}}\). The shaded region indicates the frequency range of the Pulsar Timing technique. Notice that if the mass is about that of the upper bound of the LIGO detector (\(1.27\times 10^{-23}\) eV) we have remarkable effects in this range. Finally, we define the sensitivity by \(\sqrt{S_{y}(f)B}/R_{A}^{\rm rms}\), where \(S_{y}(f)\) is the noise spectrum affecting the relative frequency shift of Pulsar Timing and \(B\) is the bandwidth corresponding to an integration time of 10 years (\(B=1\)cycle/10 years). Our estimated sensitivity is based on the noise model discussed in [39] for which \(S_{y}(f)\) is given by \[S_{y}(f)=[4.0\times 10^{-31}f^{-1}+3.41\times 10^{-8}f^{2}]\rm{Hz}^{-1}. \tag{106}\] The resulting Pulsar Timing sensitivities to the polarization states are shown in Fig. 2. Notice that the sensitivity to the scalar longitudinal mode is some orders of magnitude larger than the corresponding signal. Figure 2: Sensitivity curves for each gauge-invariant variable of the polarization modes considering the effect of the dispersion relation of massive GWs. The sensitivity to the massless tensor mode is shown in all figures (gray curve). Notice that as the mass approaches the upper bound of the LIGO detector (\(1.27\times 10^{-23}\) eV) we have remarkable effects in the sensitivity. 
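For orientation, the noise model of Eq. (106) can be evaluated directly. The short sketch below (our illustration, not from the paper) locates the frequency at which \(S_{y}(f)\) is minimized, i.e. where the \(f^{-1}\) timing-noise term and the \(f^{2}\) term cross over:

```python
import math

def S_y(f):
    """Pulsar Timing noise spectrum of Eq. (106), in Hz^-1."""
    return 4.0e-31 / f + 3.41e-8 * f**2

# Setting dS_y/df = 0 gives f_min = (4.0e-31 / 6.82e-8)^(1/3):
f_min = (4.0e-31 / 6.82e-8) ** (1.0 / 3.0)
assert 1e-8 < f_min < 3e-8   # ~1.8e-8 Hz, inside the Pulsar Timing band

# Coarse grid check over the Pulsar Timing band, 1e-9 to 1e-6 Hz:
grid = [10 ** (x / 10) for x in range(-90, -59)]
best = min(grid, key=S_y)
assert abs(math.log10(best) - math.log10(f_min)) < 0.2
```

This minimum of the noise spectrum sets the most sensitive region of the band, consistent with the shape of the sensitivity curves shown in Fig. 2.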
The sensitivity to the scalar longitudinal mode is some orders of magnitude better than the sensitivities of the other polarizations, and the sensitivity to the vector modes can be up to five times better than that of the tensor mode. The response decreases as the wavelength of the GWs becomes of the order of, or larger than, the distance from Earth to the Pulsar (long-wavelength limit). If \(m_{A}=0\) this happens at a very small frequency, far below the Pulsar Timing frequency band (\(10^{-9}-10^{-6}\) Hz) (see Fig. 1). On the other hand, for a non-null mass, a fast decrease in the response can occur in this band as the technique approaches the long-wavelength limit. The cutoff frequency for which the response vanishes is related to the mass by \[f_{c}=\left(\frac{m}{m_{\rm up}}\right)3.07\times 10^{-9}\ {\rm Hz}, \tag{107}\] where we have considered the upper bound on the graviton mass imposed by LIGO, \(m_{\rm up}=1.27\times 10^{-23}\) eV/\(c^{2}\) [3], as a fiducial mass. Obviously, the effective masses of the vector and scalar polarizations do not need to respect this upper bound since it was derived from detections of the tensor modes. Figure 3: The Pulsar Timing angular response for the tensor, vector, scalar transversal and scalar longitudinal polarization modes using a massive dispersion relation \(\eta_{A}(f)=\sqrt{1-(m_{A}/2\pi f)^{2}}\) for each case. We have considered a typical distance of \(L=1\) kpc and an angle \(\phi=0\). In Fig. 3 we show, as an example, the angular response (i.e., the frequency-dependent antenna pattern) given by Eqs. (96), (97), (102) and (103) at the frequency \(f=3.1\) nHz and \(\phi=0\) considering a single Pulsar. Depending on the value of the mass \(m\), we can have the long-wavelength regime at this frequency or not. 
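The numerical factor in Eq. (107) is just the Compton frequency of the fiducial mass. A short check (standard CODATA constants, an assumption of this sketch rather than values quoted in the paper):

```python
# Standard constants (CODATA values; assumed here, not quoted in the paper):
h_planck = 6.62607015e-34      # Planck constant, J s
eV = 1.602176634e-19           # electron volt, J
m_up = 1.27e-23 * eV           # LIGO graviton-mass bound, as an energy in J

# The response vanishes where the mode stops propagating (omega = m),
# i.e. at the Compton frequency f_c = m c^2 / h:
f_c = m_up / h_planck
assert abs(f_c - 3.07e-9) / 3.07e-9 < 0.01   # reproduces Eq. (107): ~3.07 nHz
```

Scaling \(m\) relative to \(m_{\rm up}\) then reproduces Eq. (107) for any effective mass.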
When the GW approaches the long-wavelength regime, the symmetry of the response around \(\theta=\pi/2\) is restored for all the polarization modes. This case is shown in red in Fig. 3 for each polarization. The behavior of the response, in this case, is similar to that of ground-based interferometers. At \(\theta=\pi/2\) the tensor polarization has the maximum response and the response vanishes for the scalar longitudinal and vector modes. On the other hand, the response for the scalar transversal mode is identical in form to the response for tensor polarization. In the same figure, we notice the oscillations in the response which comes from the square brackets in the Eqs. (96), (97), (102) and (103). In the present case, the angles for which this term vanishes for a given frequency \(f\) are given by \[\cos\theta_{n}=\frac{1}{\eta_{A}}\left(\frac{n}{fL}-1\right),\ \ n=0,1,2,\ldots \tag{108}\] As we mentioned earlier, for the massless case the Pulsar Timing is out of the long-wavelength regime for the entire frequency range. In this case, we can notice an asymmetry of the response of GWs propagating in the parallel directions of the electromagnetic signal (\(\pi/2<\theta<\pi\)) with respect to GWs propagating in the antiparallel directions (\(0<\theta<\pi/2\)). For GWs traveling in parallel directions, the response can be some orders of magnitude higher than those traveling in antiparallel directions. This effect occurs for tensor, vector, and scalar modes. However, for the scalar longitudinal and vector modes, one can notice a remarkable enhancement of the response. This enhancement effect has been noticed for the first time by the present author and a collaborator [37, 38]. It is associated with the longitudinal behavior of the mentioned polarization modes and with the relative direction of the GW wave vector \(\vec{k}\) with respect to the direction of propagation of the electromagnetic signal emitted by the Pulsar. 
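The null directions of Eq. (108) are easy to enumerate. The following sketch (our illustration, again with the assumed example values \(f=3.1\) nHz and \(L=1\) kpc, i.e. \(fL\approx 319\) with \(c=1\), and a luminal mode) counts them:

```python
import math

def null_angles(fL, eta=1.0):
    """Directions theta_n where the oscillatory factor vanishes, Eq. (108)."""
    angles = []
    n = 0
    while True:
        c = (n / fL - 1.0) / eta
        if c > 1.0:
            break
        if c >= -1.0:
            angles.append(math.acos(c))
        n += 1
    return angles

thetas = null_angles(319.0)  # assumed example: f = 3.1 nHz, L = 1 kpc, c = 1
assert len(thetas) == 639                # n runs over 0 .. 2 fL for a luminal mode
assert math.isclose(thetas[0], math.pi)  # n = 0 gives cos(theta) = -1
```

The density of these nulls, of order \(fL\) per hemisphere, explains the rapid oscillations of the angular response in Fig. 3 away from the long-wavelength regime.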
In the present scope, the physical origin of the enhancement effect in the response of longitudinal polarizations can be understood in light of Eq. (87). First of all, remember that in this case, the Riemann curvature tensor has components not only transverse to the direction of the propagation of the GW, but also in the longitudinal direction. Since the Riemann tensor is a function of the retarded time \(u\), and \(u\) depends on \(\theta\), light rays coming from different directions 'see' the curvature generated by the GW differently. Consider that we are out of the long-wavelength regime. The light rays traveling in the direction opposite to the GWs pass through several maxima and minima of the curvature, continuously changing their frequency. Since the final frequency shift measured at Earth is an integrated effect of the curvature, the result can be zero for some directions. On the other hand, those light rays propagating parallel or almost parallel to the GW experience fewer oscillations of the curvature. In this case, the final effect can be a higher frequency shift when compared with the anti-parallel case. This is because the average curvature is higher, generating an increase in the response as \(\theta\rightarrow\pi\). When one approaches the long-wavelength regime, the light signals coming from different directions experience fewer curvature oscillations and the final effect in the frequency shift becomes symmetric. Finally, in the long-wavelength regime, the oscillations of the curvature cannot be noticed at all in a one-way light travel. In this situation, we have the usual frequency-independent antenna patterns of ground-based interferometers. The same argument applies to the explanation of the asymmetry of the transversal polarizations (scalar transversal and tensor) out of the long-wavelength regime. 
But in this case, the curvature goes to zero as one approaches \(\theta=0\) or \(\theta=\pi\), resulting in a suppression of the enhancement effect for \(\theta\rightarrow\pi\). In Fig. 2, we notice that the graviton mass has a remarkable effect on the sensitivity curves as it approaches the upper bound of the LIGO detector (\(1.27\times 10^{-23}\) eV). For tensor and scalar transversal polarizations, the main effect is a limit in the sensitivity established by the cutoff frequency \(f_{c}\) given by the relation (107). On the other hand, for vector and scalar longitudinal polarization modes, we have a significant change in the shape of the sensitivity curve, including a change in the frequency of maximum sensitivity. The sensitivity curves for massive gravitons are indistinguishable from those of the massless case if the effective mass is two orders of magnitude smaller than the LIGO upper bound in the case of vector and scalar longitudinal polarizations, whereas for the transversal polarizations it is enough that the graviton mass is one order of magnitude smaller than \(m_{\rm up}\). Remember that \(m_{\rm up}\) was obtained from observations of the tensor mode. This means that, in principle, the effective mass of the vector and scalar polarizations can be greater than \(m_{\rm up}\). If \(m\) is about three orders of magnitude higher than \(m_{\rm up}\), these polarizations would be undetectable in the Pulsar Timing frequency band. ## 5 Conclusion We have shown that the Bardeen framework enables a clear description of the six polarization modes of GWs even if each mode has a general dispersion relation. The response given by Eq. (90) shows an explicit relation between a physical observable (the derivative of the frequency shift) and the gauge-invariant variables. Therefore, this relation means we have a bridge between theory and experiment, avoiding possible ambiguities of gauge choice. A new gauge-invariant variable was introduced [see Eq. 
(33)] aiming for an unambiguous description of the scalar longitudinal polarization mode. In the case of a single Pulsar Timing, we obtained an analytical formula for the rms response [see Eq. (104)] which is valid for any dispersion relation. In the case of a dispersion relation of a massive particle, we have seen that it has a significant impact on the Pulsar Timing sensitivity to scalar longitudinal and vector GWs. Remarkably, the effects of the mass on the Pulsar Timing sensitivity are particularly noticeable if it is of the order of the LIGO's upper bound for the graviton mass (\(m_{\rm up}\)). If the mass is two orders of magnitude smaller than \(m_{\rm up}\), the sensitivity curves are indistinguishable from the massless case. On the other hand, in the case of the scalar transversal and the tensor polarization modes, it is enough that the mass is one order of magnitude smaller than \(m_{\rm up}\) to disregard its effects on the sensitivity. The main physical effect in these cases is a limitation in the detectability of these modes established by a cutoff frequency that depends on the mass. Notice that the effects on the sensitivity appear in the case of Pulsar Timing because the cutoff frequency we have considered lies in the Pulsar Timing frequency band. But, in principle, the cutoff frequency can be higher than the Pulsar Timing band in the case of vector and scalar polarizations. If this happens, such modes would be undetectable by Pulsar Timing experiments. In other words, the absence of detection does not imply that extra polarization states beyond the tensor polarization do not exist. In the future, we plan to analyze other dispersion relations of GWs appearing in the literature to check their implications on the Pulsar Timing sensitivity. The detection (or absence of detection) of the polarization modes using the Pulsar Timing technique has decisive implications for alternative theories of gravity. 
Consider, for instance, the case of the theories studied in Section 3.4 for which the tensor mode is massless and the scalar modes can be massive. Suppose that the scalar mode has a mass of about that of the LIGO upper bound, then for frequencies approaching the cutoff \(f_{c}\sim 3\times 10^{-9}\) Hz the sensitivity of the scalar longitudinal polarization becomes worse than that of the tensor modes. Below this frequency, the scalar modes could not be detected (or even be produced!). Therefore, suppose we are looking for GWs only in a frequency band below \(f_{c}\), and we detect only tensor polarizations. We could be led to the wrong conclusion that the scalar modes do not exist. On the other hand, if we find evidence of the existence of a cutoff frequency for the scalar modes, but not for the tensor modes, this could corroborate the scalar-tensor theories of gravity or \(f(R)\)-gravity. Moreover, this would lead to a bound on the mass of the scalar mode. We have seen that the Pulsar Timing sensitivity to the scalar longitudinal mode is some orders of magnitude better than the sensitivity to tensor modes. However, depending on the theory of gravity this could not be an advantage in terms of detection. In the case of the theories we have analyzed, the amplitude of the scalar longitudinal mode is related to the amplitude of the scalar transversal mode through a factor \((m/\omega)^{2}\) [see Eq. (71)]. Therefore, if \(m\) is much smaller than the smallest detectable frequency of Pulsar Timing, the scalar-longitudinal mode can become undetectable. In this situation, one could still detect the scalar transversal mode if it is strong enough. Obviously, these results apply to scalar-tensor theories of gravity and \(f(R)\)-gravity. For other theories of gravity, the relation between \(\Theta\) and \(\Phi\) should be analyzed as well as the mechanism of generation of these GW modes. 
Finally, the evidence of a cutoff frequency for any polarization or even the evidence that such a cutoff is not in the Pulsar Timing band can lead to a more stringent bound on the graviton mass than that presented by ground-based interferometers. Similarly, Pulsar Timing detection presents a great opportunity to test gravity by imposing bounds on the polarization modes of GWs. The author thanks Dr. Massimo Tinto for helpful discussions and encouragement, and Livia R. Alves for continuous encouragement during the development of this work. ## Appendix Here we give the frequency-dependent quantities which appear in Eq. (104). For the scalar-longitudinal response \[a_{1}^{SL} = 1, \tag{109}\] \[a_{2}^{SL} = -4,\] (110) \[a_{3}^{SL} = 6,\] (111) \[a_{4}^{SL} = -4,\] (112) \[a_{5}^{SL} = 1. \tag{113}\] For the scalar-transversal response \[a_{1}^{ST} = \left[1-\frac{1}{\eta_{S}^{2}(f)}\right]^{2}, \tag{114}\] \[a_{2}^{ST} = \frac{4}{\eta_{S}^{2}(f)}\left[1-\frac{1}{\eta_{S}^{2}(f)}\right],\] (115) \[a_{3}^{ST} = \frac{2}{\eta_{S}^{2}(f)}\left[\frac{3}{\eta_{S}^{2}(f)}-1\right],\] (116) \[a_{4}^{ST} = -\frac{4}{\eta_{S}^{4}(f)},\] (117) \[a_{5}^{ST} = \frac{1}{\eta_{S}^{4}(f)}. \tag{118}\] For the vector response \[a_{1}^{V} =\frac{1}{4}\left[1-\frac{1}{\eta_{V}^{2}(f)}\right], \tag{119}\] \[a_{2}^{V} =\frac{1}{2}\left[\frac{2}{\eta_{V}^{2}(f)}-1\right],\] (120) \[a_{3}^{V} =\frac{1}{4}\left[1-\frac{6}{\eta_{V}^{2}(f)}\right],\] (121) \[a_{4}^{V} =\frac{1}{\eta_{V}^{2}(f)},\] (122) \[a_{5}^{V} =-\frac{1}{4\eta_{V}^{2}(f)}. \tag{123}\] For the tensor response \[a_{1}^{T} =\frac{1}{16}\left[1-\frac{1}{\eta_{T}^{2}(f)}\right]^{2}, \tag{124}\] \[a_{2}^{T} =\frac{1}{4\eta_{T}^{2}(f)}\left[1-\frac{1}{\eta_{T}^{2}(f)} \right],\] (125) \[a_{3}^{T} =\frac{1}{8\eta_{T}^{2}(f)}\left[\frac{3}{\eta_{T}^{2}(f)}-1\right]\] (126) \[a_{4}^{T} =-\frac{1}{4\eta_{T}^{4}(f)},\] (127) \[a_{5}^{T} =\frac{1}{16\eta_{T}^{4}(f)}. \tag{128}\]
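These coefficient tables translate directly into code. The following Python sketch is our own illustration, not part of the paper: it encodes \(a_{1}\dots a_{5}\) for each polarization as functions of \(\eta_{P}(f)\), which is treated as a plain numeric parameter, and spot-checks a few values against the closed forms above.

```python
# Frequency-dependent coefficients a_1..a_5 from the Appendix, as functions
# of eta = eta_P(f) for each polarization mode P.

def a_SL(eta):
    # Scalar-longitudinal: constant coefficients (the binomial pattern of (1-x)^4).
    return [1.0, -4.0, 6.0, -4.0, 1.0]

def a_ST(eta):
    # Scalar-transversal coefficients, Eqs. (114)-(118).
    e2 = eta**2
    return [(1 - 1/e2)**2,
            (4/e2) * (1 - 1/e2),
            (2/e2) * (3/e2 - 1),
            -4/e2**2,
            1/e2**2]

def a_V(eta):
    # Vector coefficients, Eqs. (119)-(123).
    e2 = eta**2
    return [0.25 * (1 - 1/e2),
            0.5 * (2/e2 - 1),
            0.25 * (1 - 6/e2),
            1/e2,
            -0.25/e2]

def a_T(eta):
    # Tensor coefficients, Eqs. (124)-(128).
    e2 = eta**2
    return [(1/16) * (1 - 1/e2)**2,
            (1/(4*e2)) * (1 - 1/e2),
            (1/(8*e2)) * (3/e2 - 1),
            -1/(4*e2**2),
            1/(16*e2**2)]

# Spot check at eta = 2 (eta^2 = 4) for the scalar-transversal mode.
print(a_ST(2.0))  # → [0.5625, 0.75, -0.125, -0.25, 0.0625]
```

A quick hand evaluation confirms the printed values, e.g. \(a_{1}^{ST}=(1-1/4)^{2}=0.5625\) and \(a_{4}^{ST}=-4/\eta^{4}=-0.25\) at \(\eta=2\).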
2303.13252
Nominal Sets in Agda -- A Fresh and Immature Mechanization
In this paper we present our current development on a new formalization of nominal sets in Agda. Our first motivation in having another formalization was to understand better nominal sets and to have a playground for testing type systems based on nominal logic. Not surprisingly, we have independently built up the same hierarchy of types leading to nominal sets. We diverge from other formalizations in how to conceive finite permutations: in our formalization a finite permutation is a permutation (i.e. a bijection) whose domain is finite. Finite permutations have different representations, for instance as compositions of transpositions (the predominant in other formalizations) or compositions of disjoint cycles. We prove that these representations are equivalent and use them to normalize (up to composition order of independent transpositions) compositions of transpositions.
Miguel Pagano, José E. Solsona
2023-03-23T13:37:04Z
http://arxiv.org/abs/2303.13252v1
# Nominal Sets in Agda ###### Abstract In this paper we present our current development on a new formalization of nominal sets in Agda. Our first motivation in having another formalization was to understand better nominal sets and to have a playground for testing type systems based on nominal logic. Not surprisingly, we have independently built up the same hierarchy of types leading to nominal sets. We diverge from other formalizations in how to conceive finite permutations: in our formalization a finite permutation is a permutation (i.e. a bijection) whose domain is finite. Finite permutations have different representations, for instance as compositions of transpositions (the predominant in other formalizations) or compositions of disjoint cycles. We prove that these representations are equivalent and use them to normalize (up to composition order of independent transpositions) compositions of transpositions. ## 1 Introduction Nominal sets were introduced to Computer Science by Gabbay and Pitts to give an adequate mathematical universe that permits the definition of inductive sets with binding [9]. Instead of taking equivalence classes of inductively defined sets (as in a formal treatment of, say, the Lambda Calculus) or a particular representation of the variables (as in the de Bruijn approach to Lambda Calculus), nominal sets have a notion of name abstraction that ensures all the properties expected for binders; in particular, alpha-equivalent lambda terms are represented by the same element of the nominal set of lambda terms. In this paper we present a new mechanization [11] of nominal sets. Most of the current mechanizations of nominal sets represent finite permutations as compositions of transpositions, where transpositions are represented by pairs of atoms and compositions as lists. In contrast, our starting point is permutations (i.e. bijective functions); finite permutations are permutations that can be represented by composition of transpositions. 
Moreover they conflate the set of atoms mentioned in a list with the domain of the (represented) permutation. Pondering about this issue, we decided to develop a "normalization" procedure for representations of finite permutations; in order to prove its correctness, we were driven to introduce a cycle notation. The rest of this paper is structured into four sections. In Sect. 2 we summarize the fundamentals of Nominal Sets; in Sect. 3 we explain the different representations of finite permutations and their equivalence; then, in Sect. 4 we present the most salient aspects of our mechanization in Agda; and finally in Sect. 5 we conclude by mentioning related works and contrasting them with our approach, indicating also our next steps. We assume some knowledge of Agda, but also hope that the paper can be followed by someone familiar with any other language based on type theory. ## 2 Fundamentals of Nominal Sets In this section we summarize the main concepts underlying the notion of Nominal Sets; for a more complete treatment we refer the reader to [14]. We repeat the basic definitions of group and group action. A _group_ is a set \(G\) with a distinguished element (\(\varepsilon\in G\), the _unit_), a binary operation (\(\_\cdot\_\)\(:G\times G\to G\), the _multiplication_), and a unary operation (\(\_^{-1}\)\(:G\to G\), the _inverse_), satisfying the following axioms: \[\begin{array}{llll}\mbox{Associativity:}&g_{1}\cdot(g_{2}\cdot g_{3})\;=\; (g_{1}\cdot g_{2})\cdot g_{3}&,\forall g_{1},g_{2},g_{3}\in G\\ \mbox{Inverse element:}&g\cdot(g^{-1})\;=\;\varepsilon\;=\;g^{-1}\cdot g&, \forall g\in G\\ \mbox{Identity element:}&\varepsilon\cdot g\;=\;g\;=\;g\cdot\varepsilon&, \forall g\in G\end{array}\] Although a group is given by the tuple \((G,\varepsilon,\_\cdot\_\,,\_^{-1})\) (and the proofs that these operations satisfy the axioms) we will refer to the group simply by \(G\). 
A sub-group of \(G\) is a subset \(H\subseteq G\) such that \(\varepsilon\in H\) and \(H\) is closed under the inverse and multiplication. Let \(G\) be a group. A _\(G\)-set_ is a set \(X\) with an operation \(\_\bullet\_\)\(:G\times X\to X\) (called the _action_) satisfying: \[\begin{array}{llll}\mbox{Identity:}&\varepsilon\bullet x\;=\;x&,\forall x \in X\\ \mbox{Compatibility:}&g_{1}\bullet(g_{2}\bullet x)\;=\;(g_{1}\cdot g_{2}) \bullet x&,\forall g_{1},g_{2}\in G,\forall x\in X\end{array}\] A morphism between \(G\)-sets \(X\) and \(Y\) is a function \(F\)\(:X\to Y\) that commutes with the actions: \[F\,(g\bullet x)\;=\;g\bullet F\,x\hskip 28.452756pt,\forall g\in G,\forall x\in X\] These are called _equivariant_ functions. Since \(id_{X}\) is equivariant and the composition of equivariant functions yields an equivariant function we can talk of the category of \(G\)-Sets. Any set \(X\) can be seen as a \(G\)-set by letting \(g\bullet x=x\); such a \(G\)-set is called the _discrete_\(G\)-set. Moreover any group acts on itself by the multiplication. One can form the (in)finitary product of \(G\)-sets by defining the action of \(G\) on a tuple in a pointwise manner: \[g\bullet\langle x_{1},x_{2}\rangle\;=\;\langle g\bullet x_{1},\,g\bullet x_{2 }\rangle\hskip 28.452756pt,\forall g\in G,\forall x_{1}\in X_{1},\forall x_{2} \in X_{2}\] The projections and the product morphism \(\langle F,H\rangle\) are equivariant, assuming that \(F\) and \(H\) are also equivariant. \(G\)-set, as a category, also has co-products. 
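As a concrete, toy-sized illustration of these axioms, the following Python sketch (ours, not part of the paper) checks the identity and compatibility laws, and the equivariance of the first projection, for \(Sym(\{0,1,2\})\) acting pointwise on pairs.

```python
from itertools import permutations

# Elements of Sym({0,1,2}) represented as dicts, acting by application.
S3 = [dict(zip(range(3), p)) for p in permutations(range(3))]

def mult(g, h):
    # Group multiplication is composition: (g . h)(x) = g(h(x)).
    return {x: g[h[x]] for x in g}

def act_pair(g, xy):
    # Pointwise action on a pair, as in the product G-set.
    return (g[xy[0]], g[xy[1]])

def proj1(xy):
    return xy[0]

for g in S3:
    for h in S3:
        for xy in [(0, 1), (2, 2)]:
            # Compatibility: g . (h . x) == (g * h) . x
            assert act_pair(g, act_pair(h, xy)) == act_pair(mult(g, h), xy)
            # Equivariance of the first projection.
            assert proj1(act_pair(g, xy)) == g[proj1(xy)]

identity = {x: x for x in range(3)}
# Identity axiom of the action.
assert all(act_pair(identity, xy) == xy for xy in [(0, 1), (1, 2)])
```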
If \(X\) and \(Y\) are \(G\)-sets, one can endow the set \(Y^{X}\) of functions from \(X\) to \(Y\) with the _conjugate_ action: \[(g\bullet F)\,x\;=\;g\bullet(F\,(g^{-1}\bullet x))\hskip 28.452756pt,\forall g \in G,\forall x\in X\;\;.\] \(G\)-sets over the Permutation Group. The group of symmetries over a set \(X\) consists of \(G=Sym(X)\), where \(Sym(X)\) is the set of bijections on \(X\); the multiplication of \(Sym(X)\) is composition, the inverse is the inverse bijection, and the unit is the identity. Let \(Perm(X)\) be the subset of \(Sym(X)\) of bijections that change only finitely many elements; i.e., \(f\in Perm(X)\) if \(\mbox{supp}(f)=\{x\in X\mid f\,x\neq x\}\) is finite. It is straightforward to prove that \(Perm(X)\) is a subgroup of \(Sym(X)\). Of course, if \(X\) is finite, then \(Perm(X)=Sym(X)\). Notice that \(X\) itself is a \(Perm(X)\)-set with the action being function application: \(\pi\bullet x\;=\;\pi\;x\;\). In particular, the _transposition_ (or _swapping_) of a pair of elements \(x,y\in X\) is the finite permutation \((x\,y)\in Perm(X)\) given by \[(x\,y)\;z\;=\;\begin{cases}y&\mbox{if $z=x$}\\ x&\mbox{if $z=y$}\\ z&\mbox{otherwise}\end{cases}\] A basic result (proved in [10] as Theorem 6.3 and Corollary 6.5) is that every \(\pi\in\mathit{Perm}(X)\) can be expressed as a composition of _disjoint cycles_ \[\pi\ =\ (x_{1}\ x_{2}\ \dots\ x_{n})\circ\dots\circ(z_{1}\ z_{2}\ \dots\ z_{k})\] and every cycle can be expressed as a composition of transpositions \[(x_{1}\ x_{2}\ \dots\ x_{n})\ =\ (x_{1}\ x_{2})\circ(x_{2}\ x_{3})\circ\dots \circ(x_{n-1}\ x_{n})\] Therefore every \(\pi\in\mathit{Perm}(X)\) can be expressed as a composition of transpositions. We elaborate on the equivalence of the representations in Sect. 3. Let us exhibit this with a concrete example. 
**Example 1**.: _Let \(f\colon\mathbb{N}\to\mathbb{N}\) be defined as_ \[f\,x\ =\ \begin{cases}(x+2)\ mod\ 6&\text{ if }x\leq 5\\ x&\text{ else}\end{cases}\] _Function \(f\) is a finite permutation, because it has finite support: \(\{x\in\mathbb{N}\mid 0\leq x\leq 5\}\). Therefore it can be expressed as the composition of two cycles: \((1\ 3\ 5)\circ(0\ 2\ 4)\), or alternatively, it can also be expressed as a composition of four transpositions: \((1\ 3)\circ(3\ 5)\circ(0\ 2)\circ(2\ 4)\)._ Nominal Sets. If we let \(X\) be the set of variables for the lambda calculus, then a permutation on \(X\) is a renaming; such a permutation can be lifted to an action over the set of lambda terms (taking care of the bound variables). In the nominal parlance one says that \(X\) is the set of _atoms_ or that variables are atomic names: an atomic name has no structure in itself. We only assume that a set of atoms is a countably infinite set with decidable equality; from now on we will use \(\mathbb{A}\) to refer to a set of atoms. Let \(X\) be a \(\mathit{Perm}(\mathbb{A})\)-set. We say that \(x\in X\)_is supported by \(A\subseteq\mathbb{A}\)_ if \[\forall\,\pi.\ (\forall\,a\in A.\ \pi\,a=a)\ \implies\ \pi\bullet x=x\enspace.\] We say that \(X\) is a _nominal set_ if each element of \(X\) is supported by some finite subset of \(\mathbb{A}\). Since each finite permutation can be decomposed as a composition of transpositions, one can prove that the above definition is equivalent to \[\forall\,a,a^{\prime}\in\mathbb{A}\setminus A.\ (a\ a^{\prime})\bullet x=x\enspace.\] The following are some examples of nominal sets: * The discrete \(\mathit{Perm}(\mathbb{A})\)-set \(X\) is nominal, because any \(x\in X\) is supported by \(\emptyset\). * \(\mathbb{A}\) itself is nominal once equipped with the action \(\pi\bullet a=\pi\,a\), because any \(a\in\mathbb{A}\) is supported by \(\{a\}\). More generally, any \(S\subseteq\mathbb{A}\) containing name \(a\) is a support for \(a\). 
* The set \(\lambda\mathit{Term}\) of \(\lambda\)-calculus terms, inductively defined by \(t\ ::=\ V(a)\mid A(t,t)\mid L(a,t)\) where \(a\in\mathbb{A}\), equipped with the action \(\_\bullet\_:\ \mathit{Perm}(\mathbb{A})\times\lambda\mathit{Term}\to\lambda \mathit{Term}\) such that \[\pi\bullet V(a)\ =\ V(\pi\,a)\] \[\pi\bullet A(t_{1},t_{2})\ =\ A(\pi\bullet t_{1},\,\pi\bullet t_{2})\] \[\pi\bullet L(a,t)\ =\ L(\pi\,a,\pi\bullet t)\] is nominal because any \(t\in\lambda\mathit{Term}\) is supported by \(supp(t)=FreeVars(t)\). In his book [14] Pitts uses classical logic to prove that if \(x\) is supported by some finite set \(A\), then there exists a least supporting set, called _the_ support of \(x\). As shown by Swan [15] one cannot define the least support in a constructive setting; therefore a formalization in a constructive type theory should ask for "some" finite support. This affects the notion of freshness: in classical logic we have \[x\mbox{ is fresh for }y\ \ \Leftrightarrow\ \ \operatorname{supp}(x)\cap \operatorname{supp}(y)=\emptyset,\] with \(x\in X\) and \(y\in Y\) being elements of different nominal sets; but in a constructive setting one has to limit this relation to atoms, that is \[a\in\mathbb{A}\mbox{ is fresh for }x\in X\ \ \Leftrightarrow\ \ a\not\in \operatorname{supp}(x),\] where \(\operatorname{supp}(x)\) is the set supporting \(x\), not necessarily the least one. Notice that the definition is the same ("there exists some finite support for each element"), but in classical logic that is sufficient to obtain the least support. ## 3 Finite Permutations As we have already said, a finite permutation on a set \(A\) can be explicitly given by: 1. a bijection \(f:A\to A\) together with its support \(\operatorname{supp}(f)\subseteq_{\mathit{fin}}A\); i.e., \(a\in\operatorname{supp}(f)\) if and only if \(f\,a\neq a\); 2. 
a composition of disjoint cycles; concretely, we can think of this as a finite set \(R\subseteq_{\mathit{fin}}A^{*}\) of disjoint cycles, each of them without repeated elements; 3. a composition of transpositions; that is, a finite sequence of pairs \(p:(A\times A)^{*}\). We present our proof that these definitions are equivalent. It basically boils down to define a predicate on sequences of elements in \(A\) not containing repeated elements ensuring that they are cycles for \(f\). We use the usual notation \((a\,b)\) to denote the bijection \(\{(a,b),(b,a)\}\). **Definition 1** (List of transpositions from a cycle).: _We define \(\mathit{toFP}\colon A\times A^{*}\to(A\times A)^{*}\)._ \[\mathit{toFP}(a,\rho)=\begin{cases}[]&\mbox{ if }\rho=[]\\ (a,b):\mathit{toFP}(b,\rho^{\prime})&\mbox{ if }\rho=b:\rho^{\prime}\end{cases}\] _If we know that \(\rho=a:\rho^{\prime}\), then we also write \(\mathit{toFP}(\rho)\) to mean \(\mathit{toFP}(a,\rho^{\prime})\)._ **Definition 2** (Permutation from a list of transpositions).: _Let as \(:(A\times A)^{*}\), then \([\![\mathit{as}]\!]:A\to A\) is defined by recursion on as:_ \[[\![\mathit{as}]\!]=\begin{cases}id&\mbox{ if }as=[]\\ (a\,b)\cdot[\![\mathit{as}^{\prime}]\!]&\mbox{ if }as=(a,b):\mathit{as}^{\prime} \end{cases}\] **Definition 3** (Prefixes).: _We say that a non-empty sequence \(\rho=[a_{1},\ldots,a_{n}]:A^{*}\) is a prefix with head \(a_{0}\) for bijection \(f\) if:_ 1. \(a_{0}\in\operatorname{supp}(f)\)_,_ 2. \(f\,a_{i}=a_{i+1}\)_, and_ 3. \(a_{0}\not\in\rho\) _A prefix \(\rho\) is closed if \(f\,a_{n}=a_{0}\). Since \(\rho\) is non-empty, we denote with \(\mathit{last}(\rho)\) its last element._ From this simple definition we can deduce: **Lemma 1** (Properties of prefixes).: _Let \(\rho\) be a prefix with head \(a\)._ 1. _If_ \(\rho^{\prime}\) _is a prefix with head_ \(\mathit{last}(\rho)\)_, then its concatenation_ \(\rho\rho^{\prime}\) _is a prefix with head_ \(a\)_._ 2. \(\rho\) _has no duplicates._ 3. 
_If_ \(\rho\) _is closed and_ \(b\in(a:\rho)\)_, then_ \(f\,b=\llbracket\mathit{toFP}(a,\rho)\rrbracket\,b\)_._ 4. _If_ \(b\not\in(a:\rho)\)_, then_ \(\llbracket\mathit{toFP}(a,\rho)\rrbracket\,b=b\)_._ We can extend this definition to a sequence of sequences: let \(R=[(a_{1},\rho_{1}),\ldots,(a_{m},\rho_{m})]:(A\times A^{*})^{*}\), then \(R\) is a list of prefixes, with its head, if each \(\rho_{i}\) is a prefix and \(\rho_{i}\cap\rho_{j}=\emptyset\). **Lemma 2** (Correctness of prefixes).: _Let \(R=[(a_{1},\rho_{1}),\ldots,(a_{m},\rho_{m})]\) be a list of closed prefixes, then \(\llbracket\mathit{toFP}(a_{1},\rho_{1})\,\ldots\,\mathit{toFP}(a_{m},\rho_{m}) \rrbracket\,a=f\,a\)._ This proves that from a representation with cycles one can get a representation with transpositions. If we can produce a list of closed prefixes from a finite permutation (as a bijection with its support explicitly given) then we have the equivalence. First we define a function \(\mathit{cycle}_{f}\colon\mathbb{N}\times A\to A^{*}\) such that \(\mathit{cycle}_{f}(n,a)\) computes a prefix with head \(a\) of length at most \(n+1\) by recursion on \(n\): \[\mathit{cycle}_{f}(0,a) =\llbracket f\,a\rrbracket\] \[\mathit{cycle}_{f}(n+1,a) =\begin{cases}\rho&\text{if }f\,b=a\\ \rho\llbracket f\,b\rrbracket&\text{otherwise}\end{cases}\] \[\text{where }\rho=\mathit{cycle}_{f}(n,a)\text{ and }b= \mathit{last}(\rho)\] We can extend this definition to compute a list of prefixes from a list of atoms: \[\mathit{cycles}_{f}(n,\llbracket,R) =R\] \[\mathit{cycles}_{f}(n,a:\mathit{as},R) =\begin{cases}\mathit{cycles}_{f}(n,\mathit{as},R)&\text{if }a\in \bigcup R\\ \mathit{cycles}_{f}(n,\mathit{as},\rho:R)&\text{otherwise}\end{cases}\] \[\text{where }\rho=a:\mathit{cycle}_{f}(n,a)\] **Lemma 3** (Correctness of computed cycles).: _If \(f\colon A\to A\) is a bijection and \(a\in\operatorname{supp}(f)\), then \(\mathit{cycle}_{f}(n,a)\) is a prefix with head \(a\), for all \(n\in\mathbb{N}\). 
Moreover if \(|\operatorname{supp}(f)|\leqslant n\), then \(\mathit{cycle}_{f}(n,a)\) is closed._ _If \(R\) is a list of prefixes and as \(\subseteq\operatorname{supp}(f)\), then \(\mathit{cycles}_{f}(n,\mathit{as},R)\) is a list of prefixes; if \(|\operatorname{supp}(f)|\leqslant n\), then \(\mathit{cycles}_{f}(n,\mathit{as},R)\) is a list of closed prefixes._ **Theorem 1**.: _If \(f\colon A\to A\) is a bijection, then \(R=\mathit{cycles}_{f}(|\operatorname{supp}(f)|,\operatorname{supp}(f), \llbracket))\) is a list of closed prefixes. Therefore \(\llbracket\mathit{toFP}^{*}(R)\rrbracket\,a=f\,a\), for all \(a\in A\)._ Notice that a composition of transpositions might mention elements that are not in the support of the induced permutation; for example, both \((1\;1)\) and \((1\;2)(2\;1)\) are equal to the identity permutation. One can get a "normalized" representation by composing our functions. As a matter of fact, this was our motivation to formalize cycles. **Corollary 1** (Normalization of transpositions).: _Let \(p\) be a list of transpositions and \(\mathit{ats}=\operatorname{supp}(\mathit{toFP}(p))\). Moreover, let \(R=\mathit{cycles}_{\mathit{toFP}(p)}(|\mathit{ats}|,\mathit{ats}, \llbracket)\). Then \(\llbracket\mathit{toFP}^{*}(R)\rrbracket=\llbracket\mathit{as}\rrbracket\); moreover every atom in \(\mathit{toFP}^{*}(R)\) is in its support._ ## 4 Our Formalization in Agda Our formalization is developed on top of the Agda's standard library v1.7 [16]. Figure 1 shows a high level view of the project. The standard library includes an algebraic hierarchy going beyond groups; it lacks, however, a formalization of group actions. The module GroupAction includes G-Sets, equivariant functions and constructions like products and co-products. We also have a Permutation module which includes the concepts of finite permutations, cycles, normalization and the permutation group. 
And last, in the module Nominal we formalize the concepts of support, nominal set, equivalence between different notions of support, normalization and, again, constructions like products and co-products. We first present the definition of Group in the standard library in order to introduce some terminology and concepts:

```agda
record Group c ℓ : Set (suc (c ⊔ ℓ)) where
  field
    Carrier : Set c
    _≈_     : Rel Carrier ℓ
    _∙_     : Op₂ Carrier
    ε       : Carrier
    _⁻¹     : Op₁ Carrier
    isGroup : IsGroup _≈_ _∙_ ε _⁻¹
```

A Group is a _bundle_ where the components of its definition (the carrier set, the unit, the inverse, the composition) are explicitly mentioned, plus a proof, given by isGroup, that they satisfy the axioms. Notice that one of the fields is a relation _≈_; that relation should be an equivalence relation over the carrier: essentially this amounts to saying that the Carrier has a setoid structure. Setoids allow for greater flexibility as they enable working with a notion of equality that is not the propositional equality; Func X Y is the set of functions between setoids X and Y that preserve the equality; sometimes these functions are called _respectful_.

G-Sets. Our first definition is the _structure_ that collects the equations required for an action. In the following, we are under a module parameterized by G : Group.

Figure 1: High level view of the modular organization in the project.

```agda
record IsAction (F : Func (G.setoid ×ₛ X) X) : Set _ where
  _∙_ : Carrier G → Carrier X → Carrier X
  ...
```

We now prove that the first projection is equivariant; notice that G-Set-× is the product of X and Y introduced with the variable keyword.

```agda
π₁ : Equivariant G G-Set-× X
f (F π₁) = proj₁
cong (F π₁) = proj₁
isEquivariant π₁ _ = refl (set X)
```

Permutations. Now we focus on the module Permutation. 
We start by introducing the group \(Sym(\mathbb{A})\) using the definitions of inverses from the standard library; notice that the equivalence relation is given by the point-wise (or extensional) equality of functions.

```agda
-- In this context A-setoid is a Setoid (not necessarily decidable).
A = Carrier A-setoid ; _≈A_ = _≈_ A-setoid

Perm = Inverse A-setoid A-setoid

_≈ₚ_ : Rel Perm _
F ≈ₚ G = (a : A) → f F a ≈A f G a

Sym : Group (ℓ ⊔ ℓ') (ℓ ⊔ ℓ')
Carrier Sym = Perm
_≈_     Sym = _≈ₚ_
_∙_     Sym = _∘ₚ_
ε       Sym = idₚ A-setoid  -- identity Perm, from the stdlib
_⁻¹     Sym = _⁻¹ₚ          -- inverse permutation, from the stdlib
isGroup Sym = record { ... } -- omitted
```

If we ask the setoid A-setoid to be decidable, then we can define the swapping permutation.

```agda
module Perm (A-setoid : DecSetoid ℓ') where
  open DecSetoid A-setoid renaming (Carrier to A)

  transp : A → A → A → A
  transp a b c with does (c ≟ a)
  ... | true  = b
  ... | false with does (c ≟ b)
  ... | true  = a
  ... | false = c

  transp-perm : (a b : A) → Perm
  transp-perm a b = record
    { f       = transp a b
    ; f⁻¹     = transp a b
    ; cong₁   = transp-respects-≈ a b
    ; cong₂   = transp-respects-≈ a b
    ; inverse = transp-involutive a b
    }
```

Our next goal is to define the group \(Perm(\mathbb{A})\) of finite permutations of atoms. As we explained before, a finite permutation can be given by a bijective map, as a composition of transpositions, or as a composition of disjoint cycles. In other works the group of finite permutations is explicitly defined as lists of pairs, where each pair represents a transposition and the empty list is the identity permutation: appending a pair \((a,b)\) to a list \(p\) amounts to composing the transposition \((a\ b)\) with the permutation denoted by \(p\). 
Concatenation of lists \(p\) and \(p^{\prime}\) also induces their composition. This choice has the advantage of being explicit and avoids having alternative expressions for composing permutations. On the other hand it still allows different representatives for the same permutation; in fact, \([(a,a)]\), \([(b,a),(a,b)]\), and \([]\) are all representations of _the_ identity permutation. It is clear that the setoid of finite permutations should equate those three versions of the identity; therefore the equivalence relation used is that of inducing the same permutation. We started with the following syntactic representation of Finite Permutations, which is close to that of lists but in terms of \(S\)-expressions; since we cannot ensure canonicity with lists, why not be more liberal also on associativity?

```agda
data FinPerm : Set ℓ where
  Id   : FinPerm
  Swap : (a b : A) → FinPerm
  Comp : (p q : FinPerm) → FinPerm
```

The permutation associated with a FinPerm is given by

```agda
⟦ Id ⟧       = idₚ setoid
⟦ Swap a b ⟧ = transp-perm a b
⟦ Comp p q ⟧ = ⟦ q ⟧ ∘ₚ ⟦ p ⟧
```

Before introducing our concrete formalization of \(\mathit{Perm}(\mathbb{A})\), let us exploit the fact that we have a decidable setoid of atoms to prove that the equivalence of finite permutations is also decidable. In order to do that, we define a relation \(\_\subseteq_{s}\) on FinPerm; p \(\subseteq_{s}\) q holds when q coincides with p on the support of p. Since we can compute the support of FinPerms and the equality of atoms is decidable, we can decide \(\_\subseteq_{s}\). 
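The decision procedure behind \(\_\subseteq_{s}\) (compare the denoted permutations on a computable support) is easy to mirror outside Agda. The following Python sketch is our own illustration, with ad-hoc names; it decides equivalence of transposition lists and confirms that the representations of the identity mentioned above are all identified.

```python
def transp(a, b, x):
    # The swapping (a b) applied to x.
    return b if x == a else a if x == b else x

def evaluate(swaps):
    # Permutation denoted by a list of transpositions; the head of the list
    # is composed last, matching [[(a,b) : as]] = (a b) . [[as]].
    def perm(x):
        for a, b in reversed(swaps):
            x = transp(a, b, x)
        return x
    return perm

def support(swaps):
    # Atoms actually moved by the denoted permutation (every moved atom
    # must be mentioned in some transposition).
    mentioned = {a for t in swaps for a in t}
    f = evaluate(swaps)
    return {a for a in mentioned if f(a) != a}

def equivalent(p, q):
    # p ~ q iff they agree on both supports (mutual containment); outside
    # the union of supports both act as the identity.
    f, g = evaluate(p), evaluate(q)
    return all(f(a) == g(a) for a in support(p) | support(q))

# The representations of the identity from the text:
assert equivalent([(1, 1)], [])
assert equivalent([(2, 1), (1, 2)], [])
assert not equivalent([(1, 2)], [])
```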
```agda
_⊆ₛ_ : Rel FinPerm (ℓ ⊔ ℓ')
p ⊆ₛ q = All (λ a → f ⟦ p ⟧ a ≈ f ⟦ q ⟧ a) (support p)

?⊆ₛ : ∀ p q → Dec (p ⊆ₛ q)
?⊆ₛ p q = all? (λ a → f ⟦ p ⟧ a ≟ f ⟦ q ⟧ a) (support p)
```

Moreover we can prove that the mutual containment is equivalent to denoting the same permutation; thus we can decide the equality of finite permutations as given by FinPerm:

```agda
_≈ₛ_ : Rel FinPerm (ℓ ⊔ ℓ')
p ≈ₛ q = p ⊆ₛ q × q ⊆ₛ p

≈ₛ-dec : ∀ p q → Dec (p ≈ₛ q)
≈ₛ-dec p q = (?⊆ₛ p q) ×-dec (?⊆ₛ q p)

-- We omit the proofs of these lemmas.
≈ₛ⇒≈ₚ : ∀ p q → p ≈ₛ q → ⟦ p ⟧ ≈ₚ ⟦ q ⟧
≈ₚ⇒⊆ₛ : ∀ p q → ⟦ p ⟧ ≈ₚ ⟦ q ⟧ → p ⊆ₛ q
_≟ₚ_ : ∀ p q → Dec (⟦ p ⟧ ≈ₚ ⟦ q ⟧)
```

Furthermore we can normalize a FinPerm to obtain an equivalent permutation where every occurring atom is in its support. Let us first revisit Example 1, now in Agda, where we see how to encode a finite permutation as a composition of cycles.

```agda
f : ℕ → ℕ
f x with x ≤? 5
... | yes p = (x + 2) mod 6
... | no ¬p = x
```

We represent cycles simply as lists of atoms; we certainly could also have used fresh lists to represent cycles. A composition of cycles is a list of cycles.

```agda
Cycle = List A

cycle0 cycle1 : Cycle
cycle0 = 1 ∷ 3 ∷ 5 ∷ []
cycle1 = 0 ∷ 2 ∷ 4 ∷ []

f-cycles : List Cycle
f-cycles = cycle0 ∷ cycle1 ∷ []
```

Or alternatively, it can also be expressed as a composition of four transpositions:

```agda
f-swaps : FinPerm
f-swaps = Comp (Comp (Swap 1 3) (Swap 3 5)) (Comp (Swap 0 2) (Swap 2 4))
```

In Figure 2 we show the three representations of finite permutations. 
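As a cross-check of Example 1 outside Agda, the following Python sketch (ours; composition conventions as in Sect. 2, with the head of a transposition list composed last) verifies that the cycle and transposition representations denote the same permutation as \(f\).

```python
def f(x):
    # The finite permutation of Example 1.
    return (x + 2) % 6 if x <= 5 else x

def transp(a, b, x):
    return b if x == a else a if x == b else x

def cycle_apply(cycle, x):
    # (x1 x2 ... xn) sends each x_i to x_{i+1} and x_n back to x_1.
    if x in cycle:
        return cycle[(cycle.index(x) + 1) % len(cycle)]
    return x

def apply_cycles(cycles, x):
    for c in cycles:  # disjoint cycles, so the order is irrelevant
        x = cycle_apply(c, x)
    return x

def cycle_to_swaps(c):
    # (x1 x2 ... xn) = (x1 x2) . (x2 x3) . ... . (x_{n-1} x_n)
    return list(zip(c, c[1:]))

def apply_swaps(swaps, x):
    for a, b in reversed(swaps):  # head of the list is composed last
        x = transp(a, b, x)
    return x

cycles = [[1, 3, 5], [0, 2, 4]]
swaps = [t for c in cycles for t in cycle_to_swaps(c)]
assert swaps == [(1, 3), (3, 5), (0, 2), (2, 4)]
assert all(apply_cycles(cycles, x) == f(x) == apply_swaps(swaps, x)
           for x in range(10))
```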
The normalization of FinPerm is simply the composition of the mappings:

```agda
norm : FinPerm → FinPerm
norm = cycles-to-FP ∘ cycles-from-FP
```

The function cycles-to-FP maps lists of disjoint cycles to FinPerm and cycles-from-FP goes in the reverse direction, producing a list of disjoint cycles from a FinPerm (this is the composition of the diagonal arrows in Fig. 2). The correctness of the normalization follows the proof presented in Sect. 3. Although we enforce neither freshness for cycles nor disjointness of cycles, we keep that as an invariant when we compute the cycles in to-cycles.

Figure 2: The mappings between different representations of permutations.

```agda
module Thm (p : FinPerm) where
  ats = atoms! p -- Fresh list of the atoms in the support of p.

  -- from-atoms-⇝* is the proof of Lemma 3.
  rel = from-atoms-⇝* ⟦ p ⟧ ats []* (fp-supp p) (dom⊇atoms! p)

  -- the representation as composition of cycles
  ρs = to-cycles ⟦ p ⟧ (length ats) ats []

  -- This property follows from Lemma 3.
  ∈-dom⇒∈ρs : (_∈-dom ⟦ p ⟧) ⊆ (_∈ concat ρs)

  norm-corr : ⟦ p ⟧ ≈ₚ ⟦ norm p ⟧
  norm-corr x with x ∈? concat ρs
  ... | yes x∈at = ⇝*-out ⟦ p ⟧ rel x∈at -- Item 3 of Lemma 1.
  ... | no x∉at = trans -- f ⟦ p ⟧ x ≈ x ≈ f ⟦ norm p ⟧ x
      (∉-dom⇒≈ ⟦ p ⟧ (contraposition ∈-dom⇒∈ρs x∉at))
      (⇝*-out-fresh ⟦ p ⟧ rel x∉at) -- Item 4 of Lemma 1.
```
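The normalization route, from transpositions to the induced bijection, then to disjoint cycles, then back to transpositions, can be mimicked in a few lines of Python. This is our own sketch, not the Agda development; it reproduces the behaviour of norm on the identity representations discussed earlier.

```python
def transp(a, b, x):
    return b if x == a else a if x == b else x

def evaluate(swaps):
    # Denoted permutation; the head of the list is composed last.
    def perm(x):
        for a, b in reversed(swaps):
            x = transp(a, b, x)
        return x
    return perm

def support(swaps):
    f = evaluate(swaps)
    return sorted({a for t in swaps for a in t if f(a) != a})

def to_cycles(swaps):
    # Disjoint-cycle representation, in the spirit of the cycle_f construction.
    f, seen, cycles = evaluate(swaps), set(), []
    for a in support(swaps):
        if a in seen:
            continue
        cycle, b = [a], f(a)
        while b != a:
            cycle.append(b)
            b = f(b)
        seen.update(cycle)
        cycles.append(cycle)
    return cycles

def norm(swaps):
    # cycles-to-FP after cycles-from-FP: every mentioned atom is in the support.
    return [t for c in to_cycles(swaps) for t in zip(c, c[1:])]

assert norm([(1, 1)]) == []          # identity representations
assert norm([(2, 1), (1, 2)]) == []  # normalize to the empty list
p = [(1, 3), (3, 1), (0, 2)]         # redundant transpositions cancel
assert norm(p) == [(0, 2)]
f, g = evaluate(p), evaluate(norm(p))
assert all(f(x) == g(x) for x in range(6))  # norm-corr: same permutation
```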
We also have other correctness result to prove that the FinPerm obtained from a Perm and its support is equivalent to it: module Thm' (F : Perm) (ats : List A) (is-sup : ats is-supp-of F) (incl : (_cats) \(\subseteq\) (_c-dom F)) where \(\rho\)s = to-cycles p (lengthats) ats [] norm-corr : F \(\approx_{p}\) [ cycles-to-FP \(\rho\)s ] Let us remark that FinPerm is just a representation and the set of finite permutation, PERM, is the subset of Perm corresponding to the image of [_]: PERM : Set _ PERM = \(\Sigma\) p \(\in\) Perm ] (\(\Sigma\)p \(\in\) FinPerm ] (p \(\approx_{p}\) [ q ])) A disadvantage of using this encoding is that we need to deal with triples; for instance, the identity PERM is represented by Id. ID : PERM ID = id\({}_{p}\) setoid, Id, \(\lambda\) _ - refl The group \(Perm(\mathbb{A})\) is explicity defined as: Perm-A : Group (\(\ell\)\(\sqcup\)\(\ell\)') (\(\ell\)\(\sqcup\)\(\ell\)') Carrier Perm-A = PERM _=\(\otimes\)G_ Perm-A = _\(\approx_{p}\) on proj\({}_{1}\) _-_ Perm-A = _oP_ _\(\varepsilon\) Perm-A = ID _' Perm-A = _\({}^{-1}\)p isGroup Perm-A = _record {... } We alleviate the burden of working with triples by proving lemmas characterizing the action of PERMs in terms of the finite permutation, for instance for Id: -- In this context the group acting on G-Sets is Perm-A. module Act-Lemmas (X-set : G-Set (\(\ell_{1}\) = \(\ell\)x) {\(\ell_{2}\) = \(\ell\)x'}) where _=X_ = Setoid_s_ set id-act : \(\forall\) (\(\pi\) : PERM) (x : X) + proj\({}_{1}\)\(\pi\)\(\approx_{p}\) [ Id ] \(\rightarrow\) (\(\pi\) \(\bullet\) x) \(\approx\)X x id-act \(\pi\) x eq = trans (cong\({}^{1}\) {ID} x eq) (id\({}_{a}\) x) Nominal SetsRemember that a subset \(A\subseteq\mathbb{A}\) is a support for \(x\) if every permutation fixing every element of \(A\) fixes \(x\), through the action. 
A subset of a setoid A can be defined either as a predicate or as pairs (just as in PERM where the predicate is \(\lambda\) p \(\rightarrow\)\(\Sigma\)l q \(\in\) FinPerm ] (p \(\approx_{p}\) [ q ])) or as another type, say B, together with an injection \(\iota\) : Injection B A. variable X : G-Set P : SetoidPredicate A-setoid is-supp : Pred X _ is-supp x = (\(\pi\) : PERM) \(\dashv\) (predicate P \(\subseteq\_\notin\)-dom (proj\({}_{1}\)\(\pi\))) \(\dashv\) (\(\pi\)\(\bullet\) x) \(\approx\)X x The predicate \(\lambda\) a \(\dashv\) f (proj\({}_{1}\)\(\pi\)) a \(\approx\)A a is \(\_\notin\)-dom (proj\({}_{1}\)\(\pi\)); therefore, if P a iff \(a\in A\), then predicate P \(\subseteq\_\notin\)-dom (proj\({}_{1}\)\(\pi\)) is a correct formalization of \(\forall a\in A\). \(\pi\)\(a=a\). Our official definition of support is the following: _supports : Pred X _ _ _ supports_ x = \(\forall\) {a b} \(\dashv\) a \(\notin_{s}\) P \(\dashv\) b \(\notin_{s}\) P \(\dashv\) SWAP a b \(\bullet\) x \(\approx\)X x Here SWAP is a PERMutation equal to [swap a b]. We formally proved that both definitions are equivalent, which is stated by the mutual implications: is-supp\(\subseteq\)supports : \(\forall\) x \(\dashv\) is-supp x \(\dashv\) _supports_ x supports\(\subseteq\)is-supp : _supports_ \(\subseteq\) is-supp Let us note that the second implication uses explicitly the normalization of finite permutations and its correctness. In order to define nominal sets we need to choose how to say that a subset is finite; as explained by Coquand and Spiwak [8] there are several possibilities for this. We choose the easiest one: a predicate is finite if there is a list that enumerates all the elements satisfying the predicate. finite : Pred (SetoidPredicate setoid) _ finite P = \(\Sigma\)[ as \(\in\) List Carrier ] (predicate P \(\subseteq\) (_\(\in\) as)) A G-Set is nominal if all the elements of the underlying set are finitely supported. 
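To make the support condition concrete, here is a Python sketch (ours, not part of the formalization) of the \(\lambda\)-term action from Sect. 2. For raw, non-quotiented terms the set of all atoms occurring in a term is a support; free variables suffice once terms are identified up to \(\alpha\)-equivalence, which is the setting of the paper.

```python
# Lambda terms as nested tuples: ('V', a), ('A', t1, t2), ('L', a, t).

def act(pi, t):
    # The permutation action on terms: binders are renamed too.
    tag = t[0]
    if tag == 'V':
        return ('V', pi(t[1]))
    if tag == 'A':
        return ('A', act(pi, t[1]), act(pi, t[2]))
    return ('L', pi(t[1]), act(pi, t[2]))

def free_vars(t):
    tag = t[0]
    if tag == 'V':
        return {t[1]}
    if tag == 'A':
        return free_vars(t[1]) | free_vars(t[2])
    return free_vars(t[2]) - {t[1]}

def atoms(t):
    # All atoms occurring in t, free or bound: a support for the raw term.
    tag = t[0]
    if tag == 'V':
        return {t[1]}
    if tag == 'A':
        return atoms(t[1]) | atoms(t[2])
    return {t[1]} | atoms(t[2])

def swap(a, b):
    return lambda x: b if x == a else a if x == b else x

# t = L(x, A(V(x), V(y)))  -- free variable: y; atoms: {x, y}
t = ('L', 'x', ('A', ('V', 'x'), ('V', 'y')))
assert free_vars(t) == {'y'}
# Swapping two atoms outside atoms(t) fixes t (the support condition).
assert act(swap('u', 'v'), t) == t
# Swapping an atom of the raw-term support moves t.
assert act(swap('y', 'z'), t) != t
```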
    record Nominal (X : G-Set) : Set _ where
      field
        sup : ∀ x → Σ[ P ∈ SetoidPredicate setoid ] (finite P × P supports x)

It is easy to prove that various constructions are nominal; for instance, any discrete G-Set is nominal because every element is supported by the empty predicate ⊥ₛ:

    Δ-nominal : (S : Setoid _ _) → Nominal (Δ S)
    sup (Δ-nominal S) x = ⊥ₛ , ⊥-finite , (λ _ _ → S-refl {x = x})
      where open Setoid S renaming (refl to S-refl)

We have defined G-Set⇒ X Y, corresponding to the G-Set of equivariant functions from X to Y; now we can prove that G-Set⇒ X Y is nominal, again with ⊥ₛ as the support for any F : Equivariant X Y.

    ⇒-nominal : Nominal (G-Set⇒ X Y)
    sup ⇒-nominal F = ⊥ₛ , ⊥-finite , λ _ _ → supported
      where
        supported : ∀ {a b} x → f (SWAP a b • F) x ≈Y f F x

## 5 Conclusion

Nominal techniques have been adopted in various developments. We distinguish developments that borrow some concepts from nominal techniques to apply them in specific use cases (e.g. the formalization of languages with binders, like the λ- or π-calculus, together with their associated meta-theory) [3, 7, 6, 5] from more general developments aiming to formalize at least the core aspects of the theory of nominal sets. We are more concerned with the latter type. The nominal datatype package for Isabelle/HOL [17] developed by Urban and Berghofer implements an infrastructure for defining languages involving binders and for reasoning conveniently about alpha-equivalence classes. This Isabelle/HOL package inspired Aydemir et al. [2] to develop a proof of concept for the Coq proof assistant; however, it saw no further development.
In his Master's thesis [4], Choudhury notes that none of the previous developments following the theory of nominal sets were based on constructive foundations. He showed that a considerable portion of the theory of nominal sets (most of the first four chapters of Pitts' book [14]) can also be developed constructively, by giving a formalization in Agda. Pitts' original work is based on classical logic and depends heavily on the existence of the smallest finite support for an element of a nominal set. However, Swan [15] has shown that in general this existence cannot be guaranteed constructively, as it would imply the law of excluded middle. Choudhury therefore works with the notion of _some non-unique support_. In order to formalize the category of nominal sets, Choudhury preferred setoids over postulating functional extensionality. As far as we know, Choudhury's is still the most comprehensive mechanization in terms of instances of constructions having a nominal structure. Recently, Paranhos and Ventura [12] presented a constructive formalization in Coq of the core notions of nominal sets: support, freshness and name abstraction. They closely follow Choudhury's work in Agda [4], acknowledging the importance of working with setoids. They claim that by using Coq's type classes and setoid-rewriting mechanism, much shorter and simpler proofs are achieved, circumventing the "setoid hell" described by Choudhury. In his Master's thesis [13], Paranhos further developed the library. Both formalizations in type theory take a very pragmatic approach to finite permutations: a finite permutation is a list of pairs of names. In our approach, we start with the more general notion of bijective function, from which the finite permutations are obtained as a special case; moreover, having different representations allowed us to state and prove some theorems that cannot even be stated in the other formalizations.
So far, our main contributions are: the representation of finite permutations and the normalization of compositions of transpositions; the equivalence between two definitions of the relation "\(A\) supports the element \(x\)"; and the proof that the extension of every container type can be enriched with a group action (notice that this covers lists, trees, etc.). Our next steps are the definition of freshness and name abstraction. We are studying an alternative notion of support that would admit a freshness relation between elements of two nominal sets (in contrast with other mechanizations, which only consider "the atom \(a\) is fresh for \(x\)"). In parallel, we hope to be able to prove that extensions of finite containers on nominal sets are also nominal sets. We also hope to smooth out some rough corners of our development.

### Acknowledgments

This formalization grew out of discussions with the group of the research project "Type-checking for a Nominal Type Theory": Maribel Fernandez, Nora Szasz, Alvaro Tasistro, and Sebastian Urciouli. We thank Cristian Vay for discussions about group theory. This work was partially funded by Agencia Nacional de Investigacion e Innovacion (ANII) of Uruguay.
2301.05249
Ultraviolet extensions of the Scotogenic model
The Scotogenic model is a popular scenario that induces radiative Majorana neutrino masses and includes a weakly-interacting dark matter candidate. We classify all possible ultraviolet extensions of the Scotogenic model in which (i) the dark $\mathbb{Z}_2$ parity emerges at low energies after the spontaneous breaking of a global $\rm U(1)_L$ lepton number symmetry, and (ii) the low-energy effective theory contains a naturally small lepton number breaking parameter, suppressed by the mass of a heavy mediator integrated out at tree-level. We find $50$ such models and discuss two of them in detail to illustrate our setup. We also discuss some general aspects of the phenomenology of the models in our classification, exploring possible lepton flavor violating signals, collider signatures and implications for dark matter. The phenomenological prospects of these scenarios are very rich due to the presence of additional scalar states, including a massless Goldstone boson.
Diego Portillo-Sánchez, Pablo Escribano, Avelino Vicente
2023-01-12T19:01:43Z
http://arxiv.org/abs/2301.05249v2
# Ultraviolet extensions of the Scotogenic model ###### Abstract The Scotogenic model is a popular scenario that induces radiative Majorana neutrino masses and includes a weakly-interacting dark matter candidate. We classify all possible ultraviolet extensions of the Scotogenic model in which (i) the _dark_\(\mathbb{Z}_{2}\) parity emerges at low energies after the spontaneous breaking of a global U(1)\({}_{\rm L}\) lepton number symmetry, and (ii) the low-energy effective theory contains a naturally small lepton number breaking parameter, suppressed by the mass of a heavy mediator integrated out at tree-level. We find 50 such models and discuss two of them in detail to illustrate our setup. We also discuss some general aspects of the phenomenology of the models in our classification, exploring possible lepton flavor violating signals, collider signatures and implications for dark matter. The phenomenological prospects of these scenarios are very rich due to the presence of additional scalar states, including a massless Goldstone boson. IFIC/23-01 ## 1 Introduction The Scotogenic model [1] is a popular extension of the Standard Model (SM) that addresses two of the currently most important open questions in physics: the origin of neutrino masses and the nature of the dark matter (DM) of the Universe. Its popularity stems from its simplicity. The model extends the SM particle content with three singlet fermions, \(N_{1,2,3}\), and a scalar doublet, \(\eta\), all odd under a new \(\mathbb{Z}_{2}\) symmetry under which the SM fields are even. These ingredients suffice to induce Majorana neutrino masses at the 1-loop level and provide a viable DM candidate, namely the lightest \(\mathbb{Z}_{2}\)-odd state. Radiative neutrino mass models [2, 3, 4, 5] provide a natural suppression for neutrino masses with loop factors. This is one of the main motivations in favor of this class of models [6]. 
In addition, further suppression is introduced in some models by assuming an approximate lepton number symmetry, broken by a small amount through a Lagrangian term with a suppressed coefficient. This is the case of the Scotogenic model, which requires a small quartic parameter \(\lambda_{5}\ll 1\) to obtain the correct size for neutrino masses with sizable Yukawa couplings. While this is technically valid, and natural in the sense of 't Hooft [7], it also calls for an extension that explains the smallness of the \(\lambda_{5}\) parameter, possibly relating it to the breaking of lepton number. In this work we consider ultraviolet (UV) extensions of the Scotogenic model that provide a natural explanation for the smallness of the \(\lambda_{5}\) parameter and in which the \(\mathbb{Z}_{2}\) parity of the model emerges at low energies from a spontaneously broken global U(1) lepton number symmetry. This endeavor was initiated in [8], where a specific UV model with these properties was proposed. Here we go beyond specific realizations and classify all possible models with these features in which a low-energy Scotogenic model is obtained after integrating out a heavy field at tree-level. Besides one or several massive scalars, the particle spectrum of the theory will contain a massless Goldstone boson, the _majoron_ [9, 10, 11, 12], induced by the spontaneous breaking of lepton number. These new states are not present in the original Scotogenic model and lead to novel phenomenological predictions that allow one to probe our setup. The rest of the manuscript is organized as follows. First, we set our notation and conventions in Sec. 2, where the Scotogenic model is introduced. A general classification of all possible UV extensions of the Scotogenic model satisfying the requirements explained above is given in Sec. 3. Two selected example models are presented in detail in Secs. 4 and 5.
Some general aspects of the phenomenology of this class of models are discussed in Sec. 6. Finally, we summarize our results and conclude in Sec. 7. Additional information can be found in Appendix A, where we discuss scenarios with an accidental \(\mathbb{Z}_{2}\) symmetry.

## 2 The Scotogenic model

Before we discuss specific UV realizations of our setup, let us introduce our conventions for the Scotogenic model. The particle content of the Scotogenic model [1] includes, besides the usual SM fields, three generations of right-handed fermions \(N\), transforming as \((\mathbf{1},0)\) under \((\mathrm{SU}(2)_{\mathrm{L}},\mathrm{U}(1)_{\mathrm{Y}})\), and one scalar \(\eta\), transforming as \((\mathbf{2},1/2)\). We also impose the conservation of an ad-hoc \(\mathbb{Z}_{2}\) symmetry, under which \(\eta\) and \(N\) are odd while the rest of the fields in the model are even. The lepton and scalar particle content of the model is shown in Table 1. 1

Footnote 1: We follow the conventions for the Scotogenic model used in [13].

The model contains two scalar doublets, the usual Higgs doublet \(H\) and the new doublet \(\eta\), distinguished only by their \(\mathbb{Z}_{2}\) charges. They can be decomposed in terms of their \(\mathrm{SU}(2)_{\mathrm{L}}\) components as \[H=\begin{pmatrix}H^{+}\\ H^{0}\end{pmatrix}\,,\quad\eta=\begin{pmatrix}\eta^{+}\\ \eta^{0}\end{pmatrix}\,. \tag{1}\] Once the particle content and symmetries of the model have been specified, we can write down the Lagrangian. It contains the Yukawa terms \[\mathcal{L}_{\rm Y}=y\,\overline{N}\,\widetilde{\eta}^{\dagger}\,\ell_{L}+\frac{1}{2}M_{N}\,\overline{N}^{c}N+{\rm h.c.}\,, \tag{2}\] where \(\widetilde{\eta}=i\sigma_{2}\,\eta^{*}\), \(y\) is a general complex \(3\times 3\) matrix and \(M_{N}\) is a symmetric \(3\times 3\) mass matrix.
The scalar potential of the model is given by \[\begin{split}\mathcal{V}_{\rm UV}&=m_{H}^{2}H^{ \dagger}H+m_{\eta}^{2}\eta^{\dagger}\eta+\frac{\lambda_{1}}{2}(H^{\dagger}H)^{ 2}+\frac{\lambda_{2}}{2}(\eta^{\dagger}\eta)^{2}\\ &+\lambda_{3}(H^{\dagger}H)(\eta^{\dagger}\eta)+\lambda_{4}(H^{ \dagger}\eta)(\eta^{\dagger}H)+\left[\frac{\lambda_{5}}{2}(H^{\dagger}\eta)^{ 2}+{\rm h.c.}\right]\,.\end{split} \tag{3}\] Here \(m_{H}^{2}\) and \(m_{\eta}^{2}\) are parameters with dimensions of mass\({}^{2}\). We assume that the minimization of the scalar potential leads to a vacuum defined by \[\langle H^{0}\rangle=\frac{v_{H}}{\sqrt{2}}\,,\quad\langle\eta^{0}\rangle=0\,. \tag{4}\] This vacuum configuration breaks the electroweak symmetry in the usual way but preserves the \(\mathbb{Z}_{2}\) symmetry of the model. As a consequence of this, the lightest \(\mathbb{Z}_{2}\)-odd state (either \(N_{1}\) or \(\eta^{0}\)) is completely stable and can play the role of the DM of the Universe. Furthermore, neutrinos acquire non-zero Majorana masses at the 1-loop level, as shown in Fig. 1. The resulting \(3\times 3\) neutrino mass matrix is given by \[(m_{\nu})_{\alpha\beta}=\frac{\lambda_{5}\,v_{H}^{2}}{32\pi^{2}}\sum_{n}\frac{ y_{n\alpha}\,y_{n\beta}}{M_{N_{n}}}\left[\frac{M_{N_{n}}^{2}}{m_{0}^{2}-M_{N_{n}} ^{2}}+\frac{M_{N_{n}}^{4}}{\left(m_{0}^{2}-M_{N_{n}}^{2}\right)^{2}}\,\log \frac{M_{N_{n}}^{2}}{m_{0}^{2}}\right]\,, \tag{5}\] where \(m_{0}^{2}=m_{\eta}^{2}+(\lambda_{3}+\lambda_{4})\;v_{H}^{2}/2\) and \(M_{N_{n}}\) are the diagonal elements of the \(M_{N}\) matrix. One can easily estimate that in order to obtain neutrino masses of the order of 0.1 eV with Scotogenic states at the TeV scale and Yukawas of order 1, \(\lambda_{5}\) must be of order \(\sim 10^{-10}\). The smallness of this parameter is protected by lepton number, and thus is technically natural [7]. However, it is not explained in the context of the Scotogenic model.
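As a quick numerical illustration of Eq. (5) (our own back-of-the-envelope sketch; the benchmark values are illustrative choices, not taken from the paper), one can check that TeV-scale states with \(y\sim 1\) and \(\lambda_{5}\sim 10^{-10}\) indeed land in the sub-eV range for the neutrino mass, using a single generation for simplicity:

```python
import math

# Illustrative benchmark: one generation, O(1) Yukawa, TeV-scale masses.
v_H  = 246.0    # GeV, electroweak VEV
lam5 = 1e-10
y    = 1.0
M_N  = 1000.0   # GeV, singlet fermion mass
m0   = 1200.0   # GeV, eta^0 mass scale

def loop_function(M, m0):
    """Bracket of Eq. (5): r + r^2 * log(M^2/m0^2), with r = M^2/(m0^2 - M^2)."""
    r = M**2 / (m0**2 - M**2)
    return r + r**2 * math.log(M**2 / m0**2)

m_nu_GeV = lam5 * v_H**2 / (32 * math.pi**2) * y**2 / M_N * loop_function(M_N, m0)
m_nu_eV  = m_nu_GeV * 1e9
print(f"m_nu ~ {m_nu_eV:.1e} eV")
```

With these numbers one finds \(m_\nu\) of order \(10^{-2}\) eV, confirming the order-of-magnitude estimate \(\lambda_{5}\sim 10^{-10}\) quoted above.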
\begin{table} \begin{tabular}{|c|c||c c c|c|} \hline **Field** & **Generations** & \({\rm SU(3)_{c}}\) & \({\rm SU(2)_{L}}\) & \({\rm U(1)_{Y}}\) & \(\mathbb{Z}_{2}\) \\ \hline \(\ell_{L}\) & 3 & **1** & **2** & -1/2 & + \\ \(e_{R}\) & 3 & **1** & **1** & -1 & + \\ \(N\) & 3 & **1** & **1** & 0 & \(-\) \\ \hline \(H\) & 1 & **1** & **2** & 1/2 & + \\ \(\eta\) & 1 & **1** & **2** & 1/2 & \(-\) \\ \hline \end{tabular} \end{table} Table 1: Lepton and scalar particle content and representations under the gauge and discrete symmetries in the Scotogenic model. \(\ell_{L}\) and \(e_{R}\) are the SM left- and right-handed leptons, respectively, and \(H\) is the SM Higgs doublet.

## 3 Ultraviolet extensions of the Scotogenic model

### General considerations

The Scotogenic model has two features that call for a refinement, namely the origin of the \(\mathbb{Z}_{2}\) symmetry and \(\lambda_{5}\ll 1\). Although these features do not pose any theoretical problem, they can be regarded as ad-hoc ingredients in an otherwise very natural framework. We are thus interested in a UV extension of the Scotogenic model that provides an explanation for them. More specifically, we want to classify all possible UV scenarios that lead to the Scotogenic model at low energies after integrating out a heavy scalar field \(S\), with \(m_{S}\gg v_{H}\), and satisfy the following two requirements:

**(A)**: The Scotogenic \(\mathbb{Z}_{2}\) is obtained as a remnant after the spontaneous breaking of a \(\mathrm{U}(1)_{\mathrm{L}}\) lepton number symmetry by the VEV of one or several singlet scalar fields \(\sigma\): \[\mathrm{U}(1)_{\mathrm{L}}\xrightarrow{\langle\sigma\rangle}\,\mathbb{Z}_{2}\]

**(B)**: The \((H^{\dagger}\eta)^{2}\) operator is forbidden in the UV theory due to \(\mathrm{U}(1)_{\mathrm{L}}\) conservation, but an operator of the form \((H^{\dagger}\eta)^{2}\sigma^{n}\), with \(n\geq 1\), is generated after integrating out \(S\).
After the singlets get VEVs and \(\mathrm{U}(1)_{\mathrm{L}}\) is spontaneously broken, this will induce an effective \(\lambda_{5}\) coupling, which will be naturally suppressed by the large \(m_{S}\) energy scale. In this work we will concentrate on global \(\mathrm{U}(1)_{\mathrm{L}}\) lepton number symmetries, tree-level completions of the \(\lambda_{5}\) operator and UV models with one or two \(\sigma\) singlets. Gauged versions of the lepton number symmetry, higher-order completions and models with additional singlets are left for future work. The models we are looking for induce neutrino masses _a la Scotogenic_, with variations of the neutrino mass diagram in Fig. 1. This diagram has an internal scalar line (with \(\eta^{0}\)) and an internal fermion line (with \(N\)). The analogous diagrams in the UV extended models will include the heavy scalar \(S\) in the loop and one or several external legs with \(\sigma\) singlets (or \(\sigma\) insertions, for short).

Figure 1: Neutrino mass generation in the Scotogenic model. This Feynman diagram shows the relevant gauge eigenstates involved in the 1-loop contribution to neutrino masses.

After these considerations, there are two classes of models that can be already discarded:

* Models without \(\sigma\) insertions in the scalar line. These models can be discarded because the \((H^{\dagger}\eta)^{2}\) operator would be allowed in the UV theory. This would preclude an explanation of \(\lambda_{5}\ll 1\). In addition, \(\eta\) would acquire a VEV.

* Models without \(\sigma\) insertions in the fermion line. The \(\mathrm{U}(1)_{\mathrm{L}}\) charge of the \(N\) singlet fermions must necessarily vanish if the \(\sigma\overline{N}^{c}N\) operator is absent and their Majorana masses are explicitly introduced in the Lagrangian. However, in this case \(N\) will be even under the \(\mathbb{Z}_{2}\) symmetry obtained after spontaneous \(\mathrm{U}(1)_{\mathrm{L}}\) breaking.
This scenario does not correspond to the Scotogenic model. Nevertheless, an additional accidental \(\mathbb{Z}_{2}\) symmetry may appear, as explained in Appendix A. We are thus left with neutrino mass topologies with \(\sigma\) insertions in both internal lines. The scalar line leads to an operator \((H^{\dagger}\eta)^{2}\sigma^{n}\) after the heavy \(S\) is integrated out. All possible topologies are shown in Table 2. Topologies \(\mathrm{I}-\mathrm{IV}\) include one \(S\) propagator and lead to a \(\lambda_{5}\) operator of the form

\[\mathcal{O}_{\lambda_{5}}=(H^{\dagger}\eta)^{2}\sigma_{A}\sigma_{B}\,, \tag{6}\]

suppressed by \(1/m_{S}\), while topology \(\mathrm{V}\) includes two \(S\) propagators and induces the operator

\[\mathcal{O}_{\lambda_{5}}=(H^{\dagger}\eta)^{2}\sigma_{A}^{2}\sigma_{B}\sigma_{C}\,, \tag{7}\]

suppressed by \(1/m_{S}^{2}\). These two generic expressions for the \(\lambda_{5}\) operator include cases in which one of the \(\sigma\) insertions is missing (for instance, \(\sigma_{B}=\emptyset\)) and cases in which several \(\sigma\) insertions in the scalar line correspond to the same field (for instance, \(\sigma_{A}=\sigma_{B}\)). Finally, the fermion line simply corresponds to a \(\sigma-N-N\) Yukawa interaction. In the following we will always assume the presence of the operator \(\sigma\overline{N}^{c}N\) (for models with one \(\sigma\) field) or \(\sigma_{1}\overline{N}^{c}N\) (for models with two \(\sigma\) fields), and we will not draw it. The coefficient of this operator will be denoted by \(\kappa\). Therefore, once the singlet scalar gets a VEV, \(\langle\sigma_{(1)}\rangle=\frac{v_{\sigma_{(1)}}}{\sqrt{2}}\), the Majorana mass matrix for the singlet fermions \(N\) is generated. 2

Footnote 2: In models with two \(\sigma\) fields such that \(q_{\sigma_{1}}=q_{\sigma_{2}}\) or \(q_{\sigma_{1}}=-q_{\sigma_{2}}\), an additional Yukawa term \(\sigma_{2}\overline{N}^{c}N\) or \(\sigma_{2}^{*}\overline{N}^{c}N\) would be present.
Here \(q_{\sigma_{1}}\) and \(q_{\sigma_{2}}\) denote the \(\mathrm{U}(1)_{\mathrm{L}}\) charges of \(\sigma_{1}\) and \(\sigma_{2}\), respectively. This would lead to \(M_{N}=\sqrt{2}\,(\kappa_{1}\,v_{\sigma_{1}}+\kappa_{2}\,v_{\sigma_{2}})\) without affecting our discussion. We note, however, that in such models both \(\sigma\) singlets are essentially copies of the same field.

\[M_{N}=\sqrt{2}\,\kappa\,v_{\sigma_{(1)}}\,. \tag{8}\]

In the following we classify all possible UV extensions of the Scotogenic model compatible with our requirements (A) and (B). Given their qualitative differences, it is convenient to discuss topologies \(\mathrm{I}-\mathrm{IV}\) and \(\mathrm{V}\) separately.

### Topologies I-IV

We first discuss the models based on topologies \(\mathrm{I}-\mathrm{IV}\). We will refer to a specific model using the notation \(\xi(A,B)\), where \(\xi=\{\mathrm{I},\mathrm{II},\mathrm{III},\mathrm{IV}\}\) denotes the topology for the \((H^{\dagger}\eta)^{2}\sigma_{A}\sigma_{B}\) operator, as listed in Table 2, and \(A\) and \(B\) denote the singlets involved in the vertices where \(\sigma_{A,B}\) are coupled. Since we only consider UV theories with at most two different singlets, \(A\) and \(B\) can only take the values \(\emptyset,1,2,1^{*}\), where \(\emptyset\) indicates that no \(\sigma\) enters the corresponding vertex and \(\sigma_{1^{*}}\equiv\sigma_{1}^{*}\). It is important to mention that we do not consider scenarios with \(A,B=2^{*}\) because they lead to a redefinition of the charge \(q_{\sigma_{2}}\to-q_{\sigma_{2}}\). 3 Therefore, in principle each topology has 16 different variations depending on the way the \(\sigma_{A,B}\) singlets are coupled. However, we can reduce this number by taking into account the following arguments:

Footnote 3: In the following, we will denote the \(\mathrm{U}(1)_{\mathrm{L}}\) charge of the field \(X\) as \(q_{X}\). Furthermore, \(q_{\ell_{L}}=q_{e_{R}}=1\) and \(q_{H}=0\), as usual.
* \(A\neq B\) is required to forbid the term \((H^{\dagger}\eta\,\sigma_{A})^{2}\) in the effective Lagrangian. If this specific combination is allowed, then the term \(H^{\dagger}\eta\,\sigma_{A}\) is too. This trilinear interaction induces a non-zero VEV for \(\eta\) after both \(H\) and \(\sigma_{A}\) acquire their VEVs, hence breaking the Scotogenic \(\mathbb{Z}_{2}\) symmetry.

* \(A\neq B^{*}\) is also required. Otherwise, \((H^{\dagger}\eta)^{2}\sigma_{A}\sigma_{A}^{*}\) is allowed by the \(\mathrm{U}(1)_{\mathrm{L}}\) symmetry and then the operator \((H^{\dagger}\eta)^{2}\) is present in the UV theory.

* In all \(\xi(1,\emptyset)\) and \(\xi(\emptyset,1)\) models the effective operator leading to the \(\lambda_{5}\) coupling is \(\mathcal{O}_{\lambda_{5}}=(H^{\dagger}\eta)^{2}\sigma\). This implies the relation \(2q_{\eta}+q_{\sigma}=0\). In addition, the Yukawa coupling \(\sigma\overline{N}^{c}N\) implies \(2q_{N}+q_{\sigma}=0\). Hence, the charges for \(\eta\) and \(N\) must satisfy \(q_{\eta}=q_{N}\), and then the \(\overline{N}\tilde{\eta}^{\dagger}\ell_{L}\) Yukawa term is forbidden by \(\mathrm{U}(1)_{\mathrm{L}}\). Similarly, in all \(\xi(1^{*},\emptyset)\) and \(\xi(\emptyset,1^{*})\) models one finds \(q_{\eta}=-q_{N}\), and then \(q_{N}=\frac{1}{2}\) in order to allow the term \(\overline{N}\tilde{\eta}^{\dagger}\ell_{L}\).

With these considerations, there are only 8 possibilities left in each of the four topologies. However, there are redundancies. Models based on topologies III and IV are symmetric with respect to the exchange \(\sigma_{A}\leftrightarrow\sigma_{B}\) (i.e. \(\xi(A,B)=\xi(B,A)\) with \(\xi=\mathrm{III},\mathrm{IV}\)). Similarly, \(\mathrm{II}(A,B)\sim\mathrm{II}(B,A)\) after redefining \(q_{S}\to-q_{S}\). This further reduces the number of fundamentally different UV models. In total, we find 24 (20 + 4, because in II-models \(S\) can be an \(\mathrm{SU}(2)_{\mathrm{L}}\) singlet or triplet) different UV theories.
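The charge assignments in these models follow from simple linear constraints: every Lagrangian term must carry vanishing total \(\mathrm{U}(1)_{\mathrm{L}}\) charge. As an illustrative cross-check (our own sketch, not code from the paper), the following encodes three such conditions, the Yukawa \(\overline{N}\tilde{\eta}^{\dagger}\ell_{L}\), the Majorana Yukawa \(\sigma_{1}\overline{N}^{c}N\), and the \(\lambda_{5}\) operator \((H^{\dagger}\eta)^{2}\sigma_{A}\sigma_{B}\), and verifies two charge assignments of the kind listed in Table 3:

```python
from fractions import Fraction as F

def invariant(*charges):
    """A term is U(1)_L invariant iff its charges add up to zero
    (conjugated fields enter with the opposite sign)."""
    return sum(charges, F(0)) == 0

def check_model(qN, qeta, qs1, qA, qB):
    yukawa   = invariant(-qN, qeta, F(1))    # N-bar eta~^dag l_L  (q_l = 1)
    majorana = invariant(qs1, 2 * qN)        # sigma_1 N^c N
    lam5_op  = invariant(2 * qeta, qA, qB)   # (H^dag eta)^2 sigma_A sigma_B
    return yukawa and majorana and lam5_op

# Model I(1*, none): qN = 1/2, qeta = -1/2, qsigma1 = -1;
# A = sigma_1^* (charge +1), B absent (charge 0).
ok1 = check_model(F(1, 2), F(-1, 2), F(-1), qA=F(1), qB=F(0))

# A model xi(1*, 2) for a generic non-integer qN (here qN = 3/4):
# qeta = qN - 1, qsigma1 = -2 qN, qsigma2 = 2 - 4 qN.
qN = F(3, 4)
ok2 = check_model(qN, qN - 1, -2 * qN, qA=2 * qN, qB=2 - 4 * qN)
```

Exact rational arithmetic via `Fraction` avoids any floating-point ambiguity when the charges are fractional.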
They are listed in Table 3, where the \(\mathrm{U}(1)_{\mathrm{L}}\) charges of \(N\), \(\eta\), \(\sigma_{A,B}\) and \(S\), as well as the \((\mathrm{SU}(2)_{\mathrm{L}},\mathrm{U}(1)_{\mathrm{Y}})\) representation of \(S\) in each model, are shown. Some comments are in order:

1. The \((\mathrm{SU}(2)_{\mathrm{L}},\mathrm{U}(1)_{\mathrm{Y}})\) representation of the heavy scalar \(S\) depends on the topology. In I-models \(S\) transforms as \((\mathbf{3},1)\), in II-models we have two possibilities, \((\mathbf{3},0)\) or \((\mathbf{1},0)\), while in III- and IV-models \(S\) transforms as \((\mathbf{2},1/2)\).

2. In all the models in Table 3, the global \(\mathrm{U}(1)_{\mathrm{L}}\) symmetry may be spontaneously broken to a \(\mathbb{Z}_{2}\) parity, under which \(N\) and \(\eta\) are odd. In all the \(\xi(1^{*},\emptyset)\) models and in \(\mathrm{I}(\emptyset,1^{*})\), the conservation of \(\mathrm{U}(1)_{\mathrm{L}}\) restricts the lepton number charges of \(N\), \(\eta\), \(\sigma_{A,B}\) and \(S\), which must take precise values, and this automatically implies a remnant \(\mathbb{Z}_{2}\) that corresponds to the usual Scotogenic parity. The model studied in Ref. [8], which corresponds to model \(\mathrm{I}(1^{*},\emptyset)\) in our notation, is a good example of this. In the rest of the models, the conservation of \(\mathrm{U}(1)_{\mathrm{L}}\) leaves one of the charges free; we choose it to be \(q_{N}\). In this case, these are the restrictions to recover the _dark_ \(\mathbb{Z}_{2}\) parity from \(\mathrm{U}(1)_{\mathrm{L}}\) breaking:

   * \(q_{N}\) cannot be an integer.

   * If \(q_{N}=\frac{\alpha}{\beta}\), with \(\alpha,\beta\in\mathbb{Z}\), then \(\alpha\) and \(\beta\) have to be odd and even, respectively.

   * \(\mathrm{GCD}(\alpha,\beta)=1\), where GCD stands for Greatest Common Divisor. Therefore, \(\alpha\) and \(\beta\) must be coprime.
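The three restrictions on \(q_{N}\) can be packaged into a small predicate (an illustrative sketch of ours, not from the paper; Python's `Fraction` already stores \(\alpha/\beta\) in lowest terms, so the coprimality condition holds automatically once the fraction is built):

```python
from fractions import Fraction
from math import gcd

def leaves_dark_Z2(qN: Fraction) -> bool:
    """True when qN = alpha/beta satisfies the three restrictions:
    non-integer, with alpha odd, beta even, and gcd(alpha, beta) = 1."""
    alpha, beta = qN.numerator, qN.denominator  # already in lowest terms
    if beta == 1:                               # qN is an integer
        return False
    return alpha % 2 == 1 and beta % 2 == 0 and gcd(alpha, beta) == 1
```

For instance, the integer choice \(q_{N}=2\) fails (this is the \(\mathrm{I}(1,2)\) counterexample discussed below), while \(q_{N}=1/2\), the value fixed in the \(\xi(1^{*},\cdot)\) models, passes.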
The first restriction comes from the requirement that \(N\) and \(\eta\) be both odd under the remnant Scotogenic \(\mathbb{Z}_{2}\). The relation \(q_{\eta}=q_{N}-1\) implies that if \(q_{N}\) is even, then \(q_{\eta}\) must be odd, and vice versa. Then, \(N\) and \(\eta\) will transform differently under the remnant \(\mathbb{Z}_{2}\) symmetry. As an example of this, consider the model \(\mathrm{I}(1,2)\) with \(q_{N}=2\). In this case, the solution for the rest of the \(\mathrm{U}(1)_{\mathrm{L}}\) charges in the model is \(q_{\eta}=1,q_{\sigma_{1}}=-4,q_{\sigma_{2}}=2\) and \(q_{S}=4\). The global lepton number symmetry gets spontaneously broken as \(\mathrm{U}(1)_{\mathrm{L}}\to\mathbb{Z}_{2}\), but with \(N\) and \(\eta\) charged under \(\mathbb{Z}_{2}\) as \(+\) and \(-\), respectively, and this does not reproduce the Scotogenic model. Similarly, if \(q_{N}=\frac{\alpha}{\beta}\), after normalizing

\begin{table} \begin{tabular}{c c c c c c c c c c} \hline & **Topology** & \(\mathbf{A}\) & \(\mathbf{B}\) & \(\mathbf{q_{N}}\) & \(\mathbf{q_{\eta}}\) & \(\mathbf{q_{\sigma_{1}}}\) & \(\mathbf{q_{\sigma_{2}}}\) & \(\mathbf{q_{S}}\) & \(\mathbf{(SU(2)_{L},U(1)_{Y})_{S}}\) \\ \hline \hline 1 & I & \(1^{*}\) & \(\emptyset\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) & \(-1\) & - & \(-1\) & \((\mathbf{3},1)\) \\ 2 & I & \(\emptyset\) & \(1^{*}\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) & \(-1\) & - & \(0\) & \((\mathbf{3},1)\) \\ \multicolumn{10}{c}{\(\vdots\)} \\ & & \(1^{*}\) & \(2\) & \(q_{N}\) & \(q_{N}-1\) & \(-2q_{N}\) & \(2-4q_{N}\) & \(1-q_{N}\) & \((\mathbf{2},1/2)\) \\ \hline \hline \end{tabular} \end{table} Table 3: UV extended models leading to topologies \(\mathrm{I-IV}\) and satisfying conditions (A) and (B). For each model we show the \(\mathrm{U(1)_{L}}\) charges of \(N\), \(\eta\), \(\sigma_{1}\), \(\sigma_{2}\) and \(S\), as well as the \(\mathrm{(SU(2)_{L},U(1)_{Y})}\) representation of \(S\).
Models that become any of the models in this list after renaming the fields or redefining their \(\mathrm{U(1)_{L}}\) charges are not included, as explained in the text.

all \(\mathrm{U}(1)_{\mathrm{L}}\) charges so that they become integer numbers (multiplying by \(\beta\)), we obtain \(\tilde{q}_{\eta}=\beta-\alpha\) and \(\tilde{q}_{N}=\alpha\). Hence, for \(\eta\) and \(N\) to be odd under \(\mathbb{Z}_{2}\), \(\alpha\) and \(\beta\) must be odd and even, respectively. Finally, the third restriction is required to guarantee that \(n=2\) after \(\mathrm{U}(1)_{\mathrm{L}}\) breaks to the discrete symmetry \(\mathbb{Z}_{n}\). As an example we take model \(\mathrm{I}(1,2)\), where \(n\equiv\mathrm{GCD}(\tilde{q}_{\sigma_{1}},\tilde{q}_{\sigma_{2}},\tilde{q}_{S})=\mathrm{GCD}(-2\alpha,2\beta,2\alpha)=2\,\mathrm{GCD}(\alpha,\beta)=2\). We checked for all the working models that \(\mathrm{GCD}(\tilde{q}_{\sigma_{1}},\tilde{q}_{\sigma_{2}},\tilde{q}_{S})\) or \(\mathrm{GCD}(\tilde{q}_{\sigma_{1}},\tilde{q}_{\sigma_{2}})\), depending on whether \(S\) acquires a VEV or not, always reduces to \(\mathrm{GCD}(\alpha,\beta)=1\). Also, we want \(q_{N}=\frac{\alpha}{\beta}\) to be irreducible.

3. In all models, and for all possible values of \(q_{N}\) in agreement with the restrictions listed in the previous item, \(\eta\) never acquires an induced VEV. This is crucial for the consistency of the Scotogenic model.

4. It is clear that in all models of the form \(\xi(A,\emptyset)\) or \(\xi(\emptyset,B)\), a trilinear coupling \(\mu\) participates in the generation of the \(\lambda_{5}\) coupling, induced after the breaking of \(\mathrm{U}(1)_{\mathrm{L}}\). This is perfectly consistent, but requires the assumption \(\mu\ll m_{S}\) to justify \(\lambda_{5}\ll 1\). This poses a theoretical issue, since \(\mu\) is a parameter of the UV theory.
In contrast, in models of the form \(\xi(A,B)\) with \(A,B\neq\emptyset\), the \(\lambda_{5}\) coupling will only depend on the \(\sigma_{A,B}\) VEVs, induced at low energies and naturally small compared to \(m_{S}\).

5. Finally, we note that in I-models the \(\mathrm{U}(1)_{\mathrm{L}}\) charges of the particles \(N\), \(\eta\) and \(\sigma_{A,B}\) remain the same after the non-trivial change \(A\leftrightarrow B\). For instance, this is the case in models \(\mathrm{I}(1,2)\) and \(\mathrm{I}(2,1)\).

### Topology V

Topology V contains an internal line with a double \(S\) propagator and thus induces the \(\lambda_{5}\) coupling at order \(1/m_{S}^{4}\). This is two orders higher than the corresponding contributions from topologies \(\mathrm{I}-\mathrm{IV}\). Therefore, for a diagram with topology V to be dominant, other topologies must be absent (or highly suppressed due to a specific parameter choice). In fact, many models leading to topology V also generate other topologies, and they have already been included in the previous discussion. Nevertheless, there are also some models in which the symmetries allow for topology V but forbid other topologies, as we proceed to show. Topology V requires the presence of the operators \(H^{\dagger}\eta\,S\,\sigma_{A}\) and \(\left(S^{\dagger}\right)^{2}\sigma_{B}\sigma_{C}\) to produce a \(\lambda_{5}\) operator as in Eq. (7). A model based on this topology will be denoted as \(\mathrm{V}(A,B,C)\), where \(A\), \(B\), and \(C\) can take the values \(\emptyset,1,2,1^{*}\), as in the previous topologies. Again, we do not consider models with \(A,B,C=2^{*}\). The reason, however, is twofold. On the one hand, in scenarios involving \(2^{*}\) but not \(2\) this is again due to the existence of a redefinition of the charges that allows one to show an equivalence to models without \(2^{*}\).
On the other hand, models combining \(2\) and \(2^{*}\) do not lead to a solution for the \(\mathrm{U}(1)_{\mathrm{L}}\) charges, or their solutions are compatible with topology II, which is naturally dominant. 4 In conclusion, topology V leads to \(4\times 4\times 4=64\) different variations depending on the way the \(\sigma_{A,B,C}\) singlets are coupled. However, we can reduce this number by taking into account the following arguments:

Footnote 4: This is the case of models \(\mathrm{V}(2,2^{*},C)\) and \(\mathrm{V}(2,B,2^{*})\). These are not equivalent to models \(\mathrm{V}(2,2,C)\) and \(\mathrm{V}(2,B,2)\), respectively, so they do not lead to just a redefinition of the charges.

* All V models are symmetric under \(B\leftrightarrow C\), \(\mathrm{V}(A,B,C)=\mathrm{V}(A,C,B)\). Then, for each of the 4 possible values of \(A\), this removes \((4^{2}-4)/2=6\) possibilities, leaving 40 variations.

* \(B\neq C\) and \(B\neq C^{*}\). The former is required to forbid the operator \(S^{\dagger}\sigma_{B}\). This would induce a VEV for \(S\), which in turn would induce a VEV for \(\eta\) due to the operator \(H^{\dagger}\eta\,S\,\sigma_{A}\). The latter restriction avoids having \(S^{\dagger}S^{\dagger}\) in the Lagrangian, since this term would imply that \(S\) is a singlet under every symmetry of the model, hence leading to an induced VEV for \(\eta\) as well. This condition together with the previous one leaves \(4\times[(4^{2}+4)/2-4-1]=20\) possibilities.

* \(A\neq B^{*}\) (or \(C^{*}\)) leads either to models for which the equations for the charges are incompatible with the original assumptions or to models for which the solutions for the charges are compatible with topology II.
Here \(\mathcal{O}_{\lambda_{5}}\) takes the form \((\sigma_{A}^{*}\sigma_{A})\sigma_{A}\sigma_{C}(H^{\dagger}\eta)^{2}\), which means that the term \(\sigma_{A}\sigma_{C}(H^{\dagger}\eta)^{2}\) would be allowed by lepton number and, in turn, \(\sigma_{C}(H^{\dagger}S^{\dagger}\eta)\) too, with \(C\neq A\) in order to satisfy the above requirements. Given that, by construction, we have the operator \(\sigma_{A}(H^{\dagger}S\eta)\) within the model, the diagram for the scalar line of II-models (shown in Table 2) would appear, leaving this new topology as a subdominant effect in the generation of the \(\lambda_{5}\) coupling. From the remaining 20 variations, this removes 2 for each of \(A=1\) and \(A=1^{*}\), and 3 more for the models \(\mathrm{V}(\emptyset,B,\emptyset)\), leaving 13 possibilities. Notice that \(S\) can be either a singlet or a triplet in all the models, so we have the 26 models shown in Table 4. Again, we note that if only one of the three \(A\), \(B\), or \(C\) labels is equal to \(2\), then the same model with \(2^{*}\) instead is equivalent to the former with \(q_{\sigma_{2}}\to-q_{\sigma_{2}}\). As before, in all the models in Table 4 the global \(\mathrm{U}(1)_{\mathrm{L}}\) symmetry may be spontaneously broken to a \(\mathbb{Z}_{2}\) parity, under which \(N\) and \(\eta\) are odd. In the models \(\mathrm{V}(1,1,\emptyset)\) and \(\mathrm{V}(1^{*},1^{*},\emptyset)\), the conservation of \(\mathrm{U}(1)_{\mathrm{L}}\) restricts the lepton number charges of \(N\), \(\eta\), \(\sigma_{A,B,C}\) and \(S\), which must take precise values, and this automatically implies a remnant \(\mathbb{Z}_{2}\) that corresponds to the usual Scotogenic parity. In the rest of the models, the conservation of \(\mathrm{U}(1)_{\mathrm{L}}\) leaves one of the charges free, which we choose to be \(q_{N}\). In this case, the restrictions to recover the _dark_ \(\mathbb{Z}_{2}\) parity from \(\mathrm{U}(1)_{\mathrm{L}}\) breaking are: * \(q_{N}\) cannot be an integer.
* If \(q_{N}=\frac{\alpha}{\beta}\), with \(\alpha,\beta\in\mathbb{Z}\), then \(\alpha\) and \(\beta\) have to be odd and even, respectively. * \(\mathrm{GCD}(\alpha,\beta)=1\), i.e., \(\alpha\) and \(\beta\) must be coprime. Additionally, some models have extra conditions for the \(\mathbb{Z}_{2}\) to appear: * In \(\mathrm{V}(2,2,\emptyset)\), we further require \(\mathrm{GCD}(3\,\alpha,\alpha-\beta)=1\) if \(\frac{\alpha-\beta}{3}\) is not an integer, or \(\mathrm{GCD}(\alpha,\frac{\alpha-\beta}{3})=1\) if \(\frac{\alpha-\beta}{3}\) is an integer. * In \(\mathrm{V}(2,1^{*},2)\), we further require \(\mathrm{GCD}(3\,\alpha,2\,\alpha-\beta)=1\) if \(\frac{2\,\alpha-\beta}{3}\) is not an integer, or \(\mathrm{GCD}(\alpha,\frac{2\,\alpha-\beta}{3})=1\) if \(\frac{2\,\alpha-\beta}{3}\) is an integer. \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline & **Topology** & \(A\) & \(B\) & \(C\) & \(q_{N}\) & \(q_{\eta}\) & \(q_{\sigma_{1}}\) & \(q_{\sigma_{2}}\) & \(q_{S}\) & \((\text{SU}(2)_{\text{L}},\text{U}(1)_{\text{Y}})_{S}\) \\ \hline \hline 25-26 & V & 1 & 1 & \(\emptyset\) & \(-\frac{1}{2}\) & \(-\frac{3}{2}\) & 1 & - & \(\frac{1}{2}\) & \((\mathbf{3},0)\) or \((\mathbf{1},0)\) \\ 27-28 & V & \(1^{*}\) & \(1^{*}\) & \(\emptyset\) & \(\frac{1}{4}\) & \(-\frac{3}{4}\) & \(-\frac{1}{2}\) & - & \(\frac{1}{4}\) & \((\mathbf{3},0)\) or \((\mathbf{1},0)\) \\ 29-30 & V & \(\emptyset\) & 1 & 2 & \(q_{N}\) & \(q_{N}-1\) & \(-2q_{N}\) & 2 & \(1-q_{N}\) & \((\mathbf{3},0)\) or \((\mathbf{1},0)\) \\ 31-32 & V & \(\emptyset\) & \(1^{*}\) & 2 & \(q_{N}\) & \(q_{N}-1\) & \(-2q_{N}\) & \(2-4q_{N}\) & \(1-q_{N}\) & \((\mathbf{3},0)\) or \((\mathbf{1},0)\) \\ 33-34 & V & 1 & 2 & \(\emptyset\) & \(q_{N}\) & \(q_{N}-1\) & \(-2q_{N}\) & \(2q_{N}+2\) & \(1+q_{N}\) & \((\mathbf{3},0)\) or \((\mathbf{1},0)\) \\ 35-36 & V & \(1^{*}\) & 2 & \(\emptyset\) & \(q_{N}\) & \(q_{N}-1\) & \(-2q_{N}\) & \(2-6q_{N}\) & \(1-3q_{N}\) & \((\mathbf{3},0)\) or \((\mathbf{1},0)\) \\ 37-38 & V &
1 & 1 & 2 & \(q_{N}\) & \(q_{N}-1\) & \(-2q_{N}\) & \(4q_{N}+2\) & \(1+q_{N}\) & \((\mathbf{3},0)\) or \((\mathbf{1},0)\) \\ 39-40 & V & \(1^{*}\) & \(1^{*}\) & 2 & \(q_{N}\) & \(q_{N}-1\) & \(-2q_{N}\) & \(2-8q_{N}\) & \(1-3q_{N}\) & \((\mathbf{3},0)\) or \((\mathbf{1},0)\) \\ 41-42 & V & 2 & 1 & \(\emptyset\) & \(q_{N}\) & \(q_{N}-1\) & \(-2q_{N}\) & 1 & \(-q_{N}\) & \((\mathbf{3},0)\) or \((\mathbf{1},0)\) \\ 43-44 & V & 2 & \(1^{*}\) & \(\emptyset\) & \(q_{N}\) & \(q_{N}-1\) & \(-2q_{N}\) & \(1-2q_{N}\) & \(q_{N}\) & \((\mathbf{3},0)\) or \((\mathbf{1},0)\) \\ 45-46 & V & 2 & 2 & \(\emptyset\) & \(q_{N}\) & \(q_{N}-1\) & \(-2q_{N}\) & \(\frac{2}{3}-\frac{2}{3}q_{N}\) & \(\frac{1}{3}-\frac{1}{3}q_{N}\) & \((\mathbf{3},0)\) or \((\mathbf{1},0)\) \\ 47-48 & V & 2 & 1 & 2 & \(q_{N}\) & \(q_{N}-1\) & \(-2q_{N}\) & \(\frac{2}{3}\) & \(\frac{1}{3}-q_{N}\) & \((\mathbf{3},0)\) or \((\mathbf{1},0)\) \\ 49-50 & V & 2 & \(1^{*}\) & 2 & \(q_{N}\) & \(q_{N}-1\) & \(-2q_{N}\) & \(\frac{2}{3}-\frac{4}{3}q_{N}\) & \(\frac{1}{3}+\frac{1}{3}q_{N}\) & \((\mathbf{3},0)\) or \((\mathbf{1},0)\) \\ \hline \hline \end{tabular} \end{table} Table 4: UV extended models leading to topology V and satisfying conditions (A) and (B). For each model we show the \(\text{U}(1)_{\text{L}}\) charges of \(N\), \(\eta\), \(\sigma_{1}\), \(\sigma_{2}\) and \(S\), as well as the \((\text{SU}(2)_{\text{L}},\text{U}(1)_{\text{Y}})\) representation of \(S\). Models that become any of the models in this list after renaming the fields or redefining their \(\text{U}(1)_{\text{L}}\) charges are not included, as explained in the text. * In \({\rm V}(2,1,2)\), we further require \({\rm GCD}(3\,\alpha,\beta)=1\) if \(\frac{\beta}{3}\) is not an integer, or \({\rm GCD}(\alpha,\frac{\beta}{3})=1\) if \(\frac{\beta}{3}\) is an integer. This concludes our classification of all possible UV extensions of the Scotogenic model satisfying our requirements (A) and (B). 
We will now illustrate it with two specific example models. An additional example can be found in [8]. ## 4 A UV extended Scotogenic model with one \(\sigma\) field Our first example model is a UV extension of the Scotogenic model with one \(\sigma\) field. Another example of this class of models can be found in [8]. ### Ultraviolet theory We consider an extension of the Scotogenic model with two new particles: the \({\rm SU}(2)_{\rm L}\) doublet \(S\) and the singlet \(\sigma\), both scalars. The \(\mathbb{Z}_{2}\) Scotogenic parity is replaced by a global \({\rm U}(1)_{\rm L}\) lepton number symmetry. Table 5 shows the scalar and leptonic fields of the model and their representations under the gauge and global symmetries. We want to explain the smallness of the \(\lambda_{5}\) coupling of the Scotogenic model. Our strategy will be to forbid it in the original Lagrangian and make it arise effectively at low energies once the scalar \(\sigma\) acquires a VEV and we integrate out \(S\). We also impose that, after symmetry breaking, the effective \(\lambda_{5}\) coupling induces neutrino masses as shown in Fig. 2. In our notation, this is a \({\rm IV}(1^{*},\emptyset)\) model. This requires the presence of the operators \[\overline{N}\widetilde{\eta}^{\dagger}\ell_{L}\quad,\quad\sigma\overline{N}^{c}N\quad,\quad H^{\dagger}SH^{\dagger}\eta\quad,\quad\sigma^{*}S^{\dagger}\eta\,, \tag{9}\] Figure 2: Neutrino mass generation in an extended Scotogenic model with one \(\sigma\) field. This Feynman diagram shows the relevant gauge eigenstates involved in the 1-loop contribution to neutrino masses. In our notation, this is a \({\rm IV}(1^{*},\emptyset)\) model. which in turn imply the following set of equations for the \(\mathrm{U}(1)_{\mathrm{L}}\) charges of the model: \[-q_{N}+q_{\eta}+1 =0\,, \tag{10}\] \[q_{\sigma}+2\,q_{N} =0\,,\] (11) \[q_{S}+q_{\eta} =0\,,\] (12) \[-q_{\sigma}-q_{S}+q_{\eta} =0\,.
\tag{13}\] This system of linear equations has a unique solution: \[q_{N} =\frac{1}{2}\,, \tag{14}\] \[q_{\eta} =-\frac{1}{2}\,,\] (15) \[q_{\sigma} =-1\,,\] (16) \[q_{S} =\frac{1}{2}\,. \tag{17}\] With this solution, the operators \[\overline{N}^{c}N\quad,\quad\overline{N}\widetilde{H}^{\dagger}\ell_{L}\quad, \quad\left(H^{\dagger}\eta\right)^{2} \tag{18}\] are automatically forbidden due to \(\mathrm{U}(1)_{\mathrm{L}}\) conservation. One should note that if we chose the operator \(\sigma S^{\dagger}\eta\) instead of \(\sigma^{*}S^{\dagger}\eta\), no solution for the resulting system of equations would exist. Indeed, if one replaces \(-q_{\sigma}\) by \(q_{\sigma}\) in Eq. (13), the combination of the resulting equation with Eqs. (11) and (12) leads to \(q_{N}=q_{\eta}\), which is incompatible with Eq. (10). This illustrates why \(\xi(1,\emptyset)\) models are not compatible with our requirements. Having fixed the quantum numbers of all the particles in the model, we proceed to write its Lagrangian. The new Yukawa interactions are given by \[\mathcal{L}_{\mathrm{Y}}=y\,\overline{N}\,\widetilde{\eta}^{\dagger}\,\ell_{L }+\kappa\,\sigma\overline{N}^{c}N+\mathrm{h.c.}\,, \tag{19}\] \begin{table} \begin{tabular}{|c|c||c c c|c|} \hline **Field** & **Generations** & \(\mathrm{SU}(3)_{\mathrm{c}}\) & \(\mathrm{SU}(2)_{\mathrm{L}}\) & \(\mathrm{U}(1)_{\mathrm{Y}}\) & \(\mathrm{U}(1)_{\mathrm{L}}\) \\ \hline \(\ell_{L}\) & 3 & **1** & **2** & -1/2 & 1 \\ \(e_{R}\) & 3 & **1** & **1** & -1 & 1 \\ \(N\) & 3 & **1** & **1** & 0 & \(q_{N}\) \\ \hline \(H\) & 1 & **1** & **2** & 1/2 & 0 \\ \(\eta\) & 1 & **1** & **2** & 1/2 & \(q_{\eta}\) \\ \(\sigma\) & 1 & **1** & **1** & 0 & \(q_{\sigma}\) \\ \(S\) & 1 & **1** & **2** & 1/2 & \(q_{S}\) \\ \hline \end{tabular} \end{table} Table 5: Lepton and scalar particle content and representations under the gauge and global symmetries in an UV extension of the Scotogenic model with one \(\sigma\) field. 
where \(y\) is a general complex \(3\times 3\) matrix and \(\kappa\) is a complex symmetric \(3\times 3\) matrix. The scalar potential of the model can be written as \[\begin{split}\mathcal{V}_{\text{UV}}&=m_{H}^{2}H^{ \dagger}H+m_{S}^{2}S^{\dagger}S+m_{\sigma}^{2}\sigma^{*}\sigma+m_{\eta}^{2}\eta ^{\dagger}\eta+\frac{\lambda_{1}}{2}(H^{\dagger}H)^{2}+\frac{\lambda_{2}}{2}( \eta^{\dagger}\eta)^{2}\\ &+\frac{\lambda_{S}}{2}(S^{\dagger}S)^{2}+\frac{\lambda_{\sigma}} {2}(\sigma^{*}\sigma)^{2}+\lambda_{3}(H^{\dagger}H)(\eta^{\dagger}\eta)+ \lambda_{3}^{S}(H^{\dagger}H)(S^{\dagger}S)\\ &+\lambda_{3}^{\sigma}(H^{\dagger}H)(\sigma^{\dagger}\sigma)+ \lambda_{3}^{\eta S}(\eta^{\dagger}\eta)(S^{\dagger}S)+\lambda_{3}^{\eta \sigma}(\eta^{\dagger}\eta)(\sigma^{*}\sigma)\\ &+\lambda_{3}^{\sigma S}(\sigma^{*}\sigma)(S^{\dagger}S)+\lambda _{4}(H^{\dagger}\eta)(\eta^{\dagger}H)+\lambda_{4}^{HS}(H^{\dagger}S)(S^{ \dagger}H)\\ &+\lambda_{4}^{\eta S}(S^{\dagger}\eta)(\eta^{\dagger}S)+\left[ \beta(H^{\dagger}SH^{\dagger}\eta)+\mu(\sigma^{*}S^{\dagger}\eta)+\text{h.c.} \right]\,.\end{split} \tag{20}\] Here \(\mu\) is a trilinear parameter with dimensions of mass while \(m_{H}^{2}\), \(m_{\eta}^{2}\) and \(m_{\sigma}^{2}\) have dimensions of mass\({}^{2}\). Other Lagrangian terms are allowed by the gauge symmetries of the model but forbidden by \(\text{U}(1)_{\text{L}}\). ### Effective theory We will now assume that \(m_{S}\) is much larger than any other energy scale in the theory. At energies well below \(m_{S}\), all physical processes can be properly described by an effective field theory in which the heavy field \(S\) has been integrated out. We now present this effective theory, obtained after integrating out \(S\) at tree-level. 
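Schematically, the tree-level matching amounts to solving the equation of motion of \(S\), keeping only the terms of Eq. (20) that are at most linear in \(S\), \[S\simeq-\frac{1}{m_{S}^{2}}\left[\beta^{*}\,(\eta^{\dagger}H)\,H+\mu\,\sigma^{*}\eta\right]+\mathcal{O}\left(\frac{1}{m_{S}^{4}}\right)\,,\] and substituting the result back into the potential, \[\mathcal{V}_{\text{IR}}\supset-\frac{1}{m_{S}^{2}}\left|\beta^{*}(\eta^{\dagger}H)H+\mu\,\sigma^{*}\eta\right|^{2}=-\frac{|\beta|^{2}}{m_{S}^{2}}(H^{\dagger}H)(H^{\dagger}\eta)(\eta^{\dagger}H)-\frac{|\mu|^{2}}{m_{S}^{2}}(\sigma^{*}\sigma)(\eta^{\dagger}\eta)-\left[\frac{\beta\mu}{m_{S}^{2}}\,\sigma^{*}(H^{\dagger}\eta)^{2}+\text{h.c.}\right]\,,\] which generates the \(1/m_{S}^{2}\) corrections to the quartic couplings.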
The effective potential at low energies can be written as \[\begin{split}\mathcal{V}_{\text{IR}}&=m_{H}^{2}H^{\dagger}H+m_{\eta}^{2}\eta^{\dagger}\eta+m_{\sigma}^{2}\sigma^{*}\sigma+\frac{\lambda_{1}}{2}(H^{\dagger}H)^{2}+\frac{\lambda_{2}}{2}(\eta^{\dagger}\eta)^{2}+\frac{\lambda_{\sigma}}{2}(\sigma^{*}\sigma)^{2}\\ &+\lambda_{3}(H^{\dagger}H)(\eta^{\dagger}\eta)+\lambda_{3}^{\sigma}(H^{\dagger}H)(\sigma^{*}\sigma)+\left[\lambda_{3}^{\eta\sigma}-\frac{|\mu|^{2}}{m_{S}^{2}}\right](\sigma^{*}\sigma)(\eta^{\dagger}\eta)\\ &+\left[\lambda_{4}-\frac{|\beta|^{2}(H^{\dagger}H)}{m_{S}^{2}}\right](H^{\dagger}\eta)(\eta^{\dagger}H)-\left[\frac{\beta\mu}{m_{S}^{2}}\sigma^{*}(H^{\dagger}\eta)^{2}+\text{h.c.}\right]+\mathcal{O}\left(\frac{1}{m_{S}^{4}}\right)\,.\end{split} \tag{21}\] Assuming that CP is conserved in the scalar sector, the neutral fields \(H^{0}\) and \(\sigma\) can be decomposed as \[H^{0}=\frac{1}{\sqrt{2}}(v_{H}+\phi+iA)\,,\quad\sigma=\frac{1}{\sqrt{2}}(v_{\sigma}+\rho+iJ)\,, \tag{22}\] with \(\frac{v_{H}}{\sqrt{2}}\) and \(\frac{v_{\sigma}}{\sqrt{2}}\) the VEVs of \(H^{0}\) and \(\sigma\), respectively. These VEVs are determined by minimizing the scalar potential in Eq. (21). The resulting tadpole equations are given by \[\frac{d\mathcal{V}_{\text{IR}}}{dH^{0}}\bigg{|}_{(H^{0},\sigma)=\{\frac{v_{H}}{\sqrt{2}},\frac{v_{\sigma}}{\sqrt{2}}\}}=\frac{v_{H}}{\sqrt{2}}\left(m_{H}^{2}+\frac{\lambda_{1}v_{H}^{2}}{2}+\frac{\lambda_{3}^{\sigma}v_{\sigma}^{2}}{2}\right)=0\,, \tag{23}\] \[\frac{d\mathcal{V}_{\text{IR}}}{d\sigma}\bigg{|}_{(H^{0},\sigma)=\{\frac{v_{H}}{\sqrt{2}},\frac{v_{\sigma}}{\sqrt{2}}\}}=\frac{v_{\sigma}}{\sqrt{2}}\left(m_{\sigma}^{2}+\frac{\lambda_{3}^{\sigma}v_{H}^{2}}{2}+\frac{\lambda_{\sigma}v_{\sigma}^{2}}{2}\right)=0\,, \tag{24}\] where we have written only the non-trivial equations, evaluated at the VEVs of the scalar fields. As we see from Eq.
(21), once \(\sigma\) acquires a VEV, the \((H^{\dagger}\eta)^{2}\) operator is generated, with an effective \(\lambda_{5}\) coupling that is naturally suppressed by the mass of the heavy field \(S\), \[\frac{\lambda_{5}}{2}=-\frac{\beta\mu v_{\sigma}}{\sqrt{2}m_{S}^{2}}\ll 1\,. \tag{25}\] This follows from the assumption \(\mu\ll m_{S}\), together with \(v_{\sigma}\ll m_{S}\), which holds by construction. As explained in Sec. 3, this is perfectly valid. However, it poses a theoretical problem since \(\mu\) is a parameter of the UV theory. A model without this issue will be discussed below in Sec. 5. We now proceed to the computation of the scalar spectrum of the model. In the bases \(\{\phi,\rho\}\) for the CP-even states and \(\{A,J\}\) for the CP-odd ones, the squared mass matrices read \[{\cal M}_{R}^{2}=\left(\begin{array}{cc}m_{H}^{2}+\frac{1}{2}\left(3\lambda_{1}v_{H}^{2}+\lambda_{3}^{\sigma}v_{\sigma}^{2}\right)&\lambda_{3}^{\sigma}v_{H}v_{\sigma}\\ \lambda_{3}^{\sigma}v_{H}v_{\sigma}&m_{\sigma}^{2}+\frac{1}{2}\left(\lambda_{3}^{\sigma}v_{H}^{2}+3\lambda_{\sigma}v_{\sigma}^{2}\right)\end{array}\right)\,, \tag{26}\] and \[{\cal M}_{I}^{2}=\left(\begin{array}{cc}m_{H}^{2}+\frac{\lambda_{1}v_{H}^{2}}{2}+\frac{\lambda_{3}^{\sigma}v_{\sigma}^{2}}{2}&0\\ 0&m_{\sigma}^{2}+\frac{\lambda_{3}^{\sigma}v_{H}^{2}}{2}+\frac{\lambda_{\sigma}v_{\sigma}^{2}}{2}\end{array}\right)\,, \tag{27}\] respectively. The above expressions can be reduced using Eqs. (23) and (24). When this is done, the resulting \({\cal M}_{I}^{2}\) becomes identically zero. This implies the existence of two massless Goldstone bosons. One of them (\(A\)) corresponds to the state that is _eaten up_ by the \(Z\) boson and becomes its longitudinal component, while the other (\(J\)) is associated with the spontaneous breaking of the global U(1)\({}_{\rm L}\) symmetry, the so-called majoron. On the other hand, the reduction of \({\cal M}_{R}^{2}\) with Eqs.
(23) and (24) leads to \[{\cal M}_{R}^{2}=\left(\begin{array}{cc}\lambda_{1}v_{H}^{2}&\lambda_{3}^{ \sigma}v_{H}v_{\sigma}\\ \lambda_{3}^{\sigma}v_{H}v_{\sigma}&\lambda_{\sigma}v_{\sigma}^{2}\end{array} \right)\,. \tag{28}\] This matrix can be brought to diagonal form as \(V_{R}^{T}{\cal M}_{R}^{2}V_{R}=\widehat{\cal M}_{R}^{2}={\rm diag}(m_{h}^{2},m_ {\Phi}^{2})\), where \(V_{R}\) is a unitary matrix that can be parametrized as \[V_{R}=\left(\begin{array}{cc}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{array}\right)\,. \tag{29}\] The mixing angle \(\theta\) is given by \[\tan(2\theta)=\frac{2({\cal M}_{R}^{2})_{12}}{({\cal M}_{R}^{2})_{11}-({\cal M }_{R}^{2})_{22}}=\frac{2r\lambda_{3}^{\sigma}}{r^{2}\lambda_{1}-\lambda_{ \sigma}}\approx-2r\frac{\lambda_{3}^{\sigma}}{\lambda_{\sigma}}+{\cal O}(r^{2 })\,, \tag{30}\] with \(r\equiv v_{H}/v_{\sigma}\). For \(v_{\sigma}\sim\) TeV, \(r\ll 1\) and simple approximate expressions can be obtained. The lightest of the two mass eigenstates is the well-known Higgs-like state \(h\), with mass \(m_{h}\approx 125\) GeV, discovered at the LHC. In addition, the model contains the heavy scalar \(\Phi\), with a mass of the order of \(v_{\sigma}\). We focus now on the \(\mathbb{Z}_{2}\)-odd scalars \(\eta^{+}\) and \(\eta^{0}\). The neutral \(\eta^{0}\) field can be decomposed as \[\eta^{0}=\frac{1}{\sqrt{2}}(\eta_{R}+i\eta_{I})\,. 
\tag{31}\] Their masses are given by \[m_{\eta^{+}}^{2} =m_{\eta}^{2}+\frac{v_{H}^{2}}{2}\lambda_{3}^{\rm eff}\,, \tag{32}\] \[m_{\eta_{R}}^{2} =m_{\eta}^{2}+\frac{v_{H}^{2}}{2}\left(\lambda_{3}^{\rm eff}+\lambda_{4}^{\rm eff}-\sqrt{2}\,\frac{\beta\mu v_{\sigma}}{m_{S}^{2}}\right)\,,\] (33) \[m_{\eta_{I}}^{2} =m_{\eta}^{2}+\frac{v_{H}^{2}}{2}\left(\lambda_{3}^{\rm eff}+\lambda_{4}^{\rm eff}+\sqrt{2}\,\frac{\beta\mu v_{\sigma}}{m_{S}^{2}}\right)\,, \tag{34}\] where we have defined \[\lambda_{3}^{\rm eff} \equiv\lambda_{3}+\lambda_{3}^{\eta\sigma}\frac{v_{\sigma}^{2}}{v_{H}^{2}}-\mu^{2}\frac{v_{\sigma}^{2}}{v_{H}^{2}m_{S}^{2}}\,, \tag{35}\] \[\lambda_{4}^{\rm eff} \equiv\lambda_{4}-\frac{\beta^{2}v_{H}^{2}}{2m_{S}^{2}}\,. \tag{36}\] The squared mass difference between \(\eta_{R}\) and \(\eta_{I}\) is given by \[m_{\eta_{R}}^{2}-m_{\eta_{I}}^{2}=-\sqrt{2}\,\frac{\beta\mu v_{\sigma}}{m_{S}^{2}}v_{H}^{2}=\lambda_{5}v_{H}^{2}\,, \tag{37}\] exactly as in the usual Scotogenic model. Finally, the spontaneous breaking of U(1)\({}_{\rm L}\) by the VEV of \(\sigma\) induces a Majorana mass term for the \(N\) singlets, with \(M_{N}=\sqrt{2}\,\kappa\,v_{\sigma}\). This leads to Majorana neutrino masses at 1-loop, as shown in Fig. 2. The \(3\times 3\) neutrino mass matrix is given by the usual Scotogenic formula in Eq. (5), where \(\lambda_{5}\) is the effective coupling in Eq. (25). Due to the additional scalar states, including a massless majoron with couplings to charged leptons, the phenomenology of this model is richer than that of the usual Scotogenic scenario. This will be discussed in Sec. 6. ## 5 A UV extended Scotogenic model with two \(\sigma\) fields We now consider a UV extension of the Scotogenic model with two \(\sigma\) fields. ### Ultraviolet theory We enlarge the Scotogenic particle content with three new particles: the scalar SU(2)\({}_{\rm L}\) singlets \(S\), \(\sigma_{1}\) and \(\sigma_{2}\).
Again, instead of the usual \(\mathbb{Z}_{2}\) Scotogenic parity, a global U(1)\({}_{\rm L}\) lepton number symmetry is introduced. Table 6 shows the scalar and leptonic fields of the model and their representations under the gauge and global symmetries. We consider the 1-loop generation of neutrino masses by the diagram in Fig. 3. In our notation, this is a II(1,2) model. For this mechanism to take place, the operators \[\overline{N}\widetilde{\eta}^{\dagger}\ell_{L}\quad,\quad\sigma_{1}\overline{N}^{c}N\quad,\quad\sigma_{1}H^{\dagger}S\eta\quad,\quad\sigma_{2}H^{\dagger}S^{*}\eta \tag{38}\] must be allowed by the symmetries of the model. This restricts the \(\mathrm{U}(1)_{\mathrm{L}}\) charges of the fields in the model. In particular, one can write the following set of equations for them: \[-q_{N}+q_{\eta}+1 =0\,, \tag{39}\] \[q_{\sigma_{1}}+2\,q_{N} =0\,,\] (40) \[q_{\sigma_{1}}+q_{S}+q_{\eta} =0\,,\] (41) \[q_{\sigma_{2}}-q_{S}+q_{\eta} =0\,. \tag{42}\] \begin{table} \begin{tabular}{|c|c||c c c|c|} \hline **Field** & **Generations** & \(\mathrm{SU}(3)_{\mathrm{c}}\) & \(\mathrm{SU}(2)_{\mathrm{L}}\) & \(\mathrm{U}(1)_{\mathrm{Y}}\) & \(\mathrm{U}(1)_{\mathrm{L}}\) \\ \hline \(\ell_{L}\) & 3 & **1** & **2** & -1/2 & 1 \\ \(e_{R}\) & 3 & **1** & **1** & -1 & 1 \\ \(N\) & 3 & **1** & **1** & 0 & \(q_{N}\) \\ \hline \(H\) & 1 & **1** & **2** & 1/2 & 0 \\ \(\eta\) & 1 & **1** & **2** & 1/2 & \(q_{\eta}\) \\ \(\sigma_{1}\) & 1 & **1** & **1** & 0 & \(q_{\sigma_{1}}\) \\ \(\sigma_{2}\) & 1 & **1** & **1** & 0 & \(q_{\sigma_{2}}\) \\ \(S\) & 1 & **1** & **1** & 0 & \(q_{S}\) \\ \hline \end{tabular} \end{table} Table 6: Lepton and scalar particle content and representations under the gauge and global symmetries in a UV extension of the Scotogenic model with two \(\sigma\) fields. Figure 3: Neutrino mass generation in an extended Scotogenic model with two \(\sigma\) fields.
This Feynman diagram shows the relevant gauge eigenstates involved in the 1-loop contribution to neutrino masses. In our notation, this is a \(\mathrm{II}(1,2)\) model. They can be solved in terms of \(q_{N}\) to obtain \[q_{\eta} =q_{N}-1\,, \tag{43}\] \[q_{\sigma_{1}} =-2\,q_{N}\,,\] (44) \[q_{\sigma_{2}} =2\,,\] (45) \[q_{S} =q_{N}+1\,. \tag{46}\] In addition, we want the operators \[\overline{N}^{c}N\quad,\quad\overline{N}\widetilde{H}^{\dagger}\ell_{L}\quad,\quad\left(H^{\dagger}\eta\right)^{2} \tag{47}\] to be forbidden. In order to forbid the first operator, a Majorana mass term for \(N\), we just require \(q_{N}\neq 0\). The second operator would lead to \(\nu_{L}\)-\(N\) Dirac mass terms and we can forbid it by requiring \(q_{N}\neq 1\). Then, Eq. (43) implies \(q_{\eta}\neq 0\) too. Finally, with these considerations, we choose \[q_{N}=\frac{1}{2}\,, \tag{48}\] which implies \[q_{\eta}=-\frac{1}{2}\quad,\quad q_{S}=\frac{3}{2}\quad,\quad q_{\sigma_{1}}=-1\quad,\quad q_{\sigma_{2}}=2\,. \tag{49}\] Some comments are in order. First, the diagram in Fig. 3 has two different \(\sigma\) singlets attached to the scalar internal line, \(\sigma_{1}\) and \(\sigma_{2}\). In principle, one may wonder why we did not consider the same \(\sigma\) singlet in both vertices as a starting point for constructing our model. That would imply \(q_{S}=0\) and reduce the number of couplings in the model. However, such a construction would lead to an effective operator \((H^{\dagger}\eta)^{2}\sigma^{2}\) after integrating out the \(S\) field. If this operator is allowed by all symmetries of the model, so is the trilinear \((H^{\dagger}\eta)\,\sigma\). We will eventually assume that the \(\sigma\) singlets acquire non-zero VEVs, breaking the original U(1)\({}_{\rm L}\). In the presence of the trilinear \((H^{\dagger}\eta)\,\sigma\), this would induce a tadpole for \(\eta\), hence breaking the \(\mathbb{Z}_{2}\) parity of the Scotogenic model.
This forces us to discard this possibility and consider different \(\sigma_{1}\) and \(\sigma_{2}\) attached to the internal scalar line. It also illustrates why models with \(\sigma_{A}=\sigma_{B}\) are not compatible with our requirements. Furthermore, one may consider a third \(\sigma_{3}\) singlet field coupled to the internal fermion line. While this is possible, we preferred to choose a charge assignment that allows us to identify \(\sigma_{3}\equiv\sigma_{1}\) and reduce the number of fields in the model. Finally, once \(\sigma_{1}\) and \(\sigma_{2}\) acquire non-zero VEVs, the original U(1)\({}_{\rm L}\) symmetry will get broken to one of its \(\mathbb{Z}_{n}\) subgroups. Here \(n\) is the GCD of \(|q_{\sigma_{1}}|\) and \(|q_{\sigma_{2}}|\) once all \(\mathrm{U}(1)_{\mathrm{L}}\) charges have been rescaled to integer values (a rescaling by a factor of 2 in our case, giving \(|q_{\sigma_{1}}|=2\) and \(|q_{\sigma_{2}}|=4\)), hence \(n=2\) and the remnant symmetry is \(\mathbb{Z}_{2}\). Once we know the quantum numbers of all the particles in the model, we can write its Lagrangian. The new Yukawa interactions are given by \[\mathcal{L}_{\rm Y}=y\,\overline{N}\,\widetilde{\eta}^{\dagger}\,\ell_{L}+\kappa\,\sigma_{1}\overline{N}^{c}N+{\rm h.c.}\,, \tag{50}\] where \(y\) is a general complex \(3\times 3\) matrix and \(\kappa\) is a complex symmetric \(3\times 3\) matrix.
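The charge assignment of Eqs. (48)-(49) can be cross-checked with a few lines of exact arithmetic, verifying the \(\mathrm{U}(1)_{\mathrm{L}}\) invariance of the operators of Eqs. (38) and (51) as well as the remnant \(\mathbb{Z}_{2}\) (a sketch; the field labels and the sign conventions in the comments are ours):

```python
from fractions import Fraction
from math import gcd

# U(1)_L charges of Eqs. (48)-(49), plus q(H) = 0 and q(l_L) = 1.
q = {"N": Fraction(1, 2), "eta": Fraction(-1, 2), "S": Fraction(3, 2),
     "sigma1": Fraction(-1), "sigma2": Fraction(2), "H": Fraction(0),
     "lL": Fraction(1)}

# Operators as (field, power) lists; a conjugated field enters with a negative
# power and N-bar contributes -q(N), matching the conventions of Eqs. (39)-(42).
operators = [
    [("N", -1), ("eta", 1), ("lL", 1)],                 # N-bar eta~+ l_L
    [("sigma1", 1), ("N", 2)],                          # sigma_1 N-bar^c N
    [("sigma1", 1), ("H", -1), ("S", 1), ("eta", 1)],   # sigma_1 H+ S eta
    [("sigma2", 1), ("H", -1), ("S", -1), ("eta", 1)],  # sigma_2 H+ S* eta
    [("sigma2", 1), ("sigma1", 2)],                     # mu sigma_2 sigma_1^2
    [("S", 2), ("sigma1", 1), ("sigma2", -1)],          # lambda_0 S S sigma_1 sigma_2*
]
invariant = all(sum(q[f] * p for f, p in ops) == 0 for ops in operators)

# Remnant subgroup after <sigma_1>, <sigma_2> != 0: rescale all charges to
# integers (a factor of 2 here) and take the GCD of the sigma charges.
n = gcd(int(abs(2 * q["sigma1"])), int(abs(2 * q["sigma2"])))
print(invariant, n)  # all operators invariant, remnant Z_2
```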
The scalar potential of the model is given by \[\begin{split}\mathcal{V}_{\rm UV}&=m_{H}^{2}H^{\dagger} H+m_{S}^{2}S^{*}S+m_{\sigma_{i}}^{2}\sigma_{i}^{*}\sigma_{i}+m_{\eta}^{2}\eta^{ \dagger}\eta+\frac{\lambda_{1}}{2}(H^{\dagger}H)^{2}+\frac{\lambda_{2}}{2}( \eta^{\dagger}\eta)^{2}\\ &+\frac{\lambda_{S}}{2}(S^{*}S)^{2}+\frac{\lambda_{\sigma_{i}}}{2 }(\sigma_{i}^{*}\sigma_{i})^{2}+\lambda_{3}(H^{\dagger}H)(\eta^{\dagger}\eta)+ \lambda_{3}^{S}(H^{\dagger}H)(S^{*}S)\\ &+\lambda_{3}^{\sigma_{i}}(H^{\dagger}H)(\sigma_{i}^{*}\sigma_{i} )+\lambda_{3}^{\eta S}(\eta^{\dagger}\eta)(S^{*}S)+\lambda_{3}^{\eta\sigma_{i} }(\eta^{\dagger}\eta)(\sigma_{i}^{*}\sigma_{i})\\ &+\lambda_{3}^{\sigma\sigma}(\sigma_{1}^{*}\sigma_{1})(\sigma_{2} ^{*}\sigma_{2})+\lambda_{3}^{\sigma_{i}S}(\sigma_{i}^{*}\sigma_{i})(S^{*}S)+ \lambda_{4}(H^{\dagger}\eta)(\eta^{\dagger}H)\\ &+\left[\beta_{1}(\sigma_{1}H^{\dagger}S\eta)+\beta_{2}(\sigma_{ 2}H^{\dagger}S^{\dagger}\eta)+\frac{\mu}{\sqrt{2}}(\sigma_{2}\sigma_{1}\sigma_ {1})+\lambda_{0}(SS\sigma_{1}\sigma_{2}^{*})+\text{h.c.}\right]\,,\end{split} \tag{51}\] where we sum over \(i=1,2\). Here \(\mu\) is a trilinear parameter with dimensions of mass while \(m_{H}^{2}\), \(m_{\eta}^{2}\) and \(m_{\sigma_{i}}^{2}\) have dimensions of mass\({}^{2}\). Other Lagrangian terms are allowed by the gauge symmetries of the model but forbidden by \(\text{U}(1)_{\text{L}}\). ### Effective theory In the following we will assume that \(m_{S}\) is much larger than any other energy scale in the model and integrate out the heavy scalar \(S\). 
If we do this at tree-level, the effective scalar potential at low energies can be written as \[\begin{split}\mathcal{V}_{\rm IR}&=m_{H}^{2}(H^{ \dagger}H)+m_{\eta}^{2}(\eta^{\dagger}\eta)+m_{\sigma_{i}}^{2}(\sigma_{i}^{*} \sigma_{i})+\frac{\lambda_{1}}{2}(H^{\dagger}H)^{2}+\frac{\lambda_{2}}{2}( \eta^{\dagger}\eta)^{2}+\frac{\lambda_{\sigma_{i}}}{2}(\sigma_{i}^{*}\sigma_{i })^{2}\\ &+\lambda_{3}(H^{\dagger}H)(\eta^{\dagger}\eta)+\lambda_{3}^{ \sigma_{i}}(H^{\dagger}H)(\sigma_{i}^{*}\sigma_{i})+\lambda_{3}^{\eta\sigma_{i }}(\eta^{\dagger}\eta)(\sigma_{i}^{*}\sigma_{i})+\lambda_{3}^{\sigma\sigma}( \sigma_{1}^{*}\sigma_{1})(\sigma_{2}^{*}\sigma_{2})\\ &+\left[\lambda_{4}-\frac{|\beta_{i}|^{2}}{m_{S}^{2}}(\sigma_{i}^ {*}\sigma_{i})\right](H^{\dagger}\eta)(\eta^{\dagger}H)\\ &+\left[\frac{\mu}{\sqrt{2}}(\sigma_{2}\sigma_{1}\sigma_{1})- \frac{\beta_{1}\beta_{2}}{m_{S}^{2}}\sigma_{1}\sigma_{2}(H^{\dagger}\eta)^{2} +\text{h.c.}\right]+\mathcal{O}\left(\frac{1}{m_{S}^{4}}\right)\,.\end{split} \tag{52}\] Now, we decompose the neutral fields \(H^{0}\) and \(\sigma_{1,2}\) as \[H^{0}=\frac{1}{\sqrt{2}}(v_{H}+\phi+i\,A)\,,\quad\sigma_{i}=\frac{1}{\sqrt{2}} (v_{\sigma_{i}}+\rho_{i}+i\,J_{i})\,, \tag{53}\] where we defined \(\frac{v_{H}}{\sqrt{2}}\) and \(\frac{v_{\sigma_{i}}}{\sqrt{2}}\) as the VEVs of the corresponding fields. After this, we can compute the tadpole equation resulting from the effective potential in Eq. (52), evaluated at the VEVs of each scalar field. 
The non-trivial tadpole equations are \[\frac{d\mathcal{V}_{\rm IR}}{dH^{0}}\bigg{|}_{\langle H^{0}, \sigma_{i}\rangle=\{\frac{v_{H}}{\sqrt{2}},\frac{v_{\sigma_{i}}}{\sqrt{2}}\}} =\frac{v_{H}}{\sqrt{2}}\left(m_{H}^{2}+\lambda_{1}\frac{v_{H}^{2}} {2}+\lambda_{3}^{\sigma_{1}}\frac{v_{\sigma_{1}}^{2}}{2}+\lambda_{3}^{\sigma_ {2}}\frac{v_{\sigma_{2}}^{2}}{2}\right)=0, \tag{54}\] \[\frac{d\mathcal{V}_{\rm IR}}{d\sigma_{1}}\bigg{|}_{\langle H^{0}, \sigma_{i}\rangle=\{\frac{v_{H}}{\sqrt{2}},\frac{v_{\sigma_{i}}}{\sqrt{2}}\}} =\frac{v_{\sigma_{1}}}{\sqrt{2}}\left(m_{\sigma_{1}}^{2}+\mu\,v_{ \sigma_{2}}+\lambda_{3}^{\sigma_{1}}\frac{v_{H}^{2}}{2}+\lambda_{\sigma_{1}} \frac{v_{\sigma_{1}}^{2}}{2}+\lambda_{3}^{\sigma\sigma}\frac{v_{\sigma_{2}}^{2} }{2}\right)=0,\] (55) \[\frac{d\mathcal{V}_{\rm IR}}{d\sigma_{2}}\bigg{|}_{\langle H^{0}, \sigma_{i}\rangle=\{\frac{v_{H}}{\sqrt{2}},\frac{v_{\sigma_{i}}}{\sqrt{2}}\}} =\frac{v_{\sigma_{2}}}{\sqrt{2}}\left(m_{\sigma_{2}}^{2}+\mu\frac{v_{ \sigma_{1}}^{2}}{2v_{\sigma_{2}}}+\lambda_{3}^{\sigma_{2}}\frac{v_{H}^{2}}{2}+ \lambda_{\sigma_{2}}\frac{v_{\sigma_{2}}^{2}}{2}+\lambda_{3}^{\sigma\sigma} \frac{v_{\sigma_{1}}^{2}}{2}\right)=0. \tag{56}\] As already explained, as a result of \(\sigma_{i}\) acquiring a VEV, lepton number gets spontaneously broken, leaving a discrete \(\mathbb{Z}_{2}\) symmetry, under which all the particles in the model are even except for \(N\) and \(\eta\), which are odd. Another important consequence of the spontaneous breaking of lepton number is the generation of the \((H^{\dagger}\eta)^{2}\) operator, with a naturally suppressed \(\lambda_{5}\) coupling due to the \(1/m_{S}^{2}\) factor. One finds \[\frac{\lambda_{5}}{2}=-\frac{v_{\sigma_{1}}v_{\sigma_{2}}\beta_{1}\beta_{2}}{2 m_{S}^{2}}\ll 1\,, \tag{57}\] where \(\beta_{i}\) are dimensionless parameters of the UV theory and \(v_{\sigma_{i}}\ll m_{S}\) by construction. 
This expression clearly corresponds to a \(\mathrm{II}(1,2)\) model, following the classification of Sec. 3. We now consider the scalar spectrum of the model. We will assume that CP is conserved in the scalar sector, just for the sake of simplicity. In this case, the spectrum contains three CP-even and three CP-odd gauge eigenstates. In the bases \(\{\phi,\rho_{1},\rho_{2}\}\) and \(\{A,J_{1},J_{2}\}\), their mass matrices are given by \[\mathcal{M}_{R}^{2}=\left(\begin{array}{ccc}\lambda_{1}v_{H}^{2}&\lambda_{3}^{\sigma_{1}}v_{H}v_{\sigma_{1}}&\lambda_{3}^{\sigma_{2}}v_{H}v_{\sigma_{2}}\\ \lambda_{3}^{\sigma_{1}}v_{H}v_{\sigma_{1}}&\lambda_{\sigma_{1}}v_{\sigma_{1}}^{2}&v_{\sigma_{1}}(\mu+\lambda_{3}^{\sigma\sigma}v_{\sigma_{2}})\\ \lambda_{3}^{\sigma_{2}}v_{H}v_{\sigma_{2}}&v_{\sigma_{1}}(\mu+\lambda_{3}^{\sigma\sigma}v_{\sigma_{2}})&\lambda_{\sigma_{2}}v_{\sigma_{2}}^{2}-\frac{\mu v_{\sigma_{1}}^{2}}{2v_{\sigma_{2}}}\end{array}\right) \tag{58}\] and \[\mathcal{M}_{I}^{2}=\left(\begin{array}{ccc}0&0&0\\ 0&-2\mu v_{\sigma_{2}}&-\mu v_{\sigma_{1}}\\ 0&-\mu v_{\sigma_{1}}&-\frac{\mu v_{\sigma_{1}}^{2}}{2v_{\sigma_{2}}}\end{array}\right)\,, \tag{59}\] respectively. The tadpole equations (54)-(56) were used in the derivation of Eqs. (58) and (59). The CP-even and CP-odd physical mass eigenstates can be written as linear combinations of \(\{\phi,\rho_{1},\rho_{2}\}\) and \(\{A,J_{1},J_{2}\}\), respectively, obtained after the diagonalization of the matrices \(\mathcal{M}_{R}^{2}\) and \(\mathcal{M}_{I}^{2}\). Out of the three CP-even mass eigenstates, one can be identified with the Higgs boson, with mass \(m_{h}\simeq 125\) GeV, discovered at the LHC. In addition, two massive CP-even scalar fields exist. Concerning the CP-odd mass eigenstates, their mass matrix in Eq.
(59) can be readily diagonalized as \(V_{I}^{T}\,\mathcal{M}_{I}^{2}\,V_{I}=\widehat{\mathcal{M}}_{I}^{2}\), where \[V_{I}=\left(\begin{array}{ccc}1&0&0\\ 0&\cos\theta&-\sin\theta\\ 0&\sin\theta&\cos\theta\end{array}\right) \tag{60}\] is a unitary matrix and \(\widehat{\mathcal{M}}_{I}^{2}\) is a diagonal matrix. One obtains \[\widehat{\mathcal{M}}_{I}^{2}=\left(\begin{array}{ccc}0&0&0\\ 0&0&0\\ 0&0&-\frac{\mu(v_{\sigma_{1}}^{2}+4v_{\sigma_{2}}^{2})}{2v_{\sigma_{2}}}\end{array} \right)\,, \tag{61}\] thus leading to two massless pseudoscalar bosons. The first one is the Goldstone boson that becomes the longitudinal component of the \(Z\) boson (\(A\)), while the second one (a linear combination of fields \(J_{1}\) and \(J_{2}\)) is associated to the spontaneous breaking of \(\mathrm{U}(1)_{\mathrm{L}}\) and is the so-called majoron, denoted as \(J\). The \(J_{1}-J_{2}\) mixing angle is given by \[\tan(2\theta)=\frac{2\,(\mathcal{M}_{I}^{2})_{23}}{(\mathcal{M}_{I}^{2})_{22}-( \mathcal{M}_{I}^{2})_{33}}=\frac{4v_{\sigma_{1}}v_{\sigma_{2}}}{4v_{\sigma_{2} }^{2}-v_{\sigma_{1}}^{2}}\,. \tag{62}\] We finally turn our attention to the \(\mathbb{Z}_{2}\)-odd scalars and decompose the neutral field \(\eta^{0}\) as \[\eta^{0}=\frac{1}{\sqrt{2}}(\eta_{R}+i\,\eta_{I})\,. 
\tag{63}\] The masses of the charged \(\eta^{+}\) and neutral \(\eta_{R,I}\) fields are given by \[m_{\eta^{+}}^{2} =m_{\eta}^{2}+\frac{v_{H}^{2}}{2}\lambda_{3}^{\mathrm{eff}}\,, \tag{64}\] \[m_{\eta_{R}}^{2} =m_{\eta}^{2}+\frac{v_{H}^{2}}{2}\left(\lambda_{3}^{\mathrm{eff}}+\lambda_{4}^{\mathrm{eff}}-\frac{\beta_{1}\beta_{2}v_{\sigma_{1}}v_{\sigma_{2}}}{m_{S}^{2}}\right)\,,\] (65) \[m_{\eta_{I}}^{2} =m_{\eta}^{2}+\frac{v_{H}^{2}}{2}\left(\lambda_{3}^{\mathrm{eff}}+\lambda_{4}^{\mathrm{eff}}+\frac{\beta_{1}\beta_{2}v_{\sigma_{1}}v_{\sigma_{2}}}{m_{S}^{2}}\right)\,, \tag{66}\] where we have defined \[\lambda_{3}^{\mathrm{eff}} \equiv\lambda_{3}+\lambda_{3}^{\eta\sigma_{1}}\frac{v_{\sigma_{1}}^{2}}{v_{H}^{2}}+\lambda_{3}^{\eta\sigma_{2}}\frac{v_{\sigma_{2}}^{2}}{v_{H}^{2}}\,, \tag{67}\] \[\lambda_{4}^{\mathrm{eff}} \equiv\lambda_{4}-\frac{\beta_{1}^{2}v_{\sigma_{1}}^{2}}{2m_{S}^{2}}-\frac{\beta_{2}^{2}v_{\sigma_{2}}^{2}}{2m_{S}^{2}}\,. \tag{68}\] As in the Scotogenic model, the squared mass difference between \(\eta_{R}\) and \(\eta_{I}\) is proportional to the \(\lambda_{5}\) coupling: \[m_{\eta_{R}}^{2}-m_{\eta_{I}}^{2}=-\frac{v_{\sigma_{1}}v_{\sigma_{2}}\beta_{1}\beta_{2}}{m_{S}^{2}}v_{H}^{2}=\lambda_{5}v_{H}^{2}\,. \tag{69}\] Finally, the breaking of \(\mathrm{U}(1)_{\mathrm{L}}\) also induces a Majorana mass term for the \(N\) singlets, with \(M_{N}=\sqrt{2}\,\kappa\,v_{\sigma_{1}}\). This leads to Majorana neutrino masses at 1-loop, as shown in Fig. 3. The resulting neutrino mass matrix is given by Eq. (5), with the effective \(\lambda_{5}\) of Eq. (57). Furthermore, contrary to the minimal Scotogenic model, this UV extension induces a 1-loop interaction between the majoron and a pair of charged leptons. This enriches the phenomenology of the model, as we discuss in the next section. ## 6 Phenomenology All UV scenarios discussed in our classification of Sec. 3 and illustrated with the two examples of Secs. 4 and 5 share some common features.
They are characterized at low energies by a Scotogenic model extended with a massless pseudoscalar, the majoron \(J\), and one or several massive scalars and pseudoscalars. While some phenomenological implications may be specific to particular models, there are also some general expectations that we may highlight. ### Majoron coupling to charged leptons The presence of a massless majoron dramatically affects the phenomenology of this class of models. In fact, models including a majoron are strongly constrained by a variety of experimental limits, such as those originating from the majoron coupling to a pair of charged leptons. The relevance of these limits depends on the flavor structure of the couplings [14], which necessarily depends on the specific model. Stringent constraints exist for both flavor-conserving and flavor-violating couplings. Let us write the majoron interaction with charged leptons as [15], \[\mathcal{L}_{\ell\ell J}=J\,\bar{\ell}_{\beta}\left(S_{L}^{\beta\alpha}\,P_{L}+S_{R}^{\beta\alpha}\,P_{R}\right)\ell_{\alpha}+\text{h.c.}\,. \tag{70}\] Here \(\ell_{\alpha,\beta}\) are the charged leptons with flavors \(\alpha\) and \(\beta\), while \(P_{L,R}\) are the usual chiral projectors. We consider all flavor combinations for the \(S_{L,R}\) couplings: \(\beta\alpha=\{ee,\mu\mu,\tau\tau,e\mu,e\tau,\mu\tau\}\). Due to the pseudoscalar nature of majorons, the diagonal \(S^{\beta\beta}=S_{L}^{\beta\beta}+S_{R}^{\beta\beta*}\) couplings are purely imaginary. They receive strong constraints from astrophysical observations, due to the cooling effects induced by the majoron in dense astrophysical media. Flavor off-diagonal couplings are constrained by null searches for lepton flavor violation in processes involving charged leptons. In particular, searches for \(\ell_{\alpha}\to\ell_{\beta}\,J\) can be used to set bounds on the combinations \[|S^{\beta\alpha}|=\left(\left|S_{L}^{\beta\alpha}\right|^{2}+\left|S_{R}^{\beta\alpha}\right|^{2}\right)^{1/2}\,.
\tag{71}\] A compilation of the current limits on the majoron couplings to charged leptons can be found in Table 7. \begin{table} \begin{tabular}{|c|c|c|} \hline **Coupling** & **Upper limit** & **References** \\ \hline \(\text{Im}\,S^{ee}\) & \(2.1\times 10^{-13}\) & [16] \\ \(\text{Im}\,S^{\mu\mu}\) & \(2.1\times 10^{-9}\) & [17] \\ \hline \(|S^{e\mu}|\) & \(5.3\times 10^{-11}\) & [15] \\ \(|S^{e\tau}|\) & \(5.9\times 10^{-7}\) & [15] \\ \(|S^{\mu\tau}|\) & \(7.6\times 10^{-7}\) & [15] \\ \hline \end{tabular} \end{table} Table 7: Current limits on the majoron couplings to charged leptons. The limit on \(\text{Im}\,S^{ee}\) is at 90% C.L. [16]. The limit on \(\text{Im}\,S^{\mu\mu}\) has been obtained by performing a simulation of the supernova SN1987A [17]. An alternative and more stringent limit \(\text{Im}\,S^{\mu\mu}<2.1\times 10^{-10}\) can be derived with more aggressive assumptions in the simulation. While in some scenarios the majoron couplings to charged leptons appear at tree-level [18, 19], in many cases the leading order contribution is induced at the 1-loop level. For instance, this is the case of the popular type-I seesaw with spontaneous lepton number violation [20, 9, 21]. Similarly, in the Scotogenic scenarios discussed in this paper, the majoron coupling to charged leptons is also induced at 1-loop [8, 22] by the Feynman diagram in Fig. 4. Here \(g_{JNN}\) is the \(J-N-N\) coupling, which depends on the specific model. It is given by \[g_{JNN}=\left\{\begin{array}{rl}i\frac{\kappa}{\sqrt{2}}&\mbox{in models with one $\sigma$ singlet}\\ i\frac{\kappa}{\sqrt{2}}\,\cos\theta&\mbox{in models with two $\sigma$ singlets}\end{array}\right.\;\;, \tag{72}\] where the mixing angle \(\theta\) is defined in Eq. (60). The prefactor \(\cos\theta\) in models with two \(\sigma\) singlets is due to the fact that only \(\sigma_{1}\) has a coupling to \(\overline{N}^{c}N\). No other contributions to the majoron coupling to charged leptons exist at 1-loop.
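As a side remark (our observation, not stated in the paper), the mixing-angle relation of Eq. (62) is solved by the closed form \(\tan\theta=v_{\sigma_{1}}/(2v_{\sigma_{2}})\), as follows from \(\tan(2\theta)=2\tan\theta/(1-\tan^{2}\theta)\). A minimal numerical check, with arbitrary illustrative VEV values:

```python
import math

def tan_2theta(v1, v2):
    """Right-hand side of Eq. (62) in terms of the singlet VEVs."""
    return 4.0 * v1 * v2 / (4.0 * v2**2 - v1**2)

# Candidate closed form tan(theta) = v_sigma1 / (2 v_sigma2):
v1, v2 = 1.7, 0.6   # arbitrary illustrative VEV values
theta = math.atan(v1 / (2.0 * v2))
assert math.isclose(math.tan(2.0 * theta), tan_2theta(v1, v2), rel_tol=1e-9)

# The g_JNN suppression factor in the two-singlet case is then cos(theta):
assert math.isclose(math.cos(theta), 2.0 * v2 / math.hypot(v1, 2.0 * v2), rel_tol=1e-9)
```

In this form the prefactor reads \(\cos\theta=2v_{\sigma_{2}}/\sqrt{v_{\sigma_{1}}^{2}+4v_{\sigma_{2}}^{2}}\).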
One may wonder about a Feynman diagram with two scalar lines in the loop, induced by a \(J\,\eta^{+}\eta^{-}\) coupling. However, this contribution vanishes exactly. The reason is the pseudoscalar nature of the majoron. The \(J\bar{\ell}_{\alpha}\ell_{\alpha}\) vertex must be proportional to \(\gamma_{5}\), but the Lorentz structure of this contribution does not generate such pseudoscalar coupling. 5 Also, diagrams with gauge bosons vanish due to the pure singlet nature of \(N\). Therefore, one can find the \(S_{L,R}\) couplings introduced in Eq. (70) by direct computation of the diagram in Fig. 4. The result can be written as [8, 22] Footnote 5: We also note that the \(J\,\eta^{+}\eta^{-}\) coupling is absent in many models, since Lagrangian terms like \(\sigma|\eta|^{2}\) or \(\sigma^{2}|\eta|^{2}\) are forbidden by lepton number. Only in models with two \(\sigma\) fields one may have a term of the form \(\sigma_{1}\sigma_{2}|\eta|^{2}\) (when \(q_{\sigma_{1}}=-q_{\sigma_{2}}\)) leading to a \(J\,\eta^{+}\eta^{-}\) interaction vertex after symmetry breaking. However, as explained in the text, even when this term is present, the associated 1-loop contribution to the majoron coupling to a pair of charged leptons vanishes exactly due to the pseudoscalar nature of the majoron. \[S_{L}^{\beta\alpha} =-\frac{m_{\ell_{\beta}}}{8\pi^{2}}\left(y^{\dagger}g_{JNN}\, \Gamma\,y\right)_{\beta\alpha}\,, \tag{73}\] \[S_{R}^{\beta\alpha} =\frac{m_{\ell_{\alpha}}}{8\pi^{2}}\left(y^{\dagger}g_{JNN}\, \Gamma\,y\right)_{\beta\alpha}\,, \tag{74}\] for the non-diagonal couplings and \[S^{\beta\beta} =-\frac{m_{\ell_{\beta}}}{8\pi^{2}}\left(y^{\dagger}g_{JNN}\, \Gamma\,y\right)_{\beta\beta}\,, \tag{75}\] for the diagonal ones. 
Here \(m_{\ell_{\beta}}=\{m_{e},m_{\mu},m_{\tau}\}\) and we have defined \[\Gamma_{mn}=\frac{M_{N_{n}}}{\left(M_{N_{n}}^{2}-m_{\eta^{+}}^{2}\right)^{2}} \left(M_{N_{n}}^{2}-m_{\eta^{+}}^{2}+m_{\eta^{+}}^{2}\,\log\frac{m_{\eta^{+}}^ {2}}{M_{N_{n}}^{2}}\right)\delta_{mn}\,. \tag{76}\] Figure 4: 1-loop generation of the majoron coupling to a pair of charged leptons in the Scotogenic scenarios discussed in this work. We can now study how the bounds on these couplings restrict the parameter space of the models considered in our classification. In particular, in the following we focus on the 2-body decay \(\mu\to eJ\), for which \[\text{BR}\left(\mu\to eJ\right)=\frac{m_{\mu}}{32\,\pi\,\Gamma_{\mu}}\left(|S_{L} ^{e\mu}|^{2}+|S_{R}^{e\mu}|^{2}\right)\,, \tag{77}\] where \(\Gamma_{\mu}\approx 3\times 10^{-19}\) GeV is the total decay width of the muon. We used a Casas-Ibarra parametrization [23] properly adapted to the Scotogenic model [24, 25, 26] and the best-fit values obtained in the global fit [27] to neutrino oscillation data in order to express the Yukawa matrix \(y\) in terms of experimentally measured quantities. We assumed that the three singlet fermions are degenerate, that is \(M_{N_{1}}=M_{N_{2}}=M_{N_{3}}=M_{N}\) and we fixed \(\lambda_{5}=5\times 10^{-8}\). Notice that lower values of this parameter would imply larger values of the Yukawas, thus further restricting the parameter space of the model. It also proves convenient to define \[r_{\eta}=\frac{m_{0}}{m_{\eta^{+}}}\,. \tag{78}\] Our results are shown in Fig. 5. On the left-hand side we fixed the coupling \(g_{JNN}\) to \(10^{-1}\) (blue), and to \(10^{-2}\) (pink), and we considered \(r_{\eta}=1\) in both scenarios. The colored regions correspond to regions allowed by the experimental bound on the \(\mu\to eJ\) decay, which implies \(\text{BR}\left(\mu\to eJ\right)<10^{-5}\)[18]. As expected, the larger the \(J-N-N\) coupling is, the smaller the allowed region of the parameter space becomes. 
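As a numerical cross-check (illustrative, not part of the paper), the \(\mu\to eJ\) bound used here is consistent with the \(|S^{e\mu}|\) limit quoted in Table 7: plugging \(|S^{e\mu}|=5.3\times 10^{-11}\) into Eq. (77) returns a branching ratio of about \(10^{-5}\).

```python
import math

M_MU = 0.10566       # muon mass [GeV]
GAMMA_MU = 3.0e-19   # total muon decay width [GeV], value quoted in the text

def br_mu_to_eJ(s_emu):
    """Eq. (77) with |S^{e mu}|^2 = |S_L^{e mu}|^2 + |S_R^{e mu}|^2."""
    return M_MU / (32.0 * math.pi * GAMMA_MU) * s_emu**2

# The Table 7 limit |S^{e mu}| < 5.3e-11 should map onto the experimental
# bound BR(mu -> e J) < 1e-5 used in the text:
br = br_mu_to_eJ(5.3e-11)
assert 0.9e-5 < br < 1.1e-5
```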
Figure 5: Contours of \(\text{BR}\left(\mu\to eJ\right)\) in the \((M_{N},m_{\eta^{+}})\) plane. The colored regions correspond to the regions allowed by the current experimental bound on the branching ratio. On the left, \(g_{JNN}\) has been fixed to \(10^{-1}\) (blue) and to \(10^{-2}\) (pink), while \(r_{\eta}=1\) has been used. On the right, the coupling \(g_{JNN}\) was not fixed and three different values of the \(r_{\eta}\) ratio have been considered, \(0.1\) (pink), \(1\) (blue) and \(2\) (green). We also find that light Scotogenic states can be made compatible with the \(\mu\to eJ\) bound. This can be easily understood by inspecting the non-trivial relation between the masses \(m_{\eta^{+}}\) and \(M_{N}\) and the Yukawa couplings \(y\). Under the assumptions mentioned above one finds \[S_{L,R}\propto g_{JNN}\Gamma_{ii}\left(y^{\dagger}y\right)_{12}\,, \tag{79}\] where \(\Gamma_{ii}\) is any of the diagonal entries of \(\Gamma\), given by \[\Gamma_{ii}\propto M_{N}\frac{M_{N}^{2}-m_{\eta^{+}}^{2}+m_{\eta^{+}}^{2}\,\log\frac{m_{\eta^{+}}^{2}}{M_{N}^{2}}}{\left(M_{N}^{2}-m_{\eta^{+}}^{2}\right)^{2}}\,. \tag{80}\] Eq. (5) implies that the Yukawa product \(\left(y^{\dagger}y\right)_{12}\) is proportional to \[\left(y^{\dagger}y\right)_{12}\propto\frac{1}{M_{N}}\frac{\left(M_{N}^{2}-m_{0}^{2}\right)^{2}}{M_{N}^{2}-m_{0}^{2}+M_{N}^{2}\,\log\frac{m_{0}^{2}}{M_{N}^{2}}}\,. \tag{81}\] Therefore, in the limit \(r_{\eta}=1\) one finds \[S_{L,R}\propto g_{JNN}\frac{M_{N}^{2}-m_{\eta^{+}}^{2}+m_{\eta^{+}}^{2}\log\left(\frac{m_{\eta^{+}}^{2}}{M_{N}^{2}}\right)}{M_{N}^{2}-m_{\eta^{+}}^{2}+M_{N}^{2}\log\left(\frac{m_{\eta^{+}}^{2}}{M_{N}^{2}}\right)}\,.
\tag{82}\] For a fixed \(g_{JNN}\) value two possibilities arise: (i) if we fix \(m_{\eta^{+}}\), the \(\Gamma_{ii}\)\(\left(y^{\dagger}y\right)_{12}\) combination decreases if \(M_{N}\) increases, and (ii) if we fix \(M_{N}\), the \(\Gamma_{ii}\)\(\left(y^{\dagger}y\right)_{12}\) combination increases if \(m_{\eta^{+}}\) increases. Essentially, the involved couplings strongly depend on \(m_{\eta^{+}}\) and \(M_{N}\) and this dependence may lead to an apparent non-decoupling behavior that explains the results for the \(\mu\to eJ\) branching ratio observed in Fig. 5. Finally, the right-hand side of this figure provides complementary information. Here we considered \(g_{JNN}=i\,\frac{\kappa}{\sqrt{2}}=i\frac{M_{N}}{2v_{\sigma}}\) and fixed \(v_{\sigma}=5\) TeV. Since the \(g_{JNN}\) coupling grows with \(M_{N}\), for each \(m_{\eta^{+}}\) there is a maximum value of \(M_{N}\) for which \(\mbox{BR}\left(\mu\to eJ\right)<10^{-5}\). This can be clearly seen in our results. ### Collider signatures Since the spontaneous breaking of U(1)\({}_{\rm L}\) requires the introduction of additional scalar multiplets, all models in our classification have extended scalar sectors containing several states besides the ones in the Scotogenic model. This can be used to probe them at colliders. One of the CP-even scalars, presumably the lightest, is to be identified with the 125 GeV state discovered at the LHC. The production cross-section and decay rates of this state, denoted generally as \(h\), must agree with the values measured by the ATLAS and CMS collaborations. Since these are very close to those predicted for a pure SM Higgs, \(h\approx\mbox{Re}(H^{0})\) is generally required. In particular, mixings with the \(\sigma\) states are strongly constrained, since they would affect its decay rates in a twofold way. First, the \(\sigma\) states do not couple to the SM gauge bosons or to quarks. 
Thus, any mixing would induce a universal reduction of the \(h\) partial decay widths into these states. And second, \(h\) can have additional decay modes. It can decay invisibly to a pair of singlet fermions (\(h\to N_{1}N_{1}\)) or to a pair of majorons (\(h\to JJ\)). The former can only take place if \(m_{N_{1}}\leq m_{h}/2\). In contrast, since the majoron is massless, the latter is always kinematically available. We can write the interaction Lagrangian of \(h\) with a pair of majorons as \(\mathcal{L}_{hJJ}=\frac{1}{2}\,g_{hJJ}\,h\,J^{2}\), where \(g_{hJJ}\) is a dimensionful coupling that depends on the specific model. This interaction induces the invisible decay \(h\to JJ\), with the decay width given by \[\Gamma(h\to JJ)=\frac{g_{hJJ}^{2}}{32\,\pi\,m_{h}}\,. \tag{83}\] If we assume a total Higgs decay width in agreement with the SM expectation, \(\Gamma_{h}\approx\Gamma_{h}^{\rm SM}=4.1\) MeV [28], the bound on the invisible Higgs branching ratio \({\rm BR}(h\to JJ)<0.19\) at 95% C.L. [29] implies \(g_{hJJ}<3.1\) GeV. This translates into constraints on the parameters of the scalar potential of the model, which are encoded in \(g_{hJJ}\). For instance, in the model discussed in [8], this implies that the coefficient of the \((H^{\dagger}H)(\sigma^{*}\sigma)\) operator must be \(\lesssim 10^{-2}\). We note, however, that stronger constraints can be derived by combining invisible and visible channels, as recently pointed out in [30]. Finally, all models in our classification also contain additional heavy states. They can also be searched for at colliders. Their production cross-sections and decay modes strongly depend on the specific realization of our setup and, more specifically, on their gauge composition. If they have sizable doublet components, they can in principle be produced at high rates at the LHC via Drell-Yan processes.
In contrast, heavy scalars with a dominant component in the singlet direction have very suppressed production cross-sections at the LHC. Due to the constraints discussed above, which imply suppressed mixing between the SM Higgs doublet and the \(\sigma\) states, this is the most likely scenario in all models discussed in our classification. ### Dark matter In all UV models studied in this paper, a remnant \(\mathbb{Z}_{2}\) symmetry is obtained as a result of the spontaneous breaking of lepton number. This is the Scotogenic \(\mathbb{Z}_{2}\) parity, under which only the usual Scotogenic states \(N\) and \(\eta\) are charged. The conservation of \(\mathbb{Z}_{2}\) implies that the lightest of them is completely stable and, in principle, a valid DM candidate. Both options have been widely studied in the literature. In the case of a scalar candidate, the DM phenomenology resembles that of the Inert Doublet model [31, 32, 33, 34, 35], with the DM production in the early Universe set by gauge interactions. In contrast, the case of a fermion candidate typically requires large Yukawa couplings. This leads to tension with bounds from lepton flavor violation [24], although the observed DM relic density can be achieved [36, 37, 38, 39, 40]. The low energy theories resulting from our UV extended models do not correspond _exactly_ to the original Scotogenic model. As explained above and illustrated in Secs. 4 and 5, additional scalar states are present: the massless majoron and one or several massive scalars. These new degrees of freedom couple to the \(\mathbb{Z}_{2}\)-odd states and may affect the resulting DM phenomenology, which may have some differences with respect to the one in the original Scotogenic scenario. This has recently been studied in [41, 42] for the case of fermion DM. The main conclusion from these works is that the new scalar states open up new regions in parameter space in which the DM relic density can match the observed value. 
In particular, annihilations become very efficient when the mass of the DM candidate, \(m_{N_{1}}\), is about half of the mass of a new scalar state. This implies that one can find the correct DM abundance for any value of \(m_{N_{1}}\) without resorting to coannihilations, in contrast to the original Scotogenic model. These models are also expected to have a rich phenomenology at direct and indirect detection experiments [42]. ## 7 Summary and discussion The Scotogenic model is a very popular scenario for neutrino masses and dark matter. In this work we have considered extensions of this scenario that naturally explain the smallness of the quartic \(\lambda_{5}\) coupling and the origin of the Scotogenic \(\mathbb{Z}_{2}\) parity. This is achieved in UV extensions including a conserved global lepton number symmetry, spontaneously broken by the VEVs of one or several scalar singlets, and a new heavy state that suppresses all lepton number violating effects at low energies. We explored all possible models with these assumptions and found 50 variations. They are all characterized at low energies by the presence of a massless Goldstone boson, the majoron, as well as other massive scalars besides the usual Scotogenic states. Two specific example models are discussed in detail in order to illustrate the basic ingredients of our setup. In these two models, as well as in all the variants in our classification, a rich phenomenology is expected, with potential signatures in collider and lepton flavor violating searches, and implications for dark matter. Out of the 50 models revealed by our analysis, only one had been previously studied in the literature, namely [8]. This illustrates the vast model space beyond the original Scotogenic model yet to be explored. In fact, there are many variations of the fundamental setup that keep all the positive features and include additional ingredients. 
While many of these modified Scotogenic scenarios may contain unnecessary or redundant ingredients, others may offer novel ways to address open questions in current particle physics [43]. This is the main motivation behind the classification presented in this work. There are several ways in which our analysis can be extended. First of all, we have considered UV theories that realize the \(\lambda_{5}\) coupling at tree-level. In this case, the only source of suppression is given by the large energy scale \(m_{S}\), assumed to lie well above the electroweak scale. Alternatively, the \(\lambda_{5}\) coupling can also be realized at loop order, as recently explored in [44]. This possibility leads to many novel extensions of the Scotogenic setup with, at least potentially, new phenomenological expectations. Another way in which our analysis can be extended is by considering a local lepton number symmetry. In this case, the massless majoron that was characteristic in our setup would be replaced by a heavy \(Z^{\prime}\) boson, with a dramatic impact on the low-energy phenomenology. However, we note that this direction requires non-trivial extensions of the fermion particle content in order to cancel out the usual triangle gauge anomalies. Therefore, a general classification of all possible gauge models becomes more cumbersome, although interesting too. Finally, variations with non-universal lepton charges for the \(N\) fermions or featuring alternative numbers of generations for the Scotogenic states can be explored as well. ## Acknowledgements The authors are grateful to Julio Leite for enlightening discussions, in particular for drawing their attention to topology V. Work supported by the Spanish grants PID2020-113775GB-I00 (AEI/10.13039/501100011033) and CIPROM/2021/054 (Generalitat Valenciana). The work of PE is supported by the FPI grant PRE2018-084599. AV acknowledges financial support from MINECO through the Ramon y Cajal contract RYC2018-025795-I. 
DPS would like to thank the AHEP group for the hospitality during his visit. The work of DPS was supported by Ciencia de Frontera CONACYT project No. 428218 and the program "BECAS CONACYT NACIONALES". ## Appendix A Accidental \(\mathbb{Z}_{2}\) symmetries The dark \(\mathbb{Z}_{2}\) parity of the Scotogenic model can also be an accidental symmetry generated after the \(\sigma\) singlet (or singlets) acquires a VEV. In these scenarios, the symmetry breaking path is also \(\mathrm{U}(1)_{\mathrm{L}}\to\mathbb{Z}_{2}\), but with \(\ell_{L}\), \(e_{R}\) and \(\eta\) as the only particles charged under the discrete symmetry. In this case, the Yukawa term \(\bar{N}\tilde{\eta}^{\dagger}\ell_{L}\) and the Majorana mass \(\overline{N}^{c}N\) are allowed by all symmetries, while \(\bar{N}\tilde{H}^{\dagger}\ell_{L}\) is forbidden. Furthermore, given that \(\eta\) is the only \(\mathbb{Z}_{2}\)-odd scalar, it will always appear in pairs in the effective scalar potential. Therefore, although the \(\mathbb{Z}_{2}\) Scotogenic parity does not emerge as a remnant symmetry after the breaking of \(\mathrm{U}(1)_{\mathrm{L}}\), it appears _accidentally_ as a consequence of it. In fact, one can see that the resulting symmetry is nothing but a non-supersymmetric version of the well-known R-parity \(R_{p}=(-1)^{3B+L+2s}\)[45], which has its origin in a combination of the \(\mathrm{U}(1)_{\mathrm{L}}\) and Lorentz symmetries. 6 These UV models are not included in the classification presented in Sec. 3 since they violate requirement (A). However, they also lead to the Scotogenic model at low energies. Footnote 6: The relation between R-parity and the Scotogenic \(\mathbb{Z}_{2}\) symmetry has been explored in [46]. Let us illustrate this possibility with a specific example. 7 Consider the particle content and charge assignment in Table 8. 
The new Yukawa interactions in the model are given by Footnote 7: This model corresponds to the \(\Pi^{\prime}\,(1,\emptyset)\) model shown below in Table 9. \[\mathcal{L}_{Y}=y\,\overline{N}\,\tilde{\eta}^{\dagger}\,\ell_{L}+M_{N}\, \overline{N}^{c}N+\mathrm{h.c.}\,, \tag{84}\] while the scalar potential of the model is written as \[\mathcal{V}_{\mathrm{UV}} =m_{H}^{2}H^{\dagger}H+m_{S}^{2}S^{*}S+m_{\sigma}^{2}\sigma^{*} \sigma+m_{\eta}^{2}\eta^{\dagger}\eta+\frac{\lambda_{1}}{2}(H^{\dagger}H)^{2} +\frac{\lambda_{2}}{2}(\eta^{\dagger}\eta)^{2}\] \[+\frac{\lambda_{S}}{2}(S^{*}S)^{2}+\frac{\lambda_{\sigma}}{2}( \sigma^{*}\sigma)^{2}+\lambda_{3}(H^{\dagger}H)(\eta^{\dagger}\eta)+\lambda_{ 3}^{S}(H^{\dagger}H)(S^{*}S) \tag{85}\] \[+\lambda_{3}^{\sigma S}(\sigma^{*}\sigma)(S^{*}S)+\lambda_{4}(H^{ \dagger}\eta)(\eta^{\dagger}H)+\left[\beta(\sigma H^{\dagger}\eta S)+\mu_{1}\, H^{\dagger}\eta S^{*}+\mu_{2}\,\sigma\,S^{2}+\mathrm{h.c.}\right]\,.\] It is easy to check that other Lagrangian terms are forbidden by \(\mathrm{U}(1)_{\mathrm{L}}\). This global symmetry gets spontaneously broken once the electroweak singlet \(\sigma\) acquires a non-zero VEV, leaving a remnant \(\mathbb{Z}_{2}\) under which \(\eta\), \(S\), \(\ell_{L}\) and \(e_{R}\) are odd, while the rest of the fields are even. We can call this symmetry \(\mathbb{Z}_{2}^{\rm rem}\). Since \(q_{N}=0\), \(N\) is even under \(\mathbb{Z}_{2}^{\rm rem}\), and thus this symmetry cannot be identified with the Scotogenic dark parity. Nevertheless, the Lagrangian of the Scotogenic model is still obtained after decoupling the heavy scalar \(S\). This is due to the fact that a new accidental \(\mathbb{Z}_{2}\) parity appears. The only fields charged under this parity are \(\eta\) and \(N\), while all the other fields in the effective theory are even, therefore, this accidental symmetry, that we can denote as \(\mathbb{Z}_{2}^{\rm acc}\), is precisely the Scotogenic \(\mathbb{Z}_{2}\). 
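The charge bookkeeping above can be made explicit. The following sketch (illustrative, not from the paper) verifies with the Table 8 assignments that every term kept in Eqs. (84)-(85) is \(\mathrm{U}(1)_{\mathrm{L}}\) neutral, that the tree-level Yukawa \(\bar{N}\tilde{H}^{\dagger}\ell_{L}\) is forbidden, and that the remnant parity \((-1)^{q}\) leaves exactly \(\eta\), \(S\), \(\ell_{L}\) and \(e_{R}\) odd; note that \(\tilde{\eta}^{\dagger}\) carries the same \(\mathrm{U}(1)_{\mathrm{L}}\) charge as \(\eta\), since \(\tilde{\eta}=i\sigma_{2}\eta^{*}\):

```python
# U(1)_L charges from Table 8; a trailing "*" denotes the conjugate field.
q = {"lL": 1, "eR": 1, "N": 0, "H": 0, "eta": -1, "sigma": 2, "S": -1}

def charge(*fields):
    """Total U(1)_L charge of a product of fields."""
    total = 0
    for f in fields:
        total += -q[f[:-1]] if f.endswith("*") else q[f]
    return total

# Terms kept in Eqs. (84)-(85) are U(1)_L neutral:
assert charge("N*", "eta", "lL") == 0          # y N-bar eta-tilde^dag l_L
assert charge("sigma", "H*", "eta", "S") == 0  # beta sigma H^dag eta S
assert charge("H*", "eta", "S*") == 0          # mu_1 H^dag eta S^*
assert charge("sigma", "S", "S") == 0          # mu_2 sigma S^2
assert charge("N*", "N*") == 0                 # M_N N-bar^c N (q_N = 0)
# ...while the tree-level seesaw Yukawa is forbidden:
assert charge("N*", "H", "lL") != 0            # N-bar H-tilde^dag l_L

# After <sigma> != 0 (lepton number broken by two units), the remnant
# Z2 parity is (-1)^q: odd fields are those with odd U(1)_L charge.
odd = {f for f, c in q.items() if c % 2 != 0}
assert odd == {"eta", "S", "lL", "eR"}
```

The last assertion reproduces the remnant \(\mathbb{Z}_{2}^{\rm rem}\) stated in the text, under which \(N\) is even; the Scotogenic parity then only arises as the accidental \(\mathbb{Z}_{2}^{\rm acc}\).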
As already explained, it is a non-supersymmetric version of R-parity. Let us now generalize the idea studied in this Appendix. We consider again the set of models in which \((H^{\dagger}\eta)^{2}\) is generated by the topologies shown in Table 2 with the addition of at most two different singlets \(\sigma_{1,2}\). There are two possibilities to construct models in which the \(\mathbb{Z}_{2}^{\rm acc}\) symmetry is obtained: 1. **Models with \(\boldsymbol{q_{N}\neq 0}\)**. In this case we consider the models shown in Table 2 but impose that \(N\) is even under the remnant \(\mathbb{Z}_{2}^{\rm rem}\) parity while \(\ell_{L}\), \(e_{R}\) and \(\eta\) are odd. The Majorana masses of the \(N\) fermions are induced by the \(\kappa\,\sigma_{1}\overline{N}^{c}N\) Yukawa term. 2. **Models with \(\boldsymbol{q_{N}=0}\)**. This case is excluded from the classification in Sec. 3, which focuses on \(q_{N}\neq 0\), and must be discussed independently. In these models the Majorana mass term \(M_{N}\,\overline{N}^{c}N\) is present in the UV theory. We now proceed to discuss these two cases independently. Again, we find it convenient to consider topologies \(\mathrm{I}-\mathrm{I}\mathrm{V}\) and \(\mathrm{V}\) separately, since they have some qualitative differences. ### Topologies I-IV We first consider topologies \(\mathrm{I}-\mathrm{I}\mathrm{V}\). The case of \(q_{N}\neq 0\) can be regarded as a revision of our discussion in Sec. 3, imposing now different conditions on the resulting models. In fact, the models studied in Sec. 3 could also lead to \(\mathrm{U}(1)_{\rm L}\to\mathbb{Z}_{2}^{\rm rem}\), leaving the Scotogenic \(\mathbb{Z}_{2}\) parity as an accidental symmetry. This will be the case when these conditions on \(q_{N}\) are satisfied: * \(q_{N}=2\,z\), where \(z\) can be any integer number except zero. 
\begin{table} \begin{tabular}{|c|c||c c c|c|} \hline **Field** & **Generations** & \(\mathrm{SU}(3)_{\rm c}\) & \(\mathrm{SU}(2)_{\rm L}\) & \(\mathrm{U}(1)_{\rm Y}\) & \(\mathrm{U}(1)_{\rm L}\) \\ \hline \(\ell_{L}\) & 3 & **1** & **2** & -1/2 & 1 \\ \(e_{R}\) & 3 & **1** & **1** & -1 & 1 \\ \(N\) & 3 & **1** & **1** & 0 & 0 \\ \hline \(H\) & 1 & **1** & **2** & 1/2 & 0 \\ \(\eta\) & 1 & **1** & **2** & 1/2 & -1 \\ \(\sigma\) & 1 & **1** & **1** & 0 & 2 \\ \(S\) & 1 & **1** & **1** & 0 & -1 \\ \hline \end{tabular} \end{table} Table 8: Lepton and scalar particle content and representations under the gauge and global symmetries in an UV extension of the Scotogenic model with accidental \(\mathbb{Z}_{2}\) symmetry. * \(q_{N}=\frac{\alpha}{\beta}\), with \(\alpha,\beta\in\mathbb{Z}\) and \(\alpha\) and \(\beta\) even and odd, respectively. Also, \(\mathrm{GCD}(\alpha,\beta)=1\) has to be satisfied. Notice, however, that models with fixed charges, that is, the ones with only \(\sigma_{1}\), always have the Scotogenic symmetry as the remnant symmetry and do not enter this discussion. Considering now scenarios with \(q_{N}=0\), only \(11\) different models exist and they are listed in Table 9. Let us denote them as \(\xi^{\prime}(A,B)\), where \(\xi=\{\mathrm{I},\mathrm{II},\mathrm{III},\mathrm{IV}\}\) and the prime is used to distinguish these models from the ones studied in Sec. 3. Each of the \(11\) models needs to satisfy any of the following conditions on \(q_{\sigma_{1}}\) in order to generate the \(\mathbb{Z}_{2}\) parity as an accidental symmetry: * \(q_{\sigma_{1}}=2\,z\), where \(z\) can be any integer number, including zero. 8 Footnote 8: We note that if \(q_{\sigma_{1}}=0\), a second \(\sigma_{2}\) singlet, with \(q_{\sigma_{2}}\neq 0\), is required to break the \(\mathrm{U}(1)_{\mathrm{L}}\) symmetry. In this case, \(\sigma_{1}\) becomes a total singlet and is irrelevant for the model construction. 
* \(q_{\sigma_{1}}=\frac{\alpha}{\beta}\), with \(\alpha,\beta\in\mathbb{Z}\) and \(\alpha\) and \(\beta\) even and odd, respectively. Also, \(\mathrm{GCD}(\alpha,\beta)=1\) has to be satisfied. We finally point out that in none of the above scenarios \(\eta\) gets an induced VEV. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & **Topology** & \(\mathbf{A}\) & \(\mathbf{B}\) & \(\mathbf{q_{N}}\) & \(\mathbf{q_{\eta}}\) & \(\mathbf{q_{\sigma_{1}}}\) & \(\mathbf{q_{\sigma_{2}}}\) & \(\mathbf{q_{S}}\) & \(\mathbf{(SU(2)_{L},U(1)_{Y})_{S}}\) \\ \hline \hline 1 & \(\mathrm{I}^{\prime}\) & 1 & \(\emptyset\) & 0 & \(-1\) & 2 & - & \(-2\) & \((\mathbf{3},1)\) \\ 2 & \(\mathrm{I}^{\prime}\) & \(\emptyset\) & 1 & 0 & \(-1\) & 2 & - & 0 & \((\mathbf{3},1)\) \\ 3 & \(\mathrm{I}^{\prime}\) & 1 & 2 & 0 & \(-1\) & \(q_{\sigma_{1}}\) & \(2-q_{\sigma_{1}}\) & \(-q_{\sigma_{1}}\) & \((\mathbf{3},1)\) \\ 4-5 & \(\mathrm{II}^{\prime}\) & 1 & \(\emptyset\) & 0 & \(-1\) & 2 & - & \(-1\) & \((\mathbf{3},0)\) or \((\mathbf{1},0)\) \\ 6-7 & \(\mathrm{II}^{\prime}\) & 1 & 2 & 0 & \(-1\) & \(q_{\sigma_{1}}\) & \(2-q_{\sigma_{1}}\) & \(1-q_{\sigma_{1}}\) & \((\mathbf{3},0)\) or \((\mathbf{1},0)\) \\ 8 & \(\mathrm{III}^{\prime}\) & 1 & \(\emptyset\) & 0 & \(-1\) & 2 & - & \(-2\) & \((\mathbf{2},1/2)\) \\ 9 & \(\mathrm{III}^{\prime}\) & 1 & 2 & 0 & \(-1\) & \(q_{\sigma_{1}}\) & \(2-q_{\sigma_{1}}\) & \(-2\) & \((\mathbf{2},1/2)\) \\ 10 & \(\mathrm{IV}^{\prime}\) & 1 & \(\emptyset\) & 0 & \(-1\) & 2 & - & 1 & \((\mathbf{2},1/2)\) \\ 11 & \(\mathrm{IV}^{\prime}\) & 1 & 2 & 0 & \(-1\) & \(q_{\sigma_{1}}\) & \(2-q_{\sigma_{1}}\) & 1 & \((\mathbf{2},1/2)\) \\ \hline \hline \end{tabular} \end{table} Table 9: UV extended models leading to topologies \(\mathrm{I}-\mathrm{IV}\) and for which the term \(\overline{N}^{c}N\) is allowed and the Scotogenic \(\mathbb{Z}_{2}\) is an accidental symmetry. For each model we show the \(\mathrm{U}(1)_{\mathrm{L}}\) charges of \(N\), \(\eta\), \(\sigma_{1}\), \(\sigma_{2}\) and \(S\), as well as the \((\mathrm{SU}(2)_{\mathrm{L}},\mathrm{U}(1)_{\mathrm{Y}})\) representation of \(S\). Models that become any of the models in this list after renaming the fields or redefining their \(\mathrm{U}(1)_{\mathrm{L}}\) charges are not included. ### Topology V We move on to topology V. Again, for this topology we can distinguish the same two types of models as for the previous topologies. First of all, we consider the case \(q_{N}\neq 0\). The accidental symmetry arises for the models 29-40 in Table 4 when any of these conditions is satisfied: * \(q_{N}=2\,z\), where \(z\) can be any integer number except zero. * \(q_{N}=\frac{\alpha}{\beta}\), with \(\alpha,\beta\in\mathbb{Z}\) and \(\alpha\) and \(\beta\) even and odd, respectively (\(\beta\neq 1\)). Also, \(\mathrm{GCD}(\alpha,\beta)=1\) has to be satisfied. For models 41-50 we have different conditions, although in all of them we need \(q_{N}=\frac{\alpha}{\beta}\), with \(\alpha,\beta\in\mathbb{Z}\) and \(\alpha\) and \(\beta\) even and odd, respectively. Also, \(\mathrm{GCD}(\alpha,\beta)=1\) has to be satisfied and we will allow \(\beta=1\) in these models. In addition: * In model \(\mathrm{V}(2,1,0)\), we further require \(q_{N}\neq\pm\frac{2}{3}\) if \(S\) is a singlet and \(q_{N}\neq\frac{2}{3}\) if \(S\) is a triplet. In both cases, \(q_{N}\) cannot be an integer (i.e., we need \(\beta\neq 1\)). * In model \(\mathrm{V}(2,1^{*},0)\), we further require \(q_{N}\neq\frac{2}{3}\), \(\frac{2}{5}\), \(\frac{2}{7}\) if \(S\) is a singlet and \(q_{N}\neq\frac{2}{3}\), \(\frac{2}{5}\) if \(S\) is a triplet. In both cases, \(q_{N}\) cannot be an integer (i.e., we need \(\beta\neq 1\)). * In model \(\mathrm{V}(2,2,0)\), we have two options depending on the nature of \(q_{N}\). If \(q_{N}\in\mathbb{Z}\), \(\mathrm{GCD}(3q_{N},1-q_{N})=1\) if \(\frac{q_{N}-1}{3}\) is not an integer and \(\mathrm{GCD}(q_{N},\frac{q_{N}-1}{3})=1\) if \(\frac{q_{N}-1}{3}\) is an integer.
If \(q_{N}\notin\mathbb{Z}\), \(\mathrm{GCD}(3\,\alpha,\alpha-\beta)=1\) if \(\frac{\alpha-\beta}{3}\) is not an integer and \(\mathrm{GCD}(\alpha,\frac{\alpha-\beta}{3})=1\) if \(\frac{\alpha-\beta}{3}\) is an integer. * In model \(\mathrm{V}(2,1,2)\), we also have two options depending on the nature of \(q_{N}\). If \(q_{N}\in\mathbb{Z}\), \(\mathrm{GCD}(3q_{N},1-2q_{N})=1\) if \(\frac{1-2\,q_{N}}{3}\) is not an integer and \(\mathrm{GCD}(q_{N},\frac{1-2\,q_{N}}{3})=1\) if \(\frac{1-2\,q_{N}}{3}\) is an integer. If \(q_{N}\notin\mathbb{Z}\), \(\mathrm{GCD}(3\,\alpha,2\,\alpha-\beta)=1\) if \(\frac{2\,\alpha-\beta}{3}\) is not an integer and \(\mathrm{GCD}(\alpha,\frac{2\,\alpha-\beta}{3})=1\) if \(\frac{2\,\alpha-\beta}{3}\) is an integer. * In model \(\mathrm{V}(2,1,2)\), we have again two options depending on the nature of \(q_{N}\). There is no further requirement if \(q_{N}\in\mathbb{Z}\), whereas if \(q_{N}\notin\mathbb{Z}\), \(\mathrm{GCD}(3\,\alpha,\beta)=1\) if \(\frac{\beta}{3}\) is not an integer and \(\mathrm{GCD}(\alpha,\frac{\beta}{3})=1\) if \(\frac{\beta}{3}\) is an integer. Models with \(q_{N}=0\) based on the topology \(\mathrm{V}\) are collected in Table 10. Each of the 8 models needs to satisfy any of the following conditions on \(q_{\sigma_{1}}\) in order to generate the \(\mathbb{Z}_{2}\) parity as an accidental symmetry: * \(q_{\sigma_{1}}=2\,z\), where \(z\) can be any integer number, including zero. * \(q_{\sigma_{1}}=\frac{\alpha}{\beta}\), with \(\alpha,\beta\in\mathbb{Z}\) and \(\alpha\) and \(\beta\) even and odd, respectively. Also, \(\mathrm{GCD}(\alpha,\beta)=1\) has to be satisfied.
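The two bullet conditions on \(q_{\sigma_{1}}\) above can be restated compactly: writing \(q_{\sigma_{1}}\) as a fraction in lowest terms, both cases amount to requiring an even numerator (an even numerator in lowest terms automatically forces an odd, coprime denominator). A small illustrative sketch of this check, not part of the paper:

```python
from fractions import Fraction

def q_sigma1_allowed(charge):
    """Both bullet conditions: q = 2z, or q = alpha/beta with alpha even,
    beta odd and GCD(alpha, beta) = 1.  In lowest terms this reduces to an
    even numerator, since Fraction already enforces coprimality."""
    return Fraction(charge).numerator % 2 == 0

assert q_sigma1_allowed(0)                   # q = 2z with z = 0
assert q_sigma1_allowed(4)                   # q = 2z with z = 2
assert q_sigma1_allowed(Fraction(2, 3))      # alpha = 2 even, beta = 3 odd
assert not q_sigma1_allowed(1)               # odd integer charge fails
assert not q_sigma1_allowed(Fraction(3, 5))  # alpha odd
```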
2308.16058
A Classification of Observation-Driven State-Space Count Models for Panel Data
State-space models are widely used in many applications. In the domain of count data, one such example is the model proposed by Harvey and Fernandes (1989). Unlike many of its parameter-driven alternatives, this model is observation-driven, leading to closed-form expressions for the predictive density. In this paper, we demonstrate the need to extend the model of Harvey and Fernandes (1989) by showing that their model is not variance stationary. Our extension can accommodate a wide range of variance processes that are either increasing, decreasing, or stationary, while keeping the tractability of the original model. Simulation and numerical studies are included to illustrate the performance of our method.
Jae Youn Ahn, Himchan Jeong, Yang Lu, Mario V. Wüthrich
2023-08-30T14:30:58Z
http://arxiv.org/abs/2308.16058v1
# A Classification of Observation-Driven State-Space Count Models for Panel Data ###### Abstract State-space models are widely used in many applications. In the domain of count data, one such example is the model proposed by Harvey and Fernandes (1989). Unlike many of its parameter-driven alternatives, this model is observation-driven, leading to closed-form expressions for the predictive density. In this paper, we demonstrate the need to extend the model of Harvey and Fernandes (1989) by showing that their model is not variance stationary. Our extension can accommodate a wide range of variance processes that are either increasing, decreasing, or stationary, while keeping the tractability of the original model. Simulation and numerical studies are included to illustrate the performance of our method. **Keywords:** dependence, posterior ratemaking, dynamic random effects, conjugate-prior, local-level models, state-space model, experience rating. JEL Classification: C32; C53. MSC: 62M10. ## 1 Introduction Time series of count data are widely used in many areas such as insurance, finance, marketing, economics, etc. According to Cox (1981), there are two major types of time series models, called observation-driven and parameter-driven, respectively. In the count time series framework, the most popular observation-driven time series models are thinning-based models, such as \(\text{INAR}(p)\) and INGARCH models; see, e.g., Lu (2021); Davis et al. (2021) for a review. These models are not state-space based, whereas the best-known state-space models include, for instance, Zeger (1988); Henderson and Shimakura (2003); Fruhwirth-Schnatter and Wagner (2006); Cui and Lund (2009); Davis and Wu (2009); Jung et al. (2011); Jia et al. (2023), to name a few. Compared to \(\text{INAR}(p)\) and INGARCH models, state-space models have several advantages: * First, it is more convenient to include covariates (i.e., regressors). 
* Second, it is more convenient to address missing values, as well as changes of exposure, in a state-space framework. * Third, stationarity (or non-stationarity) is more tractable under a state-space framework, and in the case of a stationary process, the marginal distribution is often simple to work out. These three properties can be essential in applications involving panel (i.e., longitudinal) data. One typical example is car insurance pricing, where the insurer observes, for each year, the values of the covariates, as well as a count response variable representing the annual number of claims. Let us explain the importance of these three properties in more detail. _Allowing for covariates_. Most INAR and INARCH (or INGARCH) type models are used without including covariates. The only exceptions we are aware of are Davis et al. (2003) and Agosto et al. (2016). These models directly postulate conditional distributions of the future observations, whose parameters are functions of the _current_ values of the covariates. A drawback of this approach is that past values of the covariates do not enter into the conditional distribution. To see why past covariate values could be important for car insurance pricing, let us assume, for expository purposes, that the covariate \(X_{t}\) is univariate, and that given past claim numbers \(Y_{t-1},Y_{t-2},\ldots\) and past and current covariate values, the conditional expectation (i.e., the premium) of \(Y_{t}\) is increasing in \(X_{t}\). In other words, \(X_{t}\) measures the underlying risk of the policy. Then, _given_ \(Y_{t-1},\ldots\) and \(X_{t}\), the premium should be decreasing in past covariate values \(X_{t-1},\ldots\); this is because the larger \(X_{t-1},\ldots\), the more "efforts" the policyholder made in the past to arrive at the given numbers of claims \(Y_{t-1},\ldots\). 
Because these efforts are statistically likely to continue in the future,1 the premium should be decreasing in \(X_{t-1},\ldots\); we refer to Equation (15) of Dionne and Vanasse (1989) for an example of a premium function that satisfies this decreasingness constraint. Footnote 1: And they should be compensated to give incentives for safe driving, due to bonus-malus systems used in insurance pricing. _Accounting for missing values and change of exposure_. In car insurance, it is common for the insurance policies to be analyzed by calendar year. Then, for each policy, the first observation is usually left truncated (due to policy inceptions during the calendar year), with an exposure equal to only a fraction of a year. Similarly, the last observation could be censored because of early termination of the policy. These differences of exposure can be conveniently handled by multiplying the stochastic Poisson parameter in the state-space model by an offset term equal to the fraction of the year covered. It is less straightforward to adjust for such changes of exposure under the INAR and INGARCH frameworks. _Analyzing stationarity and stationary distribution_. In a longitudinal data context, the likelihood function involves the initial distribution of the observed process. Indeed, for each given individual, the joint probability density function (pdf) of the first \(T\) observations can be decomposed as \[f(Y_{1},Y_{2},\ldots,Y_{T})=f(Y_{1})f(Y_{2}|Y_{1})f(Y_{3}|Y_{1},Y_{2})\cdots f(Y_{T}|Y_{T-1},\ldots,Y_{1}),\] where the first term \(f(Y_{1})\) is the marginal pdf of the first response \(Y_{1}\). In observation-driven count models such as INAR(\(p\)) and IN(G)ARCH, the subsequent terms corresponding to the conditional distributions are usually more tractable, but the first term can be rather cumbersome. This first term can be omitted only if the time series length \(T\) is very large. When \(T\) is small, however, its omission induces a bias. 
In state-space models, on the other hand, the term \(f(Y_{1})\) is often tractable, since \(Y_{1}\) is the output of a latent variable \(\Theta_{1}\), whose distribution is usually chosen to be simple. For instance, in many parameter-driven models, the latent process (\(\Theta_{t}\)) is assumed stationary and ergodic, following a Gaussian AR(1) or an autoregressive gamma process. Then, the marginal distribution of \(\Theta_{1}\) is simple. Despite the three aforementioned advantages of state-space count models, many _parameter-driven_ state-space models often suffer from a much higher computational burden since the latent process (\(\Theta_{t}\)) has to be integrated out, which leads to a \(T\)-dimensional integral that may be approximated via Monte Carlo simulation; see, e.g., Chan and Ledolter (1995), Fruhwirth-Schnatter and Wagner (2006). This makes their implementation challenging, especially in a longitudinal data context with a large cross-sectional dimension; see, e.g., Lu (2018) for a discussion in the context of insurance pricing. The model of Harvey and Fernandes (1989) (henceforth, the HF model) is one of the rare examples of _observation-driven_ state-space models that enjoys the three properties above, while still being tractable. More precisely, in contrast to its _parameter-driven_ counterparts, the dynamics of this latter model is not exogenous, but endogenous, in the sense that it depends not only on the past values of the state variable, but also on the past values of the observed responses \[\Theta_{t+1}|(\Theta_{1:t},Y_{1:t})\ =\ \Theta_{t+1}|(\Theta_{t},Y_{1:t}), \tag{1}\] where \(Y_{1:t}=(Y_{1},\ldots,Y_{t})\) and \(\Theta_{1:t}=(\Theta_{1},\ldots,\Theta_{t})\) denote the processes of the past claim observations, \((Y_{t})\), and the latent risk factors up to time \(t\), \((\Theta_{t})\), respectively. 
The tractability of the predictive distribution arises from the Poisson-gamma conjugacy, by _assuming_ that the conditional distribution on the right-hand side of (1) is a gamma distribution, while the conditional distribution in the measurement equation of \(Y_{t}\), given \(\Theta_{t}\), is assumed to be Poisson. This model can be regarded as the count-valued analog of a Bayesian state-space model that relies on conjugate priors, such as Smith and Miller (1986) and Shephard (1994) for real-valued univariate processes, and Uhlig (1997) for real-valued multivariate processes. It has recently been applied by Ahn et al. (2023) in an insurance pricing context. In this paper we start by explaining that in the HF model, the state variable follows a (multiplicative) random walk, and it has a non-stationary (increasing) variance process. This explosion-in-variance property might not be appropriate in many applications. Therefore, we extend the HF model to accommodate various other types of variance dynamics. In particular, we classify the extended class of observation-driven state-space models into several groups: 1. The original HF model, which has an _explosive (non-stationary)_ dynamics with increasing variance \(\mathrm{V}ar\left(\Theta_{t}\right)\) with time \(t\geq 1\). 2. The second class corresponds to the case where, when \(t\) goes to infinity, the latent process (\(\Theta_{t}\)) degenerates to a constant and, hence, the uncertainty related to the non-observability of the latent factor \(\Theta_{t}\) asymptotically vanishes. 3. In the third class, the latent process (\(\Theta_{t}\)) considered in (1) has a variance process that is bounded (away from zero and infinity). This class includes the special cases where the variance process is time-invariant or converging. 
This third case is probably the most realistic situation for car insurance pricing, where we learn the unobservable risk factors of the insurance policyholders over time, but there always remains some uncertainty. The models in these three classes differ in terms of their (variance) stationarity properties, but they all enjoy the three aforementioned properties in terms of covariates, change of exposure, and stationary distributions. Our paper also contributes to the forecasting literature on exponential moving average or exponential smoothing; see Hyndman et al. (2008), Chapter 16, for a review of such methods for count data. This literature has traditionally focused on models with increasing variance, such as the HF model, which gives rise to an exponentially weighted moving average (EWMA) predictor. In this paper we show that many count process models with bounded variance also allow for EWMA predictors, hence, broadening the scope of exponential smoothing methods. The rest of the paper is organized as follows. Section 2 extends the HF model, by allowing some of the parameters of the HF model to be time-varying. Section 3 discusses various specifications of this extended HF family, and it classifies them into different classes according to their stationarity (or non-stationarity) behavior, see Table 1, below. Section 4 illustrates the difference of their long-term dynamics through simulations. Section 5 compares these models using a real insurance dataset. Section 6 concludes. The mathematical proofs are provided in the appendix. ## 2 The extended HF model Throughout this paper, we use the following notation. * \(\text{Gamma}(\alpha,\beta)\): gamma distribution with shape parameter \(\alpha>0\) and rate parameter \(\beta>0\). It has mean \(\alpha/\beta\) and variance \(\alpha/\beta^{2}\). By convention, we use \(\text{Gamma}(0,\beta)\) to denote a constant zero. 
* Beta(\(\alpha,\beta\)): beta distribution on \((0,1)\) with mean \(\frac{\alpha}{\alpha+\beta}\) and variance \(\frac{\alpha\beta}{(\alpha+\beta)^{2}(\alpha+\beta+1)}\). By convention, we use Beta(\(\alpha,0\)) to denote a constant one. * Pois(\(\lambda\)): Poisson distribution with mean \(\lambda>0\). * NB(\(\lambda,\Gamma\)): negative binomial (NB) distribution with mean \(\lambda\) and variance \(\lambda+\lambda^{2}/\Gamma\). We recall the usual Poisson-gamma relationship. If \(Y\) is Poisson, given \(\Theta\), with mean \(\Theta\), and \(\Theta\) follows a gamma prior distribution, then the posterior distribution of \(\Theta\), given \(Y\), is still a gamma distribution, and the marginal distribution of \(Y\) is a NB distribution. ### The model We provide an observation-driven state-space model with constant mean, which generalizes the HF model. **Model 1**.: _Given exogenous processes \(\left(\lambda_{t}\right)_{t\geq 1},\ \left(q_{t}^{*}\right)_{t\geq 1}\) and \(\left(q_{t}^{**}\right)_{t\geq 1}\) satisfying for \(t\geq 1\)_ \[0\leq q_{t}^{*}\leq q_{t}^{**}\leq 1\quad\text{ and }\quad q_{t}^{**},\lambda_{t}>0, \tag{2}\] _the response variables \((Y_{t})_{t\geq 1}\) and the state-space variables (random effects) \((\Theta_{t})_{t\geq 1}\) satisfy:_ * _The conditional distribution of_ \(Y_{t}\)_, given the state variable and past observations_ \(Y_{1:(t-1)}\)_, is Poisson_ Footnote 2: By convention, for \(t=1\), the information set \(\sigma(Y_{1:(t-1)})\) reduces to the trivial \(\sigma\)-field. 
\[Y_{t}|\left(Y_{1:(t-1)},\Theta_{1:t}\right)\sim\operatorname{Pois}\left(\lambda_{t}\Theta_{t}\right),\qquad\text{ for }t\geq 1.\] (3) * _At time_ \(t=1\)_,_ \(\Theta_{1}\) _is gamma distributed as_ \[\Theta_{1}\sim\operatorname{Gamma}\left(\alpha_{1|0},\beta_{1|0}\right),\] (4) _where, for identification purposes, we assume equal deterministic parameters_ \(\alpha_{1|0}=\beta_{1|0}>0\)_, so that_ \(\mathbb{E}\left[\Theta_{1}\right]=1\)_._ * _At time_ \(t\geq 1\)_, the filtering distribution of_ \(\Theta_{t}\)_, given past observations_ \(Y_{1:t}\)_, is gamma_ \[\Theta_{t}|Y_{1:t}\sim\operatorname{Gamma}\left(\alpha_{t},\beta_{t}\right),\] (5) _where_ \(\alpha_{t}>0\) _and_ \(\beta_{t}>0\) _are deterministic functions of_ \(Y_{1:t}\) _and_ \(\lambda_{1:t}\) _up to time_ \(t\)__ \[\alpha_{t}=\begin{cases}\alpha_{t|t-1}+Y_{t},&\text{if $Y_{t}$ is observed};\\ \alpha_{t|t-1},&\text{otherwise};\end{cases}\] (6) _and_ \[\beta_{t}=\begin{cases}\beta_{t|t-1}+\lambda_{t},&\text{if $Y_{t}$ is observed};\\ \beta_{t|t-1},&\text{otherwise}.\end{cases}\] (7) * _At time_ \(t+1\geq 1\)_, the predictive distribution of_ \(\Theta_{t+1}\)_, given_ \(Y_{1:t}\)_, is gamma with_ \[\Theta_{t+1}|Y_{1:t}\sim\operatorname{Gamma}\left(\alpha_{t+1|t},\beta_{t+1|t}\right),\] (8) _with, for_ \(t\geq 1\)_,_ \[\alpha_{t+1|t} = q_{t}^{*}\alpha_{t}+\left(q_{t}^{**}-q_{t}^{*}\right)\beta_{t},\] \[\beta_{t+1|t} = q_{t}^{**}\beta_{t}.\] Definitions (6) and (7) follow from Bayes' rule, and \(\alpha_{t}>0\) and \(\beta_{t}>0\) are deterministic functions of the past observations \(Y_{1:t}\), up to time \(t\), and of \(\lambda_{1:t}\); the latter allows one to integrate time-varying covariates. 
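The recursions (6)-(8), together with the predictive mean (11) derived below, translate directly into a one-pass filter. The following sketch is ours (function and argument names are not from the paper); a missing observation simply skips the Bayesian update, as in (6)-(7).

```python
def extended_hf_filter(ys, lams, q_star, q_dstar, a0=1.0, b0=1.0):
    """One-step-ahead predictive means of Model 1.

    ys      : observed counts, with None marking a missing observation
    lams    : exposures / mean parameters lambda_t
    q_star  : q_t^*  (a float, or a sequence indexed by t)
    q_dstar : q_t^** (a float, or a sequence indexed by t)
    a0, b0  : prior parameters alpha_{1|0} = beta_{1|0}
    """
    a_pred, b_pred = a0, b0                     # alpha_{t|t-1}, beta_{t|t-1}
    means = []
    for t, (y, lam) in enumerate(zip(ys, lams)):
        means.append(lam * a_pred / b_pred)     # predictive mean, Eq. (11)
        if y is not None:                       # Bayesian update, Eqs. (6)-(7)
            a, b = a_pred + y, b_pred + lam
        else:                                   # missing: keep the prediction
            a, b = a_pred, b_pred
        qs = q_star[t] if hasattr(q_star, "__getitem__") else q_star
        qd = q_dstar[t] if hasattr(q_dstar, "__getitem__") else q_dstar
        a_pred = qs * a + (qd - qs) * b         # prediction step, Eq. (8)
        b_pred = qd * b
    return means
```

Note that with \(q_{t}^{*}=q_{t}^{**}=1\) the recursions collapse to cumulative sums, \(\alpha_{t}=\alpha_{1|0}+\sum_{s\leq t}Y_{s}\) and \(\beta_{t}=\beta_{1|0}+\sum_{s\leq t}\lambda_{s}\), i.e. a static shared random effect (credibility-type) update.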
From this, we deduce that the conditional distribution of \(Y_{t}\), given \(Y_{1:(t-1)}\), is negative binomial with \[Y_{t}\,|\,Y_{1:(t-1)}\sim\operatorname{NB}\left(\lambda_{t}\,\frac{\alpha_{t| t-1}}{\beta_{t|t-1}},\,\alpha_{t|t-1}\right), \tag{9}\] where the conditional probability mass function is given as follows \[f\left(\left.Y_{t}\right|Y_{1:(t-1)}\right)=\frac{\Gamma\left(Y_{t}+\alpha_{t| t-1}\right)}{Y_{t}!\,\Gamma\left(\alpha_{t|t-1}\right)}\left(\frac{\lambda_{t}} {\lambda_{t}+\beta_{t|t-1}}\right)^{Y_{t}}\left(\frac{\beta_{t|t-1}}{\lambda_{ t}+\beta_{t|t-1}}\right)^{\alpha_{t|t-1}}. \tag{10}\] In particular, the predictive mean is \[\mathbb{E}[Y_{t}|Y_{1:(t-1)}]=\lambda_{t}\,\frac{\alpha_{t|t-1}}{\beta_{t|t-1 }}. \tag{11}\] ### Stochastic representation of the observation-driven property Model 1 defines the conditional distributions of the latent variable \(\Theta_{t}\), given the past (or past and current) observations. However, it does not provide a state equation directly linking \(\Theta_{t}\) with \(\Theta_{t+1}\). To work out this state equation, we first recall the following lemma. **Lemma 1** (Lukacs (1955)).: _Consider two independent random variables_ \[\begin{split}&\Theta\sim\operatorname{Gamma}(\alpha,\beta),\\ & B\sim\operatorname{Beta}\left(q^{*}\alpha,(1-q^{*})\alpha \right),\end{split} \tag{12}\] _where \(\alpha>0,\beta>0\), and \(q^{*}\in(0,1]\) are given constants. Then, their product_ \[\Theta B\sim\operatorname{Gamma}(q^{*}\alpha,\beta).\] As a consequence, if \(\eta\) is independent of \(\Theta\) and \(B\), and \(\eta\sim\operatorname{Gamma}((q^{**}-q^{*})\beta,q^{**}\beta)\) with constant \(q^{**}\) such that \(q^{**}\geq q^{*}\), then we have: \[\frac{\Theta B}{q^{**}}+\eta\sim\operatorname{Gamma}\left(q^{*}\alpha+(q^{**}- q^{*})\,\beta,q^{**}\beta\right). 
\tag{13}\] Formula (13) implies the following stochastic representation of the latent process \((\Theta_{t})_{t\geq 1}\): \[\Theta_{t+1}=\frac{\Theta_{t}B_{t+1}}{q_{t}^{**}}+\eta_{t+1}, \tag{14}\] where \[B_{t+1}\,|\,(Y_{1:t},\Theta_{1:t})\sim\operatorname{Beta}(q_{t}^{*}\alpha_{t}, (1-q_{t}^{*})\alpha_{t}),\] and \[\eta_{t+1}\,|\,(Y_{1:t},\Theta_{1:t})\sim\operatorname{Gamma}((q_{t}^{**}-q_{t }^{*})\beta_{t},q_{t}^{**}\beta_{t}).\] Moreover, \(B_{t+1}\) and \(\eta_{t+1}\) are conditionally independent, given \(Y_{1:t}\) and \(\Theta_{1:t}\). The observation-driven nature of the evolution in (1) is evident from the evolution mechanism in (13)-(14), and this justifies the choice of (8) by an explicit example. ### Link with other time series models Link with random coefficient AR(1) processesWe remark that conditional on the information up to time \(t\) in (14), we have \[\mathbb{E}\left[\frac{B_{t+1}}{q_{t}^{**}}\,\bigg{|}\,\Theta_{1:t},Y_{1:t}\right] =\frac{q_{t}^{*}}{q_{t}^{**}}\leq 1.\] Thus, the process \((\Theta_{t})\) can be compared with a (random coefficient) auto-regressive process, see, e.g., Joe (1996) and Jorgensen and Song (1998), in which the first term in (14) describes a (stochastic) thinning of the previous state \(\Theta_{t}\), and \(\eta_{t+1}\) adds new noise to the update. Our specification of \((\Theta_{t})\) differs from this random coefficient literature by the fact that we consider an endogenous, i.e., observation-driven dynamics of \((\Theta_{t})\). Link with Kalman filtersIt is well known in a linear Gaussian state-space model3 that all the conditional/filtering/predictive distributions are Gaussian; see Durbin and Koopman (2012), Chapter 4. This result is based on the Gaussian-Gaussian conjugacy, as well as the closure of the Gaussian distribution to convolution and scaling. 
Because our model is based on the Poisson-gamma conjugacy, as well as the closure of the gamma distribution to scaling and convolution,4 it can be viewed as a count-variable analogue of the Kalman filter. More generally, state-space models based on other conjugate priors have been proposed by Smith and Miller (1986), Shephard (1994), and Uhlig (1997), to name but a few. Footnote 3: That is, the conditional joint distribution of the pair \((Y_{t+1},\Theta_{t+1})\) given past information \(Y_{1:t}\) and \(\Theta_{1:t}\) is Gaussian. Footnote 4: This holds for a fixed scale parameter. ## 3 A classification according to the behavior of the variance process In this section, we show that Model 1 can allow for various forms of variance processes, e.g., of increasing, decreasing, constant or stationary type. ### The static shared random effect model The model with shared (or static) random effect assumes that given a time-invariant latent variable \(\Theta_{t}\equiv\Theta\), a.s., for all \(t\geq 1\), and with \(\Theta\) following a \(\text{Gamma}(\beta_{1|0},\beta_{1|0})\) distribution, the counts \(Y_{t}\) are conditionally independent with a \(\text{Pois}(\lambda_{t}\Theta)\) distribution. For \(\lambda_{t}\equiv\lambda>0\), this is a special case of the Buhlmann and Straub (1970) credibility model. This static shared random effect model is obtained from Model 1 by setting \[q_{t}^{*}=q_{t}^{**}=1\quad\Longleftrightarrow\quad B_{t+1}\equiv 1,\ \eta_{t+1}\equiv 0. \tag{15}\] ### HF model: A model with increasing variance To analyze more sophisticated cases, let us first investigate some properties of Model 1 concerning the first- and second-order moments of the processes. 
**Lemma 2**.: _In Model 1 we have the following moment behaviors for \(t\geq 1\)_ * \(\mathbb{E}\left[\alpha_{t}\right]=\beta_{t}\)_,_ \(\mathbb{E}\left[\Theta_{t}\right]=1\) _and_ \(\mathbb{E}\left[Y_{t}\right]=\lambda_{t}\)_._ * \(\mathbb{E}\left[\mathrm{V}ar\left(\Theta_{t}\,|\,Y_{1:t}\right)\right]=\frac{1}{\beta_{t}}\)_._ * \(\mathbb{E}\left[\mathrm{V}ar\left(\Theta_{t+1}\,|\,Y_{1:t}\right)\right]=\frac{1}{q_{t}^{**}}\frac{1}{\beta_{t}}\)_._ * \(\mathrm{V}ar\left(\mathbb{E}\left[\Theta_{t+1}\,|\,Y_{1:t}\right]\right)=\left(\frac{q_{t}^{*}}{q_{t}^{**}}\right)^{2}\mathrm{V}ar\left(\mathbb{E}\left[\Theta_{t}\,|\,Y_{1:t}\right]\right)\)_._ Proof.: See A.1. Property \(i)\) says that the process \(\left(\Theta_{t}\right)\) is mean-stationary. Properties \(iii)\) and \(iv)\) allow us to analyze the variance behavior of \(\left(\Theta_{t}\right)\). Indeed, by the total variance decomposition formula, we get \[\mathrm{V}ar\left(\Theta_{t+1}\right) =\mathbb{E}\left[\mathrm{V}ar\left(\Theta_{t+1}\,|\,Y_{1:t}\right)\right]+\mathrm{V}ar\left(\mathbb{E}\left[\Theta_{t+1}\,|\,Y_{1:t}\right]\right) \tag{16}\] \[=\frac{1}{q_{t}^{**}}\frac{1}{\beta_{t}}+\left(\frac{q_{t}^{*}}{q_{t}^{**}}\right)^{2}\mathrm{V}ar\left(\mathbb{E}\left[\Theta_{t}\,|\,Y_{1:t}\right]\right).\] On the other hand, by using the total variance decomposition formula again, we have \[\mathrm{V}ar\left(\Theta_{t}\right) =\mathbb{E}\left[\mathrm{V}ar\left(\Theta_{t}\,|\,Y_{1:t}\right)\right]+\mathrm{V}ar\left(\mathbb{E}\left[\Theta_{t}\,|\,Y_{1:t}\right]\right)\] \[=\frac{1}{\beta_{t}}+\mathrm{V}ar\left(\mathbb{E}\left[\Theta_{t}\,|\,Y_{1:t}\right]\right). \tag{17}\] Let us now compare equations (16) and (17). Because \(\frac{1}{q_{t}^{**}}\geq 1\), the first term in (16) is larger than the first one of (17). Similarly, because \(\left(\frac{q_{t}^{*}}{q_{t}^{**}}\right)^{2}\leq 1\), the second term in (16) is smaller than the second one of (17). 
As a result, the variance process \(\left(\mathrm{V}ar\left(\Theta_{t}\right)\right)\) in our model is not necessarily monotone. Throughout the rest of this section, we will discuss cases in which this sequence of variances \(\left(\mathrm{V}ar\left(\Theta_{t}\right)\right)\) is either increasing, time-varying, or decreasing. In this subsection we start by describing an increasing case. The HF model, Harvey and Fernandes (1989), is obtained from Model 1 under the extra constraints for \(t\geq 1\) \[q_{t}^{*}=q_{t}^{**}=q\ \in\ (0,1). \tag{18}\] Harvey and Fernandes (1989)'s original formulation is without covariates; Gamerman et al. (2013) extend their model by introducing time-varying exogenous covariates \((\lambda_{t})\). In this model, the stochastic representation (14) becomes \[\Theta_{t+1}=\frac{\Theta_{t}B_{t+1}}{q}, \tag{19}\] which implies that \(\operatorname{E}\left[\Theta_{t+1}|Y_{1:t}\right]=\Theta_{t}\). In other words, \((\Theta_{t})\) is a martingale with respect to the filtration generated by \((Y_{1:t})\). The following lemma shows that under some conditions, this martingale has an explosive variance behavior. **Lemma 3** (Explosive variance in the HF model).: _Under the HF Model, if the exogenous process \((\lambda_{t})\) is bounded both above and below across time \(t\geq 1\), then \((\operatorname{V}\!ar\left(\Theta_{t}\right))\) and \((\operatorname{V}\!ar\left(Y_{t}\right))\) increase to infinity when \(t\) goes to infinity._ Proof.: See A.2. One advantage of the HF model is that under this model, the predictive mean (11) is an exponentially weighted moving average (EWMA) of past observations. The literature on exponential moving average forecasting of counts has traditionally focused on _non-stationary_ models only; see, e.g., Hyndman et al. (2008), Chapter 17. It is seen in the next subsection, however, that it is not the only possible specification leading to EWMA predictors. 
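The EWMA form of the predictive mean under the HF constraints (18) can be made explicit: iterating the prediction step gives \(\alpha_{t+1|t}=q^{t}\alpha_{1|0}+\sum_{s=1}^{t}q^{t+1-s}Y_{s}\), and analogously for \(\beta_{t+1|t}\), so (11) is a ratio of exponentially weighted sums. A minimal numerical check, with all numeric choices ours and helper names hypothetical:

```python
import random

def hf_predictive_mean(ys, lam, q, a0, b0):
    """One-step-ahead predictive mean of the HF model via the filter (6)-(8)."""
    a, b = a0, b0                            # alpha_{1|0}, beta_{1|0}
    for y in ys:
        a, b = q * (a + y), q * (b + lam)    # update then predict, q* = q** = q
    return lam * a / b                       # Eq. (11)

def ewma_form(ys, lam, q, a0, b0):
    """Same quantity written as a ratio of exponentially weighted sums."""
    t = len(ys)
    num = q ** t * a0 + sum(q ** (t + 1 - s) * y for s, y in enumerate(ys, start=1))
    den = q ** t * b0 + sum(q ** (t + 1 - s) * lam for s in range(1, t + 1))
    return lam * num / den

random.seed(0)
ys = [random.randint(0, 4) for _ in range(25)]
m1 = hf_predictive_mean(ys, lam=1.0, q=0.8, a0=3.0, b0=3.0)
m2 = ewma_form(ys, lam=1.0, q=0.8, a0=3.0, b0=3.0)
assert abs(m1 - m2) < 1e-9                   # the two formulas agree
```

Recent observations thus receive geometrically larger weight, while the prior contributes through the vanishing terms \(q^{t}\alpha_{1|0}\) and \(q^{t}\beta_{1|0}\).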
### A model with converging variance In this subsection, we discuss a special case of Model 1 that is asymptotically strongly stationary. By _strong_ stationarity, we mean that the conditional distribution of \(Y_{t+h}\), given \(Y_{t}\), converges to a non-degenerate distribution. In other words, in the long run, the process \((Y_{t})\) evolves in a "steady state", i.e., an "equilibrium". This implies, in particular, that the variance and mean of the process converge to positive constants. Note also that stationarity here does not mean "time-invariant"; rather, for large time horizons \(h\), the variance of \(Y_{t+h}\) converges to a positive constant. Specifically, we assume, in Model 1, that \(q_{t}^{*},q_{t}^{**}\) and \(\lambda_{t}\) are all time-invariant for \(t\geq 1\) \[q_{t}^{*}=pq,\qquad q_{t}^{**}=q, \tag{20}\] for \(p,q\in(0,1)\), and \[\lambda_{t}=\lambda. \tag{21}\] Then, we have for \(t\geq 1\) \[\begin{split}\alpha_{t}&=\alpha_{t|t-1}+Y_{t}=pq\alpha_{t-1}+q\left(1-p\right)\beta_{t-1}+Y_{t},\\ \beta_{t}&=\beta_{t|t-1}+\lambda_{t}=q\beta_{t-1}+\lambda_{t},\end{split} \tag{22}\] where the second identities need \(t\geq 2\). Because the predictive distribution of our model is a NB distribution, with the number-of-trials parameter satisfying a linear recursion, it coincides with the NB-INGARCH(1,1) model; see Goncalves et al. (2015), who established its stationarity. **Lemma 4** (Goncalves et al. (2015)).: _If in Model 1 the parameters satisfy (20) and (21), then the process \((Y_{t})\) is asymptotically strongly stationary. In particular, \((\mathrm{V}ar\left(\Theta_{t}\right))\) and \((\mathrm{V}ar\left(Y_{t}\right))\) converge._ Under assumptions (20) and (21), the predictive mean (11) is once again an EWMA of the past observations. In other words, it is an extension of the standard EWMA forecasting literature; see Hyndman et al. (2008). 
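Under (20) and (21), the recursion \(\beta_{t}=q\beta_{t-1}+\lambda\) in (22) is deterministic and converges geometrically to the fixed point \(\lambda/(1-q)\), consistent with the asymptotic stationarity stated in Lemma 4. A quick check, with illustrative values of our choosing:

```python
q, lam = 0.9, 2.0
beta = 3.0 + lam                  # beta_1 = beta_{1|0} + lambda_1, with beta_{1|0} = 3
for _ in range(500):              # iterate beta_t = q * beta_{t-1} + lambda, Eq. (22)
    beta = q * beta + lam
assert abs(beta - lam / (1 - q)) < 1e-9   # fixed point lambda / (1 - q) = 20
```

The error contracts by a factor \(q\) at each step, so \(\beta_{t}\) settles quickly on \(\lambda/(1-q)\) regardless of the initialization.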
**Remark 1**.: There are also other stationary count process models allowing for EWMA predictive mean formulas, such as the INGARCH(1,1), see Ferland et al. (2006), or the NB-INGARCH, see Zhu (2011). These other models, however, do not admit a state-space representation and, therefore, do not possess the three properties mentioned in the introduction. In other words, among all the stationary count models with an EWMA predictive mean, the model of Goncalves et al. (2015) has the advantage of having a state-space representation. ### A model with decreasing variance In this subsection, we consider the special case of Model 1 with the constraint for \(t\geq 1\) \[q_{t}^{*}=p\ \in[0,1)\quad\text{and}\quad q_{t}^{**}=1. \tag{23}\] This implies, for \(t\geq 2\) (see (22)), \[\alpha_{t}=p\alpha_{t-1}+(1-p)\beta_{t-1}+Y_{t}\quad\text{and}\quad\beta_{t}=\beta_{t-1}+\lambda_{t}.\] If \((\lambda_{t})\) is bounded from below, then both processes \((\beta_{t})\) and \((\alpha_{t})\) go to infinity when \(t\) increases to infinity. Let us study the variance of this process. By comparing (16) and (17) with the condition in (23), we get \[\mathrm{V}ar\left(\Theta_{t+1}\right)-\mathrm{V}ar\left(\Theta_{t}\right)=\left(p^{2}-1\right)\mathrm{V}ar\left(\mathbb{E}\left[\Theta_{t}\,|\,Y_{1:t}\right]\right)=\left(p^{2}-1\right)\mathrm{V}ar\left(\frac{\alpha_{t}}{\beta_{t}}\right)<0. \tag{24}\] Thus, the latent process \((\Theta_{t})\), and hence \((Y_{t})\), both have a decreasing variance (for the latter it is sufficient to assume that \((\lambda_{t})\) is bounded). Moreover, we have the following stronger result. **Lemma 5**.: _Under Model 1 and constraint (23), if the exogenous process \((\lambda_{t})\) is bounded both from below and above by positive constants, then \((\mathrm{V}ar\left(\Theta_{t}\right))\) converges to zero, when \(t\) goes to infinity, and \((\mathrm{V}ar\left(Y_{t}\right))\) converges to \(\lambda\), under the additional assumption (21)._ Proof.: See A.3. 
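For the decreasing-variance case (23) with all observations present and a constant \(\lambda\), combining (16) and (17) yields the exact deterministic recursion \(\mathrm{V}ar(\Theta_{t+1})=p^{2}\,\mathrm{V}ar(\Theta_{t})+(1-p^{2})/\beta_{t}\), with \(\beta_{t}=\beta_{1|0}+t\,\lambda\). Iterating this recursion confirms the monotone decrease in (24) and the convergence to zero of Lemma 5; all numeric choices below are ours:

```python
p, lam, b0 = 0.8, 1.0, 3.0
var = 1.0 / b0                     # Var(Theta_1) = alpha_{1|0} / beta_{1|0}^2 = 1/3
beta = b0 + lam                    # beta_1 = beta_{1|0} + lambda (here q** = 1)
history = [var]
for t in range(1, 60):
    var = p ** 2 * var + (1 - p ** 2) / beta   # exact recursion from (16)-(17)
    beta += lam                                 # beta_t = beta_{1|0} + t * lambda
    history.append(var)
assert all(a > b for a, b in zip(history, history[1:]))  # Eq. (24): strictly decreasing
assert history[-1] < 0.02                                # Var(Theta_t) -> 0 (Lemma 5)
```

The variance tracks \(1/\beta_{t}\approx 1/(\beta_{1|0}+t\lambda)\) from above, so the posterior uncertainty about the latent factor vanishes at rate \(1/t\).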
### A model with constant variance In this subsection, we discuss a special case of Model 1 for which the variance is a constant. Note that the model in Goncalves et al. (2015), which is the model in Section 3.3, requires \((\lambda_{t})\) to be time-invariant (21), which might be too restrictive for insurance applications. Moreover, the variance is not time-invariant in Lemma 4, but it only converges to a constant at infinity. In the following, we consider another type of stationarity property, by looking only at the variance of the process \((\Theta_{t})\).6 However, instead of requiring the variance to converge to a constant when time \(t\) goes to infinity, we require it to remain constant for any \(t\),7 that is, for \(t\geq 1\) Footnote 6: Note that the process \((\Theta_{t})\) has a constant mean by Lemma 2. Footnote 7: In particular, we do not require the process to be covariance stationary. That is, the covariance function of the process \(\mathrm{C}ov(\Theta_{t},\Theta_{t-h})\) can depend on \(t\). \[1=\mathbb{E}\left[\Theta_{t}\right]\quad\text{and}\quad\frac{1}{\beta_{1|0}}=\mathrm{V}ar\left(\Theta_{t}\right). \tag{25}\] This will in turn allow us to relax the time-invariance assumption on \((\lambda_{t})\). More precisely, by comparing (16) and (17), we get immediately the following result. **Lemma 6**.: _In Model 1, the variance process \((\operatorname{V\!ar}\left(\Theta_{t}\right))\) is constant, if and only if \(q_{t}^{*}\) and \(q_{t}^{**}\) satisfy the following equation for all \(t\geq 1\)_ \[\frac{1}{q_{t}^{**}}\frac{1}{\beta_{t}}+\left(\frac{q_{t}^{*}}{q_{t}^{**}}\right)^{2}\left(\frac{1}{\beta_{1|0}}-\frac{1}{\beta_{t}}\right)=\frac{1}{\beta_{1|0}}. \tag{26}\] Thus, there are infinitely many possible combinations of \(q_{t}^{*}\) and \(q_{t}^{**}\) in order for \(\operatorname{V\!ar}\left(\Theta_{t}\right)\) to remain constant. Among such choices, motivated by the conditional linear auto-regressive structure (CLAR(1)) in Grunwald et al.
(2000), we may assume the following updating rule \[\operatorname{\mathbb{E}}\left[\Theta_{t+1}\,|\,Y_{1:t}\right]=p\operatorname{\mathbb{E}}\left[\Theta_{t}\,|\,Y_{1:t}\right]+1-p, \tag{27}\] for some \(p\in(0,1)\), which is equivalent to the condition \[\frac{q_{t}^{*}}{q_{t}^{**}}=p. \tag{28}\] Then, under this additional assumption (27), requirement (26) becomes for \(t\geq 1\) \[q_{t}^{**}=\frac{\beta_{1|0}}{p^{2}\beta_{1|0}+\left(1-p^{2}\right)\beta_{t}}, \tag{29}\] which can be calculated recursively from \(\beta_{t}=q_{t-1}^{**}\beta_{t-1}+\lambda_{t}\), \(t\geq 2\), with initialization \(\beta_{1}=\beta_{1|0}+\lambda_{1}\). ### A model with bounded variance For some applications, the assumption of a time-invariant \((\lambda_{t})\) imposed in Section 3.3 might be too restrictive. In this subsection, we relax this assumption, and consider a model satisfying (20) only, but not (21). Then, we get the following result. **Lemma 7**.: _If in Model 1, (20) is satisfied, and if the process \((\lambda_{t})\) is bounded from both above and below by positive constants, then the variance process \((\operatorname{V\!ar}\left(Y_{t}\right))\) is bounded from above._ Proof.: See A.4. ### A typology of models according to the variance process To summarize, the following table lists all the different models considered in this section. Because of Equation (11), the models with a constant \(q_{t}^{*}\) in this table yield a prediction of \(Y_{t+1}\) as an exponential moving average of past observations \(Y_{1},\ldots,Y_{t}\). In this regard, Model 1 broadens the scope of exponential-smoothing-based forecasting methods: for count data, their focus has predominantly been on the HF model; see Hyndman et al. (2008), Chapter 16. 
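The constant-variance construction (28)-(29) can be verified step by step: computing \(q_{t}^{**}\) from (29), setting \(q_{t}^{*}=p\,q_{t}^{**}\), and propagating \(\beta_{t+1}=q_{t}^{**}\beta_{t}+\lambda_{t+1}\) keeps the left-hand side of (26) equal to \(1/\beta_{1|0}\) at every step, even for a time-varying \((\lambda_{t})\). A minimal sketch (the numbers, including the exposure pattern, are illustrative):

```python
p, b0 = 0.9, 3.0
lams = [1.0 + 0.5 * (t % 4) for t in range(50)]     # a time-varying exposure sequence
beta = b0 + lams[0]                                  # beta_1 = beta_{1|0} + lambda_1
for lam_next in lams[1:]:
    qss = b0 / (p ** 2 * b0 + (1 - p ** 2) * beta)   # q_t^{**} from Eq. (29)
    qs = p * qss                                     # q_t^{*} via Eq. (28)
    # left-hand side of Eq. (26): should equal 1 / beta_{1|0} at every t
    lhs = (1 / qss) / beta + (qs / qss) ** 2 * (1 / b0 - 1 / beta)
    assert abs(lhs - 1 / b0) < 1e-12
    beta = qss * beta + lam_next                     # beta_{t+1} = q_t^{**} beta_t + lambda_{t+1}
```

The check passes identically in \(t\) because (29) is exactly the solution of (26) under the ratio constraint (28); no property of the particular \((\lambda_{t})\) sequence is used.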
## 4 Numerical illustration In this section, we simulate trajectories of the various examples considered in Section 3, and we illustrate the differences in long-term behavior between them. Throughout all examples, we set \(\alpha_{1|0}=\beta_{1|0}=3\), and we let \(\lambda_{t}=1\) to simulate 5,000 independent trajectories of \((\Theta_{t})\) for \(t=1,\ldots,T=50\), under each of the following specifications on the dynamics of \((\Theta_{t})\): * Increasing variance of \((\Theta_{t})\) (2nd row of Table 1): \(q_{t}^{*}=q_{t}^{**}=0.8\), * Decreasing variance of \((\Theta_{t})\) (3rd row of Table 1): \(q_{t}^{*}=0.8,\ q_{t}^{**}=1\), * Converging variance of \((\Theta_{t})\) (4th row of Table 1): \(q_{t}^{*}=0.8,\ q_{t}^{**}=0.9\), * Constant variance of \((\Theta_{t})\) (6th row of Table 1): \(q_{t}^{*}=0.9q_{t}^{**},\ q_{t}^{**}=\frac{\beta_{1|0}}{0.9^{2}\beta_{1|0}+(1-0.9^{2})\beta_{t}}=\frac{3}{2.43+0.19\beta_{t}}\). \begin{table} \begin{tabular}{|c|c|} \hline Model & Condition \\ \hline \hline Shared random effect & \(q_{t}^{*}=q_{t}^{**}=1\) \\ \hline HF model with increasing (explosive) variance & \(q_{t}^{*}=q_{t}^{**}=q\), \((\lambda_{t})\) bounded \\ \hline Decreasing variance & \(q_{t}^{*}=p,\ q_{t}^{**}=1\), \((\lambda_{t})\) bounded \\ \hline Converging variance & \(q_{t}^{*}=pq,\ q_{t}^{**}=q,\ \lambda_{t}=\lambda\) \\ \hline Bounded variance & \(q_{t}^{*}=pq,\ q_{t}^{**}=q\), \((\lambda_{t})\) bounded \\ \hline Constant variance & Eq. (26) \\ \hline \end{tabular} \end{table} Table 1: Typology of various special cases of Model 1 according to the long-run behavior of the variance process. All the constants \(p\) and \(q\) lie strictly between 0 and 1 in this table. By “\((\lambda_{t})\) bounded”, we mean that it is both upper and lower bounded by positive constants. ### Long-run behavior of \((\Theta_{t})\) For each of the four models above, we display, in Figure 1, four independent paths over \(T=50\) time periods.
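The four specifications can be simulated with a single generic routine. The following is a sketch of our own, assuming the Gamma-Poisson mechanics of Model 1: \(\Theta_{t}\,|\,Y_{1:t-1}\sim\text{Gamma}(\alpha_{t|t-1},\beta_{t|t-1})\), \(Y_{t}\,|\,\Theta_{t}\sim\text{Poisson}(\lambda_{t}\Theta_{t})\), with predictive updates \(\alpha_{t+1|t}=q_{t}^{*}\alpha_{t}+(q_{t}^{**}-q_{t}^{*})\beta_{t}\) and \(\beta_{t+1|t}=q_{t}^{**}\beta_{t}\), consistent with the recursions given in Section 5:

```python
import numpy as np

rng = np.random.default_rng(0)
a0 = b0 = 3.0          # alpha_{1|0} = beta_{1|0}
lam, T, n = 1.0, 50, 5000

def q_constant(bt):
    """Constant-variance specification: Eq. (29) with p = 0.9."""
    q2 = b0 / (0.9 ** 2 * b0 + (1 - 0.9 ** 2) * bt)
    return 0.9 * q2, q2

specs = {                                   # bt -> (q_t*, q_t**)
    "increasing": lambda bt: (0.8, 0.8),    # HF model: q* = q** = 0.8
    "decreasing": lambda bt: (0.8, 1.0),    # q* = 0.8, q** = 1
    "converging": lambda bt: (0.8, 0.9),    # q* = 0.8, q** = 0.9
    "constant":   q_constant,
}

def simulate(qfun):
    """Simulate n independent paths of (Theta_t), t = 1..T."""
    a = np.full(n, a0)                      # alpha_{t|t-1}
    b = np.full(n, b0)                      # beta_{t|t-1}
    th = np.empty((n, T))
    for t in range(T):
        theta = rng.gamma(a, 1.0 / b)       # Theta_t | Y_{1:t-1}
        y = rng.poisson(lam * theta)        # Y_t | Theta_t
        at, bt = a + y, b + lam             # posterior alpha_t, beta_t
        q1, q2 = qfun(bt)
        a = q1 * at + (q2 - q1) * bt        # alpha_{t+1|t}
        b = q2 * bt                         # beta_{t+1|t}
        th[:, t] = theta
    return th

results = {name: simulate(f)[:, -1] for name, f in specs.items()}
```

With 5,000 paths, all sample means of \(\Theta_{50}\) stay close to 1 (constant mean property), while the sample variances reproduce the ordering discussed in the figures: increasing > constant > converging > decreasing.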
We observe in the northwest panel that the magnitude of the variation of \((\Theta_{t})\) grows over time, reflecting an increasing variance process. Moreover, the trajectories are highly persistent, which echoes the martingale property (19) in the HF model. In the northeast panel, all trajectories of \((\Theta_{t})\) tend to vary less and less over time, which is consistent with the decreasing variance specification. Moreover, all the trajectories fluctuate around one positive value, which is expected because of the constant mean property of the process. In the model with constant variance, it is observed that the fluctuation level of \((\Theta_{t})\) is stable over time compared to the other scenarios. Lastly, it is shown that the fluctuation level of \((\Theta_{t})\) in the converging variance case is between the fluctuation levels in the constant and decreasing cases. Figure 1: Four independent trajectories of \((\Theta_{t})\) under each of the four specifications. Northwest panel: the model with increasing variance. Northeast panel: the model with decreasing variance. Southwest panel: the model with converging variance. Southeast panel: the model with constant variance. ### Variance of \((\Theta_{t})\) For each of the above four models, we plot the empirical densities of \(\Theta_{t}\) at different times \(t=1,5,20,50\), where each time series \((\Theta_{t})\) is simulated 5,000 times. From Figure 2 we observe the following: * In the HF model with increasing variance, the distribution of \(\Theta_{50}\) has both a thicker right tail and a much higher peak near zero compared to the distributions of \(\Theta_{1}\) and \(\Theta_{5}\). This reflects the increasing variance of \((\Theta_{t})\) over time under a constant mean.
* On the other hand, in the model with decreasing variance, the distribution of \(\Theta_{50}\) is much more concentrated around the mean value 1 compared to those of \(\Theta_{1}\) and \(\Theta_{5}\), as the variance of \((\Theta_{t})\) decreases over time. * In the model with converging variance, we observe that the distribution of \(\Theta_{t}\) becomes more concentrated as \(t\) increases. Moreover, the distribution of \(\Theta_{50}\) is significantly different from, say, \(\Theta_{20}\). * In the model with constant variance, the distributions of \(\Theta_{1}\), \(\Theta_{5}\), \(\Theta_{20}\), and \(\Theta_{50}\) are quite close, which reflects the fact that they have the same mean and variance. ## 5 Real data analysis We use the LGPIF (Local Government Property Insurance Fund) data from the state of Wisconsin. Although the dataset encompasses claims information across multiple types of coverages, in our analysis, we focus only on inland marine (IM) claims. The dataset consists of 6,775 observations from 1,234 policyholders, longitudinally observed over the period 2006-2011. We use the observations between 2006 and 2010 for model estimation, while the observations from year 2011 are set aside for out-of-sample validation. We refer the reader to Frees et al. (2016) for a detailed explanation of the data. Table 2 provides brief summary statistics of the observed policy characteristics. We have one categorical covariate (entity location) available in the dataset with the following values: "City", "County", "Miscellaneous", "School", "Town", and "Village". We code this covariate as 5 binary variables (dummy coding), corresponding to the indicators of "City", "County", "School", "Town", and "Village", with "Miscellaneous" as the reference group.
Figure 2: Empirical density of \(\Theta_{t}\) under each of the four scenarios at times \(t=1,5,20,50\). We also have two continuous covariates related to the coverage amount (i.e., the maximal amount covered per claim) and the deductible amount (i.e., the minimal damage to trigger a claim payment). These two covariates may vary in time for a given policyholder. Thus, we cannot fit the model with converging variance, since the latter requires time-invariant covariates (and expected frequencies \(\lambda_{t}\), respectively). By letting \(i\) be the index of the policyholders, \(i=1,\ldots,N=1,234\), and letting \(T_{i}\) be the maximal number of observations for the \(i^{th}\) policyholder, one can write the full log-likelihood as, see (9), \[\ell=\sum_{i=1}^{N}\sum_{t=1}^{T_{i}}\log p(Y_{i,t}|Y_{i,1:(t-1)}),\qquad Y_{i,t}|Y_{i,1:(t-1)}\sim\text{NB}\left(\lambda_{i,t}\,\frac{\alpha_{i,t|t-1}}{ \beta_{i,t|t-1}},\,\alpha_{i,t|t-1}\right), \tag{30}\] with expected frequency \(\lambda_{i,t}=\exp(\mathbf{x}_{i,t}\eta)\), regression parameter \(\eta\in\mathbb{R}^{d}\), and \(\mathbf{x}_{i,t}\in\mathbb{R}^{d}\) are the observable policy characteristics of policyholder \(i\) at time \(t\) of dimension \(d=8\); note that we add lower indices \(i\) to all parameters, as these can now be policyholder dependent. We consider special cases of Model 1, namely, we assume \(q_{t}^{*}/q_{t}^{**}=p\in[0,1]\) for all \(t\geq 1\), and, moreover, \(q_{t}^{**}\in(0,1]\) should not depend on \(i\). This gives us recursive formulas for the shape and rate parameters \[\alpha_{i,t+1}=pq_{t}^{**}\alpha_{i,t}+q_{t}^{**}\left(1-p\right)\beta_{i,t}+Y _{i,t+1}\quad\text{and}\quad\beta_{i,t+1}=q_{t}^{**}\beta_{i,t}+\lambda_{i,t+1},\] for \(t\geq 1\), and with initial values \(\alpha_{i,1}=\alpha_{1|0}+Y_{i,1}\), \(\beta_{i,1}=\beta_{1|0}+\lambda_{i,1}\), and \(\beta_{1|0}=\alpha_{1|0}\).
\begin{table} \begin{tabular}{l l r r r} \hline \hline Categorical levels & Description & & & Proportions \\ \hline TypeCity & Indicator for city entity & & & 14.00 \% \\ TypeCounty & Indicator for county entity & & & 5.78 \% \\ TypeMisc & Indicator for miscellaneous entity & & & 11.04 \% \\ TypeSchool & Indicator for school entity & & & 28.17 \% \\ TypeTown & Indicator for town entity & & & 17.28 \% \\ TypeVillage & Indicator for village entity & & & 23.73 \% \\ \hline Continuous variables & Description & Minimum & Mean & Maximum \\ \hline CoverageIM & Logged coverage amount of IM claim & 0 & 0.85 & 46.75 \\ lnDeductIM & Logged deductible amount for IM claim & 0 & 5.34 & 9.21 \\ \hline \hline \end{tabular} \end{table} Table 2: Policy characteristics used as covariates. Moreover, we have for \(t\geq 1\) \[\alpha_{i,t+1|t}=\alpha_{i,t+1}-Y_{i,t+1}\quad\text{ and }\quad\beta_{i,t+1|t}= \beta_{i,t+1}-\lambda_{i,t+1},\] and we initialize all policyholders \(i\) as follows: \(\alpha_{i,1|0}=\alpha_{1|0}\) and \(\beta_{i,1|0}=\beta_{1|0}\). This allows us to implement the log-likelihood function (30) for given observations \(\mathbf{Y}=(Y_{1,1:T_{1}},\ldots,Y_{N,1:T_{N}})\). Set the maximal observation period \(T=\max_{1\leq i\leq N}T_{i}\). Then, the log-likelihood \(\ell=\ell_{\mathbf{Y}}(\vartheta)\) is a function of the parameters \[\vartheta=(\beta_{1|0},p,q^{**}_{1:(T-1)},\eta)\ \in\ \mathbb{R}_{+}\times[0,1] \times(0,1]^{T-1}\times\mathbb{R}^{d}. \tag{31}\] Let us now consider the following models: * Independent latent factors model: \(\alpha_{i,t}=\alpha_{1|0},\ \beta_{i,t}=\beta_{1|0}\) for all \(t\geq 1\). * Shared random effect model: \(p=1,\ q^{**}_{t}=q=1\). * Increasing variance of \((\Theta_{t})\): \(p=1\), \(q^{**}_{t}=q\in(0,1)\). * Decreasing variance of \((\Theta_{t})\): \(p\in(0,1),\ q^{**}_{t}=q=1\). * Constant variance of \((\Theta_{t})\): \(p\in(0,1),\ q^{**}_{t}=\frac{\beta_{1|0}}{p^{2}\beta_{1|0}+(1-p^{2})\beta_{t}}\).
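These special cases differ only through \((p,q_{t}^{**})\), so the log-likelihood (30) can be evaluated with one generic filtering pass. Below is a minimal sketch of our own for a single policyholder, assuming a constant \(q^{**}\) for simplicity (the function and variable names are ours):

```python
import math

def nb_logpmf(y, mu, r):
    """Log pmf of a negative binomial with mean mu and shape r."""
    return (math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
            + r * math.log(r / (r + mu)) + y * math.log(mu / (r + mu)))

def loglik(y, lam, a0, b0, p, q2):
    """Log-likelihood (30) for one policyholder.

    y, lam : observed counts and expected frequencies lambda_t;
    a0 = alpha_{1|0} (= beta_{1|0} = b0), p = q_t*/q_t**, q2 = q** (constant).
    """
    a_pred, b_pred = a0, b0                 # alpha_{t|t-1}, beta_{t|t-1}
    ll = 0.0
    for yt, lt in zip(y, lam):
        # NB predictive of Eq. (30): mean lam * alpha/beta, shape alpha
        ll += nb_logpmf(yt, lt * a_pred / b_pred, a_pred)
        a_post, b_post = a_pred + yt, b_pred + lt          # alpha_t, beta_t
        a_pred = p * q2 * a_post + q2 * (1 - p) * b_post   # alpha_{t+1|t}
        b_pred = q2 * b_post                               # beta_{t+1|t}
    return ll
```

A convenient unit test: in the shared random effect case (\(p=q^{**}=1\)), the product of the NB predictives telescopes to the closed-form Gamma-Poisson marginal likelihood of a static random effect, which the recursion reproduces exactly.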
We do not report the bounded variance case, since its estimate lies on the boundary with the increasing variance case. All of these models satisfy the generalized linear model (GLM) assumption \(\mathbb{E}\left[Y_{i,t}\right]=\lambda_{i,t}=\exp(\mathbf{x}_{i,t}\eta)\), and we use the following two-step approach to estimate \(\vartheta\) given in (31): 1. Estimate the regression parameter \(\eta\in\mathbb{R}^{d}\) from the standard NB GLM, which means that we do not consider the serial correlation among \(Y_{i,1:T_{i}}\) at this stage. Note that this approach still yields a consistent estimate of \(\eta\) as long as the mean model is correctly specified (though it is less efficient, as the variance structure may be misspecified). 2. After \(\eta\) has been estimated in Step 1, estimate the parameters of the random effects dynamics, such as \(\beta_{1|0}\), \(p\) and \(q\) (if present). This two-step approach is consistent by the usual arguments on pseudo-likelihood estimation, see Gourieroux et al. (1984), and it has two advantages. First, the second numerical optimization step is simple, since it involves only a small number of parameters, namely \(\beta_{1|0}\), \(p\), and \(q\). Second, using this approach, we get the same estimator of the regression coefficients \(\eta\) in front of the covariates for all the models considered, making them easier to compare. It would also have been possible to estimate all the parameters jointly by maximum likelihood, but the implementation is more cumbersome and convergence may be an issue. As shown in Table 3, the constant variance model is the best in terms of AIC and the shared random effect model is the best in terms of BIC, although the differences are small. Note that the parameter estimation for the decreasing variance model was unable to find a set of parameters sufficiently different from the shared random effect model.
In this regard, one can conclude that the decreasing variance model is not suitable for this database. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & Independent & Shared & Increasing & Decreasing & Constant \\ \hline \(\beta_{1|0}\) & 0.488 & 0.651 & 0.786 & 0.651 & 0.603 \\ \(p\) & 0 & 1 & 1 & 1.000 & 0.937 \\ \(q\) & 1 & 1 & 0.830 & 1 & - \\ \hline Loglik & -934.135 & -905.357 & -904.317 & -905.357 & -902.019 \\ AIC & 1886.271 & 1828.713 & 1828.633 & 1830.713 & 1824.039 \\ BIC & 1946.068 & 1888.511 & 1895.075 & 1897.155 & 1890.481 \\ \hline \hline \end{tabular} \end{table} Table 3: Estimated model parameters and goodness-of-fit for the considered models. Using the observations from year 2011 as the out-of-sample validation set, we assess the predictive performance of the aforementioned models. We use the RMSE (root mean-squared error), the MAE (mean-absolute error), and the PDL (Poisson deviance loss) defined as follows \[\text{RMSE} =\sqrt{\frac{1}{|\mathcal{T}|}\sum_{i\in\mathcal{T}}\left(Y_{i,T_{ i}+1}-\widehat{Y}_{i,T_{i}+1}\right)^{2}},\] \[\text{MAE} =\frac{1}{|\mathcal{T}|}\sum_{i\in\mathcal{T}}\left|Y_{i,T_{i}+1}- \widehat{Y}_{i,T_{i}+1}\right|,\] \[\text{PDL} =\frac{1}{|\mathcal{T}|}\sum_{i\in\mathcal{T}}2\left(\widehat{Y}_ {i,T_{i}+1}-Y_{i,T_{i}+1}-Y_{i,T_{i}+1}\log\left(\frac{\widehat{Y}_{i,T_{i}+1 }}{Y_{i,T_{i}+1}}\right)\right),\] where \(|\mathcal{T}|\) is the number of observations in the validation set \(\mathcal{T}\), and \(\widehat{Y}_{i,T_{i}+1}\) are the forecasts obtained from the fitted models. We prefer a model with lower values of RMSE, MAE, and/or PDL, and it turns out that the model assuming increasing variance shows the best predictive performance in our example, as shown in Table 4. This change of ranking with respect to Table 3 may have many reasons, e.g., non-stationarity of the data, which likely inflates the variance of the state process if not properly modeled. This closes our example.
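For concreteness, the three validation metrics can be implemented in a few lines (our own sketch; the usual convention \(0\cdot\log 0=0\) handles zero counts in the PDL):

```python
import math

def rmse(y, yhat):
    """Root mean-squared error over a validation set."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def mae(y, yhat):
    """Mean absolute error over a validation set."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def pdl(y, yhat):
    """Poisson deviance loss, with 0 * log(0 / mu) taken as 0."""
    total = 0.0
    for a, b in zip(y, yhat):
        term = b - a                      # yhat - y
        if a > 0:
            term += a * math.log(a / b)   # equals -y * log(yhat / y)
        total += 2 * term
    return total / len(y)
```

For instance, with observed counts `[0, 2]` and forecasts `[1, 1]`, both the RMSE and the MAE equal 1, and the PDL equals \(2\log 2\).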
## 6 Conclusion In this paper, we expanded the observation-driven state-space model of Harvey and Fernandes (1989) to a broader spectrum of specifications characterized by various variance process behaviors. They are suitable for count processes with a constant mean, but with increasing, decreasing, constant, converging, or bounded variance process. These models inherit most of the major advantages of state-space models, but are more tractable for regression modeling than their parameter-driven counterparts. Additionally, we elucidated the relationship of this model class with the INGARCH literature, see Goncalves et al. (2015), and also drew connections to the forecasting literature that focuses on exponential smoothing, see Hyndman et al. (2008). ## Acknowledgments Jae Youn Ahn is partly supported by a National Research Foundation of Korea (NRF) grant funded by the Korean Government and Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT). Yang Lu thanks NSERC through a discovery grant [RGPIN-2021-04144, DGECR-2021-00330]. Himchan Jeong is supported by the Simon Fraser University New Faculty Start-up Grant (NFSG). \begin{table} \begin{tabular}{l r r r r r} \hline \hline & Independent & Shared & Increasing & Decreasing & Constant \\ \hline RMSE & 9.0586 & 0.7091 & 0.5896 & 0.7091 & 0.8240 \\ MAE & 0.3821 & 0.1107 & 0.1048 & 0.1107 & 0.1143 \\ PDL & 0.8407 & 0.2523 & 0.2425 & 0.2523 & 0.2572 \\ \hline \hline \end{tabular} \end{table} Table 4: Out-of-sample validation performance
2303.03041
Automatic detection of aerial survey ground control points based on Yolov5-OBB
The use of ground control points (GCPs) for georeferencing is the most common strategy in unmanned aerial vehicle (UAV) photogrammetry, but at the same time their collection represents the most time-consuming and expensive part of UAV campaigns. Recently, deep learning has been rapidly developed in the field of small object detection. In this letter, to automatically extract coordinates information of ground control points (GCPs) by detecting GCP-markers in UAV images, we propose a solution that uses a deep learning-based architecture, YOLOv5-OBB, combined with a confidence threshold filtering algorithm and an optimal ranking algorithm. We applied our proposed method to a dataset collected by DJI Phantom 4 Pro drone and obtained good detection performance with the mean Average Precision (AP) of 0.832 and the highest AP of 0.982 for the cross-type GCP-markers. The proposed method can be a promising tool for future implementation of the end-to-end aerial triangulation process.
Cheng Chuanxiang, Yang Jia, Wang Chao, Zheng Zhi, Li Xiaopeng, Dong Di, Chang Mengxia, Zhuang Zhiheng
2023-03-06T11:13:23Z
http://arxiv.org/abs/2303.03041v1
# Automatic detection of aerial survey ground control points based on Yolov5-OBB ###### Abstract The use of ground control points (GCPs) for georeferencing is the most common strategy in unmanned aerial vehicle (UAV) photogrammetry, but at the same time their collection represents the most time-consuming and expensive part of UAV campaigns. Recently, deep learning has been rapidly developed in the field of small object detection. In this letter, to automatically extract coordinates information of ground control points (GCPs) by detecting GCP-markers in UAV images, we propose a solution that uses a deep learning-based architecture, YOLOv5-OBB, combined with a confidence threshold filtering algorithm and an optimal ranking algorithm. We applied our proposed method to a dataset collected by DJI Phantom 4 Pro drone and obtained good detection performance with the mean Average Precision (AP) of 0.832 and the highest AP of 0.982 for the cross-type GCP-markers. The proposed method can be a promising tool for future implementation of the end-to-end aerial triangulation process. UAV tilt photogrammetry, automatic detection of GCPs, deep learning, YOLOv5-OBB, aerial triangulation ## I Introduction Unlike conventional surveys involving huge costs, labor, and time, unmanned aerial vehicles (UAVs) photogrammetry is a cost-effective way to conduct aerial surveys at ultra-high spatial resolutions (1 cm to 1 m) for numerous applications in the close-range domain [1, 2]. To ensure the geometric accuracy of derived maps and other data products in a map coordinate system, it is essential to use accurate location data to align or georeference the captured UAV imagery. In general, market-available civil UAVs carry a low-cost GNSS/IMU (Global Navigation Satellite System/Inertial Measurement Unit) module and collect the position and orientation information [3]. 
Although it is convenient to use direct orientation for georeferencing, this method does not meet the requirements for high-precision (e.g., centimeter-level) mapping in some applications [4, 5, 6]. An alternative practical way (also called indirect orientation) is to use Ground Control Points (GCPs, with known, highly precise and accurate coordinates and elevation) as a "hook" to tie the captured images down to the earth's surface [7]. For instance, C. Hugenholtz et al. [5] showed that georeferencing with and without GCPs yields similar accuracy in the horizontal direction, but the errors of the two approaches in the vertical direction differ by a factor of 2-3. For the indirect orientation approach, widely used GCP-markers that look distinctly different from the background are manually placed around the area of interest prior to conducting surveys [3]. The high-accuracy locations (i.e., GCP information) of the GCP-markers are manually measured using a GNSS device. While obtained in exactly the same way as GCPs, check-points serve to verify the accuracy of the georeferenced map by comparing the GNSS-measured locations to the coordinates of the check-points shown on the map. Gianfranco F. et al. [6] found that adding only one GCP to a GCP-free UAV bundle adjustment resulted in a georeferenced image as good as the image georeferenced with only GCPs. Yang [8] showed that the loss of GCPs during the aerial survey significantly impacted the overall aerial survey results. With the development of direct orientation techniques, we may further reduce the number of GCPs, but the number of check-points cannot be reduced when evaluating the geometric accuracy of the acquired image products [9], [10]. In a word, acquiring GCPs remains part of inexpensive civil UAV mapping, because GCPs can significantly improve the accuracy of aerial triangulation results and ensure the quality of the generated maps and other products.
The general process of acquiring GCPs includes deploying GCP-markers, measuring their ground positioning information, and identifying their corresponding positions in the UAV imagery. However, GCP-markers are tiny targets in the UAV imagery, so finding GCP-markers and manually acquiring the GCP information is a highly tedious and time-consuming task. The whole process of UAV aerial triangulation is nowadays well addressed by open-source algorithms, except for automatically adding GCPs to the bundle adjustment [2]. Jain et al. [11] proposed a pipeline to automatically segment white L-shape GCP-markers from the image by integrating three components: an edge-oriented histogram, Canny edge detection, and Convolutional Neural Network (CNN) classification. However, the sophisticated processing pipeline they proposed is difficult to replicate. In addition, the edge-oriented histogram and edge-detection algorithms can easily fail when there are other objects of similar color and shape on the ground. Ren [12] used the covariance equation to locate the position of GCPs. However, only approximate locations of GCPs can be estimated in this way, because systematic errors are difficult to eliminate; some manual corrections may be required to locate each GCP accurately. In recent years, deep learning techniques have been widely used and have achieved state-of-the-art performance in many fields [13]. In this study, we propose an easy-to-implement workflow that integrates YOLOv5-OBB, a compound-scaled arbitrary-orientation object detection deep learning architecture, with confidence threshold filtering and optimal ranking to automatically detect GCP-markers and locate the GCP positions. ## II Method ### _GCPs dataset and data preprocessing_ To explore the influence of the shape and pointing of the GCP-markers on model training, we conducted experiments with L-shape and cross-shape GCP-markers (Fig. 1).
According to the different locations of the GCP on the GCP-marker, we considered four kinds of L-shape GCP-markers: top-left, bottom-left, top-right, and bottom-right. The cross-shape GCP-marker has a relatively simple GCP location (i.e., at its center) and is therefore considered as a single category. In this study, we collected about 5,000 UAV images using the DJI Phantom 4 Pro drone. In theory, a larger input image should provide more information and yield more accurate results. However, the size of the acquired UAV images is 5,472 \(\times\) 3,648 pixels, which is too large to fit directly into GPU memory. It also requires a longer training time, and the added information may be redundant, which then affects the model training process as well as the subsequent detection performance. Therefore, we followed the method of YOLT [14] and cropped the original large-size images to a suitable size. To do so, we first padded the original image along both axes and then cropped it uniformly to prevent distortion. We finally acquired 2,358 cropped images with GCP-markers as the training dataset. ### _Detect GCP-markers and locate the position of the GCP_ **1) Vertex positions of the arbitrary-orientation bounding box instead of the GCP's position** In general, when using object detection algorithms to detect ground targets, we can use the four vertex coordinates of the horizontal bounding box as the GCP positions if the GCP-markers can be considered axis-aligned objects. However, in many cases, objects in the UAV image are not exactly aligned with the image axes, so the resulting four vertex coordinates cannot be used as high-precision GCP positions. For example, **Fig. 2a** shows a case where the axis-aligned horizontal bounding box of a GCP-marker is not aligned with its edges, so the vertex coordinates cannot be used as GCPs. To address this issue, we adopt an arbitrary-orientation object detection algorithm to generate an oriented bounding box (e.g.
Fig. 2b) that better matches the outline of the GCP-marker. Thus, the vertex position of the detected oriented bounding box in the image coordinates \((x,y)\) matches the position of the GCP and can therefore be used as a proxy for the position of the GCP. YOLOv5 (the fifth version of You Only Look Once) [15] has achieved state-of-the-art performance in small target detection. In this study, the YOLOv5-OBB [16] algorithm, an improvement of YOLOv5 based on the circular smooth label structure [17], is used to detect targets with arbitrary orientations in remote sensing images. We use four vertex coordinates to represent the oriented bounding box, where \((x_{1},y_{1})\) is the starting point and the remaining points are labeled \((x_{2},y_{2})\), \((x_{3},y_{3})\), and \((x_{4},y_{4})\) in clockwise order. We find that the oriented bounding box of a detected GCP-marker starts mainly at the bottom-right vertex, so we can use Equation (1) to calculate the position of the GCP: \[\begin{cases}[x,y]^{T}=[(j-1)w,(i-1)h]^{T}+[x_{1},y_{1}]^{T},&\text{BR}\\ [x,y]^{T}=[(j-1)w,(i-1)h]^{T}+[x_{2},y_{2}]^{T},&\text{BL}\\ [x,y]^{T}=[(j-1)w,(i-1)h]^{T}+[x_{3},y_{3}]^{T},&\text{TL}\\ [x,y]^{T}=[(j-1)w,(i-1)h]^{T}+[x_{4},y_{4}]^{T},&\text{TR}\\ [x,y]^{T}=[(j-1)w,(i-1)h]^{T}+[\textstyle\sum x_{i},\sum y_{i}]^{T}/4,&\text{CR}\end{cases} \tag{1}\] where \(w\) and \(h\) are the width and height of the detected image tile; \(i\) and \(j\) are the row and column indices of the cropped tile within the original image; and BR, BL, TL, TR, and CR refer to the bottom-right, bottom-left, top-left, top-right, and cross-shaped GCP-marker types, respectively. **2) Detection performance for different-scale GCP-markers** Fig. 1: Types of GCP-markers. The yellow circle marker is the location of the GCP. Fig. 2: (a) Horizontal bounding box. (b) Oriented bounding box. The yellow dot is the position of the stabbing point, and the green dot is the position of the vertex of the detection box nearest to the stabbing point.
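The tiling step described above and the coordinate mapping of Equation (1) can be sketched as follows (our own illustration; the helper names and the marker-type lookup are assumptions, and the clockwise vertex order starting at the bottom-right follows the convention above):

```python
import numpy as np

TILE = 608  # crop size used in this study

def crop_tiles(img, tile=TILE):
    """Pad the image so both sides are multiples of `tile`, then cut a
    uniform grid of tiles; returns (row i, col j, tile) with 1-based i, j."""
    H, W = img.shape[:2]
    ph, pw = (-H) % tile, (-W) % tile
    img = np.pad(img, ((0, ph), (0, pw)) + ((0, 0),) * (img.ndim - 2))
    tiles = []
    for i in range(img.shape[0] // tile):
        for j in range(img.shape[1] // tile):
            tiles.append((i + 1, j + 1,
                          img[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile]))
    return tiles

def gcp_position(i, j, vertices, marker_type, w=TILE, h=TILE):
    """Eq. (1): map OBB vertices detected in tile (i, j) back to the original
    image. `vertices` = [(x1, y1), ..., (x4, y4)], clockwise from bottom-right."""
    ox, oy = (j - 1) * w, (i - 1) * h          # tile offset in the full image
    if marker_type == "CR":                    # cross marker: centre of the box
        x = sum(v[0] for v in vertices) / 4
        y = sum(v[1] for v in vertices) / 4
    else:                                      # L-shape marker: one vertex
        k = {"BR": 0, "BL": 1, "TL": 2, "TR": 3}[marker_type]
        x, y = vertices[k]
    return ox + x, oy + y
```

For the 5,472 x 3,648 pixel images of this study, both dimensions are exact multiples of 608, so no padding is needed and each image yields a 6 x 9 grid of 54 tiles.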
During the acquisition of UAV images, the scale of the GCP-markers in the images changes with the surface topography. To cope with this issue, the constructed YOLOv5-OBB model should be able to accurately identify GCP-markers regardless of their scale variation. To this end, we acquired UAV images of GCP-markers of the same size, type, and color deployed on different surfaces from relative altitudes of 60-220 meters. In this way, we can identify GCP-markers in the acquired images and evaluate the effectiveness of the established model. **3) Confidence threshold filtering algorithm** The complex background information in the UAV image scene is the main factor affecting the model detection performance. Artificial features, such as domestic waste, are very similar to GCP-markers, which directly affects the accuracy of model detection. Although such false-positive features are often detected incorrectly by the trained model, their confidence values are usually small. Setting a confidence threshold can effectively filter out most of these false-positive detections, but will inevitably cause some images with GCP-markers to be lost. Fortunately, UAVs generally acquire images using a redundant overlap strategy to ensure the quality of subsequent image stitching, so the same GCP-marker may be captured in multiple photos. To estimate the best confidence threshold, we selected four different sites and used GCP-markers of the same size, material, and shape with a DJI Phantom 4 Pro UAV. The Site 4 dataset consists of images collected at different flight altitudes to determine the confidence threshold statistics. This experimental study uses a dataset of UAV images collected from four sites (Table I), comprising a total of 1,745 images with a pixel size of 5,472 \(\times\) 3,648. During training, YOLO models generally use input images whose pixel size is a multiple of 32.
In this experiment, we cropped each image into 608 \(\times\) 608 pixel tiles to allow an easy trade-off between speed and accuracy, resulting in 86,574 images for model training and validation. During model validation, we found that the cropped image dataset contains a large number of images without GCP-markers, and that false-positive GCPs tend to have a low confidence level. For this reason, we can use the confidence threshold filtering method to exclude possible false-positive GCPs while ensuring that images with GCP-markers are not lost. This involves the determination of the optimal confidence threshold. For this purpose, we derive Equations (2) and (3). Equation (2) measures the percentage of all detections in the dataset above the confidence threshold that contain GCP-markers. Equation (3) calculates the percentage of GCPs lost, among all GCPs, due to setting the confidence threshold. True positive (TP) is the number of positive samples that are correctly predicted, and false positive (FP) is the number of negative samples that are incorrectly predicted as positive: \[Precision=\frac{TP}{TP+FP} \tag{2}\] \[Loss\;Ratio=\frac{\text{number of GCPs lost by filtering}}{\text{total number of GCPs}} \tag{3}\] **4) Optimal ranking algorithm** Due to their limited payload, UAVs mostly carry non-metric consumer-grade cameras, which introduce a large amount of image distortion. UAV images are mainly affected by radial distortion, which arises because light rays far from the center of the lens are bent more strongly than those near the center. The closer a GCP-marker is to the edge of the image, the lower the detection accuracy due to image distortion. \[d=\sqrt{(x_{i}-w/2)^{2}+(y_{i}-h/2)^{2}} \tag{4}\] \[d_{max}=\sqrt{\left(\frac{w}{2}\right)^{2}+\left(\frac{h}{2}\right)^{2}} \tag{5}\] \[score=\vartheta\cdot\left(1-\frac{d}{d_{max}}\right)+confidence \tag{6}\] \[PONA=\frac{\#\{\text{GCPs with }1-d/d_{max}<0.5\}}{\#\{\text{all GCPs}\}} \tag{7}\] To acquire accurately identified GCP-markers and ensure that they are located in regions with less image distortion, we use Equations (4)-(6) to accomplish this goal.
Specifically, Equation (4) is used to calculate the distance from the GCP to the center point of each image. We also calculate the farthest possible distance from a GCP to the center of the image using Equation (5). We adopt Equation (6) to consider the effects of both image distortion and object detection confidence in a weighted manner. The term \(1-d/d_{max}\) in Equation (6) reflects the degree of image distortion at the location of each GCP-marker: the more severe the image distortion, the smaller the value. The second component of Equation (6) is the confidence of the object identified as a GCP-marker; the higher the detection confidence, the lower the probability of a false-positive detection and the higher the probability that the detected object is a GCP-marker. The \(\vartheta\) in Equation (6) is an adjustment weight used to balance the two aforementioned terms. In a word, the higher the weighted score value, the lower the distortion of this GCP in the image and the higher the probability of a correct detection by our proposed method. It is worth noting that multiple images may contain the same GCP-marker due to the redundancy of the UAV image acquisition strategy. To acquire the highest quality for each GCP, we compute the score for each image containing the same GCP-marker, rank the scores from largest to smallest, and select the top few as the true values of the GCP positions to be added to the subsequent bundle adjustment. In addition, Equation (7) is used to verify the performance of the optimal ranking algorithm: if \(1-d/d_{max}\) is smaller than 0.5, the GCP is considered to be located in a region of severe image distortion. The PONA in Equation (7) refers to the percentage of GCPs located in the image distortion area. ### _Workflow in this study_ **Fig. 3** shows the workflow of this study, which can be roughly divided into two parts.
The first part focuses on preparing the GCP-markers dataset and training the YOLOv5-OBB model for detecting GCP-markers. The second part is the core of this study, which includes the evaluation of the detection performance of the model, the filtering of the detection results, and the effect of the optimal ranking. 1. The accuracy of the GCP positions detected by YOLOv5-OBB directly affects the accuracy of the subsequent aerial triangulation solution. For this reason, Step 1 compares the differences between the GCP positions identified by YOLOv5-OBB and the real GCP locations. 2. The scale of the GCP-markers in the images varies with the flight altitude of the UAV. Step 2 explores the effectiveness of the YOLOv5-OBB model in detecting GCP-markers at different scales and determines the smallest GCP-markers that can be detected. 3. The complex backgrounds in the aerial survey scene seriously affect the identification accuracy of the deep learning model. False-positive detection results can be effectively filtered by exploring the optimal confidence threshold (i.e., Step 3). 4. Since the GCP-markers laid on the ground are generally captured by multiple UAV images, they appear at different locations in those images. When a GCP-marker is close to the edge of an image, adding the detected GCP to the bundle adjustment introduces errors due to image distortion. To filter out GCP-markers located in image distortion regions, we propose the optimal ranking algorithm (i.e., Step 4) and explore its application performance. ## III Results and Analysis ### _Model performance_ Considering the trade-off in model weight parameters, we chose the YOLOv5-OBB model with medium-sized rather than small- or large-sized initial parameters to train on the GCP-markers dataset. We used the pre-trained weights provided in [18] as initial values and trained for a total of 300 epochs to converge the model to the desired level. **Fig.
4.** shows the results of our tests on the trained model: the mAP (mean Average Precision) is 0.832; the highest AP (Average Precision) is 0.982, for the cross-type GCPs; and the lowest AP is 0.676, for the bottom-right category. ### _The difference between the predicted GCP position and the real position_ To quantify the accuracy of the predicted GCP locations, we plotted and analyzed the error scatter (Fig. 4(b)) from a random sample of 60% of the test dataset. Specifically, we evaluated the difference between the real GCP position and the GCP position detected by YOLOv5-OBB. Our analysis found that the maximum error of the predicted GCP positions was no more than 4 pixels relative to the real positions, with 80% of errors within 2 pixels and 98% within 3 pixels; only a few detections had relatively large errors. Our proposed method thus achieves a state-of-the-art accuracy level, comparable to the 1-to-3-pixel error of manually extracted GCP positions. In addition, we found that the larger errors come from the L-type GCP-markers, which span four categories and are easily confused by the YOLOv5-OBB model. We therefore recommend using the cross-type GCP-marker in aerial surveys, since it has only one category. ### _Testing of different scale GCP-markers in model_ To exclude confounding factors, we split the dataset by relative flight altitude and analyzed the model's performance on each subset separately. TABLE II shows the detection performance at different relative altitudes, in eight 20-m intervals between 60 and 220 m. The overall results show that the YOLOv5-OBB model detects GCP-markers reliably at different scales, with most intervals having a precision greater than 97.4% and a recall greater than 91%. 
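The pixel-error evaluation behind Fig. 4(b) amounts to comparing predicted and surveyed marker centers; a sketch (the coordinates in the example are made up) is:

```python
import math

def pixel_error_stats(pred_centers, true_centers, thresholds=(2, 3)):
    """Euclidean pixel distance between YOLOv5-OBB-predicted and real
    GCP centers, plus the fraction of detections within each threshold
    (a sketch of the evaluation, not the paper's code)."""
    errors = [math.dist(p, t) for p, t in zip(pred_centers, true_centers)]
    within = {th: sum(e <= th for e in errors) / len(errors) for th in thresholds}
    return max(errors), within

# Four hypothetical detections with sub-pixel to 3-pixel errors:
worst, frac = pixel_error_stats(
    [(100.4, 50.2), (201.0, 80.5), (33.0, 34.0), (10.0, 12.5)],
    [(100.0, 50.0), (200.0, 80.0), (33.0, 31.0), (10.0, 12.0)])
```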
An exception is the low recall (72%) in the relative-altitude range of 140-160 m, corresponding to 26 images in which GCP-markers are not detected. This is because our training dataset is mostly collected from relatively flat landscapes (i.e., roads and settlements), while most of the missed GCP-markers in this altitude range are collected from sloping landscapes (i.e., hillsides). It should be noted that a training dataset including images of different landscapes would improve the robustness of the model. In addition, we found that a GCP-marker must be larger than \(12\times 12\) pixels on the image to be detected by the YOLOv5-OBB model at 60-220 m relative altitude. Fig. 3: Workflow in this study. Fig. 4: **(a)** mAP of the model; **(b)** the difference distribution in horizontal and vertical directions. ### _Performance of confidence threshold filtering algorithm_ The YOLOv5-OBB model outputs a confidence value (between 0 and 1) for each detected GCP-marker, and an optimal confidence threshold can be used to filter false-positive detections. To find it, we perform a sensitivity analysis of the trade-off between the Precision and Loss-Ratio values while varying the threshold. Our analysis (**Fig. 5**) shows that as the confidence threshold increases, the Precision (solid lines) increases at all four sites, but the Loss Ratio (dashed lines) also increases. When the confidence threshold is set to 0.7, the average Precision is relatively high while the Loss Ratio is not too high: only about 26% of the discarded data are tagged with GCP-markers. Statistics on the discarded data with GCP-markers reveal that 60% come from images with severe distortion. Our analysis also shows that the number of distinct GCP-markers detected on the ground does not decrease after filtering, implying that the discarded data were themselves redundant. 
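The threshold sensitivity analysis of Fig. 5 can be sketched as a sweep over candidate thresholds, where `detections` pairs each detection's confidence with a ground-truth flag; the metric definitions below are our assumptions, not the paper's code:

```python
def sweep_confidence(detections, thresholds):
    """Precision vs. loss-ratio trade-off used to pick a confidence
    threshold (as in Fig. 5).  `detections` is a list of
    (confidence, is_gcp) pairs, where is_gcp is 1 for a real marker.
    Assumed definitions: precision = true markers kept / all kept;
    loss ratio = true markers discarded / all true markers."""
    total_gcp = sum(is_gcp for _, is_gcp in detections)
    out = {}
    for th in thresholds:
        kept = [(c, g) for c, g in detections if c >= th]
        tp = sum(g for _, g in kept)
        precision = tp / len(kept) if kept else 1.0
        loss_ratio = (total_gcp - tp) / total_gcp
        out[th] = (precision, loss_ratio)
    return out
```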
Table III shows the Loss Ratio values, the Precision values, and the number of images with GCP-markers for the four data sets when the confidence threshold is set to 0.7. Even though 26% of the data with GCP-markers are discarded, the UAV images covering the same GCP-markers still contain a lot of redundancy, so we do not discuss the recall metric of the model here. ### _Performance of the optimal ranking algorithm_ We use the optimal ranking algorithm to filter out redundant data located in image-distortion areas. We randomly selected 11 GCP-markers placed on the ground in Sites 1-4 as testing data to evaluate its performance. To find the GCP observations with the smallest image distortion, we first set the \(\sigma\) in Equation (6) to 2. We then calculate all the score values for the same GCP-marker in the data and rank them from largest to smallest. Within the top three scores (Table IV), the accuracy of the model detection is 100%, and the top five GCPs are all in non-distorted regions of the images. However, within the top 10 scores, some detection errors appear, with some GCPs falling in distorted regions of the images. Therefore, we recommend selecting the top 5 highest-scoring GCPs to be added to the bundle adjustment. The results show that the optimal ranking algorithm can find both correctly detected GCP-markers and GCPs in regions of lighter image distortion. ## IV Conclusion In this letter, we use YOLOv5-OBB combined with a confidence-threshold filtering algorithm and an optimal ranking algorithm to automatically detect GCP-markers and locate GCP positions. The aim is to reduce the manual workload during aerial-triangulation processing and to improve the efficiency of building 3D models.
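The selection step of the optimal ranking algorithm described above — group score values by GCP-marker, rank from largest to smallest, and keep the top five — can be sketched as:

```python
from collections import defaultdict

def select_best_observations(observations, k=5):
    """Sketch of the optimal-ranking selection: `observations` is a
    list of (gcp_id, image_id, score) tuples, one per detection of a
    GCP-marker in some image.  Scores are grouped by GCP id, sorted
    from largest to smallest, and the top-k observations per GCP are
    kept for the bundle adjustment (k=5 is the recommendation above)."""
    by_gcp = defaultdict(list)
    for gcp_id, image_id, score in observations:
        by_gcp[gcp_id].append((score, image_id))
    return {g: sorted(obs, reverse=True)[:k] for g, obs in by_gcp.items()}
```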
2310.03179
Multi-Domain Walking with Reduced-Order Models of Locomotion
Drawing inspiration from human multi-domain walking, this work presents a novel reduced-order model based framework for realizing multi-domain robotic walking. At the core of our approach is the viewpoint that human walking can be represented by a hybrid dynamical system, with continuous phases that are fully-actuated, under-actuated, and over-actuated and discrete changes in actuation type occurring with changes in contact. Leveraging this perspective, we synthesize a multi-domain linear inverted pendulum (MLIP) model of locomotion. Utilizing the step-to-step dynamics of the MLIP model, we successfully demonstrate multi-domain walking behaviors on the bipedal robot Cassie -- a high degree of freedom 3D bipedal robot. Thus, we show the ability to bridge the gap between multi-domain reduced order models and full-order multi-contact locomotion. Additionally, our results showcase the ability of the proposed method to achieve versatile speed-tracking performance and robust push recovery behaviors.
Min Dai, Jaemin Lee, Aaron D. Ames
2023-10-04T21:48:35Z
http://arxiv.org/abs/2310.03179v1
# Multi-Domain Walking with Reduced-Order Models of Locomotion ###### Abstract Drawing inspiration from human multi-domain walking, this work presents a novel reduced-order model based framework for realizing multi-domain robotic walking. At the core of our approach is the viewpoint that human walking can be represented by a hybrid dynamical system, with continuous phases that are fully-actuated, under-actuated, and over-actuated and discrete changes in actuation type occurring with changes in contact. Leveraging this perspective, we synthesize a multi-domain linear inverted pendulum (MLIP) model of locomotion. Utilizing the step-to-step dynamics of the MLIP model, we successfully demonstrate multi-domain walking behaviors on the bipedal robot Cassie--a high degree of freedom 3D bipedal robot. Thus, we show the ability to bridge the gap between multi-domain reduced order models and full-order multi-contact locomotion. Additionally, our results showcase the ability of the proposed method to achieve versatile speed-tracking performance and robust push recovery behaviors. ## I Introduction The agility and versatility displayed in human locomotion have long served as an inspiration for the study of robotic bipedal locomotion. For humans, walking involves a sequence of distinct gait phases as shown in Fig. 1. In the context of forward walking, these phases encompass the swing foot's heel strike, toe strike, the transition of weight from the new stance foot's heel to toe, and the subsequent heel lift and ankle push-off of the stance foot [1]. In contrast, walking robots typically rely on flat-footed gaits. This preference often arises from mathematical convenience: a desire for a feedback-linearizable fully-actuated system [2, 3] or a direct application of point-foot walking methods on robots with conventional feet [4, 5]. 
Nevertheless, multi-domain gait presents compelling biomechanical advantages: the heel strike effectively dampens impact forces and ankle push-off is remarkably energy-efficient. These advantages have been substantiated in robotic applications as in [6, 7]. Furthermore, compared to flat-foot gait, the capacity to raise the heel permits longer strides within the same joint constraints, resulting in faster walking speeds [8]. Researchers have explored methods for realizing multi-domain walking on robotic platforms due to its advantages. Full-model-based methods [9, 10, 11, 7] employ multi-domain trajectory optimization within the Hybrid Zero Dynamics (HZD) framework. They entail solving a challenging nonlinear optimization problem, which can be computationally demanding and prone to convergence issues for obtaining a single periodic orbit. Additionally, these methods necessitate offline trajectory generation for different periodic orbits associated with different speeds and contact sequences. Furthermore, they are sensitive to model discrepancies and require a heuristic foot placement regulator [12] for stabilization--often synthesized from reduced-order models, i.e., the "Raibert controller" [13]. In the realm of prosthetic legs and feet, multi-domain walking is often utilized but with "model-free" controllers. These controllers typically consist of combinations of low-gain impedance control [14, 15] and torque-based control, which is determined either using predefined parameters [16] or through a biomechanically inspired model [17]. This flexible controller design allows for expert tuning to tailor parameters to individual users under different scenarios, but this tuning is time-consuming. In addition, stability is not a primary concern in this domain, as it is assumed that the human user can stabilize themselves through stepping. Some approaches incorporate the human into the HZD gait generation loop [18], but they encounter challenges similar to those previously mentioned. 
In this paper, we present a novel framework that enables multi-domain walking, including heel-to-toe, toe-to-heel, and flat-footed behaviors. We represent these different phases of locomotion through a hybrid dynamical system model. Leveraging this, and as illustrated in Fig. 1, our approach begins with the introduction of a reduced-order model, the _multi-domain linear inverted pendulum (MLIP)_, specifically designed to capture the intricate weight-shifting dynamics inherent in multi-domain walking. Subsequently, we develop a controller that stabilizes the Poincare map associated with the hybrid dynamics of the MLIP, i.e., ensures the stability of the step-to-step dynamics. Fig. 1: A complete gait cycle: (top) Human multi-domain walking. (center) MLIP walking inspired by human walking. (bottom) Cassie's multi-domain walking stabilized through the MLIP model. Via the construction of outputs for the full-order system, we synthesize a feedback controller that realizes multi-domain walking on the bipedal robot Cassie. This results in a remarkably human-like walking gait, adhering to the same gait cycle time distribution observed in human walking. Leveraging the effective ankle push-off during heel-to-toe walking, we attain an impressive walking speed of 2.15 m/s on Cassie, which cannot be realized using a flat-footed gait due to inherent physical joint limits. Notably, our method surpasses existing approaches by offering versatile walking behaviors adaptable to diverse gait parameters and commanded speeds, all while guaranteeing stability and eliminating the need for offline optimization. Furthermore, our framework demonstrates robustness against external disturbances. A collection of the resulting walking behaviors is available in the accompanying video1. Footnote 1: [https://youtu.be/8u5ZiWe_qlw](https://youtu.be/8u5ZiWe_qlw) The rest of the paper is structured as follows: Section II introduces the hybrid control problem of multi-domain walking. 
Sections III and IV detail the proposed MLIP model and its integration into the full robot model, respectively. We then present the results and evaluate the performance under different circumstances in Section V. Finally, the paper concludes with Section VI. ## II Hybrid Dynamics of Bipedal Robots Bipedal walking is represented by a hybrid control system [19, 12] defined as the tuple \(\mathcal{HC}=(\Gamma,\mathcal{D},\mathcal{S},\Delta,\mathcal{FG})\). Each component is explained as follows: * \(\Gamma=(V,E)\) is a directed cycle graph with a set of vertices \(V=\{v_{i}\}_{i\in I}\) and a set of directed edges \(E=\{v_{i}\rightarrow v_{j}\}_{i,j\in I}\). \(I\) is an indexed set of domains that we will illustrate soon. * \(\mathcal{D}=\{\mathcal{D}_{v}\}_{v\in V}\) is a set of domains of admissibility. * \(\mathcal{S}=\{\mathcal{S}_{e}\}_{e\in E}\) is a set of guards. * \(\Delta=\{\Delta_{e}\}_{e\in E}\) is a set of reset maps. * \(\mathcal{FG}=\{f_{v},g_{v}\}_{v\in V}\) is the continuous control system, which is a set of vector fields on the state manifolds. In any domain, the robot's continuous dynamics can be obtained from the Euler-Lagrange equations: \[D(\mathbf{q})\ddot{\mathbf{q}}+H(\mathbf{q},\dot{\mathbf{q}})=B\mathbf{\tau}+J_{\text{i}}(\mathbf{q})^{T}\mathbf{f}_{\text{i}}, \tag{1}\] \[J_{\text{i}}(\mathbf{q})\ddot{\mathbf{q}}+\dot{J_{\text{i}}}(\mathbf{q},\dot{\mathbf{q}})\dot{\mathbf{q}}=0, \tag{2}\] where \(\mathbf{q}\in Q\) is a set of generalized coordinates in the \(n\)-dimensional configuration space \(Q\), \(D(\mathbf{q})\in\mathbb{R}^{n\times n}\), \(H(\mathbf{q},\dot{\mathbf{q}})\in\mathbb{R}^{n}\), \(B\in\mathbb{R}^{n\times m}\) are the inertia matrix, the collection of centrifugal, Coriolis and gravitational forces, and the actuation matrix, respectively. 
Additionally, \(\mathbf{\tau}\in U\subseteq\mathbb{R}^{m}\) stands for the input torque, \(J_{\text{i}}(\mathbf{q})\in\mathbb{R}^{n\times h_{\text{i}}}\) is the domain-specific Jacobian matrix related to contact constraints, and \(\mathbf{f}_{\text{i}}\in\mathbb{R}^{h_{\text{i}}}\) represents the corresponding constraint wrench. Discrete impacts are assumed to be instantaneous and plastic, with solution derivations detailed in [19]. Denoting \(\mathbf{x}=[\mathbf{q}^{T},\dot{\mathbf{q}}^{T}]^{T}\in\mathcal{TQ}\), the equation of motion for the hybrid system is as follows: \[\begin{cases}\dot{\mathbf{x}}&=f_{v}(\mathbf{x})+g_{v}(\mathbf{x})\mathbf{\tau} \quad\mathbf{x}\in\mathcal{D}_{v}\setminus\mathcal{S}_{e}\\ \mathbf{x}^{+}&=\Delta_{e}(\mathbf{x}^{-})\quad\quad\quad\quad\quad\mathbf{x}^{-}\in \mathcal{S}_{e}\end{cases}, \tag{3}\] for all \(v\in V\) and corresponding \(e\in E\). Inspired by human heel-to-toe walking, our hybrid system model incorporates three domains: fully-actuated (FA), under-actuated (UA) and over-actuated (OA). This yields \(V=\{v_{\text{FA}},v_{\text{UA}},v_{\text{OA}}\}\) and \(E=\{v_{\text{FA}}\rightarrow v_{\text{UA}},\;v_{\text{UA}}\rightarrow v_{\text{OA}},\;v_{\text{OA}}\rightarrow v_{\text{FA}}\}\). ## III Step-to-Step Dynamics and Stabilization for Multi-Domain LIP Model In this section, we first propose a reduced-order model, termed the multi-domain linear inverted pendulum (MLIP) model, which is a variant of the canonical linear inverted pendulum (LIP) model [2] that can describe multi-domain walking. This extension involves incorporating the position of the zero-moment point (ZMP) [20] as an additional state variable. After characterizing its step-to-step (S2S) dynamics, we apply a linear controller to stabilize the error dynamics, considering the discrepancy between the actual dynamics of the robot and the reduced-order model. ### _MLIP Model and Step-to-step Dynamics_ As shown in Fig. 3 and 4, the MLIP model includes a point mass and two massless telescopic legs. 
It also has a constant center of mass (CoM) height \(z_{0}\) relative to the stance pivot as in the LIP model. What sets it apart from the conventional LIP model is the inclusion of a pair of feet with a known arc length denoted as \(\rho\), which can be calculated from the foot curvature. In accordance with the bipedal locomotion domains outlined in Sec. II, the MLIP model encompasses the UA, OA, and FA domains. The continuous dynamics of the MLIP model in all domains are governed by the following linear equations: \[\frac{d}{dt}\underbrace{\begin{bmatrix}p\\ L\\ p_{\text{zmp}}\end{bmatrix}}_{\boldsymbol{\xi}}=\underbrace{\begin{bmatrix}0&\frac{1}{z_{0}}&0\\ g&0&-g\\ 0&0&0\end{bmatrix}}_{A_{\text{ct}}}\begin{bmatrix}p\\ L\\ p_{\text{zmp}}\end{bmatrix}+\underbrace{\begin{bmatrix}0\\ 0\\ 1\end{bmatrix}}_{B_{\text{ct}}}\dot{p}_{\text{zmp}}, \tag{4}\] where \(p\), \(L\), and \(p_{\text{zmp}}\) represent the horizontal CoM position, mass-normalized centroidal angular momentum [21], and horizontal ZMP position, all defined relative to the stance pivot. The stance pivot we refer to is the pivot of the UA phase; it therefore depends on the walking mode. When walking in a heel-to-toe manner, the stance pivot is positioned at the toe, while in toe-to-heel walking, it is located at the heel. For flat-footed walking, the stance pivot can be picked anywhere from heel to toe. In this context, we choose the pivot point to be directly under the ankle. 
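Because \(A_{\text{ct}}\) in Eq. (4) is constant, its matrix exponential has a hyperbolic closed form in terms of \(\lambda=\sqrt{g/z_{0}}\); a hand-derived sketch (to be checked against the paper's conventions) is:

```python
import math

G = 9.81  # gravity [m/s^2]

def expA(T, z0):
    """Closed-form e^{A_ct T} for the MLIP continuous dynamics of
    Eq. (4), state ordered as (p, L, p_zmp).  Derived by hand from the
    hyperbolic LIP solution with lam = sqrt(g/z0); a sketch only."""
    lam = math.sqrt(G / z0)
    c, s = math.cosh(lam * T), math.sinh(lam * T)
    return [[c,            s / (z0 * lam),  1.0 - c],
            [z0 * lam * s, c,              -z0 * lam * s],
            [0.0,          0.0,             1.0]]

def matmul(X, Y):
    """Plain 3x3 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]
```

The bottom row `[0, 0, 1]` reflects that \(p_{\text{zmp}}\) is constant under zero input, and the semigroup property \(e^{A(t_1+t_2)}=e^{At_1}e^{At_2}\) gives a quick sanity check.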
Given that the dynamics are linear, the end-of-domain state \(\boldsymbol{\xi}_{\text{i}}^{-}\) has the closed-form solution given by: \[\boldsymbol{\xi}_{\text{i}}^{-}=\underbrace{e^{A_{\text{ct}}T_{\text{i}}}}_{A_{\text{i}}}\boldsymbol{\xi}_{\text{i}}^{+}+\int_{0}^{T_{\text{i}}}e^{A_{\text{ct}}(T_{\text{i}}-t)}B_{\text{ct}}\dot{p}_{\text{zmp, i}}(t)dt, \tag{5}\] where superscripts \((\cdot)^{+/-}\) indicate the beginning and end of each domain, respectively, and \(T_{\text{i}}\) represents the time duration of the i-th domain. Unlike the guards defined in Definition 2, the transitions between domains in the MLIP model are purely time-based, as the legs are effectively virtual. To determine \(\dot{p}_{\text{zmp, i}}\) in each domain, we draw inspiration from human walking data [22], as elaborated below. **FA:** We denote the distance that the ZMP travels during the fully-actuated phase as \(l\). In the context of heel-to-toe walking, this distance corresponds to \(\rho\). For toe-to-heel and flat-footed walking, the values are \(-\rho\) and 0, respectively. \[\dot{p}_{\text{zmp, FA}}(t)=\frac{l}{T_{\text{FA}}}. \tag{6}\] **UA:** During the under-actuated phase, the ZMP is considered to be fixed at the stance pivot. Thus, \(\dot{p}_{\text{zmp, UA}}(t)=0\). **OA:** During the over-actuated phase, the ZMP shifts from the previous stance pivot to the new stance leg, thus traveling the step length denoted as \(u\) within the step duration \(T_{\text{OA}}\): \[\dot{p}_{\text{zmp, OA}}(t)=\frac{u}{T_{\text{OA}}}. \tag{7}\] As depicted in Fig. 3, in the context of heel-to-toe walking, the parameter \(u\) indicates the distance between the stance toe and the swing heel. In the case of toe-to-heel walking, \(u\) denotes the distance between the stance heel and the swing toe, and for flat-footed walking, the separation between the points beneath the stance and swing ankles. 
Given that \(\dot{p}_{\text{zmp, i}}\) is defined to be independent of time for all domains, we can further simplify the dynamics in Eq. (5): \[\int_{0}^{T_{\text{i}}}e^{A_{\text{ct}}(T_{\text{i}}-t)}B_{\text{ct}}\dot{p}_{\text{zmp, i}}(t)dt =\int_{0}^{T_{\text{i}}}e^{A_{\text{ct}}(T_{\text{i}}-t)}dt\,B_{\text{ct}}\dot{p}_{\text{zmp, i}}\] \[=\underbrace{\int_{0}^{T_{\text{i}}}e^{A_{\text{ct}}(T_{\text{i}}-t)}dt\,B_{\text{ct}}\frac{1}{T_{\text{i}}}}_{B_{\text{i}}}d_{\text{i}}.\] As a result, we can express the solution to the continuous time dynamics for the \(i\)-th domain as follows: \[\boldsymbol{\xi}_{\text{i}}^{-}=A_{\text{i}}\boldsymbol{\xi}_{\text{i}}^{+}+B_{\text{i}}d_{\text{i}}\] (MLIP-CT) where \(d_{\text{OA}}=u\), \(d_{\text{FA}}=l\), and \(d_{\text{UA}}=0\). With the continuous phase dynamics defined, we need to specify the impact dynamics. For the model with massless legs, discrete state jumps resulting from impact do not occur. Instead, the impact equation characterizes the effects of switching between the stance and swing legs, which is defined to happen at the transition from the OA to the FA domain. Thus, the impact equations are given by: \[\begin{cases}\boldsymbol{\xi}_{\text{OA}}^{+}&=\boldsymbol{\xi}_{\text{UA}}^{-}\\ \boldsymbol{\xi}_{\text{FA}}^{+}&=\boldsymbol{\xi}_{\text{OA}}^{-}+B_{\Delta}u+C_{\Delta}\\ \boldsymbol{\xi}_{\text{UA}}^{+}&=\boldsymbol{\xi}_{\text{FA}}^{-}\end{cases},\] (MLIP-DT) Fig. 4: MLIP gait cycle for (a) toe-to-heel and (b) flat-footed walking with arrows indicating walking direction. Fig. 3: MLIP gait cycle for heel-to-toe walking. Note that as the MLIP model has massless legs and feet, the foot angles do not impact the dynamics. The stance and swing foot pitch angles shown are up to the user's choice. The feet drawn are straight lines, but curved feet with an arc length \(\rho\) would result in the same dynamics. 
where \(B_{\Delta}=\begin{bmatrix}-1&0&-1\end{bmatrix}^{T}\) and \(C_{\Delta}=\begin{bmatrix}-l&0&-l\end{bmatrix}^{T}\). We are interested in understanding the S2S dynamics for stabilizing the system. Considering the pre-impact state at the UA phase as \(\mathbf{\xi}[k]\) in the \(k\)-th step, which is shown in red in Fig. 3 and 4, a complete step evolution follows this sequence: \(\mathbf{\xi}[k]\rightarrow\mathbf{\xi}^{+}_{\text{OA}}\rightarrow\mathbf{\xi}^{-}_{\text{OA}}\rightarrow\mathbf{\xi}^{+}_{\text{FA}}\rightarrow\mathbf{\xi}^{-}_{\text{FA}}\rightarrow\mathbf{\xi}^{+}_{\text{UA}}\rightarrow\mathbf{\xi}^{-}_{\text{UA}}\rightarrow\mathbf{\xi}[k+1]\). Consequently, the S2S dynamics of the MLIP model can be described using Eq. (MLIP-DT) and (MLIP-CT): \[\mathbf{\xi}_{k+1}=A_{\mathbf{\xi}}\mathbf{\xi}_{k}+B_{\mathbf{\xi}}u_{k}+C_{\mathbf{\xi}} \tag{8}\] where the detailed matrices are obtained as follows: \[A_{\mathbf{\xi}} =A_{\text{UA}}A_{\text{FA}}A_{\text{OA}}, \tag{9a}\] \[B_{\mathbf{\xi}} =A_{\text{UA}}A_{\text{FA}}(B_{\text{OA}}+B_{\Delta}),\] (9b) \[C_{\mathbf{\xi}} =A_{\text{UA}}A_{\text{FA}}(B_{\text{FA}}l+C_{\Delta}). \tag{9c}\] Given that the Poincare section is defined at the end of the UA phase, it follows that \(p_{\text{zmp,k}}=0\) for all \(k\in\mathbb{N}\). Additionally, it can be confirmed that \(A_{\mathbf{\xi}}(3,3)=1\), \(B_{\mathbf{\xi}}(3)=0\), and \(C_{\mathbf{\xi}}(3)=0\). Therefore, we use \(\mathbf{x}^{M}\in\mathbb{R}^{2}\) to represent the horizontal CoM states, specifically \(\mathbf{x}^{M}=[p,L]^{T}\), such that \(\mathbf{\xi}=[\mathbf{x}^{M};0]\). Using the two-dimensional states, we can express the S2S dynamics of the MLIP as follows: \[\mathbf{x}^{\text{M}}_{k+1}=A^{\text{M}}\mathbf{x}^{\text{M}}_{k}+B^{\text{M}}u_{k}+C^{\text{M}},\] (MLIP-S2S) where \(A^{\text{M}}=A_{\mathbf{\xi}}(1:2,1:2)\), \(B^{\text{M}}=B_{\mathbf{\xi}}(1:2)\), and \(C^{\text{M}}=C_{\mathbf{\xi}}(1:2)\). 
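Eqs. (9a)-(9c) compose into the S2S matrices by plain matrix products; the sketch below (the domain timings and ZMP travel \(l\) passed in the test are made-up placeholders) also lets one check the claimed structure \(A_{\mathbf{\xi}}(3,3)=1\), \(B_{\mathbf{\xi}}(3)=0\), \(C_{\mathbf{\xi}}(3)=0\):

```python
import math

G, Z0 = 9.81, 0.8           # gravity and CoM height (Z0 is a placeholder)
LAM = math.sqrt(G / Z0)

def expA(T):
    # e^{A_ct T} for Eq. (4), state (p, L, p_zmp); hand-derived closed form
    c, s = math.cosh(LAM * T), math.sinh(LAM * T)
    return [[c, s / (Z0 * LAM), 1.0 - c],
            [Z0 * LAM * s, c, -Z0 * LAM * s],
            [0.0, 0.0, 1.0]]

def Bmat(T):
    # B_i = (1/T) * int_0^T e^{A_ct (T - t)} dt B_ct; zero vector if T == 0
    if T == 0:
        return [0.0, 0.0, 0.0]
    c, s = math.cosh(LAM * T), math.sinh(LAM * T)
    return [(T - s / LAM) / T, -Z0 * (c - 1.0) / T, 1.0]

def mm(X, Y):   # 3x3 @ 3x3
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mv(X, v):   # 3x3 @ 3-vector
    return [sum(X[i][k] * v[k] for k in range(3)) for i in range(3)]

def s2s_matrices(T_OA, T_FA, T_UA, l):
    """Compose Eqs. (9a)-(9c): A_xi plus the vectors multiplying u and 1."""
    A_OA, A_FA, A_UA = expA(T_OA), expA(T_FA), expA(T_UA)
    B_OA, B_FA = Bmat(T_OA), Bmat(T_FA)
    B_D, C_D = [-1.0, 0.0, -1.0], [-l, 0.0, -l]
    AFU = mm(A_UA, A_FA)
    A_xi = mm(AFU, A_OA)
    B_xi = mv(AFU, [B_OA[i] + B_D[i] for i in range(3)])
    C_xi = mv(AFU, [B_FA[i] * l + C_D[i] for i in range(3)])
    return A_xi, B_xi, C_xi
```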
**Remark 2**.: _It might appear that the MLIP model requires non-trivial FA, UA, and OA phases to obtain well-defined S2S dynamics, given the presence of \(\frac{1}{T_{i}}\) in the definition of \(B_{i}\). However, it is important to note that the integral from 0 to \(T_{i}\) always results in zero when \(T_{i}=0\). In cases of \(T_{\text{OA}}=0\), we also need to set \(B_{\Delta}=\begin{bmatrix}-1&0&0\end{bmatrix}^{T}\) to immediately transfer the zero-moment point to the new stance foot._ **Remark 3**.: _The second coordinate \(L\) can be replaced by linear velocity, as done in [2, 5]. However, we have chosen to use the angular momentum about the stance pivot for higher data quality, given that a Kalman filter for angular momentum can be easily set up as demonstrated in [23]. Replacing the second coordinate will result in a modified \(A_{\text{ct}}\), while the linear structure of the step-to-step dynamics remains unchanged._ **Remark 4**.: _One can retrieve the Hybrid-LIP model in [5] by using the CoM linear velocity as the second coordinate and setting \(l=0\) and \(T_{\text{FA}}=T_{\text{OA}}=0\). However, there are different assumptions regarding the motion of the ZMP in the OA phase, which is equivalent to the double-support phase in the Hybrid-LIP model. For more in-depth information, one can refer to [5]._ ### _Stabilization for Robot Dynamics_ Having formulated the S2S dynamics for the MLIP model as a discrete-time linear control system, we now aim to stabilize robot walking using it. In practice, we encounter a significant challenge in obtaining the S2S dynamics of the robot due to its inherently nonlinear nature. However, if we assume that the robot's control scheme ensures the availability of the next step, we can mathematically express the S2S dynamics as the Poincare map: \[\mathbf{x}^{-}_{k+1}=\mathcal{P}_{\mathbf{x}}(\mathbf{x}^{-}_{k},\tau(t)). 
\tag{10}\] Here, \(\mathbf{x}^{-}_{k}\in\mathcal{T}\mathcal{Q}\) is the robot state at the end of the UA phase, i.e., the pre-impact state. The associated Poincare map is denoted as \(\mathcal{P}_{\mathbf{x}}\). Our focus lies on the pre-impact CoM states, which can be denoted \(\mathbf{x}^{R}=[p^{\text{R},-},L^{\text{R},-}]^{T}\in\mathbb{R}^{2}\). We can express the evolution of these pre-impact CoM states as \[\mathbf{x}^{\text{R}}_{k+1}=\mathcal{P}_{\mathbf{x}^{R}}(\mathbf{x}^{\text{R}}_{k},\tau(t)). \tag{11}\] Using the MLIP S2S dynamics as an approximation, the robot S2S dynamics can be expressed as: \[\mathbf{x}^{\text{R}}_{k+1}=A^{\text{M}}\mathbf{x}^{\text{R}}_{k}+B^{\text{M}}u^{\text{R}}_{k}+C^{\text{M}}+w, \tag{12}\] where \(u^{\text{R}}_{k}\) denotes the \(k\)-th step size of the robot. \(w=\mathcal{P}_{\mathbf{x}^{R}}(\mathbf{x}^{\text{R}}_{k},\tau(t))-A^{\text{M}}\mathbf{x}^{\text{R}}_{k}-B^{\text{M}}u^{\text{R}}_{k}-C^{\text{M}}\) represents the model discrepancy, i.e., the integrated dynamics difference between the robot and the MLIP over a step. It is assumed that the realizable set of walking behaviors satisfies \(w\in\mathbf{W}\), where \(\mathbf{W}\) is a bounded set as mentioned in [5]. Letting \(\mathbf{e}:=\mathbf{x}^{\text{R}}-\mathbf{x}^{\text{M}}\) denote the error state, a stabilizing controller can be designed as follows: \[u^{\text{R}}_{k}=u^{\text{M}}_{k}+K(\mathbf{x}^{\text{R}}_{k}-\mathbf{x}^{\text{M}}_{k}), \tag{13}\] which yields the error dynamics: \[\mathbf{e}_{k+1}=(A^{\text{M}}+B^{\text{M}}K)\mathbf{e}_{k}+w, \tag{14}\] where \(K\) is the controller gain. From linear control theory, any selection of \(K\) that results in a stable \(A^{\text{M}}+B^{\text{M}}K\) drives \(\mathbf{e}\) to converge to an invariant set \(\mathbf{E}\) as in [24], i.e., if \(\mathbf{e}_{k}\in\mathbf{E}\), then \(\mathbf{e}_{k+1}\in\mathbf{E}\). In this work, the controller gain \(K\) was determined using the linear quadratic regulator. 
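For the two-state, one-input error dynamics of Eq. (14), the LQR gain can be obtained by value iteration on the discrete-time Riccati equation; the sketch below uses made-up \(A^{\text{M}}\), \(B^{\text{M}}\) numbers, not the MLIP-derived or Cassie-identified ones:

```python
def dlqr_2x1(A, B, Q, R, iters=500):
    """Gain K for the stepping controller of Eq. (13), computed by
    value iteration on the discrete Riccati equation for a 2-state /
    1-input system.  Returned with the sign convention of Eq. (13),
    so the closed loop is e_{k+1} = (A + B K) e_k."""
    P = [row[:] for row in Q]
    for _ in range(iters):
        # scalar quantities thanks to the single input
        BtPB = sum(B[i] * sum(P[i][j] * B[j] for j in range(2)) for i in range(2))
        BtPA = [sum(B[i] * sum(P[i][j] * A[j][c] for j in range(2)) for i in range(2))
                for c in range(2)]
        Kl = [g / (R + BtPB) for g in BtPA]        # standard LQR gain (u = -Kl e)
        Acl = [[A[r][c] - B[r] * Kl[c] for c in range(2)] for r in range(2)]
        PA = [[sum(P[r][j] * Acl[j][c] for j in range(2)) for c in range(2)]
              for r in range(2)]
        # P <- Q + A^T P (A - B Kl)
        P = [[Q[r][c] + sum(A[j][r] * PA[j][c] for j in range(2)) for c in range(2)]
             for r in range(2)]
    return [-k for k in Kl]

# Illustrative (made-up) S2S matrices -- an unstable open-loop A:
A_ex = [[1.8, 0.9], [1.2, 1.8]]
B_ex = [-1.0, -1.5]
K_ex = dlqr_2x1(A_ex, B_ex, Q=[[1.0, 0.0], [0.0, 1.0]], R=1.0)
```

Any other gain placing the eigenvalues of \(A^{\text{M}}+B^{\text{M}}K\) inside the unit circle (e.g. a deadbeat choice) would serve the same purpose.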
It is important to note that this process is independent of the specific model, as long as it can be represented as a linear discrete-time system. Similar techniques have been successfully employed in related works, such as [6] for S2S dynamics derived from numerically linearized full robot dynamics and in [5] for a different reduced-order model. ### _MLIP Periodic Orbits_ In the context of walking, the robot is often required to follow a desired velocity. A straightforward approach is to employ closed-form reduced-order-model periodic orbits, i.e., set \(\mathbf{x}^{\text{M}}=\mathbf{x}^{*}\) to be the desired state. As the MLIP is a planar model, it allows for decoupled planning of sagittal and lateral motion. We thus present the results for both Period-1 and Period-2 orbits, which are suitable for sagittal and lateral planning, respectively. A visualization of the periodic orbits using different parameters is shown in Fig. 5. **Period-1 Orbit:** The desired step size \(u^{*}\) for a Period-1 orbit is determined by the desired walking velocity \(v^{d}\) and the step duration \(T\), where \(u^{*}=v^{d}T\). The corresponding desired periodic pre-impact state for achieving \(u^{*}\) is calculated by setting \(\mathbf{x}_{k+1}=\mathbf{x}_{k}=\mathbf{x}^{*}\) in Eq. (MLIP-S2S): \[\mathbf{x}^{*}=(I_{2\times 2}-A^{\text{M}})^{-1}(B^{\text{M}}u^{*}+C^{\text{M}}), \tag{15}\] where \(I_{2\times 2}\in\mathbb{R}^{2\times 2}\) is the identity matrix. In this context, the controller can be expressed as: \[u^{\text{R}}=u^{*}+K(\mathbf{x}^{\text{R}}-\mathbf{x}^{*}). \tag{16}\] **Period-2 Orbit:** Unlike P1 orbits, there is no unique solution for the P2 orbit that achieves a desired velocity \(v^{d}\). We use subscripts L/R to denote the left or right stance leg. The step sizes must satisfy the equation \(u_{\text{L}}^{*}+u_{\text{R}}^{*}=2v^{d}T\), and the choice of one step size determines the orbit. 
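The Period-1 fixed point of Eq. (15) is a direct \(2\times 2\) linear solve; a sketch with placeholder S2S matrices (the numbers in the test are illustrative, not MLIP-derived):

```python
def p1_fixed_point(A, B, C, v_des, T):
    """Period-1 fixed point of Eq. (15): with u* = v_des * T, solve
    x* = (I - A)^{-1} (B u* + C) via the closed-form 2x2 inverse.
    A is 2x2, B and C are 2-vectors (the S2S matrices)."""
    u_star = v_des * T
    r = [B[i] * u_star + C[i] for i in range(2)]          # B u* + C
    M = [[1.0 - A[0][0], -A[0][1]],
         [-A[1][0], 1.0 - A[1][1]]]                        # I - A
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    x = [(M[1][1] * r[0] - M[0][1] * r[1]) / det,
         (M[0][0] * r[1] - M[1][0] * r[0]) / det]
    return u_star, x
```

By construction, stepping the S2S map once from the returned state with \(u^{*}\) reproduces the same state.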
Solving \(\mathbf{x}_{k+2}=\mathbf{x}_{k}\) yields the desired periodic pre-impact states: \[\mathbf{x}_{\text{L/R}}^{*}=(I_{2\times 2}-(A^{\text{M}})^{2})^{-1}(A^{\text{M}}B^{\text{M}}u_{\text{L/R}}^{*}+B^{\text{M}}u_{\text{R/L}}^{*}\] \[+A^{\text{M}}C^{\text{M}}+C^{\text{M}}).\] Consequently, the controller can be written as \(u^{\text{R}}=u_{\text{L/R}}^{*}+K(\mathbf{x}^{\text{R}}-\mathbf{x}_{\text{L/R}}^{*})\). When applying P2 orbits in the lateral plane, the state vector \(\mathbf{x}\) is defined as \(\mathbf{x}:=[p_{y},-L_{x}]^{T}\) to ensure consistency with the sign conventions used in the \(A_{\text{ct}}\) matrix. ## IV Robot Implementation The reduced-order model generates discrete commands using step-to-step stabilization. However, to translate these commands into real-time control signals for the physical robot, it is imperative to construct continuous control outputs and design corresponding feedback controllers. This process is detailed in this section. ### _Output Definition_ We have implemented our proposed method on the robot Cassie, a 3D underactuated robot developed by Agility Robotics [25]. As depicted in Fig. 6, each leg of Cassie is modeled with 6 degrees of freedom (DOF), comprising five motor joints and one passive tarsus joint. When combined with the 6 DOF for the floating-base pelvis frame, the total DOF for the robot is 18. To realize the MLIP-based approach on Cassie, the output design adheres to specific requirements. The vertical CoM position \(z_{\text{com}}\) relative to the stance pivot should remain approximately constant. The vertical position of the swing foot \(z_{\text{sw foot}}\) is constructed to periodically lift off and strike the ground. The horizontal position of the swing foot \(\{x,y\}_{\text{sw foot}}\) relative to the stance pivot is controlled to achieve the desired step size \(\{u_{x},u_{y}\}\) from the MLIP-based stepping controller Eq. (16). 
The pitch angles of the stance and swing foot should be controlled to provide the desired contact location corresponding to the desired walking mode. Additionally, the pelvis roll and pitch angles \(\{\theta^{y},\theta^{x}\}_{\text{pelvis}}\) and the stance and swing hip yaw angles \(\theta_{\text{st hip}}^{z}\), \(\theta_{\text{sw hip}}^{z}\) should be controlled to fully constrain the walking behaviors. With these considerations in mind, the desired walking behavior is encoded by the virtual constraints [19], defined as: \[\mathbf{\mathcal{Y}}=\mathbf{h}^{a}-\mathbf{h}^{d}\in\mathbb{R}^{12},\] where \(\mathbf{h}^{a}\) and \(\mathbf{h}^{d}\) denote the actual and desired outputs, defined as follows: \[\mathbf{h}=\text{col}(\{x,y,z\}_{\text{com}},\theta_{\text{st hip}}^{z},\theta_{\text{st foot}}^{y},\{\theta^{x},\theta^{y}\}_{\text{pelvis}},\] \[\{x,y,z\}_{\text{sw foot}},\theta_{\text{sw hip}}^{z},\theta_{\text{sw foot}}^{y})\in\mathbb{R}^{12}.\] In Fig. 6, we provide a visualization of these outputs, specifically for flat-footed walking. For heel-to-toe walking, the position of the CoM is defined to be relative to the stance toe, and the position of the swing foot is represented as the vector from the stance toe to the swing heel. Similar modifications need to be applied to toe-to-heel walking. Cassie employs a line-foot design, which means that only the toe pitch motor is present on the foot link. Consequently, the robot lacks motor control for adjusting the foot roll angle. This limitation leads to inherent underactuation in the lateral plane during single-support phases. Similarly, in the OA domain, where only the back toe and front heel make contact with the ground for heel-to-toe walking, the ZMP can solely reside along the line connecting these two contact points. This configuration results in coupled control of the xCoM and yCoM components. Therefore, achieving independent control of both xCoM and yCoM is not feasible during OA phases. 
These hardware constraints lead to the following choice of selection matrices \(S_{\text{i}}\) for each domain such that \(\mathbf{h}_{\text{FA},\text{UA},\text{OA}}=S_{\text{FA},\text{UA},\text{OA}}\mathbf{h}\): \[S_{\text{FA}}=\text{diag}(\{1,0,1\},1,0,\{1,1\},\{1,1,1\},1,1), \tag{17a}\] \[S_{\text{UA}}=\text{diag}(\{0,0,1\},1,1,\{1,1\},\{1,1,1\},1,1), \tag{17b}\] \[S_{\text{OA}}=\text{diag}(\{0,0,1\},1,1,\{1,1\},\{0,0,0\},0,1). \tag{17c}\]

The selection matrix for each domain aligns with the holonomic constraints present in that specific domain. In the UA and OA phases, we assume patch contact with the ground, meaning that the contact points' positions x, y, z and yaw angle are constrained. During the FA phase, we apply the line contact assumption, which adds the constraint on the contact pitch angle. In the current formulation, we allow the horizontal CoM states to evolve passively during the OA phase. This choice helps avoid mismatched commands for xCoM and yCoM control during that phase. The construction of most of the desired outputs for our control framework is consistent with the approach outlined in [5]. However, there are exceptions in the case of the horizontal CoM position, stance foot pitch, and swing foot pitch angles.

Fig. 5: Phase portraits depicting various periodic orbits with \(z_{0}=0.8\)m: (a) Heel-to-toe walking at speeds of 0, 0.5, 1, 2 m/s shown in blue, red, yellow, and purple lines. (b) Comparison of heel-to-toe walking (red) and flat-footed walking (black) at 2 m/s, highlighting FA, UA, and OA phases with solid, dashed, and dotted lines, respectively. (c) Toe-to-heel walking at -1 m/s, showcasing different total step times (T = 0.4s, 0.6s, and 0.8s) using blue, red, and yellow lines. (d) Flat-footed P2 orbit at 0 m/s with nominal step widths of 0.3m, 0.4m, and 0.5m in blue, red, and yellow lines.

Fig. 6: Robot Cassie schematics and output definition.
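As a quick sanity check on dimensions, the domain-wise selection can be written out directly. The 0/1 patterns below follow the ordering of \(\mathbf{h}\) and are illustrative reconstructions of Eq. (17), padded to 12 diagonal entries to match \(\mathbf{h}\in\mathbb{R}^{12}\); the output values are arbitrary placeholders.

```python
import numpy as np

# Output ordering of h (12 entries):
# ({x,y,z}_com, th_st_hip_z, th_st_foot_y, {th_x,th_y}_pelvis,
#  {x,y,z}_sw_foot, th_sw_hip_z, th_sw_foot_y)
h = np.arange(12, dtype=float)  # arbitrary placeholder output values

# Diagonal selection patterns per domain: a 1 keeps the output actively
# controlled, a 0 releases it to the holonomic contact constraints or the
# passive dynamics of that domain.
S_FA = np.diag([1., 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1])
S_UA = np.diag([0., 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
S_OA = np.diag([0., 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1])

# OA releases the horizontal CoM and the swing-foot outputs, matching the
# passive CoM evolution described in the text.
h_OA = S_OA @ h
active_OA = int(np.trace(S_OA))  # number of actively controlled OA outputs
```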
#### IV-A1 Horizontal CoM position

During the FA phase, one option is to control the ZMP to transition from the heel to the toe, as assumed in the MLIP model construction. However, this approach requires feedback control of the ZMP position at the jerk level, which is impractical. Instead, we opt to directly regulate the horizontal CoM states at the end of the FA phase to reach a desired state denoted as \(\mathbf{x}_{\text{FA}^{-}}^{\text{M},*}\). This state can be calculated using the MLIP step-to-step dynamics, with the Poincare section instead chosen to be the end of the FA phase, i.e., \(\text{FA}^{-}\). The trajectory for this control is defined using a Bezier polynomial that can be written as: \[x_{\text{com}}^{d}(s_{\text{FA}})\coloneqq b_{\text{xcom}}(s_{\text{FA}})=A(s_{\text{FA}})\alpha_{\text{xcom}}, \tag{18}\] where \(s_{\text{FA}}\coloneqq\frac{t_{\text{FA}}}{T_{\text{FA}}}\in[0,1)\) is the phase variable in the FA domain, and \(A(s_{\text{FA}})\in\mathbb{R}^{1\times n_{b}}\) follows from the definition of the Bezier polynomial. The \(k\)-th element of \(A(s_{\text{FA}})\) is computed as: \[A_{k}(s_{\text{FA}})=\frac{n_{b}!}{k!(n_{b}-k)!}s_{\text{FA}}^{k}(1-s_{\text{FA}})^{n_{b}-k}.\] Additionally, \(\alpha_{\text{xcom}}\in\mathbb{R}^{n_{b}}\) represents the coefficients of a Bezier polynomial of degree \(n_{b}\). These coefficients are determined at the start of the FA phase, subject to the following linear equality constraints: \[\begin{bmatrix}A(0)\\ \dot{A}(0,T_{\text{FA}})\\ A(1)\\ \dot{A}(1,T_{\text{FA}})\end{bmatrix}\alpha_{\text{xcom}}=\begin{bmatrix}x_{\text{com, FA+}}^{a}\\ \dot{x}_{\text{com, FA+}}^{a}\\ p_{\text{FA}^{-}}^{\text{M},*}\\ \frac{1}{mz_{0}}L_{\text{FA}^{-}}^{\text{M},*}\end{bmatrix}.
\tag{19}\] Here, \(\dot{A}(s_{\text{FA}},T_{\text{FA}})\in\mathbb{R}^{1\times n_{b}}\) is also determined from the definition of the Bezier polynomial, ensuring that \(\dot{b}_{\text{xcom}}(s_{\text{FA}},T_{\text{FA}})=\dot{A}(s_{\text{FA}},T_{\text{FA}})\alpha_{\text{xcom}}\). The actual horizontal CoM position and velocity at the beginning of the domain are given by \(x_{\text{com, FA+}}^{a}\) and \(\dot{x}_{\text{com, FA+}}^{a}\). Notably, the second coordinate in \(\mathbf{x}_{\text{FA}^{-}}^{\text{M},*}\) requires conversion from angular momentum to velocity, as indicated.

#### IV-A2 Stance and swing foot pitch angle

The desired end-of-domain stance and swing foot pitch angles vary for different contact modes in different domains. For instance, in the UA phase of heel-to-toe walking, the final stance foot angle should be greater than zero to enable the heel-lift motion. However, these angles share a common output structure: \[\{\theta_{\text{st/sw foot}}^{y}\}^{d}(s_{\text{i}})\coloneqq(1-b_{\text{foot}}(s_{\text{i}}))\{\theta_{\text{st/sw foot}}^{y}\}_{\text{i+}}^{a}+b_{\text{foot}}(s_{\text{i}})\{\theta_{\text{st/sw foot}}^{y}\}_{\text{i}}^{*}, \tag{20}\] where \(s_{\text{i}}\coloneqq\frac{t_{\text{i}}}{T_{\text{i}}}\in[0,1)\) is the phase variable within each domain, \(\{\theta_{\text{st/sw foot}}^{y}\}_{\text{i+}}^{a}\) is the actual foot pitch angle at the beginning of the i-th domain, \(\{\theta_{\text{st/sw foot}}^{y}\}_{\text{i}}^{*}\) denotes the desired end-of-domain pitch angle, and \(b_{\text{foot}}(s_{\text{i}})\) is a Bezier polynomial that transitions from 0 to 1.

### _Feedback Controller_

Using the synthesized outputs, we employ a task-space quadratic-programming (QP) based controller [26] to ensure tracking of the desired trajectories while respecting the constrained dynamics, the physical motor torque limits, and the ground contact force constraints.
In each domain i and at each control loop, we formulate the QP with optimization variables \(\ddot{\mathbf{q}},\mathbf{\tau},\mathbf{f}_{\text{i}}\) as follows: \[\underset{\ddot{\mathbf{q}},\mathbf{\tau},\mathbf{f}_{\text{i}}}{\text{min}}\quad||\ddot{\mathbf{h}}_{\text{i}}^{a}(q,\dot{q},\ddot{q})-\ddot{\mathbf{h}}_{\text{i}}^{d}-\ddot{\mathbf{\mathcal{Y}}}_{\text{i}}^{t}||_{Q}^{2},\] (TSC-QP) s.t. Eq. (1), (Dynamics) \[A_{\text{GRF}}\mathbf{f}_{\text{i}}\leq\mathbf{b}_{\text{GRF}},\] (Contact) \[\mathbf{\tau}_{lb}\leq\mathbf{\tau}\leq\mathbf{\tau}_{ub}.\] (Torque Limit) Here, \(Q\) denotes a weight matrix, and \(\ddot{\mathbf{\mathcal{Y}}}^{t}=-K_{p}\mathbf{\mathcal{Y}}-K_{d}\dot{\mathbf{\mathcal{Y}}}\) is the target acceleration of the output that enables exponential tracking, where \(K_{p},K_{d}\) are the proportional and derivative gains. The affine contact constraint on \(\mathbf{f}_{\text{i}}\) approximates the contact friction cone constraint. \(\mathbf{\tau}_{lb}\) and \(\mathbf{\tau}_{ub}\) represent the lower and upper torque bounds. Solving this QP yields the optimal torque \(\mathbf{\tau}\) that is applied to the robot.

## V Results

We evaluate the proposed approach using our C++ implementation in the open-sourced simulator [27] with the MuJoCo physics engine [28] on the robot Cassie. The output construction and the corresponding low-level controller, as described in (TSC-QP), are executed at a rate of 1 kHz for real-time control. A visual representation of the results is available in the supplementary video provided earlier. For our MLIP planning, we assume a constant CoM height of \(z_{0}=0.8\)m and a foot length of \(\rho=0.16\)m, consistent with Cassie's physical foot length. Since the lateral dynamics are always underactuated during the single-support phase, we use \(T_{\text{SS}}\) to denote the step duration of the single-support phase, where \(T_{\text{SS}}=T_{\text{FA}}+T_{\text{UA}}\) for both the sagittal and lateral planes.
In all tests, we use \(T_{\text{SS}}=0.4\)s and \(T_{\text{OA}}=0.1\)s. For the MLIP model in the sagittal plane, we set \(T_{\text{FA}}=T_{\text{UA}}=\frac{T_{\text{SS}}}{2}\), resulting in a phase distribution of 40% FA, 40% UA, and 20% OA, which aligns with actual human walking data [29]. In the lateral plane, we have \(T_{\text{FA}}=0\) and \(T_{\text{UA}}=T_{\text{SS}}\). ### _Versatile Walking_ The versatility of the proposed approach enables us to showcase a wide range of walking behaviors using Cassie. In all test scenarios, the robot initiates from a stationary posture and is initially commanded to step-in-place for a duration of five seconds. Subsequently, the reference velocity gradually changes to the desired value. Fig. 7(a) and (b) provide a glimpse of the robot's steady-state walking under speeds of 2 m/s and -1.5 m/s, respectively. Employing the multi-domain formulation, Cassie exhibits a natural walking motion characterized by human-like foot rolls with distinct UA, OA, and FA phases. In Fig. 7(c), we offer a comparison of the phase portrait for the periodic orbit realized by Cassie and the periodic orbit defined for MLIP for 2m/s heel-to-toe walking. Fig. 7(d) depicts the phase portrait for lateral walking using a P2 orbit. In Fig. 8, we examine the performance of our approach across a range of reference velocities. Remarkably, at all reference velocities, the walking stabilizes near the reference velocity, and the robot's states remain within a small, bounded error set relative to the nominal MLIP states as shown in the phase portrait. **Remark 5**.: _Our primary objective in this work is not to achieve perfect velocity tracking performance. As discussed in Sec. III-B, our proposed reduced-order model planner is designed to stabilize the robot within a small bounded error set around the planned reduced-order model trajectory. To improve the global tracking performance, there are two viable strategies. 
One approach involves integrating a high-level planner to adjust the desired velocity sent to the robot. Alternatively, data-driven techniques [30] can be employed to reduce robot-dependent model discrepancies, thereby reducing the size of the error set._

### _Push Recovery_

The proposed method uses online foot-placement planning to stabilize the robot. To evaluate the robustness of this approach, we apply unknown disturbance forces to Cassie. Specifically, as shown in Fig. 9(b), we applied 50N and -50N pushes to the pelvis of the robot at times 15s and 20s, each lasting 0.5s. Fig. 9(a) shows the robot's response to the 50N push during heel-to-toe walking at a speed of 1 m/s. Evidently, the robot quickly recovers from the perturbation by taking a few larger steps. Fig. 9(c) displays the CoM velocity profiles for three different commanded speeds when subjected to the same disturbance. Notably, all commanded speeds exhibit the ability to withstand the external forces and subsequently resume normal walking behavior.

### _Maximum Speed_

Multi-domain walking offers the potential for larger footsteps compared to flat-footed walking, ultimately leading to higher walking speeds. Our results effectively showcase this advantage in Fig. 10, where we compare the walking behavior at the maximum attainable speed for the specified gait parameters in (a) and (b). It is clear that heel-to-toe walking results in a noticeably larger footstep. The corresponding velocity profile and phase portrait are presented in Fig. 10(c) and (d). The proposed method using heel-to-toe walking achieves an impressive speed of approximately 2.15 m/s, whereas flat-footed walking only reaches 1.65 m/s. For context, the average human walking speed is 1.42 m/s. We are thus able to realize highly dynamic locomotion behaviors on Cassie using our proposed method. **Remark 6**.: _The previous method [5] was capable of achieving a maximum speed of approximately 2 m/s on Cassie.
However, this was only attainable with much faster stepping motions, specifically with \(T_{\text{SS}}=0.3\)s and \(T_{\text{OA}}=0\)s. It is important to note that the resulting gait was unstable without additional data-driven adaptation, as discussed in [30]._

Fig. 7: (a) Heel-to-toe walking at a speed of 2 m/s. (b) Toe-to-heel walking at -1.5 m/s. (c) Phase portrait depicting 2 m/s steady-state heel-to-toe walking. The yellow, red, and blue lines represent the FA, UA, and OA phases for the robot, following the color code from Fig. 3. The dashed, solid, and dotted lines correspond to the FA, UA, and OA phases for the periodic orbit in the MLIP model. (d) Phase portrait illustrating steady-state lateral walking, also using the same color code.

Fig. 8: Velocity tracking performance and corresponding steady-state phase portrait for a range of walking speeds: 2 m/s, 1 m/s, 0.5 m/s, 0 m/s, -0.75 m/s, and -1.5 m/s, depicted by blue, yellow, green, burgundy, purple, and red lines, respectively. The commanded velocity is given in solid black lines.

Fig. 9: (top) Cassie's recovery from a 50N push during 1 m/s walking. (bottom) Disturbance force profile and corresponding velocity tracking results for push recovery tests at walking speeds of 1 m/s, 0.5 m/s, and 0.75 m/s, represented by blue, red, and yellow lines, respectively.

## VI Conclusion

In conclusion, this paper introduces a novel reduced-order-model-based approach to realizing multi-domain walking on bipedal robots. Leveraging the S2S dynamics of the proposed MLIP model, we have demonstrated the ability to stabilize the robot at arbitrary walking speeds, achieving a remarkable maximum speed of 2.15 m/s. The robustness of the method is demonstrated through push recovery tests. Importantly, this method can be readily extended to applications in robotic assistive devices, such as exoskeletons, to enable human-like multi-domain locomotion for the mobility impaired.
2303.15637
The Fundamental Limitations of Learning Linear-Quadratic Regulators
We present a local minimax lower bound on the excess cost of designing a linear-quadratic controller from offline data. The bound is valid for any offline exploration policy that consists of a stabilizing controller and an energy bounded exploratory input. The derivation leverages a relaxation of the minimax estimation problem to Bayesian estimation, and an application of Van Trees' inequality. We show that the bound aligns with system-theoretic intuition. In particular, we demonstrate that the lower bound increases when the optimal control objective value increases. We also show that the lower bound increases when the system is poorly excitable, as characterized by the spectrum of the controllability gramian of the system mapping the noise to the state and the $\mathcal{H}_\infty$ norm of the system mapping the input to the state. We further show that for some classes of systems, the lower bound may be exponential in the state dimension, demonstrating exponential sample complexity for learning the linear-quadratic regulator offline.
Bruce D. Lee, Ingvar Ziemann, Anastasios Tsiamis, Henrik Sandberg, Nikolai Matni
2023-03-27T23:37:37Z
http://arxiv.org/abs/2303.15637v1
# The Fundamental Limitations of Learning Linear-Quadratic Regulators ###### Abstract We present a local minimax lower bound on the excess cost of designing a linear-quadratic controller from offline data. The bound is valid for any offline exploration policy that consists of a stabilizing controller and an energy bounded exploratory input. The derivation leverages a relaxation of the minimax estimation problem to Bayesian estimation, and an application of Van Trees' inequality. We show that the bound aligns with system-theoretic intuition. In particular, we demonstrate that the lower bound increases when the optimal control objective value increases. We also show that the lower bound increases when the system is poorly excitable, as characterized by the spectrum of the controllability gramian of the system mapping the noise to the state and the \(\mathcal{H}_{\infty}\) norm of the system mapping the input to the state. We further show that for some classes of systems, the lower bound may be exponential in the state dimension, demonstrating exponential sample complexity for learning the linear-quadratic regulator offline. ## 1 Introduction Reinforcement Learning (RL) has demonstrated success in a variety of domains, including robotics (Levine et al., 2016) and games (Silver et al., 2017). However, it is known to be very data intensive, making it challenging to apply to complex control tasks. This has motivated efforts by both the machine learning and control communities to understand the statistical hardness of RL in analytically tractable settings, such as the tabular setting (Azar et al., 2017) and the linear-quadratic control setting (Simchowitz and Foster, 2020). Such studies provide insights into the fundamental limitations of RL, and the efficiency of particular algorithms. There are two common problems of interest for understanding the statistical hardness of RL from the perspective of learning a linear-quadratic regulator (LQR): online LQR, and offline LQR. 
Online LQR models an interactive problem in which the learning agent attempts to minimize a regret-based objective, while simultaneously learning the dynamics (Abbasi-Yadkori and Szepesvari, 2011). Offline LQR models a two-step pipeline, where data from the system is collected, and then used to design a controller (Dean et al., 2019). Guarantees in the online setting are in the form of regret bounds, whereas the offline setting focuses on Probably Approximately Correct (PAC) guarantees. The high data requirements of RL often render offline approaches the only feasible option for physical systems (Levine et al., 2020). Despite this fact, recent years have seen greater efforts to provide lower bounds for the online LQR problem (Ziemann and Sandberg, 2022; Tsiamis et al., 2022). Meanwhile, lower bounds in the offline LQR setting are conspicuously absent. Motivated by this fact, we derive lower bounds for designing a linear-quadratic controller from offline data.

Notation: The Euclidean norm of a vector \(x\) is denoted by \(\|x\|\). The quadratic norm of a vector \(x\) with respect to a matrix \(P\) is denoted \(\|x\|_{P}=\sqrt{x^{\top}Px}\). For a matrix \(A\), the spectral norm is denoted \(\|A\|\) and the Frobenius norm is denoted \(\|A\|_{F}\). The spectral radius of a square matrix \(A\) is denoted \(\rho(A)\). A symmetric, positive semidefinite matrix \(A=A^{\top}\) is denoted \(A\succeq 0\), and a symmetric, positive definite matrix is denoted \(A\succ 0\). Similarly, \(A\succeq B\) denotes that \(A-B\) is positive semidefinite. The eigenvalues of a symmetric positive definite matrix \(A\in\mathbb{R}^{n\times n}\) are denoted \(\lambda_{1}(A),\ldots,\lambda_{n}(A)\), and are sorted in non-ascending order. We also denote \(\lambda_{1}(A)=\lambda_{\max}(A)\), and \(\lambda_{n}(A)=\lambda_{\min}(A)\). For a matrix \(A\), the vectorization operator \(\mathsf{vec}\,A\) maps \(A\) to a column vector by stacking the columns of \(A\).
The Kronecker product of \(A\) with \(B\) is denoted \(A\otimes B\). Expectation and probability with respect to all the randomness of the underlying probability space are denoted \(\mathbf{E}\) and \(\mathbf{P}\), respectively. Conditional expectation and probability given the random variable \(X\) are denoted by \(\mathbf{E}[\cdot|X]\) and \(\mathbf{P}[\cdot|X]\). For an event \(\mathcal{G}\), \(\mathbf{1}_{\mathcal{G}}\) denotes the indicator function for \(\mathcal{G}\). For a matrix \(A\in\mathbb{R}^{d_{\mathsf{X}}\times d_{\mathsf{X}}}\) and a symmetric matrix \(Q\in\mathbb{R}^{d_{\mathsf{X}}\times d_{\mathsf{X}}}\), we denote the solution \(P\) to the discrete Lyapunov equation, \(A^{\top}PA-P+Q=0\), by \(\mathtt{dlyap}(A,Q)\). If we also have \(B\in\mathbb{R}^{d_{\mathsf{X}}\times d_{\mathsf{U}}}\) and \(R\in\mathbb{R}^{d_{\mathsf{U}}\times d_{\mathsf{U}}}\), \(R\succ 0\), we denote the solution \(P\) to the discrete algebraic Riccati equation \(Q+A^{\top}PA-P-A^{\top}PB(B^{\top}PB+R)^{-1}B^{\top}PA=0\) by \(\mathtt{DARE}(A,B,Q,R)\). We use the indexing shorthand \([K]:=\{1,\ldots,K\}\).

Problem Formulation: Let \(\theta\in\mathbb{R}^{d_{\Theta}}\) be an unknown parameter. We study the fundamental limitations to learning to control the following parametric system model: \[X_{t+1}\!=\!A(\theta)X_{t}\!+\!B(\theta)U_{t}\!+\!W_{t},\,X_{0}\!=\!0,\quad t\!=\!0,1,\ldots. \tag{1}\] The noise process \(W_{t}\) is assumed to be iid mean-zero Gaussian with fixed covariance matrix \(\Sigma_{W}\succ 0\). The matrices \(A(\theta)\in\mathbb{R}^{d_{\mathsf{X}}\times d_{\mathsf{X}}}\) and \(B(\theta)\in\mathbb{R}^{d_{\mathsf{X}}\times d_{\mathsf{U}}}\) are known continuously differentiable functions of the unknown parameter. The system \((A(\theta),B(\theta))\) is assumed to be stabilizable. We assume that the learner is given access to \(N\in\mathbb{N}\) experiments \((X_{0,n},\ldots,X_{T-1,n}),n\in[N]\) from (1) of length \(T\in\mathbb{N}\).
The input signal during these experiments is \[U_{t,n}=FX_{t,n}+\tilde{U}_{t,n}, \tag{2}\] where \(F\) renders the system stable1, i.e. \(\rho(A(\theta)+B(\theta)F)<1\). Meanwhile, \(\tilde{U}_{t,n}\) is an exploration component with energy budget \(\sigma_{\tilde{u}}^{2}NT\),2 where \(\sigma_{\tilde{u}}\in\mathbb{R}_{+}\). More precisely, \(\tilde{U}_{t,n}\) may be selected as a function of past observations \((X_{0,n},\ldots,X_{t,n})\), past trajectories \((X_{0,m},\ldots,X_{T-1,m}),m<n\) and possible auxiliary randomization, while being constrained to the energy budget Footnote 1: Access to a stabilizing controller is often assumed in unstable system identification Ljung (1998). Open-loop unstable identification leads to poor conditioning. Footnote 2: The choice to place a budget on the exploratory input \(\tilde{U}_{t,n}\) rather than the total input \(U_{t,n}\) is for ease of exposition. The energy of the exploratory input is bounded by the total budget, which is sufficient for our bounds. \[\frac{1}{NT}\sum_{n=1}^{N}\sum_{t=0}^{T-1}\mathbf{E}_{\theta}\,\tilde{U}_{t,n}^{\top}\tilde{U}_{t,n}\leq\sigma_{\tilde{u}}^{2}. \tag{3}\] This formulation allows both open- and closed-loop experiments, but normalizes the average exploratory input energy to \(\sigma^{2}_{\tilde{u}}\). The subscript \(\theta\) on the expectation denotes that the system is rolled out with parameter \(\theta\). For a fixed parameter \(\theta\), we denote the data collected from these experiments by the random variable \(\mathcal{Z}:=\{(X_{t,n},U_{t,n})_{t=0}^{T-1}\}_{n=1}^{N}\). The learner deploys a policy \(\pi\) which is a measurable function of the \(N\) offline experiments and the current state. In particular, the learner maps the offline data and the current state to the control input, \(U_{t}=\pi(X_{t};\mathcal{Z})\). This is the case if the learner outputs a non-adaptive state feedback controller designed with the offline data.
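To make the experiment model concrete, the following sketch rolls out (1)-(2) with an iid Gaussian exploratory input, which meets the energy budget (3) in expectation. All numerical values, including the system matrices and the stabilizing gain \(F\), are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true system (a fixed theta): X_{t+1} = A X_t + B U_t + W_t.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Sigma_W = 0.1 * np.eye(2)

F = np.array([[0.0, -0.5]])   # stabilizing: eig(A + B F) = {0.9, 0.3}
sigma_u = 1.0                 # exploration budget per Eq. (3)
N, T = 5, 200                 # number and length of offline experiments

assert np.max(np.abs(np.linalg.eigvals(A + B @ F))) < 1

Z = []  # offline dataset Z = {(X_{t,n}, U_{t,n})}
for n in range(N):
    x = np.zeros(2)
    traj = []
    for t in range(T):
        u_tilde = sigma_u * rng.standard_normal(1)  # iid exploratory input
        u = F @ x + u_tilde                         # Eq. (2)
        traj.append((x.copy(), u.copy()))
        x = A @ x + B @ u + rng.multivariate_normal(np.zeros(2), Sigma_W)
    Z.append(traj)
```

An iid zero-mean exploratory input with variance \(\sigma_{\tilde{u}}^{2}\) is only one admissible choice; (3) also permits inputs adapted to past observations.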
The goal of the learner is to minimize the cost defined by: \[V_{T}^{\pi}(\theta)\!:=\!\frac{1}{T}\mathbf{E}_{\theta}^{\pi} \!\left[\sum_{t=0}^{T-1}\!\left(X_{t}^{\top}QX_{t}+U_{t}^{\top}RU_{t}\right)\! +\!X_{T}^{\top}Q_{T}(\theta)X_{T}\right]\!.\] The expectation is over both the offline experiments, and a new evaluation rollout. Single subscripts on the states and actions, \(X_{t}\) and \(U_{t}\), refer to the evaluation rollout at time \(t\). The superscript on the expectation denotes that the inputs applied in the evaluation rollout follow the policy \(U_{t}=\pi(X_{t};\mathcal{Z})\). Note that due to the dependence of the terminal cost \(Q_{T}(\theta)\) on the unknown parameter \(\theta\), the learner does not explicitly know the cost function it is minimizing. This is not an issue: it simply means that the learner must infer the objective function from the collected data. The following assumption guarantees the existence of a static state feedback controller that minimizes \(V_{T}^{\pi}(\theta)\). **Assumption 1.1**.: _We assume \((A(\theta),B(\theta))\) is stabilizable, \((A(\theta),Q^{1/2})\) is detectable, and \(R\succ 0\) and that \(Q_{T}(\theta)=P(\theta)\), where \(P(\theta)=\texttt{DARE}(A(\theta),B(\theta),Q,R)\)._ Under this assumption, the optimal policy for the known system is \(U_{t}=K(\theta)X_{t}\), where \(K(\theta)\) is the LQR: \[K(\theta)=-(B(\theta)^{\top}P(\theta)B(\theta)+R)^{-1}B(\theta) ^{\top}P(\theta)A(\theta).\] In light of this, we focus on the case in which the search space of the learner is the class of linear time-invariant state feedback policies where the gain is a measurable function of the past \(N\) experiments3. This set is denoted \(\Pi_{\text{lin}}\). Footnote 3: This assumption is not critical, and may be removed without significantly changing the result. See the proof of the main result in Ziemann and Sandberg (2022) for details on how to remove this assumption. 
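Under Assumption 1.1, \(P(\theta)\) and \(K(\theta)\) are straightforward to compute numerically. A minimal sketch with hypothetical system matrices, using SciPy's discrete-time Riccati solver in place of \(\mathtt{DARE}(\cdot)\):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical (A(theta), B(theta)) with identity stage costs.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
Sigma_W = 0.1 * np.eye(2)

# P(theta) = DARE(A, B, Q, R) and the optimal LQR gain
# K(theta) = -(B' P B + R)^{-1} B' P A.
P = solve_discrete_are(A, B, Q, R)
K = -np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)

# P satisfies the Riccati fixed point, and A + B K is stable.
residual = Q + A.T @ P @ A - P \
    - A.T @ P @ B @ np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)
assert np.allclose(residual, 0)
assert np.max(np.abs(np.linalg.eigvals(A + B @ K))) < 1

# tr(P Sigma_W): the long-run average cost under the policy U_t = K X_t.
optimal_cost = float(np.trace(P @ Sigma_W))
```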
The stochastic LQR cost \(V_{T}^{\pi}(\theta)\) may be represented in terms of the gap between the control actions taken by the policy \(\pi\) and the optimal policy, as shown below. **Lemma 1.1** (Lemma 11.2 of Soderstrom (2002)).: _We have that_ \[V_{T}^{\pi}(\theta)=\operatorname{tr}(P(\theta)\Sigma_{W})+ \frac{1}{T}\sum_{t=0}^{T-1}\!\mathbf{E}_{\theta}^{\pi}\left\|U_{t}-K(\theta)X _{t}\right\|_{\Psi(\theta)}^{2},\] _where \(\Psi(\theta):=B^{\top}(\theta)P(\theta)B(\theta)+R\)._ Using the above lemma, the objective of the learner may be restated from minimizing \(V_{T}^{\pi}(\theta)\) to minimizing the excess cost: \[\mathsf{EC}_{T}^{\pi}(\theta)\!:=\!V_{T}^{\pi}(\theta)\!-\!\inf _{\hat{\pi}}V_{T}^{\hat{\pi}}(\theta)\!=\!\frac{1}{T}\!\sum_{t=0}^{T-1}\! \mathbf{E}_{\theta}^{\pi}\!\left\|U_{t}\!-\!K(\theta)X_{t}\right\|_{\Psi( \theta)}^{2}. \tag{4}\] The second equality follows from the representation of the stochastic LQR cost in Lemma 1.1 by cancelling the constant terms. Note that the infimum in the second term is given access to the true parameter value \(\theta\), and will therefore be attained by the optimal LQR controller. In particular, it does not rely upon the offline experimental data. We denote this optimal policy by \(\pi_{\theta}(X_{t};\mathcal{Z})=K(\theta)X_{t}\). Our objective is to lower bound the excess cost for any learning agent in the class \(\Pi_{\mathsf{lin}}\). To this end, we introduce the \(\varepsilon\)-local minimax excess cost: \[\mathcal{EC}_{T}^{\mathsf{lin}}(\theta,\varepsilon):=\inf_{\pi\in\Pi_{ \mathsf{lin}}}\sup_{\|\theta^{\prime}-\theta\|\leq\varepsilon}\mathsf{EC}_{T}^ {\pi}(\theta^{\prime}). \tag{5}\] To motivate this choice, first note that if we were instead interested in an excess cost bound for only a single value of \(\theta\) that holds for all estimators, the optimal policy would trivially be the LQR, \(\pi(X_{t},\mathcal{Z})=K(\theta)X_{t}\). This policy would result in a lower bound of zero. 
By instead requiring that the learner perform well on all parameter instances in a nearby neighborhood, we remove the possibility of the trivial solution, and can achieve meaningful lower bounds. The emphasis on the nearby neighborhood in (5) is essential. Even as the local neighborhood defined by the ball of radius \(\varepsilon\), \(\mathcal{B}(\theta,\varepsilon)=\{\theta^{\prime}:\|\theta^{\prime}-\theta\|\leq\varepsilon\}\), becomes sufficiently small, we are still able to provide instance-specific lower bounds for a single parameter value \(\theta\). Therefore, the \(\varepsilon\)-local minimax excess cost is a much stronger notion than the standard _global_ minimax excess cost, \(\inf_{\pi\in\Pi_{\mathsf{lin}}}\sup_{\theta^{\prime}}\mathsf{EC}_{T}^{\pi}(\theta^{\prime})\), as it does not require our estimator to perform well on _all possible_ parameter values but only those in a small (possibly infinitesimal) neighborhood. Indeed, the global minimax excess cost for learning the optimal controller of the class of unknown stable scalar systems is infinite, as shown in Corollary 2.2, and illustrated in Figure 1. Our focus in obtaining the lower bound on \(\mathcal{EC}_{T}^{\mathsf{lin}}(\theta,\varepsilon)\) is to gain an understanding of what system-theoretic quantities render the learning problem statistically challenging. To this end, our lower bound depends on familiar system-theoretic quantities, such as \(P(\theta)\). The covariance of the state under the optimal LQR controller also appears in our analysis. Under the optimal LQR controller, the covariance of the state converges to the stationary covariance as \(t\to\infty\): \[\Sigma_{X}(\theta):=\lim_{t\to\infty}\mathbf{E}_{\theta}^{\pi_{\theta}}\Big{[}X_{t}X_{t}^{\top}\Big{]}=\mathtt{dlyap}((A(\theta)+B(\theta)K(\theta))^{\top},\Sigma_{W}).\]

### Contributions

Our main contribution is the following theorem. For the formal statements, see Theorem 2.2 and Corollary 2.1.
**Theorem 1.1** (Main result, Informal).: _The \(\varepsilon\)-local minimax excess cost is lower bounded as_ \[\text{excess cost}\geq\frac{\text{system-theoretic condition number}}{\text{\# data points}\times\text{signal-to-noise ratio}}.\] In the above bound, the system-theoretic condition number depends on familiar system-theoretic quantities such as the covariance of the state under the optimal controller, and the solution to the Riccati equation. The signal-to-noise ratio depends on how easily the system is excited via both the exploratory input and the noise. This signal-to-noise ratio may be quantified in terms of the controllability gramian of the system, as well as the exploratory input budget. We also study several consequences of the above result by restricting attention to the setting where all system parameters are unknown, i.e. \(\mathsf{vec}\begin{bmatrix}A(\theta)&B(\theta)\end{bmatrix}=\theta\). In this setting, Theorem 1.1 may be reduced to \(\mathcal{E}\mathcal{C}_{T}^{\text{lin}}(\theta,\varepsilon)\geq\frac{c(\theta,\varepsilon)}{NT}\), where \(c(\theta,\varepsilon)\) is easily interpretable. In particular, we may reach the following conclusions:

- For classes of systems where the operator norms of system-theoretic matrices such as the controllability gramian and the solution to the Riccati equation are constant with respect to dimension, we may take \(c(\theta,\varepsilon)\propto d_{\mathsf{U}}d_{\mathsf{X}}\). Combining results from Mania et al. (2019) and Tu et al. (2022) demonstrates that when \(d_{\mathsf{U}}\leq d_{\mathsf{X}}\), the upper bound on the excess cost is also proportional to \(\frac{d_{\mathsf{X}}d_{\mathsf{U}}}{NT}\). In particular, our bound is optimal in the dimension for underactuated systems when the remaining system-theoretic quantities are constant with respect to dimension.

- There exist classes of systems for which we may take \(c(\theta,\varepsilon)\propto\exp(d_{\mathsf{X}})\). This demonstrates that the excess cost of a learned LQR controller may grow exponentially in the dimension.

- The lower bound grows in an interpretable manner with familiar system-theoretic quantities. In particular, we may take \(c(\theta,\varepsilon)\) to grow with the eigenvalues of both the solution to the Riccati equation, \(P(\theta)\), and the state covariance under the optimal controller, \(\Sigma_{X}(\theta)\). This suggests that the problem of learning to control a system with a small gap from the optimal controller is data intensive when controlling the underlying system is hard.

### Related Work

System Identification: System identification is often a first step in designing a controller from experimental data, and has a longstanding history. The text Ljung (1998) covers classic asymptotic results. Control-oriented identification was studied in Chen and Nett (1993); Helmicki et al. (1991). Recently, there has been interest in finite-sample analysis for fully-observed linear systems (Dean et al., 2019; Simchowitz et al., 2018; Faradonbeh et al., 2018; Sarkar and Rakhlin, 2019), and partially-observed linear systems (Oymak and Ozay, 2019; Sarkar et al., 2021; Simchowitz et al., 2018; Tsiamis and Pappas, 2019; Lee and Lamperski, 2020; Zheng and Li, 2020). Lower bounds for the sample complexity of system identification are presented in Jedra and Proutiere (2019); Tsiamis and Pappas (2021). For a more extensive discussion of prior work, we refer to the survey by Tsiamis et al. (2022).

Figure 2: A classic model-based pipeline for learning a controller from data.

Learning Controllers Offline: Learning a controller from offline data is a familiar paradigm for control theorists and practitioners. It typically consists of system identification, followed by robust (Zhou et al., 1996) or certainty-equivalent (Simon, 1956) control design; see Figure 2. Recent work provides finite-sample guarantees for such methods (Dean et al., 2019; Mania et al., 2019).
Upper and lower bounds on the sample complexity of stabilization from offline data are presented in Tsiamis et al. (2022). The RL community has a similar paradigm, known as offline RL (Levine et al., 2020). Policy gradient approaches are model-free algorithms suitable for offline RL, and are analyzed in Fazel et al. (2018). Lower bounds on the variance of the gradient estimates in policy gradient approaches are supplied in Ziemann et al. (2022). Lower bounds for offline linear control are also studied in Wagenmaker et al. (2021) with the objective of designing optimal experiments. We instead focus on the LQR setting to understand the dependence of the excess cost on interpretable system-theoretic quantities.

**Online LQR** The problem of learning the optimal LQR controller online has a rich history beginning with Astrom and Wittenmark (1973). Regret minimization was introduced in Lai (1986); Lai and Wei (1986). The study of regret in online LQR was re-initiated by Abbasi-Yadkori and Szepesvari (2011), inspired by works in the RL community. Many works followed to propose algorithms which were computationally tractable (Ouyang et al., 2017; Dean et al., 2018; Abeille and Lazaric, 2018; Mania et al., 2019; Cohen et al., 2019; Faradonbeh et al., 2020; Jedra and Proutiere, 2021). Lower bounds on the regret of online LQR are presented in Simchowitz and Foster (2020); Cassel et al. (2020); Ziemann and Sandberg (2022). The results in this paper follow a similar proof to Ziemann and Sandberg (2022). The primary difference is that since our controller is designed via offline data, we may not make use of the exploration-exploitation tradeoff to upper bound the information available to the learner, as is done in Ziemann and Sandberg (2022).

## 2 Excess Cost Lower Bound

We now proceed to establish our lower bound.
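Before developing the bound, it may help to pin down the recurring objects numerically. The following minimal Python sketch (the 2-state system and all numerical values are illustrative assumptions, not from the paper) computes the Riccati solution \(P(\theta)\), the optimal gain \(K(\theta)\) under the convention \(A_{cl}=A+BK\), the weighting \(\Psi(\theta)=B^{\top}PB+R\), and the stationary state covariance \(\Sigma_{X}(\theta)\):

```python
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

# Illustrative toy system (all values are assumptions, not from the paper).
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)
Sigma_W = 0.1 * np.eye(2)

# Riccati solution P(theta) of the discrete-time algebraic Riccati equation.
P = solve_discrete_are(A, B, Q, R)

# Optimal gain with the paper's sign convention, so that A_cl = A + B K.
Psi = B.T @ P @ B + R                      # Psi(theta) = B' P B + R
K = -np.linalg.solve(Psi, B.T @ P @ A)     # K(theta)
A_cl = A + B @ K

# Stationary state covariance under the optimal controller:
# Sigma_X solves A_cl Sigma_X A_cl' - Sigma_X + Sigma_W = 0.
Sigma_X = solve_discrete_lyapunov(A_cl, Sigma_W)
```

The relation \(\Sigma_{X}\succeq\Sigma_{W}\), which holds because \(\Sigma_{X}=\Sigma_{W}+A_{cl}\Sigma_{X}A_{cl}^{\top}\), reappears below when the state covariance under a learned controller is lower bounded by the noise covariance.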
As we are interested in the worst-case excess cost from any element of \(\mathcal{B}(\theta,\varepsilon)\), we make the additional assumption that \(F\) stabilizes \((A(\theta^{\prime}),B(\theta^{\prime}))\) for all \(\theta^{\prime}\in\mathcal{B}(\theta,\varepsilon)\).4 This also ensures that the optimal LQR controller exists for all points in the prior.

Footnote 4: We ultimately study the limit as \(\varepsilon\) becomes small. Therefore, this is not significantly stronger than assuming that \(F\) stabilizes \((A(\theta),B(\theta))\).

To obtain a lower bound on the local minimax excess cost, we lower bound the maximization over \(\theta^{\prime}\in\mathcal{B}(\theta,\varepsilon)\) by an average over a distribution supported on \(\mathcal{B}(\theta,\varepsilon)\). This reduces the problem to lower bounding a Bayesian complexity. Instead of fixing the parameter \(\theta\), we let \(\Theta\) be a random vector taking values in \(\mathbb{R}^{d_{\Theta}}\) and suppose that it has prior density \(\lambda\). Doing so enables the use of information-theoretic tools to lower bound the complexity of estimating the parameter from data. The relaxation of the maximization is shown in the following lemma.

**Lemma 2.1**.: _Fix \(\varepsilon>0\) and let \(\lambda\) be any prior on \(\mathcal{B}(\theta,\varepsilon)\). Then for any \(\pi\in\Pi^{\mathsf{lin}}\) with \(\pi(X_{t},\mathcal{Z})=\hat{K}(\mathcal{Z})X_{t}\),_ \[\sup_{\theta^{\prime}\in\mathcal{B}(\theta,\varepsilon)}\mathsf{EC}_{T}^{\pi}(\theta^{\prime})\geq\textbf{E}\,\mathrm{tr}\Big{(}[\hat{K}(\mathcal{Z})-K(\Theta)]^{\top}\Psi(\Theta)[\hat{K}(\mathcal{Z})-K(\Theta)]\Sigma_{\Theta}^{\hat{K}(\mathcal{Z})}\Big{)},\] _where \(\Sigma_{\Theta}^{\hat{K}(\mathcal{Z})}:=\frac{1}{T}\sum_{t=0}^{T-1}\textbf{E}^{\pi}\big{[}X_{t}X_{t}^{\top}|\mathcal{Z},\Theta\big{]}\). The expectation is over the prior \(\Theta\sim\lambda\), and the randomness of both the offline rollouts and the evaluation rollout.
We recall the shorthand \(\Psi(\Theta)=B(\Theta)^{\top}P(\Theta)B(\Theta)+R\)._

Proof.: By the quadratic expression for the excess cost in (4) and the fact that the supremum over a set always exceeds the weighted average over a set, we have the following inequality: \[\sup_{\|\theta^{\prime}-\theta\|\leq\varepsilon}\mathsf{EC}_{T}^{\pi}(\theta^{\prime})\geq\underset{\Theta\sim\lambda}{\mathbf{E}}R_{T}^{\pi}(\Theta)=\frac{1}{T}\,\textbf{E}\sum_{t=0}^{T-1}\textbf{E}^{\pi}\Big{[}\|U_{t}-K(\Theta)X_{t}\|_{\Psi(\Theta)}^{2}\,|\mathcal{Z},\Theta\Big{]}=\textbf{E}\,\mathrm{tr}\big{(}[\hat{K}(\mathcal{Z})-K(\Theta)]^{\top}\Psi(\Theta)[\hat{K}(\mathcal{Z})-K(\Theta)]\Sigma_{\Theta}^{\hat{K}(\mathcal{Z})}\big{)}.\] The second to last equality follows by the tower rule. The last equality results by substituting \(U_{t}=\hat{K}(\mathcal{Z})X_{t}\), followed by the trace-cyclic property and linearity of expectation.

We may treat the data from offline experimentation, \(\mathcal{Z}\), as an observation of the underlying parameter \(\Theta\). In particular, \(\mathcal{Z}\) may be expressed as a random vector taking values in \(\mathbb{R}^{NT(d_{\mathsf{X}}+d_{\mathsf{U}})}\) with conditional density \(p(\cdot|\theta)\). The following Fisher information matrix and prior density concentration matrix measure estimation performance of \(\Theta\) from the sample \(\mathcal{Z}\) with respect to the square loss: \[\mathsf{I}_{p}(\theta):=\int\left(\frac{\nabla_{\theta}p(z|\theta)}{p(z|\theta)}\right)\left(\frac{\nabla_{\theta}p(z|\theta)}{p(z|\theta)}\right)^{\top}p(z|\theta)dz, \tag{6}\] \[\mathsf{J}(\lambda):=\int\left(\frac{\nabla_{\theta}\lambda(\theta)}{\lambda(\theta)}\right)\left(\frac{\nabla_{\theta}\lambda(\theta)}{\lambda(\theta)}\right)^{\top}\lambda(\theta)d\theta. \tag{7}\] The first quantity (6) measures the information content of the sample \(\mathcal{Z}\) with regard to \(\Theta\).
The second quantity (7) measures the concentration of the prior density \(\lambda\). As the gradient operator \(\nabla_{\theta}\) maps to a vector of dimension \(d_{\Theta}\), both \(\mathsf{I}_{p}(\theta)\) and \(\mathsf{J}(\lambda)\) are \(d_{\Theta}\times d_{\Theta}\) dimensional. See Ibragimov and Has'minskii (2013) for further details about these integrals and their existence.

As we seek lower bounds for estimating \(K(\Theta)\) instead of just \(\Theta\), we must account for the transformation from a quadratic loss over the error in estimating \(\Theta\) to the error in estimating \(K(\Theta)\), as appears in Lemma 2.1. To do so, we introduce the Van Trees inequality (van Trees, 2004; Bobrovsky et al., 1987). We first impose the following standard regularity conditions:

**Assumption 2.1**.:

1. _The prior \(\lambda\) is smooth with compact support._
2. _The conditional density of \(\mathcal{Z}\) given \(\Theta\), \(p(z|\cdot)\), is continuously differentiable on the domain of \(\lambda\) for almost every \(z\)._
3. _The score (see Footnote 5) has mean zero: \(\int\left(\frac{\nabla_{\theta}p(z|\theta)}{p(z|\theta)}\right)p(z|\theta)dz=0\)._
4. _\(\mathsf{J}(\lambda)\) is finite and \(\mathsf{I}_{p}(\theta)\) is a continuous function of \(\theta\) on the domain of \(\lambda\)._
5. _\(\mathsf{vec}\,K\) is differentiable on the domain of \(\lambda\)._

Footnote 5: The score is the gradient of the log-likelihood. It evaluates to \(\frac{\nabla_{\theta}p(z|\theta)}{p(z|\theta)}\).

The following theorem is a less general adaptation from Bobrovsky et al. (1987) which suffices for our needs.

**Theorem 2.1** (Van Trees Inequality).: _Fix two random variables \((\mathcal{Z},\Theta)\sim p(\cdot|\cdot)\lambda(\cdot)\) and suppose Assumption 2.1 holds. Let \(\mathcal{G}\) be a \(\sigma(\mathcal{Z})\)-measurable event._
_Then for any \(\sigma(\mathcal{Z})\)-measurable \(\hat{K}\):_ \[\begin{split}&\boldsymbol{E}\left[\mathsf{vec}(\hat{K}(\mathcal{Z})-K(\Theta))\,\mathsf{vec}(\hat{K}(\mathcal{Z})-K(\Theta))^{\top}\mathbf{1}_{\mathcal{G}}\right]\\ &\succeq\boldsymbol{E}[\mathsf{D}_{\theta}\,\mathsf{vec}\,K(\Theta)\mathbf{1}_{\mathcal{G}}]^{\top}\left[\boldsymbol{E}\,\mathsf{I}_{p}(\Theta)+\mathsf{J}(\lambda)\right]^{-1}\boldsymbol{E}[\mathsf{D}_{\theta}\,\mathsf{vec}\,K(\Theta)\mathbf{1}_{\mathcal{G}}].\end{split} \tag{8}\] _The notation \(\mathsf{D}_{\theta}\,\mathsf{vec}\,K(\cdot)\) above follows the standard convention for a Jacobian: it stacks the transposed gradients of each element of \(\mathsf{vec}\,K(\cdot)\) into a \(d_{\mathsf{X}}d_{\mathsf{U}}\times d_{\Theta}\) dimensional matrix._

We see from Theorem 2.1 that the transformation to the error in estimating \(K(\Theta)\) is accounted for by \(\mathsf{D}_{\theta}\,\mathsf{vec}\,K(\cdot)\). We now massage the lower bound in Lemma 2.1 to a form compatible with Theorem 2.1. Doing so requires us to express the lower bound as a quadratic form conditioned on some \(\sigma(\mathcal{Z})\)-measurable event \(\mathcal{G}\). We therefore select an event \(\mathcal{G}\) for which we may uniformly lower bound the quantities \(\Psi(\Theta)\) and \(\Sigma_{\Theta}^{\hat{K}(\mathcal{Z})}\). To this end, we define positive definite matrices \(\Psi_{\theta,\varepsilon}\) and \(\Sigma_{\theta,\varepsilon}\) that satisfy \[\Psi(\theta^{\prime})\succeq\Psi_{\theta,\varepsilon}\text{ and }\frac{1}{2}\Sigma_{X}(\theta^{\prime})\succeq\Sigma_{\theta,\varepsilon}\quad\forall\theta^{\prime}\in\mathcal{B}(\theta,\varepsilon). \tag{9}\] The matrix \(\Psi_{\theta,\varepsilon}\) will serve to uniformly lower bound \(\Psi(\Theta)\).
When the learned controller is close to the optimal controller, the covariance of the state under the learned controller will be close to the covariance of the state under the optimal controller, which is in turn lower bounded in terms of \(\Sigma_{\theta,\varepsilon}\). In particular, if \(\left\|\hat{K}(\mathcal{Z})-K(\Theta)\right\|\) is sufficiently small, we can argue that \(\Sigma_{\Theta}^{\hat{K}(\mathcal{Z})}\succeq\frac{1}{2}\Sigma_{X}(\Theta)\succeq\Sigma_{\theta,\varepsilon}\). The aforementioned condition on \(\left\|\hat{K}(\mathcal{Z})-K(\Theta)\right\|\) will hold only if there is a large amount of data available to fit \(\hat{K}(\mathcal{Z})\). To achieve a bound that holds in the low data regime, we observe that the state covariance under the learned controller is always lower bounded by the noise covariance: \(\Sigma_{\Theta}^{\hat{K}(\mathcal{Z})}\succeq\Sigma_{W}\). For this reason, the subsequent results will be presented in two parts: one in which we condition on an event where \(\left\|\hat{K}(\mathcal{Z})-K(\theta)\right\|\) is small, and one that holds generally. To present these results concisely, the positive definite matrix \(\Gamma_{\theta,\varepsilon}\) is used to denote either \(\Sigma_{W}\) or \(\Sigma_{\theta,\varepsilon}\). The Kronecker product of these lower bounds arises frequently, motivating the shorthand \[\Xi_{\theta,\varepsilon}:=\Gamma_{\theta,\varepsilon}\otimes\Psi_{\theta,\varepsilon}. \tag{10}\]

**Lemma 2.2** (Application of the Van Trees Inequality).: _For any smooth prior \(\lambda\) on \(\mathcal{B}(\theta,\varepsilon)\) and any \(\pi\in\Pi^{\mathsf{lin}}\) with \(\pi(X_{t},\mathcal{Z})=\hat{K}(\mathcal{Z})X_{t}\),_ \[\sup_{\theta^{\prime}\in\mathcal{B}(\theta,\varepsilon)}\mathsf{EC}_{T}^{\pi}(\theta^{\prime})\geq\frac{\operatorname{tr}\big{(}\Xi_{\theta,\varepsilon}\,\textbf{E}[\mathsf{D}_{\theta}\,\mathsf{vec}\,K(\Theta)\mathbf{1}_{\mathcal{G}}]\,\textbf{E}[\mathsf{D}_{\theta}\,\mathsf{vec}\,K(\Theta)\mathbf{1}_{\mathcal{G}}]^{\top}\big{)}}{\|\textbf{E}\,\mathsf{I}_{p}(\Theta)+\mathsf{J}(\lambda)\|}, \tag{11}\] _where either_

1) _\(\Gamma_{\theta,\varepsilon}=\Sigma_{W}\) and \(\mathcal{G}=\Omega\), or_

2) _\(\Gamma_{\theta,\varepsilon}=\Sigma_{\theta,\varepsilon}\) and \(\mathcal{G}=\mathcal{E}\), if \(T\geq\sup_{\theta^{\prime}\in\mathcal{B}(\theta,\varepsilon)}\frac{16\,\|\Sigma_{X}(\theta^{\prime})\|^{2}}{\lambda_{\min}(\Sigma_{X}(\theta^{\prime}))}\)._

_The event \(\Omega\) is the entire sample space, i.e. \(\mathbb{P}[\Omega]=1\), and_ \[\mathcal{E}=\Bigg{\{}\sup_{\theta^{\prime}\in\mathcal{B}(\theta,\varepsilon)}\Big{\|}\hat{K}(\mathcal{Z})-K(\theta^{\prime})\Big{\|}\leq\alpha\Bigg{\}},\] \[\alpha=\inf_{\theta^{\prime}\in\mathcal{B}(\theta,\varepsilon)}\min\Bigg{\{}\frac{\|A_{cl}(\theta^{\prime})\|}{\|B(\theta^{\prime})\|},\frac{\lambda_{\min}(\Sigma_{X}(\theta^{\prime}))/24}{\|A_{cl}(\theta^{\prime})\|\,\|B(\theta^{\prime})\|\,\mathcal{J}(A_{cl}(\theta^{\prime}))\,\|\Sigma_{X}(\theta^{\prime})\|}\Bigg{\}}.\] _Here, \(A_{cl}(\theta)=A(\theta)+B(\theta)K(\theta)\) and \(\mathcal{J}(A_{cl}(\theta))=\sum_{t=0}^{\infty}\big{\|}A_{cl}(\theta)^{t}\big{\|}^{2}\)._

Proof.: We always have that \(\Sigma_{\Theta}^{\hat{K}(\mathcal{Z})}\succeq\Sigma_{W}\).
Lemma A.2 shows that if \(T\geq\frac{16\|\Sigma_{X}(\Theta)\|^{2}}{\lambda_{\min}(\Sigma_{X}(\Theta))}\), then under event \(\mathcal{E}\), we have \(\Big{\|}\Sigma_{X}(\Theta)^{-1/2}\Sigma_{\Theta}^{\hat{K}(\mathcal{Z})}\Sigma_{X}(\Theta)^{-1/2}-I\Big{\|}\leq\frac{1}{2}\). This in turn implies that \(\Sigma_{\Theta}^{\hat{K}(\mathcal{Z})}\succeq\Sigma_{\theta,\varepsilon}\mathbf{1}_{\mathcal{E}}\). With this fact in hand, we may replace \(\Psi(\Theta)\) in the lower bound from Lemma 2.1 by \(\Psi_{\theta,\varepsilon}\), and \(\Sigma_{\Theta}^{\hat{K}(\mathcal{Z})}\) by \(\Gamma_{\theta,\varepsilon}\mathbf{1}_{\mathcal{G}}\), where \((\Gamma_{\theta,\varepsilon},\mathcal{G})\) can only be set as \((\Sigma_{\theta,\varepsilon},\mathcal{E})\) if \(T\) is sufficiently large. Then \[\sup_{\theta^{\prime}\in\mathcal{B}(\theta,\varepsilon)}\mathsf{EC}_{T}^{\pi}(\theta^{\prime})\geq\mathbf{E}\operatorname{tr}\big{(}[\hat{K}(\mathcal{Z})-K(\Theta)]^{\top}\Psi_{\theta,\varepsilon}[\hat{K}(\mathcal{Z})-K(\Theta)]\Gamma_{\theta,\varepsilon}\big{)}\mathbf{1}_{\mathcal{G}}\] \[=\mathbf{E}\operatorname{tr}\big{(}[\tilde{K}(\mathcal{Z})-\sqrt{\Psi_{\theta,\varepsilon}}K(\Theta)\sqrt{\Gamma_{\theta,\varepsilon}}]^{\top}[\tilde{K}(\mathcal{Z})-\sqrt{\Psi_{\theta,\varepsilon}}K(\Theta)\sqrt{\Gamma_{\theta,\varepsilon}}]\big{)}\mathbf{1}_{\mathcal{G}}\] \[=\mathbf{E}\operatorname{tr}\big{(}[\mathsf{vec}\,\tilde{K}(\mathcal{Z})-\mathsf{vec}\,\sqrt{\Psi_{\theta,\varepsilon}}K(\Theta)\sqrt{\Gamma_{\theta,\varepsilon}}]\,[\mathsf{vec}\,\tilde{K}(\mathcal{Z})-\mathsf{vec}\,\sqrt{\Psi_{\theta,\varepsilon}}K(\Theta)\sqrt{\Gamma_{\theta,\varepsilon}}]^{\top}\big{)}\mathbf{1}_{\mathcal{G}},\] where \(\tilde{K}(\mathcal{Z})=\sqrt{\Psi_{\theta,\varepsilon}}\hat{K}(\mathcal{Z})\sqrt{\Gamma_{\theta,\varepsilon}}\).
We now invoke the Van Trees inequality, Theorem 2.1: \[\sup_{\theta^{\prime}\in\mathcal{B}(\theta,\varepsilon)}\mathsf{EC}_{T}^{\pi}(\theta^{\prime})\geq\operatorname{tr}\Big{(}\mathbf{E}[\mathsf{D}_{\theta}\,\mathsf{vec}\,\sqrt{\Psi_{\theta,\varepsilon}}K(\Theta)\sqrt{\Gamma_{\theta,\varepsilon}}\mathbf{1}_{\mathcal{G}}]\,[\mathbf{E}\,\mathsf{I}_{p}(\Theta)+\mathsf{J}(\lambda)]^{-1}\mathbf{E}[\mathsf{D}_{\theta}\,\mathsf{vec}\,\sqrt{\Psi_{\theta,\varepsilon}}K(\Theta)\sqrt{\Gamma_{\theta,\varepsilon}}\mathbf{1}_{\mathcal{G}}]^{\top}\Big{)}\] \[=\operatorname{tr}\Big{(}\mathbf{E}[(\sqrt{\Gamma_{\theta,\varepsilon}}\otimes\sqrt{\Psi_{\theta,\varepsilon}})\,\mathsf{D}_{\theta}\,\mathsf{vec}\,K(\Theta)\mathbf{1}_{\mathcal{G}}]\,[\mathbf{E}\,\mathsf{I}_{p}(\Theta)+\mathsf{J}(\lambda)]^{-1}\,\mathbf{E}[(\sqrt{\Gamma_{\theta,\varepsilon}}\otimes\sqrt{\Psi_{\theta,\varepsilon}})\,\mathsf{D}_{\theta}\,\mathsf{vec}\,K(\Theta)\mathbf{1}_{\mathcal{G}}]^{\top}\Big{)},\] where we used that \(\mathsf{vec}\,\sqrt{\Psi_{\theta,\varepsilon}}K(\Theta)\sqrt{\Gamma_{\theta,\varepsilon}}=(\sqrt{\Gamma_{\theta,\varepsilon}}\otimes\sqrt{\Psi_{\theta,\varepsilon}})\,\mathsf{vec}\,K(\Theta)\) in the last line. We conclude by applying the trace-cyclic property and extracting the minimum eigenvalue of \([\mathbf{E}\,\mathsf{I}_{p}(\Theta)+\mathsf{J}(\lambda)]^{-1}\).

Lemma 2.2 may be interpreted according to the following intuition. To design a controller that attains low cost, it is essential to distinguish between two nearby instances of the underlying parameter, \(\theta\) and \(\theta^{\prime}\), from the experimental data, \(\mathcal{Z}\). The Fisher information term in the denominator of the bound in Lemma 2.2 captures the ease with which we can distinguish between \(\theta\) and an infinitesimally perturbed \(\theta^{\prime}\) from the collected data \(\mathcal{Z}\), and can be thought of as a signal-to-noise ratio.
The derivative of the controller appearing in the numerator of the bound in Lemma 2.2 is a change-of-variables term that accounts for the extent to which infinitesimal perturbations of the underlying parameter impact the optimal controller gain. Sensitive perturbations are those which are difficult to detect from the collected data, yet lead to a large change in the controller gain. Such perturbations dictate the statistical hardness of learning an LQR controller. Motivated by this fact, we can select particularly sensitive perturbation directions of the underlying parameter which emphasize the hardness of the problem. To do so, we restrict the support of the prior \(\lambda\) to a lower dimensional subspace.

Before presenting this result, it will be useful to see the expression for the Fisher information matrix arising from this experimental setup. It can be shown via the chain rule of Fisher information that \[\mathsf{I}_{p}(\theta)=\mathbf{E}_{\theta}\sum_{n=1}^{N}\sum_{t=0}^{T-1}\mathsf{D}_{\theta}\,\mathsf{vec}\left[A(\theta)\quad B(\theta)\right]^{\top}\left[Z_{t,n}Z_{t,n}^{\top}\otimes\Sigma_{W}^{-1}\right]\mathsf{D}_{\theta}\,\mathsf{vec}\left[A(\theta)\quad B(\theta)\right], \tag{12}\] where \(Z_{t,n}=\begin{bmatrix}X_{t,n}\\ U_{t,n}\end{bmatrix}\). See, for instance, Lemma 3.1 of Ziemann and Sandberg (2022). With this in hand, the following lemma provides a restriction to lower dimensional priors, which allows us to understand how poor conditioning of the information matrix along any particular parameter perturbation direction pushes through to a challenge in estimating the optimal controller.

**Lemma 2.3**.: _Consider any matrix \(V\in\mathbb{R}^{d_{\Theta}\times k}\) with \(k\leq d_{\Theta}\) which has orthonormal columns.
For any smooth prior \(\lambda\) over \(\left\{\theta+V\tilde{\theta}:\left\|\tilde{\theta}\right\|\leq\varepsilon\right\}\), and any \(\pi\in\Pi^{\mathsf{lin}}\) with \(\pi(X_{t},\mathcal{Z})=\hat{K}(\mathcal{Z})X_{t}\),_ \[\sup_{\theta^{\prime}\in\mathcal{B}(\theta,\varepsilon)}\mathsf{EC}_{T}^{\pi}(\theta^{\prime})\geq\frac{\operatorname{tr}\left(\Xi_{\theta,\varepsilon}\,\boldsymbol{E}[\mathsf{D}_{\theta}\,\mathsf{vec}\,K(\Theta)V\mathbf{1}_{\mathcal{G}}]\,\,\boldsymbol{E}[\mathsf{D}_{\theta}\,\mathsf{vec}\,K(\Theta)V\mathbf{1}_{\mathcal{G}}]^{\top}\right)}{\|V^{\top}(\boldsymbol{E}\,\mathsf{I}_{p}(\Theta)+\mathsf{J}(\lambda))V\|},\] _where \(\Xi_{\theta,\varepsilon}\) is defined in (10) and \(\mathcal{G}\) is defined in Lemma 2.2._

Proof.: We may write \(\Theta=\theta+V\tilde{\Theta}\), where \(\tilde{\Theta}\sim\tilde{\lambda}\), and \(\tilde{\lambda}\) is a smooth prior on \(\left\{\tilde{\theta}\in\mathbb{R}^{k}:\left\|\tilde{\theta}\right\|\leq\varepsilon\right\}\). Defining \(\tilde{A}(\tilde{\theta}):=A(\theta+V\tilde{\theta})\), \(\tilde{B}(\tilde{\theta}):=B(\theta+V\tilde{\theta})\), and \(\tilde{K}(\tilde{\theta}):=K(\theta+V\tilde{\theta})\), we may instantiate the bound in Lemma 2.2 over the lower dimensional parameter \(\tilde{\theta}\). We have that the Jacobian of the controller becomes \(\mathsf{D}_{\tilde{\theta}}\,\mathsf{vec}\,\tilde{K}(\tilde{\Theta})=\mathsf{D}_{\theta}\,\mathsf{vec}\,K(\Theta)V\). Similarly, the Jacobian arising in the Fisher information may be written \(\mathsf{D}_{\tilde{\theta}}\,\mathsf{vec}\left[\tilde{A}(\tilde{\Theta})\quad\tilde{B}(\tilde{\Theta})\right]=\mathsf{D}_{\theta}\,\mathsf{vec}\left[A(\Theta)\quad B(\Theta)\right]V\). Lastly, the prior density of the lower dimensional parameter satisfies \(\mathsf{J}(\tilde{\lambda})=V^{\top}\mathsf{J}(\lambda)V\). Then under this prior, the lower bound in Lemma 2.2 becomes that in the lemma statement.
In the above lemma, the columns of \(V\) may be interpreted as perturbation directions of the system parameters. We now upper bound the denominator arising in the above bound. In particular, we show how to bound the Fisher information in any particular perturbation direction.

**Lemma 2.4**.: _For any matrix \(V\in\mathbb{R}^{d_{\Theta}\times k}\) with orthonormal columns,_ \[\left\|V^{\top}\,\boldsymbol{E}[\mathsf{I}_{p}(\Theta)]V\right\|\leq TN\bar{L},\] _where \(\bar{L}=\sup_{\theta^{\prime}\in\mathcal{B}(\theta,\varepsilon)}L(\theta^{\prime})\) and_ \[L(\theta^{\prime})=\sup_{w\in\operatorname{span}(V),\,\|w\|\leq 1}\frac{4}{\lambda_{\min}(\Sigma_{W})}\Bigg{(}\nu_{1}(w)\bigg{(}\Big{\|}\mathtt{dlyap}\Big{(}(A(\theta^{\prime})+B(\theta^{\prime})F)^{\top},\Sigma_{W}\Big{)}\Big{\|}+\sigma_{\bar{u}}^{2}\Big{(}\sum_{t=0}^{\infty}\big{\|}(A(\theta^{\prime})+B(\theta^{\prime})F)^{t}B(\theta^{\prime})\big{\|}\Big{)}^{2}\bigg{)}+2\sigma_{\bar{u}}^{2}\nu_{2}(w)\Bigg{)}. \tag{13}\] _Here, \(\nu_{1}(w)=\|\mathsf{D}_{\theta}\,\mathsf{vec}\,A(\theta^{\prime})w\|^{2}+2\left\|\mathsf{D}_{\theta}\,\mathsf{vec}\,B(\theta^{\prime})w\right\|^{2}\left\|F\right\|^{2}\) and \(\nu_{2}(w)=\|\mathsf{D}_{\theta}\,\mathsf{vec}\,B(\theta^{\prime})w\|^{2}\) are change-of-coordinate terms that quantify the impact of the perturbation direction on the information upper bound. We recall that \(\sigma_{\bar{u}}^{2}\) is the average exploratory input energy._

The quantity \(\mathtt{dlyap}((A(\theta^{\prime})+B(\theta^{\prime})F)^{\top},\Sigma_{W})\) in the above bound may be interpreted as either the steady-state covariance during exploration in the absence of exploratory inputs, or the controllability gramian from the noise to the state. The quantity \(\sum_{t=0}^{\infty}\left\|(A(\theta^{\prime})+B(\theta^{\prime})F)^{t}B(\theta^{\prime})\right\|\) bounds the \(\mathcal{H}_{\infty}\) norm of the closed-loop system during offline experimentation.
Therefore, \(\sigma_{\bar{u}}^{2}\big{(}\sum_{t=0}^{\infty}\left\|(A(\theta^{\prime})+B(\theta^{\prime})F)^{t}B(\theta^{\prime})\right\|\big{)}^{2}\) upper bounds the impact of the exploratory input on the state during offline experimentation. The proof of the above lemma applies repeated use of the triangle inequality, submultiplicativity, and the Cauchy-Schwarz inequality. See Appendix A.1 for proof details.

We now present our first main result: a non-asymptotic lower bound on the local minimax excess cost. As with Lemma 2.2, it is presented in two components: one that holds generally, and another that requires enough data such that any sufficiently good policy \(\pi\in\Pi^{\mathsf{lin}}\) outputs a feedback controller \(\hat{K}(\mathcal{Z})\) which is near optimal with high probability. Consequently, the burn-in times are larger for the second result, and the size of the prior, \(\varepsilon\), is required to be small. We drop the dependence of \(A\), \(B\), \(P\), \(\Psi\), \(K\), and \(\Sigma_{X}\) on \(\theta\) when the argument is clear from context.

**Theorem 2.2**.: _Consider any matrix \(V\in\mathbb{R}^{d_{\Theta}\times k}\) with \(k\leq d_{\Theta}\) which has orthonormal columns. Let_ \[G=\inf_{\theta^{\prime},\tilde{\theta}\in\mathcal{B}(\theta,\varepsilon)}\operatorname{tr}\bigg{(}\Xi_{\theta,\varepsilon}\,\mathsf{D}_{\theta}\,\mathsf{vec}\,K(\theta^{\prime})V\Big{(}\mathsf{D}_{\theta}\,\mathsf{vec}\,K(\tilde{\theta})V\Big{)}^{\top}\bigg{)},\] _and let \(\bar{L}\) be as in Lemma 2.4. Also let \(\Xi_{\theta,\varepsilon}\) be as defined in (10).
Then for any smooth prior \(\lambda\) over \(\Big{\{}\theta+V\tilde{\theta}:\left\|\tilde{\theta}\right\|\leq\varepsilon\Big{\}}\),_ \[\mathcal{E}\mathcal{C}_{T}^{\text{lin}}(\theta,\varepsilon)\geq\frac{G}{8NT\bar{L}} \tag{14}\] _is satisfied for_

1) _\(\Gamma_{\theta,\varepsilon}=\Sigma_{W}\) if \(TN\geq\frac{\|\mathsf{J}(\lambda)\|}{\bar{L}}\);_

2) _\(\Gamma_{\theta,\varepsilon}=\Sigma_{\theta,\varepsilon}\) if \(T\geq\sup_{\theta^{\prime}\in\mathcal{B}(\theta,\varepsilon)}\frac{16\|\Sigma_{X}(\theta^{\prime})\|^{2}}{\lambda_{\min}(\Sigma_{X}(\theta^{\prime}))}\), \(TN\geq\frac{1}{\bar{L}}\max\Big{\{}\|\mathsf{J}(\lambda)\|\,,\frac{G}{\lambda_{\min}(\Sigma_{W})\lambda_{\min}(R)\alpha^{2}}\Big{\}}\), and \(\varepsilon\leq\min\Big{\{}\frac{\alpha}{2c_{1}},c_{2}\Big{\}}\), where_ \[c_{1}=84\Phi^{9}\tau(A_{cl}),\] \[c_{2}=\frac{1}{10\tau(A_{cl})c_{1}}\min\big{\{}(1+\|A_{cl}\|)^{-2},(1+\|P\|)^{-1}\big{\}},\] \[\Phi=(1+\max\big{\{}\|A\|\,,\|B\|\,,\|P\|\,,\|K\|\,,\|R^{-1}\|\big{\}}),\] \[\tau(A_{cl})=\Bigg{(}\sup_{k\geq 0}\Big{\{}\Big{\|}A_{cl}^{k}\Big{\|}\,\rho(A_{cl})^{-k}\Big{\}}\Bigg{)}^{2}/(1-\rho(A_{cl})^{2}).\]

Proof.: We must show that for all \(\pi\in\Pi^{\mathsf{lin}}\), \(\sup_{\theta^{\prime}\in\mathcal{B}(\theta,\varepsilon)}\mathsf{EC}_{T}^{\pi}(\theta^{\prime})\geq\frac{G}{8NT\bar{L}}\). Suppose that for some \(\pi\in\Pi^{\mathsf{lin}}\), \(\sup_{\theta^{\prime}\in\mathcal{B}(\theta,\varepsilon)}\mathsf{EC}_{T}^{\pi}(\theta^{\prime})\leq\frac{G}{8NT\bar{L}}\).
We have by Lemma 2.3 that \[\sup_{\theta^{\prime}\in\mathcal{B}(\theta,\varepsilon)}\mathsf{EC}_{T}^{\pi}(\theta^{\prime})\geq\frac{\operatorname{tr}\left(\Xi_{\theta,\varepsilon}\operatorname{\mathbf{E}}[\mathsf{D}_{\theta}\,\mathsf{vec}\,K(\Theta)V\mathbf{1}_{\mathcal{G}}]\operatorname{\mathbf{E}}[\mathsf{D}_{\theta}\,\mathsf{vec}\,K(\Theta)V\mathbf{1}_{\mathcal{G}}]^{\top}\right)}{\|V^{\top}(\operatorname{\mathbf{E}}\mathsf{I}_{p}(\Theta)+\mathsf{J}(\lambda))V\|}. \tag{15}\] The burn-in requirement \(TN\geq\frac{\|\mathsf{J}(\lambda)\|}{\bar{L}}\) enables upper bounding \(\big{\|}V^{\top}\mathsf{J}(\lambda)V\big{\|}\) by \(TN\bar{L}\). Lemma 2.4 then allows us to upper bound the denominator in (15) by \(2TN\bar{L}\). To remove the indicators from the lower bound, we take an infimum over \(\tilde{\theta},\theta^{\prime}\in\mathcal{B}(\theta,\varepsilon)\) to lower bound the numerator in (15) by \(\mathbf{P}[\mathcal{G}]^{2}G\). For case 1, we immediately have \(\mathbf{P}[\mathcal{G}]^{2}=\mathbf{P}[\Omega]^{2}=1\). For case 2, we may leverage the assumptions that the prior is small and that the burn-in time is satisfied to show that \(\mathbf{P}[\mathcal{G}]^{2}=\mathbf{P}[\mathcal{E}]^{2}\geq\frac{1}{4}\). See Appendix A.2 for more details. This in turn implies that \[\sup_{\theta^{\prime}\in\mathcal{B}(\theta,\varepsilon)}\mathsf{EC}_{T}^{\pi}(\theta^{\prime})\geq\frac{G}{8TN\bar{L}}.\] Therefore, for all \(\pi\in\Pi^{\mathsf{lin}}\), the above lower bound is satisfied. This implies that \[\mathcal{EC}_{T}^{\mathsf{lin}}(\theta,\varepsilon)=\inf_{\pi\in\Pi^{\mathsf{lin}}}\sup_{\theta^{\prime}\in\mathcal{B}(\theta,\varepsilon)}\mathsf{EC}_{T}^{\pi}(\theta^{\prime})\geq\frac{G}{8TN\bar{L}}.\]

The above result holds non-asymptotically. It will also be helpful to present the result asymptotically, as the number of experiments tends to \(\infty\), in order to understand the dependence on control-theoretic quantities.
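The denominator \(NT\bar{L}\) in Theorem 2.2 comes from the fact that the Fisher information (12) accumulates additively over the \(NT\) offline samples. The following minimal Monte Carlo sketch (a scalar toy instance with \(F=0\); all numerical values are illustrative assumptions, not from the paper) estimates the per-sample information \(\mathbf{E}[Z_{t}Z_{t}^{\top}]\otimes\Sigma_{W}^{-1}\) and checks it against the stationary moments:

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative scalar system and exploration policy (F = 0); all values
# here are assumptions for the sketch, not quantities from the paper.
a, b = 0.7, 1.0                 # true parameters theta = [a, b]
sigma_w2, sigma_u2 = 1.0, 1.0   # noise variance and exploratory input energy
N, T = 200, 50                  # number of rollouts and rollout length

# Monte Carlo estimate of the Fisher information (12) with all parameters
# unknown (D_theta vec[A B] = I):  I_p = E sum_{n,t} Z_t Z_t' (x) Sigma_W^{-1}.
info = np.zeros((2, 2))
for _ in range(N):
    x = 0.0
    for _ in range(T):
        u = np.sqrt(sigma_u2) * rng.standard_normal()  # exploratory input
        z = np.array([x, u])
        info += np.outer(z, z) / sigma_w2              # Sigma_W^{-1} is scalar here
        x = a * x + b * u + np.sqrt(sigma_w2) * rng.standard_normal()

per_sample = info / (N * T)  # roughly constant: total information grows like N*T
# Stationary moments under exploration: E[X^2] and E[U^2].
ex2 = (b**2 * sigma_u2 + sigma_w2) / (1 - a**2)
eu2 = sigma_u2
```

The total information is then roughly \(NT\) times this per-sample matrix, which is exactly the linear-in-\(NT\) scaling that Lemma 2.4 upper bounds by \(NT\bar{L}\).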
**Corollary 2.1**.: _For any \(\alpha\in(0,1/2)\) and any matrix \(V\in\mathbb{R}^{d_{\Theta}\times k}\) with \(k\leq d_{\Theta}\) which has orthonormal columns, we have that_ \[\liminf_{N\to\infty}\sup_{\theta^{\prime}\in\mathcal{B}(\theta,N^{-\alpha})}N\mathsf{EC}_{T}^{\pi}(\theta^{\prime})\geq\frac{G}{8TL(\theta)}\] _holds always for \(\Gamma=\Sigma_{W}\), and for \(\Gamma=\frac{1}{2}\Sigma_{X}\) if \(T\geq\frac{16\|\Sigma_{X}\|^{2}}{\lambda_{\min}(\Sigma_{X})}\), where \(L\) is as in Lemma 2.4 and_ \[G=\operatorname{tr}\left((\Gamma\otimes\Psi)\,\mathsf{D}_{\theta}\,\mathsf{vec}\,K(\theta)V(\mathsf{D}_{\theta}\,\mathsf{vec}\,K(\theta)V)^{\top}\right).\]

Proof.: The burn-in requirements in Theorem 2.2 are satisfied asymptotically; see Appendix A.3 for more details.

Using a similar argument to the derivations above, it can be shown that the global minimax complexity is infinite.

**Corollary 2.2**.: _The global minimax excess cost is infinite for the class of scalar systems of the form_ \[X_{t+1}=aX_{t}+bU_{t}+W_{t},\] _with \(\theta=\begin{bmatrix}a&b\end{bmatrix}^{\top}\) and \(Q=R=\Sigma_{W}=\sigma_{\bar{u}}^{2}=1\). More precisely, for the class of stable scalar systems with the offline exploration policy \(F=0\), we have_ \[\liminf_{N\to\infty}\sup_{a,b:|a|<1}N\mathsf{EC}_{T}^{\pi}(a,b)=\infty.\]

Proof.: We argue as in the proof of Corollary 2.1 with \(V=\begin{bmatrix}0&1\end{bmatrix}^{\top}\). In this perturbation direction, the lower bound evaluates to \(\frac{1}{T}\frac{b^{2}P+1}{32}\frac{\partial K}{\partial b}(a,b)^{2}\). Considering \(a=1-\gamma\) and \(b=\gamma\) for \(0<\gamma<1\) and taking the limit as \(\gamma\to 0\) results in a lower bound of \(\infty\). For more details, see Appendix A.4.

## 3 Consequences of the Lower Bound

In this section, we examine cases where the bound in Corollary 2.1 has interpretable dependence upon system properties.
To do so, we restrict attention to the setting where all system parameters are unknown, i.e. \(\mathsf{vec}\begin{bmatrix}A(\theta)&B(\theta)\end{bmatrix}=\theta\). In this setting, the quantity \(\mathsf{D}_{\theta}\,\mathsf{vec}\begin{bmatrix}A(\theta)&B(\theta)\end{bmatrix}\) arising in the bounds from the previous section is the identity matrix. The derivative of the controller multiplied by a matrix with orthonormal columns, \(\mathsf{D}_{\theta}\,\mathsf{vec}\,K(\theta)V\), arises in the bounds from the previous section. In this section, this quantity is expressed in terms of the directional derivative of the controller in some direction \(v\), denoted \(d_{v}K(\theta)\). In particular, we represent the columns of \(V\) as \(v=\mathsf{vec}\begin{bmatrix}\Delta_{A}&\Delta_{B}\end{bmatrix}\) for arbitrary perturbations \(\Delta_{A}\) of \(A\) and \(\Delta_{B}\) of \(B\) which satisfy \(\big{\|}\begin{bmatrix}\Delta_{A}&\Delta_{B}\end{bmatrix}\big{\|}_{F}=1\). The corresponding change in the closed-loop state matrix is denoted \(\Delta_{A_{cl}}=\Delta_{A}+\Delta_{B}K\). Then the directional derivative of the controller is shown in Lemma B.1 of Simchowitz and Foster (2020) to be \[d_{v}K(\theta)=-\Psi^{-1}(\Delta_{B}^{\top}PA_{cl}+B^{\top}P\Delta_{A_{cl}}+B^{\top}P^{\prime}A_{cl}), \tag{16}\] where \(P^{\prime}=\mathtt{dlyap}(A_{cl},A_{cl}^{\top}P\Delta_{A_{cl}}+\Delta_{A_{cl}}^{\top}PA_{cl})\). The subsequent sections study the bound from Corollary 2.1 under various perturbations \(\begin{bmatrix}\Delta_{A}&\Delta_{B}\end{bmatrix}\). Proofs are deferred to Appendix B.

### Dimensional dependence

In the setting of online LQR for an unknown system, recent works (Simchowitz and Foster, 2020; Ziemann and Sandberg, 2022) that obtain lower bounds on the regret have used perturbation directions which cause tension between identification and control (Polderman, 1986).
In particular, they considered the set of perturbation directions \[\mathbf{\Delta}=\bigg{\{}\mathsf{vec}\begin{bmatrix}-\Delta K&\Delta\end{bmatrix}\,\bigg{|}\,\Delta\in\mathbb{R}^{d_{\mathsf{X}}\times d_{\mathsf{U}}},\,\big{\|}\begin{bmatrix}-\Delta K&\Delta\end{bmatrix}\big{\|}_{F}=1\bigg{\}}. \tag{17}\] For all such perturbations, \(\Delta_{A_{cl}}=0\), making it impossible to distinguish between the true parameters and the perturbed parameters online without sufficient exploratory input noise. While the tension between identification and control is no longer present in the offline setting, this set of perturbation directions retains the benefit that the directional derivative in (16) is easy to work with. In particular, for any \(v=\mathsf{vec}\left[-\Delta K\quad\Delta\right]\in\mathbf{\Delta}\), \[d_{v}K(\theta)=-\Psi^{-1}\Delta^{\top}PA_{cl}. \tag{18}\] As the matrices \(\Delta\) parametrizing the set \(\mathbf{\Delta}\) are \(d_{\mathsf{X}}\times d_{\mathsf{U}}\) dimensional, we may stack \(d_{\mathsf{X}}d_{\mathsf{U}}\) orthogonal vectors \(v_{i}\) belonging to \(\mathbf{\Delta}\) into a matrix \(V=\begin{bmatrix}v_{1}&\ldots&v_{d_{\mathsf{X}}d_{\mathsf{U}}}\end{bmatrix}\). This allows us to present a lower bound which demonstrates the dependence of the offline LQR problem upon the system dimensions \(d_{\mathsf{X}}\) and \(d_{\mathsf{U}}\).
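The simplified derivative (18) can be sanity-checked numerically. The sketch below (the toy system and the perturbation \(\Delta\) are illustrative assumptions, not from the paper) compares the closed form \(-\Psi^{-1}\Delta^{\top}PA_{cl}\) against a central finite difference of \(\theta\mapsto K(\theta)\) along \(v=\mathsf{vec}\begin{bmatrix}-\Delta K&\Delta\end{bmatrix}\):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_gain(A, B, Q, R):
    """LQR gain with the paper's sign convention, so that A_cl = A + B K."""
    P = solve_discrete_are(A, B, Q, R)
    K = -np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)
    return K, P

# Illustrative toy system (all values are assumptions for this sketch).
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)

K, P = lqr_gain(A, B, Q, R)
A_cl = A + B @ K
Psi = B.T @ P @ B + R

# A direction from the set (17): (Delta_A, Delta_B) = (-Delta K, Delta), so
# that Delta_{A_cl} = 0. Normalize so that ||[-Delta K, Delta]||_F = 1.
Delta = np.array([[1.0], [0.5]])
Delta = Delta / np.linalg.norm(np.hstack([-Delta @ K, Delta]))

dK_formula = -np.linalg.solve(Psi, Delta.T @ P @ A_cl)   # closed form (18)

# Central finite difference of theta -> K(theta) along the same direction.
t = 1e-5
K_plus, _ = lqr_gain(A - t * Delta @ K, B + t * Delta, Q, R)
K_minus, _ = lqr_gain(A + t * Delta @ K, B - t * Delta, Q, R)
dK_fd = (K_plus - K_minus) / (2 * t)

err = np.max(np.abs(dK_fd - dK_formula))
```

The check exploits that \(\Delta_{A_{cl}}=\Delta_{A}+\Delta_{B}K=0\) for directions in \(\mathbf{\Delta}\), so the last two terms of (16) vanish.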
Then for \(\alpha\in(0,1/2)\),_ \[\liminf_{N\to\infty}\sup_{\theta^{\prime}\in\mathcal{B}(\theta,N^{-\alpha})} N\mathsf{EC}_{T}^{\pi}(\theta^{\prime})\geq\frac{d_{\mathsf{X}}d_{\mathsf{U}} \lambda_{\min}(\Sigma_{X}-\Sigma_{W})\lambda_{\min}(P)^{2}}{16T\left\|\Psi \right\|\left\|\left[-K\quad I\right]\right\|^{2}\tilde{L}}\] _where \(\tilde{L}\) is given by \(L(\theta)\) as in (13) by replacing \(\nu_{1}\) with \(1\) and \(\nu_{2}\) with \(1+2\left\|F\right\|^{2}\)._ In addition to the system dimensions, we can interpret the remaining system-theoretic parameters. Note that \(\tilde{L}\) bounds the information available from the offline experimentation. It depends on the norm of the controllability gramian from noise to the state, as well as \(\sigma_{\tilde{u}}^{2}(\sum_{t=0}^{\infty}\left\|(A+BF)^{t}B\right\|)^{2}\), which bounds the impact of the exploratory input on the state. The \(\Psi\) in the denominator of the above bound may scale as \(\lambda_{\max}(P)\), and therefore effectively cancels a \(\lambda_{\min}(P)\) in the numerator for well-conditioned problems. This leaves a single \(\lambda_{\min}(P)\) in the numerator. As \(x^{\top}Px\) is the optimal objective value of the noiseless LQR problem starting from initial state \(x\), the appearance of \(\lambda_{\min}(P)\) in the bound captures the fact that as the system becomes harder to control, it also becomes harder to learn to control. Lastly, the variance term \(\lambda_{\min}(\Sigma_{X}-\Sigma_{W})\) implies that the excess cost is large when the optimal closed-loop system has a large state covariance relative to the process noise covariance. **Remark 3.1**.: _The dimensional dependence \(d_{\mathsf{X}}d_{\mathsf{U}}\) in the above bound is optimal up to constant factors when \(d_{\mathsf{U}}\leq d_{\mathsf{X}}\). To see that this is so, observe that Theorem 2 of Mania et al. 
(2019) demonstrates an upper bound on the excess cost that scales as \(d_{\mathsf{U}}\varepsilon^{2}\), where \(\varepsilon^{2}\) bounds the system identification error, \(\max\biggl\{\left\|\hat{A}-A\right\|^{2},\left\|\hat{B}-B\right\|^{2}\biggr\}\). A consequence of Theorem 5.4 in Tu et al. (2022) is that if we apply exploratory inputs which are generated from a Gaussian distribution with mean zero and covariance \(\sigma_{\tilde{u}}^{2}I\), then the upper bound on the system identification error scales as \(\frac{d_{\mathsf{X}}+d_{\mathsf{U}}}{NT}\). In particular, as long as the number of offline trajectories \(N\) exceeds \(cd_{\mathsf{X}}\), for some universal constant \(c\), then \(\max\biggl\{\left\|\hat{A}-A\right\|^{2},\left\|\hat{B}-B\right\|^{2}\biggr\}\lesssim\left\|\Sigma_{W}\right\|\frac{d_{\mathsf{X}}+d_{\mathsf{U}}}{NT\lambda_{\min}(\text{controllability gramian})}\). Consequently, the upper bound on the excess cost scales with \(\frac{d_{\mathsf{U}}(d_{\mathsf{X}}+d_{\mathsf{U}})}{NT}\lesssim\frac{d_{\mathsf{X}}d_{\mathsf{U}}}{NT}\) in the underactuated setting. Therefore, for classes of systems where the remaining system-theoretic quantities are constant with respect to system dimension, the bound is optimal in the dimension._

### 3.2 Exponential Lower Bounds

The previous section demonstrated a lower bound that scales linearly with \(d_{\mathsf{X}}d_{\mathsf{U}}\). Prior work (Tsiamis et al., 2022b) has shown that in the setting of online LQR, there exist classes of systems where the lower bounds on the regret may scale exponentially with the state dimension. This is shown by demonstrating that particular system-theoretic terms, which are often treated as constant with respect to dimension, may actually grow exponentially with the state dimension. We demonstrate that in the setting of offline LQR, such systems still cause exponential dependence on dimension.
Furthermore, because there are fewer restrictions upon the perturbation directions in the lower bound for the offline setting, we construct a simpler class of systems which exhibits this behavior. In particular, consider the system \[A=\begin{bmatrix}\rho&2&0&&0&0\\ 0&\rho&2&&0&0\\ &&\ddots&&\\ 0&0&0&&\rho&2\\ 0&0&0&&0&\rho\end{bmatrix},\quad B=\begin{bmatrix}0\\ 0\\ \vdots\\ 0\\ 1\end{bmatrix}, \tag{19}\] with \(0<\rho<1\), \(F=0\), \(Q=I\), \(R=1\), and \(\Sigma_{W}=I\). Let \(V=\mathsf{vec}\begin{bmatrix}0&B/\left\lVert B\right\rVert_{F}\end{bmatrix}\). Then the quantity \(L(\theta)\) in Corollary 2.1 becomes \(8\sigma_{\bar{u}}^{2}\), as \(\nu_{1}(V)=0\). Meanwhile (using the option \(\Gamma=\Sigma_{W}=I\)), the quantity \(G\) becomes \[G=\operatorname{tr}\left((I\otimes\Psi)\operatorname{\mathsf{D}}_{\theta}\mathsf{vec}\,K(\theta)VV^{\top}\operatorname{\mathsf{D}}_{\theta}\mathsf{vec}\,K(\theta)^{\top}\right)=\operatorname{tr}(\Psi d_{V}K(\theta)d_{V}K(\theta)^{\top}). \tag{20}\] Using this insight, we may show that the lower bound grows exponentially with the system dimension. **Proposition 3.2**.: _For the system in (19), suppose \(d_{\mathsf{X}}\geq 3\). Then for \(\alpha\in(0,1/2)\),_ \[\liminf_{N\to\infty}\sup_{\theta^{\prime}\in\mathcal{B}(\theta,N^{-\alpha})}N\mathsf{EC}_{T}^{\pi}(\theta^{\prime})\geq\frac{\rho^{2}}{256T\sigma_{\bar{u}}^{2}}4^{d_{\mathsf{X}}-2}.\] We have therefore demonstrated that accurately learning the LQR controller from offline data may require an amount of data that is exponential in the state dimension. The reason that this system is particularly challenging to learn to control is that a small misidentification of \(B\) causes the learner to apply slightly suboptimal control inputs, which are then amplified by the off-diagonal terms of \(A\). The construction used, (19), avoids the two-subsystem example that was used to derive exponential lower bounds for online LQR in Tsiamis et al. (2022b).
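The transient amplification driving Proposition 3.2 can be seen directly: for the \(A\) in (19), the top-right entry of \(A^{d_{\mathsf{X}}-1}\) equals \(2^{d_{\mathsf{X}}-1}\), so the Gramian-type quantity \(\sum_{t\geq 0}A^{t}(A^{t})^{\top}\) has largest eigenvalue at least \(4^{d_{\mathsf{X}}-1}\). A short numerical check (with the arbitrary choice \(\rho=0.9\)):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def shift_system(dx, rho=0.9):
    # The A matrix from (19): rho on the diagonal, 2 on the superdiagonal.
    return rho * np.eye(dx) + 2 * np.eye(dx, k=1)

for dx in range(3, 8):
    A = shift_system(dx)
    # W = sum_t A^t (A^t)^T solves W = A W A^T + I (A is stable since rho < 1).
    W = solve_discrete_lyapunov(A, np.eye(dx))
    # lambda_max(W) >= W_{11} >= (2^{dx-1})^2 = 4^{dx-1}: exponential growth.
    assert np.linalg.eigvalsh(W).max() >= 4.0 ** (dx - 1)
```

The exponential factor here mirrors, but is not identical to, the \(4^{d_{\mathsf{X}}-2}\) constant in the proposition.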
A crucial reason that we are able to bypass such a construction in the offline setting is that the dominant statistical rate of \(\frac{1}{NT}\) for offline LQR is present for any perturbation direction of the underlying parameters. In contrast, the regret in the online setting only has the dominant statistical rate in the directions defined by the perturbation set in (17).

### 3.3 Interesting System-Theoretic Quantities

A consequence of the result in Section 3.2 is that treating system-theoretic quantities as constant with respect to dimension, as is done in Section 3.1, may fail to capture the difficulty of the problem. This leads to unfavorable aspects of the lower bound in Section 3.1, such as the dependence of the denominator on \(\left\|K\right\|\). Such an appearance indicates that for systems where the optimal LQR has a large gain, the lower bound becomes small. This is in contrast to our expectations, as a large optimal gain is often indicative of poor controllability (consider a scalar system, with \(B\to 0\)). Motivated by the above discussion, we focus our attention on deriving bounds which have favorable dependence upon system-theoretic quantities. To do so, we examine a perturbation direction for which the lower bound from Corollary 2.1 reduces to easily interpretable quantities which align with our intuition. By taking \(V=\mathsf{vec}\begin{bmatrix}A&B\end{bmatrix}/\left\|\begin{bmatrix}A&B\end{bmatrix}\right\|_{F}\), the directional derivative expression from (16) reduces to \[d_{V}K(\theta)=-\frac{2\Psi^{-1}(B^{\top}\mathsf{dlyap}(A_{cl},P)A_{cl})}{\left\|\begin{bmatrix}A&B\end{bmatrix}\right\|_{F}}.
\tag{21}\] Then the quantity \(G\) in Corollary 2.1 (using \(\Gamma=\Sigma_{X}\)) is \[\begin{split}&\operatorname{tr}\Bigl((\Sigma_{X}\otimes(B^{\top}PB+R))\mathsf{D}_{\theta}\,\mathsf{vec}\,K(\theta)V(\mathsf{D}_{\theta}\,\mathsf{vec}\,K(\theta)V)^{\top}\Bigr)\\ &=\operatorname{tr}((B^{\top}PB+R)d_{V}K(\theta)\Sigma_{X}d_{V}K(\theta)^{\top})\\ &=\frac{4}{\left\|\begin{bmatrix}A&B\end{bmatrix}\right\|_{F}^{2}}\operatorname{tr}((B^{\top}PB+R)^{-1}B^{\top}\mathsf{dlyap}(A_{cl},P)A_{cl}\Sigma_{X}A_{cl}^{\top}\mathsf{dlyap}(A_{cl},P)B).\end{split} \tag{22}\] This leads to the following proposition. **Proposition 3.3**.: _Suppose that \(R\) and \(B^{\top}PB\) are simultaneously diagonalizable by \(U\): \(B^{\top}PB=U\Lambda_{B^{\top}PB}U^{\top}\) and \(R=U\Lambda_{R}U^{\top}\), where \(\Lambda_{B^{\top}PB}\) and \(\Lambda_{R}\) are diagonal. Also suppose that the diagonal entries of \(\Lambda_{B^{\top}PB}\) are sorted in non-ascending order. Assume \(T\geq\frac{16\left\|\Sigma_{X}\right\|^{2}}{\lambda_{\min}(\Sigma_{X})}\). Let \(\tilde{L}\) be as in Proposition 3.1. Then for \(\alpha\in(0,1/2)\)_ \[\liminf_{N\to\infty}\sup_{\theta^{\prime}\in\mathcal{B}(\theta,N^{-\alpha})}N\mathsf{EC}_{T}^{\pi}(\theta^{\prime})\geq\frac{\lambda_{\min}(\Sigma_{X}-\Sigma_{W})}{2T\left\|\begin{bmatrix}A&B\end{bmatrix}\right\|_{F}^{2}\tilde{L}}\inf_{i\in[d_{\mathsf{U}}]}\frac{\lambda_{i}(B^{\top}PB)}{\lambda_{i}(B^{\top}PB)+\Lambda_{R,ii}}\sum_{j=1}^{d_{\mathsf{U}}}\lambda_{n-j}(\mathsf{dlyap}(A_{cl},P)).\] As \(R\) is often chosen to be a scalar multiple of the identity for LQR problems, the assumption that \(R\) and \(B^{\top}PB\) are simultaneously diagonalizable is often satisfied. If we additionally have \(R\preceq B^{\top}PB\), then \(\inf_{i\in[d_{\mathsf{U}}]}\frac{\lambda_{i}(B^{\top}PB)}{\lambda_{i}(B^{\top}PB)+\Lambda_{R,ii}}\geq\frac{1}{2}\).
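The trace manipulation in (22) rests on the vectorization identity \(\operatorname{tr}\bigl((\Gamma\otimes M)\mathsf{vec}(X)\mathsf{vec}(X)^{\top}\bigr)=\operatorname{tr}(MX\Gamma X^{\top})\) for symmetric \(\Gamma\), under column-stacking \(\mathsf{vec}\). A quick numerical check with arbitrary stand-in matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
du, dx = 2, 3

def vec(X):
    # Column-stacking vectorization, so that (N^T kron M) vec(X) = vec(M X N).
    return X.flatten(order="F")

dK = rng.standard_normal((du, dx))                   # stands in for d_V K(theta)
M = rng.standard_normal((du, du)); M = M @ M.T       # stands in for B^T P B + R
Gamma = rng.standard_normal((dx, dx)); Gamma = Gamma @ Gamma.T  # stands in for Sigma_X

lhs = vec(dK) @ np.kron(Gamma, M) @ vec(dK)  # tr((Gamma kron M) vec(dK) vec(dK)^T)
rhs = np.trace(M @ dK @ Gamma @ dK.T)
assert np.isclose(lhs, rhs)
```

The symmetric positive semidefinite choices of `M` and `Gamma` match the roles these matrices play in the bound.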
As in Proposition 3.1, \(\lambda_{\min}(\Sigma_{X}-\Sigma_{W})\) highlights the dependence on the closed-loop state covariance, and \(\tilde{L}\) describes the impact of the controllability of the closed-loop system under the pre-stabilizing controller, as well as the input budget. Note that \(\tilde{L}\) provides an upper bound on the information in the face of an optimal offline exploration policy. Studying it may therefore assist with experiment design, as in Wagenmaker et al. (2021). Rather than the appearance of \(\left\|\begin{bmatrix}-K&I\end{bmatrix}\right\|\) in the denominator, as we saw in Proposition 3.1, we have \(\left\|\begin{bmatrix}A&B\end{bmatrix}\right\|_{F}\). Therefore, the bound does not diminish as a result of a large optimal controller gain. Lastly, observe that \(\sum_{j=1}^{d_{\mathsf{U}}}\lambda_{n-j}(\mathsf{dlyap}(A_{cl},P))\) replaces \(\lambda_{\min}(P)\) from Proposition 3.1. This quantity captures the \(d_{\mathsf{U}}\) smallest eigenvalues rather than just the smallest. If \(d_{\mathsf{U}}=d_{\mathsf{X}}\), we get all eigenvalues of \(\mathsf{dlyap}(A_{cl},P)\). Further note that the eigenvalues of \(\mathsf{dlyap}(A_{cl},P)\) diverge as \(A_{cl}\) approaches marginal stability, leading to an infinite excess cost.

## Conclusion

We presented lower bounds for offline linear-quadratic control problems. The focus was to understand the fundamental limitations of learning controllers from offline data in terms of system-theoretic properties. Several interesting consequences arose, such as the fact that our lower bound achieves the optimal dimensional dependence \(d_{\mathsf{X}}d_{\mathsf{U}}\) for underactuated systems. We also showed that there exist classes of systems where the sample complexity is exponential in the system dimension, \(d_{\mathsf{X}}\). We finally demonstrated that the lower bound scales in a natural way with familiar system-theoretic constants, including the eigenvalues of the Riccati solution.
An avenue for future work is the extension of these lower bounds to the partially observed setting.

## Acknowledgements

Bruce D. Lee is supported by the DoD through a National Defense Science & Engineering Fellowship. Ingvar Ziemann is supported by a Swedish Research Council International Postdoc grant. Henrik Sandberg is supported by the Swedish Research Council (grant 2016-00861). Nikolai Matni is partially supported by NSF CAREER award ECCS-2045834.
2306.10191
Neural Priming for Sample-Efficient Adaptation
We propose Neural Priming, a technique for adapting large pretrained models to distribution shifts and downstream tasks given few or no labeled examples. Presented with class names or unlabeled test samples, Neural Priming enables the model to recall and conditions its parameters on relevant data seen throughout pretraining, thereby priming it for the test distribution. Neural Priming can be performed at test time, even for pretraining datasets as large as LAION-2B. Performing lightweight updates on the recalled data significantly improves accuracy across a variety of distribution shift and transfer learning benchmarks. Concretely, in the zero-shot setting, we see a 2.45% improvement in accuracy on ImageNet and 3.81% accuracy improvement on average across standard transfer learning benchmarks. Further, using Neural Priming at inference to adapt to distribution shift, we see a 1.41% accuracy improvement on ImageNetV2. These results demonstrate the effectiveness of Neural Priming in addressing the challenge of limited labeled data and changing distributions. Code is available at github.com/RAIVNLab/neural-priming.
Matthew Wallingford, Vivek Ramanujan, Alex Fang, Aditya Kusupati, Roozbeh Mottaghi, Aniruddha Kembhavi, Ludwig Schmidt, Ali Farhadi
2023-06-16T21:53:16Z
http://arxiv.org/abs/2306.10191v3
# Neural Priming for Sample-Efficient Adaptation

###### Abstract

We propose Neural Priming, a technique for adapting large pretrained models to distribution shifts and downstream tasks given few or no labeled examples. Presented with class names or unlabeled test samples, Neural Priming enables the model to recall and condition its parameters on relevant data seen throughout pretraining, thereby priming it for the test distribution. Neural Priming can be performed at inference, even for pretraining datasets as large as LAION-2B. Performing lightweight updates on the recalled data significantly improves accuracy across a variety of distribution shift and transfer learning benchmarks. Concretely, in the zero-shot setting, we see a \(2.45\%\) improvement in accuracy on ImageNet and a \(3.81\%\) accuracy improvement on average across standard transfer learning benchmarks. Further, using Neural Priming at inference to adapt to distribution shift, we see a \(1.41\%\) accuracy improvement on ImageNetV2. These results demonstrate the effectiveness of Neural Priming in addressing the challenge of limited labeled data and changing distributions. Code is available at github.com/RAIVNLab/neural-priming.

## 1 Introduction

Humans have a vast store of prior experience which we draw on to flexibly perform a diverse range of tasks [20; 5; 4; 12]. While engaging in an activity, we naturally retrieve relevant information or schemas in a cognitive phenomenon known as priming [34]. This process ensures that necessary knowledge is readily accessible in memory, leading to enhanced performance for the task at hand [42]. Pre-trained, general-purpose models such as CLIP [40] and ALIGN [23] have extensive prior knowledge learned from large-scale, diverse datasets. These datasets seek to capture all natural variation in real data within their distribution. Can these models also benefit from something like priming?
We observe that models trained even on the largest of such datasets often substantially improve in performance when fine-tuned on task-specific data. This raises the question of what the model learns from fine-tuning on the target dataset, if it already trained on many similar examples during pre-training. We speculate that the effect of fine-tuning a pre-trained model on task-specific data is similar to that of priming. Given the sheer size and diversity of the pre-training dataset, it becomes challenging for the model to find a consistent solution that is optimal for all subsets of the data. This becomes particularly evident for open-vocabulary models such as CLIP, where multiple natural language descriptions can correspond to a single image, highlighting the challenge of accommodating diverse interpretations. We hypothesize that training on the downstream dataset re-aligns the model to the specific objective. With this in consideration, we propose Neural Priming. Specifically, Neural Priming recalls a subset of the pre-training data similar to the target distribution, re-aligns the natural language descriptions to the downstream task, and quickly adapts the model to the subset. We perform extensive experiments on 7 transfer learning and 4 distribution shift datasets to validate our method. We use the OpenCLIP [40; 51] ViT [8] set of models pre-trained on the LAION-2B and LAION-400M datasets [43]. We find \(\mathrm{Neural\ Priming}\) leads to significant accuracy improvements, particularly when labeled data is scarce and in specialized domains. Concretely, \(\mathrm{Neural\ Priming}\) improves accuracy by 2.45% on ImageNet and 4.25% on average across the other 6 datasets over the base CLIP model. In the few-shot setting, \(\mathrm{Neural\ Priming}\) improves accuracy by 3.81% on average over recent methods on standard transfer learning benchmarks. We show \(\mathrm{Neural\ Priming}\) is efficient and can be performed on-the-fly.
For datasets containing more than 2 billion images, we can prime our model to ImageNet in less than 2 minutes with a single commercial GPU. \(\mathrm{Neural\ Priming}\) is flexible and can be used with variable degrees of information about the downstream distribution. When the model has language-only task descriptions, our approach can efficiently retrieve a _priming pool_ of relevant examples from the pre-training set and attune the model to this data. At inference time, given a set of test images to classify, Neural Priming is able to further filter the priming pool using these examples specific to the test distribution. When we have access to labeled examples in the few-shot setting, \(\mathrm{Neural\ Priming}\) can further filter the priming pool through agreement between our language-only retrievals and those of our ground-truth labeled examples.

**We make the following contributions:**

* We introduce \(\mathrm{Neural\ Priming}\), a novel method that leverages retrieval from the pre-training dataset for efficient and accurate adaptation of large, pre-trained models to downstream tasks.
* \(\mathrm{Neural\ Priming}\) improves accuracy in the zero-shot and few-shot settings, with up to \(4.25\%\) and \(3.81\%\) improvements respectively over baselines (Section 4.2).
* \(\mathrm{Neural\ Priming}\) also enables transductive learning and improves performance on standard distribution shift datasets by 2.51% on average, all without using any additional data (Section 4.3).
* Our approach generalizes to various architectures and pre-training datasets while being complementary to techniques [33; 39] that improve zero-shot performance of open-vocabulary models.

Figure 1: **A diagram of \(\mathrm{Neural\ Priming}\), our proposed method.** \(\mathrm{Neural\ Priming}\) is a framework for leveraging an open-vocabulary model's _own pre-training data_ to improve performance on downstream tasks. \(\mathrm{Neural\ Priming}\) encompasses two processes: **1.** collecting a _priming pool_ of relevant examples from the pre-training set to prime with and **2.** using these examples to attune our model to a given task. We show performance improvements across a wide range of transfer learning and robustness benchmarks.

## 2 Related Work

### 2.1 Open-Vocabulary Models and Zero-shot Inference

Open-vocabulary models have proven to be an effective approach for transfer learning. Such models enable training on vast amounts of web-scale images without the need for labor-intensive human labeling by leveraging pre-existing natural language descriptions [40]. Open-vocabulary models have set the state of the art on ImageNet [7] as well as other transfer learning benchmarks [52; 1; 41]. Open-vocabulary models offer additional capabilities beyond standard pre-trained models. They can perform zero-shot inference, where predictions are made without training on target data. Additionally, they are robust to distribution shifts [40], enable prompt-tuning methods [39; 33], and can be used for text-based retrieval [10]. Zero-shot can have different meanings in the literature. In the context of this paper, we consider zero-shot as the experimental setting in which the model receives no training examples drawn from the training distribution. Prompt-tuning has emerged as a popular research direction in the domains of large language and open-vocabulary models. In the context of open-vocabulary models, prompt-tuning can involve modifying the textual prompts or queries used during the training or inference of the model to improve its understanding of visual content or achieve specific goals. In the original CLIP paper, Radford et al. [40] design hand-crafted prompt templates for ImageNet and other transfer learning datasets and show that this leads to substantial accuracy improvements. More recently, other work [33; 54] has used machine learning approaches to learn the prompts rather than hand-crafting them.
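The prompt-ensembled zero-shot classifier described above can be sketched as follows; `embed_text` is a toy deterministic stand-in for a real CLIP text encoder, and the templates are illustrative:

```python
import zlib
import numpy as np

def build_zero_shot_head(class_names, templates, embed_text):
    """Build W_z by averaging normalized text embeddings of each class over
    the prompt templates, then renormalizing (prompt ensembling in the style
    of Radford et al.)."""
    cols = []
    for name in class_names:
        embs = np.stack([embed_text(t.format(name)) for t in templates])
        embs /= np.linalg.norm(embs, axis=1, keepdims=True)
        centroid = embs.mean(axis=0)
        cols.append(centroid / np.linalg.norm(centroid))
    return np.stack(cols, axis=1)  # shape (d, n): one unit column per class

def embed_text(s, d=16):
    # Deterministic toy stand-in for the CLIP text encoder L.
    rng = np.random.default_rng(zlib.crc32(s.encode()))
    return rng.standard_normal(d)

templates = ["a photo of a {}.", "a drawing of a {}."]
W_z = build_zero_shot_head(["cat", "dog", "owl"], templates, embed_text)
assert W_z.shape == (16, 3)
assert np.allclose(np.linalg.norm(W_z, axis=0), 1.0)
```

In a real pipeline, image logits would be obtained by projecting normalized image features onto these class embeddings.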
### 2.2 Distribution Shifts

Robustness to distribution shift is a key property of good machine learning models, as it represents a notion of reliability. In particular, studies on natural distribution shifts, including ImageNet-V2 [41], ImageNet-Sketch [50], ImageNet-R [18], and ImageNet-A [19], find that models have a consistent performance drop when exposed to a distribution at inference time not seen during train time [48]. In order to focus on robustness and eliminate the confounder of better models being generally better, this performance gap is measured through effective robustness, which is the robustness improvement over ImageNet-trained models. Prior work has shown that the performance of models in distribution and out of distribution is highly correlated across many algorithmic training interventions, except for cases where training on larger and more diverse datasets increases robustness [35]. The most significant recent improvement in effective robustness [41] is the introduction of open-vocabulary models. At its time of release, CLIP [40] achieved unprecedented effective robustness on a variety of distribution shifts. Studies have suggested that these models achieve high effective robustness through their data distribution [9], a result of training on large amounts of web-scraped data. However, these models are still worse at downstream tasks than models fine-tuned on in-distribution data. Moreover, fine-tuning on downstream data causes robustness on other data distributions to deteriorate [40; 51]. Many mitigation methods have been proposed, such as Wise-FT, FLYP, LP-FT, and model surgery [51; 15; 28; 29]. Our paper differs from these methods in goal: whereas they seek to keep model robustness while gaining the benefits of fine-tuning on task-specific data, we seek the benefits of fine-tuning while _not collecting any in-distribution data_.
Hence these methods are complementary to Neural Priming, and we employ Wise-FT in our model attunement procedure.

### 2.3 Transductive Learning

Transductive learning [13; 6] focuses on leveraging unlabeled data during inference. It differs from traditional supervised learning, which solely relies on labeled data at train time. Related to transductive learning is test-time training [47; 14; 44]. Test-time training involves adapting and refining the model's predictions based on the specific testing examples encountered. Transductive learning differs from test-time training in that test-time training only considers one test sample at a time, whereas transductive learning aims to learn from the entire test set.

### 2.4 Few-Shot Learning

Few-shot learning research aims to address the challenge of learning from a limited number of labeled examples. In many real-world scenarios, acquiring large labeled datasets is impractical or costly. Older lines of work have focused on meta-training small models [46; 11; 37; 22] on small-scale datasets. More recently, the approach for few-shot learning has shifted towards training large, general-purpose models such as CLIP [40] and ALIGN [23] on web-scale datasets.

### 2.5 Retrieval-Augmented Models

In language, works have demonstrated the effectiveness of retrieval from text corpora or structured data for tasks such as question answering [3; 16; 25]. In general, these methods seek to recover facts either from a large corpus or knowledge graph, then use those to complete tasks. This differs from our scenario, where exact examples at inference time do not necessarily exist in the pre-training corpus. REACT [30] and SuS-X [49] are retrieval-augmented methods for open-vocabulary models which use search to fine-tune with relevant examples [30]. We differ from [30] in that they add a substantial number of new parameters whereas we do not.
Additionally, our approach is significantly more efficient, both computationally and in terms of number of samples, enabling use at inference for additional improvement (Section 3.1.2). We differ from [49] in that their work uses semantic retrieval, whereas \(\mathrm{Neural\ Priming}\) leverages language for fast initial filtering and image search for accurate retrieval. Further, Neural Priming shows that models can improve by revisiting examples seen throughout pretraining, whereas other works retrieve new examples from external datasets.

## 3 Method

\(\mathrm{Neural\ Priming}\) is the process of retrieving relevant information from the pre-training dataset and leveraging it for a specific task. We study it in the context of vision-language contrastive pre-training, so the task description takes the form of a set of class names, \(\mathcal{C}\), already in natural language. A CLIP model [40] consists of a vision embedding model, \(V\), and a language embedding model, \(L\), each producing a vector representation in \(\mathbb{R}^{d}\). The pre-training dataset, \(\mathcal{D}\), consists of a large number of image-text pairs collected from the web. The text component can be noisy, potentially containing irrelevant or inaccurate information about the image content. We break our method down into two main steps: **1.** collecting the priming pool, where we gather data from our pre-training dataset relevant to a particular task, and **2.** model attunement, where we leverage this data to improve our model.

### 3.1 Collecting the Priming Pool

#### 3.1.1 Leveraging Natural Language Task Information

The goal of this step is to collect an initial pool of images relevant to the task at hand given the previously defined natural language description \(\mathcal{C}\). For example, if our task is a set of dog breeds, ideally we would collect sets of images belonging to those breeds and label them accordingly.
A simple way to prime is by using retrieval to gather relevant data points from our pre-training dataset. An existing method for language-based retrieval involves using the CLIP text embedding of a class description \(c\in\mathcal{C}\) for retrieval using semantic similarity scores on the pre-training set [2]. However, with neural priming, prioritizing precision over recall is crucial, considering the size, diversity, and noise of the pre-training dataset. This form of semantic retrieval has a major downside: it is not clear where to threshold similarity scores to retrieve the most relevant images. Set the threshold too leniently and unrelated images are included in our pool. Further, this threshold is often specific to a category, making it infeasible to search at scale. Our approach to language-based priming is to search for the existence of the class name, \(c\in\mathcal{C}\), in the captions of our pre-training dataset to retrieve images relevant to a particular category. We organize these image-text pairs into separate categorical clusters \(\{B_{c}\}\) according to the class name \(c\) mentioned in their captions. This approach has a few advantages over semantic retrieval: **1.** after setting up an inverted index search structure for text retrieval [26; 45], exact string matching is far faster than semantic retrieval, even when approximate nearest neighbor strategies are employed [24], **2.** with exact string search, the category boundary is clear and therefore does not require per-category tuning, and **3.** the retrieval results are overall qualitatively more relevant. Finally, to leverage the semantic understanding of our CLIP model, we then hard filter the priming pool. We do this by constructing a "zero-shot" CLIP classifier, as defined by Radford et al. [40], and removing examples from categorical clusters that do not align with their label according to the CLIP model.
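A minimal sketch of this pool-construction step is below; at LAION scale an inverted index would replace the linear scan, and the `keep` predicate stands in for the zero-shot CLIP consistency filter:

```python
from collections import defaultdict

def build_priming_pool(pairs, class_names, keep=None):
    """pairs: iterable of (image_id, caption). Assign each pair to the
    categorical cluster of every class name appearing verbatim in its
    caption, then optionally drop pairs rejected by the `keep` predicate
    (standing in for the zero-shot CLIP consistency check)."""
    clusters = defaultdict(list)
    for image_id, caption in pairs:
        text = caption.lower()
        for c in class_names:
            if c.lower() in text and (keep is None or keep(image_id, c)):
                clusters[c].append(image_id)
    return dict(clusters)

pairs = [
    ("img0", "A great grey owl perched on a branch"),
    ("img1", "vintage sports car poster"),
    ("img2", "My cat sleeping next to an owl figurine"),
]
pool = build_priming_pool(pairs, ["owl", "cat"])
assert pool == {"owl": ["img0", "img2"], "cat": ["img2"]}
```

Note how exact string matching gives an unambiguous category boundary, at the cost of admitting incidental mentions ("owl figurine"), which the CLIP filter is then responsible for removing.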
#### 3.1.2 Leveraging Image Information at Test Time

At inference time, the model can narrow the relevant priming pool even further by utilizing information about the test distribution. To do this, given an image \(x\) in our test set, we compute the cosine similarity using our CLIP image encoder \(V\), \(\cos(V(x),V(y))\) for every \(y\in P\), our priming pool. We retrieve examples with the top-\(k\) cosine similarity scores (\(k=10\) in most of our experiments). We do this collectively for every image in the test set and collect the retrievals to form a filtered priming pool. If an example is retrieved twice, we de-duplicate it in the final priming pool. Since we do this for all images in the test set, we consider this the _transductive setting_ to align with prior work [13; 6].

### 3.2 Attuning CLIP to the Priming Pool

The goal of this step is to modify our CLIP model to take advantage of the data in the priming pool \(P\). Note that we do not add extra parameters to the final CLIP classifier, unlike [30], and the backbone weights remain unchanged. We first construct the task-specific zero-shot linear head \(W_{z}\in\mathbb{R}^{d\times n}\), where \(d\) is the feature dimension and \(n\) is the number of classes, for our vision encoder using the text model as in [40]. To get logits for a particular example, \(x\), we compute \(W_{z}\cdot V(x)\), so our prediction is \(\operatorname*{arg\,max}_{c}W_{z}\cdot V(x)\). To attune our CLIP model to the priming pool, we perform nearest-class mean (NCM) [32] on all retrieved examples per category to get a pool-specific classification head. Namely, for a given class \(c\), we compute a centroid \(\tilde{y}_{c}=\frac{1}{|B_{c}|}\sum_{x\in B_{c}}V(x)\) and then normalize this centroid to produce a class embedding \(y_{c}=\tilde{y}_{c}/\|\tilde{y}_{c}\|\). We define the collection of centroids as the matrix \(W_{ft}=[y_{c}]_{c}\in\mathbb{R}^{d\times n}\).
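The test-time filtering and NCM steps described above can be sketched as follows, using random arrays as stand-ins for CLIP image embeddings:

```python
import numpy as np

def transductive_filter(pool_emb, test_emb, k=10):
    """Keep, for each test image, the k priming-pool examples with the
    highest cosine similarity, de-duplicated across the test set.
    Embeddings are rows; returns indices into the pool."""
    P = pool_emb / np.linalg.norm(pool_emb, axis=1, keepdims=True)
    T = test_emb / np.linalg.norm(test_emb, axis=1, keepdims=True)
    topk = np.argsort(-(T @ P.T), axis=1)[:, :k]
    return np.unique(topk)

def ncm_head(pool_emb, pool_labels, n_classes):
    """Nearest-class-mean head W_ft: one normalized centroid per class."""
    d = pool_emb.shape[1]
    W_ft = np.zeros((d, n_classes))
    for c in range(n_classes):
        centroid = pool_emb[pool_labels == c].mean(axis=0)
        W_ft[:, c] = centroid / np.linalg.norm(centroid)
    return W_ft

rng = np.random.default_rng(0)
pool_emb = rng.standard_normal((40, 8))
pool_labels = np.arange(40) % 3             # toy labels; every class populated
keep = transductive_filter(pool_emb, rng.standard_normal((5, 8)), k=3)
assert len(keep) <= 15                       # at most 5 * k, fewer after dedup
W_ft = ncm_head(pool_emb, pool_labels, 3)
assert W_ft.shape == (8, 3)
assert np.allclose(np.linalg.norm(W_ft, axis=0), 1.0)
```

In practice the NCM head would be computed on the filtered pool; the toy data here skips that only to keep every class populated.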
To expand this to few-shot scenarios, we mix the labeled data into the corresponding categorical clusters before performing NCM. Finally, we mix \(W_{z}\) and \(W_{ft}\) using a mixing coefficient \(\alpha\in[0,1]\) as \(W_{\alpha}=(1-\alpha)\cdot W_{ft}+\alpha\cdot W_{z}\), which is our final classification head. We choose alpha according to an exponential scaling heuristic \(\alpha=e^{-|P|^{2}/\sigma}\). We discuss our choice of this heuristic in the appendix. Intuitively, if we do not have much data in our priming pool, we want it to influence our model less. Using \(\alpha\) mixing when fine-tuning CLIP models has been shown to reduce the effects of distribution shift [51]. ## 4 Experiments Our key results include: **1.** Priming improves performance over baselines in the few-shot setting by 3.81% on average across all datasets and 2.4% on ImageNet in the zero-shot setting **2.** Priming in the Figure 2: **A qualitative example of our approach for transductive image filtering.** Given an initial _priming pool_, acquired through natural-language text search on the captions of our pre-training dataset (Section 3.1), we filter out irrelevant examples using images from our test set. **(left)** we show examples from the great owl categorical cluster of our priming pool before filtering, **(center)** we show an example image from the same category of ImageNet-V2, **(right)** example retrievals using image embedding similarity from the entire priming pool. The visual similarity of the retrievals are apparent, and they are generally from the appropriate categorical cluster. Doing this filtering results in a significantly more relevant priming pool. transductive, or on-the-fly, setting further improves performance over baselines by 2.51% accuracy and 1.09% over standard Neural Priming **3.** Priming is complementary to existing prompt-tuning methods. 
Our finding indicates that images in the priming set impart distinct information to the model compared to textual class descriptions. Full details of hyperparameter choices and error bars are included in the appendix.

### Datasets and Architectures

We evaluate on standard transfer learning and distribution shift benchmarks. ImageNet [7] is a large-scale, general classification dataset that has been well-studied in both transfer learning and distribution shift. ImageNetV2 [41] is one of its natural distribution shift test sets, made by reproducing the original data collection procedure of ImageNet, but even modern large-scale pre-trained models have performance drops on it. ImageNet Sketch [50] and ImageNet-R [18] are natural distribution shifts created by assembling sketches and various renditions of the ImageNet classes. ImageNet-A [19] is a natural adversarial distribution shift of ImageNet, created by collecting images that are misclassified by ResNets. StanfordCars [27], FGVCAircraft [31], Flowers102 [36], Food101, and OxfordPets [38] are fine-grained classification datasets which require understanding subtle visual differences between classes and are commonly used for transfer learning benchmarks [17; 21; 40; 54]. SUN397 [52] is a large-scale scene recognition dataset with 397 scene categories. We perform our experiments with OpenCLIP models [51] trained on LAION-2B and 400M [43]. We choose OpenCLIP because their pretrain datasets are publicly available, so we can ensure no additional data is introduced. The model architecture reported in the main paper is the B-16 variant trained on LAION-2B unless otherwise stated; we report L-14 and B-32 results in Appendix C.

### Zero-shot Results

In this setting, our model only has access to data it has seen during pre-training, in this case LAION-2B. Neural Priming improves top-1 accuracy by 2.45% on ImageNet and 4.25% on average across 6 other diverse downstream datasets compared to the CLIP baseline.
In the zero-shot setting, Neural Priming outperforms the 3-shot CLIP model on StanfordCars and FGVCAircraft. This result is particularly noteworthy since training on in-distribution data traditionally outperforms zero-shot techniques [53]. Note that we do not present error bars for the zero-shot experiments as the process is deterministic. We also compare with VLM [33] and CuPL [39], two zero-shot prompt-tuning methods which obtain natural language descriptions of each class using language models, and a retrieval-with-fine-tuning baseline. For implementation details on how the retrieval and fine-tuning are performed, see Appendix H. Interestingly, we find that Neural Priming is complementary to existing prompt-tuning methods. The accuracy improvements from CuPL and VLM are additive with Neural Priming. For example, CuPL and Neural Priming each improve performance by **3.78%** and **7.17%** respectively on FGVCAircraft. Ensembling the methods results in **10.74%** improvement over the baseline (Table 1). This surprising result suggests that the textual class descriptions in CuPL and VLM provide unique information to the model that differs from the information obtained from the images in the priming set.
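The mechanics of the ensemble are not spelled out in this excerpt, so the sketch below makes the simplest assumption: average the logits from the priming-based head and a prompt-based (e.g., CuPL) head, both \(d\times n\) matrices; `ensemble_logits` is our hypothetical helper, not the paper's implementation.

```python
import numpy as np

def ensemble_logits(feats, heads):
    """Average logits over several d x n classification heads.

    feats: (N, d) image features; heads: list of (d, n) head matrices,
    e.g. the Neural Priming head and a CuPL prompt-based zero-shot head.
    Returns (N, n) averaged logits; predictions are argmax over classes.
    """
    return np.mean([feats @ W for W in heads], axis=0)
```

Because both heads live in the same feature space, this ensemble needs no extra training, consistent with the additive gains reported above.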
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline & ImageNet & \begin{tabular}{c} Stanford \\ Cars \\ \end{tabular} & \begin{tabular}{c} FGVC \\ Aircraft \\ \end{tabular} & Flowers102 & Food101 & \begin{tabular}{c} Oxford \\ Pets \\ \end{tabular} & SUN397 \\ \hline CLIP [40; 21] & 68.30 & 87.40 & 25.86 & 71.65 & 86.58 & 90.21 & 67.35 \\ Retrieval + Finetuning & 70.28 & 87.95 & 26.22 & 72.15 & 86.63 & 90.35 & 68.01 \\ VLM [33] & 69.35 & 87.88 & 28.54 & 72.11 & 86.31 & 90.24 & 67.73 \\ CuPL [39] & 70.25 & 88.63 & 29.64 & 72.32 & 86.20 & 91.16 & 70.80 \\ \hline Priming (Ours) & 70.75 & 89.30 & 33.03 & 79.81 & 86.66 & **91.87** & 71.21 \\ Priming + CuPL (Ours) & **71.38** & **90.23** & **36.00** & **80.04** & **86.86** & 91.85 & **72.35** \\ \hline \hline \end{tabular} \end{table} Table 1: **Performance of Neural Priming and comparable methods in the zero-shot setting.** Priming consistently improves top-1 accuracy across standard transfer learning data sets. Performance reported for the OpenCLIP ViT-B-16 model pretrained on LAION-2B.

Another observation is that Neural Priming is especially effective for specialized domains such as StanfordCars, Flowers102, and FGVCAircraft. We speculate this is due to the fact that the label space and image content differs from the majority of the pre-training set. For example, although airplanes occur frequently in LAION-2B, they are rarely described according to their specific model such as _Boeing 737-200_. Therefore, recalling and priming the model on pre-train images with such fine-grained classes significantly improves the model. For analysis of LAION-2B with regard to label statistics see Appendix B. In contrast, for datasets which are more aligned with LAION-2B and the distribution of internet images, such as ImageNet and SUN397, the accuracy gain provided by Neural Priming is smaller in comparison, albeit still significant.
In the limit of this trend, Food101 sees almost no improvement across all methods, and even training on in-distribution data for the few-shot case barely improves the accuracy. We speculate that this is because images similar to those in Food101 are already well-represented in LAION-2B, rendering additional food images of marginal informational value. We provide analysis of how well the attributes of each dataset are captured by LAION-2B in Appendix B. To be precise, when we refer to the term "shot number" throughout the experiments section, we mean the number of labeled examples from the target training set. We do not consider images retrieved from LAION-2B as shots in this setting because they are obtained from the pre-training set.

### Few-Shot Results

Neural Priming improves performance for all datasets and shots in the few-shot setting. We compare with CoOp, a recent method for few-shot prompt-tuning, and a nearest-class-mean (NCM) baseline. On average across all shots and datasets Neural Priming improves by 3.81% in accuracy over the closest baseline. Results can be found in Figure 3 and Table 8 of the Appendix. Notably, we find that Neural Priming can match the accuracy of models trained with a substantial number of training examples _without using any of the labeled training data_ for all of the evaluated datasets (Figure 3). Similar to the zero-shot setting, we observe that Neural Priming is complementary with prompt-tuning methods (Appendix F).

Figure 3: **Performance of Neural Priming and comparable methods in the few-shot setting. We find consistent improvement across shot numbers and datasets. In particular, Neural Priming especially excels for fine-grained datasets such as FGVCAircraft and Flowers102. We hypothesize that such fine-grained captioned images are not well represented in LAION-2B, therefore revisiting this subset of data improves the model more.**
Additionally, we observe that as the shot number increases, improvement over the baseline decreases. At 1-shot the improvement in accuracy over the baselines is 5.63% on average, while at 10-shot the improvement is 2.04%. Intuitively, as the model receives more target training data, obtaining additional examples from the pretrain set becomes less necessary.

### Transductive Results

We compare Neural Priming in the transductive setting on 4 standard distribution shift datasets: ImageNet-V2, ImageNet Sketch, ImageNet-R and ImageNet-A. Distribution shift datasets are a natural application of adaptation at test time. Often real-world datasets differ from the training data; therefore, models should be able to adapt on-the-fly. In this setting, the model can learn from the test images without labels before making predictions. We compare with Test-Time Prompt-Tuning (TPT), a state-of-the-art method which uses a self-supervised objective to learn from test data. We find that Neural Priming with images in the test set improves performance over standard Neural Priming by 1.09%, and by 2.51% over TPT, across the 4 distribution shifts (Table 2). Looking at Figure 2, we qualitatively see that the priming pool more closely matches the test images after filtering for the closest images in the initial priming pool. Though the distribution shift can often be imperceptible, such as between ImageNet and ImageNet-V2, quantitatively we see that the transductive filtering step finds images in the pre-training set close to the test distribution. The transductive retrieval for 50,000 images in the test set on average takes 96 seconds for a priming pool of 1 million images, while retraining the classifier takes on average 11.5 seconds for a priming pool of size 10,000 on standard hardware. We provide further analysis of run-time efficiency of the on-the-fly variant of Neural Priming in Appendix G.
### Ablations

\begin{table} \begin{tabular}{c c c c c} \hline \hline & ImageNet-V2 & ImageNet-R & \begin{tabular}{c} ImageNet \\ Sketch \\ \end{tabular} & ImageNet-A \\ \hline CLIP [21, 40] & 59.35 & 64.57 & 57.05 & 35.95 \\ TPT [44] & 59.84 & 78.74 & 52.75 & 36.92 \\ Priming (Ours) & 60.12 & 77.98 & 58.29 & 37.56 \\ Transduct. Priming (Ours) & **60.76** & **79.37** & **59.97** & **38.20** \\ \hline \hline \end{tabular} \end{table} Table 2: **Performance of Neural Priming and relevant methods for the transductive setting.** Neural Priming finds examples similar to the test image at inference to optimize the model. Models are evaluated zero-shot on 4 distribution shift datasets. Neural Priming excels on distribution shifts which differ significantly from the natural language description of the class names. Performance reported for the OpenCLIP ViT-B-16 model pretrained on LAION-2B.

Figure 4: **Ablation over the number of samples per class in the priming pool.** We observe a consistent zero-shot accuracy improvement as the number of samples drawn from our pool increases.

We investigate the impact of the priming pool size on the zero-shot accuracy of downstream tasks (Figure 4). Our analysis reveals that as the size of the priming pool increases, there is a general improvement in accuracy. However, there are certain limitations associated with enlarging the pool. The majority of classes in the downstream task have a limited number of available images. Consequently, when we retrieve a larger number of images for the priming pool, they tend to contain more noise and mislabeled samples. Furthermore, for rare classes, the number of images obtained through exact string search is often less than 100. To address this, a potential extension could involve utilizing a language model to generate alias names for classes, which could then be used to perform additional string searches, thereby expanding the initial priming pool size.
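A toy version of this string search, including the proposed alias extension, might look as follows. In practice the search runs over billions of LAION-2B captions with an index rather than a linear scan; `build_priming_pool` and the `aliases` argument are our illustrative names, not the paper's API.

```python
def build_priming_pool(captions, class_names, aliases=None):
    """Map each class name to the caption indices whose text contains that
    name (or one of its alias strings) as an exact substring, case-insensitively."""
    aliases = aliases or {}
    pool = {}
    for cname in class_names:
        queries = [cname] + aliases.get(cname, [])
        pool[cname] = [i for i, cap in enumerate(captions)
                       if any(q.lower() in cap.lower() for q in queries)]
    return pool
```

Feeding in language-model-generated aliases for rare classes (e.g., alternate model designations) would enlarge the per-class match lists without changing the search itself.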
We also analyze the impact of the architecture on the accuracy improvement achieved by \(\mathrm{Neural\ Priming}\) in the zero-shot setting (Figure 5). To examine this, we conduct experiments using models of varying capacities, namely ViT B-32, B-16, and L-14. We observe that the gains remain consistent across the models. This finding suggests that even as we scale the architecture's capacity, our method will continue to yield significant and consistent relative error reduction.

## 5 Limitations

Neural Priming has a few potential limitations. Firstly, it requires that the pre-train dataset contains images similar to those in the downstream task. Though all of the datasets we benchmark have abundant relevant data, it is possible that, for more out-of-distribution datasets, LAION-2B simply does not contain related or queryable images. Secondly, accurate class names are required for retrieval. Meaningful class names for some datasets can be difficult to obtain. For example, in the Flowers102 dataset, some flower species are given by their Latin names, which leads to poor retrieval. This issue generally affects open-vocabulary models, which require accurate class names to initialize the zero-shot classifier. This limitation may be resolved by using language models to replace class names with their more commonly known synonyms. Lastly, Neural Priming requires access to the pre-training dataset, which is not always possible, as in the case of the OpenAI variant of CLIP. In this case a surrogate dataset, such as LAION-2B, would likely suffice.

## 6 Discussion & Conclusion

We present \(\mathrm{Neural\ Priming}\), a method to improve the performance of open-vocabulary models by leveraging their own large-scale, diverse pre-training data with no additional data required.
With \(\mathrm{Neural\ Priming}\), we demonstrate how to construct a high-quality priming pool of examples from the pre-training dataset relevant to a particular task and how to utilize this pool to improve our model. We further show that our method is effective across a variety of downstream tasks and settings. In particular, our method can be used in situations where only natural language descriptions of relevant classes are given, when we have the ability to adapt at inference time, and when we are provided with few labeled in-distribution examples. In all settings, our framework demonstrates a substantial improvement in performance over existing interventions, and is in fact complementary with current prompt-tuning and robustness methods. Our method is also computationally cheap, requiring no modification of the model backbone weights and only a fast text search over the pre-training corpus. The efficacy of \(\mathrm{Neural\ Priming}\) leads to some interesting questions for future work. For example, if the model has seen this data before, why does it help to recall it? We hypothesize that this is due to the fact that the diversity of these datasets introduces competing objectives, which are difficult for the model to optimize directly. For example, the same kind of image could appear with multiple captions and vice-versa, making it difficult to prompt a CLIP model trained on such data at inference time for a particular task. A systematic study of this could elucidate important limitations of current large-scale training paradigms.

Figure 5: **Analyzing the effect of model capacity on Neural Priming. We find the relative error reduction stays consistent even as the scale of the model increases.**

## Acknowledgments

We are grateful to Sarah Pratt, Mitchell Wortsman, and Romain Beaumont for helpful discussions and feedback.
Ali Farhadi acknowledges funding from the NSF awards IIS 1652052, IIS 17303166, DARPA N66001-19-2-4031, DARPA W911NF-15-1-0543, and gifts from Allen Institute for Artificial Intelligence, Google, and Apple. Ludwig Schmidt and Alex Fang are in part supported by the NSF AI Institute for Foundations of Machine Learning (IFML, CCF-2019844), Open Philanthropy, Google, and the Allen Institute for AI.
2305.09190
(Almost) Complete Intersection Lovász-Saks-Schrijver ideals and regularity of their powers
We discuss the property of (almost) complete intersection of LSS-ideals of graphs of some special forms, like trees, unicyclic, and bicyclic graphs. Further, we give a sufficient condition for the complete intersection property of twisted LSS-ideals in terms of a new graph theoretical invariant called twisted positive matching decomposition number denoted by tpmd. We also study the regularity of powers of LSS-ideals and make observations related to the Koszul property of the quotients of the same.
Marie Amalore Nambi, Neeraj Kumar, Chitra Venugopal
2023-05-16T05:59:56Z
http://arxiv.org/abs/2305.09190v2
# (Almost) Complete Intersection Lovász–Saks–Schrijver Ideals and Regularity of Their Powers

###### Abstract.

We discuss the property of (almost) complete intersection of LSS-ideals of graphs of some special forms, like trees, unicyclic, and bicyclic graphs. We also study the regularity of powers of LSS-ideals and make observations related to the Koszul property of the quotients of the same.

Key words and phrases: Complete intersection, Almost complete intersection, Lovász–Saks–Schrijver (LSS) ideal, Regularity

2020 Mathematics Subject Classification: Primary 13F65, 13F70, 13C40; Secondary 14M10, 13D02, 05E40

## Introduction

Let \(G\) be a simple graph on \([n]=\{1,\ldots,n\}\) and \(k\) be a field. Then for an integer \(d>0\), and a polynomial ring \(S=k[x_{ij}\mid i\in[n],j\in[d]]\), the ideal \[L_{G}^{k}(d)=\langle f_{e}^{(d)}=\sum_{l=1}^{d}x_{il}x_{jl}\mid e=\{i,j\}\in E(G)\rangle\] is called the Lovász–Saks–Schrijver ideal ([23]). We refer to it as the LSS-ideal for short and denote it by \(L_{G}(d)\) when the field \(k\) is evident. It defines the variety of orthogonal representations of the complementary graph of \(G\) (cf. [9]). For \(d=1\), \(L_{G}(d)\) corresponds to the edge ideal of the graph \(G\), which is well studied in the literature; see [5]. When \(d=2\), in [7], Bolognini et al. prove that for a bipartite graph \(G\), the binomial edge ideal \(J_{G}\) coincides with the parity binomial edge ideal \(\mathcal{I}_{G}\), the Lovász–Saks–Schrijver ideal \(L_{G}(2)\), and the permanental edge ideal \(\Pi_{G}\) (cf. [16]). More results on the equality of \(J_{G}\) and \(\mathcal{I}_{G}\) with \(\Pi_{G}\) are discussed in [20, Remark 3.4].
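As a concrete instance of the definition (our worked example, not from the paper), take the path \(P_{3}\) with edges \(\{1,2\}\) and \(\{2,3\}\) and \(d=2\):

```latex
% LSS-generators for the path P_3 = 1 - 2 - 3 with d = 2:
\[
L_{P_{3}}(2)=\langle\, x_{11}x_{21}+x_{12}x_{22},\;
                       x_{21}x_{31}+x_{22}x_{32}\,\rangle
\subset k[x_{ij}\mid i\in[3],\, j\in[2]],
\]
% while for d = 1 the generators are the monomials x_{11}x_{21} and
% x_{21}x_{31}, i.e. (writing x_i for x_{i1}) the edge ideal
% (x_1 x_2, x_2 x_3) of P_3.
```

Since \(\Delta(P_{3})=2=d\), Remark 1.1 below shows that \(L_{P_{3}}(2)\) is a complete intersection but not prime.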
LSS-ideals have a close relationship with some other classes of ideals associated with graphs, like the determinantal ideals of the \((d+1)\)-minors of generic/generic symmetric matrices with \(0\)s in positions corresponding to the edges of the graph \(G\) (denoted by \(X_{G}^{gen}/X_{G}^{sym}\), respectively) and Pfaffian ideals of order \(2d\) of generic skew-symmetric matrices with entries prescribed by the edges of \(G\) (denoted by \(X_{G}^{skew}\)). This is evident from the following isomorphisms discussed in [9].

1. Let \(G\) be a subgraph of a complete bipartite graph \(K_{m,n}\) where \(m,n\in\mathbb{N}\). Then \(K[x_{ij}]/(I_{d+1}(X_{G}^{gen})+(x_{ij}\,|\,\{i,j\}\in E))\cong K[YZ]/L_{G}(d)\cap K[YZ]\), where \(Y=(y_{ij})\) and \(Z=(z_{ij})\) are \(m\times d\) and \(d\times n\) matrices of variables, respectively.
2. Let \(G\) be a subgraph of a complete graph \(K_{n}\) where \(n\in\mathbb{N}\). Then \(K[x_{ij}]/(I_{d+1}(X_{G}^{sym})+(x_{ij}\,|\,\{i,j\}\in E))\cong K[YY^{T}]/L_{G}(d)\cap K[YY^{T}]\), where \(Y=(y_{ij})\) is an \(n\times d\) matrix of variables.
3. Let \(G\) be a subgraph of a complete graph \(K_{n}\) where \(n\in\mathbb{N}\), and for \(\hat{f}_{e}^{(d)}=\sum_{k=1}^{d}(y_{i\,2k-1}y_{j\,2k}-y_{i\,2k}y_{j\,2k-1})\), let \(\hat{L}_{G}(d)=\langle\hat{f}_{e}^{(d)}\,:\,e\in E\rangle\) be the twisted LSS-ideal associated to \(G\). Then \(K[x_{ij}]/(\mathrm{Pf}_{2d+2}(X_{G}^{skew})+(x_{ij}\,|\,\{i,j\}\in E))\cong K[YJY^{T}]/\hat{L}_{G}(d)\cap K[YJY^{T}]\), where \(Y=(y_{ij})\) is an \(n\times 2d\) matrix of variables and \(J\) is a \(2d\times 2d\) block-diagonal matrix with \(d\) blocks of \(\begin{pmatrix}0&1\\ -1&0\end{pmatrix}\) on the diagonal.

**Acknowledgement.** The first author is financially supported by the University Grant Commission, India. The second author is partially funded by MATRICS grant, project no.
MTR/2020/000635, from Science and Engineering Research Board (SERB), India. The third author is financially supported by INSPIRE fellowship, DST, India.

## 1. Preliminaries

Throughout the article, \(d\) denotes a positive integer and \(G\) a finite simple undirected graph on the vertex set \(V(G)=[n]\) with edge set \(E(G)\), unless otherwise stated.

**Definitions and Notations.**

* A _subgraph_ of \(G\) is a graph \(H\) such that \(V(H)\subset V(G)\) and \(E(H)\subset E(G)\).
* For \(U\subset V(G)\), \(G[U]\) denotes the _induced subgraph_ of \(G\) on the vertex set \(U\); for \(i,j\in U\), \(\{i,j\}\in E(G[U])\) if and only if \(\{i,j\}\in E(G)\). For a vertex \(u\in V(G)\), \(G\setminus u\) denotes the induced subgraph on \(V(G)\setminus u\).
* For \(m,n>0\), \(K_{n}\) denotes the _complete_ graph on \([n]\) and \(K_{m,n}\) denotes the _complete bipartite_ graph on \([m+n]\). For \(n>2\), \(C_{n}\) denotes the _cycle_ on \([n]\).
* A connected graph \(G\) is said to be a _tree_ if \(G\) has \(n-1\) edges.
* A connected graph \(G\) is said to be a _unicyclic_ graph if \(G\) has \(n\) edges.
* A connected graph \(G\) is said to be a _bicyclic_ graph if \(G\) has \(n+1\) edges.
* For a vertex \(v\in V(G)\), the _degree_ of \(v\), denoted by \(\deg_{G}(v)\), is the number of edges incident to \(v\).
* For a graph \(G\), \(\Delta(G)=\max_{v\in V(G)}\deg_{G}(v)\).

The property of LSS-ideals of forests being radical, prime, and complete intersections is given in the following result, which is referred to repeatedly in this article.

**Remark 1.1**.: [9, Theorem 1.5] Let \(G\) be a forest and denote by \(\Delta(G)\) the maximal degree of a vertex in \(G\). Then

1. \(L_{G}(d)\) is radical for all \(d\).
2. \(L_{G}(d)\) is a complete intersection if and only if \(d\geq\Delta(G)\).
3. \(L_{G}(d)\) is prime if and only if \(d\geq\Delta(G)+1\).
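Remark 1.1 can be applied mechanically once the maximal degree is known; the following sketch (our helper, with the forest given as an adjacency dictionary, not code from the paper) reads off both thresholds:

```python
def lss_forest_status(adj, d):
    """Remark 1.1 [9, Theorem 1.5]: for a forest G, L_G(d) is a complete
    intersection iff d >= Delta(G), and prime iff d >= Delta(G) + 1
    (it is radical for every d)."""
    max_deg = max(len(neighbors) for neighbors in adj.values())  # Delta(G)
    return {"complete_intersection": d >= max_deg, "prime": d >= max_deg + 1}

# Star K_{1,3} (a tree with Delta = 3): L_G(3) is a CI but not prime.
star = {1: [2, 3, 4], 2: [1], 3: [1], 4: [1]}
```

For the star, `lss_forest_status(star, 3)` reports a complete intersection that is not prime, while `d = 4` clears both thresholds.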
**Definition 1.1**.: [9, Definition 5.1] Given a graph \(G=(V,E)\) a positive matching of \(G\) is a subset \(M\subset E\) of pairwise disjoint sets such that there exists a weight function \(w:V\to\mathbb{R}\) satisfying: \[\sum_{i\in e}w(i)>0\text{ if }e\in M,\qquad\qquad\sum_{i\in e}w(i)<0\text{ if }e\in E \setminus M.\] In [9], the authors introduce a graph theoretical invariant called positive matching decomposition number and show it to be related to the algebraic properties of LSS-ideals. Further study on positive matching decompositions (pmd) of graphs is done in [12]. It is defined as follows. **Definition 1.2**.: [9, Definition 5.3] Let \(G=(V,E)\) be a finite simple graph. A positive matching decomposition (or pm-decomposition) of \(G\) is a partition \(E=\cup_{i=1}^{p}M_{i}\) into pairwise disjoint subsets such that \(M_{i}\) is a positive matching on \((V,E\setminus\cup_{j=1}^{i-1}M_{j})\) for \(i=1,\dots,p\). The \(M_{i}\) are called the parts of the pm-decomposition. The smallest \(p\) for which \(G\) admits a pm-decomposition with \(p\) parts will be denoted by \(\operatorname{pmd}(G)\). **Remark 1.2**.: [9, Theorem 1.3] Let \(G=(V,E)\) be a graph. Then for \(d\geq\mathrm{pmd}(G)\) the ideal \(L_{G}(d)\) is a radical complete intersection. In particular, \(L_{G}(d)\) is prime if \(d\geq\mathrm{pmd}(G)+1.\) **Remark 1.3**.: [12, Theorem 2.1] A matching \(M\) of a graph \(G\) is positive if and only if the subgraph of \(G\) induced by \(M\) has no alternating closed walks with respect to \(M\). **Remark 1.4**.: [20, Lemma 4.1] Let \(I\) be a radical ideal in a Noetherian commutative ring \(R\). 
Then, for any \(f\in R\) and \(n\geq 2\), \[I:f=I:f^{n}.\] **Remark 1.5**.: [20, Lemma 4.2] If \(I\subseteq R=k[x_{1},\ldots,x_{n}]\) is a homogeneous ideal such that \(I=J+(a)\), where \(J\) is generated by a homogeneous regular sequence, \(a\) is a homogeneous element and \(J:a=J:a^{2}\), then \(I\) is either a complete intersection or an almost complete intersection. **Definition 1.3**.: Let \(S\) be a standard graded polynomial ring over a field \(k\) and \(M\) be a finitely generated graded \(S\)-module. Then the _Castelnuovo-Mumford regularity_ or simply _regularity_ of \(M\) over \(S\), denoted by \(\mathrm{reg}_{S}(M)\), is defined as \[\mathrm{reg}_{S}(M)\coloneqq\max\{j-i\mid\mathrm{Tor}_{i}^{S}(M,k)_{j}\neq 0\}.\] For convenience, we shall use \(\mathrm{reg}(M)\) instead of \(\mathrm{reg}_{S}(M)\). **Remark 1.6**.: [6, Lemma 4.4] Let \(u_{1},\ldots,u_{n}\) be a regular sequence of homogeneous polynomials in \(S\) with \(\deg(u_{i})=d\). Let \(I=(u_{1},\ldots,u_{n})\) be an ideal. Then for all \(s\geq 1\), we have \[\mathrm{reg}(I^{s})=ds+(d-1)(n-1).\] **Remark 1.7**.: [18, Corollary 2.11] Let \(S\) be a standard graded polynomial ring over a field \(k\) and \(u_{1},\ldots,u_{n}\) be a homogeneous \(d\)-sequence with \(\deg(u_{i})=d_{i}\) in \(S\) such that \(u_{1},\ldots,u_{n-1}\) is a regular sequence. Set \(I=(u_{1},\ldots,u_{n})\) and \(d=\max\{d_{i}:1\leq i\leq n\}.\) Then, for all \(s\geq 1\), \[\mathrm{reg}(S/I^{s})\leq d(s-1)+\max\{\mathrm{reg}(S/I),\sum_{i=1}^{n-1}d_{i} -n\}.\] ## 2. Complete Intersection In this section, we prove Theorem 0.1 and give a necessary condition (Lemma 2.1) for an LSS-ideal to be a complete intersection. **Lemma 2.1**.: Let \(d\) be an integer and \(G\) be a graph such that \(d<\Delta(G)\). Then \(L_{G}(d)\) is not a complete intersection. Proof.: It suffices to show that there exists a prime \(P\) such that \(L_{G}(d)\) is contained in \(P\) with \(\mathrm{ht}(P)\leq\mu(G)-1\). 
To prove this, let \(u\) be a vertex of \(G\) such that \(\deg_{G}(u)\geq d+1\) and set \(T=\{u\}\). Set \(P_{T}=(x_{u1},\ldots,x_{ud})+Q_{G\setminus u}\), where \(Q_{G\setminus u}\) is a minimal prime of \(L_{G\setminus u}(d)\). Since the prime ideals \(Q_{G\setminus u}\) and \((x_{u1},\ldots,x_{ud})\) are in distinct sets of variables, \(P_{T}\) is a prime ideal and clearly contains \(L_{G}(d)\). Now, from [17, Theorem 13.5], one has \(\mathrm{ht}(Q_{G\setminus u})\leq\mu(G\setminus u)\). Then, \[\mathrm{ht}(P_{T}) =\mathrm{ht}(x_{u1},\ldots,x_{ud})+\mathrm{ht}(Q_{G\setminus u}),\] \[\leq d+\mu(G\setminus u),\] \[=d+\mu(G)-(d+1),\] \[=\mu(G)-1.\] Thus \(L_{G}(d)\) is not a complete intersection.

**Theorem 2.1**.: Let \(G\) be a unicyclic graph and \(d\geq 3\). Then:

1. \(L_{G}(d)\) is a radical complete intersection if \(d\geq\Delta(G)\).
2. \(L_{G}(d)\) is prime if \(d\geq\Delta(G)+1\).

Proof.: 1. By Remark 1.2, it is enough to show \(\operatorname{pmd}(G)\leq d\). We consider a matching \(M_{1}\) consisting of an edge \(e_{1}\) of the cycle and edges \(e_{2},\ldots,e_{m}\) such that for \(i=2,\ldots,m\), \(e_{i}\) is not an edge of the cycle and is from a vertex of degree \(\Delta(G)\) in \(G\setminus\{e_{1},\ldots,e_{i-1}\}\). Since \(M_{1}\) has only one edge from the cycle, from Remark 1.3, \(M_{1}\) is a positive matching on \(G\). From the construction of \(M_{1}\), we get that \(G\setminus M_{1}\) is a forest with \(\Delta(G\setminus M_{1})=\max\{2,\Delta(G)-1\}\) and from Remark 1.2, it follows that \(\operatorname{pmd}(G\setminus M_{1})=\max\{2,\Delta(G)-1\}\). This implies that \(\operatorname{pmd}(G)\leq\max\{3,\Delta(G)\}\); in particular, \(\operatorname{pmd}(G)\leq d\). 2. Since \(\operatorname{pmd}(G)\leq d\), the result follows from Remark 1.2.

**Theorem 2.2**.: Let \(G\) be a bicyclic graph and \(d\geq 4\). Then:

1. \(L_{G}(d)\) is a radical complete intersection if \(d\geq\Delta(G)\).
2. \(L_{G}(d)\) is prime if \(d\geq\Delta(G)+1\).
Proof.: The proof is similar to that of Theorem 2.1. We consider a positive matching \(M_{1}\) consisting of an edge \(e_{1}\) of a cycle and edges \(e_{2},\ldots,e_{m}\) such that for \(i=2,\ldots,m\), \(e_{i}\) is not an edge of any cycle and is from a vertex of degree \(\Delta(G)\) in \(G\setminus\{e_{1},\ldots,e_{i-1}\}\). Then, in this case, from the construction of the matching \(M_{1}\), \(G\setminus M_{1}\) is unicyclic with \(\Delta(G\setminus M_{1})=\max\{3,\Delta(G)-1\}\). Moreover, from Theorem 2.1, \(\operatorname{pmd}(G\setminus M_{1})\leq\max\{3,\Delta(G)-1\}\). Hence we get \(\operatorname{pmd}(G)\leq\max\{4,\Delta(G)\}\), and the theorem follows.

As a consequence of the above theorems, sufficient conditions are obtained for the ideals of \((d+1)\)-minors of generic/generic symmetric matrices associated with graphs to be radical and of maximal height, and for the Pfaffian ideal generated by Pfaffians of order \(2d+2\) of generic skew-symmetric matrices associated with graphs to be radical.

**Corollary 2.1**.: Let \(G\) be a unicyclic (bicyclic) graph with \(d\geq\Delta(G)\) and \(d\geq 3\). Then:

1. \(I_{d+1}(X_{G}^{sym})\) and \(\operatorname{Pf}_{2d+2}(X_{G}^{skew})\) are radical.
2. \(I_{d+1}(X_{G}^{sym})\) attains maximal height.
3. If \(G\) is even cyclic, then \(I_{d+1}(X_{G}^{gen})\) is radical and attains maximal height.

Proof.: Follows from Theorem 2.1 and [9, Proposition 7.4, 7.5, 7.7].

**Remark 2.1**.: Note that Theorem 2.1 and Theorem 2.2 fail to be true for \(d=2\) and \(d=3\), respectively. For example:

1. The ideal \(L_{C_{4}}(2)\) is not a complete intersection by [20, Theorem 3.5].
2. Let \(G=K_{2,3}\) be a graph. Using Macaulay2 one can see that \(\mu(L_{G}(3))>\operatorname{ht}(L_{G}(3))=5\); thus \(L_{G}(3)\) is not a complete intersection.

### Conclusion

The proof of Theorem 0.1 follows from Lemma 2.1, Theorem 2.1 and Theorem 2.2.

## 3. Almost Complete Intersection

This section includes necessary and sufficient conditions for LSS-ideals corresponding to trees and \(C_{3}\)-free unicyclic (bicyclic) graphs to be almost complete intersections. We begin by giving a necessary condition for an LSS-ideal associated with a graph, in general, to be an almost complete intersection.

**Lemma 3.1**.: Let \(G\) be a graph and \(d\) be an integer such that \(d<\Delta(G)-1\). Then \(L_{G}(d)\) is not an almost complete intersection.

Proof.: Let \(u\) be a vertex of \(G\) with \(\deg_{G}(u)\geq d+2\) and set \(T=\{u\}\). Then the rest of the proof is similar to that of Lemma 2.1.

**Theorem 3.1**.: Let \(H_{1}\) and \(H_{2}\) be trees with \(\Delta(H_{1})\leq d\) and \(\Delta(H_{2})\leq d\). If \(G\) is a tree on \([n]\) with \(\Delta(G)>d\), then \(L_{G}(d)\) is an almost complete intersection if and only if \(G\) is obtained by adding an edge between \(H_{1}\) and \(H_{2}\).

Proof.: Suppose \(G\) is obtained by adding an edge \(e=\{u,v\}\) between \(H_{1}\) and \(H_{2}\), where \(\Delta(H_{1})\leq d\) and \(\Delta(H_{2})\leq d\). Then \(\deg_{G}(u)=d+1\) or \(\deg_{G}(v)=d+1\). From Remark 1.1, it follows that \(L_{G\setminus e}(d)\) is a radical complete intersection. Since \(\Delta(G)>d\), from Lemma 2.1, Remark 1.4 and Remark 1.5 it follows that \(L_{G}(d)\) is an almost complete intersection. Now, assume that \(G\) is not a graph obtained by adding an edge between \(H_{1}\) and \(H_{2}\). Then either there exists a vertex \(u\) such that \(\deg_{G}(u)\geq d+2\), or there exist \(v,w\in V(G)\) such that \(\deg_{G}(v)=d+1\), \(\deg_{G}(w)=d+1\) and \(\{v,w\}\notin E(G)\). We claim there exists a prime ideal \(P\supseteq L_{G}(d)\) such that \(\operatorname{ht}(P)\leq n-3\). From this, it will follow that \(\operatorname{ht}(L_{G}(d))\leq n-3\) and so \(L_{G}(d)\) is not an almost complete intersection.
**Case I:** If there exists a vertex \(u\) such that \(\deg_{G}(u)\geq d+2\), then the claim follows from Lemma 3.1.

**Case II:** If there exist \(v,w\in V(G)\) such that \(\deg_{G}(v)=d+1\), \(\deg_{G}(w)=d+1\) and \(\{v,w\}\notin E(G)\), then set \(T=\{v,w\}\) and enlarge \(T\) by the following procedure:

INPUT: \(T\)
WHILE (\(G\setminus T\) has a vertex \(u\) such that \(\deg_{G\setminus T}(u)\geq d\)) { \(T=T\cup\{u\}\) } (3.1)
RETURN \(T\)

From the construction of \(T\), we have \(\Delta(G\setminus T)\leq d-1\). Then from Remark 1.1(c), \(L_{G\setminus T}(d)\) is a prime ideal. We name the elements of \(T\) as \(v,w,v_{1},\ldots,v_{m}\) and to this set we associate an ideal \(P_{T}\) generated by \[x_{v1},\ldots,x_{vd},x_{w1},\ldots,x_{wd},x_{v_{1}1},\ldots,x_{v_{m}d},L_{G\setminus T}(d).\] Clearly, \(P_{T}\) is a prime ideal containing \(L_{G}(d)\). Next, we compute the height of \(P_{T}\). From Remark 1.1, \(L_{G\setminus T}(d)\) is a complete intersection. Hence, \[\operatorname{ht}(L_{G\setminus T}(d))=\mu(G\setminus T)=|E(G)|-\deg_{G}(v)-\deg_{G}(w)-\sum_{i=0}^{m-1}\deg_{G\setminus\{v,w,v_{1},\ldots,v_{i}\}}(v_{i+1}).\] Then, \[\begin{split}\operatorname{ht}(P_{T})&=\operatorname{ht}(x_{v1},\dots,x_{vd},x_{w1},\dots,x_{wd},x_{v_{1}1},\dots,x_{v_{m}d})+\operatorname{ht}(L_{G\setminus T}(d)),\\ &=d(m+2)+n-1-\deg_{G}(v)-\deg_{G\setminus v}(w)-\sum_{i=0}^{m-1}\deg_{G\setminus\{v,w,v_{1},\dots,v_{i}\}}(v_{i+1}).\end{split} \tag{3.2}\] By the construction of \(T\), we have \(\deg_{G}(v)=d+1\), \(\deg_{G\setminus v}(w)=d+1\), and \(\deg_{G\setminus\{v,w,v_{1},\dots,v_{i}\}}(v_{i+1})\geq d\) for \(i=0,\dots,m-1\). Substituting these values in Equation (3.2), we get \(\operatorname{ht}(P_{T})\leq n-3\), as desired.

Next, we move on to look at the almost complete intersection LSS-ideals corresponding to unicyclic (bicyclic) graphs.

**Theorem 3.2**.: Let \(H\) be a tree with \(\Delta(H)\leq d\) and \(U\) be a unicyclic graph with \(\Delta(U)\leq d\).
Let \(G\) be a connected \(C_{3}\)-free unicyclic graph on \([n]\) with \(\Delta(G)>d\) and \(d\geq 3\). Then \(L_{G}(d)\) is an almost complete intersection if and only if \(G\) has one of the following forms: (a) \(G\) is obtained by adding an edge between vertices of \(H\); (b) \(G\) is obtained by adding an edge between \(H\) and \(U\). Proof.: Assume that \(L_{G}(d)\) is an almost complete intersection. Then \(\operatorname{ht}(L_{G}(d))=\mu(L_{G}(d))-1=n-1\). From Lemma 3.1, it follows that \(G\) does not have a vertex \(u\) with \(\deg_{G}(u)\geq d+2\). Now, we claim that if \(G\) has two distinct vertices \(u,v\in V(G)\) such that \(\deg_{G}(u)=d+1\) and \(\deg_{G}(v)=d+1\), then \(\{u,v\}\in E(G)\). Suppose \(\{u,v\}\notin E(G)\). Then setting \(T=\{u,v\}\) and proceeding along the same lines as the proof of Lemma 2.1, one gets \(\operatorname{ht}(L_{G}(d))\leq n-2\). This contradicts the fact that \(\operatorname{ht}(L_{G}(d))=n-1\). Therefore, we get \(\{u,v\}\in E(G)\). This implies that if \(G\) has three vertices of degree \(d+1\), then \(G\) has \(C_{3}\) as an induced subgraph. Thus, the number of vertices of degree \(d+1\) is at most \(2\), since \(G\) is a \(C_{3}\)-free unicyclic graph. Hence, \(G\) is either of type-(a) or type-(b). Conversely, suppose \(G\) is of type-(a) or type-(b). Then there exists an edge \(e\in E(G)\) such that \(L_{G}(d)=L_{G\setminus e}(d)+(f_{e}^{(d)})\) and \(\Delta(G\setminus e)=d\). Since \(G\setminus e\) is either a tree or a unicyclic graph, Remark 1.1 and Theorem 2.1 imply that \(L_{G\setminus e}(d)\) is a radical complete intersection. Therefore, from Remark 1.4, Remark 1.5 and Lemma 2.1, it follows that \(L_{G}(d)\) is an almost complete intersection. **Remark 3.1**.: Let \(G\) be a graph on \([9]\). From Macaulay2 computations, we get \(L_{G}(3)\) to be an almost complete intersection.
In fact, looking at graphs of similar form (containing \(C_{3}\)), we observe a class of almost complete intersection LSS-ideals associated with them. Therefore, with ample computational evidence, we ask the following question. **Question 3.1**.: Let \(G\) be a connected unicyclic graph on \([n]\) and \(d\geq 2\). If \(G\) is obtained by attaching pendant vertices of \(d-1\) trees \(H_{1},\ldots,H_{d-1}\) with \(\Delta(H_{i})\leq d\), where \(i=1,\ldots,d-1\), to each vertex of \(C_{3}\), is \(L_{G}(d)\) an almost complete intersection? If true, this, along with Theorem 3.2 (dropping the \(C_{3}\)-free assumption), will characterize unicyclic graphs whose associated LSS-ideals are almost complete intersections. **Corollary 3.1**.: Let \(G=G_{1}\cup\cdots\cup G_{m}\) be a disconnected unicyclic graph. Then \(L_{G}(d)\) is an almost complete intersection if and only if for some \(i\), \(L_{G_{i}}(d)\) is an almost complete intersection and for \(j\neq i\), \(L_{G_{j}}(d)\) are complete intersections. **Theorem 3.3**.: Let \(H\) be a tree with \(\Delta(H)\leq d\), \(U_{1}\), \(U_{2}\) be unicyclic graphs with \(\Delta(U_{i})\leq d\) for \(i=1,2\), and \(B\) be a bicyclic graph with \(\Delta(B)\leq d\). Let \(G\) be a connected \(C_{3}\)-free bicyclic graph on \([n]\) with \(\Delta(G)>d\) and \(d\geq 4\). Then \(L_{G}(d)\) is an almost complete intersection if and only if \(G\) has one of the following forms: 1. \(G\) is obtained by adding an edge between vertices of \(U_{1}\); 2. \(G\) is obtained by adding an edge between \(U_{1}\) and \(U_{2}\); 3. \(G\) is obtained by adding an edge between \(H\) and \(B\). Proof.: The proof is similar to that of Theorem 3.2. The following proposition is a consequence of isomorphisms 1 and 2 mentioned in the introduction. **Proposition 3.1**.: Let \(d\) be an integer. 1.
For a subgraph \(G\) of \(K_{m,n}\), where \(m,n\in\mathbb{N}\), if \(L_{G}(d)\) is an almost complete intersection, then the height of \(I_{d+1}(X_{G}^{gen})\) is one less than the maximal height. 2. For a subgraph \(G\) of \(K_{n}\), where \(n\in\mathbb{N}\), if \(L_{G}(d)\) is an almost complete intersection, then the height of \(I_{d+1}(X_{G}^{sym})\) is one less than the maximal height. From the results in this section, it thus follows that, for all graphs of the form mentioned in Theorem 3.1, Theorem 3.2, and Theorem 3.3, the corresponding ideals of \((d+1)\)-minors of the associated generic matrices attain one less than the maximal height.

## 4. Regularity and Koszulness

Let \(G\) be a finite simple graph and \(d\) be a positive integer. In this section, we first compute the regularity of powers of LSS-ideals corresponding to trees and unicyclic graphs with \(\Delta(G)\leq d\). This is followed by associating with \(G\) certain invariants, defined in terms of the number of edges of its induced subgraphs, and giving lower bounds for the regularity of powers of the related LSS-ideals. Further, bounds are given for the regularity of powers of almost complete intersection LSS-ideals. The section ends by examining the Koszulness of quotients of LSS-ideals. **Proposition 4.1**.: Let \(G\) be a graph on \([n]\) with \(\Delta(G)\leq d\) and \(d\geq 3\). 1. If \(G\) is a tree, then for all \(s\geq 1\), we have \(\operatorname{reg}(S/L_{G}(d)^{s})=2s+n-3\). 2. If \(G\) is a connected unicyclic graph, then we have \(\operatorname{reg}(S/L_{G}(d)^{s})=2s+n-2\), for all \(s\geq 1\). Proof.: From Remark 1.1 and Theorem 2.1, it follows that \(L_{G}(d)\) is a complete intersection. Then the statement is a consequence of Remark 1.6. **Proposition 4.2**.: Let \(G\) be a graph and \(H\) be its induced subgraph.
Then, for all \(i,j\geq 0\) and \(s\geq 1\), \[\beta_{i,j}(S/L_{H}(d)^{s})\leq\beta_{i,j}(S/L_{G}(d)^{s}).\] Proof.: Let \(H\) be an induced subgraph of \(G\) and \(S_{H}=k[x_{ij}\mid i\in V(H),\,j\in[d]]\). First, we claim that \(L_{H}(d)^{s}=L_{G}(d)^{s}\cap S_{H}\) for all \(s\geq 1\), where \(L_{H}(d)\) is the LSS-ideal of \(H\) in \(S_{H}\). We have \(L_{H}(d)^{s}\subseteq L_{G}(d)^{s}\cap S_{H}\), since the generators of \(L_{H}(d)^{s}\) are contained in \(L_{G}(d)^{s}\). For the reverse inclusion, define the map \(\phi:S\to S_{H}\) by setting \(\phi(x_{ij})=0\) if \(i\notin V(H)\) and \(\phi(x_{ij})=x_{ij}\) if \(i\in V(H)\). Let \(g=\sum_{e_{1},\ldots,e_{s}\in E(G)}r_{e_{1},\ldots,e_{s}}f_{e_{1}}^{(d)}\cdots f_{e_{s}}^{(d)}\in L_{G}(d)^{s}\), where \(r_{e_{1},\ldots,e_{s}}\in S\). Note that \(\phi(g)=g\), if \(g\in S_{H}\). Thus, we get \[g=\sum_{e_{1},\ldots,e_{s}\in E(G)}\phi(r_{e_{1},\ldots,e_{s}})\,\phi(f_{e_{1}}^{(d)}\cdots f_{e_{s}}^{(d)})=\sum_{e_{1},\ldots,e_{s}\in E(H)}\phi(r_{e_{1},\ldots,e_{s}})f_{e_{1}}^{(d)}\cdots f_{e_{s}}^{(d)}.\] Therefore, \(g\in L_{H}(d)^{s}\). Now, we claim that \(S_{H}/L_{H}(d)^{s}\) is an algebra retract of \(S/L_{G}(d)^{s}\). Then the statement follows from [26, Corollary 2.5]. Consider \(S_{H}/L_{H}(d)^{s}\stackrel{{i}}{{\hookrightarrow}}S/L_{G}(d)^{s}\stackrel{{\bar{\phi}}}{{\rightarrow}}S_{H}/L_{H}(d)^{s}\), where \(\bar{\phi}\) is induced by the map \(\phi\). Then one can see that \(\bar{\phi}\circ i\) is the identity on \(S_{H}/L_{H}(d)^{s}\) and hence the claim. **Notation 4.1**.: Let \(d\geq 3\) and \(G\) be a finite simple graph. We define two invariants of \(G\) in the following way: 1. \(\mathfrak{t}(G)=\max\{|E(H)|\mid H\text{ is an induced subgraph of }G\text{ such that }H\text{ is a forest with }\Delta(H)\leq d\}\). 2. \(\mathfrak{u}(G)=\max\{|E(H)|\mid H\text{ is an induced subgraph of }G\text{ such that }H\text{ is a unicyclic graph with }\Delta(H)\leq d\}\).
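For small graphs, the invariants \(\mathfrak{t}(G)\) and \(\mathfrak{u}(G)\) of Notation 4.1 can be computed by brute force over vertex subsets. The sketch below is our own illustration (all function names are ours), written in plain Python and taking "unicyclic" to mean connected with exactly one cycle, i.e. \(|E(H)|=|V(H)|\):

```python
from itertools import combinations

def _components(W, F):
    """Number of connected components of the graph (W, F), via union-find."""
    parent = {v: v for v in W}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    count = len(W)
    for a, b in F:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            count -= 1
    return count

def _induced(E, W):
    """Edges of the induced subgraph on vertex set W."""
    return [e for e in E if e[0] in W and e[1] in W]

def _max_degree(F, W):
    deg = {v: 0 for v in W}
    for a, b in F:
        deg[a] += 1
        deg[b] += 1
    return max(deg.values(), default=0)

def t_invariant(V, E, d):
    """t(G): most edges of an induced forest H with max degree <= d."""
    best = 0
    for k in range(1, len(V) + 1):
        for W in combinations(V, k):
            F = _induced(E, set(W))
            # forest <=> acyclic <=> |F| = |W| - (number of components)
            if len(F) == len(W) - _components(W, F) and _max_degree(F, W) <= d:
                best = max(best, len(F))
    return best

def u_invariant(V, E, d):
    """u(G): most edges of an induced unicyclic H (connected, |E(H)| = |V(H)|)
    with max degree <= d; returns 0 if no such induced subgraph exists."""
    best = 0
    for k in range(1, len(V) + 1):
        for W in combinations(V, k):
            F = _induced(E, set(W))
            if _components(W, F) == 1 and len(F) == len(W) and _max_degree(F, W) <= d:
                best = max(best, len(F))
    return best
```

For the \(5\)-cycle \(C_{5}\) with \(d=2\), this gives \(\mathfrak{t}(C_{5})=3\) and \(\mathfrak{u}(C_{5})=5\), so the lower bound \(2(s-1)+\max\{3,5\}=2s+3\) matches the exact value \(2s+n-2\) of Proposition 4.1(2) with \(n=5\).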
**Corollary 4.1**.: Let \(G\) be a finite simple graph. Then one has \[\operatorname{reg}(S/L_{G}(d)^{s})\geq 2(s-1)+\max\{\mathfrak{t}(G),\mathfrak{u}(G)\},\] (see Notation 4.1) for all \(s\geq 1\). Proof.: The assertion follows from Proposition 4.1 and Proposition 4.2. **Theorem 4.1**.: Let \(G\) be a tree on \([n]\). If \(L_{G}(d)\) is an almost complete intersection, then for all \(s\geq 1\), one has \[2s+n-4\leq\operatorname{reg}(S/(L_{G}(d))^{s})\leq 2(s-1)+\max\{\operatorname{reg}(S/L_{G}),n-3\}.\] Proof.: Suppose \(G\) is a tree such that \(L_{G}(d)\) is an almost complete intersection. From Theorem 3.1, \(G\) is obtained by adding an edge \(e\) between two complete intersection trees. Thus, \(G\setminus e\) is a complete intersection and so \(\mathfrak{t}(G)=n-2\). From Corollary 4.1, it then follows that \(2s+n-4\leq\operatorname{reg}(S/(L_{G}(d))^{s})\), for all \(s\geq 1\). Hence, we have the desired lower bound. Now, since an almost complete intersection ideal is generated by a \(d\)-sequence and \(L_{G}(d)\) is generated in degree \(2\), the upper bound follows from Remark 1.7. **Theorem 4.2**.: Let \(H\) be a tree with \(\Delta(H)\leq d\), \(U_{1}\) and \(U_{2}\) be unicyclic graphs with \(\Delta(U_{i})\leq d\) for \(i=1,2\), and \(B\) be a bicyclic graph with \(\Delta(B)\leq d\). Let \(G\) be a connected graph on \([n]\) of one of the following forms: 1. \(G\) is obtained by adding an edge between two vertices of \(H\) and \(d\geq 3\); 2. \(G\) is obtained by adding an edge between \(H\) and \(U_{1}\) and \(d\geq 3\); 3. \(G\) is obtained by adding an edge between two vertices of \(U_{1}\) and \(d\geq 4\); 4. \(G\) is obtained by adding an edge between \(U_{1}\) and \(U_{2}\) and \(d\geq 4\); 5. \(G\) is obtained by adding an edge between \(H\) and \(B\) and \(d\geq 4\).
Then for all \(s\geq 1\), one has \[2s+n-3\leq\operatorname{reg}(S/(L_{G}(d))^{s})\leq 2(s-1)+\max\{\operatorname{reg}(S/L_{G}),n-1\}.\] Proof.: From Theorem 3.2 and Theorem 3.3, it follows that \(G\) is obtained by adding an edge to a complete intersection graph. Also, \(L_{G}(d)\) is generated in degree \(2\). Therefore, the lower and the upper bounds follow from Corollary 4.1 and Remark 1.7, respectively. A \(k\)-algebra \(R\) is Koszul if the differentials in the minimal free resolution of the residue field have linear entries. Koszul algebras were originally introduced by Priddy in [27]. If a graded \(k\)-algebra is Koszul, then the corresponding Poincaré series is rational (cf. [14]). Further, analogous to the result of Auslander, Buchsbaum, and Serre characterizing regular rings in terms of the finiteness of projective dimension, Avramov, Eisenbud, and Peeva gave the characterization: \(R\) is Koszul \(\Longleftrightarrow\operatorname{reg}_{R}(K)<\infty\Longleftrightarrow\operatorname{reg}_{R}(M)<\infty\) for every finitely generated \(R\)-module \(M\) (cf. [3, 4]). For more details on Koszul algebras, refer to [14, 8]. **Proposition 4.3**.: Let \(d,r\geq 1\) and \(G\) be a graph on \([n]\) with \(|E(G)|=r\). Then \(S/L_{G}(d)\) is Koszul if and only if \(r\leq nd\) or \(r\geq\binom{nd+1}{2}-\left\lfloor\left(\frac{nd}{2}\right)^{2}\right\rfloor\). Proof.: This follows from [13, Theorem 7.1] and the fact that \(L_{G}(d)\) is generated by generic quadratic forms. As a consequence of the above proposition, we obtain certain classes of Koszul algebras. **Corollary 4.2**.: Let \(G\) be a graph on \([n]\) and \(L_{G}(d)\) be the corresponding LSS-ideal. 1. If \(G\) is a tree/unicyclic graph, then \(S/L_{G}(d)\) is Koszul for all \(d\). 2. If \(G\) is a complete/complete bipartite/bicyclic graph, then \(S/L_{G}(d)\) is Koszul for all \(d\geq 2\).
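To illustrate Proposition 4.3 and Corollary 4.2 concretely, the edge-count criterion can be evaluated directly. The sketch below is a plain-Python illustration of ours; it assumes the description of \(L_{G}(d)\) as generated by the quadrics \(f_{e}^{(d)}=\sum_{l=1}^{d}x_{il}x_{jl}\) for an edge \(e=\{i,j\}\) (the form in which these generators are used above), and it reads the bracket in Proposition 4.3 as a floor:

```python
from math import comb

def lss_generators(d, edges):
    """f_e^(d) = sum_{l=1}^d x_{i,l} x_{j,l} for e = {i, j}, encoded as a list
    of quadratic monomials; each monomial is the pair of variables ((i,l), (j,l))."""
    return {e: [((e[0], l), (e[1], l)) for l in range(1, d + 1)] for e in edges}

def koszul_prop_4_3(n, d, r):
    """Koszulness test of Proposition 4.3 for a graph on [n] with r edges:
    S/L_G(d) is Koszul iff r <= nd or r >= C(nd+1, 2) - floor((nd/2)^2)."""
    N = n * d  # number of variables x_{i,l}
    return r <= N or r >= comb(N + 1, 2) - (N * N) // 4
```

Since a tree on \([n]\) has \(n-1\leq nd\) edges and a unicyclic graph has \(n\leq nd\) edges, the first branch of the criterion always applies, consistent with Corollary 4.2(1).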
2308.07141
Existence and Multiplicity of Solutions for Fractional $p$-Laplacian Equation Involving Critical Concave-convex Nonlinearities
We investigate the following fractional $p$-Laplacian equation \[ \begin{cases} \begin{aligned} (-\Delta)_p^s u&=\lambda |u|^{q-2}u+|u|^{p_s^*-2}u &&\text{in}~\Omega,\\ u &=0 &&\text{in}~ \mathbb{R}^n\setminus\Omega, \end{aligned} \end{cases} \] where $s\in (0,1)$, $p>q>1$, $n>sp$, $\lambda>0$, $p_s^*=\frac{np}{n-sp}$ and $\Omega$ is a bounded domain (with $C^{1, 1}$ boundary). Firstly, we get a dichotomy result for the existence of positive solution with respect to $\lambda$. For $p\ge 2$, $p-1<q<p$, $n>\frac{sp(q+1)}{q+1-p}$, we provide two positive solutions for small $\lambda$. Finally, without sign constraint, for $\lambda$ sufficiently small, we show the existence of infinitely many solutions.
Weimin Zhang
2023-08-14T13:48:22Z
http://arxiv.org/abs/2308.07141v2
###### Abstract

We investigate the following fractional \(p\)-Laplacian equation \[\begin{cases}(-\Delta)_{p}^{s}u=\lambda|u|^{q-2}u+|u|^{p^{*}_{s}-2}u&\text{in }\Omega,\\ u=0&\text{in }\mathbb{R}^{n}\setminus\Omega,\end{cases}\] where \(s\in(0,1)\), \(p>q>1\), \(n>sp\), \(\lambda>0\), \(p^{*}_{s}=\frac{np}{n-sp}\) and \(\Omega\) is a bounded domain (with \(C^{1,1}\) boundary). Firstly, we get a dichotomy result for the existence of positive solution with respect to \(\lambda\). For \(p\geq 2\), \(p-1<q<p\), \(n>\frac{sp(q+1)}{q+1-p}\), we provide two positive solutions for small \(\lambda\). Finally, without sign constraint, for \(\lambda\) sufficiently small, we show the existence of infinitely many solutions.

**Existence and Multiplicity of Solutions for Fractional \(p\)-Laplacian Equation Involving Critical Concave-convex Nonlinearities**

Weimin Zhang

School of Mathematical Sciences, Key Laboratory of Mathematics and Engineering Applications (Ministry of Education) & Shanghai Key Laboratory of PMMP, East China Normal University, Shanghai 200241, China

**Keywords:** Critical Sobolev exponent, Fractional \(p\)-Laplacian, Convex-concave, Multiplicity of solutions.

## 1 Introduction

In this paper, we are interested in the following fractional \(p\)-Laplacian equation \[(P_{\lambda})\qquad\qquad\begin{cases}(-\Delta)_{p}^{s}u=\lambda|u|^{q-2}u+|u|^{p^{*}_{s}-2}u&\text{in }\Omega,\\ u=0&\text{in }\mathbb{R}^{n}\setminus\Omega,\end{cases}\] where \(s\in(0,1)\), \(p>q>1\), \(n>sp\), \(\lambda>0\), \(p^{*}_{s}=\frac{np}{n-sp}\) is called the critical Sobolev exponent, and \(\Omega\subset\mathbb{R}^{n}\) is a bounded domain (with \(C^{1,1}\) boundary).
\((-\Delta)_{p}^{s}\) denotes the fractional \(p\)-Laplacian operator, and when \(u\) is sufficiently smooth, it can be represented pointwise by \[(-\Delta)_{p}^{s}u(x)=2\lim_{\varepsilon\to 0^{+}}\int_{\mathbb{R}^{n}\setminus B_{\varepsilon}(x)}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|x-y|^{n+sp}}dy,\] up to a normalization constant depending on \(n\) and \(s\); it is consistent with the usual linear fractional Laplacian \((-\Delta)^{s}\) when \(p=2\), see [13]. Let \(f:\Omega\times\mathbb{R}\to\mathbb{R}\) be a Carathéodory mapping, and consider the general fractional \(p\)-Laplacian equation \[\begin{cases}(-\Delta)_{p}^{s}u=f(x,u)&\text{in }\Omega,\\ u=0&\text{in }\mathbb{R}^{n}\setminus\Omega.\end{cases} \tag{1.1}\] To give a weak formulation of (1.1), we denote the Gagliardo seminorm by \[[u]_{s,p}:=\left(\int_{\mathbb{R}^{2n}}\frac{|u(x)-u(y)|^{p}}{|x-y|^{n+sp}}dxdy\right)^{1/p}.\] Let \[W^{s,p}(\mathbb{R}^{n}):=\{u\in L^{p}(\mathbb{R}^{n}):[u]_{s,p}<\infty\}\] be endowed with the norm \[\|u\|_{W^{s,p}}:=(|u|_{p}^{p}+[u]_{s,p}^{p})^{1/p},\] where \(|\cdot|_{p}\) denotes the usual norm of \(L^{p}(\mathbb{R}^{n})\). Denote the subspace \[W^{s,p}_{0}(\Omega):=\left\{u\in W^{s,p}(\mathbb{R}^{n}):u=0\text{ a.e. in }\mathbb{R}^{n}\setminus\Omega\right\},\] equivalently renormed with \(\|u\|=[u]_{s,p}\) (see [13, Theorem 7.1]); it is well known that \(W^{s,p}_{0}(\Omega)\) is a uniformly convex Banach space. Furthermore, the embedding \(W^{s,p}_{0}(\Omega)\hookrightarrow L^{r}(\Omega)\) is continuous for \(r\in[1,p_{s}^{*}]\) and compact for \(r\in[1,p_{s}^{*})\), see [13, Theorems 6.5, 7.1]. \((-\Delta)^{s}_{p}\) can be variationally regarded as an operator from \(W^{s,p}_{0}(\Omega)\) into its dual space \(W^{s,p}_{0}(\Omega)^{*}\) as follows, \[\langle(-\Delta)^{s}_{p}u,v\rangle=\int_{\mathbb{R}^{2n}}\frac{J_{u}(x,y)(v(x)-v(y))}{|x-y|^{n+sp}}dxdy,\quad\forall\;v\in W^{s,p}_{0}(\Omega),\] where \(J_{u}(x,y)=|u(x)-u(y)|^{p-2}(u(x)-u(y))\).
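To make the pairing above concrete, here is a minimal numerical sketch on a one-dimensional grid (so \(n=1\)): a plain Riemann-sum truncation of the double integral with the singular diagonal dropped. It is purely illustrative, not a convergent discretization of \((-\Delta)_{p}^{s}\), and the function name and grid are our own:

```python
def frac_p_pairing(u, v, x, s, p):
    """Riemann-sum sketch of <(-Delta)_p^s u, v> on a uniform 1D grid x:
    sum over i != j of J_u(x_i, x_j) (v_i - v_j) / |x_i - x_j|^{1 + s p} * h^2,
    where J_u(x, y) = |u(x) - u(y)|^{p-2} (u(x) - u(y))."""
    n = 1                      # spatial dimension of the grid
    h = x[1] - x[0]            # uniform mesh width
    total = 0.0
    for i in range(len(x)):
        for j in range(len(x)):
            if i == j:
                continue       # skip the singular diagonal x = y
            du = u[i] - u[j]
            # guard du == 0 so the formula also makes sense for 1 < p < 2
            Ju = abs(du) ** (p - 2) * du if du != 0.0 else 0.0
            total += Ju * (v[i] - v[j]) / abs(x[i] - x[j]) ** (n + s * p)
    return total * h * h
```

One can check the expected structural properties: pairing a nonconstant \(u\) with itself gives the positive quantity \(\sum_{i\neq j}|u_i-u_j|^{p}|x_i-x_j|^{-(1+sp)}h^{2}\), constants pair to zero, and \(u\mapsto J_{u}\) is odd.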
We call \(u\in W^{s,p}_{0}(\Omega)\) a weak solution (respectively weak subsolution, weak supersolution) of (1.1) if \(f(x,u)\in W^{s,p}_{0}(\Omega)^{*}\) and \[\langle(-\Delta)^{s}_{p}u,v\rangle=(\text{respectively }\leq,\,\geq)\int_{\Omega}f(x,u)vdx,\quad\forall\;v\in W^{s,p}_{0}(\Omega),v\geq 0.\] For simplicity, from now on, we omit the term "weak" in the rest of our paper. If \(f\) satisfies the growth condition \[|f(x,t)|\leq C_{0}(1+|t|^{r-1})\quad\text{for some }C_{0}>0,\ 1<r\leq p_{s}^{*} \tag{1.2}\] for a.e. \(x\in\Omega\) and all \(t\in\mathbb{R}\), then solutions of problem (1.1) coincide with critical points of the \(C^{1}\) functional \[E(u)=\frac{1}{p}\|u\|^{p}-\int_{\Omega}\int_{0}^{u}f(x,t)dtdx,\quad u\in W^{s,p}_{0}(\Omega). \tag{1.3}\] In order to find critical points of \(E\), the eigenvalues of \((-\Delta)^{s}_{p}\) based on the \(\mathbb{Z}_{2}\)-cohomological index introduced in [25] are usually used to carry out some linking constructions, see for instance [22, 30, 31]. When \(f\) satisfies the subcritical growth condition (i.e. (1.2) with \(1<r<p_{s}^{*}\)), many results about existence and multiplicity of solutions of (1.1) have been established, see for example [19, 20, 22]. In the special case \(p=2\), Ros-Oton and Serra [32] gave a Pohozaev identity for solutions of (1.1), and when \(p\neq 2\), a similar identity for solutions \(u\in W^{s,p}_{0}(\Omega)\) of (1.1) was conjectured in [22, Section 7]. It results that if \(f(x,t)=|t|^{p_{s}^{*}-2}t\), there may not exist nontrivial solutions of (1.1) when \(\Omega\) is star-shaped. This motivates people to consider the perturbation problem \[\begin{cases}(-\Delta)^{s}_{p}u=\lambda g(x,u)+|u|^{p_{s}^{*}-2}u&\text{in }\Omega,\\ \quad\quad\quad\quad\quad\quad\quad\quad u=0&\text{in }\mathbb{R}^{n}\setminus\Omega,\end{cases} \tag{1.4}\] where \(g:\Omega\times\mathbb{R}\to\mathbb{R}\) satisfies the subcritical growth (1.2) with \(1<r<p_{s}^{*}\).
This type of problem brings new difficulties due to the fact that the associated energy of (1.4) cannot satisfy the Palais-Smale condition globally, because the embedding \(W^{s,p}_{0}(\Omega)\subset L^{p_{s}^{*}}(\Omega)\) is not compact. However, the Palais-Smale condition can hold true below suitable thresholds related to the best Sobolev constant \[S_{s,p}:=\inf_{u\in D^{s,p}(\mathbb{R}^{n})\setminus\{0\}}\frac{[u]^{p}_{s,p}}{|u|_{p_{s}^{*}}^{p}},\] where \(D^{s,p}(\mathbb{R}^{n}):=\big{\{}u\in L^{p_{s}^{*}}(\mathbb{R}^{n}):[u]_{s,p}<\infty\big{\}}\). * Mawhin and Molica Bisci [28] proved that there exists an interval \(\mathcal{V}\subset(0,\infty)\) such that for every \(\lambda\in\mathcal{V}\), (1.4) admits at least one solution, which is a local minimizer of the corresponding energy. The main strategy in [28] is to check the sequential weak lower semicontinuity of the functional \[H(u)=\frac{1}{p}\|u\|^{p}-\frac{1}{p_{s}^{*}}|u|_{p_{s}^{*}}^{p_{s}^{*}}\] restricted to a sufficiently small ball of \(W_{0}^{s,p}(\Omega)\). As a special case, they showed in [28, Theorem 1.2] that \[\begin{cases}(-\Delta)_{p}^{s}u=\lambda(|u|^{q-2}u+|u|^{r-2}u)+|u|^{p_{s}^{*}-2}u&\text{in }\Omega,\\ u=0&\text{in }\mathbb{R}^{n}\setminus\Omega,\end{cases}\] (1.5) has a positive solution for \(\lambda\in\mathcal{V}\), provided \(2\leq q<p<r<p_{s}^{*}\). * In [30], Mosconi _et al._ proved that when \(g(x,u)=|u|^{p-2}u\), (1.4) has a nontrivial solution in the following cases: 1. \(n=sp^{2}\) and \(\lambda\in(0,\lambda_{1})\); 2. \(n>sp^{2}\) and \(\lambda\not\in\{\lambda_{k}\}\); 3. \(\frac{n^{2}}{n+s}>sp^{2}\); 4. \(\frac{n^{3}+s^{3}p^{3}}{n(n+3)}>sp^{2}\) and \(\partial\Omega\in C^{1,1}\), where \(\lambda_{k}\) is the \(k\)-th eigenvalue of \((-\Delta)_{p}^{s}\) given in [25].
* Bhakta and Mukherjee [6] obtained that when \(p\geq 2\), there exist \(\lambda_{0}>0\), \(n_{0}\in\mathbb{N}\) and \(q_{0}\in(1,p)\) such that for all \(\lambda\in(0,\lambda_{0})\), \(n>n_{0}\) and \(q\in(q_{0},p)\), \((P_{\lambda})\) has at least one sign-changing solution. For the convex-concave nonlinearities, there is also some literature concerning problem \((P_{\lambda})\) with the classical \(p\)-Laplacian operator, i.e. \(s=1\), \[\begin{cases}-\Delta_{p}u=\lambda|u|^{q-2}u+|u|^{p^{*}-2}u&\text{in }\Omega,\\ u=0&\text{on }\partial\Omega,\end{cases} \tag{1.6}\] where \(p^{*}=\frac{np}{n-p}\) and \(\Delta_{p}u=\operatorname{div}(|\nabla u|^{p-2}\nabla u)\). For example: * Ambrosetti, Brezis and Cerami [1] did seminal work for \(1<q<p=2\), and they proved that there exists \(\Lambda>0\) such that (i) problem (1.6) has at least two positive solutions if \(0<\lambda<\Lambda\); (ii) problem (1.6) has at least one positive solution if \(\lambda=\Lambda\); (iii) problem (1.6) has no positive solution if \(\lambda>\Lambda\). * Garcia Azorero, Manfredi and Peral Alonso [17] (see also [16]) generalized the above results for general \(p>1\), provided either \(\frac{2n}{n+2}<p<3\), \(1<q<p\) or \(p\geq 3\), \(p>q>\frac{p^{*}-2}{p-1}\). For the linear fractional Laplacian case, that is \(p=2\), \(s\in(0,1)\), \(n>2s\), \(1<q<2\), the above (i)-(iii) for \((P_{\lambda})\) were obtained by Barrios, Colorado, Servadei and Soria [4]. Motivated by the works mentioned above, we are concerned here with the existence and multiplicity problem for \((P_{\lambda})\). To state our results, we introduce some definitions.
For all \(x\in\Omega\), let \[\mathrm{d}_{\Omega}(x):=\operatorname{dist}(x,\partial\Omega),\] and consider the weighted space \[\mathcal{C}_{s}^{0}(\overline{\Omega}):=\big{\{}u\in C^{0}(\overline{\Omega}):\frac{u}{\mathrm{d}_{\Omega}^{s}}\text{ admits a continuous extension to }\overline{\Omega}\big{\}}\] endowed with the norm \(\|u\|_{\mathcal{C}_{s}^{0}(\overline{\Omega})}=\|\frac{u}{\mathrm{d}_{\Omega}^{s}}\|_{\infty}\). Our first result is a dichotomy claim which extends the results in [4] with \((-\Delta)^{s}\) and [17] with \(-\Delta_{p}\). **Theorem 1.1**.: _Let \(s\in(0,1)\), \(p\in(1,\infty)\), \(q\in(1,p)\), \(n>sp\), and let \(\Omega\) be a bounded domain with \(C^{1,1}\) boundary. Then there exists \(0<\Lambda<\infty\) such that_ 1. \((P_{\lambda})\) _has no positive solutions for_ \(\lambda>\Lambda\)_;_ 2. \((P_{\lambda})\) _has a minimal positive solution_ \(u_{\lambda}\) _for any_ \(0<\lambda<\Lambda\)_; moreover, this family of minimal solutions is increasing with respect to_ \(\lambda\)_, and_ \(\|u_{\lambda}\|_{\mathcal{C}_{s}^{0}(\overline{\Omega})}\to 0\) _as_ \(\lambda\to 0^{+}\)_;_ 3. \((P_{\lambda})\) _has at least one positive solution_ \(u_{\Lambda}\) _for_ \(\lambda=\Lambda\)_, given by the pointwise limit of_ \(u_{\lambda}\) _as_ \(\lambda\to\Lambda^{-}\)_._ The authors in [1] used the supersolution and subsolution method to find a minimal solution \(v_{\lambda}\) for \(\lambda\in(0,\Lambda)\), which is a stable solution (i.e., the second variation of the energy functional at \(v_{\lambda}\) is nonnegative). The stability yields that \(\{v_{\lambda}\}_{0<\lambda<\Lambda}\) is bounded in \(H_{0}^{1}(\Omega)\), so the weak limit of \(v_{\lambda}\) is a weak solution for (1.6) with \(\lambda=\Lambda\). In contrast to [1], we cannot apply Picone's identity due to the presence of the nonlocal term, and there is no related stability theory for the fractional \(p\)-Laplacian so far.
Due to the nonlinearity of \((-\Delta)_{p}^{s}\), we cannot derive the strong comparison principle directly, that is \[(-\Delta)_{p}^{s}u_{1}\leq(-\Delta)_{p}^{s}u_{2}\ \text{ and }\ u_{1}\neq u_{2}\Rightarrow u_{1}<u_{2}.\] In [26], the author derived a version of the strong comparison principle, but with rather restrictive assumptions. Our key observation is that all positive solutions to (1.1) can be controlled by \(\mathrm{d}_{\Omega}^{s}(x)\), see Proposition 2.8. Our strategy is to use the comparison principle to get the uniqueness of solution to \[(Q_{\lambda})\] We will use the iteration method to show that the unique solution to \((Q_{\lambda})\) is less than any positive solution to \((P_{\lambda})\); we then prove the existence of a minimal positive solution \(u_{\lambda}\) of \((P_{\lambda})\) for all \(\lambda\in(0,\Lambda)\), which is increasing with respect to \(\lambda\). For any \(\lambda\in(0,\Lambda)\), let \(0<\lambda^{\prime\prime}<\lambda<\lambda^{\prime}<\Lambda\) and \[\Sigma=\big{\{}u\in W_{0}^{s,p}(\Omega)\cap\mathcal{C}_{s}^{0}(\overline{\Omega}):u_{\lambda^{\prime\prime}}<u<u_{\lambda^{\prime}}\big{\}}. \tag{1.7}\] Moreover, we use the variational method to find a local minimum solution \(\widehat{u}_{\lambda}\in\Sigma\) with respect to the topology of \(\mathcal{C}_{s}^{0}(\overline{\Omega})\). In Lemma 3.5, we show the boundedness of \(\{\widehat{u}_{\lambda}\}_{0<\lambda<\Lambda}\) in \(W_{0}^{s,p}(\Omega)\), which yields the boundedness of the minimal solutions \(\{u_{\lambda}\}_{0<\lambda<\Lambda}\) in \(W_{0}^{s,p}(\Omega)\). **Remark 1.2**.: _Theorem 1.1 remains true for (1.4) if \(g(x,u)=|u|^{q-2}u+|u|^{r-2}u\) with \(1<q<p\), \(r\in(1,p_{s}^{*})\). Hence it works in the framework of [28, Theorem 1.2]. In [17], it was shown that (1.6) has an extremal solution \(u_{\Lambda}\) in the distributional sense, but the regularity of \(u_{\Lambda}\) was not mentioned.
According to the proof of Theorem 1.1, we can assert that \(u_{\Lambda}\) in [17, Lemma 6.3] belongs to \(W_{0}^{1,p}(\Omega)\)._ Now we will consider the existence of a second positive solution to \((P_{\lambda})\). **Theorem 1.3**.: _Let \(s\in(0,1)\), \(p\geq 2\), \(p-1<q<p\), \(n>\frac{sp(q+1)}{q+1-p}\), and \(\Omega\) be a bounded domain with \(C^{1,1}\) boundary. There exists \(\lambda^{*}>0\) such that for all \(\lambda\in(0,\lambda^{*})\), problem \((P_{\lambda})\) has at least two positive solutions._ For equation (1.6) with \(p=2\), one of the main points in [1] for finding two positive solutions is a result of Brezis and Nirenberg [11] which connects variational and nonvariational methods. Roughly speaking, a local minimizer in the \(C^{1}\)-topology is also a local minimizer in \(W^{1,2}_{0}(\Omega)\). For \(p>1\), the equivalence between \(C^{1}(\overline{\Omega})\) and \(W^{1,p}_{0}(\Omega)\) local minimizers of the energy functional was proven respectively in [17] and [21]. However, in fractional operator cases, the space \(C^{1}(\overline{\Omega})\) seems not to be suitable for this aim, but \(\mathcal{C}^{0}_{s}(\overline{\Omega})\) can serve as a suitable substitute. Iannizzotto, Mosconi and Squassina [24] proved that for \(p\geq 2\) and a given \(f\) satisfying (1.2), a local minimizer of the energy \(E\) (see (1.3)) in \(\mathcal{C}^{0}_{s}(\overline{\Omega})\cap W^{s,p}_{0}(\Omega)\) with respect to the \(\mathcal{C}^{0}_{s}(\overline{\Omega})\)-topology is also a local minimizer in \(W^{s,p}_{0}(\Omega)\), see also Barrios _et al._ [4, Proposition 2.5] for the case \(p=2\).
By virtue of the maximum principle [7, Theorem A.1], positive solutions to \((P_{\lambda})\) coincide with nontrivial critical points of the following functional defined on \(W^{s,p}_{0}(\Omega)\) \[\widetilde{I}_{\lambda}(u)=\frac{1}{p}\|u\|^{p}-\frac{\lambda}{q}\int_{\Omega}{(u^{+})}^{q}dx-\frac{1}{p_{s}^{*}}\int_{\Omega}{(u^{+})}^{p_{s}^{*}}dx, \tag{1.8}\] where \(u^{+}=\max\{u,0\}\). As mentioned above, \((P_{\lambda})\) has a minimal solution \(u_{\lambda}\), and \(\widetilde{I}_{\lambda}\) has a minimizer \(\widehat{u}_{\lambda}\) in \(\Sigma\) (see (1.7)) with respect to the \(\mathcal{C}^{0}_{s}(\overline{\Omega})\)-topology, which is also a local minimizer in \(W^{s,p}_{0}(\Omega)\) if \(p\geq 2\) by [24]. We can assume \(u_{\lambda}=\widehat{u}_{\lambda}\); otherwise, the theorem naturally holds true. Under the assumption that \(\widetilde{I}_{\lambda}\) has only two critical points \(0\) and \(u_{\lambda}\), we will prove in Proposition 4.4 that \(\widetilde{I}_{\lambda}\) satisfies the Palais-Smale condition at every level \[c<c_{s,p}:=\widetilde{I}_{\lambda}(u_{\lambda})+\frac{s}{n}S_{s,p}^{\frac{n}{p}}. \tag{1.9}\] It remains to construct a mountain pass geometry of \(\widetilde{I}_{\lambda}\) around \(u_{\lambda}\), and check that the mountain pass level is strictly less than \(c_{s,p}\). It has been conjectured in [8] that all minimizers for \(S_{s,p}\) are of the form \(cU(|x-x_{0}|/\varepsilon)\), where \[U(x)=\frac{1}{\left(1+|x|^{\frac{p}{p-1}}\right)^{(n-sp)/p}},\quad x\in\mathbb{R}^{n}.\] As far as we are aware, this conjecture remains open. However, the asymptotic estimates for all minimizers were established in [8]. To estimate the mountain pass level, we will make use of some truncation functions \(u_{\varepsilon,\delta}\) constructed by Mosconi _et al._ [30] (see (2.9)) and some useful integral estimates of \(u_{\varepsilon,\delta}\), see subsection 2.1 below.
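The conjectured profile can at least be probed numerically. The sketch below (helper names are ours; recall that only the asymptotics of the true minimizers are known, so the explicit formula is an assumption) checks \(U(0)=1\), monotone decay, and the rate \(U(r)\sim r^{-(n-sp)/(p-1)}\) as \(r\to\infty\):

```python
def U(r, n, s, p):
    """Conjectured extremal profile: U(x) = (1 + |x|^{p/(p-1)})^{-(n-sp)/p}."""
    return (1.0 + r ** (p / (p - 1))) ** (-(n - s * p) / p)

def U_eps(r, eps, n, s, p):
    """Rescaled family (2.2): U_eps(x) = eps^{-(n-sp)/p} U(x/eps)."""
    return eps ** (-(n - s * p) / p) * U(r / eps, n, s, p)
```

The decay exponent \((n-sp)/(p-1)\) is exactly the one appearing in the two-sided bound for the true minimizers recalled in Section 2 below.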
In contrast to [30], our mountain pass geometry is around \(u_{\lambda}\), instead of \(0\), for which the estimates will be more complex, and the nonlocal integral also brings new difficulties to our computations. To overcome these obstacles, we will carry out simultaneously the construction of the mountain pass geometry and the estimate of the mountain pass level. To be more precise, we consider \(\eta_{\delta}u_{\lambda}\) where \(\eta_{\delta}\) is a cut-off function (see (2.6)). Lemma 4.1 leads to \(\eta_{\delta}u_{\lambda}\to u_{\lambda}\) in \(W^{s,p}_{0}(\Omega)\) when \(\delta\to 0\). We choose \(u_{\lambda}\) to be the starting point of the mountain pass path, with the terminal point \[e=\eta_{\delta}u_{\lambda}+t_{0}u_{\varepsilon,\delta}, \tag{1.10}\] where \(t_{0}\), depending on \(\varepsilon\) and \(\delta\), is a positive number such that \(\widetilde{I}_{\lambda}(e)<\widetilde{I}_{\lambda}(u_{\lambda})\). Consider the set of mountain pass paths \[\Gamma_{\varepsilon,\delta}:=\{\gamma\in C\left([0,1],W^{s,p}_{0}(\Omega)\right):\gamma(0)=u_{\lambda},\,\gamma(1)=e\}, \tag{1.11}\] and the mountain pass level \[m_{\varepsilon,\delta}:=\inf_{\gamma\in\Gamma_{\varepsilon,\delta}}\max_{t\in[0,1]}\widetilde{I}_{\lambda}(\gamma(t)). \tag{1.12}\] We select a special mountain pass path \[\gamma_{\varepsilon,\delta}(t)=\begin{cases}\eta_{2t\delta}u_{\lambda}&\text{if }\ 0\leq t\leq\frac{1}{2},\\ \eta_{\delta}u_{\lambda}+(2t-1)t_{0}u_{\varepsilon,\delta}&\text{if }\ \frac{1}{2}<t\leq 1.\end{cases} \tag{1.13}\] and check that \(\widetilde{I}_{\lambda}(\gamma_{\varepsilon,\delta}(t))\) tends to \(\widetilde{I}_{\lambda}(u_{\lambda})<c_{s,p}\) as \(\delta\to 0\) for all \(0\leq t\leq\frac{1}{2}\). To reach \(m_{\varepsilon,\delta}<c_{s,p}\), it suffices to prove \[\sup_{t\geq 0}\widetilde{I}_{\lambda}(\eta_{\delta}u_{\lambda}+tu_{\varepsilon,\delta})<c_{s,p} \tag{1.14}\] for some \(\varepsilon,\delta>0\).
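As a quick consistency check (ours), the path (1.13) is admissible in \(\Gamma_{\varepsilon,\delta}\): interpreting \(\eta_{0}u_{\lambda}\) as the limit \(u_{\lambda}\) provided by Lemma 4.1, the two branches agree at \(t=\tfrac{1}{2}\) and the endpoints are the prescribed ones,

\[\gamma_{\varepsilon,\delta}(0)=\eta_{0}u_{\lambda}=u_{\lambda},\qquad\gamma_{\varepsilon,\delta}\big(\tfrac{1}{2}\big)=\eta_{\delta}u_{\lambda}=\eta_{\delta}u_{\lambda}+\big(2\cdot\tfrac{1}{2}-1\big)t_{0}u_{\varepsilon,\delta},\qquad\gamma_{\varepsilon,\delta}(1)=\eta_{\delta}u_{\lambda}+t_{0}u_{\varepsilon,\delta}=e.\]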
In Lemma 4.3, for \(p\geq 2\), \(p-1<q<p\) and \(n>\frac{sp(q+1)}{q+1-p}\), by taking \(\varepsilon=\delta^{k+1}\) with a suitable choice of \(k\in(0,p-1)\), we can claim that (1.14) holds whenever \(\delta\) is sufficiently small. A key point is that \(u_{\varepsilon,\delta}\) and \(\eta_{\delta}u_{\lambda}\) will be chosen to have disjoint support domains, which permits us to handle many integral estimates. The mountain pass technique then provides a second critical point. Finally, we prove that the mountain pass level is positive for \(\lambda\) positive but small, which guarantees the non-triviality of this critical point. **Remark 1.4**.: _In Theorem 1.3, it is clear that \(\lambda^{*}\leq\Lambda\), where \(\Lambda\) is given in Theorem 1.1. However, we don't know how to rule out the triviality of the critical point when the mountain pass level is just zero, hence we cannot actually claim \(\lambda^{*}=\Lambda\)._ Finally, without sign constraint, we obtain the existence of infinitely many solutions for \((P_{\lambda})\). Denote \[I_{\lambda}(u)=\frac{1}{p}\|u\|^{p}-\frac{\lambda}{q}\int_{\Omega}|u|^{q}dx-\frac{1}{p_{s}^{*}}\int_{\Omega}|u|^{p_{s}^{*}}dx,\quad u\in W_{0}^{s,p}(\Omega). \tag{1.15}\] Notice that all the nonzero critical points of \(\widetilde{I}_{\lambda}\) are actually positive critical points of \(I_{\lambda}\). **Theorem 1.5**.: _Let \(s\in(0,1)\), \(p>1\), \(q\in(1,p)\), \(n>sp\), and \(\Omega\) be a bounded domain. There exists \(\lambda^{**}>0\) such that for all \(\lambda\in(0,\lambda^{**})\), \((P_{\lambda})\) has a sequence of solutions \(\{u_{j}\}\) satisfying \(I_{\lambda}(u_{j})\to 0^{-}\)._ Garcia Azorero and Peral Alonso in [15, Theorem 4.5] proved that for \(1<q<p\) and \(n>p\), (1.6) has infinitely many solutions provided \(\lambda\) is small (see also [1] for \(p=2\)). They used the \(\mathbb{Z}_{2}\)-genus and Lusternik-Schnirelman theory (see for instance [2, Theorem 10.9]).
Using a dual Fountain theorem due to Bartsch and Willem [5], when \(p=2\), it was shown in [33, Theorem 3.22] that for \(\lambda>0\) small enough, there exists a sequence of solutions to (1.6) whose energies are negative and tend to \(0\). In contrast with the proof of [33, Theorem 3.22], we use the \(\mathbb{Z}_{2}\)-genus to construct a sequence of minimax levels \(b_{j}\) for the functional \(I_{\lambda}\) in a small ball \(B_{r}(0)\), see (5.1) below. We will show that the \(b_{j}\) are negative critical values of \(I_{\lambda}\), bounded from below by the \(\widetilde{b}_{j}\) defined in (5.2). By means of a weak-convergence argument (see Lemma 5.2), we will prove that \(\widetilde{b}_{j}\to 0^{-}\) for small \(r\), and hence so does \(b_{j}\). **Remark 1.6**.: _In Section 5, we provide a space decomposition method for reflexive and separable Banach spaces. We believe that this can extend the Fountain theorem [33, Section 3] and the dual Fountain theorem [5] to the general Banach framework._ The paper is organized as follows. In Section 2, we introduce some notations and preliminary results. The proofs of Theorems 1.1, 1.3 and 1.5 are completed in Sections 3-5, respectively. ## 2 Notations and Preliminaries In this paper, \(C,C^{\prime},C_{1},C_{2},...\) always denote generic positive constants. \(|\cdot|_{p}\) denotes the usual norm of \(L^{p}(\mathbb{R}^{n})\) or \(L^{p}(\Omega)\). \(\|\cdot\|\) denotes the norm of \(W_{0}^{s,p}(\Omega)\). We use \(\varphi_{1}\) to denote the eigenfunction corresponding to the first eigenvalue \(\lambda_{1}\) of \((-\Delta)_{p}^{s}\) in \(W_{0}^{s,p}(\Omega)\) such that \(\varphi_{1}>0\) in \(\Omega\) and \(|\varphi_{1}|_{\infty}=1\). ### Truncations for the minimizers of \(S_{s,p}\) From [8], we know that if \(s\in(0,1)\), \(p>1\), there exists a minimizer \(U\in D^{s,p}(\mathbb{R}^{n})\) for \(S_{s,p}\). 
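For the reader's convenience, we recall the standard variational characterization of the best fractional Sobolev constant appearing here: \[S_{s,p}:=\inf_{u\in D^{s,p}(\mathbb{R}^{n})\setminus\{0\}}\frac{\displaystyle\int_{\mathbb{R}^{2n}}\frac{|u(x)-u(y)|^{p}}{|x-y|^{n+sp}}dxdy}{\Big(\displaystyle\int_{\mathbb{R}^{n}}|u|^{p_{s}^{*}}dx\Big)^{p/p_{s}^{*}}},\qquad p_{s}^{*}=\frac{np}{n-sp},\] where \(D^{s,p}(\mathbb{R}^{n})\) denotes the completion of \(C_{c}^{\infty}(\mathbb{R}^{n})\) with respect to the Gagliardo seminorm in the numerator.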
Up to scaling and translation, we can assume that \(U(0)=1\), \(U\) is radially symmetric, nonnegative, radially decreasing and solves in \(\mathbb{R}^{n}\) \[(-\Delta)_{p}^{s}U=U^{p_{s}^{*}-1}. \tag{2.1}\] With a slight abuse of notation, we also write \(U(x)=U(r)\) with \(r=|x|\). For any \(\varepsilon>0\), let \[U_{\varepsilon}(x)=\frac{1}{\varepsilon^{(n-sp)/p}}U\left(\frac{x}{ \varepsilon}\right) \tag{2.2}\] which is also a minimizer for \(S_{s,p}\), and satisfies (2.1). We recall some results shown in Lemmas 2.2, 2.6 and 2.7 of [30] respectively. **Lemma 2.1**.: _There exist constants \(c_{1},c_{2}>0\) and \(\theta>1\) such that for all \(r\geq 1\),_ \[\frac{c_{1}}{r^{(n-sp)/(p-1)}}\leq U(r)\leq\frac{c_{2}}{r^{(n-sp)/(p-1)}} \tag{2.3}\] _and_ \[2U(\theta r)\leq U(r). \tag{2.4}\] Let \(\theta\) be as in Lemma 2.1, fix \(\eta\in C^{\infty}(\mathbb{R}^{n},[0,1])\) such that \[\eta(x)=\left\{\begin{array}{ll}0,&\mbox{if}\,|x|\leq 2\theta,\\ 1,&\mbox{if}\,|x|\geq 3\theta,\end{array}\right. \tag{2.5}\] and for any \(\delta>0\), denote \[\eta_{\delta}(x)=\eta\left(\frac{x}{\delta}\right). \tag{2.6}\] **Lemma 2.2**.: _Assume that \(0\in\Omega\). Then there exists a constant \(C=C(n,\Omega,p,s)>0\) such that for any \(v\in W_{0}^{s,p}(\Omega)\), \(\delta>0\) satisfying \((-\Delta)_{p}^{s}v\in L^{\infty}(\Omega)\) and \(B_{5\theta\delta}(0)\subset\Omega,\) there holds_ \[\|v\eta_{\delta}\|^{p}\leq\|v\|^{p}+C\left|(-\Delta)_{p}^{s}v\right|_{\infty} ^{p/(p-1)}\delta^{n-sp}.\] For any \(\varepsilon,\delta>0\), we denote \[M_{\varepsilon,\delta}=\frac{U_{\varepsilon}(\delta)}{U_{\varepsilon}( \delta)-U_{\varepsilon}(\theta\delta)}. 
\tag{2.7}\] Let \[g_{\varepsilon,\delta}(t)=\begin{cases}0,&\mbox{if }0\leq t\leq U_{\varepsilon}( \theta\delta),\\ M_{\varepsilon,\delta}^{p}(t-U_{\varepsilon}(\theta\delta)),&\mbox{if }U_{ \varepsilon}(\theta\delta)\leq t\leq U_{\varepsilon}(\delta),\\ t+U_{\varepsilon}(\delta)(M_{\varepsilon,\delta}^{p-1}-1),&\mbox{if }t\geq U_{ \varepsilon}(\delta),\end{cases}\] and \[G_{\varepsilon,\delta}(t)=\int_{0}^{t}g_{\varepsilon,\delta}^{\prime}(\tau)^{ 1/p}d\tau=\begin{cases}0,&\mbox{if }0\leq t\leq U_{\varepsilon}(\theta \delta),\\ M_{\varepsilon,\delta}\left(t-U_{\varepsilon}(\theta\delta)\right),&\mbox{if }U_{ \varepsilon}(\theta\delta)\leq t\leq U_{\varepsilon}(\delta),\\ t,&\mbox{if }t\geq U_{\varepsilon}(\delta).\end{cases} \tag{2.8}\] The functions \(g_{\varepsilon,\delta}\) and \(G_{\varepsilon,\delta}\) are nondecreasing and absolutely continuous. Then the radially nonincreasing function \[u_{\varepsilon,\delta}(r)=G_{\varepsilon,\delta}(U_{\varepsilon}(r))\leq U_{ \varepsilon}(r), \tag{2.9}\] satisfies \[u_{\varepsilon,\delta}(r)=U_{\varepsilon}(r)\ \text{if}\ r\leq\delta\quad\text{and} \quad\text{supp}(u_{\varepsilon,\delta})\subset B_{\theta\delta}(0). \tag{2.10}\] **Lemma 2.3**.: _There exists a constant \(C=C(n,p,s)>0\) such that for any \(\varepsilon\leq\frac{\delta}{2}\),_ \[\|u_{\varepsilon,\delta}\|^{p}\leq S_{s,p}^{\frac{n}{sp}}+C\left( \frac{\varepsilon}{\delta}\right)^{(n-sp)/(p-1)}, \tag{2.11}\] \[|u_{\varepsilon,\delta}|_{p_{s}^{*}}^{p_{s}^{*}}\geq S_{s,p}^{ \frac{n}{sp}}-C\Big{(}\frac{\varepsilon}{\delta}\Big{)}^{\frac{n}{p-1}}, \tag{2.12}\] \[|u_{\varepsilon,\delta}|_{p}^{p}\geq\begin{cases}\frac{1}{C} \varepsilon^{sp}|\log\left(\frac{\varepsilon}{\delta}\right)|,&\text{if}\ \ n=sp^{2},\\ \frac{1}{C}\varepsilon^{sp},&\text{if}\ \ n>sp^{2}.\end{cases} \tag{2.13}\] ### Further useful results Consider the fractional inequality \[(-\Delta)_{p}^{s}u\geq 0\quad\text{in}\ \Omega;\quad u=0\quad\text{in}\ \mathbb{R}^{n}\backslash\Omega. 
\tag{2.14}\] Referring to [7, Theorem A.1], we recall the following useful maximum principle. **Lemma 2.4**.: _Let \(s\in(0,1)\), \(1<p<\infty\) and \(\Omega\subset\mathbb{R}^{n}\) be an open bounded connected set. If \(u\in W^{s,p}_{0}(\Omega)\) is a nonnegative solution to (2.14) and \(u\not\equiv 0\) in \(\Omega\), then \(u>0\) almost everywhere in \(\Omega\)._ The following comparison principle for the fractional \(p\)-Laplacian can be found in [23, Proposition 2.10] (see also [27, Lemma 9]). **Lemma 2.5**.: _Let \(p>1\) and \(\Omega\) be an open bounded set. If \(u,v\in W^{s,p}_{0}(\Omega)\) satisfy that for all \(\varphi\in W^{s,p}_{0}(\Omega)\) with \(\varphi\geq 0\) in \(\Omega\),_ \[\int_{\mathbb{R}^{2n}}\frac{J_{u}(x,y)(\varphi(x)-\varphi(y))}{|x-y|^{n+sp}}dxdy \geq\int_{\mathbb{R}^{2n}}\frac{J_{v}(x,y)(\varphi(x)-\varphi(y))}{|x-y|^{n+ sp}}dxdy,\] _then \(u\geq v\) in \(\Omega\)._ Iannizzotto, Mosconi and Squassina [24] proved the following equivalence. **Lemma 2.6**.: _Let \(p\geq 2\), \(s\in(0,1)\) and \(n>sp\). Let \(\Omega\subset\mathbb{R}^{n}\) be a bounded domain with a \(C^{1,1}\)-boundary, \(f:\Omega\times\mathbb{R}\to\mathbb{R}\) be a Carathéodory mapping satisfying (1.2). Then, for any \(u_{0}\in W^{s,p}_{0}(\Omega)\), the following are equivalent:_ 1. _there exists_ \(\rho>0\) _such that_ \(E(u_{0}+v)\geq E(u_{0})\) _for all_ \(v\in W^{s,p}_{0}(\Omega)\)_,_ \(\|v\|\leq\rho\)_;_ 2. _there exists_ \(\sigma>0\) _such that_ \(E(u_{0}+v)\geq E(u_{0})\) _for all_ \(v\in W^{s,p}_{0}(\Omega)\cap\mathcal{C}^{0}_{s}(\overline{\Omega})\)_,_ \(\|v\|_{\mathcal{C}^{0}_{s}(\overline{\Omega})}\leq\sigma\)_,_ _where \(E\) is defined in (1.3)._ Next is a Hölder regularity result for solutions of (1.1), which comes from [12, Theorem 3.3, Remark 3.4] and [23, Theorem 1.1]. **Proposition 2.7**.: _Let \(\Omega\) be a bounded domain with \(C^{1,1}\) boundary, and let \(f\) satisfy (1.2). 
Then there exists \(\alpha\in(0,s]\) such that if \(u\in W^{s,p}_{0}(\Omega)\) is a solution of (1.1), we have \(u\in C^{\alpha}(\overline{\Omega})\)._ If we assume \(f(x,u)\in L^{\infty}(\Omega)\) and \(f(x,u)\geq 0\), the solutions of (1.1) also satisfy a Hopf-type lemma [14, Theorem 1.5]. In particular, the solutions can be controlled by \(\mathrm{d}^{s}_{\Omega}(x)\). The decay information of solutions near the boundary is very useful to construct suitable subsolutions or supersolutions. **Proposition 2.8**.: _Let \(\Omega\) be a bounded domain with \(C^{1,1}\) boundary and let \(u\in W^{s,p}_{0}(\Omega)\) be a weak solution of the following equation_ \[\begin{cases}(-\Delta)^{s}_{p}u=f&\text{in }\Omega,\\ \qquad u=0&\text{in }\mathbb{R}^{n}\setminus\Omega,\end{cases} \tag{2.15}\] _where \(f\in L^{\infty}(\Omega)\), \(f\geq 0\) and \(f\not\equiv 0\). Then there are two positive numbers \(C\), \(C^{\prime}\) such that_ \[C\mathrm{d}^{s}_{\Omega}(x)\leq u(x)\leq C^{\prime}\mathrm{d}^{s}_{\Omega}(x),\ \ \text{for a.e.}\ x\in\Omega.\] Proof.: By Lemmas 2.4 and 2.5, we have \(u>0\) a.e. in \(\Omega\). Since \(f\in L^{\infty}(\Omega)\), the Hölder continuity follows from [23, Theorem 1.1]. From Hopf's lemma [14, Theorem 1.5], there exists \(C>0\) such that \(C\mathrm{d}^{s}_{\Omega}(x)\leq u(x).\) Moreover, it follows from [23, Theorem 4.4] that there exists \(C^{\prime}>0\) such that \(u(x)\leq C^{\prime}\mathrm{d}^{s}_{\Omega}(x)\). At last, we summarize the super-subsolution method as follows. **Proposition 2.9**.: _Let \(f:\Omega\times\mathbb{R}\to\mathbb{R}\) be a Carathéodory mapping satisfying that \(f(x,t)\) is continuous and increasing in \(t\) for a.e. \(x\in\Omega\). 
Let \(\underline{u}\in W^{s,p}_{0}(\Omega)\) be a subsolution to (1.1), and \(\overline{u}\in W^{s,p}_{0}(\Omega)\) be a supersolution to (1.1) such that \(\underline{u}\leq\overline{u}\) and_ \[f(\cdot,\underline{u}(\cdot)),\ f(\cdot,\overline{u}(\cdot))\in L^{\frac{p_{s}^{*}}{p_{s}^{*}-1}}(\Omega).\] _Then there exists a weak solution \(u\in W^{s,p}_{0}(\Omega)\) of (1.1) such that \(\underline{u}\leq u\leq\overline{u}\)._ Proof.: Let \(u_{0}:=\underline{u}\), and by induction we define \(u_{j}\) (\(j\geq 1\)) as solutions of \[\begin{cases}(-\Delta)^{s}_{p}u_{j+1}=f(x,u_{j})&\text{in }\Omega,\\ \qquad u_{j+1}=0&\text{in }\mathbb{R}^{n}\setminus\Omega.\end{cases} \tag{2.16}\] By the comparison principle and the monotonicity of \(f\), we get \(\underline{u}\leq u_{j}\leq u_{j+1}\leq\overline{u}\). Consequently, \[|u_{j}|\leq\max\{|\underline{u}|,|\overline{u}|\},\quad|f(x,u_{j})|\leq\max\{| f(x,\underline{u})|,|f(x,\overline{u})|\}.\] Multiplying (2.16) by \(u_{j+1}\), \[\|u_{j+1}\|^{p}= \int_{\Omega}f(x,u_{j})u_{j+1}dx \leq\int_{\mathbb{R}^{n}}\max\{|\underline{u}|,|\overline{u}|\} \max\{|f(x,\underline{u})|,|f(x,\overline{u})|\}dx.\] Therefore \(\{u_{j}\}\) is a bounded sequence in \(W^{s,p}_{0}(\Omega)\); up to a subsequence, \(u_{j}\) converges weakly to some \(u\) in \(W^{s,p}_{0}(\Omega)\) and \(u_{j}(x)\to u(x)\) a.e. in \(\mathbb{R}^{n}\). Consequently, \(u\) is a weak solution to (1.1) and \(\underline{u}\leq u\leq\overline{u}\). ## 3 Proof of Theorem 1.1 Let \[\Lambda:=\sup\{\lambda>0:(P_{\lambda})\ \text{has a positive solution in }\ W^{s,p}_{0}(\Omega)\}. \tag{3.1}\] We first verify that \(\Lambda\) is finite and nonzero. For simplicity, we will not always repeat the regularity assumption on \(\Omega\), nor the assumption \(n>sp\). 
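Throughout this section, sub- and supersolutions are understood in the usual weak sense: writing \(J_{u}(x,y):=|u(x)-u(y)|^{p-2}(u(x)-u(y))\) as in Lemma 2.5, a function \(\underline{u}\in W^{s,p}_{0}(\Omega)\) is a subsolution of (1.1) if for every \(\varphi\in W^{s,p}_{0}(\Omega)\) with \(\varphi\geq 0\) in \(\Omega\), \[\int_{\mathbb{R}^{2n}}\frac{J_{\underline{u}}(x,y)(\varphi(x)-\varphi(y))}{|x-y|^{n+sp}}dxdy\leq\int_{\Omega}f(x,\underline{u})\varphi\,dx,\] and \(\overline{u}\) is a supersolution if the reverse inequality holds.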
**Lemma 3.1**.: _For \(s\in(0,1)\), \(p>q>1\) and \(n>sp\), we have \(0<\Lambda<\infty\)._ Proof.: Let \(e\) be the solution of the following equation \[\left\{\begin{aligned} (-\Delta)_{p}^{s}e&=1& \quad\text{ in }\Omega,\\ e&=0&\quad\text{ in }\mathbb{R}^{n} \setminus\Omega.\end{aligned}\right. \tag{3.2}\] Since \(1<q<p<p_{s}^{*}\), we can find \(\lambda_{0},M>0\) small satisfying \[\lambda_{0}M^{q-p}|e|_{\infty}^{q-1}+M^{p_{s}^{*}-p}|e|_{\infty}^{p_{s}^{*}-1 }\leq 1,\] which implies that for \(0<\lambda\leq\lambda_{0}\), \[(-\Delta)_{p}^{s}(Me)=M^{p-1}\geq\lambda M^{q-1}|e|_{\infty}^{q-1}+M^{p_{s}^{* }-1}|e|_{\infty}^{p_{s}^{*}-1}\geq\lambda(Me)^{q-1}+(Me)^{p_{s}^{*}-1}.\] So \(Me\) is a supersolution of \((P_{\lambda})\). Fix any \(0<\lambda\leq\lambda_{0}\); since \(\varphi_{1}\in C(\overline{\Omega})\), \(|\varphi_{1}|_{\infty}=1\) and \(\varphi_{1}>0\) in \(\Omega\), there exists \(\varepsilon_{0}\in(0,1)\) such that \[\lambda_{1}\leq\lambda(\varepsilon\varphi_{1})^{q-p}\leq\lambda(\varepsilon \varphi_{1})^{q-p}+(\varepsilon\varphi_{1})^{p_{s}^{*}-p},\quad\forall\;0< \varepsilon<\varepsilon_{0}.\] We check readily that \(\varepsilon\varphi_{1}\) is a subsolution of \((P_{\lambda})\). According to Proposition 2.8, we can choose \(\varepsilon>0\) small enough such that \(\varepsilon\varphi_{1}\leq Me\). By Proposition 2.9, \((P_{\lambda})\) has a positive solution, which yields \(\Lambda>0\). From [9, Theorem 4.1], we know that \(\lambda_{1}\) is an isolated eigenvalue, so there is \(\widetilde{\lambda}>\lambda_{1}\) which is not an eigenvalue of \((-\Delta)_{p}^{s}\). Assume by contradiction that \(\Lambda=\infty\). Then we can choose \[\lambda>\max_{t\geq 0}\big{(}\widetilde{\lambda}t^{p-q}-t^{p_{s}^{*}- q}\big{)}\] such that \((P_{\lambda})\) has a positive solution, denoted by \(u_{\widetilde{\lambda}}\); the choice of \(\lambda\) guarantees that \(\lambda t^{q-1}+t^{p_{s}^{*}-1}\geq\widetilde{\lambda}t^{p-1}\) for all \(t>0\). 
Hence \(u_{\widetilde{\lambda}}\) is a supersolution of the following equation \[\left\{\begin{aligned} (-\Delta)_{p}^{s}u&= \widetilde{\lambda}|u|^{p-2}u&\quad\text{in }\Omega,\\ u&=0&\quad\text{ in }\mathbb{R}^{n} \setminus\Omega.\end{aligned}\right. \tag{3.3}\] Applying again Propositions 2.7-2.8, there is \(\varepsilon>0\) small such that \(\varepsilon\varphi_{1}\leq u_{\widetilde{\lambda}}\). Since \(\lambda_{1}<\widetilde{\lambda}\), \(\varepsilon\varphi_{1}\) is a subsolution to (3.3). By Proposition 2.9, there exists a positive solution of (3.3), which contradicts the choice of \(\widetilde{\lambda}\). This shows that \(0<\Lambda<\infty\). In the sequel, we are going to find the minimal positive solution of \((P_{\lambda})\). We claim that the minimal solution can be derived by iterating from the solution of \((Q_{\lambda})\). The first step is to prove that the solution of \((Q_{\lambda})\) is unique. **Lemma 3.2**.: _Let \(s\in(0,1)\), \(p>1\), \(n\geq 1\), \(q\in(1,p)\) and \(\lambda>0\). Then there exists a unique solution \(v_{\lambda}\) to \((Q_{\lambda})\). Moreover, for any \(\lambda>0\), \(v_{\lambda}=\lambda^{\frac{1}{p-q}}v_{1}\)._ Proof.: Consider the functional \[E_{\lambda}(v)=\frac{1}{p}\|v\|^{p}-\frac{\lambda}{q}|v^{+}|_{q}^{q},\quad v\in W _{0}^{s,p}(\Omega),\] which is coercive and weakly lower semi-continuous. Therefore, there is a global minimizer \(v_{\lambda}\in W_{0}^{s,p}(\Omega)\) for \(E_{\lambda}\). Since \(p>q>1\), we obtain \(E_{\lambda}(v_{\lambda})<0\), hence \(v_{\lambda}\neq 0\). By the maximum principle (Lemma 2.4), we have \(v_{\lambda}>0\) in \(\Omega\). We claim that \(v_{\lambda}\) is the unique positive solution to \((Q_{\lambda})\). Let \(w\) be another positive solution. By Proposition 2.7, \(v_{\lambda}\), \(w\in L^{\infty}(\Omega)\), and then by Proposition 2.8 there are two positive constants \(C\), \(C^{\prime}\) such that \[Cw(x)\leq v_{\lambda}(x)\leq C^{\prime}w(x),\quad\text{for a.e. }x\in\Omega. 
\tag{3.4}\] Let \(\beta_{0}:=\sup\{\ell>0:v_{\lambda}\geq\ell w\text{ a.e. in }\Omega\}\); by (3.4), we have \(0<\beta_{0}<\infty\). Clearly \[(-\Delta)_{p}^{s}v_{\lambda}=\lambda v_{\lambda}^{q-1}\geq\lambda\beta_{0}^{q-1 }w^{q-1}=(-\Delta)_{p}^{s}\big{(}\beta_{0}^{\frac{q-1}{p-1}}w\big{)}.\] By the comparison principle, we get \(v_{\lambda}\geq\beta_{0}^{\frac{q-1}{p-1}}w\), so that \(\beta_{0}\geq\beta_{0}^{\frac{q-1}{p-1}}\); since \(\frac{q-1}{p-1}<1\), this forces \(\beta_{0}\geq 1\), which means \(v_{\lambda}\geq w\) in \(\Omega\). Similarly, \(w\geq v_{\lambda}\), so we have \(w=v_{\lambda}\). The uniqueness yields readily the expression of \(v_{\lambda}\) by \(v_{1}\), since \(\lambda^{\frac{1}{p-q}}v_{1}\) solves \((Q_{\lambda})\). Due to the nonlinearity of \((-\Delta)_{p}^{s}\), we cannot derive the strong comparison principle directly. Here we will show the strict comparison between minimal solutions, thanks to the following inequality. **Lemma 3.3**.: _Let \(f_{\lambda}(t)=\lambda t^{q-1}+t^{p^{*}_{s}-1}\) for \(\lambda>0\), \(t>0\), \(q<p^{*}_{s}\). Then for any \(0<\lambda<\lambda^{\prime}<\infty\) and \(M>0\), there exists \(\beta_{0}>1\) such that \(f_{\lambda}(\beta_{0}t)\leq f_{\lambda^{\prime}}(t)\) for \(0<t\leq M\)._ Proof.: Let \(1<\beta\leq(\frac{\lambda^{\prime}+\lambda}{2\lambda})^{\frac{1}{q-1}}\), then \[f_{\lambda^{\prime}}(t)-f_{\lambda}(\beta t) =\Big{(}\frac{\lambda^{\prime}+\lambda}{2}-\lambda\beta^{q-1} \Big{)}t^{q-1}+\frac{\lambda^{\prime}-\lambda}{2}t^{q-1}+(1-\beta^{p^{*}_{s}- 1})t^{p^{*}_{s}-1}\geq\frac{\lambda^{\prime}-\lambda}{2}t^{q-1}+(1-\beta^{p^{*}_{s} -1})t^{p^{*}_{s}-1}.\] As \(\lambda^{\prime}>\lambda\) and \(p^{*}_{s}>q\), there exist \(t_{0}>0\), \(\beta_{1}>1\) such that \(f_{\lambda}(\beta_{1}t)\leq f_{\lambda^{\prime}}(t)\) for all \(0<t<t_{0}\). On the other hand, there exists \(\beta_{2}>1\), close enough to \(1\), such that \(f_{\lambda}(\beta_{2}t)\leq f_{\lambda^{\prime}}(t)\) for all \(t\in[t_{0},M]\). Taking \(\beta_{0}=\min\{\beta_{1},\beta_{2}\}\) completes the proof. 
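Let us also record an elementary homogeneity property, used repeatedly in the comparison arguments above and below, which follows at once from the definition of \((-\Delta)_{p}^{s}\): for every \(c>0\) and \(u\in W^{s,p}_{0}(\Omega)\), \[(-\Delta)_{p}^{s}(cu)=c^{p-1}(-\Delta)_{p}^{s}u.\] For instance, in the proof of Lemma 3.2 this gives \((-\Delta)_{p}^{s}\big(\beta_{0}^{\frac{q-1}{p-1}}w\big)=\beta_{0}^{q-1}(-\Delta)_{p}^{s}w=\lambda\beta_{0}^{q-1}w^{q-1}\), and it also justifies the scaling \(v_{\lambda}=\lambda^{\frac{1}{p-q}}v_{1}\), since \((-\Delta)_{p}^{s}\big(\lambda^{\frac{1}{p-q}}v_{1}\big)=\lambda^{\frac{p-1}{p-q}}v_{1}^{q-1}=\lambda\big(\lambda^{\frac{1}{p-q}}v_{1}\big)^{q-1}\).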
Now, we are ready to prove the existence of the minimal positive solution to \((P_{\lambda})\). **Proposition 3.4**.: _For any \(0<\lambda<\Lambda\), the problem \((P_{\lambda})\) has a minimal positive solution \(u_{\lambda}\) such that \(u_{\lambda}<u_{\lambda^{\prime}}\) if \(0<\lambda<\lambda^{\prime}<\Lambda\). Moreover, \(\|u_{\lambda}\|_{\mathcal{C}^{0}_{s}(\overline{\Omega})}\to 0\) as \(\lambda\to 0^{+}\)._ Proof.: Let \(w_{\lambda}\) be an arbitrary positive solution of \((P_{\lambda})\) with \(\lambda\in(0,\Lambda)\). Let \(v_{\lambda}\) be the unique solution of \((Q_{\lambda})\); we claim that \(v_{\lambda}\leq w_{\lambda}\). Indeed, \(w_{\lambda}\) is a supersolution to \((Q_{\lambda})\). Using Propositions 2.7-2.8, we see that for \(\varepsilon>0\) small enough, there holds \(\varepsilon\varphi_{1}\leq w_{\lambda}\) and \(\varepsilon\varphi_{1}\) is a subsolution to \((Q_{\lambda})\). As before, we get a solution of \((Q_{\lambda})\) between \(\varepsilon\varphi_{1}\) and \(w_{\lambda}\); by the uniqueness in Lemma 3.2, this solution is \(v_{\lambda}\), hence \(v_{\lambda}\leq w_{\lambda}\). Take any \(\lambda\in(0,\Lambda)\). By the definition of \(\Lambda\), there exists \(\lambda^{\prime}\in(\lambda,\Lambda)\) such that \((P_{\lambda^{\prime}})\) has a positive solution, denoted by \(w_{\lambda^{\prime}}\). Applying Lemma 3.2 and the above argument, there holds \(v_{\lambda}\leq v_{\lambda^{\prime}}\leq w_{\lambda^{\prime}}\). As \(v_{\lambda}\) and \(w_{\lambda^{\prime}}\) are respectively a sub- and a supersolution to \((P_{\lambda})\), we can use the classical monotone iteration method or Proposition 2.9, starting with \(v_{\lambda}\), to obtain a positive solution \(u_{\lambda}\) of \((P_{\lambda})\). As \(w_{\lambda^{\prime}}\) can be replaced by an arbitrary solution of \((P_{\lambda})\) or \((P_{\lambda^{\prime}})\), we conclude that \(u_{\lambda}\) is the minimal positive solution to \((P_{\lambda})\) and \(u_{\lambda}\leq u_{\lambda^{\prime}}\). 
More precisely, for \(0<\lambda<\lambda^{\prime}<\Lambda\), using Lemma 3.3 (applied with \(M=|u_{\lambda}|_{\infty}\)), there exists \(\beta_{0}>1\) such that \[(-\Delta)_{p}^{s}u_{\lambda^{\prime}}=f_{\lambda^{\prime}}(u_{\lambda^{\prime}}) \geq f_{\lambda^{\prime}}(u_{\lambda})\geq f_{\lambda}(\beta_{0}u_{\lambda})= \lambda\beta_{0}^{q-1}u_{\lambda}^{q-1}+\beta_{0}^{p^{*}_{s}-1}u_{\lambda}^{p^{ *}_{s}-1}\geq(-\Delta)_{p}^{s}(\beta_{0}^{\frac{q-1}{p-1}}u_{\lambda}).\] Applying again the comparison principle, we see that \(u_{\lambda^{\prime}}\geq\beta_{0}^{\frac{q-1}{p-1}}u_{\lambda}>u_{\lambda}\) in \(\Omega\). Finally, coming back to the proof of Lemma 3.1, for any \(M>0\) small enough, there exists \(\lambda>0\) small such that \(Me\) is a supersolution of \((P_{\lambda})\). As above, since \(u_{\lambda}\) is the minimal solution, we get \(u_{\lambda}\leq Me\). So \(\|u_{\lambda}\|_{\mathcal{C}^{0}_{s}(\overline{\Omega})}\to 0\) as \(\lambda\to 0^{+}\). For any \(\lambda\in(0,\Lambda)\), we can find \(0<\lambda^{\prime\prime}<\lambda<\lambda^{\prime}<\Lambda\). By Proposition 3.4, there are three minimal solutions corresponding to \(\lambda^{\prime\prime}\), \(\lambda\), \(\lambda^{\prime}\), denoted by \(u_{1},u_{\lambda},u_{2}\) respectively. Let us define for every \(x\in\Omega\), \[\widehat{f}_{\lambda}(x,t)=\left\{\begin{array}{ll}f_{\lambda}(u_{2}(x)),& \mbox{if }\;t\geq u_{2}(x);\\ f_{\lambda}(t),&\mbox{if }\;u_{2}(x)>t\geq 0.\end{array}\right.\] Consider the following equation \[\left\{\begin{aligned} (-\Delta)^{s}_{p}u&=\widehat{f}_{ \lambda}(x,u)&\mbox{in }\;\Omega,\\ u&\geq 0&\mbox{in }\;\Omega,\\ u&=0&\mbox{in }\;\mathbb{R}^{n}\setminus\Omega,\end{aligned}\right. 
\tag{3.5}\] with the associated energy functional \[\widehat{I}_{\lambda}(u)=\frac{1}{p}\|u\|^{p}-\int_{\Omega}\widehat{F}_{ \lambda}(x,u)dx,\quad\mbox{where }\;\widehat{F}_{\lambda}(x,u)=\int_{0}^{u^{+}}\widehat{f}_{\lambda}(x,t)dt.\] One can prove that \(\widehat{I}_{\lambda}\) is weakly lower semi-continuous and coercive in \(W^{s,p}_{0}(\Omega)\), so \(\widehat{I}_{\lambda}\) achieves its global minimum at some \(\widehat{u}_{\lambda}\in W^{s,p}_{0}(\Omega)\) such that \(0<\widehat{u}_{\lambda}\leq u_{2}\). We can find a subsolution \(\varepsilon\varphi_{1}\) of \((P_{\lambda^{\prime\prime}})\) such that \(\varepsilon\varphi_{1}\leq\widehat{u}_{\lambda}\) and use the super-subsolution method to conclude that \(\widehat{u}_{\lambda}\geq u_{1}\). In virtue of Proposition 3.4, \(u_{1}<u_{2}\) in \(\Omega\). Let \(\Sigma\) be as in (1.7), i.e. \(\Sigma=\{u\in W^{s,p}_{0}(\Omega)\cap\mathcal{C}^{0}_{s}(\overline{\Omega}): u_{1}<u<u_{2}\}.\) We shall prove that \(\widehat{u}_{\lambda}\) is an interior point of \(\Sigma\) with respect to the \(\mathcal{C}^{0}_{s}(\overline{\Omega})\)-topology. **Lemma 3.5**.: \(\widehat{u}_{\lambda}\) _is a solution to \((P_{\lambda})\), and \(\widehat{u}_{\lambda}\in\Sigma^{o}\) is a local minimizer of \(\widetilde{I}_{\lambda}\) given in (1.8). 
In other words, there exists \(\sigma>0\) such that for any \(h\in W^{s,p}_{0}(\Omega)\cap\mathcal{C}^{0}_{s}(\overline{\Omega})\) with \(\|h\|_{\mathcal{C}^{0}_{s}(\overline{\Omega})}<\sigma\),_ \[\widehat{u}_{\lambda}+h\in\Sigma\quad\mbox{and}\quad\widetilde{I}_{\lambda}( \widehat{u}_{\lambda}+h)\geq\widetilde{I}_{\lambda}(\widehat{u}_{\lambda}).\] _Moreover, we have \(\widetilde{I}_{\lambda}(\widehat{u}_{\lambda})<0\),_ \[(p-1)\|\widehat{u}_{\lambda}\|^{p}-(q-1)\lambda|\widehat{u}_{\lambda}|^{q}_{ q}-(p^{*}_{s}-1)|\widehat{u}_{\lambda}|^{p^{*}_{s}}_{p^{*}_{s}}\geq 0, \tag{3.6}\] _and there exists \(C>0\) such that \(\|\widehat{u}_{\lambda}\|<C\) for all \(0<\lambda<\Lambda\)._ Proof.: Using Lemma 3.3, there exists \(\beta>1\) such that \[(-\Delta)^{s}_{p}u_{2}=f_{\lambda^{\prime}}(u_{2})\geq f_{\lambda^{\prime}}( \widehat{u}_{\lambda})\geq f_{\lambda}(\beta\widehat{u}_{\lambda})=\lambda \beta^{q-1}\widehat{u}_{\lambda}^{q-1}+\beta^{p^{*}_{s}-1}\widehat{u}_{\lambda} ^{p^{*}_{s}-1}\geq(-\Delta)^{s}_{p}(\beta^{\frac{q-1}{p-1}}\widehat{u}_{\lambda}),\] which together with the comparison principle implies \[u_{2}\geq\gamma_{2}\widehat{u}_{\lambda},\quad\mbox{where }\;\gamma_{2}:=\beta^{ \frac{q-1}{p-1}}>1. \tag{3.7}\] Similarly, there exists \(\gamma_{1}>1\) such that \[\widehat{u}_{\lambda}\geq\gamma_{1}u_{1}. \tag{3.8}\] Moreover, by Proposition 2.8, there exists \(\beta^{\prime}>0\) such that \[u_{1}(x)\geq\beta^{\prime}\mathrm{d}^{s}_{\Omega}(x). \tag{3.9}\] Choose now \(\sigma:=\left[\min(\gamma_{1},\gamma_{2})-1\right]\beta^{\prime}\). Let \(h\in W^{s,p}_{0}(\Omega)\cap\mathcal{C}^{0}_{s}(\overline{\Omega})\) satisfy \(\|h\|_{\mathcal{C}^{0}_{s}(\overline{\Omega})}<\sigma\); then by (3.7)-(3.9), there holds \[\widehat{u}_{\lambda}+h>\widehat{u}_{\lambda}-(\gamma_{1}-1)\beta^{\prime} \mathrm{d}^{s}_{\Omega}(x)\geq\widehat{u}_{\lambda}-(\gamma_{1}-1)u_{1}\geq u_{1},\] and similarly \(\widehat{u}_{\lambda}+h<u_{2}\), so \(\widehat{u}_{\lambda}+h\in\Sigma\). 
Furthermore, \[\begin{split}\widetilde{I}_{\lambda}(\widehat{u}_{\lambda}+h) &=\widehat{I}_{\lambda}(\widehat{u}_{\lambda}+h)+\int_{\Omega} \widehat{F}_{\lambda}(x,\widehat{u}_{\lambda}+h)dx-\frac{\lambda}{q}|\widehat{ u}_{\lambda}+h|_{q}^{q}-\frac{1}{p_{s}^{*}}|\widehat{u}_{\lambda}+h|_{p_{s}^{*}}^{p_{s}^ {*}}\\ &\geq\widehat{I}_{\lambda}(\widehat{u}_{\lambda})+\int_{\Omega} \int_{0}^{\widehat{u}_{\lambda}+h}\widehat{f}_{\lambda}(x,t)dtdx-\frac{\lambda }{q}|\widehat{u}_{\lambda}+h|_{q}^{q}-\frac{1}{p_{s}^{*}}|\widehat{u}_{\lambda }+h|_{p_{s}^{*}}^{p_{s}^{*}}\\ &=\widetilde{I}_{\lambda}(\widehat{u}_{\lambda}).\end{split}\] The last equality comes from the definition of \(\widehat{f}_{\lambda}\) and \(\widehat{u}_{\lambda}+h\in\Sigma\), so \(\widehat{u}_{\lambda}\) is a local minimizer for \(\widetilde{I}_{\lambda}\) with respect to the \(\mathcal{C}_{s}^{0}(\overline{\Omega})\) topology. Consider \(g(t)=\widetilde{I}_{\lambda}(t\widehat{u}_{\lambda})\); since \(g\) has a local minimum at \(t=1\), we get \(g^{\prime\prime}(1)\geq 0\), hence (3.6) holds. On the other hand, \(C_{c}^{\infty}(\Omega)\) is dense in \(W_{0}^{s,p}(\Omega)\) (see [29, Theorem 2.6]), and so is \(W_{0}^{s,p}(\Omega)\cap\mathcal{C}_{s}^{0}(\overline{\Omega})\). Then \(\widehat{u}_{\lambda}\) solves \((P_{\lambda})\), hence we have \[\|\widehat{u}_{\lambda}\|^{p}-\lambda|\widehat{u}_{\lambda}|_{q}^{q}-| \widehat{u}_{\lambda}|_{p_{s}^{*}}^{p_{s}^{*}}=0. \tag{3.10}\] Combining (3.6) with (3.10), we get \((p_{s}^{*}-q)\lambda|\widehat{u}_{\lambda}|_{q}^{q}\geq(p_{s}^{*}-p)\|\widehat {u}_{\lambda}\|^{p}\). Using again (3.10), \[\widetilde{I}_{\lambda}(\widehat{u}_{\lambda})=\left(\frac{1}{p}-\frac{1}{p_{s }^{*}}\right)\|\widehat{u}_{\lambda}\|^{p}-\left(\frac{1}{q}-\frac{1}{p_{s}^{* }}\right)\lambda|\widehat{u}_{\lambda}|_{q}^{q}<0.\] Moreover, for \(0<\lambda<\Lambda\), \[\|\widehat{u}_{\lambda}\|^{p}\leq C_{1}|\widehat{u}_{\lambda}|_{q}^{q}\leq C_{ 2}\|\widehat{u}_{\lambda}\|^{q},\] which implies that \(\|\widehat{u}_{\lambda}\|\) is uniformly bounded. 
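To make the derivation of (3.6) in the above proof explicit, note that for \(t>0\), \[g(t)=\widetilde{I}_{\lambda}(t\widehat{u}_{\lambda})=\frac{t^{p}}{p}\|\widehat{u}_{\lambda}\|^{p}-\frac{\lambda t^{q}}{q}|\widehat{u}_{\lambda}|_{q}^{q}-\frac{t^{p_{s}^{*}}}{p_{s}^{*}}|\widehat{u}_{\lambda}|_{p_{s}^{*}}^{p_{s}^{*}},\] so that \[g^{\prime\prime}(1)=(p-1)\|\widehat{u}_{\lambda}\|^{p}-(q-1)\lambda|\widehat{u}_{\lambda}|_{q}^{q}-(p_{s}^{*}-1)|\widehat{u}_{\lambda}|_{p_{s}^{*}}^{p_{s}^{*}};\] since \(g\) has a local minimum at \(t=1\) (as \(t\widehat{u}_{\lambda}\) stays in a small \(\mathcal{C}^{0}_{s}(\overline{\Omega})\)-neighborhood of \(\widehat{u}_{\lambda}\) for \(t\) close to \(1\)), we obtain \(g^{\prime\prime}(1)\geq 0\), which is exactly (3.6).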
Now we are in position to prove the existence of a positive solution for \((P_{\Lambda})\). **Lemma 3.6**.: _Let \(s\in(0,1)\), \(p>1\), \(n>sp\). For \(\lambda=\Lambda\), there is a positive solution to \((P_{\Lambda})\)._ Proof.: Let \(\widehat{u}_{\lambda}\) be given in Lemma 3.5; there exists \(C>0\) such that \(\|\widehat{u}_{\lambda}\|\leq C\). Notice that \(\widehat{u}_{\lambda}\) is also a critical point of \(\widetilde{I}_{\lambda}\), hence a positive solution to \((P_{\lambda})\). Let \(u_{\lambda}\) be the minimal positive solution given in Proposition 3.4. So \[\|u_{\lambda}\|^{p}=\lambda|u_{\lambda}|_{q}^{q}+|u_{\lambda}|_{p_{s}^{*}}^{p_ {s}^{*}}\leq\lambda|\widehat{u}_{\lambda}|_{q}^{q}+|\widehat{u}_{\lambda}|_{p_ {s}^{*}}^{p_{s}^{*}}=\|\widehat{u}_{\lambda}\|^{p}\leq C^{p}.\] As \(u_{\lambda}\) is increasing with respect to \(\lambda\) and bounded, there exists \(u_{\Lambda}\in W_{0}^{s,p}(\Omega)\) such that \(u_{\lambda}\) converges weakly to \(u_{\Lambda}\) and \(u_{\lambda}(x)\to u_{\Lambda}(x)\) a.e. in \(\Omega\) as \(\lambda\to\Lambda^{-}\). Consequently, \(u_{\Lambda}\) is a nontrivial positive solution to \((P_{\Lambda})\). Clearly, Lemma 3.1, Proposition 3.4 and Lemma 3.6 complete the proof of Theorem 1.1. ## 4 Proof of Theorem 1.3 Now, we consider the existence of a second positive solution for \((P_{\lambda})\). For convenience and without loss of generality, **we assume** \(0\in\Omega\). For \(0<\lambda<\Lambda\), let \(u_{\lambda}\) be the minimal solution given in Proposition 3.4, and \(\widehat{u}_{\lambda}\) be given in Lemma 3.5. If \(u_{\lambda}\neq\widehat{u}_{\lambda}\), we already get two positive solutions of \((P_{\lambda})\). 
Therefore, in this section, we always assume \[u_{\lambda}=\widehat{u}_{\lambda}.\] When \(p\geq 2\), from Lemma 2.6 and Lemma 3.5, it follows that \(u_{\lambda}\) is a local minimizer of \(\widetilde{I}_{\lambda}\) in \(W_{0}^{s,p}(\Omega)\), that is, there exists \(\rho>0\) such that \[\widetilde{I}_{\lambda}(u)\geq\widetilde{I}_{\lambda}(u_{\lambda})\quad\text{ for any }\|u-u_{\lambda}\|\leq\rho. \tag{4.1}\] In order to find a second positive solution, we will show a mountain pass geometry for \(\widetilde{I}_{\lambda}\) around \(u_{\lambda}\); we will choose the mountain pass paths from \(u_{\lambda}\) to a terminal point \(e\) such that \(\|e-u_{\lambda}\|>\rho\) and \(\widetilde{I}_{\lambda}(e)<\widetilde{I}_{\lambda}(u_{\lambda})\). We denote the set of mountain pass paths by \(\Gamma_{\varepsilon,\delta}\), and the mountain pass level by \(m_{\varepsilon,\delta}\), as in (1.11) and (1.12) respectively. We will use the path \(\gamma_{\varepsilon,\delta}\) given in (1.13). The following three claims will be checked: 1. \(\gamma_{\varepsilon,\delta}\in C([0,1],W_{0}^{s,p}(\Omega))\); 2. There exist \(\delta\geq 2\varepsilon>0\) such that the maximum of \(\widetilde{I}_{\lambda}\) along the path \(\gamma_{\varepsilon,\delta}\) is strictly less than \(c_{s,p}\), with \(c_{s,p}\) given in (1.9); 3. \(\widetilde{I}_{\lambda}\) satisfies the Palais-Smale condition for any level \(c<c_{s,p}\). Let \(\eta_{\delta}\) be given in (2.6). We have the following lemma. **Lemma 4.1**.: _Let \(B_{5\theta\delta}(0)\subset\Omega\) where \(\theta\) is given in Lemma 2.1. 
If \(u\in W_{0}^{s,p}(\Omega)\cap L^{\infty}(\Omega)\), then \(\eta_{\delta}u\to u\) in \(W_{0}^{s,p}(\Omega)\) as \(\delta\to 0\)._ Proof.: It is easy to see that there exists \(C=C(p)>0\) such that \[\begin{split}\|\eta_{\delta}u-u\|^{p}&\leq C\int_{\mathbb{R}^{2n}}\frac{|1-\eta_{\delta}(x)|^{p}|u(x)-u (y)|^{p}}{|x-y|^{n+sp}}dxdy+C\int_{\mathbb{R}^{2n}}\frac{|(1-\eta_{\delta})(x) -(1-\eta_{\delta})(y)|^{p}|u(y)|^{p}}{|x-y|^{n+sp}}dxdy\\ &=:K_{1}+K_{2}.\end{split}\] By Lebesgue's dominated convergence theorem, we have \(\underset{\delta\to 0}{\lim}K_{1}=0\). Moreover, since \(u\in L^{\infty}(\Omega)\), \[K_{2}\leq C|u|_{\infty}^{p}\|1-\eta_{\delta}\|^{p}=C|u|_{\infty}^{p}\|\eta_{\delta}\|^{p}=C|u|_{\infty}^{p}\|\eta_{1}\|^{p} \delta^{n-sp}.\] Therefore \(\lim_{\delta\to 0}\|\eta_{\delta}u-u\|^{p}=0\), since \(n>sp\). By the above lemma, we get \(\gamma_{\varepsilon,\delta}\in\Gamma_{\varepsilon,\delta}\). The second key observation is \[\sup_{u\in\gamma_{\varepsilon,\delta}([0,1])}\widetilde{I}_{\lambda}(u)<c_{s,p}, \tag{4.2}\] which then gives \(m_{\varepsilon,\delta}<c_{s,p}\). We begin with the following basic inequality. **Lemma 4.2**.: _Assume that \(p\in[2,\infty)\) and \(\gamma\in(0,2]\). Then there exists \(C=C(p,\gamma)>0\) such that_ \[|a-b|^{p}\leq a^{p}+b^{p}-pab^{p-1}+Ca^{\gamma}b^{p-\gamma}\] _for all \(a,b>0\)._ Proof.: Let \(f(t)=|1-t|^{p}-1-t^{p}+pt\). For \(p\geq 2\), \(\limsup_{t\to\infty}f(t)\leq 0\) and \(f(t)=O(t^{2})\) as \(t\to 0^{+}\), so for any \(\gamma\in(0,2]\), \(\sup_{t>0}t^{-\gamma}f(t)<\infty\). Taking \(t=a/b\) and multiplying \(f(t)\leq Ct^{\gamma}\) by \(b^{p}\) yields the claim. Now we prove the claim (4.2). **Lemma 4.3**.: _Assume that \(p\in[2,\infty)\), \(p-1<q<p\), \(n>\frac{sp(q+1)}{q+1-p}\). Let \(\theta\) be given in Lemma 2.1, \(B_{5\theta\delta}(0)\subset\Omega\), and \(m_{\varepsilon,\delta}\) be given in (1.12). 
Then there exist \(\delta\geq 2\varepsilon>0\) such that \(m_{\varepsilon,\delta}<c_{s,p}\)._ Proof.: By Proposition 2.7, \(u_{\lambda}\in C^{\alpha}(\overline{\Omega})\) for some \(\alpha\in(0,s]\), which together with Lemma 2.2 yields \[\begin{split}\widetilde{I}_{\lambda}(\eta_{\delta}u_{\lambda})& =\frac{1}{p}\|\eta_{\delta}u_{\lambda}\|^{p}-\frac{\lambda}{q}| \eta_{\delta}u_{\lambda}|_{q}^{q}-\frac{1}{p_{s}^{*}}|\eta_{\delta}u_{\lambda} |_{p_{s}^{*}}^{p_{s}^{*}}\\ &\leq\frac{1}{p}\|u_{\lambda}\|^{p}+C\left|(-\Delta)_{p}^{s}u_{ \lambda}\right|_{\infty}^{p/(p-1)}\delta^{n-sp}-\frac{\lambda}{q}|u_{\lambda}| _{q}^{q}-\frac{1}{p_{s}^{*}}|u_{\lambda}|_{p_{s}^{*}}^{p_{s}^{*}}+C\delta^{n} \\ &=\widetilde{I}_{\lambda}(u_{\lambda})+C\delta^{n-sp}+C\delta^{ n}.\end{split} \tag{4.3}\] We will estimate the maximum of \(\widetilde{I}_{\lambda}(\eta_{\delta}u_{\lambda}+Ru_{\varepsilon,\delta})\) with respect to \(R>0\). **Step 1.**_Estimate for the \(W_{0}^{s,p}(\Omega)\)-norm_. We claim that if \(\delta/\varepsilon\geq 2\), the following estimate holds true: \[\begin{split}\|\eta_{\delta}u_{\lambda}+Ru_{\varepsilon,\delta}\|^ {p}&\leq\|\eta_{\delta}u_{\lambda}\|^{p}+R^{p}\left[S^{\frac{n}{ sp}}+C\left(\frac{\varepsilon}{\delta}\right)^{(n-sp)/(p-1)}\right]\\ &\quad-C_{1}R^{p-1}\varepsilon^{\frac{n-sp}{p}}+\frac{C_{2}R^{p- 2}}{\delta^{sp}}\varepsilon^{n-\frac{(n-sp)(p-2)}{p}}\left(\frac{\delta}{ \varepsilon}\right)^{n-\frac{(n-sp)(p-2)}{p-1}},\end{split} \tag{4.4}\] where \(C_{1}\) and \(C_{2}\) are independent of \(\varepsilon,\delta,R\). 
Indeed, \[\begin{split}\|\eta_{\delta}u_{\lambda}+Ru_{\varepsilon,\delta} \|^{p}&=\int_{\mathbb{R}^{2n}}\frac{|\eta_{\delta}u_{\lambda}(x)+ Ru_{\varepsilon,\delta}(x)-\eta_{\delta}u_{\lambda}(y)-Ru_{\varepsilon,\delta}(y)|^{p}}{ |x-y|^{n+sp}}dxdy\\ &\leq\int_{A_{1}}\frac{|\eta_{\delta}u_{\lambda}(x)-\eta_{ \delta}u_{\lambda}(y)|^{p}}{|x-y|^{n+sp}}dxdy+\int_{A_{2}}\frac{|Ru_{ \varepsilon,\delta}(x)-Ru_{\varepsilon,\delta}(y)|^{p}}{|x-y|^{n+sp}}dxdy\\ &\quad+2\int_{A_{3}}\frac{|\eta_{\delta}u_{\lambda}(x)-Ru_{ \varepsilon,\delta}(y)|^{p}}{|x-y|^{n+sp}}dxdy\\ &=:K_{1}+K_{2}+2K_{3},\end{split} \tag{4.5}\] where \(A_{1}=B_{\theta\delta}^{c}(0)\times B_{\theta\delta}^{c}(0)\), \(A_{2}=B_{2\theta\delta}(0)\times B_{2\theta\delta}(0)\), \(A_{3}=B_{2\theta\delta}^{c}(0)\times B_{\theta\delta}(0)\). We estimate \(K_{3}\) by Lemma 4.2 with \(\gamma=2\). For \(p\in[2,\infty)\), there exists \(C=C(p)>0\) such that \[\begin{split}K_{3}&\leq\int_{A_{3}}\frac{|\eta_{ \delta}u_{\lambda}(x)|^{p}}{|x-y|^{n+sp}}dxdy+\int_{A_{3}}\frac{|Ru_{ \varepsilon,\delta}(y)|^{p}}{|x-y|^{n+sp}}dxdy\\ &\quad-p\int_{A_{3}}\frac{|\eta_{\delta}u_{\lambda}(x)||Ru_{ \varepsilon,\delta}(y)|^{p-1}}{|x-y|^{n+sp}}dxdy+C\int_{A_{3}}\frac{|\eta_{ \delta}u_{\lambda}(x)|^{2}|Ru_{\varepsilon,\delta}(y)|^{p-2}}{|x-y|^{n+sp}}dxdy \\ &=:L_{1}+L_{2}-L_{3}+L_{4}.\end{split} \tag{4.6}\] First, \[K_{1}+2L_{1}=\|\eta_{\delta}u_{\lambda}\|^{p}\quad\text{and}\quad K_{2}+2L_{2 }=\|Ru_{\varepsilon,\delta}\|^{p}. \tag{4.7}\] For any \(y\in B_{\theta\delta}(0)\), \(x\in B_{2\theta\delta}^{c}(0)\), we have \(|x-y|\leq|x|+\theta\delta\). 
By (2.10) and (2.2), we obtain \[\begin{split} L_{3}&\geq pR^{p-1}\int_{B_{\delta}\setminus B_{2\theta\delta}}\frac{|\eta_{\delta}u_{\lambda}(x)|}{(|x|+\theta\delta)^{n+sp}}dx\int_{B_{\theta\delta}}u_{\varepsilon,\delta}(y)^{p-1}dy\\ &\geq\frac{CR^{p-1}}{\delta^{sp}}\int_{B_{\delta}}u_{\varepsilon,\delta}(y)^{p-1}dy\\ &\geq\frac{CR^{p-1}}{\delta^{sp}}\int_{B_{\delta}}U_{\varepsilon}(y)^{p-1}dy=\frac{CR^{p-1}\varepsilon^{n-\frac{(n-sp)(p-1)}{p}}}{\delta^{sp}}\int_{B_{\delta/\varepsilon}}U(y)^{p-1}dy.\end{split}\] Due to Lemma 2.1 and \(\delta/\varepsilon\geq 2\), we get \[\begin{split}\int_{B_{\delta/\varepsilon}}U(y)^{p-1}dy&\geq C\int_{1}^{\delta/\varepsilon}U(r)^{p-1}r^{n-1}dr\\ &\geq C\int_{1}^{\delta/\varepsilon}\frac{1}{r^{n-sp}}r^{n-1}dr=\frac{C}{sp}\left[\left(\frac{\delta}{\varepsilon}\right)^{sp}-1\right]\geq\frac{C}{2sp}\left(\frac{\delta}{\varepsilon}\right)^{sp}.\end{split}\] Thus, \[L_{3}\geq\frac{CR^{p-1}\varepsilon^{n-\frac{(n-sp)(p-1)}{p}}}{\delta^{sp}}\left(\frac{\delta}{\varepsilon}\right)^{sp}=CR^{p-1}\varepsilon^{n-\frac{(n-sp)(p-1)}{p}-sp}=CR^{p-1}\varepsilon^{\frac{n-sp}{p}}. \tag{4.8}\] For any \(x\in B^{c}_{2\theta\delta}(0)\), \(y\in B_{\theta\delta}(0)\), we have \(|x-y|\geq|x|-\theta\delta\geq\frac{|x|}{2}\).
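As a check on the exponent bookkeeping in (4.8), the simplification \(n-\frac{(n-sp)(p-1)}{p}-sp=\frac{n-sp}{p}\) can be written out:

```latex
n-\frac{(n-sp)(p-1)}{p}-sp
  =(n-sp)-\frac{(n-sp)(p-1)}{p}
  =(n-sp)\left(1-\frac{p-1}{p}\right)
  =\frac{n-sp}{p}.
```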
It follows from (2.9) that \[L_{4}\leq\frac{CR^{p-2}}{\delta^{sp}}\int_{B_{\theta\delta}}u_{ \varepsilon,\delta}(y)^{p-2}dy \leq\frac{CR^{p-2}}{\delta^{sp}}\int_{B_{\theta\delta}}U_{ \varepsilon}(y)^{p-2}dy\] \[=\frac{CR^{p-2}}{\delta^{sp}}\varepsilon^{n-\frac{(n-sp)(p-2)}{p} }\int_{B_{\theta\delta/\varepsilon}}U(y)^{p-2}dy.\] Using \(U\in L^{\infty}(\mathbb{R}^{n})\), Lemma 2.1 and \(\delta/\varepsilon\geq 2\), we have \[\int_{B_{\delta/\varepsilon}}U(y)^{p-2}dy \leq C\int_{1}^{\delta/\varepsilon}U(r)^{p-2}r^{n-1}dr+C\] \[\leq C\int_{1}^{\delta/\varepsilon}\frac{1}{r^{\frac{(n-sp)(p-2 )}{p-1}}}r^{n-1}dr+C\leq C\left(\frac{\delta}{\varepsilon}\right)^{n-\frac{(n -sp)(p-2)}{p-1}}.\] Therefore \[L_{4}\leq\frac{CR^{p-2}}{\delta^{sp}}\varepsilon^{n-\frac{(n-sp)(p-2)}{p}} \left(\frac{\delta}{\varepsilon}\right)^{n-\frac{(n-sp)(p-2)}{p-1}},\] which together with (4.5)-(4.8) and (2.11), implies (4.4). **Step 2.**_Estimates for power terms._ We claim that \[|\eta_{\delta}u_{\lambda}+Ru_{\varepsilon,\delta}|_{q}^{q}\geq|\eta_{\delta}u _{\lambda}|_{q}^{q}+CR^{q}\varepsilon^{n-\frac{(n-sp)q}{p}}\quad\text{for all}\;\;R>0, \tag{4.9}\] and \[|\eta_{\delta}u_{\lambda}+Ru_{\varepsilon,\delta}|_{p_{s}^{*}}^{p_{s}^{*}} \geq|\eta_{\delta}u_{\lambda}|_{p_{s}^{*}}^{p_{s}^{*}}+R^{p_{s}^{*}}\left[S^{ \frac{n}{sp}}-C\Big{(}\frac{\varepsilon}{\delta}\Big{)}^{\frac{n}{p-1}}\right]. \tag{4.10}\] Indeed, because the supports of \(\eta_{\delta}u_{\lambda}\) and \(u_{\varepsilon,\delta}\) are disjoint, there holds \[|\eta_{\delta}u_{\lambda}+Ru_{\varepsilon,\delta}|_{q}^{q}=|\eta_{\delta}u_{ \lambda}|_{q}^{q}+|Ru_{\varepsilon,\delta}|_{q}^{q}, \tag{4.11}\] and \[|\eta_{\delta}u_{\lambda}+Ru_{\varepsilon,\delta}|_{p_{s}^{*}}^{p_{s}^{*}}=| \eta_{\delta}u_{\lambda}|_{p_{s}^{*}}^{p_{s}^{*}}+|Ru_{\varepsilon,\delta}|_{ p_{s}^{*}}^{p_{s}^{*}}. 
\tag{4.12}\] By a direct computation, \[|u_{\varepsilon,\delta}|_{q}^{q}=\int_{B_{\theta\delta}}u_{\varepsilon,\delta}^{q}(y)dy\geq\int_{B_{\delta}}U_{\varepsilon}(y)^{q}dy=\varepsilon^{n-\frac{(n-sp)q}{p}}\int_{B_{\delta/\varepsilon}}U(y)^{q}dy.\] It follows from Lemma 2.1, \(\delta/\varepsilon\geq 2\) and \(n>\frac{spq}{q-p+1}\) that \[\int_{B_{\delta/\varepsilon}}U(y)^{q}dy\geq C\int_{1}^{\delta/\varepsilon}U(r)^{q}r^{n-1}dr\geq C\int_{1}^{\delta/\varepsilon}\frac{1}{r^{(n-sp)q/(p-1)}}r^{n-1}dr\geq C.\] Thus, \[|u_{\varepsilon,\delta}|_{q}^{q}\geq C\varepsilon^{n-\frac{(n-sp)q}{p}}. \tag{4.13}\] By (4.11), (4.13), we have that (4.9) holds. Using (4.12) and (2.12), we get (4.10). **Step 3.**_Conclusion._ Combining (4.4), (4.9) with (4.10), we deduce that \[\widetilde{I}_{\lambda}(\eta_{\delta}u_{\lambda}+Ru_{\varepsilon,\delta})\leq \widetilde{I}_{\lambda}(\eta_{\delta}u_{\lambda})+\frac{R^{p}}{p}\left[S^{\frac{n}{sp}}+C\left(\frac{\varepsilon}{\delta}\right)^{(n-sp)/(p-1)}\right]-\frac{R^{p^{*}_{s}}}{p^{*}_{s}}\left[S^{\frac{n}{sp}}-C\Big(\frac{\varepsilon}{\delta}\Big)^{\frac{n}{p-1}}\right]\] \[-C_{1}R^{q}\varepsilon^{n-\frac{(n-sp)q}{p}}-C_{1}R^{p-1}\varepsilon^{\frac{n-sp}{p}}+\frac{C_{2}R^{p-2}}{\delta^{sp}}\varepsilon^{n-\frac{(n-sp)(p-2)}{p}}\left(\frac{\delta}{\varepsilon}\right)^{n-\frac{(n-sp)(p-2)}{p-1}}.\] Taking \(\varepsilon=\delta^{k+1}\) with \(k>0\) in the above inequality, we obtain \[\widetilde{I}_{\lambda}(\eta_{\delta}u_{\lambda}+Ru_{\varepsilon,\delta})= \widetilde{I}_{\lambda}(\eta_{\delta}u_{\lambda})+g_{\delta}(R), \tag{4.14}\] with \[g_{\delta}(R)= \frac{R^{p}}{p}\left(S_{s,p}^{\frac{n}{sp}}+C\delta^{k(n-sp)/(p-1)}\right)-\frac{R^{p^{*}_{s}}}{p^{*}_{s}}\left(S_{s,p}^{\frac{n}{sp}}-C\delta^{\frac{kn}{p-1}}\right)\] \[-C_{1}R^{q}\delta^{(k+1)\left[n-\frac{(n-sp)q}{p}\right]}-C_{1}R^{p-1}\delta^{\frac{(k+1)(n-sp)}{p}}+C_{2}R^{p-2}\delta^{(n-sp)\left[1-\frac{(p-2)(p-k-1)}{p(p-1)}\right]}.\] Let \(R_{\delta}\in\mathbb{R}^{+}\)
satisfy \[g_{\delta}(R_{\delta})=\max_{R\in\mathbb{R}^{+}}g_{\delta}(R).\] Clearly, there exists \(\delta_{0}>0\) such that \(\{R_{\delta}\}_{\delta_{0}>\delta>0}\) is bounded and has a positive lower bound \(T>0\). Let \[h_{\delta}(R)=\frac{R^{p}}{p}\left(S_{s,p}^{\frac{n}{sp}}+C\delta^{k(n-sp)/(p-1)}\right)-\frac{R^{p^{*}_{s}}}{p^{*}_{s}}\left(S_{s,p}^{\frac{n}{sp}}-C\delta^{\frac{kn}{p-1}}\right).\] Obviously, \(h_{\delta}\) is increasing on \([0,R^{*}_{\delta}]\), decreasing on \([R^{*}_{\delta},\infty)\) with \[R^{*}_{\delta}=\left(\frac{S_{s,p}^{\frac{n}{sp}}+C\delta^{k(n-sp)/(p-1)}}{S_{s,p}^{\frac{n}{sp}}-C\delta^{\frac{kn}{p-1}}}\right)^{\frac{1}{p^{*}_{s}-p}}.\] Therefore \[\max_{R\in\mathbb{R}^{+}}h_{\delta}(R)=\frac{s}{n}\frac{\left(S_{s,p}^{\frac{n}{sp}}+C\delta^{k(n-sp)/(p-1)}\right)^{\frac{p^{*}_{s}}{p^{*}_{s}-p}}}{\left(S_{s,p}^{\frac{n}{sp}}-C\delta^{\frac{kn}{p-1}}\right)^{\frac{p}{p^{*}_{s}-p}}}=\frac{s}{n}S_{s,p}^{\frac{n}{sp}}+O\left(\delta^{k(n-sp)/(p-1)}\right).\] Thus, \[\max_{R\in\mathbb{R}^{+}}g_{\delta}(R)\leq \max_{R\in\mathbb{R}^{+}}h_{\delta}(R)-C_{1}R^{q}_{\delta}\delta^{(k+1)\left(n-\frac{(n-sp)q}{p}\right)} \tag{4.15}\] \[-C_{1}R^{p-1}_{\delta}\delta^{\frac{(k+1)(n-sp)}{p}}+C_{2}R^{p-2}_{\delta}\delta^{(n-sp)\left[1-\frac{(p-2)(p-k-1)}{p(p-1)}\right]}\] \[= \frac{s}{n}S_{s,p}^{\frac{n}{sp}}+O\left(\delta^{k(n-sp)/(p-1)}\right)-C_{1}R^{q}_{\delta}\delta^{(k+1)\left[n-\frac{(n-sp)q}{p}\right]}\] \[-C_{1}R^{p-1}_{\delta}\delta^{\frac{(k+1)(n-sp)}{p}}+C_{2}R^{p-2}_{\delta}\delta^{(n-sp)\left[1-\frac{(p-2)(p-k-1)}{p(p-1)}\right]}.\] Putting \(0<k<p-1\), it follows that \[\frac{k(n-sp)}{p-1}<\min\left\{(n-sp)\left[1-\frac{(p-2)(p-k-1)}{p(p-1)}\right],\frac{(k+1)(n-sp)}{p}\right\}.
\tag{4.16}\] Using (4.15) and (4.16), we get \[\max_{R\in\mathbb{R}^{+}}g_{\delta}(R)\leq\frac{s}{n}S_{s,p}^{\frac{n}{sp}}-C_{1}R_{\delta}^{q}\delta^{(k+1)\left(n-\frac{(n-sp)q}{p}\right)}+O\left(\delta^{k(n-sp)/(p-1)}\right). \tag{4.17}\] By (4.3), (4.14), (4.17) and \(R_{\delta}>T>0\), we obtain \[\max_{R\in\mathbb{R}^{+}}\widetilde{I}_{\lambda}(\eta_{\delta}u_{\lambda}+Ru_{\varepsilon,\delta})\leq c_{s,p}-C\delta^{(k+1)\left[n-\frac{(n-sp)q}{p}\right]}+O\left(\delta^{k(n-sp)/(p-1)}\right)+O(\delta^{n-sp}).\] By \(0<k<p-1\), to reach our aim, it suffices to have \[(k+1)\left[n-\frac{(n-sp)q}{p}\right]<\frac{k(n-sp)}{p-1}, \tag{4.18}\] which is equivalent to \[n-\frac{(n-sp)q}{p}<k\left[\frac{n-sp}{p-1}-n+\frac{(n-sp)q}{p}\right]. \tag{4.19}\] We want to prove that there exists \(0<k<p-1\) such that (4.19) holds. In fact, at \(k=p-1\) inequality (4.19) reduces to \(pn-(n-sp)<(n-sp)q\), that is, \(n(q+1-p)>sp(q+1)\), which holds since we have \[n>\frac{sp(q+1)}{q+1-p}.\] As the inequality is strict, (4.19) remains valid for \(k<p-1\) sufficiently close to \(p-1\). We then get \[\max_{R\in\mathbb{R}^{+}}\widetilde{I}_{\lambda}(\eta_{\delta}u_{\lambda}+Ru_{\varepsilon,\delta})<c_{s,p} \tag{4.20}\] if \(\delta\) is sufficiently small. Combining Lemma 4.1 with (4.20), we conclude (4.2) provided \(\delta\) is small enough, which means \(m_{\varepsilon,\delta}<c_{s,p}\). Next, we proceed to check that the functional \(\widetilde{I}_{\lambda}\) satisfies the Palais-Smale condition at the level \(c<c_{s,p}\). Recall that the functional \(\widetilde{I}_{\lambda}\) is said to satisfy the Palais-Smale condition at a level \(c\in\mathbb{R}\) (for short \((PS)_{c}\)) if any sequence \(\{u_{j}\}\subset W_{0}^{s,p}(\Omega)\) such that \[\widetilde{I}_{\lambda}(u_{j})\to c\ \ \ \text{and}\ \ \ \widetilde{I}_{\lambda}^{\prime}(u_{j})\to 0\ \ \text{in}\ \ W_{0}^{s,p}(\Omega)^{*}\] admits a subsequence which is convergent in \(W_{0}^{s,p}(\Omega)\). **Proposition 4.4**.: _Let \(s\in(0,1)\), \(1<q<p\), \(n>sp\) and \(u_{\lambda}\) be the minimal positive solution to \((P_{\lambda})\) in Proposition 3.4.
Assume that \(\widetilde{I}_{\lambda}\) has only two critical points \(0\) and \(u_{\lambda}\). Then \(\widetilde{I}_{\lambda}\) satisfies the \((PS)_{c}\) condition for all \(c<c_{s,p}\)._ Proof.: Let \(\{u_{j}\}\subset W_{0}^{s,p}(\Omega)\) be a \((PS)_{c}\) sequence of \(\widetilde{I}_{\lambda}\) with \(c<c_{s,p}\), i.e. \[\widetilde{I}_{\lambda}(u_{j})=\frac{1}{p}\|u_{j}\|^{p}-\frac{\lambda}{q}|u_{j }^{+}|^{q}_{q}-\frac{1}{p_{s}^{*}}|u_{j}^{+}|^{p_{s}^{*}}_{p_{s}^{*}}=c+o(1) \tag{4.21}\] and \[\begin{split}\langle\widetilde{I}_{\lambda}^{\prime}(u_{j}),v \rangle=&\int_{\mathbb{R}^{2n}}\frac{J_{u_{j}}(x,y)(v(x)-v(y))}{|x- y|^{n+sp}}dxdy-\lambda\int_{\Omega}(u_{j}^{+})^{q-1}vdx\\ &-\int_{\Omega}(u_{j}^{+})^{p_{s}^{*}-1}vdx=o(1)\|v\|\ \ \text{for all}\ v\in W_{0}^{s,p}(\Omega).\end{split} \tag{4.22}\] Taking \(v=u_{j}\) in (4.22), by (4.21) and the Sobolev embedding \(W^{s,p}_{0}(\Omega)\subset L^{q}(\Omega)\), when \(j\) goes to \(\infty\), \[p_{s}^{*}c+o(1)\|u_{j}\|+o(1)= p_{s}^{*}\widetilde{I}_{\lambda}(u_{j})-\langle \widetilde{I}^{\prime}_{\lambda}(u_{j}),u_{j}\rangle\] \[= \left(\frac{p_{s}^{*}}{p}-1\right)\|u_{j}\|^{p}-\left(\frac{p_{s}^ {*}}{q}-1\right)\lambda|u_{j}^{+}|_{q}^{q}\] \[\geq \left(\frac{p_{s}^{*}}{p}-1\right)\|u_{j}\|^{p}-\left(\frac{p_{s}^ {*}}{q}-1\right)\lambda C\|u_{j}\|^{q}.\] Hence, \(\{u_{j}\}\) is bounded in \(W^{s,p}_{0}(\Omega)\). Therefore there is a renamed subsequence of \(\{u_{j}\}\), which converges weakly to some \(u\in W^{s,p}_{0}(\Omega)\) and \(u_{j}\to u\) a.e. in \(\mathbb{R}^{n}\). Now we will study more precisely the behavior of weakly convergent sequence \(\{u_{j}\}\) in several steps. 
**Step 1.** We claim that \[\lim_{j\to\infty}(\|u_{j}-u\|^{p}-\|u_{j}\|^{p}+\|u\|^{p})=0,\quad\text{and} \quad\liminf_{j\to\infty}(|u_{j}-u|_{p_{s}^{*}}^{p_{s}^{*}}-|u_{j}^{+}|_{p_{s }^{*}}^{p_{s}^{*}}+|u^{+}|_{p_{s}^{*}}^{p_{s}^{*}})\geq 0.\] Consider \[\Theta_{j}(x,y):=\frac{u_{j}(x)-u_{j}(y)}{|x-y|^{\frac{n}{p}+s}}\quad\text{ and }\quad\Theta(x,y):=\frac{u(x)-u(y)}{|x-y|^{\frac{n}{p}+s}}.\] Then \(\{\Theta_{j}\}\) is bounded in \(L^{p}(\mathbb{R}^{2n})\), and \(\Theta_{j}(x,y)\to\Theta(x,y)\) a.e. in \(\mathbb{R}^{2n}\). By Brezis-Lieb's lemma (see [33, Lemma 1.32]), the first claim is done. Moreover, as \(|u_{j}(x)-u(x)|\geq|u_{j}^{+}(x)-u^{+}(x)|\) and \(u_{j}^{+}(x)\to u^{+}(x)\) a.e. in \(\mathbb{R}^{n}\), using again Brezis-Lieb's lemma, \[|u_{j}-u|_{p_{s}^{*}}^{p_{s}^{*}}\geq|u_{j}^{+}-u^{+}|_{p_{s}^{*}}^{p_{s}^{*} }=|u_{j}^{+}|_{p_{s}^{*}}^{p_{s}^{*}}-|u^{+}|_{p_{s}^{*}}^{p_{s}^{*}}+o(1),\] which gives the second claim. **Step 2.** For any \(v\in W^{s,p}_{0}(\Omega)\), \[\lim_{j\to\infty}\int_{\mathbb{R}^{2n}}\frac{J_{u_{j}}(x,y)(v(x)-v(y))}{|x-y| ^{n+sp}}dxdy=\int_{\mathbb{R}^{2n}}\frac{J_{u}(x,y)(v(x)-v(y))}{|x-y|^{n+sp}}dxdy. \tag{4.23}\] Indeed, define \(\varPhi_{j}(x,y):=|x-y|^{-\frac{n+sp}{p^{\prime}}}J_{u_{j}}(x,y)\) and \(\varPhi(x,y):=|x-y|^{-\frac{n+sp}{p^{\prime}}}J_{u}(x,y)\) where \(p^{\prime}=\frac{p}{p-1}\). Then \(\{\varPhi_{j}\}\) is bounded in \(L^{p^{\prime}}(\mathbb{R}^{2n})\), and \(\varPhi_{j}(x,y)\to\varPhi(x,y)\) a.e. in \(\mathbb{R}^{2n}\), so \(\varPhi_{j}\) converges weakly to \(\varPhi\) in \(L^{p^{\prime}}(\mathbb{R}^{2n})\). On the other hand, \(|x-y|^{-\frac{n+sp}{p}}|v(x)-v(y)|\in L^{p}(\mathbb{R}^{2n})\), hence (4.23) holds. **Step 3.** By Step 2, it is easy to see that \(u\) is a critical point of \(\widetilde{I}_{\lambda}\), so \[\|u\|^{p}=\lambda|u^{+}|_{q}^{q}+|u^{+}|_{p_{s}^{*}}^{p_{s}^{*}}. 
\tag{4.24}\] Setting \(\widehat{u}_{j}=u_{j}-u\), Step 1 then implies \[\|\widehat{u}_{j}\|^{p}=\|u_{j}\|^{p}-\|u\|^{p}+o(1)\;\;\text{and}\;\;|\widehat{u}_{j}|_{p_{s}^{*}}^{p_{s}^{*}}\geq|u_{j}^{+}|_{p_{s}^{*}}^{p_{s}^{*}}-|u^{+}|_{p_{s}^{*}}^{p_{s}^{*}}+o(1). \tag{4.25}\] Taking \(v=u_{j}\) in (4.22), since \(\{u_{j}\}\) is bounded in \(W^{s,p}_{0}(\Omega)\) and \(u_{j}\to u\) in \(L^{q}(\Omega)\), we get \[\|u_{j}\|^{p}=\lambda|u^{+}|_{q}^{q}+|u_{j}^{+}|_{p_{s}^{*}}^{p_{s}^{*}}+o(1). \tag{4.26}\] It follows from (4.25), (4.26) and (4.24) that \[\|\widehat{u}_{j}\|^{p}=|u_{j}^{+}|_{p_{s}^{*}}^{p_{s}^{*}}-|u^{+}|_{p_{s}^{*}}^{p_{s}^{*}}+o(1)\leq|\widehat{u}_{j}|_{p_{s}^{*}}^{p_{s}^{*}}+o(1)\leq\frac{\|\widehat{u}_{j}\|^{p_{s}^{*}}}{S_{s,p}^{p_{s}^{*}/p}}+o(1), \tag{4.27}\] \[\|\widehat{u}_{j}\|^{p}(S_{s,p}^{p_{s}^{*}/p}-\|\widehat{u}_{j}\|^{p^{*}_{s}-p})\leq o(1). \tag{4.28}\] By (4.25) and (4.27), there holds \[\widetilde{I}_{\lambda}(u_{j}) =\widetilde{I}_{\lambda}(u)+\frac{1}{p}\|\widehat{u}_{j}\|^{p}-\frac{1}{p^{*}_{s}}|u_{j}^{+}|_{p^{*}_{s}}^{p^{*}_{s}}+\frac{1}{p^{*}_{s}}|u^{+}|_{p^{*}_{s}}^{p^{*}_{s}}+o(1)\] \[=\widetilde{I}_{\lambda}(u)+\frac{1}{p}\|\widehat{u}_{j}\|^{p}-\frac{1}{p^{*}_{s}}\|\widehat{u}_{j}\|^{p}+o(1)\] \[=\widetilde{I}_{\lambda}(u)+\frac{s}{n}\|\widehat{u}_{j}\|^{p}+o(1).\] Hence \[\widetilde{I}_{\lambda}(u)+\frac{s}{n}\underset{j\to\infty}{\mathrm{limsup}}\|\widehat{u}_{j}\|^{p}=c<c_{s,p}.\] As we assume that \(\widetilde{I}_{\lambda}\) has only two critical points \(0\) and \(u_{\lambda}\), it follows that either \(u=0\) or \(u=u_{\lambda}\). By Lemma 3.5, we know that \(\widetilde{I}_{\lambda}(u_{\lambda})<0\). Hence, \[\underset{j\to\infty}{\mathrm{limsup}}\|\widehat{u}_{j}\|^{p}<S_{s,p}^{\frac{n}{sp}}. \tag{4.29}\] Using (4.28) and (4.29), we get \(\|\widehat{u}_{j}\|\to 0\), which means that \(\{u_{j}\}\) has a convergent subsequence.
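The coefficient \(\frac{s}{n}\) appearing in the last computation comes from the identity below, using \(p_{s}^{*}=\frac{np}{n-sp}\):

```latex
\frac{1}{p}-\frac{1}{p_{s}^{*}}
  =\frac{1}{p}-\frac{n-sp}{np}
  =\frac{n-(n-sp)}{np}
  =\frac{sp}{np}
  =\frac{s}{n}.
```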
To get the main result, we will apply Ghoussoub-Preiss' generalized mountain pass theorem [18, Theorem (1)]. **Theorem 4.5**.: _Let \(X\) be a Banach space, and \(\varphi\) be a \(C^{1}\) functional on \(X\). Take \(u,v\in X\) and consider_ \[c=\inf_{\gamma\in\Gamma}\Bigl[\max_{0\leq t\leq 1}\varphi(\gamma(t))\Bigr]\] _where \(\Gamma=\{\gamma\in C([0,1],X):\gamma(0)=u,\gamma(1)=v\}\). Assume that \(F\) is a closed subset of \(X\) such that for any \(\gamma\in\Gamma\), one has \(\gamma([0,1])\cap\{x\in F:\varphi(x)\geq c\}\neq\varnothing\). Then there exists a sequence \(\{x_{j}\}\subset X\) satisfying_ 1. \(\lim_{j\to\infty}\text{dist}(x_{j},F)=0\)_;_ 2. \(\lim_{j\to\infty}\varphi(x_{j})=c\)_;_ 3. \(\lim_{j\to\infty}\|\varphi^{\prime}(x_{j})\|_{X^{*}}=0\)_._ **Lemma 4.6**.: _Let \(s\in(0,1)\), \(p\geq 2\), \(p-1<q<p\) and \(n>\frac{sp(q+1)}{q+1-p}\). Let \(\Lambda\) be given in (3.1), and \(m_{\varepsilon,\delta}\) be the mountain pass level defined in (1.12), where \(\delta\), \(\varepsilon\) are given by Lemma 4.3. For \(\lambda\in(0,\Lambda)\), if \(m_{\varepsilon,\delta}\neq 0\), then problem \((P_{\lambda})\) has at least two positive solutions._ Proof.: We assume by contradiction that there are only two critical points \(0\) and \(u_{\lambda}\) of \(\widetilde{I}_{\lambda}\). It follows from Proposition 4.4 that \(\widetilde{I}_{\lambda}\) satisfies the \((PS)_{c}\) condition for \(c<c_{s,p}\). Let \(\rho\) be given in (4.1). If there exists \(0<\rho_{0}<\rho\) such that \(\inf_{u\in\partial B_{\rho_{0}}(u_{\lambda})}\widetilde{I}_{\lambda}(u)>\widetilde{I}_{\lambda}(u_{\lambda})\), then \(m_{\varepsilon,\delta}>\widetilde{I}_{\lambda}(u_{\lambda})\). As Lemmas 4.1 and 4.3 showed, \(m_{\varepsilon,\delta}<c_{s,p}\). Using the mountain pass theorem in [3], we obtain a mountain pass critical point \(w_{\lambda}\) of \(\widetilde{I}_{\lambda}\).
As \((-\Delta)^{s}_{p}w_{\lambda}=\lambda(w_{\lambda}^{+})^{q-1}+(w_{\lambda}^{+})^{p^{*}_{s}-1}\), by Lemma 2.5, \(w_{\lambda}\) is also a nonnegative solution to \((P_{\lambda})\). Since \(m_{\varepsilon,\delta}\neq 0\) and \(m_{\varepsilon,\delta}>\widetilde{I}_{\lambda}(u_{\lambda})\), we have \(w_{\lambda}\notin\{0,u_{\lambda}\}\). So problem \((P_{\lambda})\) has at least two positive solutions \(u_{\lambda}\) and \(w_{\lambda}\). If \(m_{\varepsilon,\delta}=\widetilde{I}_{\lambda}(u_{\lambda})\), then for any \(0<\rho_{0}<\rho\), we have \(\inf_{u\in\partial B_{\rho_{0}}(u_{\lambda})}\widetilde{I}_{\lambda}(u)=\widetilde{I}_{\lambda}(u_{\lambda})\). Applying Theorem 4.5 with \[c=m_{\varepsilon,\delta},\ \ X=W_{0}^{s,p}(\Omega),\ \ F=\partial B_{\rho_{0}}(u_{\lambda}),\ \ \varphi(x)=\widetilde{I}_{\lambda}(x),\ \ u=u_{\lambda},\ \ v=e,\] we still obtain another critical point \(w_{\lambda}\in\partial B_{\rho_{0}}(u_{\lambda})\) of \(\widetilde{I}_{\lambda}\). **Remark 4.7**.: _From the above lemma, \((P_{\lambda})\) has two positive solutions for \(0<\lambda<\Lambda\) whenever \(m_{\varepsilon,\delta}\neq 0\). However, we cannot rule out the case that the mountain pass critical point is trivial when \(m_{\varepsilon,\delta}=0\). The following lemma tells us that \(m_{\varepsilon,\delta}>0\) if \(\lambda\) is sufficiently small, which means that the trivial critical point will not occur._ **Lemma 4.8**.: _Let \(m_{\varepsilon,\delta}\) be the mountain pass level defined in (1.12).
There exists \(\lambda^{*}>0\) such that \(m_{\varepsilon,\delta}>0\) for \(\lambda\in(0,\lambda^{*})\)._ Proof.: By the Sobolev embedding, for any \(r\in[1,p_{s}^{*}]\), there is \(C>0\) such that \[\widetilde{I}_{\lambda}(u)\geq\frac{1}{p}\|u\|^{p}-\frac{C\lambda}{q}\|u\|^{q}-\frac{C}{p_{s}^{*}}\|u\|^{p_{s}^{*}},\quad\forall\;u\in W_{0}^{s,p}(\Omega).\] Therefore, there exists \(\lambda^{*}>0\) such that if \(\lambda\in(0,\lambda^{*})\), there are \(\rho_{0}>0\) and \(c>0\) satisfying \[\widetilde{I}_{\lambda}(u)\geq c,\quad\text{for all }\|u\|=\rho_{0}. \tag{4.30}\] We claim that if \(\lambda\in(0,\lambda^{*})\), then \(\|u_{\lambda}\|<\rho_{0}\). Indeed, let \(g_{\lambda}(t)=\widetilde{I}_{\lambda}(tu_{\lambda})\), then \[g_{\lambda}^{\prime}(t)=t^{p-1}\|u_{\lambda}\|^{p}-\lambda t^{q-1}|u_{\lambda}|_{q}^{q}-t^{p_{s}^{*}-1}|u_{\lambda}|_{p_{s}^{*}}^{p_{s}^{*}}=t^{q-1}f_{\lambda}(t)\] where \[f_{\lambda}(t):=t^{p-q}\|u_{\lambda}\|^{p}-\lambda|u_{\lambda}|_{q}^{q}-t^{p_{s}^{*}-q}|u_{\lambda}|_{p_{s}^{*}}^{p_{s}^{*}}.\] Clearly, \(f_{\lambda}\) has a unique maximal point \(t_{\max}>0\) such that \(f_{\lambda}\) is increasing on the interval \([0,t_{\max}]\) and decreasing on \([t_{\max},\infty)\). Moreover, we can conclude \(f_{\lambda}(t_{\max})>0\); otherwise \(g_{\lambda}\) is nonincreasing, hence \(g_{\lambda}(t)\leq 0\) for all \(t\geq 0\), which contradicts (4.30). Therefore, there are only two positive critical points \(t_{1}\), \(t_{2}\) of \(g_{\lambda}\) satisfying \(0<t_{1}<t_{\max}<t_{2}<\infty\), and \(g_{\lambda}\) is decreasing on the intervals \((0,t_{1})\) and \((t_{2},\infty)\), and increasing on the interval \((t_{1},t_{2})\). Since \(u_{\lambda}\) is a local minimizer of \(\widetilde{I}_{\lambda}\), we have \(t_{1}=1\). This implies \(g_{\lambda}(t)<0\) for \(t\in(0,1]\) as \(\widetilde{I}_{\lambda}(u_{\lambda})<0\). Hence \(\|u_{\lambda}\|<\rho_{0}\).
If we set \(t_{0}\) in (1.10) large such that \(\|e\|>\rho_{0}\), then for any \(\gamma\in\Gamma_{\varepsilon,\delta}\), we have \(\gamma([0,1])\cap\partial B_{\rho_{0}}(0)\neq\varnothing\). As a consequence, \(m_{\varepsilon,\delta}\geq c>0\). Thanks to Lemmas 4.6, 4.8, we complete the proof of Theorem 1.3. ## 5 Proof of Theorem 1.5 In this section, we will prove the existence of infinitely many solutions of \((P_{\lambda})\) for \(\lambda>0\) small. Consider \[\mathcal{F}=\{A\subset W_{0}^{s,p}(\Omega)\backslash\{0\}:u\in A\Rightarrow-u \in A\},\] and \[\mathcal{A}_{j,\,r}=\{A\in\mathcal{F}:\text{$A$ is compact},A\subset B_{r}(0), \,\text{ind}(A)\geq j\}.\] Here \(\text{ind}(A)\) denotes the \(\mathbb{Z}_{2}\)-genus of \(A\), namely, the least integer \(k\) such that there exists odd functional \(\phi\in C(W_{0}^{s,p}(\Omega),\mathbb{R}^{k})\) satisfying \(\phi(u)\neq 0\) for all \(u\in A\). By [3], the \(\mathbb{Z}_{2}\)-genus possesses the following properties: 1. Definiteness: \(\text{ind}(A)=0\) if and only if \(A=\varnothing\); 2. Monotonicity: If there is an odd continuous map from \(A\) to \(B\) (in particular, if \(A\subset B\)) for \(A,B\in\mathcal{F}\), then \(\operatorname{ind}(A)\leq\operatorname{ind}(B)\); 3. Subadditivity: If \(A\) and \(B\) are closed set in \(\mathcal{F}\), then \(\operatorname{ind}(A\cup B)\leq\operatorname{ind}(A)+\operatorname{ind}(B)\). Define \[b_{j}:=\inf_{A\in\mathcal{A}_{j,r}}\max_{u\in A}I_{\lambda}(u). \tag{5.1}\] We are going to prove that \(b_{j}\) is finite, a critical value of \(I_{\lambda}\), and \(b_{j}\to 0^{-}\) as \(j\to\infty\). **Lemma 5.1**.: _There exists \(\lambda^{**}>0\) such that for all \(\lambda\in(0,\lambda^{**}]\) there are \(r,a>0\) such that_ 1. \(I_{\lambda}(u)\geq a\) _for all_ \(\|u\|=r\)_;_ 2. \(I_{\lambda}\) _is bounded from below in_ \(B_{r}(0)\subset W_{0}^{s,p}(\Omega)\)_;_ 3. 
\(I_{\lambda}\) _satisfies the (PS) condition in_ \(B_{r}(0)\)_._ Proof.: Assume that \(\{u_{j}\}\subset B_{r}(0)\) is a (PS) sequence, that is, \(\{I_{\lambda}(u_{j})\}\) is bounded in \(\mathbb{R}\) and \(I_{\lambda}^{\prime}(u_{j})\to 0\) in \(W_{0}^{s,p}(\Omega)^{*}\). Since \(\{u_{j}\}\) is bounded in \(W_{0}^{s,p}(\Omega)\), there is a subsequence, denoted still by \(\{u_{j}\}\), which converges weakly to some \(u\in\overline{B}_{r}(0)\) and \(u_{j}(x)\to u(x)\) a.e. in \(\mathbb{R}^{n}\). Arguing as in Proposition 4.4, we have \[\|u_{j}-u\|^{p}=|u_{j}-u|_{p_{s}^{*}}^{p_{s}^{*}}+o(1)\leq\frac{\|u_{j}-u\|^{p_{s}^{*}}}{S_{s,p}^{p_{s}^{*}/p}}+o(1),\] which implies that \[\text{either}\quad S_{s,p}^{\frac{n}{sp^{2}}}\leq\liminf_{j\to\infty}\|u_{j}-u\|\quad\text{or}\quad\lim_{j\to\infty}\|u_{j}-u\|=0.\] Therefore if we fix \(0<2r<S_{s,p}^{\frac{n}{sp^{2}}}\), then \(u_{j}\) must converge to \(u\) in \(W_{0}^{s,p}(\Omega)\). By the Sobolev embedding \(W_{0}^{s,p}(\Omega)\subset L^{m}(\Omega)\) for \(m\in[1,p_{s}^{*}]\), there exists \(C>0\) such that for any \(u\in W_{0}^{s,p}(\Omega)\), \[I_{\lambda}(u)\geq\frac{1}{p}\|u\|^{p}-\frac{C\lambda}{q}\|u\|^{q}-\frac{C}{p_{s}^{*}}\|u\|^{p_{s}^{*}},\] which concludes that there exists \(\lambda^{**}>0\) such that if \(\lambda\in(0,\lambda^{**}]\), there are \(\frac{1}{2}S_{s,p}^{\frac{n}{sp^{2}}}>r>0\) and \(a>0\) such that \(I_{\lambda}(u)\geq a\) for \(\|u\|=r\). Obviously, \(I_{\lambda}\) is bounded from below in \(B_{r}(0)\). By Lemma 5.1, the deformation lemma will hold for \(I_{\lambda}\) restricted to \(B_{r}(0)\) for some \(r>0\).
If \(b_{j}\in(-\infty,0)\) is not a critical value of \(I_{\lambda}\), it follows from [33, Lemma 3.1] that there exist \(\varepsilon\in(0,-b_{j})\) and a homotopy mapping, odd in \(u\), \[\eta:[0,1]\times I_{\lambda}^{b_{j}+\varepsilon}\to I_{\lambda}^{b_{j}+\varepsilon}\] such that \(\eta(0,\cdot)\) is the identity map of \(I_{\lambda}^{b_{j}+\varepsilon}\) and \(\eta(1,I_{\lambda}^{b_{j}+\varepsilon})\subset I_{\lambda}^{b_{j}-\varepsilon}\). Here for any \(c\in\mathbb{R}\), \(I_{\lambda}^{c}\) means the sublevel set \(\{u\in B_{r}(0):I_{\lambda}(u)\leq c\}\). According to the definition of \(b_{j}\), there exists \(A\in\mathcal{A}_{j,r}\) such that \(A\subset I_{\lambda}^{b_{j}+\varepsilon}\). By the monotonicity of the \(\mathbb{Z}_{2}\)-genus, we have \[\operatorname{ind}(\eta(1,A))\geq\operatorname{ind}(A)\geq j.\] However, \(\eta(1,A)\subset I_{\lambda}^{b_{j}-\varepsilon}\), which contradicts the definition of \(b_{j}\). Hence the hypothesis was wrong and \(b_{j}\) is a critical value of \(I_{\lambda}\). Next, we aim to prove \(b_{j}\to 0\) as \(j\to\infty\). For that, we will consider suitable subsets of \(\mathcal{A}_{j,r}\). Since \(W_{0}^{s,p}(\Omega)\) is reflexive and separable, by [10, Corollary 3.27], \(W_{0}^{s,p}(\Omega)^{*}\) is also reflexive and separable. Therefore, there exists a sequence \(\{f_{j}\}\subset W_{0}^{s,p}(\Omega)^{*}\) such that \(\{f_{j}\}\) are linearly independent and \(\operatorname{span}\{f_{j},j\geq 1\}\) is dense in \(W_{0}^{s,p}(\Omega)^{*}\). Denote \(F_{j}=\operatorname{span}\{f_{k},1\leq k\leq j\}\) for any \(j\geq 1\). Let \(g_{1}=f_{1}\) and let \(v_{1}\in W_{0}^{s,p}(\Omega)\) satisfy \(g_{1}(v_{1})=1\). Clearly, there is \(g_{2}\in F_{2}\backslash F_{1}\) such that \(g_{2}(v_{1})=0\), and there exists \(v_{2}\in W_{0}^{s,p}(\Omega)\) satisfying \(g_{2}(v_{2})=1\).
By induction, we get two sequences \(g_{j}\in F_{j}\backslash F_{j-1}\), \(v_{j}\in W_{0}^{s,p}(\Omega)\) such that \[\forall\;j\geq 2,\quad g_{j}(v_{k})=0\;\;\text{for $1\leq k\leq j-1$}\;\;\text{and}\;\;g_{j}(v_{j})=1.\] Define \[E_{j}=\operatorname{span}\{v_{k},1\leq k\leq j\},\;\;E_{j}^{\perp}=\underset{1\leq k\leq j}{\cap}\operatorname{Ker}(g_{k})=\underset{f\in F_{j}}{\cap}\operatorname{Ker}(f),\quad\forall\;j\geq 1.\] Clearly, \(\{v_{j}\}\) are linearly independent, hence \(\dim(E_{j})=j\) for all \(j\geq 1\). It is not difficult to see that for any \(j\geq 1\), there holds \(W_{0}^{s,p}(\Omega)=E_{j}\oplus E_{j}^{\perp}\). In other words, for any \(w\in W_{0}^{s,p}(\Omega)\), there is a unique \(w_{j}\in E_{j}\) such that \(g_{k}(w_{j})=g_{k}(w)\) for all \(1\leq k\leq j\), so \(w_{j}^{\perp}=w-w_{j}\in E_{j}^{\perp}\). We denote the projection from \(W_{0}^{s,p}(\Omega)\) onto \(E_{j}\) parallel to \(E_{j}^{\perp}\) by \(P_{j}\), i.e. \(P_{j}(w)=w_{j}\). An easy but very useful observation is **Lemma 5.2**.: _Let \(\{u_{j}\}\subset W_{0}^{s,p}(\Omega)\) be a bounded sequence such that \(u_{j}\in E_{j}^{\perp}\) for any \(j\), then \(u_{j}\to 0\) weakly in \(W_{0}^{s,p}(\Omega)\)._ Proof.: Let \(f\in\cup_{j}F_{j}\). From the definition of \(E_{j}^{\perp}\), \(f(u_{j})=0\) for \(j\) large enough, so \(\lim_{j\to\infty}f(u_{j})=0\). The same conclusion holds for any \(f\in W_{0}^{s,p}(\Omega)^{*}\), since \(\cup_{j}F_{j}\) is dense in \(W_{0}^{s,p}(\Omega)^{*}\) and \(\{u_{j}\}\) is bounded. Proof of Theorem 1.5.: Let \(\lambda^{**}\), \(r\) be as in Lemma 5.1 and \(\lambda\in(0,\lambda^{**})\). Let \(A_{j}=E_{j}\cap\partial B_{1}(0)\); clearly \(A_{j}\in\mathcal{A}_{j,r}\) since \(\operatorname{ind}(A_{j})=j\) and \(A_{j}\) is compact. Moreover, as \(\dim(E_{j})<\infty\) there is \(C_{j}>0\) such that \(\|u\|\leq C_{j}|u|_{q}\) for any \(u\in E_{j}\).
Consequently, for any \(u\in A_{j}\) and any \(t>0\), \[I_{\lambda}(tu)\leq\frac{t^{p}}{p}\|u\|^{p}-\frac{\lambda t^{q}}{q}|u|_{q}^{q} \leq\frac{t^{p}}{p}\|u\|^{p}-\frac{C\lambda t^{q}}{q}\|u\|^{q}.\] There exists then \(\varepsilon\in(0,r)\) satisfying \[\varepsilon A_{j}\subset B_{r}(0)\quad\text{and}\quad\max_{u\in\varepsilon A _{j}}I_{\lambda}(u)<0.\] It means that \(b_{j}\) is finite and negative. It follows that there exists a critical point \(u_{j}\in B_{r}(0)\) of \(I_{\lambda}\) with \(I_{\lambda}(u_{j})=b_{j}<0\). Let \[\widetilde{b}_{j}=\inf_{A\in\mathcal{A}_{j,r}}\sup_{u\in A\cap E_{j-1}^{ \perp}}I_{\lambda}(u). \tag{5.2}\] Note that \(\widetilde{b}_{j}\) is well defined because \(A\cap E_{j-1}^{\perp}\neq\varnothing\) for any \(A\in\mathcal{A}_{j,r}\). In fact, if \(P_{j-1}u\neq 0\) for all \(u\in A\), it follows from the property of \(\mathbb{Z}_{2}\)-genus that \(\operatorname{ind}(A)\leq\operatorname{ind}(P_{j-1}(A))\leq j-1\), which is a contradiction. Obviously \(\widetilde{b}_{j}\leq b_{j}\). Now we claim \(b_{j}\to 0^{-}\) as \(j\to\infty\). Suppose the contrary: \(b_{j}\leq\alpha<0\) for all \(j\in\mathbb{N}^{+}\). By the definition of \(\widetilde{b}_{j}\), there is a sequence \(\{u_{j}\}\) such that \[u_{j}\in E_{j-1}^{\perp}\cap B_{r}(0)\text{ and }|I_{\lambda}(u_{j})- \widetilde{b}_{j}|<\frac{1}{j}.\] By Lemma 5.2, \(u_{j}\to 0\) weakly in \(W_{0}^{s,p}(\Omega)\), hence \(u_{j}\to 0\) in \(L^{m}(\Omega)\) for all \(m\in[1,p_{s}^{*})\). As \(\widetilde{b}_{j}\leq\alpha\), we have \[\limsup_{j\to\infty}I_{\lambda}(u_{j})\leq\alpha<0. 
\tag{5.3}\] On the other hand, \[I_{\lambda}(u_{j})=\frac{1}{p}\|u_{j}\|^{p}-\frac{1}{p_{s}^{*}}|u_{j}|_{p_{s}^{*}}^{p_{s}^{*}}+o(1) \geq\frac{1}{p}\|u_{j}\|^{p}-\frac{S_{s,p}^{-p_{s}^{*}/p}}{p_{s}^{*}}\|u_{j}\|^{p_{s}^{*}}+o(1)\] \[=\|u_{j}\|^{p}\left(\frac{1}{p}-\frac{S_{s,p}^{-p_{s}^{*}/p}}{p_{s}^{*}}\|u_{j}\|^{p_{s}^{*}-p}\right)+o(1).\] By \(\frac{1}{2}S_{s,p}^{\frac{n}{sp^{2}}}>r>0\), there holds \[\frac{1}{p}-\frac{S_{s,p}^{-p_{s}^{*}/p}}{p_{s}^{*}}\|u_{j}\|^{p_{s}^{*}-p}\geq\frac{1}{p}-\frac{S_{s,p}^{-p_{s}^{*}/p}}{p_{s}^{*}}r^{p_{s}^{*}-p}\geq 0.\] Hence \(I_{\lambda}(u_{j})\geq o(1)\), which contradicts (5.3). This implies \(b_{j}\to 0^{-}\) and problem \((P_{\lambda})\) has infinitely many solutions.
2305.19383
Quantum Natural Language Processing based Sentiment Analysis using lambeq Toolkit
Sentiment classification is one the best use case of classical natural language processing (NLP) where we can witness its power in various daily life domains such as banking, business and marketing industry. We already know how classical AI and machine learning can change and improve technology. Quantum natural language processing (QNLP) is a young and gradually emerging technology which has the potential to provide quantum advantage for NLP tasks. In this paper we show the first application of QNLP for sentiment analysis and achieve perfect test set accuracy for three different kinds of simulations and a decent accuracy for experiments ran on a noisy quantum device. We utilize the lambeq QNLP toolkit and $t|ket>$ by Cambridge Quantum (Quantinuum) to bring out the results.
Srinjoy Ganguly, Sai Nandan Morapakula, Luis Miguel Pozo Coronado
2023-05-30T19:54:02Z
http://arxiv.org/abs/2305.19383v1
# Quantum Natural Language Processing based Sentiment Analysis using lambeq Toolkit ###### Abstract Sentiment classification is one of the best use cases of classical natural language processing (NLP), where we can witness its power in various daily-life domains such as the banking, business and marketing industries. We already know how classical AI and machine learning can change and improve technology. Quantum natural language processing (QNLP) is a young and gradually emerging technology which has the potential to provide quantum advantage for NLP tasks. In this paper we show the first application of QNLP for sentiment analysis and achieve perfect test-set accuracy for three different kinds of simulations and a decent accuracy for experiments run on a noisy quantum device. We utilize the lambeq QNLP toolkit and \(t|ket>\) by Cambridge Quantum (Quantinuum) to bring out the results. Quantum Computing, Quantum Natural Language Processing, lambeq ## I Introduction In terms of computational speed and performance, quantum computers can be exponentially faster than present-generation classical computers for certain tasks. Until the end of the \(20^{th}\) century, the quantum computer remained a theoretical vision developed by great mathematicians and physicists such as Richard Feynman, Erwin Schrodinger and David Deutsch. In the early \(21^{st}\) century, quantum computing gained importance and its theory became well developed. Recently, the first quantum computers have been built, some of them have been made publicly available, and they have already shown their power in various domains such as machine learning, chemistry, natural language processing and biomedicine. In this paper we will see how quantum computers can help us improve the domain of natural language processing.
Quantum computing is itself a nascent field, and so is Quantum Natural Language Processing (QNLP): we take the phenomena of superposition, entanglement and interference to our advantage and run NLP models or language-related tasks on quantum hardware. We are currently in the era of Noisy Intermediate-Scale Quantum (NISQ) [1] computers, where the error rate grows with the number of qubits and the information a qubit carries is easily lost, which is why quantum computers are kept at very low temperatures and require careful maintenance. QNLP is different from classical NLP. QNLP has its origins in abstract mathematical theory, including category theory - especially monoidal categories - diagrammatic quantum theory and the ZX calculus. To gain more understanding of diagrammatic quantum theory, the reader can refer to [2], which explains the fundamentals of diagrammatic reasoning for quantum theory and is at the core of QNLP. Since a model of natural language turns out to be equivalent to a model describing quantum mechanical phenomena, this approach makes QNLP quantum-native. In this way linguistic structure can be encoded easily, whereas encoding grammar classically is very costly. In this paper we are going to see how accurately quantum computers can predict sentiments after being trained on around 130 sentences. We will also compare the results given by classical computers and by quantum computers with embedded noise, to better understand why we need quantum computers and how powerful they are. 
The rest of the paper is ordered as follows: Section 2 introduces the related work and ongoing research in this field; Section 3 gives a clear picture and brief intuition on QNLP and also explains the sentiment-classification experiment; in Section 4 we discuss the results on classical, quantum and noisy-quantum devices; Section 5 summarises the work and also proposes some future lines of work in the domain of QNLP. ## II Related Work As researchers, scientists and enthusiasts identified the capability of quantum devices and the power they hold, more effort and time is being put into this domain. Despite QNLP being a new and emerging field, Noisy Intermediate-Scale Quantum (NISQ) devices have already led to some promising results [3] and have also been applied to fields as diverse as quantum music [4]. As we all know, applications of classical NLP are already part of our day-to-day lives; voice assistants such as Siri and Alexa are the best examples. The constraint with classical NLP is that it can only read and decipher bits but cannot deeply understand the meaning of the language, and that is where quantum has scope to do this in a meaning-aware manner. In QNLP, sentences are depicted by variational quantum circuits, and every word in a sentence is transformed into a quantum state by using parameterized quantum gates. Thanks to the variational-circuit technique, QNLP becomes NISQ-friendly [5]. Scientists and researchers at Cambridge Quantum developed the first high-level Python framework for QNLP, named lambeq. This unique toolkit is an open-source package which offers the functionality of converting sentences into quantum circuits [6]. 
The first effort to design and execute natural language models on a real quantum computer was accomplished by Cambridge Quantum, where they used an intermediate dataset containing sentences of two different classes - Food or IT - and the results obtained were profound, as given in [7]. In this work, we present the first application of QNLP for sentiment analysis on an intermediate-level dataset, where we achieve successful results in a binary classification of sentiments - positive and negative sentences. We demonstrate that for both classical and quantum simulations we achieve proper convergence. ## III Methodology and Experiments We have utilized the Distributional Compositional Categorical (DisCoCat) [8] framework for our task of sentiment classification. The DisCoCat framework provides a unique way of combining the constituent words to form the meaning of a whole sentence. It follows a compositional mechanism for the grammar to entangle word meanings which are distributed in a vector space. Lambek's pregroup grammar is used in DisCoCat to retrieve the meaning of a sentence from the meanings of the words. ### _Compositional Model of Meaning_ The compositional model of meaning is inspired by Lambek's pregroup grammar. In this formalism, we assign atomic types 'n' for noun and 's' for sentence to the words present in a sentence, which assists in composing the word meanings together. The grammatical rules to compose different types of sentences can be found in [9], and the string diagrams shown here follow those rules given by Lambek. Fig. 1 shows the string diagram for the sentence "siva hates thrilling comics", where the words are represented by boxes (or, alternatively, triangle-shaped boxes) and the wires (cups and straight wires) represent the entangling effect which composes the words together to give the meaning of the sentence, i.e. the grammar. 
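The pregroup composition just described can be checked mechanically. Below is a minimal pure-Python sketch (independent of lambeq; the tuple encoding of types and the function name are our own illustrative choices): a type is a pair of a base symbol and an adjoint order, and the adjacent adjoint pairs \(x\cdot x^{r}\) and \(x^{l}\cdot x\) cancel to 1.

```python
# Minimal sketch (not lambeq): pregroup types as (base, adjoint_order)
# pairs, e.g. n = ("n", 0), n^r = ("n", 1), n^l = ("n", -1).
# Adjacent pairs x . x^r and x^l . x cancel to 1, i.e. any adjacent
# (b, a), (b, a + 1) pair is removed.

def reduce_types(types):
    """Greedily cancel adjacent adjoint pairs until none remain."""
    types = list(types)
    changed = True
    while changed:
        changed = False
        for i in range(len(types) - 1):
            (b1, a1), (b2, a2) = types[i], types[i + 1]
            if b1 == b2 and a2 == a1 + 1:
                del types[i:i + 2]
                changed = True
                break
    return types

# "siva hates thrilling comics":
# siva: n,  hates: n^r s n^l,  thrilling: n n^l,  comics: n
sentence = [("n", 0),
            ("n", 1), ("s", 0), ("n", -1),
            ("n", 0), ("n", -1),
            ("n", 0)]
print(reduce_types(sentence))  # [('s', 0)]: reduces to 's', grammatical
```

This greedy cancellation suffices for the simple sentences considered here; full pregroup parsing of arbitrary sentences is more involved.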
The juxtaposition of the atomic types reduces to 's', which signifies that the sentence is grammatically correct. This reduction is shown in (1). \[n\cdot n^{r}\cdot s\cdot n^{l}\cdot n\cdot n^{l}\cdot n\to 1\cdot s\cdot 1 \cdot 1\to s \tag{1}\] Due to the limitations of today's NISQ devices, i.e. the small number of qubits and qubit decoherence, and since the string diagrams themselves are resource-intensive, they need to be rewritten into a more NISQ-friendly version which consumes fewer qubits to represent the sentences. Since cups consume twice the number of qubits assigned to either 'n' or 's', removing the cups from the original string diagrams results in a diagram which consumes fewer resources and is better adapted to today's NISQ hardware. From Fig. 2 it can be seen that the diagram has been reduced in size. Fig. 1: String diagram of ”siva hates thrilling comics”. Fig. 2: Rewritten string diagram of ”siva hates thrilling comics”. ### _Language into Quantum Circuits_ After we have designed the string diagrams for the language, we have to transform them into quantum circuits in order to run them on simulators and quantum hardware. The compositional model of meaning described before follows a bottom-up approach, i.e. composing words to form the meaning of a sentence. On the contrary, language in the form of quantum circuits follows a top-down approach, i.e. inferring the meaning of words from the meaning of the sentence. This top-down approach is valid because, for training quantum circuits, a dataset of sentences with labels is provided, and from that the meaning of the words is inferred. Fig. 3 shows the quantum circuit of the string diagram in Fig. 1. The upper triangles (facing upwards) are called "states" or \(|states>\), and the lower triangles (facing downwards) are known as "effects" or \(<effects|\). It can be seen that there are 7 states, i.e. 7 qubits, and 6 effects, i.e. 6 qubits for post-selected measurement. 
This means that 6 out of the 7 qubits need to be used for post-selected measurement. Fig. 3 shows that Hadamard gates and CNOT gates are used to create the entangling effects, i.e. the cups present in the string diagram. The nouns "siva" and "comics" have been converted into circuit form using the parametrized quantum gates \(Rz(\alpha)\) and \(Rx(\alpha)\). The words "hates" and "thrilling" have been denoted by parameterized controlled-Rz gates. This quantum circuit is an Instantaneous Quantum Polynomial (IQP) circuit [10], which consists of fixed Hadamard gates, parametrized single-qubit gates and controlled two-qubit quantum gates. Since the IQP circuit consists of parametrized gates which can modify their output based on input parameters, it is an example of a variational quantum circuit. According to the original string-diagram quantum circuit, we require 7-qubit quantum hardware to run the sentence on a quantum computer. Fig. 4 displays the quantum circuit of the diagram in Fig. 2, from which it can be seen that there are 4 qubits (states) and 3 qubits for post-selected measurement. This is a great reduction in qubits: instead of post-selecting 6 of 7 qubits, we post-select 3 of 4 after removing the cups through the rewriting technique. Therefore the rewritten circuit can be used on NISQ devices. ### _Experimental Details_ For conducting sentiment analysis on a quantum computer we have used a binary sentiment-classification dataset which contains positive and negative sentiments of candidates on reading various book genres such as fiction, nonfiction, comics and classics. A label of 0 is assigned to positive sentiments and a label of 1 is assigned to negative sentiments. The dataset consists of 130 sentences, out of which 70 are in the training set, 30 in the development set and 30 in the test set. There are 7 nouns, 3 adjectives and 5 verbs in total for the sentences present in the dataset. We have employed lambeq [6], the world's first QNLP toolkit, for carrying out our experiments. 
Fig. 3: Circuit diagram of ”siva hates thrilling comics”. Fig. 4: Rewritten circuit diagram of ”siva hates thrilling comics”. This toolkit provides a convenient way of converting string diagrams into quantum circuits and then using those circuits, one per sentence, for training on a quantum computer. The toolkit itself is based on the Python programming language and offers unique features: high-level; open source, with code available on GitHub; modular, providing independent modules for greater flexibility; extensive, with an object-oriented design; and interoperable, communicating simply with other packages. The lambeq pipeline shown in Fig. 5 is the general process for QNLP training. A sentence is first parsed and then converted into a string diagram. Here lambeq uses the state-of-the-art DepCCGParser given in [11] to parse the sentences into CCG format and then converts them to string diagrams. The process of converting CCG to string diagrams and vice-versa has been explained in [12] by considering CCG as a biclosed category. After the sentence is converted into a string diagram, it is converted into a quantum circuit based on one of the ansätze present in lambeq, such as SpiderAnsatz, TensorAnsatz and IQPAnsatz. For each sentence present in the dataset a circuit is formed, and the circuits for all the sentences in the dataset are stored in a list. Based on the optimization scheme chosen, these circuits are sent to the simulator or quantum hardware for training. The training process in QNLP is very similar to that of a classical machine-learning method. The circuits are run one by one, and the measurements collected from each circuit are turned into prediction labels using classical post-processing. These prediction labels are compared with the true labels using a suitable cost function, and the output of the cost function is fed into a classical optimizer which calculates the new parameters of the quantum gates. 
These modified parameters are fed back into the variational circuit, and the process repeats until convergence. ## IV Results and Discussions We apply QNLP using the lambeq toolkit to our sentiment-analysis dataset, and we cover four simulation types: a classical pipeline, in which the sentences in our dataset are modeled as tensor networks; a quantum pipeline simulation without noise; a quantum pipeline simulation using JAX, a powerful scientific-computing library used for automatic differentiation; and a quantum pipeline simulation with noise, using IBM Qiskit's fake hardware simulator. We have used FakeVigo as the fake hardware backend for this experiment; it can be changed according to one's requirements. ### _Classical Simulation_ In all four experiments, we first convert each sentence present in our dataset into a string diagram using the DepCCGParser. Once the sentences are converted into string diagrams, we apply the Spider ansatz, by which the noun and sentence spaces each receive a dimension of 2. The PyTorch backend is used for the training, with the Adam optimizer. The plots for the accuracy obtained on the training and development sets are shown in Fig. 6. We have obtained perfect accuracy on the test set for this case. ### _Noiseless Quantum Simulation_ This experiment is not very different from the classical one. We use the same configuration as in the classical pipeline, but we change the backend from PyTorch to IBM Qiskit's Aer simulator, which is accessible through pytket, \(t|ket>\)'s Python interface. We use a gradient-approximation technique called Simultaneous Perturbation Stochastic Approximation (SPSA) [13]. The reason to choose this optimizer is that SPSA does not calculate the gradient of a quantum circuit but rather approximates it. Evaluating gradients on quantum hardware by differentiating quantum circuits is very resource-intensive, and this is where SPSA comes to the rescue. 
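To make the optimizer concrete, here is a minimal sketch of SPSA on a toy quadratic loss (the loss function, gain schedules and hyperparameters are illustrative assumptions, not lambeq's or \(t|ket>\)'s defaults). The key point is that a single random Rademacher perturbation yields an estimate of the full gradient from only two loss evaluations, independent of the number of parameters.

```python
import numpy as np

# Illustrative SPSA sketch on a toy loss; in the real pipeline the loss
# comes from running the parameterized circuits on a backend.

def spsa_minimize(loss, theta, n_iter=200, a=0.2, c=0.2, seed=0):
    rng = np.random.default_rng(seed)
    for k in range(1, n_iter + 1):
        ak = a / k ** 0.602                    # standard SPSA gain decay
        ck = c / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher
        # Two loss evaluations approximate the whole gradient, no matter
        # how many parameters there are (1/delta == delta for +-1 entries).
        g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck) * delta
        theta = theta - ak * g_hat
    return theta

toy_loss = lambda t: float(np.sum((t - 1.0) ** 2))  # stand-in for circuit loss
theta = spsa_minimize(toy_loss, np.zeros(4))
print(np.round(theta, 2))  # converges close to the minimizer [1, 1, 1, 1]
```

A finite-difference gradient would instead require two evaluations per parameter, which is exactly what makes SPSA attractive when each evaluation means running circuits on hardware.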
Even though there is a lot of instability during the early stages of training, as shown in Fig. 7, the model eventually converges to good accuracy. The test-set accuracy using the quantum pipeline is also perfect, but the performance varies depending on the number of iterations. ### _Quantum Simulation with JAX_ The string diagrams are changed into variational quantum circuits using the IQP ansatz. The noun and sentence types get a single qubit each, and the number of IQP layers is set to 1. We use JAX because the prediction functions are compiled with great speed, so JAX takes a very short time to run and produce the results. The results can be seen in Fig. 8. Even though we obtain perfect accuracy using JAX, this configuration needs 4 times the number of iterations of the noiseless quantum pipeline to achieve this. Fig. 5: lambeq QNLP Toolkit Pipeline. Figure from [6]. Fig. 6: Results of the classical pipeline simulation. ### _Noisy Quantum Simulation_ Running circuits on real hardware, taking into account the 130 circuits needed, would be quite difficult and time-consuming. To make this process simpler, we have used Qiskit's fake quantum-hardware backend called FakeVigo with the help of \(t|ket>\). The FakeVigo hardware has 5 qubits and is easily able to run our circuits because of the diagram rewriting we have employed. Apart from the added noise, the noisy pipeline is identical to the noiseless one. The test accuracy is not perfect for the noisy quantum simulation, as can be seen in Fig. 9: we achieved 83.33% accuracy on the test set. Attaining perfect accuracy would require even more iterations. To compare and show how the noisy quantum simulation differs from the noiseless one, we have not re-run the experiment with an increased number of iterations. ## V Conclusions and Future Work In this paper, we have shown the first application of QNLP - binary sentiment classification - using the lambeq QNLP toolkit on an intermediate dataset consisting of book-genre sentiments. 
We were able to achieve successful convergence for all the simulations carried out using the classical and quantum pipelines. Perfect accuracy on the test set was achieved for three simulations, and a decent accuracy was obtained for the noisy quantum pipeline. QNLP is a new field, and much work needs to be done in order to achieve quantum advantage. The current work can be extended by including more nouns, adjectives and verbs for each of the sentiments in the dataset, which will increase the parameter space. We have performed binary sentiment classification; another direction would therefore be to include multi-class sentiment classification by adding neutral sentiments as well. It would also be a promising direction of research to include random sentences (not following a particular pattern) in the dataset, which is of interest to us as it would provide intuition about the scalability of QNLP. ## Acknowledgment S. G. is very grateful to L. M. P. C. for his guidance and suggestions for improving the experiments and to Universidad Politecnica de Madrid for supporting this research. S. N. M. thanks Karunya Institute of Technology and Sciences for letting him explore research directions in the field of quantum technology. The authors acknowledge the use of Google Colab Pro for carrying out the experiments and the libraries lambeq & \(t|ket>\) from Cambridge Quantum (Quantum).
2306.00382
Calibrated and Conformal Propensity Scores for Causal Effect Estimation
Propensity scores are commonly used to estimate treatment effects from observational data. We argue that the probabilistic output of a learned propensity score model should be calibrated -- i.e., a predictive treatment probability of 90% should correspond to 90% of individuals being assigned to the treatment group -- and we propose simple recalibration techniques to ensure this property. We prove that calibration is a necessary condition for unbiased treatment effect estimation when using popular inverse propensity weighted and doubly robust estimators. We derive error bounds on causal effect estimates that directly relate to the quality of uncertainties provided by the probabilistic propensity score model and show that calibration strictly improves this error bound while also avoiding extreme propensity weights. We demonstrate improved causal effect estimation with calibrated propensity scores in several tasks including high-dimensional image covariates and genome-wide association studies (GWASs). Calibrated propensity scores improve the speed of GWAS analysis by more than two-fold by enabling the use of simpler models that are faster to train.
Shachi Deshpande, Volodymyr Kuleshov
2023-06-01T06:26:26Z
http://arxiv.org/abs/2306.00382v2
# Calibrated Propensity Scores ###### Abstract Propensity scores are commonly used to balance observed covariates while estimating treatment effects. Estimates obtained through propensity score weighting can be biased when the propensity score model cannot learn the true treatment assignment mechanism. We argue that the probabilistic output of a learned propensity score model should be calibrated, i.e. a predictive treatment probability of 90% should correspond to 90% of individuals being assigned to the treatment group. We propose simple recalibration techniques to ensure this property. We investigate the theoretical properties of a calibrated propensity score model and its role in unbiased treatment effect estimation. We demonstrate improved causal effect estimation with calibrated propensity scores in several tasks including high-dimensional genome-wide association studies, where we also show reduced computational requirements when calibration is applied to simpler propensity score models. ## 1 Introduction This paper studies the problem of inferring the causal effect of an intervention from observational data. For example, consider the problem of estimating the effect of a treatment on a medical outcome or the effect of a genetic mutation on a phenotype. A key challenge in this setting is confounding--e.g., if a treatment is only given to sick patients, it may paradoxically appear to trigger worse outcomes [11; 45]. Propensity score methods are a popular tool for correcting for confounding in observational data [40; 4; 45; 24; 46]. These methods estimate the probability of receiving a treatment given observed covariates, and balance covariates based on this probability. Propensity score methods can become unreliable when their predictive model outputs incorrect treatment assignment probabilities [17; 26]. 
For example, when the propensity score model is overconfident (a known problem with neural network estimators [12]), predicted assignment probabilities can be too small [44], which yields a blow-up in the estimated causal effects. More generally, propensity score weighting stands to benefit from accurate uncertainty quantification [16]. This work argues that propensity score methods can be improved by leveraging calibrated uncertainty estimation in treatment assignment models. Intuitively, when a calibrated model outputs a treatment probability of 90%, then 90% of individuals with that prediction should be assigned to the treatment group [36; 21]. We argue that calibration is a necessary condition for propensity score models that also addresses the aforementioned problems of model overconfidence. Off-the-shelf propensity score models are typically uncalibrated [16]; our work introduces algorithms that provably enforce calibration in these models, provides theoretical analysis, and demonstrates the usefulness of calibrated propensities on several tasks, including genome-wide association studies. In summary, this paper makes the following contributions: (1) we provide formal arguments that explain the benefits of uncertainty calibration in propensity score models; (2) we propose simple algorithms that enforce calibration; (3) we provide theoretical guarantees on the calibration and regret of these algorithms and we demonstrate their effectiveness in genome-wide association studies. ## 2 Background **Notation.** Formally, we are given an observational dataset \(\mathcal{D}=\{(x^{(i)},y^{(i)},t^{(i)})\}_{i=1}^{n}\) consisting of \(n\) units, each characterized by features \(x^{(i)}\in\mathcal{X}\subseteq\mathbb{R}^{d}\), a binary treatment \(t^{(i)}\in\{0,1\}\), and a scalar outcome \(y^{(i)}\in\mathcal{Y}\subseteq\mathbb{R}\). We assume \(\mathcal{D}\) consists of i.i.d. realizations of random variables \(X,Y,T\sim P\) from a data distribution \(P\). 
Although we assume binary treatments and scalar outcomes, our approach naturally extends beyond this setting. The feature space \(\mathcal{X}\) can be any continuous or discrete set. ### Causal effect estimation using propensity scoring We seek to estimate the true effect of \(T=t\) in terms of its average treatment effect (ATE). \[Y[x,t]=\mathbb{E}[Y|X=x,\text{do}(T=t)]\qquad\qquad\text{ATE}=\mathbb{E}[Y[x,1] -Y[x,0]], \tag{1}\] where \(\text{do}(\cdot)\) denotes an intervention [35]. We assume strong ignorability, i.e., \((Y(0),Y(1))\perp T|X\) and \(0<P(T|X)<1\), for all \(X\in\mathcal{X},T\in\{0,1\}\), where \(Y(0)\) and \(Y(1)\) denote potential outcomes. We also make the stable unit treatment value assumption (SUTVA), which states that there is a unique value of outcome \(Y_{i}(t)\) corresponding to unit \(i\) with input \(x_{i}\) and treatment \(t\) [40]. Under these assumptions, the propensity score, defined as \(e(X)=P(T=1|X)\), satisfies the conditional independence \((Y(0),Y(1))\perp T|e(X)\) [40]. The propensity score also acts as a balancing score, i.e. \(X\perp T|e(X)\). Thus, the ATE can be expressed as \(\tau=\mathbb{E}\bigg{(}\frac{TY}{e(X)}-\frac{(1-T)Y}{1-e(X)}\bigg{)}\). The Inverse Propensity of Treatment Weight (IPTW) estimator uses an approximate model \(Q(T=1|X)\) of \(P(T=1|X)\) to produce an estimate \(\hat{\tau}\) of the ATE, which is computed as \[\hat{\tau}=\frac{1}{n}\sum_{i=1}^{n}\bigg{(}\frac{t^{(i)}y^{(i)}}{Q(T=1|x^{(i) })}-\frac{(1-t^{(i)})y^{(i)}}{1-Q(T=1|x^{(i)})}\bigg{)}.\] We also define the Augmented Inverse Propensity Score Weight (AIPW) estimator in Appendix A. ### Calibrated and conformal prediction for uncertainty estimation This paper seeks to evaluate and improve the uncertainty of propensity scores. 
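As a concrete illustration of the IPTW estimator from Section 2.1, the following hedged numpy sketch (the synthetic data-generating process is our own, not from the paper) shows the estimator removing confounding bias when given the true propensity scores:

```python
import numpy as np

# Sketch of the IPTW estimator on synthetic confounded data.
rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(size=n)                       # confounder
e = 1.0 / (1.0 + np.exp(-x))                 # true propensity P(T=1|X)
t = rng.binomial(1, e)                       # confounded treatment
y = 2.0 * t + x + rng.normal(size=n)         # true ATE = 2.0

def iptw_ate(y, t, q):
    """IPTW ATE estimate given propensity scores q = Q(T=1|X)."""
    return np.mean(t * y / q - (1 - t) * y / (1 - q))

naive = y[t == 1].mean() - y[t == 0].mean()  # biased upward by x
print(round(naive, 2), round(iptw_ate(y, t, e), 2))  # IPTW is near 2.0
```

The naive difference in group means is inflated because treated units also have larger \(x\); reweighting by the inverse propensity recovers the true ATE up to sampling noise.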
A standard tool for evaluating predictive uncertainties is a proper loss (or proper scoring rule) \(L:\Delta_{\mathcal{Y}}\times\mathcal{Y}\rightarrow\mathbb{R}\), defined over the set of distributions \(\Delta_{\mathcal{Y}}\) over \(\mathcal{Y}\) and a realized outcome \(y\in\mathcal{Y}\). Examples of proper losses include the L2 or the log-loss. It can be shown that a proper score is a sum of the following terms [10]: proper \(\text{loss}=\text{calibration}-\text{sharpness}+\text{irreducible}\) term. **Calibration.** Intuitively, calibration means that a 90% confidence interval contains the outcome about \(90\%\) of the time. Sharpness means that confidence intervals should be tight. Maximally tight and calibrated confidence intervals are Bayes optimal. In the context of propensity scoring methods, we say that a propensity score model \(Q\) is calibrated if the true probability of \(T=1\) conditioned on predicting a probability \(p\) matches the predicted probability: \[P(T=1\mid Q(T=1|X)=p)=p\ \ \forall p\in[0,1] \tag{2}\] **Calibrated and conformal prediction.** Out of the box, most models \(Q\) are not calibrated. Calibrated and conformal prediction yield calibrated forecasts by comparing observed and predicted frequencies on a hold-out dataset [41; 21; 2; 47]. ## 3 Calibrated propensity scores We start with the observation that a good propensity scoring model \(Q(T|X)\) must not only correctly output the treatment assignment, but also accurately estimate predictive uncertainty. Specifically, the _probability_ of the treatment assignment must be correct, not just the class assignment. While a Bayes optimal \(Q\) will perfectly estimate uncertainty, suboptimal models will need to balance various aspects of predictive uncertainty, such as calibration and sharpness. This raises the question: what predictive uncertainty estimates work best for causal effect estimation using propensity scoring? 
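The calibration condition in Eq. (2) can be checked empirically by binning predictions and comparing each bin's mean prediction with the observed treatment frequency. A minimal sketch (the binning scheme and synthetic data are illustrative choices, not the paper's evaluation protocol):

```python
import numpy as np

# Empirical check of Eq. (2) by binning: for each bin of predicted
# probabilities, compare the mean prediction with the observed
# frequency of T = 1, and average the gaps weighted by bin size.

def binned_calibration_error(q, t, n_bins=10):
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    err, n = 0.0, len(q)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (q >= lo) & (q < hi)
        if mask.sum() == 0:
            continue
        err += mask.sum() / n * abs(q[mask].mean() - t[mask].mean())
    return err

rng = np.random.default_rng(1)
p = rng.uniform(0.05, 0.95, size=50_000)     # true propensities
t = rng.binomial(1, p)
print(binned_calibration_error(p, t))        # near 0: calibrated
print(binned_calibration_error(p ** 2, t))   # large: p**2 is miscalibrated
```

Predicting the true propensities yields an error near zero (only sampling noise remains), while the deflated scores \(p^{2}\) systematically understate the treatment frequency and produce a large error.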
### Calibration: A necessary condition for propensity scoring models This paper argues that calibration improves propensity-scoring methods. Intuitively, if the model \(Q(T=1|X)\) predicts a treatment assignment probability of 80%, then 80% of these predictions should receive the treatment. If the prediction is larger or smaller, the downstream IPTW estimator will overcorrect or undercorrect for the biased treatment allocation; see below for a simple example. In other words, calibration is a _necessary condition_ for a correct propensity scoring model. We formalize this intuition below, and we provide examples in Appendix F.2 where an IPTW estimator fails when it is not calibrated. **Theorem 3.1**.: _When \(Q(T|X)\) is not calibrated, there exists an outcome function such that an IPTW estimator based on \(Q\) yields an incorrect estimate of the true causal effect almost surely._ Please refer to Appendix F.2 for a full proof. ### Calibrated uncertainties improve propensity scoring models In addition to being a necessary condition, we also identify settings in which calibration is either sufficient or prevents common failure modes of IPTW estimators. Specifically, we identify and study two such regimes: (1) accurate but over-confident propensity scoring models (e.g., neural networks [12]); (2) high-variance IPTW estimators that take as input numerically small propensity scores. #### 3.2.1 Bounding the error of causal effect estimation using proper scores Our first step for studying the role of calibration is to relate the error of an IPTW estimator to the difference between a model \(Q(T|X)\) and the true \(P(T|X)\). We define \(\pi_{t,y}(Q)=\sum_{x}P(y|x,t)\frac{P(t|x)}{Q(t|x)}P(x)\) to be the estimated probability of \(y\) given \(t\) with a propensity score model \(Q\). It is not hard to show that the true \(Y[t]:=\mathbb{E}_{X}Y[X,t]=\mathbb{E}_{X}\mathbb{E}[Y|X=x,\mathrm{do}(T=t)]\) can be written as \(\sum_{y}y\pi_{y,t}(P)\) (see Appendix F.3). 
Similarly, the estimate of an IPTW estimator with propensity model \(Q\) in the limit of infinite data tends to \(\hat{Y}_{Q}[1]-\hat{Y}_{Q}[0]\), where \(\hat{Y}_{Q}[t]:=\sum_{y}y\pi_{y,t}(Q)\). We may bound the expected L1 ATE error \(|Y[1]-Y[0]-(\hat{Y}_{Q}[1]-\hat{Y}_{Q}[0])|\) by \(\sum_{t}|Y[t]-\hat{Y}_{Q}[t]|\leq\sum_{t}\sum_{y}|y|\cdot|\pi_{y,t}(P)-\pi_{y,t}(Q)|\). Our first lemma bounds the error \(|\pi_{y,t}(P)-\pi_{y,t}(Q)|\) as a function of the difference between \(Q(T|X)\) and the true \(P(T|X)\). A bound on the ATE error follows as a simple corollary. **Lemma 3.2**.: _The expected error \(|\pi_{y,t}(P)-\pi_{y,t}(Q)|\) induced by an IPTW estimator with propensity score model \(Q\) is bounded as_ \[|\pi_{y,t}(P)-\pi_{y,t}(Q)|\leq\mathbb{E}_{X\sim R_{y,t}}[\ell_{\chi}(P,Q)^{ \frac{1}{2}}], \tag{3}\] _where \(R_{y,t}\propto P(Y=y|X,T=t)P(X)\) is a data distribution and \(\ell_{\chi}(P,Q)=\left(1-\frac{P(T=t|X)}{Q(T=t|X)}\right)^{2}\) is the \(\chi^{2}\) loss between the true propensity score and the model \(Q\)._ Proof (Sketch).: Note that \(|\pi_{y,t}(P)-\pi_{y,t}(Q)|\leq\mathbb{E}_{X\sim R_{y,t}}\left|1-\frac{P(T=t| X)}{Q(T=t|X)}\right|\leq\mathbb{E}_{R_{y,t}}\ell_{\chi}(P,Q)^{\frac{1}{2}}\). See Appendix F.3.1 for the full proof. **Corollary 3.3**.: _Let \(|y|\leq K\) for all \(y\in\mathcal{Y}\). The error of an IPTW estimator with propensity score model \(Q\) is bounded by \(2|\mathcal{Y}|K\max_{y,t}\mathbb{E}_{R_{y,t}}\ell_{\chi}(P,Q)^{\frac{1}{2}}\)._ Note that \(\ell_{\chi}\) is a type of proper loss or proper scoring rule: it is small only if \(Q\) correctly captures the probabilities in \(P\). A model that is accurate, but that does not output correct probabilities, will have a large \(\ell_{\chi}\); conversely, when \(Q=P\), the bound equals zero and the IPTW estimator is perfectly accurate. 
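As a quick numeric illustration of the loss driving this bound (the values are chosen for illustration only):

```python
# Numeric illustration of l_chi(P, Q) = (1 - P(t|x)/Q(t|x))^2 from
# Lemma 3.2: it vanishes when Q matches P and blows up as Q(t|x)
# underestimates the true propensity.

def l_chi(p_true, q_model):
    return (1.0 - p_true / q_model) ** 2

p_true = 0.2                      # illustrative true P(T=t|X=x)
for q in [0.2, 0.1, 0.05, 0.01]:
    print(q, l_chi(p_true, q))
# q = 0.2 gives 0; q = 0.01 gives (1 - 20)^2 = 361
```

An accurate-but-overconfident model that predicts \(Q=0.01\) when the truth is \(0.2\) thus contributes a bound term hundreds of times larger than a well-matched model, which is the failure mode calibration targets.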
To the best of our knowledge, this is the first bound that relates the accuracy of an IPTW estimator directly to the quality of uncertainties of the probabilistic model \(Q\). #### 3.2.2 Calibration reduces variance of inverse probability estimators A common failure mode of IPTW estimators arises when the probabilities from a propensity scoring model \(Q(T|X)\) are small or even equal to zero--division by \(Q(T|X)\) then causes the IPTW estimator to take on very large values or be undefined. Furthermore, when \(Q(T|X)\) is small, small changes in its value cause large changes in the IPTW estimator, which induces problematically high variance. Here, we show that calibration can help mitigate this failure mode. If \(Q\) is calibrated, then it cannot take on abnormally small values relative to \(P\). Specifically, if \(P(T=t|X)\) is larger than some \(\delta>0\), then any prediction from a calibrated estimate \(Q\) of \(P\) has to be larger than \(\delta>0\) as well. In other words, division by small numbers cannot be a greater problem than in the true model. **Theorem 3.4**.: _Let \(P\) be the data distribution, and suppose that \(1-\delta>P(T|X)>\delta\) for all \(T,X\) and let \(Q\) be a calibrated model relative to \(P\). Then \(1-\delta>Q(T|X)>\delta\) for all \(T,X\) as well._ Proof (Sketch).: The proof is by contradiction. Suppose \(Q(T=1|x)=q\) for some \(x\) and \(q<\delta\). Then because \(Q\) is calibrated, of the times when we predict \(q\), we have \(P(T=1|Q(T=1|X)=q)=q<\delta\), which is impossible since \(P(T=1|x)>\delta\) for every \(x\). See Appendix F.3.2 for the full proof. #### 3.2.3 Calibration improves causal effect estimation with accurate propensity models Unfortunately, calibration by itself is not sufficient to correctly estimate treatment effects. For example, consider defining \(Q(T|X)\) as the marginal \(P(T)\): this \(Q\) is calibrated, but cannot accurately estimate treatment effects. 
However, if the model \(Q\) is sufficiently accurate (as might be the case with a powerful neural network), calibration becomes the missing piece for an accurate IPTW estimator. Specifically, we define separability, a condition which states that when \(P(T|X_{1})\neq P(T|X_{2})\) for \(X_{1},X_{2}\in\mathcal{X}\), then the model \(Q\) satisfies \(Q(T|X_{1})\neq Q(T|X_{2})\). Intuitively, the model \(Q\) is able to discriminate between various \(T\)--something that might be achievable with an expressive neural \(Q\) that has high classification accuracy. We show that a model that is separable and also calibrated achieves accurate causal effect estimation. **Theorem 3.5**.: _The error of an IPTW estimator with propensity model \(Q\) tends to zero as \(n\rightarrow\infty\) if:_ 1. _Separability holds, i.e.,_ \(\forall X_{1},X_{2}\in\mathcal{X},P(T|X_{1})\neq P(T|X_{2})\implies Q(T|X_{1}) \neq Q(T|X_{2})\)__ 2. _The model_ \(Q\) _is calibrated, i.e.,_ \(\forall q\in(0,1),P(T=1|Q(T=1|X)=q)=q\)__ See Appendix F.3.3 for the proof. Below, we also show that a post-hoc recalibrated model \(Q^{\prime}\) has vanishing regret \(\ell(Q^{\prime},Q)\) with respect to a base model \(Q\) and a proper loss \(\ell\) (including \(\ell_{\chi}\) used in our calibration bound). ## 4 Algorithms for calibrated propensity scoring ### A framework for calibrated propensity scoring Next, we propose algorithms that produce calibrated propensity scoring models. Our approach is outlined in Algorithm 1; it differs from standard propensity scoring methods by the addition of a post-hoc recalibration step (step #3) after training the model \(Q\). The recalibration step in Algorithm 1 implements a post-hoc recalibration procedure [36; 21] and is outlined in Algorithm 2. The key idea is to learn an auxiliary model \(R:[0,1]\rightarrow[0,1]\) such that the joint model \(R\circ Q\) is calibrated. 
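As a minimal sketch of this recalibration idea, the following uses histogram binning as the auxiliary model \(R\) (an illustrative non-parametric stand-in; the class name and synthetic data are our own choices): \(R\) maps each score bin of \(Q\) to the observed frequency of \(T=1\) in a held-out calibration split.

```python
import numpy as np

# Post-hoc recalibration sketch with histogram binning as R:
# fit R on held-out data so that R(Q(x)) matches the observed
# frequency of T = 1 among points with similar scores.

class HistogramRecalibrator:
    def __init__(self, n_bins=10):
        self.edges = np.linspace(0.0, 1.0, n_bins + 1)

    def _bin(self, q):
        return np.clip(np.digitize(q, self.edges) - 1, 0, len(self.edges) - 2)

    def fit(self, q_cal, t_cal):
        idx = self._bin(q_cal)
        # R(bin) = observed frequency of T=1 in that bin (0.5 if empty)
        self.freq = np.array([t_cal[idx == b].mean() if (idx == b).any() else 0.5
                              for b in range(len(self.edges) - 1)])
        return self

    def predict(self, q):
        return self.freq[self._bin(q)]

rng = np.random.default_rng(2)
p = rng.uniform(0.1, 0.9, size=60_000)     # true propensities
t = rng.binomial(1, p)
q = p ** 2                                  # miscalibrated base scores Q
R = HistogramRecalibrator().fit(q[:30_000], t[:30_000])
q_recal = R.predict(q[30_000:])             # recalibrated held-out scores
```

After this step the recalibrated scores track the observed treatment frequencies far more closely than the raw scores, at the cost of coarsening the predictions to one value per bin.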
Below, we argue that if \(R\) can approximate the density \(P(T=1|Q(T|X)=p)\), then \(R\circ Q\) will be calibrated [21; 18]. Learning an \(R\) that approximates \(P(T=1|Q(T|X)=p)\) requires specifying (1) a model class for \(R\) and (2) a learning objective \(\ell\). One possible model class for \(R\) is that of **non-parametric kernel density estimators** over \([0,1]\); their main advantage is that they can provably learn the one-dimensional conditional density \(P(T=1|Q(T|X)=p)\). Examples of such algorithms are RBF kernel density estimation and isotonic regression. Alternatively, one may use a family of **parametric models** for \(R\), e.g., logistic regression or neural networks. Such parametric recalibrators can be implemented easily within deep learning frameworks and work well in practice, as we later demonstrate empirically. Our learning objective for \(R\) can be any proper scoring rule, such as the L2 loss, the log-loss, or the Chi-squared loss. Optimizing it is a standard supervised learning problem.

### Ensuring calibration in propensity scoring models

Next, we seek to show that Algorithms 1 and 2 provably yield a calibrated model \(R\circ Q\). This shows that the desirable property of calibration can be maintained in practice.

**Notation.** We have a calibration dataset \(\mathcal{C}\) of size \(m\) sampled from \(P\), and we train a recalibrator \(R:[0,1]\rightarrow[0,1]\) over the outputs of a base model \(Q\) to minimize a proper loss \(L\). We denote the Bayes-optimal recalibrator by \(B:=P(T=1\mid Q(X))\); the probability of \(T=1\) conditioned on the forecast \((R\circ Q)(X)\) is \(S:=P(T=1\mid(R\circ Q)(X))\). To simplify notation, we omit the variable \(X\) when taking expectations over \(X,T\), e.g. \(\mathbb{E}[L(R\circ Q,T)]=\mathbb{E}[L(R(Q(X)),T)]\). Our first claim is that if we can perform density estimation, then we can ensure calibration. We first formally define the task of density estimation.
**Task 4.1** (Density Estimation).: _The model \(R\) approximates the density \(B:=P(T=t\mid Q(X))\). The expected proper loss of \(R\) tends to that of \(B\) as \(m\rightarrow\infty\) such that w.h.p.:_ \[\mathbb{E}[L(B\circ Q,T)]\leq\mathbb{E}[L(R\circ Q,T)]<\mathbb{E}[L(B\circ Q,T)]+\delta\] _where \(\delta>0\), \(\delta=o(m^{-k}),k>0\) is a bound that decreases with \(m\)._

Note that non-parametric kernel density estimation is formally guaranteed to solve one-dimensional density estimation given enough data.

**Fact 4.2** (Wasserman [49]).: _When \(R\) implements kernel density estimation and \(L\) is the log-loss, Task 4.1 is solved with \(\delta=o(1/m^{2/3})\)._

We now show that when we can solve Task 4.1, our approach yields models that are asymptotically calibrated in the sense that their calibration error tends to zero as \(m\rightarrow\infty\).

**Theorem 4.3**.: _The model \(R\circ Q\) is asymptotically calibrated and the calibration error \(\mathbb{E}[L_{c}(R\circ Q,S)]<\delta\) for \(\delta=o(m^{-k}),k>0\) w.h.p._

See Appendix F.4.1 for the full proof.

### No-regret calibration

Next, we show that Algorithms 1 and 2 produce a model \(R\circ Q\) that is asymptotically just as good as the original \(Q\) as measured by the proper loss \(L\).

**Theorem 4.4**.: _The recalibrated model has asymptotically vanishing regret relative to the base model: \(\mathbb{E}[L(R\circ Q,T)]\leq\mathbb{E}[L(Q,T)]+\delta,\) where \(\delta>0\), \(\delta=o(m^{-k}),k>0\)._

Proof (Sketch).: Solving Task 4.1 implies \(\mathbb{E}[L(R\circ Q,T)]\leq\mathbb{E}[L(B\circ Q,T)]+\delta\leq\mathbb{E}[L(Q,T)]+\delta\); the second inequality holds because a Bayes-optimal \(B\) has lower loss than an identity mapping. See Appendix F.4.2 for the full proof.

Thus, given enough data, we are guaranteed to produce calibrated forecasts and preserve base model performance as measured by \(L\) (including \(L_{\chi}\) used in our calibration bound).
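As a concrete illustration of the recalibration step (Algorithm 2), here is a minimal sketch assuming an isotonic-regression recalibrator implemented via pool-adjacent-violators in plain NumPy. The calibration data and the miscalibrated base model (true treatment probability \(s^2\) when the score is \(s\)) are synthetic stand-ins, not from the paper:

```python
import numpy as np

def fit_isotonic_recalibrator(scores, labels):
    """Fit R: [0,1] -> [0,1] by isotonic regression (pool-adjacent-violators),
    so that R(Q(x)) tracks P(T=1 | Q(T=1|X)=q) on a held-out calibration set."""
    order = np.argsort(scores)
    x = np.asarray(scores, dtype=float)[order]
    y = np.asarray(labels, dtype=float)[order]
    vals, wts = [], []                                # monotone blocks: (mean, size)
    for yi in y:
        vals.append(yi)
        wts.append(1.0)
        while len(vals) > 1 and vals[-2] > vals[-1]:  # merge violating blocks
            v2, w2 = vals.pop(), wts.pop()
            v1, w1 = vals.pop(), wts.pop()
            vals.append((v1 * w1 + v2 * w2) / (w1 + w2))
            wts.append(w1 + w2)
    fitted = np.repeat(vals, np.asarray(wts, dtype=int))  # per-point isotonic fit
    return lambda q: np.interp(q, x, fitted)              # monotone recalibrator R

# Synthetic calibration set: the base model's score s overstates the true
# treatment probability, which is s**2 (an assumed miscalibration).
rng = np.random.default_rng(0)
s = rng.uniform(size=5000)
t = rng.binomial(1, s ** 2)
R = fit_isotonic_recalibrator(s, t)   # R(q) should roughly track q**2
```

The composed model `lambda x: R(q_model(x))` then plays the role of \(R\circ Q\); isotonic regression is one of the non-parametric recalibrator classes mentioned in Section 4.1.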
## 5 Empirical evaluation

We perform experiments on several observational studies to evaluate calibrated propensity score models. We cover different types of treatment assignment mechanisms, base propensity score models, and varying dimensionality of observed covariates.

**Setup.** We use the Inverse-Propensity Treatment Weight (IPTW) and Augmented Inverse Propensity Weight (AIPW) estimators in our experiments. We compare the estimates obtained through calibrated propensities with several baselines, including estimators based on uncalibrated propensity scores. We use sigmoid or isotonic regression as the recalibrator and utilize cross-validation splits to generate the calibration dataset. We measure the performance in terms of the absolute error in estimating the ATE, \(\epsilon_{ATE}=|\hat{\tau}-\tau|,\) where \(\tau\) is the true treatment effect and \(\hat{\tau}\) is our estimated treatment effect.

**Analysis of calibration.** We evaluate the calibration of the propensity score model using the expected calibration error (ECE), defined as \(\mathbb{E}_{p\sim Q(T=1|X)}|P(T=1|Q(T=1|X)=p)-p|,\) where \(Q(T=1|X)\) models the treatment assignment mechanism \(P(T=1|X)\). To compute the ECE, we divide the probabilistic output range \([0,1]\) into equal-sized intervals \(\{I_{1},\ldots,I_{M}\}\) from which we generate buckets \(\{B_{i}\}_{i=1}^{M}\), where \(B_{i}=\{(X,T,Y)|Q(T=1|X)\in I_{i}\}\). The estimated ECE is then computed as \(\sum_{i=1}^{M}\frac{|B_{i}|}{|\bigcup_{j=1}^{M}B_{j}|}|\text{avg}_{i}(B_{i})-\text{pred}_{i}(B_{i})|,\) where \(\text{avg}_{i}(B_{i})=\sum_{j=1}^{|B_{i}|}T_{j}/|B_{i}|\) and \(\text{pred}_{i}(B_{i})=\sum_{j=1}^{|B_{i}|}Q(T=1|X_{j})/|B_{i}|.\)

### Drug effectiveness study

We simulate an observational study of recovery time from disease in response to the administration of a drug [51]. The decision to treat an individual with the drug depends on the covariates, specified as age, gender, and severity of disease.
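The binned ECE estimator defined in the calibration-analysis paragraph above can be sketched as follows (a minimal NumPy version on synthetic data; equal-width bins, weights \(|B_i|/n\)):

```python
import numpy as np

def expected_calibration_error(probs, treatments, n_bins=10):
    """Binned ECE: weighted average over buckets of |avg_i(B_i) - pred_i(B_i)|."""
    probs = np.asarray(probs, dtype=float)
    treatments = np.asarray(treatments, dtype=float)
    # Map each prediction to its equal-width bin; probs == 1.0 goes to the last bin.
    bin_ids = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            avg = treatments[mask].mean()          # avg_i(B_i): empirical treatment rate
            pred = probs[mask].mean()              # pred_i(B_i): mean predicted probability
            ece += mask.mean() * abs(avg - pred)   # weight |B_i| / n
    return ece

# A model whose scores match the treatment frequencies has ECE near zero;
# a systematically shifted model does not (synthetic check).
rng = np.random.default_rng(0)
q = rng.uniform(size=20_000)
t_calibrated = rng.binomial(1, q)
t_shifted = rng.binomial(1, np.clip(q + 0.3, 0.0, 1.0))
```

This is the quantity reported as ECE (and \(\Delta_{ECE}\), its change after recalibration) in the tables below.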
We use logistic regression as the propensity score model. In Figure 1, we see that weighting by recalibrated propensities allows us to approximate the distribution of individual treatment effect estimates better than uncalibrated propensities. In Figure 2, we compare the histogram of propensity scores before and after calibration. Please refer to Appendix B for details on the simulation, the models used, and calibration plots.

In Table 2, we employ a different treatment assignment mechanism in each simulated observational study, allowing us to compare mechanisms that may or may not be well-specified by a linear model (Appendix B). We see that calibrated propensities produce a lower absolute error in estimating the average treatment effect (\(\epsilon_{ATE}\)) under varying mechanisms. Here, the naive estimation computes the outcomes without weighting the samples with propensities. In Table 1, we also compare a range of base propensity score models for Simulation A and see the benefits of calibration across these setups. Additional details, including the ECE, are in Table 8, Appendix E.

\begin{table} \begin{tabular}{l c c} \hline \hline Base model & \(\epsilon_{ATE}\) (Plain) & \(\epsilon_{ATE}\) (Calib) \\ \hline Log. Reg & 0.479 (0.005) & 0.091 (0.022) \\ MLP & 0.455 (0.042) & 0.027 (0.031) \\ SVM & 0.485 (0.004) & 0.454 (0.013) \\ Naive Bayes & 0.471 (0.003) & 0.021 (0.018) \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of different base propensity score models.

Figure 1: Recalibrating the propensity score model reduces the bias in estimating treatment effects from observational data.

Figure 2: Histogram of propensities pre- and post-calibration. Calibration reduces the occurrence of numerically small scores.

In summary, calibrated propensities approximate the true distribution of individual treatment effects better and reduce the occurrence of numerically low scores. They reduce the error in ATE estimation across
different propensity score models and treatment assignment mechanisms. In real-world observational studies, where we do not know the true treatment assignment mechanism, calibration can be useful to improve the treatment effect estimates from a potentially misspecified model.

### Unstructured covariates

We simulate a simple observational study following Louizos et al. [30] and Deshpande et al. [6] such that the variables \(X,T,Y\sim\mathbb{P}\) are binary and the true ATE is zero. Appendix C contains a detailed description of this simulation. We also introduce an unstructured image covariate \(\mathbf{X}\) that represents \(X\) as a randomly chosen MNIST image of a zero or one, depending on whether \(X=0\) or \(X=1\). Specifically, \(\mathbb{P}(\mathbf{X}|X=1)\) is uniform over MNIST images of '1' and \(\mathbb{P}(\mathbf{X}|X=0)\) is uniform over MNIST images of '0'. We use a multi-layer perceptron as the propensity score model and recalibrate its output. In Table 3, we compare the IPTW estimates for the ATE using the binary \(X\) and image \(\mathbf{X}\) covariates. The ECE is higher for the plain propensity score model trained on image covariates, indicating higher miscalibration. We see that recalibration also improves ATE estimates with high-dimensional, unstructured covariates.

### Genome-Wide Association Studies

Genome-Wide Association Studies (GWASs) attempt to estimate the treatment effect of genetic mutations (called SNPs) on individual traits (called phenotypes) from observational datasets.
Each SNP acts as a treatment. Confounding occurs because of hidden ancestry: individuals with shared ancestry have correlated genes and phenotypes. The key takeaways can be summarized as follows. First, recalibration enables off-the-shelf IPTW estimators to match or outperform a state-of-the-art GWAS analysis system (LMM/LIMIX; see Tables 4 and 6).

\begin{table} \begin{tabular}{l c c c c c} \hline \hline Dataset & \begin{tabular}{c} Spatial \\ (\(\alpha\)=0.1) \\ \end{tabular} & \begin{tabular}{c} Spatial \\ (\(\alpha\)=0.3) \\ \end{tabular} & \begin{tabular}{c} Spatial \\ (\(\alpha\)=0.5) \\ \end{tabular} & \begin{tabular}{c} HGDP \\ \end{tabular} & \begin{tabular}{c} TGP \\ \end{tabular} \\ \hline Naive & 16.23 (0.91) & 11.76 (0.84) & 9.81 (0.69) & 11.82 (0.11) & 12.24 (0.71) \\ PCA & 9.60 (0.37) & 9.54 (0.41) & 9.38 (0.38) & 11.69 (0.20) & 10.73 (0.38) \\ FA & 9.55 (0.34) & 9.53 (0.44) & 9.23 (0.30) & 11.65 (0.16) & 10.59 (0.32) \\ LMM & 10.24 (0.41) & 9.58 (0.45) & **8.15 (0.40)** & **10.09 (0.35)** & **9.44 (0.57)** \\ \hline IPTW (Calib) & **8.13 (0.35)** & **8.69 (0.56)** & **8.32 (0.34)** & 10.86 (0.13) & **9.57 (0.58)** \\ IPTW (Plain) & 12.56 (1.25) & 10.22 (0.81) & 9.09 (0.48) & 11.62 (0.12) & 11.76 (0.86) \\ AIPW (Calib) & 8.94 (0.29) & 9.00 (0.58) & 8.59 (0.39) & 11.06 (0.12) & 10.32 (0.43) \\ AIPW (Plain) & 13.89 (0.76) & 10.46 (0.72) & 8.99 (0.51) & 11.38 (0.11) & 11.56 (0.65) \\ \(\Delta_{ECE}\) & 0.022 (0.001) & 0.016 (0.007) & 0.015 (0.001) & 0.011 (0.001) & 0.022 (0.001) \\ \hline \hline \end{tabular} \end{table} Table 4: GWAS with calibrated propensities. We compare IPTW and AIPW estimates using calibrated propensity scores against standard baselines and a specialized GWAS analysis system (LMM/LIMIX).

\begin{table} \begin{tabular}{l c c c c c} \hline \hline Setting & \(\varepsilon_{ATE}\) with & \multicolumn{2}{c}{Plain Propensities} & \multicolumn{2}{c}{Recalibrated Propensities} \\ & naive estimation & \(\varepsilon_{ATE}\) & ECE & \(\varepsilon_{ATE}\) & ECE \\ \hline Simulation A & 0.495 (0.002) & 0.477 (0.007) & 0.033 (0.001) & 0.156 (0.027) & 0.027 (0.001) \\ Simulation B & 0.222 (0.003) & 0.210 (0.002) & 0.040 (0.001) & 0.193 (0.002) & 0.016 (0.001) \\ Simulation C & 0.273 (0.003) & 0.153 (0.003) & 0.053 (0.001) & 0.147 (0.002) & 0.025 (0.002) \\ Simulation D & 0.290 (0.004) & 0.066 (0.005) & 0.118 (0.001) & 0.026 (0.004) & 0.026 (0.002) \\ \hline \hline \end{tabular} \end{table} Table 2: Recalibrating the output of the propensity score model results in a lower error in estimating causal effects. Reduction in ECE implies that the calibration of the model improves with our technique. Results are averaged over 10 experimental repetitions and parentheses contain the standard error.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline Setting & \(\varepsilon_{ATE}\) with & \multicolumn{2}{c}{Plain Propensities} & \multicolumn{2}{c}{Recalibrated Propensities} \\ & naive estimation & \(\varepsilon_{ATE}\) & ECE & \(\varepsilon_{ATE}\) & ECE \\ \hline Image Covariate & 0.187 (0.010) & 0.161 (0.046) & 0.107 (0.029) & 0.095 (0.005) & 0.024 (0.003) \\ Binary Covariate & 0.176 (0.019) & 0.140 (0.029) & 0.052 (0.011) & 0.099 (0.008) & 0.028 (0.004) \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of structured and unstructured covariates.
Second, our method enables the use of propensity score models that would otherwise be unusable due to the poor quality of their uncertainty estimates (e.g., Naive Bayes; see Table 5). Third, leveraging new types of propensity score models that are fast to train (such as Naive Bayes) improves the speed of GWAS analysis by more than two-fold (see Table 7).

**Setup.** We simulate the genotypes and phenotypes of individuals following a range of standard models, as described in Appendix D. The outcome is simulated as \(Y=\beta^{T}G+\alpha^{T}Z+\epsilon\), where \(G\) is the vector of SNPs, \(Z\) contains the hidden confounding variables, \(\epsilon\) is Gaussian noise, \(\beta\) is the vector of treatment effects corresponding to each SNP, and \(\alpha\) holds the coefficients for the hidden confounding variables. We assume that the aspect of hidden population structure in \(Z\) that needs to be controlled for is fully contained in the observed genetic data to ensure ignorability [27]. To estimate the average marginal treatment effect corresponding to each SNP, we iterate successively over the vector of SNPs such that the selected SNP is the treatment \(T\) and all the remaining SNPs are the covariates \(X\) for predicting the phenotypic outcome \(Y\). The result is a vector of estimated treatment effects \(\hat{\beta}\) corresponding to the vector of SNPs. We measure \(\varepsilon_{ATE}\) as the \(l_{2}\) norm of the difference between the true and estimated marginal treatment effect vectors. We use calibrated propensity scores with the IPTW and AIPW estimators to compute these treatment effects. We compare the performance of these estimators with standard methods for GWAS, including Principal Components Analysis (PCA) [37; 38], Factor Analysis (FA), and Linear Mixed Models (LMMs) [52; 28], implemented in the popular LIMIX library [29]. Unless mentioned otherwise, 1% of the total SNPs are causal and we have 4000 individuals in the dataset.
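The per-SNP estimation loop described above can be sketched as follows. This is a toy simulation with assumed parameters (binarized genotypes, one causal SNP); for brevity the true assignment probability stands in for the fitted, recalibrated propensity model \(Q(T=1|X)\) that the paper's pipeline would learn from the covariates:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_snps = 4000, 5                         # toy sizes; the study uses 100-1000 SNPs
z = rng.normal(size=n)                      # hidden ancestry (the confounder)
p = 1.0 / (1.0 + np.exp(-z))                # P(SNP = 1 | ancestry), binarized genotypes
G = (rng.uniform(size=(n, n_snps)) < p[:, None]).astype(float)
beta = np.array([2.0, 0.0, 0.0, 0.0, 0.0])  # only SNP 0 is causal
y = G @ beta + 1.5 * z + rng.normal(size=n)

beta_hat = np.zeros(n_snps)
for j in range(n_snps):                     # each SNP in turn is the treatment T
    T = G[:, j]
    X = np.delete(G, j, axis=1)             # remaining SNPs: covariates for Q(T=1|X)
    e = p                                   # stand-in: a real pipeline fits and
                                            # recalibrates a propensity model on X
    beta_hat[j] = np.mean(T * y / e) - np.mean((1 - T) * y / (1 - e))

# Unweighted (confounded) estimate for the causal SNP, for comparison
naive = y[G[:, 0] == 1].mean() - y[G[:, 0] == 0].mean()
```

With well-behaved propensities the IPTW loop recovers the causal effect of SNP 0 (about 2) and near-zero effects for the null SNPs, while the naive difference of means is biased upward by the shared ancestry.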
\begin{table} \begin{tabular}{l c c c c} \hline \hline Method & 1\% Causal SNPs & 2\% Causal SNPs & 5\% Causal SNPs & 10\% Causal SNPs \\ \hline Naive & 22.408 (5.752) & 15.150 (2.213) & 23.388 (5.021) & 14.846 (2.272) \\ PCA & 18.104 (5.378) & 13.699 (2.413) & 15.837 (3.331) & 11.683 (0.983) \\ FA & 18.532 (3.641) & 14.166 (2.259) & 16.855 (2.764) & 11.963 (0.958) \\ LMM & 17.575 (3.408) & 13.896 (2.152) & 14.681 (3.366) & 10.108 (0.827) \\ \hline IPTW (Calib) & **17.237 (3.054)** & **13.113 (1.775)** & **14.587 (3.432)** & **8.625 (0.838)** \\ IPTW (Plain) & 19.297 (3.425) & 14.372 (1.482) & 18.290 (3.788) & 11.859 (0.952) \\ AIPW (Calib) & 17.647 (3.208) & 13.382 (1.676) & 15.166 (3.597) & 9.078 (0.928) \\ AIPW (Plain) & 20.652 (3.286) & 13.720 (1.798) & 21.321 (4.750) & 12.904 (1.990) \\ \hline \hline \end{tabular} \end{table} Table 6: Increasing proportion of causal SNPs. Calibrated propensities reduce the bias in treatment effect estimation across all setups and compare favorably against standard GWAS methods.
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline Dataset & Metrics & LR & MLP & Random Forest & Adaboost & NB \\ \hline Spatial & \(\varepsilon_{ATE}\) (plain) & 13.886 (0.755) & 17.403 (1.070) & 12.911 (0.612) & 16.234 (0.916) & 582.731 (64.514) \\ (\(\alpha\)=0.1) & \(\varepsilon_{ATE}\) (calib) & 8.942 (0.287) & 14.661 (0.762) & 8.706 (0.322) & 8.524 (0.297) & 8.526 (0.472) \\ & \(\Delta_{ECE}\) & 0.022 (0.001) & 0.072 (0.003) & 0.060 (0.001) & 0.252 (0.006) & 0.281 (0.002) \\ \hline HGDP & \(\varepsilon_{ATE}\) (plain) & 11.380 (0.110) & 12.358 (0.197) & 11.529 (0.107) & 11.816 (0.108) & 138.086 (5.086) \\ & \(\varepsilon_{ATE}\) (calib) & 11.060 (0.120) & 11.198 (0.106) & 11.299 (0.143) & 11.070 (0.123) & 11.430 (0.133) \\ & \(\Delta_{ECE}\) & 0.011 (0.001) & 0.069 (0.002) & 0.053 (0.001) & 0.275 (0.006) & 0.206 (0.003) \\ \hline TGP & \(\varepsilon_{ATE}\) (plain) & 11.560 (0.650) & 11.965 (0.754) & 11.677 (0.614) & 12.246 (0.713) & 87.329 (5.716) \\ & \(\varepsilon_{ATE}\) (calib) & 10.320 (0.430) & 11.530 (0.633) & 10.519 (0.402) & 10.244 (0.398) & 9.070 (0.316) \\ & \(\Delta_{ECE}\) & 0.022 (0.001) & 0.061 (0.002) & 0.070 (0.002) & 0.204 (0.007) & 0.267 (0.004) \\ \hline \hline \end{tabular} \end{table} Table 5: We compare the AIPW estimate using calibrated propensities. Our methods unlock the use of certain propensity score models (e.g., Naive Bayes) which only work after recalibration. In Table 4, we demonstrate the effectiveness of estimators using calibrated propensities on five different GWAS datasets (Appendix D). Here, we have a total of 100 SNPs. In Table 6, we increase the proportion of causal SNPs for the Spatial simulation and continue to see improved performance under calibration. In Table 5, we compare different base models to learn propensity scores and show that calibration improves the performance in each case. 
We also see that the performance of plain Naive Bayes as the base propensity score model is very poor owing to its simplistic conditional independence assumptions, but calibration improves its performance significantly. In Table 7, we compare the computational throughput of calibrated Naive Bayes as the propensity score model with that of logistic regression. Here, we have a total of 1000 SNPs. We see that using calibrated Naive Bayes obtains performance competitive with logistic regression at a significantly higher throughput. Please refer to Appendix E for results on additional GWAS datasets.

## 6 Related work

Isotonic regression [33] and Platt scaling [36] are commonly used to calibrate uncertainties over discrete outputs. This concept has been extended to regression calibration [21], online calibration [19] and structured prediction [20]. Calibrated uncertainties have been used to improve deep reinforcement learning [31; 18], natural language processing [23], Bayesian optimization [5], etc. Kang and Schafer [17] and Lensis et al. [26] demonstrate the degradation in treatment effect estimation in response to misspecified treatment and outcome models. Different notions of calibration have been proposed to reduce the bias in treatment effect estimation by optimizing the covariate balancing property [14; 55; 34] and by correcting measurement error [43]. Lin and Zeng [27] rigorously define propensity score-based techniques to correct for confounding in Genome-Wide Association Studies (GWAS). Zhao et al. [53; 54] propose techniques to balance both genetic and non-genetic covariates using propensity scores. Other techniques to correct for confounding in GWAS include Principal Components Analysis [37], Genomic Control [7], Stratification Scores [8] and Linear Mixed Models [28].

## 7 Discussion and conclusions

True treatment assignment mechanisms in observational studies are rarely known.
Misspecified propensity score models and outcome models may lead to biased treatment effect estimation [17; 26]. Different parametric and non-parametric models have been proposed to learn propensity scores [32; 13; 15; 25]. We proposed a simple technique to perform post-hoc calibration of the propensity score model. We showed that calibration is a necessary condition for obtaining accurate treatment effects and that calibrated uncertainties improve propensity scoring models. Empirically, we showed that our technique reduces the bias in estimates across a range of treatment assignment functions and base propensity score models. Compared to calibration by optimizing the covariate balancing property [14], our procedure is simpler and does not require any modification to the training of the base propensity score model. Propensity score models over high-dimensional, unstructured covariates like images, text, and genomic sequences are harder to specify, and we showed that we can improve treatment effect estimates for such covariates over a range of base models, including the popular logistic regression. We also showed that we can calibrate simpler models like Naive Bayes over high-dimensional covariates and obtain higher computational throughput while maintaining competitive performance as measured by the error in treatment effect estimation.

**Limitations and future directions.** We performed an empirical evaluation for observational studies with binary treatments, but our calibration procedure can potentially be applied to multi-valued and continuous treatments. We leave this as future work. Our GWAS experiments were performed on a range of standard simulation models, but it will be interesting to extend these experiments to include non-genetic covariates, a higher number of SNPs, and real-world genotype matrices. Additionally, the calibration of outcome models is an exciting direction for future work.
\begin{table} \begin{tabular}{l c c} \hline \hline Method & \(\epsilon_{ATE}\) & \multicolumn{1}{c}{Tput (SNPs/sec)} \\ \hline LMM & 19.908 (3.592) & - \\ Calibrated NB & **18.210 (1.705)** & 47.6 \\ Plain NB & 1455.992 (185.084) & 68.6 \\ Calibrated LR & 23.618 (3.832) & 19.5 \\ Plain LR & 27.921 (4.713) & 20.1 \\ \hline \hline \end{tabular} \end{table} Table 7: Calibrated Naive Bayes yields a lower \(\epsilon_{ATE}\) (IPTW) and uses fewer computational resources than logistic regression.
2303.05424
The carbon footprint of astronomical research infrastructures
We estimate the carbon footprint of astronomical research infrastructures, including space telescopes and probes and ground-based observatories. Our analysis suggests annual greenhouse gas emissions of $1.2\pm0.2$ MtCO$_2$e yr$^{-1}$ due to construction and operation of the world-fleet of astronomical observatories, corresponding to a carbon footprint of 36.6$\pm$14.0 tCO$_2$e per year and average astronomer. We show that decarbonising astronomical facilities is compromised by the continuous deployment of new facilities, suggesting that a significant reduction in the deployment pace of new facilities is needed to reduce the carbon footprint of astronomy. We propose measures that would bring astronomical activities more in line with the imperative to reduce the carbon footprint of all human activities.
Jürgen Knödlseder
2023-03-09T17:16:18Z
http://arxiv.org/abs/2303.05424v1
# The Carbon Footprint of Astronomical Research Infrastructures

###### Abstract

We estimate the carbon footprint of astronomical research infrastructures, including space telescopes and probes and ground-based observatories. Our analysis suggests annual greenhouse gas emissions of \(1.2\pm 0.2\) MtCO\({}_{2}\)e yr\({}^{-1}\) due to construction and operation of the world-fleet of astronomical observatories, corresponding to a carbon footprint of 36.6\(\pm\)14.0 tCO\({}_{2}\)e per year and average astronomer. We show that decarbonising astronomical facilities is compromised by the continuous deployment of new facilities, suggesting that a significant reduction in the deployment pace of new facilities is needed to reduce the carbon footprint of astronomy. We propose measures that would bring astronomical activities more in line with the imperative to reduce the carbon footprint of all human activities.

Keywords: observatories, space telescopes, space probes, carbon footprint, climate change

## 1 Introduction

The Intergovernmental Panel on Climate Change (IPCC) is a body created in 1988 by the World Meteorological Organization (WMO) and the United Nations Environment Programme (UNEP) with the objective to provide governments at all levels with scientific information that they can use to develop climate policies. The IPCC authors assess thousands of scientific papers published each year to provide a comprehensive summary of what is known about the drivers of climate change, its impacts and future risks, and how adaptation and mitigation can reduce those risks. According to the 6\({}^{\rm th}\) IPCC assessment report (IPCC 2021), it is unequivocal that human influence has warmed the atmosphere, ocean and land. The scale of recent changes across the climate system as a whole - and the present state of many aspects of the climate system - are unprecedented over many centuries to many thousands of years.
Global warming of 1.5\({}^{\circ}\)C and 2\({}^{\circ}\)C will be exceeded during the 21\({}^{\rm st}\) century unless deep reductions in carbon dioxide (CO\({}_{2}\)) and other greenhouse gas (GHG) emissions occur in the coming decades. Many changes due to past and future GHG emissions are irreversible for centuries to millennia, especially changes in the ocean, ice sheets and global sea level. From a physical science perspective, limiting human-induced global warming to a specific level requires limiting cumulative CO\({}_{2}\) emissions, reaching at least net zero CO\({}_{2}\) emissions, along with strong reductions in other GHG emissions.

There is growing recognition in the astronomy and astrophysics community that it must assume its share in the global effort to reduce GHGs. Like many other institutes, we have therefore undertaken at the Institut de Recherche en Astrophysique et Planétologie (IRAP) an estimate of our GHG emissions so that we can devise an action plan that meets the challenge to drastically reduce emissions. In doing this exercise, we aimed at including all relevant sources of GHG emissions, comprising the purchase of goods and services and the use of data from research infrastructures, such as space telescopes and probes and ground-based observatories. While these sources were generally omitted in other works, the Bilan Carbone® method that we used for our estimate prescribes to include all sources for which our laboratory is responsible and on which our activity depends. In other words, to identify the sources that need to be included in the estimate, the question to ask is whether our activity would be impacted if a given source were removed. Obviously, removing the purchase of goods and services and the use of data from observing facilities would make our activity impossible.
In addition, research infrastructures are often invented, eventually built, and sometimes operated by researchers from our lab; hence, as astronomers we also share the responsibility for their existence. In total, we found that IRAP's GHG emissions in 2019 were \(51.5\pm 6.0\) tCO\({}_{2}\)e per astronomer, of which \(27.4\pm 4.8\) tCO\({}_{2}\)e were attributed to the use of observational data (Martin et al., 2022). Interestingly, the sources that were so far neglected in GHG emission estimates of other research laboratories dominate IRAP's GHG emissions, with 55% due to the use of observational data and 18% due to the purchase of goods and services, of which a substantial fraction is related to instrument developments. The next most important source of GHG emissions was professional travelling (16%); all remaining contributions sum up to only 11%. So IRAP's carbon footprint is largely dominated by inventing, developing, constructing and using research infrastructures, which is the core business of our institute. To understand whether this is specific to IRAP, or whether this is a general feature of astronomy, we went one step further and estimated the total carbon footprint of the world-fleet of astronomical observatories that were operating in 2019.

## 2 Estimate of the carbon footprint of the world-fleet of astronomical research infrastructures

### Method

We estimated the carbon footprint of astronomical research infrastructures using primarily a monetary method that relates cost to GHG emissions. This approach is known to have large uncertainties due to the aggregation of activities, products and monetary flows that may vary considerably from one facility or field of activity to another.
An alternative life-cycle assessment (LCA) methodology is recommended by key space industry actors (ESA LCA Working Group, 2016) as the optimal method to assess and reduce the carbon footprint of space missions, but it is difficult to implement in practice (especially for comparative or discipline-wide assessments) due to the confidential nature of the required input activity data (Maury et al., 2020). At present, a monetary-method analysis is thus the only feasible way to assess the combined carbon footprint of the world's space- and ground-based astronomical research infrastructures. For space missions, we complemented the monetary method with an alternative approach based on the payload launch mass. We adopted throughout this study an uncertainty of 80% for the carbon footprint estimate of individual facilities, as recommended by the French Agency for Ecological Transition (ADEME) for a monetary analysis (Breitenstein, 2021). For our estimate we followed the standard method of multiplying activity data with emission factors, including GHG emissions from constructing and operating the facilities.

We started by considering a list of facilities from which data were used in peer-reviewed journal articles published by IRAP researchers in 2019. The list includes 46 space missions and 39 ground-based observatories. For space missions, we estimated the carbon footprint by multiplying mission cost or payload launch mass with appropriate emission factors. Owing to their longer lifetimes compared to space missions, we separated construction from operations for ground-based observatories and estimated the carbon footprint by multiplying construction and operating costs with appropriate emission factors. The full list of cost and mass data that we gathered from the literature and the internet can be found in the Supplementary Information of Knödlseder et al. (2022).
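The monetary method amounts to a simple multiplication of activity data by an emission factor, with the ±80% uncertainty noted above. A minimal sketch (the EUR 1 billion mission cost is an assumed example value, not a figure from the study):

```python
# Monetary-method sketch: footprint = activity cost x emission factor,
# with the +/-80% uncertainty recommended by ADEME for monetary analyses.
cost_meur = 1000.0                 # assumed mission cost in MEUR (EUR 1 billion)
factor = 140.0                     # tCO2e per MEUR for space missions
footprint = cost_meur * factor     # central estimate in tCO2e (140 ktCO2e here)
low, high = 0.2 * footprint, 1.8 * footprint   # +/-80% uncertainty band
```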
To derive the carbon footprint of the world-fleet of astronomical facilities we only considered the infrastructures that were still operating in 2019, reducing our initial list from 85 to 75 facilities. We then used a bootstrap method to extrapolate the carbon footprint of the facilities in our list to an estimated number of 55 active space missions and 1142 ground-based observatories.* In short, the bootstrap method randomly selects \(M\) facilities from a reduced list of \(N\) infrastructures, selecting on average each infrastructure \(M/N\) times (with \(M\geq N\)). Summing up the carbon footprints of all selected infrastructures then provides a linear extrapolation of the carbon footprint from \(N\) to \(M\) infrastructures. Yet bootstrapping goes beyond a linear extrapolation in that it preserves the discrete character of the facilities, and by repeating the sampling process it provides a probability density distribution for the aggregated carbon footprint of the \(M\) facilities. We repeated the random sampling 10,000 times and used the mean and standard deviation of the results to provide an estimate of the value and uncertainty of the aggregated carbon footprint. In order to reduce the bias that may arise from the specific 75 facilities in our initial list, and to avoid mixing infrastructures with hugely different carbon footprints (such as small and large optical telescopes), we divided the facilities in our list into broad categories that reflect scientific topic and observatory type. Details of the method and estimates of the number of worldwide active facilities per category are provided in Knödlseder et al. (2022).

### Emission factors

We estimated dedicated emission factors for our study using existing carbon footprint estimates for space missions and ground-based observatories.
Specifically, life-cycle carbon footprints of space missions were estimated from the case studies of Wilson (2019), which covered the entire mission including the launcher and a few years of operations. From these studies, we inferred mean emission factors of 140 tonnes of CO\({}_{2}\) equivalent (tCO\({}_{2}\)e) per million EUR (MEUR) of mission cost and 50 \(\mathrm{tCO_{2}e}\) kg\({}^{-1}\) of payload launch mass. Emission factors of ground-based observatories were derived using existing carbon footprint assessments for the construction of two facilities and the operations of three facilities. We found a mean emission factor of 240 \(\mathrm{tCO_{2}e}\) MEUR\({}^{-1}\) for construction and of 250 \(\mathrm{tCO_{2}e}\) MEUR\({}^{-1}\) for operations.

A lower monetary emission factor for space missions is supported by the fact that space missions are much less material intensive compared with ground-based observatories after normalizing by cost. For example, the liftoff mass of a EUR 1 billion space mission launched with Ariane 5 ECA is about 790 tonnes, while the European Extremely Large Telescope (E-ELT), which has a similar cost, has a mass of about 60,000 tonnes. The space sector is in fact unique, and is characterized by low production rates, long development cycles and specialized materials and processes (Geerken et al., 2018).

The emission factors used in this study are summarised in Table 1, where they are compared to monetary emission factors selected from Breitenstein (2021), covering the range of values encountered for economic activity sectors in France. The comparison shows that emission factors for astronomical research infrastructures are at the low end of the range spanned by other economic activity sectors, implying that decarbonising observatory construction and operations will be challenging within the current socio-economic system.
Office work is an important contributor to the carbon footprint of the space sector (Chanoine et al., 2017), which is in agreement with the observation that its emission factor is close to that of office work activities such as insurance, banking and advisory services. As explained above, constructing ground-based observatories is considerably more material intensive than building a space mission, hence a larger emission factor for ground-based observatory construction with respect to space missions is plausible. Due to the lack of published information we were not able to derive a specific emission factor for space mission operations, yet since the underlying infrastructures and activities are similar to operations of ground-based observatories it seems plausible that their emission factors are comparable. We note that the emission factor of operations depends sensitively on the carbon intensity of electricity generation (which is an important contributor to the overall operations footprint) and the number of persons needed for operations (which is an important contributor to the overall operating costs). Consequently, the operations emission factor for a specific facility may deviate significantly from our estimated average value, yet since we are considering here only the aggregated carbon footprint of astronomical facilities such deviations should average out. ### Results The aggregated results of our estimation are summarised in Table 2. Two sets of values are presented: the first where we bootstrap-sampled all research infrastructures in each of the categories, and the second where we bootstrap-sampled all except the facilities with the largest carbon footprint in each category (the footprints of the non-sampled facilities were simply added to the bootstrap result). 
The latter approach is motivated by the possibility that the largest carbon footprint in a given category arises from a facility that is unique in the world, hence excluding this facility from the sampling avoids that the bootstrap sampling selects this unique facility multiple times. Examples for such unique facilities in our initial list are the Hubble space telescope or the ALMA observatory, which have annual carbon footprints of several tens of \(\rm{ktCO_{2}e~{}yr^{-1}}\). \begin{table} \begin{tabular}{l l} \hline Activity & Emission factor \\ \hline Space missions (based on payload launch mass) & 50 \(\mathrm{tCO_{2}e}\) kg\({}^{-1}\) \\ Space missions (based on mission cost) & 140 \(\mathrm{tCO_{2}e}\) MEUR\({}^{-1}\) \\ Ground-based observatory construction & 240 \(\mathrm{tCO_{2}e}\) MEUR\({}^{-1}\) \\ Ground-based observatory operations & 250 \(\mathrm{tCO_{2}e}\) MEUR\({}^{-1}\) \\ \hline Insurance, banking and advisory services & 110 \(\mathrm{tCO_{2}e}\) MEUR\({}^{-1}\) \\ Architecture and engineering, building maintenance & 170 \(\mathrm{tCO_{2}e}\) MEUR\({}^{-1}\) \\ Installation and repair of machines and equipment & 390 \(\mathrm{tCO_{2}e}\) MEUR\({}^{-1}\) \\ Metal products (aluminum, copper, steel, etc.) & 1700 \(\mathrm{tCO_{2}e}\) MEUR\({}^{-1}\) \\ Mineral products (concrete, glass, etc.) & 1800 \(\mathrm{tCO_{2}e}\) MEUR\({}^{-1}\) \\ \hline \end{tabular} \end{table} Table 1: Emission factors. The second approach is thus more conservative; together with the first approach, it plausibly brackets the true value of the carbon footprint of astronomical research infrastructures. Table 2 gives both the lifecycle and the annual footprints. The lifecycle footprint includes the contributions from construction and operations until 2019, while the annual footprint is the sum of the lifecycle footprint of each facility divided by its lifetime, defined as the time since start of operations, or ten years, whichever is longer. 
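The two ingredients just described, the annual-footprint definition and the bootstrap extrapolation, can be sketched as follows (the facility numbers are invented for illustration; they are not the actual data):

```python
import random
import statistics

def annual_footprint(lifecycle, years_operating):
    """Annual footprint: lifecycle footprint divided by the lifetime,
    with the lifetime floored at ten years, as defined above."""
    return lifecycle / max(years_operating, 10)

def bootstrap_total(per_facility, m, n_draws=10_000, seed=1):
    """Extrapolate the aggregated footprint of a sample of known
    facilities to an estimated world total of m facilities."""
    rng = random.Random(seed)
    totals = [sum(rng.choice(per_facility) for _ in range(m))
              for _ in range(n_draws)]
    return statistics.mean(totals), statistics.stdev(totals)

# Hypothetical category: 5 known observatories, ~40 active worldwide.
# Tuples are (lifecycle footprint in ktCO2e, years of operation).
known = [annual_footprint(f, y)
         for f, y in [(20.0, 10), (55.0, 11), (24.0, 20),
                      (80.0, 10), (33.0, 4)]]
mean, std = bootstrap_total(known, m=40)
# mean lands close to the linear extrapolation 40 * average(known),
# while std quantifies the sampling uncertainty of the extrapolation.
```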
While the lifecycle footprint aggregates carbon footprints over different time periods, and hence is of limited use, the annual footprint is an estimate of the yearly GHG emissions of the considered research infrastructures. The last row provides the average results between both bootstrapping approaches, with the differences between the results added to the quoted uncertainties. Our analysis hence suggests that the world-fleet of astronomical facilities that were operating in 2019 had an annual carbon footprint of \(1.2\pm 0.2~{}\rm{MtCO_{2}e~{}yr^{-1}}\). Dividing the annual carbon footprint by an estimated number of 30,000 astronomers in the world gives a footprint of \(42.8\pm 7.7~{}\rm{tCO_{2}e~{}yr^{-1}}\) per average astronomer for the first bootstrapping approach and \(35.1\pm 4.6~{}\rm{tCO_{2}e~{}yr^{-1}}\) for the second. These results are somewhat larger than the estimated footprint of \(27.4\pm 4.8~{}\rm{tCO_{2}e~{}yr^{-1}}\) related to the use of observational data for an average IRAP astronomer, yet the IRAP estimate is based on a restricted list of facilities which may tend to underestimate the true footprint. Nevertheless, taking all these results at face value, we derive an estimate of \(36.6\pm 14.0~{}\rm{tCO_{2}e~{}yr^{-1}}\) for the annual carbon footprint of the world-fleet of astronomical facilities per average astronomer that comprises all individual results and their uncertainties. ## 3 Consequences According to our analysis, astronomical research infrastructures appear to be the single most important contributor to the carbon footprint of an average astronomer. Additional contributions include purchase of goods and services, travelling and commuting, supercomputing, running the office building and meals, which for IRAP add up to an additional carbon footprint of 23 \(\rm{tCO_{2}e~{}yr^{-1}}\) per astronomer, resulting in a total professional annual footprint of an average astronomer of about \(50~{}\rm{tCO_{2}e~{}yr^{-1}}\). 
Adding also the astronomer's lifestyle footprint, estimated at \(10~{}\rm{tCO_{2}e~{}yr^{-1}}\) for upper class consumers in France (Lenglart et al., 2010), leads to an estimated annual footprint of about \(60~{}\rm{tCO_{2}e~{}yr^{-1}}\) for an average astronomer in France. Keeping global warming below \(1.5\,^{\circ}\)C or \(2\,^{\circ}\)C with a reasonable chance requires GHG emission reductions of 84% or 63%, respectively, by 2050 with respect to 2019 (IPCC 2022), corresponding to average annual emission reductions of about 6% or 3%. GHG emissions are not equally distributed between regions, activities and humans, requiring more than average reductions by important emitters to assure the social acceptability of the efforts. Our analysis suggests that astronomers are important emitters, and consequently asking for an order-of-magnitude reduction of their GHG emissions over the coming 30 years is not implausible. Obviously, astronomy has not only an environmental but also a societal impact, and finding the right balance between these impacts needs to be the subject of public debate. Yet this applies to all sectors of human activity, be it any scientific sector, or sectors that satisfy basic human needs, such as agriculture, housing, health care, dressing and transport. Exempting astronomy from significant GHG emission reductions seems thus difficult to justify. 
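The per-astronomer figures and required reduction rates quoted above follow from simple arithmetic (the 30,000-astronomer estimate and the aggregated annual footprints in Table 2 are from the text; the 31-year horizon spans 2019-2050):

```python
# Per-astronomer annual footprint from the aggregated annual
# footprints (ktCO2e/yr, converted to tCO2e/yr below).
astronomers = 30_000
per_astro_all = 1283e3 / astronomers    # all facilities sampled
per_astro_excl = 1054e3 / astronomers   # largest facilities excluded
# -> about 42.8 and 35.1 tCO2e/yr, as quoted in the text.

# Constant annual reduction rate needed to cut emissions by 84%
# (1.5 degC pathway) or 63% (2 degC pathway) over 31 years:
rate_15 = 1 - (1 - 0.84) ** (1 / 31)    # about 6% per year
rate_2 = 1 - (1 - 0.63) ** (1 / 31)     # about 3% per year
```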
\begin{table} \begin{tabular}{l c c} \hline Category & Lifecycle footprint (\(\rm{MtCO_{2}e}\)) & Annual footprint (\(\rm{ktCO_{2}e~{}yr^{-1}}\)) \\ \hline \multicolumn{3}{c}{All facilities sampled} \\ Space missions (cost-based) & \(8.4\pm 2.0\) & \(596\pm 111\) \\ Space missions (mass-based) & \(6.4\pm 1.2\) & \(455\pm 74\) \\ Ground-based observatories & \(14.2\pm 1.5\) & \(757\pm 131\) \\ Total & \(21.6\pm 3.2\) & \(1283\pm 232\) \\ \hline \multicolumn{3}{c}{Facility with largest footprint excluded from sampling} \\ Space missions (cost-based) & \(7.1\pm 1.4\) & \(490\pm 79\) \\ Space missions (mass-based) & \(5.8\pm 0.9\) & \(417\pm 65\) \\ Ground-based observatories & \(12.6\pm 1.0\) & \(600\pm 70\) \\ Total & \(19.0\pm 2.3\) & \(1054\pm 137\) \\ \hline Total (average) & \(20.3\pm 3.3\) & \(1168\pm 249\) \\ \hline \end{tabular} \end{table} Table 2: Carbon footprint of world-fleet of astronomical research infrastructures active in 2019. ## 4 Taking action Coming back to astronomical research infrastructures, reducing their carbon footprint requires first that each planned or existing facility performs a detailed environmental lifecycle analysis, informing quantified action plans to reduce their emissions. Progress in the implementation of the action plans needs to be monitored, and plans adapted if needed. LCA results, action plans and achievements need to be made public, so that the progress on GHG emission reductions is transparent and fairness can be assured. For proposed facilities LCA results should inform implementation decisions, while for existing facilities LCA results should inform decarbonisation plans. Possible actions include switching to renewable energies for observatory operations, reducing air-travelling and avoiding air-shipping, moving to electric vehicle fleets, and extending equipment lifetime. 
With such measures, the European Southern Observatory (ESO) plans to reduce its operations-related GHG emissions of 28 ktCO2e yr\({}^{-1}\) in 2018 by up to 4.4 ktCO2e yr\({}^{-1}\) over the next years, corresponding to a reduction of 15%. This is an important step, yet falls short of the required reduction levels mentioned above. In addition, ESO is currently building the E-ELT with an estimated construction carbon footprint of at least 63.7 ktCO2e (ESO, personal communication), corresponding to about 15 years of GHG emission savings. Operating the E-ELT will add additional GHG emissions, as illustrated by the past and predicted annual carbon footprint of electricity consumption at the ESO observatory sites in Chile, shown in Fig. 1. While between 2016 and 2022 a reduction of GHG emissions from electricity consumption by \(\sim 50\%\) was achieved (by swapping at Paranal from liquid petrol gas generators to a grid connection in 2018 and adding photovoltaic power plants in 2022), the additional electricity needs of E-ELT will have annihilated all the reductions by the end of this decade; despite important efforts, the GHG emissions due to electricity consumption will exceed in 2030 those of 2016. This illustrates an obvious but inconvenient truth: it is extremely difficult to decarbonise while ramping up! ESO is so far the only organisation that provides public information on carbon footprint estimates and reduction plans, which obviously exposes the organisation to being used as a case study. There are no reasons to believe that the situation is different for other organisations, at least as long as they continue to expand. It's up to these other organisations to prove us wrong, yet until this is done, we should accept that reducing the GHG emissions of astronomy is challenging while continuing with the deployment of new facilities at the current pace. 
Figure 1: Past and predicted annual carbon footprint of electricity consumption at the ESO observatory sites in La Silla, Paranal and Armazones (data from Filippi et al. 2022). ## 5 Towards sustainable astronomy Obviously, all this calls for deep changes in how astronomy is done in the future, but given the required order of magnitude reductions in GHG emissions, how could it be otherwise? A first step would be to use what we already have and move towards a more extended and deeper analysis of existing astronomical data archives. It is well recognised that archives are valuable resources for astronomy, and a significant fraction of discoveries is made by exploring already existing data (e.g. White et al. 2009; De Marchi 2022). Use of archival data should be actively promoted and be considered when evaluating operation extensions. Resources should be allocated according to carbon footprint, having in mind that remaining carbon budgets that keep global warming below \(1.5\,^{\circ}\)C or \(2\,^{\circ}\)C shrink rapidly. Today, no funding agency is investing significantly into decarbonising research infrastructures; tomorrow, decarbonising existing facilities must become their funding priority! This also means that less money will be available to build new infrastructures, yet is this really a problem? Stoehr et al. (2015) argue that, in the future, observatories will compete for astronomers to work with their data, which if true seems to indicate that we may have already too many facilities. There is no requirement _per se_ on the deployment pace of new facilities or missions, and slowing down the current pace will lead to less GHG emissions, free resources for investing into decarbonisation and give more time for in-depth analyses of existing data. Another measure is moving away from competition towards more collaboration. 
If we really believe that astronomers are working for mankind, there is no need to build the same kind of facility several times on the globe. For example, one 40-m class telescope in the world should be sufficient to make the discoveries to be made with such an instrument. And there is no scientific justification for having a new space-race towards the planets; a few well-coordinated international missions should be sufficient to gain the knowledge we are after. Of course, astronomy is not the root cause of climate change, nor can astronomy alone fix it, but astronomy with its significant per capita GHG emissions must be exemplary and take its fair share, leading the way towards a sustainable future on Earth. The author would like to thank R. Arsenault, S. Brau-Nogue, M. Coriat, P. Garnier, A. Hughes, P. Martin and L. Tibaldo for useful discussions. This work has benefited from discussions within the GDR Labos 1point5.
2307.09931
DISA: DIfferentiable Similarity Approximation for Universal Multimodal Registration
Matteo Ronchetti, Wolfgang Wein, Nassir Navab, Oliver Zettinig, Raphael Prevost
2023-07-19T12:12:17Z
http://arxiv.org/abs/2307.09931v1
# DISA: Differentiable Similarity Approximation for Universal Multimodal Registration ###### Abstract Multimodal image registration is a challenging but essential step for numerous image-guided procedures. Most registration algorithms rely on the computation of complex, frequently non-differentiable similarity metrics to deal with the appearance discrepancy of anatomical structures between imaging modalities. Recent Machine Learning based approaches are limited to specific anatomy-modality combinations and do not generalize to new settings. We propose a generic framework for creating expressive cross-modal descriptors that enable fast deformable global registration. We achieve this by approximating existing metrics with a dot-product in the feature space of a small convolutional neural network (CNN) which is inherently differentiable and can be trained without registered data. Our method is several orders of magnitude faster than local patch-based metrics and can be directly applied in clinical settings by replacing the similarity measure with the proposed one. Experiments on three different datasets demonstrate that our approach generalizes well beyond the training data, yielding a broad capture range even on unseen anatomies and modality pairs, without the need for specialized retraining. We make our training code and data publicly available. Keywords: Image Registration · Multimodal · Metric Learning · Differentiable Deformable Registration ## 1 Introduction Multimodal imaging has become increasingly popular in healthcare due to its ability to provide complementary anatomical and functional information. However, to fully exploit its benefits, it is crucial to perform accurate and robust registration of images acquired from different modalities. Multimodal image registration is a challenging task due to differences in image appearance, acquisition protocols, and physical properties of the modalities. 
This holds in particular if ultrasound (US) is involved, and has not been satisfactorily solved so far. While simple similarity measures directly based on the images' intensities such as sum of absolute (L1) or squared (L2) differences and normalized cross-correlation (NCC) [16] work well in monomodal settings, a more sophisticated approach is needed when intensities cannot be directly correlated. Historically, a breakthrough in CT-MRI registration was achieved by Viola and Wells, who proposed Mutual Information [19]. Essentially, it abstracts the problem to the statistical concept of information theory and optimizes image-wide alignment statistics. Broken down to patch level and inspired by ultrasound physics, the Linear Correlation of Linear Combination (LC\({}^{2}\)) measure has been shown to work well for US to MRI or CT registration [22, 2]. While dealing well with US specifics, it is not differentiable and expensive to compute. As an alternative to directly assessing similarity on the original images, various groups have proposed to first compute intermediate representations, and then align these with conventional L1 or L2 metrics [20, 5]. A prominent example is the Modality-Independent Neighbourhood Descriptor (MIND) [5], which is based on image self-similarity and has with minor adaptations (denoted MIND-SSC for self-similarity context) also been applied to US problems [7]. Most recently, it has been shown that using 2D confidence-map-based weighting and adaptive normalization may further improve registration accuracy [21]. Yet, such feature descriptors are not expressive enough to cope with complex US artifacts and exhibit many local optima, therefore requiring closer initialization. More recently, multimodal registration has been approached using various Machine Learning (ML) techniques. 
Some of these methods involve the utilization of Convolutional Neural Networks (CNN) to extract segmentation volumes from the source data, transforming the problem into the registration of label maps [13, 24]. Although these methods have demonstrated promising results, they are anatomy-specific and require the identification and labeling of structures that are visible in both modalities. Other approaches are trained using ground truth registrations to directly predict the pose [9, 12] or to establish keypoint correspondences [11, 1]. However, these methods are not generalizable to different anatomies or modalities. Moreover, the paucity of precise and unambiguous ground truth registration, particularly in abdominal MR-US registration, exacerbates the overfitting problem, restricting generalization even within the same modality and anatomy. It has furthermore been proposed in the past to utilize CNNs as a replacement for a similarity metric. In [3, 17], the two images being registered are resampled into the same grid in each optimizer iteration, concatenated and fed into a network for similarity evaluation. While such a measure can directly be integrated into existing registration methods, it still suffers from similar limitations in terms of runtime performance and modality dependence. In contrast, we propose in this work to use a small CNN to approximate an expensive similarity metric with a straightforward dot product in its feature space. Crucially, our method does not require evaluating the CNN at every optimizer iteration. This approach combines ML and classical multimodal image registration techniques in a novel way, avoiding the common limitations of ML approaches: ground truth registration is not required, it is differentiable and computationally efficient, and generalizes well across anatomies and imaging modalities. 
## 2 Approach We formulate image registration as an optimization problem of a similarity metric \(s\) between the moving image \(M\) and the fixed image \(F\) with respect to the parameters \(\boldsymbol{\alpha}\) of a spatial transformation \(T_{\alpha}:\Omega\rightarrow\Omega\). Most multi-modal similarity metrics are defined as weighted sums of local similarities computed on patches. Denoting \(M\circ T_{\alpha}\) the deformed image, the optimization target can be expressed in the following way: \[f(\alpha)=\sum_{p\in\Omega}w(p)\ s(F[p],M\circ T_{\alpha}[p])\,, \tag{1}\] where \(w(p)\) is the weight assigned to the point \(p\), \(s(\cdot,\cdot)\) defines a local similarity and the \([\cdot]\) operator extracts a patch (or a pixel) at a given spatial location. This definition encompasses SSD but also other more elaborate metrics like \(LC^{2}\) or MIND. The function \(w\) is typically used to reduce the impact of patches with ambiguous content (e.g. with uniform intensities), or can be chosen to encode prior information on the target application. The core idea of our method is to approximate the similarity metric \(s(P_{1},P_{2})\) of two image patches with a dot product \(\langle\phi(P_{1}),\phi(P_{2})\rangle\) where \(\phi(\cdot)\) is a function that extracts a feature vector, for instance in \(\mathbb{R}^{16}\), from its input patch. When \(\phi\) is a fully convolutional neural network (CNN), we can simply feed it the entire volume in order to pre-compute the feature vectors of every voxel with a single forward pass. The registration objective (Eq. 1) is then approximated as \[f(\alpha)\approx\sum_{p\in\Omega}w(p)\ \langle\phi(F)[p],\phi(M)\circ T_{ \alpha}[p]\rangle\,, \tag{2}\] thus converting the original problem into a registration of pre-computed feature maps using a simple and differentiable dot product similarity. This approximation is based on the assumption that the CNN is approximately equivariant to the transformation, i.e. 
\(\phi(M\circ T_{\alpha})[p]\approx\phi(M)\circ T_{\alpha}[p]\). Our experiments show that this assumption (implicitly made also by other descriptors like MIND) does not present any practical impediment. Our method exhibits a large capture range and can converge over a wide range of rotations and deformations. **Advantages** In contrast to many existing methods, our approach doesn't require any ground truth registration and can be trained using patches from unregistered pairs of images. This is particularly important for multi-modal deformable registration as ground truths are harder to define, especially on ultrasound. The simplicity of our training objective allows the use of a CNN with a limited number of parameters and a small receptive field. This means that the CNN has a negligible computational cost and can generalize well across anatomies and modalities: a single network can be used for all types of images and does not need to be retrained for a new task. Furthermore, the objective function (Eq. 2) can be easily differentiated without backpropagating the gradient through the CNN. This permits efficient gradient-based optimization, even when the original metric is either non-differentiable or costly to differentiate. Finally, we quantize the feature vectors to 8-bit precision, further increasing the computational speed of registration without impacting accuracy. ## 3 Method We train our model to approximate the three-dimensional LC\({}^{2}\) similarity, as it showed good performance on a number of tasks, including ultrasound [22, 2]. The LC\({}^{2}\) similarity quantifies whether a target patch can be approximated by a linear combination of the intensities and the gradient magnitude of the source patch. In order to reduce the sensitivity to the scale, our target is actually the average LC\({}^{2}\) over radii of 3, 5, and 7. 
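What the network learns to approximate can be illustrated with a simplified, single-radius, unweighted LC²-style similarity on flattened patches (the actual implementation in [22, 2] differs in details such as patch weighting and the multi-radius averaging described above):

```python
import numpy as np

def lc2_patch(target, source, grad_mag):
    """Simplified LC2: 1 - Var(residual)/Var(target) after a
    least-squares fit of the target patch by the source intensities,
    the source gradient magnitude and a constant offset."""
    X = np.stack([source, grad_mag, np.ones_like(source)], axis=1)
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ coef
    var_t = np.var(target)
    if var_t < 1e-12:
        return 0.0  # a uniform target patch carries no information
    return float(1.0 - np.var(resid) / var_t)

src = np.linspace(0.0, 1.0, 27)        # flattened 3x3x3 source patch
grad = np.abs(src - 0.5)               # toy stand-in for |gradient|
tgt = 2.0 * src + 0.7 * grad - 0.5     # exact linear combination
# A target that is a linear combination of the regressors scores 1,
# while an unrelated target scores somewhere in [0, 1].
```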
In order to be consistent with the original implementation of LC\({}^{2}\) we use the same weighting function \(w\) based on local patch variance. Note that the network will be trained only once, on a fixed dataset that is fully independent of the datasets that will be used in the evaluation (see Section 4). **Dataset** Our neural network is trained using patches from the "Gold Atlas - Male Pelvis - Gentle Radiotherapy" [14] dataset, which comprises 18 patients, each with CT, MR T1, and MR T2 volumes. We resample each volume to a spacing of \(2mm\) and normalize the voxel intensities to have zero mean and standard deviation of one. Since our approach is unsupervised, we don't make use of the provided registration but leave the volumes in their standard DICOM orientation. As LC\({}^{2}\) requires the gradient magnitude in one of the modalities, we randomly pick it from either CT or MR. Figure 1: Similarity maps across different modalities and anatomies. Each heatmap shows the similarity of the marked point on the source image to every point in the target image. Our method (DISA-LC\({}^{2}\)) approximates LC\({}^{2}\) well in a fraction of the computation time and produces less ambiguous heatmaps than MIND. We would like to report that, initially, we also made use of a proprietary dataset including US volumes. However, as our investigation progressed, we observed that the incorporation of US data did not significantly contribute to the generalization capabilities of our model. Consequently, for the purpose of ensuring reproducibility, all evaluations presented in this paper exclusively pertain to the model trained solely on the public MR-CT dataset. 
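Once descriptors are precomputed, the approximated objective of Eq. (2) is just a weighted sum of voxel-wise dot products. A toy sketch with a stand-in feature extractor (the real \(\phi\) is the trained CNN; the one below is only a deterministic placeholder that reproduces the norm-clipping behaviour):

```python
import numpy as np

def toy_phi(volume, channels=16, seed=0):
    """Stand-in for the trained CNN: per-voxel descriptors with their
    norm clipped at 1 (here just a fixed linear map of the intensity)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(channels)
    feats = volume[..., None] * w
    norms = np.linalg.norm(feats, axis=-1, keepdims=True)
    return feats / np.maximum(norms, 1.0)   # clip descriptor norm at 1

def disa_similarity(feat_fixed, feat_moving_warped, weights):
    """Eq. (2): weighted sum of voxel-wise descriptor dot products."""
    dots = np.sum(feat_fixed * feat_moving_warped, axis=-1)
    return float(np.sum(weights * dots))

fixed = np.random.default_rng(1).standard_normal((8, 8, 8))
weights = np.ones(fixed.shape)
f_fix = toy_phi(fixed)
aligned = disa_similarity(f_fix, toy_phi(fixed), weights)
shifted = disa_similarity(f_fix, toy_phi(np.roll(fixed, 3, axis=0)), weights)
# A perfectly aligned moving volume scores higher than a shifted one.
```

Because \(\phi\) is applied once per volume, only the dot products need to be recomputed when the transformation parameters change during optimization.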
**Patch sampling from unregistered datasets** For each pair of volumes \((M,F)\) we repeat the following procedure 5000 times: (1) Select a patch from \(M\) with probability proportional to its weight \(w\); (2) Compute the similarity with all the patches of \(F\); (3) Uniformly sample \(t\in[0,1]\); (4) Pick the patch of \(F\) with similarity score closest to \(t\). Running this procedure on our training data results in a total of 510000 pairs of patches. **Architecture and Training** We use the same feed-forward 3D CNN to process all data modalities. The proposed model is composed of residual blocks [4], LeakyReLU activations [10] and uses BlurPool [25] for downsampling, resulting in a total striding factor of 4. We do not use any normalization layer, as this resulted in a reduction in performance. The output of the model is a 16-channel volume with the norm of each voxel descriptor clipped at 1. The architecture consists of ten layers and a total of 90,752 parameters, making it notably smaller than many commonly utilized neural networks. Augmentation on the training data is used to make the model as robust as possible while leaving the target similarity unchanged. In particular, we apply the same random rotation to both patches, randomly change the sign and apply a random linear transformation on the intensity values. We train our model for 35 epochs using the L2 loss and batch size of 256. The training converges to an average patch-wise L2 error of 0.0076 on the training set and 0.0083 on the validation set. The total training time on an NVIDIA RTX4090 GPU is 5 hours, and inference on a \(256^{3}\) volume takes \(70ms\). We make the training code and preprocessed data openly available online 1. 
Footnote 1: [https://github.com/ImFusionGmbH/DISA-universal-multimodal-registration](https://github.com/ImFusionGmbH/DISA-universal-multimodal-registration) ## 4 Experiments and Results We present an evaluation of our approach across tasks involving diverse modalities and anatomies. Notably, the experimental data utilized in our analysis differs significantly from our model's training data in terms of both anatomical structures and combination of modalities. To assess the effectiveness of our method, we compare it against LC\({}^{2}\), which is the metric we approximate, and MIND-SSC [7]. In all experiments, we use a Wilcoxon signed-rank test with a p-value threshold of \(10^{-2}\) to establish the significance of our results. As will be demonstrated in the next subsections, our method is capable of achieving accuracy comparable to LC\({}^{2}\) while retaining the speed and flexibility of MIND-SSC. In particular, on abdominal US registration (Section 4.3) our method obtains a significantly larger capture range, opening new possibilities for tackling this challenging problem. ### Affine Registration of Brain US-MR In this experiment, we evaluate the performance of different methods for estimating affine registration of the REtroSpective Evaluation of Cerebral Tumors (RESECT) MICCAI challenge dataset [23]. This dataset consists of 22 pairs of pre-operative brain MRs and intra-operative ultrasound volumes. The initial pose of the ultrasound volumes exhibits an orientation close to the ground truth but can contain a significant translation shift. For both MIND-SSC and DISA-LC\({}^{2}\), we resample the input volumes to \(0.4mm\) spacing and use the BFGS [18] optimizer with 500 random initializations within a range of \(\pm 10^{\circ}\) and \(\pm 25mm\). We report the obtained Fiducial Registration Errors (FRE) in Table 1. DISA-LC\({}^{2}\) is significantly better than MIND-SSC while the difference with LC\({}^{2}\) is not significant. 
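The multi-start search used here (keeping the best local optimum over many initializations) can be sketched on a toy 1-D objective. Plain gradient descent stands in for BFGS, and evenly spaced starts replace the paper's random initializations for reproducibility; none of the numbers below come from the paper:

```python
import math

def objective(x):
    """Toy 1-D non-convex cost (stand-in for a negated similarity)."""
    return (x - 2.0) ** 2 + math.sin(5 * x)

def local_minimize(f, x0, lr=0.01, steps=2000, eps=1e-5):
    """Plain gradient descent with a numerical gradient, standing in
    for a quasi-Newton local optimizer such as BFGS."""
    x = x0
    for _ in range(steps):
        g = (f(x + eps) - f(x - eps)) / (2 * eps)
        x -= lr * g
    return x

def multi_start(f, n_starts=50, lo=-10.0, hi=10.0):
    """Best local optimum over many initializations of the local search."""
    starts = [lo + i * (hi - lo) / (n_starts - 1) for i in range(n_starts)]
    return min((local_minimize(f, s) for s in starts), key=f)

x_star = multi_start(objective)   # lands in the global basin near x ~ 2.2
```

A single local search from a poor start gets trapped in one of the many local minima; taking the best over many starts is what gives the method its capture range.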
In conclusion, our experiments demonstrate that the proposed DISA-LC\({}^{2}\), combined with a simple optimization strategy, is capable of achieving equivalent performance to manually tuned LC\({}^{2}\). ### Deformable Registration of Abdominal CT-MR This experiment uses pairs of CT and MR volumes, each covering the abdominal region of a single patient and exhibiting notable deformations. We estimate dense deformation fields using the methodology outlined in [6] (without inverse consistency) which first estimates a discrete displacement using explicit search and then iteratively enforces global smoothness. Segmentation maps of anatomical structures are used to measure the quality of the registration. In particular, we compute the 25th, 50th, and 75th quantile of the Dice Similarity Coefficient (DSC) and the 95th quantile of the Hausdorff distance (HD95) between the registered label maps. We compare MIND-SSC and DISA-LC\({}^{2}\) used with different strides and followed by a downsampling operation that brings the spacing of the descriptor volumes to \(8mm\). The hyperparameters of the registration algorithm have been manually optimized for each approach. Table 2 shows that our method obtains significantly better results than MIND-SSC on the DSC metrics while not being significantly better on HD95. ### Deformable Registration of Abdominal US-CT and US-MR As the most challenging experiment, we finally use our method to achieve deformable registration of abdominal 3D freehand US to a CT or MR volume. We are using a heterogeneous dataset of 27 cases, comprising liver cancer patients and healthy volunteers, different ultrasound machines, as well as optical vs. electro-magnetic external tracking, and sub-costal vs. inter-costal scanning of the liver. All 3D ultrasound data sets are accurately calibrated, with overall system errors in the range of commercial ultrasound fusion options. Between 4 and 9 landmark pairs (vessel bifurcations, liver gland borders, gall bladder, kidney) were manually annotated by an expert. 
In order to measure the capture range, we start the registration from 50 random rigid poses around the ground truth and calculate the Fiducial Registration Error (FRE) after optimization. For local optimization, LC\({}^{2}\) is used in conjunction with BOBYQA [15] as in the original paper [22], while MIND-SSC and DISA-LC\({}^{2}\) are instead used with BFGS. Due to an excessive computation time, we do not perform global optimization with LC\({}^{2}\), while with the other methods we use BFGS with 500 random initializations within a range of \(\pm 40^{\circ}\) and \(\pm 150mm\). Figure 2: Boxplot of fiducial registration errors for the different methods on deformable registration of abdominal US-CT and US-MR. We use six parameters to define the rigid pose and two parameters to describe the deformation caused by the ultrasound probe pressure. From the results shown in Table 3 and Figure 2, it can be noticed that the proposed method obtains a significantly larger capture range than MIND-SSC and LC\({}^{2}\) while being more than 300 times faster per evaluation than LC\({}^{2}\) (the times reported in the table include not just the optimization but also descriptor extraction). The differentiability of our objective function allows our method to converge in fewer iterations than derivative-free methods like BOBYQA. Furthermore, the evaluation speed of our objective function allows us to exhaustively search the solution space, escaping local minima and converging to the correct solution with pose and deformation parameters at once, in less than two seconds. Note that this registration problem is much more challenging than the prior two due to difficult ultrasonic visibility in the abdomen, strong deformations, and ambiguous matches of liver vasculature. Therefore, to the best of our knowledge, these results present a significant leap towards reliable and fully automatic fusion, doing away with cumbersome manual landmark placements.
## 5 Conclusion We have discovered that a complex patch-based similarity metric can be approximated with feature vectors from a CNN with particularly small architecture, using the same model for any modality. The training is unsupervised and merely requires unregistered data. After features are extracted from the volumes, the actual registration comprises a simple iterative dot-product computation, allowing for global and derivative-based optimization. This novel combination of classical image processing and machine learning elevates multi-modal registration to a new level of performance, generality, but also algorithm simplicity. We demonstrate the efficiency of our method on three different use cases with increasing complexity. In the most challenging scenario, it is possible to perform \begin{table} \begin{tabular}{|l l|c c c c|c|c|} \hline & \multicolumn{3}{c|}{**Converged cases w.r.t. initialization error**} & \multicolumn{1}{c|}{**Time**} & \multicolumn{1}{c|}{**Num.**} \\ **Similarity** & **Search** & **0-25mm** & **25-50mm** & **50-75mm** & **75-100mm** & **(s)** & **eval.** \\ \hline MIND-SSC & Local & 23.6\% & 0.0\% & 0.0\% & 0.0\% & 0.4 & 17 \\ LC\({}^{2}\) & Local & 54.1\% & 14.0\% & 0.0\% & 0.0\% & 1.9 & 98 \\ DISA-LC\({}^{2}\) & Local & **70.3\%** & 52.0\% & 21.1\% & 5.8\% & 0.9 & 70 \\ \hline MIND-SSC & Global & 17.9\% & 14.6\% & 5.3\% & 12.0\% & 1.3 & 26370 \\ LC\({}^{2}\) & Global & & N/A & & & 948.0* & 38740* \\ DISA-LC\({}^{2}\) & Global & **75.5\%** & **73.2\%** & **65.0\%** & **64.0\%** & 1.8 & 29250 \\ \hline \end{tabular} \end{table} Table 3: Results on deformable registration of abdominal US-CT and US-MR. A case is considered “converged” if the FRE after registration is less than \(15mm\). The best results and the ones not significantly different from them are highlighted in bold. (*)Time and evaluations for Global LC\({}^{2}\) are estimated by extrapolation. 
global optimization within seconds of both pose and deformation parameters, without any organ-specific distinction or successive increase of parameter sizes. While we specifically focused on developing an unsupervised and generic method, a sensible extension would be to specialize our method by including global information, such as segmentation maps, into the approximated measure or by making use of ground-truth registration during training. Finally, the cross-modality feature descriptors produced by our model could be exploited by future research for tasks different from registration such as modality synthesis or segmentation.
2304.10930
Local dimer dynamics in higher dimensions
We consider local dynamics of the dimer model (perfect matchings) on hypercubic boxes $[n]^d$. These consist of successively switching the dimers along alternating cycles of prescribed (small) lengths. We study the connectivity properties of the dimer configuration space equipped with these transitions. Answering a question of Freire, Klivans, Milet and Saldanha, we show that in three dimensions any configuration admits an alternating cycle of length at most 6. We further establish that any configuration on $[n]^d$ features order $n^{d-2}$ alternating cycles of length at most $4d-2$. We also prove that the dynamics of dimer configurations on the unit hypercube of dimension $d$ is ergodic when switching alternating cycles of length at most $4d-4$. Finally, in the planar but non-bipartite case, we show that parallelogram-shaped boxes in the triangular lattice are ergodic for switching alternating cycles of lengths 4 and 6 only, thus improving a result of Kenyon and R\'emila, which also uses 8-cycles. None of our proofs make reference to height functions.
Ivailo Hartarsky, Lyuben Lichev, Fabio Toninelli
2023-04-21T13:06:11Z
http://arxiv.org/abs/2304.10930v2
# Local dimer dynamics in higher dimensions ###### Abstract We consider local dynamics of the dimer model (perfect matchings) on hypercubic boxes \([n]^{d}\). These consist of successively switching the dimers along alternating cycles of prescribed (small) lengths. We study the connectivity properties of the dimer configuration space equipped with these transitions. Answering a question of Freire, Klivans, Milet and Saldanha, we show that in three dimensions any configuration admits an alternating cycle of length at most \(6\). We further establish that any configuration on \([n]^{d}\) features order \(n^{d-2}\) alternating cycles of length at most \(4d-2\). We also prove that the dynamics of dimer configurations on the unit hypercube of dimension \(d\) is ergodic when switching alternating cycles of length at most \(4d-4\). Finally, in the planar but non-bipartite case, we show that parallelogram-shaped boxes in the triangular lattice are ergodic for switching alternating cycles of lengths \(4\) and \(6\) only, thus improving a result of Kenyon and Remila, which also uses \(8\)-cycles. None of our proofs make reference to height functions. **MSC2020:** 05B50; 05C70; 82C20 **Keywords:** dimers, dominoes, local dynamics, ergodicity ## 1 Introduction The dimer model on planar graphs has played a crucial role in statistical mechanics and probability theory for several reasons: in particular, its integrability properties related to Kasteleyn's determinantal or Pfaffian solution and, in the bipartite case, the emergence of macroscopic shapes, arctic curves, conformal invariance and Gaussian Free Field height fluctuations at large scales (see the monographs [7, 13] and references therein). The behaviour of the dimer model in dimension higher than \(2\), or on planar but non-bipartite graphs, is much less understood, and the same can be said about the model's Glauber dynamics. 
The goal of the present work is to provide new results about (local) dimer Glauber dynamics, either on \(\mathbb{Z}^{d}\) for \(d\geq 3\) or on the planar triangular lattice. The study of Glauber dynamics of the dimer model has a long history and has proved quite challenging. While it is easy to define local Markov dynamics with update rule consisting in switching alternating cycles, which ensures that the uniform measure is stationary and reversible, proving that such processes are ergodic and quantifying their speed of convergence to equilibrium is a much more subtle business. In the _planar bipartite case_, the height function [30] turns out to be extremely helpful: it provides a natural partial order preserved by the dynamics, an easy proof of ergodicity and an intuitive "mean curvature motion" heuristic suggesting that, in many interesting situations, the mixing time \(T_{\rm mix}\) is of order \(L^{2}\) (in continuous time), with \(L\) the diameter of the domain. Under some conditions, the Glauber dynamics have in fact been proven to be fast mixing [19, 24, 31] and even to satisfy \(T_{\rm mix}=L^{2+o(1)}\) under suitable restrictions on the domain geometry [2, 16, 17]. As soon as the model is either not planar or not bipartite, there is no canonical definition of height function and the most basic question of proving that local Glauber dynamics are ergodic, and even that they have no completely blocked configurations, turns out to be non-trivial. The situation is particularly unclear for the dimer model on (say, cubic subsets of) \(\mathbb{Z}^{d}\) for \(d\geq 3\) where there are no local dynamics that are known to be ergodic. In fact, the simplest chain whose updates consist in flipping two parallel dimers fails to be ergodic because of subtle topological obstructions.
We refer to Section 1.2 (as well as to [3, Sections 1, 3 and 9] and [22]) for a more extensive discussion of conjectures, open problems and previous partial results, and to Section 1.3 for a precise statement of our own results. Let us only briefly anticipate here that our main results include the proof that for local Glauber dynamics on cubic boxes of \(\mathbb{Z}^{d}\), allowing _switching_ (also called moves in [22] and loop shifts in [3]) along cycles of finite length (suitably depending on the dimension \(d\) only), all connected components of the state space have size at least \(e^{c(d)n^{d-2}}\), with \(n\) the side length of the cube. For comparison, it was previously an open question to prove that there are no components of cardinality \(1\). Let us add that the ergodicity question, besides being crucial for the use of Markov chains as simulation algorithms, has also attracted interest in the theoretical physics community due to its connection to the quantum dimer model and to the possible occurrence of "Hilbert space fragmentation" [26]. Finally, we emphasize that substantial progress in the understanding of the _equilibrium_ properties of (uniform) dimer configurations in dimension \(d\geq 3\) has been made recently. This includes a large deviation principle for the "flow function" of three-dimensional dimers [3] and the proof of occurrence of macroscopic loops for the \(d\geq 3\) double dimer model [23]. See also [15, 18, 25] for different generalisations of the dimer model to dimension \(d\geq 3\). ### Model Given a graph \(G\), a _dimer configuration_ on \(G\) is a perfect matching. The edges in a dimer configuration are called _dimers_. Fix a graph \(G\) and a dimer configuration on \(G\). An _alternating cycle_ is a cycle in \(G\) of even length where every second edge is a dimer. A _switching_ of an alternating cycle is the operation of exchanging the dimer and the non-dimer edges along the cycle.
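These definitions are easy to make concrete. In the following sketch (an illustration, not code from the paper), a dimer configuration is a set of two-element frozensets; switching exchanges the dimer and non-dimer edges along an alternating cycle, and the result is again a perfect matching.

```python
def is_perfect_matching(vertices, dimers):
    """Check that every vertex is covered by exactly one dimer."""
    covered = sorted(v for e in dimers for v in e)
    return covered == sorted(vertices)

def switch(dimers, cycle):
    """Switch along an alternating cycle given as a vertex sequence.

    Every second edge of the cycle must currently be a dimer; switching
    exchanges the dimer and non-dimer edges along it.
    """
    cycle_edges = {frozenset((cycle[i], cycle[(i + 1) % len(cycle)]))
                   for i in range(len(cycle))}
    on = cycle_edges & dimers   # dimers lying on the cycle
    off = cycle_edges - dimers  # non-dimer cycle edges
    # in a matching, two adjacent cycle edges cannot both be dimers,
    # so equal counts already guarantee that the cycle alternates
    assert len(on) == len(off) == len(cycle) // 2, "cycle is not alternating"
    return (dimers - on) | off

# The 2x2 box Q^2_(2,2): four vertices, starting from two horizontal dimers.
verts = [(1, 1), (1, 2), (2, 1), (2, 2)]
D = {frozenset({(1, 1), (2, 1)}), frozenset({(1, 2), (2, 2)})}
cycle = [(1, 1), (2, 1), (2, 2), (1, 2)]  # alternating 4-cycle
D2 = switch(D, cycle)                      # two vertical dimers
```

This 4-cycle switch on the \(2\times 2\) box is exactly the elementary "flip" move whose higher-dimensional analogues are studied throughout the paper.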
Note that any switching in a dimer configuration produces another dimer configuration, see e.g. Figure 1(b). For a graph \(G\), we denote by \(\mathcal{D}(G)\) the graph whose vertices are the dimer configurations on \(G\), where two vertices are connected if there is an alternating cycle whose switching transforms one of the dimer configurations into the other. Moreover, for an integer \(\ell\geq 2\), we denote by \(\mathcal{D}_{\ell}(G)\) the spanning subgraph of \(\mathcal{D}(G)\) where edges correspond to alternating cycles of length at most \(2\ell\). We also say that the space of dimer configurations is \(2\ell\)_-ergodic_ (or simply _ergodic_) if the graph \(\mathcal{D}_{\ell}(G)\) is connected. Note that the superposition of two dimer configurations forms a set of alternating cycles and double edges, so for any finite graph \(G\) and \(\ell\) large enough, \(\mathcal{D}_{\ell}(G)\) is necessarily ergodic. Given a positive integer \(n\), we denote by \([n]\) the set \(\{1,\ldots,n\}\). Given a positive integer \(d\geq 1\), we refer to any vector \(\mathbf{n}=(n_{1},\ldots,n_{d})\) such that \(n_{1},\ldots,n_{d}\geq 2\) and the product \(n_{1}\ldots n_{d}\) is even as a _shape_. For a shape \(\mathbf{n}\), the \(\mathbf{n}\)_-box_ \(\mathbb{Q}^{d}_{\mathbf{n}}\) is defined as follows. The graph \(\mathbb{Q}^{d}_{\mathbf{n}}\) has vertex set \(\prod_{i=1}^{d}[n_{i}]\) and edges between \(\mathbf{u}=(u_{1},\ldots,u_{d})\) and \(\mathbf{v}=(v_{1},\ldots,v_{d})\) if there exists \(i\in[d]\) such that \(u_{j}=v_{j}\) for all \(j\neq i\) and \(|u_{i}-v_{i}|=1\). We write \(\mathbb{Q}^{d}_{n}\) for \(\mathbb{Q}^{d}_{\mathbf{n}}\) with \(\mathbf{n}=(n,\ldots,n)\in\mathbb{Z}^{d}\) and \(\mathbb{Q}^{d}\) for the unit hypercube \(\mathbb{Q}^{d}_{2}\). For simplicity, we often identify boxes \(\mathbb{Q}^{d}_{\mathbf{n}}\) with their natural embedding in the \(d\)-dimensional Euclidean space.
We further define the triangular lattice \(\mathbb{T}\) as the graph with vertex set \(\mathbb{Z}^{2}\) where every vertex \(\mathbf{v}\) is adjacent to \(\mathbf{v}+(1,0),\mathbf{v}+(-1,0),\mathbf{v}+(0,1),\mathbf{v}+(0,-1),\mathbf{ v}+(1,-1),\mathbf{v}+(-1,1)\). For positive integers \(m,n\) with \(mn\) even, we denote by \(\mathbb{T}_{m,n}\) the graph induced from \(\mathbb{T}\) by the vertex set \([m]\times[n]\). Note that, while \(\mathbb{T}_{m,n}\) is a (rectangular) box in the embedding of \(\mathbb{T}\) chosen above, in the more standard isoradial embedding, these domains correspond to parallelograms. We set out to study the ergodicity of \(\mathcal{D}_{\ell}(\mathbb{Q}^{d}_{\mathbf{n}})\) and \(\mathcal{D}_{\ell}(\mathbb{T}_{m,n})\). ### Background Given a planar graph drawn in the plane, a _domain_ is a union of faces (seen as closed polygons in the plane) of the lattice. Of course, any domain may be seen as a portion of the lattice with its proper dimer configurations and cycle-switching dynamics. The ergodicity of simply connected domains in planar bipartite lattices (with cycle length given by the length of the largest inner face) is a classical fact and follows directly by considering the associated height function (see [29, 30]). Recently, the ergodicity of local dynamics on a number of planar lattices was studied by Roising and Zhang [26], extending an approach of Kenyon and Remila [12]. We note that the techniques used there do not rely on height functions but use planarity in a substantial way, and also involve a certain amount of manual verification. The main goal of our work is to go beyond the planar case. The most natural setting in this respect corresponds to studying \(\mathcal{D}_{\ell}(\mathbb{Q}_{\mathbf{n}}^{3})\) for fixed \(\ell\) and large \(3\)-dimensional shapes \(\mathbf{n}\). It is not hard to check that \(\mathcal{D}_{2}(\mathbb{Q}_{\mathbf{n}}^{3})\) is not connected and even has isolated vertices e.g. 
for \(\mathbf{n}=(3,3,2)\) (and similarly for \(\mathbf{n}=(n_{1},n_{2},n_{3})\) with \(n_{1}\) and \(n_{2}\) divisible by \(3\) and \(n_{3}\) even), see Figure 1(a). This suggests the existence of an invariant preserved by switching \(4\)-cycles. One such invariant was noted in [5] (also see [1]), and a much more informative one called the _twist_ was introduced in [22] and further studied in [20, 21], thus establishing interesting algebraic, topological and geometric connections alongside the combinatorial ones. This suggests considering \(\mathcal{D}_{\ell}(\mathbb{Q}_{\mathbf{n}}^{3})\) for \(\ell\geq 3\). Milet and Saldanha [21] asked whether \(\mathcal{D}_{3}(\mathbb{Q}_{\mathbf{n}}^{3})\) is connected for \(3\)-dimensional shapes \(\mathbf{n}\). This was reiterated by Freire, Klivans, Milet and Saldanha in [6, 27] and very recently by Chandgotia, Sheffield and Wolfram [3, Problem 9.1]. We promote it to the following conjecture, which is one of the main motivations behind our work. **Conjecture 1**.: _For all \(3\)-dimensional shapes \(\mathbf{n}\), the graph \(\mathcal{D}_{3}(\mathbb{Q}_{\mathbf{n}}^{3})\) is connected._ Several weaker results in the direction of Conjecture 1 have been obtained. Firstly, for \(\mathbf{n}=(n_{1},n_{2},2)\), \(6\)-ergodicity was established in [22]. In [6], Conjecture 1 was proved up to refinement, that is, repeatedly replacing each dimer in the configuration by a copy of a dimer configuration on \(\mathbb{Q}^{3}_{(5,5,10)}\), \(\mathbb{Q}^{3}_{(5,10,5)}\) or \(\mathbb{Q}^{3}_{(10,5,5)}\) with dimers parallel to the original one. In [28], Conjecture 1 was proved for \(\mathbf{n}=(n,m,N)\) with \(N\) large enough (depending on \(n\) and \(m\)) and restricting attention to dimer configurations whose last sufficiently many layers (again, depending on \(n\) and \(m\)) are filled with vertical dimers. In view of the above, the following question weakening Conjecture 1 was asked in [6], where it was checked that no small counterexamples exist.
**Question 2**.: _Do there exist even \(n\) such that \(\mathcal{D}_{3}(\mathbb{Q}_{n}^{3})\) has isolated vertices?_ Higher dimensions have been explored even less. Indeed, we are only aware of a binary invariant for \(4\)-cycle switchings considered in [14], where results similar to those from [28] were proved. ### Results In the present work, we prove several results on the connectivity properties of the graphs \(\mathcal{D}_{\ell}(\mathbb{Q}_{\mathbf{n}}^{d})\) and \(\mathcal{D}_{\ell}(\mathbb{T}_{m,n})\). Contrary to previous approaches that mainly used the algebraic, topological and geometric aspects of the Milet-Saldanha twist invariant, our arguments are purely combinatorial and elementary. Figure 1: Example configurations with no short alternating cycles. Our first result immediately entails a negative answer to Question 2. **Theorem 3** (Extraction of a dense \(\mathbb{Q}^{d}\)).: _Let \(d\geq 2\) and \(\mathbf{n}\) be a \(d\)-dimensional shape. Then, for any dimer configuration \(D\) on \(\mathbb{Q}^{d}_{\mathbf{n}}\), there exists \(\mathbf{x}\in\mathbb{Z}^{d}\) such that the unit cube \(\mathbf{x}+\mathbb{Q}^{d}\subseteq\mathbb{Q}^{d}_{\mathbf{n}}\) contains at least \(2^{d-2}+1\) dimers in \(D\)._ Indeed, when \(d=3\), this yields a unit cube with \(3\) dimers, which is readily checked to contain an alternating cycle of length \(4\) or \(6\). One can similarly check that in \(4\) dimensions, we obtain an alternating cycle of length at most \(8\). We believe that any set of \(2^{d-2}+1\) disjoint dimers in \(\mathbb{Q}^{d}\) admits an alternating cycle of length at most \(2d\) for any \(d\geq 2\) but have been unable to prove this. Let us emphasise that it is crucial that \(\mathbf{x}+\mathbb{Q}^{d}\) contains \(2^{d-2}+1\) dimers and not less: in fact, one can check that there exist various very different examples of configurations with \(2^{d-2}\) dimers containing no alternating cycle of any length. 
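In the case \(d=2\), Theorem 3 can be checked exhaustively on small boxes: it asserts that every domino tiling contains a unit square with at least \(2^{0}+1=2\) dimers, i.e. a flippable \(2\times 2\) block. The following brute-force sketch (an illustration, not part of the paper) verifies this over the 36 tilings of the \(4\times 4\) box:

```python
from itertools import product

def tilings(m, n):
    """Enumerate domino tilings of an m-by-n grid as sets of dimer edges."""
    cells = [(x, y) for x in range(m) for y in range(n)]
    def rec(free, acc):
        if not free:
            yield frozenset(acc)
            return
        # the lexicographically smallest free cell must be matched to its
        # right or upper neighbour, so each tiling is produced exactly once
        v = min(free)
        for w in ((v[0] + 1, v[1]), (v[0], v[1] + 1)):
            if w in free:
                yield from rec(free - {v, w}, acc | {frozenset((v, w))})
    yield from rec(frozenset(cells), frozenset())

def has_dense_unit_square(m, n, tiling):
    """Is there a 2x2 block containing two whole dimers?"""
    for x, y in product(range(m - 1), range(n - 1)):
        block = {(x, y), (x + 1, y), (x, y + 1), (x + 1, y + 1)}
        if sum(1 for dimer in tiling if dimer <= block) >= 2:
            return True
    return False

all_tilings = list(tilings(4, 4))
ok = all(has_dense_unit_square(4, 4, t) for t in all_tilings)
```

A \(2\times 2\) block covered by two dimers is precisely a flippable pair, so for \(d=2\) the theorem recovers the classical fact that every domino tiling of a rectangle admits a flip.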
Theorem 3 proves the absence of isolated vertices in \(\mathcal{D}_{3}(\mathbb{Q}^{3}_{\mathbf{n}})\) for all \(3\)-dimensional shapes \(\mathbf{n}\). At the cost of increasing the value of \(\ell\), we are able to prove much more. **Theorem 4** (Degree and component size).: _Fix \(d\geq 3\) and an even positive integer \(n\). The minimum degree of \(\mathcal{D}_{2d-1}(\mathbb{Q}^{d}_{n})\) is at least \(n^{d-2}/(320d^{6})\) and each connected component contains at least \(2^{n^{d-2}/(320d^{6})}\) dimer configurations._ Let us note that we have specialised the result for boxes of equal sides only for the sake of readability. The bound on the minimum degree is optimal up to a factor independent of \(n\), as shown by the pyramid configuration defined in Example 15 where alternating cycles of length \(\ell\) should stay at distance at most \(\ell\) from the centers of the \((d-2)\)-dimensional horizontal sections, see Figure 2. We further remark that the total number of dimer configurations on \(\mathbb{Q}^{d}_{n}\) is of order \(e^{C(d)n^{d}}\) for some \(C(d)>0\) as \(n\) grows [8]. Next, we focus our attention on unit hypercubes \(\mathbb{Q}^{d}\) for which we can prove stronger results. Firstly, the following improvement of Theorem 4 can be deduced in the same way. **Theorem 5** (Degree and component size in \(\mathbb{Q}^{d}\)).: _Fix \(d\geq 3\). The minimum degree of \(\mathcal{D}_{d-1}(\mathbb{Q}^{d})\) is at least \(2^{d}/d^{4}\) and each connected component contains at least \(2^{2^{d}/d^{4}}\) dimer configurations._ While this result is only interesting for large \(d\), at which point the size of alternating cycles also grows, we emphasise that the size of cycles we allow is very small compared to the total volume of the hypercube. At the cost of allowing switching along cycles twice as long, we are able to prove ergodicity in the following strong form. 
Figure 2: The pyramid dimer configuration on \(\mathbb{Q}^{3}_{n}\) (see Example 15; the parity of a section is determined by the parity of the vertex \((1,1,\mathsf{v}^{\prime})\)). The horizontal sections alternate between the two \(2\)-dimensional dimer configurations shown above. These contain no vertical dimers. The only short alternating cycles are around the middle column. Red dimers are solid, blue ones are dashed. **Theorem 6** (Ergodicity on \(\mathbb{Q}^{d}\)).: _For every \(d\geq 2\), the graph \(\mathcal{D}_{2d-2}(\mathbb{Q}^{d})\) is connected and has diameter at most \((d-1)2^{d-1}\)._ In view of this bound on the diameter of the dimer configuration space on the unit hypercube that is almost linear (as a function of the volume of the hypercube), it is natural to ask whether the diameter of \(\mathcal{D}_{\ell}(\mathbb{Q}_{n}^{d})\) can also be linear in the volume. In two dimensions, this is not the case since a lower bound of order \(n^{3}\) follows by considering the height functions of the two configurations in Figure 2, while for non-bipartite graphs such as the triangular lattice, the diameter can be linear in the volume [12]. In higher dimensions, the situation is only clarified by the following result, which also offers a proof in two dimensions without height functions. **Theorem 7** (Diameter lower bound).: _For all \(d,\ell,n\geq 2\) with \(n\) even, the graph \(\mathcal{D}_{\ell}(\mathbb{Q}_{n}^{d})\) has diameter at least_ \[\frac{n^{d-1}(n^{2}-1)}{6\ell^{2}}.\] Finally, while it is tempting to look for the minimal value of \(\ell\) ensuring ergodicity in each setting, this is a rather sensitive matter. In this direction, we prove the following result. 
**Theorem 8** (Ergodicity on \(\mathbb{T}_{m,n}\)).: _For all positive integers \(m,n\) with \(mn\) even, the graph \(\mathcal{D}_{3}(\mathbb{T}_{m,n})\) is connected and has diameter at most \(2mn\)._ This should be compared to [12] showing an analogous result for \(8\)-ergodicity but for any simply connected domain of the triangular lattice. While there do exist domains for which \(8\)-cycles are necessary (see Figure 1(b)), we show that for boxes, cycles of length \(4\) and \(6\) suffice. While there is a dimer configuration on \(\mathbb{T}_{4,3}\) without alternating \(4\)-cycles (see Figure 1(c)), it remains unclear whether \(6\)-cycles can be avoided for sufficiently large values of \(mn\). With the exception of Theorems 4 and 5, the above results are shown by completely different means, thus providing several approaches for further use. The proofs provided in subsequent sections are therefore completely independent. ## 2 Extraction of a dense \(\mathbb{Q}^{d}\): proof of Theorem 3 In this section, we show that any dimer configuration on \(\mathbb{Q}_{\mathbf{n}}^{d}\) contains a unit hypercube containing at least \(2^{d-2}+1\) dimers. The proof is a simple double-counting argument. Proof of Theorem 3.: Fix a dimension \(d\geq 2\), a \(d\)-dimensional shape \(\mathbf{n}\) and a dimer configuration \(D\) on \(\mathbb{Q}_{\mathbf{n}}^{d}\). We count the couples of a vertex \(\mathbf{u}\in\mathbb{Q}_{\mathbf{n}}^{d}\) and a unit hypercube \(\mathbf{x}+\mathbb{Q}^{d}\subseteq\mathbb{Q}_{\mathbf{n}}^{d}\) containing the dimer of \(\mathbf{u}\) in \(D\). We say that a vertex \(\mathbf{x}\in\mathbb{Q}_{\mathbf{n}}^{d}\) is of _type_ \(I(\mathbf{x})=\{i\in[d]:x_{i}\in\{1,n_{i}\}\}\), which indicates in which coordinates \(\mathbf{x}\) is on the boundary of \(\mathbb{Q}_{\mathbf{n}}^{d}\). The type of a dimer \(\mathbf{uv}\) is defined as \(I(\mathbf{uv})=\{i\in[d]:u_{i}=v_{i}\in\{1,n_{i}\}\}\) so that \(I(\mathbf{uv})\subseteq I(\mathbf{u})\cap I(\mathbf{v})\).
Then, for every \(i\in I(\mathbf{uv})\), all unit hypercubes that contain \(\mathbf{uv}\) contain only vertices whose \(i\)-th component is either in the set \(\{1,2\}\) (if \(u_{i}=v_{i}=1\)) or in the set \(\{n_{i}-1,n_{i}\}\) (if \(u_{i}=v_{i}=n_{i}\)). One may easily check that there are exactly \(2^{d-|I(\mathbf{uv})|-1}\geq 2^{d-|I(\mathbf{u})|-1}\) such unit hypercubes. On the other hand, the number of vertices of type \(I\subseteq[d]\) is \(2^{|I|}\prod_{i\in[d]\setminus I}(n_{i}-2)\). Treating separately the corners of \(\mathbb{Q}_{\mathbf{n}}^{d}\), each of which is contained in a single unit hypercube, we obtain that the number of couples \((\mathbf{u},\mathbf{x})\) as above is at least \[2^{d}+\sum_{I\subsetneq[d]}2^{d-|I|-1}\times 2^{|I|}\prod_{j\in[d]\setminus I}(n_{j}-2)=2^{d-1}\left(1+\sum_{I\subseteq[d]}\prod_{i\in[d]\setminus I}(n_{i}-2)\right)=2^{d-1}\left(1+\prod_{i\in[d]}(n_{i}-1)\right).\] Since there are exactly \(\prod_{i\in[d]}(n_{i}-1)\) unit hypercubes and each dimer is counted twice for every unit hypercube that contains it (once for each of its endvertices), there has to be a unit hypercube containing at least \(2^{d-2}+1\) dimers, as desired. ## 3 Degree and component size: proof of Theorems 4 and 5 Throughout this section, we fix \(d\geq 3\), an even positive integer \(n\) and a dimer configuration \(D\) on \(\mathbb{Q}_{n}^{d}\). We say that a vertex \(\mathbf{w}\in\mathbb{Q}_{n}^{d}\) is _forbidden_ (by a dimer \(\mathbf{uv}\in D\)) if there exists \(i\in[d]\) such that \(u_{j}=v_{j}=w_{j}\) for all \(j\in[d]\setminus\{i\}\), \(\mathbf{uv}\) is a dimer and \(\mathbf{uw}\) or \(\mathbf{vw}\) is an edge in \(\mathbb{Q}_{n}^{d}\). In other words, the dimer \(\mathbf{uv}\) forbids \(\mathbf{w}\) if \(\mathbf{w}\) is a neighbour of \(\mathbf{u}\) or \(\mathbf{v}\) aligned with \(\mathbf{uv}\). A vertex that is not forbidden is called _authorised_.
Observe that for \(n\geq 3\), a dimer containing a vertex on the boundary of \(\mathbb{Q}_{n}^{d}\) (that is, one with less than \(2d\) neighbours) may forbid one or two vertices in \(\mathbb{Q}_{n}^{d}\), while all other dimers forbid two vertices. We next show that there is a short alternating cycle close to each authorised vertex. **Lemma 9**.: _Fix an authorised vertex \(\mathbf{w}\in\mathbb{Q}_{n}^{d}\). Then, there is an alternating cycle of length at most \(4d-2\) contained in the second neighbourhood of \(\mathbf{w}\) in \(\mathbb{Q}_{n}^{d}\). Moreover, if \(n=2\), every vertex in \(\mathbb{Q}^{d}\) has an alternating cycle of length at most \(2d-2\) in its second neighbourhood._ Proof.: Let \((\mathbf{v}^{i})_{i=1}^{m}\) be the neighbours of the authorised vertex \(\mathbf{w}\) in \(\mathbb{Q}_{n}^{d}\) for some \(m\in[d,2d]\). Moreover, suppose without loss of generality that the dimers in the second neighbourhood of \(\mathbf{w}\) are \(\mathbf{w}\mathbf{v}^{1}\) and \((\mathbf{u}^{i}\mathbf{v}^{i})_{i=2}^{m}\). If \(\mathbf{u}^{i}\) is a neighbour of \(\mathbf{v}^{1}\) in \(\mathbb{Q}_{n}^{d}\) for some \(i\in[2,m]\), then \(\mathbf{v}^{1},\mathbf{w},\mathbf{v}^{i},\mathbf{u}^{i}\) is an alternating cycle of length \(4\), as desired. Suppose that none of \((\mathbf{u}^{i})_{i=2}^{m}\) is a neighbour of \(\mathbf{v}^{1}\). Note that since \(\mathbf{w}\) is an authorised vertex, the vectors \(\mathbf{w}-\mathbf{v}^{i}\) and \(\mathbf{v}^{i}-\mathbf{u}^{i}\) are orthogonal for every \(i\in[2,m]\), which implies that \(\mathbf{u}^{i}\) has exactly two neighbours among \((\mathbf{v}^{j})_{j=2}^{m}\) in \(\mathbb{Q}_{n}^{d}\) (or otherwise said, vertex \((1,1,0,0,\ldots,0)\) has exactly two neighbours in common with the origin in \(\mathbb{Z}^{d}\)). 
Now, consider the graph induced from \(\mathbb{Q}_{n}^{d}\) by the vertices \(\bigcup_{i=2}^{m}\{\mathbf{u}^{i},\mathbf{v}^{i}\}\), and orient the dimer edges from \((\mathbf{v}^{i})_{i=2}^{m}\) to \((\mathbf{u}^{i})_{i=2}^{m}\) and the other edges from \((\mathbf{u}^{i})_{i=2}^{m}\) to \((\mathbf{v}^{i})_{i=2}^{m}\). We obtain a digraph on \(2m-2\) vertices where every vertex has out-degree at least \(1\), and for every (directed) path in this digraph, exactly one of every two consecutive edges is a dimer. Thus, one may find an alternating cycle of length at most \(2m-2\leq 4d-2\) by starting from any vertex and making steps in the digraph until the first time a vertex is visited twice. It remains to notice that when \(n=2\), every vertex is authorised and has degree \(d\) in \(\mathbb{Q}^{d}\). Lemma 9 implies that if there are many authorised vertices in a dimer configuration on \(\mathbb{Q}_{n}^{d}\), then there must be many alternating cycles in that configuration. The next lemma shows that there must be many authorised vertices. **Lemma 10**.: _Fix \(n\geq 4\). Then, there are at least \(n^{d-2}/(20d^{2})\) authorised vertices._ Proof.: By the pigeonhole principle there exists \(j\in[d]\) such that at least \(n^{d}/(2d)\) of all dimers differ in their \(j\)-th coordinate. We assume that \(j=d\) without loss of generality. For \(\mathbf{v}\in\mathbb{Q}_{n}^{d}\), the _level_ of \(\mathbf{v}\) is \(v_{d}\in[n]\). For every \(i\in[2,n]\), denote by \(k_{i}\) the number of dimers \(\mathbf{uv}\) with \(u_{d}=i-1\) and \(v_{d}=i\). In particular, \[\sum_{i=2}^{n}k_{i}\geq\frac{n^{d}}{2d}. \tag{1}\] For convenience of notation, we extend the sequence by setting \(k_{0}=k_{1}=k_{n+1}=k_{n+2}=0\). For every \(i\in[n]\), denote by \(N_{i}\) the number of vertices on level \(i\) that are either forbidden by at least two dimers or authorised. To begin with, we show that the sum of \((N_{i})_{i=1}^{n}\) is at most twice the number of all authorised vertices. 
For every \(j\in\{0,\ldots,2d\}\), denote by \(F_{j}\) the number of vertices forbidden by exactly \(j\) dimers. Since each of the \(n^{d}/2\) dimers in the configuration forbids at most two vertices, we have \[F_{1}-2F_{0}+2\sum_{i=1}^{n}N_{i}=F_{1}+\sum_{j=2}^{2d}2F_{j}\leq\sum_{j=1}^{2d} jF_{j}\leq n^{d}=\sum_{j=0}^{2d}F_{j}=F_{1}+\sum_{i=1}^{n}N_{i},\] which implies that \(\sum_{i=1}^{n}N_{i}\leq 2F_{0}\). In particular, if \(\sum_{i=1}^{n}N_{i}\geq n^{d-2}/(10d^{2})\), then the proof is completed. We focus on the analysis of the sequence \((N_{i})_{i=1}^{n}\). Suppose that \(\sum_{i=1}^{n}N_{i}<n^{d-2}/(10d^{2})\). Observe that for every \(i\in[n]\), every dimer \(\mathbf{uv}\) satisfying \(u_{d}+1=v_{d}\in\{i-1,i+2\}\) forbids one vertex on level \(i\) but has no vertex on level \(i\) itself. Conversely, every dimer \(\mathbf{uv}\) satisfying \(u_{d}+1=v_{d}\in\{i,i+1\}\) contains one vertex on level \(i\) while forbidding none. Moreover, the dimers with two vertices on level \(i\) forbid one or two vertices on that level. Let \(s_{i}\) be the number of such dimers forbidding one vertex on level \(i\), and \(t_{i}\) be the number of such dimers forbidding two vertices on level \(i\). Then, the total number of forbidden vertices on level \(i\) counted with multiplicities (that is, a vertex forbidden by \(j\) dimers is counted \(j\) times) is \(s_{i}+2t_{i}+k_{i-1}+k_{i+2}\). Note that this expression may be rewritten as \[2(s_{i}+t_{i})+k_{i-1}+k_{i+2}-s_{i} =(n^{d-1}-k_{i}-k_{i+1})+k_{i-1}+k_{i+2}-s_{i}\] \[=n^{d-1}-(s_{i}-k_{i-1}+k_{i}+k_{i+1}-k_{i+2}). \tag{2}\] As a consequence, the total number of authorised vertices is bounded from below by \[n^{d}-\sum_{i=1}^{n}(n^{d-1}-(s_{i}-k_{i-1}+k_{i}+k_{i+1}-k_{i+2}))=k_{2}+k_{n} +\sum_{i=1}^{n}s_{i}.\] Hence, if \(\sum_{i=1}^{n}s_{i}\geq n^{d-2}/(20d^{2})\), then the proof is completed. Suppose that \(\sum_{i=1}^{n}s_{i}<n^{d-2}/(20d^{2})\). 
Since the minimum of (2) and \(n^{d-1}\) is an upper bound on the number of vertices on level \(i\) forbidden by exactly one dimer and no vertex is forbidden more than \(2d\) times, we have that for every \(i\in[n]\), \[2dN_{i}\geq|s_{i}-k_{i-1}+k_{i}+k_{i+1}-k_{i+2}|.\] In particular, combining this with the triangle inequality implies that for every \(i\in[n]\), \[||k_{i+2}-k_{i+1}|-|k_{i}-k_{i-1}||\leq|k_{i-1}-k_{i}-k_{i+1}+k_{i+2}|\leq s_{i}+2dN_{i}. \tag{3}\] Now, for every positive integer \(m\leq n/2\), summing (3) for \(i\in\{2,4,\ldots,2m-2\}\) and applying the triangle inequality yields \[||k_{2m}-k_{2m-1}|-k_{2}|=||k_{2m}-k_{2m-1}|-|k_{2}-k_{1}||\leq\sum_{i=1}^{m-1}(s_{2i}+2dN_{2i})<\frac{3n^{d-2}}{10d}, \tag{4}\] while summing over \(i\in\{2m-1,2m+1,\ldots,n-1\}\) instead yields \[|k_{n}-|k_{2m-1}-k_{2m-2}||=||k_{n+1}-k_{n}|-|k_{2m-1}-k_{2m-2}||\leq\sum_{i=m}^{n/2}(s_{2i-1}+2dN_{2i-1})<\frac{3n^{d-2}}{10d}. \tag{5}\] To finish the proof, we show that \(k_{2}\geq n^{d-2}/(10d)\) or \(k_{n}\geq n^{d-2}/(10d)\). Indeed, in this case, there are at least \(n^{d-2}/(10d)\) dimers that forbid only one vertex, which is a lower bound on the number of authorised vertices. Suppose for a contradiction that both \(k_{2}<n^{d-2}/(10d)\) and \(k_{n}<n^{d-2}/(10d)\). Then, by (4) and (5) we have that for every \(i\in[n]\), \(|k_{i}-k_{i-1}|\leq 4n^{d-2}/(10d)\). At the same time, by (1) there is \(i\in[2,n]\) such that \(k_{i}\geq n^{d-1}/(2d)\). Thus, both \(k_{2}\) and \(k_{n}\) must be at least \(n^{d-1}/(2d)-n\times 4n^{d-2}/(10d)=n^{d-1}/(10d)\geq n^{d-2}/(10d)\), which leads to a contradiction and finishes the proof. Proof of Theorem 4.: By Lemma 10 there are at least \(n^{d-2}/(20d^{2})\) authorised vertices in \(D\).
Moreover, since for any vertex \(\mathbf{v}\in\mathbb{Q}_{n}^{d}\), the number of vertices at (graph) distance at most \(4\) from \(\mathbf{v}\) is at most \(1+2d+2d(2d-1)+2d(2d-1)^{2}+2d(2d-1)^{3}<(2d)^{4}\), we may find \(n^{d-2}/(320d^{6})\) authorised vertices at distance at least \(5\) from each other in \(\mathbb{Q}_{n}^{d}\). Then, the alternating cycles in the second neighbourhoods of these vertices ensured by Lemma 9 are disjoint, and therefore may be switched independently of each other, thus proving the desired result. Proof of Theorem 5.: Since for any vertex \(\mathbf{v}\in\mathbb{Q}^{d}\), the number of vertices at (graph) distance at most \(4\) from \(\mathbf{v}\) is at most \(1+d+d(d-1)+d(d-1)^{2}+d(d-1)^{3}<d^{4}\), we may find \(2^{d}/d^{4}\) vertices at distance at least \(5\) from each other in \(\mathbb{Q}^{d}\). Then, the alternating cycles in the second neighbourhoods of these vertices ensured by Lemma 9 are disjoint, and therefore may be switched independently of each other, thus proving the desired result. ## 4 Ergodicity of the high-dimensional hypercube: proof of Theorem 6 First, let us briefly outline the proof strategy. Let \(Q_{1}\) (resp. \(Q_{2}\)) be the hypercube containing all vertices whose last coordinate is \(1\) (resp. \(2\)), and call an edge (or a dimer) in \(\mathbb{Q}^{d}\)_crossing_ if it contains one vertex in \(Q_{1}\) and one vertex in \(Q_{2}\). For any given \(d\geq 2\), we fix a dimer configuration on \(\mathbb{Q}^{d}\) and iteratively decrease the number of crossing edges until none are left so that we can apply induction on \(d\). At each step, the idea is to find an alternating cycle of length at most \(4d-4\) in which all edges between \(Q_{1}\) and \(Q_{2}\) are dimers. 
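Before turning to the details, the smallest non-trivial case of Theorem 6 is small enough to verify by brute force. The following sketch (a computational sanity check of ours, not part of the proof) enumerates the perfect matchings of \(\mathbb{Q}^{3}\) — there are exactly 9 — and checks that single switches of alternating cycles of length at most \(4d-4=8\) already connect all of them:

```python
from itertools import combinations

d = 3
verts = [tuple((i >> k) & 1 for k in range(d)) for i in range(2 ** d)]
edges = [frozenset(e) for e in combinations(verts, 2)
         if sum(a != b for a, b in zip(*e)) == 1]

def matchings(free, pool):
    # enumerate perfect matchings: match the first free vertex in every way
    if not free:
        yield frozenset()
        return
    v = free[0]
    for e in pool:
        if v in e:
            rest = [f for f in pool if not (f & e)]
            for m in matchings([w for w in free[1:] if w not in e], rest):
                yield m | {e}

perfect = list(matchings(verts, edges))
assert len(perfect) == 9  # the 3-cube has exactly 9 perfect matchings

def one_short_cycle(sym):
    # the symmetric difference of two matchings is a disjoint union of even
    # cycles; one switch connects them iff it is a single cycle, here of
    # length (= number of edges) at most 4d - 4 = 8
    if not sym or len(sym) > 4 * d - 4:
        return False
    adj = {}
    for e in sym:
        u, v = tuple(e)
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    start = next(iter(adj))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(adj)

# search over configurations connected by single short switches
seen, queue = {perfect[0]}, [perfect[0]]
while queue:
    m = queue.pop()
    for m2 in perfect:
        if m2 not in seen and one_short_cycle(m ^ m2):
            seen.add(m2)
            queue.append(m2)
assert len(seen) == 9  # all configurations reachable: ergodicity for d = 3
```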
To do this, we combine several technical lemmas related to the expansion properties of the hypercube with the analysis of a suitable exploration procedure of \(Q_{1}\) and \(Q_{2}\) along the alternating cycles in \(\mathbb{Q}^{d}\) that do not use crossing edges which are not dimers. We now turn to the details. For a set \(A\subseteq\mathbb{Q}^{d}\), we denote by \(\partial A\) the set of vertices at (graph) distance \(1\) from \(A\) in \(\mathbb{Q}^{d}\). The next result is a version of a classical theorem of Harper [9] (see also [4, 10, 11]). **Theorem 11** ([9], or also Lemma 5 and Theorem 2 in [10]).: _Fix positive integers \(d\) and \(a<2^{d}\). Then, there exists a unique choice of integers \(1\leq t\leq k+1\leq d\) and \(t\leq a_{t}<\cdots<a_{k}<d\) such that_ \[a=\sum_{j=k+1}^{d}\binom{d}{j}+\sum_{i=t}^{k}\binom{a_{i}}{i}.\] _Moreover,_ \[\min\left\{|A\cup\partial A|:A\subseteq[2]^{d},|A|=a\right\}=\phi_{d}(a):= \sum_{j=k}^{d}\binom{d}{j}+\sum_{i=t}^{k}\binom{a_{i}}{i-1}.\] _The same holds trivially with \(\phi_{d}(0):=0\) and \(\phi_{d}(2^{d}):=2^{d}\)._ While a direct application of Theorem 11 would be inappropriate in our setting, it becomes a useful tool in combination with the following lemma. Call a vertex in \(\mathbb{Q}^{d}\)_even_ if the sum of its coordinates is even, and _odd_ otherwise. We also say that a subset of vertices of \(\mathbb{Q}^{d}\) is even (resp. odd) if it contains only even (resp. odd) vertices. **Lemma 12**.: _Fix an integer \(d\geq 2\) and an even set \(A\subseteq\mathbb{Q}^{d}\). Then,_ \[|\partial A|\geq\min\left\{|B\cup\partial B|:B\subseteq\mathbb{Q}^{d-1},\,|B |=|A|\right\}.\] Proof.: Define the map \(\pi:\mathbb{Q}^{d}\to\mathbb{Q}^{d-1}:(v_{1},\ldots,v_{d})\mapsto(v_{1}, \ldots,v_{d-1})\) and consider a vertex \(\mathbf{u}\) in the first neighbourhood of \(\pi(A)\) in \(\mathbb{Q}^{d-1}\). 
If \(\mathbf{u}\in\pi(A)\), then there are two distinct vertices \(\mathbf{u}^{1},\mathbf{u}^{2}\in\mathbb{Q}^{d}\) with \(\pi(\mathbf{u}^{1})=\pi(\mathbf{u}^{2})=\mathbf{u}\) and \(\mathbf{u}^{2}\in A\). Thus, \(\mathbf{u}\) is the image of the odd vertex \(\mathbf{u}^{1}\in\partial A\). Moreover, if \(\mathbf{u}\in\partial\,\pi(A)\), then there is a vertex \(\mathbf{v}\in A\) such that \(\pi(\mathbf{v})\) is a neighbour of \(\mathbf{u}\). Thus, \(\mathbf{u}\) is the image of a neighbour of \(\mathbf{v}\) under \(\pi\), which is an odd vertex in \(\partial A\). Combining the two observations above shows that \(|\partial A|\geq|\pi(A)\cup\partial\,\pi(A)|\), which finishes the proof. Fix a dimer configuration \(D\) in \(\mathbb{Q}^{d}\), and suppose that \(\mathbf{v}\) is an even vertex in \(Q_{1}\) contained in a crossing dimer. We define the graph \(\Gamma\) with vertex set \([2]^{d}\) and edge set \(E(Q_{1})\cup E(Q_{2})\cup D\) where \(E(G)\) denotes the edge set of a graph \(G\). In particular, \(D\) may naturally be seen as a dimer configuration on \(\Gamma\). For every integer \(k\geq 0\), denote by \(E_{k}\) (resp. \(O_{k}\)) the set of even (resp. odd) vertices that may be reached from \(\mathbf{v}\) by following an alternating path in \(\Gamma\) of length at most \(2k\) (resp. \(2k+1\)) starting with a non-dimer edge. In particular, \(E_{0}=\{\mathbf{v}\}\) and \(O_{0}=Q_{1}\cap\partial\{\mathbf{v}\}\). Also, we define \(E_{k,i}=E_{k}\cap Q_{i}\) for \(i\in[2]\). Note that for every integer \(k\geq 0\), \[|E_{k+1}|\geq|O_{k}|\geq|\partial E_{k,1}\cap Q_{1}|+|\partial E_{k,2}\cap Q_{2}|. \tag{6}\] The following lemma is a preliminary technical result. **Lemma 13**.: _Fix \(d\geq 1\) and \(k\in[0,d]\), and let \(\phi=\phi_{d}\) from Theorem 11. Let \(l_{1},l_{2}\in[2^{d}]\) and \(l=l_{1}+l_{2}\)._ 1. _If_ \(l\leq 2^{d}\)_, then_ \(\phi(l_{1})+\phi(l_{2})\geq\phi(l)\)_._ 2. 
_If_ \(l>2^{d}\)_, then_ \(\phi(l_{1})+\phi(l_{2})\geq 2^{d}+\phi(l-2^{d})\)_._ Proof.: We prove both statements simultaneously. If either \(l_{1}\) or \(l_{2}\) equals \(2^{d}\), there is nothing to prove. Without loss of generality, suppose that \(1\leq l_{1}\leq l_{2}\leq 2^{d}-1\). Let \(X_{1}\) and \(X_{2}\) be two subsets of \(\mathbb{Q}^{d}\) with \(|X_{1}|=l_{1}\), \(|X_{2}|=l_{2}\), \(|X_{1}\cup\partial X_{1}|=\phi(l_{1})\), \(|X_{2}\cup\partial X_{2}|=\phi(l_{2})\) and \(X_{1}\not\subseteq X_{2}\). Define \(Y_{1}=X_{1}\cup X_{2}\) and \(Y_{2}=X_{1}\cap X_{2}\). We claim that \[|Y_{1}\cup\partial Y_{1}|+|Y_{2}\cup\partial Y_{2}|\leq|X_{1}\cup\partial X_{1}|+|X_{2}\cup\partial X_{2}|. \tag{7}\] Indeed, let \(\mathbf{w}\in Y_{1}\cup\partial Y_{1}\). Then, \(\mathbf{w}\) belongs to at least one of \(X_{1}\cup\partial X_{1}\) and \(X_{2}\cup\partial X_{2}\). Moreover, if \(\mathbf{w}\) also belongs to \(Y_{2}\cup\partial Y_{2}\), then \(\mathbf{w}\) belongs to both \(X_{1}\cup\partial X_{1}\) and \(X_{2}\cup\partial X_{2}\), thus showing (7) after summation over all vertices in \(\mathbb{Q}^{d}\). In particular, this shows that for every pair of integers \(1\leq l_{1}\leq l_{2}\leq 2^{d}-1\) there are integers \(m_{1},m_{2}\) with \(0\leq m_{1}<l_{1}\), \(l_{2}<m_{2}\leq 2^{d}\) and \(m_{1}+m_{2}=l_{1}+l_{2}\) such that \(\phi(m_{1})+\phi(m_{2})\leq\phi(l_{1})+\phi(l_{2})\). Iterating this observation finishes the proof of both points. Now, we define the sequences \((a_{k})_{k\geq 0}\) and \((b_{k})_{k\geq 0}\) by setting \(b_{0}=2^{d-2}+a_{0}=2^{d-2}+1\), and for every integer \(k\geq 0\), \(a_{k+1}\) (resp. \(b_{k+1}\)) is the minimum of \(|\partial X_{1}\cap Q_{1}|+|\partial X_{2}\cap Q_{2}|\) over all even sets \(X_{1}\subseteq Q_{1}\) and \(X_{2}\subseteq Q_{2}\) such that \(|X_{1}|+|X_{2}|=a_{k}\) (resp. \(|X_{1}|+|X_{2}|=b_{k}\)).
Note that by (6), for every integer \(k\geq 0\), \(|E_{k}|\geq a_{k}\), and if \(|E_{m}|\geq b_{0}\) for some \(m\), then for every integer \(k\geq 0\), \(|E_{k+m}|\geq b_{k}\). **Corollary 14**.: _We have \(a_{d-2}=2^{d-2}\) and \(b_{d-2}=2^{d-1}\)._ Proof.: If \(d=2\), the statement is trivial. Suppose that \(d\geq 3\). We show by induction that for every integer \(k\in[0,d-2]\), \(\min(a_{k},b_{k}-2^{d-2})\geq\sum_{j=d-2-k}^{d-2}{d-2\choose j}\). The proof for \(b_{k}-2^{d-2}\) being identical, we only prove the inequality for \(a_{k}\). The statement is trivially satisfied for \(k=0\). Suppose that for some \(k\in[d-2]\), the statement holds for \(k-1\). Fix even sets \(X_{1}\subseteq Q_{1}\) and \(X_{2}\subseteq Q_{2}\) of size resp. \(l_{1}\) and \(l_{2}\) such that \(l=l_{1}+l_{2}=\sum_{j=d-1-k}^{d-2}{d-2\choose j}\leq a_{k-1}\). Denoting \(\phi=\phi_{d-2}\) from Theorem 11, and applying Theorem 11 with \(d-2\) instead of \(d\) and Lemma 12 with \(d-1\) instead of \(d\), we obtain that \[|\partial X_{1}\cap Q_{1}|\geq\phi(l_{1})\quad\text{and}\quad|\partial X_{2} \cap Q_{2}|\geq\phi(l_{2}). \tag{8}\] Thus, Lemma 13 shows that \[|\partial X_{1}\cap Q_{1}|+|\partial X_{2}\cap Q_{2}|\geq\phi(l_{1})+\phi(l_{2 })\geq\phi(l)=\sum_{j=d-2-k}^{d-2}{d-2\choose j},\] which completes the induction. Proof of Theorem 6.: Fix an integer \(d\geq 2\) and any dimer configuration \(D\) on \(\mathbb{Q}^{d}\). We show that by switching along alternating cycles of length at most \(4d-4\), we can reach the configuration where every dimer \(\mathbf{uv}\) satisfies that \(u_{1}\neq v_{1}\), that is, all dimers are parallel to the first dimension. We show this by induction on the dimension. The base case is clear. Fix an integer \(d\geq 3\) and suppose that the statement holds for \(d-1\). If \(D\) contains no crossing dimers, then it consists of a dimer configuration on \(Q_{1}\) and a dimer configuration on \(Q_{2}\). 
Then, the conclusion follows from the induction hypothesis for \(d-1\). Now, suppose that \(D\) contains a crossing dimer. Fix an even vertex \(\mathbf{v}\in Q_{1}\) contained in a crossing dimer. We show that \(|E_{2d-3}|=2^{d-1}\). On the one hand, by Corollary 14, \(E_{d-2}\) has size at least \(2^{d-2}\). If the inequality is strict, then \(|E_{d-2}|\geq b_{0}\) and consequently \(|E_{2(d-2)}|\geq b_{d-2}=2^{d-1}\). Suppose that \(|E_{d-2}|=2^{d-2}\). We show that \(|O_{d-2}|\geq 2^{d-2}+1\). If \(E_{d-2,1}\) and \(E_{d-2,2}\) are both non-empty, we show that the sizes of the vertex boundaries of both \(E_{d-2,1}\) in \(Q_{1}\) and of \(E_{d-2,2}\) in \(Q_{2}\) must be resp. (strictly) larger than \(E_{d-2,1}\) and \(E_{d-2,2}\). Indeed, for \(i\in[2]\), the number of edges between \(E_{d-2,i}\) and its vertex boundary (in \(Q_{i}\)) is \((d-1)|E_{d-2,i}|\), and at the same time this number is at most \((d-1)|\partial E_{d-2,i}\cap Q_{i}|\). Moreover, equality holds only if \(\partial E_{d-2,i}\cap Q_{i}\) is not adjacent to any vertex outside \(E_{d-2,i}\) in \(Q_{i}\), which may only happen when \(E_{d-2,i}\) contains all even vertices in \(Q_{i}\), which in our case shows that \(|\partial E_{d-2,i}\cap Q_{i}|>|E_{d-2,i}|\). Now, if \(E_{d-2,2}=\varnothing\), say, then \(E_{d-2}\) must contain all even vertices in \(Q_{1}\). However, as there is a crossing dimer with an even vertex in \(Q_{1}\), there is also one with an odd vertex in \(Q_{1}\) and therefore in \(O_{d-2}\). Hence, \(|E_{d-1,1}|\geq|E_{d-2,1}|=2^{d-2}\) and \(E_{d-1,2}\neq\varnothing\), so \(|E_{d-1}|\geq 2^{d-2}+1\) and an application of Corollary 14 shows that \(|E_{2d-3}|\geq b_{d-2}=2^{d-1}\). Let \(\mathbf{v}^{\prime}\) be the odd vertex of the crossing dimer containing \(\mathbf{v}\). Since \(E_{2d-3}\) contains all even vertices in \(\mathbb{Q}^{d}\), it contains a neighbour of \(\mathbf{v}^{\prime}\) in \(Q_{2}\), so \(\mathbf{v}^{\prime}\in O_{2d-3}\). 
Hence, there is an alternating cycle in \(\Gamma\) of length at most \(4d-4\) containing the dimer \(\mathbf{vv}^{\prime}\), whose switching decreases the number of crossing dimers. Iterating the above approach leaves no crossing dimers eventually, and thus finishes the proof of the first point. For the second step, note that every switching decreases the number of crossing dimers by at least two, so \(2^{d-2}\) steps are sufficient to make all crossing dimers (in a fixed dimension) disappear. Since the induction above consists of \(d-1\) steps, the distance (in \(\mathcal{D}_{2d-2}(\mathbb{Q}^{d})\)) from any dimer configuration \(D\) to the configuration where all dimers are parallel to the first dimension is at most \((d-1)2^{d-2}\), and therefore the diameter of \(\mathcal{D}_{2d-2}(\mathbb{Q}^{d})\) is at most twice as large. ## 5 Diameter lower bound: proof of Theorem 7 For this section we fix \(n\) even and \(\ell,d\geq 2\). As above, we call a site \(\mathbf{v}\in\mathbb{Q}^{d}_{n}\)_even_ if \(\sum_{i\in[d]}v_{i}\) is even and _odd_ otherwise. Given a dimer configuration \(D\), we define a colouring of the vertices of \(\mathbb{Q}^{d}_{n}\) in two colours (red and blue) as follows. Let \(\mathbf{uv}\in D\) be a dimer with \(\mathbf{u}\) odd and \(\mathbf{v}\) even. Let \(i\in[d]\) be such that \(|u_{i}-v_{i}|=1\). We colour both \(\mathbf{u}\) and \(\mathbf{v}\)_red_ if \(u_{i}-v_{i}=1\), and _blue_ if \(u_{i}-v_{i}=-1\). **Example 15** (Pyramid configuration).: As an example, let us consider the _pyramid configuration_ of Figure 2 defined formally as follows. 
For each \(\mathbf{v}^{\prime}\in\mathbb{Q}^{d-2}_{n}\) and even \(\mathbf{v}\in\mathbb{Q}^{d}_{n}\) such that \(\mathbf{v}=(v_{1},v_{2},\mathbf{v}^{\prime})\), the second vertex in the dimer of \(\mathbf{v}\) is: \[\begin{cases}(v_{1}+1,v_{2},\mathbf{v}^{\prime})&\text{if $v_{1}<v_{2}$ and $v_{1}\geq n+1-v_{2}$,}\\ (v_{1},v_{2}+1,\mathbf{v}^{\prime})&\text{if $v_{1}\leq v_{2}$ and $v_{1}<n+1-v_{2}$,}\\ (v_{1}-1,v_{2},\mathbf{v}^{\prime})&\text{if $v_{1}>v_{2}$ and $v_{1}\leq n+1-v_{2}$,}\\ (v_{1},v_{2}-1,\mathbf{v}^{\prime})&\text{if $v_{1}\geq v_{2}$ and $v_{1}>n+1-v_{2}$.} \end{cases}\] In terms of colouring, the dimers corresponding to the former two cases are red, while the remaining ones are blue. In particular, all sites \(\mathbf{u}\in\mathbb{Q}^{d}_{n}\) with \(u_{1}<u_{2}\) are red, while those with \(u_{1}>u_{2}\) are blue. Theorem 7 will follow easily from Example 15 and the following observation. **Lemma 16**.: _For any dimer configuration on \(\mathbb{Q}_{n}^{d}\) there are exactly \(n^{d}/2\) red vertices._ Proof.: Fix \(i\in[d]\) and \(j\in[n]\). Consider the dimers \(\mathbf{uv}\) such that \(u_{i}=j\) and \(v_{i}=j+1\). Since the numbers of even and odd sites \(\mathbf{w}\) with \(w_{i}\leq j\) are equal, the number of such dimers with \(\mathbf{u}\) even is equal to the number of such dimers with \(\mathbf{u}\) odd. Since these two types of dimers have different colours (red and blue, respectively) and each dimer is considered for exactly one choice of \((i,j)\), the result follows. Proof of Theorem 7.: It is clear that switching a cycle cannot modify the colours of vertices outside the cycle. Therefore, by Lemma 16, switching any cycle can only alter the colours of the sites within the cycle without changing the amount of sites of either colour. 
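Both the pyramid configuration of Example 15 and this colour-count invariance are easy to check computationally in a small case. The following sketch (an illustration of ours, not part of the proof) builds the pyramid configuration on \(\mathbb{Q}_{4}^{2}\), verifies the red-vertex count \(n^{d}/2\) of Lemma 16, and confirms that switching an alternating 4-cycle leaves that count unchanged:

```python
def pyramid(n):
    # pyramid configuration of Example 15 for d = 2: maps each vertex of the
    # n x n grid (coordinates in 1..n, n even) to its dimer partner
    D = {}
    for v1 in range(1, n + 1):
        for v2 in range(1, n + 1):
            if (v1 + v2) % 2:  # skip odd sites; dimers are set from even ones
                continue
            if v1 < v2 and v1 >= n + 1 - v2:
                u = (v1 + 1, v2)
            elif v1 <= v2 and v1 < n + 1 - v2:
                u = (v1, v2 + 1)
            elif v1 > v2 and v1 <= n + 1 - v2:
                u = (v1 - 1, v2)
            else:
                u = (v1, v2 - 1)
            D[(v1, v2)], D[u] = u, (v1, v2)
    return D

def red_vertices(D):
    # a dimer uv (u odd, v even) is red iff u_i - v_i = +1 in its direction;
    # both of its vertices are then red
    reds = 0
    for v, u in D.items():
        if sum(v) % 2 == 0:  # count each dimer once, from its even vertex
            i = 0 if u[0] != v[0] else 1
            reds += 2 * (u[i] - v[i] == 1)
    return reds

n = 4
D = pyramid(n)
assert sorted(D) == [(a, b) for a in range(1, n + 1) for b in range(1, n + 1)]
assert all(D[D[v]] == v for v in D)   # a genuine perfect matching
assert red_vertices(D) == n * n // 2  # Lemma 16: exactly n^d / 2 red vertices

def switch_square(D, n):
    # switch one alternating 4-cycle: a unit square carrying two parallel dimers
    for a in range(1, n):
        for b in range(1, n):
            p, q, r, s = (a, b), (a + 1, b), (a + 1, b + 1), (a, b + 1)
            if D[p] == q and D[s] == r:   # two horizontal dimers
                D[p], D[s], D[q], D[r] = s, p, r, q
                return True
            if D[p] == s and D[q] == r:   # two vertical dimers
                D[p], D[q], D[s], D[r] = q, p, r, s
                return True
    return False

assert switch_square(D, n)
assert red_vertices(D) == n * n // 2  # red count is invariant under the switch
```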
Consider the pyramid dimer configuration \(D\) from Example 15 and its inverse \(\bar{D}\) obtained by using the configuration in Figure 2a on even sections and Figure 2b on odd ones instead, which also leads to exchanging the two colours. Thus, all sites \(\mathbf{u}\in\mathbb{Q}_{n}^{d}\) with \(u_{1}<u_{2}\) are red in \(D\) and blue in \(\bar{D}\). Among these sites, for each \(i\in[n-1]\), there are \((n-i)n^{d-2}\) at distance \(i\) from the complement of this set. Since each switching moves at most \(\ell\) red sites at graph distance at most \(\ell\), in order to reach \(\bar{D}\) from \(D\), we need to switch at least \[\frac{n^{d-2}}{\ell^{2}}\sum_{i=1}^{n-1}(n-i)i=\frac{n^{d-1}(n^{2}-1)}{6\ell^{2}}\] alternating cycles.

## 6 Ergodicity on \(\mathbb{T}_{m,n}\): proof of Theorem 8

In this section, we fix positive integers \(m,n\) with \(m\) even. The proof of Theorem 8 proceeds by induction, showing that, starting with any dimer configuration on \(\mathbb{T}_{m,n}\), we can make all dimers on the lower boundary horizontal. Then one can directly apply the induction hypothesis to the remaining box \((0,1)+\mathbb{T}_{m,n-1}\). The base cases of the induction are \(n\in\{1,2\}\). The first one is trivial (there is one dimer configuration), and the second one boils down to switching 4-cycles on \(\mathbb{Q}_{(m,2)}^{2}\) (which is easily seen to be ergodic with diameter \(m-1\)) due to the absence of diagonal dimers. We may therefore assume that \(n\geq 3\). We proceed by a further induction. Let \(x\in[m/2]\) be maximal such that \((2y-1,1),(2y,1)\) is a dimer in the present configuration for all \(y\in[x]\). If \(x=m/2\), we are done. Otherwise, we consider two cases for the other end \(\mathbf{x}\) of the dimer containing \((2x+1,1)\).

Figure 3: Illustration of Case 1 of the proof of Theorem 8.

**Case 1.** Assume \(\mathbf{x}=(2x+1,2)\) (see Figure 3 for an illustration). Let \(\mathbf{y}\) be the other end of the dimer of \((2x+2,2)\).
If \(\mathbf{y}=(2x+2,1)\), we are done after switching the \(4\)-cycle \((2x+1,1),(2x+1,2),(2x+2,2),(2x+2,1)\). We assume this is not the case, so necessarily \((2x+2,1),(2x+3,1)\) is a dimer. If \(\mathbf{y}=(2x+3,2)\), then switching the \(4\)-cycle \((2x+2,1),(2x+3,1),(2x+3,2),(2x+2,2)\) brings us to the previous case, so we are done. If \(\mathbf{y}=(2x+1,3)\), then we are done by switching the \(6\)-cycle \((2x+1,1),(2x+2,1),(2x+3,1),(2x+2,2),(2x+1,3),(2x+1,2)\). We may therefore assume that \(\mathbf{y}=(2x+2,3)\), and note that it suffices to move this dimer to \((2x+2,2),(2x+2,3)\) since all other cases were already dealt with. Let \(\mathbf{z}\) be the other end of the dimer of \((2x+3,2)\). If \(\mathbf{z}=(2x+3,3)\), it suffices to switch the \(4\)-cycle \((2x+2,2),(2x+3,2),(2x+3,3),(2x+2,3)\). If \(\mathbf{z}=(2x+4,1)\), it suffices to switch the \(6\)-cycle \((2x+2,1),(2x+3,1),(2x+4,1),(2x+3,2),(2x+2,3),(2x+2,2)\). Finally, it remains to consider the case \(\mathbf{z}=(2x+4,2)\), which entails that \((2x+4,1),(2x+5,1)\) is a dimer. Then, we can switch the \(4\)-cycle \((2x+4,1),(2x+5,1),(2x+4,2),(2x+3,2)\), returning to the previous case.

**Case 2.** Assume \(\mathbf{x}=(2x,2)\) (see Figure 4 for an illustration). Let \(\mathbf{y}\) be the other end of the dimer of \((2x+1,2)\). If \(\mathbf{y}=(2x,3)\), then switching the \(4\)-cycle \((2x+1,2),(2x,3),(2x,2),(2x+1,1)\) brings us back to Case 1. If \(\mathbf{y}=(2x+2,1)\), then we are done by switching the \(4\)-cycle \((2x+1,1),(2x+2,1),(2x+1,2),(2x+2,2)\). If \(\mathbf{y}=(2x+2,2)\), then \((2x+2,1),(2x+3,1)\) has to be a dimer and switching the \(4\)-cycle \((2x+2,1),(2x+3,1),(2x+2,2),(2x+1,2)\) returns us to the previous case. We may therefore assume that \(\mathbf{y}=(2x+1,3)\) and it suffices to move this dimer to \((2x+1,2),(2x+2,2)\). Let \(\mathbf{z}\) be the other end of the dimer of \((2x+2,2)\).
If \(\mathbf{z}\in\{(2x+2,1),(2x+2,3)\}\), this forms a \(4\)-cycle with \((2x+1,2),(2x+1,3)\) and we are done. The only remaining case is \(\mathbf{z}=(2x+3,2)\), which forces \((2x+2,1),(2x+3,1)\) to be a dimer, and switching the \(4\)-cycle formed by these two dimers returns us to the previous case. This completes the induction as well as the proof of the first statement in Theorem 8. For the second statement, note that we switch at most \(4\) alternating cycles in the process of making the dimer at \((2x+1,1)\) horizontal. **Remark 17**.: Let us note that the proof entails that the minimum degree of \(\mathcal{D}_{6}(\mathbb{T}_{m,n})\) is at least linear in the semi-perimeter \(m+n\). In contrast, the minimum degree of \(\mathcal{D}_{8}(\mathbb{T}_{m,n})\) can be shown to be of order \(mn\) since there is an alternating cycle of length at most \(8\) within the third neighbourhood of each vertex. We also observe that "sufficiently regular" domains such as triangles or hexagons with an even number of vertices can be treated along the lines of Theorem 8.

Figure 4: Illustration of Case 2 of the proof of Theorem 8.

## Acknowledgements

This work was supported by the Austrian Science Fund (FWF): P35428-N. We thank Scott Sheffield and Catherine Wolfram for suggesting the topic to us, and Marcin Lis for several interesting discussions.
2301.04143
Topological defects as lagrangian correspondences
Topological defects attract much recent interest in high-energy and condensed matter physics because they encode (non-invertible) symmetries and dualities. We study codimension-1 topological defects from a hamiltonian point of view, with the defect location playing the role of `time'. We show that the Weinstein symplectic category governs topological defects and their fusion: each defect is a lagrangian correspondence, and defect fusion is their geometric composition. We illustrate the utility of these ideas by constructing S- and T-duality defects in string theory, including a novel topology-changing non-abelian T-duality defect.
Alex S. Arvanitakis
2023-01-10T19:00:00Z
http://arxiv.org/abs/2301.04143v3
# Topological defects as lagrangian correspondences

###### Abstract

Topological defects attract much recent interest in high-energy and condensed matter physics because they encode (non-invertible) symmetries and dualities. We study codimension-1 topological defects from a hamiltonian point of view, with the defect location playing the role of 'time'. We show that the Weinstein symplectic category governs topological defects and their fusion: each defect is a lagrangian correspondence, and defect fusion is their geometric composition. We illustrate the utility of these ideas by constructing S- and T-duality defects in string theory, including a novel topology-changing non-abelian T-duality defect.

Quantum field theories are about more than the physics of pointlike particles. Even outside of String/M-theory -- whose _raison d'etre_ is the physics of fundamental extended objects -- one finds that line, surface, and higher-dimensional objects have much to say, even if they were not originally 'put in by hand' in the theory in question. Early examples are Wilson loops in QCD; their VEVs probe quark confinement. Extended objects and operators supported on such appear generically as mediators of dualities, global symmetries, and their 'higher'/'generalised' counterparts [1]. In this paper we will exclusively study codimension-1 extended objects (of dimension \(\dim M-1\) if the theory in question lives on the manifold \(M\)) which we will call _defects_. (They are also called 'walls' or 'interfaces'.) We will in particular be interested in _topological defects_ -- ones which may be freely deformed such that correlation functions are unchanged -- due to their close connection to dualities [2; 3; 4; 5]. The physical picture is simple: we envision a scenario where the defect locus separates spacetime \(M\) into phase 1 (say inside) and phase 2 (say outside), where phases 1 and 2 can be two copies of the same theory, or possibly two different theories. For e.g.
a spherical defect of radius \(t\), as \(t\) goes from 0 to \(+\infty\) we observe a transition from phase 2 to phase 1; by inserting operators inside or outside this defect sphere, we thus obtain the action of the associated duality transformation on all local operators. Moreover, two topological defects can be _fused_ by deforming them so they lie close to each other; this connects the algebraic aspects of symmetry -- (semi-)group structure -- to the defect picture. Fusion accommodates both conventional symmetries -- related to defects that may fuse into the "invisible" defect -- as well as more exotic non-invertible symmetries. Since the presence of one or more defects breaks Lorentz invariance, we might as well study them in a _hamiltonian formulation_. The hamiltonian 'time' \(t\) will be such that \(t=\text{const.}\) is the defect locus; varying the constant gives a family of defects. We will discuss arbitrary systems in a hamiltonian path-integral formulation, working in finite dimensions for technical simplicity. The main take-away from this work will be that **topological defects and their fusion furnish the Weinstein symplectic category**[6; 7] of symplectic manifolds with lagrangian correspondences -- which are related to canonical transformations in hamiltonian mechanics -- as morphisms. We will demonstrate in the Discussion section that this gives a practical and efficient way to identify topological defects in diverse physical contexts and to study their fusion. In the rest of this letter we will explain what this statement means and give a brief but reasonably complete account of why it is true. We start with textbook hamiltonian mechanics and canonical transformations. Consider a _phase-space action principle_ for e.g. a particle on \(\mathbb{R}\): \[S[x,p]=\int_{\mathcal{T}_{1}}^{T_{\text{F}}}\mathrm{d}t\;\{p(t)\dot{x}(t)-h \big{(}x(t),p(t)\big{)}\} \tag{1}\] where \(\dot{x}=\mathrm{d}x/\mathrm{d}t\). 
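As a quick numerical illustration of this variational setup (our example; the quadratic hamiltonian below is an assumption, not taken from the text): extremising (1) yields Hamilton's equations \(\dot{x}=\partial h/\partial p\), \(\dot{p}=-\partial h/\partial x\), and a symplectic (semi-implicit) Euler step integrates them while keeping \(h\) within \(O(\mathrm{d}t)\) of its initial value:

```python
# Hamilton's equations for the assumed hamiltonian h(x, p) = (x**2 + p**2)/2
# are xdot = dh/dp = p and pdot = -dh/dx = -x; symplectic Euler integrates
# them while h stays close to its initial value for all times.
def h(x, p):
    return 0.5 * (x * x + p * p)

x, p, dt = 1.0, 0.0, 0.01
h0 = h(x, p)
drift = 0.0
for _ in range(10_000):
    p -= dt * x          # momentum update first (symplectic Euler)
    x += dt * p
    drift = max(drift, abs(h(x, p) - h0))
assert drift < 0.01      # energy stays within O(dt) of h0
```

A plain (non-symplectic) Euler step would instead let \(h\) grow steadily, which is why structure-preserving integrators are the natural discrete counterpart of this hamiltonian picture.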
The hamiltonian \(h\) is a function depending on a point \((x,p)\in\mathbb{R}^{2}\) of _phase space_; it characterises the system in question. (We assume it is time-independent for simplicity.) Time evolution with fixed initial and final boundary conditions \(x(T_{\text{I}})=x_{\text{I}},x(T_{\text{F}})=x_{\text{F}}\) is determined by extremising \(S\), yielding Hamilton's equations. A _canonical transformation_ is supposed to express \(x,p\) in terms of new variables \(x_{2},p_{2}\) such that the equations of motion arise from an action \(S[x_{2},p_{2}]\) of the form (1), possibly with different hamiltonian \(h\to h_{2}\). This will be the case if there exists a function \(F\) with \[p\dot{x}-h=p_{2}\dot{x}_{2}-h_{2}+\mathrm{d}F/\mathrm{d}t\,. \tag{2}\] \(F\) is called the _generating function_. With some assumptions, \(F\) can determine the transformation. For example, assume the map \((x,p)\to(x_{2},p_{2})\) is such that we can unambiguously specify any point in phase space by fixing original and new position variables \((x,x_{2})\), and also assume \(F\) depends on \((x,x_{2})\). (This is a "type 1 generating function".) Then the velocities \(\dot{x},\dot{x}_{2}\) are linearly independent of each other and of \((x,x_{2})\), yielding \[p=\partial F/\partial x\,,\quad p_{2}=-\partial F/\partial x_{2}\,,\quad h_{2}=h; \tag{3}\] this can be solved to produce a map \((x,p)\to(x_{2},p_{2})\) under favourable conditions. (We return to this later.)

## II Endpoint contributions for mechanics on manifolds

Let us review a subtlety in hamiltonian dynamics on a symplectic manifold \(M\), relevant whenever the symplectic form \(\omega\) does not admit a symplectic potential 1-form \(\theta\) with \(\omega={\rm d}\theta\) that is non-singular everywhere. (I.e. \(\omega\) is not exact. Here and henceforth we employ the language of differential forms; \({\rm d}\) is the de Rham differential.)
We assume \(\omega\) obeys a Dirac quantisation condition, \[\int_{C}\omega\in 2\pi\mathbb{Z} \tag{4}\] for \(C\) any closed 2-dimensional cycle inside \(M\). The previous section describes the case \(M=T^{\star}\mathbb{R}\) with symplectic form \(\omega={\rm d}(p{\rm d}x)={\rm d}p\wedge{\rm d}x\), and \(\theta=p{\rm d}x\). The problem of generalising the action (1) to a symplectic manifold where \(\omega\) is not exact is mathematically the same as the problem of coupling a charged particle to the electromagnetic field generated by a magnetic monopole. The subtlety in the monopole case is also that the electromagnetic potential \({\cal A}\) is not well-defined everywhere, so that the usual coupling \(\int{\rm d}t\;{\cal A}_{\mu}(x(t))\dot{x}^{\mu}\) to the particle worldline needs a careful prescription. The correct prescription was given long ago by Wu and Yang [8]. First, we cover \(M\) by open subsets \(U_{\alpha}\) such that each \(U_{\alpha}\) and all intersections \(U_{\alpha}\cap U_{\beta}\) are contractible. Then for each \(\alpha\) there exists a locally-defined symplectic potential 1-form \(\theta_{\alpha}\) such that \({\rm d}\theta_{\alpha}=\omega|_{U_{\alpha}}\) due to the Poincare lemma. (\(\omega|_{U_{\alpha}}\) is the restriction.) On overlaps, using the Poincare lemma again, we find scalar functions \(g_{\alpha\beta}\) describing a 'gauge transformation' from \(\theta_{\alpha}\) to \(\theta_{\beta}\): \[\theta_{\beta}-\theta_{\alpha}={\rm d}g_{\alpha\beta}\,. \tag{5}\] Given any curve \(\gamma:[T_{\rm I},T_{\rm F}]\to M\), we write the analogue of the action (1) by splitting the curve at arbitrary points within any overlap \(U_{\alpha}\cap U_{\beta}\). If e.g.
the curve happens to lie entirely within two contractible patches, \(U_{\alpha}\ni\gamma(T_{\rm I})\) and \(U_{\beta}\ni\gamma(T_{\rm F})\), with \(\gamma(t^{\prime})\in U_{\alpha}\cap U_{\beta}\), then we write \[\int_{T_{\rm I}}^{t^{\prime}}\gamma^{\star}\theta_{\alpha}+\int_{t^{\prime}}^{T_{\rm F}}\gamma^{\star}\theta_{\beta}+g_{\alpha\beta}(\gamma(t^{\prime}))\,. \tag{6}\] The \(g_{\alpha\beta}\) term ensures independence from the arbitrary choice of intermediate \(t^{\prime}\). Given the quantisation condition (4), the construction is independent of choices [9]. The upshot is: we may make arbitrary choices of local symplectic potential, _at the price of generating contributions that depend on the endpoints \(\gamma(T_{\rm I}),\gamma(T_{\rm F})\)_ (and possibly on the intermediate points within overlaps). We thus write the action for hamiltonian mechanics on a symplectic manifold as (where we reinstated the hamiltonian \(h\in C^{\infty}(M)\), and where \(\gamma(T_{\rm I})\equiv p_{\rm I},\;\gamma(T_{\rm F})\equiv p_{\rm F}\)) \[S[\gamma]=S_{\rm I}(p_{\rm I})+S_{\rm F}(p_{\rm F})+\int_{T_{\rm I}}^{T_{\rm F}}\gamma^{\star}\theta-\gamma^{\star}(h){\rm d}t \tag{7}\] The "endpoint contribution" functions \(S_{\rm I}\) and \(S_{\rm F}\) compensate for the ambiguity in the choice of symplectic potential near the endpoints \(p_{\rm I}\) and \(p_{\rm F}\), while \(\int_{T_{\rm I}}^{T_{\rm F}}\gamma^{\star}\theta\) is interpreted via the Wu-Yang prescription (as in (6)). In particular we can set \(S_{\rm I}=S_{\rm F}=0\) at the price of fixing the symplectic potentials near each endpoint.

## III Lagrangian correspondences and their composition

A _lagrangian submanifold_, or just _lagrangian_ \(L\) of a symplectic manifold \(M\) is a maximal submanifold where \(\omega\) vanishes; explicitly, \(\iota^{\star}_{L}\omega=0\) for \(\iota_{L}\) the inclusion map \(\iota_{L}:L\hookrightarrow M\).
(In other words, \(\omega(v_{1},v_{2})=0\) for all vectors \(v_{1},v_{2}\) tangent to \(L\).) Maximality in the case \(\dim M<\infty\) means \(\dim L=\dim M/2\). For the particle on the line (1), both submanifolds \(p=0\) and \(x=0\) are lagrangian. In general a choice of lagrangian submanifold amounts to a (local) choice of 'position' and 'momentum' variables on _any_ symplectic manifold: this is realised via the _Weinstein lagrangian neighbourhood theorem_, which states that an open neighbourhood of \(L\) in \(M\) is isomorphic to an open neighbourhood of the zero section of the cotangent bundle \(T^{\star}L\), and \(L\subset M\) maps to the zero section \(L\hookrightarrow T^{\star}L\). A crucial corollary is that given a lagrangian \(L\), we can always find a coordinate system on \(M\) with \(\dim M/2\) 'momenta' \(p_{a}\) and 'positions' \(x^{a}\), and an 'adapted' local symplectic potential \(\theta_{L}\) with \[\theta_{L}=p_{a}{\rm d}x^{a} \tag{8}\] with \(p_{a}=0\) for all \(a=1,2,\cdots(\dim M/2)\) specifying the chosen lagrangian \(L\). _Canonical transformations are also properly understood as lagrangians_: for example, the "type 1 transformation" of (3) may be described via the lagrangian submanifold \(L_{0}\) specified as the locus \(p=p_{2}=0\) in the symplectic manifold \(T^{\star}\mathbb{R}\times T^{\star}\mathbb{R}\), with symplectic form \[{\rm d}p\wedge{\rm d}x-{\rm d}p_{2}\wedge{\rm d}x_{2}. \tag{9}\] The adapted potential is \(\theta_{L_{0}}=p{\rm d}x-p_{2}{\rm d}x_{2}\). Then, given any function \(F\in C^{\infty}(L_{0})\), we find a lagrangian \(L_{F}\) specified by (3); geometrically, \(L_{F}\) is the image of \(L_{0}\) under the hamiltonian flow generated by \(F\). Conversely: the general apparatus of generating functions amounts to finding lagrangians specified, in this way, by functions \(F\). (The "type" is encoded in the choice of \(L_{0}\).) From this perspective the invertibility of (3) is immaterial.
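As a toy illustration (our example, not taken from the text): the type-1 generating function \(F(x,x_{2})=x\,x_{2}\) gives, via (3), \(p=x_{2}\) and \(p_{2}=-x\), i.e. the linear canonical map \((x,p)\mapsto(p,-x)\). A linear map is canonical precisely when its Jacobian \(J\) obeys \(J^{T}\Omega J=\Omega\) for an (antisymmetric, nondegenerate) matrix \(\Omega\) of the symplectic form, which is quick to check:

```python
# F(x, x2) = x * x2 gives, via (3), p = dF/dx = x2 and p2 = -dF/dx2 = -x,
# i.e. the linear map (x, p) -> (x2, p2) = (p, -x).  Its Jacobian J must
# satisfy J^T Omega J = Omega for the map to be a canonical transformation.
J = [[0, 1],      # d(x2, p2) / d(x, p)
     [-1, 0]]
Omega = [[0, 1],  # a matrix of the symplectic form on R^2 in coords (x, p)
         [-1, 0]]

def mat(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

JT = [[J[j][i] for j in range(2)] for i in range(2)]
assert mat(JT, mat(Omega, J)) == Omega   # the map is canonical
```

Equivalently, the graph of this map, cut out by \(p-x_{2}=0\) and \(p_{2}+x=0\), is a lagrangian in \((T^{\star}\mathbb{R}\times T^{\star}\mathbb{R},\;{\rm d}p\wedge{\rm d}x-{\rm d}p_{2}\wedge{\rm d}x_{2})\), which is the correspondence point of view advocated here.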
A **lagrangian correspondence** from symplectic manifold \((M_{1},\omega_{1})\) to symplectic manifold \((M_{2},\omega_{2})\) is a lagrangian \(L_{12}\) inside \((M_{1}\times M_{2},\omega_{2}-\omega_{1})\). We have just seen how lagrangian correspondences appear as canonical transformations in mechanics. The concept appears more prominently in mathematics in the context of the _Weinstein symplectic category_ [6; 7]. Put briefly, the idea is that lagrangian correspondences should be considered as morphisms between symplectic manifolds. The motivation comes from quantisation: we should assign a Hilbert space \(H_{1}\) to a symplectic manifold \(M_{1}\), and -- perhaps less obviously -- a state \(|\psi\rangle\) to a lagrangian \(L\subset M\). If \(M_{1}\times M_{2}\) is assigned the tensor product \(H_{1}\otimes H_{2}\), then a linear map \(H_{1}\to H_{2}\) corresponds to an element of \(H_{1}^{\star}\otimes H_{2}\), which should come from a lagrangian correspondence \(L_{12}\). Therefore, two lagrangian correspondences \(L_{12}\) (from \(M_{1}\) to \(M_{2}\)) and \(L_{23}\) (from \(M_{2}\) to \(M_{3}\)) ought to be composable into a third one \(L_{13}\). We can define the set \[L_{13}=\Pi_{13}\Big{(}(L_{12}\times L_{23})\cap(M_{1}\times\Delta_{M_{2}}\times M_{3})\Big{)} \tag{10}\] where \(\Delta_{M_{2}}\subset M_{2}\times M_{2}\) is the diagonal, and \(\Pi_{13}:M_{1}\times M_{2}\times M_{2}\times M_{3}\to M_{1}\times M_{3}\) is the projection. This set is not a manifold _unless_ the intersection is transverse; in that case \(L_{13}\) is an immersed lagrangian submanifold, and the lagrangian correspondence \(L_{13}\) is called the **geometric composition** of \(L_{12}\) and \(L_{23}\). For instance, if \(L_{12}\) and \(L_{23}\) are the graphs of symplectomorphisms \(\phi:M_{1}\to M_{2}\) and \(\psi:M_{2}\to M_{3}\), the intersection is automatically transverse and \(L_{13}\) is the graph of \(\psi\circ\phi\). We refer to [10, Section 2] for more on lagrangian correspondences, and to the recent review [11] for context and mathematical applications thereof.

## IV Topological Hamiltonian defects

We now study a defect on the worldline of a particle at time \(t=t_{12}\) via a generalisation of the action (7).
We write \(S=I_{12}+S^{\rm D}_{12}\), with \(S^{\rm D}_{12}(p_{12})\) a function depending on the defect location \(p_{12}\equiv(\gamma_{1}(t_{12}),\gamma_{2}(t_{12}))\), and \[I_{12}=\int_{-\infty}^{t_{12}}\gamma_{1}^{\star}\theta_{1}-\gamma_{1}^{\star}(h_{1}){\rm d}t+\int_{t_{12}}^{+\infty}\gamma_{2}^{\star}\theta_{2}-\gamma_{2}^{\star}(h_{2}){\rm d}t\,. \tag{11}\] The defect separates "phase 1" (\(t<t_{12}\)), where the particle moves on the symplectic manifold \((M_{1},\omega_{1})\) with hamiltonian \(h_{1}\) along the curve \(\gamma_{1}:(-\infty,t_{12}]\to M_{1}\), from "phase 2" (\(t>t_{12}\)). Henceforth we are not concerned with the endpoints \(T_{\rm I},T_{\rm F}\), so we set \(T_{\rm I}=-\infty\), \(T_{\rm F}=+\infty\). However, the _defect action_ \(S^{\rm D}_{12}\) will be important. A **topological defect** is one that can be moved around at no cost, whence the defining condition is \[\frac{{\rm d}S}{{\rm d}t_{12}}=0\,. \tag{12}\] We will derive constraints on \(p_{12}\) from this. To proceed we employ a _folding trick_ [12]: define the "folded" curve \(\Gamma_{12}\) on \(M_{1}\times M_{2}\) as \[\Gamma_{12}(\tau)=\left(\gamma_{1}(t_{12}-\tau),\gamma_{2}(t_{12}+\tau)\right), \tag{13}\] then \(S\) takes the form (7) with a single endpoint at \(\tau=0\): \[S[\Gamma_{12}]=S^{\rm D}_{12}(p_{12})+\int_{0}^{+\infty}\Gamma_{12}^{\star}\Theta_{12}-\Gamma_{12}^{\star}(H_{12}){\rm d}\tau\,. \tag{14}\] Here \(\Theta_{12}=\theta_{2}-\theta_{1}\) is a (local) symplectic potential for the symplectic form \(\omega_{2}-\omega_{1}\) on \(M_{1}\times M_{2}\). (There was a sign change because \(t=t_{12}-\tau\) is orientation-reversing.) The integral is again defined via the Wu-Yang prescription, which is sensible since \(\omega_{2}-\omega_{1}\) satisfies the quantisation condition (4) if \(\omega_{1}\) and \(\omega_{2}\) do. The "folded" hamiltonian is the sum \(H_{12}=h_{1}+h_{2}\). Consistency of the variational principle (14) requires boundary conditions at \(\tau=0\).
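The relative sign in \(\Theta_{12}=\theta_{2}-\theta_{1}\) can be verified directly; the following check is not in the text but follows from (11) and (13). Substituting \(t=t_{12}-\tau\) in the phase-1 terms of (11) and writing \(c(\tau)=\gamma_{1}(t_{12}-\tau)\), \[\int_{-\infty}^{t_{12}}\gamma_{1}^{\star}\theta_{1}=\int_{+\infty}^{0}c^{\star}\theta_{1}=-\int_{0}^{+\infty}c^{\star}\theta_{1}\,,\] while the hamiltonian term keeps its sign, because the flip of the measure \({\rm d}t=-{\rm d}\tau\) is compensated by the exchange of integration limits: \[\int_{-\infty}^{t_{12}}\gamma_{1}^{\star}(h_{1})\,{\rm d}t=\int_{0}^{+\infty}c^{\star}(h_{1})\,{\rm d}\tau\,.\] This is why \(\theta_{1}\) enters \(\Theta_{12}\) with a minus sign while \(H_{12}=h_{1}+h_{2}\) is a plain sum.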
We anticipate a future calculation and select the boundary condition \(\Gamma_{12}(0)\equiv p_{12}\in L_{12}\) for \(L_{12}\) a lagrangian correspondence from \(M_{1}\) to \(M_{2}\), i.e. a lagrangian for \((M_{1}\times M_{2},\omega_{2}-\omega_{1})\). We can thus employ the lagrangian neighbourhood theorem and produce an adapted (to \(L_{12}\)) potential \(\Theta_{12}=P_{A}{\rm d}X^{A}\) near \(\Gamma_{12}(0)=p_{12}\), so that the boundary condition is simply \(P_{A}(0)=0\) for _all_ curves. (The index \(A\) takes \((\dim M_{1}+\dim M_{2})/2\) values.) Since we have made an off-shell choice of symplectic potential near the endpoint, we can thus set \(S^{\rm D}_{12}=0\). If we write \({\rm d}/{\rm d}t_{12}=\delta\), then using (13) \[\delta\Gamma_{12}^{\star}(H_{12})=\frac{{\rm d}}{{\rm d}\tau}\Gamma_{12}^{\star}(h_{2}-h_{1})\,, \tag{15}\] and (with \(\Omega_{12}=\omega_{2}-\omega_{1}\)) \[\delta(\Gamma_{12}^{\star}\Theta_{12})=\frac{{\rm d}}{{\rm d}\tau}(P_{A}\delta X^{A}){\rm d}\tau+\Omega_{12}(\delta\Gamma_{12},\dot{\Gamma}_{12}){\rm d}\tau\,. \tag{16}\] In the last term, \(\dot{\Gamma}_{12}\) is the tangent vector to the curve (13), while \(\delta\Gamma_{12}\) is the vector obtained by the \(\delta={\rm d}/{\rm d}t_{12}\) derivative. However, the two are related by \(\delta\Gamma_{12}=C\dot{\Gamma}_{12}\) for a certain constant matrix \(C\) (due to (13)); \(C\) satisfies \[C^{2}=1\,,\qquad\Omega_{12}(CV,CU)=\Omega_{12}(V,U)\,, \tag{17}\] whence \(\Omega_{12}(\delta\Gamma_{12},\dot{\Gamma}_{12})=0\). (In "factorised" coordinates, \(C\) changes the sign of vectors pointing along \(M_{1}\), from which these identities are now obvious.) Using the boundary condition \(P_{A}=0\) at \(\tau=0\), the first term in (16) also fails to contribute, so we obtain \[\delta S=\Gamma_{12}^{\star}(h_{1}-h_{2})|_{\tau=0}=0.
\tag{18}\] Therefore, the topological defect condition (12) holds _if_ the defect location \(p_{12}\) lies on a lagrangian correspondence \(L_{12}\), _and_ the hamiltonians \(h_{1,2}\) match on each side of the defect. This is essentially the matching of stress-energy tensors on topological defects in field theory (see e.g. [5]). We now discuss fusion. We take two topological defects at locations \(t=t_{12}\) and \(t=t_{23}\) that define a segmented worldline along symplectic manifolds \(M_{1}\) (for \(t\leq t_{12}\)), \(M_{2}\) (for \(t_{12}\leq t\leq t_{23}\)) and \(M_{3}\) (for \(t\geq t_{23}\)), with symplectic forms and hamiltonian functions \((\omega_{1},h_{1})\) on \(M_{1}\), etc. We have associated lagrangian correspondences \(L_{12}\) (to the \(t_{12}\) defect) and \(L_{23}\) (to the \(t_{23}\) defect) as before. To write the action we introduce the worldline 1-forms \[\alpha_{1,2,3}=\gamma^{\star}_{1,2,3}\theta_{1,2,3}-\gamma^{\star}_{1,2,3}(h_{ 1,2,3})\mathrm{d}t\,. \tag{19}\] For the action we take simply \[S=\int_{-\infty}^{t_{12}}\alpha_{1}+\int_{t_{12}}^{t_{23}}\alpha_{2}+\int_{t_{ 23}}^{+\infty}\alpha_{3} \tag{20}\] which is again interpreted via the Wu-Yang prescription and where we have made a choice of symplectic potentials close to \(t_{12}\) and \(t_{23}\) that are adapted to the lagrangian correspondences so as to set the defect actions to zero. Since both defects are topological, we can shift them around freely. By _defect fusion_, we mean the defect obtained in the coincidence limit \(t_{12}\to t_{23}\). This ought to be a topological defect between \(M_{1}\) and \(M_{3}\), thus a lagrangian correspondence \(L_{13}\) from \(M_{1}\) to \(M_{3}\). We will calculate \(L_{13}\) from \(L_{12}\) and \(L_{23}\). Let us write \[S=\int_{-\infty}^{t_{12}}\alpha_{1}+\int_{t_{12}}^{+\infty}\alpha_{2}+\int_{- \infty}^{t_{23}}\alpha_{2}+\int_{t_{23}}^{+\infty}\alpha_{3}-\int_{-\infty}^{+ \infty}\alpha_{2}\,. 
\tag{21}\] This entails an arbitrary extension of \(\gamma_{2}\) to the entire real line; the arbitrariness will drop out shortly. The point of the rewriting is that we may 'fold' the _first four_ terms pairwise to obtain terms of the form (14). This yields \[\int_{0}^{+\infty}\Gamma^{\star}_{12}\Theta_{12}+\Gamma^{\star}_{23}\Theta_{23}-(\Gamma^{\star}_{12}(H_{12})+\Gamma^{\star}_{23}(H_{23}))\mathrm{d}\tau\,. \tag{22}\] Here \(\Theta_{12}\) is a symplectic potential for \(\omega_{2}-\omega_{1}\), \(\Theta_{23}\) is one for \(\omega_{3}-\omega_{2}\), and \(H_{12}=h_{1}+h_{2},H_{23}=h_{2}+h_{3}\). \(\Gamma_{12}\) is the curve (13), while \(\Gamma_{23}\) is given similarly. We interpret this via the twice-folded curve \(\Gamma_{1223}(\tau)=(\Gamma_{12}(\tau),\Gamma_{23}(\tau))\) on the manifold \(M_{1}\times M_{2}\times M_{2}\times M_{3}\): \[\left(\gamma_{1}(t_{12}-\tau),\gamma_{2}(t_{12}+\tau),\gamma_{2}(t_{23}-\tau),\gamma_{3}(t_{23}+\tau)\right). \tag{23}\] For \(\tau=0\) this is \((p_{12},p_{23})\) where \(p_{12}\in L_{12}\) and \(p_{23}\in L_{23}\). Since both original defects are topological, we can send e.g. \(t_{23}\to 0\), then the coincidence limit is \(t_{12}\to 0\), and the twice-folded curve becomes \[\Gamma_{1223}(\tau)=\left(\gamma_{1}(-\tau),\gamma_{2}(+\tau),\gamma_{2}(-\tau),\gamma_{3}(+\tau)\right). \tag{24}\] We see \(\gamma_{2}\) is traversed twice as \(\tau\) goes from \(0\) to \(+\infty\); this produces an integral that cancels the last term of (21), thus eliminating the 'intermediate' manifold \(M_{2}\). Moreover, for \(\tau=0\) we see (24) takes the form \((p_{1},p_{2},p_{2},p_{3})\) for points \(p_{1,2,3}\in M_{1,2,3}\). Since \(\Gamma_{12}(0)\equiv p_{12}=(p_{1},p_{2})\) and \(\Gamma_{23}(0)\equiv p_{23}=(p_{2},p_{3})\) lie on \(L_{12}\) and \(L_{23}\) respectively, we have arrived precisely at the geometric composition of the lagrangian correspondences (10).
The hamiltonians on each side of the fused defect \(L_{13}\) trivially agree with each other, so this completes the argument.

## V Discussion

Since canonical transformations (in the physics sense) give rise to lagrangian correspondences, we can construct a topological defect for every canonical transformation. We exploit this to construct duality defects. As a first example, take \(d=4\) U(1) euclidean gauge theory, with complex coupling \(\tau=\theta/(2\pi)+4\pi i/g^{2}\) (where \(g\) is the gauge coupling and \(\theta\) the theta angle). The phase space for electromagnetism is spanned by the 'positions' \(\mathcal{A}_{i}(\sigma)\) (\(i=1,2,3\), \(\sigma\in\mathbb{R}^{3}\)) and their conjugate 'momenta' \(\Pi^{i}(\sigma)\); the former are the spatial components of the gauge potential 1-form \(\mathcal{A}\). In [13] there is a canonical transformation that implements \(\tau\to-\tau^{-1}\); thus, there exists a topological defect between the theory with coupling \(\tau\) and the theory with coupling \(-\tau^{-1}\), as was found in [4; 5] by entirely different means. Explicitly: we have the type 1 generating function \[F=\int_{M_{4}}\mathcal{F}\wedge\tilde{\mathcal{F}} \tag{25}\] in terms of the original and dual field strengths \(\mathcal{F}=\mathrm{d}\mathcal{A},\tilde{\mathcal{F}}=\mathrm{d}\tilde{\mathcal{A}}\). (For this to work we need the _reduced_ phase space for electromagnetism, obtained by symplectic reduction modulo the Gauss law constraint \(\partial_{i}\Pi^{i}=0\).) We can similarly obtain S-duality defects on the worldvolume of the D3 brane in type IIB string theory [14] (including fermions [15]), as well as between a IIB superstring and a D1-brane [16]. We may also easily recover worldsheet T-duality defects from the T-duality canonical transformation given in [17].
This extends to other flavours of T-duality, including Poisson-Lie (via [18; 19]) and even fermionic T-duality (via [20]); for the latter case topological defects were also found in [21] with different methods. In the Poisson-Lie case, we have thus made contact with the recent "Poisson-Lie defects" of [22]. We can say more about Poisson-Lie T-duality defects, however. In [23] we recently co-introduced a joint generalisation of topological [24; 25] and Poisson-Lie T-dualities in the form of lagrangian correspondences between the phase spaces for string propagation on a principal _bibundle_ \(G\hookrightarrow M\dashrightarrow B\) and its dual bibundle \(\tilde{G}\hookrightarrow\tilde{M}\dashrightarrow B\), whose fibres \(G\) and \(\tilde{G}\) are Poisson-Lie dual groups; this 'bibundle duality' can realise topology changes in the global fibration structure of target space akin to those of topological (abelian) T-duality, which is a special case. With the result of the current paper we can thus realise bibundle duality via a topological worldsheet defect. (This way, the Drinfeld double bibundles of [23] give novel 'bibranes' in the terminology of [2].) There are also examples of dualities that may be understood this way outside of string theory (which is admittedly over-represented above on account of the author's preferences/expertise). For instance, bosonisation in two-dimensional field theory has been described in terms of canonical transformations [26; 27] and thus may be realised via topological defects, as S-duality in \(d=4\) gauge theory was above. There are a few notable omissions from our brief treatment. The first is the introduction of time-dependence (in the hamiltonians \(h_{1},h_{2}\)), which _a priori_ motivates introducing time-dependence in the defect action \(S^{\mathrm{D}}_{12}(p_{12},t_{12})\).
This changes little: choosing \(\Gamma_{12}(0)\in L_{12}\) as a boundary condition (for time-independent \(L_{12}\)) along with appropriate endpoint contributions eventually forces \(S^{\mathrm{D}}_{12}=S^{\mathrm{D}}_{12}(t_{12})\) without loss of generality. The effect is to introduce a discontinuity in the hamiltonians, so \(h_{2}=h_{1}+\partial S^{\mathrm{D}}_{12}/\partial t_{12}\) on the defect. Another omission has to do with the transversality issue discussed below (10): the fusion of two topological defects may fail to be a defect, in the sense that the corresponding lagrangian may be singular (as a manifold). (For this reason Weinstein himself called it the symplectic "category".) Fortunately, it was shown in [10, Proposition 5.2.1] and [28, Theorem 2.3] that the intersection of (10) is rendered transverse if the lagrangians are perturbed appropriately; this could be realised in our picture by switching on small defect actions (\(S^{\mathrm{D}}_{12}\) and \(S^{\mathrm{D}}_{23}\)). The physical implications of this (if any) are left for the future. Finally, we will also be leaving the treatment of gauge theories, i.e. hamiltonian systems with first-class constraints, for the future. That case includes temporal reparameterisation-invariance, which introduces many subtleties; however, we note that composition of correspondences already allows us to construct topological defects in the _reduced (gauged) theory_ from topological defects in the original theory: the key point is that coisotropic reduction gives a lagrangian correspondence between the original and reduced symplectic manifolds [29, Section 3]. In this context we also expect that introducing degrees of freedom lying on the defect might be necessary, as is common when discussing topological defects in field theory.

Acknowledgments: This project arose from inspiring interactions with Lewis Cole, Saskia Demulder, and Dan Thompson, who also provided helpful feedback.
I would also like to thank Jonny Evans for providing helpful references (on MathOverflow [30]) and Nate Bottman for a clarification on his work. I am supported by the FWO-Vlaanderen through the project G006119N, as well as by the Vrije Universiteit Brussel through the Strategic Research Program "High-Energy Physics". I am also supported by an FWO Senior Postdoctoral Fellowship (number 1265122N). I am grateful to the Mainz Institute for Theoretical Physics (MITP) of the DFG Cluster of Excellence PRISMA\({}^{+}\) (Project ID 39083149) for its hospitality and its partial support during the completion of this work.
2308.11723
Modeling Dust Production, Growth, and Destruction in Reionization-Era Galaxies with the CROC Simulations II: Predicting the Dust Content of High-Redshift Galaxies
We model the interstellar dust content of the reionization era with a suite of cosmological, fluid-dynamical simulations of galaxies with stellar masses ranging from $\sim 10^5 - 10^9 M_{\odot}$ in the first $1.2$ billion years of the universe. We use a post-processing method that accounts for dust creation and destruction processes, allowing us to systematically vary the parameters of these processes to test whether dust-dependent observable quantities of galaxies at these epochs could be useful for placing constraints on dust physics. We then forward model observable properties of these galaxies to compare to existing data. We find that we are unable to simultaneously match existing observational constraints with any one set of model parameters. Specifically, the models which predict the largest dust masses $D/Z \gtrsim 0.1$ at $z = 5$ -- because of high assumed production yields and/or efficient growth via accretion in the interstellar medium -- are preferred by constraints on total dust mass and infrared luminosities, but these models produce far too much extinction in the ultraviolet, preventing them from matching observations of $\beta_{\rm UV}$. To investigate this discrepancy, we analyze the relative spatial distribution of stars and dust as probed by infrared (IR) and ultraviolet (UV) emission, which appear to exhibit overly symmetric morphologies compared to existing data, likely due to the limitations of the stellar feedback model used in the simulations. Our results indicate that the observable properties of the dust distribution in high redshift galaxies are a particularly strong test of stellar feedback.
Clarke J. Esmerian, Nickolay Y. Gnedin
2023-08-22T18:27:14Z
http://arxiv.org/abs/2308.11723v1
Modeling Dust Production, Growth, and Destruction in Reionization-Era Galaxies with the CROC Simulations II: Predicting the Dust Content of High-Redshift Galaxies

###### Abstract

We model the interstellar dust content of the reionization era with a suite of cosmological, fluid-dynamical simulations of galaxies with stellar masses ranging from \(\sim 10^{5}-10^{9}M_{\odot}\) in the first 1.2 billion years of the universe. We use a post-processing method that accounts for dust creation and destruction processes, allowing us to systematically vary the parameters of these processes to test whether dust-dependent observable quantities of galaxies at these epochs could be useful for placing constraints on dust physics. We then forward model observable properties of these galaxies to compare to existing data. We find that we are unable to simultaneously match existing observational constraints with any one set of model parameters. Specifically, the models which predict the largest dust masses \(D/Z\gtrsim 0.1\) at \(z=5\) - because of high assumed production yields and/or efficient growth via accretion in the interstellar medium - are preferred by constraints on total dust mass and infrared luminosities, but these models produce far too much extinction in the ultraviolet, preventing them from matching observations of \(\beta_{\rm UV}\). To investigate this discrepancy, we analyze the relative spatial distribution of stars and dust as probed by infrared (IR) and ultraviolet (UV) emission, which appear to exhibit overly symmetric morphologies compared to existing data, likely due to the limitations of the stellar feedback model used in the simulations. Our results indicate that the observable properties of the dust distribution in high redshift galaxies are a particularly strong test of stellar feedback.

dust - galaxies: formation - cosmology: theory - methods: numerical

## 1 Introduction

The successful launch and commissioning of JWST have begun a new era in astrophysics.
Its unprecedented sensitivity to the emission of high redshift (\(z\gtrsim 5\)) galaxies has already enabled the rapid accumulation of data on the earliest galaxies. The amount and properties of interstellar dust in these galaxies have a fundamental impact on observations across the entire electromagnetic spectrum, and consequently play a central role in the understanding of this groundbreaking new data. Motivated by this, we have developed a model for the evolution of dust in simulated high-redshift galaxies. As described in Esmerian & Gnedin (2022), we post-process simulations from the Cosmic Reionization on Computers project (CROC; Gnedin, 2014; Gnedin & Kaurov, 2014; Gnedin, 2016) with a model that determines the fraction of heavy elements in the interstellar medium (ISM) locked in solid dust grains, accounting for their nucleation in the ejecta of Asymptotic Giant Branch (AGB) stars and supernovae (SN), their growth via accretion of gas-phase metals in the cold molecular ISM, and their destruction via thermal sputtering due to hot gas from supernova remnants (SNRs). Since the rates of each of these processes are very uncertain due to uncertainties in the material properties of dust grains, the mathematical terms describing them are parameterized with uncertainty factors that enable the exploration of the wide range of theoretical possibilities. This model is calculated along Lagrangian tracers that sample gas dynamical quantities along pathlines for a representative fraction of the gas in Lagrangian regions of galaxies. This post-processing technique enables the exploration of dust models with a wide range of parameter choices, of course at the expense of some realism because dust effects are not calculated during simulation run-time.
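The interplay of these source and sink terms can be illustrated with a deliberately simplified one-zone sketch in the spirit of classic source/sink dust-evolution models; all rate coefficients and timescales below are illustrative placeholders, not the parameter values of the CROC post-processing model.

```python
# Deliberately simplified one-zone sketch of the dust source/sink terms
# described above: condensation in stellar ejecta, growth by accretion
# of gas-phase metals, and destruction by SNR sputtering.
# All parameter values are illustrative placeholders, NOT the CROC ones.

def evolve_dust(t_end_myr=1000.0, dt_myr=0.1,
                sfr=1.0,             # star formation rate [Msun/yr], constant
                y_z=0.02,            # metal yield per unit mass of stars formed
                f_cond=0.3,          # fraction of new metals condensed into grains
                tau_acc_myr=100.0,   # accretion-growth timescale in cold gas
                tau_dest_myr=300.0): # SNR destruction timescale
    """Forward-Euler integration of metal and dust masses; returns (t, D/Z)."""
    m_z, m_d = 0.0, 0.0   # metal and dust masses [Msun]
    times, d_over_z = [], []
    for i in range(round(t_end_myr / dt_myr)):
        dm_z = y_z * sfr * dt_myr * 1.0e6     # metals injected this step [Msun]
        m_z += dm_z
        prod = f_cond * dm_z                  # condensation in stellar ejecta
        grow = m_d * (1.0 - m_d / m_z) / tau_acc_myr * dt_myr  # ISM accretion
        dest = m_d / tau_dest_myr * dt_myr    # sputtering in SNR-heated gas
        m_d = min(max(m_d + prod + grow - dest, 0.0), m_z)
        times.append((i + 1) * dt_myr)
        d_over_z.append(m_d / m_z)
    return times, d_over_z
```

Growth saturates as \(D/Z\to 1\) because the \((1-M_{d}/M_{Z})\) factor shuts off accretion when few gas-phase metals remain; the late-time \(D/Z\) then reflects the competition between the assumed growth and destruction timescales, which is qualitatively the behavior the parameterized uncertainty factors control.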
In Esmerian and Gnedin (2022), we explored the full parameter space of the model by focusing on a single massive galaxy (\(M_{\rm vir}\sim 10^{11}M_{\odot}\), \(M_{*}\sim 10^{9}M_{\odot}\) at \(z=5\)) and the interplay of different dust physical processes in a fully dynamic ISM. We found that reasonable parameter choices for the dust model predicted dust contents and dust-sensitive observables broadly consistent with the extant observational constraints. For purposes of computational feasibility, this method development was done using the single most-massive galaxy in a \(10h^{-1}\) cMpc cosmological volume. However, because the initial conditions of large-scale structure are Gaussian random fields, galaxies form in dark matter halos with a wide range of masses and formation histories, which we know to fundamentally impact galaxy properties (see Behroozi et al., 2019, for contemporary constraints). Theoretical efforts must therefore strive to sample the distributions of halo masses and formation histories as completely as possible, in order to make predictions for the galaxy population in the real universe. This modelling is especially urgent given the recent onslaught of data from JWST, coupled with ambitious programs using radio telescope arrays such as ALMA, that are rapidly fleshing out the properties of the high-redshift galaxy population. Some of the most exciting and puzzling results from this recent revolution have implicated cosmic dust in a central role. Exciting claims of anomalously bright galaxies and a surprisingly high star formation rate density at \(z>10\) abound (see Bouwens et al., 2023, and references therein), although these are dependent on photometric candidate detections without spectroscopic confirmation and therefore subject to possible revision.
If confirmed, reconciling these with the mainstream galaxy formation models may present a challenge, and the many uncertainties of dust enrichment in the first galaxies have been invoked as possible explanations (Mirocha and Furlanetto, 2023; Mason et al., 2023; Ferrara et al., 2023). Galaxies with spectroscopic confirmation rest on surer footing, and thus far all show evidence for little dust attenuation at \(z\gtrsim 10\) (Roberts-Borsani et al., 2022; Arrabal Haro et al., 2023; Bunker et al., 2023; Curtis-Lake et al., 2023; Tacchella et al., 2023; Arrabal Haro et al., 2023). Nonetheless, the reionization epoch is anything but dust-free. ALMA programs REBELS (Bouwens et al., 2022; Inami et al., 2022) and ALPINE (Le Fevre et al., 2020) and others (Bowler et al., 2022) have detected thermal dust continuum emission that firmly establishes significant amounts of dust in at least some galaxies by \(z=5-7\) (Fudamoto et al., 2020; Pozzi et al., 2021; Algera et al., 2023; Barrufet et al., 2023). These observations also hint at complicated dust morphologies with significant spatial displacement from the stellar component (Bowler et al., 2022; Inami et al., 2022). As well, Rodighiero et al. (2023) present an analysis of JWST candidate detections that suggests significant dust obscuration at \(8<z<13\). Overall, there is convincing evidence for the very rapid build-up of dust during the reionization epoch, especially in the most massive galaxies. Models of galaxy formation will therefore need to account for the physics of dust if they are to satisfactorily explain key observable constraints on cosmic dawn.
With this goal, in this paper we now extend our previous analysis by applying our dust modelling framework to a suite of 10 additional simulated galaxies from the same simulation volume, selected with approximately uniform logarithmic spacing in final halo mass \(1.1\times 10^{9}M_{\odot}\leq M_{\rm vir}\leq 5.0\times 10^{11}M_{\odot}\), corresponding to stellar masses \(3.7\times 10^{5}M_{\odot}\leq M_{*}\leq 1.9\times 10^{9}M_{\odot}\), allowing us to assess the dependence of our predicted dust properties on galaxy mass at a given cosmological time. The paucity of dust at cosmic dawn suggested by some observations motivates us to also explore a wider range of dust modelling choices, namely those that either produce less dust or destroy it more efficiently. Section 2 explains our simulated galaxy sample selection, notes small updates to the methodology presented in the first paper, and presents the dust model variations explored in this analysis. Section 3 presents the galaxy mass-metallicity relation predicted by the simulations compared to existing high-redshift constraints, and results of the dust model applied to our simulated galaxy sample. Specifically, we present the predicted dust content and dust-sensitive observable quantities, both galaxy-averaged and spatially resolved, which we compare to existing data. Section 4 discusses the agreements and discrepancies between our model predictions and observational constraints, and compares our work to other recent similar investigations in the literature. We conclude in Section 5.

## 2 Methods

The galaxy formation simulation model, halo identification, and galaxy definitions are identical to those described in Esmerian and Gnedin (2022), to which we refer the reader.
For this paper's analysis, we select a total of 11 galaxies from a \(10h^{-1}\) comoving Megaparsec (cMpc) cosmological volume with final \(z=5\) halo masses \(1.1\times 10^{9}M_{\odot}\leq M_{\rm vir}\leq 5.0\times 10^{11}M_{\odot}\), corresponding to final stellar masses \(3.7\times 10^{5}M_{\odot}\leq M_{*}\leq 1.9\times 10^{9}M_{\odot}\). These limits span the range of halo masses resolved in the simulation. The 11 halos are selected with approximately logarithmically uniform spacing in final halo mass. Since the galaxy scaling relations predicted by CROC have small scatter (see Zhu et al., 2020 and Noel et al., 2022) and do not dramatically change slope on scales \(\lesssim 0.5\) dex in halo mass, it is sufficient for the purposes of this analysis to sample one halo of a given mass with the average spacing of 0.24 dex provided by a total sample of 11. This simulation has the same initial conditions as the one used in the previous paper, so the most massive halo is the same. As in the previous paper, we sample ISM conditions in these simulated galaxies along pathlines traced by Lagrangian tracer particles. These particles are initialized at random positions in the Lagrangian region of the halo and follow the fluid flow using the Monte-Carlo method introduced in Genel et al. (2013) and implemented in the ART code in Semenov et al. (2018). The number of tracers per halo scales with halo mass such that the minimum number of tracers for a given halo is above 100. For galaxies hosted in halos with final masses \(M_{\rm vir}>10^{11}M_{\odot}\), we downsample to \(10^{4}\) particles for computational feasibility. Computational resource limitations therefore prevent us from using enough tracers that all cells in each galaxy are sampled. Consequently, we must find a way to assign dust masses to cells in the galaxy which were not sampled by any tracer. To do this, we interpolate the \(D/Z\) vs \(Z\) relation for the tracers in each galaxy at every snapshot output.
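This assignment step can be sketched as follows; binned medians plus linear interpolation are an illustrative reconstruction, and the exact interpolation scheme used in the analysis may differ in detail.

```python
import numpy as np

# Assign dust to gas cells not sampled by any tracer: build the tracer
# D/Z vs Z relation (here: binned medians in log Z, linearly
# interpolated) and evaluate it at each cell's metallicity.
# Illustrative sketch; the paper's exact scheme may differ in detail.

def assign_cell_dust(tracer_z, tracer_dz, cell_z, cell_metal_mass, n_bins=20):
    """Return a dust mass per cell, M_dust = (D/Z)(Z_cell) * M_metal."""
    log_z = np.log10(tracer_z)
    edges = np.linspace(log_z.min(), log_z.max(), n_bins + 1)
    # bin index of each tracer; clip so the max-Z tracer lands in the last bin
    idx = np.clip(np.digitize(log_z, edges) - 1, 0, n_bins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    medians = np.full(n_bins, np.nan)
    for i in range(n_bins):
        sel = idx == i
        if sel.any():
            medians[i] = np.median(tracer_dz[sel])
    ok = ~np.isnan(medians)  # drop empty bins
    # np.interp clamps to the edge values outside the sampled Z range
    cell_dz = np.interp(np.log10(cell_z), centers[ok], medians[ok])
    return cell_dz * cell_metal_mass
```

Because the interpolation returns a single \(D/Z\) per metallicity, any tracer-to-tracer scatter at fixed \(Z\) is averaged away, which is exactly why the text notes that the correction may underestimate scatter in dust-sensitive observables.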
As shown in Figure 1, the dust-to-metal ratio scales regularly with metallicity, making this interpolation the best option for assigning dust masses to unsampled cells in a way that preserves the predictions of the dust model. We note that with some model choices even this relation exhibits significant scatter at a given tracer metallicity. Our correction therefore possibly underestimates the scatter in observable quantities impacted by the dust distribution. We note that in Esmerian & Gnedin (2022) we had not yet realized this correction was necessary, and therefore the results in that paper underestimate the total dust mass and the effect of dust on observable quantities. These effects are \(\mathcal{O}(10\%)\) in the dust mass and can be order-unity for some observable quantities that depend sensitively on the dust distribution - particularly dust extinction of UV starlight - but do not change the qualitative conclusions of that paper. Otherwise, the methods used to calculate dust-dependent observables (the effective optical depth to dust at 1500Å: \(\tau_{1500}\), the logarithmic spectral slope in the UV: \(\beta_{\rm UV}\), the infrared luminosity: \(L_{\rm IR}\), and the infrared excess IRX\(\equiv L_{\rm IR}/L_{\rm UV}\)) are identical to the description in Section 2.6 of Esmerian & Gnedin (2022). However, we note that the published \(L_{\rm IR}\) values in that paper were incorrectly calculated by integrating the dust infrared emission from \(2.5\,\mu\)m to \(20\,\mu\)m, instead of \(8\,\mu\)m to \(1000\,\mu\)m as stated in the text, due to a bug in the relevant code. Consequently, the infrared luminosities in that paper are incorrect by a factor of \(\sim 10^{3}\), and we show the effect of this correction in Appendix A.
However, the scaling of infrared luminosities with dust mass predicted by our analysis was nonetheless qualitatively correct, and since we did not make any comparison to observations for these quantities, the main conclusions of that paper remain unchanged.

### Spatially Resolved Images

For the analysis of spatially resolved dust properties we focus exclusively on the most massive galaxy in the simulation, which attains a stellar mass of \(M_{*}=1.3\times 10^{8}M_{\odot}\) by \(z=8\), \(M_{*}=7.1\times 10^{8}M_{\odot}\) by \(z=6.4\) and \(M_{*}=1.9\times 10^{9}M_{\odot}\) by \(z=5\). Our simulated galaxy is therefore well within the range of stellar masses probed by observations to which we compare (Inami et al., 2022), justifying comparison because galaxy morphology is expected to depend on stellar mass (Pillepich et al., 2019). However, we note that for these snapshots it has a much lower SFR (\(\leq 5.2\,M_{\odot}/\)yr) than any galaxy in this sample. This may be due to the bias towards high SFR systems of these observations noted in the introduction, or the inability of the CROC model to produce such rapidly star-forming systems.

Figure 1: Spatially-resolved \(D/Z\) vs \(Z\) relations. Each panel shows the 2D PDF of mass for \(D/Z\) vs \(Z\) in the ISM of the most massive galaxy at the last snapshot. Different panels correspond to different dust models.

Figure 6 of Zhu et al. (2020) shows that the CROC galaxies exhibit small scatter in the SFR-stellar mass relationship, suggesting that the discrepancy between our simulation's SFR and those inferred from data results from a deficiency of the model and would not be alleviated by considering a larger number of simulated galaxies. This discrepancy provides further motivation to compare our simulations to data in a spatially resolved analysis, as this may provide information about the cause of these discrepancies. We present results for this simulation at 12 snapshots from \(z=8.5\) to \(z=5\).
The upper bound in redshift is motivated by the upper bound on the observations, but we include snapshots lower than the observational lower bound of \(z=6.5\) to maximize the galaxy mass range probed by our analysis. We note that we have redone the analysis restricted to only snapshots within \(6.4<z<8.5\), identical to the observations, and all of our conclusions are unchanged. We also note that, based on visual inspection, the galaxy undergoes merging events at \(z\approx 7.3\) and \(z\approx 6\) which significantly disrupt its morphology. There are enough snapshots of the simulation that the galaxy morphology before, during, and after the merger event can be clearly distinguished. We therefore expect that our simulation data samples a sufficiently violent merger history with sufficiently high time resolution that our analysis accounts for the morphological effects of accretion history on high-redshift galaxies of the relevant masses. This spatial analysis uses quantities calculated as described previously, with small modifications. UV colors are determined based on the finite difference between luminosities at 1500Å and 2500Å as follows \[\beta_{\rm UV}=\frac{\log_{10}(f_{2500\rm\AA}/f_{1500\rm\AA})}{\log_{10}(2500\rm\AA/1500\rm\AA)} \tag{1}\] again on an individual star particle basis. We note that this is not identical to the calculation of \(\beta_{\rm UV}\) in the rest of the analysis, in which a least-squares fit is performed on this portion of the UV spectrum to determine a power-law slope. This finite-difference method is adopted for computational ease, and we have checked that it reproduces the least-squares fitting results very accurately. We use the dust column density \(\Sigma_{D}\), computed from the galaxy dust mass distribution calculation, as a proxy for infrared continuum emission. The two are directly proportional because the dust distribution is optically thin in the infrared.
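Equation 1 is straightforward to implement; below is a minimal sketch (the function name and test values are our own, not from the analysis code) that also verifies the estimator is exact for a pure power-law spectrum:

```python
import math

def beta_uv(f_1500, f_2500):
    """Finite-difference UV spectral slope between 1500 A and 2500 A (Eq. 1).

    f_1500 and f_2500 are specific fluxes (per unit wavelength) at the two
    rest-frame wavelengths; any consistent flux units work, since only the
    ratio enters.
    """
    return math.log10(f_2500 / f_1500) / math.log10(2500.0 / 1500.0)

# For an exact power law f_lambda ∝ lambda**beta the estimator is exact:
beta_true = -2.2
f2 = (2500.0 / 1500.0) ** beta_true  # flux ratio implied by the power law
print(beta_uv(1.0, f2))  # recovers beta_true = -2.2 (up to float rounding)
```

Because both numerator and denominator use the same wavelength pair, the estimator is independent of the overall flux normalization.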
To account for the effect of finite observational resolution, we smooth with a Gaussian kernel with variance \(\sigma^{2}=\frac{\Delta x_{\rm FWHM}^{2}}{8\ln 2}\), where \(\Delta x_{\rm FWHM}\) is the physical distance at the simulation snapshot redshift corresponding to the angular full-width at half-maximum (FWHM) of the observation. We explore \(\Delta x_{\rm FWHM}\) values of 0.2 and 0.8 arcsec, which for the redshift range of our simulations, \(11.4\geq z\geq 5.0\), corresponds to physical sizes of \(0.33-0.55\,\rm kpc\) and \(1.32-2.19\,\rm kpc\), respectively.

### Dust Model Parameter Exploration

As in the previous paper, we run a suite of dust models with different parameter choices to explore their impact on the predicted dust content of high-redshift galaxies and dependent observables, now on a sample of multiple simulated galaxies. However, motivated by observations that increasingly point to minimal dust in the earliest galaxies (e.g. Roberts-Borsani et al., 2022; Tacchella et al., 2023), we extended the set of parameter variations explored by introducing three new models that either increase the grain destruction rate in supernova remnants (**Very Enhanced Destruction**) or decrease grain production in SN and/or AGB (**No SN Production** and **Very Low SN Production, No AGB Production**). The list of models explored in this analysis is summarized in Table 1 and described below. Note that each model is assigned a unique color for Figures 4-9 and 11, which is shown in the right-most column of the table.

* **Default:** This is identical to the "Default" model explored in Esmerian & Gnedin (2022), for which parameters were chosen to be the same as successful similar physical models of dust evolution for local-universe galaxies (Dwek, 1998; Feldmann, 2015; Li et al., 2019).
* **No Accretion:** This is identical to the "No Accretion" model in Esmerian & Gnedin (2022).
The parameters of this model are identical to **Default** except grain growth due to accretion of gas-phase metals in the cold molecular ISM is not allowed. This parameter choice is motivated by arguments based on microphysical considerations of dust grain geometry that grain growth in the cold phase of the ISM may not be possible (Ferrara et al., 2016). * **Enhanced Accretion:** This is identical to the "Enhanced Accretion" model in Esmerian & Gnedin (2022). The parameters of this model are identical to **Default** except grain growth due to accretion of gas-phase metals in the cold molecular ISM is enhanced by an order of magnitude. This parameter choice is motivated both by uncertainties in the unresolved density distribution of the cold ISM in our simulations, where grain growth is expected to be most efficient, and to enable comparison to other works that adopt faster grain growth rates (Graziani et al., 2020; Lewis et al., 2023). * **No Destruction:** This is identical to the "No Destruction" model in Esmerian and Gnedin (2022). The parameters of this model are identical to **Default** except grain destruction in the hot gas of SNRs is not allowed. This parameter choice is motivated by indirect observational indications of inefficient dust destruction in high-temperature gas (Gall and Hjorth, 2018; Gjergo et al., 2018; Vogelsberger et al., 2019; Michalowski et al., 2019), as well as uncertainties in the unresolved ISM phase structure in our simulations. * **Enhanced Destruction:** This is identical to the "Enhanced Destruction" model in Esmerian and Gnedin (2022). The parameters of this model are identical to **Default** except grain destruction in the hot gas of SNRs is enhanced by an order of magnitude. This parameter choice is motivated by uncertainties in the destruction efficiency of individual supernova remnants both due to the microphysics of dust and unresolved ISM phase structure (McKee, 1989; Hu et al., 2019; Kirchschlager et al., 2022). 
* **Very Enhanced Destruction:** The parameters of this model are identical to **Default** except grain destruction in the hot gas of SNRs is enhanced by two orders of magnitude. The motivation for this parameter choice is the same as for **Enhanced Destruction**, since the associated uncertainties are large, and also the increasing evidence for dust-free early galaxies as mentioned previously.
* **Low SN Production:** Identical to **Default** except the dust yield from supernovae is suppressed by an order of magnitude (i.e. \(y_{D,\rm SN}=0.01\)). Note that we do not change the AGB yield \(y_{D,\rm AGB}\), and since the AGB metal production is about 10 times smaller than that of SN (see Esmerian and Gnedin, 2022, Figure 7), SN and AGB production are comparable with these parameters. This parameter choice is also motivated by the evidence for minimally dusty high-redshift galaxies, and uncertainties about the fraction of SN-produced dust that survives the reverse shock (see e.g. Bianchi and Schneider, 2007; Micelotta et al., 2016; Slavin et al., 2020).
* **No SN Production:** Identical to **Default** except \(y_{D,\rm SN}=0\). This choice is motivated by the extreme scenario in which no dust survives the reverse shock of any supernova.
* **Very Low SN Production, No AGB Production:** Identical to **Default** except the dust yield from supernovae is suppressed by two orders of magnitude (i.e. \(y_{D,\rm SN}=10^{-3}\)) and AGB production is turned off (\(y_{D,\rm AGB}=0\)). This is motivated by the same considerations as for the previous two models, and the deep uncertainties around AGB dust production, especially in the early universe (e.g. Valiante et al., 2009; Schneider et al., 2014; Dell'Agli et al., 2019; Tosi et al., 2023).
## 3 Results

Table 1: Explored Parameter Combinations. Note that for each model, any parameter not listed under Key Parameters is the same as the **Default** model.

| **Model Name** | **Key Parameters** | **Description** | **Color in Figures** |
|---|---|---|---|
| **Default** | \(y_{D,\rm SN}=y_{D,\rm AGB}=0.1\), \(\tau_{\rm accr}=3\times 10^{8}\)yr, \(C_{\rm dest}=1\) | Default from Esmerian and Gnedin (2022), parameters based on Dwek (1998); Feldmann (2015). | |
| **No Accretion** | \(\tau_{\rm accr}=\infty\) | No grain growth due to gas-phase accretion in cold ISM | |
| **Enhanced Accretion** | \(\tau_{\rm accr}=0.1\tau_{\rm accr,Default}\) | Enhanced grain growth due to gas-phase accretion in cold ISM | |
| **No Destruction** | \(C_{\rm dest}=0\) | No grain destruction in hot gas due to SNRs | |
| **Enhanced Destruction** | \(C_{\rm dest}=10C_{\rm dest,Default}\) | Enhanced grain destruction in hot gas due to SNRs | |
| **Very Enhanced Destruction** | \(C_{\rm dest}=100C_{\rm dest,Default}\) | Very enhanced grain destruction in hot gas due to SNRs | |
| **Low SN Production** | \(y_{D,\rm SN}=0.1y_{D,\rm SN,Default}\) | Suppressed dust yield from SN | |
| **No SN Production** | \(y_{D,\rm SN}=0\) | SN do not produce dust | |
| **Very Low SN Production, No AGB Production** | \(y_{D,\rm SN}=0.01y_{D,\rm SN,Default}\), \(y_{D,\rm AGB}=0\) | Very suppressed dust yield from SN, AGB do not produce dust | |

### The Mass-Metallicity Relation

The dust content of galaxies is normalized by their overall metal content, so we first examine the galaxy metallicities in the simulations. Figure 2 shows the mass-metallicity relation for our simulations, including existing data at relevant redshifts.
While the data mainly overlap with only the highest-mass galaxies in our sample, where there is overlap we see fair agreement, albeit with some indications of systematically low metallicities in CROC. The galaxy scaling relations of other quantities predicted by CROC have been thoroughly discussed and compared to existing data in Zhu et al. (2020). We note that Noel et al. (2022) presented a more detailed analysis of the CROC mass-metallicity relation, but at the time these high-redshift data were not available for comparison, making this a new result.

### The Dust Content of High-Redshift Galaxies

The dust content of our simulated galaxies predicted by the models described in Section 2.2 and Table 1 is summarized in Fig. 3, where we show the galaxy-averaged dust-to-metal ratio \(D/Z\) as a function of galaxy metallicity \(Z\) in mass-fraction units (i.e. in which the average metallicity in the solar neighborhood is \(Z_{\odot}=0.02\)). The top row shows the default model and variations in the ISM grain-growth accretion timescale. The middle row shows variations in the grain destruction efficiency of supernova remnants. And the bottom row shows variations in the assumed yields of dust production sources. Broadly, we notice several trends. A well-established property of dust models similar to ours is the transition of dominant physical processes between low- and high-metallicity regimes: the \(D/Z\) ratio at low metallicity (\(Z\lesssim 4\times 10^{-4}=2\times 10^{-2}Z_{\odot}\)) is primarily set by the choice of source yields \(y_{D,\rm SN/AGB}\), while the ratio at high metallicity is determined by a competition between the timescale for grain growth due to accretion in the ISM, and the efficiency of grain destruction in the hot ionized medium.
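This two-regime behavior can be reproduced with a one-zone toy model; the sketch below is illustrative only (the enrichment and destruction timescales are assumed round numbers, not the CROC values), but it shows \(D/Z\) starting near the stellar yield \(y_{D}=0.1\) at low \(Z\) and rising toward an accretion-destruction equilibrium at high \(Z\):

```python
# One-zone toy model (illustrative only; timescales are assumed round
# numbers, not the CROC values) of the two D/Z regimes: at low Z the ratio
# sits near the stellar yield y_D, while at high Z it is set by the
# competition between ISM accretion and destruction in SNRs.

def evolve_dust(y_D=0.1, tau_acc=3e8, tau_dest=1e9, tau_enrich=5e8,
                Z_final=4e-3, dt=5e4, max_steps=200_000):
    """Integrate dD/dt = stellar yield + ISM growth - SNR destruction
    while Z grows exponentially on the timescale tau_enrich (all
    timescales in yr; Z and D are mass fractions).  Returns (Z, D/Z)."""
    Z, D = 1e-6, 1e-7
    for _ in range(max_steps):
        dZ = Z / tau_enrich * dt                   # steady metal enrichment
        dD = (y_D * dZ                             # stellar dust production
              + D * (Z - D) / (tau_acc * Z) * dt   # growth by ISM accretion
              - D / tau_dest * dt)                 # destruction in SNRs
        Z, D = Z + dZ, min(D + dD, Z)              # dust cannot exceed metals
        if Z >= Z_final:
            break
    return Z, D / Z

Z, d2z = evolve_dust()
print(f"Z = {Z:.2e}, D/Z = {d2z:.2f}")  # D/Z rises to ~0.3, above y_D = 0.1
```

With these assumed timescales the equilibrium \(D/Z\approx 0.3\) sits well above the yield; setting `tau_dest` shorter than `tau_acc` instead drives \(D/Z\) below the yield, mimicking the destruction-dominated models.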
If grain growth dominates (as in **Default**, **Enhanced Accretion**, **No Destruction**, **Low SN Production**, **No SN Production**, and **Very Low SN, No AGB**) the \(D/Z\) ratio rises with increasing metallicity, while when destruction dominates (**No Accretion**, **Enhanced Destruction**, **Very Enhanced Destruction**) the opposite scaling is observed. However, we note that in some models there is substantial scatter between galaxies even at fixed metallicity, particularly for those models where accretion dominates at late times. The efficiency of grain growth via accretion must therefore depend on galaxy properties beyond average metallicity, an effect not captured in simpler one-zone models (e.g. Feldmann, 2015). For the **Default** and **Enhanced Accretion** models we notice substantial scatter in the metallicity at which each galaxy enters the growth-dominated regime of rising \(D/Z\). Indeed, we note that the most massive galaxy in our sample exhibits rising \(D/Z\) at high metallicities in all models except **No Accretion** (where \(D/Z>y_{D}\) is physically impossible) and **Very Enhanced Destruction**. Even in the **Enhanced Destruction** scenario the \(D/Z\) ratio rises at late times (i.e. high metallicities) for this single galaxy but no others. This clearly indicates the importance of some combination of star formation history and ISM phase structure in setting the dominant dust-regulating mechanisms. Precise determination of this cause would require more analysis beyond the scope of this work but would be interesting for future investigation. Finally, we note the significant scatter in \(D/Z\) at very low metallicities, \(Z\lesssim 10^{-4}\). As indicated by their redshift (shown in color) and the stellar mass-metallicity relation in Figure 2, these low-metallicity galaxies are the highest-redshift and lowest-mass in our sample.
Consequently, these are the most poorly resolved and subject to the greatest stochasticity effects from the discreteness of enrichment from star particles and sampling by Lagrangian tracers. The latter would be amended by coupling the dust model explicitly to the simulation, and is therefore another motivation for more sophisticated modelling in future analyses. Nonetheless, this noise occurs at such low metallicities that its effect on the total dust mass, which is normalized by the metallicity, is minor and should not strongly impact our conclusions. As well, the existence of clear trends at late times/high metallicities indicates that the predictions are well resolved for the most massive galaxies, which are the most relevant for comparison to observational data.

Figure 2: Mass-Metallicity Relation. The galaxy-averaged gas-phase metallicity is shown as a function of stellar mass. Each point represents an individual galaxy at an individual snapshot, colored by redshift. Observational data from galaxies at similar redshifts are from Faisst et al. (2016); Jones et al. (2020); Langeroodi et al. (2022); Nakajima et al. (2023); Heintz et al. (2022), and Williams et al. (2023), converted to mass fraction using \(12+\log({\rm O/H})_{\odot}=8.71\) and \(Z_{\odot}=0.02\) (Lodders, 2019).

Figure 3: Galaxy-Averaged \(D/Z\) vs. \(Z\) relations. Each panel shows the evolution of the dust content in our simulated galaxies with a different set of assumed dust model parameters, as indicated by the titles. Each point corresponds to the average \(D/Z\) and \(Z\) values for an individual galaxy at an individual snapshot. \(Z\) is in physical (i.e. mass fraction) units. Points from the same galaxy are connected with grey lines, and colors indicate redshift. The dashed horizontal grey line indicates \(D/Z=0.1\), which is the default production yield in our model, and the solid horizontal grey line indicates \(D/Z=0.4\), the value for the Milky Way and a common choice in post-processing analyses (see Section 4.2).

In Figures 4 and 5 we examine the predicted dust masses of our simulated galaxies as a function of stellar mass at different redshifts for the different dust models. Figure 4 shows **Default** and models with variations in accretion and destruction rates (the first and second rows of Figure 3), while Figure 5 shows models with varied production yields (the third row of Figure 3). In all cases, the dust mass exhibits an approximately linear scaling with stellar mass, with varied normalization depending on assumed production yields and destruction efficiencies. This normalization spans two dex at a given stellar mass for the entire suite of models herein considered. There is also a general steepening of the relationship at higher masses (\(M_{*}\sim 10^{8}-10^{9}M_{\odot}\)) in models where accretion becomes efficient. These relationships are sufficiently tight to be well-distinguished between different models in principle, although there is significant degeneracy between yield and destruction rates - **Very Enhanced Destruction** and **No SN Production** predict very similar values, which the first achieves by destroying dust with high efficiency while the second produces little dust to begin with. As well, models with the same yield but different growth timescales (**Default**, **No Accretion**, **Enhanced Accretion**) are only distinguishable at high masses and late times, consistent with the results of Figure 3. In summary, different plausible parameter choices for the dust model can change dust masses by up to two orders of magnitude at a given stellar mass. This flexibility highlights the need for a significant improvement in our understanding of dust production and destruction processes. We therefore compare these predictions with existing observational estimates of dust masses in high-redshift galaxies from Sommovigo et al. (2022), Dayal et al. (2022), Hashimoto et al. (2019), Knudsen et al. (2017), Schaerer et al.
(2015), Watson et al. (2015), Laporte et al. (2017), Tamura et al. (2019), da Cunha et al. (2015), Marrone et al. (2018), Burgarella et al. (2020, with a redshift for ID27 from Aravena et al. (2016)), Pozzi et al. (2021, with stellar masses from Faisst et al. (2020)), Witstok et al. (2023), and Lesniewska & Michałowski (2019). Because of the limited volume of our simulation, we do not capture unusually massive and therefore rare halos, limiting us to predictions at lower masses than almost all the existing data. Nonetheless, the data appear to favor those models with the highest dust masses - the data are always at the upper envelope of our simulation predictions wherever they overlap. Indeed, in both the \(6.5<z<7.5\) and \(7.5<z<8.5\) bins most of the data appear to lie on or above the scaling relation of the most dust-rich model, **Enhanced Accretion**, if it were extrapolated. This suggests that the data prefer models in which production yields are high and ISM grain growth is efficient at high masses.

Figure 4: Dust mass-stellar mass relation. Colors indicate different dust models, which include **Default** and those with varied growth or destruction parameters (i.e. the first two rows of those shown in Fig. 3). Each point is a single galaxy at a single redshift, and separate panels are redshift bins. Estimates based on observational data from Sommovigo et al. (2022), Dayal et al. (2022), Hashimoto et al. (2019), Knudsen et al. (2017), Schaerer et al. (2015), Watson et al. (2015), Laporte et al. (2017), Tamura et al. (2019), da Cunha et al. (2015), Marrone et al. (2018), Burgarella et al. (2020, with a redshift for ID27 from Aravena et al. (2016)), Pozzi et al. (2021, with stellar masses from Faisst et al. (2020)), Witstok et al. (2023), and Lesniewska & Michałowski (2019) are shown with the same redshift binning. The predictions of a simpler dust post-processing model on higher-resolution simulations presented in Ma et al. (2019) are shown by the dashed grey line.
We also note that the data appear to exhibit greater scatter at a given stellar mass than any one set of dust model parameters predicts. However, we emphasize that these conclusions are extremely tentative because of the minimal amount of data available for comparison, the mostly disjoint stellar mass ranges probed by our simulations vs. the observations, and especially the large systematic uncertainties in the observational constraints, which are not necessarily captured in the statistical uncertainties on quoted errors: dust masses are derived from infrared luminosities, which depend on dust mass, the dust extinction coefficient, and strongly on the dust temperature. The latter two are highly uncertain in high-redshift galaxies and difficult to independently constrain. In addition, our simulations may also miss some real sources of scatter, as we discuss further below. Consequently, the most robust constraints will come from forward modelling of directly observable quantities.

Figure 5: Dust-mass stellar-mass relation cont'd. Same as Fig. 4 but with **Default** and the dust models with varied yields (i.e. those in the third row shown in Fig. 3).

### Forward-Modeled Observable Quantities: Comparison to Data

For the remainder of our analysis we focus on a representative subset of the models discussed previously, each with a consistent color throughout the figures: **Default** (blue), **Enhanced Accretion** (purple), **Enhanced Destruction** (yellow), **Very Enhanced Destruction** (light brown), and **Very Low SN, No AGB Production** (dark brown).

#### 3.3.1 Rest-Frame UV Observables: \(\mathrm{M_{AB}}\), \(\beta_{UV}\), \(\tau_{1500}\)

Figure 6 compares the predicted ultraviolet spectral slopes of our simulated galaxies with these dust models to observational data. We also show the predictions of the simulations absent dust extinction in black points.
In contrast to the suggestions of Figures 4 and 5, the models with the lowest dust content - **Very Enhanced Destruction** and **Very Low SN, No AGB Production** - agree best with the data at all redshifts. It is not clear, however, if either model alone predicts as much scatter at a given luminosity as shown in the data. In contrast, the more dust-rich models all predict similar \(\beta_{\mathrm{UV}}\) values which fail to overlap with the observations at any redshift, and exhibit large scatter. This is because they predict very high ISM optical depths, as shown in Figure 7. For dust masses greater than or equal to those predicted by the **Enhanced Destruction** model, the dusty ISM is effectively opaque, so changes in dust content do not impact UV properties significantly. The spread in \(\beta_{\mathrm{UV}}\) for these models is therefore likely due to the Poisson scatter in the number of visible star particles along a given line of sight. Finally, we note that while the data prefer the dust-poor models, they are inconsistent with entirely dust-free predictions (black points), especially at later times.

#### 3.3.2 Far-Infrared Observables: IRX-\(\beta\) Relation, \(L_{\mathrm{IR}}\)

In Figure 8 we examine the IRX-\(\beta\) relationships predicted by our modeling, compared to observational constraints. Because infrared luminosity depends linearly on dust mass, models with distinctly different dust content are better separated in this parameter space. However, the predictions fail to match the data in two key ways: 1. no one model exhibits as much scatter as the observations, especially in the \(5.5<z<6.5\) range, and 2. our simulations lack galaxies at low (\(\lesssim 1\)) IRX and high (\(\gtrsim-1.5\)) \(\beta_{\mathrm{UV}}\) which are apparent in the data. Part of the reason for these disagreements is illuminated by examining the predictions in \(L_{\mathrm{IR}}\) vs.
\(L_{\mathrm{UV}}\) space, since these are the numerator and denominator of the IRX, respectively. This is shown in Figure 9, along with the same data as in Figure 8. While we predict reasonable \(L_{\mathrm{IR}}\) especially at late times, all of the galaxies in this observational sample exhibit higher UV luminosities than ours. This helps to explain the lack of low IRX galaxies in our predictions. This suggests that our models are predicting dust masses consistent with observations, but opacities that are too high, in agreement with the interpretations of Figures 4, 5 and 6. Indeed, in Figure 9 we show the predictions of our simulations without dust attenuation as transparent points, and they are in better agreement with the data. This experiment is nonphysical in that the two luminosities are inconsistently calculated. However, it suggests that real galaxies have similar dust content to our more dust-rich models, but that it is distributed so as to have a much lower effective optical depth. The inability of any one dust model to reproduce the scatter in observed infrared luminosities could be due to our lack of self-consistently calculated dust temperatures - for simplicity and given the large modelling uncertainties involved, we assume a constant dust temperature of \(T=40\)K for these calculations (see Sommovigo et al., 2022). Since \(L_{\mathrm{IR}}\propto T_{D}^{4}\) at fixed \(M_{D}\), galaxy-galaxy scatter in \(T_{D}\) could significantly enhance the predicted range of IRX. This limitation of our modeling is potentially the primary reason for the low scatter - dust temperatures depend on the radiation field, which, in turn, is sensitive to short time-scale variations in star formation rate. One therefore expects the ISM radiation fields, and consequently dust temperatures, to vary widely from galaxy to galaxy. 
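The leverage of the fixed-temperature assumption is easy to quantify: at fixed dust mass, \(L_{\mathrm{IR}}\propto T_{D}^{4}\), so a plausible spread of dust temperatures produces a large spread in IRX. A back-of-the-envelope sketch (the 30-60 K range is our illustrative assumption, not a measured distribution):

```python
# Back-of-the-envelope: how galaxy-to-galaxy dust temperature scatter
# amplifies into IRX scatter, using L_IR ∝ T_D**4 at fixed dust mass.
# The 40 K fiducial follows the text; the 30-60 K range is illustrative.
T_fid = 40.0

def lir_boost(T):
    """L_IR relative to the fixed-temperature (40 K) assumption."""
    return (T / T_fid) ** 4

for T in (30.0, 40.0, 50.0, 60.0):
    print(f"T_D = {T:4.1f} K  ->  L_IR / L_IR(40 K) = {lir_boost(T):5.2f}")
# 30-60 K spans a factor of (60/30)**4 = 16 in L_IR (0.32x to 5.06x),
# i.e. ~1.2 dex of IRX scatter at fixed dust mass and UV luminosity.
```

A modified-blackbody emissivity would steepen this dependence further, so the factor-of-16 spread is, if anything, a conservative illustration.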
As well, the lack of very massive galaxies in the limited cosmological volume of our simulation might also mean that we simply are not sampling galaxies as massive as those in existing observational samples, and this may also be the cause of one or both of these discrepancies to some degree. It is of course also possible that the dust dynamical quantities assumed as inputs into each model - such as the characteristic time for dust grain growth in the ISM or the supernova destruction efficiency - vary from galaxy to galaxy due to differences in ISM phase structure and dust content that our model is not sophisticated enough to capture. Infrared observables will therefore require significantly further theoretical efforts to be used as constraints on dust physics at high redshift.

### Spatial Analysis

Figure 10 shows the predicted UV (1500Å) emission, UV color \(\beta_{\mathrm{UV}}\), and dust column density (which is proportional to the IR emission) for different dust models. Consistent with the results of Section 3.3.1, all of our dust models predict significant extinction and reddening, but the amount and spatial distribution vary markedly between the different models. The most dust-rich models show large amounts of reddening and extinction throughout the galactic disk, while those with less dust have effects that are more centrally concentrated. However, even those models with the least dust exhibit substantial attenuation and reddening in the center. This is because even the **Very Enhanced Destruction** model predicts dust column densities in excess of \(\Sigma_{D}=10^{5}M_{\odot}/\mathrm{kpc}^{2}\approx 10^{-5}\mathrm{g/cm}^{2}\), which, for our assumed dust opacity of \(\approx 10^{5}\mathrm{cm}^{2}/\mathrm{g}\) at 1500Å, results in unity optical depth and \(\beta\gtrsim 0\) colors. The increasing severity of extinction at smaller projected galactocentric radii gives the UV emission a ring-like morphology in all but the least dusty model.
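The column-density arithmetic quoted above can be checked directly; a minimal sketch (the physical constants are standard values, and \(\kappa\approx 10^{5}\,\mathrm{cm}^{2}/\mathrm{g}\) is the 1500Å opacity assumed in the text):

```python
# Check of the column-density arithmetic quoted in the text:
# Sigma_D = 1e5 Msun/kpc^2 converted to cgs, and the implied UV optical
# depth for the assumed 1500 A dust opacity kappa ~ 1e5 cm^2/g.
M_SUN_G = 1.989e33   # grams per solar mass
KPC_CM = 3.086e21    # centimeters per kiloparsec

sigma_d = 1e5 * M_SUN_G / KPC_CM**2   # g / cm^2
kappa_uv = 1e5                        # cm^2 / g (assumed in the text)
tau_1500 = kappa_uv * sigma_d

print(f"Sigma_D  = {sigma_d:.1e} g/cm^2")  # ~2e-5 g/cm^2
print(f"tau_1500 = {tau_1500:.1f}")        # order unity, as stated
```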
Figure 6: \(\beta_{\mathrm{UV}}\) as a function of UV AB absolute magnitude. Different panels show data for different bins in redshift, different colors are different dust models, the black points indicate values predicted in the absence of dust, and the grey points are a compilation of observational measurements from the literature: Finkelstein et al. (2012, plus-signs), Bouwens et al. (2014, hexagons), Dunlop et al. (2013, diamonds), Bhatawdekar and Conselice (2021, stars), Wilkins et al. (2011, filled x), Dunlop et al. (2012, pentagons), and Wilkins et al. (2016, squares) show sample averages of multiple galaxies, while circles show measurements of individual galaxies with JWST from Roberts-Borsani et al. (2022b); Naidu et al. (2022); Robertson et al. (2023), and Whitler et al. (2023).

Color is strongly correlated with IR emission, though not perfectly - for example, both the **Default** and **Enhanced Accretion** models exhibit the reddest colors in the disk outskirts / tidal tails, while the IR emission is highest in the center. All but the **Very Enhanced Destruction** dust model show offsets in the location of maximum UV and IR emission on the order of 1 kpc for the same reason: the regions of highest IR emission are totally opaque to UV light. Note that this is despite the fact that the unattenuated UV light and dust are largely co-spatial. Figure 11 shows the measured offsets in projected distance between locations of maximum UV and IR brightness for the same models, at all simulated redshifts, sampling 6 lines-of-sight (positive and negative coordinate axes, which should be random with respect to the galaxy orientation) per snapshot. The top panels show these offsets as a function of absolute UV magnitude, the bottom panels as a function of stellar mass. The left-most panel of this figure is consistent with the trends noticed in Figure 10. Larger dust masses result in larger projected regions in which the dust is totally opaque to UV light.
The maximum UV emission thereby happens at the larger projected radii where dust becomes optically thin, while the peak IR emission is always in the galactic center. The middle and right panels show the results for the same images smoothed by a Gaussian beam of FWHM 0.2 and 0.8 arcseconds, respectively. The physical scale of the smoothing therefore depends on the snapshot redshift, and corresponds to 0.33 (1.32) kpc at \(z=11.4\) and 0.55 (2.19) kpc at \(z=5\) for a FWHM of 0.2 (0.8) arcsec, respectively. The numbers were chosen to approximately match the resolutions of HST observations and ground-based (e.g. ALMA and UltraVISTA) observations, respectively. On the right-most panel we show data from Inami et al. (2022) of UV-bright \(z\sim 6-7\) galaxies for which dust continua were observed with ALMA and rest-frame UV (observation-frame near-IR) was observed as part of the UltraVISTA survey, both of which have approximately 0.8 arcsec FWHM resolution (McCracken et al., 2012). While these galaxies are clearly much brighter in the UV than ours, some exhibit very large UV-IR offsets that our simulated galaxy fails to exhibit when smoothed appropriately for comparison. Indeed, we see that increased smoothing monotonically reduces the peak emission offset. Figure 12 demonstrates why: the angular symmetry of both the stellar and dust distributions results in a ring-like morphology of the UV emission at projected galactocentric distances where the dust becomes optically thin. While the offset of UV and IR maxima at infinite observational resolution is approximately the radius of this ring, at smoothing scales comparable to this radius the UV light is maximized in the center, co-spatial with the peak IR emission. The fact that this holds true for all sightlines in all snapshots for every dust model indicates that the UV-IR morphologies are similar in all cases. Peak UV and IR emission are never asymmetrically offset in a way that is preserved with degrading resolution. We have confirmed with a visual inspection of all snapshots that these conclusions are generic to our simulation at all relevant cosmological times. This generality leads us to strongly suspect that it would hold for higher-mass galaxies simulated with CROC physics and our dust model. This is especially the case since more massive galaxies would be expected to have greater dust masses (see Figures 4 and 5).

Figure 7: Dust optical depth in the UV vs. absolute UV magnitude. Observational upper constraints from Schaerer et al. (2015), Burgarella et al. (2020), Naidu et al. (2022), and Ferrara et al. (2022, with absolute UV magnitudes taken from Bouwens et al. (2022)) are shown. In the lowest redshift bin we also show the predictions from Ma et al. (2018), where high-resolution galaxy simulations were post-processed with a simpler dust model (see Section 4.2 for discussion).

## 4 Discussion

### Status of Dust Constraints

We begin the discussion by assessing the success of our dust modeling efforts compared to current experimental constraints. There is no one model which appears to agree with all of the existing data. While the infrared luminosities (and therefore estimated dust masses) appear to be best reproduced by models with comparatively higher dust content from high production yields and efficient growth, the models with the lowest dust content are in best agreement with \(\beta_{\rm UV}\) constraints. We also note that none of our dust models individually reproduces the scatter in \(L_{\rm IR}\) seen in observations. This could be due to the limited halo mass range of our simulation sample, the assumption of a fixed dust temperature for all galaxies (which, in reality, must depend on the ISM radiation field, itself expected to vary on short time scales and from galaxy to galaxy), the simplicity of our dust model, or some combination thereof.
Together, these results suggest that while our model is capable of producing dust masses similar to those of real early-universe galaxies, doing so results in UV opacities that are too high. We speculate that this could be due to the spatial distribution of dust relative to stars - that our galaxies are too uniform compared to the very turbulent ISM of real reionization-era galaxies, which we discuss in the context of our spatially resolved analysis below.

Figure 8: IRX-\(\beta_{\rm UV}\) relation. Infrared-Excess (IRX) vs. ultraviolet spectral slope \(\beta_{\rm UV}\) for our simulated galaxies in each dust model. Note that a constant dust temperature of \(T_{D}=40\)K was assumed in calculating all infrared luminosities. The colors and redshift bins are identical to Fig. 6. Data shown are from Barisic et al. (2017) (which includes data from Capak et al., 2015; Pavesi et al., 2016), the compilation from Hashimoto et al. (2019), and Bowler et al. (2022).

### Comparison to Similar Theoretical Work

Lewis et al. (2023) recently presented results of an investigation with similar aims: they coupled an explicit model for dust very similar to ours to a galaxy formation simulation of cosmological reionization, and use this to predict the dust content and rest-frame UV observables of high redshift galaxies. They present predictions for a single choice of dust model parameters, in which they assume very low dust production yield \(y_{D}=10^{-3}\) and a much higher ISM growth accretion rate - they adopt a modestly shorter characteristic timescale (100 vs. 300 Myr) and their expression has an additional factor of \(1/Z\sim 10^{3}-10^{4}\). Consequently their galaxies transition from production-dominated to accretion-dominated dust content at a lower galaxy stellar mass of roughly \(10^{6}M_{\odot}\), in contrast to \(\gtrsim 10^{7}M_{\odot}\) in all of our models. Their dust masses are therefore most similar to our highest-dust-content models.
However, their model predicts much milder UV dust attenuation than ours with similar dust masses, see their Figure 7. We speculate that this is due to differences in resolution: their maximum physical resolution is an order-of-magnitude poorer than ours, at \(\sim 1\)kpc. Given that the observed sizes of high redshift galaxies are \(\lesssim 1\)kpc (Bouwens et al., 2020), their galaxies cannot possibly be spatially resolved and are therefore likely artificially large. This spreads the same amount of dust over a larger surface area and consequently reduces their predicted optical depths.

Figure 9: Infrared luminosity vs. UV luminosity. Colors and observational data are the same as Figure 8, with the addition of data from Burgarella et al. (2020). Additionally, we show predictions without UV dust attenuation in transparent points. These are inconsistent with the simulation, but show the effect of reduced UV opacity with unchanged dust mass.

Graziani et al. (2020) also recently conducted simulations of high-redshift galaxies with a coupled dust physics model. Again, they only present predictions for a single set of dust model parameters, which appear to make qualitatively similar predictions between our **Default** and **Enhanced Accretion** models: their Figure 4 indicates a production yield of \(y_{D}\sim 0.1\), and a transition to accretion-dominated dust at \(Z\sim 3\times 10^{-2}Z_{\odot}\sim 6\times 10^{-4}\). We note however that they adopt both a much shorter characteristic timescale for ISM accretion of 2 Myr compared to our 300 Myr, and a somewhat different accretion rate scaling of \(dD/dt\propto DZ\) as opposed to our \(dD/dt\propto D(f^{\rm dep}Z-D)\). The two expressions tend to the same value at low \(D\) (modulo the \(f^{\rm dep}\) factor, which is order-unity), but will differ significantly at higher \(D\): while ours will tend to zero as \(D\to Z\), corresponding physically to all the available metals being locked in dust grains, theirs increases without bound.
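The qualitative difference between these two growth laws can be seen by integrating each from the same small initial dust-to-gas ratio. A minimal sketch, with illustrative values of \(Z\), \(f^{\rm dep}\), and the timescale (not taken from either simulation) and with both rates normalized to coincide at small \(D\):

```python
def evolve(rate, d0, t_end, dt=0.01):
    """Forward-Euler integration of dD/dt = rate(D) from D(0) = d0."""
    d = d0
    t = 0.0
    while t < t_end:
        d += rate(d) * dt
        t += dt
    return d

Z = 1e-3      # gas-phase metallicity, held fixed for the sketch (arbitrary)
F_DEP = 0.7   # maximum fraction of metals depleted onto grains (arbitrary)
TAU = 1.0     # growth timescale, arbitrary units

# Both laws are normalized so that dD/dt ~ D/TAU when D is small.
saturating = lambda d: d * (F_DEP * Z - d) / (F_DEP * Z * TAU)  # D(f_dep*Z - D) form
unbounded = lambda d: d / TAU                                   # DZ form with Z fixed

d_sat = evolve(saturating, 1e-5, 30.0)
d_unb = evolve(unbounded, 1e-5, 30.0)

# The saturating law can never lock more than F_DEP*Z of the metals into dust,
# while the DZ-type law grows exponentially without bound as long as Z is fixed.
```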
This suggests that the plateau in \(D/Z\) at \(\approx 0.4\) with increasing \(Z\) exhibited by their model is due to enhanced destruction rates which regulate the dust content, whereas in our model the transition is set by the \(f^{\rm dep}Z-D\) term in the growth rate going to zero. Moreover, the fact that their model transitions to accretion-dominated at similar \(Z\) to ours despite a much higher growth-rate normalization suggests significant differences in the cold gas fractions or thermodynamics of the ISM in our simulations. We also note that their simulation accounts for dust dynamical effects at run time, which our post-processing model cannot. All of this suggests that (1) the predictions of these dust models are sensitive to the implementation details of cooling, star formation, and feedback in the ISM of high redshift galaxies, and/or (2) the back-reaction of dust dynamics on the ISM might be significant. We also consider recent analyses that predict the dust content of high-redshift galaxies with simpler post-processing physical dust models, but with much higher-resolution (\(\sim 10\)pc) simulations that can more realistically resolve ISM dynamics and phase structure. Ma et al. (2018, 2019) predict dust-sensitive quantities from high resolution simulations from the FIRE project (Hopkins et al., 2018) of galaxies at \(z\geq 5\) by assuming a constant \(D/Z=0.4\). While their analysis predicts dust masses similar to our more dust-rich models - see the dashed grey line in Figure 4 - their predicted effective optical depths are most consistent with our least dusty models (see Figure 7). Interestingly, this is also the case with the results from a similar study (Mushtaq et al., 2023) using the FirstLight simulation suite (Ceverino et al., 2017) and an identical dust post-processing model - see Figures 1 and 5 in Mushtaq et al. (2023).
FIRE and FirstLight are different galaxy formation simulations of similarly high resolution, both significantly higher than ours. Consequently, they better capture the effects of feedback on the high-redshift interstellar medium, resulting in a more turbulent, porous gas distribution which we speculate has a broader column density distribution than ours - see Figure 4 of Ma et al. (2019) and Figure 1 of Ceverino et al. (2021), both of which exhibit large gas column density fluctuations on scales smaller than our 100 pc resolution. This results in lower effective optical depths at a given dust mass because there exist low-density column channels through which UV radiation can escape that are lacking in our simulation.

Figure 10: Images of the most massive galaxy in our box along a random line-of-sight at \(z=5\) with different dust models. The top row shows the 1500Å UV surface brightness, the middle row shows the spatially resolved UV beta slope (estimated using the 1500Å and 2500Å color) and the bottom row shows the column density of dust mass (which is proportional to the IR surface brightness in the optically thin regime). Each column shows the predictions of a different set of dust model parameters, as well as the intrinsic UV emission on the leftmost column. Note that no smoothing has been applied to these images, and the pixelation is the result of the simulation grid.

Figure 11: UV and IR peak emission offsets. The projected physical distance between the maximum UV emission (accounting for dust attenuation) and the maximum IR emission (as determined by the dust surface density), as a function of UV absolute magnitude (top row) and stellar mass (bottom row). Each point is one of six lines-of-sight for each snapshot. Data for the most massive galaxy in our box at \(5<z<8.5\) is shown. Different colors correspond to different dust models. Each panel shows different levels of smoothing to capture the effect of observational resolution. Data on the right-most plot are from Table 4 of Inami et al. (2022, with stellar masses from Bouwens et al. (2022) and Schouws et al. (2022)), whose observations have approximately 0.8 arcsecond resolution in both the IR and UV (McCracken et al., 2012).

Figure 12: Effect of observation resolution on UV and IR morphologies.

We have conducted an analysis of the predicted UV, IR and UV color morphology of the most massive galaxy in our simulation under the assumption of different dust model parameters. We find that all models predict significant dust attenuation in the central region of the galaxy, resulting in red colors (\(\beta\gtrsim-1\)) and, in all but the model with the least dust, a ring-like morphology for the UV emission. This is because the dust contents predicted by our models are generally optically thick in a region that is approximately symmetrical about the galactic center, so the UV emission is dominated by the smallest radii at which dust becomes optically thin. Color is also strongly correlated with dust column, which we use as a proxy for IR emission. Since IR emission peaks in the center of the galaxy, there are \(\sim\) kpc-scale offsets between the points of maximal UV and IR surface brightness when "observed" with infinite resolution, but degrading image resolution on scales similar to existing observational capabilities causes the UV emission to peak in the center due to its symmetric distribution, resulting in no offset between peak brightness in UV and IR. While existing observations only probe galaxies brighter in the UV than the most massive in our sample, they do exhibit much larger offsets that are suggestive of more complicated morphologies than the ones predicted by our modeling efforts, see Figure 2 of Bowler et al. (2022) and Figure 7 of Inami et al. (2022). Indeed, Figure 3 of Bowler et al.
(2022) displays UV color gradients much less symmetric than any of those predicted by our dust modelling. We note that the analysis of galaxy-averaged observable properties would lead us to expect that the distributions of UV and IR emission predicted by our models would be overly smooth and symmetrical, given our inability to simultaneously match observed dust masses and optical depths. We interpreted this as evidence that our simulations fail to reproduce a sufficiently dynamic ISM and consequently the full distribution of dust column densities, the lower tail of which could allow for significantly enhanced UV emission without decreased dust mass. The results of this spatially resolved analysis provide evidence in favor of this interpretation given the inability of our modelling to reproduce the asymmetric morphologies seen in data of galaxies with similar stellar masses. Simulation resolution and feedback prescription are the two most important numerical components of a fluid-dynamical galaxy formation model for determining the structure and dynamics of the ISM, and therefore one or both of these is likely implicated in our modelling failures. At a spatial resolution of 100pc, our simulations do not resolve the disk scale-height and therefore cannot capture fully three-dimensional phenomena that characterize the ISM phase structure like molecular clouds and supernova feedback "super-bubbles". As a consequence, the delayed cooling feedback prescription utilized in CROC appears to be incapable of driving large-scale galactic winds - we have watched movies of the tracer particles used in this analysis and they are never removed from the galaxy ISM, indicating a negligible mass flux from the ISM into the circumgalactic medium. This is in stark contrast to most other modern galaxy formation models in which galaxies of the relevant mass range drive strongly mass-loaded winds, especially at early cosmological times (e.g. Muratov et al., 2015; Pandya et al., 2021). 
A feedback prescription that successfully launches winds would reduce the gas mass and therefore dust mass in our galaxies, possibly reducing the high opacities of our most dust-rich models. These winds might also carve out low column density sight-lines with minimal dust extinction. Ma et al. (2018), Ma et al. (2019), and Liang et al. (2021) thoroughly explored the UV-to-IR observable properties of reionization era galaxies predicted by the FIRE-2 simulations, which are significantly higher resolution than ours (\(\sim 10\)pc) and have been demonstrated to drive galactic winds. While they do not explicitly quantify any offsets between predicted UV and IR emission in their simulations, their analysis hints at the dynamic, asymmetric ISM seen in observations and lacking in our simulations. Figure 4 of Ma et al. (2019) shows images of UV light and dust column density for two of their simulated galaxies, which both display a much more disturbed morphology than anything we find in our analysis. Close inspection reveals that the regions of brightest UV surface brightness correspond to holes in the dust surface density which appear to be blown out by strong feedback. However, we note that the spatial offsets between peak UV and IR emission do not visually appear to be much larger than 1 kpc, but firm conclusions cannot be drawn from images of just two galaxies each at a single snapshot. Figure 12 of Liang et al. (2021) does explicitly show a galaxy with \(>1\)kpc offset between maximum UV and IR surface brightness, due to a highly perturbed and asymmetric distribution of gas with respect to stars (though we note that this galaxy is significantly more massive than those in our analysis). They also find that the effective UV optical depth does not correlate with dust mass at all at high redshift \(z=6\) because of large variations in the star-dust geometry predicted by their simulations.
All of this suggests that higher resolution simulations with a feedback model that drives galactic winds may be better able to match the asymmetric UV/IR morphologies seen in observations. The SERRA project is another suite of high-resolution cosmological simulations of galaxies at \(z>6\). These simulations are higher resolution than ours by about a factor of 3 with minimum cell sizes of \(\sim 30\)pc, and consequently have different star formation and feedback prescriptions, more similar to those in FIRE-2 (Behrens et al., 2018; Pallottini et al., 2022). In contrast to our work and similar to FIRE, they find a clumpy morphology for both stars and dust, which in some cases leads to spatial offsets (Pallottini et al., 2022). They also find that this clumpiness results in low effective optical depths due to dust, although star-forming regions can locally exhibit very high extinctions (Behrens et al., 2018). It is interesting to note that Figure 4 of Behrens et al. (2018) does appear to exhibit a ring-like morphology in the galaxy's central UV emission, suggesting this effect might persist to higher-resolution simulations. Nonetheless, the relative UV and IR properties of these galaxies are strongly influenced by the presence of dusty, star forming clumps which our simulations could not resolve, suggesting resolution is a main issue for our theoretical predictions. Our results therefore provide strong motivation for the development of dust models such as the ones presented here in higher-resolution simulations of galaxy formation with more realistic feedback resulting in a manifestly multiphase ISM, as this appears to be essential to capturing the effects to which observations are most sensitive. ## 5 Conclusion We apply the dust post-processing model described in Esmerian & Gnedin (2022) to a suite of 11 simulated galaxies from the CROC project. 
We explore 9 different sets of dust parameters and quantify the effect of their variation on the dust content of high redshift galaxies. We then forward model observable properties of high-redshift galaxies and compare to existing data. Our conclusions are the following:

* Comparing our simulated galaxies to a compilation of recent constraints on the metallicities of reionization-era systems, we find general agreement, although CROC might slightly under-predict metallicity at a given stellar mass.

* We vary dust model parameters governing the rate of grain growth due to accretion in the ISM, the efficiency of grain destruction in supernova remnants, and the dust yields of production sources (supernova and AGB star winds), to determine their impact on the predicted dust contents of high-redshift galaxies. We qualitatively validate the results of Esmerian & Gnedin (2022), in which we reproduced a well-established behavior of these dust models (see Hirashita, 2013, for a review): the dust content of galaxies is set at early times/low metallicities primarily by the assumed production yields, while at higher metallicities/late times it is set by the competition between accretion and destruction, normalized by the initial condition set by production yields. The transition occurs around \(Z\sim 2-4\times 10^{-4}=1-2\times 10^{-2}Z_{\odot}\), with some dependence on assumed model parameters.

* However, we observe significant scatter between galaxies at a constant metallicity, especially at late times/higher metallicities for models in which growth via accretion becomes efficient. This indicates the existence of important secondary dependencies beyond metallicity that determine the dust content of galaxies, which is not captured by typical one-zone models (e.g. Feldmann, 2015).
We speculate that this is driven by some combination of star formation history and ISM phase structure dependence, as is evidenced by the particularly aggressive growth via accretion in the most massive galaxy compared to the other galaxies in our sample.

* Two of our models - **Default** and **Enhanced Accretion** - appear to predict scaling relations consistent with current data. This suggests that the data prefer models in which production yields are high and ISM grain growth is efficient at high masses. The data also appear to exhibit larger scatter at a given stellar mass than predicted by any one of our models, but due to both large systematic uncertainties in the dust mass observational constraints and the disjoint range of stellar masses probed by our simulations vs. the observations, these conclusions are tentative. Nevertheless, it is easy to imagine several additional sources of scatter that are missed in our simulations and post-processing, such as dependence of the dust temperature on the local radiation field or deficiency of the stellar feedback model.

* We forward model directly observable galaxy properties from our simulations to make more direct comparison to data, and find that we are unable to simultaneously match existing observational constraints with any one model. Specifically, the models which best match the observed spectral slope in the UV, \(\beta_{\rm UV}\), are the models with the least dust content due to either low production yields or very high destruction rates. However, these models fail to predict sufficiently high infrared luminosities. Those that do predict IR luminosities consistent with observations have far too much dust extinction and thereby fail to agree with \(\beta_{\rm UV}\) constraints. Finally, we note that no one of our models appears to predict as much scatter in these observable quantities as the data exhibit.
* We speculate that these deficiencies are due to issues with the spatial distribution of dust relative to stars in our simulations, which may be overly smooth. To assess this hypothesis, we compare our simulations to spatially resolved observations of rest-frame UV emission and dust continuum (Inami et al., 2022), between which some galaxies show large spatial offsets, indicative of a highly dynamic ISM. We compare data from galaxies of similar estimated stellar mass to our most massive system, and find that all of our models fail to predict offsets as large as observed, lending support to the idea that our galaxies fail to capture the dynamic complexity of the high-redshift ISM, which is necessary to reproduce observations.

This manuscript has been co-authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. This work used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357. An award of computer time was provided by the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. This research is also part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993) and the state of Illinois. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications. This manuscript is based upon work that is supported by the Visiting Scholars Award Program of the Universities Research Association. This work was completed in part with resources provided by the University of Chicago's Research Computing Center.
Our analysis made use of the following publicly available software packages: Matplotlib (Hunter, 2007), SciPy (Jones et al., 2001), NumPy (Walt et al., 2011), COLOSSUS (Diemer, 2018), and yt (Turk et al., 2011). This manuscript is presented as part of a thesis to the Department of Astronomy and Astrophysics, The University of Chicago, in partial fulfillment of the requirements for the Ph.D. degree.

## Appendix A \(L_{\rm IR}\) correction

Figure 11 shows the infrared luminosity of the most massive galaxy in our simulation as a function of stellar mass (top panel), and the IRX-\(\beta_{\rm UV}\) relationship for the same galaxy (bottom panel). Different colors represent different dust model parameter choices as in Esmerian & Gnedin (2022). In the top panel, different line-styles indicate different choices for the dust temperature (dotted: \(T_{\rm D}=20\)K, dashed: \(T_{\rm D}=40\)K, solid: \(T_{\rm D}=60\)K), while the bottom panel only shows the predictions for \(T_{\rm D}=40\)K. This figure is identical to Figure 19 in Esmerian & Gnedin (2022), but with the corrected calculation of infrared luminosities as described in Section 2.
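The strong sensitivity of \(L_{\rm IR}\) to the assumed dust temperature can be understood from the standard modified-blackbody conversion between dust mass and infrared luminosity: for an opacity \(\kappa_{\nu}\propto\nu^{\beta}\), the frequency-integrated emission per unit dust mass scales as \(T_{\rm D}^{4+\beta}\). A quick numerical sketch (assuming \(\beta=2\); the exact conversion used in the paper may differ):

```python
import math

H = 6.626e-34    # Planck constant [J s]
KB = 1.381e-23   # Boltzmann constant [J / K]
C = 2.998e8      # speed of light [m / s]

def planck(nu, t):
    """Planck function B_nu(T) in SI units."""
    return 2.0 * H * nu**3 / C**2 / math.expm1(H * nu / (KB * t))

def greybody_integral(t, beta=2.0, n=20000):
    """Midpoint-rule integral of nu^beta * B_nu(T) over frequency.

    With kappa_nu proportional to nu^beta this is proportional to the
    emitted IR luminosity per unit dust mass (normalization dropped).
    """
    nu_max = 60.0 * KB * t / H   # integrand is negligible beyond x = h*nu/kT = 60
    dnu = nu_max / n
    return sum(((i + 0.5) * dnu)**beta * planck((i + 0.5) * dnu, t) * dnu
               for i in range(n))

# Raising T_D from 20K to 40K, or from 40K to 60K, rescales L_IR at fixed dust
# mass by (T2/T1)**(4+beta): factors of ~64 and ~11 respectively for beta = 2.
ratio_40_20 = greybody_integral(40.0) / greybody_integral(20.0)
ratio_60_40 = greybody_integral(60.0) / greybody_integral(40.0)
```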
2310.14411
Violation of the Einstein's Equivalence Principle for a Composite Quantum Body
Recently, we have started to investigate behavior of a composite quantum body in an external gravitational field in the framework of General Relativity [see, for a review, A. G. Lebed, Mod. Phys. Lett. A, {\bf 35}, 2030010 (2020)]. As the simplest example, we have considered a hydrogen atom in a weak gravitational field. Our results are the following. The Einstein's Equivalence Principle survives for the most of macroscopic ensembles of the atoms, containing the stationary quantum states. On the other hand, we have demonstrated that this principle is sometimes broken. In particular, it is broken for the so-called Gravitational demons, which are the coherent macroscopic ensembles of two or more stationary quantum states in the hydrogen atoms. In the above cited paper we have considered the Gedanken experiment, where the gravitational field is suddenly switched on in a free from gravitation space. In the current paper we consider the much more realistic from experimental point of view Gedanken experiment and come to the same conclusion about violations of the Einstein's Equivalence Principle for the Gravitational demons.
Andrei G. Lebed
2023-10-22T20:52:20Z
http://arxiv.org/abs/2310.14411v1
# Violation of the Einstein's Equivalence Principle for a Composite Quantum Body

###### Abstract

Recently, we have started to investigate the behavior of a composite quantum body in an external gravitational field in the framework of General Relativity [see, for a review, A. G. Lebed, Mod. Phys. Lett. A, **35**, 2030010 (2020)]. As the simplest example, we have considered a hydrogen atom in a weak gravitational field. Our results are the following. The Einstein's Equivalence Principle survives for most macroscopic ensembles of the atoms containing stationary quantum states. On the other hand, we have demonstrated that this principle is sometimes broken. In particular, it is broken for the so-called Gravitational demons, which are coherent macroscopic ensembles of two or more stationary quantum states in hydrogen atoms. In the above-cited paper we considered a Gedanken experiment where the gravitational field is suddenly switched on in a space free from gravitation. In the current paper we consider a Gedanken experiment that is much more realistic from the experimental point of view and come to the same conclusion about violations of the Einstein's Equivalence Principle for the Gravitational demons.

The Einstein's Equivalence Principle of equality between inertial and gravitational masses is a cornerstone of the theory of General Relativity [1,2]. During the recent space mission "MICROSCOPE", the validity of the principle for ordinary condensed matter was established with great accuracy, \(|m_{i}-m_{g}|/m_{i}\leq 10^{-16}-10^{-17}\) [3,4], whereas two other space missions, "GG" and "STEP", are currently in their preparation stages. Nevertheless, the validity of the Einstein's Equivalence Principle in the quantum case is not obvious and calls for theoretical and, hopefully, experimental investigation. Since a quantum theory of gravity has not yet been developed, in our previous papers (see Refs.
[5-8]) we have theoretically considered the so-called semi-classical variant of General Relativity, where the gravitational field is not quantized but matter is. In particular, in the framework of the semi-classical approach, we have shown that active and passive gravitational masses for the simplest quantum macroscopic ensembles - ensembles of hydrogen atoms in stationary states - preserve the Einstein's Equivalence Principle. On the other hand, we have also demonstrated that there exist unusual quantum ensembles, called the Gravitational demons, which result in violations of the Einstein's Equivalence Principle for both active [5,6,8] and passive [6-8] gravitational masses. Note that the Gravitational demons are defined as coherent macroscopic ensembles of quantum superpositions of two or several stationary states. As shown by us in the above-cited papers, the expectation values of active and passive gravitational masses in such macroscopic ensembles are no longer related to the expectation value of energy by the famous Einstein formula, \(m_{g}\neq E/c^{2}\). In particular, in Refs. [6-8] we have considered a Gedanken experiment where the Gravitational demons are created by laser techniques in ultra-cold matter [9] in the absence of a gravitational field, and the field is then switched on at \(t=0\). In this paper, we consider a Gedanken experiment that is more realistic from the experimental point of view, where we create the Gravitational demon in the presence of a gravitational field and then, at \(t=0\), change the field slightly. Our conclusions regarding passive gravitational mass are similar to those discussed above.
Let us consider for further calculations the so-called weak gravitational field limit [1,2], which describes the Earth's gravitational field with great accuracy in the framework of the General Relativity spacetime, \[(ds)^{2}(R)=-[1+2\phi(R)/c^{2}](cdt)^{2}+[1-2\phi(R)/c^{2}][(dx)^{2}+(dy)^{2}+(dz)^{2}],\ \ \ \phi(R)=-GM/R, \tag{1}\] where \(\phi(R)\) is the Newtonian potential, \(c\) is the light velocity, \(G\) is the Newtonian gravitational constant, \(R\) is a distance from a center of the Earth, and \(M\) is the mass of Earth. Note that, in accordance with the local Lorentz invariance, we can define the following proper local space-time coordinates, \[\tilde{x}(R)=[1-\phi(R)/c^{2}]x,\ \ \ \tilde{y}(R)=[1-\phi(R)/c^{2}]y,\ \ \ \tilde{z}(R)=[1-\phi(R)/c^{2}]z,\ \ \ \tilde{t}(R)=[1+\phi(R)/c^{2}]t, \tag{2}\] where the spacetime interval has the Minkowski form: \[[d\tilde{s}(R)]^{2}=-[cd\tilde{t}(R)]^{2}+[d\tilde{x}(R)]^{2}+[d\tilde{y}(R)]^{2}+[d\tilde{z}(R)]^{2}. \tag{3}\] [Note that the weak field approximation (1),(2) allows us below to perform all calculations with accuracy up to the first order of the small parameter \(|\phi|/c^{2}\ll 1\).] The Schrödinger equation for the electron in a hydrogen atom in the proper local coordinates (2) can be expressed, in the approximation where we neglect the so-called tidal terms, as \[i\hbar[\partial\Psi(\tilde{\bf r},\tilde{t})]/\partial\tilde{t}=\hat{H}_{0}(\hat{\tilde{\bf p}},\tilde{\bf r})\Psi(\tilde{\bf r},\tilde{t}), \tag{4}\] where \(\hat{H}_{0}(\hat{\tilde{\bf p}},\tilde{\bf r})\) is the electron Hamiltonian in the hydrogen atom. We stress that Eq.(4) can be written if the position of the proton is fixed by some non-gravitational forces and if we omit all the so-called tidal terms, which are extremely small near the Earth. Below, we perform the following Gedanken experiment.
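For the Earth, the small parameter \(|\phi|/c^{2}\) appearing in Eqs. (1) and (2) is indeed tiny, which justifies keeping only first-order terms. A quick numerical check using standard values for the Earth's mass and radius:

```python
G = 6.674e-11   # Newtonian gravitational constant [m^3 kg^-1 s^-2]
M = 5.972e24    # mass of the Earth [kg]
R = 6.371e6     # mean radius of the Earth [m]
C = 2.998e8     # speed of light [m/s]

phi = -G * M / R                   # Newtonian potential at the surface, Eq. (1)
small_parameter = abs(phi) / C**2  # the expansion parameter |phi|/c^2

# roughly 7e-10, so second-order corrections are suppressed by ~18 more orders
print(small_parameter)
```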
At \(t=0\) we create a quantum macroscopic ensemble of the coherent superpositions of the two stationary states in the atom - \(1S\) (with the corresponding wave function \(\Psi_{1}[{\bf\tilde{r}}({\bf R})]\)) and \(2S\) (with the corresponding wave function \(\Psi_{2}[{\bf\tilde{r}}({\bf R})]\)): \[\tilde{\Psi}^{1}_{12}(\tilde{r},\tilde{t})=\exp[-im_{e}c^{2}\tilde{t}(R)/\hbar ]\frac{1}{\sqrt{2}}\biggl{\{}\Psi_{1}[\tilde{r}(R)]\exp[-iE_{1}\tilde{t}(R)/ \hbar]+\Psi_{2}[\tilde{r}(R)]\exp(i\tilde{\alpha})\exp[-iE_{2}\tilde{t}(R)/ \hbar]\biggr{\}}, \tag{5}\] where a phase difference between the stationary wave functions is the same for each microscopic state in the macroscopic ensemble, \[\tilde{\alpha}=const, \tag{6}\] and the wave functions of the stationary states are known to be real and are normalized in the local proper coordinates: \[\int_{-\infty}^{+\infty}[\Psi_{1}(\tilde{r})]^{2}d^{3}\tilde{r}=\int_{-\infty} ^{+\infty}[\Psi_{2}(\tilde{r})]^{2}d^{3}\tilde{r}=1,\ \ \ \ \int_{-\infty}^{+\infty}\Psi_{1}(\tilde{r})\Psi_{2}( \tilde{r})d^{3}\tilde{r}=0, \tag{7}\] where \(m_{e}\) is the electron bare mass. For convenience, here we rewrite wave function (5) in the initial spacetime coordinates, \[\Psi^{1}_{12}(r,t)=\exp\{-im_{e}c^{2}t[1+\phi(R_{1})/c^{2}]/\hbar \}[1-\phi(R_{1})/c^{2}]^{3/2}\] \[\times\biggl{(}\Psi_{1}[r(1-\phi(R_{1})/c^{2})]\exp\{-iE_{1}t[1+ \phi(R_{1})/c^{2}]/\hbar\}\] \[+\Psi_{2}[r(1-\phi(R_{1})/c^{2})]\exp(i\tilde{\alpha})\exp\{-iE_{ 2}t[1+\phi(R_{1})/c^{2}]/\hbar\}\biggr{)}/\sqrt{2}, \tag{8}\] where \(\phi_{1}=\phi(R_{1})\) is value of the gravitational potential (1) at initial moment of time, \(t=\tilde{t}=0\). Let us continue our Gedanken experiment. 
We suddenly change the gravitational potential to the value \(\phi_{2}=\phi(R_{2})=\phi(R_{1})+\delta\phi\), where \(|\delta\phi|\ll|\phi(R_{1})|\), and ask how the expectation value of the energy of our Gravitational demon (5),(8) is changed, provided that the time of the change of the gravitational potential is shorter than the characteristic time of the quasi-classical electron motion corresponding to the wave function (8). Then, at \(t>0\), we have the following wave function: \[\Psi^{2}_{12}(r,t)=\exp\{-im_{e}c^{2}t[1+\phi(R_{2})/c^{2}]/\hbar \}[1-\phi(R_{2})/c^{2}]^{3/2}\] \[\times\biggl{(}A\ \Psi_{1}[r(1-\phi(R_{2})/c^{2})]\exp\{-iE_{1}t[1+ \phi(R_{2})/c^{2}]/\hbar\}\] \[+B\ \Psi_{2}[r(1-\phi(R_{2})/c^{2})]\exp\{-iE_{2}t[1+\phi(R_{2})/c^{ 2}]/\hbar\}\biggr{)}, \tag{9}\] where \(A\) and \(B\) are some complex numbers. Since we have a sudden perturbation of the spacetime in our Gedanken experiment, the conservation of the number of particles can be expressed as \[|\Psi^{1}_{12}(r,t=0)|^{2}=|\Psi^{2}_{12}(r,t=0)|^{2}. \tag{10}\] [Note that probabilities to occupy higher energy levels are proportional to \((\delta\phi)^{2}/c^{4}\) and, thus, are not considered in this paper.] It is important that we are not interested in the common phase of the complex numbers \(A\) and \(B\); therefore, we can rewrite Eq.(10) in the following form, \[\Psi^{1}_{12}(r,t=0)=\Psi^{2}_{12}(r,t=0). \tag{11}\] Taking into account Eqs.(8) and (9), we obtain \[\frac{1}{\sqrt{2}}\biggl{(}1-\frac{\phi_{1}}{c^{2}}\biggr{)}^{3/2} \biggl{\{}\Psi_{1}[r(1-\phi_{1}/c^{2})]\ +\exp(i\tilde{\alpha})\Psi_{2}[r(1-\phi_{1}/c^{2})]\biggr{\}}\] \[=\biggl{(}1-\frac{\phi_{2}}{c^{2}}\biggr{)}^{3/2}\biggl{\{}A\ \Psi_{1}[r(1-\phi_{2}/c^{2})]\ +B\ \Psi_{2}[r(1-\phi_{2}/c^{2})]\biggr{\}}.
\tag{12}\] Let us solve Eq.(12) using orthogonality conditions between the wave functions \({\bigg{(}1-\frac{\phi_{i}}{c^{2}}\bigg{)}^{3/2}}\Psi_{1}[r(1-\phi_{i}/c^{2})]\) and \({\bigg{(}1-\frac{\phi_{i}}{c^{2}}\bigg{)}^{3/2}}\Psi_{2}[r(1-\phi_{i}/c^{2})]\), where \(i=1,2\). As a result we find: \[\frac{1}{\sqrt{2}}=A\ I_{11}+B\ I_{12},\quad\frac{\exp(i\tilde{ \alpha})}{\sqrt{2}}=A\ I_{21}+B\ I_{22}\, \tag{13}\] where \[I_{ij}={\bigg{(}1-\frac{\phi_{1}}{c^{2}}\bigg{)}^{3/2}}{\bigg{(}1 -\frac{\phi_{2}}{c^{2}}\bigg{)}^{3/2}}{\int_{-\infty}^{+\infty}\Psi_{i}}{ \bigg{[}r{\bigg{(}1-\frac{\phi_{1}}{c^{2}}\bigg{)}}\bigg{]}\ \Psi_{j}}{\bigg{[}r{\bigg{(}1-\frac{\phi_{2}}{c^{2}}\bigg{)}} \bigg{]}d^{3}}r. \tag{14}\] Here, we calculate the matrix elements \(I_{ij}\) by means of the method developed in Refs. [6-8] with accuracy up to the first orders of \(|\delta\phi|/c^{2}\ll 1\) and \(|\phi_{i}|/c^{2}\ll 1\). After the lengthy but straightforward calculations, we obtain \[I_{11}=I_{22}=1. \tag{15}\] and \[I_{12}=-I_{21}=\frac{\delta\phi}{c^{2}}\alpha=\frac{\delta\phi}{ c^{2}}\int_{-\infty}^{+\infty}\Psi_{1}^{\prime}(r)\ r\ \Psi_{2}(r)d^{3}r=\frac{\delta\phi}{c^{2}}\frac{V_{12}}{E_{2}-E_{1}},\quad\Psi_ {1}^{\prime}(r)=\frac{d\Psi_{1}(r)}{dr}, \tag{16}\] where \(E_{1}\) and \(E_{2}\) are energies of the ground state \(1S\) and the first exited state \(2S\), correspondingly; \(V_{12}\) is the matrix element of the so-called quantum virial operator in the hydrogen atom [6-8,10]: \[V_{12}=\int_{-\infty}^{+\infty}\Psi_{1}(r)\hat{V}(r)\Psi_{2}(r)d ^{3}r,\quad\hat{V}(r)=\frac{{\bf p}^{2}}{m_{e}}-\frac{e^{2}}{r^{\prime}}, \tag{17}\] where \({\bf p}\) is electron momentum operator, \(r^{\prime}\) is a distance between electron and the fixed proton; \(e\) is the electron charge. 
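The identity in Eqs. (16)-(17) can be checked numerically. The sketch below (added for illustration, not part of the paper) evaluates both sides with the standard hydrogen 1S and 2S wave functions in atomic units (\(\hbar=m_{e}=e=1\)); the reduction of \(V_{12}\) to \(\langle 1S|e^{2}/r|2S\rangle\) is an assumed intermediate operator-algebra step not spelled out in the excerpt.

```python
import numpy as np

def integrate(y, x):
    """Composite trapezoidal rule (version-proof, pure NumPy)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Hydrogen 1S and 2S wave functions in atomic units, where E_1 = -1/2, E_2 = -1/8.
r = np.linspace(1e-8, 60.0, 400_001)
psi1 = np.exp(-r) / np.sqrt(np.pi)                              # 1S
psi2 = (2 - r) * np.exp(-r / 2) / (4 * np.sqrt(2 * np.pi))      # 2S
dpsi1 = -np.exp(-r) / np.sqrt(np.pi)                            # dPsi_1/dr
E1, E2 = -0.5, -0.125

# Left-hand side of Eq. (16): integral of Psi_1'(r) r Psi_2(r) over d^3r;
# the angular integration contributes the factor 4*pi*r^2.
alpha = integrate(dpsi1 * r * psi2 * 4 * np.pi * r**2, r)

# V_12 of Eq. (17): <1S| p^2/m_e - e^2/r |2S>. Acting with H on |2S> and using
# <1S|2S> = 0, this reduces to <1S| e^2/r |2S> (assumed reduction, see lead-in).
V12 = integrate(psi1 * (1 / r) * psi2 * 4 * np.pi * r**2, r)

print(alpha, V12 / (E2 - E1))   # both sides agree, ~0.5587 = 64/(81*sqrt(2))
```

Both sides come out equal to \(64/(81\sqrt{2})\approx 0.5587\), consistent with Eq. (16).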
First of all, we solve the system of linear equations (13) to find the complex numbers A and B: \[A=\frac{1}{\sqrt{2}}{\bigg{[}1-\alpha{\bigg{(}\frac{\delta\phi}{ c^{2}}\bigg{)}}{\exp(i\tilde{\alpha})}{\bigg{]}}},\ \ \ \ B=\frac{1}{\sqrt{2}}{\bigg{[}\exp(i\tilde{\alpha})+\alpha{\bigg{(}\frac{ \delta\phi}{c^{2}}\bigg{)}}\bigg{]}}. \tag{18}\] Then, we make sure that they preserve the normalization conditions with the accuracy of calculations accepted in this paper, \[|A|^{2}+|B|^{2} = \frac{1}{2}{\bigg{\{}}{\bigg{[}1-\alpha{\bigg{(}\frac{\delta \phi}{c^{2}}\bigg{)}}{\exp(i\tilde{\alpha})}{\bigg{]}}{\bigg{[}1-\alpha{ \bigg{(}\frac{\delta\phi}{c^{2}}\bigg{)}}{\exp(-i\tilde{\alpha})}\bigg{]}} \tag{19}\] \[+{\bigg{[}\exp(i\tilde{\alpha})+\alpha{\bigg{(}\frac{\delta\phi} {c^{2}}\bigg{)}}{\bigg{]}}{\bigg{[}\exp(-i\tilde{\alpha})+\alpha{\bigg{(}\frac {\delta\phi}{c^{2}}\bigg{)}}{\bigg{]}}}{\bigg{\}}{\approx 1}}.\] And finally, we calculate the expectation energy of the Gravitational demon (9): \[{\bigg{\langle}E{\bigg{\rangle}}}{={\bigg{[}E_{1}+m_{e}\delta\phi+ {\bigg{(}\frac{E_{1}}{c^{2}}\bigg{)}}\delta\phi{\bigg{]}}{\bigg{|}1-\alpha{ \bigg{(}\frac{\delta\phi}{c^{2}}\bigg{)}}{\exp(i\tilde{\alpha})}{\bigg{]}}{ \bigg{[}1-\alpha{\bigg{(}\frac{\delta\phi}{c^{2}}\bigg{)}}{\exp(-i\tilde{ \alpha})}{\bigg{]}}}}} \tag{20}\] \[+{\bigg{[}E_{2}+m_{e}\delta\phi+{\bigg{(}\frac{E_{2}}{c^{2}}\bigg{)} }\delta\phi{\bigg{]}}{\bigg{[}\exp(i\tilde{\alpha})+\alpha{\bigg{(}\frac{ \delta\phi}{c^{2}}\bigg{)}}\bigg{]}}{\bigg{[}\exp(-i\tilde{\alpha})+\alpha{ \bigg{(}\frac{\delta\phi}{c^{2}}\bigg{)}}\bigg{\}}.\] As a result we obtain: \[{\bigg{\langle}E{\bigg{\rangle}}}{=\frac{E_{1}+E_{2}}{2}+{\bigg{(}m _{e}+\frac{E_{1}+E_{2}}{2c^{2}}+\frac{V_{12}}{c^{2}}\cos(\tilde{\alpha})}{ \bigg{)}}\delta\phi}, \tag{21}\] where the expectation value of the gravitational mass per one electron can be define as \[\left\langle m_{g}\right\rangle\!\!=m_{e}+\left(\frac{E_{1}+E_{2}}{2c^{2}}\right) 
+\left(\frac{V_{12}}{c^{2}}\right)\!\cos(\tilde{\alpha}). \tag{22}\] Let us discuss Eq.(22). The first term is the bare electron mass, and the second term is the contribution to the gravitational mass from the energies of the electron levels (5), which is expected in relativity. The third term is the anomalous one: a contribution from the so-called virial term [6-8] that is completely unexpected in relativity. It depends on the fixed phase difference, \(\tilde{\alpha}=const\), in the coherent macroscopic ensemble of two stationary states (5),(6). On the basis of Eq.(22), we conclude that the equivalence between the expectation values of energy and gravitational mass breaks down for the Gravitational demon described above. We put forward a natural hypothesis that our results are not restricted to ensembles of hydrogen atoms but are qualitatively true for any coherent macroscopic ensemble of two or several stationary states of any composite quantum object. In conclusion, we briefly discuss some experimental aspects of a possible discovery of the violation of Einstein's Equivalence Principle. First of all, we stress again that the phase difference between the two quantum states in the macroscopic ensemble, \(\tilde{\alpha}\), has to be fixed with good enough accuracy (5),(6); otherwise, the anomalous term in Eq.(22) disappears after averaging over the phase. Such an ensemble can, in principle, be created by some laser technique, possibly with ultra-cold atoms or molecules, in an Earth-based laboratory [9]. The second important point is that the measurement of the weight of the ensemble has to be done very quickly, i.e., quicker than the time scales characterizing the wave functions in the quantum ensemble. Our next steps will be to find more convenient composite quantum objects for the above-mentioned procedures. We are thankful to Natalia N. Bagmet (Lebed), Steven Carlip, Fulvio Melia, Pierre Meystre, Keneth Nordtvedt, Douglas Singleton, and Vladimir E. Zakharov for useful discussions.
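The algebra leading from Eqs. (18)-(20) to Eq. (21) is straightforward but tedious; as a sanity check (a sketch added here for illustration, not part of the original paper), the expansion to first order in \(\delta\phi/c^{2}\) can be reproduced symbolically:

```python
import sympy as sp

E1, E2, me, V12, c, dphi, a = sp.symbols(
    'E_1 E_2 m_e V_12 c delta_phi alpha', real=True)
eps = dphi / c**2                 # small parameter delta_phi / c^2
k = V12 / (E2 - E1)               # overlap coefficient of Eq. (16)

# A and B from Eq. (18); 'a' plays the role of the fixed phase tilde{alpha}.
A = (1 - k * eps * sp.exp(sp.I * a)) / sp.sqrt(2)
B = (sp.exp(sp.I * a) + k * eps) / sp.sqrt(2)

# Normalization, Eq. (19): |A|^2 + |B|^2 = 1 up to O((delta_phi)^2).
norm = sp.expand(A * sp.conjugate(A) + B * sp.conjugate(B))
norm_lin = sp.series(norm, dphi, 0, 2).removeO()
assert sp.simplify(norm_lin - 1) == 0

# Expectation energy, Eq. (20), with level energies shifted by the red shift.
E = ((E1 + me * dphi + E1 * dphi / c**2) * A * sp.conjugate(A)
     + (E2 + me * dphi + E2 * dphi / c**2) * B * sp.conjugate(B))
E_lin = sp.series(sp.expand(E), dphi, 0, 2).removeO()

# Target expression, Eq. (21).
target = (E1 + E2) / 2 + (me + (E1 + E2) / (2 * c**2)
                          + V12 * sp.cos(a) / c**2) * dphi
assert sp.simplify(sp.expand_complex(sp.expand(E_lin - target))) == 0
```

Both assertions pass: to first order in \(\delta\phi/c^{2}\) the state stays normalized and the expectation energy reproduces Eq. (21), including the \(\cos(\tilde{\alpha})\) virial term.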
2301.02532
Better Balance in Informatics: An Honest Discussion with Students
In recent years, there has been considerable effort to promote gender balance in the academic environment of Computer Science (CS). However, there is still a gender gap at all CS academic levels: from students, to PhD candidates, to faculty members. This general trend is followed by the Department of Computer Science at UiT The Arctic University of Norway. To combat this trend within the CS environment at UiT, we embarked on structured discussions with students of our department. After analyzing the data collected from these discussions, we were able to identify action items that could mitigate the existing gender gap at our department. In particular, these discussions elucidated ways to achieve (i) a balanced flow of students into CS undergraduate program, (ii) a balanced CS study environment, and (iii) a balanced flow of graduates into higher levels of the CS academia (e.g., PhD program). This paper presents the results of the discussions and the subsequent recommendations that we made to the administration of the department. We also provide a road-map that other institutions could follow to organize similar events as part of their gender-balance action plan.
Elisavet Kozyri, Mariel Evelyn Markussen Ellingsen, Ragnhild Abel Grape, Letizia Jaccheri
2023-01-06T14:44:32Z
http://arxiv.org/abs/2301.02532v1
# Better Balance in Informatics: An Honest Discussion with Students

###### Abstract

In recent years, there has been considerable effort to promote gender balance in the academic environment of Computer Science (CS). However, there is still a gender gap at all CS academic levels: from students, to PhD candidates, to faculty members. This general trend is followed by the Department of Computer Science at UiT The Arctic University of Norway. To combat this trend within the CS environment at UiT, we embarked on structured discussions with students of our department. After analyzing the data collected from these discussions, we were able to identify action items that could mitigate the existing gender gap at our department. In particular, these discussions elucidated ways to achieve (i) a balanced flow of students into the CS undergraduate program, (ii) a balanced CS study environment, and (iii) a balanced flow of graduates into higher levels of the CS academia (e.g., PhD program). This paper presents the results of the discussions and the subsequent recommendations that we made to the administration of the department. We also provide a road-map that other institutions could follow to organize similar events as part of their gender-balance action plan.

Keywords: Gender balance, computer science, diversity, inclusion, student, study

## 1 Introduction

Innovations in Computer Science shape the lives of everyone in our society. To create innovative solutions tailored to everyone, it is important that all groups of society are represented in the creation of these solutions. However, this is still not the case in the field of Computer Science (CS). Having an awareness of the lack of representation and the different barriers people face in CS is fundamental in helping the field target those challenges and become more equitable and inclusive [8]. Statistics from Europe show that women are still highly underrepresented in CS.
According to Eurostat [4], the percentage of female specialists in Information and Communications Technology has evolved from 17% in 2012 to 19.1% in 2021. At university level in STEM, the percentage of female Bachelor, Master, and PhD students is 20%, while the percentage of female professors is 15%. Specifically for the Department of Computer Science at UiT The Arctic University of Norway, only 13% of students, 14% of PhD candidates and 21% of faculty members are female. _Better Balance in Informatics_ (BBI), a program led by the CS department at UiT and funded by the Research Council of Norway, aims to rectify this imbalance and create a more diverse learning environment for Computer Science. BBI is connected to and builds upon an ecosystem of national and international projects that address gender balance in CS, acting on different levels: school ([13], [7]), university ([2], [6], [16]), industry ([17], [3], [5]), and the interplay of these levels ([1]). BBI aimed to identify some of the reasons that led to the current gender dynamics in our CS department, and then to propose measures that could address those reasons. Hearing directly from the CS students (Bachelor, Master) seemed a sensible way to identify those reasons. So, BBI organized structured discussion sessions, where we invited CS students (Bachelor, Master) to share their thoughts about:

1. the reasons they picked CS for their studies,
2. their current experience with the CS studies,
3. their intention to pursue an academic career in CS, and
4. ways to make the CS community more diverse and inclusive.

The answers of the students illuminated points of intervention, which could lead to a balanced flow of students into the CS undergraduate program, a study environment that embraces diversity, and a balanced flow of students into higher levels of the CS academia.
This paper presents the methodology (§2) we employed to organize the discussion sessions, collect responses, and report the results. We then present the specific questions we asked the students and the analysis of their answers (§3). Finally, we list the recommendations (§4) submitted to the CS department for achieving a gender-balanced environment, we discuss related work (§5), and we conclude (§6) with reflections about the discussion sessions.

## 2 Methodology

The end goal of the discussion sessions was to identify points of intervention that could increase the gender balance among the incoming CS students, the current CS students, and the CS graduates that are interested in entering the CS academia. To identify those points, we aimed for a high number of participants in the discussion sessions: the more participants, the greater the plurality of experiences, and thus the higher the chances of finding opportunities for improvement. Deciding which questions to ask was crucial to ensure that experiences from different aspects of the CS studies were captured and then analyzed. But we also had to create a trusting discussion environment for the students to honestly share those experiences with us. This section describes the methodology we followed to prepare and organize the discussion sessions such that all these targets were met.

### Outreach

Attracting the attention of students and persuading them to participate in the discussion sessions was not trivial. Unless there is an immediate academic or employment gain, motivating students to devote part of their busy schedules to a university-led event is indeed challenging. Our strategy to address this challenge comprised the following steps:

Hiring students as project assistants. We hired two talented and enthusiastic female students as assistants for the project. They were our bridge to the student community in our CS department, and this bridge functioned in both ways.
Their thoughts, insights, and experience informed all aspects of the BBI project, including the questions we asked during the discussions. At the same time, they knew best how to reach their fellow students and promote the agenda of BBI (e.g., which advertisement channels to employ and what to say in them).

Website. The BBI website ([https://uit.no/project/bbi](https://uit.no/project/bbi)) is the main official space where the mission of BBI is communicated to the world. So, we redesigned this website to include a clear and short motivation for the BBI mission and to describe the upcoming BBI events, in particular the discussion sessions.

Advertisement. To reach a wider set of students and persuade them to participate in the BBI discussion sessions, we employed a variety of means. We posted advertisements on the monitors of the Department, the social networks of the Department, Canvas, the UiT calendar, and the local student organization forum, which is a Discord server maintained by the student organization TD. The student assistants also gave 5-minute talks about BBI and the discussion sessions in courses with high enrollment, created and distributed flyers, and organized a stand with coffee and cookies, where students could casually socialize and talk about BBI. In terms of registrations to the BBI discussion sessions, Canvas and TD seemed to have been the most effective channels, since we observed a high correlation between the time a post about the BBI event was created and the time students registered.

Open to everyone. The invitation to participate in the BBI discussion sessions was open to all students of the CS department, independently of their gender (female, male, non-binary). This is because the gender imbalance is a problem that ought to concern everyone--not only a part of the community.
And because any further actions that the Department takes to address the problem might affect every student, there needs to be a wider understanding that these actions are worthwhile. Leaving specific groups of students outside the discussion would not have increased this understanding.

### Discussion Sessions

The discussion sessions were held at Ardna, UiT. Ardna is an iconic place in the university, ideal for secluded discussions. Its central fireplace and the surrounding wooden benches invite people to open up and discuss honestly. Around 20 participants took part in the BBI discussion sessions.1 Compared to events organized in the past by BBI, this number of participants was a very welcome surprise. Of those participants, around 50% were female or non-binary students, and around 50% were male students. The vast majority of the students participated physically, but some participated remotely. There were around three participants per discussion session. Each discussion session was moderated by two BBI members: one member asked the questions, and the other typed the answers (we used no video or sound recording). At least one of the moderators was always one of the BBI student assistants; having participants talking to their peers led to frank discussions. To preserve anonymity, each participant was assigned a number, so the recorded answers were associated with these numbers--not with the identity of the student. For the discussion sessions, we gave the students the option to select either Norwegian or English as the speaking language. All participants, apart from those that participated remotely, were offered a full meal and a free cinema ticket.

Footnote 1: We do not give specific numbers to preserve the anonymity of the participants.
### Selection of Questions

The selection of questions was inspired by questionnaires developed by other gender-balance projects, such as EUGAIN [1], Prestige in UiT [2], and BalanseHub [6]. However, the questions were tailored to the needs of the CS department at UiT. In particular, the questions were intended to cover a wide range of students' experience: from the point they considered applying for CS, to their current studies and their future plans.

### Reporting of Results

For most of the questions asked during the BBI discussion sessions, we compiled the answers into graphs. Each graph depicts how answers are correlated with different genders. This information helps us guide our selection of action items for improving the gender balance. We protect the anonymity of the participants, so we do not give specific numbers in the graphs. Also, we do not have a separate category for non-binary participants, because the number of non-binary participants was not high enough to protect their anonymity. Instead, we group female and non-binary participants together, and we explore the dynamics between the majority (males) and the minorities (female, non-binary).

## 3 Results

This section presents the questions we asked the participants and their answers concerning:

1. the reasons they picked CS for their studies,
2. their current experience with the CS studies,
3. their intention to pursue an academic career in CS, and
4. ways to make the CS community more diverse and inclusive.

Correlating their answers with their gender, we identified action items that could lead to a balanced flow of students into the CS undergraduate program, a study environment that embraces diversity, and a balanced flow of students into higher levels of the CS academia.

### Intention to Study CS

To increase the balance in Computer Science, one first needs to increase the balance among the incoming students. So, when advertising CS to younger students, one could also include aspects that attract minorities.
We tried to identify those aspects by asking the participants the reason they decided to study CS in the first place. Figure 1 shows the different answers we received. The higher the column, the more students gave that answer. The graph also shows how the answers are split between the minority (F/NB) and the majority (M). There is a correlation between the gender and the reason for selecting CS studies.

Figure 1: Reasons for choosing to study CS. Each column corresponds to a different reason. The height of a column represents the number of participants that submitted the corresponding reason. Dark blue represents female or non-binary participants (F/NB); yellow represents male participants (M).

Action Items. Observing Figure 1, we can identify the reasons the minority chose CS studies: the problem-solving aspect of CS, the flexibility of the CS studies, and the job opportunities that CS graduates enjoy. To increase the diversity of incoming students, we can then emphasize those reasons when advertising CS. As a possible means of advertisement, Figure 1 also indicates the UiT girls-and-tech day.

Apart from the UiT girls-and-tech day, we wanted to understand what other means of advertisement would be effective for attracting minorities to CS studies. So, we asked the participants where they heard about CS. Figure 2 plots the answers, which are again correlated with the gender.

Action Items. Figure 2 indicates that one could use the high school and the university's study catalog to better promote CS to minorities. Interestingly, friends and relatives have a high impact on the decision of minorities to study CS. So, one can make tech employees ambassadors of CS to young female and non-binary members of their families. In general, the vast majority of the participants, independently of their gender, would have liked CS to have been introduced differently to them, as Figure 3 indicates.
Participants indicated that CS should be introduced as something that everyone can do and something that offers a plausible path to a regular job.

Action Item. When advertising to high-school students, we need to break stereotypes on who can study CS.

Figure 2: Ways of becoming familiar with CS.

### Your Experience in the Current CS Environment

At the very least, we want to sustain the diversity among the incoming students while these students progress to more senior years; we aim to decrease the number of drop-outs, with an emphasis on the minority group. To achieve this, we need to assess the students' experience within the CS department and identify aspects that can be improved to accommodate gender diversity. We start by asking the participants whether the CS studies met their initial expectations. Their answers are depicted in Figure 4. Almost all minority participants gave a negative answer: they found their studies either more difficult or more practical than expected. In particular, they found the learning curve of programming to be particularly steep, something that might drive some of the minority students to drop out. We believe addressing this concern is important for maintaining a diverse environment in the department.

Action Item. We propose the adoption of a smoother introduction to programming, appropriate for students with no prior programming experience.

Returning to Figure 4, one also notices that, for most of the male participants, their experience in the CS studies met or even exceeded their initial expectations. So, this question highlights a striking difference between the minorities and the male students in terms of how they view their studies. This difference might be a cause of the current gender imbalance in the department.

Figure 3: Independent of their gender, 81% of the participants said that they would have liked to be introduced to CS in a different way.
All the participants agreed, though, that the social environment built around their studies exceeded their initial expectations. This is a great achievement of the department that needs to be preserved in the future, too. Participants were then explicitly asked whether they had thought of dropping out of their study program. As Figure 5 shows, most of the participants answered affirmatively. Notice that this is a concern across all genders, opposing the misconception that the minorities are more likely to have such thoughts. Notice also that even though most of the male students had thought of dropping out of the program, they still had an overall positive assessment of their study experience, according to Figure 4. As reasons for thinking of dropping out, the participants cited the difficulty of some courses, the time-consuming assignments with overlapping deadlines, the demanding task of writing a thesis (a task for which they did not feel prepared), and the complications that the COVID-19 pandemic brought. For a student to be thinking of dropping out, the student's self-esteem might be low at that point. Figure 6 validates this claim, showing that most of the participants felt "useless" or "not-deserving" to be in the CS program. Again, the answers do not seem to be correlated with the gender. However, there is an underlying difference: many of the males had this feeling once, related to a specific assignment or for a short period of time, whereas the minority students had this feeling for a long period of time (i.e., months or even years).

Figure 4: The first column corresponds to the answer that the CS studies met or exceeded the expectations of the participant. The remaining four columns correspond to the answer that the CS studies did not quite meet the expectations of the participant, and they also represent different reasons why. The height of a column represents the number of participants that submitted the corresponding answer.
Dark blue represents female or non-binary participants (F/NB); yellow represents male participants (M).

When asked about the reasons they instead decided to stay in the program and not drop out, the participants mentioned:

* the robust social network that they have built within and outside UiT, where they could talk about their struggles,
* the senior student advisor Jan Fuglesteg,
* their self-determination and discipline, and
* taking time to relax.

Figure 5: The majority of the participants replied that they have thought of dropping out of their CS study program.

Figure 6: The majority of the participants replied that they have felt "useless" or "not-deserving to be here" during their CS study program.

Action Items. Given the stimulating power that the social groups exercised on the students, we should further support actions and groups that promote social networking in the department. Also, we will organize events where senior students can offer tips and tricks from their experiences to the junior students, where the main message will be "if we have made it, so can you".

Concentrating on minority students, one of the reasons they might feel uncomfortable in an environment (and ultimately drop out of the program) is when they have experienced sexual harassment. So, we asked the participants whether they have ever witnessed or heard of sexual harassment incidents within the CS community. Figure 7 depicts their answers. More than half of the participants answered positively.

Action Item. The "Yes" column in Figure 7 should not exist. So, we will propose to the department to devise videos and examples of unacceptable behavior, so that students can recognize and dissociate themselves from these phenomena.

The experience that a student gets from their CS program is a collection of many educational aspects, such as lectures, colloquiums, and assignments. For the CS program to be inclusive and diverse, all these aspects should promote inclusiveness and diversity.
We asked the participants if they feel that the educational aspects below promote only a particular way of thinking.

Figure 7: More than half of the participants said that they have witnessed or heard of sexual harassment incidents within our CS community.

* Lectures: Participants mentioned that examples employed in some lectures are more appealing to male students. These examples usually involve games or cars.
* Colloquiums:1 Participants, from all genders, mentioned that a colloquium can quickly get a "boys-club" atmosphere if the TA is not attentive enough. The participants also expressed the wish for more female TAs.
* Assignments: Some assignments are very focused on gaming or promote competitiveness, which might be uncomfortable for some students.

Footnote 1: A colloquium is associated with a course and is often led by a TA, who answers students' questions about the course material and assignments.

Action Items. We will advise the faculty members to ensure that lectures and assignments accommodate a variety of interests. The department should organize a seminar for TAs in which they become sensitized to not allowing colloquiums to become "boys-clubs" and to accommodating different interests and needs. We also need to brainstorm on how to hire more female TAs.

### Intention to Enter Higher Levels in CS Academia

We are striving to achieve balance at all levels of the CS academia, from students to professors. In this section, we focus our attention on the higher levels of CS academia (from PhD candidates and above), and we want to understand the intention of the current students to enter that level. According to Figure 8, the vast majority, and in particular all female and non-binary participants, do not wish to have an academic career in CS. Here are some reasons why:

Figure 8: Almost all of the participants do not wish to have an academic career in CS.

* The academic career seems difficult or exhausting.
* No interest in research or writing or teaching.
* Preference for the work-life balance offered by the industry (pay, social aspects, practical use of tools).
* The description of an academic career is not clearly communicated.

In fact, as indicated by Figure 9, there is uncertainty among the students, and in particular the minority students (i.e., female and non-binary), about what a PhD program is--the first stepping stone towards building an academic career. Given this uncertainty, it is expected that many students will not apply to a PhD program, something that is affirmed by Figure 10. Notice, though, that comparing Figures 8 and 10, the participants are less negative towards pursuing a PhD program than ultimately following an academic career. This is because, after obtaining a PhD degree, there is always the possibility to follow a career in the industry. And some of the participants that replied "no" to the prospect of applying to a PhD program now contemplate the possibility of applying after working in the industry.

And if a participant does not want to apply to a PhD program, what are their immediate plans after graduation? Figure 11 answers this question. Notice that participants from the minority group explore a wider variety of options. Also, for the participants that are not considering applying to the CS PhD program, Figure 12 gives the main reasons behind this disposition. Figure 12 informs how we can intervene and address some of these reasons, possibly leading to more students applying to our PhD program.

Figure 9: Most of the participants are either unsure about or do not know what the CS PhD program is.

Figure 10: Most of the participants would not apply to a PhD program.

Figure 11: Columns correspond to different options the participants consider to follow after graduation.

Action Items. According to Figure 12, some participants said that the PhD program "sounds too heavy", and they described PhD students as being "alone" and "depressed". While this description might portray one aspect of the PhD
experience, it is definitely not the entire truth. So, we are going to hold events that clearly describe the characteristics of a PhD program, emphasizing the positive aspects of being a PhD student. These events will also address the uncertainty, surfaced in Figure 9, about what a PhD program is. The late deadline for applying to a PhD, which is not synchronized with the job-search period of senior students, is another reason why current students do not select a PhD program. To remedy this, we will advise the faculty members of the CS department to consider moving the PhD application earlier in the academic year (i.e., fall semester). Finally, given that many participants said that they might consider applying to a PhD program in the future (i.e., after acquiring some experience in the industry), we advocate advertising new PhD positions explicitly to CS alumni. For some of these alumni, these positions might seem attractive.

### The Gender Gap and Possible Measures to Close It

In previous sections, we attempted to understand the reasons why a gender imbalance exists in the CS department, and concluded with action items that could address those reasons. In this section, we explicitly discuss gender balance with the students and record their opinions and proposals on the subject. To begin with, the vast majority of the participants said that the current gender imbalance in the department is a problem. They actually see that gender balance has advantages: it promotes innovation, enables a plurality of perspectives, and leads to a better study and work environment. Many of the participants said that there are no disadvantages to gender balance, although some expressed the concern that gender-balance efforts, such as gender quotas, might "lead to people being hired for the wrong reasons".

Figure 12: Columns correspond to different reasons why the participants are not considering applying to a PhD program.
The participants were then presented with different measurements that could be employed to improve the gender balance within the department and were asked whether they are in favor of each presented measurement. Figure 13 depicts their answers. Notice that measurements that blatantly favor minorities in education or in their careers were among the least popular (for both minority and male participants). We aim to focus on the most popular measurements. Participants also proposed two additional measurements that did not appear in our pre-selected list:

* Share success stories from minorities.
* Have a few preparatory weeks for programming before starting the first semester in CS.

Figure 13: Each row corresponds to a different measurement for improving gender balance in CS. The length of each row represents the number of participants that agree with the corresponding measurement. Dark blue represents female or non-binary participants (F/NB); yellow represents male participants (M).

## 4 BBI recommendations for the near future

Throughout this report we presented a variety of action items for improving gender balance in our CS department. We have motivated these action items using the findings from the discussions with the participants. We now summarize those actions that BBI recommends for the near future. These actions aim to achieve a balanced flow of students into CS studies, a balanced environment within the CS department, and a balanced flow towards CS academia. Figure 14 gives a schematic representation of these three categories of actions, which are listed below.

**Action items for a balanced flow of students into CS.**

* Student ambassadors of all genders to present CS to high school and middle school students.
* Highlight the problem-solving aspect of CS, the flexible and interesting program, and job opportunities. CS is something that everyone can do.
**Action items for a balanced CS study environment.**

* Organize social events where senior students offer tips and share experiences and success stories with junior students.
* Have mandatory videos with examples of unacceptable behaviors (e.g., inappropriate jokes, stalking, etc.).
* Advise faculty members to ensure that lectures and assignments accommodate a variety of interests.
* Advise faculty members to ensure that colloquiums are not transformed into boys' clubs.
* Increase the number of female TAs.
* Explore the opportunity to have a few preparatory weeks for programming before starting the first semester in CS.

**Action items for a balanced flow of candidates into CS academia.**

* Hold events that clearly describe the academic career and the PhD program in CS.
* Advise faculty members to move PhD applications earlier in the academic year.

Figure 14: We propose action items for (i) a gender-balanced flow of students from the school to CS studies, (ii) a gender-balanced student environment in our department, and (iii) a gender-balanced flow of graduates from the CS studies to the CS PhD program, and eventually CS academia.

## 5 Related Work

The discussion sessions with the students helped us identify action items to achieve (i) a balanced flow of students into CS studies, (ii) a balanced environment within the CS department, and (iii) a balanced flow towards CS academia (i.e., PhD and beyond). This section discusses how prior work tackles these three aspects separately. For an extensive overview of initiatives for improving gender balance in CS, we refer the reader to Jaccheri et al. [14]. Our discussion participants highlighted in Figure 13 that we need to "deal with the issue [of gender balance] at a younger age". A recent aggregated study [13] collects 22 measures and strategies for CS educators in secondary education to sustain the interest of female students in CS classes.
Our action items for a balanced flow of students into CS are aligned with the proposed measurements in [13] that aim to demolish stereotypes. Our observations are also aligned with a study [9] concluding that family and friends have a high impact on the decision of girls to study CS, that courses for CS should be introduced earlier in school, and that one should highlight the problem-solving aspect of CS and surface female role models. A successful strategy for increasing the percentage of female CS undergraduate students at CMU (close to 50% of the students entering the CS program in 2018 were female) is presented in [12]. Two points of this strategy are that the curriculum does not have to change to become more "female-friendly", and that it is important to promote cultural changes within the institution (e.g., create entry-level courses for students with no prior programming experience, increase the visibility of women, break stereotypes). These points address two main reasons [23] for the low enrollment of female students in CS programs: "bias in early socialization" and "anxiety towards technology". A more recent paper [22] refines those reasons into three categories: social (e.g., stereotypes), educational (e.g., unattractive CS learning environment), and labor market (e.g., unattractive jobs). The authors then present ways to address those reasons, by communicating different perspectives of CS and engaging female students in various CS experiences. Understanding the culture within the study environment of a CS department is a prerequisite for decreasing the gender gap. CMU employed student interviews [11] to inform its strategy for better gender balance. Margolis et al. [18] investigate how the interest of female students in their CS studies might decline and eventually lead to drop-out. Rosenstein et al.
[21] report that, within a sample of 200 students, "57% were found to exhibit frequent feelings of the Impostor Phenomenon with a larger fraction of women (71%) experiencing frequent feelings of the Imposter Phenomenon than men (52%)". Miller et al. [19] focus on students with minoritized identities of sexuality and/or gender (MIoSG) in STEM, and conclude that these students are enduring a "dude culture" that fosters hypermasculinity and suppresses discourses related to sexual orientations other than heterosexuality. On the positive side, interviewing STEM students, Rainey et al. [20] conclude that active teaching may improve the sense of belonging for underrepresented students. Finally, Lagesen [15] interviews Malaysian female students, who form around 50% of the student body, to see how their perception of CS differs from that in Western culture. A more limited number of studies have been devoted to fostering a gender-balanced flow of students towards PhD and beyond. For example, Moreno et al. [10] interview doctoral CS students on the reasons that led them to apply to a PhD program. The authors identified five main reasons: academic career goal, professional development, career change, employment opportunity, and personal fulfillment. Personal fulfillment was the most popular reason given.

## 6 Conclusion

To understand how the gender balance in our CS department can be improved, we organized discussion sessions among CS undergraduate students, who shared their thoughts about: the reasons they picked CS for their studies, their current experience with their CS studies, their intention to pursue an academic career in CS, and ways to make the CS community more diverse and inclusive. From their answers we identified action items for achieving a balanced flow of students into the CS undergraduate program, a study environment that embraces diversity, and a balanced flow of students into higher levels of CS academia.
After the completion of the discussion sessions, the students were able to submit their feedback. We were pleased to see that they enjoyed the discussion and thought that the questions we asked were important. The participants also appreciated our effort to use neutral and non-offensive language for the questions and the discussion that they triggered.

## Acknowledgements

We would like to thank Lilli Mittner for recommending Ardna for holding the discussion sessions and for giving inspiration for the discussion questions. Lynn Nygaard gave us inspiration for these questions, too. We also thank Melina Duarte for providing network support for BBI within UiT, and Ingeborg Owesen for providing network support for BBI within BalanseHub. Finally, we are grateful to the administration of the CS department and the members of BBI for their help in organizing the discussion sessions. This work has been partially supported by the COST Action CA19122, from the European Network for Gender Balance in Informatics, and by the NFR grant 321075 for BBI.
2302.14600
Towards Human-Bot Collaborative Software Architecting with ChatGPT
Architecting software-intensive systems can be a complex process. It deals with the daunting tasks of unifying stakeholders' perspectives, designers' intellect, tool-based automation, pattern-driven reuse, and so on, to sketch a blueprint that guides software implementation and evaluation. Despite its benefits, architecture-centric software engineering (ACSE) inherits a multitude of challenges. ACSE challenges could stem from a lack of standardized processes, socio-technical limitations, and scarcity of human expertise etc. that can impede the development of existing and emergent classes of software (e.g., IoTs, blockchain, quantum systems). Software Development Bots (DevBots) trained on large language models can help synergise architects' knowledge with artificially intelligent decision support to enable rapid architecting in a human-bot collaborative ACSE. An emerging solution to enable this collaboration is ChatGPT, a disruptive technology not primarily introduced for software engineering, but is capable of articulating and refining architectural artifacts based on natural language processing. We detail a case study that involves collaboration between a novice software architect and ChatGPT for architectural analysis, synthesis, and evaluation of a services-driven software application. Preliminary results indicate that ChatGPT can mimic an architect's role to support and often lead ACSE, however; it requires human oversight and decision support for collaborative architecting. Future research focuses on harnessing empirical evidence about architects' productivity and exploring socio-technical aspects of architecting with ChatGPT to tackle emerging and futuristic challenges of ACSE.
Aakash Ahmad, Muhammad Waseem, Peng Liang, Mahdi Fehmideh, Mst Shamima Aktar, Tommi Mikkonen
2023-02-26T16:32:16Z
http://arxiv.org/abs/2302.14600v1
# Towards Human-Bot Collaborative Software Architecting with ChatGPT

###### Abstract

Architecting software-intensive systems can be a complex process. It deals with the daunting tasks of unifying stakeholders' perspectives, designers' intellect, tool-based automation, pattern-driven reuse, and so on, to sketch a blueprint that guides software implementation and evaluation. Despite its benefits, architecture-centric software engineering (ACSE) inherits a multitude of challenges. ACSE challenges could stem from a lack of standardized processes, socio-technical limitations, and scarcity of human expertise etc. that can impede the development of existing and emergent classes of software (e.g., IoTs, blockchain, quantum systems). Software Development Bots (DevBots) trained on large language models can help synergise architects' knowledge with artificially intelligent decision support to enable rapid architecting in a human-bot collaborative ACSE. An emerging solution to enable this collaboration is ChatGPT, a disruptive technology not primarily introduced for software engineering, but capable of articulating and refining architectural artifacts based on natural language processing. We detail a case study that involves collaboration between a novice software architect and ChatGPT for architectural analysis, synthesis, and evaluation of a services-driven software application. Preliminary results indicate that ChatGPT can mimic an architect's role to support and often lead ACSE; however, it requires human oversight and decision support for collaborative architecting. Future research focuses on harnessing empirical evidence about architects' productivity and exploring socio-technical aspects of architecting with ChatGPT to tackle emerging and futuristic challenges of ACSE.
**Keywords:** Software Architecture, ChatGPT, Large Language Models, DevBots

## I Introduction

Architecture of software-intensive systems enables architects to specify structural composition, express behavioural constraints, and rationalise design decisions - hiding implementation complexities with architectural components - to sketch a blueprint for software implementation [1]. Architecture-centric Software Engineering (ACSE) aims to exploit architectural knowledge (e.g., tactics and patterns), architectural languages, tools, and architects' decisions (human intellect) etc. to create a model that drives the implementation, validation, and maintenance phases of software systems [2]. In recent years, ACSE has been applied to investigate the role of architecture in engineering complex and emergent classes of software (blockchains, quantum systems etc.) [3] and it has proven useful for systematising software development in an industrial context [2]. Despite its potential, ACSE entails a multitude of challenges including but not limited to mapping stakeholders' perspectives to architectural requirements, managing architectural drift, erosion, and technical debts, or lack of automation and architects' expertise in developing complex and emergent classes of software [1, 3]. In such a context, software engineers may enter a phase referred to as the _'lonesome architect'_, who requires non-intrusive support rooted in processes and tools to address the challenges of ACSE by reusing knowledge and exploiting decision support in the process [4].

**Context and motivation**: The process to architect software applications and services (a.k.a. the 'architecting process') unifies a number of architecting activities that support an incremental, process-centric, and systematic approach to apply ACSE in software development endeavours [2, 3].
Empiricism remains fundamental to deriving and/or utilising architecting processes that can support activities such as the analysis, synthesis, and evaluation of software architectures [4]. To enrich the architecting process and empower the role of architects, research and development has focused on incorporating patterns and styles (knowledge), recommender systems (intelligence), and distributed architecting (collaboration) in the ACSE process. The role of artificial intelligence (AI) in software engineering (SE) is an active area of research that aims to synergise solutions of AI and practices of SE to instill intelligence in the processes and tools for software development [5, 6]. From an ACSE perspective, research on AI generally aims to develop decision support systems or development bots that can assist architects with recommendations about design decisions, selection of patterns and styles, or predict points of architectural failure and degradation [7, 8]. Currently, there is no research that proposes innovative solutions that can enrich the architecting process with AI to enable collaborative architecting. Collaborative architecting can synergise architects' knowledge as human intellect with the bot's capability as an intelligent agent that can lead the architecting process based on human conversation and supervision. Such collaboration can allow architects to delegate their architecting tasks to the bot, supervise the bot via dialog in natural language(s) to achieve automation, and relieve architects from undertaking tedious tasks in ACSE.

**Objective of the study**: Chat Generative Pre-trained Transformer (ChatGPT) has emerged as a disruptive technology, representing an unprecedented example of a bot, that can engage with humans in context-preserved conversations to produce well-articulated responses to complex queries [9, 10].
ChatGPT is not specifically developed to address software engineering challenges; however, it is well capable of generating versatile textual specifications including architectural requirements, UML scripts, source code libraries, and test cases [11, 12]. Recently published research has started to explore the role of ChatGPT in engineering education, software testing, and source code generation [10, 12]. Considering ACSE that can benefit from intelligent and automated architecting, driven by architects' conversational dialogs and feedback, there is no research to investigate the role that ChatGPT can play as a DevBot in the architecting process. To this end, our study focused on a preliminary investigation to understand _if ChatGPT can process an architecture story (scenario(s)) conversed to it by an architect and undertake architecting activities to analyse, synthesise, and evaluate software architecture in a human-bot collaborative architecting_.

**Contributions**: We followed a process-centric approach [2] and adopted a scenario-based method [13] for ChatGPT-enabled architectural analysis, synthesis, and evaluation of a microservices-driven software application. Preliminary results demonstrate ChatGPT's capabilities that include but are not limited to processing an architecture story (conversed to it by an architect) for articulating architectural requirements, specifying models, recommending and applying architectural tactics and patterns, and developing scenarios for architecture evaluation. Primary contributions of this study are to:

* Investigate the potential for human-bot collaborative architecting, synergizing ChatGPT's outputs and architects' decisions, to automate ACSE with a preliminary case study.
* Identify the potential and perils of ChatGPT-assisted ACSE to pinpoint issues concerning ethics, governance, and socio-technical constraints of collaborative architecting.
* Establish foundations for empirically-grounded evidence about ChatGPT's capabilities and architects' productivity in collaborative architecting (ongoing and future work).

The results of this study can help academic researchers to formulate new hypotheses about the role of ChatGPT in ACSE and investigate human-bot collaborative architecting of emergent and futuristic software. Practitioners can follow the presented guidelines to experiment with delegating their tedious tasks of ACSE to ChatGPT.

## II Research Context and Method

We next contextualize some core concepts (Section II-A, Figure 1) and discuss the research method (Section II-B, Figure 2). Terms and concepts introduced here will be used throughout the paper.

### _Human-Bot Collaborative Architecting_

**Software Architecture**, as described in the ISO/IEC/IEEE 42010:2011 standard, aims to abstract complexities rooted in source code-based implementations with architectural components and connectors that represent a blueprint of software applications, services, and systems to be developed [1]. Architecture-centric approaches have proven to be useful in academic solutions as well as in industrial projects by lending architectural knowledge, such as patterns, styles, languages, and frameworks, to design and develop software effectively and efficiently [4]. To enable software designers and architects with a systematic and incremental design of software architectures, there is a need for an **architecting process** - also referred to as the process for architecting software [2, 3]. An architecting process can comprise a number of fine-grained architecting activities that support a separation of architectural concerns in ACSE. For example, the architecting process reported in [2] and illustrated in Figure 1 is derived from five industrial projects and incorporates three architecting activities, namely _architectural analysis_, _architectural synthesis_, and _architectural evaluation_.
For instance, the architectural evaluation activity in the process focuses on scenarios to evaluate the designed architecture [13]. In the architecting process, an architect can extract and document the requirements that express the required functionality and desired quality of the software, referred to as Architecturally Significant Requirements (ASRs). ASRs need to be mapped to source code implementations via an architectural model that can be visualized or textually specified using architectural languages, such as the Unified Modeling Language (UML) or Architectural Description Languages (ADLs) [14]. Architecture models that reflect the ASRs need to be evaluated using an architecture evaluation method, such as the Software Architecture Analysis Method (SAAM) or the Architecture Tradeoff Analysis Method (ATAM) [13].

Fig. 1: Context: LLMs, DevBots, Process, and Architecture

**Software Development Bots** (DevBots) represent conversational agents or recommender systems, driven by AI, to assist software engineers by offering a certain degree of automation and/or inducing intelligence in the software engineering process [7]. From the software architecting perspective, the role of AI in general and of DevBots specifically is limited to bots answering questions or providing recommendations about architectural erosion and maintenance [8]. There is no research that investigates, or any solution that demonstrates, an architecting process incorporating DevBots to enable human-bot collaborative architecting of software systems. Such a collaboration can enrich the architecting process so that it goes beyond questions & answers and recommendations, and synergizes architects' intellect (human rationale) and the bot's intelligence (automated architecting process) in ACSE.
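This kind of human-bot collaboration, where the architect's supervision steers a bot that leads artifact creation, can be pictured as an iterative refinement loop. The following is a minimal sketch (Python; `bot_propose` and `architect_review` are illustrative stand-ins for ChatGPT prompting and human review, not real APIs, and `bot_propose` is assumed to take the story plus optional feedback):

```python
def collaborative_architecting(story, bot_propose, architect_review, max_rounds=5):
    """Human-bot loop: the bot drafts architectural artifacts from the story;
    the architect reviews them and either approves or feeds back refinements."""
    artifacts = bot_propose(story)                 # bot leads: initial draft
    for _ in range(max_rounds):
        feedback = architect_review(artifacts)     # human oversight step
        if not feedback:                           # no feedback means approved
            break
        artifacts = bot_propose(story, feedback)   # bot revises under feedback
    return artifacts
```

The loop terminates either on architect approval or after a bounded number of rounds, reflecting that the bot drives the work while the human retains decision authority.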
Collaborative architecting can empower novice designers or architects who lack experience or professional expertise: they can specify their requirements in natural language, and DevBots can translate them into ASRs, architectural models, and evaluation scenarios. As illustrated in Figure 1, the emergence of ChatGPT as a conversational bot, based on large language models (LLM), can dialog with the architect to lead the creation of architectural artifacts with human supervision.

### _Research Method_

We now present the overall methodology for the research, comprising three main phases, as illustrated in Figure 2.

**Phase 1 - Developing the Architecture Story**: A software architecture story refers to a textual narration of the envisaged solution, i.e., the software to be developed, expressing the core functionality, desired quality (i.e., ASRs), and any constraints in a natural language. The story is developed based on analyzing the software domain that represents an operational context of the system or a collection of scenarios operationalised via a software solution. The architect can analyze the domain and identify scenarios to write an architecture story that acts as a foundation for the architecting process. The architecture story is fed to ChatGPT via a prompt as a pre-process to collaborative architecting.

**Phase 2 - Enabling Collaborative Architecting** is based on three activities adopted from [2], detailed below.

* _Architectural analysis_ is driven by the architecture story fed to ChatGPT for articulating the ASRs via (i) automatically generated and recommended requirements (by ChatGPT), or (ii) manual specification of the requirements (by the architect), or (iii) a continuous dialog between ChatGPT and the architect to refine (add/remove/update) the requirements.
* _Architectural synthesis_ consolidates the ASRs to create an architecture model or representation that can act as a point of reference, visualizing the structural (de-)composition and runtime scenarios for the software. We preferred UML for architectural synthesis due to a number of factors, such as available documentation, ease of use, diversity of diagrams, tool support, and wide-scale adoption as a language to represent software systems [14]. During synthesis we also incorporated reuse knowledge and best practices in the form of tactics and patterns to refine the architecture.
* _Architectural evaluation_ evaluates the synthesized architecture against the ASRs based on scenarios from the architectural story. Architectural evaluation is conducted incrementally for full or partial validation of the architecture or its parts based on use cases or scenarios from the ASRs. We used the Software Architecture Analysis Method (SAAM) to supervise ChatGPT for evaluating the architecture [13].

**Phase 3 - Conducting the Empirical Validations** complements the initial two phases with empirical validations of collaborative architecting as an extension of this study, outlining future work. The existing scope aims to explore and present the role of ChatGPT in human-bot collaborative software architecting (in Section III). Future work on empirically grounded guidelines to understand a multitude of socio-technical issues associated with ChatGPT-driven collaborative architecting is discussed later (in Section V).

## III Case Study on Collaborative Architecting

This section details the process of collaborative architecting demonstrated with a case study for scenario-based exemplification and illustrations (see Figure 3). The **case study** detailed in [15] aims to develop a software application named CampusBike that can be used via a browser or as an app, allowing campus visitors to 'register', 'view available bikes', 'reserve a bike', 'make payments', and 'view usage reports' etc.
for eco-friendly mobility in and around the campus. The **architect** has a working knowledge of software design (UML, patterns etc.) and implementation (programming and scripting languages) and is considered a motivated novice engineer with the responsibility to design and develop the CampusBike software.

### _Formulating the Architecture Story_

An architecture story refers to a textual narration of the envisaged solution, i.e., the software to be developed, expressing the core functionality and any constraints narrated in a natural language. As per the methodological details in Figure 2, the story is developed based on analysing the software domain that represents an operational context of the system or a collection of scenarios operationalized via a software solution. The architect can analyse the domain and identify any scenarios to write an architecture story, fed to ChatGPT, that sets the foundation for the architectural analysis activity in the process. The detailed architecture story is available at [15], with its sample snippet and two scenarios highlighted below.

Fig. 2: Overview of the Research Method

### _Architectural Analysis_

Once the architecture story is fed to ChatGPT, during architectural analysis, the focus is to specify ASRs as required functionality (e.g., view available bikes) and desired quality (e.g., response time < N) along with any constraints (e.g., compliance with relevant data security policies) of the CampusBike software. ChatGPT is capable of outlining the ASRs or any necessary constraints if queried by the architect. However, as per the case study, ChatGPT expressed the ASRs and constraints that were refined (add, remove, and modify any requirements) by the architect. For example, the 'Reserve Bike' requirement articulated by ChatGPT read as: '... _system must allow user to view bikes available nearby and enable reservation of the bike instantly and securely_'.
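A requirement of this kind bundles required functionality, desired quality, and constraints into one ASR; one minimal way to structure such a record is sketched below (Python; the dataclass, its field names, and the 500 ms quality figure are illustrative assumptions, not artifacts from the case study):

```python
from dataclasses import dataclass, field

@dataclass
class ASR:
    """An Architecturally Significant Requirement: required functionality,
    desired quality attributes, and any constraints, bundled together."""
    name: str
    functionality: str
    quality: dict = field(default_factory=dict)
    constraints: list = field(default_factory=list)

# The 'Reserve Bike' requirement expressed as an ASR (quality figure
# and constraint wording are made up for illustration):
reserve_bike = ASR(
    name="Reserve Bike",
    functionality="view bikes available nearby and reserve one instantly",
    quality={"response_time_ms": 500},
    constraints=["reservation must be handled securely"],
)
```

Structuring ASRs this way makes the functionality/quality/constraint split explicit, which is what the subsequent refinement dialog operates on.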
The architect refined the requirements accordingly. After narrating the architecture story, Figure 4 shows the architect's query and ChatGPT's response (human-bot collaboration) to specify the functionality, quality, and constraints, collectively referred to as the ASRs. The ASRs are iteratively refined via a dialog between the two to produce a final list presented here [15].

### _Architectural Synthesis_

The ASRs are synthesized into an architectural model that can be expressed with an architectural (modeling) language, like UML or other architectural languages [14]. We used UML class and component diagrams to create the architecture model: specifically, component diagrams to represent the overall architecture, and a class diagram for a fine-grained representation of the architectural design. During synthesis, we refined the UML class diagram with the application of the singleton pattern to the 'UserLogin' class to restrict a single login session across the devices. We applied the caching tactic on 'ViewBikes' and a data minimization constraint on 'User Location'. Figure 5 shows the architect's instruction for ChatGPT to create the script for the UML class diagram. Additional dialog between the two enabled the application of the singleton pattern, caching tactic, and data minimisation constraint on the class diagram, presented in [15].

### _Architectural Evaluation_

Once synthesized (Figure 4), the architecture needs to be evaluated to assess if it satisfies the ASRs and the constraints (Figure 5). We have used the SAAM method [13] to evaluate the synthesized architecture, as illustrated in Figure 6. For example, the architect specifies the application of SAAM to evaluate the 'View Bike' component. ChatGPT presents the scenario for evaluating the 'View Bike' component individually and also scenarios where it interacts with other components.
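The singleton pattern and caching tactic applied during synthesis can be sketched in code. The following is a minimal illustration (Python; the class and method names mirror the case study's components, but the bodies are assumptions, not the generated artifacts from [15]):

```python
class UserLogin:
    """Singleton pattern: one shared login object, so only a single
    session can be active across all of a user's devices."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.active_session = None
        return cls._instance

    def login(self, user, device):
        # Opening a new session replaces any session on another device.
        self.active_session = (user, device)


class ViewBikes:
    """Caching tactic: reuse the last fetched bike list instead of
    hitting the backing source on every view."""
    def __init__(self, fetch):
        self._fetch = fetch      # callable returning the current bike list
        self._cache = None

    def available(self, refresh=False):
        if self._cache is None or refresh:
            self._cache = self._fetch()
        return self._cache
```

The singleton guarantees that every part of the application sees the same session object, which is what enforces the single-session constraint; the cache trades freshness for fewer queries, with an explicit `refresh` escape hatch.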
Based on the interaction of individual and interacting scenarios, an evaluation report is produced that shows the evaluation of the functionality, quality, and constraints of the CampusBike architecture.

Fig. 3: Overview of the Human-Bot Collaborative Architecting Process

Fig. 4: Formulating and Refining the Requirements

Fig. 5: Modeling and Refining the Architecture Design

## IV Related Work

We discuss the most relevant existing research that overviews the application of AI in SE and ACSE (Section IV-A), and the role of ChatGPT in software development (Section IV-B).

### _AI in Software Engineering and Architecting_

The research on synergizing AI and SE can be classified into two distinct dimensions, namely AI for SE (artificial intelligence in software engineering) and SE for AI (software engineering for artificial intelligence) [5, 6]. Considering the AI for SE perspective, Xie [5] argued that SE research needs to go beyond traditional efforts of applying AI for tool-based automation and pattern selection with an exploration of methods that instil intelligence in software engineering processes and solutions. Specifically, SE solutions need to maintain the so-called 'intelligence equilibrium' - i.e., unifying and balancing machine intelligence and human intellect - in processes, patterns, and tools etc. for emergent classes of software, such as blockchain and quantum applications [16]. Barenkamp _et al._ [6] combined the findings of a systematic review and interviews with software developers to investigate the role of AI techniques in SE processes. The results of their study pinpoint three areas where SE needs intelligence to tackle (i) automation of tedious and complex SE activities such as code debugging, (ii) big data analytics to discover patterns, and (iii) evaluation of data in neural and software-defined networks.
Considering the context of AI in software architecting, Herold _et al._ [8] investigated existing research and proposed a conceptual framework for the application of machine learning to mitigate architecture degradation.

### _ChatGPT Assisted Software Engineering_

From the SE perspective, ChatGPT is viewed as an unprecedented example of a chatbot that can produce well-articulated responses to complex queries. However, it remains an unexplored territory in terms of its potential and perils in the context of software development processes [17, 18]. Most recently, a number of proposals and experimental findings indicate that the research on ChatGPT focuses on supporting engineering education [11, 10], software programming [18, 9], and testing [12]. Avila-Chauvert _et al._ [9] detailed how conversational dialogs of a programmer with ChatGPT enable a human-bot assisted development of an online behavioral task using HTML, CSS, and JavaScript source code. They highlighted that although ChatGPT requires human oversight and intervention, it can write well-scripted programming solutions and reduces the time and effort of a developer during programming. A similar narrative in a blogpost [18] advocated for an incremental process (human dialog with ChatGPT) to enable genetic programming - JavaScript code to solve the traveling salesman problem. In addition to developing the source code, a couple of studies have focused on testing and debugging with ChatGPT [11, 12]. Sobania _et al._ [12] evaluated the performance of ChatGPT in automated bug fixing. In contrast to the status quo on automated techniques for bug fixing [7], ChatGPT offers a dialogue with a software tester for an incremental identification and fixing of bugs.

**Conclusive summary**: Based on a review of the existing literature, there is no research or development that explores the role of ChatGPT (LLM-driven AI) as a bot that can engage software engineers in conversational dialogs to lead and support ACSE.
This study complements the most recent research efforts on software test automation and bug fixing with ChatGPT [12] but focuses on architecture-centric development for software systems. In the broader context of AI for SE [5], this study argues for human-bot collaborative architecting that can enrich the ACSE process with the architects' knowledge and supervision, synergized with the bot's capabilities to architect software-intensive systems and services. ## V Discussion and Validity Threats We discuss the socio-technical aspects of collaborative architecting (Section V-A) and highlight potential threats to validity (Section V-B). ### _Socio-Technical Issues of ChatGPT in ACSE_ In addition to highlighting ChatGPT's potential, we also highlight some perils, i.e., shortcomings of the collaborative architecting process, that need to be discussed in the context of socio-technical aspects. By socio-technical aspects, we refer to a unified perspective on issues such as _what can be 'social' concerns_ and _what are the 'technical' limitations_ of collaborative architecting. Dedicated research is required to systematically investigate such issues; here we only pinpoint several prominent ones, as below. **Response Variation**: In the context of human-bot conversational dialogs, ChatGPT may produce varied responses for the exact same queries. For example, we observed that a query such as '_... what architectural style can be best suited to CampusBike system_' may yield varied responses, suggesting that, e.g., a microservices, layered, or client-server architecture can be best suited for the system. This and similar variation in recommendations or scripted artifacts (UML script, ASR specification etc.) can impact the consistency of the architecting process and ultimately lead to varied analysis, synthesis, and evaluation of the architecture.
One of the solutions to minimize response variations is an iterative dialog with ChatGPT to refine its output, along with architects' oversight to ensure that the architectural artifacts being produced are consistent and coherent. **Ethics and Intellectual Property**: Textual specifications, architecture-specific scripts, and source codes etc. articulated by ChatGPT could give rise to ethical issues or, in some cases, copyright or intellectual property infringements. For example, a ChatGPT-generated script for a component (getLocation) that senses user location in the CampusBike system may lead to leakage of user location privacy and software that is non-compliant with regulatory guidelines (GDPR, CCPA etc.), which must be dealt with vigilance. In such cases, the role of the architect is critical to ensure the generated architecture does not violate ethics or intellectual property rights (if any).

Fig. 6: Evaluating the Architecture

**Biased Outputs**: The biases in outputs of such conversational bots can be attributed to a number of possible aspects including but not limited to input, training data, and/or algorithmic bias. From an architectural perspective, recommendation bias about a specific architectural modeling notation, tactic, pattern, or style etc. may be based on its widespread adoption or bias in training data rather than optimal use in a specific context. Moreover, architectural recommendations (specific style), design decisions (pattern selection), or validation scenarios (evaluation method) may suffer such bias to produce sub-optimal artifacts in ACSE. ### _Threats to the Validity_ Validity threats represent limitations, constraints, or potential flaws in the study that can affect the generalization, replicability, and validity of results. Future work can focus on minimizing these threats to ensure methodological rigor and generalization of results. **Internal validity** examines the extent to which any systematic error (bias) is present in the design, conduct, and analysis etc.
of the study. To design and conduct this study, and considering the internal validity, we followed a systematic approach and utilized a well-known architecting process [2] and architecture evaluation method [13]. The case study based approach combined with incremental architecting (Figure 3) helped us to analyze and refine the study; however, more work is required to understand if the study can be validated with a different architecting process or by adopting other evaluation methods. **External validity** examines whether the findings of a study can be generalized to other contexts. We only experimented with a single case study of moderate complexity, which can compromise the study's generalization. Specifically, scenarios with increased complexity of the architecting process (cross-organisational development), class of software to be developed (mission-critical software), and human expertise (novice/experienced engineers) can affect the external validity of this research. Future work is planned, highlighted in the conclusions section, to validate the process of collaborative architecting by engaging architecting teams and analyzing their feedback to understand the extent to which threats to external validity can be minimized. **Conclusion validity** determines the degree to which the conclusions reached by the study are credible or believable. In order to minimize this threat, we followed a three-step process (Figure 2) to support fine-grained architecting of the software and validation of the results (future work). Moreover, a case study based approach was adopted to ensure scenario-based demonstration of the study results. However, some conclusions (e.g., architect's productivity, ChatGPT's efficacy) can only be validated with more experimentation involving multiple case studies, diverse teams, and real context scenarios of collaborative architecting.
## VI Conclusions and Future Research ChatGPT has emerged as a disruptive technology, an unprecedented conversational bot, that mimics human conversations and generates well-articulated textual artifacts (recommendations, scripts, source codes etc.) - often referred to as a 'solution that seeks a problem'. Among a plethora of its use cases that range from content creation to digital assistance and acting as a virtual teacher etc., ChatGPT's role as a DevBot and its capability to architect software-intensive systems remain unexplored. This research investigates the potential and perils of ChatGPT to assist and empower the role of an architect who leads the process of architecting, and to collaborate with a human to enable ACSE. The research advocates that in the context of AI for SE, traditional efforts of applying AI for tool-based automation should adopt a broader perspective, i.e., enriching existing processes by instilling intelligence in them via efforts like human-bot collaborative architecting. The case study reflects a practical case of _how can software be architected with ChatGPT?_ and _what factors need to be considered in collaborative architecting?_ Variance in responses and artifacts, types of ethical implications, level of human decision support/supervision, along with legal and socio-technical issues must be considered while integrating ChatGPT in SE or ACSE processes. The research needs empirical validations, grounded in evidence and experimentation, to objectively assess factors like enhancing engineers' productivity, SE process optimization, and assisting novice developers and designers to engineer complex and emergent classes of software effectively with ChatGPT. **Needs for future research**: We plan to extend this study as a stream of research that explores human feedback and validation (i.e., architects' perspective) and integrates ChatGPT in a process to develop software services for quantum computing systems.
More specifically, quantum software engineering has emerged as a genre of SE that faces a lack of human expertise to synergize the skills of engineering software with knowledge of quantum physics. We are currently working on engaging a number of software development teams with diverse demographic attributes (e.g., geo-distribution, type of expertise, level of experience, class of software system) in controlled experiments to architect software systems using ChatGPT and document architects' responses. Specifically, a case study that involves ChatGPT-assisted architecting will allow us to capture feedback of architects via interviews or documents to empirically investigate aspects like usefulness, rigor, acceptance, impact on human productivity, and potential perils of ChatGPT in ACSE.
2303.17020
Mesoscopic Spectral CLT for Block Correlated Random Matrices
For random matrices with block correlation structure we show that the fluctuations of linear eigenvalue statistics are Gaussian on all mesoscopic scales with universal variance which coincides with that of the Gaussian unitary or Gaussian orthogonal ensemble, depending on the symmetry class of the model. The main tool used for determining this variance is a two-point version of the matrix-valued Dyson equation, that encodes the asymptotic behavior of the product of resolvents at different spectral parameters.
Torben Krüger, Yuriy Nemish
2023-03-29T21:03:17Z
http://arxiv.org/abs/2303.17020v2
# Mesoscopic Spectral CLT for Block Correlated Random Matrices ###### Abstract For random matrices with block correlation structure we show that the fluctuations of linear eigenvalue statistics are Gaussian on all mesoscopic scales with universal variance that coincides with that of the Gaussian unitary or Gaussian orthogonal ensemble, depending on the symmetry class of the model. Block correlations appear in linearizations of non-commutative polynomials and rational functions in several random matrices. This is used in the companion paper [43] to prove the mesoscopic central limit theorem for such models. _Keywords: Mesoscopic Central Limit Theorem, Hermitian random matrix, linear matrix pencil_ **AMS Subject Classification: 60B20, 15B52** ## 1 Introduction The eigenvalues of Hermitian random matrices form a strongly correlated point process on the real line. To gain insights into the underlying correlations we study linear statistics of this process. More generally, for \(N\) real random variables \(\lambda_{1},\ldots,\lambda_{N}\) their linear statistics is \(L_{N}(f):=\sum_{i=1}^{N}f(\lambda_{i})\) for some test function \(f\). If \(\frac{1}{N}L_{N}(f)\) converges as \(N\to\infty\) and becomes nonrandom, we say that the law of large numbers holds. Such results are well known not only in the case when \(\lambda_{1},\ldots,\lambda_{N}\) are independent, but also when they are the eigenvalues of an \(N\times N\) random matrix ensemble. In this case \(\lim_{N\to\infty}\frac{1}{N}L_{N}(f)=\int f(x)\rho(x)dx\), where \(\rho\) is the asymptotic spectral density. The next important question to ask about \(\frac{1}{N}L_{N}(f)\) concerns the size and distribution of its fluctuations around the limit. In the case of independent random variables the central limit theorem (CLT) states that these fluctuations are Gaussian of order \(N^{-1/2}\).
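The contrast between this classical scaling and the random-matrix case discussed next can be previewed numerically. The following sketch (ours, not from the paper, with an arbitrary cubic test function and GUE-type Wigner matrices as an illustrative ensemble) compares how the sample variance of a linear statistic grows with \(N\) for independent points versus Wigner eigenvalues:

```python
import numpy as np

def var_wigner(N, trials, rng):
    """Sample variance of Tr f(H) over GUE-type Wigner matrices H
    normalized so that the spectrum concentrates on [-2, 2]."""
    f = lambda x: x ** 3 - 3 * x
    vals = []
    for _ in range(trials):
        X = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
        H = (X + X.conj().T) / np.sqrt(2)
        vals.append(np.sum(f(np.linalg.eigvalsh(H))))
    return np.var(vals)

def var_iid(N, trials, rng):
    """Sample variance of sum_i f(U_i) for N independent points U_i."""
    f = lambda x: x ** 3 - 3 * x
    return np.var(np.sum(f(rng.uniform(-2, 2, size=(trials, N))), axis=1))

rng = np.random.default_rng(0)
r_wigner = var_wigner(200, 300, rng) / var_wigner(100, 300, rng)
r_iid = var_iid(200, 300, rng) / var_iid(100, 300, rng)
print(r_wigner, r_iid)
```

For independent points the variance doubles when \(N\) doubles, while for Wigner eigenvalues the ratio should stay essentially constant (up to sampling noise), reflecting the strong eigenvalue correlations.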
The strong correlation between the eigenvalues of random matrices, however, typically reduces the fluctuation to the order \(N^{-1}\), while the distribution remains Gaussian. Such results were proved, e.g., for invariant ensembles [40], Wigner matrices with non-Gaussian i.i.d. entries [41, 8], sample covariance matrices [9], ensembles with external source [39], and polynomials of several independent random matrices [14]. The law of large numbers and CLT described above provide information about the eigenvalues on global scales \(\eta=O(1)\). To resolve the eigenvalue distribution on mesoscopic scales \(N^{-1}\ll\eta\leq 1\) above the typical spacing distance, or even microscopic scales of order \(\eta=O(N^{-1})\), the compactly supported test function \(f\) with \(\int f(x)\,dx=1\) is rescaled to capture an \(\eta\)-sized neighbourhood around \(x_{0}\), i.e., \[f_{\eta}(x):=f\bigg{(}\frac{x-x_{0}}{\eta}\bigg{)}\,.\] On mesoscopic scales the linear statistics for \(f_{\eta}\) still involves a large number of \(O(N\eta)\) eigenvalues and thus the law of large numbers now takes the form \(\frac{1}{N\eta}L_{N}(f_{\eta})\to\rho(x_{0})\). Such local laws have been established for a large variety of random matrix models, including Wigner matrices [27], deformed Wigner matrices [48, 35], matrices with variance profiles [1] and correlations [2, 26], invariant ensembles [16], and polynomials in several random matrices [7, 23]. The fluctuation of linear eigenvalue statistics exhibits a striking universality phenomenon on mesoscopic scales. Not only do its order and Gaussian distribution become universal, but also the corresponding variance, namely \[L_{N}(f_{\eta})-\mathbb{E}L_{N}(f_{\eta})\to\mathcal{N}(0,v_{f})\,,\qquad v_{f} =\frac{1}{2\pi^{2}\beta}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\left( \frac{f(x)-f(y)}{x-y}\right)^{2}dxdy\,, \tag{1.1}\] is independent of \(\eta\) and \(x_{0}\).
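The mesoscopic local law \(\frac{1}{N\eta}L_{N}(f_{\eta})\to\rho(x_{0})\) can be illustrated numerically. The sketch below is ours, not from the paper; the choices \(g(x)=e^{-x^{2}}\), \(x_{0}=0\), \(\eta=N^{-0.4}\), and a GUE-type Wigner ensemble are illustrative assumptions:

```python
import numpy as np

def mesoscopic_statistic(N, gamma, x0=0.0, seed=0):
    """Evaluate L_N(f_eta) = sum_i g((lambda_i - x0)/eta), eta = N^{-gamma},
    for a GUE-type Wigner matrix normalized to have spectrum on [-2, 2]."""
    rng = np.random.default_rng(seed)
    X = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
    H = (X + X.conj().T) / np.sqrt(2)            # E|h_ij|^2 = 1/N
    lam = np.linalg.eigvalsh(H)
    eta = N ** (-gamma)
    g = lambda x: np.exp(-x ** 2)                # smooth test function (our choice)
    L = np.sum(g((lam - x0) / eta))
    rho0 = np.sqrt(4 - x0 ** 2) / (2 * np.pi)    # semicircle density at x0
    predicted = N * eta * rho0 * np.sqrt(np.pi)  # local law: N * eta * rho(x0) * int g
    return L, predicted

L, predicted = mesoscopic_statistic(N=1000, gamma=0.4)
print(L / predicted)
```

With these parameters the window contains only \(O(N^{0.6})\) eigenvalues, so a relative deviation of a few percent between the statistic and the local-law prediction is expected.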
The only signature of the underlying random matrix ensemble still present in these fluctuations is its symmetry class \(\beta\), where \(\beta=1\) corresponds to real symmetric and \(\beta=2\) to complex Hermitian matrices. Versions of the mesoscopic CLT (1.1) have been established for a wide class of random matrix models. These include the classical Gaussian ensembles [17, 28], Wigner matrices, for which the CLT has been shown covering an increasing range of mesoscopic regimes [18, 34, 50, 53], invariant ensembles [13, 44], deformed Wigner matrices [49], Dyson Brownian motion [21, 38], band matrices [22], and free sums [11]. The mesoscopic CLT is also used to establish fluctuations of individual eigenvalues [46]. For non-Hermitian random matrices with independent entries and Coulomb gases in dimension two, mesoscopic CLTs have recently been proven in [19] and [12], respectively. While the local law has been established for Hermitian and non-Hermitian models with very general correlation structure among the entries of the underlying random matrix, the current proofs of mesoscopic CLTs either assume independent entries or isotropic randomness in the matrix, which is invariant under either the unitary or orthogonal group. In this work we establish the CLT (1.1) for linear matrix pencils in several random matrices, which do not satisfy either condition. Instead they belong to a class of random matrix ensembles with non-isotropic block correlations among their entries. For matrices \(\mathbf{X}_{0},\mathbf{X}_{1},\ldots,\mathbf{X}_{d}\in\mathbb{C}^{N\times N}\) a linear matrix pencil (LMP) is a linear combination of these matrices with matrix valued coefficients, i.e., a matrix in \(\mathbb{C}^{n\times n}\otimes\mathbb{C}^{N\times N}\) of the form \[\mathbf{H}=\sum_{i=0}^{d}L_{i}\otimes\mathbf{X}_{i}\,. 
\tag{1.2}\] These LMPs find applications in optimization [57], systems engineering [54], theoretical computer science [15] (see, e.g., [42, 29] and references therein for numerous other applications). In this work we consider the case when \(\mathbf{X}_{0}=\mathbf{I}_{N}\) and \(\mathbf{X}_{1},\ldots,\mathbf{X}_{d}\) are Wigner matrices. These random LMPs are of particular interest in the study of the evolution of ecosystems [33] and neural networks [52] (see also [3, 25] and references therein). Moreover, such LMPs appear as linearizations of non-commutative polynomials \(P(\mathbf{X}_{1},\ldots,\mathbf{X}_{d})\) in the Wigner matrices. We use the result of the current work and this fact in the companion paper [43] to establish the mesoscopic CLT for linear eigenvalue statistics of such polynomials. The linearization technique also extends to non-commutative rational functions of random matrices. These are used, e.g., in the study of transport properties through disordered quantum systems, such as quantum dots [24] and, therefore, the corresponding spectral CLTs can be used to determine fluctuations of transport eigenvalues. Linearization techniques have been used extensively in free probability [32, 6, 36, 51]. Various properties of the spectrum of the LMPs of free elements have been established, e.g. in [31, 56, 10]. The study of the CLTs in the context of free probability uses the theory of second order freeness (see, e.g., [20]). The matrix coefficients \(L_{i}\) of the LMP \(\mathbf{H}\) encode the correlation structure of the entries and determine the spectral density via a nonlinear matrix equation, the Matrix Dyson Equation (MDE), whose solution \(M(z)\) is interpreted as the expectation value \(\mathbb{E}[\mathbf{G}(z)]\) of the resolvent \(\mathbf{G}(z):=(\mathbf{H}-z)^{-1}\) in the large \(N\) limit. 
Incorporating this non-trivial structure of the resolvent in the calculation of the fluctuations is one of the main novelties in this work and can be extended to other models with decaying correlations, such as the Kronecker random matrices in [5] or matrices with general decaying correlations in [2, 26]. To keep the presentation simple, however, we do not pursue this direction. Instead we show that the mesoscopic CLT (1.1) with \(N\) replaced by \(nN\) and in the limit \(N\to\infty\) holds for matrices \(\mathbf{H}\) of the form (1.2), where the symmetry indicator \(\beta\in\{1,2\}\) depends on whether the Wigner matrices \(\mathbf{X}_{1},\ldots,\mathbf{X}_{d}\) and their coefficient matrices \(L_{i}\) are real symmetric or complex Hermitian. Our proof relies solely on resolvent methods and does not involve the application of Dyson Brownian motion. This allows us to obtain the CLT on _all_ mesoscopic scales for models with large zero blocks of size \(O(N)\). ## 2 Model and results In this paper we study random matrix models having general block correlation structures with blocks drawn from random matrix ensembles with independent identically distributed (_i.i.d._) entries. **The model.** Fix \(d,n\in\mathbb{N}\). Let \(K_{0},L_{1},\ldots,L_{d}\in\mathbb{C}^{n\times n}\) be deterministic matrices, and let \(\mathbf{X}_{1},\ldots,\mathbf{X}_{d}\in\mathbb{C}^{N\times N}\), \(\mathbf{X}_{\alpha}=\big{(}x_{ij}^{(\alpha)}\big{)}_{i,j=1}^{N}\), \(1\leq\alpha\leq d\), be independent \(N\times N\) random matrices with _i.i.d._ entries.
Consider the random matrix model \(\mathbf{H}^{(\beta)}\in\mathbb{C}^{nN\times nN}=\mathbb{C}^{n\times n}\otimes\mathbb{ C}^{N\times N}\) of the form \[\mathbf{H}^{(\beta)}=K_{0}\otimes\mathbf{I}_{N}+\sum_{\alpha=1}^{d}\Big{(}L_{\alpha} \otimes\mathbf{X}_{\alpha}+L_{\alpha}^{*}\otimes\mathbf{X}_{\alpha}^{*}\Big{)}, \tag{2.1}\] where \(\beta\in\{1,2\}\) denotes the symmetry class with \(\beta=1\) corresponding to the real-symmetric matrices and \(\beta=2\) to the (complex) Hermitian matrices. We distinguish these two classes in our assumptions as follows: * for \(\beta=1\), the structure matrices \(K_{0},L_{1},\ldots,L_{d}\in\mathbb{R}^{n\times n}\) are deterministic real \(n\times n\) matrices, and \(\mathbf{X}_{1},\ldots,\mathbf{X}_{d}\in\mathbb{R}^{N\times N}\) are independent \(N\times N\) real random matrices with _i.i.d._ entries satisfying \[\mathbb{E}\big{[}x_{11}^{(\alpha)}\big{]}=0,\quad\mathbb{E}\big{[}\big{(}x_{1 1}^{(\alpha)}\big{)}^{2}\big{]}=\frac{1}{N}\] (2.2) for all \(1\leq\alpha\leq d\); * for \(\beta=2\), the structure matrices \(K_{0},L_{1},\ldots,L_{d}\in\mathbb{C}^{n\times n}\) are deterministic complex \(n\times n\) matrices, and \(\mathbf{X}_{1},\ldots,\mathbf{X}_{d}\in\mathbb{C}^{N\times N}\) are independent \(N\times N\) complex random matrices with _i.i.d._ entries satisfying \[\mathbb{E}\big{[}x_{11}^{(\alpha)}\big{]}=0,\quad\mathbb{E}\big{[}\big{|}x_{11} ^{(\alpha)}\big{|}^{2}\big{]}=\frac{1}{N}\] (2.3) for all \(1\leq\alpha\leq d\). The dimension \(N\in\mathbb{N}\) of the second tensor factor is the large parameter that tends to infinity, \(\mathbf{I}_{N}\) is the \(N\times N\) identity matrix, and \(\otimes\) denotes the tensor (or Kronecker) product. 
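The construction (2.1) can be sketched directly with Kronecker products. The toy choice below (ours, not from the paper: \(n=2\), \(d=1\), \(K_{0}=0\), \(L_{1}=E_{12}\), \(\beta=2\)) yields a chiral block matrix whose diagonal blocks vanish identically:

```python
import numpy as np

def build_H(K0, Ls, N, seed=0):
    """Assemble H = K0 (x) I_N + sum_a (L_a (x) X_a + L_a^* (x) X_a^*)
    for complex i.i.d. matrices X_a with E x = 0, E|x|^2 = 1/N  (beta = 2)."""
    rng = np.random.default_rng(seed)
    H = np.kron(K0, np.eye(N)).astype(complex)
    for L in Ls:
        X = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
        H += np.kron(L, X) + np.kron(L.conj().T, X.conj().T)
    return H

# toy pencil: n = 2, d = 1, K0 = 0, L1 = E_12 gives H = [[0, X], [X^*, 0]];
# the (1,1) and (2,2) blocks of size N x N are identically zero
N = 50
E12 = np.array([[0.0, 1.0], [0.0, 0.0]])
H = build_H(np.zeros((2, 2)), [E12], N)
print(np.allclose(H, H.conj().T))
print(np.allclose(H[:N, :N], 0), np.allclose(H[N:, N:], 0))
```

The example also illustrates the zero-block phenomenon that motivates the \(L\)-flatness condition introduced below.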
We additionally assume that the entries of \(\mathbf{X}_{\alpha}\) have bounded moments: for each \(p\in\mathbb{N}\), \(p\geq 3\), there exists \(c_{p}>0\) such that \[\max_{1\leq\alpha\leq d}\mathbb{E}\Big{[}\big{|}\sqrt{N}x_{11}^{(\alpha)} \big{|}^{p}\Big{]}\leq c_{p}, \tag{2.4}\] and in the complex Hermitian case (\(\beta=2\)) we assume that \(\operatorname{Re}x_{11}^{(\alpha)}\) and \(\operatorname{Im}x_{11}^{(\alpha)}\) are independent and \[\mathbb{E}\Big{[}\big{(}\operatorname{Re}x_{11}^{(\alpha)}\big{)}^{2}\Big{]}= \frac{1}{2N},\quad\mathbb{E}\Big{[}\big{(}\operatorname{Im}x_{11}^{(\alpha)} \big{)}^{2}\Big{]}=\frac{1}{2N}. \tag{2.5}\] We call \(d,n,K_{0},L_{1},\ldots,L_{d}\) and \(c_{3},c_{4},\ldots\) the _model parameters_. Notice that if \(L_{\alpha}=L_{\alpha}^{*}\) for some \(1\leq\alpha\leq d\), then this gives rise to a term of the form \[L_{\alpha}\otimes\mathbf{X}_{\alpha}+L_{\alpha}^{*}\otimes\mathbf{X}_{\alpha}^{*}= \sqrt{2}L_{\alpha}\otimes\bigg{(}\frac{\mathbf{X}_{\alpha}+\mathbf{X}_{\alpha}^{*}}{ \sqrt{2}}\bigg{)}, \tag{2.6}\] where \((\mathbf{X}_{\alpha}+\mathbf{X}_{\alpha}^{*})/\sqrt{2}\) is a (real or complex) Wigner matrix. Whenever the symmetry class is irrelevant, we will suppress the parameter \(\beta\) in the notation. **Preliminary results about Kronecker random matrices.** The model \(\mathbf{H}\) defined in (2.1) is a special case of the Kronecker random matrices, introduced in [5] to denote a model of the type (2.1) in which the matrices \(\{\mathbf{X}_{\alpha}\}\) are assumed to be independent with independent but not necessarily identically distributed entries. Below we collect several properties of \(\mathbf{H}\) that are direct consequences of the corresponding results for general Kronecker random matrices from [5] and [4]. We start by introducing the _matrix Dyson equation_ (_MDE_), which, among other things, characterizes the large-\(N\) limit of the empirical spectral measure of \(\mathbf{H}\).
In the case of the Kronecker random matrix \(\mathbf{H}\) defined in (2.1), the MDE takes the form \[-\frac{1}{M(z)}=zI_{n}-K_{0}+\Gamma[M(z)] \tag{2.7}\] with unknown \(M(z)\in\mathbb{C}^{n\times n}\), \(z\in\mathbb{C}_{+}:=\{z\in\mathbb{C}\,:\,\operatorname{Im}z>0\}\), \(I_{n}\in\mathbb{C}^{n\times n}\) the identity matrix, and the operator \(\Gamma:\mathbb{C}^{n\times n}\to\mathbb{C}^{n\times n}\) given by \[\Gamma[R]:=\sum_{\alpha=1}^{d}\Big{(}L_{\alpha}RL_{\alpha}^{*}+L_{\alpha}^{*}RL _{\alpha}\Big{)} \tag{2.8}\] for any \(R\in\mathbb{C}^{n\times n}\). We call \(\Gamma\) the _self-energy_ operator. Notice that the operator \(\Gamma\) maps the set of positive semidefinite matrices into itself. Therefore, by [37, Theorem 2.1], for any \(z\in\mathbb{C}_{+}\) there exists a unique solution \(M=M(z)\) to the matrix Dyson equation (2.7) with positive definite imaginary part \(\operatorname{Im}M=\frac{1}{2\mathrm{i}}(M-M^{*})>0\). The function \(M:\mathbb{C}_{+}\to\mathbb{C}^{n\times n}\) is a matrix-valued Herglotz function (see, e.g., [30, Section 5]), depends analytically on \(z\), and admits the representation \[M(z)=\int_{\mathbb{R}}\frac{V(dx)}{x-z}, \tag{2.9}\] where \(V(dx)\) is a (positive semidefinite) matrix-valued measure on \(\mathbb{R}\) with compact support. The Stieltjes transform representation (2.9) readily follows from [4, Proposition 2.1] by taking \(\mathcal{A}=\mathbb{C}^{n\times n}\) and the self-energy operator \(\Gamma\).
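A simple way to solve (2.7) numerically is fixed-point iteration of \(M\mapsto-(zI_{n}-K_{0}+\Gamma[M])^{-1}\), which converges for \(\operatorname{Im}z\) large enough. The sketch below is ours; it checks the toy pencil \(L_{1}=E_{12}\), for which the MDE reduces to the scalar semicircle equation \(-1/m=z+m\):

```python
import numpy as np

def Gamma(M, Ls):
    """Self-energy operator (2.8): Gamma[M] = sum_a (L_a M L_a^* + L_a^* M L_a)."""
    return sum(L @ M @ L.conj().T + L.conj().T @ M @ L for L in Ls)

def solve_mde(K0, Ls, z, tol=1e-12, max_iter=10_000):
    """Fixed-point iteration M <- -(z I - K0 + Gamma[M])^{-1}; no damping,
    so convergence is only guaranteed for Im z sufficiently large."""
    n = K0.shape[0]
    M = -np.eye(n) / z
    for _ in range(max_iter):
        M_new = -np.linalg.inv(z * np.eye(n) - K0 + Gamma(M, Ls))
        if np.linalg.norm(M_new - M) < tol:
            return M_new
        M = M_new
    raise RuntimeError("MDE iteration did not converge")

# toy pencil: n = 2, K0 = 0, L1 = E_12; the MDE reduces to -1/m = z + m,
# whose Herglotz solution is the Stieltjes transform of the semicircle law
E12 = np.array([[0.0, 1.0], [0.0, 0.0]])
z = 2j
M = solve_mde(np.zeros((2, 2)), [E12], z)
m_exact = (-z + np.sqrt(z ** 2 - 4)) / 2
print(M[0, 0], m_exact)
```

The returned \(M\) is diagonal with both entries equal to the semicircle Stieltjes transform, consistent with the representation (2.9).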
In the following we will often assume that \(\Gamma\) satisfies the following \(L\)_-flatness_ property: **(A)**: there exist \(L\in\mathbb{N}\), a matrix \(Z=(z_{kl})_{k,l=1}^{n}\in\{0,1\}^{n\times n}\), and a constant \(C_{\rm flat}>0\) such that \(z_{kk}=1\) for \(1\leq k\leq n\), the matrix \(Z^{L}\) has all entries strictly positive, and for any positive semidefinite matrix \(R=(r_{kl})_{k,l=1}^{n}\in\mathbb{C}^{n\times n}\) \[\Gamma\big{[}R\,\big{]}\geq C_{\rm flat}\cdot\sum_{l=1}^{n}\Big{(}\sum_{k=1}^ {n}z_{kl}r_{kk}\Big{)}E_{ll}. \tag{2.10}\] The special case of the \(L\)-flatness with \(L=1\) (in the literature often simply called the _flatness property_) requires that all entries of \(Z\) are equal to \(1\), in which case the relation (2.10) takes the form \[\Gamma\big{[}R\,\big{]}\geq C_{\rm flat}\operatorname{Tr}\big{(}R\big{)}I_{n}. \tag{2.11}\] The \(L\)-flatness property was first introduced in the context of the matrix Dyson equation in an early arXiv version of [2]. We collect certain important properties of the MDE (2.7) with self-energy satisfying **(A)** in Proposition A.1. If \(\Gamma\) satisfies **(A)**, then it follows from part (i) of Proposition A.1 that \(\|M(z)\|\) is uniformly bounded on \(\mathbb{C}_{+}\). This implies (see, e.g., [30, Lemma 5.4 (vi) and Lemma 5.5 (i)]) that the measure \(V(dx)\) in (2.9) is absolutely continuous with respect to the Lebesgue measure on \(\mathbb{R}\) and its density is given by the inverse Stieltjes transform of \(M(z)\), i.e., \[V(dx)=V(x)dx,\quad\text{where}\quad V(x):=\lim_{y\downarrow 0}\frac{1}{\pi} \operatorname{Im}M(x+{\rm i}\,y). \tag{2.12}\] From part (iv) of Proposition A.1 we know that \(\lim_{y\downarrow 0}M(x+{\rm i}\,y)\) exists for all \(x\in\mathbb{R}\).
Using this we extend the function \(M\) beyond the set \(\mathbb{C}_{+}\) by setting \[M(z):=\left\{\begin{array}{ll}\big{(}M(\overline{z})\big{)}^{*},&z\in \mathbb{C}_{-}:=\{z\in\mathbb{C}\,:\,\operatorname{Im}z<0\},\\ \\ \lim_{y\downarrow 0}M(x+{\rm i}\,y),&z=x\in\mathbb{R}.\end{array}\right. \tag{2.13}\] With the above definition, \(M(z)\) is continuous on \(\mathbb{C}_{+}\cup\mathbb{R}\), and \(\lim_{y\downarrow 0}(M(x+{\rm i}\,y)-M(x-{\rm i}\,y))=2{\rm i}\,\operatorname{Im}M(x)= 2\pi{\rm i}\,V(x)\). We define the empirical spectral measure of \(\mathbf{H}\) by \[\mu_{\mathbf{H}}:=\frac{1}{nN}\sum_{i=1}^{nN}\delta_{\lambda_{i}}, \tag{2.14}\] where \(\lambda_{1},\ldots,\lambda_{nN}\in\mathbb{R}\) are the eigenvalues of \(\mathbf{H}\) counted with multiplicities, and \(\delta_{\lambda}\) denotes the Dirac measure at \(\lambda\in\mathbb{R}\). Assume that the self-energy operator satisfies the \(L\)-flatness property **(A)**. By specializing [5, Theorem 2.7] to \(\mathbf{H}\) and using (2.12) we obtain that, as \(N\to\infty\), the empirical spectral measure \(\mu_{\mathbf{H}}\) converges weakly in probability to \(\rho(x)dx\), where \[\rho(x):=\frac{1}{n}\operatorname{Tr}\big{(}V(x)\big{)}=\lim_{y\to 0}\frac{1}{n \pi}\operatorname{Tr}\big{(}\operatorname{Im}M(x+{\rm i}\,y)\big{)}. \tag{2.15}\] We call the weak convergence \(\mu_{\mathbf{H}}\Rightarrow\rho(x)dx\) the _global law_ for \(\mathbf{H}\), and we call the function \(\rho(x)\) the _(self-consistent) density of states_ for \(\mathbf{H}\). Notice that \(\rho\) depends only on the model parameters. In this paper we determine the fluctuations of the linear spectral statistics of \(\mathbf{H}^{(\beta)}\) for \(\beta\in\{1,2\}\) on _mesoscopic_ scales inside the bulk, i.e., for those \(x\in\mathbb{R}\) for which \(0<\rho(x)<\infty\). The following central limit theorem for linear spectral statistics is our main result. 
**Theorem 2.1**.: _Let \(\mathbf{H}=\mathbf{H}^{(\beta)}\) be as in (2.1), and suppose that the corresponding self-energy operator \(\Gamma\) satisfies the \(L\)-flatness assumption **(A)** for some \(L\in\mathbb{N}\). Let \(g\in\mathcal{C}_{c}^{2}(\mathbb{R})\) be a twice continuously differentiable test function with compact support. Then for any \(\gamma\in(0,1)\) and \(E_{0}\) satisfying \(0<\rho(E_{0})<\infty\), the mesoscopic linear spectral statistic_ \[\sum_{i=1}^{nN}\big{(}f_{N}(\lambda_{i})-\mathbb{E}[f_{N}(\lambda_{i})]\big{)} \tag{2.16}\] _with_ \[f_{N}:\mathbb{R}\to\mathbb{R},\qquad f_{N}(x)=g\Big{(}N^{\gamma}(x-E_{0}) \Big{)} \tag{2.17}\] _converges in distribution to a centered Gaussian random variable with variance_ \[V[g]:=\frac{1}{2\beta\pi^{2}}\int_{\mathbb{R}}\int_{\mathbb{R}}\frac{(g(x)-g(y))^{2}}{(x-y)^{2}}dxdy. \tag{2.18}\] _Remark 2.2_ (Comparison of 1-flatness and \(L\)-flatness for \(L>1\)).: The condition (2.11) that defines the 1-flatness does not allow any of the \(N\times N\) blocks of the matrix \(\mathbf{H}\) to be constantly equal to \(0\). Indeed, if there exist \(1\leq i,j\leq n\) such that \(\left\langle e_{i},L_{\alpha}e_{j}\right\rangle=\left\langle e_{i},L_{\alpha}^{ \ast}e_{j}\right\rangle=0\) for all \(1\leq\alpha\leq d\), then \[\left\langle e_{i},\Gamma[E_{jj}]e_{i}\right\rangle=\sum_{\alpha=1}^{d}\left( \left\langle e_{i},L_{\alpha}e_{j}\right\rangle\!\left\langle e_{j},L_{\alpha} ^{\ast}e_{i}\right\rangle+\left\langle e_{i},L_{\alpha}^{\ast}e_{j}\right\rangle \!\left\langle e_{j},L_{\alpha}e_{i}\right\rangle\right)=0, \tag{2.19}\] while \(\operatorname{Tr}(E_{jj})=1\). This contradicts (2.11). On the other hand, the \(L\)-flatness with \(L>1\) gives the structure of \(\mathbf{H}\) more flexibility, in particular in terms of the zero blocks. We illustrate this with the following example. Let \(\mathbf{X}_{i}\), \(1\leq i\leq 7\), be independent (real or complex) random i.i.d. matrices.
Denote \(\mathbf{Y}_{i}:=\frac{1}{\sqrt{2}}(\mathbf{X}_{i}+\mathbf{X}_{i}^{\ast})\) for \(1\leq i\leq 4\), and consider the random Kronecker matrix \[\mathbf{H}=\left(\begin{array}{cccc}\mathbf{Y}_{1}&\mathbf{Y}_{1}+\mathbf{X}_{5}&0&0\\ \mathbf{Y}_{1}+\mathbf{X}_{5}^{\ast}&\mathbf{Y}_{2}&0&\mathbf{X}_{6}\\ 0&0&\mathbf{Y}_{1}+\mathbf{Y}_{3}&\mathbf{X}_{6}+\mathbf{X}_{7}\\ 0&\mathbf{X}_{6}^{\ast}&\mathbf{X}_{6}^{\ast}+\mathbf{X}_{7}^{\ast}&\mathbf{Y}_{4}\end{array}\right) \tag{2.20}\] constructed using the structure matrices \[L_{1}=\frac{1}{\sqrt{2}}\Big{(}E_{11}+E_{12}+E_{21}+E_{33}\Big{)},\quad L_{2} =\frac{1}{\sqrt{2}}E_{22},\quad L_{3}=\frac{1}{\sqrt{2}}E_{33},\quad L_{4}= \frac{1}{\sqrt{2}}E_{44}, \tag{2.21}\] \[L_{5}=E_{12},\quad L_{6}=E_{24}+E_{34},\quad L_{7}=E_{34}. \tag{2.22}\] The matrix \(Z\in\{0,1\}^{4\times 4}\) given by \[Z=\left(\begin{array}{cccc}1&1&0&0\\ 1&1&0&1\\ 0&0&1&1\\ 0&1&1&1\end{array}\right) \tag{2.23}\] has a strictly positive main diagonal, and \(Z^{3}\) has all entries strictly positive. Moreover, the operator \(\Gamma\) defined through (2.8) with \(d=7\) and \(L_{\alpha}\) in (2.21)-(2.22) satisfies the 3-flatness property with \(Z\) given in (2.23) and \(C_{\text{flat}}=0.1\). The possibility of having zero blocks plays an important role in applying the random LMPs of the form (2.1) to the study of the linearizations of polynomials and rational functions in random matrices. **Structure of the proof**. In Section 3 we provide the full proof of Theorem 2.1 in the complex Hermitian case \(\beta=2\). In Sections 3.1 and 3.2 we derive a differential equation for the characteristic function of the mesoscopic linear spectral statistic (2.16). The obtained equation contains a multiresolvent term. In Section 3.3 we show that the multiresolvent term satisfies a certain self-consistent equation, which is analyzed in Section 3.4.
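Returning briefly to the example (2.20)-(2.23): its combinatorial claims can be checked numerically. The sketch below is ours; it builds the structure matrices (2.21)-(2.22), confirms that \(Z^{3}\) has strictly positive entries, and spot-checks the 3-flatness bound (2.10) with \(C_{\text{flat}}=0.1\) on random rank-one positive semidefinite matrices (random sampling is only a spot check, not a proof):

```python
import numpy as np

def E(i, j, n=4):
    M = np.zeros((n, n)); M[i - 1, j - 1] = 1.0; return M

s = 1 / np.sqrt(2)                               # structure matrices (2.21)-(2.22)
Ls = [s * (E(1, 1) + E(1, 2) + E(2, 1) + E(3, 3)), s * E(2, 2), s * E(3, 3),
      s * E(4, 4), E(1, 2), E(2, 4) + E(3, 4), E(3, 4)]

def Gamma(R):
    """Self-energy (2.8) in the real symmetric case, where L^* = L^T."""
    return sum(L @ R @ L.T + L.T @ R @ L for L in Ls)

Z = np.array([[1, 1, 0, 0], [1, 1, 0, 1], [0, 0, 1, 1], [0, 1, 1, 1]], dtype=float)
Z3_positive = bool(np.all(np.linalg.matrix_power(Z, 3) > 0))
print(Z3_positive)

# 3-flatness bound (2.10) on rank-one PSD matrices R = v v^T: by linearity
# of both sides in R, rank-one matrices suffice as test cases
rng = np.random.default_rng(0)
worst = np.inf
for _ in range(200):
    v = rng.standard_normal(4)
    R = np.outer(v, v)
    D = np.diag(Z.T @ np.diag(R))                # D_ll = sum_k z_kl R_kk
    worst = min(worst, np.linalg.eigvalsh(Gamma(R) - 0.1 * D).min())
print(worst)                                     # nonnegative up to rounding
```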
This allows us to approximate the multiresolvent term by a deterministic quantity and to compute the limiting behavior of the characteristic function of the statistic (2.16) in Section 3.5. In Section 4 we prove Theorem 2.1 in the real symmetric case \(\beta=1\). Section 4.1 collects all the results that can be imported from Section 3 with minor changes or without changes. The differential equation for the characteristic function of the linear statistic (2.16) in the \(\beta=1\) case contains a multiresolvent term of a new type, which we then study in Section 4.2. The limiting behavior of the characteristic function is calculated in Section 4.3, thus completing the proof of Theorem 2.1. To streamline the presentation, the derivation of certain results, which are important, but whose proofs are not immediately relevant for establishing the main theorem, is postponed to the appendix. In Appendix A we list and prove the properties of the solution to the MDE (2.7) with self-energy satisfying the general \(L\)-flatness property **(A)**. Finally, in Appendix B we derive a convenient cumulant expansion formula for the resolvent matrix \((\mathbf{H}-z\mathbf{I}_{nN})^{-1}\), which is used extensively in Sections 3.2 and 4.2. ## 3 Complex Hermitian case The proof of Theorem 2.1 presented in this section relies on the study of the characteristic function of the mesoscopic linear spectral statistic \(\sum_{i}\left(f_{N}(\lambda_{i})-\mathbb{E}[f_{N}(\lambda_{i})]\right)= \operatorname{Tr}f_{N}(\mathbf{H})-\mathbb{E}\big{[}\operatorname{Tr}f_{N}(\mathbf{H}) \big{]}\). More precisely, we show that for every fixed \(t\in\mathbb{R}\) the characteristic function \(\mathbb{E}\big{[}\exp\big{\{}\mathrm{i}t\big{(}\operatorname{Tr}f_{N}(\mathbf{H})- \mathbb{E}\big{[}\operatorname{Tr}f_{N}(\mathbf{H})\big{]}\big{)}\big{\}}\big{]}\) converges to \(\exp\{-t^{2}V[g]/2\}\), the characteristic function of a Gaussian random variable with variance (2.18).
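As a numerical aside, the limiting variance \(V[g]\) of (2.18) can be evaluated by quadrature. For the illustrative choice \(g(x)=e^{-x^{2}/2}\) (ours, not compactly supported, but with negligible tails) the Fourier identity \(\int\!\int((g(x)-g(y))/(x-y))^{2}dxdy=\int|\xi||\hat{g}(\xi)|^{2}d\xi\) gives the exact value \(2\pi\) for the double integral, which the sketch below reproduces up to truncation error:

```python
import numpy as np

def variance_functional(g, gprime, beta=2, L=40.0, h=0.05):
    """Evaluate V[g] of (2.18) by Riemann-sum quadrature on [-L, L]^2;
    on the diagonal the divided difference is replaced by its limit g'(x)."""
    x = np.arange(-L, L + h, h)
    gx = g(x)
    D = x[None, :] - x[:, None]
    with np.errstate(divide="ignore", invalid="ignore"):
        Q = (gx[None, :] - gx[:, None]) / D
    Q[D == 0] = gprime(x)                        # diagonal: limit of the quotient
    integral = h * h * np.sum(Q ** 2)
    return integral / (2 * beta * np.pi ** 2), integral

g = lambda x: np.exp(-x ** 2 / 2)
gp = lambda x: -x * np.exp(-x ** 2 / 2)
V, I = variance_functional(g, gp)
print(I / (2 * np.pi))                           # close to 1, slightly below due
                                                 # to the truncated 1/(x-y)^2 tails
```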
This is achieved through the detailed analysis of the resolvent of \(\mathbf{H}\), which can be related to the linear spectral statistics (2.16) via the Helffer–Sjöstrand formula. This relation will be explained below in (3.6). We start by introducing the necessary notation. ### Notation and preliminary reductions Denote by \(\eta_{0}(N):=N^{-\gamma}\) the (mesoscopic) scaling parameter. We assume \(\gamma\in(0,1)\), so that \(N^{-1}\ll\eta_{0}(N)\ll 1\) and \(f_{N}(x)=g\big{(}(x-E_{0})/\eta_{0}\big{)}\) (defined in (2.17)) satisfies \[\|f_{N}\|_{1}=\|g\|_{1}\eta_{0},\quad\|f_{N}^{\prime}\|_{1}=\|g^{\prime}\|_{1},\quad\|f_{N}^{\prime\prime}\|_{1}=\frac{\|g^{\prime\prime}\|_{1}}{\eta_{0}}. \tag{3.1}\] We will suppress the \(N\)-dependence in \(\eta_{0}\) and \(f\) for brevity. Since the mesoscopic test function \(f\) is localized around \(E_{0}\), we can restrict our analysis to a small region around the support of \(f\). To this end, let \(\sigma>0\) be a sufficiently large constant satisfying \(\operatorname{supp}(g)\subset[-\sigma,\sigma]\), which, in turn, implies \[\operatorname{supp}(f)\subset\big{[}E_{0}-\delta,E_{0}+\delta\big{]} \tag{3.2}\] with \(\delta=\delta_{N}:=\sigma\eta_{0}\). From the continuity of \(\rho\) and the fact that \(0<\rho(E_{0})<\infty\), there exists \(\theta\in(0,1)\) such that \[\theta\leq\rho(x)\leq\theta^{-1}\quad\text{for all}\quad x\in(E_{0}-2\delta,E_{0}+2\delta) \tag{3.3}\] for \(N\) sufficiently large. In the sequel, we will always assume that \(N\) is large enough for (3.3) to hold. Denote by \(\chi:\mathbb{R}\to[0,1]\) a smooth cutoff function supported on \([-2\delta,2\delta]\) and equal to \(1\) on \([-\delta,\delta]\), and define the almost analytic extension of \(f\) by \[\tilde{f}(z):=(f(x)+\operatorname{i}yf^{\prime}(x))\chi(y) \tag{3.4}\] with \(z=x+iy\).
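The key property of the almost analytic extension (3.4) is that \(\frac{\partial}{\partial\overline{z}}\tilde{f}=\frac{1}{2}\big{(}\mathrm{i}yf^{\prime\prime}(x)\chi(y)+\mathrm{i}(f(x)+\mathrm{i}yf^{\prime}(x))\chi^{\prime}(y)\big{)}\), so it vanishes to first order as \(y\to 0\) on the strip where \(\chi\equiv 1\). This identity can be confirmed symbolically for generic smooth \(f\) and \(\chi\) (a quick sanity check, not part of the proof):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.Function('f')
chi = sp.Function('chi')

# Almost analytic extension (3.4): f~(z) = (f(x) + i*y*f'(x)) * chi(y)
ftilde = (f(x) + sp.I * y * sp.diff(f(x), x)) * chi(y)

# d/dzbar = (d/dx + i*d/dy) / 2
dbar = (sp.diff(ftilde, x) + sp.I * sp.diff(ftilde, y)) / 2

# Claimed closed form of d/dzbar f~
target = sp.Rational(1, 2) * (sp.I * y * sp.diff(f(x), x, 2) * chi(y)
                              + sp.I * (f(x) + sp.I * y * sp.diff(f(x), x)) * sp.diff(chi(y), y))

residual = sp.expand(dbar - target)
print(residual)
```

All terms cancel exactly, so the residual is the zero expression.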
An integral representation formula, used in the Helffer–Sjöstrand functional calculus, expresses the value \(f(\lambda)\) for any \(\lambda\in\mathbb{R}\) as \[f(\lambda)=\frac{1}{\pi}\int_{\mathbb{C}}\frac{\frac{\partial}{\partial\overline{z}}\tilde{f}(z)}{\lambda-z}d^{2}z=\frac{1}{2\pi}\int_{\mathbb{C}}\frac{\operatorname{i}yf^{\prime\prime}(x)\chi(y)+\operatorname{i}(f(x)+\operatorname{i}yf^{\prime}(x))\chi^{\prime}(y)}{\lambda-x-\operatorname{i}y}dxdy \tag{3.5}\] with the standard Lebesgue measure on \(\mathbb{C}\) denoted by \(d^{2}z:=d\operatorname{Re}z\,d\operatorname{Im}z\). Therefore, we rewrite the fluctuations of the linear spectral statistics (2.16) as \[(1-\mathbb{E})\big{[}\operatorname{Tr}f(\mathbf{H})\big{]}=\frac{1}{\pi}\int_{\mathbb{C}}\frac{\partial\tilde{f}(z)}{\partial\overline{z}}(1-\mathbb{E})\big{[}\operatorname{Tr}\mathbf{G}(z)\big{]}d^{2}z, \tag{3.6}\] where \(\mathbf{G}(z):=(\mathbf{H}-z\mathbf{I}_{nN})^{-1}\in\mathbb{C}^{nN\times nN}\) is the resolvent of \(\mathbf{H}\), \(\mathbf{I}_{nN}\) is the \(nN\times nN\) identity matrix and we used the shorthand notation \((1-\mathbb{E})[X]:=X-\mathbb{E}[X]\) for any random variable \(X\). The above representation, with resolvent \(\mathbf{G}(z)\) appearing on the right-hand side of (3.6), allows us to exploit the properties of the resolvents of Kronecker random matrices, in particular, the local law. To state the local law for Kronecker random matrices, we use the following notation.
For any matrix \(\mathbf{R}\in\mathbb{C}^{nN\times nN}\) we denote its (left) matrix coefficients \(\{R_{ij}\}_{1\leq i,j\leq N}\) with respect to the standard basis of \(\mathbb{C}^{N\times N}\) via the identity \[\mathbf{R}=\sum_{i,j=1}^{N}R_{ij}\otimes\mathbf{E}_{ij},\quad R_{ij}\in\mathbb{C}^{n\times n}, \tag{3.7}\] where \(\mathbf{E}_{ij}=\big{(}\delta_{ik}\delta_{jl}\big{)}_{k,l=1}^{N}\in\mathbb{C}^{N\times N}\), \(\delta_{kl}\) is the Kronecker delta, and \(\{\mathbf{E}_{ij}\}_{1\leq i,j\leq N}\) is the standard basis of \(\mathbb{C}^{N\times N}\). For example, the collection \(\{G_{ij}(z)\}_{1\leq i,j\leq N}\) gives the matrix coefficients of the resolvent \(\mathbf{G}(z)\in\mathbb{C}^{\,nN\times nN}\) through the identity \[\mathbf{G}(z)=\sum_{i,j=1}^{N}G_{ij}(z)\otimes\mathbf{E}_{ij},\quad G_{ij}(z)\in\mathbb{C}^{n\times n}. \tag{3.8}\] We also need the notion of _stochastic domination_. For two sequences of nonnegative random variables \(\mathbf{\Phi}:=(\Phi_{N})_{N\in\mathbb{N}}\) and \(\mathbf{\Psi}:=(\Psi_{N})_{N\in\mathbb{N}}\) we say that \(\mathbf{\Phi}\) is _stochastically dominated_ by \(\mathbf{\Psi}\), denoted \(\mathbf{\Phi}\prec\mathbf{\Psi}\), if for any \(\varepsilon>0\) small and \(D\in\mathbb{N}\) there exists \(C_{\varepsilon,D}>0\) such that \[\mathbb{P}\Big{[}\Phi_{N}\geq N^{\varepsilon}\Psi_{N}\Big{]}\leq\frac{C_{\varepsilon,D}}{N^{D}} \tag{3.9}\] holds for all \(N\in\mathbb{N}\). If \(\mathbf{\Phi}\) and \(\mathbf{\Psi}\) are deterministic, \(\mathbf{\Phi}\prec\mathbf{\Psi}\) means that for any \(\varepsilon>0\) there exists \(C_{\varepsilon}>0\) such that \(\mathbf{\Phi}\leq C_{\varepsilon}N^{\varepsilon}\mathbf{\Psi}\). Finally, if there exists \(C>0\) such that \(\mathbf{\Phi}\leq C\mathbf{\Psi}\) uniformly for all \(N\), then we denote this as \(\mathbf{\Phi}\lesssim\mathbf{\Psi}\). We write \(\mathbf{\Phi}\sim\mathbf{\Psi}\) if \(\mathbf{\Phi}\lesssim\mathbf{\Psi}\) and \(\mathbf{\Psi}\lesssim\mathbf{\Phi}\).
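Concretely, in the convention \(\mathbb{C}^{nN\times nN}=\mathbb{C}^{n\times n}\otimes\mathbb{C}^{N\times N}\) of (3.7), the coefficient \(R_{ij}\) sits at the entries of \(\mathbf{R}\) with indices \((pN+i,qN+j)\), \(0\leq p,q<n\), i.e. it is read off by stride-\(N\) slicing. A small sketch (dimensions chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 2, 3

# Build R = sum_{i,j} R_ij ⊗ E_ij from random n x n coefficients
blocks = rng.standard_normal((N, N, n, n))
R = np.zeros((n * N, n * N))
for i in range(N):
    for j in range(N):
        Eij = np.zeros((N, N))
        Eij[i, j] = 1.0
        R += np.kron(blocks[i, j], Eij)   # n x n factor first, as in (3.7)

# Recover each matrix coefficient R_ij by stride-N slicing
for i in range(N):
    for j in range(N):
        assert np.allclose(R[i::N, j::N], blocks[i, j])
print("matrix coefficients recovered")
```

Note that with `np.kron(A, Eij)` the \(n\times n\) factor indexes the coarse block structure, which is why the coefficient lives on a stride-\(N\) sublattice rather than in a contiguous block.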
We now state a result that will be used repeatedly throughout this paper to analyze the right-hand side of (3.6). **Proposition 3.1** (Optimal bulk local law for \(\boldsymbol{H}\)).: _Suppose that the self-energy operator \(\Gamma\) satisfies the \(L\)-flatness property **(A)**. Then for any \(\nu\in(0,1)\) and \(\theta\in(0,1)\)_ \[\max_{1\leq i,j\leq N}\|G_{ij}(z)-\delta_{ij}M(z)\|\prec\frac{1}{\sqrt{N\operatorname {Im}z}} \tag{3.10}\] _and_ \[\Big{\|}\frac{1}{N}\sum_{i=1}^{N}G_{ii}(z)-M(z)\Big{\|}\prec\frac{1}{N \operatorname{Im}z} \tag{3.11}\] _uniformly on the set \(\{z\,:\,\theta<\rho\,(\operatorname{Re}z)<\theta^{-1},\,\operatorname{Im}z \geq N^{-1+\nu}\}\), where \(M(z)\) is the solution to the MDE (2.7), \(\rho\) is the corresponding self-consistent density of states defined in (2.15), and \(\|\cdot\|\) denotes the matrix norm of \(G_{ij}(z)\in\mathbb{C}^{n\times n}\) induced by the Euclidean norm on \(\mathbb{C}^{n}\)._ Proof.: The bounds (3.10) and (3.11) are an immediate consequence of [5, Lemma B.1] and Proposition A.1. Indeed, it has been proven in [5, Lemma B.1] that (3.10) and (3.11) hold for all \(z\in\mathbb{C}_{+}\) such that \(\operatorname{Im}z\geq N^{-1+\nu}\) and * \(\|M(z)\|\) is bounded, and * the _stability operator_ defined for any \(R\in\mathbb{C}^{n\times n}\) by \[R\mapsto R-M(z)\Gamma[R]M(z)\] (3.12) is invertible. Parts (i) and (iii) of Proposition A.1 establish the boundedness of \(\|M(z)\|\) (as well as \(\|(M(z))^{-1}\|\)) uniformly on \(\mathbb{C}_{+}\), and the boundedness of the inverse of the stability operator (3.12) uniformly on the set \(\{z\,:\,\theta<\rho\,(\operatorname{Re}z)<\theta^{-1},\,\operatorname{Im}z\geq 0\}\), which together imply (3.10) and (3.11). Notice that the constants \(C_{\varepsilon,D}\) in (3.10) and (3.11) that are hidden in the notion of stochastic domination depend on the model parameters and additionally on \(\nu\), \(\theta\). 
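As a minimal illustration of the averaged local law (3.11), consider the simplest case \(n=1\), \(d=1\), \(L_{1}=1/\sqrt{2}\), so that \(\mathbf{H}=(\mathbf{X}+\mathbf{X}^{*})/\sqrt{2}\) is a Wigner matrix; the MDE then reduces to the scalar equation \(-1/m=z+m\), solved by the Stieltjes transform of the semicircle law. Assuming the normalization \(\mathbb{E}|x_{ij}|^{2}=1/N\) (a convention taken here for illustration), one can check numerically that \(\frac{1}{N}\operatorname{Tr}\mathbf{G}(z)\) is close to \(m(z)\) in the bulk:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
z = 0.5 + 0.05j        # bulk spectral parameter with Im z >> 1/N

# H = (X + X*)/sqrt(2) with complex i.i.d. entries, E|x_ij|^2 = 1/N
X = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
H = (X + X.conj().T) / np.sqrt(2)

avg_trace_G = np.trace(np.linalg.inv(H - z * np.eye(N))) / N

# Solution of -1/m = z + m with Im m > 0 (semicircle Stieltjes transform)
m = (-z + np.sqrt(z * z - 4)) / 2

print(abs(avg_trace_G - m))   # of order 1/(N Im z), consistent with (3.11)
```

The deviation is of order \(1/(N\operatorname{Im}z)\) up to \(N^{\varepsilon}\) factors, in line with the averaged local law; the entry-wise bound (3.10) only gives the weaker rate \((N\operatorname{Im}z)^{-1/2}\).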
We call (3.10) the _entry-wise local law_ and (3.11) the _averaged local law_ for \(\boldsymbol{H}\). The local law bounds (3.10) and (3.11) provide us with the necessary control of the resolvent \(\boldsymbol{G}(z)\) for the spectral parameters \(z\) very close to the real line, namely for \(|\operatorname{Im}z|\geq N^{-1+\nu}\) for arbitrarily small \(\nu\in(0,1)\). In order to establish the weak convergence of the random variable (2.16) to a centered Gaussian random variable with variance (2.18), we consider the characteristic function of \((1-\mathbb{E})[\operatorname{Tr}f(\boldsymbol{H})]\). Define \[e(t):=e^{\mathrm{i}\,t(1-\mathbb{E})[\operatorname{Tr}f(\boldsymbol{H})]}=e^{\frac{\mathrm{i}t}{\pi}\int_{\mathbb{C}}\frac{\partial\tilde{f}(z)}{\partial\overline{z}}(1-\mathbb{E})[\operatorname{Tr}\boldsymbol{G}(z)]d^{2}z}, \tag{3.13}\] where on the right-hand side we used the representation (3.6). As the first step of the analysis of (3.13) we show that it is enough to consider the integral over a small neighborhood around \(E_{0}\). Moreover, removing a sufficiently narrow strip around the real line in the integral in (3.13) does not change the limiting value of \(e(t)\) and its expectation. More precisely, denote \[\Omega=\Omega_{N}:=\{z\in\mathbb{C}\,:\,|\operatorname{Re}z-E_{0}|<\delta,N^{-\tau}\eta_{0}<|\operatorname{Im}z|<2\delta\}, \tag{3.14}\] where \(\tau\in\big{(}0,1\big{)}\) is a sufficiently small constant. For each statement in this and subsequent sections we will indicate how small \(\tau\) should be compared to \(\gamma\). The following holds. **Lemma 3.2**.: _Let \(\gamma\in\big{(}0,1\big{)}\), \(\tau\in\big{(}0,(1-\gamma)\big{)}\) and_ \[\mathfrak{e}(t):=e^{\frac{\mathrm{i}t}{\pi}\int_{\Omega}\frac{\partial\tilde{f}(z)}{\partial\overline{z}}(1-\mathbb{E})[\operatorname{Tr}\boldsymbol{G}(z)]d^{2}z}. \tag{3.15}\] _Then_ \[\big{|}\mathbb{E}[e(t)]-\mathbb{E}[\mathfrak{e}(t)]\big{|}\prec|t|\,\|g^{\prime\prime}\|_{1}N^{-\tau}.
\tag{3.16}\] The proof of this lemma follows the lines of the similar result from [47, Section 4.2] with only minor changes, therefore, we omit it in the present work. From Lemma 3.2 we see that \(\mathbb{E}[\mathfrak{e}(t)]\) and the characteristic function of the mesoscopic linear spectral statistic \(\mathbb{E}[e(t)]\) coincide in the limit \(N\to\infty\). In the remainder of this section we will study \(\mathbb{E}[\mathfrak{e}(t)]\). Notice that the local laws (3.10)-(3.11) hold for \(|\operatorname{Im}z|\gg N^{-1}\), therefore, working with \(\mathfrak{e}(t)\) makes it possible to apply the local laws for \(\boldsymbol{G}(z)\) for all \(z\) in the domain of integration \(\Omega\) in (3.15). ### Computing \(\mathbb{E}[\mathfrak{e}(t)]\) for \(\beta=2\) In this section we obtain an approximate equation for \(\mathbb{E}[\mathfrak{e}(t)]\) in the complex Hermitian case. The derivation is based on direct algebraic computations, and the obtained equation will be further analyzed and refined in the subsequent sections. The main tool that will be used throughout this section is the cumulant expansion formula. Denote by \(\mathbf{W}\) the fluctuation matrix of \(\mathbf{H}\) from (2.1) with complex i.i.d. matrices \(\mathbf{X}_{\alpha}\) in the second tensor factor, i.e. \[\mathbf{W}:=\sum_{\alpha=1}^{d}\bigg{(}L_{\alpha}\otimes\mathbf{X}_{\alpha}+L_{\alpha}^{*}\otimes\mathbf{X}_{\alpha}^{*}\bigg{)}. \tag{3.17}\] For any differentiable function \(\mathscr{F}:\mathbb{C}^{nN\times nN}\to\mathbb{C}^{nN\times nN}\) we define the directional derivative of \(\mathscr{F}(\mathbf{W})\) with respect to \(\mathbf{W}=(w_{ij})_{i,j=1}^{nN}\) in the direction \(\mathbf{R}=(r_{ij})_{i,j=1}^{nN}\in\mathbb{C}^{nN\times nN}\) as \[\nabla_{\mathbf{R}}\mathscr{F}(\mathbf{W}):=\sum_{i,j=1}^{nN}\frac{\partial\mathscr{F}(\mathbf{W})}{\partial\,w_{ij}}\,r_{ij}.
\tag{3.18}\] We also define the partial trace operator \(\mathrm{Id}_{n}\otimes\mathrm{Tr}_{N}:\mathbb{C}^{nN\times nN}\to\mathbb{C}^{n\times n}\) acting as \((\mathrm{Id}_{n}\otimes\mathrm{Tr}_{N})\big{[}A\otimes\mathbf{B}\big{]}=A\,\mathrm{Tr}\left(\mathbf{B}\right)\) for any \(A\otimes\mathbf{B}\in\mathbb{C}^{nN\times nN}=\mathbb{C}^{n\times n}\otimes\mathbb{C}^{N\times N}\). Then the following holds. **Lemma 3.3**.: _Let \(\tau\in\big{(}0,\min\{\gamma,(1-\gamma)\}\big{)}\). Denote \(\mathscr{F}_{1}(\mathbf{W}):=\mathbf{G}(z)\) and \(\mathscr{F}_{2}(\mathbf{W}):=\mathbf{G}(z)\,\mathfrak{e}(t)\). Then for \(\star\in\{1,2\}\) we have_ \[\mathbb{E}\Big{[}\mathbf{W}\mathscr{F}_{\star}(\mathbf{W})\Big{]}=\mathbb{E}\Big{[}\widetilde{\mathbf{W}}\,\nabla_{\widetilde{\mathbf{W}}}\mathscr{F}_{\star}(\mathbf{W})\Big{]}+\mathcal{D}_{\star}(z), \tag{3.19}\] _where \(\widetilde{\mathbf{W}}\) is an independent copy of \(\mathbf{W}\) and the error terms \(\mathcal{D}_{\star}(z)\) are analytic in \(z\) and satisfy the bounds_ \[\Big{\|}\mathcal{D}_{\star}(z)\Big{\|}_{\max}=O_{\prec}\bigg{(}(1+|t|^{3})\frac{N^{5\tau/2}}{N\sqrt{\eta_{0}}}\bigg{)} \tag{3.20}\] _uniformly for \(z\in\Omega\)._ The proof of Lemma 3.3 relies on the cumulant expansion formula for real random variables (see, e.g., [55]) and is presented in Appendix B. Below we will see that applying formula (3.19) gives rise to the operator \(\mathscr{S}:\mathbb{C}^{nN\times nN}\to\mathbb{C}^{nN\times nN}\) acting as \[\mathscr{S}[\mathbf{R}]:=\mathbb{E}\big{[}\mathbf{W}\mathbf{R}\mathbf{W}\big{]} \tag{3.21}\] for any matrix \(\mathbf{R}\in\mathbb{C}^{nN\times nN}\). The operator \(\mathscr{S}\) can be written in terms of the self-energy operator \(\Gamma\) and the partial trace operator \(\mathrm{Id}_{n}\otimes\mathrm{Tr}_{N}\).
Indeed, for any \(\mathbf{R}=\sum_{i,j=1}^{N}R_{ij}\otimes\mathbf{E}_{ij}\) with \(R_{ij}\in\mathbb{C}^{n\times n}\) we have that \[\mathscr{S}[\mathbf{R}]=\mathbb{E}\big{[}\mathbf{W}\mathbf{R}\mathbf{W}\big{]}=\Gamma\Big{[}\frac{1}{N}\sum_{j=1}^{N}R_{jj}\Big{]}\otimes\mathbf{I}_{N}=\frac{1}{N}\Gamma\Big{[}(\mathrm{Id}_{n}\otimes\mathrm{Tr}_{N})\big{[}\mathbf{R}\big{]}\Big{]}\otimes\mathbf{I}_{N}, \tag{3.22}\] where \[(\mathrm{Id}_{n}\otimes\mathrm{Tr}_{N})\big{[}\mathbf{R}\big{]}=\sum_{i,j=1}^{N}\left(\mathrm{Id}_{n}\otimes\mathrm{Tr}_{N}\right)\big{[}R_{ij}\otimes\mathbf{E}_{ij}\big{]}=\sum_{j=1}^{N}R_{jj}. \tag{3.23}\] It will often be convenient to decompose the trace on \(\mathbb{C}^{nN\times nN}\) into two steps: first take the partial trace \((\mathrm{Id}_{n}\otimes\mathrm{Tr}_{N})\), and then apply the trace on the smaller space \(\mathbb{C}^{n\times n}\). Denote \(\mathbf{M}(z):=M(z)\otimes\mathbf{I}_{N}\in\mathbb{C}^{nN\times nN}\). Then \(\mathscr{S}[\mathbf{M}(z)]=\Gamma[M(z)]\otimes\mathbf{I}_{N}\), and thus for any \(N\in\mathbb{N}\) the function \(\mathbf{M}(z)\) satisfies the equation \[-\frac{1}{\mathbf{M}(z)}=z\mathbf{I}_{nN}-\mathbf{K}_{0}+\mathscr{S}\big{[}\mathbf{M}(z)\big{]} \tag{3.24}\] with \(\mathbf{K}_{0}:=K_{0}\otimes\mathbf{I}_{N}\). Equation (3.24) is a Dyson equation with a positivity preserving self-energy operator (3.22) that admits the solution \(\mathbf{M}(z)=M(z)\otimes\mathbf{I}_{N}\) for any \(z\in\mathbb{C}\setminus\mathbb{R}\) and \(N\in\mathbb{N}\). For any \(r\in\mathbb{N}\) and \(T\in\mathbb{C}^{r\times r}\), denote by \(\mathcal{C}_{T}\) the operator acting on \(\mathbb{C}^{r\times r}\) as the multiplication by \(T\) from the left and from the right \[\mathcal{C}_{T}\big{[}R\,\big{]}=TR\,T. \tag{3.25}\] If \(T\) is invertible, then so is \(\mathcal{C}_{T}\), and \(\mathcal{C}_{T}^{-1}=\mathcal{C}_{T^{-1}}\).
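The identity (3.22) can be probed by Monte Carlo simulation in a toy case. The sketch below takes \(d=1\) and assumes the normalization \(\mathbb{E}|x_{ij}^{(\alpha)}|^{2}=1/N\), \(\mathbb{E}\big{(}x_{ij}^{(\alpha)}\big{)}^{2}=0\), for which the self-energy of (2.8) reduces to \(\Gamma[B]=L_{1}BL_{1}^{*}+L_{1}^{*}BL_{1}\) (an assumption about the model's conventions, used only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, samples = 2, 16, 4000

L = np.array([[1.0, 0.5], [0.0, -1.0]])     # single structure matrix (illustrative)
A = np.array([[0.3, 1.0], [0.2, -0.4]])     # fixed n x n matrix
R = np.kron(A, np.eye(N))                   # test matrix R = A ⊗ I_N

def sample_W():
    # complex i.i.d. X with E|x|^2 = 1/N, E x^2 = 0 (assumed normalization)
    X = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
    return np.kron(L, X) + np.kron(L.conj().T, X.conj().T)

S_emp = sum(W @ R @ W for W in (sample_W() for _ in range(samples))) / samples

# (3.22): S[R] = (1/N) * Gamma[(Id_n ⊗ Tr_N)[R]] ⊗ I_N with
# (Id_n ⊗ Tr_N)[A ⊗ I_N] = N*A, so S[R] = Gamma[A] ⊗ I_N here
Gamma_A = L @ A @ L.conj().T + L.conj().T @ A @ L
S_th = np.kron(Gamma_A, np.eye(N))

print(np.max(np.abs(S_emp - S_th)))         # small Monte Carlo error
```

The cross terms \((L\otimes\mathbf{X})\mathbf{R}(L\otimes\mathbf{X})\) average to zero because \(\mathbb{E}x^{2}=0\), which is why only the two "mixed" products survive in \(\Gamma\).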
For \(z\in\mathbb{C}\setminus\mathbb{R}\) and \(N\in\mathbb{N}\), we define the operators \[\mathscr{B}_{z}:=\mathcal{C}_{\boldsymbol{M}(z)}^{-1}-\mathscr{S},\qquad\mathcal{B}_{z}:=\mathcal{C}_{M(z)}^{-1}-\Gamma \tag{3.26}\] acting on \(\mathbb{C}^{nN\times nN}\) and \(\mathbb{C}^{n\times n}\) correspondingly. Notice that the composition \(\mathcal{C}_{M(z)}\mathcal{B}_{z}\) gives the stability operator for the Dyson equation (2.7) introduced in (3.12). Similarly, the composition \(\mathcal{C}_{\boldsymbol{M}(z)}\mathscr{B}_{z}=\mathrm{Id}-\mathcal{C}_{\boldsymbol{M}(z)}\mathscr{S}\) is the stability operator for the Dyson equation (3.24). As mentioned in Section 3.1, the inverse of the stability operator (3.12) is uniformly bounded in the operator norm on the set \(\{z\,:\,\theta<\rho\,(\operatorname{Re}z)<\theta^{-1},\,\operatorname{Im}z\geq 0\}\) for any \(\theta\in(0,1)\). This, together with the boundedness of \(\|M(z)\|\) and \(\|M^{-1}(z)\|\) established in part (i) of Proposition A.1 and the extension of \(M(z)\) to \(\mathbb{C}\) defined in (2.13), implies that for any \(\theta\in(0,1)\) and any \(C>0\) \[\sup\{\|\mathcal{B}_{z}^{-1}\|\,:\,\theta<\rho\,(\operatorname{Re}z)<\theta^{-1},\,|z|\leq C\}\lesssim 1. \tag{3.27}\] From (3.22) and (3.26) we have that for any \(\boldsymbol{R}\in\mathbb{C}^{nN\times nN}\) \[(\mathrm{Id}_{n}\otimes\operatorname{Tr}_{N})\,\mathscr{B}_{z}\big{[}\boldsymbol{R}\big{]}=\mathcal{B}_{z}\,(\mathrm{Id}_{n}\otimes\operatorname{Tr}_{N})\big{[}\boldsymbol{R}\big{]}. \tag{3.28}\] Now we define the adjoints of the operators \(\mathscr{B}_{z}\) and \(\mathcal{B}_{z}\) (introduced in (3.26)). To this end, for any \(r\in\mathbb{N}\) we equip \(\mathbb{C}^{r\times r}\) with the scalar product \[\langle S,T\rangle=\frac{1}{r}\operatorname{Tr}\big{(}S^{*}T\big{)} \tag{3.29}\] for all \(S,T\in\mathbb{C}^{r\times r}\).
We denote by \(\mathcal{B}^{*}\) and \(\mathscr{B}^{*}\) the adjoints of \(\mathcal{B}\) and \(\mathscr{B}\) with respect to the corresponding scalar products (3.29). We also introduce the notation for the normalized trace functional \(\langle T\rangle:=\langle I,T\rangle=\frac{1}{r}\operatorname{Tr}\big{(}T\big{)}\). We proceed to the analysis of \(\mathbb{E}[\mathfrak{e}(t)]\). **Lemma 3.4**.: _Let \(\gamma\in(0,1)\) and \(\tau\in\big{(}0,\min\{\gamma,(1-\gamma)\}/7\big{)}\). Then_ \[\frac{d}{dt}\mathbb{E}[\mathfrak{e}(t)]=-\frac{t}{\pi^{2}}\int_{\Omega\times\Omega}\frac{\partial\tilde{f}(z)}{\partial\overline{z}}\frac{\partial\tilde{f}(\zeta)}{\partial\overline{\zeta}}\frac{\partial}{\partial\zeta}\mathbb{E}\Big{[}\sum_{i,j=1}^{n}\operatorname{Tr}\Big{(}E_{ji}\,S_{ij}(z,\zeta)\Big{)}\mathfrak{e}(t)\Big{]}d^{2}\zeta d^{2}z+\mathcal{E}_{1} \tag{3.30}\] _where \(|\mathcal{E}_{1}|\prec N^{-\tau}\) and_ \[S_{ij}(z,\zeta):=\frac{1}{N}\sum_{k,l=1}^{N}G_{lk}(z)\frac{1}{M(z)}\mathcal{B}_{z}^{-1}[I_{n}]\,\Gamma[E_{ij}]G_{kl}(\zeta). \tag{3.31}\] Proof.: We start by differentiating the function \(\mathfrak{e}(t)\) in (3.15) with respect to \(t\) \[\frac{d}{dt}\mathbb{E}[\mathfrak{e}(t)]=\frac{\mathrm{i}}{\pi}\int_{\Omega}\frac{\partial\tilde{f}(z)}{\partial\overline{z}}\mathbb{E}\big{[}\mathfrak{e}(t)(1-\mathbb{E})[\operatorname{Tr}\boldsymbol{G}(z)]\big{]}d^{2}z. \tag{3.32}\] Recall that \(\boldsymbol{G}(z)=(\boldsymbol{W}+\boldsymbol{K}_{0}-z\boldsymbol{I}_{nN})^{-1}\) with \(\boldsymbol{K}_{0}=K_{0}\otimes\boldsymbol{I}_{N}\) and \(\boldsymbol{W}\) defined in (3.17), which implies the following trivial identities \[(z\boldsymbol{I}_{nN}-\boldsymbol{K}_{0})(1-\mathbb{E})[\boldsymbol{G}(z)]=(1-\mathbb{E})\big{[}(z\boldsymbol{I}_{nN}-\boldsymbol{K}_{0})\boldsymbol{G}(z)\big{]}=(1-\mathbb{E})\big{[}\boldsymbol{W}\boldsymbol{G}(z)\big{]}.
\tag{3.33}\] The resolvent matrix \(\boldsymbol{G}(z)\) is an analytic function of \(\boldsymbol{W}\) satisfying \(\|\boldsymbol{G}(z)\|\leq|\operatorname{Im}z|^{-1}\) for all \(z\in\mathbb{C}\setminus\mathbb{R}\). Moreover, for any \(\boldsymbol{R}\in\mathbb{C}^{nN\times nN}\) \[\nabla_{\boldsymbol{R}}\,\boldsymbol{G}(z)=-\boldsymbol{G}(z)\boldsymbol{R}\,\boldsymbol{G}(z). \tag{3.34}\] We recall that \(\nabla_{\boldsymbol{R}}=\sum_{i,j}r_{ij}\partial_{ij}\) denotes the directional derivative with respect to \(\boldsymbol{W}\) in the direction \(\boldsymbol{R}=(r_{ij})\). Therefore, by applying the cumulant expansion formula (3.19) with \(\star=1\) and taking separately the partial expectation with respect to \(\widetilde{\boldsymbol{W}}\) we get \[\mathbb{E}[\boldsymbol{W}\boldsymbol{G}(z)]=-\mathbb{E}[\widetilde{\boldsymbol{W}}\boldsymbol{G}(z)\widetilde{\boldsymbol{W}}\boldsymbol{G}(z)]+\mathcal{D}_{1}(z)=-\mathbb{E}\big{[}\mathscr{S}[\boldsymbol{G}(z)]\,\boldsymbol{G}(z)\big{]}+\mathcal{D}_{1}(z), \tag{3.35}\] where the operator \(\mathscr{S}:\mathbb{C}^{nN\times nN}\to\mathbb{C}^{nN\times nN}\) was defined in (3.21). Similarly, using (3.19) for \(\star=2\) and (3.34) we have that \[\mathbb{E}\Big{[}\boldsymbol{W}\boldsymbol{G}(z)\mathfrak{e}(t)\Big{]} =\mathbb{E}\Big{[}\widetilde{\boldsymbol{W}}\nabla_{\widetilde{\boldsymbol{W}}}\big{(}\boldsymbol{G}(z)\mathfrak{e}(t)\big{)}\Big{]}+\mathcal{D}_{2}(z) \tag{3.36}\] \[=\mathbb{E}\Big{[}\widetilde{\boldsymbol{W}}\big{(}\nabla_{\widetilde{\boldsymbol{W}}}\boldsymbol{G}(z)\big{)}\mathfrak{e}(t)\Big{]}+\mathbb{E}\Big{[}\widetilde{\boldsymbol{W}}\boldsymbol{G}(z)\nabla_{\widetilde{\boldsymbol{W}}}\big{(}\mathfrak{e}(t)\big{)}\Big{]}+\mathcal{D}_{2}(z)\] (3.37) \[=-\mathbb{E}\Big{[}\mathscr{S}[\boldsymbol{G}(z)]\boldsymbol{G}(z)\mathfrak{e}(t)\Big{]}+\mathbb{E}\Big{[}\widetilde{\boldsymbol{W}}\boldsymbol{G}(z)\nabla_{\widetilde{\boldsymbol{W}}}\big{(}\mathfrak{e}(t)\big{)}\Big{]}+\mathcal{D}_{2}(z).
\tag{3.38}\] Combining (3.35) and (3.38) yields \[\mathbb{E}\Big{[}\mathfrak{e}(t)\big{(}1-\mathbb{E}\big{)}\big{[}\boldsymbol{W}\boldsymbol{G}(z)\big{]}\Big{]}=-\mathbb{E}\Big{[}\mathfrak{e}(t)\big{(}1-\mathbb{E}\big{)}\big{[}\mathscr{S}[\boldsymbol{G}(z)]\boldsymbol{G}(z)\big{]}\Big{]}+\mathbb{E}\Big{[}\widetilde{\boldsymbol{W}}\boldsymbol{G}(z)\nabla_{\widetilde{\boldsymbol{W}}}\big{(}\mathfrak{e}(t)\big{)}\Big{]}+\mathcal{E}_{2}(z) \tag{3.39}\] with the error matrix \[\mathcal{E}_{2}(z):=\mathcal{D}_{2}(z)-\mathbb{E}[\mathfrak{e}(t)]\mathcal{D}_{1}(z) \tag{3.40}\] and \(\mathcal{D}_{1},\mathcal{D}_{2}\) from (3.19). We rewrite the first term on the right-hand side in the above equality as \[-\mathbb{E}\Big{[}\mathfrak{e}(t)\big{(}1-\mathbb{E}\big{)}\big{[}\mathscr{S}[\boldsymbol{G}(z)]\boldsymbol{G}(z)\big{]}\Big{]} \tag{3.41}\] \[=-\mathbb{E}\Big{[}\mathfrak{e}(t)\big{(}1-\mathbb{E}\big{)}\big{[}\mathscr{S}[\boldsymbol{M}(z)]\boldsymbol{G}(z)\big{]}\Big{]}-\mathbb{E}\Big{[}\mathfrak{e}(t)\mathscr{S}\big{[}\big{(}1-\mathbb{E}\big{)}[\boldsymbol{G}(z)]\big{]}\boldsymbol{M}(z)\Big{]}+\mathcal{E}_{3}(z)\] (3.42) \[=\mathbb{E}\Big{[}\mathfrak{e}(t)\Big{(}\frac{1}{\boldsymbol{M}(z)}+z\boldsymbol{I}_{nN}-\boldsymbol{K}_{0}\Big{)}\big{(}1-\mathbb{E}\big{)}\big{[}\boldsymbol{G}(z)\big{]}\Big{]}-\mathbb{E}\Big{[}\mathfrak{e}(t)\big{(}1-\mathbb{E}\big{)}\big{[}\mathscr{S}[\boldsymbol{G}(z)]\boldsymbol{M}(z)\big{]}\Big{]}+\mathcal{E}_{3}(z), \tag{3.43}\] where we used that \(\boldsymbol{M}(z)\) satisfies (3.24), and introduced the error term \[\mathcal{E}_{3}(z):=-\mathbb{E}\Big{[}\mathfrak{e}(t)\big{(}1-\mathbb{E}\big{)}\big{[}\mathscr{S}[\boldsymbol{M}(z)-\boldsymbol{G}(z)]\big{(}\boldsymbol{M}(z)-\boldsymbol{G}(z)\big{)}\big{]}\Big{]}.
\tag{3.44}\] The above identities (3.39) and (3.43) substituted into (3.33) after an elementary cancellation give \[\frac{1}{\boldsymbol{M}(z)}\,\mathbb{E}\Big{[}\mathfrak{e}(t)\big{(}1-\mathbb{E}\big{)}\big{[}\boldsymbol{G}(z)\big{]}\Big{]}-\mathscr{S}\Big{[}\mathbb{E}\big{[}\mathfrak{e}(t)\big{(}1-\mathbb{E}\big{)}\big{[}\boldsymbol{G}(z)\big{]}\big{]}\Big{]}\boldsymbol{M}(z)=-\mathbb{E}\Big{[}\widetilde{\boldsymbol{W}}\boldsymbol{G}(z)\nabla_{\widetilde{\boldsymbol{W}}}\big{(}\mathfrak{e}(t)\big{)}\Big{]}-\mathcal{E}_{2}(z)-\mathcal{E}_{3}(z). \tag{3.45}\] Multiplying the above equation by \(\boldsymbol{M}^{-1}(z)\) from the right and rearranging the terms yields \[\mathscr{B}_{z}\,\mathbb{E}\Big{[}\mathfrak{e}(t)\big{(}1-\mathbb{E}\big{)}\big{[}\boldsymbol{G}(z)\big{]}\Big{]}=-\mathbb{E}\Big{[}\widetilde{\boldsymbol{W}}\boldsymbol{G}(z)\nabla_{\widetilde{\boldsymbol{W}}}\big{(}\mathfrak{e}(t)\big{)}\Big{]}\frac{1}{\boldsymbol{M}(z)}-\Big{(}\mathcal{E}_{2}(z)+\mathcal{E}_{3}(z)\Big{)}\frac{1}{\boldsymbol{M}(z)}, \tag{3.46}\] where \(\mathscr{B}_{z}\) was defined in (3.26). By applying \((\mathrm{Id}_{n}\otimes\mathrm{Tr}_{N})\) on both sides of (3.46) and using (3.28) we get \[(\mathrm{Id}_{n}\otimes\mathrm{Tr}_{N})\mathbb{E}\Big{[}\mathfrak{e}(t)\big{(}1-\mathbb{E}\big{)}\big{[}\boldsymbol{G}(z)\big{]}\Big{]}=-\mathcal{B}_{z}^{-1}(\mathrm{Id}_{n}\otimes\mathrm{Tr}_{N})\Big{[}\mathbb{E}\big{[}\widetilde{\boldsymbol{W}}\boldsymbol{G}(z)\nabla_{\widetilde{\boldsymbol{W}}}\big{(}\mathfrak{e}(t)\big{)}\big{]}\frac{1}{\boldsymbol{M}(z)}\Big{]}+\mathcal{E}_{4}(z), \tag{3.47}\] where \[\mathcal{E}_{4}(z):=-\mathcal{B}_{z}^{-1}(\mathrm{Id}_{n}\otimes\mathrm{Tr}_{N})\Big{[}\Big{(}\mathcal{E}_{2}(z)+\mathcal{E}_{3}(z)\Big{)}\frac{1}{\boldsymbol{M}(z)}\Big{]}. \tag{3.48}\] Notice that \(\mathfrak{e}(t)\) (see (3.15)) is a bounded function of \(\boldsymbol{W}\).
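The resolvent derivative identity (3.34), used repeatedly in this computation, is easy to confirm by a finite-difference check (a generic small matrix example, not tied to the Kronecker structure):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 6
z = 0.3 + 0.7j
W = rng.standard_normal((m, m))
W = (W + W.T) / 2                              # Hermitian perturbation matrix
K0 = np.diag(rng.standard_normal(m))
R = rng.standard_normal((m, m))                # direction of differentiation

def G(Wmat):
    # resolvent G(z) = (W + K0 - z I)^{-1}
    return np.linalg.inv(Wmat + K0 - z * np.eye(m))

eps = 1e-6
fd = (G(W + eps * R) - G(W - eps * R)) / (2 * eps)   # central finite difference
exact = -G(W) @ R @ G(W)                             # identity (3.34)
print(np.max(np.abs(fd - exact)))
```

The discrepancy is of order \(\varepsilon^{2}\) times the third derivative of the resolvent, i.e. far below the tolerance of any symbolic check one would do by hand.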
Taking the directional derivative of \(\mathfrak{e}(t)\) with respect to \(\boldsymbol{W}\) in the direction \(\widetilde{\boldsymbol{W}}\), as it appears in (3.46), gives \[\mathbb{E}\Big{[}\widetilde{\boldsymbol{W}}\boldsymbol{G}(z)\nabla_{\widetilde{\boldsymbol{W}}}\big{(}\mathfrak{e}(t)\big{)}\Big{]} =\mathbb{E}\Big{[}\widetilde{\boldsymbol{W}}\boldsymbol{G}(z)\mathfrak{e}(t)\nabla_{\widetilde{\boldsymbol{W}}}\Big{(}\frac{\mathrm{i}t}{\pi}\int_{\Omega}\frac{\partial\tilde{f}(\zeta)}{\partial\overline{\zeta}}(1-\mathbb{E})\big{[}\operatorname{Tr}\boldsymbol{G}(\zeta)\big{]}d^{2}\zeta\Big{)}\Big{]} \tag{3.49}\] \[=\frac{\mathrm{i}t}{\pi}\mathbb{E}\Big{[}\widetilde{\boldsymbol{W}}\boldsymbol{G}(z)\mathfrak{e}(t)\int_{\Omega}\frac{\partial\tilde{f}(\zeta)}{\partial\overline{\zeta}}\operatorname{Tr}\left(\nabla_{\widetilde{\boldsymbol{W}}}\boldsymbol{G}(\zeta)\right)d^{2}\zeta\Big{]}. \tag{3.50}\] Using the trivial relation \(\frac{\partial}{\partial\zeta}\boldsymbol{G}(\zeta)=\boldsymbol{G}^{2}(\zeta)\), we get from (3.34) and the cyclicity of the trace that \[\operatorname{Tr}\left(\nabla_{\widetilde{\boldsymbol{W}}}\boldsymbol{G}(\zeta)\right)=-\operatorname{Tr}\left(\widetilde{\boldsymbol{W}}\frac{\partial}{\partial\zeta}\boldsymbol{G}(\zeta)\right). \tag{3.51}\] Combining this with (3.50) gives \[\mathbb{E}\Big{[}\widetilde{\boldsymbol{W}}\boldsymbol{G}(z)\nabla_{\widetilde{\boldsymbol{W}}}\big{(}\mathfrak{e}(t)\big{)}\Big{]}=-\frac{\mathrm{i}t}{\pi}\mathbb{E}\Big{[}\widetilde{\boldsymbol{W}}\boldsymbol{G}(z)\mathfrak{e}(t)\int_{\Omega}\frac{\partial\tilde{f}(\zeta)}{\partial\overline{\zeta}}\operatorname{Tr}\left(\widetilde{\boldsymbol{W}}\frac{\partial}{\partial\zeta}\boldsymbol{G}(\zeta)\right)d^{2}\zeta\Big{]}.
\tag{3.52}\] After plugging (3.52) into (3.47), using the linearity of \(\mathcal{B}_{z}^{-1}\) and taking the trace we have that \[\mathbb{E}\big{[}\mathfrak{e}(t)(1-\mathbb{E})[\operatorname{Tr}\boldsymbol{G}(z)]\big{]}\\ =\frac{\mathrm{i}t}{\pi}\int_{\Omega}\frac{\partial\tilde{f}(\zeta)}{\partial\overline{\zeta}}\cdot\frac{\partial}{\partial\zeta}\mathbb{E}\Big{[}\operatorname{Tr}\left(\mathcal{B}_{z}^{-1}(\mathrm{Id}_{n}\otimes\mathrm{Tr}_{N})\Big{[}\widetilde{\boldsymbol{W}}\boldsymbol{G}(z)\frac{1}{\boldsymbol{M}(z)}\Big{]}\right)\operatorname{Tr}\left(\widetilde{\boldsymbol{W}}\boldsymbol{G}(\zeta)\right)\mathfrak{e}(t)\Big{]}d^{2}\zeta+\operatorname{Tr}\mathcal{E}_{4}(z). \tag{3.53}\] Together with (3.32) we obtain a formula for the derivative of \(\mathbb{E}[\mathfrak{e}(t)]\), namely, \[\frac{d}{dt}\mathbb{E}[\mathfrak{e}(t)]=\frac{\mathrm{i}}{\pi}\int_{\Omega}\frac{\partial\tilde{f}(z)}{\partial\overline{z}}\mathbb{E}\big{[}\mathfrak{e}(t)(1-\mathbb{E})[\operatorname{Tr}\boldsymbol{G}(z)]\big{]}d^{2}z \tag{3.54}\] \[=-\frac{t}{\pi^{2}}\int_{\Omega\times\Omega}\frac{\partial\tilde{f}(z)}{\partial\overline{z}}\frac{\partial\tilde{f}(\zeta)}{\partial\overline{\zeta}}\frac{\partial}{\partial\zeta}\mathbb{E}\bigg{[}\operatorname{Tr}\left(\mathcal{B}_{z}^{-1}(\mathrm{Id}_{n}\otimes\mathrm{Tr}_{N})\Big{[}\widetilde{\boldsymbol{W}}\boldsymbol{G}(z)\frac{1}{\boldsymbol{M}(z)}\Big{]}\right)\operatorname{Tr}\left(\widetilde{\boldsymbol{W}}\boldsymbol{G}(\zeta)\right)\mathfrak{e}(t)\bigg{]}d^{2}\zeta d^{2}z+\mathcal{E}_{1},\] where \[\mathcal{E}_{1}:=\frac{\mathrm{i}}{\pi}\int_{\Omega}\frac{\partial\tilde{f}(z)}{\partial\overline{z}}\operatorname{Tr}\mathcal{E}_{4}(z)d^{2}\!z.
\tag{3.55}\] With the Hilbert space structure on \(\mathbb{C}^{n\times n}\) introduced in (3.29), we find \[\operatorname{Tr}\Big{(}\mathcal{B}_{z}^{-1}\big{(}\mathrm{Id}_{n}\otimes \operatorname{Tr}_{N}\big{)}\Big{[}\widetilde{\boldsymbol{W}}\boldsymbol{G}(z )\frac{1}{\boldsymbol{M}(z)}\Big{]}\Big{)}=\operatorname{Tr}\bigg{(}\Big{(} \big{(}\mathscr{B}_{z}^{-1}\big{)}^{*}\big{[}\boldsymbol{I}_{nN}\big{]}\Big{)} ^{*}\,\widetilde{\boldsymbol{W}}\boldsymbol{G}(z)\frac{1}{\boldsymbol{M}(z)} \bigg{)}, \tag{3.56}\] where we used that \[(\mathscr{B}_{z}^{*})^{-1}[\boldsymbol{I}_{nN}]=(\mathcal{B}_{z}^{*})^{-1}[I_ {n}]\otimes\boldsymbol{I}_{N} \tag{3.57}\] following from the definitions (3.26). Now the expectation in (3.54) takes the form \[\mathbb{E}\Big{[}\operatorname{Tr}\Big{(}\mathscr{B}_{z}^{-1} \Big{[}\widetilde{\boldsymbol{W}}\boldsymbol{G}(z)\frac{1}{\boldsymbol{M}(z) }\Big{]}\Big{)}\operatorname{Tr}\Big{(}\widetilde{\boldsymbol{W}} \boldsymbol{G}(\zeta)\Big{)}\mathfrak{e}(t)\Big{]} \tag{3.58}\] \[=\mathbb{E}\Big{[}\operatorname{Tr}\Big{(}\boldsymbol{G}(z) \frac{1}{\boldsymbol{M}(z)}\big{(}(\mathscr{B}_{z}^{*})^{-1}[\boldsymbol{I}_{ nN}]\big{)}^{*}\widetilde{\boldsymbol{W}}\big{)}\operatorname{Tr}\Big{(} \widetilde{\boldsymbol{W}}\boldsymbol{G}(\zeta)\Big{)}\mathfrak{e}(t)\Big{]}\] (3.59) \[=\mathbb{E}\Big{[}\sum_{i,j=1}^{n}\sum_{p,q=1}^{N}\operatorname{ Tr}\Big{(}\big{(}E_{ji}\otimes\boldsymbol{E}_{qp}\big{)}\boldsymbol{G}(z)\frac{1}{ \boldsymbol{M}(z)}\big{(}(\mathscr{B}_{z}^{*})^{-1}[\boldsymbol{I}_{nN}]\big{)} ^{*}\widetilde{\boldsymbol{W}}(E_{ij}\otimes\boldsymbol{E}_{pq})\widetilde{ \boldsymbol{W}}\boldsymbol{G}(\zeta)\Big{)}\mathfrak{e}(t)\Big{]}\] (3.60) \[=\mathbb{E}\Big{[}\sum_{i,j=1}^{n}\sum_{p,q=1}^{N}\operatorname{ Tr}\Big{(}\big{(}E_{ji}\otimes\boldsymbol{E}_{qp}\big{)}\boldsymbol{G}(z)\frac{1}{ \boldsymbol{M}(z)}\big{(}(\mathscr{B}_{z}^{*})^{-1}[\boldsymbol{I}_{nN}] \big{)}^{*}\mathscr{S}[E_{ij}\otimes\boldsymbol{E}_{pq}]\boldsymbol{G}(\zeta 
)\Big{)}\mathfrak{e}(t)\Big{]}, \tag{3.61}\] where \(\{E_{ij}\otimes\boldsymbol{E}_{pq}:\,1\leq i,j\leq n,1\leq p,q\leq N\}\) is the standard basis in \(\mathbb{C}^{nN\times nN}=\mathbb{C}^{n\times n}\otimes\mathbb{C}^{N\times N}\), and in the last step we took the partial expectation with respect to \(\widetilde{\boldsymbol{W}}\) (see (3.21)). From (3.22) we see that the operator \(\mathscr{S}\) acts on the basis vectors as \[\mathscr{S}[E_{ij}\otimes\boldsymbol{E}_{pq}]=\left\{\begin{array}{ll}0,&\text{ if }p\neq q,\\ N^{-1}\Gamma[E_{ij}]\otimes\boldsymbol{I}_{N},&\text{ if }p=q.\end{array}\right. \tag{3.62}\] Plugging (3.62) and (3.57) into (3.61) leads to the simplified expression of (3.58), \[\mathbb{E}\Big{[}\operatorname{Tr}\Big{(}\mathscr{B}_{z}^{-1}\Big{[}\widetilde{\boldsymbol{W}}\boldsymbol{G}(z)\frac{1}{\boldsymbol{M}(z)}\Big{]}\Big{)}\operatorname{Tr}\Big{(}\widetilde{\boldsymbol{W}}\boldsymbol{G}(\zeta)\Big{)}\mathfrak{e}(t)\Big{]}\\ =\mathbb{E}\Big{[}\frac{1}{N}\sum_{i,j=1}^{n}\sum_{k,l=1}^{N}\operatorname{Tr}\Big{(}E_{ji}G_{lk}(z)\frac{1}{M(z)}\big{(}(\mathcal{B}_{z}^{*})^{-1}[I_{n}]\big{)}^{*}\Gamma[E_{ij}]G_{kl}(\zeta)\Big{)}\mathfrak{e}(t)\Big{]}. \tag{3.63}\] Since \(\big{(}\mathcal{B}_{z}[R]\big{)}^{*}=\mathcal{B}_{z}^{*}[R^{*}]\) for any \(R\in\mathbb{C}^{n\times n}\), we have that \(\big{(}(\mathcal{B}_{z}^{*})^{-1}[I_{n}]\big{)}^{*}=\mathcal{B}_{z}^{-1}[I_{n}]\). Together with (3.54) this gives the leading term in (3.30). It remains to show that \(|\mathcal{E}_{1}|\prec N^{-\tau}\), where \(\mathcal{E}_{1}\) is defined in terms of \(\mathcal{E}_{4},\mathcal{E}_{3}\) and \(\mathcal{E}_{2}\) through (3.55), (3.48), (3.44) and (3.40). First, from the bounds of the error terms (3.20) in the cumulant expansion formula (3.19), we have \[\|\mathcal{E}_{2}(z)\|_{\max}=O_{\prec}\big{(}(1+|t|^{3})N^{5\tau/2}N^{-1}\eta_{0}^{-1/2}\big{)} \tag{3.64}\] uniformly for \(z\in\Omega\).
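The relation \(\big{(}(\mathcal{B}_{z}^{*})^{-1}[I_{n}]\big{)}^{*}=\mathcal{B}_{z}^{-1}[I_{n}]\) used in the last step can be verified numerically for a generic operator of the form \(\mathcal{B}[R]=T^{-1}RT^{-1}-\Gamma[R]\) with \(\Gamma[R]=LRL^{*}+L^{*}RL\) (illustrative choices of \(T\) and \(L\); the adjoint is taken with respect to the scalar product (3.29), represented on vectorized matrices by the conjugate transpose):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
L = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Tinv = np.linalg.inv(T)

def B(R):
    # B[R] = T^{-1} R T^{-1} - (L R L* + L* R L), a complex-linear map
    return Tinv @ R @ Tinv - (L @ R @ L.conj().T + L.conj().T @ R @ L)

# Matrix of B acting on vec(R); w.r.t. <S,T> = Tr(S*T)/n the adjoint B*
# is represented by the conjugate transpose of this matrix.
basis = np.eye(n * n)
Bmat = np.column_stack([B(e.reshape(n, n)).reshape(-1) for e in basis])

vec_I = np.eye(n).reshape(-1)
Y = np.linalg.solve(Bmat.conj().T, vec_I).reshape(n, n)   # (B*)^{-1}[I_n]
X = np.linalg.solve(Bmat, vec_I).reshape(n, n)            # B^{-1}[I_n]

print(np.max(np.abs(Y.conj().T - X)))                     # numerically zero
```

The check succeeds because this \(\mathcal{B}\) satisfies \((\mathcal{B}[R])^{*}=\mathcal{B}^{*}[R^{*}]\) for arbitrary invertible \(T\); the specific structure of \(M(z)\) plays no role in the identity itself.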
Next we use the local laws (3.10) and (3.11) to estimate the error term \(\mathcal{E}_{4}\) from (3.48) and \(\mathcal{E}_{3}\) from (3.44). By (3.22) we see the identity \[\mathscr{S}[\boldsymbol{M}(z)-\boldsymbol{G}(z)]=\Gamma\Big{[}M(z)-\frac{1}{N} \sum_{j=1}^{N}G_{jj}(z)\Big{]}\otimes\boldsymbol{I}_{N}. \tag{3.65}\] Therefore, we can rewrite the term appearing in the definition of \(\mathcal{E}_{3}(z)\) in (3.44) as \[\mathscr{S}[\boldsymbol{M}(z)-\boldsymbol{G}(z)]\big{(}\boldsymbol{M}(z)- \boldsymbol{G}(z)\big{)}=\sum_{i,j=1}^{N}\Gamma\Big{[}M(z)-\frac{1}{N}\sum_{k=1}^ {N}G_{kk}(z)\Big{]}\big{(}M(z)\,\delta_{ij}-G_{ij}(z)\big{)}\otimes\boldsymbol {E}_{ij}. \tag{3.66}\] By applying the partial trace \((\mathrm{Id}_{n}\otimes\operatorname{Tr}_{N})\) to the above identity and using the averaged local law (3.11), we get \[\Big{\|}(\mathrm{Id}_{n}\otimes\operatorname{Tr}_{N})\Big{[}\mathscr{S}[ \boldsymbol{M}(z)-\boldsymbol{G}(z)]\big{(}\boldsymbol{M}(z)-\boldsymbol{G}(z) \big{)}\Big{]}\Big{\|}\prec\frac{1}{N(\operatorname{Im}z)^{2}} \tag{3.67}\] uniformly on \(\Omega\). It follows from Proposition A.1 that \(\|M^{-1}(z)\|\) and \(\|\mathcal{B}_{z}^{-1}\|\) are uniformly bounded on \(\Omega\). Using this, the bound on \(\mathcal{E}_{2}(z)\) from (3.64) and (3.67) we have that the estimate \[\|\mathcal{E}_{4}(z)\|\prec N^{-1}(\operatorname{Im}z)^{-2}+(1+|t|^{3})N^{5 \tau/2}\eta_{0}^{-1/2} \tag{3.68}\] holds uniformly on \(\Omega\). By Stokes' theorem, for any function \(H:\Omega\to\mathbb{C}\) with continuously differentiable real and imaginary parts \[\int_{\Omega}\frac{\partial H(z)}{\partial\overline{z}}d^{2}z=\frac{-\mathrm{ i}}{2}\int_{\partial\Omega}H(z)dz. 
\tag{3.69}\] Using now (3.69) and the analyticity of \(\operatorname{Tr}\mathcal{E}_{4}(z)\) on \(\Omega\), we rewrite the error term \(\mathcal{E}_{1}\) defined in (3.55) as \[\mathcal{E}_{1}=\frac{\mathrm{i}}{\pi}\int_{\Omega}\frac{\partial\tilde{f}(z )}{\partial\overline{z}}\operatorname{Tr}\mathcal{E}_{4}(z)\,d^{2}z=\frac{1 }{2\pi}\int_{\partial\Omega}\tilde{f}(z)\operatorname{Tr}\mathcal{E}_{4}(z) \,dz. \tag{3.70}\] Since \(\tilde{f}(z)\) vanishes everywhere on \(\partial\Omega\) except the lines \(|\operatorname{Im}z|=N^{-\tau}\eta_{0}\), we obtain from (3.1), (3.4) and (3.68) the estimate \[|\mathcal{E}_{1}|\prec\int_{E_{0}-\delta}^{E_{0}+\delta}\big{(}|f(x)|+N^{- \tau}\eta_{0}|f^{\prime}(x)|\big{)}\bigg{(}\frac{1}{N(N^{-\tau}\eta_{0})^{2}} +\frac{N^{5\tau/2}}{\sqrt{\eta_{0}}}\bigg{)}dx\leq\big{(}\|g\|_{1}+\|g^{\prime }\|_{1}\big{)}\Big{(}\frac{N^{2\tau}}{N\eta_{0}}+N^{5\tau/2}\sqrt{\eta_{0}} \Big{)}. \tag{3.71}\] From the assumption \(\tau\in(0,\min\{\gamma,(1-\gamma)\}/7)\) we have that \(N\eta_{0}=N^{1-\gamma}>N^{7\tau}\) and \(\sqrt{\eta_{0}}=N^{-\gamma/2}<N^{-7\tau/2}\), which together with (3.71) establishes the estimate for \(|\mathcal{E}_{1}|\). ### Equation for \(S_{ij}(z,\zeta)\) Our next step is to analyze the term \[S_{ij}(z,\zeta):=\frac{1}{N}\sum_{k,l=1}^{N}G_{lk}(z)\mathcal{B}_{z}^{-1}[I_{n }]M(z)\Gamma[E_{ij}]G_{kl}(\zeta) \tag{3.72}\] appearing in (3.30). It will be convenient to consider a more general quantity of the form \[G^{B}(z,\zeta):=\frac{1}{N}\sum_{k,l=1}^{N}G_{lk}(z)BG_{kl}(\zeta) \tag{3.73}\] with an arbitrary fixed deterministic \(B\in\mathbb{C}^{n\times n}\). We can then specialize \(G^{B}(z,\zeta)\) to \(S_{ij}(z,\zeta)\) by taking \(B=\mathcal{B}_{z}^{-1}[I_{n}]M(z)\Gamma[E_{ij}]\). We start by showing that \(G^{B}(z,\zeta)\) satisfies the following self-consistent equation with random error term. 
The derivation is reminiscent of the proofs used in some earlier works on the mesoscopic CLT for random matrices with independent entries (e.g., [45, Proposition 5.1 and Lemma 5.2]). **Lemma 3.5**.: _Let \(\gamma\in(0,1)\) and \(\tau\in(0,1-\gamma)\). Then for any \(B\in\mathbb{C}^{n\times n}\)_ \[G^{B}(z,\zeta)=M(z)BM(\zeta)+M(z)\Gamma\big{[}G^{B}(z,\zeta)\big{]}M(\zeta)+ \mathcal{E}_{1}^{B}(z,\zeta) \tag{3.74}\] _with the error term \(\mathcal{E}_{1}^{B}(z,\zeta)\) being analytic on \(\Omega\times\Omega\) and satisfying_ \[\mathbb{E}[\|\mathcal{E}_{1}^{B}(z,\zeta)\|]\prec\frac{1}{N^{1/2}(\min\{| \operatorname{Im}z|,|\operatorname{Im}\zeta|\})^{3/2}}. \tag{3.75}\] _uniformly for \((z,\zeta)\in\Omega\times\Omega\)._ Proof.: Similarly as in (3.8), we define the (left) matrix coefficients \(\{W_{ij}\}_{1\leq i,j\leq N}\) via \[\mathbf{W}=\sum_{i,j=1}^{N}W_{ij}\otimes\mathbf{E}_{ij},\quad W_{ij}\in\mathbb{C}^{n \times n}. \tag{3.76}\] Directly from the definition (3.17) we get \[W_{ij}=\sum_{\alpha=1}^{d}\bigg{(}L_{\alpha}\,x_{ij}^{(\alpha)}+L_{\alpha}^{*} \,\overline{x}_{ji}^{(\alpha)}\bigg{)}, \tag{3.77}\] where \(x_{ij}^{(\alpha)}\) are the entries of \(\mathbf{X}_{\alpha}\). For any \(i\in\{1,\dots,N\}\), we denote by \(\mathbf{X}_{\alpha}^{(i)}\) the random i.i.d. matrix \(\mathbf{X}_{\alpha}\) with the \(i\)-th row and \(i\)-th column removed. To make the notation consistent and easier to follow, we index the rows and columns of \(\mathbf{X}_{\alpha}^{(i)}\) by \(\{1,\ldots,N\}\setminus\{i\}\). 
The resolvent of the model with removed \(i\)-th rows and columns in each \(N\times N\) block is \[\mathbf{G}^{(i)}(z):=\bigg{(}K_{0}\otimes\mathbf{I}_{N-1}+\sum_{\alpha=1}^{d}\Big{(}L_{ \alpha}\otimes\mathbf{X}_{\alpha}^{(i)}+L_{\alpha}^{*}\otimes\big{(}\mathbf{X}_{\alpha }^{(i)}\big{)}^{*}\Big{)}-z\mathbf{I}_{n(N-1)}\Big{)}^{-1}, \tag{3.78}\] and the corresponding (left) coefficient matrices are \[\mathbf{G}^{(i)}(z)=\sum_{p,q\neq i}G_{pq}^{(i)}(z)\otimes\mathbf{E}_{pq}. \tag{3.79}\] With this notation, the Schur complement formula yields \[\frac{1}{G_{ii}(z)} =W_{ii}+K_{0}-z-\sum_{p,q\neq i}W_{ip}\,G_{pq}^{(i)}(z)\,W_{qi}, \tag{3.80}\] \[G_{ij}(z) =-G_{ii}(z)\sum_{p\neq i}W_{ip}\,G_{pj}^{(i)}(z)=-\sum_{p\neq j}G _{ip}^{(j)}(z)W_{pj}\,G_{jj}(z) \tag{3.81}\] for all \(1\leq i,j\leq N\), \(i\neq j\). Moreover, for \(l\neq k\) we have \[G_{lk}=M\frac{1}{G_{ll}}G_{lk}+(G_{ll}-M)\frac{1}{M+(G_{ll}-M)}G_{lk}, \tag{3.82}\] where we dropped the spectral parameter \(z\) for brevity. Denote \(\eta:=|\operatorname{Im}z|\). From the local law (3.10), the bounds \(\|G_{ll}-M\|\prec(N\eta)^{-1/2}\) and \(\|G_{lk}\|\prec(N\eta)^{-1/2}\) hold uniformly on \(\Omega\). The functions \(\|M\|\) and \(\|M^{-1}\|\) are uniformly bounded on \(\Omega\) (see Proposition A.1), which, in turn, implies that \(\|M\|\sim 1\) uniformly on \(\Omega\). After applying the local law (3.10) and the first equality in (3.81) to the first term in (3.82), we arrive at \[G_{lk}=-M\sum_{p\neq l}W_{lp}G_{pk}^{(l)}+O_{\prec}\Big{(}\frac{1}{N\eta}\Big{)}, \tag{3.83}\] where for any \(r\in\mathbb{N}\), any (random) \(\phi>0\) and (random) \(r\times r\) matrix \(\Psi\) we write \(\Psi=O_{\prec}(\phi)\) if \(\|\Psi\|\prec\phi\). Similarly, by putting \(M\) on the right-hand side in (3.82), we get that for \(k\neq l\) \[G_{kl}=-\sum_{p\neq l}G_{kp}^{(l)}W_{pl}M+O_{\prec}\Big{(}\frac{1}{N\eta}\Big{)}. 
\tag{3.84}\] By the local law (3.10) we also see that \[M\frac{1}{G_{ll}}G_{lk}=O_{\prec}\Big{(}\frac{1}{\sqrt{N\eta}}\Big{)},\quad G_{kl}\frac{1}{G_{ll}}M=O_{\prec}\Big{(}\frac{1}{\sqrt{N\eta}}\Big{)} \tag{3.85}\] holds for \(l\neq k\). Denoting \(\widehat{\eta}:=\min\{|\operatorname{Im}z|,|\operatorname{Im}\zeta|\}\) with \(N^{-\gamma-\tau}\lesssim\widehat{\eta}\lesssim N^{-\gamma}\) on \(\Omega\times\Omega\), we conclude that for \(l\neq k\) the identity \[G_{lk}(z)BG_{kl}(\zeta)=M(z)\sum_{p\neq l}W_{lp}G_{pk}^{(l)}(z)B\sum_{q\neq l}G_{kq}^{(l)}(\zeta)W_{ql}M(\zeta)+O_{\prec}\Big{(}\frac{1}{(N\widehat{\eta})^{\,3/2}}\Big{)} \tag{3.86}\] holds uniformly on \((z,\zeta)\in\Omega\times\Omega\). By construction, \(G_{pk}^{(l)}\) and \(G_{kq}^{(l)}\) are independent of \(W_{lp}\) and \(W_{ql}\). After taking the partial expectation with respect to \(\{W_{lp},W_{ql}\,:\,1\leq p,q\leq N\}\), denoted below by \(\mathbb{E}_{l}\), we rewrite (3.86) as \[G_{lk}(z)BG_{kl}(\zeta) =M(z)\Gamma\Big{[}\frac{1}{N}\sum_{p\neq l}G_{pk}^{(l)}(z)BG_{kp}^{(l)}(\zeta)\Big{]}M(\zeta) \tag{3.87}\] \[\qquad+M(z)(1-\mathbb{E}_{l})\Big{[}G_{lk}(z)BG_{kl}(\zeta)\Big{]}M(\zeta)+O_{\prec}\Big{(}\frac{1}{(N\widehat{\eta})^{\,3/2}}\Big{)}. \tag{3.88}\] In (3.87) the (linear) operator \(\Gamma\) appears as a result of the structure of \(W_{ij}\) (see (3.77)) after applying \(\mathbb{E}_{l}\) and using that for the complex i.i.d. matrices \(\mathbf{X}_{\alpha}=(x_{pq}^{(\alpha)})_{1\leq p,q\leq N}\) \[\mathbb{E}[x_{kp}^{(\alpha_{1})}x_{ql}^{(\alpha_{2})}]=0,\quad\mathbb{E}[x_{kp}^{(\alpha_{1})}\overline{x}_{ql}^{(\alpha_{2})}]=\delta_{\alpha_{1}\alpha_{2}}\delta_{kq}\delta_{pl}\frac{1}{N}. \tag{3.89}\] From the standard identity (a consequence of the Woodbury formula) \[G_{pk}^{(l)}=G_{pk}-G_{pl}\frac{1}{G_{ll}}G_{lk} \tag{3.90}\] holding for all \(l\notin\{p,k\}\), together with (3.10), we deduce that \[G_{pk}^{(l)}=G_{pk}+O_{\prec}\Big{(}\frac{1}{N\eta}\Big{)}.
\tag{3.91}\] This implies that \[G_{pk}^{(l)}(z)BG_{kp}^{(l)}(\zeta)=G_{pk}(z)BG_{kp}(\zeta)+\left\{\begin{array} []{ll}O_{\prec}\big{(}(N\widehat{\eta})^{-1}\big{)},&p=k,\\ O_{\prec}\big{(}(N\widehat{\eta})^{-3/2}\big{)},&p\neq k.\end{array}\right. \tag{3.92}\] Recall that we are dealing with the case \(k\neq l\), and \(\widehat{\eta}\gg N^{-1}\) for \((z,\zeta)\in\Omega\times\Omega\). We replace the \(G^{(l)}\) entries in (3.87) using formula (3.92) to get \[\frac{1}{N}\sum_{p\neq l}G_{pk}^{(l)}(z)BG_{kp}^{(l)}(\zeta)=\frac{1}{N}\sum_ {p=1}^{N}G_{pk}(z)BG_{kp}(\zeta)+O_{\prec}\Big{(}\frac{1}{(N\widehat{\eta})^{ 3/2}}\Big{)}. \tag{3.93}\] The boundedness of \(M\) and \(\Gamma\) (as an operator acting on \(\mathbb{C}^{n\times n}\)) implies that \[G_{lk}(z)BG_{kl}(\zeta)=\frac{1}{N}M(z)\Gamma\Big{[}\sum_{p=1}^{ N}G_{pk}(z)BG_{kp}(\zeta)\Big{]}M(\zeta) \tag{3.94}\] \[\qquad\qquad\qquad\qquad+M(z)(1-\mathbb{E}_{l})\Big{[}G_{lk}(z) BG_{kl}(\zeta)\Big{]}M(\zeta)+O_{\prec}\Big{(}\frac{1}{(N\widehat{\eta})^{3/2}} \Big{)} \tag{3.95}\] holds for \(k\neq l\). By the local law (3.10), \(\sum_{p=1}^{N}G_{pk}(z)BG_{kp}(\zeta)=O_{\prec}(1+\widehat{\eta}^{-1})\) for any \(k\in\{1,\ldots,N\}\). Therefore, summing the above equality over \(N-1\) indices \(l\) for \(l\neq k\) gives \[\sum_{l:\,l\neq k}G_{lk}(z)BG_{kl}(\zeta)=M(z)\Gamma\Big{[}\sum_ {p=1}^{N}G_{pk}(z)BG_{kp}(\zeta)\Big{]}M(\zeta) \tag{3.96}\] \[\qquad\qquad\qquad+M(z)\sum_{l:\,l\neq k}(1-\mathbb{E}_{l})\Big{[} G_{lk}(z)BG_{kl}(\zeta)\Big{]}M(\zeta)+O_{\prec}\Big{(}\frac{1}{N^{1/2} \widehat{\eta}^{3/2}}\Big{)} \tag{3.97}\] where we used that \(N^{-1}+(N\widehat{\eta})^{-1}\ll N^{-1/2}\widehat{\eta}^{-3/2}\) for \((z,\zeta)\in\Omega\times\Omega\). 
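The two linear-algebraic identities driving this expansion, the Schur complement formula (3.80) and the minor identity (3.90), hold for any matrix with the required invertibility, so they can be sanity-checked numerically. The following Python sketch (an illustration, not part of the argument; all variable names are ours) verifies both in the scalar case \(n=1\) for a random complex matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
N, z, l = 8, 6.0 + 0.5j, 2  # z chosen away from the spectrum so G(z) is well conditioned

# Random complex matrix standing in for the full coefficient matrix in the n = 1 case
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

G = np.linalg.inv(A - z * np.eye(N))  # resolvent G(z)
keep = [k for k in range(N) if k != l]
# Resolvent G^{(l)}(z) of the minor with the l-th row and column removed
Gl = np.linalg.inv(A[np.ix_(keep, keep)] - z * np.eye(N - 1))

# Schur complement formula, cf. (3.80): 1/G_ll = A_ll - z - sum_{p,q != l} A_lp G^{(l)}_pq A_ql
assert abs(1 / G[l, l] - (A[l, l] - z - A[l, keep] @ Gl @ A[keep, l])) < 1e-9

# Minor identity, cf. (3.90): G^{(l)}_pk = G_pk - G_pl G_lk / G_ll for p, k != l
for a, p in enumerate(keep):
    for b, k in enumerate(keep):
        assert abs(Gl[a, b] - (G[p, k] - G[p, l] * G[l, k] / G[l, l])) < 1e-9
print("Schur complement (3.80) and minor identity (3.90) verified numerically")
```

Both checks are exact matrix identities, so they pass to machine precision for any choice of \(z\) at which the relevant inverses exist.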
Finally, in order to obtain the first term on the right hand side of (3.74), we use the local law (3.10) and apply it to the diagonal terms \(G_{kk}\), \[G_{kk}(z)BG_{kk}(\zeta)=M(z)BM(\zeta)+O_{\prec}\Big{(}\frac{1}{\sqrt{N\widehat{\eta}}}\Big{)}, \tag{3.98}\] which together with (3.96) gives \[\sum_{l=1}^{N}G_{lk}(z)BG_{kl}(\zeta)=M(z)BM(\zeta)+M(z)\Gamma\Big{[}\sum_{p=1}^{N}G_{pk}(z)BG_{kp}(\zeta)\Big{]}M(\zeta) \tag{3.99}\] \[\qquad\qquad\qquad\qquad+M(z)\sum_{l\neq k}(1-\mathbb{E}_{l})\Big{[}G_{lk}(z)BG_{kl}(\zeta)\Big{]}M(\zeta)+O_{\prec}\Big{(}\frac{1}{N^{1/2}\widehat{\eta}^{3/2}}\Big{)}. \tag{3.100}\] It remains to estimate \(D_{k}:=\sum_{l\neq k}(1-\mathbb{E}_{l})\big{[}G_{lk}(z)BG_{kl}(\zeta)\big{]}\). For this we use the decomposition \[D_{k}D_{k}^{*}=\sum_{l:\,l\neq k}(1-\mathbb{E}_{l})\Big{[}G_{lk}(z)BG_{kl}(\zeta)\Big{]}\Big{(}(1-\mathbb{E}_{l})\Big{[}G_{lk}(z)BG_{kl}(\zeta)\Big{]}\Big{)}^{*} \tag{3.101}\] \[\qquad\qquad\qquad\qquad+\sum_{\begin{subarray}{c}l,j:\,l\neq k,\,j\neq k,\\ l\neq j\end{subarray}}(1-\mathbb{E}_{l})\Big{[}G_{lk}(z)BG_{kl}(\zeta)\Big{]}\Big{(}(1-\mathbb{E}_{j})\Big{[}G_{jk}(z)BG_{kj}(\zeta)\Big{]}\Big{)}^{*}. \tag{3.102}\] By the local law the first term is of order \(O_{\prec}(N^{-1}\widehat{\eta}^{-2})\). For the second term we use (3.90) to see \[(1-\mathbb{E}_{l})\Big{[}G_{lk}(z)BG_{kl}(\zeta)\Big{]}\bigg{(}(1-\mathbb{E}_{j})\Big{[}G_{jk}(z)BG_{kj}(\zeta)\Big{]}\bigg{)}^{*}=(1-\mathbb{E}_{l})\Big{[}\Big{(}G_{lk}^{(j)}(z)+G_{lj}(z)\frac{1}{G_{jj}(z)}G_{jk}(z)\Big{)}B\Big{(}G_{kl}^{(j)}(\zeta)+G_{kj}(\zeta)\frac{1}{G_{jj}(\zeta)}G_{jl}(\zeta)\Big{)}\Big{]} \tag{3.103}\] \[\qquad\qquad\times\bigg{(}(1-\mathbb{E}_{j})\Big{[}\Big{(}G_{jk}^{(l)}(z)+G_{jl}(z)\frac{1}{G_{ll}(z)}G_{lk}(z)\Big{)}B\Big{(}G_{kj}^{(l)}(\zeta)+G_{kl}(\zeta)\frac{1}{G_{ll}(\zeta)}G_{lj}(\zeta)\Big{)}\Big{]}\bigg{)}^{*}. \tag{3.104}\] The expectation of the summands in (3.102) vanishes, i.e., \[\mathbb{E}\Big{[}(1-\mathbb{E}_{l})\big{[}G_{lk}^{(j)}(z)BG_{kl}^{(j)}(\zeta)\big{]}(1-\mathbb{E}_{j})\big{[}G_{jk}^{(l)}(z)BG_{kj}^{(l)}(\zeta)\big{]}\Big{]}=0.
\tag{3.106}\] Therefore, the expectation of \(D_{k}D_{k}^{*}\) can be written as a sum of at most \(O(N^{2})\) non-zero terms, each containing a product of at least \(6\) off-diagonal coordinate matrices of \(\mathbf{G}\) or \(\mathbf{G}^{(l)}\). By the local law (3.10) each such product is of order \(O_{\prec}((N\widehat{\eta})^{-3})\). All the terms in (3.101) are also deterministically bounded by a sufficiently high power of \(1/\widehat{\eta}\), which, in particular, means that \(\|D_{k}D_{k}^{*}\|\leq N^{C}\) for some \(C>0\). Combining this with the stochastic domination estimate \(D_{k}D_{k}^{*}\prec N^{-1}\widehat{\eta}^{-3}\) (defined in (3.9)), we obtain an estimate for the expectation \(\mathbb{E}[D_{k}D_{k}^{*}]=O_{\prec}(N^{-1}\widehat{\eta}^{-3})\). Notice that \(\mathbb{E}[\|D_{k}\|_{\rm HS}^{2}]=\mathbb{E}[{\rm Tr}(D_{k}D_{k}^{*})/n]=O_{\prec}(N^{-1}\widehat{\eta}^{-3})\), where \(\|\cdot\|_{\rm HS}\) is the Hilbert-Schmidt norm on \(\mathbb{C}^{n\times n}\) induced by the scalar product (3.29). All the functions \(M\), \(G_{ij}\), \(M^{-1}\), \(G_{ii}^{-1}\) used in the above proof are analytic on \(\Omega\). By collecting the error terms in (3.100) (\(O_{\prec}(N^{-1/2}\widehat{\eta}^{-3/2})\) and \(D_{k}\)), taking the average over \(k\in\{1,\ldots,N\}\) (see (3.99)), denoting the resulting error by \(\mathcal{E}_{1}^{B}(z,\zeta)\) and using the equivalence of norms on \(\mathbb{C}^{n\times n}\), we obtain (3.75), which finishes the proof. ### Self-consistent equation for \(G^{B}(z,\zeta)\) In this section we study the limiting behavior of \(G^{B}(z,\zeta)\) introduced in (3.73) through the analysis of the self-consistent equation (3.74). More precisely, for any \(z,\zeta\in\Omega\) and \(B\in\mathbb{C}^{n\times n}\) we consider the equation \[M^{B}(z,\zeta)=M(z)BM(\zeta)+M(z)\Gamma\big{[}M^{B}(z,\zeta)\big{]}M(\zeta) \tag{3.107}\] for \(M^{B}(z,\zeta)\).
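In the scalar case \(n=1\) with \(K_{0}=0\) and \(\Gamma[r]=r\), the function \(M(z)\) reduces to the Stieltjes transform \(m(z)\) of the semicircle law, and (3.107) is solved in closed form by \(m^{B}(z,\zeta)=m(z)\,b\,m(\zeta)/(1-m(z)m(\zeta))\). The dichotomy established in Lemma 3.6 below is then visible numerically: for \(z\) and \(\zeta\) in the same half-plane the denominator stays bounded away from zero, while for \(\zeta=\overline{z}\) in the bulk one has \(1-|m(z)|^{2}\sim|\operatorname{Im}z|\), so that \((z-\zeta)\,m^{B}(z,\zeta)\) converges to \(2\mathrm{i}\operatorname{Im}m(E_{0})\), in agreement with the singular term in (3.149). A hedged numerical sketch (a scalar illustration only, not the general setting of the text):

```python
import numpy as np

def m(z):
    """Semicircle Stieltjes transform: solves m^2 + z*m + 1 = 0 with Im m(z) * Im z > 0."""
    z = complex(z)
    return (-z + np.sqrt(z - 2) * np.sqrt(z + 2)) / 2

E0, b = 0.0, 1.0
for eta in (1e-2, 1e-3, 1e-4):
    z, zeta = E0 + 1j * eta, E0 - 1j * eta           # opposite half-planes, z in Omega^+
    mB = m(z) * b * m(zeta) / (1 - m(z) * m(zeta))   # scalar solution of (3.107)
    print(eta, (z - zeta) * mB)                      # approaches 2i * Im m(E0) = 2i here

z = zeta = E0 + 1j * 1e-4                            # same half-plane: no singularity
print(m(z) * b * m(zeta) / (1 - m(z) * m(zeta)))     # stays O(1), cf. (3.110)
```

The principal branch of the complex square root, applied factor by factor, selects the correct branch of \(m\) in both half-planes, so \(m(\overline{z})=\overline{m(z)}\) automatically.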
For any \(Q_{1},Q_{2}\in\mathbb{C}^{n\times n}\) we denote by \(\mathcal{C}_{Q_{1},Q_{2}}\) an operator on \(\mathbb{C}^{n\times n}\) given by \[\mathcal{C}_{Q_{1},Q_{2}}[R]:=Q_{1}RQ_{2} \tag{3.108}\] for all \(R\in\mathbb{C}^{n\times n}\). The operator \(\mathcal{C}_{M(z),M(\zeta)}\) is invertible, and \(\mathcal{C}_{M(z),M(\zeta)}^{-1}=\mathcal{C}_{\frac{1}{M(z)},\frac{1}{M(\zeta)}}\). With this notation, equation (3.107) reads \[\mathcal{B}_{z,\zeta}\big{[}M^{B}(z,\zeta)\big{]}=B, \tag{3.109}\] where we defined \(\mathcal{B}_{z,\zeta}:=\mathcal{C}_{M(z),M(\zeta)}^{-1}-\Gamma\). If the operator \(\mathcal{B}_{z,\zeta}\) is invertible, then \(M^{B}(z,\zeta)\) is uniquely determined from (3.109). Here it is sufficient to restrict the parameters \(z\) and \(\zeta\) to a small neighborhood of \(E_{0}\). Therefore, we show that \(\mathcal{B}_{z,\zeta}\) is invertible in a sufficiently small region around the point \((E_{0},E_{0})\in\mathbb{R}\times\mathbb{R}\). The bound for \(\|\mathcal{B}_{z,\zeta}^{-1}\|\) depends on whether \(z\) and \(\zeta\) belong to the same half-plane or not. Denote \(\Omega^{+}:=\Omega\cap\mathbb{C}_{+}\) and \(\Omega^{-}:=\Omega\cap\mathbb{C}_{-}\). **Lemma 3.6** (Invertibility of \(\mathcal{B}_{z,\zeta}\)).: _Let \(\gamma\in(0,1)\) and \(\tau\in(0,\gamma/2)\). Then there exists \(C>0\) such that the following holds_ 1. _Uniformly for_ \((z,\zeta)\in\big{(}\Omega^{+}\times\Omega^{+}\big{)}\cup\big{(}\Omega^{-}\times \Omega^{-}\big{)}\)__ \[\big{\|}\,\mathcal{B}_{z,\zeta}^{-1}\,\big{\|}\leq C;\] (3.110) 2. 
_Uniformly for_ \((z,\zeta)\in\big{(}\Omega^{-}\times\Omega^{+}\big{)}\cup\big{(}\Omega^{+}\times\Omega^{-}\big{)}\)__ \[\big{\|}\,\mathcal{B}_{z,\zeta}^{-1}\,\big{\|}\leq\frac{C}{|z-\zeta|},\] (3.111) _and the operator_ \(\mathcal{B}_{z,\zeta}^{-1}\) _admits the decomposition_ \[\mathcal{B}_{z,\zeta}^{-1}=\vartheta\,\frac{2{\rm i}}{\langle{\rm Im}\,M(E_{0})\rangle}\frac{1}{z-\zeta}\,{\rm Im}\,M(E_{0})\langle{\rm Im}\,M(E_{0}),\,\cdot\,\rangle+\mathcal{J}_{z,\zeta},\] (3.112) _where_ \(\vartheta=1\) _if_ \((z,\zeta)\in\Omega^{+}\times\Omega^{-}\)_,_ \(\vartheta=-1\) _if_ \((z,\zeta)\in\Omega^{-}\times\Omega^{+}\)_, and_ \(\|\mathcal{J}_{z,\zeta}\|\) _is uniformly bounded for_ \((z,\zeta)\in\big{(}\Omega^{-}\times\Omega^{+}\big{)}\cup\big{(}\Omega^{+}\times\Omega^{-}\big{)}\)_._ Proof.: Consider first the case (i), and assume that both \(z\) and \(\zeta\) are in the upper half-plane, i.e., \(z,\zeta\in\Omega^{+}\). The case \(z,\zeta\in\Omega^{-}\) follows analogously. For \(z,\zeta\in\Omega^{+}\) we write \[\mathcal{B}_{z,\zeta}=\mathcal{C}_{M(z),M(\zeta)}^{-1}-\Gamma=\mathcal{B}_{z}+\mathcal{C}_{M(z),M(\zeta)}^{-1}-\mathcal{C}_{M(z),M(z)}^{-1}, \tag{3.113}\] where \(\mathcal{B}_{z}\) was introduced in (3.26). By the definition (3.108), we have that \[\mathcal{C}_{M(z),M(\zeta)}^{-1}-\mathcal{C}_{M(z),M(z)}^{-1}=\mathcal{C}_{\frac{1}{M(z)},\frac{1}{M(\zeta)}-\frac{1}{M(z)}}=O(|z-\zeta|) \tag{3.114}\] from the analyticity of \(M(z)^{-1}\) on \(\mathbb{C}_{+}\). Therefore, using the boundedness of \(\mathcal{B}_{z}^{-1}\) from (3.27), we get (3.110). Now suppose that \(z\) and \(\zeta\) are in different half-planes, and consider first the case \(z\in\mathbb{C}_{-}\) and \(\zeta\in\mathbb{C}_{+}\). Denote \(M_{0}:=\lim_{y\downarrow 0}M(E_{0}+\mathrm{i}\,y)\) for brevity.
From part (ii) of Proposition A.1 we have that \(\operatorname{Im}M(z)\sim\rho(z)I_{n}\) uniformly on \(\mathbb{C}_{+}\), which together with \(\rho(E_{0})\sim 1\) implies \[\operatorname{Im}M_{0}\sim I_{n}. \tag{3.115}\] We see that \(\lim_{y\downarrow 0}\big{(}M(E_{0}+\mathrm{i}\,y)-M(E_{0}-\mathrm{i}\,y)\big{)}=2\mathrm{i}\,\operatorname{Im}M_{0}\) is a non-zero matrix, and thus \(\|\mathcal{C}_{\frac{1}{M(z)},\frac{1}{M(\zeta)}-\frac{1}{M(z)}}\|=O(1)\). Therefore, the perturbation argument used in (3.113) and (3.114) to control \(\mathcal{B}_{z,\zeta}\) through comparison with \(\mathcal{B}_{z}\) and the analyticity of \(M(z)\) in the upper (lower) half-plane is not applicable anymore. Instead, we consider the perturbation of \(\mathcal{B}_{z,\zeta}\) around the operator \(\mathcal{B}_{E_{0},E_{0}}\) given by \[\mathcal{B}_{E_{0},E_{0}}:=\mathcal{C}_{M_{0}^{*},M_{0}}^{-1}-\Gamma. \tag{3.116}\] For convenience, introduce the centered variables \(w:=z-E_{0}\in\mathbb{C}_{-}\), \(\xi:=\zeta-E_{0}\in\mathbb{C}_{+}\), so that \[\mathcal{B}_{z,\zeta}=\mathcal{B}_{E_{0}+w,E_{0}+\xi}=\mathcal{B}_{E_{0},E_{0}}+\mathcal{C}_{M(E_{0}+w),M(E_{0}+\xi)}^{-1}-\mathcal{C}_{M_{0}^{*},M_{0}}^{-1}. \tag{3.117}\] For \((z,\zeta)\in\Omega^{-}\times\Omega^{+}\) the variables \(w\) and \(\xi\) remain in their corresponding half-planes, the perturbation \(\mathcal{C}_{M(E_{0}+w),M(E_{0}+\xi)}^{-1}-\mathcal{C}_{M_{0}^{*},M_{0}}^{-1}\) is an analytic function in \(w\) and \(\xi\), and thus we can apply the analytic perturbation theory to control the invertibility of \(\mathcal{B}_{z,\zeta}\) on \(\Omega^{-}\times\Omega^{+}\). We start by collecting the necessary properties of the spectrum of \(\mathcal{B}_{E_{0},E_{0}}\).
Firstly, by taking the imaginary part of the MDE (2.7) at \(z=E_{0}\in\mathbb{R}\) we find that \[\mathcal{B}_{E_{0},E_{0}}\big{[}\operatorname{Im}M_{0}\big{]}=0,\quad\mathcal{B}_{E_{0},E_{0}}^{*}\big{[}\operatorname{Im}M_{0}\big{]}=0, \tag{3.118}\] where we used that the adjoint operator to \(\mathcal{B}_{E_{0},E_{0}}\) with respect to the scalar product (3.29) is given by \[\mathcal{B}_{E_{0},E_{0}}^{*}=\mathcal{C}_{M_{0},M_{0}^{*}}^{-1}-\Gamma. \tag{3.119}\] This means that \(\mathcal{B}_{E_{0},E_{0}}\) is not invertible, and the kernels of both \(\mathcal{B}_{E_{0},E_{0}}\) and \(\mathcal{B}_{E_{0},E_{0}}^{*}\) contain the eigenvector \(\operatorname{Im}M_{0}\). We now show that \[\dim\bigl{(}\ker\bigl{(}\mathcal{B}_{E_{0},E_{0}}\bigr{)}\bigr{)}=\dim\bigl{(}\ker\bigl{(}\mathcal{B}_{E_{0},E_{0}}^{*}\bigr{)}\bigr{)}=1. \tag{3.120}\] To this end, we use the balanced polar decomposition of \(M_{0}\) from [4, Eq. (3.1)] \[M_{0}=Q^{*}UQ \tag{3.121}\] with unitary \(U\in\mathbb{C}^{n\times n}\) and invertible \(Q\in\mathbb{C}^{n\times n}\). Notice that the decomposition (3.121) is well defined since \(\operatorname{Im}M_{0}\) is (strictly) positive definite as shown in (3.115). The operator \(\mathcal{B}_{E_{0},E_{0}}\) can be written in terms of \(U\) and \(Q\) as \[\mathcal{B}_{E_{0},E_{0}}=\mathcal{C}_{Q,Q^{*}}^{-1}\mathcal{C}_{U^{*},U}^{-1}\mathcal{C}_{Q^{*},Q}^{-1}-\Gamma=\mathcal{C}_{Q,Q^{*}}^{-1}\Bigl{(}\mathcal{C}_{U^{*},U}^{-1}-\mathcal{F}\Bigr{)}\mathcal{C}_{Q^{*},Q}^{-1}, \tag{3.122}\] where \(\mathcal{F}:=\mathcal{C}_{Q,Q^{*}}\Gamma\,\mathcal{C}_{Q^{*},Q}\) is a self-adjoint and positivity preserving operator. Similarly, we can rewrite the MDE (2.7) in terms of \(U\), \(Q\) and \(\mathcal{F}\) as \[-\frac{1}{U}=E_{0}QQ^{*}-QK_{0}Q^{*}+\mathcal{F}\bigl{[}U\bigr{]}. \tag{3.123}\] Notice that by (3.115) and (3.121) we have that \(\operatorname{Im}U\geq cI_{n}\) for some \(c>0\).
Since \(U\) is unitary, the above equation yields \[\operatorname{Im}U=\mathcal{F}\big{[}\operatorname{Im}U\big{]}. \tag{3.124}\] Let \(\|\mathcal{F}\|_{2}\) denote the operator norm of \(\mathcal{F}\) induced by the Hilbert-Schmidt norm on \(\mathbb{C}^{n\times n}\) with the scalar product (3.29). It follows from (3.124) and the properties (A.29) and (A.31) of \(\mathcal{F}\) discussed in Appendix A that \(\|\mathcal{F}\|_{2}=1\) is a simple eigenvalue of \(\mathcal{F}\), and the operator \(\mathcal{F}\) has a spectral gap \[\operatorname{Spec}(\mathcal{F})\subset[-1+\kappa,1-\kappa]\cup\{1\} \tag{3.125}\] for some \(\kappa>0\) sufficiently small. Denote by \(F:=\operatorname{Im}U/\|\operatorname{Im}U\|_{\mathrm{HS}}\) the normalized eigenvector of \(\mathcal{F}\) corresponding to the eigenvalue \(\|\mathcal{F}\|_{2}=1\). Since \(F\) commutes with \(U\) and \(U^{*}\), and \(\mathcal{F}\) is self-adjoint, we get that \[\left(\mathcal{C}^{-1}_{U^{*},U}-\mathcal{F}\right)\big{[}F\big{]}=0,\qquad\left(\mathcal{C}^{-1}_{U^{*},U}-\mathcal{F}\right)^{*}\big{[}F\big{]}=0. \tag{3.126}\] Moreover, if \(Y\in\mathbb{C}^{n\times n}\) is such that \(\langle Y,F\rangle=0\), then (3.125) and the fact that \(\|\mathcal{F}\|_{2}=1\) is a simple eigenvalue imply that \[\big{\|}\big{(}\mathcal{C}^{-1}_{U^{*},U}-\mathcal{F}\big{)}\big{[}Y\big{]}\big{\|}_{\mathrm{HS}}\gtrsim\|Y\|_{\mathrm{HS}}. \tag{3.127}\] We conclude that \(\dim\bigl{(}\ker\bigl{(}\mathcal{C}^{-1}_{U^{*},U}-\mathcal{F}\bigr{)}\bigr{)}=1\), which together with (3.122) and the invertibility of \(Q\) implies (3.120). From (3.118) and (3.120) we know that \(\mathcal{B}_{E_{0},E_{0}}\) has eigenvalue \(0\) with geometric multiplicity \(1\), and the corresponding left and right eigenvectors coincide and are equal to \(\operatorname{Im}M_{0}\). If we assume that the algebraic multiplicity of the eigenvalue at \(0\) is greater than \(1\), then the corresponding Jordan chain contains a generalized eigenvector \(T\in\mathbb{C}^{n\times n}\) such that \(\mathcal{B}_{E_{0},E_{0}}[T]=\operatorname{Im}M_{0}\).
This implies that \[\langle\operatorname{Im}M_{0},\operatorname{Im}M_{0}\rangle=\langle\operatorname{Im}M_{0},\mathcal{B}_{E_{0},E_{0}}[T]\rangle=\langle\mathcal{B}^{*}_{E_{0},E_{0}}[\operatorname{Im}M_{0}],T\rangle=0, \tag{3.128}\] which contradicts \(\|\operatorname{Im}M_{0}\|_{\mathrm{HS}}\geq\theta\). We, therefore, conclude that the algebraic multiplicity of the eigenvalue \(0\) of \(\mathcal{B}_{E_{0},E_{0}}\) is also equal to \(1\). Since zero is a simple eigenvalue, we can apply the analytic perturbation theory of non-Hermitian operators to control the spectrum of \(\mathcal{B}_{z,\zeta}\). At the same time, the dimension of the space \(\mathbb{C}^{n\times n}\) is a fixed model parameter independent of \(N\). Therefore, we can find a sufficiently small \(\varepsilon>0\) such that \(\operatorname{Spec}(\mathcal{B}_{E_{0},E_{0}})\cap\{v\in\mathbb{C}\,:\,|v|<\varepsilon\}=\{0\}\) and \[\sup_{|v|=\varepsilon}\Big{(}\big{\|}\big{(}\mathcal{B}_{E_{0},E_{0}}-v\mathrm{Id}\big{)}^{-1}\big{\|}+\big{\|}\big{(}\mathcal{B}^{*}_{E_{0},E_{0}}-v\mathrm{Id}\big{)}^{-1}\big{\|}\Big{)}\lesssim 1. \tag{3.129}\] Recall that the function \(M(z)\) defined in (2.13) has a jump at \(z=E_{0}\in\mathbb{R}\). In order to apply the analytic perturbation theory, we will restrict \(M(z)\) to the set \(\mathbb{C}_{+}\) or \(\mathbb{C}_{-}\) depending on whether \(z\in\Omega^{+}\) or \(z\in\Omega^{-}\), and then use part (v) of Proposition A.1 to extend it analytically to a neighborhood of \(E_{0}\in\mathbb{R}\) containing \(\Omega\). Thus, for sufficiently small \(\varepsilon>0\) we define \(M_{+}:\mathbb{C}_{+}\cup\{z\,:\,|z-E_{0}|<\varepsilon\}\to\mathbb{C}^{n\times n}\) such that \(M_{+}(z)=M(z)\) on \(\mathbb{C}_{+}\) and \(M_{+}\) is analytic on \(\mathbb{C}_{+}\cup\{z\,:\,|z-E_{0}|<\varepsilon\}\).
Similarly, we define \(M_{-}:\mathbb{C}_{-}\cup\{z\,:\,|z-E_{0}|<\varepsilon\}\to\mathbb{C}^{n\times n}\) such that \(M_{-}(z)=M(z)\) on \(\mathbb{C}_{-}\) and \(M_{-}\) is analytic on \(\mathbb{C}_{-}\cup\{z\,:\,|z-E_{0}|<\varepsilon\}\). In particular, if we denote \(M_{0}:=\lim_{y\downarrow 0}M(E_{0}+\mathrm{i}\,y)\), then \(M_{+}(E_{0})=M_{0}\) and \(M_{-}(E_{0})=M_{0}^{*}\). Consider the (analytic) perturbation of \(\mathcal{B}_{E_{0},E_{0}}\) by \(\mathcal{D}_{w,\xi}\), where \[\mathcal{D}_{w,\xi}:=\mathcal{C}^{-1}_{M_{-}(E_{0}+w),M_{+}(E_{0}+\xi)}-\mathcal{C}^{-1}_{M_{0}^{*},M_{0}}, \tag{3.130}\] so that \(\mathcal{B}_{z,\zeta}=\mathcal{B}_{E_{0},E_{0}}+\mathcal{D}_{w,\xi}\). Denote by \(L:=\operatorname{Im}M_{0}/\|\operatorname{Im}M_{0}\|_{\mathrm{HS}}\) the (left and right) normalized eigenvector of \(\mathcal{B}_{E_{0},E_{0}}\) corresponding to the eigenvalue \(0\). Then it follows from the analytic perturbation theory (see, e.g., [4, Lemma C.1]) that \(\mathcal{B}_{z,\zeta}\) has a unique eigenvalue inside the disk \(\{v\,:\,|v|\leq\varepsilon\}\), which we denote by \(\lambda_{z,\zeta}\), that satisfies \[\lambda_{z,\zeta}=\langle L,\mathcal{D}_{w,\xi}[L]\rangle+O(|w|^{2}+|\xi|^{2}). \tag{3.131}\] In order to separate the leading term in (3.131), we calculate the derivatives of \(\mathcal{D}_{w,\xi}\) at \((w,\xi)=(0,0)\), \[\big{(}\partial_{w}\mathcal{D}_{w,\xi},\,\partial_{\xi}\mathcal{D}_{w,\xi}\big{)}\Big{|}_{(w,\xi)=(0,0)}=\Big{(}\mathcal{C}_{-\frac{1}{M_{0}^{*}}(M_{0}^{\prime})^{*}\frac{1}{M_{0}^{*}},\,\frac{1}{M_{0}}},\;\mathcal{C}_{\frac{1}{M_{0}^{*}},\,-\frac{1}{M_{0}}M_{0}^{\prime}\frac{1}{M_{0}}}\Big{)} \tag{3.132}\] with \(M_{0}^{\prime}:=\lim_{y\downarrow 0}M^{\prime}(E_{0}+\mathrm{i}\,y)\).
The first order approximation of \(\mathcal{D}_{w,\xi}\) gives \[\lambda_{z,\zeta}=-\overline{\alpha}\,w-\alpha\,\xi+O(|w|^{2}+|\xi|^{2}), \tag{3.133}\] where \[\alpha:=\Big{\langle}L,\frac{1}{M_{0}^{*}}L\frac{1}{M_{0}}M_{0}^{\prime}\frac{1}{M_{0}}\Big{\rangle}. \tag{3.134}\] We now show that \(\operatorname{Re}\alpha=0\) and \(\operatorname{Im}\alpha\geq c\) for some \(c>0\). For brevity, the \(0\) subscript will be dropped, and we will write \(M:=M_{0}\) and \(M^{\prime}:=M_{0}^{\prime}\). Recall that \(L=\operatorname{Im}M/\|\operatorname{Im}M\|_{\mathrm{HS}}\), \(\operatorname{Im}M\) is Hermitian and positive definite, and thus \(\alpha\|\operatorname{Im}M\|_{\mathrm{HS}}^{2}\) is equal to \(\Big{\langle}\bigl{(}\operatorname{Im}M\bigr{)}\frac{1}{M^{*}}\bigl{(}\operatorname{Im}M\bigr{)}\frac{1}{M}M^{\prime}\frac{1}{M}\Big{\rangle}\). After differentiating the Dyson equation (2.7) at \(z=E_{0}\), we obtain the relation \[\frac{1}{M}M^{\prime}\frac{1}{M}=I_{n}+\Gamma[M^{\prime}]. \tag{3.135}\] Multiplying the above equation by \(\operatorname{Im}M\) on the left and taking the normalized trace yields \[\Big{\langle}\operatorname{Im}M\frac{1}{M}M^{\prime}\frac{1}{M}\Big{\rangle}=\langle\operatorname{Im}M\rangle+\big{\langle}\operatorname{Im}M\,\Gamma[M^{\prime}]\big{\rangle}=\langle\operatorname{Im}M\rangle+\big{\langle}\Gamma[\operatorname{Im}M]M^{\prime}\big{\rangle}, \tag{3.136}\] where in the second equality we used that \(\Gamma\) is self-adjoint with respect to the scalar product (3.29). From the imaginary part of the Dyson equation we know that \[\Gamma[\operatorname{Im}M]=\frac{1}{M^{*}}\operatorname{Im}M\frac{1}{M}.
\tag{3.137}\] Plugging the above identity into (3.136) gives \[\Big{\langle}\operatorname{Im}M\frac{1}{M}M^{\prime}\frac{1}{M}\Big{\rangle}= \langle\operatorname{Im}M\rangle+\Big{\langle}\frac{1}{M^{*}}\operatorname{ Im}M\frac{1}{M}M^{\prime}\Big{\rangle}=\langle\operatorname{Im}M\rangle+\Big{\langle} \operatorname{Im}M\frac{1}{M}M^{\prime}\frac{1}{M^{*}}\Big{\rangle}. \tag{3.138}\] After moving the second term on the right-hand side to the left-hand side we get that \[\langle\operatorname{Im}M\rangle =\Big{\langle}\operatorname{Im}M\frac{1}{M}M^{\prime}\frac{1}{M} \Big{\rangle}-\Big{\langle}\operatorname{Im}M\frac{1}{M}M^{\prime}\frac{1}{M^ {*}}\Big{\rangle}=\Big{\langle}\operatorname{Im}M\frac{1}{M}M^{\prime}\Big{(} \frac{1}{M}-\frac{1}{M^{*}}\Big{)}\Big{\rangle} \tag{3.139}\] \[=-2\mathrm{i}\,\Big{\langle}\operatorname{Im}M\frac{1}{M}M^{ \prime}\Big{(}\frac{1}{M}\operatorname{Im}M\frac{1}{M^{*}}\Big{)}\Big{\rangle} =-2\mathrm{i}\,\alpha\|\operatorname{Im}M\|^{2}_{\mathrm{HS}}. \tag{3.140}\] Since \(\langle\operatorname{Im}M\rangle\geq\theta\), we conclude that \[\alpha=\frac{\mathrm{i}}{2}\frac{\langle\operatorname{Im}M\rangle}{\| \operatorname{Im}M\|^{2}_{\mathrm{HS}}}, \tag{3.141}\] and thus \(\operatorname{Re}\alpha=0\) and \(\operatorname{Im}\alpha>0\). In particular, (3.133) becomes \[\lambda_{z,\zeta}=\frac{\mathrm{i}}{2}\frac{\langle\operatorname{Im}M\rangle}{ \|\operatorname{Im}M\|^{2}_{\mathrm{HS}}}\,(w-\xi)+O(|w|^{2}+|\xi|^{2}). \tag{3.142}\] The assumption \(\tau<\gamma/2\) ensures that \((|w|^{2}+|\xi|^{2})=o(|w-\xi|)\) on \(\Omega\times\Omega\), and thus (3.142) gives the leading term in the expansion of \(\lambda_{z,\zeta}\). 
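The value (3.141) can be cross-checked numerically in the scalar semicircle case \(n=1\), \(\Gamma[r]=r\), where \(L=1\), the scalar product (3.29) reduces to \(\langle a,b\rangle=\overline{a}\,b\), and (3.134) becomes \(\alpha=m^{\prime}(E_{0})/(\overline{m}(E_{0})\,m(E_{0})^{2})\); the prediction is \(\alpha=\mathrm{i}/(2\operatorname{Im}m(E_{0}))\), purely imaginary with positive imaginary part. A hedged sketch (a scalar illustration only, not part of the original argument):

```python
import numpy as np

def m(z):
    """Semicircle Stieltjes transform: m^2 + z*m + 1 = 0, with Im m(z) * Im z > 0."""
    z = complex(z)
    return (-z + np.sqrt(z - 2) * np.sqrt(z + 2)) / 2

E0 = 0.7                       # any bulk energy in (-2, 2)
m0 = m(E0 + 1e-9j)             # boundary value m_0 = lim_{y -> 0} m(E0 + i y)
mp = -m0 / (2 * m0 + E0)       # m'(E0), from differentiating m^2 + z*m + 1 = 0

alpha = mp / (np.conj(m0) * m0 ** 2)             # scalar version of (3.134) with L = 1
assert abs(alpha - 1j / (2 * m0.imag)) < 1e-6    # matches (3.141) for n = 1
assert abs(alpha.real) < 1e-6 and alpha.imag > 0
print(alpha)
```

In the bulk \(|m(E_{0})|=1\), so \(\overline{m}\,m^{2}=m\) and the check reduces to \(m^{\prime}(E_{0})/m(E_{0})=\mathrm{i}/(2\operatorname{Im}m(E_{0}))\), which the assertions confirm to the accuracy of the boundary-value approximation.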
Finally, using analytic functional calculus we can decompose \(\mathcal{B}^{-1}_{z,\zeta}\) on \(\Omega^{-}\times\Omega^{+}\) as \[\mathcal{B}^{-1}_{z,\zeta}=\frac{1}{\lambda_{z,\zeta}}\mathcal{P}_{z,\zeta}+ \mathcal{J}^{(1)}_{z,\zeta} \tag{3.143}\] with \[\mathcal{P}_{z,\zeta}:=-\frac{1}{2\pi\mathrm{i}}\int_{|v|=\varepsilon}\Big{(} \mathcal{B}_{z,\zeta}-v\cdot\mathrm{Id}\Big{)}^{-1}dv,\quad\mathcal{J}^{(1)}_{ z,\zeta}:=-\frac{1}{2\pi\mathrm{i}}\int_{\Sigma}\frac{1}{v}\Big{(}\mathcal{B}_{z, \zeta}-v\cdot\mathrm{Id}\Big{)}^{-1}dv, \tag{3.144}\] where \(\Sigma\) is a contour encircling the eigenvalues of \(\mathcal{B}_{z,\zeta}\) located away from the \(\varepsilon\)-disk around zero and not crossing the circle \(\{|v|=\varepsilon\}\). By analytic perturbation theory, \(\mathcal{J}^{(1)}_{z,\zeta}\) is bounded for all \((z,\zeta)\in\Omega^{-}\times\Omega^{+}\). Moreover, by using (3.142) and perturbation theory for the eigenvectors (see, e.g., [4, Lemma C.1]) we can rewrite the first term in (3.143) as \[\frac{1}{\lambda_{z,\zeta}}\mathcal{P}_{z,\zeta}=-\frac{2\mathrm{i}}{\langle \operatorname{Im}M_{0}\rangle}\frac{\langle\operatorname{Im}M_{0},\,\cdot\, \rangle}{z-\zeta}\operatorname{Im}M_{0}+\mathcal{J}^{(2)}_{z,\zeta} \tag{3.145}\] with \(\mathcal{J}^{(2)}_{z,\zeta}\) uniformly bounded on \(\Omega^{-}\times\Omega^{+}\). Setting \(\mathcal{J}:=\mathcal{J}^{(1)}+\mathcal{J}^{(2)}\) establishes (3.111) and (3.112) and finishes the proof of the lemma for \((z,\zeta)\in\Omega^{-}\times\Omega^{+}\). In the case when \((z,\zeta)\in\Omega^{+}\times\Omega^{-}\) the equation (3.133) becomes \[\lambda_{z,\zeta}=-\alpha\,w-\overline{\alpha}\,\xi+O(|w|^{2}+|\xi|^{2}), \tag{3.146}\] while the remaining part of the proof can be repeated line by line. This completes the proof of the lemma. It follows from the above lemma that for any \((z,\zeta)\in\Omega\times\Omega\) the equation (3.107) has a unique solution given by \[M^{B}(z,\zeta)=\mathcal{B}^{-1}_{z,\zeta}\big{[}B\big{]}. 
\tag{3.147}\] Combining the results of Lemmas 3.5 and 3.6 we find the following estimates for \(G^{B}(z,\zeta)\) that depend on whether \(z\) and \(\zeta\) belong to the same complex half-plane. **Corollary 3.7**.: _Let \(\gamma\in(0,1)\), \(\tau\in(0,\min\{\gamma/2,1-\gamma\})\), and let \(B\in\mathbb{C}^{n\times n}\). Denote \(\widehat{\eta}:=\min\{|\operatorname{Im}z|,|\operatorname{Im}\zeta|\}\)._ 1. _Uniformly for_ \((z,\zeta)\in\left(\Omega^{+}\times\Omega^{+}\right)\cup\left(\Omega^{-}\times\Omega^{-}\right)\)__ \[\mathbb{E}\Big{[}\big{\|}G^{B}(z,\zeta)\big{\|}\Big{]}=O_{\prec}\bigg{(}\big{\|}B\big{\|}\Big{(}1+\frac{1}{N^{1/2}\widehat{\eta}^{3/2}}\Big{)}\bigg{)}.\] (3.148) 2. _Uniformly for_ \((z,\zeta)\in\left(\Omega^{-}\times\Omega^{+}\right)\cup\left(\Omega^{+}\times\Omega^{-}\right)\)__ \[G^{B}(z,\zeta)=\vartheta\,\frac{2\mathrm{i}\left\langle\operatorname{Im}M(E_{0}),\,B\,\right\rangle}{\left\langle\operatorname{Im}M(E_{0})\right\rangle}\frac{1}{z-\zeta}\operatorname{Im}M(E_{0})+\mathcal{E}^{B}(z,\zeta)\] (3.149) _with_ \[\mathbb{E}\Big{[}\big{\|}\mathcal{E}^{B}(z,\zeta)\big{\|}\Big{]}=O_{\prec}\bigg{(}\big{\|}B\big{\|}\Big{(}1+\frac{1}{N^{1/2}\widehat{\eta}^{5/2}}\Big{)}\bigg{)}. \tag{3.150}\] Proof.: After multiplying both sides of equation (3.74) by \(M^{-1}(z)\) from the left and \(M^{-1}(\zeta)\) from the right, and reordering the terms, we get that \[\mathcal{B}_{z,\zeta}\big{[}G^{B}(z,\zeta)\big{]}=B+\frac{1}{M(z)}\mathcal{E}_{1}^{B}\frac{1}{M(\zeta)}. \tag{3.151}\] Lemma 3.6 implies that the (linear) operator \(\mathcal{B}_{z,\zeta}\) is invertible for all \((z,\zeta)\in\Omega\times\Omega\), with its inverse controlled by (3.110) and (3.111); therefore, \[G^{B}(z,\zeta)=\mathcal{B}_{z,\zeta}^{-1}\big{[}B\big{]}+\mathcal{B}_{z,\zeta}^{-1}\bigg{[}\frac{1}{M(z)}\mathcal{E}_{1}^{B}\frac{1}{M(\zeta)}\bigg{]} \tag{3.152}\] for all \((z,\zeta)\in\Omega\times\Omega\). Finally, (3.110) and (3.112) give the decomposition and the bounds in (3.148) and (3.149).
The bound in (3.150) is obtained by directly applying Lemmas 3.5 and 3.6 and serves as a preliminary estimate, which we will considerably improve in the next section. ### Proof of Theorem 2.1 for \(\beta=2\) In this section we complete the proof of Theorem 2.1 for \(\beta=2\). For this we first need to improve the error term bound in (3.150). Indeed, after substituting (3.148) and (3.149) into (3.30) and taking a derivative with respect to \(\zeta\), the error term (3.150) gives a function of order \(O_{\prec}(1+N^{-1/2}\widehat{\eta}^{-7/2})\) on \(\Omega\times\Omega\). As we will see below in the proof of Theorem 2.1, multiplying this function by \(\frac{\partial\tilde{f}(z)}{\partial\overline{z}}\frac{\partial\tilde{f}(\zeta)}{\partial\overline{\zeta}}\) and integrating over \(\Omega\times\Omega\) improves the estimate by \(\widehat{\eta}^{\,2}\). We end up with an overall bound \(O_{\prec}(\widehat{\eta}^{\,2}+N^{-1/2}\widehat{\eta}^{-3/2})\), which is small only as long as \(\widehat{\eta}\gg N^{-1/3}\). Therefore, we need the following improved estimate of \(G^{B}(z,\zeta)\) in the case when \(z\) and \(\zeta\) belong to different complex half-planes. **Lemma 3.8** (Improved estimate of \(G^{B}(z,\zeta)\)).: _Let \(\gamma\in(0,1)\), \(\tau\in(0,\min\{\gamma/2,1-\gamma\})\), and let \(B\in\mathbb{C}^{n\times n}\). 
Then uniformly for \((z,\zeta)\in\left(\Omega^{-}\times\Omega^{+}\right)\cup\left(\Omega^{+}\times \Omega^{-}\right)\)_ \[G^{B}(z,\zeta)=\vartheta\,\frac{2\mathrm{i}}{z-\zeta}\frac{\left\langle \operatorname{Im}M(E_{0}),\,B\,\right\rangle}{\left\langle\operatorname{Im}M(E _{0})\right\rangle}\operatorname{Im}M(E_{0})+\mathcal{E}^{B}(z,\zeta) \tag{3.153}\] _with_ \[\mathbb{E}\Big{[}\big{\|}\mathcal{E}^{B}(z,\zeta)\big{\|}\Big{]}=O_{\prec} \bigg{(}\big{\|}B\big{\|}\Big{(}1+\frac{1}{N\widehat{\eta}^{\,2}}+\frac{1}{N^ {1/2}\widehat{\eta}^{\,3/2}}\Big{)}\bigg{)} \tag{3.154}\] _and \(\vartheta=1\) for \((z,\zeta)\in\Omega^{+}\times\Omega^{-}\) and \(\vartheta=-1\) for \((z,\zeta)\in\Omega^{-}\times\Omega^{+}\)._ Proof.: We split the proof into two steps by first considering the case \(\left\langle\operatorname{Im}M(E_{0}),B\right\rangle=0\), and then establishing (3.153) for general \(B\in\mathbb{C}^{n\times n}\). As before, we denote \(M_{0}:=\lim_{y\downarrow 0}M(E_{0}+\mathrm{i}\,y)\) for brevity. **Step 1.** Suppose that \(\left\langle\operatorname{Im}M_{0},B\right\rangle=0\). Denote by \(\mathcal{Q}:\mathbb{C}^{n\times n}\to\mathbb{C}^{n\times n}\) the projection onto \(\operatorname{Im}M_{0}\) given by \[\mathcal{Q}:=\frac{\left\langle\operatorname{Im}M_{0},\,\cdot\,\right\rangle}{ \left\|\operatorname{Im}M_{0}\right\|_{\mathrm{HS}}^{2}}\operatorname{Im}M_{ 0}, \tag{3.155}\] and notice that from (3.112) we have that \((\operatorname{Id}-\mathcal{Q})\mathcal{B}_{z,\zeta}^{-1}=\mathcal{J}_{z,\zeta}\). Therefore, (3.152), (3.112) and (3.75) imply that \[G^{B}(z,\zeta)=\mathcal{J}_{z,\zeta}\bigg{[}B+\frac{1}{M(z)}\mathcal{E}_{1}^{B }\frac{1}{M(\zeta)}\bigg{]}+\mathcal{Q}\Big{[}G^{B}(z,\zeta)\Big{]}, \tag{3.156}\] where the first term, after taking norm and expectation, is of order \(O_{\prec}(\|B\|(1+N^{-1/2}\widehat{\eta}^{-3/2}))\) by (3.75). 
In order to obtain an improved bound of the second term, we notice that \[\mathcal{Q}\Big{[}G^{B}(z,\zeta)\Big{]}=\frac{1}{\|\operatorname{Im}M_{0}\|_{ \operatorname{HS}}^{2}}\Big{\langle}\operatorname{Im}M_{0},G^{B}(z,\zeta) \Big{\rangle}\operatorname{Im}M_{0}. \tag{3.157}\] Using the cyclicity of the trace together with the definitions (3.73) and (3.29) we get \[\Big{\langle}\operatorname{Im}M_{0},G^{B}(z,\zeta)\Big{\rangle}=\Big{\langle} \operatorname{Im}M_{0},\frac{1}{N}\sum_{k,l=1}^{N}G_{lk}(z)BG_{kl}(\zeta) \Big{\rangle}=\Big{\langle}G_{\overline{z},\overline{\zeta}}^{\operatorname {Im}M_{0}},B\Big{\rangle}. \tag{3.158}\] Applying (3.152) to \(G_{\overline{z},\overline{\zeta}}^{\operatorname{Im}M_{0}}\) gives \[\Big{\langle}G_{\overline{z},\overline{\zeta}}^{\operatorname{Im}M _{0}},B\Big{\rangle} =\Big{\langle}\mathcal{B}_{\overline{z},\overline{\zeta}}^{-1} \Big{[}\operatorname{Im}M_{0}+\frac{1}{M(\overline{z})}\mathcal{E}_{1}^{ \operatorname{Im}M_{0}}\frac{1}{M(\overline{\zeta})}\Big{]},B\Big{\rangle} \tag{3.159}\] \[=\Big{\langle}\operatorname{Im}M_{0}+\frac{1}{M(\overline{z})} \mathcal{E}_{1}^{\operatorname{Im}M_{0}}\frac{1}{M(\overline{\zeta})},\Big{(} \mathcal{B}_{\overline{z},\overline{\zeta}}^{-1}\Big{)}^{*}[B]\Big{\rangle}, \tag{3.160}\] where \(\mathbb{E}[\|\mathcal{E}_{1}^{\operatorname{Im}M_{0}}\|]=O_{\prec}(N^{-1/2} \widehat{\eta}^{-3/2})\) due to (3.75). Since \(\big{\langle}\operatorname{Im}M_{0},B\big{\rangle}=0\), the decomposition (3.112) implies that \(\Big{(}\mathcal{B}_{\overline{z},\overline{\zeta}}^{-1}\Big{)}^{*}[B]= \mathcal{J}_{\overline{z},\overline{\zeta}}^{*}[B]=O(\|B\|)\), and we conclude that \[\mathcal{Q}\Big{[}G^{B}(z,\zeta)\Big{]}=O_{\prec}\Big{(}\|B\|\Big{(}1+\frac{1 }{N^{1/2}\widehat{\eta}^{3/2}}\Big{)}\Big{)}. 
\tag{3.161}\] Combining this with the estimate of the first term in (3.156) we get that \[\mathbb{E}\Big{[}\Big{\|}G^{B}(z,\zeta)\Big{\|}\Big{]}=O_{\prec}\bigg{(}\|B\| \Big{(}1+\frac{1}{N^{1/2}\widehat{\eta}^{3/2}}\Big{)}\bigg{)} \tag{3.162}\] uniformly for \((z,\zeta)\in\big{(}\Omega^{-}\times\Omega^{+}\big{)}\cup\big{(}\Omega^{+} \times\Omega^{-}\big{)}\). **Step 2.** Consider now general \(B\in\mathbb{C}^{n\times n}\). Denote \[B^{\circ}:=B-\frac{\langle\operatorname{Im}M_{0},B\rangle}{\langle\operatorname {Im}M_{0}\rangle}I_{n}. \tag{3.163}\] Then \(\langle\operatorname{Im}M_{0},B^{\circ}\rangle=0\) and \[G^{B}(z,\zeta)=\frac{\langle\operatorname{Im}M_{0},B\rangle}{\langle \operatorname{Im}M_{0}\rangle}G^{I_{n}}(z,\zeta)+G^{B^{\circ}}(z,\zeta). \tag{3.164}\] Using the resolvent identity we have \[G^{I_{n}}(z,\zeta)=\big{(}\operatorname{Id}_{n}\otimes\frac{1}{N}\operatorname {Tr}_{N}\big{)}\Big{(}\mathbf{G}(z)\mathbf{G}(\zeta)\Big{)}=\frac{1}{z-\zeta}\big{(} \operatorname{Id}_{n}\otimes\frac{1}{N}\operatorname{Tr}_{N}\big{)}\Big{(} \mathbf{G}(z)-\mathbf{G}(\zeta)\Big{)}. \tag{3.165}\] After applying the local law (3.11) we get \[G^{I_{n}}(z,\zeta)=\frac{1}{z-\zeta}\big{(}M(z)-M(\zeta)\big{)}+O_{\prec}\Big{(} \frac{1}{N\widehat{\eta}^{2}}\Big{)}. \tag{3.166}\] Finally, the analyticity of \(M\) implies that \[G^{I_{n}}(z,\zeta)=\vartheta\frac{2\mathrm{i}}{z-\zeta}\operatorname{Im}M_{0 }+O_{\prec}\Big{(}1+\frac{1}{N\widehat{\eta}^{2}}\Big{)}. \tag{3.167}\] Together with (3.164) and (3.162) this gives \[G^{B}(z,\zeta)=\vartheta\frac{2\mathrm{i}}{z-\zeta}\frac{\langle\operatorname{ Im}M_{0},B\rangle}{\langle\operatorname{Im}M_{0}\rangle}\operatorname{Im}M_{0}+O_{ \prec}\Big{(}\|B\|\Big{(}1+\frac{1}{N\widehat{\eta}^{2}}+\frac{1}{N^{1/2} \widehat{\eta}^{3/2}}\Big{)}\Big{)}. \tag{3.168}\] This finishes the proof for general \(B\in\mathbb{C}^{n\times n}\). 
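As a purely illustrative aside, the centering (3.163) used in Step 2 can be verified numerically. A minimal NumPy sketch, assuming the inner product \(\langle A,B\rangle=\langle A^{*}B\rangle\) with \(\langle\cdot\rangle\) the normalized trace (consistent with this section's notation), and a random positive definite stand-in for \(\operatorname{Im}M_{0}\):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def ntr(A):
    """Normalized trace <A> = Tr(A) / n."""
    return np.trace(A) / n

def inner(A, B):
    """Inner product <A, B> = <A* B>."""
    return ntr(A.conj().T @ B)

# Random positive definite stand-in for Im M_0 (Hermitian, <Im M_0> != 0).
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
ImM0 = C @ C.conj().T + np.eye(n)

# Arbitrary complex test matrix B.
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Centering as in (3.163): B0 is orthogonal to Im M_0.
B0 = B - (inner(ImM0, B) / ntr(ImM0)) * np.eye(n)
print(abs(inner(ImM0, B0)))  # zero up to round-off
```

The cancellation holds exactly because \(\langle\operatorname{Im}M_{0},I_{n}\rangle=\langle\operatorname{Im}M_{0}\rangle\) for Hermitian \(\operatorname{Im}M_{0}\).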
Lemma 3.8 improves the estimate of the error term in \(G^{B}(z,\zeta)\) for \((z,\zeta)\in\big{(}\Omega^{-}\times\Omega^{+}\big{)}\cup\big{(}\Omega^{+} \times\Omega^{-}\big{)}\) by roughly \(\widehat{\eta}\), which is crucial for establishing Theorem 2.1 for _all_ mesoscopic scales. We now proceed to the proof of the main result. Proof of Theorem 2.1 for \(\beta=2\).: We start by introducing additional notation. Let \(B_{ij}:\mathbb{C}\to\mathbb{C}^{n\times n}\), \(1\leq i,j\leq n\), be the collection of deterministic \(n\times n\) matrix-valued functions \[B_{ij}(z)=\frac{1}{M(z)}\mathcal{B}_{z}^{-1}[I_{n}]\,\Gamma[E_{ij}]. \tag{3.169}\] Consider the integral \[\mathcal{V}:=\frac{1}{\pi^{2}}\int\limits_{\Omega\times\Omega}\frac{\partial \tilde{f}(z)}{\partial\overline{z}}\frac{\partial\tilde{f}(\zeta)}{\partial \overline{\zeta}}\frac{\partial}{\partial\zeta}\mathbb{E}\Big{[}\sum_{i,j=1}^{ n}\operatorname{Tr}\Big{(}E_{ji}\,G^{B_{ij}(z)}(z,\zeta)\Big{)}\mathfrak{e}(t) \Big{]}d^{2}\!zd^{2}\!\zeta. \tag{3.170}\] This is the integral that appears in (3.30) with \(S_{ij}(z,\zeta)=G^{B_{ij}(z)}(z,\zeta)\). Using the analyticity of \(M\) at \(E_{0}\) and the boundedness of \(\mathcal{B}_{E_{0}}^{-1}\), we write \[B_{ij}(z)=B_{ij}(E_{0})+\widehat{B}_{ij}(z) \tag{3.171}\] with \[B_{ij}(E_{0}):=\frac{1}{M(E_{0})}\mathcal{B}_{E_{0}}^{-1}[I_{n}]\,\Gamma[E_{ ij}] \tag{3.172}\] and \(\|\widehat{B}_{ij}(z)\|=O(|\operatorname{Im}z|)\). In particular, \(\|B_{ij}(z)\|\lesssim 1\) uniformly on \(\Omega\). 
We split the domain of integration in (3.170) into four regions determined by the signs of \(\operatorname{Im}z\) and \(\operatorname{Im}\zeta\), namely, \[\Omega\times\Omega=\big{(}\Omega^{+}\times\Omega^{+}\big{)}\cup\big{(}\Omega^ {+}\times\Omega^{-}\big{)}\cup\big{(}\Omega^{-}\times\Omega^{+}\big{)}\cup \big{(}\Omega^{-}\times\Omega^{-}\big{)}, \tag{3.173}\] and denote the integrals over the corresponding four regions by \(\mathcal{V}^{(+,+)}\), \(\mathcal{V}^{(+,-)}\), \(\mathcal{V}^{(-,+)}\) and \(\mathcal{V}^{(-,-)}\), so that \[\mathcal{V}=\mathcal{V}^{(+,+)}+\mathcal{V}^{(+,-)}+\mathcal{V}^{(-,+)}+ \mathcal{V}^{(-,-)}. \tag{3.174}\] Now we treat each term separately. Consider first \(\mathcal{V}^{(+,+)}\). If we denote \[h_{1}(z,\zeta):=\mathbb{E}\Big{[}\sum_{i,j=1}^{n}\operatorname{Tr}\Big{(}E_{ ji}\,G^{B_{ij}(z)}(z,\zeta)\Big{)}\mathfrak{e}(t)\Big{]}, \tag{3.175}\] then \[\mathcal{V}^{(+,+)}=\frac{1}{\pi^{2}}\int\limits_{\Omega^{+}\times\Omega^{+}} \frac{\partial\tilde{f}(z)}{\partial\overline{z}}\frac{\partial\tilde{f}( \zeta)}{\partial\overline{\zeta}}\frac{\partial}{\partial\zeta}h_{1}(z,\zeta )\,d^{2}\!zd^{2}\!\zeta\,. \tag{3.176}\] The estimate (3.148) and the boundedness of \(B_{ij}(z)\) imply that \(\mathbb{E}\big{[}\big{\|}G^{B_{ij}(z)}(z,\zeta)\big{\|}\big{]}\prec 1+N^{-1/2} \widehat{\eta}^{-3/2}\) uniformly on \(\Omega^{+}\times\Omega^{+}\). Since \(M^{-1}\) is uniformly bounded on \(\Omega\) and \(|\mathfrak{e}(t)|\leq 1\), we have that \(h_{1}\) is an analytic function on \(\Omega^{+}\times\Omega^{+}\) satisfying the bound \[|h_{1}(z,\zeta)|\prec 1+\frac{1}{N^{1/2}\widehat{\eta}^{3/2}}. \tag{3.177}\] The Cauchy integral formula applied to \(\frac{\partial}{\partial\zeta}h_{1}(z,\zeta)\) yields the bound \[\Big{|}\frac{\partial}{\partial\zeta}h_{1}(z,\zeta)\Big{|}\prec\frac{1}{ \widehat{\eta}}+\frac{1}{N^{1/2}\widehat{\eta}^{5/2}} \tag{3.178}\] uniformly on \(\Omega^{+}\times\Omega^{+}\). 
By using Stokes' theorem (3.69) (twice) and the definition of \(\tilde{f}\) in (3.4), we rewrite the integral in (3.176) as \[\frac{1}{\pi^{2}}\int\limits_{\Omega^{+}\times\Omega^{+}}\frac{\partial \tilde{f}(z)}{\partial\overline{z}}\frac{\partial\tilde{f}(\zeta)}{\partial \overline{\zeta}}\frac{\partial}{\partial\zeta}h_{1}(z,\zeta)\,d^{2}\!zd^{2}\!\zeta=\frac{1}{4\pi^{2}}\int\limits_{\partial \Omega^{+}\times\partial\Omega^{+}}\tilde{f}(z)\tilde{f}(\zeta)\frac{\partial }{\partial\zeta}\,h_{1}(z,\zeta)\,dzd\zeta. \tag{3.179}\] Since \(\tilde{f}\) vanishes everywhere on \(\partial\Omega^{+}\) except the part that intersects with the line \(\operatorname{Im}z=N^{-\tau}\eta_{0}\), and \(\frac{\partial}{\partial\zeta}h_{1}(z,\zeta)\) satisfies the bounds (3.178), we estimate this integral as \[\bigg{|}\int\limits_{\partial\Omega^{+}\times\partial\Omega^{+}} \tilde{f}(z)\tilde{f}(\zeta)\frac{\partial}{\partial\zeta}h_{1}(z,\zeta)\,dzd \zeta\bigg{|}\\ \prec\int\limits_{E_{0}-2\delta}^{E_{0}+2\delta}\int\limits_{E_{0} -2\delta}^{E_{0}+2\delta}|\tilde{f}(x_{1}+\mathrm{i}\,N^{-\tau}\eta_{0}) \tilde{f}(x_{2}+\mathrm{i}\,N^{-\tau}\eta_{0})|\Big{(}\frac{N^{\tau}}{ \eta_{0}}+\frac{N^{5\tau/2}}{N^{1/2}\eta_{0}^{5/2}}\Big{)}\,dx_{1}dx_{2}. \tag{3.180}\] From the definition of \(\tilde{f}\) in (3.4) close to the real line we know that \(\tilde{f}(x+{\rm i}\,N^{-\tau}\eta_{0})=f(x)+{\rm i}\,N^{-\tau}\eta_{0}f^{\prime }(x)\), and thus (3.180) can be further estimated as \[\bigg{|}\int\limits_{\partial\Omega^{+}\times\partial\Omega^{+}} \tilde{f}(z)\tilde{f}(\zeta)\frac{\partial}{\partial\zeta}\,h_{1}(z,\zeta)\,dzd \zeta\bigg{|}\\ \prec\big{(}\|f\|_{1}+N^{-\tau}\eta_{0}\|f^{\prime}\|_{1}\big{)}^{ 2}\Big{(}\frac{N^{\tau}}{\eta_{0}}+\frac{N^{5\tau/2}}{N^{1/2}\eta_{0}^{5/2}} \Big{)}\prec N^{-\tau}\eta_{0}+\frac{N^{5\tau/2}}{(N\eta_{0})^{1/2}}, \tag{3.181}\] where in the last step we used the norm bounds for \(f\) and \(f^{\prime}\) from (3.1). 
In this case it is sufficient to have \(\gamma\in(0,1)\) and \(\tau\in(0,\min\{(1-\gamma)/7,\gamma/2\})\). Indeed, with this restriction on \(\gamma\) and \(\tau\) the last expression in (3.181) gives \(N^{-\gamma-\tau}+N^{(-1+\gamma+5\tau)/2}\lesssim N^{-\tau}\), which ensures the convergence to zero as \(N\to\infty\). The same argument can be applied to estimate the integral over the set \(\Omega^{-}\times\Omega^{-}\), giving \[\big{|}\mathcal{V}^{(+,+)}\big{|}+\big{|}\mathcal{V}^{(-,-)}\big{|}\prec N^{- \tau}. \tag{3.182}\] Consider now \(\mathcal{V}^{(+,-)}\), the second term in (3.174) \[\mathcal{V}^{(+,-)}=\frac{1}{\pi^{2}}\int\limits_{\Omega^{+}\times\Omega^{-}} \frac{\partial\tilde{f}(z)}{\partial\overline{z}}\frac{\partial\tilde{f}( \zeta)}{\partial\overline{\zeta}}\frac{\partial}{\partial\zeta}\mathbb{E} \Big{[}\sum_{i,j=1}^{n}\mathrm{Tr}\,\Big{(}E_{ji}\,G^{B_{ij}(z)}(z,\zeta) \Big{)}\mathfrak{e}(t)\Big{]}d^{2}\!zd^{2}\!\zeta. \tag{3.183}\] Using the improved estimate (3.153) for \(G^{B_{ij}(z)}(z,\zeta)\) from Lemma 3.8 and the approximation of \(B_{ij}(z)\) by \(B_{ij}(E_{0})\) on \(\Omega\times\Omega\) from (3.171), we rewrite the expectation in (3.183) as \[\frac{1}{\pi^{2}}\mathbb{E}\Big{[}\sum_{i,j=1}^{n}\mathrm{Tr}\, \Big{(}E_{ji}\,G^{B_{ij}(z)}(z,\zeta)\Big{)}\mathfrak{e}(t)\Big{]}\\ =\mathbb{E}\big{[}\mathfrak{e}(t)\big{]}\frac{1}{\pi^{2}(z-\zeta) }\sum_{i,j=1}^{n}\mathrm{Tr}\,\Big{(}E_{ji}\,\frac{2{\rm i}}{\langle\mathrm{Im}\,M( E_{0})\rangle}\,\mathrm{Im}\,M(E_{0})\langle\mathrm{Im}\,M(E_{0}),\,B_{ij}(E_{0})\, \rangle\Big{)}+h_{2}(z,\zeta), \tag{3.184}\] where \(h_{2}(z,\zeta)\) is analytic on \(\Omega^{+}\times\Omega^{-}\) and satisfies \(|h_{2}(z,\zeta)|\prec(N^{\tau}+N^{-1/2}\widehat{\eta}^{-3/2})\) uniformly on \(\Omega^{+}\times\Omega^{-}\). The \(N^{\tau}\) term in the last estimate comes from the bound \[\Big{|}\frac{B_{ij}(z)-B_{ij}(E_{0})}{z-\zeta}\Big{|}\prec N^{\tau}, \tag{3.185}\] holding on \(\Omega^{+}\times\Omega^{-}\). 
Repeating the computations in (3.177)-(3.181) we arrive at \[\bigg{|}\int\limits_{\Omega^{+}\times\Omega^{-}}\frac{\partial\tilde{f}(z)}{ \partial\overline{z}}\frac{\partial\tilde{f}(\zeta)}{\partial\overline{\zeta} }\frac{\partial}{\partial\zeta}\,h_{2}(z,\zeta)\,d^{2}\!zd^{2}\!\zeta\bigg{|} \prec\eta_{0}+\frac{N^{5\tau/2}}{(N\eta_{0})^{1/2}}\lesssim N^{-\tau} \tag{3.186}\] for \(\gamma\in(0,1)\) and \(\tau\in(0,\min\{(1-\gamma)/7,\gamma/2\})\). We now simplify the expression in the first term in (3.184). Denote for brevity \(M_{0}:=\lim_{y\downarrow 0}M(E_{0}+{\rm i}\,y)\) and \(M_{0}^{\prime}:=\lim_{y\downarrow 0}M^{\prime}(E_{0}+{\rm i}\,y)\). Then the \((z,\zeta)\)-independent constant appearing in the first term of (3.184) is written as \[\phi^{(+,-)} :=\sum_{i,j=1}^{n}\mathrm{Tr}\,\Big{(}E_{ji}\,\frac{2{\rm i}}{\langle \mathrm{Im}\,M_{0}\rangle}\,\mathrm{Im}\,M_{0}\Big{\langle}\,\mathrm{Im}\,M_{0},\, \frac{1}{M_{0}}\mathcal{B}_{E_{0}}^{-1}[I_{n}]\,\Gamma[E_{ij}]\,\Big{\rangle} \Big{)} \tag{3.187}\] \[=\frac{2{\rm i}\,n}{\langle\mathrm{Im}\,M_{0}\rangle}\sum_{i,j=1}^{n}\Big{\langle} \operatorname{Im}\,M_{0}\,\frac{1}{M_{0}}\mathcal{B}_{E_{0}}^{-1}[I_{n}]\, \Gamma[E_{ij}]\,\Big{\rangle}\Big{\langle}E_{ji}\,\,\mathrm{Im}\,M_{0}\Big{\rangle}\] (3.188) \[=\frac{2{\rm i}\,n}{\langle\mathrm{Im}\,M_{0}\rangle}\sum_{i,j=1}^{n}\Big{\langle} \Gamma\Big{[}\,\mathrm{Im}\,M_{0}\,\frac{1}{M_{0}}\mathcal{B}_{E_{0}}^{-1}[I_{n }]\,\Big{]}\,E_{ij}\Big{\rangle}\Big{\langle}E_{ji}\,\,\mathrm{Im}\,M_{0}\Big{\rangle}\] (3.189) \[=\frac{2{\rm i}}{\langle\mathrm{Im}\,M_{0}\rangle}\Big{\langle}\Gamma\Big{[} \,\mathrm{Im}\,M_{0}\,\frac{1}{M_{0}}\mathcal{B}_{E_{0}}^{-1}[I_{n}]\Big{]}\, \operatorname{Im}\,M_{0}\Big{\rangle}, \tag{3.190}\] where in the second step we used the self-adjointness of \(\Gamma\), and in the last step the identity \(\sum_{i,j=1}^{n}\langle A\,E_{ij}\rangle\langle E_{ji}\,C\rangle=\frac{1}{n}\langle A\,C\rangle\). By differentiating the Dyson equation (2.7) at \(z=E_{0}\) and taking the conjugate-transpose on both sides, we get that \(\mathcal{B}_{E_{0}}\big{[}M_{0}^{\prime}\big{]}=I_{n}\), so that \[\mathcal{B}_{E_{0}}^{-1}[I_{n}]=M_{0}^{\prime}. 
\tag{3.191}\] Plugging this into (3.190) yields \[\left\langle\Gamma\Big{[}\operatorname{Im}M_{0}\frac{1}{M_{0}}M_{0}^{ \prime}\Big{]}\,\operatorname{Im}M_{0}\right\rangle =\left\langle\operatorname{Im}M_{0}\frac{1}{M_{0}}M_{0}^{\prime} \,\Gamma\big{[}\operatorname{Im}M_{0}\big{]}\right\rangle \tag{3.192}\] \[=\left\langle\operatorname{Im}M_{0}\frac{1}{M_{0}}M_{0}^{\prime }\,\frac{1}{M_{0}}\operatorname{Im}M_{0}\frac{1}{(M_{0})^{*}}\right\rangle\] (3.193) \[=\frac{\mathrm{i}}{2}\big{\langle}\operatorname{Im}M_{0}\big{\rangle}, \tag{3.194}\] where we used (3.137) in the second and (3.140) in the last step. For \(\phi^{(+,-)}\) from (3.187) we thus obtain \[\phi^{(+,-)}=\frac{2\mathrm{i}}{\langle\operatorname{Im}M_{0}\rangle}\cdot\frac{\mathrm{i}}{2}\big{\langle}\operatorname{Im}M_{0}\big{\rangle}=-1.\] Now, using the analyticity of the function \((z-\zeta)^{-2}\) on \(\Omega^{+}\times\Omega^{-}\) and Stokes' theorem we get \[\frac{1}{\pi^{2}}\int_{\Omega^{+}\times\Omega^{-}}\frac{\partial \tilde{f}(z)}{\partial\overline{z}}\frac{\partial\tilde{f}(\zeta)}{\partial \overline{\zeta}}\frac{\partial}{\partial\zeta}\Big{(}\frac{\phi^{(+,-)}}{z- \zeta}\Big{)}d^{2}zd^{2}\zeta=-\frac{1}{\pi^{2}}\int_{\Omega^{+}\times\Omega^{-}}\frac{\partial\tilde{f}(z)}{\partial\overline{z}} \frac{\partial\tilde{f}(\zeta)}{\partial\overline{\zeta}}\frac{1}{(z-\zeta)^ {2}}d^{2}zd^{2}\zeta \tag{3.195}\] \[=\frac{1}{2\pi^{2}}\int_{\Omega^{+}\times\Omega^{-}}\frac{ \partial}{\partial\overline{z}}\frac{\partial}{\partial\overline{\zeta}} \frac{(\tilde{f}(z)-\tilde{f}(\zeta))^{2}}{(z-\zeta)^{2}}d^{2}zd^{2}\zeta=- \frac{1}{8\pi^{2}}\int_{\partial\Omega^{+}\times\partial\Omega^{-}}\frac{( \tilde{f}(z)-\tilde{f}(\zeta))^{2}}{(z-\zeta)^{2}}dzd\zeta. 
\tag{3.196}\] The function \(\tilde{f}\) vanishes everywhere on \(\partial\Omega^{+}\times\partial\Omega^{-}\) except the part that intersects with the lines \(\operatorname{Im}z=N^{-\tau}\eta_{0}\) and \(\operatorname{Im}\zeta=-N^{-\tau}\eta_{0}\), therefore, the last integral in (3.196) can be written as \[-\frac{1}{8\pi^{2}}\int_{\partial\Omega^{+}\times\partial\Omega^ {-}}\frac{(\tilde{f}(z)-\tilde{f}(\zeta))^{2}}{(z-\zeta)^{2}}dzd\zeta\\ =\frac{1}{8\pi^{2}}\int\limits_{E_{0}-2\delta}^{E_{0}+2\delta} \int\limits_{E_{0}-2\delta}^{E_{0}+2\delta}\frac{\big{(}f(x)-f(y)+\mathrm{i}\,N^{-\tau} \eta_{0}(f^{\prime}(x)+f^{\prime}(y))\big{)}^{2}}{( x-y+2\mathrm{i}\,N^{-\tau}\eta_{0})^{2}}dxdy, \tag{3.197}\] where the change of the sign is due to the change in the orientation of the contour \(\partial\Omega^{-}\). Recall that \(f(x)=g((x-E_{0})/\eta_{0})\). By changing the variables \(s=(x-E_{0})/\eta_{0}\), \(t=(y-E_{0})/\eta_{0}\), the last integral becomes \[\frac{1}{8\pi^{2}}\int_{\mathbb{R}}\int_{\mathbb{R}}\frac{\big{(}g(s)-g(t)+\mathrm{i }\,N^{-\tau}(g^{\prime}(s)+g^{\prime}(t))\big{)}^{2}}{(s-t+2\mathrm{i}\,N^{-\tau})^{2 }}dsdt. \tag{3.198}\] Combining this with (3.184), (3.186), (3.187) and (3.195) we end up with the following expression for \(\mathcal{V}^{(+,-)}\) from (3.183): \[\mathcal{V}^{(+,-)}=\frac{\mathbb{E}\big{[}\mathfrak{e}(t)\big{]}}{8\pi^{2}}\int_{ \mathbb{R}}\int_{\mathbb{R}}\frac{\big{(}g(s)-g(t)+\mathrm{i}\,N^{-\tau}(g^{\prime} (s)+g^{\prime}(t))\big{)}^{2}}{(s-t+2\mathrm{i}\,N^{-\tau})^{2}}dsdt+O_{\prec}(N^{- \tau}). \tag{3.199}\] The last term left to evaluate is \[\mathcal{V}^{(-,+)}=\frac{1}{\pi^{2}}\int\limits_{\Omega^{-}\times\Omega^{+}} \frac{\partial\tilde{f}(z)}{\partial\overline{z}}\frac{\partial\tilde{f}(\zeta) }{\partial\overline{\zeta}}\frac{\partial}{\partial\zeta}\mathbb{E}\Big{[} \sum\limits_{i,j=1}^{n}\operatorname{Tr}\Big{(}E_{ji}\,G^{B_{ij}(z)}(z,\zeta) \Big{)}\mathfrak{e}(t)\Big{]}d^{2}zd^{2}\zeta. 
\tag{3.200}\] Proceeding as in the analysis of \(\mathcal{V}^{(+,-)}\) in (3.184)-(3.186) we have that \[\mathcal{V}^{(-,+)}=\frac{\mathbb{E}\big{[}\mathfrak{e}(t)\big{]}}{\pi^{2}}\int_{\Omega^{-} \times\Omega^{+}}\frac{\partial\tilde{f}(z)}{\partial\overline{z}}\frac{ \partial\tilde{f}(\zeta)}{\partial\overline{\zeta}}\frac{\partial}{\partial \zeta}\Big{(}\frac{\phi^{(-,+)}}{z-\zeta}\Big{)}d^{2}zd^{2}\zeta+O_{\prec}(N^{- \tau}), \tag{3.201}\] where \[\phi^{(-,+)}=-\sum\limits_{i,j=1}^{n}\operatorname{Tr}\Big{(}E_{ji}\,\frac{2 \mathrm{i}}{\langle\operatorname{Im}M_{0}\rangle}\operatorname{Im}M_{0} \Big{\langle}\operatorname{Im}M_{0},\,\frac{1}{M_{0}^{*}}\big{(}\mathcal{B}_{E _{0}}^{-1}[I_{n}]\big{)}^{*}\Gamma[E_{ij}]\,\Big{\rangle}\Big{)}. \tag{3.202}\] The expression in (3.202) reflects the fact that \(\vartheta=-1\) in (3.153) and \(\lim_{z\to E_{0}}M(z)=(M(E_{0}))^{*}\) due to the analytic extension of \(M(z)\) for \(z\in\mathbb{C}_{-}\). Then from (3.191) and (3.192) we obtain that \[\left\langle\Gamma\Big{[}\operatorname{Im}M_{0}\,\frac{1}{(M_{0})^{*}}(M_{0}^{ \prime})^{*}\Big{]}\,\operatorname{Im}M_{0}\right\rangle=-\frac{\mathrm{i}}{2} \big{\langle}\operatorname{Im}M_{0}\big{\rangle}. \tag{3.203}\] This cancels the minus sign coming from \(\vartheta\), and therefore \(\phi^{(-,+)}=-1\) and \[\mathcal{V}^{(-,+)}=\frac{\mathbb{E}\big{[}\mathfrak{e}(t)\big{]}}{8\pi^{2}}\int \limits_{\mathbb{R}}\int\limits_{\mathbb{R}}\frac{\big{(}g(s)-g(t)+\mathrm{i}\,N^{- \tau}(g^{\prime}(s)+g^{\prime}(t))\big{)}^{2}}{(s-t-2\mathrm{i}\,N^{-\tau})^{2}}dsdt+O_{ \prec}(N^{-\tau}). \tag{3.204}\] Combining Lemma 3.4, (3.174), (3.181), (3.182), (3.199) and (3.204) gives \[\frac{d}{dt}\mathbb{E}[\mathfrak{e}(t)]=-t\,V_{N}[g]\,\mathbb{E}[\mathfrak{e}(t )]+O_{\prec}\big{(}N^{-\tau}(1+|t|)\big{)}, \tag{3.205}\] where \[V_{N}[g]:=\frac{1}{4\pi^{2}}\int_{\mathbb{R}}\int_{\mathbb{R}}\frac{\big{(}g(s) -g(t)+\mathrm{i}\,N^{-\tau}(g^{\prime}(s)+g^{\prime}(t))\big{)}^{2}}{\big{(}s-t+2 \mathrm{i}\,N^{-\tau}\big{)}^{2}}dsdt. 
\tag{3.206}\] Notice that \[\bigg{|}\frac{\big{(}g(s)-g(t)+\mathrm{i}\,N^{-\tau}(g^{\prime}(s)+g^{\prime} (t))\big{)}^{2}}{\big{(}s-t+2\mathrm{i}\,N^{-\tau}\big{)}^{2}}\bigg{|}=\frac{| g(s)-g(t)|^{2}}{|s-t|^{2}+4N^{-2\tau}}+N^{-2\tau}\frac{|g^{\prime}(s)+g^{ \prime}(t)|^{2}}{|s-t|^{2}+4N^{-2\tau}}, \tag{3.207}\] so that \[\int_{\mathbb{R}}\int_{\mathbb{R}}N^{-2\tau}\frac{|g^{\prime}(s)+g^{\prime}( t)|^{2}}{|s-t|^{2}+4N^{-2\tau}}dsdt\lesssim\|g^{\prime}\|_{2}^{2} \tag{3.208}\] and \[\int_{\mathbb{R}}\int_{\mathbb{R}}\frac{|g(s)-g(t)|^{2}}{|s-t|^{2}+4N^{-2\tau }}dsdt\lesssim\int_{\mathbb{R}}\int_{\mathbb{R}}\frac{(g(s)-g(t))^{2}}{(s-t)^ {2}}dsdt<\infty, \tag{3.209}\] and therefore the sequence \((V_{N}[g])_{N\in\mathbb{N}}\) is bounded. Now we complete the proof using a standard argument. If we denote \(\varphi_{N}(t):=\mathbb{E}[\mathfrak{e}(t)]\), then by (3.205) and the boundedness of \(V_{N}[g]\), the sequences of functions \((\varphi_{N})\) and \((\varphi_{N}^{\prime})\) are uniformly bounded on \([-a,a]\) for any \(a>0\). By the Arzelà-Ascoli theorem, any subsequence of \((\varphi_{N})\) has a subsequence that converges uniformly on \([-a,a]\) to a function \(\varphi\). From the dominated convergence theorem, \(V_{N}[g]\) converges to \(V[g]\) as \(N\to\infty\), and thus by integrating (3.205), taking the limit over the convergent subsequence and switching the limit and integration we find that \(\varphi(t)=e^{-V[g]t^{2}/2}\). Since \(\varphi(t)\) is the unique limit for all the converging subsequences, the entire sequence \((\varphi_{N})\) converges to \(\varphi\), and we have that for any \(t\in\mathbb{R}\) \[\lim_{N\to\infty}\mathbb{E}[\mathfrak{e}(t)]=e^{-\frac{t^{2}}{2}V[g]}. \tag{3.210}\] Finally, by Lemma 3.2, the characteristic functions of the centered linear eigenvalue statistics converge to the same Gaussian limit \(e^{-\frac{t^{2}}{2}V[g]}\), and we finish the proof of Theorem 2.1 for \(\beta=2\) by invoking Lévy's continuity theorem. 
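As an illustrative aside (not needed for the proof), the boundedness of the regularized variance functional (3.206) and the convergence \(V_{N}[g]\to V[g]\) can be checked numerically. A minimal NumPy sketch, assuming the test profile \(g(s)=e^{-s^{2}}\) and writing \(\varepsilon\) for \(N^{-\tau}\):

```python
import numpy as np

# Discretize the double integral defining V_N[g] in (3.206) for the
# illustrative choice g(s) = exp(-s^2) (any Schwartz bump would do),
# writing eps for the regularization parameter N^{-tau}.
s = np.linspace(-4.0, 4.0, 401)
h = s[1] - s[0]
g = np.exp(-s**2)
gp = -2.0 * s * np.exp(-s**2)

D = s[:, None] - s[None, :]       # s - t
Gd = g[:, None] - g[None, :]      # g(s) - g(t)
Gs = gp[:, None] + gp[None, :]    # g'(s) + g'(t)

def V_eps(eps):
    """Regularized variance functional V_N[g] with N^{-tau} = eps."""
    num = (Gd + 1j * eps * Gs) ** 2
    den = (D + 2j * eps) ** 2
    return h * h / (4 * np.pi**2) * np.sum(num / den)

# Limiting functional V[g]: on the diagonal the integrand has the
# finite limit ((g(s) - g(t)) / (s - t))^2 -> g'(s)^2.
safe_D = np.where(D == 0, 1.0, D)
ratio = np.where(D == 0, gp[:, None], Gd / safe_D)
V_lim = h * h / (4 * np.pi**2) * np.sum(ratio**2)

print(V_lim, V_eps(0.05).real)
```

By the \(s\leftrightarrow t\) symmetry of the integrand the imaginary part of \(V_{N}[g]\) cancels, and the regularized value approaches \(V[g]\) as \(\varepsilon\downarrow 0\).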
## 4 Real symmetric case In this section we collect the changes to the proof presented in Section 3 that establish Theorem 2.1 for \(\beta=1\). In this case the model (2.1) is constructed with real i.i.d. blocks \(\mathbf{X}_{\alpha}\), \(1\leq\alpha\leq d\). ### Results that do not require changes First we notice that all the results about the spectral properties of \(\mathbf{H}\) stated in Section 3.1 hold independently of the symmetry class. In particular, if the self-energy operator \[\Gamma[R]:=\sum_{\alpha=1}^{d}L_{\alpha}R\,L_{\alpha}^{\,t}+L_{\alpha}^{\,t}R \,L_{\alpha} \tag{4.1}\] satisfies the property **(A)** (2.10), then the local laws (3.10)-(3.11) hold for \(\mathbf{H}^{(1)}\). Lemma 3.2 remains valid for \(\beta=1\) without any changes in the proof. The proof of Lemma 3.5 relies on the general properties of the resolvent (3.80)-(3.81) and the local laws (3.10)-(3.11), and therefore holds independently of the symmetry class and can be used for \(\beta=1\) as stated. The differences in the proof are purely notational. Since \(L_{\alpha}\) and \(\mathbf{X}_{\alpha}\) are real, the expression for \(W_{ij}\) (3.77) is replaced by \[W_{ij}=\sum_{\alpha=1}^{d}\bigg{(}L_{\alpha}\,x_{ij}^{(\alpha)}+L_{\alpha}^{\,t }\,x_{ji}^{(\alpha)}\bigg{)}, \tag{4.2}\] and the identity (3.89) is replaced by its real counterpart \[\mathbb{E}[x_{kp}^{(\alpha_{1})}x_{ql}^{(\alpha_{2})}]=\delta_{\alpha_{1} \alpha_{2}}\delta_{kq}\delta_{pl}\frac{1}{N}. \tag{4.3}\] The above identities are applied in (3.86)-(3.88) and do not affect the proof. The sum in (3.86) is taken over \(p,q\neq l\), and for any \(R\in\mathbb{C}^{n\times n}\) we have \[\mathbb{E}_{l}[W_{lp}RW_{ql}]=\frac{1}{N}\Gamma[R]\,\delta_{pq}. \tag{4.4}\] Therefore, we end up with the same expression as in the complex case in (3.87). The remainder of the proof does not need any changes. 
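The averaging identity (4.4) can be sanity-checked by Monte Carlo. A minimal NumPy sketch, assuming small illustrative parameters (\(n=2\), \(d=2\), and entry variance \(1\), i.e. \(N=1\) in (4.3)) and sampling \(W_{lp}\), \(W_{pl}\) according to (4.2) for a fixed off-diagonal pair \((l,p)\):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 2, 2, 200_000          # block size, structure matrices, MC samples
Ls = rng.standard_normal((d, n, n))

def Gamma(R):
    """Self-energy (4.1): Gamma[R] = sum_a (L_a R L_a^t + L_a^t R L_a)."""
    return sum(A @ R @ A.T + A.T @ R @ A for A in Ls)

R = rng.standard_normal((n, n))

# Entries x_{lp}^{(a)} and x_{pl}^{(a)} for a fixed pair l != p,
# normalized to variance 1 (N = 1 in (4.3)).
X = rng.standard_normal((m, d))
Y = rng.standard_normal((m, d))
# W_{lp} and W_{pl} built as in (4.2); 'aji' implements the transpose L_a^t.
W_lp = np.einsum('sa,aij->sij', X, Ls) + np.einsum('sa,aji->sij', Y, Ls)
W_pl = np.einsum('sa,aij->sij', Y, Ls) + np.einsum('sa,aji->sij', X, Ls)

# Empirical E[W_{lp} R W_{pl}], to be compared with Gamma[R].
avg = np.einsum('sij,jk,skl->il', W_lp, R, W_pl) / m
print(np.max(np.abs(avg - Gamma(R))))   # small: Monte Carlo error only
```

The cross terms involving \(x_{lp}^{(\alpha)}x_{pl}^{(\beta)}\) vanish in expectation by independence, leaving exactly the two terms of \(\Gamma[R]\).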
Lemma 3.6 establishes the properties of the operator \(\mathcal{B}_{z,\zeta}\) that are independent of the symmetry class of the Kronecker model. ### Computing \(\mathbb{E}[\mathfrak{e}(t)]\) for \(\beta=1\) The derivation and analysis of the approximate equation for \(\frac{d}{dt}\mathbb{E}[\mathfrak{e}(t)]\) in the case of the Kronecker model with real i.i.d. blocks requires several important modifications compared to the complex case. These modifications are needed to take into account the differences in the correlation structures of real and complex Kronecker models. In order to compare the two cases, consider the operator \(\mathscr{S}^{(\beta)}:\mathbb{C}^{nN\times nN}\to\mathbb{C}^{nN\times nN}\) given by \[\mathscr{S}^{(\beta)}\big{[}\boldsymbol{R}\big{]}:=\mathbb{E}\Big{[}\boldsymbol{ W}^{(\beta)}\boldsymbol{R}\,\boldsymbol{W}^{(\beta)}\Big{]} \tag{4.5}\] for \(\boldsymbol{R}\in\mathbb{C}^{nN\times nN}\) and \(\beta\in\{1,2\}\). This operator appears naturally in the proof of Lemma 3.4, where it takes the form (3.22) \[\mathscr{S}^{(2)}[\boldsymbol{R}]=\mathscr{S}[\boldsymbol{R}]=\Gamma\Big{[} \frac{1}{N}\sum_{j=1}^{N}R_{jj}\Big{]}\otimes\boldsymbol{I}_{N}. \tag{4.6}\] On the other hand, in the case of real i.i.d. blocks we have \[\mathscr{S}^{(1)}[\boldsymbol{R}]=\Gamma\Big{[}\frac{1}{N}\sum_{j=1}^{N}R_{jj }\Big{]}\otimes\boldsymbol{I}_{N}+\frac{1}{N}\sum_{i,j=1}^{N}\widetilde{ \Gamma}\big{[}R_{ji}\big{]}\otimes\boldsymbol{E}_{ij}, \tag{4.7}\] where the operator \(\widetilde{\Gamma}:\mathbb{C}^{n\times n}\to\mathbb{C}^{n\times n}\) is given by \[\widetilde{\Gamma}\big{[}R\big{]}:=\sum_{\alpha=1}^{d}\Big{(}L_{\alpha}RL_{ \alpha}+L_{\alpha}^{\;t}RL_{\alpha}^{\;t}\Big{)}. \tag{4.8}\] Notice that in the case when \(L_{\alpha}=L_{\alpha}^{\;t}\) for all \(\alpha\in\{1,\dots,d\}\) (i.e., when \(\boldsymbol{H}^{(1)}\) is constructed using real Wigner blocks) the operators \(\Gamma\) and \(\widetilde{\Gamma}\) coincide. 
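To make the contrast between \(\Gamma\) and \(\widetilde{\Gamma}\) concrete, here is a small NumPy sketch (illustrative only) checking that the two operators from (4.1) and (4.8) coincide for symmetric \(L_{\alpha}\) and differ for a generic real \(L_{\alpha}\):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

def Gamma(Ls, R):
    """Gamma[R] = sum_a (L_a R L_a^t + L_a^t R L_a), as in (4.1)."""
    return sum(A @ R @ A.T + A.T @ R @ A for A in Ls)

def Gamma_tilde(Ls, R):
    """Gamma~[R] = sum_a (L_a R L_a + L_a^t R L_a^t), as in (4.8)."""
    return sum(A @ R @ A + A.T @ R @ A.T for A in Ls)

R = rng.standard_normal((n, n))

# Symmetric structure matrix (the real Wigner case): the operators agree.
S = rng.standard_normal((n, n))
Ls_sym = [S + S.T]
print(np.allclose(Gamma(Ls_sym, R), Gamma_tilde(Ls_sym, R)))   # True

# A generic non-symmetric L_a: they differ.
Ls_gen = [rng.standard_normal((n, n))]
print(np.allclose(Gamma(Ls_gen, R), Gamma_tilde(Ls_gen, R)))   # False
```

Indeed, \(\Gamma[R]-\widetilde{\Gamma}[R]=-(L-L^{\,t})R(L-L^{\,t})\) for a single structure matrix, which vanishes exactly when \(L\) is symmetric.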
In the general case the operators \(\Gamma\) and \(\widetilde{\Gamma}\) are different. Moreover, applying the operator \(\mathscr{S}^{(1)}\) in the derivation of the equation for \(\frac{d}{dt}\mathbb{E}[\mathfrak{e}(t)]\) gives rise to two types of multiresolvent averages \[G^{B}(z,\zeta):=\frac{1}{N}\sum_{k,l=1}^{N}G_{lk}(z)BG_{kl}(\zeta),\quad \widetilde{G}^{B}(z,\zeta):=\frac{1}{N}\sum_{k,l=1}^{N}G_{lk}(z)BG_{lk}(\zeta). \tag{4.9}\] The details of how these terms emerge when differentiating \(\mathbb{E}[\mathfrak{e}(t)]\) with respect to \(t\) are presented in Lemma 4.2 below. The first quantity \(G^{B}\) from (4.9) was studied in Section 3.3 for \(\beta=2\), and the results of Lemma 3.5, Corollary 3.7 and Lemma 3.8 hold for \(\beta=1\) without any changes. The second quantity \(\widetilde{G}^{B}\) on the other hand requires a new proof which we present below. Notice also that if \(n=1\) (i.e., \(\boldsymbol{H}^{(1)}\) is a real Wigner matrix), then \(G_{kl}=G_{lk}\) and thus the two quantities in (4.9) are the same. Therefore, in the case of one real Wigner matrix the variance of the fluctuations of the linear spectral statistics follows almost immediately from the complex case using the fact that \(\widetilde{\Gamma}=\Gamma\) in (4.7) and \(G^{B}(z,\zeta)=\widetilde{G}^{B}(z,\zeta)\). We start by proving the approximation of \(\widetilde{G}^{B}(z,\zeta)\), which is analogous to part (i) of Corollary 3.7 and Lemma 3.8 for the \(\beta=2\) case. **Lemma 4.1**.: _Let \(\gamma\in(0,1)\), \(\tau\in(0,\min\{\gamma/2,1-\gamma\})\), and let \(B\in\mathbb{C}^{n\times n}\). Denote \(\widehat{\eta}:=\min\{|\operatorname{Im}z|,|\operatorname{Im}\zeta|\}\)._ 1. _Uniformly on_ \(\big{(}\Omega^{+}\times\Omega^{+}\big{)}\cup\big{(}\Omega^{-}\times\Omega^{-} \big{)}\)__ \[\mathbb{E}\Big{[}\big{\|}\widetilde{G}^{B}(z,\zeta)\big{\|}\Big{]}=O_{\prec} \bigg{(}\big{\|}B\big{\|}\Big{(}1+\frac{1}{N^{1/2}\widehat{\eta}^{3/2}}\Big{)} \bigg{)}.\] (4.10) _._ 2. 
_Uniformly on_ \(\big{(}\Omega^{-}\times\Omega^{+}\big{)}\cup\big{(}\Omega^{+}\times\Omega^{-}\big{)}\)__ \[\widetilde{G}^{B}(z,\zeta)=\vartheta\,\frac{2{\rm i}}{\langle\operatorname{Im}M(E_{0}) \rangle}\frac{1}{z-\zeta}\operatorname{Im}M(E_{0})B^{\,t}\operatorname{Im}M(E_{0})+ \widetilde{\mathcal{E}}^{B}(z,\zeta)\] (4.11) _with_ \[\mathbb{E}\Big{[}\big{\|}\widetilde{\mathcal{E}}^{B}(z,\zeta)\big{\|}\Big{]}=O _{\prec}\bigg{(}\big{\|}B\big{\|}\Big{(}1+\frac{1}{N\widehat{\eta}^{\,2}}+ \frac{1}{N^{1/2}\widehat{\eta}^{\,3/2}}\Big{)}\bigg{)} \tag{4.12}\] _and \(\vartheta=1\) for \((z,\zeta)\in\Omega^{+}\times\Omega^{-}\) and \(\vartheta=-1\) for \((z,\zeta)\in\Omega^{-}\times\Omega^{+}\)._ Proof.: For convenience we view \(n\times n\) matrices as elements of the vector space \(\mathbb{C}^{n}\otimes\mathbb{C}^{n}\) with the two interpretations related by the isomorphism \(\varphi:\mathbb{C}^{n\times n}\to\mathbb{C}^{n}\otimes\mathbb{C}^{n}\) acting on the basis vectors \(E_{ij}\in\mathbb{C}^{n\times n}\) as \(\varphi(E_{ij})=e_{i}\otimes e_{j}\) for \(i,j\in\{1,\dots,n\}\). Then for any \(A,B\in\mathbb{C}^{n\times n}\) the linear operator \(\mathbb{C}^{n\times n}\ni R\mapsto A\,R\,B\in\mathbb{C}^{n\times n}\) corresponds to the linear operator \(A\otimes B^{\,t}:\mathbb{C}^{n}\otimes\mathbb{C}^{n}\to\mathbb{C}^{n}\otimes \mathbb{C}^{n}\) in the sense that \[\varphi\big{(}ARB\big{)}=A\otimes B^{\,t}\,\varphi(R), \tag{4.13}\] where as before \(\otimes\) denotes the tensor (Kronecker) product and \(A\otimes B^{\,t}\) is an \(n^{2}\times n^{2}\) matrix. Using the above isomorphism we can write \(\widetilde{G}^{B}(z,\zeta)\) as \[\varphi\Big{(}\widetilde{G}^{B}(z,\zeta)\Big{)}=\Big{[}\frac{1}{N}\sum_{k,l=1 }^{N}G_{lk}(z)\otimes(G_{lk}(\zeta))^{t}\Big{]}\varphi(B). 
\tag{4.14}\] Notice that for all \(k,l\in\{1,\dots,N\}\) the symmetry of \(\boldsymbol{W}\) implies that \(W^{\,t}_{lk}=W_{kl}\), \(\big{(}G_{lk}(z)\big{)}^{t}=G_{kl}(z)\) and thus \[G_{lk}(z)\otimes(G_{lk}(\zeta))^{t}=G_{lk}(z)\otimes G_{kl}(\zeta). \tag{4.15}\] Moreover, the solution of the Dyson equation (2.7) in the case of real matrices \(L_{\alpha}\) is also real symmetric, \(M(z)=(M(z))^{t}\). In the first part we derive an approximate equation for \(\widetilde{G}^{B}(z,\zeta)\). The proof is similar to the proof of Lemma 3.5 and relies on the general resolvent identities (3.83)-(3.84) and the local laws (3.10)-(3.11). Fix \(k\in\{1,\dots,N\}\) and recall that \(\widehat{\eta}:=\min\{|\operatorname{Im}z|,|\operatorname{Im}\zeta|\}\). From (3.83), (3.84) and (3.85) we have that for any \(l\neq k\) uniformly on \((z,\zeta)\in\Omega\times\Omega\) \[G_{lk}(z)\otimes G_{kl}(\zeta)=M(z)\sum_{p\neq l}W_{lp}G^{(l)}_{pk}(z)\otimes \sum_{q\neq l}G^{(l)}_{kq}(\zeta)W_{ql}M(\zeta)+O_{\prec}\Big{(}\frac{1}{(N \widehat{\eta})^{\,3/2}}\Big{)}. \tag{4.16}\] By taking the partial expectation \(\mathbb{E}_{l}\) we get \[G_{lk}(z)\otimes G_{kl}(\zeta)=M(z)\frac{1}{N}\sum_{p\neq l} \sum_{\alpha=1}^{d}\Big{[}L_{\alpha}G^{(l)}_{pk}(z)\otimes G^{(l)}_{kp}(\zeta)L _{\alpha}^{\,t}+L_{\alpha}^{\,t}G^{(l)}_{pk}(z)\otimes G^{(l)}_{kp}(\zeta)L_{ \alpha}\Big{]}M(\zeta) \tag{4.17}\] \[\qquad\qquad\qquad\qquad\qquad\qquad+M(z)(1-\mathbb{E}_{l}) \Big{[}G_{lk}(z)\otimes G_{kl}(\zeta)\Big{]}M(\zeta)+O_{\prec}\Big{(}\frac{1} {(N\widehat{\eta})^{\,3/2}}\Big{)}. 
\tag{4.18}\] Applying (3.91), (3.98) and summing the above equality over \(l\in\{1,\dots,N\}\) yields \[\sum_{l=1}^{N}G_{lk}(z)\otimes G_{kl}(\zeta)=M(z)\otimes M(\zeta) +M(z)\sum_{p=1}^{N}\sum_{\alpha=1}^{d}\Big{[}L_{\alpha}G_{pk}(z) \otimes G_{kp}(\zeta)L_{\alpha}^{\,t}+L_{\alpha}^{\,t}G_{pk}(z)\otimes G_{kp}( \zeta)L_{\alpha}\Big{]}M(\zeta)\] \[\qquad\qquad\qquad\qquad\qquad+M(z)\sum_{l:l\neq k}(1-\mathbb{E}_ {l})\Big{[}G_{lk}(z)\otimes G_{kl}(\zeta)\Big{]}M(\zeta)+O_{\prec}\Big{(} \frac{1}{N^{1/2}\widehat{\eta}^{\,3/2}}\Big{)}. \tag{4.19}\] We multiply the left tensor factor in (4.19) from the left by \(M^{-1}(z)\) and the right tensor factor from the right by \(M^{-1}(\zeta)\). By averaging over \(k\in\{1,\dots,N\}\) and repeating the proof in Lemma 3.5 we show that the error term \[\widetilde{\mathcal{E}}_{1}(z,\zeta):=\frac{1}{N}\sum_{k,l=1}^{N} \frac{1}{M(z)}G_{lk}(z)\otimes G_{kl}(\zeta)\frac{1}{M(\zeta)} \tag{4.20}\] \[\qquad\qquad\qquad\qquad-I_{n}\otimes I_{n}-\frac{1}{N}\sum_{k,p=1 }^{N}\sum_{\alpha=1}^{d}\Big{[}L_{\alpha}G_{pk}(z)\otimes G_{kp}(\zeta)L_{ \alpha}^{\,t}+L_{\alpha}^{\,t}G_{pk}(z)\otimes G_{kp}(\zeta)L_{\alpha}\Big{]}\] is analytic on \(\Omega\times\Omega\) and satisfies the bound \[\mathbb{E}[\|\widetilde{\mathcal{E}}_{1}(z,\zeta)\|]\prec\frac{1}{N^{1/2}\widehat{ \eta}^{3/2}}. 
\tag{4.21}\] Now we rewrite equation (4.20) as \[\widehat{\mathcal{B}}_{z,\zeta}\Big{[}\frac{1}{N}\sum_{k,l=1}^{N}G_{lk}(z) \otimes G_{kl}(\zeta)\Big{]}=I_{n}\otimes I_{n}+\widetilde{\mathcal{E}}_{1}(z,\zeta), \tag{4.22}\] where the linear operator \(\widehat{\mathcal{B}}_{z,\zeta}:\mathbb{C}^{n\times n}\otimes\mathbb{C}^{n \times n}\to\mathbb{C}^{n\times n}\otimes\mathbb{C}^{n\times n}\) is given by \[\widehat{\mathcal{B}}_{z,\zeta}\big{[}A\otimes B\big{]}=\frac{1}{M(z)}A \otimes B\frac{1}{M(\zeta)}-\sum_{\alpha=1}^{d}\Big{[}L_{\alpha}A\otimes BL_{ \alpha}^{\,t}+L_{\alpha}^{\,t}A\otimes BL_{\alpha}\Big{]} \tag{4.23}\] for all \(A\otimes B\in\mathbb{C}^{n\times n}\otimes\mathbb{C}^{n\times n}\). To show the invertibility of \(\widehat{\mathcal{B}}_{z,\zeta}\) we define a linear operator \(\Phi:\mathbb{C}^{n\times n}\otimes\mathbb{C}^{n\times n}\to\mathbb{C}^{n \times n}\otimes\mathbb{C}^{n\times n}\), acting on the basis vectors \(E_{ij}\otimes E_{kl}\) as \[\Phi\big{[}E_{ij}\otimes E_{kl}\big{]}=E_{il}\otimes E_{kj}. \tag{4.24}\] The operator \(\Phi\) is an involution, \(\Phi^{2}=\mathrm{Id}_{n}\otimes\mathrm{Id}_{n}\). For any vectors \(a,b,c,d\in\mathbb{C}^{n}\) it satisfies \[\Phi\big{[}ab^{\,t}\otimes cd^{\,t}\big{]}=ad^{\,t}\otimes cb^{\,t}, \tag{4.25}\] which in composition with \(\widehat{\mathcal{B}}_{z,\zeta}\) gives \[\Phi\circ\widehat{\mathcal{B}}_{z,\zeta}\big{[}E_{ij}\otimes E_{ kl}\big{]} =\Phi\bigg{[}\frac{1}{M(z)}E_{ij}\otimes E_{kl}\frac{1}{M(\zeta)}- \sum_{\alpha=1}^{d}\Big{[}L_{\alpha}E_{ij}\otimes E_{kl}L_{\alpha}^{\,t}+L_{ \alpha}^{\,t}E_{ij}\otimes E_{kl}L_{\alpha}\Big{]}\bigg{]} \tag{4.26}\] \[=\frac{1}{M(z)}E_{il}\frac{1}{M(\zeta)}\otimes E_{kj}-\sum_{ \alpha=1}^{d}\Big{[}L_{\alpha}E_{il}L_{\alpha}^{\,t}\otimes E_{kj}+L_{\alpha}^ {\,t}E_{il}L_{\alpha}\otimes E_{kj}\Big{]}\] (4.27) \[=\big{(}\mathcal{B}_{z,\zeta}\otimes\mathrm{Id}\big{)}\circ\Phi \Big{[}E_{ij}\otimes E_{kl}\Big{]}. 
\tag{4.28}\] We see that \[\widehat{\mathcal{B}}_{z,\zeta}=\Phi\circ\big{(}\mathcal{B}_{z,\zeta}\otimes \mathrm{Id}\big{)}\circ\Phi, \tag{4.29}\] which means that the invertibility of \(\widehat{\mathcal{B}}_{z,\zeta}\) is equivalent to the invertibility of \(\mathcal{B}_{z,\zeta}\). By Lemma 3.6, the operator \(\widehat{\mathcal{B}}_{z,\zeta}\) is invertible for \((z,\zeta)\in\Omega\times\Omega\) and \[\widehat{\mathcal{B}}_{z,\zeta}^{-1}=\Phi\circ\big{(}\mathcal{B}_{z,\zeta}^{-1 }\otimes\mathrm{Id}\big{)}\circ\Phi. \tag{4.30}\] Applying now \(\widehat{\mathcal{B}}_{z,\zeta}^{-1}\) on both sides of (4.22) yields \[\frac{1}{N}\sum_{k,l=1}^{N}G_{lk}(z)\otimes G_{kl}(\zeta)=\Phi\circ\big{(} \mathcal{B}_{z,\zeta}^{-1}\otimes\mathrm{Id}\big{)}\circ\Phi\Big{[}I_{n} \otimes I_{n}\Big{]}+\Phi\circ\big{(}\mathcal{B}_{z,\zeta}^{-1}\otimes \mathrm{Id}\big{)}\circ\Phi\Big{[}\widetilde{\mathcal{E}}_{1}(z,\zeta)\Big{]}. \tag{4.31}\] After applying \(\varphi^{-1}\) in (4.14), the estimate (4.10) follows from the error bound (4.21) and the boundedness of \(\mathcal{B}_{z,\zeta}\) given in (3.110). The direct application of (3.112) to (4.31) results in an error term that is asymptotically too large on certain mesoscopic scales, and therefore is insufficient to prove the estimate (4.11)-(4.12). In order to resolve this problem, consider the operator \(\widehat{\mathcal{Q}}:\mathbb{C}^{n\times n}\otimes\mathbb{C}^{n\times n}\to \mathbb{C}^{n\times n}\otimes\mathbb{C}^{n\times n}\) given by \[\widehat{\mathcal{Q}}=\Phi\circ\big{(}\mathcal{Q}\otimes\mathrm{Id}_{n}\big{)} \circ\Phi, \tag{4.32}\] where \(\mathcal{Q}\) was defined in (3.155). 
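As a quick numerical sanity check of the algebra above (entirely outside the proof), both the vectorization rule (4.13) and the swap action (4.25) of the involution \(\Phi\) can be verified for small \(n\). The sketch below is illustrative only: it realizes \(\varphi\) as row-major flattening and \(\Phi\) as a partial transpose of the two column sub-indices of an \(n^{2}\times n^{2}\) matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A, R, B = (rng.standard_normal((n, n)) for _ in range(3))

# phi is row-major vectorization: phi(E_ij) = e_i (x) e_j
phi = lambda X: X.reshape(-1)

# the vectorization rule (4.13): phi(A R B) = (A (x) B^t) phi(R)
assert np.allclose(phi(A @ R @ B), np.kron(A, B.T) @ phi(R))

def Phi(X):
    """The involution (4.24): swap the two column sub-indices of an
    n^2 x n^2 matrix, i.e. Phi[E_ij (x) E_kl] = E_il (x) E_kj."""
    return X.reshape(n, n, n, n).transpose(0, 1, 3, 2).reshape(n * n, n * n)

a, b, c, d = (rng.standard_normal(n) for _ in range(4))
X = np.kron(np.outer(a, b), np.outer(c, d))
# the swap rule (4.25): Phi[a b^t (x) c d^t] = a d^t (x) c b^t
assert np.allclose(Phi(X), np.kron(np.outer(a, d), np.outer(c, b)))
assert np.allclose(Phi(Phi(X)), X)  # Phi is an involution
```

In particular, the last two assertions confirm that conjugating by \(\Phi\), as in (4.29), merely relabels which tensor factor an operator acts on.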
Since \(\Phi\) is an involution, we have \[\mathrm{Id}_{n}\otimes\mathrm{Id}_{n}-\widehat{\mathcal{Q}}=\Phi\circ\big{(}( \mathrm{Id}_{n}-\mathcal{Q})\otimes\mathrm{Id}_{n}\big{)}\circ\Phi, \tag{4.33}\] which together with (4.30) and (3.112) implies that \[\Big{(}\mathrm{Id}_{n}\otimes\mathrm{Id}_{n}-\widehat{\mathcal{Q}}\Big{)} \widehat{\mathcal{B}}_{z,\zeta}^{-1}=\Phi\circ\big{(}\mathcal{J}_{z,\zeta} \otimes\mathrm{Id}_{n}\big{)}\circ\Phi. \tag{4.34}\] Now, by using (4.34) and (4.22), we decompose the left-hand side of (4.31) as \[\frac{1}{N}\sum_{k,l=1}^{N}G_{lk}(z)\otimes G_{kl}(\zeta)=\widehat{\mathcal{Q}} \left[\frac{1}{N}\sum_{k,l=1}^{N}G_{lk}(z)\otimes G_{kl}(\zeta)\right]+\Phi \circ\big{(}\mathcal{J}_{z,\zeta}\otimes\mathrm{Id}_{n}\big{)}\circ\Phi\,\Big{[} I_{n}\otimes I_{n}+\widetilde{\mathcal{E}}_{1}(z,\zeta)\Big{]}. \tag{4.35}\] The last term in (4.35), after evaluating it at \(\varphi(B)\), applying \(\varphi^{-1}\) and taking the expectation, is of order \(O_{\prec}(\|B\|(1+N^{-1/2}\widehat{\eta}^{-3/2}))\) and analytic on \((z,\zeta)\in(\Omega^{+}\times\Omega^{-})\cup(\Omega^{-}\times\Omega^{+})\). It remains to estimate \[\varphi^{-1}\bigg{(}\widehat{\mathcal{Q}}\left[\frac{1}{N}\sum_{k,l=1}^{N}G_{ lk}(z)\otimes G_{kl}(\zeta)\right]\varphi\big{(}B\big{)}\bigg{)}=\frac{1}{N} \sum_{k,l=1}^{N}\varphi^{-1}\bigg{(}\widehat{\mathcal{Q}}\,\Big{[}G_{lk}(z) \otimes G_{kl}(\zeta)\Big{]}\varphi\big{(}B\big{)}\bigg{)}. \tag{4.36}\] Denote \(M_{0}:=\lim_{y\downarrow 0}M(E_{0}+\mathrm{i}\,y)\) for brevity, and denote by \(\{\ell_{k},1\leq k\leq n\}\) the collection of (non-normalized) eigenvectors of \(\operatorname{Im}M_{0}\), so that \[\operatorname{Im}M_{0}=\sum_{k=1}^{n}\ell_{k}\ell_{k}^{t}. \tag{4.37}\] Here we used that \(\operatorname{Im}M_{0}\) is real symmetric in the case \(\beta=1\). 
This follows from \(\Gamma[R]^{t}=\Gamma[R^{t}]\) for any \(R\in\mathbb{C}^{n\times n}\) and therefore \(M(z)=M(z)^{t}\), because \(M(z)^{t}\) also satisfies the Dyson equation. For any \(S=(s_{ij})\in\mathbb{C}^{n\times n}\) and \(T=(t_{\alpha\beta})\in\mathbb{C}^{n\times n}\) we have \[\widehat{\mathcal{Q}}\left[S\otimes T\right] =\sum_{i,j,\alpha,\beta=1}^{n}s_{ij}t_{\alpha\beta}\,\Phi\circ \big{(}\mathcal{Q}\otimes\mathrm{Id}_{n}\big{)}\circ\Phi\,\Big{[}E_{ij} \otimes E_{\alpha\beta}\Big{]} \tag{4.38}\] \[=\sum_{i,j,\alpha,\beta=1}^{n}s_{ij}t_{\alpha\beta}\,\Phi\circ \big{(}\mathcal{Q}\otimes\mathrm{Id}_{n}\big{)}\,\Big{[}E_{i\beta}\otimes E _{\alpha j}\Big{]}\] (4.39) \[=\frac{1}{\|\operatorname{Im}M_{0}\|_{\mathrm{HS}}^{2}}\sum_{i,j,\alpha,\beta,m=1}^{n}s_{ij}t_{\alpha\beta}\,\langle\operatorname{Im}M_{0},E_{ i\beta}\rangle\,\Phi\,\Big{(}\ell_{m}\ell_{m}^{t}\otimes E_{\alpha j}\Big{)}\] (4.40) \[=\frac{1}{\|\operatorname{Im}M_{0}\|_{\mathrm{HS}}^{2}}\sum_{i,j,\alpha,\beta,m}s_{ij}t_{\alpha\beta}\,\Big{(}\ell_{m}e_{j}^{t}\otimes e_{ \alpha}e_{\beta}^{t}\operatorname{Im}M_{0}\,e_{i}\,\ell_{m}^{t}\Big{)}\] (4.41) \[=\frac{1}{\|\operatorname{Im}M_{0}\|_{\mathrm{HS}}^{2}}\sum_{i,j,m}s_{ij}\,\Big{(}\ell_{m}e_{j}^{t}\otimes T\operatorname{Im}M_{0}\,e_{i}\, \ell_{m}^{t}\Big{)}. 
\tag{4.42}\] After evaluating the above operator at \(\varphi(B)\) and taking \(\varphi^{-1}\), we get \[\varphi^{-1}\bigg{(}\widehat{\mathcal{Q}}\,\Big{[}S\otimes T\Big{]}\varphi\big{(}B\big{)}\bigg{)} =\frac{1}{\|\operatorname{Im}M_{0}\|_{\mathrm{HS}}^{2}}\sum_{i,j,m=1}^{n}s_{ij}\,\varphi^{-1}\bigg{(}\,\Big{(}\ell_{m}e_{j}^{t}\otimes T\operatorname{Im}M_{0}e_{i}\,\ell_{m}^{t}\Big{)}\varphi\big{(}B\big{)}\bigg{)} \tag{4.43}\] \[=\frac{1}{\|\operatorname{Im}M_{0}\|_{\mathrm{HS}}^{2}}\sum_{i,j,m=1}^{n}s_{ij}\,\ell_{m}e_{j}^{t}B\ell_{m}e_{i}^{t}\operatorname{Im}M_{0}T^{t}\] (4.44) \[=\frac{\operatorname{Im}M_{0}\,B^{t}\,S^{t}\operatorname{Im}M_{0}\,T^{t}}{\|\operatorname{Im}M_{0}\|_{\mathrm{HS}}^{2}}, \tag{4.46}\] where in the second step we used the definition of \(\varphi\) in (4.13). Now we apply the above formula with \(S=G_{lk}(z)\) and \(T=G_{kl}(\zeta)\), and use that \((G_{kl})^{t}=G_{lk}\), to rewrite (4.36) as \[\varphi^{-1}\bigg{(}\widehat{\mathcal{Q}}\,\Big{[}\frac{1}{N}\sum_{k,l=1}^{N}G_{lk}(z)\otimes G_{kl}(\zeta)\Big{]}\varphi\big{(}B\big{)}\bigg{)} =\frac{1}{\|\operatorname{Im}M_{0}\|_{\mathrm{HS}}^{2}}\operatorname{Im}M_{0}\,B^{t}\frac{1}{N}\sum_{k,l=1}^{N}G_{kl}(z)\operatorname{Im}M_{0}G_{lk}(\zeta) \tag{4.47}\] \[=\frac{1}{\|\operatorname{Im}M_{0}\|_{\mathrm{HS}}^{2}}\operatorname{Im}M_{0}\,B^{t}\,G^{\operatorname{Im}M_{0}}(z,\zeta). \tag{4.48}\] Lemma 3.8 implies that \[\frac{1}{\|\operatorname{Im}M_{0}\|_{\mathrm{HS}}^{2}}\operatorname{Im}M_{0}\,B^{t}\,G^{\operatorname{Im}M_{0}}(z,\zeta)=\vartheta\,\frac{2\mathrm{i}}{z-\zeta}\frac{\operatorname{Im}M_{0}B^{t}\operatorname{Im}M_{0}}{\langle\operatorname{Im}M_{0}\rangle}+\mathcal{E}^{B}(z,\zeta), \tag{4.49}\] where \(\mathbb{E}[\|\mathcal{E}^{B}(z,\zeta)\|]=O_{\prec}\Big{(}1+N^{-1}\widehat{\eta}^{\,-2}+\|B\|\big{(}1+N^{-1/2}\widehat{\eta}^{\,-3/2}\big{)}\Big{)}\), and \(\vartheta=1\) for \((z,\zeta)\in\Omega^{+}\times\Omega^{-}\) and \(\vartheta=-1\) for \((z,\zeta)\in\Omega^{-}\times\Omega^{+}\).
This, together with the estimate of the last term in (4.35), completes the proof of the lemma. Now we state and prove the analog of Lemma 3.4 for \(\beta=1\). **Lemma 4.2**.: _Let \(\gamma\in(0,1)\) and \(\tau\in\big{(}0,\min\{\gamma,(1-\gamma)\}/7\big{)}\). Then_ \[\frac{d}{dt}\mathbb{E}[\mathfrak{e}(t)]=-\frac{t}{\pi^{2}}\int_{\Omega\times\Omega}\frac{\partial\tilde{f}(z)}{\partial\overline{z}}\frac{\partial\tilde{f}(\zeta)}{\partial\overline{\zeta}}\frac{\partial}{\partial\zeta}\mathbb{E}\Big{[}\sum_{i,j=1}^{n}\operatorname{Tr}\Big{(}E_{ji}\,S_{ij}(z,\zeta)+E_{ji}\widetilde{S}_{ij}(z,\zeta)\Big{)}\mathfrak{e}(t)\Big{]}d^{2}\zeta d^{2}z+\mathcal{E} \tag{4.50}\] _where \(|\mathcal{E}|\prec N^{-\tau}\),_ \[S_{ij}(z,\zeta):=\frac{1}{N}\sum_{k,l=1}^{N}G_{lk}(z)\frac{1}{M(z)}\mathcal{B}_{z}^{-1}[I_{n}]\,\Gamma[E_{ij}]G_{kl}(\zeta) \tag{4.51}\] _and_ \[\widetilde{S}_{ij}(z,\zeta):=\frac{1}{N}\sum_{k,l=1}^{N}G_{lk}(z)\frac{1}{M(z)}\mathcal{B}_{z}^{-1}[I_{n}]\,\widetilde{\Gamma}[E_{ij}]G_{lk}(\zeta). \tag{4.52}\] Proof.: Recall that the operator \(\mathscr{S}^{(1)}\) was defined in (4.7). We keep the notation \(\mathscr{S}[\mathbf{R}]=\Gamma\Big{[}\frac{1}{N}\sum_{j=1}^{N}R_{jj}\Big{]}\otimes\mathbf{I}_{N}\) as in Section 3, and denote \(\widetilde{\mathscr{S}}[\mathbf{R}]:=\frac{1}{N}\sum_{i,j=1}^{N}\widetilde{\Gamma}[R_{ji}]\otimes\mathbf{E}_{ij}\), so that \[\mathscr{S}^{(1)}=\mathscr{S}+\widetilde{\mathscr{S}}. \tag{4.53}\] The lines (3.32)-(3.39) are repeated exactly as in the proof of Lemma 3.4 with \(\mathscr{S}\) replaced by \(\mathscr{S}^{(1)}\). Then we apply the decomposition (4.53) on the right-hand side of (3.41)-(3.43) and treat the part containing \(\widetilde{\mathscr{S}}\) as an additional error term.
We end up with the equation \[-\mathbb{E}\Big{[}\mathfrak{e}(t)\big{(}1-\mathbb{E}\big{)}\big{[}\mathscr{S}^{(1)}[\mathbf{G}(z)]\mathbf{G}(z)\big{]}\Big{]} =\mathbb{E}\Big{[}\mathfrak{e}(t)\Big{(}\frac{1}{\mathbf{M}(z)}+z\mathbf{I}_{nN}-\mathbf{K}_{0}\Big{)}\big{(}1-\mathbb{E}\big{)}\mathbf{G}(z)\Big{]} \tag{4.54}\] \[-\mathbb{E}\Big{[}\mathfrak{e}(t)\big{(}1-\mathbb{E}\big{)}\big{[}\mathscr{S}[\mathbf{G}(z)]\mathbf{M}(z)\big{]}\Big{]}+\mathcal{E}_{3}(z,\zeta)+\widetilde{\mathcal{E}}_{3}(z,\zeta), \tag{4.55}\] where \(\mathcal{E}_{3}\) is defined in (3.44) and the second error term above is given by \[\widetilde{\mathcal{E}}_{3}(z,\zeta):=-\mathbb{E}\Big{[}\mathfrak{e}(t)\big{(}1-\mathbb{E}\big{)}\big{[}\widetilde{\mathscr{S}}\big{[}\mathbf{G}(z)\big{]}\mathbf{G}(z)\big{]}\Big{]}. \tag{4.56}\] After using (3.38) (with \(\mathscr{S}^{(1)}\) instead of \(\mathscr{S}\)) and (3.33) we obtain the same equation as in (3.45) with an additional error term \[\frac{1}{\mathbf{M}(z)}\mathbb{E}\Big{[}\mathfrak{e}(t)\big{(}1-\mathbb{E}\big{)}\big{[}\mathbf{G}(z)\big{]}\Big{]}=\mathbb{E}\Big{[}\mathfrak{e}(t)\big{(}1-\mathbb{E}\big{)}\big{[}\mathscr{S}[\mathbf{G}(z)]\mathbf{M}(z)\big{]}\Big{]}-\mathbb{E}\Big{[}\widetilde{\mathbf{W}}\mathbf{G}(z)\nabla_{\widetilde{\mathbf{W}}}\big{(}\mathfrak{e}(t)\big{)}\Big{]}-\mathcal{E}_{2}-\mathcal{E}_{3}-\widetilde{\mathcal{E}}_{3}. \tag{4.57}\] We see that the operator \(\widetilde{\mathscr{S}}\) is not explicitly appearing in the leading expressions of (4.57) anymore and the terms have exactly the same form as in (3.45). Therefore, we repeat (3.46)-(3.57) in the proof of Lemma 3.4 line by line with the error term \(\mathcal{E}_{3}\) replaced by \(\mathcal{E}_{3}+\widetilde{\mathcal{E}}_{3}\). The operator \(\mathscr{S}^{(1)}\) reappears when we repeat the computations in (3.58)-(3.61).
After taking the partial expectation with respect to \(\widetilde{\mathbf{W}}\), the operator \(\mathscr{S}\) in (3.61) is replaced by \(\mathscr{S}^{(1)}=\mathscr{S}+\widetilde{\mathscr{S}}\). More precisely, we have \[\mathbb{E}\Big{[}\operatorname{Tr}\Big{(}\mathscr{B}_{z}^{-1}\Big{[}\widetilde{\mathbf{W}}\mathbf{G}(z)\frac{1}{\mathbf{M}(z)}\Big{]}\Big{)}\operatorname{Tr}\Big{(}\widetilde{\mathbf{W}}\mathbf{G}(\zeta)\Big{)}\mathfrak{e}(t)\Big{]} \tag{4.58}\] \[=\mathbb{E}\Big{[}\sum_{i,j=1}^{n}\sum_{p,q=1}^{N}\operatorname{Tr}\Big{(}(E_{ji}\otimes\mathbf{E}_{qp})\mathbf{G}(z)\frac{1}{\mathbf{M}(z)}\big{(}(\mathscr{B}_{z}^{*})^{-1}[\mathbf{I}_{nN}]\big{)}^{*}\mathscr{S}^{(1)}[E_{ij}\otimes\mathbf{E}_{pq}]\,\mathbf{G}(\zeta)\Big{)}\mathfrak{e}(t)\Big{]}. \tag{4.59}\] By applying the decomposition (4.53) and using (3.63) we rewrite the term containing \(\mathscr{S}\) as \[\mathbb{E}\Big{[}\sum_{i,j=1}^{n}\sum_{p,q=1}^{N}\operatorname{Tr}\Big{(}(E_{ji}\otimes\mathbf{E}_{qp})\mathbf{G}(z)\frac{1}{\mathbf{M}(z)}\big{(}(\mathscr{B}_{z}^{*})^{-1}[\mathbf{I}_{nN}]\big{)}^{*}\mathscr{S}[E_{ij}\otimes\mathbf{E}_{pq}]\,\mathbf{G}(\zeta)\Big{)}\mathfrak{e}(t)\Big{]}\\ =\mathbb{E}\Big{[}\sum_{i,j=1}^{n}\sum_{k,l=1}^{N}\operatorname{Tr}\Big{(}E_{ji}G_{lk}(z)\frac{1}{M(z)}\mathcal{B}_{z}^{-1}[I_{n}]\Gamma[E_{ij}]G_{kl}(\zeta)\Big{)}\mathfrak{e}(t)\frac{1}{N}\Big{]}, \tag{4.60}\] which gives \(S_{ij}(z,\zeta)\) in (4.50). In order to obtain the term in (4.50) that contains \(\widetilde{S}_{ij}(z,\zeta)\), we notice that \[\widetilde{\mathscr{S}}\big{[}E_{ij}\otimes\mathbf{E}_{pq}\big{]}=\frac{1}{N}\widetilde{\Gamma}[E_{ij}]\otimes\mathbf{E}_{qp} \tag{4.61}\] for any \(E_{ij}\otimes\mathbf{E}_{pq}\).
Therefore, \[\mathbb{E}\Big{[} \sum_{i,j=1}^{n}\sum_{p,q=1}^{N}\operatorname{Tr}\Big{(}(E_{ji}\otimes\mathbf{E}_{qp})\mathbf{G}(z)\frac{1}{\mathbf{M}(z)}\big{(}(\mathscr{B}_{z}^{*})^{-1}[\mathbf{I}_{nN}]\big{)}^{*}\widetilde{\mathscr{S}}[E_{ij}\otimes\mathbf{E}_{pq}]\,\mathbf{G}(\zeta)\Big{)}\mathfrak{e}(t)\Big{]} \tag{4.62}\] \[=\mathbb{E}\Big{[}\mathfrak{e}(t)\frac{1}{N}\sum_{i,j=1}^{n}\sum_{p,q=1}^{N}\operatorname{Tr}\Big{(}(E_{ji}\otimes\mathbf{E}_{qp})\mathbf{G}(z)\frac{1}{\mathbf{M}(z)}\big{(}(\mathscr{B}_{z}^{*})^{-1}[\mathbf{I}_{nN}]\big{)}^{*}\big{(}\widetilde{\Gamma}[E_{ij}]\otimes\mathbf{E}_{qp}\big{)}\,\mathbf{G}(\zeta)\Big{)}\Big{]}\] (4.63) \[=\mathbb{E}\Big{[}\mathfrak{e}(t)\frac{1}{N}\sum_{i,j=1}^{n}\sum_{k,l=1}^{N}\operatorname{Tr}\Big{(}E_{ji}G_{lk}(z)\frac{1}{M(z)}\mathscr{B}_{z}^{-1}[I_{n}]\widetilde{\Gamma}[E_{ij}]G_{lk}(\zeta)\Big{)}\Big{]}, \tag{4.64}\] which gives rise to the summand containing \(\widetilde{S}_{ij}(z,\zeta)\) from (4.52) in (4.50). It remains to estimate the error term \(\mathcal{E}=\mathcal{E}_{1}+\widetilde{\mathcal{E}}_{1}\), where \(\widetilde{\mathcal{E}}_{1}\) is defined through (3.55) and (3.48) with \(\mathcal{E}_{3}\) replaced by \(\widetilde{\mathcal{E}}_{3}\). The error term \(\mathcal{E}_{1}\) satisfies the same bound as in Lemma 3.4, namely \(|\mathcal{E}_{1}|\prec N^{-\tau}\). For \(\widetilde{\mathcal{E}}_{1}\) we see that \[(\operatorname{Id}_{n}\otimes\operatorname{Tr}_{N})\Big{[}\widetilde{\mathscr{S}}[\mathbf{G}(z)]\mathbf{G}(z)\Big{]} =\frac{1}{N}\sum_{k,l=1}^{N}\widetilde{\Gamma}[G_{lk}(z)]G_{lk}(z) \tag{4.65}\] \[=\sum_{\alpha=1}^{d}\Big{(}L_{\alpha}\frac{1}{N}\sum_{k,l=1}^{N}G_{lk}(z)L_{\alpha}G_{lk}(z)+L_{\alpha}^{\,t}\frac{1}{N}\sum_{k,l=1}^{N}G_{lk}(z)L_{\alpha}^{\,t}G_{lk}(z)\Big{)}\] (4.66) \[=\sum_{\alpha=1}^{d}\Big{(}L_{\alpha}\,\widetilde{G}^{L_{\alpha}}(z,z)+L_{\alpha}^{\,t}\,\widetilde{G}^{L_{\alpha}^{\,t}}(z,z)\Big{)}.
\tag{4.67}\] From the first statement in Lemma 4.1 applied to \(\widetilde{G}^{L_{\alpha}}(z,z)\) and \(\widetilde{G}^{L_{\alpha}^{\,t}}(z,z)\) we have \[\mathbb{E}\Big{[}\Big{\|}(\operatorname{Id}_{n}\otimes\operatorname{Tr}_{N})\Big{[}\widetilde{\mathscr{S}}[\mathbf{G}(z)]\mathbf{G}(z)\Big{]}\Big{\|}\Big{]}=O_{\prec}\Big{(}1+\frac{1}{N^{1/2}|\operatorname{Im}z|^{\,3/2}}\Big{)}, \tag{4.68}\] and thus (see (3.48)) \[\Big{\|}\mathcal{B}_{z}^{-1}(\operatorname{Id}_{n}\otimes\operatorname{Tr}_{N})\Big{[}\widetilde{\mathcal{E}}_{3}\frac{1}{\mathbf{M}(z)}\Big{]}\Big{\|}=O_{\prec}\Big{(}1+\frac{1}{N^{1/2}|\operatorname{Im}z|^{3/2}}\Big{)} \tag{4.69}\] uniformly on \(\Omega\times\Omega\). Finally, integrating the trace of the above expression as in (3.70) yields the estimate \[|\widetilde{\mathcal{E}}_{1}|\prec N^{3\tau/2}\big{(}\|g\|_{1}+\|g^{\prime}\|_{1}\big{)}\frac{1}{(N\eta_{0})^{1/2}}. \tag{4.70}\] Choosing \(\tau\) such that \(1-\gamma>7\tau\) ensures that both \(|\mathcal{E}_{1}|\) and \(|\widetilde{\mathcal{E}}_{1}|\) are bounded by \(N^{-\tau}\). ### Proof of Theorem 2.1 for \(\beta=1\) With Lemma 4.2 established, we proceed with the proof of Theorem 2.1 for \(\beta=1\). Consider the integral \[\mathcal{V}=\frac{1}{\pi^{2}}\int_{\Omega\times\Omega}\frac{\partial\tilde{f}(z)}{\partial\overline{z}}\,\frac{\partial\tilde{f}(\zeta)}{\partial\overline{\zeta}}\frac{\partial}{\partial\zeta}\mathbb{E}\Big{[}\sum_{i,j=1}^{n}\operatorname{Tr}\Big{(}E_{ji}\,S_{ij}(z,\zeta)+E_{ji}\widetilde{S}_{ij}(z,\zeta)\Big{)}\mathfrak{e}(t)\Big{]}d^{2}\zeta d^{2}z \tag{4.71}\] appearing in (4.50) in Lemma 4.2 with \(S_{ij}(z,\zeta)\) and \(\widetilde{S}_{ij}(z,\zeta)\) defined in (4.51) and (4.52).
Using the notation introduced in (4.9) we can write \(S_{ij}(z,\zeta)\) and \(\widetilde{S}_{ij}(z,\zeta)\) in (4.71) as \[S_{ij}(z,\zeta)=G^{B_{ij}(z)}(z,\zeta),\quad\widetilde{S}_{ij}(z,\zeta)= \widetilde{G}^{\widetilde{B}_{ij}(z)}(z,\zeta) \tag{4.72}\] with \[B_{ij}(z):=\frac{1}{M(z)}\mathcal{B}_{z}^{-1}[I_{n}]\,\Gamma[E_{ij}],\quad \widetilde{B}_{ij}(z):=\frac{1}{M(z)}\mathcal{B}_{z}^{-1}[I_{n}]\,\widetilde{ \Gamma}[E_{ij}]. \tag{4.73}\] Similarly as in (3.171), we have \[B_{ij}(z)=B_{ij}(E_{0})+O(|\operatorname{Im}z|),\qquad\widetilde{B}_{ij}(z)= \widetilde{B}_{ij}(E_{0})+O(|\operatorname{Im}z|). \tag{4.74}\] It follows from the argument in (3.177)-(3.181) that if for some \(\dagger,\star\in\{-,+\}\) the function \(h\) is analytic on \(\Omega^{\dagger}\times\Omega^{\star}\) and satisfies on this set the bound \(|h(z,\zeta)|\prec|\operatorname{Im}z|^{-\alpha}|\operatorname{Im}\zeta|^{-\beta}\), then \[\bigg{|}\frac{1}{\pi^{2}}\int_{\Omega^{\dagger}\times\Omega^{\ast}}\frac{ \partial\tilde{f}(z)}{\partial\overline{z}}\frac{\partial\tilde{f}(\zeta)}{ \partial\tilde{\zeta}}\frac{\partial}{\partial\zeta}h(z,\zeta)d^{2}\zeta d^{ 2}z\,\bigg{|}\prec\frac{N^{(\alpha+\beta+1)\tau}}{\eta_{0}^{\alpha+\beta-1}}. \tag{4.75}\] Now, following (3.173)-(3.174), we split the integral over \(\Omega\times\Omega\) into four parts \[\mathcal{V}=\mathcal{V}^{\,(+,+)}+\mathcal{V}^{\,(+,-)}+\mathcal{V}^{\,(-,+) }+\mathcal{V}^{\,(-,-)} \tag{4.76}\] determined by the signs of \(\operatorname{Im}z\) and \(\operatorname{Im}\zeta\). Denote \[h_{1}(z,\zeta):=\mathbb{E}\Big{[}\sum_{i,j=1}^{n}\operatorname{Tr}\Big{(}E_{ ji}\,G^{B_{ij}(z)}(z,\zeta)\Big{)}\mathfrak{e}(t)\Big{]},\quad\widetilde{h}_{1} (z,\zeta):=\mathbb{E}\Big{[}\sum_{i,j=1}^{n}\operatorname{Tr}\Big{(}E_{ji}\, \widetilde{G}^{\widetilde{B}_{ij}(z)}(z,\zeta)\Big{)}\mathfrak{e}(t)\Big{]}. 
\tag{4.77}\] Using the representation (4.72) together with the bounds (3.148), (4.10) and (4.74) we obtain that \[|h_{1}(z,\zeta)|+|\widetilde{h}_{1}(z,\zeta)|\prec 1+\frac{1}{N^{1/2}\widehat{\eta}^{\,3/2}} \tag{4.78}\] uniformly on \((z,\zeta)\in\big{(}\Omega^{+}\times\Omega^{+}\big{)}\cup\big{(}\Omega^{-}\times\Omega^{-}\big{)}\). Then (4.75) implies that \[\big{|}\mathcal{V}^{\,(+,+)}\big{|}+\big{|}\mathcal{V}^{\,(-,-)}\big{|}\prec N^{\tau}\eta_{0}+\frac{N^{5\tau/2}}{(N\eta_{0})^{1/2}}. \tag{4.79}\] If, moreover, we have that \(\gamma\in(0,1)\) and \(\tau\in(0,\min\{(1-\gamma)/7,\gamma/2\})\), then \[\big{|}\mathcal{V}^{\,(+,+)}\big{|}+\big{|}\mathcal{V}^{\,(-,-)}\big{|}\prec N^{-\tau}. \tag{4.80}\] We compute \(\mathcal{V}^{\,(+,-)}\) and \(\mathcal{V}^{\,(-,+)}\) in three steps. First, we split the integrands in (4.71) with the identities (4.72) into leading and error terms using the approximations (3.153)-(3.154) and (4.11)-(4.12). The integrals of the _error_ terms that arise from these decompositions are treated analogously to \(\mathcal{V}^{\,(+,+)}\) and \(\mathcal{V}^{\,(-,-)}\). Indeed, if we replace \(h(z,\zeta)\) in (4.75) with the error terms from (3.153)-(3.154) and (4.11)-(4.12), and take \(\gamma\in(0,1)\) and \(\tau\in(0,\min\{(1-\gamma)/7,\gamma/2\})\), then the resulting integrals are of order \(O_{\prec}(N^{-\tau})\). Therefore, we can replace \(S_{ij}(z,\zeta)\) and \(\widetilde{S}_{ij}(z,\zeta)\) in (4.71) by their deterministic approximations defined in (3.153) and (4.11). In the second step, exactly as in (3.185)-(3.186), we use (4.74) to replace \(B_{ij}(z)\) and \(\widetilde{B}_{ij}(z)\) in the deterministic approximations of \(S_{ij}(z,\zeta)\) and \(\widetilde{S}_{ij}(z,\zeta)\) with \(B_{ij}(E_{0})\) and \(\widetilde{B}_{ij}(E_{0})\), respectively. After integrating, under the assumption that \(\gamma\in(0,1)\) and \(\tau\in(0,\min\{(1-\gamma)/7,\gamma/2\})\), this replacement gives a term of size \(O_{\prec}(N^{-\tau})\).
After applying the simplifications from the first two steps, we arrive at \[\mathcal{V}^{\,(+,-)} =\frac{\mathbb{E}\big{[}\mathfrak{e}(t)\big{]}}{\pi^{2}}\int_{\Omega^{+}\times\Omega^{-}}\frac{\partial\tilde{f}(z)}{\partial\overline{z}}\frac{\partial\tilde{f}(\zeta)}{\partial\overline{\zeta}}\frac{\partial}{\partial\zeta}\frac{1}{z-\zeta}\Big{(}\phi^{\,(+,-)}+\tilde{\phi}^{\,(+,-)}\Big{)}\,d^{2}\zeta d^{2}z+O_{\prec}(N^{-\tau}), \tag{4.81}\] \[\mathcal{V}^{\,(-,+)} =\frac{\mathbb{E}\big{[}\mathfrak{e}(t)\big{]}}{\pi^{2}}\int_{\Omega^{-}\times\Omega^{+}}\frac{\partial\tilde{f}(z)}{\partial\overline{z}}\frac{\partial\tilde{f}(\zeta)}{\partial\overline{\zeta}}\frac{\partial}{\partial\zeta}\frac{1}{z-\zeta}\Big{(}\phi^{\,(-,+)}+\tilde{\phi}^{\,(-,+)}\Big{)}d^{2}\zeta d^{2}z+O_{\prec}(N^{-\tau}), \tag{4.82}\] where we introduce the \((z,\zeta)\)-independent constants \[\phi^{\,(+,-)} =\frac{2\mathrm{i}}{\langle\operatorname{Im}M_{0}\rangle}\sum_{i,j=1}^{n}\operatorname{Tr}\Big{(}E_{ji}\,\Big{\langle}\operatorname{Im}M_{0},\,\frac{1}{M_{0}}M_{0}^{\prime}\,\Gamma[E_{ij}]\,\Big{\rangle}\operatorname{Im}M_{0}\Big{)}, \tag{4.83}\] \[\tilde{\phi}^{\,(+,-)} =\frac{2\mathrm{i}}{\langle\operatorname{Im}M_{0}\rangle}\sum_{i,j=1}^{n}\operatorname{Tr}\Big{(}E_{ji}\operatorname{Im}M_{0}\Big{(}\frac{1}{M_{0}}M_{0}^{\prime}\,\widetilde{\Gamma}[E_{ij}]\,\Big{)}^{t}\operatorname{Im}M_{0}\Big{)},\] (4.84) \[\phi^{\,(-,+)} =\frac{-2\mathrm{i}}{\langle\operatorname{Im}M_{0}\rangle}\sum_{i,j=1}^{n}\operatorname{Tr}\Big{(}E_{ji}\,\Big{\langle}\operatorname{Im}M_{0},\,\frac{1}{M_{0}^{\ast}}(M_{0}^{\prime})^{\ast}\,\Gamma[E_{ij}]\,\Big{\rangle}\operatorname{Im}M_{0}\Big{)},\] (4.85) \[\tilde{\phi}^{\,(-,+)} =\frac{-2\mathrm{i}}{\langle\operatorname{Im}M_{0}\rangle}\sum_{i,j=1}^{n}\operatorname{Tr}\Big{(}E_{ji}\operatorname{Im}M_{0}\Big{(}\frac{1}{M_{0}^{\ast}}(M_{0}^{\prime})^{\ast}\,\widetilde{\Gamma}[E_{ij}]\,\Big{)}^{t}\operatorname{Im}M_{0}\Big{)}.
\tag{4.86}\] For \(\dagger,\star\in\{+,-\},\dagger\neq\star\), the scalar quantities \(\phi^{\,(\dagger,\star)}\) arise from the deterministic approximations (3.153) of \(G^{B_{ij}(E_{0})}(z,\zeta)\) for \((z,\zeta)\in\Omega^{\dagger}\times\Omega^{\star}\), and \(\tilde{\phi}^{\,(\dagger,\star)}\) from the approximations (4.11) of \(\widetilde{G}^{\widetilde{B}_{ij}(E_{0})}(z,\zeta)\) for \((z,\zeta)\in\Omega^{\dagger}\times\Omega^{\star}\). When expressing \(\phi^{\,(\dagger,\star)}\) and \(\tilde{\phi}^{\,(\dagger,\star)}\) in (4.83)-(4.86) we used that \(\vartheta=1\) for \((z,\zeta)\in\Omega^{+}\times\Omega^{-}\) and \(\vartheta=-1\) for \((z,\zeta)\in\Omega^{-}\times\Omega^{+}\). Moreover, we applied the identity (3.191), introduced again the shorthand notations \(M_{0}:=\lim_{y\downarrow 0}M(E_{0}+\mathrm{i}\,y)\) and \(M_{0}^{\prime}:=\lim_{y\downarrow 0}M^{\prime}(E_{0}+\mathrm{i}\,y)\), and used \(\lim_{\Omega^{-}\ni z\to E_{0}}M(z)=M_{0}^{\ast}\) and \(\lim_{\Omega^{-}\ni z\to E_{0}}M^{\prime}(z)=(M_{0}^{\prime})^{\ast}\). It has been shown in (3.190)-(3.192) that \[\phi^{\,(+,-)}=\frac{2\mathrm{i}}{\langle\mathrm{Im}\,M_{0}\rangle}\Big{\langle}\Gamma\Big{[}\,\mathrm{Im}\,M_{0}\,\frac{1}{M_{0}}M_{0}^{\prime}\Big{]}\,\mathrm{Im}\,M_{0}\Big{\rangle}=-1.
\tag{4.87}\] Since \(M(z)\) and \(M^{\prime}(z)\) are symmetric and \(\big{(}\widetilde{\Gamma}[R]\big{)}^{t}=\widetilde{\Gamma}[R^{t}]\) for any \(R\in\mathbb{C}^{n\times n}\), we have \[\sum_{i,j=1}^{n} \mathrm{Tr}\left(E_{ji}\,\mathrm{Im}\,M_{0}\Big{(}\frac{1}{M_{0}}M_{0}^{\prime}\,\widetilde{\Gamma}[E_{ij}]\Big{)}^{t}\,\mathrm{Im}\,M_{0}\right) \tag{4.88}\] \[=\sum_{i,j=1}^{n}\Big{\langle}E_{ji}\,\mathrm{Im}\,M_{0}\widetilde{\Gamma}[E_{ji}]\Big{(}\frac{1}{M_{0}}M_{0}^{\prime}\Big{)}^{t}\,\mathrm{Im}\,M_{0}\Big{\rangle}\] (4.89) \[=\sum_{i,j=1}^{n}\sum_{\alpha=1}^{d}\Big{\langle}E_{ji}\,\mathrm{Im}\,M_{0}L_{\alpha}E_{ji}L_{\alpha}\Big{(}\,\mathrm{Im}\,M_{0}\frac{1}{M_{0}}M_{0}^{\prime}\Big{)}^{t}\Big{\rangle}\] (4.90) \[\qquad\qquad\qquad\qquad\qquad\qquad+\Big{\langle}E_{ji}\,\mathrm{Im}\,M_{0}(L_{\alpha})^{t}E_{ji}(L_{\alpha})^{t}\Big{(}\,\mathrm{Im}\,M_{0}\frac{1}{M_{0}}M_{0}^{\prime}\Big{)}^{t}\Big{\rangle}\] (4.91) \[=\sum_{i,j=1}^{n}\sum_{\alpha=1}^{d}\Big{\langle}E_{ii}\,\mathrm{Im}\,M_{0}L_{\alpha}E_{ji}\Big{(}\,\mathrm{Im}\,M_{0}\frac{1}{M_{0}}M_{0}^{\prime}(L_{\alpha})^{t}\Big{)}\Big{\rangle}\] (4.92) \[=\sum_{i,j=1}^{n}\sum_{\alpha=1}^{d}\Big{\langle}E_{ii}\,\mathrm{Im}\,M_{0}L_{\alpha}E_{jj}\,\mathrm{Im}\,M_{0}\frac{1}{M_{0}}M_{0}^{\prime}(L_{\alpha})^{t}\Big{\rangle}\] (4.94) \[\qquad\qquad\qquad\qquad\qquad\qquad+\Big{\langle}E_{ii}\,\mathrm{Im}\,M_{0}(L_{\alpha})^{t}E_{jj}\,\mathrm{Im}\,M_{0}\frac{1}{M_{0}}M_{0}^{\prime}L_{\alpha}\Big{\rangle}\] (4.95) \[=\Big{\langle}\,\mathrm{Im}\,M_{0}\,\Gamma\Big{[}\,\mathrm{Im}\,M_{0}\frac{1}{M_{0}}M_{0}^{\prime}\Big{]}\Big{\rangle}. \tag{4.96}\] Plugging this into (4.84) and using (4.87) yields \[\tilde{\phi}^{\,(+,-)}=\frac{2\mathrm{i}}{\langle\mathrm{Im}\,M_{0}\rangle}\Big{\langle}\Gamma\Big{[}\,\mathrm{Im}\,M_{0}\,\frac{1}{M_{0}}M_{0}^{\prime}\Big{]}\,\mathrm{Im}\,M_{0}\Big{\rangle}=-1. \tag{4.97}\] By applying the same method we show that \(\phi^{\,(-,+)}=\tilde{\phi}^{\,(-,+)}=-1\).
We conclude that \[\mathcal{V}=-\frac{2}{\pi^{2}}\mathbb{E}\big{[}\mathfrak{e}(t)\big{]}\int\limits_{(\Omega^{+}\times\Omega^{-})\cup(\Omega^{-}\times\Omega^{+})}\frac{\partial\tilde{f}(z)}{\partial\overline{z}}\frac{\partial\tilde{f}(\zeta)}{\partial\overline{\zeta}}\frac{\partial}{\partial\zeta}\frac{1}{z-\zeta}\,d^{2}\zeta d^{2}z+O\big{(}N^{-\tau}\big{)}. \tag{4.98}\] Finally, using (3.197), (3.204) and the same argument as at the end of Section 3.5 we obtain that in the real symmetric case \[\lim_{N\to\infty}\mathbb{E}[\mathfrak{e}(t)]=e^{-\frac{t^{2}}{2}V[g]}, \tag{4.99}\] where \[V[g]=\frac{1}{2\pi^{2}}\int_{\mathbb{R}}\int_{\mathbb{R}}\frac{(g(x)-g(y))^{2}}{(x-y)^{2}}dxdy. \tag{4.100}\] This finishes the proof of Theorem 2.1. ## Appendix A Kronecker random matrix models with \(L\)-flat self-energy In this section we state several important properties of the solution to the matrix Dyson equation (2.7) under the assumption that the self-energy operator \(\Gamma\) satisfies the \(L\)-flatness property **(A)**. Most of these results are established by following the proofs of the corresponding results in Propositions 2.2, 4.2 and 4.7 of [2], where the matrix Dyson equation (2.7) has been studied under the stronger assumption of 1-flat self-energy (2.11). A statement similar to that of Proposition A.1 below for the general \(L\)-flat self-energy can be found in an early arXiv version (version v3) of [2]. For the reader's convenience, we present the full proof, adjusted to our current setup, and show how the modifications of the arguments from [2] yield Proposition A.1. Consider the Kronecker random matrix model \(\boldsymbol{H}^{(\beta)}\), \(\beta\in\{1,2\}\), defined in (2.1). Let \(M(z)\) be the solution to the corresponding matrix Dyson equation (2.7). Recall that by [37, Theorem 2.1], for any \(z\in\mathbb{C}_{+}\), the solution to (2.7) satisfying \(\operatorname{Im}M(z)>0\) exists and is unique.
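For intuition (and entirely outside the proofs), in the scalar case \(n=1\) with \(K_{0}=0\) and \(\Gamma[m]=m\) the Dyson equation (2.7) reduces to \(-1/m(z)=z+m(z)\), whose solution is the Stieltjes transform of the semicircle law. A damped fixed-point iteration, started in the upper half-plane, recovers it numerically; the scalar reduction and the iteration scheme below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def solve_dyson_scalar(z, iters=500):
    """Damped fixed-point iteration for m = 1/(-z - m), Im z > 0.
    Illustrative scalar (n = 1) reduction of the matrix Dyson
    equation with K_0 = 0 and Gamma[m] = m."""
    m = 1j  # start in the upper half-plane; the map preserves Im m > 0
    for _ in range(iters):
        m = 0.5 * (m + 1.0 / (-z - m))  # damping stabilizes the iteration
    return m

# the resulting density of states is the semicircle law on [-2, 2]
x, eta = 1.0, 1e-6
m = solve_dyson_scalar(x + 1j * eta)
rho = m.imag / np.pi
assert abs(rho - np.sqrt(4 - x**2) / (2 * np.pi)) < 1e-3
```

Here `rho` approximates \(\pi^{-1}\operatorname{Im}m(x+\mathrm{i}\eta)\), and the tolerance accounts for the spectral parameter sitting at distance \(\eta\) above the real axis.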
The solution also admits the Stieltjes transform representation (2.9) (see, e.g., [4, Proposition 2.1]). Denote \[\rho(z):=\frac{1}{\pi}\langle\operatorname{Im}M(z)\rangle\] (A.1) for \(z\in\mathbb{C}_{+}\), and recall that for \(T\in\mathbb{C}^{n\times n}\) we denote by \(\mathcal{C}_{T}\) an operator on \(\mathbb{C}^{n\times n}\) defined in (3.25) by \(\mathcal{C}_{T}[R]=TRT\) for all \(R\in\mathbb{C}^{n\times n}\). **Proposition A.1** (Properties of the solution to the MDE).: _Suppose that the self-energy operator \(\Gamma\) satisfies the \(L\)-flatness property **(A)** for some \(L\in\mathbb{N}\). Then the following holds._ 1. \(M(z)\) _and_ \(M^{-1}(z)\) _are uniformly bounded_: _The solution_ \(M(z)\) _satisfies the bounds_ \[\left\|M(z)\right\|\lesssim 1,\qquad\left\|M^{-1}(z)\right\|\lesssim 1+|z|\] (A.2) _uniformly for_ \(z\in\mathbb{C}_{+}\)_._ 2. \(\operatorname{Im}M(z)\) _and_ \(\rho(z)I_{n}\) _are comparable_: _The relation_ \[\operatorname{Im}M(z)\sim\rho(z)I_{n}\] (A.3) _holds uniformly on_ \(\mathbb{C}_{+}\)_._ 3. _Linear stability_: _The bound_ \[\left\|\left(\operatorname{Id}_{n}-\mathcal{C}_{M(z)}\Gamma\right)^{-1}\right\|\lesssim 1+\frac{1}{(\rho(z)+\operatorname{dist}(z,\operatorname{supp}\rho))^{2}}\] (A.4) _holds uniformly for all_ \(z\in\mathbb{C}_{+}\)_._ 4. _Regularity of_ \(M(z)\) _and the density of states_: _The continuous extension of_ \(M(z)\) _to_ \(\mathbb{C}_{+}\cup\mathbb{R}\) _exists and satisfies_ \[\left\|M(z_{1})-M(z_{2})\right\|\lesssim|z_{1}-z_{2}|^{1/3}\] (A.5) _uniformly for all_ \(z_{1},z_{2}\in\mathbb{C}_{+}\cup\mathbb{R}\)_. In particular, the density of states_ \(\rho(x)\) _defined in (2.15) is_ \(1/3\)_-Hölder continuous on_ \(\mathbb{R}\)_, i.e.,_ \[|\rho(x_{1})-\rho(x_{2})|\lesssim|x_{1}-x_{2}|^{1/3}\] (A.6) _uniformly for all_ \(x_{1},x_{2}\in\mathbb{R}\)_. Moreover, the density_ \(\rho(x)\) _is real analytic on the open set_ \(\{x\in\mathbb{R}\,:\,\rho(x)>0\}\)_._ 5.
_Analytic continuation of_ \(M(z)\)_: For any_ \(x_{0}\in\mathbb{R}\) _satisfying_ \(\rho(x_{0})>0\) _the solution_ \(M(z)\) _can be analytically extended to a neighborhood of_ \(x_{0}\) _in_ \(\mathbb{C}\)_._ _The hidden constants in \(\lesssim\) and \(\sim\) depend on \(L,d\in\mathbb{N}\) and the structure matrices \(K_{0},L_{1},\ldots,L_{d}\in\mathbb{C}^{n\times n}\)._ Proof.: We split the proof of Proposition A.1 into four steps corresponding to the statements _(i)_, _(ii)_, _(iii)_ and _(iv)_ - _(v)_. To make the presentation lighter, we will often suppress the \(z\)-dependence in the notation. _Proof of (i)._ By taking the imaginary part of the Dyson equation (2.7) and multiplying it by \(M^{*}\) from the left and \(M\) from the right we get \[\operatorname{Im}M=\operatorname{Im}zM^{*}M+M^{*}\Gamma[\operatorname{Im}M]M\geq M^{*}\Gamma[\operatorname{Im}M]M\] (A.7) for all \(z\in\mathbb{C}_{+}\). Since \(\operatorname{Im}M\) is positive definite, the \(L\)-flatness (2.10) gives \[\operatorname{Im}M\gtrsim\sum_{k,l=1}^{n}z_{kl}\big{(}\operatorname{Im}M\big{)}_{kk}M^{*}E_{ll}M\gtrsim\sum_{k=1}^{n}\big{(}\operatorname{Im}M\big{)}_{kk}M^{*}E_{kk}M,\] (A.8) where in the last step we used that \(z_{kk}=1\) for \(1\leq k\leq n\). If in (A.7) we multiply by \(M\) from the left and \(M^{*}\) from the right, then instead of (A.8) we obtain \[\operatorname{Im}M\gtrsim\sum_{k,l=1}^{n}z_{kl}\big{(}\operatorname{Im}M\big{)}_{kk}ME_{ll}M^{*}\gtrsim\sum_{k=1}^{n}\big{(}\operatorname{Im}M\big{)}_{kk}ME_{kk}M^{*}.\] (A.9) By looking at the diagonal entries of (A.8) and (A.9) we get \[w_{j}\gtrsim\sum_{k,l=1}^{n}z_{kl}w_{k}t_{lj}\gtrsim\sum_{k=1}^{n}w_{k}t_{kj}\] (A.10) for all \(1\leq j\leq n\), where we denoted \(w_{j}:=\big{(}\operatorname{Im}M\big{)}_{jj}\) and \(t_{kj}:=|M_{kj}|^{2}+|M_{jk}|^{2}\).
The matrix \(T:=(t_{kj})_{k,j=1}^{n}\in\mathbb{R}^{n\times n}\) has nonnegative entries, and the vector \(\operatorname{w}=(w_{j})_{j=1}^{n}\in\mathbb{R}^{n}\) has strictly positive entries. For convenience, we rewrite (A.10) as \[\operatorname{w}^{t}\gtrsim\operatorname{w}^{t}Z\,T\geq\operatorname{w}^{t}T\] (A.11) with vector inequalities understood in the entrywise sense. If we denote by \(\operatorname{v}\in\mathbb{R}^{n}\) the right Perron-Frobenius eigenvector of \(T\), then after comparing \(\operatorname{w}^{t}\operatorname{v}\) and \(\operatorname{w}^{t}T\operatorname{v}=\|T\|\operatorname{w}^{t}\operatorname{v}\), and using (A.11) we find that the spectral norm of \(T\) satisfies the bound \[\|T\|\lesssim 1\] (A.12) uniformly on \(z\in\mathbb{C}_{+}\). Thus, we conclude that (A.2) holds uniformly on \(z\in\mathbb{C}_{+}\). Moreover, if we take the norm on both sides of the matrix Dyson equation (2.7), then (A.2) yields the bound \[\big{\|}M^{-1}(z)\big{\|}\lesssim 1+|z|\] (A.13) uniformly on \(z\in\mathbb{C}_{+}\), and therefore, for any \(C>0\) we have \[\|M\|\sim\|T\|\sim 1\] (A.14) uniformly on \(\{z\in\mathbb{C}_{+}\,:\,|z|\leq C\}\). _Proof of (ii)._ Since the measure \(V(dx)\) is compactly supported, it is easy to see from the Stieltjes transform representation (2.9) and the normalization \(V(\mathbb{R})=I_{n}\) that for a sufficiently large \(C>0\) the equivalences \[\operatorname{Im}M(z)\sim\frac{\operatorname{Im}z}{|z|^{2}}I_{n}\sim\rho(z)I_{n}\] (A.15) hold uniformly on \(\{z\in\mathbb{C}_{+}\,:\,|z|\geq C\}\). Suppose that \(|z|\leq C\). Then applying the first inequality in (A.11) \(2L\) times gives \[\operatorname{w}^{t}\gtrsim\operatorname{w}^{t}\big{(}Z\,T\,Z\,T\big{)}^{L}\gtrsim\operatorname{w}^{t}\big{(}Z\,T^{\,2}\big{)}^{L},\] (A.16) where in the second inequality we again used that \(z_{kk}=1\) for \(1\leq k\leq n\). 
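The Perron-Frobenius step in (A.11)-(A.12) rests on a general fact about nonnegative matrices: if \(T\) is symmetric with nonnegative entries and some strictly positive vector \(\operatorname{w}\) satisfies \(\operatorname{w}^{t}T\leq C\,\operatorname{w}^{t}\) entrywise, then \(\|T\|\leq C\) (a Collatz-Wielandt-type bound). The following is a minimal numerical sanity check of this fact; it is not code from the paper, and NumPy is assumed:

```python
import numpy as np

rng = np.random.default_rng(0)

for _ in range(100):
    n = 5
    # Symmetric matrix with nonnegative entries, playing the role of T in (A.11).
    A = rng.random((n, n))
    T = A + A.T
    # Strictly positive test vector w.
    w = rng.random(n) + 0.1
    # Smallest C with w^t T <= C w^t entrywise (Collatz-Wielandt upper bound).
    C = np.max((w @ T) / w)
    # For symmetric nonnegative T the spectral norm equals the Perron-Frobenius
    # eigenvalue, which the entrywise bound controls: ||T|| <= C.
    assert np.linalg.norm(T, 2) <= C + 1e-12
```

In the proof above this is applied with \(C\lesssim 1\) coming from (A.11), which is exactly the uniform bound (A.12).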
The bounds (A.13) and (A.14) imply that \(M^{*}M\sim I_{n}\), from which we have \[\sum_{j=1}^{n}|M_{jk}|^{2}\sim 1\] (A.17) for all \(1\leq k\leq n\). Now, using (A.17), we estimate the diagonal entries of \(T^{\,2}\) from below \[\big{(}T^{\,2}\big{)}_{kk}=\sum_{j=1}^{n}t_{jk}^{2}\gtrsim\Big{(}\sum_{j=1}^{n}t_{jk}\Big{)}^{2}\geq\Big{(}\sum_{j=1}^{n}|M_{jk}|^{2}\Big{)}^{2}\gtrsim 1\] (A.18) for \(1\leq k\leq n\), where the first estimate follows from the Cauchy-Schwarz inequality since the dimension \(n\) is fixed. This yields \[\operatorname{w}^{t}\big{(}Z\,T^{2}\big{)}^{L}\gtrsim\operatorname{w}^{t}Z^{L}.\] (A.19) Since \(Z^{L}\) has all entries greater than or equal to \(1\), we conclude from (A.16) and (A.19) that \[w_{j}\gtrsim\sum_{k=1}^{n}w_{k}\sim\langle\operatorname{Im}M(z)\rangle\] (A.20) for \(1\leq j\leq n\). Now we see that \[\big{(}\operatorname{Im}M(z)\big{)}_{jj}\sim\rho(z)\] (A.21) for all \(1\leq j\leq n\), and from (A.8) and (A.13) we get \[\operatorname{Im}M(z)\gtrsim\rho(z)M^{*}(z)M(z)\geq\rho(z)\big{\|}M^{-1}(z)\big{\|}^{-2}I_{n}\gtrsim\rho(z)I_{n}\] (A.22) uniformly on \(\{z\in\mathbb{C}_{+}\,:\,|z|\leq C\}\). To estimate \(\operatorname{Im}M(z)\) from above, we notice that since the dimension of the equation (2.7) is fixed, we trivially have \[\Gamma[\operatorname{Im}M(z)]\lesssim\rho(z)I_{n}.\] (A.23) Therefore, by taking again the imaginary part of the MDE (2.7) we get \[\operatorname{Im}M(z)=\operatorname{Im}zM^{*}(z)M(z)+M^{*}(z)\Gamma[\operatorname{Im}M(z)]M(z)\lesssim(\operatorname{Im}z+\rho(z))M^{*}(z)M(z).\] (A.24) On the other hand, from the imaginary part of the Stieltjes transform representation (2.9) we see that \(\operatorname{Im}M(z)\gtrsim\operatorname{Im}zI_{n}\) in the regime \(|z|\leq C\), which together with (A.24) and (A.14) gives \[\operatorname{Im}M(z)\lesssim\rho(z)\|M(z)\|^{2}I_{n}\lesssim\rho(z)I_{n}\] (A.25) uniformly on \(\{z\in\mathbb{C}_{+}\,:\,|z|\leq C\}\). Combining (A.15), (A.22) and (A.25) we obtain \[\operatorname{Im}M(z)\sim\rho(z)I_{n}\] (A.26) uniformly on \(z\in\mathbb{C}_{+}\). _Proof of (iii)_. 
We now establish (A.4). Following the proof of the invertibility of the stability operator from [2, Section 4.2], we define the _saturated_ self-energy operator \(\mathcal{F}=\mathcal{F}(z):\mathbb{C}^{n\times n}\to\mathbb{C}^{n\times n}\) given by \[\mathcal{F}=\mathcal{C}^{*}\,\Gamma\,\mathcal{C},\] (A.27) where \(\mathcal{C}:=\mathcal{C}_{\sqrt{\operatorname{Im}M}}\,\mathcal{C}_{W}\) and \[W:=\Big{(}I_{n}+\big{(}\mathcal{C}_{\sqrt{\operatorname{Im}M(z)}}^{-1}[\operatorname{Re}M(z)]\big{)}^{2}\Big{)}^{1/4}\] (A.28) is a positive definite matrix. Then (A.4) can be obtained by first establishing the existence of the uniform spectral gap for \(\mathcal{F}\), and then applying the Rotation-Inversion Lemma (see Lemmas 4.7, 4.9 and the proof of Proposition 4.4 in [2] for the details). The crucial ingredient in this approach is the study of the spectrum of \(\mathcal{F}\) in [2, Lemma 4.7]. We gather the spectral properties of \(\mathcal{F}\) in the following lemma, which will be proven at the end of this section. **Lemma A.2** (Spectrum of \(\mathcal{F}\)).: _Let \(\mathcal{F}=\mathcal{F}(z)\) be the operator defined in (A.27), and suppose that \(\Gamma\) satisfies the \(L\)-flatness property **(A)**. Then the following holds._ 1. _For any_ \(x\in\mathbb{R}\) _the operator_ \(\mathcal{F}(x)\) _possesses a simple Perron-Frobenius eigenvalue_ \(\|\mathcal{F}(x)\|_{2}=1\)_, so that_ \[\mathcal{F}(x)[F]=F,\] (A.29) _where_ \(F\in\mathbb{C}^{n\times n}\) _is the corresponding Perron-Frobenius eigenvector satisfying_ \(\|F\|_{\operatorname{HS}}=1\) _and_ \(\operatorname{Im}F>0\)_, and_ \(\|\mathcal{F}\|_{2}\) _denotes the operator norm of_ \(\mathcal{F}\) _induced by the Hilbert-Schmidt norm on_ \(\mathbb{C}^{n\times n}\)_._ 2. _There exists_ \(C>0\)_, sufficiently large, such that_ \[\mathcal{F}^{L}[R]\sim\langle R\rangle I_{n}\] (A.30) _for all positive definite_ \(R\in\mathbb{C}^{n\times n}\) _uniformly on_ \(\{z\in\mathbb{C}_{+}\,:\,|z|\leq C\}\)_._ 3. 
_There exists_ \(\kappa>0\)_, sufficiently small, such that_ \[\operatorname{Spec}(\mathcal{F})\subset[-1+\kappa,1-\kappa]\cup\{1\}\] (A.31) _uniformly on_ \(\{z\in\mathbb{C}_{+}\,:\,|z|\leq C\}\)_._ Given the existence of the spectral gap for \(\mathcal{F}\) in (A.31), the linear stability bound (A.4) follows by repeating the argument presented in [2, Section 4.2]. From (A.14) we also find the term \((\rho(z)+\operatorname{dist}(z,\operatorname{supp}\rho))^{-2}\) on the right-hand side of (A.4) with the explicit exponent \(2\). Indeed, using (A.14) and the fact that the dimension of the matrix Dyson equation is \(N\)-independent, we get from Eq. (4.41) in [2] that \[\big{\|}\big{(}\mathrm{Id}-\mathcal{C}_{M(z)}\Gamma\big{)}^{-1}\big{\|}\lesssim\big{\|}\big{(}\mathcal{C}_{U}-\mathcal{F}\big{)}^{-1}\big{\|}\] (A.32) uniformly on \(\{z\in\mathbb{C}_{+}\,:\,|z|\leq C\}\), where \(U\) is a unitary matrix and \(\mathcal{F}\) was defined in (A.27). Then the Rotation-Inversion Lemma [2, Lemma 4.9] implies that \[\big{\|}\big{(}\mathcal{C}_{U}-\mathcal{F}\big{)}^{-1}\big{\|}\lesssim\frac{1}{\max\{1-\|\mathcal{F}\|,|1-\langle F,\mathcal{C}_{U}[F]\rangle|\}}\] (A.33) uniformly on \(\{z\in\mathbb{C}_{+}\,:\,|z|\leq C\}\), where \(F\) is the normalized Perron-Frobenius eigenvector of \(\mathcal{F}\). With (A.14), the lower bound in Eq. (4.45) in [2] becomes \[1-\|\mathcal{F}\|\gtrsim\operatorname{dist}(z,\operatorname{supp}\rho)^{2},\] (A.34) and the last inequality in the proof of Lemma 4.4 in [2] gives \[|1-\langle F,\mathcal{C}_{U}[F]\rangle|\gtrsim\rho^{2},\] (A.35) which together with (A.32) and (A.33) establishes (A.4). _Proof of (iv) and (v)._ With the linear stability (A.4) established, the regularity of \(M(z)\) and the density of states (A.6), as well as the real-analyticity of \(\rho(x)\), are obtained by following exactly the lines of the proof of [2, Proposition 2.2]. 
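As a simple illustration of Proposition A.1, it may help to record the scalar case \(n=1\) with \(\Gamma=\mathrm{Id}\), assuming for illustration only the common normalization in which (2.7) reads \(-M(z)^{-1}=z+\Gamma[M(z)]\); the precise constants depend on the exact form of (2.7):

```latex
% Scalar toy case n = 1, \Gamma = \mathrm{Id} (illustrative normalization):
-\frac{1}{m(z)} = z + m(z)
\quad\Longleftrightarrow\quad
m(z)^{2} + z\,m(z) + 1 = 0
\quad\Longrightarrow\quad
m(z) = \frac{-z + \sqrt{z^{2}-4}}{2},
\qquad
\rho(x) = \frac{1}{\pi}\operatorname{Im} m(x) = \frac{\sqrt{(4-x^{2})_{+}}}{2\pi}.
```

Here \(m\) and \(1/m\) are bounded as in (A.2), the semicircle density \(\rho\) is real analytic on \((-2,2)\), and at the spectral edges \(\rho\) is \(1/2\)-Hölder continuous, consistent with (and stronger than) the general \(1/3\)-Hölder bound (A.6); the exponent \(1/3\) is only attained at cubic cusps of more general self-consistent densities.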
Proof of Lemma A.2.: Part (i) is obtained using exactly the same argument as in part (i) of Lemma 4.7 in [2]. In order to prove part (ii), we split (A.30) into the upper and lower bounds. The upper bound in (A.30) is obtained in the same way as the upper bound in Eq. (4.33) of [2], which can then be iterated \(L\) times. For the lower bound in (A.30), arguing by induction, we show that \[\mathcal{F}^{l}[R]\gtrsim\sum_{k,j=1}^{n}\big{(}Z^{l}\big{)}_{kj}\big{\langle}\mathcal{C}^{*}[E_{kk}]R\big{\rangle}\,\mathcal{C}^{*}[E_{jj}]\] (A.36) for all \(1\leq l\leq L\). The case \(l=1\) follows directly from (2.10) and the definition of \(\mathcal{F}\) in (A.27). Suppose that (A.36) holds for \(l<L\). Then after applying \(\mathcal{F}\) on both sides of (A.36) and using (2.10) we get \[\mathcal{F}^{l+1}[R]\gtrsim\sum_{k,j=1}^{n}\big{(}Z^{l}\big{)}_{kj}\big{\langle}\mathcal{C}^{*}[E_{kk}]R\big{\rangle}\,\sum_{k^{\prime},j^{\prime}=1}^{n}z_{k^{\prime}j^{\prime}}\big{\langle}\mathcal{C}^{*}[E_{k^{\prime}k^{\prime}}]\mathcal{C}^{*}[E_{jj}]\big{\rangle}\mathcal{C}^{*}[E_{j^{\prime}j^{\prime}}].\] (A.37) Since \(\big{\langle}\mathcal{C}^{*}[E_{k^{\prime}k^{\prime}}]\mathcal{C}^{*}[E_{jj}]\big{\rangle}\geq 0\) for all \(1\leq k^{\prime},j\leq n\), we can further estimate \[\mathcal{F}^{l+1}[R]\gtrsim\sum_{k,j,j^{\prime}=1}^{n}\big{(}Z^{l}\big{)}_{kj}\big{\langle}\mathcal{C}^{*}[E_{kk}]R\big{\rangle}\,z_{jj^{\prime}}\Big{\langle}\big{(}\mathcal{C}^{*}[E_{jj}]\big{)}^{2}\Big{\rangle}\,\mathcal{C}^{*}[E_{j^{\prime}j^{\prime}}].\] (A.38) In order to estimate the right-hand side of (A.38) from below, we notice that exactly as in [2, Lemma 4.6] we have that \[\rho^{1/2}(z)W\sim I_{n}\] (A.39) uniformly on \(\{z\in\mathbb{C}_{+}\,:\,|z|\leq C\}\) for any sufficiently large \(C>0\). 
Therefore, from (A.26), (A.39) and the definition of \(\mathcal{C}\) we get \(\big{\langle}\big{(}\mathcal{C}^{*}[E_{jj}]\big{)}^{2}\big{\rangle}\gtrsim 1\), which concludes the proof of the induction step. Now (A.36) with \(l=L\) yields \[\mathcal{F}^{L}[R]\gtrsim\sum_{k,j=1}^{n}\big{(}Z^{L}\big{)}_{kj}\big{\langle}\mathcal{C}^{*}[E_{kk}]R\big{\rangle}\,\mathcal{C}^{*}[E_{jj}]\gtrsim\big{\langle}\mathcal{C}^{*}[I_{n}]R\big{\rangle}\,\mathcal{C}^{*}[I_{n}],\] (A.40) which after applying (A.26) and (A.39) to \(\mathcal{C}^{*}[I_{n}]=W\operatorname{Im}M(z)\,W\) gives \(\mathcal{F}^{L}[R]\gtrsim\langle R\rangle I_{n}\). Combined with the upper bound, this proves (A.30). With (A.30) established, part (iii) follows from [2, Lemma 4.8]. Indeed, by applying [2, Lemma 4.8] to the operator \(\mathcal{F}^{L}\) we find that \(\mathcal{F}^{L}\) possesses a spectral gap uniformly on \(\{z\in\mathbb{C}_{+}\,:\,|z|\leq C\}\). From this and the symmetry of \(\mathcal{F}\) we conclude that \(\mathcal{F}\) also has a spectral gap, and thus (A.31) holds uniformly on \(\{z\in\mathbb{C}_{+}\,:\,|z|\leq C\}\).

## Appendix B Proof of Lemma 3.3

The crucial ingredient of the proof of Lemma 3.3 is the following cumulant expansion formula. This formula is the complex analog of [26, Proposition 3.2] and follows immediately from the real case presented there. **Lemma B.1**.: _Let_ \(\boldsymbol{\xi}:=(\xi_{1},\ldots,\xi_{K})\in\mathbb{C}^{K}\) _be a complex random vector with finite moments up to order_ \(R+1\) _for_ \(R\in\mathbb{N}\)_, and let_ \(\varphi:\mathbb{C}^{K}\to\mathbb{C}\) _be_ \(R+1\) _times differentiable with bounded partial derivatives._ 
_Then_ \[\mathbb{E}\Big{[}\xi_{1}\varphi(\boldsymbol{\xi})\Big{]}=\sum_{m=0}^{R-1}\frac{1}{m!}\mathbb{E}\Big{[}\Big{(}\kappa_{m+1}^{\nabla}[\xi_{1},\boldsymbol{\xi}]\,\varphi\Big{)}(\boldsymbol{\xi})\Big{]}+\frac{1}{R!}\mathbb{E}\Big{[}\int_{0}^{1}\Big{(}K_{R+1,s}^{\nabla}[\xi_{1},\boldsymbol{\xi}]\,\varphi\Big{)}(s\boldsymbol{\xi})\,ds\Big{]}\,,\] (B.1) _where for a complex random variable \(\theta\) we set \(\kappa_{1}^{\nabla}[\theta,\boldsymbol{\xi}]\varphi:=\mathbb{E}[\theta]\varphi\), \(K_{m+1,s}^{\nabla}\) is a random differential operator defined for \(m\geq 1\) through_ \[K_{m+1,s}^{\nabla}[\theta,\boldsymbol{\xi}]:=m!\int_{0}^{1}\!\!\ldots\int_{0}^{1}\nabla_{\boldsymbol{\xi}}(1_{s\leq t_{m-1}}-\mathbb{E})\nabla_{\boldsymbol{\xi}}(1_{t_{m-1}\leq t_{m-2}}-\mathbb{E})\ldots(1_{t_{2}\leq t_{1}}-\mathbb{E})\nabla_{\boldsymbol{\xi}}(1_{t_{1}\leq 1}-\mathbb{E})\theta dt_{1}\ldots dt_{m-1}\] (B.2) _with \(\nabla_{\boldsymbol{\xi}}:=\sum_{i=1}^{K}(\xi_{i}\partial_{i}+\overline{\xi}_{i}\overline{\partial}_{i})\), and where_ \[\kappa^{\nabla}_{m+1}[\theta,\boldsymbol{\xi}]:=\mathbb{E}\bigg{[}\int_{0}^{1}K^{\nabla}_{m+1,s}[\theta,\boldsymbol{\xi}]ds\bigg{]}\] (B.3) _is a non-random differential operator._ The derivation of the above lemma is identical to the real case treated in [26, Proposition 3.2], and is thus omitted. We now proceed to establish Lemma 3.3. Proof of Lemma 3.3.: For \(\star\in\{1,2\}\) we have \[\mathbb{E}[\mathbf{W}\mathscr{F}_{\star}]=\sum_{i,j=1}^{N}\sum_{l=1}^{N}\mathbb{E}\Big{[}W_{il}F_{\star,lj}\Big{]}\otimes\mathbf{E}_{ij},\] (B.4) where \(F_{1,lj}:=G_{lj}\) and \(F_{2,lj}:=G_{lj}\mathbf{\epsilon}(t)\). 
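Before proceeding, the lowest-order instance of the cumulant expansion (B.1) can be checked directly: for a centered real Gaussian \(\xi\) all cumulants of order \(\geq 3\) vanish, and (B.1) reduces to Stein's lemma \(\mathbb{E}[\xi\varphi(\xi)]=\mathbb{E}[\xi^{2}]\,\mathbb{E}[\varphi^{\prime}(\xi)]\). A Monte-Carlo verification of this identity (illustrative only, not part of the proof; NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.7
xi = rng.normal(0.0, sigma, size=2_000_000)

# Test function phi and its derivative.
phi = lambda x: x**3 + np.sin(x)
dphi = lambda x: 3 * x**2 + np.cos(x)

lhs = np.mean(xi * phi(xi))         # E[xi phi(xi)]
rhs = sigma**2 * np.mean(dphi(xi))  # E[xi^2] E[phi'(xi)]
assert abs(lhs - rhs) < 1e-2 * (abs(rhs) + 1)
```

For non-Gaussian entries the higher-order terms \(m\geq 2\) in (B.1) contribute, which is exactly why the expansion below is carried out to \(R=3\) with an explicit remainder.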
We apply Lemma B.1 to two cases, namely \(\varphi=\varphi(W_{il},W_{li}):=F_{1,lj}(W_{il},W_{li})\) and \(\varphi=\varphi(W_{il},W_{li}):=F_{2,lj}(W_{il},W_{li})\), where we keep the blocks \(W_{ab}\) with \((a,b)\in\{1,\ldots,N\}^{2}\setminus\{(i,l),(l,i)\}\) fixed, i.e., we take the partial expectation \(\mathbb{E}_{il}\) with respect to \(W_{il}\) first. For \(i=l\) we interpret this as \(\varphi=\varphi(W_{ii})\). In order to accommodate the matrix setting of (B.4), where both \(W_{il}\) and \(F_{\star,lj}(W_{il},W_{li})\) are \(n\times n\) matrices, we denote by \(\nabla^{ij}_{V}:=\sum_{\alpha,\beta=1}^{n}v_{\alpha\beta}\partial_{w^{\alpha\beta}_{ij}}\) the directional derivative with respect to the \((i,j)\)-th block \(W_{ij}=(w^{\alpha\beta}_{ij})^{n}_{\alpha,\beta=1}\) in the direction \(V=(v_{\alpha\beta})^{n}_{\alpha,\beta=1}\in\mathbb{C}^{n\times n}\). The analyticity of \(F_{\star,lj}(W_{il},W_{li})\) for \(\star\in\{1,2\}\) implies that \(\overline{\partial}_{w^{\alpha\beta}_{ik}}F_{\star,lj}=0\) for all \(1\leq i,k\leq N\) and \(1\leq\alpha,\beta\leq n\). 
With this notation, the cumulant expansion (B.1) for \(R=3\) reads \[\mathbb{E}\Big{[}W_{il}F_{\star,lj}\Big{]}=\mathbb{E}\bigg{[}\int_{0}^{1}(\nabla^{il}_{\widetilde{W}_{il}}+\nabla^{li}_{\widetilde{W}_{li}})(1_{s\leq 1}-\mathbb{E}_{\widetilde{\mathbf{W}}})\widetilde{W}_{il}ds\,F_{\star,lj}(W_{il},W_{li})\] (B.5) \[\quad+\sum_{\sigma_{1},\sigma_{2}\in\{il,li\}}\int_{0}^{1}\int_{0}^{1}\nabla^{\sigma_{1}}_{\widetilde{W}_{\sigma_{1}}}(1_{s\leq t_{1}}-\mathbb{E}_{\widetilde{\mathbf{W}}})\nabla^{\sigma_{2}}_{\widetilde{W}_{\sigma_{2}}}(1_{t_{1}\leq 1}-\mathbb{E}_{\widetilde{\mathbf{W}}})\widetilde{W}_{il}dt_{1}ds\,F_{\star,lj}(W_{il},W_{li})\] \[\quad+\sum_{\sigma_{1},\sigma_{2},\sigma_{3}\in\{il,li\}}\int_{0}^{1}K^{\nabla,(\sigma_{1},\sigma_{2},\sigma_{3})}_{4,s}[W_{il},(W_{\sigma_{1}},W_{\sigma_{2}},W_{\sigma_{3}})]F_{\star,lj}(sW_{il},sW_{li})ds\bigg{]},\] (B.6) where we introduced a random differential operator \[K^{\nabla,\mathbf{\sigma}}_{4,s}[V_{0},\mathbf{V}]:=\int_{0}^{1}\int_{0}^{1}\nabla^{\sigma_{1}}_{V_{1}}(1_{s\leq t_{2}}-\mathbb{E})\nabla^{\sigma_{2}}_{V_{2}}(1_{t_{2}\leq t_{1}}-\mathbb{E})\nabla^{\sigma_{3}}_{V_{3}}(1_{t_{1}\leq 1}-\mathbb{E})V_{0}dt_{1}dt_{2}\] (B.7) with random matrices \(V_{0}\), \((V_{1},V_{2},V_{3})=:\mathbf{V}\) and indices \(\mathbf{\sigma}:=(\sigma_{1},\sigma_{2},\sigma_{3})\). In the above formulas \(\widetilde{\mathbf{W}}\) is an independent copy of \(\mathbf{W}\), and \(\mathbb{E}_{\widetilde{\mathbf{W}}}\) denotes the partial expectation with respect to \(\widetilde{\mathbf{W}}\). In (B.5) we also used that \(W_{il}\) is centered, which implies that the term corresponding to \(m=0\) in (B.1) vanishes. We now treat each term on the right-hand side of (B.5) individually. 
From the direct computation of the first term we see that \[\mathbb{E}\bigg{[}\int_{0}^{1}\big{(}\nabla^{il}_{\widetilde{W}_{il}}+\nabla^{li}_{\widetilde{W}_{li}}\big{)}(1_{s\leq 1}-\mathbb{E}_{\widetilde{\mathbf{W}}})\widetilde{W}_{il}ds\,F_{\star,lj}(W_{il},W_{li})\bigg{]}=\mathbb{E}\bigg{[}\widetilde{W}_{il}\big{(}\nabla^{il}_{\widetilde{W}_{il}}+\nabla^{li}_{\widetilde{W}_{li}}\big{)}\,F_{\star,lj}(W_{il},W_{li})\bigg{]}.\] (B.8) Taking the sum of the above expression for \(l\in\{1,\ldots,N\}\) and using the independence of \(\widetilde{W}_{il}\) and \(\widetilde{W}_{ab}\) for \(ab\notin\{il,li\}\) gives \(\mathbb{E}\big{[}\big{(}\widetilde{\mathbf{W}}\nabla_{\widetilde{\mathbf{W}}}\mathscr{F}_{\star}(\mathbf{W})\big{)}_{ij}\big{]}\). Together with (B.4) we recover the first term in (3.19). Similar computations for the second term give \[\mathbb{E}\bigg{[}\sum_{l=1}^{N}\sum_{\sigma_{1},\sigma_{2}\in\{il,li\}}\int_{0}^{1}\int_{0}^{1}\nabla^{\sigma_{1}}_{\widetilde{W}_{\sigma_{1}}}(1_{s\leq t_{1}}-\mathbb{E}_{\widetilde{\mathbf{W}}})\nabla^{\sigma_{2}}_{\widetilde{W}_{\sigma_{2}}}(1_{t_{1}\leq 1}-\mathbb{E}_{\widetilde{\mathbf{W}}})\widetilde{W}_{il}dt_{1}ds\,F_{\star,lj}(W_{il},W_{li})\bigg{]}\] (B.9) \[=\frac{1}{2}\,\mathbb{E}\Big{[}\Big{(}\widetilde{\mathbf{W}}(\nabla_{\widetilde{\mathbf{W}}})^{2}\mathscr{F}_{\star}(\mathbf{W})\Big{)}_{ij}\Big{]}.\] (B.10) We now estimate the above expression when \(F_{1,lj}(\mathbf{W})=G_{lj}(z)\), i.e., \(\mathscr{F}_{1}(\mathbf{W})=\mathbf{G}(z)\). 
From the properties of the resolvent (3.34) we find that \[\nabla_{V_{1}}^{\sigma_{1}}\ldots\nabla_{V_{k}}^{\sigma_{k}}G_{lj}=(-1)^{k}\sum_{\tau\in S_{k}}G_{li_{\tau(1)}}V_{\tau(1)}G_{j_{\tau(1)}i_{\tau(2)}}V_{\tau(2)}\ldots G_{j_{\tau(k-1)}i_{\tau(k)}}V_{\tau(k)}G_{j_{\tau(k)}j}\,,\] (B.11) for any \(V_{1},\ldots,V_{k}\in\mathbb{C}^{n\times n}\) and double indices \(\sigma_{p}=i_{p}j_{p}\). Using the local law (3.10) together with (3.77), (B.11) and moment bounds (2.4) we obtain for \(\sigma_{1},\sigma_{2}\in\{il,li\}\) the estimates \[\widetilde{W}_{il}\nabla_{\widetilde{W}_{\sigma_{1}}}^{\sigma_{1}}\nabla_{\widetilde{W}_{\sigma_{2}}}^{\sigma_{2}}G_{lj}(\mathbf{W})=\left\{\begin{array}{ll}O_{\prec}\Big{(}\frac{1}{N^{3/2}}\Big{)},&l=j\,,\\ O_{\prec}\Big{(}\frac{1}{\sqrt{N|\operatorname{Im}z|}}\frac{1}{N^{3/2}}\Big{)},&l\neq j\,,\end{array}\right.\] (B.12) uniformly on \(z\in\Omega\). When \(z\in\Omega\) the functions \(G_{ij}(z)\) are deterministically bounded by \(N\), thus the stochastic domination bounds (B.12) also hold in expectation. Therefore, if in (B.9) we take the sum with respect to \(l\in\{1,\ldots,N\}\) and use the estimate (B.12) in expectation, we obtain the bound \[\left\|\mathbb{E}\Big{[}\frac{1}{2}\widetilde{\mathbf{W}}(\nabla_{\widetilde{\mathbf{W}}})^{2}\mathbf{G}\Big{]}\right\|_{\max}=O_{\prec}\Big{(}\frac{1}{N|\operatorname{Im}z|^{1/2}}\Big{)}=O_{\prec}\Big{(}\frac{N^{\tau/2}}{N\sqrt{\eta_{0}}}\Big{)}.\] (B.13) To analyze the case \(\mathscr{F}_{2}(\mathbf{W})=\mathbf{G}\mathbf{\epsilon}(t)\), \(F_{2,lj}(\mathbf{W})=G_{lj}(z)\mathbf{\epsilon}(t)\), we need to derive convenient formulas for the repeated directional derivatives of \(\mathbf{\epsilon}(t)\), similar to (B.11). 
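The case \(k=1\) of (B.11) is the standard resolvent perturbation identity: the directional derivative of a resolvent \(G=(H-z)^{-1}\) in a direction \(V\) equals \(-GVG\). A finite-difference sanity check of this identity (illustrative only; NumPy assumed, with a generic Hermitian \(H\) standing in for the model's matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
# A Hermitian matrix H and a spectral parameter z in the upper half-plane.
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (A + A.conj().T) / 2
z = 0.3 + 1.0j
I = np.eye(N)
G = np.linalg.inv(H - z * I)

# Direction matrix V for the directional derivative.
V = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))

# Exact first-order formula: d/de (H + e V - z)^{-1} at e = 0 equals -G V G.
exact = -G @ V @ G

# Central finite difference of the resolvent in the direction V.
eps = 1e-6
fd = (np.linalg.inv(H + eps * V - z * I) - np.linalg.inv(H - eps * V - z * I)) / (2 * eps)

assert np.max(np.abs(fd - exact)) < 1e-6
```

Iterating this identity \(k\) times and collecting the orderings of the \(k\) derivatives produces exactly the sum over permutations in (B.11).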
For this, we define \[\begin{split}\mathcal{I}([\sigma,V])&:=\sum_{\tau\in S_{k}}(-1)^{k}\frac{\operatorname{i}t}{\pi}\int_{\Omega}\frac{\partial\tilde{f}(\zeta)}{\partial\zeta}\operatorname{Tr}\Big{[}\sum_{l=1}^{N}G_{li_{\tau(1)}}(\zeta)V_{\tau(1)}G_{j_{\tau(1)}i_{\tau(2)}}(\zeta)V_{\tau(2)}\ldots V_{\tau(k)}G_{j_{\tau(k)}l}(\zeta)\Big{]}d^{2}\zeta\\ &=-\frac{(-1)^{k}}{k}\frac{\operatorname{i}t}{\pi}\sum_{\tau\in S_{k}}\int_{\Omega}\frac{\partial\tilde{f}(\zeta)}{\partial\zeta}\frac{\partial}{\partial\zeta}\operatorname{Tr}\Big{[}G_{j_{\tau(k)}i_{\tau(1)}}(\zeta)V_{\tau(1)}G_{j_{\tau(1)}i_{\tau(2)}}(\zeta)V_{\tau(2)}\ldots G_{j_{\tau(k-1)}i_{\tau(k)}}(\zeta)V_{\tau(k)}\Big{]}d^{2}\zeta\,.\end{split}\] (B.14) for a multiset \([\sigma,V]:=[(\sigma_{1},V_{1}),\ldots,(\sigma_{k},V_{k})]\), where \(\sigma_{p}=i_{p}j_{p}\) are double indices, and \(V_{1},\ldots,V_{k}\in\mathbb{C}^{n\times n}\). In particular, we have \[\nabla_{V_{0}}^{\sigma_{0}}\mathcal{I}([\sigma,V])=\mathcal{I}\big{(}\big{[}(\sigma_{0},V_{0}),(\sigma_{1},V_{1}),\ldots,(\sigma_{k},V_{k})\big{]}\big{)}\,.\] (B.15) Using this and \[\nabla_{V_{1}}^{\sigma_{1}}\mathbf{\epsilon}(t)=\mathbf{\epsilon}(t)\mathcal{I}\big{(}[(\sigma_{1},V_{1})]\big{)}\] (B.16) we see by induction that \[\nabla_{V_{1}}^{\sigma_{1}}\ldots\nabla_{V_{k}}^{\sigma_{k}}\mathbf{\epsilon}(t)=\mathbf{\epsilon}(t)\sum_{\pi\in\mathcal{P}([\sigma,V])}\mathcal{I}(\pi)\,,\] (B.17) where \(\mathcal{P}([\sigma,V])=\mathcal{P}([(\sigma_{1},V_{1}),\ldots,(\sigma_{k},V_{k})])\) denotes all partitions of \([(\sigma_{1},V_{1}),\ldots,(\sigma_{k},V_{k})]\), and for \(\pi\in\mathcal{P}([\sigma,V])\) we set \[\mathcal{I}(\pi)=\prod_{\lambda\in\pi}\mathcal{I}(\lambda)\,.\] (B.18) From (B.12) and \(|\mathbf{\epsilon}(t)|\leq 1\) we have that for all \(i,j,l\in\{1,\ldots,N\}\) and \(\sigma_{1},\sigma_{2}\in\{il,li\}\) the estimate \[\mathbf{\epsilon}(t)\widetilde{W}_{il}\nabla_{\widetilde{W}_{\sigma_{1}}}^{\sigma_{1}}\nabla_{\widetilde{W}_{\sigma_{2}}}^{\sigma_{2}}G_{lj}=\left\{\begin{array}{ll}O_{\prec}\Big{(}\frac{1}{N^{3/2}}\Big{)},&l=j\,,\\ O_{\prec}\Big{(}\frac{N^{\tau/2}}{\sqrt{N\eta_{0}}}\,\frac{1}{N^{3/2}}\Big{)},&l\neq j\,,\end{array}\right.\] (B.19) holds uniformly on \(z\in\Omega\). For the mixed derivatives (B.11) and (B.17) yield \[\widetilde{W}_{il}\Big{(}\nabla_{\widetilde{W}_{\sigma_{1}}}^{\sigma_{1}}G_{lj}\Big{)}\Big{(}\nabla_{\widetilde{W}_{\sigma_{2}}}^{\sigma_{2}}\mathbf{\epsilon}(t)\Big{)}=-\widetilde{W}_{il}G_{li_{1}}\widetilde{W}_{i_{1}j_{1}}G_{j_{1}j}\,\mathbf{\epsilon}(t)\,\mathcal{I}\big{(}[(\sigma_{2},\widetilde{W}_{\sigma_{2}})]\big{)}.\] (B.20) The local law (3.10) implies that for any \(i,l\in\{1,\ldots,N\}\), \(\sigma_{2}=i_{2}j_{2}\in\{il,li\}\), and \(A\in\mathbb{C}^{n\times n}\) \[\mathcal{I}\big{(}[(i_{2}j_{2},A)]\big{)}=\frac{\operatorname{i}t}{\pi}\int_{\Omega}\frac{\partial\tilde{f}(\zeta)}{\partial\zeta}\frac{\partial}{\partial\zeta}\operatorname{Tr}\Big{[}G_{j_{2}i_{2}}(\zeta)A\Big{]}d^{2}\zeta=\left\{\begin{array}{ll}O_{\prec}\Big{(}N^{\tau}|t|\|A\|\Big{)},&l=i,\\ O_{\prec}\Big{(}\frac{N^{3\tau/2}|t|\|A\|}{\sqrt{N\eta_{0}}}\Big{)},&l\neq i,\end{array}\right.\] (B.21) where we applied Stokes' theorem to rewrite the above expression using a contour integral as in (3.71). 
Combining (B.20), (B.21), the local law bounds (3.10) for \(G\) and the bounds for \(\widetilde{W}\) in (2.2)-(2.4) and (3.77) we get \[\widetilde{W}_{il}\Big{(}\nabla^{\sigma_{1}}_{\widetilde{W}_{\sigma_{1}}}G_{lj}\Big{)}\Big{(}\nabla^{\sigma_{2}}_{\widetilde{W}_{\sigma_{2}}}\mathbf{\epsilon}(t)\Big{)}=\left\{\begin{array}{ll}O_{\prec}\bigg{(}\frac{N^{\tau}|t|}{N^{3/2}}\bigg{)},&l=i\,,\\ O_{\prec}\bigg{(}\frac{1}{N^{2}}\frac{N^{3\tau/2}|t|}{\sqrt{N\eta_{0}}}\bigg{)},&l\neq i\,.\end{array}\right.\] (B.22) Finally, using (B.17), (B.14) and applying Stokes' theorem again we find that for all \(\sigma_{1},\sigma_{2}\in\{il,li\}\) \[\mathcal{I}\Big{(}\big{[}(\sigma_{1},\widetilde{W}_{\sigma_{1}}),(\sigma_{2},\widetilde{W}_{\sigma_{2}})\big{]}\Big{)}=O_{\prec}\bigg{(}\frac{N^{\tau}|t|}{N}\bigg{)}.\] (B.23) Together with (B.21), (B.17) and (B.18) this gives the bound for the second-order derivatives of \(\mathbf{\epsilon}(t)\) \[\nabla^{\sigma_{1}}_{\widetilde{W}_{\sigma_{1}}}\nabla^{\sigma_{2}}_{\widetilde{W}_{\sigma_{2}}}\mathbf{\epsilon}(t)=O_{\prec}\bigg{(}\frac{N^{2\tau}|t|^{2}}{N}+\frac{N^{3\tau}|t|^{2}}{N^{2}\eta_{0}}+\frac{N^{\tau}|t|}{N}\bigg{)}=O_{\prec}\bigg{(}\frac{N^{2\tau}(|t|+|t|^{2})}{N}\bigg{)}\,,\] (B.24) where we also used that \(\tau<1-\gamma\) to absorb all three terms in (B.24) into one. 
Now (B.24), (3.10), (2.2)-(2.4) and (3.77) imply \[\widetilde{W}_{il}G_{lj}\nabla^{\sigma_{1}}_{\widetilde{W}_{\sigma_{1}}}\nabla^{\sigma_{2}}_{\widetilde{W}_{\sigma_{2}}}\mathbf{\epsilon}(t)=\left\{\begin{array}{ll}O_{\prec}\bigg{(}\frac{N^{2\tau}(|t|+|t|^{2})}{N^{3/2}}\bigg{)}\,,&l=j\,,\\ O_{\prec}\bigg{(}\frac{N^{2\tau}(|t|+|t|^{2})}{N^{3/2}}\,\frac{N^{\tau/2}}{\sqrt{N\eta_{0}}}\bigg{)}\,,&l\neq j\,.\end{array}\right.\] (B.25) From the Leibniz rule, (B.19), (B.22) and (B.25) we establish the bound of the term in (B.9) for \(F_{2,lj}=G_{lj}\mathbf{\epsilon}(t)\), namely \[\sum_{l=1}^{N}\sum_{\sigma_{1},\sigma_{2}\in\{il,li\}}\widetilde{W}_{il}\nabla^{\sigma_{1}}_{\widetilde{W}_{\sigma_{1}}}\nabla^{\sigma_{2}}_{\widetilde{W}_{\sigma_{2}}}\Big{(}G_{lj}\mathbf{\epsilon}(t)\Big{)}=O_{\prec}\bigg{(}\frac{N^{2\tau}(1+|t|^{2})}{N^{3/2}}+\frac{N^{5\tau/2}(1+|t|^{2})}{N\sqrt{\eta_{0}}}\bigg{)}\] (B.26) \[=O_{\prec}\bigg{(}\frac{N^{5\tau/2}(1+|t|^{2})}{N\sqrt{\eta_{0}}}\bigg{)}\] (B.27) holding uniformly for \(z\in\Omega\) and \(i,j\in\{1,\dots,N\}\). Using the same argument as in the case \(\star=1\) to extend the bound (B.27) to its expectation we conclude that \[\bigg{\|}\mathbb{E}\Big{[}\frac{1}{2}\widetilde{\mathbf{W}}(\nabla_{\widetilde{\mathbf{W}}})^{2}\big{(}\mathbf{G}\mathbf{\epsilon}(t)\big{)}\Big{]}\bigg{\|}_{\max}=O_{\prec}\bigg{(}\frac{N^{5\tau/2}(1+|t|^{2})}{N\sqrt{\eta_{0}}}\bigg{)}\] (B.28) uniformly for \(z\in\Omega\). It remains to estimate the last term in (B.6). 
With the local law bound \(\|\mathbf{G}\|_{\max}\prec 1\) we have \[|\nabla^{\sigma_{1}}_{V_{1}}\dots\nabla^{\sigma_{k}}_{V_{k}}\mathbf{\epsilon}(t)|\prec(1+|t|^{k})N^{k\tau}\prod_{i=1}^{k}\|V_{i}\|\,,\qquad\|\nabla^{\sigma_{1}}_{V_{1}}\dots\nabla^{\sigma_{k}}_{V_{k}}G_{ab}\|\prec\prod_{i=1}^{k}\|V_{i}\|.\] (B.29) If we now use \(\|W_{ab}\|\prec N^{-1/2}\) following from (2.2)-(2.4), we see that the term in the remainder (B.6) satisfies the bound \[\|K^{\nabla,(\sigma_{1},\sigma_{2},\sigma_{3})}_{4,s}[W_{il},(W_{\sigma_{1}},W_{\sigma_{2}},W_{\sigma_{3}})]F_{\star,lj}(sW_{il},sW_{li})\|=O_{\prec}\Big{(}\frac{(1+|t|^{3})N^{3\tau}}{N^{2}}\Big{)}\] (B.30) uniformly for \(i,l,j\in\{1,\dots,N\}\), \(\sigma_{1},\sigma_{2},\sigma_{3}\in\{il,li\}\), \(s\in(0,1)\) and \(z\in\Omega\). After plugging this bound into (B.6), taking the sum for \(l\in\{1,\dots,N\}\) in (B.4) and using (B.10) we get the following expansion formula \[\mathbb{E}\Big{[}\mathbf{W}\mathscr{F}_{\star}(\mathbf{W})\Big{]}=\mathbb{E}\Big{[}\widetilde{\mathbf{W}}\,\nabla_{\widetilde{\mathbf{W}}}\mathscr{F}_{\star}(\mathbf{W})\Big{]}+\frac{1}{2}\mathbb{E}\Big{[}\widetilde{\mathbf{W}}(\nabla_{\widetilde{\mathbf{W}}})^{2}\mathscr{F}_{\star}(\mathbf{W})\Big{]}+\mathcal{E}_{\star}(z),\] (B.31) with \(\|\mathcal{E}_{\star}(z)\|_{\max}\lesssim(1+|t|^{3})N^{3\tau}N^{-1}\). We finish the proof by denoting \[\mathcal{D}_{\star}(z):=\frac{1}{2}\mathbb{E}\Big{[}\widetilde{\mathbf{W}}(\nabla_{\widetilde{\mathbf{W}}})^{2}\mathscr{F}_{\star}(\mathbf{W})\Big{]}+\mathcal{E}_{\star}(z)\] (B.32) and using the bounds (B.13) and (B.28).
2309.02310
Quenching massive galaxies across cosmic time with the semi-analytic model SHARK v2.0
We introduce version 2.0 of the SHARK semi-analytic model of galaxy formation after many improvements to the physics included. The most significant being: (i) a model describing the exchange of angular momentum (AM) between the interstellar medium and stars; (ii) a new active galactic nuclei feedback model which has two modes, a wind and a jet mode, with the jet mode tied to the jet energy production; (iii) a model tracking the development of black hole (BH) spins; (iv) more sophisticated modelling of environmental effects on satellite galaxies; and (v) automatic parameter exploration using Particle Swarm Optimisation. We focus on two timely research topics: the structural properties of galaxies and the quenching of massive galaxies. For the former, SHARK v2.0 is capable of producing a more realistic stellar size-mass relation with a plateau marking the transition from disk- to bulge-dominated galaxies, and scaling relations between specific AM and mass that agree well with observations. For the quenching of massive galaxies, SHARK v2.0 produces massive galaxies that are more quenched than the previous version, reproducing well the observed relations between star formation rate (SFR) and stellar mass, and specific SFR and BH mass at $z=0$. SHARK v2.0 produces a number density of massive-quiescent galaxies >1dex higher than the previous version, in good agreement with JWST observations at $z\le 5$; predicts a stellar mass function of passive galaxies in reasonably good agreement with observations at $0.5<z<5$; and environmental quenching to already be effective at $z=5$.
Claudia D. P. Lagos, Matias Bravo, Rodrigo Tobar, Danail Obreschkow, Chris Power, Aaron S. G. Robotham, Katy L. Proctor, Samuel Hansen, Angel Chandro-Gomez, Julian Carrivick
2023-09-05T15:26:27Z
http://arxiv.org/abs/2309.02310v2
# Quenching massive galaxies across cosmic time with the semi-analytic model Shark v2.0

###### Abstract

We introduce version 2.0 of the Shark semi-analytic model of galaxy formation after many improvements to the physics included. The most significant being: (i) a model describing the exchange of angular momentum (AM) between the interstellar medium and stars; (ii) a new active galactic nuclei feedback model which has two modes, a quasar and a radio mode, with the radio mode tied to the jet energy production; (iii) a model tracking the development of black hole (BH) spins; (iv) more sophisticated modelling of environmental effects on satellite galaxies; and (v) automatic parameter exploration using Particle Swarm Optimisation. We focus on two timely research topics: the structural properties of galaxies and the quenching of massive galaxies. For the former, Shark v2.0 is capable of producing a more realistic stellar size-mass relation with a plateau marking the transition from disk- to bulge-dominated galaxies, and scaling relations between specific AM and mass that agree well with observations. For the quenching of massive galaxies, Shark v2.0 produces massive galaxies that are more quenched than the previous version, reproducing well the observed relations between star formation rate (SFR) and stellar mass, and specific SFR and BH mass at \(z=0\). Shark v2.0 produces a number density of massive-quiescent galaxies \(>1\) dex higher than the previous version, in good agreement with JWST observations at \(z\lesssim 5\); predicts a stellar mass function of passive galaxies in reasonably good agreement with observations at \(0.5<z<5\); and environmental quenching to already be effective at \(z=5\).

keywords: galaxies: formation - galaxies: evolution

## 1 Introduction

Our current theory of galaxy formation and evolution is intimately linked to the growth of structures in the universe, which are thought to form hierarchically. 
The current preferred cosmological model is the \(\Lambda\) cold dark matter (\(\Lambda\)CDM), in which the cosmic web evolves, for the most part, by the effect of gravity. Cosmological simulations of galaxy formation attempt to follow the formation of galaxies as the cosmic web forms from the very early to the local universe, providing a wide range of predictions across cosmic time, environment and galaxy populations (see the reviews of Somerville et al., 2015; Vogelsberger et al., 2020). Among the most popular tools to simulate the formation of galaxies in a cosmological context are hydrodynamical simulations and semi-analytic models (SAMs). Both have pros and cons. Hydrodynamical simulations have the main advantage of solving for the evolution of baryons and DM simultaneously, avoiding some key simplifications made in SAMs regarding the symmetry of galaxies and halos and the relevant baryonic components of each. The most important advantages of SAMs are their speed, their ability to thoroughly explore the parameter space to understand potential degeneracies between physical models and parameters, and, as a result, the very large cosmological boxes they can be run on, still covering a very large dynamic range in both halo and stellar mass of the produced galaxies (Baugh, 2006; Benson and Bower, 2010). Compared to the current generation of cosmological hydrodynamical simulations, SAMs can push to about two to three orders of magnitude below the stellar mass resolution of hydrodynamical simulations for the same cosmological volume. For this reason, SAMs remain an essential part of the toolkit in the quest to understand galaxy formation and evolution. In Lagos et al. (2018, hereafter L18), we introduced the Shark (v1.1) SAM, an open source, flexible and highly modular SAM. Shark has been extensively used for many applications (some of which are listed in § 2). 
One characteristic of both SAMs and hydrodynamical simulations is the continuous development of the physical models included, to introduce new physics into the models, make more physical assumptions, and/or improve the agreement with observations once tensions have been identified. Shark is no exception, and since L18 the model has seen continuous improvements to many of the physical models included, where we recognised that too simplistic assumptions had been made, and to address areas of tension that have been identified between the model and observations. In this paper, we introduce a new version of Shark (v2.0), after significant development of the physical models, which include, but are not limited to, more physical models tracking the properties of supermassive black holes (BHs) and active galactic nuclei (AGN) feedback, the angular momentum evolution of galaxy components, and environmental effects that affect the evolution of satellite galaxies. In this paper, we focus on two areas of tension that have been identified between Shark v1.1 and observations: a stellar mass-size relation that approximates a single power-law, while observations display a clear plateau in the size-mass relation associated with the transition from disk- to bulge-dominated galaxies (Lange et al., 2015); and an overall scarcity of massive-quiescent galaxies at \(z\gtrsim 2\) compared with observations. More details about these two areas of tension are presented in § 2. The second tension above is particularly interesting, as it has been reported across many cosmological simulations of galaxy formation (Gould et al., 2023; Valentino et al., 2023). This problem has worsened with the advent of observations from the James Webb Space Telescope (JWST). These have revealed that massive-quiescent galaxies are relatively common at \(z>3\)(e.g.
Carnall et al., 2023; Valentino et al., 2023; Nanayakkara et al., 2022; Long et al., 2023) and more so than previous observational inferences had indicated (Carnall et al., 2020; Weaver et al., 2022; Gould et al., 2023). These galaxies typically have stellar masses in excess of \(10^{10}\,\mathrm{M}_{\odot}\) and number densities \(\gtrsim 10^{-5}\,\mathrm{Mpc}^{-3}\). Some of these galaxies show signs of having ceased their star formation (i.e. quenched) recently (\(\approx 100\) Myr ago), while others are consistent with older stellar population ages (\(\sim 1\) Gyr; Glazebrook et al., 2023). The latter may imply that these massive galaxies formed at \(z>10\), potentially posing a problem for structure formation in \(\Lambda\)CDM (Boylan-Kolchin, 2023). Although there are still many effects that could lead to systematic errors in the inferred stellar masses, redshifts and star formation rates (SFRs) of massive-quiescent galaxies in the observations, we focus here on how the predictions for this population change from Shark v1.1 to v2.0 after the significant revisions of the model. We generalise the problem to understanding the quenching of massive galaxies across cosmic time. We leave for future work the critical assessment of systematic biases in the inferred properties of massive-quiescent galaxies. This paper is organised as follows. § 2 briefly describes the Shark model and the \(N\)-body DM-only simulations we use. § 3 describes in detail the significant modifications made to Shark in this new version 2.0 relative to v1.1, and the parameters we adopt for the default Shark v2.0 model. Minor modifications are presented in Appendix A. § 4 presents key results on the abundance of galaxies across cosmic time and scaling relations, including structural relations in the local universe (§ 4.3.2).
We also present supplementary material with additional comparisons with observations to show that previous areas of agreement between Shark and observations remain so. § 5 focuses on the quenching of galaxies in Shark v2.0 and compares with v1.1 to reveal how much more efficient quenching is across cosmic time. The reader interested solely in the problem of massive-quiescent galaxies at \(z\gtrsim 2\) can skip to § 5.2, where we cover this in detail. § 6 presents our main conclusions. ## 2 The semi-analytic model Shark Shark, hosted on GitHub1, takes into account the physical processes that we think are critical in shaping the formation and evolution of galaxies. These are (i) the collapse and merging of DM halos; (ii) the accretion of gas onto halos, which is modulated by the DM accretion rate; (iii) the shock heating and radiative cooling of gas inside DM halos, leading to the formation of galactic disks via conservation of specific angular momentum of the cooling gas; (iv) star formation (SF) in galaxy disks; (v) stellar feedback from the evolving stellar populations; (vi) chemical enrichment of stars and gas; (vii) the growth via gas accretion and merging of BHs; (viii) heating by AGN; (ix) photoionization of the intergalactic medium; (x) galaxy mergers driven by dynamical friction within common DM halos, which can trigger starbursts (SBs) and the formation and/or growth of spheroids; (xi) collapse of globally unstable disks that also leads to SBs and the formation and/or growth of bulges. L18 included several different models for gas cooling, AGN feedback, stellar and photo-ionisation feedback, and star formation.
Footnote 1: [https://github.com/ICRAR/shark](https://github.com/ICRAR/shark) The model presented in L18 has been tested thoroughly across several publications and shown to reproduce several observed relations, including: the optical colour bimodality and how this depends on stellar mass (Bravo et al., 2020); the atomic hydrogen (HI)-halo mass relation and HI clustering (Chauhan et al., 2020, 2021); the panchromatic emission of galaxies from the far-ultraviolet (FUV) to the far-infrared (FIR) across cosmic time (Lagos et al., 2019, 2020; Chen et al., 2023); and the redshift distribution of bright FIR galaxies and the redshift evolution of their number density (Lagos et al., 2019; Casey et al., 2021; Long et al., 2022), among many other successes. In addition, Shark has been used to make predictions for the gravitational wave signal from binary stellar (Rauf et al., 2023) and supermassive (Curylo and Bulik, 2022) black holes. Some areas of tension with observations have also been identified, such as: a weak downsizing signal and quenching timescales that are independent of stellar mass (Bravo et al., 2023); a number density of passive galaxies that appears to be too low at \(z\geq 3\)(Long et al., 2022; Gould et al., 2023); a stellar size-mass relation that is very close to a single power-law; and a massive end of the stellar mass function (SMF) that is too shallow compared to observations (Proctor et al. in preparation). Proctor et al. (in preparation) crucially show that some of these limitations were not easily solved by modifying the parameters of the model and instead were inherent to the physical models included in Shark. The identification of these tensions, in addition to the desire to continue to improve the model, has led us to the continuing improvement and inclusion of new models within Shark. These new models are described in detail in § 3. We have solidified these changes into a new version of Shark, v2.0, available on GitHub.
Before describing the new models, we introduce the suite of \(N\)-body simulations on top of which Shark runs in § 2.1. ### The surfs simulations: halo catalogues and merger trees We run Shark over the surfs suite of \(N\)-body, DM-only simulations (Elahi et al., 2018). Most of the surfs runs have cubic volumes of \(210\,\mathrm{cMpc}/\mathrm{h}\) on a side, span a range in particle number, currently up to 8.5 billion particles, and adopt a \(\Lambda\)CDM Planck Collaboration et al. (2016) cosmology. \begin{table} \begin{tabular}{l c} \hline \hline Name & L210N1536 \\ \hline Box size [\(\mathrm{cMpc}/\mathrm{h}\)] & \(210\) \\ Number of particles & \(1536^{3}\) \\ Particle Mass [\(\mathrm{M}_{\odot}/\mathrm{h}\)] & \(2.21\times 10^{8}\) \\ Softening Length [\(\mathrm{ckpc}/\mathrm{h}\)] & \(4.5\) \\ \hline \end{tabular} \end{table} Table 1: Simulation parameters of the surfs run used in this paper. The cosmological parameters correspond to total matter, baryon and \(\Lambda\) densities of \(\Omega_{\rm m}=0.3121\), \(\Omega_{\rm b}=0.0491\) and \(\Omega_{\Lambda}=0.6879\), respectively, with a Hubble parameter of \(H_{0}=100\,h\,{\rm km}\,{\rm s}^{-1}\,{\rm Mpc}^{-1}\) with \(h=0.6751\), a scalar spectral index of \(n_{\rm s}=0.9653\) and a power spectrum normalization of \(\sigma_{8}=0.8150\). surfs was produced using a memory-lean version of the gadget2 code on the Magnus supercomputer at the Pawsey Supercomputing Centre. In this paper, we use the L210N1536 simulation, with the specifications of Table 1. surfs produces 200 snapshots for each simulation, typically having a time span between snapshots in the range of \(\approx 6-80\) Myr. Merger trees and halo catalogues were constructed using the phase-space finder VELOCIraptor (Elahi et al., 2019; Canas et al., 2019) and the halo merger tree code TreeFrog, developed to work on VELOCIraptor (Elahi et al., 2019).
We refer to L18 for more details on how the merger trees and halo catalogues are constructed for Shark, and to Poulton et al. (2018); Elahi et al. (2019, 2019); Canas et al. (2019) for details of VELOCIraptor and TreeFrog. From the VELOCIraptor catalogues, we take the halo and subhalo masses, and halo virial radii. Other properties of halos are calculated as described in § 4.2 in L18. The existence of central and satellite subhalos in the VELOCIraptor catalogues leads to Shark galaxies existing in three different types: type = 0 is the central galaxy of the central subhalo, while every other central galaxy of a satellite subhalo is type = 1. If a subhalo merges onto another one and it is not the main progenitor, it is treated as defunct. All the galaxies of defunct subhalos are made type = 2 and transferred to the list of galaxies of the central subhalo of their descendant host halo (see § 4.1 in L18 for more details). ## 3 New Baryon Physics Models and Technical Updates in Shark The modifications presented in this paper have been released in the public repository of Shark as version v2.0. Below we introduce the physical models that represent significant improvements over our previous instalment of Shark. In addition, we have implemented small changes to other existing physical models, as described in Appendix A. ### ISM-stars angular momentum exchange In this new version of Shark we include different levels of complexity for the exchange of specific angular momentum between the cooling gas, the interstellar medium (ISM) and the stellar disk. Here, the specific angular momentum will be referred to as \(j\equiv J/M\). The default in L18 assumes the cooling gas to have the same \(j\) as the DM halo, \[j_{\rm cool}=\frac{J_{\rm h}}{M_{\rm halo}}, \tag{1}\] where \(J_{\rm h}\) is the halo's angular momentum, which is calculated from the mass and the halo's spin parameter, following Mo et al.
(1998), \[J_{\rm h}=\frac{\sqrt{2}\,G^{2/3}}{(10\,H(z))^{1/3}}\,\lambda_{\rm DM}\,M_{\rm halo}^{5/3}. \tag{2}\] \(j_{\rm cool}\) is then input in the set of ordinary differential equations (ODEs) that control the exchange of angular momentum (Eqs. (5)-(9)). Section 4.2 of L18 describes how \(\lambda_{\rm DM}\) is obtained, and Appendix A presents a minor modification to the calculation of \(\lambda_{\rm DM}\) introduced in Shark v2.0. The gaseous and stellar disks also exchange angular momentum at a rate \(\dot{J}_{\rm g,s}\). In its simplest form, \[\dot{J}_{\rm g,s}=\psi\,j_{\rm cold}, \tag{3}\] where \(\psi\) is the instantaneous SFR and \(j_{\rm cold}\) (not to be confused with \(j_{\rm cool}\)) is the specific angular momentum of the gaseous disk. This is the default assumption made in L18 and in most SAMs, except for those that follow disks as a set of annuli that can evolve independently (e.g. Stringer and Benson, 2007; Stevens et al., 2016). Mitchell et al. (2018) showed that in the cosmological hydrodynamical simulation eagle (Schaye et al., 2015; Crain et al., 2015; McAlpine et al., 2015), the stellar \(j\) was systematically lower than in the GALFORM SAM at fixed stellar mass. Mitchell et al. (2018) suggested that a key modification required by GALFORM to evolve the specific angular momentum of galaxies more realistically was to consider the fact that stars form from molecular gas only. The latter tends to be more concentrated in the centres of galaxies relative to the total ISM (Lagos et al., 2011). This implies that stars will systematically form from gas of lower specific angular momentum than the total ISM. Mitchell et al. (2018) showed that this physical effect is behind the overly large sizes of galaxies in GALFORM. Thus, a more sophisticated model should include the fact that stars form from molecular gas, which resides in the high column density regions of disks and is more concentrated than the total gas disk.
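To make Eqs. (1)-(2) concrete, they can be transcribed directly; the sketch below is purely illustrative (it is not the actual Shark implementation), and the unit system (masses in \(\mathrm{M}_{\odot}\), lengths in Mpc, velocities in km/s) is an assumption made here for clarity:

```python
import math

# Illustrative units assumed: masses in Msun, lengths in Mpc, velocities in km/s.
G = 4.301e-9  # gravitational constant [Mpc (km/s)^2 / Msun]

def halo_angular_momentum(m_halo, lambda_dm, hubble_z):
    """Halo angular momentum J_h of Eq. (2), from the halo mass and spin parameter.

    hubble_z is the Hubble parameter H(z) in km/s/Mpc.
    """
    return (math.sqrt(2.0) * G**(2.0 / 3.0) / (10.0 * hubble_z)**(1.0 / 3.0)
            * lambda_dm * m_halo**(5.0 / 3.0))

def j_cool(m_halo, lambda_dm, hubble_z):
    """Specific angular momentum of the cooling gas, Eq. (1): j_cool = J_h / M_halo."""
    return halo_angular_momentum(m_halo, lambda_dm, hubble_z) / m_halo
```

Note the scalings implied by Eq. (2): at fixed spin parameter and redshift, \(J_{\rm h}\propto M_{\rm halo}^{5/3}\) and hence \(j_{\rm cool}\propto M_{\rm halo}^{2/3}\).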
Here, we implement a more realistic exchange between the specific angular momentum of the ISM and the stars following the lessons from Mitchell et al. (2018). This is done by calculating \(\dot{J}_{\rm g,s}\) as \[\dot{J}_{\rm g,s}=2\,\pi\,\int_{0}^{\infty}v(r)\,r\,\Sigma_{\rm SFR}\,{\rm d}r, \tag{4}\] where \(\Sigma_{\rm SFR}\) depends on the assumed SF law and \(v(r)\) is the circular velocity radial profile. We include a boolean parameter in Shark, angular_momentum_transfer, which set to true means the stellar disk angular momentum is calculated as in Eq. (4), and set to false uses Eq. (3) instead. Thus, the results of L18 can be fully recovered with the latter option. The half-mass gas and stellar disk sizes are then calculated as \(r_{\rm gas}=f_{\rm norm}\,j_{\rm cold}/V_{\rm circ}\) and \(r_{\star}=f_{\rm norm}\,j_{\star}/V_{\rm circ}\). Here, we set \(f_{\rm norm}=0.839\), which is the value for an idealized exponential disk (Guo et al., 2011). Before presenting the way we solve for the evolution of the galaxy angular momentum split by components, we look at the assumptions we make. From the definition \(J=j\,M\), with \(J\equiv|\vec{J}|\) and \(j\equiv|\vec{j}|\), it follows that \(\partial J/\partial t=M\,\partial j/\partial t+j\,\partial M/\partial t\). The two terms represent angular momentum transfer by torques and advection, respectively. We assume that internal to galaxies and from the gas cooling down onto galaxies, \(\partial j/\partial t=0\) (i.e. no torques), and hence \(\partial J/\partial t=j\,\partial M/\partial t\). This is a simplification, as cosmological hydrodynamical simulations have shown that torques can be important between the cooling gas and the galaxy (e.g. Stevens et al., 2016). The second important assumption is that all the components in the galaxy and the cooling gas have angular momentum vectors that are aligned. This is not necessarily the case. Contreras et al.
(2017) showed that halos have an angular momentum vector that can evolve significantly in direction under the presence of high-mass mergers. Quieter assembly histories lead to smaller changes of the direction of the vector. This in principle could lead to misalignments between the halo and the galaxy, as shown by Lagos et al. (2015). The two assumptions above are made to simplify the problem, but in the future we will consider models that relax these assumptions and study the effect that would have on the scaling relations analysed in § 4.3.2. Simultaneously with the mass and metal exchange (see Eqs. (49) to (58) in L18), in the case of SF in disks, we solve for the angular momentum exchange between these components following the assumptions above, as follows: \[\dot{J}_{\star} = (1-R)\,\dot{J}_{\rm g,s} \tag{5}\] \[\dot{J}_{\rm cold} = \dot{M}_{\rm cool}\,j_{\rm cool}-(1-R+\beta_{\star})\,\dot{J}_{\rm g,s} \tag{6}\] \[\dot{J}_{\rm cold,halo} = -\dot{M}_{\rm cool}\,j_{\rm cool} \tag{7}\] \[\dot{J}_{\rm hot,halo} = \dot{m}_{\rm out,\star}\,j_{\rm out}-\dot{m}_{\rm ejec}\,j_{\rm out} \tag{8}\] \[\dot{J}_{\rm ejec} = \dot{m}_{\rm ejec}\,j_{\rm out}. \tag{9}\] Here, \(\dot{J}_{\rm g,s}\) is as described in Eq. (4), and \(\dot{M}_{\rm cool}\), \(\dot{m}_{\rm out,\star}\) and \(\dot{m}_{\rm ejec}\) are the gas cooling rate (Eq. (5) in L18), the outflow rate due to star formation and the ejection rate from the halo, respectively; \(\beta_{\star}=\dot{m}_{\rm out,\star}\,\psi^{-1}\), where \(\psi\) is the instantaneous SFR, is the wind mass loading, and \(R\) is the fraction of mass recycled to the ISM (from stellar winds and supernovae; see § 4.4.6 in L18 for details). In the case of the hot halo and ejected gas mass components, the angular momentum growth depends on the specific angular momentum of the outflowing gas.
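The structure of Eqs. (5)-(9) can be illustrated with a single explicit-Euler step; this is a sketch only (Shark integrates these jointly with the mass and metal ODEs, and the variable names here are invented for illustration):

```python
def am_exchange_step(state, rates, dt):
    """One explicit-Euler step of the angular-momentum exchange of Eqs. (5)-(9).

    state: dict with the five J reservoirs (J_star, J_cold, J_cold_halo,
           J_hot_halo, J_ejec).
    rates: dict with Jdot_gs (Eq. 4), Mdot_cool, mdot_out_star, mdot_ejec,
           j_cool, j_out, R (recycled fraction) and beta_star (mass loading).
    """
    r = rates
    dJ = {
        "J_star":      (1.0 - r["R"]) * r["Jdot_gs"],                      # Eq. (5)
        "J_cold":      r["Mdot_cool"] * r["j_cool"]                        # Eq. (6)
                       - (1.0 - r["R"] + r["beta_star"]) * r["Jdot_gs"],
        "J_cold_halo": -r["Mdot_cool"] * r["j_cool"],                      # Eq. (7)
        "J_hot_halo":  (r["mdot_out_star"] - r["mdot_ejec"]) * r["j_out"], # Eq. (8)
        "J_ejec":      r["mdot_ejec"] * r["j_out"],                        # Eq. (9)
    }
    return {key: state[key] + dJ[key] * dt for key in state}
```

A useful sanity check: in the simple case \(\dot{J}_{\rm g,s}=\psi\,j_{\rm cold}\) (Eq. 3) with \(j_{\rm out}=j_{\rm cold}\), the five rates sum to zero, so the total angular momentum across reservoirs is conserved by construction.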
This in principle allows outflows to affect the angular momentum of the disk in a differential fashion, which would be the case if the outflow rate was an explicit function of radius (as has been proposed by detailed stellar feedback models, e.g. Creasey et al., 2013; Hopkins et al., 2012; Lagos et al., 2013). However, in Shark v2.0 we assume \(j_{\rm out}=j_{\rm cold}\), and leave for future work the exploration of how outflow rates that change radially can impact the angular momentum structure of galaxies. Note that implicit in Eqs. (5)-(9) there is a third assumption, which is that the relaxation time of the hot halo gas is such that after a SF episode, it regains the same specific angular momentum as the DM halo. This assumes that any change in the hot halo gas \(j\) due to outflows is negligible. ### Dynamical friction leading to galaxy mergers Inherent limitations of halo and subhalo finders generally imply that subhalos cease to be tracked when they reach too few particles, or when they become indistinguishable from the 3D or 6D density background, depending on whether a 3D or 6D structure finder is used. This does not imply that galaxies in those subhalos should immediately merge onto the central galaxy, as in some cases subhalos stop being tracked at large distances from the halo's centre (typically 0.5-1 times the virial radius; see Poulton et al., 2018). For SAMs, this means that we have to resort to analytic calculations of the dynamical friction timescale to have a more realistic proxy for when a satellite galaxy should merge onto the central. Recent work by Poulton et al. (2020) tracks (sub)halos that are identified in dense environments in \(N\)-body simulations, and provides estimates for their orbital properties.
By comparing the merger timescale of these simulated subhalos with various analytical prescriptions, they are able to identify regimes where previously published analytical prescriptions do not provide a realistic estimate of merger timescales. Specifically, they find that in systems with relatively large host-to-subhalo mass ratios, these analytic approximations all systematically underpredict the merger timescale. This has important implications for both the satellite population and the high-mass galaxy population reported in SAMs, given the importance of mergers in the mass assembly of large galaxies that is expected from a hierarchical model of structure formation (Robotham et al., 2014). Follow-up work presented in Poulton et al. (2021) provided a new analytic model to compute the dynamical friction timescale (\(\tau_{\rm merge}\)) of satellite galaxies that much more accurately captures the merger timescales seen over a wide dynamic range in halo masses. In this work we implement this new dynamical friction timescale in Shark. The default model presented in L18 adopted the dynamical friction timescale of Lacey and Cole (1993), which compared to Poulton et al. (2021) merges small galaxies too quickly. Poulton et al. (2021) formulate the merger timescale in two regimes, based on their result that \(\tau_{\rm merge}\) is more strongly dependent on position for subhalos outside of the virial radius of the host than for subhalos inside the virial radius. 
The merger timescale is calculated as \[\tau_{\rm merge}=\begin{cases}5.62\,\sqrt{\frac{R_{\rm vir,host}}{G\,M_{\rm encl,host}(r)}}\,r^{0.8}\,R_{\rm peri}^{0.2}&\text{for $r<R_{\rm vir,host}$},\\ 5.62\,\frac{R_{\rm vir,host}}{\sqrt{G\,M_{\rm vir,host}}}\,r^{0.3}\,R_{\rm peri}^{0.2}&\text{for $r\geq R_{\rm vir,host}$},\end{cases} \tag{10}\] where \(r\) is the position of the subhalo relative to the halo's centre, \(R_{\rm peri}\) is the pericentric distance from the host centre, \(M_{\rm encl,host}(r)\) is the mass of the host enclosed within a radius \(r\) assuming a Navarro et al. (1997) profile, and \(M_{\rm vir,host}\) is the virial mass of the host halo. These properties are all determined from the underlying \(N\)-body simulation. In Shark this model can be adopted by setting merger_model = poulton20. Note that the default in L18 was merger_model = lacey93. ### AGN feedback: QSO and radio mode feedback The default model presented in L18 adopted the Croton et al. (2016) AGN feedback model. That model was in itself inspired by Croton et al. (2006), which included only a radio mode of AGN feedback. The heating power of AGN feedback in the radio mode of Croton et al. (2006) was calculated using the bolometric luminosity of the AGN. For the latter, only the BH accretion rate coming from the hot-halo mode was considered (Eq. 10 in Croton et al., 2006). However, BHs at any one time can be growing by multiple channels; in particular, the accretion rate onto the BHs can be much higher than that coming from the hot-halo mode only if the galaxy is experiencing a SB (which in Shark can be triggered either by galaxy mergers or by disk instabilities). There is no physical reason why in these cases only the fractional contribution to the accretion rate from the hot-halo mode should be included in the AGN heating power calculation.
In Shark v2.0, we implement a new model for AGN feedback that includes two modes: a radio and a QSO mode, and uses the physical properties of the BH to calculate a jet power and a QSO-driven outflow rate and velocity. Before describing the new AGN feedback models, we describe how the three fundamental properties of BHs, their mass, accretion rate and spin, are calculated. #### 3.3.1 BH mass, accretion rate and spin In addition to BH-BH mergers, in which we instantaneously add the masses of the BHs that are merging, BHs can also grow via gas accretion in two different modes. The BH accretion rate due to the hot-halo mode is calculated as in L18, \[\dot{m}_{\rm BH,hh}=\kappa\,\frac{15}{16}\pi\,G\,\mu\,m_{\rm p}\,\frac{\kappa_{\rm B}\,T_{\rm vir}}{\Lambda(T_{\rm vir},Z_{\rm hot})}\,m_{\rm BH}, \tag{11}\] where \(m_{\rm BH}\) is the BH mass, \(G\) is the gravitational constant, \(\mu\) is the atomic weight, \(m_{\rm p}\) the proton mass, \(\kappa_{\rm B}\) is Boltzmann's constant, \(T_{\rm vir}\) the virial temperature of the halo, \(Z_{\rm hot}\) the halo gas metallicity, \(\Lambda(T_{\rm vir},Z_{\rm hot})\) the cooling function, and \(\kappa\) a free parameter. For the case of BH growth during SBs, we follow L18, \[\delta\,m_{\rm BH,sb}=f_{\rm smbh}\,\frac{m_{\rm gas}}{1+(v_{\rm smbh}/V_{\rm vir})^{2}}, \tag{12}\] where \(m_{\rm gas}\) and \(V_{\rm vir}\) are the cold gas mass reservoir of the SB and the virial velocity of the halo, respectively. \(f_{\rm smbh}\) and \(v_{\rm smbh}\) are free parameters. As in L18, we estimate the BH accretion rate in this mode assuming that the accretion timescale scales with the bulge dynamical timescale, \(\tau_{\rm acc,sb}=e_{\rm sb}\,r_{\rm bulge}/v_{\rm bulge}\), where \(e_{\rm sb}\) is an e-folding parameter. The accretion rate during SBs is thus, \[\dot{m}_{\rm BH,sb}=\frac{\delta\,m_{\rm BH,sb}}{\tau_{\rm acc,sb}}.
\tag{13}\] The total accretion rate onto the BH at any one time is \(\dot{m}_{\rm BH}=\dot{m}_{\rm BH,hh}+\dot{m}_{\rm BH,sb}\). The BH accretion disk structure is expected to be a strong function of the accretion rate. We thus define a normalised accretion rate based on the Eddington luminosity, \(L_{\rm Edd}\), \[\dot{m}=\frac{\dot{m}_{\rm BH}}{\dot{m}_{\rm Edd}}, \tag{14}\] where \(\dot{m}_{\rm Edd}=L_{\rm Edd}/(0.1\,c^{2})\), and \(c\) is the speed of light. When \(\dot{m}>\dot{m}_{\rm ADAF}\), the accretion disk is expected to be thin and to cool efficiently; this regime is commonly referred to as the thin-disk (TD) regime (Shakura & Sunyaev, 1973). If instead \(\dot{m}<\dot{m}_{\rm ADAF}\), the accretion disk is expected to be unable to cool efficiently by radiating the energy generated by viscosity; this is commonly referred to as the Advection Dominated Accretion Flow (ADAF; Rees, 1982) regime. It is broadly assumed that the transition between TD and ADAF happens at \(\dot{m}_{\rm ADAF}=0.01\). The ADAF regime, according to Mahadevan (1997), can be further subdivided into two regimes: a lower accretion rate ADAF regime (\(\dot{m}<\dot{m}_{\rm crit,visc}\)), in which heating of the electrons is dominated by viscous heating, and a higher accretion rate ADAF regime (\(\dot{m}_{\rm crit,visc}<\dot{m}<\dot{m}_{\rm ADAF}\)), in which ion-electron heating dominates the heating of the electrons. On the other extreme, we classify AGN as super-Eddington (SE) if \(\dot{m}>\eta\), with \(\eta\sim 1\). In summary, four BH accretion regimes are defined: * SE: \(\dot{m}>\eta\), * TD: \(\dot{m}>\dot{m}_{\rm ADAF}\) and \(\dot{m}\leq\eta\), * ADAF\({}_{\rm high}\): \(\dot{m}_{\rm crit,visc}<\dot{m}<\dot{m}_{\rm ADAF}\), * ADAF\({}_{\rm low}\): \(\dot{m}\leq\dot{m}_{\rm crit,visc}\).
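The four regimes above amount to a simple threshold classification of the normalised accretion rate of Eq. (14), with the ADAF boundary \(\dot{m}_{\rm crit,visc}\) given by Eq. (21) below. A minimal sketch (illustrative only, not the Shark code; function names are ours):

```python
def mdot_crit_visc(alpha_adaf=0.1, delta_adaf=0.2):
    """Boundary between the two ADAF regimes (Eq. 21), with beta = 1 - alpha/0.55.

    Defaults are the fiducial parameters quoted in the text.
    """
    beta = 1.0 - alpha_adaf / 0.55
    return 0.001 * (delta_adaf / 0.0005) * ((1.0 - beta) / beta) * alpha_adaf**2

def accretion_regime(mdot, mdot_adaf=0.01, eta=1.0):
    """Classify the Eddington-normalised accretion rate (Eq. 14) into the four regimes."""
    if mdot > eta:
        return "SE"          # super-Eddington
    if mdot > mdot_adaf:
        return "TD"          # thin disk
    if mdot > mdot_crit_visc():
        return "ADAF_high"   # ion-electron heating dominates
    return "ADAF_low"        # viscous heating of electrons dominates
```

With the fiducial parameters, \(\dot{m}_{\rm crit,visc}\approx 8.9\times 10^{-4}\), so the ADAF regime spans roughly three decades below \(\dot{m}_{\rm ADAF}=0.01\).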
The dimensionless BH spin vector, \(\mathbf{a}\), is defined as \(\mathbf{a}\equiv\mathbf{J}_{\rm BH}/J_{\rm max}=c\,\mathbf{J}_{\rm BH}/G\,m_{\rm BH}^{2}\), where \(\mathbf{J}_{\rm BH}\) is the angular momentum vector of the BH. To calculate \(\mathbf{a}\), we implement three different models, which in Shark v2.0 can be selected by setting the variable spin_model. Below, we refer to the norm of the spin vector as \(a=|\mathbf{a}|\). The list of models is presented below: * constant. This corresponds to the simplest assumption of a constant spin. The radiation efficiency of a BH depends on the radius of the last stable orbit, which in itself depends on the BH spin. The value adopted for the constant spin in Shark v2.0 is 0.67, which is equivalent to assuming a constant radiation efficiency of 0.1 (Bardeen et al., 1972). (Note that \(a=0\) gives \(\epsilon=0.057\) and \(a=1\) gives \(\epsilon=0.42\); see Eqs. (16)-(19)). * volonteri07. This is a simple scaling relation inspired by the spin-BH mass relation found by Volonteri et al. (2007), who presented a model of BH accretion from a warped disk. The authors found that, on average, more massive BHs have a higher spin. We use the average relation they found and scale the spin directly with the BH mass as \[a=0.305\log_{10}\left(\frac{m_{\rm BH}}{\rm M_{\odot}}\right)-1.7475. \tag{15}\] We limit \(a\) to be in the range \([0,1]\) in this model. * griffin19. This model is the full implementation in Shark v2.0 of the sophisticated BH spin development model of Griffin et al. (2019). They presented a model that follows the changes in BH spin produced by BH-BH mergers and gas accretion. For the latter, Griffin et al. (2019) presented three different models, which we also implemented in Shark. These are prolonged accretion, a self-gravitating accretion disk and a warped accretion disk. The user can choose between these three models by setting the variable accretion_disk_model. Details of these models are presented in Appendix B.
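The quoted efficiencies follow directly from the Bardeen et al. (1972) last-stable-orbit formulae (Eqs. (16)-(19) below). A direct transcription, as an illustrative sketch (not the Shark code; function names are ours):

```python
import math

def r_lso(a, corotating=True):
    """Last stable orbit radius in gravitational radii, Eqs. (17)-(19).

    The negative sign corresponds to co-rotation (spin and disk aligned
    within 90 degrees), the positive sign to counter-rotation.
    """
    z1 = 1.0 + (1.0 - a * a)**(1.0 / 3.0) * ((1.0 + a)**(1.0 / 3.0)
                                             + (1.0 - a)**(1.0 / 3.0))
    z2 = math.sqrt(3.0 * a * a + z1 * z1)
    sign = -1.0 if corotating else 1.0
    return 3.0 + z2 + sign * math.sqrt((3.0 - z1) * (3.0 + z1 + 2.0 * z2))

def epsilon_td(a, corotating=True):
    """Thin-disk radiative efficiency, Eq. (16)."""
    return 1.0 - math.sqrt(1.0 - 2.0 / (3.0 * r_lso(a, corotating)))
```

For a co-rotating disk this reproduces the values quoted above: \(\epsilon(0)\simeq 0.057\) (the Schwarzschild case, with the last stable orbit at 6 gravitational radii), \(\epsilon(1)\simeq 0.42\), and \(\epsilon(0.67)\simeq 0.1\), the efficiency implied by the constant-spin model.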
Note that in this model \(a\) can take values in the range \([-1,1]\). This model includes the full implementation of the warped accretion disk model described in Volonteri et al. (2007), rather than a fit to their results as adopted by spin_model=volonteri07. With the BH accretion rate and spin defined, we can calculate the AGN bolometric luminosity. We first define the radiative accretion efficiency for a thin accretion disk, \(\epsilon_{\rm TD}\), \[\epsilon_{\rm TD}=1-\sqrt{1-\frac{2}{3\,\hat{r}_{\rm lso}}}, \tag{16}\] where \(\hat{r}_{\rm lso}\) is the radius of the last stable circular orbit in units of the gravitational radius of the BH, \(r_{\rm G}\equiv G\,m_{\rm BH}/c^{2}\). We calculate \(\hat{r}_{\rm lso}\) following Bardeen et al. (1972), \[\hat{r}_{\rm lso}=3+Z_{2}\pm\sqrt{(3-Z_{1})(3+Z_{1}+2\,Z_{2})}, \tag{17}\] with the negative (positive) sign corresponding to the case when the angle between the BH spin and the accretion disk is less (larger) than 90 degrees. We refer to these two cases as co- and counter-rotation, respectively. \(Z_{1}\) and \(Z_{2}\) are defined as \[Z_{1}=1+(1-a^{2})^{1/3}\left[(1+a)^{1/3}+(1-a)^{1/3}\right], \tag{18}\] and \[Z_{2}=\sqrt{3a^{2}+Z_{1}^{2}}. \tag{19}\] Note that BHs can be counter-rotating only in the griffin19 spin model. We compute the AGN bolometric luminosity following Griffin et al. (2019) for the thin disk and ADAF regimes, and following Griffin et al.
(2020) for the super-Eddington regime: \[L_{\rm bol}=\begin{cases}\eta\,\left(1+\ln\left(\frac{\dot{m}}{\eta}\,\frac{\epsilon_{\rm TD}}{0.1}\right)\right)\,L_{\rm Edd},&\text{in SE}\\ \epsilon_{\rm TD}\,\dot{m}_{\rm BH}\,c^{2},&\text{in TD}\\ 0.2\,\epsilon_{\rm TD}\,\dot{m}_{\rm BH}\,c^{2}\,\left(\frac{\dot{m}}{\alpha_{\rm ADAF}^{2}}\right)\,\left(\frac{\beta}{0.5}\right)\,\left(\frac{6}{\hat{r}_{\rm lso}}\right),&\text{in ADAF}_{\rm high}\\ 0.0002\,\epsilon_{\rm TD}\,\dot{m}_{\rm BH}\,c^{2}\,\left(\frac{\delta_{\rm ADAF}}{0.0005}\right)\,\left(\frac{1-\beta}{0.5}\right)\,\left(\frac{6}{\hat{r}_{\rm lso}}\right),&\text{in ADAF}_{\rm low}\end{cases} \tag{20}\] Here, \(\alpha_{\rm ADAF}\) is the viscosity parameter in the ADAF regime, \(\delta_{\rm ADAF}\) is the fraction of viscous energy transferred to the electrons, which has a value between 0.1 and 0.5 (see the review of Yuan & Narayan, 2014), and \(\beta\) is the ratio of gas pressure to total pressure (i.e. the sum of gas pressure and magnetic pressure). Following Griffin et al. (2019), \(\beta=1-\alpha_{\rm ADAF}/0.55\), \(\alpha_{\rm ADAF}=0.1\) and \(\delta_{\rm ADAF}=0.2\). With these parameters defined, we introduce the boundary between the two ADAF regimes (Griffin et al., 2019), \[\dot{m}_{\rm crit,visc}=0.001\,\left(\frac{\delta_{\rm ADAF}}{0.0005}\right)\,\left(\frac{1-\beta}{\beta}\right)\,\alpha_{\rm ADAF}^{2}. \tag{21}\] #### 3.3.2 AGN radio-mode feedback In Shark v2.0, the radio mode of AGN feedback is assumed to only occur when halos have reached quasi-hydrostatic equilibrium (i.e. halos that have developed a hot gaseous halo). To evaluate this, we use the Correa et al. (2018) criterion, in which cooling and heating terms are calculated and compared.
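The piecewise luminosity of Eq. (20) can be sketched as follows. This is an illustration only (not the Shark implementation): the Eddington prefactor \(1.26\times 10^{38}\,{\rm erg\,s^{-1}\,M_{\odot}^{-1}}\) and the cgs unit choices are assumptions made here, and the ADAF prefactors follow Griffin et al. (2019):

```python
import math

C_CGS = 2.998e10  # speed of light [cm/s]

def l_bol(mdot_bh, m_bh, eps_td, r_lso_hat, alpha_adaf=0.1, delta_adaf=0.2,
          eta=1.0, mdot_adaf=0.01):
    """Bolometric AGN luminosity of Eq. (20) [erg/s].

    mdot_bh is the BH accretion rate in g/s, m_bh the BH mass in Msun;
    eps_td and r_lso_hat follow from Eqs. (16)-(19).
    """
    beta = 1.0 - alpha_adaf / 0.55
    l_edd = 1.26e38 * m_bh                            # Eddington luminosity (assumed prefactor)
    mdot = mdot_bh / (l_edd / (0.1 * C_CGS**2))       # normalised rate, Eq. (14)
    mdot_crit = 0.001 * (delta_adaf / 0.0005) * ((1.0 - beta) / beta) * alpha_adaf**2
    if mdot > eta:        # super-Eddington
        return eta * (1.0 + math.log(mdot * eps_td / (eta * 0.1))) * l_edd
    if mdot > mdot_adaf:  # thin disk
        return eps_td * mdot_bh * C_CGS**2
    if mdot > mdot_crit:  # ADAF, ion-electron heating dominates
        return (0.2 * eps_td * mdot_bh * C_CGS**2 * (mdot / alpha_adaf**2)
                * (beta / 0.5) * (6.0 / r_lso_hat))
    # ADAF, viscous heating of electrons dominates
    return (0.0002 * eps_td * mdot_bh * C_CGS**2 * (delta_adaf / 0.0005)
            * ((1.0 - beta) / 0.5) * (6.0 / r_lso_hat))
```

Note that in the thin-disk branch the luminosity is simply \(\epsilon_{\rm TD}\,\dot{m}_{\rm BH}\,c^{2}\), so a BH accreting at ten per cent of Eddington with \(\epsilon_{\rm TD}=0.1\) radiates at ten per cent of \(L_{\rm Edd}\).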
These terms are computed per halo as follows, \[\Gamma_{\rm cool}(M_{\rm halo},r) = M_{\rm hot}\,\frac{n_{\rm H}(r)\,\Lambda(T_{\rm vir},Z_{\rm hot})}{\mu\,m_{\rm p}}, \tag{22}\] \[\Gamma_{\rm heat}(M_{\rm halo}) = \frac{3\,k_{\rm B}\,T_{\rm vir}}{2\,\mu\,m_{\rm p}}\,\frac{\Omega_{\rm b}}{\Omega_{\rm m}}\,\dot{M}_{\rm halo}\,\left(\frac{2}{3}\,f_{\rm hot}+f_{\rm acc,hot}^{\rm halo}\right). \tag{23}\] Here, \(M_{\rm halo}\) is the virial mass, \(M_{\rm hot}\) is the halo gas mass, \(n_{\rm H}(r)\) is the gas volume density of hydrogen atoms at \(r\), \(\Omega_{\rm b}\) and \(\Omega_{\rm m}\) are cosmological parameters, \(\dot{M}_{\rm halo}\) is the matter accretion rate onto the halo, \(f_{\rm hot}\) is the fraction of gas in the halo relative to the universal baryon fraction (\(\equiv M_{\rm hot}/(\Omega_{\rm b}/\Omega_{\rm m}\,M_{\rm halo})\)) and \(f_{\rm acc,hot}^{\rm halo}\) is the fraction of the accreting gas that is hot. The latter fractions were parametrised as a function of the halo mass by Correa et al. (2018) as follows, \[f_{\rm hot} = 10^{-0.8+0.5\,x-0.05\,x^{2}}, \tag{24}\] \[f_{\rm acc,hot}^{\rm halo} = \frac{1}{e^{-4.3\,(x+0.15)}+1}, \tag{25}\] with \(x\) defined as \[x=\log_{10}\left(\frac{M_{\rm halo}}{10^{12}\,{\rm M}_{\odot}}\right). \tag{26}\] The matter accretion rate onto halos is calculated using Dekel et al. (2009), \[\frac{\dot{M}_{\rm halo}}{{\rm M}_{\odot}\,{\rm Gyr}^{-1}}=0.47\,\left(\frac{M_{\rm halo}}{10^{12}\,{\rm M}_{\odot}}\right)^{0.15}\,\left(\frac{1+z}{3}\right)^{2.25}\,\frac{M_{\rm halo}}{{\rm M}_{\odot}}. \tag{27}\] In principle, we could measure \(\dot{M}_{\rm halo}\) directly from the VELOCIraptor and TreeFrog catalogues. However, there are cases of mass swapping between merger branches which can lead to sudden large changes in the halo mass. To avoid such big discontinuities in mass, we opt to use the fitting function above. Chandro-Gomez et al.
(in preparation) present a detailed analysis of the frequency of mass-swapping events for different simulations, subhalo finders and tree builders, and the effect they can have on galaxies in Shark. A halo is considered to be capable of forming a hot halo when \(\Gamma_{\rm heat}>\Gamma_{\rm cool}\). Under this condition, the accumulated shock-heated gas at the virial radius gains the necessary pressure through external shock-heating to overcome the energy loss from radiative cooling. In Shark, we assume the relevant density in Eq. (22) to be the gas density of the halo at \(R_{\rm vir}\) where the shocks for the accreting gas are expected to happen, which we approximate as \(n_{\rm H}(R_{\rm vir})=200\,\rho_{\rm crit}/\mu\,m_{\rm p}\), where \(\rho_{\rm crit}\) is the critical density of the universe. To allow flexibility in Shark, we include the parameter \(\Gamma_{\rm thresh}\), so that halos with \(\Gamma_{\rm cool}/\Gamma_{\rm heat}<\Gamma_{\rm thresh}\) are considered to have formed a hot halo. Appendix C shows the halo mass above which most halos comply with \(\Gamma_{\rm cool}/\Gamma_{\rm heat}<\Gamma_{\rm thresh}\). In the default model presented in L18, the heating rate of the AGN in the radio mode was computed from the AGN bolometric luminosity produced by the hot-halo mode accretion rate only (Croton et al., 2006), \[\dot{m}_{\rm heat} = \frac{L_{\rm hh}}{0.5\,V_{\rm vir}^{2}}, \tag{28}\] \[L_{\rm hh} = 0.1\,\dot{m}_{\rm BH,hh}\,c^{2}.
\tag{29}\] In the new AGN feedback model, we compute the jet power (summed over both jets, assuming the jets to be symmetric) following Meier (2002), \[\frac{Q_{\rm mech}}{\rm erg\,s^{-1}}=\begin{cases}2.5\cdot 10^{43}\,\left(\frac{M_{\rm BH}}{10^{7}\,{\rm M}_{\odot}}\right)^{1.1}\,\left(\frac{\dot{m}}{0.01}\right)^{1.2}\,a^{2}\ \mathrm{if}\ \dot{m}\geq\dot{m}_{\rm ADAF},\\ 2\cdot 10^{45}\,\left(\frac{M_{\rm BH}}{10^{7}\,{\rm M}_{\odot}}\right)\,\left(\frac{\dot{m}}{0.01}\right)\,a^{2}\ \mathrm{if}\ \dot{m}<\dot{m}_{\rm ADAF}.\end{cases} \tag{30}\] In our new model, a fraction \(\kappa_{\rm radio}\) of \(Q_{\rm mech}\) is used to offset \(\dot{M}_{\rm cool}\), so that the new cooling luminosity is reduced by \(\kappa_{\rm radio}\,Q_{\rm mech}\). If \(L_{\rm cool}<\kappa_{\rm radio}\,Q_{\rm mech}\) the cooling flow is completely shut off. Otherwise, the cooling rate is defined as \[\dot{M}_{\rm cool}^{\prime}=\dot{M}_{\rm cool}\,\left(1-\frac{\kappa_{\rm radio}\,Q_{\rm mech}}{L_{\rm cool}}\right). \tag{31}\] Here, \(\kappa_{\rm radio}\) is a free parameter controlling how efficient the jet power is in heating the halo gas. Note that in principle, \(\kappa_{\rm radio}\) can be \(>1\) if the jets are efficient in producing buoyant bubbles that lift up gas, as recently suggested by Husko & Lacey (2023). Note that in this model, both BH accretion modes, hot-halo and SB, contribute to the mechanical power of jets, and hence radio-mode feedback can happen regardless of the source of gas accretion. In Shark v2.0, satellite galaxies can continue to accrete gas that is cooling from the hot halo they have retained (as it is gradually stripped; see § 3.4). This means that satellite galaxies can also undergo radio-mode AGN feedback. The way this operates is the same as above.
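The regime selection of Eq. (30) and the cooling suppression of Eq. (31) can be sketched in a few lines; the helper names below are ours, and the ADAF boundary value passed in is only an illustrative default, not a model prescription:

```python
def jet_power_erg_s(m_bh, mdot, spin, mdot_adaf=0.01):
    """Mechanical jet power of both jets, Eq. (30) (Meier 2002).

    m_bh      : BH mass in solar masses
    mdot      : Eddington-normalised accretion rate
    spin      : dimensionless BH spin a
    mdot_adaf : assumed thin-disc/ADAF boundary (illustrative value)
    """
    if mdot >= mdot_adaf:   # thin-disc regime
        return 2.5e43 * (m_bh / 1e7)**1.1 * (mdot / 0.01)**1.2 * spin**2
    else:                   # ADAF regime
        return 2.0e45 * (m_bh / 1e7) * (mdot / 0.01) * spin**2

def suppressed_cooling_rate(mdot_cool, l_cool, q_mech, kappa_radio):
    """Eq. (31): reduce the cooling rate by the coupled jet power; the
    cooling flow shuts off entirely when L_cool < kappa_radio * Q_mech."""
    heating = kappa_radio * q_mech
    if l_cool <= heating:
        return 0.0
    return mdot_cool * (1.0 - heating / l_cool)
```

Both branches of Eq. (30) agree at the boundary only up to their different normalisations, so the regime switch introduces a discontinuity in jet power, as in the original Meier (2002) scalings.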
However, in situations where \(L_{\rm cool}<\kappa_{\rm radio}\,Q_{\rm mech}\) in a satellite galaxy, we assume that the excess power, \(P_{\rm excess}=\kappa_{\rm radio}\,Q_{\rm mech}-L_{\rm cool}\), can be used to heat up the hot gas reservoir of the central subhalo. If this is the case, then the effective heating power affecting the gas cooling onto the central galaxy is larger than \(\kappa_{\rm radio}\,Q_{\rm mech}\) of the central galaxy by \(\sum_{i}P_{{\rm excess},i}\), where the sum runs over the satellite galaxies of the halo.

#### 3.3.3 QSO feedback

Murray et al. (2005) assumed an isothermal sphere profile, and argued that in the optically-thick limit and from the momentum equation of the gas, one can derive a critical luminosity at which the effective gravity is reduced by the momentum deposition of the radiation. This critical luminosity for an isothermal sphere can be written as, \[\frac{L_{\rm crit}}{3\cdot 10^{46}\ {\rm erg\ s^{-1}}}\approx 10\ f_{\rm gas}\left(\frac{\sigma}{200\ {\rm km\ s^{-1}}}\right)^{4}, \tag{33}\] where \(f_{\rm gas}\) is the gas fraction of the bulge and \(\sigma\) its velocity dispersion (which in this case we equate to the stellar velocity dispersion). If the BH's bolometric luminosity is in excess of this critical value, \(L_{\rm bol}>L_{\rm crit}\), the net motion of the gas in the bulge would be outwards (i.e., outflow). If an outflow is produced, and following Murray et al. (2005), it should be capable of sweeping up the ISM gas that is outside the sublimation radius. However, the latter tends to be very small (\(<100\) pc) and hence, we consider the total ISM gas mass in the bulge, \(m_{\rm gas,b}\), to participate in the outflow rate. We take the relevant timescale for the outflow to be the Salpeter time (i.e. the time to double the mass of the BH), defined as \[T_{\rm Salp}=43\,\Gamma^{-1}\ {\rm Myr}, \tag{34}\] where \(\Gamma\equiv L_{\rm bol}/L_{\rm Edd}\). With this timescale, we calculate the outflow rate to be \[\dot{m}_{\rm out,QSO}=\frac{m_{\rm gas,b}}{T_{\rm Salp}}.
\tag{35}\] We consider the Salpeter time to be the relevant timescale here because it assures us that the BH will grow to values that are comparable to local BHs as measured in McConnell & Ma (2013), and because the Salpeter timescale is of a similar magnitude to the duration of SBs. It also ensures that enough stars are produced in the galaxies in Shark to lie in the Faber-Jackson (Murray et al., 2005; Power et al., 2011) and BH-bulge mass relations. Following the arguments presented in Nayakshin et al. (2009), using \(T_{\rm Salp}\) in Eq. (35) can potentially underestimate the impact of QSO feedback in galaxies with small bulges (those with a stellar velocity dispersion \(<150\,{\rm km\ s^{-1}}\)), but it is a fair representation of the timescale at which BHs grow in more massive bulges. Because QSO feedback is expected to be significant in massive galaxies only, we consider this assumption to be reasonable. We estimate the terminal velocity of the outflow following Ishibashi & Fabian (2015) in the limit of radiation pressure on dust grains and assuming that the whole ISM content is contained in an expanding shell: \[v_{\rm out,QSO} \approx 320\ {\rm km\ s^{-1}}\left(\frac{L_{\rm bol}}{10^{7}L_{\odot}}\right)^{1/2}\left(\frac{\kappa_{\rm UV}}{10^{3}\ {\rm cm^{2}\ g^{-1}}}\right)^{1/4} \tag{36}\] \[\cdot\left(\frac{m_{\rm gas,b}}{M_{\odot}}\right)^{-1/4}.\] Here, \(\kappa_{\rm UV}\) is the UV opacity. We use the Ishibashi & Fabian (2015) approximation \(\kappa_{\rm UV}=10^{3}\ f_{\rm dg,MW}\ {\rm cm^{2}\ g^{-1}}\), with \(f_{\rm dg,MW}\) being the dust-to-gas mass ratio relative to the Milky-Way value. Assuming a constant metallicity-dependent dust-to-gas mass ratio, we can then write \(\kappa_{\rm UV}=10^{3}\left(Z_{\rm gas,bulge}/Z_{\odot}\right)\ {\rm cm^{2}\ g^{-1}}\), where \(Z_{\rm gas,bulge}\) is the fraction of mass in metals relative to the total gas mass in the bulge.
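To make the chain from Eqs. (33)-(36) concrete, the sketch below evaluates the critical luminosity, the Salpeter time, the outflow rate and the terminal velocity. The function names are ours, and the solar luminosity and solar metal mass fraction are standard constants assumed purely for the example:

```python
L_SUN = 3.828e33   # erg/s, assumed solar luminosity
Z_SUN = 0.0189     # assumed solar metal mass fraction

def critical_luminosity(f_gas, sigma_kms):
    """Eq. (33): critical luminosity (erg/s) for an isothermal sphere
    (Murray et al. 2005), given the bulge gas fraction and its velocity
    dispersion in km/s."""
    return 3e46 * 10.0 * f_gas * (sigma_kms / 200.0)**4

def qso_outflow(l_bol, l_edd, m_gas_bulge, z_gas_bulge):
    """Outflow rate (Eq. 35, Msun/yr) and terminal velocity (Eq. 36, km/s).

    l_bol, l_edd : bolometric and Eddington luminosities in erg/s
    m_gas_bulge  : bulge ISM gas mass in Msun
    z_gas_bulge  : metal mass fraction of the bulge gas
    """
    gamma = l_bol / l_edd                  # Eddington ratio
    t_salp_yr = 43e6 / gamma               # Eq. (34), Salpeter time in yr
    mdot_out = m_gas_bulge / t_salp_yr     # Eq. (35)
    kappa_uv = 1e3 * (z_gas_bulge / Z_SUN) # UV opacity in cm^2/g
    v_out = (320.0 * (l_bol / (1e7 * L_SUN))**0.5
             * (kappa_uv / 1e3)**0.25 * m_gas_bulge**-0.25)  # Eq. (36)
    return mdot_out, v_out
```

In the model these two quantities then feed Eqs. (37)-(39) to decide how much of the outflowing gas is ejected from the halo altogether.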
Similar to the description of SB-driven outflows in L18, we define an excess energy of the QSO-driven outflow that can be used to eject gas from the halo as, \[E_{\rm excess,QSO}=\epsilon_{\rm QSO}\frac{v_{\rm out,QSO}^{2}}{2}\dot{m}_{\rm out,QSO}. \tag{37}\] Here, \(\epsilon_{\rm QSO}\) is a free parameter in which we enclose variations of geometry and other simplifications of our modelling. We then define the gas ejection rate due to these QSO outflows as: \[\dot{m}_{\rm ejec,QSO}=\frac{E_{\rm excess,QSO}}{0.5\,V_{\rm circ}^{2}}-\dot{m}_{\rm out,QSO} \tag{38}\] where \(V_{\rm circ}\) is the circular velocity of the halo. This can be reduced to \[\dot{m}_{\rm ejec,QSO}=\left(\epsilon_{\rm QSO}\frac{v_{\rm out,QSO}^{2}}{V_{\rm circ}^{2}}-1\right)\ \dot{m}_{\rm out,QSO}. \tag{39}\] To include QSO feedback, we modify the differential equations (Eqs. 49 - 58 in L18) that control the evolution of the stellar (\(M_{\star}\)), cold gas (\(M_{\rm cold}\)), hot halo gas (\(M_{\rm hot}\)) and ejected gas (\(M_{\rm ejec}\)) masses, and their respective metals (\(M_{\star}^{Z}\), \(M_{\rm cold}^{Z}\), \(M_{\rm hot}^{Z}\) and \(M_{\rm ejec}^{Z}\)), as follows: \[\dot{M}_{\star} = (1-R)\psi \tag{40}\] \[\dot{M}_{\rm cold} = \dot{M}_{\rm cool}-(1-R+\beta_{\star}+\beta_{\rm QSO})\psi\] (41) \[\dot{M}_{\rm cold,halo} = -\dot{M}_{\rm cool}\] (42) \[\dot{M}_{\rm hot,halo} = (\dot{m}_{\rm out,\star}+\dot{m}_{\rm out,QSO})\] (43) \[-(\dot{m}_{\rm ejec,\star}+\dot{m}_{\rm ejec,QSO})\] \[\dot{M}_{\rm ejec} = \dot{m}_{\rm ejec,\star}\] (44) \[\dot{M}_{\rm lost} = \dot{m}_{\rm ejec,QSO}\] (45) \[\dot{M}_{\star}^{Z} = (1-R)Z_{\rm cold}\psi\] (46) \[\dot{M}_{\rm cold}^{Z} = \dot{M}_{\rm cool}Z_{\rm cold,halo}\] (47) \[+(p-(1+\beta_{\star}+\beta_{\rm QSO}-R)Z_{\rm cold})\psi\] \[\dot{M}_{\rm cold,halo}^{Z} = -\dot{M}_{\rm cool}Z_{\rm cold,halo}\] (48) \[\dot{M}_{\rm hot,halo}^{Z} = \dot{M}_{\rm hot,halo}Z_{\rm cold}\] (49) \[\dot{M}_{\rm ejec}^{Z} = Z_{\rm cold}\dot{m}_{\rm ejec,\star}\]
(50) \[\dot{M}_{\rm lost}^{Z} = Z_{\rm cold}\dot{m}_{\rm ejec,QSO} \tag{51}\] where \(\beta_{\rm QSO}\equiv\dot{m}_{\rm out,QSO}\psi^{-1}\) is the mass loading due to QSO feedback, \(Z_{\rm cold}\equiv M_{\rm cold}^{Z}M_{\rm cold}^{-1}\) and \(Z_{\rm cold,halo}\equiv M_{\rm cold,halo}^{Z}M_{\rm cold,halo}^{-1}\) are the metallicities of the cold gas in the ISM and the cold gas in the halo (the part actively cooling), respectively, and \(p\) is the metal yield. \(R\), \(\psi\) and \(\beta_{\star}\) were defined when introducing Eqs. (5)-(9). Note that \(\beta_{\rm QSO}\) is only positive in the case of SBs and where \(L_{\rm bol}>\kappa_{\rm QSO}L_{\rm crit}\), and hence during star formation in the disk, there is no QSO feedback. Note that when gas is ejected from the halo due to QSO feedback, we assume this gas is lost and never reincorporated into the halo.

### Ram pressure stripping of the halo and ISM gas

Shark v1.1 included a treatment of instantaneous gas stripping of the hot halo of satellite galaxies. As soon as galaxies became satellites, their hot halo gas was instantaneously removed and transferred to the hot halo gas of the central galaxy of the host halo. Here, we include a treatment of ram pressure stripping (RPS) of both the hot halo and ISM gas of satellite galaxies as described below. For the RPS of the hot halo, we follow Font et al. (2008), who adopted the RPS criterion found by McCarthy et al. (2008) in their hydrodynamical simulations for a spherical distribution of gas.
The halo gas beyond a radius \(r_{\rm sat}\) (measured from the centre of the satellite galaxy) is removed if the ram pressure at that position exceeds the restoring force per unit area the gas feels due to the satellite's gravitational potential, \[\rho_{\rm halo,gas}^{\rm cen}(R)\ v_{\rm sat}^{2}>\alpha_{\rm RPS}\frac{G\ M_{\rm sat}(r_{\rm sat})\ M_{\rm halo,gas}^{\rm sat}}{8\,r_{\rm vir}^{\rm sat}\,r_{\rm sat}^{3}}, \tag{52}\] where \(M_{\rm sat}(r_{\rm sat})\) is the total mass enclosed in \(r_{\rm sat}\), \(\rho_{\rm halo,gas}^{\rm cen}(R)\) is the central galaxy's halo gas density at the position of the satellite galaxy relative to the halo centre, \(R\), and \(v_{\rm sat}\) is the velocity of the satellite in the frame of the host halo. Both \(R\) and \(v_{\rm sat}\) are the subhalo's position and velocity for satellites type 1 (see below for the treatment of satellites type 2). The parameter \(\alpha_{\rm RPS}=2\) in McCarthy et al. (2008). With the aim of allowing for flexibility in the code, the latter is left as a free parameter, which in the default Shark v2.0 is set to \(\alpha_{\rm RPS}=1\). Note that \(M_{\rm halo,gas}^{\rm sat}\) and \(r_{\rm vir}^{\rm sat}\) are the halo gas mass and virial radius the satellite galaxy had right before it became a satellite. The latter assumes that the gas is stripped from the satellite's halo outside-in without affecting the density internal to the stripping radius. We find \(r_{\rm sat}\) by assuming equality in Eq. (52) and strip away all the gas that is at radii \(>r_{\rm sat}\) that has not yet been stripped. For the satellite's ISM, we assume a similar model for RPS, and remove all the gas outside \(r\), with \(r\) being the radius at which \[\rho_{\rm halo,gas}^{\rm cen}(R)\ v_{\rm sat}^{2}=2\pi\ G\Sigma_{\rm gas}(r)\ \left[\Sigma_{\rm gas}(r)+\Sigma_{\star}(r)\right] \tag{53}\] where \(\Sigma_{\rm gas}\) and \(\Sigma_{\star}\) are the ISM's and stellar surface densities at \(r\).
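The stripping radius implied by Eq. (52) has no closed form for a general mass profile, but it is easy to find numerically. The sketch below is our own illustration: it takes a user-supplied enclosed-mass profile, assumes a mutually consistent unit system, and assumes (as holds for typical profiles) that the restoring term decreases monotonically outwards, so the equality can be bracketed and solved by bisection:

```python
def stripping_radius(ram_pressure, m_sat_of_r, m_halo_gas_sat, r_vir_sat,
                     alpha_rps=1.0, G=4.301e-9, r_lo=1e-4, tol=1e-10):
    """Solve Eq. (52) for the radius beyond which the satellite's
    halo gas is ram-pressure stripped.

    m_sat_of_r : callable giving the total satellite mass enclosed in r
    G          : gravitational constant; here Mpc (km/s)^2 / Msun, so all
                 masses are in Msun, radii in Mpc and the ram pressure in
                 the matching units (an assumption of this example)
    Returns r_vir_sat if nothing is stripped, r_lo if everything is.
    """
    def excess(r):  # positive where ram pressure beats the restoring term
        restoring = (alpha_rps * G * m_sat_of_r(r) * m_halo_gas_sat
                     / (8.0 * r_vir_sat * r**3))
        return ram_pressure - restoring

    r_hi = r_vir_sat
    if excess(r_hi) < 0.0:   # gas still bound even at the virial radius
        return r_hi
    if excess(r_lo) > 0.0:   # stripped all the way in
        return r_lo
    while r_hi - r_lo > tol * r_vir_sat:  # bisection on the sign change
        mid = 0.5 * (r_lo + r_hi)
        if excess(mid) > 0.0:
            r_hi = mid
        else:
            r_lo = mid
    return 0.5 * (r_lo + r_hi)
```

For an isothermal sphere, \(M_{\rm sat}(r)\propto r\), the restoring term falls as \(r^{-2}\) and the equality can be checked against the analytic solution, which is a convenient sanity test of the implementation.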
Note that the latter has contributions from both the disk and bulge components. The ISM in the disk and bulge follows an exponential profile, with half-mass radii \(r_{\rm gas,disk}\) and \(r_{\rm gas,bulge}\). The stellar component of the disk also follows an exponential profile, with half-mass radius \(r_{\star,\rm disk}\), while the bulge stars follow a Plummer profile, with half-mass radius \(r_{\star,\rm bulge}\). In Shark v2.0, \(r_{\rm gas,disk}\) and \(r_{\star,\rm disk}\) are calculated self-consistently, following the description in § 3.1, while \(r_{\rm gas,bulge}=r_{\star,\rm bulge}\) and both are calculated as described in § 4.4.12 of L18. The same profiles are used to calculate \(M_{\rm sat}(r_{\rm sat})\) in Eq. (52). For the RPS of the ISM, we make the same assumption as for the RPS of the halo, and assume the gas and stellar profiles internal to \(r\) in Eq. (53) are unchanged under RPS; hence we keep track of the stripped ISM, so that we can incorporate it when computing the gas surface density in Eq. (53). We also assume that the galaxy radii do not change due to RPS. The latter is a sensible assumption if RPS acts on short timescales (shorter than a dynamical relaxation timescale). When a type 1 satellite becomes type 2 (its host subhalo disappears from the subhalo catalogue), we assume that any leftover hot gas is instantaneously transferred to the central subhalo's hot gas, but the ISM continues to experience RPS following the equations above.

### Tidal stripping of gas and stars

In addition to the presence of dynamical friction, which ultimately leads to galaxy mergers, the tidal stripping of these infalling satellite galaxies is also an important environmental process. Tidally stripped stellar material on the outskirts of halos typically does not contribute to observational estimates of the central galaxy stellar mass and instead is associated with an independent component, which we refer to as "intra-halo stellar mass".
This can have an important impact on comparing observations of the SMF with model predictions. Recent work using cosmological hydrodynamical simulations estimates that up to half of the stellar mass bound to a central subhalo can be attributed to stellar halos, rather than the central galaxy (e.g. Canas et al. 2020; Proctor et al. 2023). We include in Shark v2.0 a new physical model to describe the tidal effects felt by satellite galaxies from their central component. This implementation is based on the work of Errani et al. (2015), who use high-resolution \(N\)-body simulations of a Milky-Way mass halo to track the evolution of tidal material that can be associated with dwarf spheroidal galaxies. Although Errani et al. (2015) did their analysis specifically for a Milky-Way halo, the problem is highly self-similar and it mostly depends on the fractional mass-loss rate experienced by an object orbiting in a Navarro et al. (1997) profile. Errani et al. (2015) parameterise the evolutionary tracks of the stripped material following the work of Penarrubia et al. (2008), \[\frac{M_{\star,\rm TS}}{M_{\star,0}}=\frac{2^{\alpha}x^{\beta}}{(1+x)^{\alpha}}, \tag{54}\] where \(x=M_{\rm sh}/M_{\rm h,0}\), \(M_{\rm h,0}\) is the total halo mass the now satellite subhalo had at infall, \(M_{\rm sh}\) is the current subhalo mass, \(M_{\star,\rm TS}\) is the amount of stellar mass that has been tidally stripped from the satellite galaxy that is hosted by the subhalo in question, and \(M_{\star,0}\) is the stellar mass the galaxy had at infall. In principle the halo masses above should not be the total current and infall masses, but the masses enclosed within the half-light radius of the galaxy at the current time or at infall, respectively. However, Errani et al. (2015) found that the half-light radius of the galaxy is barely affected by tidal stripping, and hence one can assume a constant ratio of half-light radius to halo scale radius.
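Eq. (54) is straightforward to evaluate. The sketch below is our own helper; the default track parameters are the "cuspy" Navarro et al. (1997) values adopted in the text, together with the 1% minimum halo-mass-fraction floor. Note that the track evaluates to 1 at \(x=1\), i.e. when the subhalo has lost no mass since infall:

```python
def tidal_track(m_subhalo, m_halo_infall, alpha=3.57, beta=2.06,
                min_halo_mass_fraction=0.01):
    """Evaluate the Penarrubia et al. (2008) track of Eq. (54).

    x = M_sh / M_h,0 is the remaining subhalo mass fraction, floored at
    min_halo_mass_fraction so a halo loses at most 99% of its infall mass.
    """
    x = max(m_subhalo / m_halo_infall, min_halo_mass_fraction)
    return 2.0**alpha * x**beta / (1.0 + x)**alpha
```

Because \(\beta < \alpha\) for the cuspy parameters, the track falls off steeply as the subhalo loses mass, which is why these values give the largest tidal effect on the stellar content.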
Here we adopt the parameters \(\alpha=3.57\) and \(\beta=2.06\), corresponding to a "cuspy" Navarro et al. (1997) profile and a half-light radius to halo scale radius ratio of 0.2. These values correspond to the largest effect tidal stripping can have on the stellar content of a satellite galaxy according to Errani et al. (2015). This allows us to test the maximal effect tidal stripping can have on our galaxy population. We also implement a minimum remaining halo mass fraction of 1% (where the fraction is the ratio between the subhalo mass at the current snapshot and the subhalo mass at infall; this is a parameter in Shark, named minimum_halo_mass_fraction) to ensure that a halo can only lose at most 99% of its mass. This model can be used by setting tidal_stripping = true in the input parameter file. We implement this in a way that the disk, which has the larger radius, is stripped first, followed by the bulge. If the galaxy has a cold gas reservoir, we strip the amount of gas that is outside a radius \(r_{\rm str}\) beyond which a fraction of stellar mass \(M_{\star,\rm TS}/M_{\star,0}\) has been stripped.

### Automatic parameter exploration

We perform an automatic search for a suite of best-fitting parameters by using a Particle Swarm Optimisation (PSO) Python package, optim, introduced in Proctor et al. (in preparation). Here we use the \(z<0.1\) SMF of Li & White (2009) based on SDSS as our primary constraint. In addition to this, optim can also use as constraints the SMF at \(z=0.5\), 1 and 2 (from Weaver et al. 2022), the cosmic star formation rate density (CSFRD) of Driver et al. (2018), the HI mass function of Jones et al. (2018) and the total stellar size-mass relation of the Galaxy and Mass Assembly (GAMA) survey (see Appendix D). Proctor et al. (in preparation) introduce in detail optim and these constraints. Note that optim can be easily used as a standalone tool, and hence be adapted to work with other SAMs. Following Proctor et al.
(in preparation), we only vary the following parameters:

* AGN feedback parameters: \(\kappa\) (Eq. 11), \(\kappa_{\rm radio}\) (Eq. 32), and \(\Gamma_{\rm thresh}\) (§ 3.3.2). These parameters control the efficiency of gas accretion onto the BH in the hot-halo mode, the efficiency of coupling between the mechanical power of the AGN and the hot halo of the galaxy, and the cooling-to-heating specific energy ratio threshold above which hot halos form, respectively.
* Star formation parameters: \(\nu_{\rm SF}\) (Eq. (7) in L18), which controls the conversion efficiency between the surface density of molecular gas and SFR.
* Stellar feedback parameters: \(\beta\) (Eqs. (25)-(28) in L18) and \(\beta_{\rm min}\) (Appendix A). These parameters control the dependence of the outflow mass loading on the circular velocity of the galaxy, and the minimum value the mass loading can take, respectively.
* Disk instability parameters: \(\epsilon_{\rm disc}\) (Eq. (35) in L18), which controls the threshold of the stability parameter, below which disks are considered globally unstable to collapse.
* Gas reincorporation parameters: \(\tau_{\rm reinc}\), \(\gamma\) and \(M_{\rm norm}\) (Eq. (30) in L18), which control how quickly gas that has been removed from halos

| Parameter | Suggested value range (adopted) | Equation/Section |
| --- | --- | --- |
| **AGN feedback and BH growth** | | |
| model | bower06, croton15, lagos22 (lagos22) | § 3.3.3 |
| spin_model | constant, volonteri07, griffin19 (griffin19) | § 3.3.1 |
| accretion_disk_model | prolonged, selfgravitydisk, warpeddisk (warpeddisk) | Appendix B (only for spin_model=griffin19) |
| qso_feedback | true or false (true) | § 3.3.3 |
| \(\kappa\) | \(10^{-5}-10^{2}\) (10.31) | Eq. (11) |
| \(f_{\rm minb}\) | \(10^{-5}-10^{-2}\) (\(3\times 10^{-3}\)) | Eq. (12) |
| \(e_{\rm sb}\) | 0.5-50 (15) | § 3.3.1 |
| \(\dot{m}_{\rm ADAF}\) | 0.01 | § 3.3.1 |
| \(\delta_{\rm ADAF}\) | 0.1-0.5 (0.2) | Eq. (20) |
| \(\alpha_{\rm ADAF}\) | 0.05-0.5 (0.1) | Eq. (20) |
| \(\eta\) | 1-10 (4) | § 3.3.1 |
| \(\sigma_{\rm ID}\) | 0.05-0.5 (0.1) | Appendix B |
| \(\kappa_{\rm radio}\) | \(10^{-3}-3\) (0.023) | Eq. (32) |
| \(\Gamma_{\rm thresh}\) | 0.01-100 (10) | § 3.3.2 |
| \(\epsilon_{\rm QSO}\) | 0.1-100 (10) | Eq. (37) |
| **Tidal and ram-pressure stripping** | | |
| tidal_stripping | true or false (true) | § 3.5 |
| gradual_stripping_ism | true or false (true) | § 3.4 |
| gradual_stripping_halo | true or false (true) | § 3.4 |
| \(\alpha_{\rm RPS}\) | 0.1-10 (1) | Eq. (52) |
| minimum_halo_mass_fraction | \(10^{-5}-10^{-1}\) (\(10^{-2}\)) | § 3.5 |
| **Dynamical friction** | | |
| merger_timescale_model | lacey93 or poulton20 (poulton20) | § 3.2 |
| **Chemical enrichment** | | |
| evolving_yield | true or false (true) | Appendix A1 |
| **Star formation** | | |
| \(\nu_{\rm SF}\) | 0.2-1.7 Gyr\(^{-1}\) (1.49 Gyr\(^{-1}\)) | Eq. (7) in L18 |
| angular_momentum_transfer | true or false (true) | § 3.1 |
| **Stellar feedback** | | |
| \(\beta\) | 0.5-5 (3.79) | Eqs. (25)-(28) in L18 |
| \(\beta_{\rm min}\) | 0.01-1 (0.104) | Appendix A2 |
| \(v_{\rm hot}\) | 50-500 km s\(^{-1}\) (120 km s\(^{-1}\)) | Eqs. (25)-(28) in L18 |
| **Reincorporation** | | |
| \(\tau_{\rm reinc}\) | 1-30 Gyr (21.53 Gyr) | Eq. (30) in L18 |
| \(\gamma\) | \(-3\) to 0 (\(-2.339\)) | Eq. (30) in L18 |
| \(M_{\rm norm}\) | \(10^{9}-10^{12}\) M\(_{\odot}\) (\(1.383\times 10^{11}\) M\(_{\odot}\)) | Eq. (30) in L18 |

Table 2: List of new models and parameters included in Shark v2.0.
The parameters presented in purple are those that are varied in optim when searching for a best fit to the \(z<0.1\) SMF. We present a suggested range for the parameters and in parentheses the value adopted in our best-fitting model. The relevant equation/section for each model or parameter is presented in the third column. Parameters that were introduced in L18 refer to the equations in that paper. due to stellar feedback can be reincorporated into the halo. Once gas is reincorporated, it is available for cooling.

Other parameters are left fixed, following the values in Table 2. The ranges in which we vary the parameters above and the best-fitting values are presented in Table 2. Even though we only use the \(z=0\) SMF as a numerical constraint for optim, we visually inspected other results of the model to ensure we were not seriously compromising their agreement with observations by using the \(z=0\) SMF only. For example, we inspected gas scaling relations, the mass-metallicity relation and the \(z<2\) CSFRD and made sure they look sensible. Some of these results are presented in the supplementary material. Note that the above does not mean that we change the parameters from the best-fitting ones, but that we used the visual inspection of other results to draw reasonable prior ranges in some of the parameters that we input to optim. An example of that is that we limited the stability parameter (see Table 2) to a maximum of 1.2. Higher values lead to unphysically large numbers of bulge-dominated galaxies.

### Other technical updates

Many small bug fixes and improvements have been made in the Shark codebase since version 1.1.0 was released. Below, we highlight the most important ones:

* The memory footprint of Shark has been considerably reduced by reorganising some of its internal structures, with improvements of about 20%, depending on the input data.
Shark is usually memory bound, and thus this reduction allows more Shark instances to run in the same set of resources.
* Executions are now fully reproducible. By default a random seed is used, but one can be given. The seed used by each run is logged and stored in the output files.
* Performance has been improved on two fronts: first, we now perform better load balancing when running in multi-threaded mode, reducing walltime; secondly, an overall reduction of small, temporary memory allocations allows Shark to run in a more streamlined fashion.
* Improved infrastructure: dropped the requirement for the HDF5 C++ libraries, moved our automatic per-commit checks from Travis CI to GitHub Actions, added support for more systems and compilers (e.g. MSVC on Windows), and more runtime information is written (timing, memory usage, etc.).

## 4 Characteristics of the galaxy population: abundances and scaling relations

In this section we discuss fundamental observed properties of galaxies, comparing the results of the new Shark v2.0 model using the best-fitting parameters of Table 2 with the default model presented in L18.

### Galaxy properties: abundances

#### 4.1.1 The stellar mass function

Fig. 1 shows the SMF from \(z=0\) to \(z=7\), compared with observations. For \(z=0\) we show the observational estimates of Li & White (2009), which were used for the parameter tuning, and those of Bernardi et al. (2013); Driver et al. (2022) to show the large systematic uncertainties that permeate the SMF, especially at the high-mass end. By construction, Shark v2.0 provides a good fit to the observations of Li & White (2009), better than was achieved in L18. This is not only due to the improved automatic parametrisation of the model, but also to the improved physics. Proctor et al. (in preparation) show that when using Shark v1.1 in tandem with the PSO optim package, we fail to provide a fit as good as the one found here for Shark v2.0.
This boils down to the inability of the default AGN feedback model adopted in L18 to produce a steep-enough high-mass end of the SMF. At \(z\geq 0.5\), Shark v2.0 tends to predict a steeper low-mass end of the SMF compared to L18, due to the adopted parameters leading to a weaker dependence of the mass loading on the circular velocity of the galaxy (see Table 2). At the high-mass end, Shark v2.0 produces a steeper break of the SMF than L18, and lower-mass galaxies at fixed number density. At \(z\geq 2\), the high-mass end in Shark v2.0 produces number densities of massive galaxies, \(>10^{11}\) M\({}_{\odot}\), that are too low, but this is easily alleviated if we assume stellar masses have uncertainties which are Gaussian-distributed with a width of \(\approx 0.3\) dex (see dot-dashed red lines). Uncertainties of that magnitude are quite common in deriving stellar masses due to the required assumptions (e.g. SFH, metallicity history, initial mass function, among others) - see Marchesini et al. (2009) for a thorough list of uncertainties and Robotham et al. (2020) for a recent example of the uncertainties associated with \(z\approx 0\) stellar masses. At \(z\geq 5\), the SMF starts to behave more like a power law than a Schechter function, lacking a clear stellar mass break. This is accentuated when we include random errors in the Shark stellar masses. Observations appear to behave similarly, with a clear stellar mass break present only up to \(z\approx 3-4\). Note also that the low-mass end of the SMF becomes increasingly steeper with increasing redshift in Shark (both v2.0 and L18). The observations appear to follow a similar trend. Better observational constraints at the low-mass end, however, are needed to confirm this trend.

#### 4.1.2 The baryon mass function of satellite and central galaxies

Fig. 2 shows the baryon mass function in bins of halo mass, separating centrals and satellite galaxies at \(z=0\).
To allow for a fair comparison with the observations of Eckert et al. (2016), we define the baryon mass as the sum of the stellar mass and the HI mass times 1.4. The latter factor was introduced by Eckert et al. (2016) to account for the helium contribution. Note that Eckert et al. (2016) ignored the contribution of molecular and ionised gas to the baryon mass. This is a reasonable assumption at \(z=0\) given the small molecular gas mass fractions in gas-rich galaxies (which tend to be low-mass galaxies) and the overall low gas fractions in the regime where molecular and atomic gas make a comparable contribution to the gas content of galaxies (i.e. massive galaxies) (Catinella et al., 2018). The observations of Eckert et al. (2016) were presented for two complementary surveys, RESOLVE and ECO; the former is more sensitive but of smaller area than the latter. We show the results from these two surveys using different symbols. In low-mass groups, \(11<\log_{10}(M_{\rm halo}/\mbox{M}_{\odot})<11.4\) and \(11.4<\log_{10}(M_{\rm halo}/\mbox{M}_{\odot})<12\), we see that Shark v2.0 is able to produce a much higher number density (by an order of magnitude) of satellite galaxies across the dynamic range probed by the observations compared to L18. This is in large part driven by the inclusion of the new dynamical friction timescale of Poulton et al. (2021), which leads to much longer dynamical friction timescales in low-mass galaxies than what is obtained by using Lacey & Cole (1993), allowing them to survive for longer and exist in greater numbers. In addition, Shark v2.0 recovers a baryon mass function for central galaxies that matches the high-mass end of the observations much better than L18. This has to do with the modified parameters of stellar feedback, which make it overall less efficient in the new model.
The overall higher number density of satellite galaxies in Shark v2.0 compared to L18 is still present in halos of masses \(12<\log_{10}(M_{\rm halo}/{\rm M}_{\odot})<13.5\), but the difference is much smaller than that seen at lower halo masses. This is because the Poulton et al. (2021) dynamical friction timescales get closer to those of Lacey & Cole (1993) for more massive satellites. In the highest halo mass bin, \(\log_{10}(M_{\rm halo}/{\rm M}_{\odot})>13.5\), satellite galaxies in Shark v2.0 display a sharper high-end cut-off than L18, which agrees better with observations. For central galaxies, the main difference is the slightly higher baryon masses produced by Shark v2.0 in the \(12<\log_{10}(M_{\rm halo}/{\rm M}_{\odot})<13.5\) halo mass bins. Observational errors are quite large in this regime, so within the uncertainties, both Shark v2.0 and L18 agree with observations reasonably well. Note that here we have not included any potential confusion between the tagging of central and satellite galaxies, which tends to be rather large, especially in low-mass halos (see discussion in Bravo et al., 2020).

Figure 1: Galaxy SMF in Shark v2.0 (solid red line) and the default model in L18 (dashed black line), from \(z=0\) to \(z=8\), as labelled. We show observations from Li & White (2009); Bernardi et al. (2013); Driver et al. (2022) at \(z=0\), Muzzin et al. (2013); Thorne et al. (2021); Weaver et al. (2022) at \(0.5\leq z\leq 4\), and from Weaver et al. (2022) only at \(z\geq 5\), as labelled. Li & White (2009), shown in green in the top-left panel, is the constraint used in optim. For Shark v2.0, we also show the SMF using the stellar mass in galaxies enclosed in an aperture of 30 kpc (red dotted line, shown only at \(z=0\)), and when we apply a random error in the stellar mass of 0.3 dex at \(z\geq 1\) (red dot-dashed line).

### Star formation rate and stellar mass cosmic density evolution

Fig.
3 shows the cosmic SFR and stellar mass density evolution from \(z=0\) to \(z=15\). Observational constraints from Madau & Dickinson (2014); Driver et al. (2018); D'Silva et al. (2023); Adams et al. (2023); Weaver et al. (2022); Santini et al. (2023) are shown. For the CSFRD, Shark v2.0 predicts higher densities at \(z\gtrsim 3\) than L18, driven by both the contribution from SBs and star formation in disks being higher in the new model than in L18. For the CSFRD in disks, we see that at \(z\gtrsim 8\), Shark v2.0 and L18 predict very similar levels, but the differences remain for the CSFRD in SBs from \(z\simeq 3.5\) even up to \(z=15\). The overall qualitative trend of SBs dominating the CSFRD at \(z\gtrsim 3\) that was presented in L18 is maintained in Shark v2.0. The new model compares more favourably with observations in general, giving a higher CSFRD at \(z\gtrsim 4\), closer to current JWST estimates beyond \(z=8\) within the uncertainties. At \(z\lesssim 4\), Shark v2.0 and v1.1 have similar CSFRDs. The overall higher CSFRD at high redshift in Shark v2.0 compared to L18 naturally leads to a cosmic stellar mass density (CSMD) in the early universe that is higher in Shark v2.0 than in L18. It is interesting that the SB contribution alone to the CSFRD and CSMD in Shark v2.0 is similar to or even higher than the total in L18 at \(z\gtrsim 7\). This shows that these predictions are still subject to large variations and that better constraints from the high-redshift universe are needed to narrow down the parameter space that is plausible in the early universe.

### Galaxy scaling relations

#### 4.3.1 The stellar-halo mass relation

The top panel of Fig. 4 shows the stellar-halo mass relation for central galaxies at \(z=0\) for Shark v2.0 and L18.
At the low-halo-mass end, \(\log_{10}(\mbox{M}_{\rm halo}/\mbox{M}_{\odot})<12\), Shark v2.0 produces slightly higher stellar masses at fixed halo mass than L18, in better agreement with empirical constraints from abundance matching. This is the direct result of the stellar feedback parameters adopted in Shark v2.0 compared to L18 (see Table 2). At \(12<\log_{10}(\mbox{M}_{\rm halo}/\mbox{M}_{\odot})<14\), Shark v2.0 produces smaller stellar masses at fixed halo mass than L18, while at higher halo masses both models converge to similar values. The difference seen is the result of the new AGN feedback model leading to a more efficient quenching of central galaxies in massive halos in Shark v2.0 (discussed in detail in § 5). Note that this is not just due to the choice of parameters, but to the capability of the new AGN feedback model to more efficiently quench massive galaxies overall. Proctor et al. (in preparation) show that, for the AGN feedback, the implementation introduced in L18 was not capable of getting a very sharp SMF break at the high-mass end, while the new model allows for that.

Figure 2: Galaxy baryon mass function in Shark v2.0 (red lines) and the default model in L18 (thin black lines), as labelled, at \(z=0\) for galaxies in 4 bins of host halo mass, as labelled in each panel. We separate the contribution of central (solid lines) and satellite (dashed lines) galaxies in each panel. Observations from the galaxy surveys RESOLVE and ECO from Eckert et al. (2016) are shown as symbols for central galaxies (blue symbols) and satellites (yellow symbols). The baryon mass here is defined in the same way as in Eckert et al. (2016), \(\mbox{M}_{\rm bar}=\mbox{M}_{\star}+1.4\,\mbox{M}_{\rm HI}\), with \(M_{\rm HI}\) being the atomic hydrogen mass of the galaxy.

Both Shark v2.0 and L18 produce a stellar-halo mass relation at \(\log_{10}(\mathrm{M_{halo}}/\mathrm{M_{\odot}})>12\) that is steeper than the empirical relations of Moster et al. (2013); Behroozi et al.
(2013). However, compared to the more direct estimates of Kravtsov et al. (2018), Shark compares favourably, especially in the galaxy cluster regime. Kravtsov et al. (2018) argued that the difference with the empirical relations of Moster et al. (2013); Behroozi et al. (2013) is due to those studies using SMFs that are severely surface-brightness-limited, while the Kravtsov et al. (2018) data reach lower surface brightness values, impacting the recovered stellar mass. We also show the observational measurements of Taylor et al. (2020), combining weak lensing with stellar mass measurements from the GAMA survey. These again are significantly different from the empirical relations of Moster et al. (2013); Behroozi et al. (2013), showing that there are still many systematic uncertainties in the stellar-halo mass relation which are usually not captured in the presented errorbars. Individual local Universe galaxies (primarily of late-type morphology) compiled by Romeo et al. (2020) are also shown, depicting the significant galaxy-to-galaxy scatter of the relation. The bottom panel of Fig. 4 shows the stellar-halo mass relation of central galaxies in Shark v2.0 and v1.1 for disk- and bulge-dominated galaxies, separately. We define these populations using a disk-to-total stellar mass threshold of 0.5. In both Shark versions, and at fixed halo mass, bulge-dominated galaxies have higher stellar masses than disk-dominated ones at \(M_{\rm halo}<10^{12.5}\,{\rm M}_{\odot}\). This is due to bulge-dominated galaxies forming earlier than disk-dominated galaxies and having had more starbursts during their lifetimes.

Figure 4: _Top panel:_ The stellar-halo mass relation for central galaxies in Shark v2.0 (solid black line) and L18 (dashed black line) at \(z=0\). Lines show the median relation for bins with \(\geq 10\) galaxies. For Shark v2.0 we also show the \(16^{\rm th}-84^{\rm th}\) percentile ranges as the shaded region. For reference, we show the empirical constraints from Moster et al. (2013); Behroozi et al. (2013), and the more direct observational constraints from Kravtsov et al. (2018); Taylor et al. (2020); Romeo et al. (2020), as labelled. _Bottom panel:_ As in the top panel but splitting galaxies between bulge- (red) and disk-dominated (blue) galaxies, using a disk-to-total stellar mass ratio (D/T) of 0.5. Galaxies above (below) the threshold are considered disk- (bulge-)dominated. Observational constraints from Kravtsov et al. (2018) and Correa & Schaye (2020) for late- and early-type galaxies, LTGs and ETGs, respectively, are also shown, as labelled.

Figure 3: Cosmic SFR (top panel) and stellar mass (bottom panel) density as a function of redshift. We show the total contribution of all galaxies in black, galaxy disks in blue and galaxy bulges in red, for the Shark v2.0 default model (solid lines) and the default model in L18 (dashed lines). In the top panel we show the observations from Driver et al. (2018); D'Silva et al. (2023) and the compilation of Adams et al. (2023), as labelled. The latter includes results from Oesch et al. (2018); Harikane et al. (2023); Bouwens et al. (2023a,b); Donnan et al. (2023); Pérez-González et al. (2023) and their own measurements, most of them using the JWST. In the bottom panel we show observational constraints from Madau & Dickinson (2014); Driver et al. (2018); Weaver et al. (2022); Santini et al. (2023). Data from Madau & Dickinson (2014) have been re-scaled to a Chabrier IMF.

At \(M_{\rm halo}>10^{12.5}\,{\rm M}_{\odot}\), the trend reverses in Shark v2.0 but not in v1.1, with disk-dominated galaxies in v2.0 being more massive at fixed halo mass than bulge-dominated galaxies. This happens because AGN feedback in v2.0 is not efficient enough in the disk-dominated galaxies of these high halo masses, so they form stars very efficiently and end up more massive than the bulge-dominated ones.
In Shark v1.1, the efficiency of AGN feedback is more directly tied to the halo mass than in the new model, leading to the differences seen at \(M_{\rm halo}>10^{12.5}\,{\rm M}_{\odot}\). Interestingly, Correa & Schaye (2020) reported that in the EAGLE cosmological hydrodynamical simulations, disk-dominated galaxies have _higher_ stellar masses than bulge-dominated galaxies at all halo masses. We find that the difference with the trend in Shark v2.0 is driven by disk instabilities in the model. If we completely switch off disk instabilities, we find that the resulting disk-dominated galaxies have on average higher stellar masses than bulge-dominated ones at fixed halo mass (not shown here), a trend that qualitatively agrees with EAGLE. This shows that the exact treatment of disks during these episodes of global instabilities has an important impact on the resulting stellar-halo mass relation. This agrees with the conclusions of Romeo et al. (2020), who argue that disk gravitational instabilities regulate the stellar-to-halo mass relation in galaxies in the halo mass range of their sample (see orange stars in Fig. 4). Observations still have uncertainties that are too large to be able to confidently verify the conflicting predictions of the stellar-halo mass relation by galaxy morphology. In fact, Correa & Schaye (2020) show, using data from the Sloan Digital Sky Survey (SDSS), that depending on how stellar masses and morphologies are measured, one can get different offsets between bulge- and disk-dominated galaxies in the stellar-halo mass relation. Future surveys, such as the 4MOST WAVES survey (Driver et al., 2019), will provide exquisite measurements of halo masses down to \(\log_{10}({\rm M}_{\rm halo}/{\rm M}_{\odot})\approx 12\) in the local Universe, which will allow a robust diagnosis of the predictions discussed here.
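Throughout this section, model relations are summarised as medians in mass bins containing at least 10 galaxies, with 16th-84th percentile ranges shown as shaded regions. A minimal NumPy sketch of that summary statistic (the function and array names are ours, purely illustrative of the convention, not Shark code):

```python
import numpy as np

def binned_median(x, y, edges, min_count=10):
    """Median and 16th-84th percentile range of y in bins of x.

    Bins with fewer than min_count objects are left as NaN, mirroring
    the ">= 10 galaxies per bin" convention used for the figures.
    """
    nbins = len(edges) - 1
    med = np.full(nbins, np.nan)
    lo, hi = med.copy(), med.copy()
    idx = np.digitize(x, edges) - 1  # bin index of each galaxy
    for i in range(nbins):
        vals = y[idx == i]
        if vals.size >= min_count:
            med[i] = np.median(vals)
            lo[i], hi[i] = np.percentile(vals, [16, 84])
    return med, lo, hi
```

Applied, for example, to \(\log_{10}M_{\rm halo}\) vs \(\log_{10}M_{\star}\), this returns the line and shaded band plotted in Fig. 4.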
#### 4.3.2 Structural properties of galaxies at \(z=0\)

One of the major modifications in Shark v2.0 is the more sophisticated modelling of galaxy angular momentum, and the exchange of angular momentum between the ISM and stars (§ 3.1). Hence, it is natural to explore what the effect of that is on structural scaling relations of galaxies. Here, we focus on the size-mass and the specific angular momentum-mass relations at \(z=0\). We start with the size-mass relation presented in Fig. 5. The latter is shown for the disk and bulge components of galaxies at \(z=0\) for Shark v2.0 and L18, compared with observations from Lange et al. (2016); Robotham et al. (2022); Bellstedt et al. (2023), in the top two panels. The bottom panel shows the half-stellar mass radius, \(r_{\star}\), vs galaxy stellar mass, compared with the stellar-mass-weighted half-mass radius vs stellar mass of Robotham et al. (2022); Bellstedt et al. (2023). Note that Lange et al. (2016); Robotham et al. (2022); Bellstedt et al. (2023) use the GAMA survey. In Appendix D we explain why these measurements are different and the process of measuring the size-mass relation from the Bellstedt et al. (2023) catalogue. Note that for both Shark and the observations we use the same Eq. (D1) to calculate \(r_{\star}\). The disk sizes in Shark v2.0 are smaller than in L18, especially at the low-mass end. This is the regime where we expect our new angular momentum treatment to make a difference. As these galaxies have molecular gas that is more concentrated than the atomic gas, the new model would produce new stars that form at overall smaller radii than the old model, which assumed all disk components to have the same angular momentum distribution.

Figure 5: Size-stellar mass relation for the disk (top panel) and bulge (middle panel) components of galaxies, and for the entire galaxy in the bottom panel, at \(z=0\), for Shark v2.0 (solid lines) and L18 (dashed lines). In the top two panels, we plot disk (bulge) half-stellar mass radii vs. disk (bulge) stellar mass, as labelled, while the bottom panel shows the galaxy half-stellar mass radius vs total stellar mass. We show bins with \(\geq 10\) objects. Lines with the shaded regions show the medians and \(16^{\rm th}-84^{\rm th}\) percentile ranges, respectively. Dotted lines in the top two panels show the \(50^{\rm th}\), \(68^{\rm th}\) and \(90^{\rm th}\) percentile regions of the GAMA observations of Lange et al. (2016), while the symbols show the more recent GAMA results of Robotham et al. (2022); Bellstedt et al. (2023) (see Appendix D for details). For the latter we show the medians as symbols, and the \(16^{\rm th}-84^{\rm th}\) percentile ranges as errorbars. The bottom panel shows the mass-weighted stellar size-mass relation of Robotham et al. (2022); Bellstedt et al. (2023) (see Appendix D for how this was computed).

The lower disk sizes in Shark v2.0 agree better with the observations than v1.1 (L18). The bulge size-mass relation (middle panel in Fig. 5) in Shark v2.0 is steeper than in L18, with a plateau around a bulge mass of \(10^{10}\) M\({}_{\odot}\), which is roughly when galaxies transition from being disk- to bulge-dominated. The steeper size-mass relation agrees better with observations than the prediction of L18, especially in the regime of massive spheroids. Note that there is a clear systematic difference between the bulge size-mass relations in the observations shown. This is because defining a bulge and its size is more difficult than for disks in observations (see discussion in Robotham et al., 2022). The galaxy size-mass relation in the bottom panel of Fig. 5 shows that Shark v2.0 has a clear transition at a stellar mass of \(10^{9.7}-10^{10}\) M\({}_{\odot}\). At lower stellar masses, the disk size dominates \(r_{\star}\), while at higher masses it is the bulge size that determines \(r_{\star}\).
The transition zone was not clearly present in L18, in which the size-mass relation was better described by a single power law. Compared with observations, Shark v2.0 is in better agreement, although the transition region in observations starts at lower masses. Note, however, that this is not part of the diagnostics used to tune the parameters of the model, so the improvement in the agreement with observations is a success of the new angular momentum model. Fig. 6 shows the stellar specific angular momentum (sAM) vs stellar mass, atomic gas and all ISM sAM vs gas mass, baryon sAM vs baryon mass, and the sAM of stars, atomic and molecular gas as a function of stellar mass. For the first three cases, we show the relation for galaxies of varying disk-to-total stellar mass ratio, D/T, as labelled in the third panel, while for the fourth panel we show the sAM-stellar mass relations for galaxies with D/T \(>0.5\). We compare with a large compilation of observations, as labelled. The stellar sAM-mass relation (left panel of Fig. 6) shows a strong dependence of the zero-point of the relation on D/T, so that galaxies with lower D/T have a lower sAM at fixed stellar mass. This is the well-known morphological dependence of the sAM-mass relation, first discussed in Fall (1983). Overall, Shark v2.0 agrees very well with the observations, and even the scatter for disk-dominated galaxies (D/T \(>0.5\)) is similar to the scatter reported in observations, which for the most part contain disk galaxies only. The baryon sAM-mass relation (middle-right panel of Fig. 6) shows a similar behaviour, but the scatter increases with increasing baryon mass in a way that resembles what observations report. The difference in baryon sAM between galaxies with D/T \(>0.75\) and D/T \(>0\) is larger than the differences obtained in stellar sAM. The atomic (or total ISM) sAM-mass relation (middle-left panel of Fig. 6) is the tightest of all, in agreement with what has been reported in observations.
However, we see that Shark v2.0 produces a slightly too shallow relation. This slope directly depends on what we assume for the halo spin parameter. L18 assumed a halo spin distribution with a mode that was independent of halo mass. In Shark v2.0, we instead assume a very weak dependence of the mode of the spin distribution on halo mass, as reported in Kim et al. (2015). A slightly steeper relation would help the model reproduce the observed slope; however, we decide not to change the spin-halo mass dependence arbitrarily. Ideally, we would want to use the spin parameter directly inferred from the \(N\)-body simulation. However, such measurements are very noisy and only reliable for halos resolved with several hundred particles, which prohibits their use in SAMs, in which we use halos sampled with a number of particles as low as 20.

Figure 6: sAM of the stars as a function of stellar mass (left panel), sAM of the total ISM and atomic gas as a function of total ISM mass (middle-left panel), sAM of all baryons in the galaxy vs baryon mass (middle-right panel), and sAM of atomic gas, molecular gas and stars vs stellar mass (right panel), for galaxies in Shark v2.0 at \(z=0\). In the first three panels we show the median relations for galaxies with varying disk-to-total stellar mass ratios, as labelled in the third panel. The shaded region shows the \(16^{\rm th}-84^{\rm th}\) percentile ranges for galaxies with D/T \(>0.5\) only. The fourth panel shows the median sAM-stellar mass relations for galaxies with D/T \(>0.25\). In this panel we also show the median relations of Shark v1.1 (L18) for \(j_{\star}\) and \(j_{\rm HI}\) (in this model \(j_{\rm HI}=j_{\rm H_{2}}\)), as dotted lines. We show observations from Obreschkow & Glazebrook (2014); Chowdhury & Chengalur (2017); Posti et al. (2018); Murugeshan et al. (2019); Mancera Piña et al. (2021); Di Teodoro et al. (2021); Hardwick et al. (2022a,b), as labelled. We highlight the measurements of Obreschkow & Glazebrook (2014) (OG14) in all panels using coloured symbols - this is the only sample where measurements of the sAM of stars, atomic and molecular gas are presented for the same galaxies (shown with different colours in the right-hand panel, as labelled).

The right panel of Fig. 6 illustrates the power of the new angular momentum exchange model (§ 3.1) by showing how the difference between the sAM of the atomic and molecular gas increases slightly as the stellar mass decreases. We also show observations from Obreschkow & Glazebrook (2014), which include \(j_{\star}\), \(j_{\rm HI}\), and \(j_{\rm H_{2}}\) for the same sample of galaxies (THINGS, The HI Nearby Galaxy Survey). The difference between \(j_{\star}\) and \(j_{\rm HI}\) in THINGS is \(\approx 0.3\) dex, while in Shark v2.0 it is closer to 0.5 dex. In Shark v2.0, \(j_{\rm H_{2}}\) is \(\approx 0.15\) dex higher than \(j_{\star}\) at fixed stellar mass, while in THINGS this difference is \(\approx 0.05\) dex. Nevertheless, within the scatter of the observations, our predictions agree well. We also show the relations for Shark v1.1 (L18) in dotted lines, and find clear disagreements with the observations, most evident for \(j_{\rm HI}\), which in Shark v1.1 is \(\approx 0.2-0.6\) dex lower than in Obreschkow & Glazebrook (2014). Note that \(j_{\star}\) and \(j_{\rm HI}\) in Shark v1.1 differ only by \(0-0.15\) dex, while observations consistently prefer \(\approx 0.3\) dex across the whole stellar mass range probed. By definition, \(j_{\rm H_{2}}=j_{\rm HI}\) in Shark v1.1, while in Shark v2.0 \(j_{\rm HI}\) is higher than \(j_{\rm H_{2}}\) by \(0.2-0.5\) dex (with the difference increasing at lower stellar masses), which is similar to the differences seen in THINGS.
The improvements we see in the sAM-mass scaling relations are a direct result of the new angular momentum exchange model (§ 3.1). Overall, we see that Shark v2.0 with the default parameters adopted (Table 2) produces structural scaling relations that are in broad agreement with observations in the local Universe. We can attribute that in large part to the new angular momentum exchange model. In the future we also plan to explore other structural scaling relations, such as the size-mass relation across cosmic time, the Tully-Fisher relation, and the relation between size, mass and galaxy age reported in Robotham et al. (2022).

#### 4.3.3 The BH-stellar mass relation and morphological dependence

Fig. 7 shows the BH-stellar mass relation at \(z=0\) for Shark v2.0 and v1.1 (L18), split by galaxy morphology. We do the latter classification based on D/T, with D/T \(<0.5\) corresponding to early-type galaxies (ETGs) and D/T \(\geq 0.5\) to late-type galaxies (LTGs). Both Shark v2.0 and v1.1 predict LTGs to have systematically lower BH masses than ETGs at fixed stellar mass. The offset in the zero-point of the BH-stellar mass relation between the LTG and ETG populations is larger in Shark v2.0 (\(\approx 0.5-1.5\) dex) than in v1.1 (\(\approx 0.3-0.6\) dex), especially clear at \(M_{\star}\lesssim 10^{10.5}\) M\({}_{\odot}\). Another interesting difference is that Shark v2.0 has LTGs as massive as \(10^{12.2}\) M\({}_{\odot}\), while in v1.1 they at most reach stellar masses of \(\approx 10^{11.5}\) M\({}_{\odot}\). Observations show that the typical BH mass difference between LTGs and ETGs is \(\approx 1\) dex at fixed stellar mass. Shark v2.0 predicts a difference of \(\approx 0.8-1\) dex, in agreement with the observations, while v1.1 prefers smaller differences of \(\approx 0.3-0.6\) dex.
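The morphological classification used above is a simple threshold on the disk-to-total stellar mass ratio. A minimal sketch of the split (the array values are made up for illustration; only the D/T \(\geq 0.5\) threshold comes from the text):

```python
import numpy as np

# Hypothetical disk and bulge stellar masses [Msun] for three galaxies.
mstar_disk = np.array([5e10, 1e9, 3e10])
mstar_bulge = np.array([1e10, 4e9, 3e10])

dt = mstar_disk / (mstar_disk + mstar_bulge)  # disk-to-total ratio, D/T
is_ltg = dt >= 0.5   # late-type: disk-dominated
is_etg = ~is_ltg     # early-type: bulge-dominated
```

The same boolean masks can then be used to compute, e.g., the median BH mass of each population in bins of stellar mass.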
Both Shark versions predict the BH-stellar mass relation of LTGs to have a larger scatter than that of ETGs; however, this difference in scatter between morphological types is larger in Shark v2.0 than in v1.1. Observations suggest a qualitatively similar difference, with the scatter of the BH-stellar mass relation of LTGs or star-forming galaxies being larger than for ETGs or passive galaxies. We also find that overall the BH-stellar mass relation becomes tighter for both galaxy populations with increasing stellar mass. More observations are needed to establish whether that is also the case in the real Universe.

## 5 Quenching of massive galaxies

One of the major developments in Shark v2.0 compared with L18 is the new AGN feedback model (§ 3.3). As such, we focus here on analysing the effect this has on the quenching of galaxies, especially massive galaxies. This is also an area that has attracted a lot of attention due to recent results from the JWST, which point to massive-quiescent galaxies being more common than previously thought at \(z>3\) (Carnall et al., 2023; Nanayakkara et al., 2022; Valentino et al., 2023; Long et al., 2023). This section is organised as follows: § 5.1 focuses on the quenching of galaxies in the local universe, and § 5.2 analyses how quenching develops across cosmic time.

### 5.1 Quenching in local universe galaxies

Fig. 8 shows the SFR-stellar mass plane at \(z=0\) in Shark v2.0 and L18. Observational measurements of the median SFR-stellar mass relation of SDSS galaxies from Brinchmann et al. (2004), the main sequence in GAMA from Davies et al. (2016), and the distribution of all GAMA galaxies at \(z\leq 0.06\) and their median SFR-stellar mass plane distribution from the catalogue of Robotham et al. (2020); Bellstedt et al. (2020) are also shown. For the latter, galaxies both on and off the main sequence (MS) are included.
The median SFR at fixed stellar mass is very similar between Shark v2.0 and L18 in galaxies with \(M_{\star}<10^{10}\) M\({}_{\odot}\). Above that stellar mass, the models diverge in a way that galaxies in Shark v2.0 appear more quenched than those in L18 by roughly an order of magnitude, at least for galaxies with stellar masses \(10^{10.5}-10^{11.5}\) M\({}_{\odot}\). The most massive galaxies, \(M_{\star}>10^{12}\) M\({}_{\odot}\), in both versions of Shark have similar SFRs, about an order of magnitude below the MS. Compared with observations, we see that the MS in both Shark versions agrees well with the measurements of Davies et al. (2016).

Figure 7: BH-stellar mass relation of all galaxies in Shark v2.0 at \(z=0\) and the subsample of galaxies classified as ETGs (those with D/T \(<0.5\)) and LTGs (those with D/T \(\geq 0.5\)), as labelled. We also show the relation for ETGs and LTGs in Shark v1.1 (L18) as dashed lines. Observations of ETGs and LTGs from Sahu et al. (2019) and Davis et al. (2019), respectively, and from passive (P) and star-forming (SF) galaxies from Terrazas et al. (2017) are shown with symbols, as labelled. The BH-stellar mass relations of P and SF galaxies in Terrazas et al. (2017) agree well with those of ETGs and LTGs, respectively, of Sahu et al. (2019) and Davis et al. (2019).

There is evidence, though, that galaxies in GAMA as analysed in Bellstedt et al. (2023) start to, on average, deviate from the MS at \(M_{\star}\geq 10^{9.5}\,\mathrm{M}_{\odot}\). This in Shark v2.0 and L18 happens at higher stellar masses, \(M_{\star}\approx 10^{10}\,\mathrm{M}_{\odot}\). The deviations from the MS in the early SDSS analysis of Brinchmann et al. (2004) happen at higher stellar masses, closer to what we get in Shark. Because GAMA's survey area is too small to have a representative sample of very massive galaxies, we also show individual measurements of the SFR and stellar mass of a sample of massive galaxies from Terrazas et al. (2017).
Note that some of these measurements are upper limits. The quenching of massive galaxies is linked to AGN feedback. It is thus natural to explore the connection between quenching and BH mass. Fig. 9 shows the specific SFR (\(\equiv\) SFR/M\({}_{\star}\); sSFR) as a function of the central BH mass of galaxies with \(M_{\star}\geq 10^{10}\,\mathrm{M}_{\odot}\) at \(z=0\) for Shark v2.0 and L18, compared with the observations of Terrazas et al. (2017). This plane clearly shows a BH mass scale at which quenching happens in Shark. This transition mass for Shark v2.0 is \(M_{\rm BH}\approx 10^{7.5}\,\mathrm{M}_{\odot}\) and for L18 is \(M_{\rm BH}\approx 10^{7.25}\,\mathrm{M}_{\odot}\). What is interesting is that above that mass threshold, Shark v2.0 galaxies reach much lower sSFRs (i.e. are more quenched) than L18 galaxies, but also the scatter of sSFR at fixed BH mass is much larger in Shark v2.0 than in L18. The relation obtained by Shark v2.0 much more closely resembles what the observations of Terrazas et al. (2017) suggest. Terrazas et al. (2020) used this plane to diagnose the way AGN feedback acts in the Illustris-TNG simulations, and found that the transition region in Illustris-TNG, which happens at \(M_{\rm BH}\approx 10^{8}\,\mathrm{M}_{\odot}\), was much sharper than observations suggest, and even bimodal, with galaxies below the transition mass being star-forming, and right above, being quenched. Shark v2.0 displays a much smoother transition that agrees better with observations. We come back to the sSFR-\(M_{\rm BH}\) plane in § 5.2. Lastly, we analyse the SFR histories (SFHs) of galaxies at \(z=0\) in different bins of stellar mass for Shark v2.0 and L18 in Fig. 10. Galaxies in both Shark v2.0 and L18 have on average rising SFHs in the stellar mass range \(10^{9}-10^{9.8}\,\mathrm{M}_{\odot}\). Galaxies in the next stellar mass bin, \(10^{10}-10^{10.25}\,\mathrm{M}_{\odot}\), in both models have a median SFH that is close to constant over the last 6 Gyrs.
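The transition BH mass quoted above can be read off the sSFR-\(M_{\rm BH}\) plane as the BH mass at which the median sSFR first drops well below the star-forming value. A toy sketch of that diagnostic (all numbers here are illustrative placeholders, not Shark outputs):

```python
import numpy as np

# Toy median sSFR track: star-forming at 10^-10 /yr below the transition,
# quenched at 10^-11 /yr above it (purely illustrative values).
log_mbh = np.linspace(6.0, 9.0, 7)
median_ssfr = np.where(log_mbh < 7.5, 1e-10, 1e-11)  # [1/yr]

# Transition mass: first BH-mass bin where the median sSFR sits
# >= 1 dex below the star-forming value.
drop_dex = np.log10(1e-10 / median_ssfr)
transition_log_mbh = log_mbh[np.argmax(drop_dex >= 1.0)]
```

On real model output, `median_ssfr` would come from a binned-median of the galaxy population rather than an analytic step function.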
At higher stellar masses there is a clear peak at lookback times \(>9\) Gyrs, followed by a decline in SFR. In L18, galaxies with \(M_{\star}\geq 10^{10.5}\,\mathrm{M}_{\odot}\) have relatively self-similar SFHs, scaled up or down in the overall normalisation. In Shark v2.0, the SFHs of massive galaxies present much larger variations at fixed stellar mass, and on average end up more quenched towards \(z=0\). In Shark v1.1, once quenching kicks in for galaxies with \(M_{\star}>10^{10.5}\,\mathrm{M}_{\odot}\), we see that the SFR continues to increase with increasing stellar mass, while in Shark v2.0, galaxies with \(10^{10.5}\,\mathrm{M}_{\odot}<M_{\star}<10^{10.75}\,\mathrm{M}_{\odot}\) have higher SFRs at lookback times \(<4\) Gyr than galaxies with \(10^{10.75}\,\mathrm{M}_{\odot}<M_{\star}<10^{11.5}\,\mathrm{M}_{\odot}\), on average. This agrees qualitatively with the "downsizing" signal observed in local Universe galaxies (e.g. Thomas et al., 2010; Bellstedt et al., 2020), and shows that this new version of Shark overcomes one of the shortcomings of L18 identified in Bravo et al. (2022, 2023).

### 5.2 The onset of galaxy quenching

To study the onset of quenching in galaxies, we start by exploring the sSFR-BH mass plane across cosmic time in Fig. 11 for Shark v2.0 and L18. We show how the median relation for galaxies with \(M_{\star}>10^{9}\,\mathrm{M}_{\odot}\) evolves from \(z=0\) to \(z=6\) in both models.

Figure 8: SFR as a function of stellar mass for galaxies at \(z=0\). The black solid and dashed lines show the medians for the galaxies in Shark v2.0 and v1.1 (L18), respectively, for bins with \(\geq 10\) objects. The filled contours show percentile ranges, from the outer to the inner regions, for Shark v2.0. The brown dashed line shows the median SFR-stellar mass relation of SDSS galaxies as reported by Brinchmann et al. (2004); the blue dot-dashed line shows the main sequence as measured from GAMA by Davies et al. (2016); the pink dashed line and thin solid lines show the median and several percentile ranges of GAMA galaxies from Bellstedt et al. (2023). Observations of massive galaxies from Terrazas et al. (2017) are also shown as red symbols as a reference for typical SFRs of massive galaxies that are above the dynamical range sampled by GAMA. Some of the latter measurements are upper limits, and are indicated by down-pointing arrows.

Figure 9: Specific SFR as a function of BH mass for galaxies at \(z=0\) with stellar masses \(>10^{10}\,\mathrm{M}_{\odot}\). The results for Shark v1.1 (L18) and v2.0 are shown as a black line with the grey shaded region, and a red line with the faded red shaded region, respectively. Lines show medians and the shaded regions show the \(16^{\mathrm{th}}-84^{\mathrm{th}}\) percentile ranges for bins with \(\geq 10\) objects. Observational estimates from Terrazas et al. (2017) are shown as symbols.

In L18, the BH mass at which galaxies transition from being primarily star-forming to displaying signs of quenching (i.e. a decrease in sSFR) increases with increasing redshift. By \(z=3\), this happens at \(M_{\rm BH}\approx 10^{8.3}\,\rm M_{\odot}\), \(\approx 1.2\) dex higher than the transition mass at \(z=0\) for that model. At \(z=4\) there are only weak signs of quenching happening in galaxies with \(M_{\rm BH}\gtrsim 10^{9}\,\rm M_{\odot}\). There is a stark contrast with Shark v2.0, which predicts that the BH transition mass is relatively redshift-independent at \(M_{\rm BH}\approx 10^{7.5}\,\rm M_{\odot}\). The reason for this difference comes down to how radio-mode feedback was implemented in L18 vs Shark v2.0. As discussed in § 3.3, the heating power of AGN feedback in the radio mode of Croton et al.
(2006) was calculated using only the accretion rate coming from the hot-halo mode instead of the total accretion rate of the BH. The hot-halo mode contributes very little to the total BH accretion rate at high redshift, hence we need to move to very high BH masses to start to see a larger contribution from that mode, which can then lead to quenching. In Shark v2.0 we are instead agnostic to the accretion channel, which makes more physical sense, and simply calculate the total jet power that can be used to offset gas cooling. This leads to an approximately constant BH mass above which enough jet power is produced to lead to quenching (or at least to significant deviations from the MS). Combining robust BH mass and sSFR measurements in observations across cosmic time would greatly help to constrain the predictions here. One of the key motivations to study the quenching of massive galaxies comes from the recent discoveries of a sizeable population of massive-quiescent galaxies in the early Universe (e.g. Schreiber et al., 2018; Carnall et al., 2020; Weaver et al., 2022; Gould et al., 2023; Nanayakkara et al., 2022; Carnall et al., 2023; Valentino et al., 2023; Long et al., 2023). These observations have revealed that these galaxies exist in number densities \(\gtrsim 10^{-5}\,\rm Mpc^{-3}\). We hence investigate in Fig. 12 what L18 and Shark v2.0 predict for the number density of massive-quiescent galaxies at \(0\leq z\leq 5\). We do this by using two definitions of quiescence, sSFR \(<10^{-10}\,\rm yr^{-1}\) (as adopted by Long et al., 2022) and sSFR \(<10^{-11}\,\rm yr^{-1}\) (a more conservative cut). These two thresholds represent deviations from the MS at \(z=3\) of \(\approx 1.3-1.5\) and \(2.3-2.5\) dex, respectively. We also investigate two stellar mass selections, \(\gtrsim 10^{10}\,\rm M_{\odot}\) (top panel in Fig. 12) and \(\gtrsim 10^{10.5}\,\rm M_{\odot}\) (bottom panel in Fig. 12), as they represent typical mass thresholds being adopted in observations. Fig.
12 shows two important differences between L18 and Shark v2.0: (1) overall the number density of massive-quiescent galaxies in Shark v2.0 is \(\gtrsim 1\) dex higher than in L18, with differences being largest, \(\approx 2\) dex, for galaxies with \({\rm sSFR}<10^{-11}\,\rm yr^{-1}\); (2) there is little difference in number density between selecting galaxies with \({\rm sSFR}<10^{-11}\,\rm yr^{-1}\) or \({\rm sSFR}<10^{-10}\,\rm yr^{-1}\) in Shark v2.0 (\(\lesssim 0.2\) dex), compared to the large differences seen in L18 (\(\gtrsim 0.5\) dex). This tells us that quenched galaxies in Shark v2.0 have on average \(\gtrsim 1\) dex lower sSFRs than in L18. We find that the large difference between Shark v2.0 and L18 is due to the new radio-mode AGN feedback model rather than the inclusion of QSO-mode AGN feedback: when we turn off the latter we only see a slightly lower number density of massive-quiescent galaxies at \(z\geq 4\), of 0.2 dex, and almost no differences at \(z<4\). Overall, the number densities of massive-quiescent galaxies predicted by Shark v2.0 are in good agreement with current observational constraints within the uncertainties. Most of the scatter between current observational estimates comes from the different methods employed to define "quenched" (e.g. colour-colour selection vs post-starburst features in the spectrum; see Valentino et al., 2023 for a discussion). The comparison between observations and simulations is further complicated by the unconstrained systematic and random uncertainties in the estimation of stellar masses and SFRs in these galaxies. To get a sense of what the impact of this could be, we convolve the stellar masses and SFRs of Shark galaxies with a random Gaussian distribution centred around 0 and with a width of 0.3 dex. We then select massive-quiescent galaxies based on sSFRs and stellar masses after including errors. This effect is shown as dashed lines in Fig. 12. The consequence of adding errors can be quite large, especially for the rarest objects (the bottom panel of Fig. 12 shows a difference of \(\approx 0.8\) dex between having errors or not in the population with sSFR \(<10^{-10}\) yr\({}^{-1}\)). We should note that errors of the order of 0.3 dex for stellar masses and SFRs are likely lower limits, as even at \(z=0\), with better data quality and multi-wavelength coverage, stellar mass errors are typically 0.2 dex (Robotham et al., 2020). A full understanding of the level of agreement/tension between observations and simulations likely requires full forward modelling to select galaxies in the same colour-colour space or with the same spectral features as in observations. We leave that for future work.

Figure 10: The median SFR as a function of lookback time (LBT) of galaxies at \(z=0\) in different bins of stellar mass (of width 0.25 dex) for Shark v2.0 (solid lines) and v1.1 (dotted lines). The mean stellar mass of each bin is labelled next to the corresponding lines.

Figure 11: The specific SFR vs BH mass for galaxies with stellar masses \(\geq 10^{9}\,\rm M_{\odot}\) from \(z=0\) to \(z=6\), as labelled, for Shark v1.0 (right) and v2.0 (left). The applied stellar mass limit is to avoid the emergence of quenched satellite galaxies producing a visible decrease in specific SFR, as the focus here is on AGN feedback. Lines show the medians, and the shaded regions show the \(16^{\rm th}-84^{\rm th}\) percentile ranges. The latter are only shown for \(z=0\), \(z=3\) and \(z=6\) to avoid crowding the figure. The vertical dotted line shows for reference a BH mass of \(10^{7.5}\,\rm M_{\odot}\).

We now investigate the evolution of the SMF of passive galaxies in Fig. 13 for \(0.5\leq z\leq 5\). We select passive galaxies as those with a sSFR \(<10^{-10.75}\) yr\({}^{-1}\), following Shuntov et al. (in preparation). We discuss the effect of the passive selection criteria when discussing Fig. 14. 
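The error-convolution test described above (an unbiased Gaussian of width 0.3 dex applied to the logarithmic stellar masses and SFRs before reselecting massive-quiescent galaxies) can be sketched as follows. This is a minimal illustration, not code from the paper; the function name and catalogue arrays are hypothetical stand-ins for Shark outputs.

```python
import numpy as np

def quiescent_number_density(mstar, sfr, volume, mlim=1e10,
                             ssfr_cut=1e-10, scatter_dex=0.0, seed=0):
    """Number density [Mpc^-3] of massive-quiescent galaxies.

    mstar [Msun] and sfr [Msun/yr] are per-galaxy arrays; volume is the
    comoving volume in Mpc^3.  If scatter_dex > 0, log10(mstar) and
    log10(sfr) are convolved with an unbiased Gaussian of that width,
    mimicking observational uncertainties before the selection is applied.
    """
    rng = np.random.default_rng(seed)
    log_m = np.log10(mstar)
    log_sfr = np.log10(np.clip(sfr, 1e-12, None))
    if scatter_dex > 0:
        log_m = log_m + rng.normal(0.0, scatter_dex, log_m.size)
        log_sfr = log_sfr + rng.normal(0.0, scatter_dex, log_sfr.size)
    ssfr = 10.0 ** (log_sfr - log_m)            # yr^-1
    sel = (10.0 ** log_m > mlim) & (ssfr < ssfr_cut)
    return sel.sum() / volume
```

Because the mass function falls steeply, symmetric scatter preferentially up-scatters the numerous lower-mass galaxies across the mass threshold (Eddington bias), which is the sense of the dashed-versus-solid offset described for Fig. 12.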
Comparing Shark v2.0 with L18 first (solid red and dashed black lines, respectively), we find that at all redshifts Shark v2.0 produces more passive galaxies across the whole mass range than L18. The differences become larger with increasing redshift, from \(\approx 0.3\) dex at \(z=0.5\) to 1.5 dex at \(z=5\). The difference in number density between the two models is largest for central galaxies (orange solid vs dotted black lines), with L18 producing a number density of central, passive galaxies \(\lesssim 10^{-5.5}\) Mpc\({}^{-3}\) at \(M_{\star}=10^{10.5}\) M\({}_{\odot}\), while Shark v2.0 predicts number densities of \(10^{-4}\) Mpc\({}^{-3}\). At \(z\geq 3\), L18 predicts virtually no quenched, central galaxies, with number densities \(<10^{-6}\) Mpc\({}^{-3}\), in contrast with Shark v2.0, which predicts peak number densities of \(\approx 10^{-4.5}\) Mpc\({}^{-3}\) and \(\approx 10^{-5.5}\) Mpc\({}^{-3}\) at \(z=3\) and \(z=4\), respectively. At \(z\leq 3\) a population of passive, low-mass centrals emerges in both Shark v2.0 and L18, with masses \(M_{\star}\lesssim 10^{9}\) M\({}_{\odot}\). These galaxies correspond to those inhabiting very low-mass halos that suffer from photo-ionisation from the UV background, whose efficiency depends on a halo's circular velocity and redshift (Sobacchi & Mesinger, 2013), as described in § 4.4.9 of L18. The difference in the SMF of passive, satellite galaxies between Shark v2.0 and L18 comes in part from the new dynamical friction timescale model adopted in v2.0 (§ 3.2), which produces longer dynamical friction timescales and hence a longer survival of satellite galaxies than the Lacey & Cole (1993) model adopted in L18. Satellite galaxies make up most of the population of passive galaxies at \(M_{\star}\lesssim 10^{10.2}\) M\({}_{\odot}\) at \(z=0.5\), and at increasingly higher stellar masses with increasing redshift. 
For example, at \(z\geq 4\), passive, satellite galaxies dominate the number density across the whole mass range, showing that environment (i.e. RPS and tidal stripping) can effectively quench galaxies at very early cosmic times. Fig. 13 shows the observational results of Weaver et al. (2022), in which a colour-colour selection was used to classify galaxies as passive or star-forming using the near-UV (NUV), r-band (r), and J-band (J) magnitudes, with passive galaxies being those with \[({\rm NUV-r})>3\,({\rm r-J})\,+\,1;\,({\rm NUV-r})>3.1, \tag{55}\] and star-forming galaxies being those that do not comply with the selection above. Compared with the observations of Weaver et al. (2022), Shark v2.0 performs overall better than L18, reproducing well the population of passive, central galaxies which likely dominate the high-mass end in the observations. At \(z\geq 3\), Shark v2.0 struggles to reproduce the high-mass end of the passive SMF, but this is in part due to systematic uncertainties in the observations. For example, Shuntov et al. (in preparation), using COSMOS-Web (Casey et al., 2022), show that the high-mass end they recover at those high redshifts for passive galaxies is about 0.5 dex lower than that reported by Weaver et al. (2022) and closer to the Shark v2.0 predictions. Another important limitation of the observations is that the number densities are unconstrained for galaxies below the vertical lines shown in Fig. 13, and hence the onset of environmental quenching in the very early Universe as predicted by Shark cannot be clearly studied with current observations of the SMF.

Figure 12: Number density of passive massive galaxies selected based on two stellar mass thresholds, \(M_{\star}>10^{10}\) M\({}_{\odot}\) (top) and \(M_{\star}>10^{10.5}\) M\({}_{\odot}\) (bottom), as labelled. To select passive galaxies we use two different thresholds in specific SFR, as labelled, which approximate typical values adopted in the literature. For Shark v2.0 we show the intrinsic predictions (i.e. taking stellar masses and SFRs without assuming any errors; thick solid lines), and the prediction after convolving stellar masses and SFRs with errors that are Gaussian-distributed with a width of 0.3 dex but no bias (i.e. Gaussian centred at 0; dashed lines). Shark v1.1 intrinsic predictions are also shown in both panels (thin solid lines, as labelled in the bottom panel). Observational estimates are also shown: grey symbols (pre-JWST results) are from Straatman et al. (2014); Schreiber et al. (2018); Merlin et al. (2019); Girelli et al. (2019); Carnall et al. (2020); Weaver et al. (2022); Gould et al. (2023), and green symbols (JWST results) are from Nanayakkara et al. (2022); Carnall et al. (2023); Valentino et al. (2023), as labelled in the top panel.

To assess the effect the selection of passive galaxies has on the resulting SMF, we compare the SMF of galaxies with a sSFR \(<10^{-10.75}\) yr\({}^{-1}\) with that of galaxies that comply with the colour-colour selection of Eq. (55). We compute galaxy spectral energy distributions following the method described in Lagos et al. (2019) (referred to as "EAGLE-\(\tau\) RR14" in that paper; see Section 6 in the supplementary material for a short description). The supplementary material shows that Shark v2.0 produces galaxy luminosities that reproduce reasonably well the observed luminosity functions of the local Universe from the NUV to the FIR, the K-band luminosity function evolution up to \(z=3\), and the far-UV luminosity function up to \(z=10\). The SMFs of the passive galaxy populations selected by the methods above are shown in Fig. 14. Focusing first on central galaxies, we see that at \(z\leq 1\) Eq. (55) is effective in selecting galaxies of low sSFR and stellar masses \(\gtrsim 10^{10}\) M\({}_{\odot}\); however, it tends to overestimate the number of low-sSFR central galaxies at lower stellar masses by up to 0.8 dex. 
The problem gets worse with increasing redshift, and by \(z\geq 3\) most of the centrals classified as passive by their colour have sSFR\(>10^{-10.75}\) yr\({}^{-1}\). By \(z=5\) there are \(>1000\) times more galaxies with colours consistent with being passive but with sSFR\(>10^{-10.75}\) yr\({}^{-1}\). Interestingly, for satellite galaxies we see the opposite. At \(z\leq 1\), galaxies with sSFR\(<10^{-10.75}\) yr\({}^{-1}\) comply with the colour selection of Eq. (55), but at higher redshifts the colour selection _underestimates_ the number of satellites with low sSFRs. By \(z=5\) this underestimation is of \(\approx 1\) dex. To understand where these disparate effects on centrals and satellite galaxies come from, we studied the median properties of centrals/satellites with \(M_{\star}\geq 10^{10}\) M\({}_{\odot}\). We found that at \(z>2\) centrals primarily correspond to starbursting, dusty galaxies (i.e. most of their SFR is associated with a burst with super-solar metallicities). Satellites at \(z\geq 2\), on the other hand, have metallicities that are \(0.7-2\) dex below centrals and have their star formation taking place in the galaxy disk. Hence, Eq. (55) appears to be effective in selecting passive galaxies when they have gas metallicities close to solar and are not dusty star-forming galaxies (see Lagos et al. 2020 for a detailed analysis of the contamination of dusty star-forming galaxies in the selection of passive galaxies based on colour-colour diagrams). Overall we see that the SMF of galaxies selected to be "passive" following the colour-colour selection of Eq. (55) is in much better agreement with the observational estimates of Weaver et al. (2022), which come from the same colour-colour selection (see for example the high-mass end at \(z=2\) and \(z=3\)). 
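The NUVrJ cut of Eq. (55) used throughout this comparison is straightforward to apply to a catalogue of rest-frame magnitudes. A minimal sketch (hypothetical inputs, not Shark code):

```python
import numpy as np

def nuvrj_passive(nuv, r, j):
    """NUVrJ passive-galaxy selection of Eq. (55): a galaxy is passive iff
    (NUV - r) > 3 (r - J) + 1  and  (NUV - r) > 3.1.
    Inputs are rest-frame magnitudes (scalars or arrays)."""
    nuv_r = np.asarray(nuv) - np.asarray(r)
    r_j = np.asarray(r) - np.asarray(j)
    return (nuv_r > 3.0 * r_j + 1.0) & (nuv_r > 3.1)
```

The diagonal term penalises red r − J colours, so dusty star-forming galaxies are (imperfectly) excluded from the passive wedge, cf. the Lagos et al. (2020) discussion cited above.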
In the future we plan to investigate other, more modern methods to select passive galaxies based on colours, using independent magnitudes and Bayesian methods to assign probabilities of being passive (Gould et al., 2023; Long et al., 2023).

Figure 13: Galaxy SMF in Shark v2.0 (solid red line) and the default model in L18 (solid black line), for galaxies classified as passive with a simple threshold of sSFR \(<10^{-10.75}\) yr\({}^{-1}\), from \(z=0.5\) to \(z=5\), as labelled. We also show for Shark v2.0 the SMF of passive galaxies after applying a random error to the stellar masses and SFRs of 0.3 dex (dot-dashed line), and then splitting those galaxies between centrals (orange) and satellites (green). We show the same for central and satellite galaxies in L18 as dashed and dotted black lines, respectively. Observations from Weaver et al. (2022) from the COSMOS survey are also shown. Vertical dotted lines mark the approximate stellar mass above which observational estimates of the SMF are reliable according to Weaver et al. (2022).

Finally, we present predictions for the stellar-to-halo mass relation of passive and star-forming central galaxies in Shark v2.0 in Fig. 15. We again test two definitions of passive galaxies as described above (i.e. based on sSFR and colour), while star-forming galaxies are defined based on their sSFR distance to the MS or by not complying with the colour-colour selection of Eq. (55). These two selections are referred to as "sSFR" and "colour" in Fig. 15. We measure the MS in Shark v2.0 by performing a linear fit to the relation between \(\log_{10}({\rm SFR})\) and \(\log_{10}(M_{\star})\) for central galaxies with stellar masses in the range \(7\times 10^{8}-10^{10}\) M\({}_{\odot}\) at each of the shown redshifts. 
These mass limits are chosen conservatively to avoid resolution effects at the lower end, and the regime where AGN feedback becomes efficient at the higher end. Star-forming galaxies, in the case of the "sSFR" selection, are those with a distance to the main sequence \(\geq-0.3\) dex. Fig. 15 shows that passive galaxies display a weaker dependence of their stellar mass on halo mass at \(M_{\rm halo}\gtrsim 10^{11.6}\) M\({}_{\odot}\) than star-forming galaxies. The difference is larger when we use the sSFR to select galaxies than with the colour-colour selection. Interestingly, the latter misses most of the central galaxies with low sSFRs that are hosted by halos with \(M_{\rm halo}<10^{11.6}\) M\({}_{\odot}\) at \(z=1\) and \(z=2\). At \(z=3\) the colour-colour selection picks centrals in those low-mass halos that are not passive according to their sSFR. The overall different stellar-halo mass relation of passive and star-forming galaxies should translate into different clustering signals for these two populations. The flatness of the stellar-halo mass relations of passive central galaxies leads to their stellar masses being lower than that of star-forming galaxies at fixed halo mass in halos with masses \(\gtrsim 10^{12.5}\) M\({}_{\odot}\), on average. At halo masses \(10^{11.6}\lesssim M_{\rm halo}/{\rm M}_{\odot}\lesssim 10^{12.5}\), the flatness of the stellar-halo mass relation of passive galaxies leads to them having _higher_ stellar masses than star-forming galaxies. This is very clear at \(z=1\) and apparent at \(z=2\). At \(z=3\) we have too few high-mass halos to establish the continuation of such a difference. We will revisit this using larger cosmological volume simulations in the future. 
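The main-sequence fit and the distance-to-MS criterion described above can be sketched as follows. This is a minimal numpy illustration under the stated mass cuts; the function and array names are hypothetical, not from the Shark code.

```python
import numpy as np

def main_sequence_fit(log_mstar, log_sfr, is_central,
                      fit_range=(np.log10(7e8), 10.0)):
    """Linear fit log10(SFR) = a*log10(M*) + b to central galaxies in the
    stated mass range (avoiding resolution effects at the low end and the
    AGN-quenched regime at the high end)."""
    sel = is_central & (log_mstar > fit_range[0]) & (log_mstar < fit_range[1])
    a, b = np.polyfit(log_mstar[sel], log_sfr[sel], 1)
    return a, b

def distance_to_ms(log_mstar, log_sfr, a, b):
    """Vertical offset from the MS in dex; 'star-forming' if >= -0.3 dex."""
    return log_sfr - (a * log_mstar + b)
```

Repeating the fit independently at each redshift, as done here, lets the normalisation and slope of the MS evolve freely rather than imposing a single parametrisation across cosmic time.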
Figure 14: As Fig. 13, but for centrals and satellites in Shark v2.0 selected to have sSFR \(<10^{-10.75}\) yr\({}^{-1}\) (dashed lines), or to comply with the colour-colour selection of Eq. (55) (solid lines). The latter is what was used by Weaver et al. (2022) to select passive galaxies and construct the SMF.

At \(M_{\rm halo}\lesssim 10^{11.6}\) M\({}_{\odot}\), the stellar-halo mass relation of passive galaxies is very different between the two methods employed to select them, with the colour-colour selection leading to a relation that is similar to that of star-forming galaxies (especially clear at \(z=1\) and \(z=2\)). One of the few observational measurements we can compare Fig. 15 with was presented by Cowley et al. (2019), who derived a stellar-to-halo mass relation of passive and star-forming galaxies from the clustering of these populations at \(z\approx 2-3\). Note that Cowley et al. (2019) use colour selections similar to Eq. (55) to isolate passive galaxies. Cowley et al. (2019) found that at \(M_{\rm halo}\gtrsim 10^{12.7}\) M\({}_{\odot}\) passive and star-forming galaxies followed similar relations, but below that mass passive galaxies were more massive than star-forming galaxies at fixed halo mass by \(\geq 0.5\) dex. This difference is larger than what we find at \(z=2\), 3 and \(M_{\rm halo}\lesssim 10^{12.5}\) M\({}_{\odot}\) in Shark v2.0 (which is closer to 0.3 dex), but overall consistent within the scatter. At higher halo masses, the colour selection indeed leads to stellar-halo mass relations between star-forming and passive galaxies in Shark v2.0 that are similar (and even indistinguishable at \(z=3\)), in agreement with the conclusions of Cowley et al. (2019). However, if galaxies were selected by sSFR, we predict that different stellar-halo mass relations of passive and star-forming galaxies should be seen at these high halo masses, at least up to \(z=2\). Magliocchetti et al. 
(2023), also using clustering measurements, presented constraints on the host halo mass of massive-quiescent galaxies (those with \(M_{\star}\approx 10^{10}-10^{11}\) M\({}_{\odot}\)) and showed that the host halo masses of these galaxies are likely \(\gtrsim 10^{12}\) M\({}_{\odot}\) across a wide redshift range, \(0\leq z\leq 5\). Although we broadly see something similar in Shark v2.0, a careful comparison requires selecting galaxies using the same colour-colour selection employed in Magliocchetti et al. (2023) and adding photometric redshift uncertainties, which we leave for future work. The predictions of Fig. 15 will be testable in the near future thanks to JWST programs focused on obtaining large statistical samples of massive-quiescent galaxies across cosmic time.

## 6 Discussion and Conclusions

We have introduced a new version of the Shark SAM (v2.0) after a number of improvements to the physics included. These changes comprise: (i) a model describing the exchange of angular momentum between the interstellar medium and stars that results in different angular momenta for the atomic and molecular ISM and stars in galaxies; (ii) an updated dynamical friction timescale for satellite galaxies; (iii) a new AGN feedback model which includes two modes, a QSO and a radio mode, with the radio mode directly tied to the jet power production and the QSO mode consisting of a radiation-pressure-driven outflow; (iv) a model that tracks the development of BH spins, which, together with the mass and accretion rate, are used to define different BH accretion disk states; (v) a model for the gradual ram-pressure stripping of the hot and cold gas of satellite galaxies; (vi) a model for the tidal stripping of gas and stars of satellite galaxies; (vii) a method for automatic parameter exploration of the model using particle swarm optimisation. The model parameters were chosen to fit the \(z<0.1\) SMF of Li & White (2009) only; no high-redshift constraints were used. 
We showed that Shark v2.0 provides predictions that agree better with observations than Shark v1.1 (L18). Those include: the evolution of the SMF up to \(z=7\) (Fig. 1); the halo-mass conditional baryon mass function at \(z=0\) and the contribution from satellite and central galaxies (Fig. 2); the stellar size-mass relation (Fig. 5) and specific angular momentum-mass relation of different baryon components of galaxies (Fig. 6) at \(z=0\); the BH-stellar mass relation of late- and early-type galaxies at \(z=0\) (Fig. 7). We show that these improvements in large part relate to the new physics included in Shark. Specifically, the conditional baryon mass function of satellite galaxies agrees better with observations in part due to the new dynamical friction timescale, and the stellar size-mass and specific angular momentum-mass relations improved significantly thanks to the new angular momentum treatment of galaxy components. We presented a detailed analysis of galaxy quenching in Shark v2.0 and improvements over v1.1 (L18). These included: * Massive galaxies at \(z=0\), \(M_{\star}\gtrsim 10^{10.5}\) M\({}_{\odot}\), have \(\approx 1\) dex lower SFRs in Shark v2.0 than in v1.1. This allows the model to better reproduce the observed SFRs of massive galaxies (Fig. 8) and the sSFR-BH mass relation of galaxies with \(M_{\star}>10^{10}\) M\({}_{\odot}\) (Fig. 9) at \(z=0\). The latter shows that Shark v2.0 also produces a much larger sSFR scatter at fixed BH mass for BH masses \(>10^{8}\) M\({}_{\odot}\) than v1.1, in much better agreement with the observations of Terrazas et al. (2017). * Shark v2.0 predicts that the transition of galaxies from being star-forming (i.e. on the main sequence) to displaying a clear decrease in sSFR relative to the main sequence happens at roughly the same BH mass at all redshifts (\(\approx 10^{7.5}\) M\({}_{\odot}\)), while v1.1 requires an increasingly massive BH to transition to quenched with increasing redshift (Fig. 11). 
By \(z=4\), in Shark v1.1 only galaxies with a BH mass \(\gtrsim 10^{9}\) M\({}_{\odot}\) show signs of a decreased sSFR. * Shark v2.0 produces a \(\approx 1\) dex higher number density of massive-quiescent galaxies at \(z\gtrsim 2\) than v1.1, and those that are classified as quiescent display lower sSFRs in Shark v2.0 than in v1.1. Our new results agree well with current observational constraints on the number density of massive-quiescent galaxies coming from the JWST (Carnall et al., 2023; Nanayakkara et al., 2022; Valentino et al., 2023; Long et al., 2023) (see Fig. 12). We highlight that the overall abundance of massive galaxies in Shark v2.0 is similar to that of v1.1 (L18), but the key difference is in the fraction of those that are quenched. * We analyse the SMF of passive galaxies from \(z=0.5\) to \(z=5\) and show that at \(z\geq 2\) Shark v2.0 produces \(\gtrsim 100\) times more central, passive galaxies than v1.1, and this difference increases with increasing redshift. Similarly, Shark v2.0 predicts \(\approx 10-80\) times more passive satellite galaxies at \(z\geq 3\) than v1.1. These differences tend to disappear towards \(z=0\), with both models predicting similar SMFs of passive galaxies (Fig. 13). Our new model is in better agreement with current observational constraints on the passive SMF (especially when we select galaxies using the same colour-colour criterion employed in observations), though there are likely still too few passive galaxies with masses \(>10^{11}\,\mathrm{M}_{\odot}\) at \(z\geq 4\) in Shark v2.0. * We present predictions for the stellar-halo mass relation of star-forming and passive galaxies at \(z\geq 1\) and find clear differences between the two populations. Passive galaxies tend to display a flat stellar-halo mass relation at \(M_{\mathrm{halo}}\gtrsim 10^{11}\,\mathrm{M}_{\odot}\), so that at \(M_{\mathrm{halo}}\approx 10^{11}-10^{12.5}\,\mathrm{M}_{\odot}\) they are more massive than their star-forming counterparts, while the opposite happens at \(M_{\mathrm{halo}}\gtrsim 10^{12.5}\,\mathrm{M}_{\odot}\) (Fig. 15). The exact magnitude of this effect depends on the criteria used to select passive and star-forming galaxies.

Figure 15: Stellar vs halo mass relation for central galaxies in Shark v2.0 selected to be passive, by either assuming sSFR \(<10^{-10.75}\) yr\({}^{-1}\) (salmon dashed line) or by selecting them based on their NUV-optical colour (see text for details; red solid line); and star-forming, by either selecting them to have sSFRs that are \(>-0.3\) dex from the MS (light blue dashed line) or based on their NUV-optical colour (blue solid line). Lines with shaded regions show the medians and \(16^{\rm th}-84^{\rm th}\) percentile ranges, respectively. Only bins with \(\geq 10\) galaxies are shown. This is presented for \(z=1,~{}2,~{}3\), as labelled. Overall, passive galaxies display a flatter stellar-halo mass relation than star-forming galaxies, with the difference being larger when galaxies are selected based on their sSFR rather than colour.

The differences listed above between Shark v1.1 and v2.0 can almost exclusively be associated with the implementation of the new AGN model presented in § 3.3, although other processes, such as the new dynamical friction timescale, also play a (secondary) role. This work thus _demonstrates the power of SAMs in quickly assessing the possible physical mechanisms behind current tensions with observations_. In accompanying papers we present a detailed analysis of the BH and AGN population across cosmic time (Bravo et al. in preparation), and a thorough analysis of the parameter space of Shark v1.1 and v2.0 (Proctor et al. in preparation). 
In the future, we plan to explore different definitions of what makes a galaxy passive and the impact that has on the number density, SMF evolution and clustering of passive galaxies; and to assess the derived properties of massive-quiescent galaxies to understand the effect of systematic uncertainties.

## Data availability

The surfs halo and subhalo catalogues and corresponding merger trees used in this work can be accessed from [https://tinyurl.com/y6q146d4](https://tinyurl.com/y6q146d4). Shark is a public code, and the source and python scripts used to produce the plots in this paper can be found at [https://github.com/ICRAR/shark/](https://github.com/ICRAR/shark/).

## Acknowledgements

We thank Sabine Bellstedt, Kate Gould, John Weaver, Jen Hardwick and Nandini Sahu for sharing observational data that have been used in this work. CL has received funding from the ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013, and is a recipient of the ARC Discovery Project DP210101945. MB has received funding from McMaster University through the William and Caroline Herschel Fellowship. DO and ASGR acknowledge support from the ARC Future Fellowship scheme (FT190100083 and FT200100375, respectively). KP and ACG acknowledge Research Training Program and ICRAR scholarships. This work was supported by resources provided by The Pawsey Supercomputing Centre with funding from the Australian Government and the Government of Western Australia.
2306.12786
The AdS Virasoro-Shapiro Amplitude
We present a constructive method to compute the AdS Virasoro-Shapiro amplitude, order by order in AdS curvature corrections. At kth order the answer takes the form of a genus zero world-sheet integral involving weight 3k single-valued multiple polylogarithms. The coefficients in our ansatz are fixed, order by order, by requiring: crossing symmetry; the correct supergravity limit; the correct structure of poles, determined by dispersive sum rules; and the dimensions of the first few Konishi-like operators, available from integrability. We explicitly construct the first two curvature corrections. Our final answer then reproduces all localisation results and all CFT data available from integrability, to this order, and produces a wealth of new CFT data for planar N=4 SYM at strong coupling.
Luis F. Alday, Tobias Hansen
2023-06-22T10:34:01Z
http://arxiv.org/abs/2306.12786v2
# The AdS Virasoro-Shapiro Amplitude

###### Abstract

We present a constructive method to compute the AdS Virasoro-Shapiro amplitude, order by order in AdS curvature corrections. At \(k^{\text{th}}\) order the answer takes the form of a genus zero world-sheet integral involving weight \(3k\) single-valued multiple polylogarithms. The coefficients in our ansatz are fixed, order by order, by requiring: crossing symmetry; the correct supergravity limit; the correct structure of poles, determined by dispersive sum rules; and the dimensions of the first few Konishi-like operators, available from integrability. We explicitly construct the first two curvature corrections. Our final answer then reproduces all localisation results and all CFT data available from integrability, to this order, and produces a wealth of new CFT data for planar \(\mathcal{N}=4\) SYM at strong coupling.

## 1 Introduction

Over the last few years our understanding and ability to compute superstring scattering amplitudes in flat space has significantly advanced. One of the earliest results in string theory is the tree-level amplitude for the scattering of four massless supergravity states (gravitons) \[A_{4}(\varepsilon_{i},p_{i})=K(\varepsilon_{i},p_{i})\int dz^{2}|z|^{-2S-2}|1-z|^{-2T-2}\,, \tag{1}\] where \(S+T+U=0\) denote the Mandelstam variables and a simple prefactor \(K(\varepsilon_{i},p_{i})\) encodes the dependence on the polarisation vectors of the gravitons. By now compact expressions exist for the tree-level scattering of gravitons with arbitrary multiplicity [1; 2]. These amplitudes display remarkable structures and much has been learnt from them. For instance, tree-level open and closed-string amplitudes are related by the KLT relations [3], and this has inspired powerful relations between supergravity and Yang-Mills amplitudes [4]. 
From a mathematical viewpoint these relations can be understood as a single-valued map between tree-level open and closed superstring amplitudes [2; 5; 6; 7]. More precisely, the low-energy expansion of closed-string amplitudes contains only single-valued multiple zeta values, a subclass of zeta values [8] obtained by evaluating single-valued multiple polylogarithms [9] at unity. Defining then a single-valued map \[\mathrm{sv}:\zeta(n_{1},n_{2},\ldots)\to\zeta^{\mathrm{sv}}(n_{1},n_{2},\ldots), \tag{2}\] it turns out this map takes open-string amplitudes to closed-string amplitudes, making manifest a surprising interplay with number theory. In contrast, string amplitudes in curved backgrounds are largely unexplored. Perturbatively they organise by a genus expansion, but even at tree-level a direct world-sheet approach is at the moment out of reach. While progress has been made for cases with pure background NS-NS B-field, see [10], the presence of Ramond-Ramond fields prevents the use of the RNS formulation. Alternative approaches are either very hard to quantise in curved backgrounds (Green-Schwarz formalism) or are not yet at the point where the computation of tree-level amplitudes is possible (pure-spinor formalism). In this paper we study the tree-level amplitude for four massless supergravity states in type IIB theory on \(AdS_{5}\times S^{5}\). While we don't know how to quantise perturbative string theory on this background, we will combine world-sheet intuition with several powerful tools available in this case. On one hand, the AdS/CFT duality relates this observable to a four-point correlator in \(\mathcal{N}=4\) SYM at large central charge, which can be studied by CFT methods. On the other hand, it has been recently observed that single-valuedness of the low energy expansions is also a powerful guiding principle when perturbing around flat-space [11; 12; 13]. 
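For orientation, on (depth-one) zeta values the single-valued map acts as follows; these are standard facts about the map, quoted here rather than derived in the paper:

```latex
\mathrm{sv}\,\zeta(2n)=0\,,\qquad \mathrm{sv}\,\zeta(2n+1)=2\,\zeta(2n+1)\,,\qquad n\geq 1\,,
```

so even zeta values are projected out while odd ones survive (up to a factor of 2). This matches the statement above that the low-energy expansion of closed-string amplitudes contains only single-valued multiple zeta values, whereas the open-string (Veneziano) expansion also contains \(\zeta(2)\).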
These observations lead to a precise proposal for the structure of the tree-level amplitude on \(AdS_{5}\times S^{5}\) in a large-radius expansion around flat space, to all orders. To understand the structure of our proposal let us ignore supersymmetry for a moment and consider a four-point string amplitude in the Polyakov formulation \[A_{4}(p_{i})\sim\int\mathcal{D}X\mathcal{D}g\,e^{-S_{P}}V(p_{1})V(p_{2})V(p_{3})V(p_{4})\,, \tag{3}\] with the Polyakov action given by \[S_{P}=\frac{1}{4\pi\alpha^{\prime}}\int d^{2}\sigma\sqrt{g}g^{\alpha\beta}\partial_{\alpha}X^{\mu}\partial_{\beta}X^{\nu}G_{\mu\nu}(X)\,. \tag{4}\] Assuming \(G_{\mu\nu}(X)\) is the metric of \(AdS_{5}\times S^{5}\) with radius \(R\), and expanding around flat space, we obtain \[G_{\mu\nu}(X)=\eta_{\mu\nu}+\frac{h_{\mu\nu}}{R^{2}}+\cdots,\qquad h_{\mu\nu}\sim X_{\mu}X_{\nu}\sim\lim_{q\to 0}\frac{\partial^{2}}{\partial q^{\mu}\partial q^{\nu}}e^{iq\cdot X}\,. \tag{5}\] Plugging this expansion back into the path integral (3), we see that AdS curvature corrections have the same effect as the addition of vertex operators for soft gravitons, whose momenta are taken to zero after taking two derivatives. The flat-space \(4+k\) graviton amplitude, where \(k\) of the gravitons are soft, can be analysed using the soft graviton theorem [14] \[A_{n+1}(p_{1},\ldots,p_{n},\epsilon q)=\sum_{i=1}^{n}\left(\frac{1}{\epsilon}\frac{\varepsilon_{\mu\nu}p_{i}^{\mu}p_{i}^{\nu}}{p_{i}\cdot q}+\frac{\varepsilon\cdot p_{i}\varepsilon_{\mu}q_{\nu}J_{i}^{\mu\nu}}{p_{i}\cdot q}+O(\epsilon)\right)A_{n}(p_{1},\ldots,p_{n})\,, \tag{6}\] where \(J^{\mu\nu}_{i}\) is the angular momentum operator for particle \(i\) and in particular a first-order differential operator on the momenta \(p_{i}\). 
Schematically we then expect \[A_{4+k}(p_{1},\ldots,p_{4},\epsilon q_{1},\ldots,\epsilon q_{k})\sim\frac{1}{ \epsilon^{k}}A_{4}(p_{1},\ldots,p_{4})+\frac{1}{\epsilon^{k-1}}\mathcal{D}_{1} A_{4}(p_{1},\ldots,p_{4})+\cdots, \tag{7}\] where \(\mathcal{D}_{1}\) is a first-order differential operator on the momenta \(p_{i}\), \(i=1,2,3,4\), whose precise form depends on the order in which the soft limits are taken. At the level of the integrand and recalling \(A_{4}(p_{i})\sim\int dz^{2}|z|^{-2S-2}|1-z|^{-2T-2}\) we see that the leading term has the same form, while the subleading term is also of the same form, with the extra insertion of factors like \(\log|z|^{2}\) and \(\log|1-z|^{2}\): namely, single-valued polylogarithms of weight one. According to the discussion above, we further need to take \(2k\) derivatives (two per extra soft graviton), which leads us to focus on the term proportional to \(\epsilon^{2k}\). This is expected to be of the form \[\int dz^{2}|z|^{-2S-2}|1-z|^{-2T-2}\mathcal{L}_{|w|=3k}(z), \tag{8}\] namely, the usual genus-zero integral for the scattering of four gravitons, with the additional insertion of weight \(3k\) single-valued multiple polylogarithms. This will be the basic building block for our construction. A few comments are in order. First, in the full computation divergent terms, as we take the momenta of the soft gravitons to zero, should cancel out. The model discussed here is simply a toy model. Second, on general grounds one would also expect terms of lower transcendentality at a given order, however, all solutions we have found have uniform and maximal transcendentality. This mimics what happens in other contexts in \(\mathcal{N}=4\) SYM. Finally, note that the insertion of single-valued multiple polylogarithms will automatically produce single-valued zeta values in the low energy expansion, upon integration over the Riemann sphere. The remainder of the paper is organised as follows. 
In section 2 we present our strategy for fixing the AdS Virasoro-Shapiro amplitude order by order in curvature corrections, which is then carried out in the subsequent sections. Section 3 contains the dispersive sum rules up to order \(1/\lambda\), and section 4 discusses the ansatz for the world-sheet correlator and the solutions for the first two curvature corrections. Having fixed these corrections gives us access to a wealth of new OPE data and Wilson coefficients, which are presented and where possible compared to previous results in section 5. Appendix A contains additional definitions for the dispersive sum rules and appendix B discusses properties of single-valued multiple polylogarithms.

## 2 Overview

The central object of this paper is the tree-level amplitude of four gravitons in type IIB superstring theory on \(AdS_{5}\times S^{5}\), denoted the AdS Virasoro-Shapiro amplitude for short. By the AdS/CFT correspondence it is also the following correlator in \(\mathcal{N}=4\) SYM, at leading non-trivial order in the large central charge expansion \[\langle\mathcal{O}_{2}^{I_{1}J_{1}}(x_{1})\mathcal{O}_{2}^{I_{2}J_{2}}(x_{2})\mathcal{O}_{2}^{I_{3}J_{3}}(x_{3})\mathcal{O}_{2}^{I_{4}J_{4}}(x_{4})\rangle\,. \tag{1}\] Here \({\cal O}_{2}^{IJ}\) is the superconformal primary operator of the stress-tensor multiplet. It is a scalar operator with conformal dimension \(\Delta=2\), transforming in the \({\bf 20^{\prime}}\) representation of the \(SU(4)\) R-symmetry group. The R-symmetry dependence of the correlator (1) is fully fixed by the superconformal Ward identities [15], and we can write (1) in terms of the _reduced correlator_ \({\cal T}(U,V)\), which is a function of the conformal cross-ratios \(U=\frac{x_{12}^{2}x_{34}^{2}}{x_{13}^{2}x_{24}^{2}}\), \(V=\frac{x_{14}^{2}x_{23}^{2}}{x_{13}^{2}x_{24}^{2}}\).
We will make use of the Mellin transform of the reduced correlator \[{\cal T}(U,V)=\int_{-i\infty}^{i\infty}\frac{ds_{1}ds_{2}}{(4\pi i)^{2}}U^{\frac{s_{1}}{2}+\frac{2}{3}}V^{\frac{s_{2}}{2}-\frac{4}{3}}\Gamma\bigg{(}\frac{4}{3}-\frac{s_{1}}{2}\bigg{)}^{2}\Gamma\bigg{(}\frac{4}{3}-\frac{s_{2}}{2}\bigg{)}^{2}\Gamma\bigg{(}\frac{4}{3}-\frac{s_{3}}{2}\bigg{)}^{2}M(s_{1},s_{2})\,, \tag{2}\] where \(s_{1}+s_{2}+s_{3}=0\). The low-energy expansion of the correlator takes a particularly simple form in Mellin space, containing the tree-level Witten diagrams of supergravity plus an infinite tower of contact diagrams with higher derivative quartic couplings \[M(s_{1},s_{2})=\frac{8}{(s_{1}-\frac{2}{3})(s_{2}-\frac{2}{3})(s_{3}-\frac{2}{3})}+\sum_{a,b=0}^{\infty}\frac{\Gamma(2a+3b+6)}{8^{a+b}\lambda^{\frac{3}{2}+a+\frac{3}{2}b}}\sigma_{2}^{a}\sigma_{3}^{b}\left(\alpha_{a,b}^{(0)}+\frac{\alpha_{a,b}^{(1)}}{\sqrt{\lambda}}+\frac{\alpha_{a,b}^{(2)}}{\lambda}+\cdots\right) \tag{3}\] where \(\sigma_{2}=s_{1}^{2}+s_{2}^{2}+s_{3}^{2}\) and \(\sigma_{3}=s_{1}s_{2}s_{3}\). This is an expansion at large 't Hooft coupling \(\lambda\), in which we keep all orders; it can equivalently be written in terms of \(R\) and \(\alpha^{\prime}\) via the dictionary \[\frac{1}{\sqrt{\lambda}}=\frac{\alpha^{\prime}}{R^{2}}\,. \tag{4}\] As we will see below, the Wilson coefficients \(\alpha_{a,b}^{(0)}\) reproduce the flat space result (the usual Virasoro-Shapiro amplitude), while subsequent terms \(\alpha_{a,b}^{(1)},\alpha_{a,b}^{(2)},\cdots\) give the \(AdS\) curvature corrections. The reduced correlator admits a decomposition in terms of exchanged superconformal primaries, and the Mellin amplitude generically has poles corresponding to these exchanges. The exchanged operators include both single- and double-trace operators.
While the poles of double-trace operators are already taken into account by the measure in (2), one expects poles corresponding to the exchange of (heavy) single-trace operators. Furthermore, as a string amplitude, the Mellin amplitude also enjoys soft UV behaviour, namely a polynomial bound in the Regge limit, the bound on chaos [16]. Together these facts were used in [11; 12] to derive dispersive sum rules that relate the Wilson coefficients \(\alpha_{a,b}^{(k)}\) in (3) to the OPE data of the single-trace superconformal primary operators exchanged. To make further contact with the string world-sheet we have to understand how to sum the low-energy expansion. While the sum in (3) has zero radius of convergence, it turns out that it is Borel summable. For this reason we study the Borel transform of the Mellin amplitude \[A(S,T)=2\lambda^{\frac{3}{2}}\int_{\kappa-i\infty}^{\kappa+i\infty}\frac{d\alpha}{2\pi i}\,e^{\alpha}\alpha^{-6}M\left(\frac{2\sqrt{\lambda}S}{\alpha},\frac{2\sqrt{\lambda}T}{\alpha}\right)\,. \tag{5}\] Note that at leading order this coincides with the flat space limit as introduced in [17]. The Borel transform of the low-energy expansion (3) reads1 Footnote 1: Perturbatively in a \(1/\lambda\) expansion, which is the regime we work in throughout this paper. \[A(S,T) =A^{(0)}(S,T)+\frac{1}{\sqrt{\lambda}}A^{(1)}(S,T)+\frac{1}{\lambda}A^{(2)}(S,T)+\cdots\,,\] \[A^{(k)}(S,T) =\text{SUGRA}^{(k)}+2\sum_{a,b=0}^{\infty}\,\hat{\sigma}_{2}^{a}\hat{\sigma}_{3}^{b}\alpha_{a,b}^{(k)}\,, \tag{6}\] \[\text{SUGRA}^{(0)} =\frac{1}{\hat{\sigma}_{3}}\,,\quad\text{SUGRA}^{(1)}=-\frac{2}{3}\frac{\hat{\sigma}_{2}}{\hat{\sigma}_{3}^{2}}\,,\quad\text{SUGRA}^{(2)}=\frac{2}{9}\frac{\hat{\sigma}_{2}^{2}}{\hat{\sigma}_{3}^{3}}\,,\quad\text{SUGRA}^{(k>2)}=0\,,\] with \(S+T+U=0\), \(\hat{\sigma}_{2}=\frac{1}{2}(S^{2}+T^{2}+U^{2})\) and \(\hat{\sigma}_{3}=STU\).
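As a quick consistency check of the normalisation in (5) and (6) (our sketch, not part of the original derivation), one can apply the Borel transform to the supergravity term of (3), using Hankel's formula \(1/\Gamma(z)=\int_{\kappa-i\infty}^{\kappa+i\infty}\frac{d\alpha}{2\pi i}\,e^{\alpha}\alpha^{-z}\):

```latex
% supergravity term of (3) with s_i = 2 sqrt(lambda) S_i / alpha,
% using sum_{i<j} S_i S_j = -sigma2_hat when S + T + U = 0:
\frac{8}{(s_{1}-\frac{2}{3})(s_{2}-\frac{2}{3})(s_{3}-\frac{2}{3})}
 = \frac{\alpha^{3}}{\lambda^{3/2}\,\hat\sigma_{3}}
   \left(1-\frac{\alpha}{3\sqrt{\lambda}}\,\frac{\hat\sigma_{2}}{\hat\sigma_{3}}
   +O(\lambda^{-1})\right),
\qquad\Longrightarrow\qquad
A_{\text{sugra}}(S,T)
 = \frac{2}{\hat\sigma_{3}}\,\frac{1}{\Gamma(3)}
 - \frac{1}{\sqrt{\lambda}}\,\frac{2\hat\sigma_{2}}{3\hat\sigma_{3}^{2}}\,\frac{1}{\Gamma(2)}
 + \cdots
 = \frac{1}{\hat\sigma_{3}}
 - \frac{1}{\sqrt{\lambda}}\,\frac{2}{3}\frac{\hat\sigma_{2}}{\hat\sigma_{3}^{2}}+\cdots,
```

in agreement with \(\text{SUGRA}^{(0)}\) and \(\text{SUGRA}^{(1)}\) in (6); the \(-\frac{2}{3}\) shifts in the Mellin denominators are what generate the subleading \(\text{SUGRA}\) terms.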
The leading contribution is the Virasoro-Shapiro amplitude for type IIB superstring theory in flat space \[A^{(0)}(S,T)=-\frac{\Gamma\left(-S\right)\Gamma\left(-T\right)\Gamma\left(-U \right)}{\Gamma\left(S+1\right)\Gamma\left(T+1\right)\Gamma\left(U+1\right)}\,, \tag{7}\] where an overall factor containing the graviton polarisations is stripped off because we are studying the reduced correlator. In this interpretation we can identify the variables \(S,T,U\) with the Mandelstam variables \[S=-\frac{\alpha^{\prime}}{4}(p_{1}+p_{2})^{2}\,,\qquad T=-\frac{\alpha^{ \prime}}{4}(p_{1}+p_{3})^{2}\,,\qquad U=-\frac{\alpha^{\prime}}{4}(p_{1}+p_{4 })^{2}\,. \tag{8}\] The first AdS correction \(A^{(1)}(S,T)\) was fully determined in [11; 12; 13] and in the present paper we will describe an algorithm that allows us to determine further corrections. We will demonstrate this by fully fixing the next correction \(A^{(2)}(S,T)\). To this end we make the assumption that \(A^{(k)}(S,T)\) should also have a representation as an integral over the Riemann sphere, the world-sheet for genus 0 closed string amplitudes \[A^{(k)}(S,T)=\int d^{2}z|z|^{-2S-2}|1-z|^{-2T-2}G^{(k)}_{\text{tot}}(S,T,z)\,, \tag{9}\] where the integration measure is defined as \(d^{2}z=dzd\overline{z}/(-2\pi i)\). The flat space amplitude (7) has this form with the manifestly crossing-symmetric integrand \[G^{(0)}_{\text{tot}}(S,T,z)=\frac{1}{3}\left(\frac{1}{U^{2}}+\frac{|z|^{2}}{S ^{2}}+\frac{|1-z|^{2}}{T^{2}}\right)\,. \tag{10}\] At general order we expect the following structure \[G^{(k)}_{\text{tot}}(S,T,z)=G^{(k)}\left(S,T,z\right)+|z|^{2}G^{(k)}\left(U,T, \tfrac{1}{z}\right)+|1-z|^{2}G^{(k)}\left(S,U,\tfrac{z}{z-1}\right), \tag{11}\] with \(G^{(k)}(S,T,z)\) symmetric under the simultaneous exchange \(z\to 1-z\) and \(S\leftrightarrow T\), so that after integration a symmetric function in \(S,T\) is produced. 
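One can check explicitly that the integrand (10) reproduces (7), using the standard complex beta integral on the sphere (written here for the measure \(d^{2}z=dzd\overline{z}/(-2\pi i)\); this verification is our addition):

```latex
\int d^{2}z\,|z|^{2a-2}|1-z|^{2b-2}
 = \frac{\Gamma(a)\,\Gamma(b)\,\Gamma(c)}{\Gamma(1-a)\,\Gamma(1-b)\,\Gamma(1-c)}\,,
 \qquad a+b+c=1\,.
% e.g. the |z|^2/S^2 term of (10) corresponds to (a,b,c) = (1-S, -T, -U):
\frac{1}{S^{2}}\int d^{2}z\,|z|^{-2S}|1-z|^{-2T-2}
 = \frac{\Gamma(1-S)\Gamma(-T)\Gamma(-U)}{S^{2}\,\Gamma(S)\Gamma(1+T)\Gamma(1+U)}
 = -\frac{\Gamma(-S)\Gamma(-T)\Gamma(-U)}{\Gamma(1+S)\Gamma(1+T)\Gamma(1+U)}\,,
```

where the last step uses \(\Gamma(1-S)=-S\,\Gamma(-S)\) and \(\Gamma(1+S)=S\,\Gamma(S)\). The \(1/U^{2}\) and \(|1-z|^{2}/T^{2}\) terms give the same result, so the factor \(\frac{1}{3}\) in (10) yields exactly \(A^{(0)}(S,T)\) in (7).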
In our proposed solution \(G^{(k)}(S,T,z)\) is a single-valued function of \(z\) of transcendental weight \(3k\), as motivated in the introduction, and rational in \(S,T\) with homogeneous degree \(2k-2\). More precisely \[G^{(k)}(S,T,z)=\sum_{u}\frac{p_{u}^{(k)s}(S,T)}{U^{2}}\mathcal{L}_{u}^{(k)s}(z) +\sum_{v}\frac{p_{v}^{(k)a}(S,T)}{U^{2}}\mathcal{L}_{v}^{(k)a}(z)\,, \tag{12}\] where \(u/v\) run over a basis of transcendentality \(3k\) single-valued multiple polylogarithms (SVMPLs), symmetric/anti-symmetric under the exchange of \(z\leftrightarrow 1-z\). This includes SVMPLs of weight \(3k\), but also \(\zeta(3)\) times SVMPLs of weight \(3k-3\) and so on. Furthermore, \(p_{u}^{(k)s}(S,T)/p_{v}^{(k)a}(S,T)\) are symmetric/anti-symmetric polynomials of degree \(2k\). A solution for \(G^{(1)}(S,T,z)\) was presented in [13]. At each order \(k\), \(G^{(k)}(S,T,z)\) depends on a finite number of coefficients. Our algorithm to fix them is to plug our ansatz in (9) and compute the residues of \(A^{(k)}(S,T)\) at \(S=\delta\): \[A^{(k)}(S,T)=\frac{R_{3k+1}^{(k)}(T,\delta)}{(S-\delta)^{3k+1}}+\frac{R_{3k}^{ (k)}(T,\delta)}{(S-\delta)^{3k}}+\ldots+\frac{R_{1}^{(k)}(T,\delta)}{S-\delta} +\text{regular}\,,\quad\delta=1,2,\ldots \tag{13}\] The same residues can be computed independently in terms of the OPE data of the exchanged single-trace operators, using the dispersive sum rules. At a given order, the higher order poles are fixed in terms of the OPE data at lower orders. This results in strong constraints for the coefficients in our ansatz, fixing \(A^{(k)}(S,T)\) almost completely. For the cases we analysed, we found that \(A^{(1)}(S,T)\) is actually fully fixed, while \(A^{(2)}(S,T)\) is fully fixed once we input the conformal dimension of the Konishi operator at this order, available from integrability. In general we expect that providing the conformal dimensions of the first few Konishi-like operators, i.e. 
operators on the leading Regge trajectory, will fully fix \(A^{(k)}(S,T)\). In the remaining sections we demonstrate our program explicitly for \(A^{(1)}(S,T)\) and \(A^{(2)}(S,T)\). This solution then passes various independent checks. In particular it matches the conformal dimensions of the whole tower of Konishi-like operators, obtained using integrability, and reproduces the two Wilson coefficients that were previously known from localisation.

## 3 Dispersive sum rules

The role of dispersive sum rules is to connect our expressions for \(A^{(k)}(S,T)\) with the OPE data of single-trace superconformal primaries in the expansion \[\mathcal{T}(U,V)=U^{-2}\sum_{\begin{subarray}{c}\text{superconformal}\\ \text{primaries }\mathcal{O}_{\tau,\ell}\end{subarray}}C^{2}_{\tau,\ell}G_{\tau+4,\ell}(U,V)\,, \tag{14}\] where \(G_{\tau,\ell}(U,V)\) is a conformal block in 4 dimensions which here takes into account the contributions of all (super-)descendants of a given superconformal primary. The leading conformal dimensions and OPE coefficients of these operators in a large \(\lambda\) expansion are determined by the corresponding flat space data, so the operators can be labelled by the flat space mass level \(\delta=1,2,\ldots\) as well as the spin \(\ell\). In terms of these labels the operators are degenerate and the degeneracies were recently estimated in [18]. We show the expected degeneracies for the correlator at hand in Figure 1. We define the large \(\lambda\) expansion of the OPE data as \[\tau = \tau_{0}\lambda^{\frac{1}{4}}+\tau_{1}+\tau_{2}\lambda^{-\frac{1}{4}}+\ldots\,, \tag{10}\] \[C_{\tau,\ell}^{2} = \frac{\pi^{3}}{2^{12}}\frac{2^{-2\tau}\tau^{6}}{\sin^{2}(\frac{\pi\tau}{2})}\frac{1}{2^{2\ell}(\ell+1)}\left(f_{0}+f_{1}\lambda^{-\frac{1}{4}}+f_{2}\lambda^{-\frac{1}{2}}+\ldots\right)\,.
\tag{11}\] The dispersive sum rules for the correlator at hand up to order \(1/\sqrt{\lambda}\) were derived in [11; 12] and we repeat them here for completeness \[\alpha_{a,b}^{(0)} = \sum_{\delta=1}^{\infty}\sum_{q=0}^{b}\frac{c_{a,b,q}}{\delta^{3+2a+3b}}F_{q}^{(0)}(\delta)\,,\] \[\alpha_{a,b}^{(1/2)} = \sum_{\delta=1}^{\infty}\sum_{q=0}^{b}\frac{c_{a,b,q}}{\delta^{\frac{7}{2}+2a+3b}}\left(F_{q}^{(1)}(\delta)-(3+2a+3b)T_{q}^{(1)}(\delta)\right)\,, \tag{12}\] \[\alpha_{a,b}^{(1)} = \sum_{\delta=1}^{\infty}\sum_{q=0}^{b}\frac{c_{a,b,q}}{\delta^{4+2a+3b}}\bigg{(}F_{q}^{(2)}(\delta)-(3+2a+3b)T_{q}^{(2)}(\delta)+\sum_{j=0}^{1}(q+1)_{j}P_{3,j}^{(1)}(a,b,q)F_{q+j}^{(0)}(\delta)\bigg{)},\] Here \(c_{a,b,q}\) are the combinatorial coefficients \[c_{a,b,q} = \frac{(-1)^{q}(2a+3b-3q)\Gamma(a+b-q)}{2\Gamma(a+1)\Gamma(b-q+1)}\] \[{}_{4}F_{3}\left(\tfrac{q+1}{2},\tfrac{q}{2},q-b,q+1-\tfrac{2}{3}a-b;q+1,q+1-a-b,q-\tfrac{2}{3}a-b;4\right)\,,\] and the functions \(F_{q}^{(k)}(\delta)\) and \(T_{q}^{(k)}(\delta)\) encode the OPE data, starting with \[F_{q}^{(0)}(\delta)=\frac{4^{q}}{\Gamma(2q+2)}\sum_{\ell=0,2,\ldots}^{2(\delta-1)}(\ell-q+1)_{q}(\ell+2)_{q}\langle f_{0}\rangle_{\delta,\ell}\,, \tag{10}\] for the leading contribution to the OPE coefficients (11). The angle brackets \(\langle\ldots\rangle_{\delta,\ell}\) denote a sum over all the degenerate OPE data for a given \(\delta,\ell\), for which the degeneracies are shown in Figure 1. The precise definitions of the remaining functions \(F_{q}^{(k)}(\delta)\) and \(T_{q}^{(k)}(\delta)\), along with the \(P_{i,j}^{(k)}(a,b,q)\), which are polynomials in \(a,b,q\) of degree \(i-j\), are given in appendix A. The coefficients \(\alpha_{a,b}^{(0)}\) are known from flat space (7) and can be used to compute the OPE data \(\tau_{0}(\delta,\ell)=2\sqrt{\delta}\) and \(\langle f_{0}\rangle_{\delta,\ell}\).
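To illustrate the flat-space input: subtracting the supergravity pole from (7) and sending \(S,T\to 0\) isolates the constant term \(2\alpha^{(0)}_{0,0}=2\zeta(3)\) of the expansion (6), which via the factor \(\Gamma(2a+3b+6)\) in (3) becomes the \(120\zeta(3)/\lambda^{3/2}\) contact term of the Mellin amplitude. A quick numerical sketch with mpmath (the helper name `A0` is ours):

```python
from mpmath import mp, gamma, zeta, mpf

mp.dps = 30  # high precision: the 1/(S T U) pole subtraction is delicate

def A0(S, T):
    """Flat-space Virasoro-Shapiro amplitude (7), with U = -S - T."""
    U = -S - T
    return -gamma(-S)*gamma(-T)*gamma(-U)/(gamma(S + 1)*gamma(T + 1)*gamma(U + 1))

S, T = mpf('1e-4'), mpf('1.3e-4')
const = A0(S, T) - 1/(S*T*(-S - T))   # subtract SUGRA^(0) = 1/sigma3_hat
assert abs(const - 2*zeta(3)) < 1e-3  # constant term is 2*alpha^(0)_{0,0} = 2 zeta(3)

# in (3) the a = b = 0 term carries Gamma(2a+3b+6) = Gamma(6) = 120,
# producing the 120 zeta(3) / lambda^(3/2) term of the Mellin amplitude
assert gamma(6) == 120
```

The same numerical expansion, pushed to higher orders in \(S,T\), gives all the \(\alpha^{(0)}_{a,b}\).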
The first corrections are somewhat trivial as matching with (3) yields \[\alpha_{a,b}^{(1/2)}=0\,,\quad\forall a,b\,, \tag{11}\] which has the solution \[\tau_{1}(\delta,\ell)=-\ell-2\,,\qquad\langle f_{1}\rangle_{\delta,\ell}= \frac{3\ell+\frac{23}{4}}{\sqrt{\delta}}\langle f_{0}\rangle_{\delta,\ell}\,. \tag{12}\] These equations are used to eliminate \(\tau_{1}(\delta,\ell)\) and \(\langle f_{1}\rangle_{\delta,\ell}\) from all the other dispersive sum rules. The sum rule for \(\alpha_{a,b}^{(1)}\) was solved in [12] and the solution fixes the OPE data \(\langle f_{0}\tau_{2}\rangle_{\delta,\ell}\) and \(\langle f_{2}\rangle_{\delta,\ell}\). The most efficient way to obtain the general sum rules is to use a crossing-symmetric dispersion relation as described in [12]. Performing this computation up to order \(1/\lambda\) we find the following two new sum rules \[\alpha_{a,b}^{(3/2)}=\sum_{\delta=1}^{\infty}\sum_{q=0}^{b}\frac{ c_{a,b,q}}{\delta^{\frac{9}{2}+2a+3b}}\bigg{(}F_{q}^{(3)}(\delta)-(3+2a+3b)T_{q}^{ (3)}(\delta)+\sum_{j=0}^{1}(q+1)_{j}P_{2,j}^{(3/2)}(q)F_{q+j}^{(0)}(\delta) \bigg{)}\,,\] \[\alpha_{a,b}^{(2)}=\sum_{\delta=1}^{\infty}\sum_{q=0}^{b}\frac{ c_{a,b,q}}{\delta^{5+2a+3b}}\bigg{(}F_{q}^{(4)}(\delta)-(3+2a+3b)T_{q}^{(4)}( \delta)+\frac{1}{4}(3+2a+3b)(7+4a+6b)T_{q}^{(2,2)}(\delta)\] \[+\sum_{j=0}^{1}(q+1)_{j}\left(P_{3,j}^{(2)}(a,b,q)F_{q+j}^{(2)}( \delta)+P_{4,j}^{(2)}(a,b,q)T_{q+j}^{(2)}(\delta)\right)+\sum_{j=0}^{2}(q+1)_{ j}P_{6,j}^{(2)}(a,b,q)F_{q+j}^{(0)}(\delta)\bigg{)}\,, \tag{13}\] where further definitions can again be found in appendix A. We solve \[\alpha_{a,b}^{(3/2)}=0\,,\quad\forall a,b\,, \tag{14}\] to find \[\tau_{3}(\delta,\ell)=0\,,\quad\langle f_{3}\rangle_{\delta,\ell}=\frac{3\ell+ \frac{23}{4}}{\sqrt{\delta}}\left(\langle f_{2}\rangle_{\delta,\ell}-\frac{ \langle f_{0}\tau_{2}\rangle_{\delta,\ell}}{2\sqrt{\delta}}\right)-\frac{560 \ell^{3}+3220\ell^{2}+6164\ell+3931}{64\delta^{3/2}}\langle f_{0}\rangle_{ \delta,\ell}\,. 
\tag{15}\] Finally, the sum rule for \(\alpha^{(2)}_{a,b}\) contains unknown data on both sides of the equation, in particular the OPE data \[F^{(4)}_{q}(\delta)\leftrightarrow\langle f_{4}\rangle_{\delta,\ell}\,,\qquad T^{( 4)}_{q}(\delta)\leftrightarrow\langle f_{0}\tau_{4}+f_{2}\tau_{2}\rangle_{ \delta,\ell}\,,\qquad T^{(2,2)}_{q}(\delta)\leftrightarrow\langle f_{0}\tau_{ 2}^{2}\rangle_{\delta,\ell}\,. \tag{3.12}\] In order to relate the integral representation for \(A^{(2)}(S,T)\) to this OPE data we would like to resum the low-energy expansion \[2\sum_{a,b=0}^{\infty}\,\hat{\sigma}_{2}^{a}\hat{\sigma}_{3}^{b}\alpha^{(2)}_{a,b}=\sum_{\delta=1}^{\infty}\,\sum_{q=0}^{2(\delta-1)}\,\sum_{a,b=0}^{\infty}P _{\delta,q}(a,b)c_{a,b,q}\left(\frac{\hat{\sigma}_{2}}{\delta^{2}}\right)^{a} \left(\frac{\hat{\sigma}_{3}}{\delta^{3}}\right)^{b}\,, \tag{3.13}\] where \(P_{\delta,q}(a,b)\) are polynomials in \(a\) and \(b\) and the sum over \(q\) truncates for fixed \(\delta\) because \[T^{(k)}_{q}(\delta)=0\,,\qquad F^{(k)}_{q}(\delta)=0\,,\qquad q>2(\delta-1)\,. \tag{3.14}\] The sums over \(a\) and \(b\) can now be done by using [19] \[\sum_{a,b=0}^{\infty}c_{a,b,q}x^{a}y^{b}=\frac{1}{2}\frac{y+2}{1-x-y}\left( \frac{\sqrt{1-4y}-1}{2}\right)^{q}\,, \tag{3.15}\] and turning \(a\) and \(b\) into operators \(x\partial_{x}\) and \(y\partial_{y}\) acting on this sum. In this way we can compute the residues at \(S=\delta\) defined in (2.13). The residues of order seven to four are completely determined in terms of known OPE data, whereas the residues of order three to one depend on the OPE data (3.12). The expressions for all the residues of \(A^{(2)}(S,T)\) can be found in (A.4) and (A.5). The fact that the dispersive sum rules fix most of \(A^{(k)}(S,T)\) in terms of OPE data from lower orders is one of the reasons we are able to solve for \(A^{(k)}(S,T)\) order by order. 
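The closed form (3.15) can be checked term by term against the \({}_{4}F_{3}\) definition of \(c_{a,b,q}\); a small sympy sketch (the helper names `c` and `gen_coeff` are ours, and we restrict to \(a\geq 1\), \(q\leq b\) so that the truncated hypergeometric series never hits a vanishing denominator Pochhammer):

```python
import sympy as sp

x, y = sp.symbols('x y')

def c(a, b, q):
    """c_{a,b,q}: prefactor times the terminating 4F3 series, summed explicitly."""
    pre = sp.Rational((-1)**q, 2) * (2*a + 3*b - 3*q) * sp.gamma(a + b - q) / (
        sp.gamma(a + 1) * sp.gamma(b - q + 1))
    num = [sp.Rational(q + 1, 2), sp.Rational(q, 2), q - b, q + 1 - sp.Rational(2, 3)*a - b]
    den = [q + 1, q + 1 - a - b, q - sp.Rational(2, 3)*a - b]
    total = sp.Integer(0)
    for n in range(b - q + 1):  # (q-b)_n = 0 for n > b-q, so the series terminates
        term = sp.Integer(4)**n / sp.factorial(n)
        for par in num:
            term *= sp.rf(par, n)
        for par in den:
            term /= sp.rf(par, n)
        total += term
    return sp.simplify(pre * total)

def gen_coeff(a, b, q, order=6):
    """Taylor coefficient of x^a y^b of the generating function (3.15)."""
    G = sp.Rational(1, 2)*(y + 2)/(1 - x - y) * ((sp.sqrt(1 - 4*y) - 1)/2)**q
    ser = sp.series(G, x, 0, order).removeO()
    ser = sp.series(sp.expand(ser), y, 0, order).removeO()
    return sp.expand(ser).coeff(x, a).coeff(y, b)

for a in (1, 2):
    for b in (0, 1, 2):
        for q in range(b + 1):
            assert sp.simplify(c(a, b, q) - gen_coeff(a, b, q)) == 0
```

For instance \(c_{1,1,0}=\frac{5}{2}\) and \(c_{1,2,1}=-\frac{7}{2}\), matching the corresponding Taylor coefficients of (3.15).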
## 4 World-sheet correlator

We start with the following ansatz for the world-sheet correlator \[A^{(k)}(S,T)=B^{(k)}(S,T)+B^{(k)}(U,T)+B^{(k)}(S,U)\,, \tag{4.1}\] where \(B^{(k)}(S,T)\) is symmetric under exchange of \(S\) and \(T\) and has the representation \[B^{(k)}(S,T)=\int d^{2}z|z|^{-2S-2}|1-z|^{-2T-2}G^{(k)}(S,T,z)\,, \tag{4.2}\] so that \(A^{(k)}(S,T)\) is manifestly crossing-symmetric. The symmetry condition for \(B^{(k)}(S,T)\) translates to the following condition for \(G^{(k)}(S,T,z)\) \[G^{(k)}(S,T,z)=G^{(k)}(T,S,1-z)\,. \tag{4.3}\] We further impose \(G^{(k)}(S,T,z)\) to be even under the exchange \(z\leftrightarrow\overline{z}\) because it should be a function of the 2D conformal cross-ratios \(z\overline{z}\) and \((1-z)(1-\overline{z})\) \[G^{(k)}(S,T,z)=G^{(k)}(S,T,\overline{z})\,. \tag{4.4}\] Note that the integral (4.2) would project out the odd component anyway.

### Basis of SVMPLs

Let us construct a basis of SVMPLs that will allow us to easily solve (4.3) and (4.4). For words of length \(L\) there are \(2^{L}\) linearly independent SVMPLs. To solve (4.4) we would like to project onto the even component under the exchange \(z\leftrightarrow\overline{z}\). Given a word \(w\) we denote by \(\tilde{w}\) the reversed word. The projection is then equivalent to a quotient by the equivalence \[\mathcal{L}_{w}(z)\simeq\mathcal{L}_{\tilde{w}}(z)\,. \tag{4.5}\] More precisely, it can be shown that the even components of \(\mathcal{L}_{w}(z)\) and \(\mathcal{L}_{\tilde{w}}(z)\) are the same, up to single-valued multiple zeta values times SVMPLs of lower weight. At length \(L\) this projection reduces the number of independent SVMPLs from \(2^{L}\) to \(2^{L-1}+2^{\left\lfloor\frac{L-1}{2}\right\rfloor}\). For instance, at length three we go from \(8\to 6\), while at length six we go from \(64\to 36\). In our ansatz at weight three we also need to include \(\zeta(3)\) (times the polylogarithm 1), giving a total of 7 independent functions.
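This counting is easy to confirm by brute force; a short sketch (the helper name `svmpl_basis_size` is ours), enumerating binary words modulo the reversal identification (4.5):

```python
from itertools import product

def svmpl_basis_size(L):
    """Number of binary words of length L modulo the identification w ~ reverse(w)."""
    classes = {min(w, tuple(reversed(w))) for w in product('01', repeat=L)}
    return len(classes)

# matches the closed formula 2^(L-1) + 2^floor((L-1)/2)
for L in range(1, 9):
    assert svmpl_basis_size(L) == 2**(L - 1) + 2**((L - 1) // 2)

# total number of independent functions in the ansatz:
weight3 = svmpl_basis_size(3) + 1                     # + zeta(3) times 1
weight6 = (svmpl_basis_size(6) + svmpl_basis_size(3)  # + zeta(3) x weight three
           + svmpl_basis_size(1) + 1)                 # + zeta(5) x weight one, + zeta(3)^2
assert (weight3, weight6) == (7, 45)
```

The totals 7 and 45 are exactly the basis sizes used at transcendentality three and six.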
At weight six we also include \(\zeta(3)\) times the projection for length three (6 functions), plus \(\zeta(5)\) times the projection for length one (2 functions) plus \(\zeta(3)^{2}\) times 1 to get in total \(36+6+2+1=45\) independent functions. SVMPLs are also closed under the transformation \(z\to 1-z\) and we find it convenient to split them into symmetric and anti-symmetric components, defining \[\mathcal{L}_{w}^{s}(z) =\mathcal{L}_{w}(z)+\mathcal{L}_{w}(1-z)+\mathcal{L}_{w}(\overline {z})+\mathcal{L}_{w}(1-\overline{z})\,, \tag{4.6}\] \[\mathcal{L}_{w}^{a}(z) =\mathcal{L}_{w}(z)-\mathcal{L}_{w}(1-z)+\mathcal{L}_{w}(\overline {z})-\mathcal{L}_{w}(1-\overline{z})\,.\] At the level of the words (and modulo zeta values times SVMPLs of lower order), the transformation \(z\to 1-z\) flips \(0\leftrightarrow 1\). For the total ansatz at transcendentality three we have 4 symmetric and 3 anti-symmetric functions \[\mathcal{L}^{(1)s} =\left(\mathcal{L}_{000}^{s}(z),\mathcal{L}_{001}^{s}(z), \mathcal{L}_{010}^{s}(z),\zeta(3)\right), \tag{4.7}\] \[\mathcal{L}^{(1)a} =\left(\mathcal{L}_{000}^{a}(z),\mathcal{L}_{001}^{a}(z), \mathcal{L}_{010}^{a}(z)\right),\] and at transcendentality six (for which we have 45 functions), we have 25 symmetric and 20 anti-symmetric functions \[\mathcal{L}^{(2)s} =\left(\mathcal{L}_{000000}^{s}(z),\mathcal{L}_{000001}^{s}(z), \mathcal{L}_{000010}^{s}(z),\mathcal{L}_{000011}^{s}(z),\mathcal{L}_{000100} ^{s}(z),\mathcal{L}_{000101}^{s}(z),\mathcal{L}_{000110}^{s}(z),\right.\] \[\left.\mathcal{L}_{000111}^{s}(z),\mathcal{L}_{001001}^{s}(z), \mathcal{L}_{001010}^{s}(z),\mathcal{L}_{001011}^{s}(z),\mathcal{L}_{001100} ^{s}(z),\mathcal{L}_{001110}^{s}(z),\right.\] \[\left.\mathcal{L}_{010001}^{s}(z),\mathcal{L}_{010010}^{s}(z), \mathcal{L}_{010101}^{s}(z),\mathcal{L}_{010110}^{s}(z),\mathcal{L}_{011001} ^{s}(z),\mathcal{L}_{011110}^{s}(z),\right.\] \[\left.\zeta(3)\mathcal{L}_{000}^{s}(z),\zeta(3)\mathcal{L}_{001}^ 
{s}(z),\zeta(3)\mathcal{L}_{010}^{s}(z),\zeta(5)\mathcal{L}_{0}^{s}(z),\zeta( 3)^{2}\right), \tag{4.8}\] and \[\mathcal{L}^{(2)a} =\left(\mathcal{L}_{00000}^{a}(z),\mathcal{L}_{000001}^{a}(z), \mathcal{L}_{000010}^{a}(z),\mathcal{L}_{000011}^{a}(z),\mathcal{L}_{00011}^ {a}(z),\mathcal{L}_{000100}^{a}(z),\mathcal{L}_{000110}^{a}(z),\mathcal{L}_{00 0110}^{a}(z),\right.\] \[\left.\mathcal{L}_{001001}^{a}(z),\mathcal{L}_{001010}^{a}(z), \mathcal{L}_{001100}^{a}(z),\mathcal{L}_{001101}^{a}(z),\mathcal{L}_{001110} ^{a}(z),\mathcal{L}_{001110}^{a}(z),\mathcal{L}_{010001}^{a}(z),\right.\] \[\left.\mathcal{L}_{010110}^{a}(z),\mathcal{L}_{011110}^{a}(z), \zeta(3)\mathcal{L}_{000}^{a}(z),\zeta(3)\mathcal{L}_{001}^{a}(z),\zeta(3) \mathcal{L}_{010}^{a}(z),\zeta(5)\mathcal{L}_{0}^{a}(z)\right). \tag{4.9}\] In terms of these vectors of basis elements our ansatz reads \[G^{(k)}(S,T,z)=\sum_{u}r_{u}^{(k)s}\mathcal{L}_{u}^{(k)s}+\sum_{v}r_{v}^{(k)a} \mathcal{L}_{v}^{(k)a}\,, \tag{4.10}\] where \(r_{u}^{(k)s}/r_{v}^{(k)a}\) are symmetric / antisymmetric homogeneous functions of \(S\) and \(T\) of weight \(2k-2\). ### Ambiguities Since \(S+T+U=0\), certain insertions \(G^{(k)}(S,T,z)\) contribute to \(B^{(k)}(S,T)\) but don't contribute to the total answer \(A^{(k)}(S,T)\). This leads to an ambiguity when constructing integrands. Since we are ultimately interested in the final answer \(A^{(k)}(S,T)\), integrands differing by these ambiguities are of course equivalent. Let us start with weight zero. At this order \[G^{(0)}(S,T,z)=f(S,T)\,, \tag{4.11}\] and symmetry in \(S,T\) implies \(f(S,T)=f(T,S)\). Any function \(f(S,T)\) satisfying \[U^{2}(f(S,T)+f(T,S))+T^{2}(f(S,U)+f(U,S))+S^{2}(f(U,T)+f(T,U))=0\,, \tag{4.12}\] leads to a vanishing contribution to \(A^{(0)}(S,T)\). If we now assume that \(f(S,T)\) has the correct denominator \[f(S,T)=\frac{c}{U^{2}}\,, \tag{4.13}\] the constraint (4.12) becomes \(c=0\) and there is no more ambiguity. 
For higher weights we will also fix the denominator to be \(1/U^{2}\), mimicking the leading term (2.10) \[r_{u}^{(k)s}=\frac{p_{u}^{(k)s}(S,T)}{U^{2}}\,,\qquad r_{v}^{(k)a}=\frac{p_{v} ^{(k)a}(S,T)}{U^{2}}\,. \tag{4.14}\] This fixes part of the ambiguities, but in general a finite number of ambiguities remains even after fixing the denominator. Our final result for \(G^{(k)}(S,T,z)\) then has the form \[G^{(k)}(S,T,z)=\sum_{u}r_{u}^{(k)s}\mathcal{L}_{u}^{(k)s}+\sum_{v}r_{v}^{(k)a} \mathcal{L}_{v}^{(k)a}+\sum_{j=1}^{n_{\rm amb}}a_{j}\left(\sum_{u}\hat{r}_{ju} ^{(k)s}\mathcal{L}_{u}^{(k)s}+\sum_{v}\hat{r}_{jv}^{(k)a}\mathcal{L}_{v}^{(k) a}\right)\,, \tag{4.15}\] in terms of \(n_{\rm amb}\) unfixed coefficients \(a_{j}\) which parameterise the ambiguities. For the case \(k=1\) there are two ambiguities, given by \[\hat{r}_{1}^{(1)s}=\left(0,0,0,\frac{S^{2}+4ST+T^{2}}{(S+T)^{2}}\right)\,, \qquad\hat{r}_{1}^{(1)a}=(0,0,0)\, \tag{4.16}\] and \[\begin{split}&\hat{r}_{2}^{(1)s}=\left(-\frac{5S^{2}+8ST+5T^{2}}{12 (S+T)^{2}},\frac{S^{2}+ST+T^{2}}{3(S+T)^{2}},-\frac{5S^{2}+8ST+5T^{2}}{12(S+T)^ {2}},\frac{S^{2}+T^{2}}{(S+T)^{2}}\right)\,,\\ &\hat{r}_{2}^{(1)a}=\frac{S-T}{S+T}\left(-\frac{5}{12},\frac{1}{ 3},\frac{1}{4}\right)\,.\end{split} \tag{4.17}\] For \(k=2\) we find a 17 parameter family of ambiguities, spanned by complicated vectors. The full set of ambiguities is included in the ancillary Mathematica notebook. ### Solutions In order to fix the coefficients in our ansatz, we compute the residues of \(A^{(k)}(S,T)\) using (9), recalling that \[G^{(k)}_{\rm tot}(S,T,z)=G^{(k)}(S,T,z)+|z|^{2}G^{(k)}(U,T,\tfrac{1}{z})+|1-z|^{2 }G^{(k)}(S,U,\tfrac{z}{z-1})\,. \tag{42}\] Single-valued polylogarithms are closed under \(z\to 1-z\), \(z\to 1/z\) and \(z\to\tfrac{z}{z-1}\) and we explain how to compute their transformation properties in appendix B. 
This can be used to write \(G^{(k)}_{\rm tot}(S,T,z)\) in terms of SVMPLs of argument \(z\), which are easy to expand around \(z=0\). We can then compute any residue at \(S=\delta=1,2,\ldots\) by integrating over a disc around \(z=0\) using polar coordinates \(z=\rho e^{i\alpha}\). The poles then arise through integrals of the form \[\int_{0}^{\rho_{0}}d\rho\,\rho^{-2S+2\delta-1}\frac{1}{p!}\log^{p}(\rho^{2})= -\frac{1}{2}\frac{1}{(S-\delta)^{p+1}}+O\left((S-\delta)^{0}\right)\,. \tag{43}\] The residues at \(S=0\) can be computed using the polar terms in the low energy expansion as discussed in section 5.2 below. These poles are matched to the supergravity terms shown in (6). For \(k=1\) our ansatz has \(2\cdot 4+3\) rational parameters, which are fixed by matching the residues with those from the SUGRA term and the dispersive sum rules, where it is not necessary to specify the OPE data \(\langle f_{0}\tau_{2}\rangle_{\delta,\ell}\) or \(\langle f_{2}\rangle_{\delta,\ell}\). The result reads \[r^{(1)s}=\left(-\frac{1}{6}\,,0\,,-\frac{1}{4}\,,1\right)\,,\qquad r^{(1)a}= \frac{S-T}{S+T}\left(-\frac{1}{6}\,,\frac{1}{3}\,,\frac{1}{6}\right)\,. \tag{44}\] For the next correction at \(k=2\) our ansatz for the functions \(r^{(2)s}_{u}\) and \(r^{(2)a}_{v}\) can be parametrised by \(3\cdot 25+2\cdot 20=115\) rational numbers. 17 of them are ambiguities, so that \(A^{(2)}(S,T)\) depends on 98 coefficients. 
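The elementary integral (43) can be verified symbolically; a short sympy sketch (our check), substituting \(u=\rho^{2}\) and \(\epsilon=\delta-S\) so that the claimed leading singularity \(-\frac{1}{2}(S-\delta)^{-p-1}\) becomes \(\frac{1}{2}(-1)^{p}\epsilon^{-p-1}\):

```python
import sympy as sp

u, a, eps = sp.symbols('u a epsilon', positive=True)

# after u = rho^2 the integral in (43) is (1/2) * int_0^a u^(eps-1) log(u)^p / p! du,
# with eps = delta - S and a = rho_0^2; its leading singularity as eps -> 0
# should be (1/2) * (-1)^p / eps^(p+1), i.e. -1/2 * (S - delta)^(-p-1)
for p in range(3):
    I = sp.integrate(u**(eps - 1) * sp.log(u)**p / sp.factorial(p), (u, 0, a))
    lead = sp.limit(I * eps**(p + 1), eps, 0)
    assert sp.simplify(lead - (-1)**p) == 0
```

Equivalently, the integral is \(\frac{1}{p!}\partial_{\epsilon}^{p}(a^{\epsilon}/\epsilon)\), whose most singular term is \((-1)^{p}\epsilon^{-p-1}\).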
We can now match the residues from the ansatz with those of the supergravity term and with the expressions (A.4) and (A.5) computed from the dispersive sum rules, making the assumption that the OPE data has the form \[\langle f_{4}\rangle_{\delta,\ell} =q^{1}_{\delta,\ell}\zeta(3)^{2}+q^{2}_{\delta,\ell}\zeta(5)+q^{3}_{\delta,\ell}\zeta(3)+q^{4}_{\delta,\ell}\,, \tag{45}\] \[\langle f_{0}\tau_{4}+f_{2}\tau_{2}\rangle_{\delta,\ell} =q^{5}_{\delta,\ell}\zeta(3)+q^{6}_{\delta,\ell}\,, \qquad q^{i}_{\delta,\ell}\in\mathbb{Q}\,,\] \[\langle f_{0}\tau_{2}^{2}\rangle_{\delta,\ell} =q^{7}_{\delta,\ell}\,.\] Matching all residues (of order 1 to 7) for \(\delta=0,1,\ldots,6\) fixes 94 of the 98 parameters, and including also the cases \(\delta=7,8\) does not fix any more parameters. The remaining four parameters can be fixed using the fact that the operators on the leading Regge trajectory are non-degenerate, as well as the known dimension \(\tau_{4}(1,0)\) of the Konishi operator, to insert the OPE data \[\langle f_{0}\tau_{2}^{2}\rangle_{1,0}=4\,,\qquad\langle f_{0}\tau_{2}^{2}\rangle_{2,2}=\frac{27}{2}\,,\qquad\langle f_{0}\tau_{4}+f_{2}\tau_{2}\rangle_{1,0}=\zeta(3)+\frac{413}{16}\,.
\tag{46}\] The final result has the form \[r^{(2)s} =\frac{S^{2}+T^{2}}{2^{4}3^{5}}\Big{(}-216,26739,13111,-7271,-9286,-9 139,-26100,9219,-12672,-15917,\] \[\quad 3541,-9901,-823,29697,-17307,1674,10530,3780,23760,-3483,0,0,0,0,0\Big{)}\] \[\quad+\frac{ST}{2^{4}3^{5}}\Big{(}216,8163,24433,-132845,-33460,-92 347,-25725,67200,-21045,18571,\] \[\quad 27967,-7363,52694,9372,1848,-7575,21006,26760,26769,55233,0,0,0, 0,0,0\Big{)}\,,\] \[r^{(2)a} =\frac{(S^{2}+T^{2})(S-T)}{2^{4}3^{5}(S+T)}\Big{(}-216,26739,13111,-7271,-9286,-9139,-26100,-12672,\] \[\quad-15917,-9901,2417,17061,-17307,1674,-432,3483,0,0,0,0\Big{)} \tag{4.23}\] \[\quad+\frac{ST(S-T)}{2^{4}3^{5}(S+T)}\Big{(}-216,61641,50655,-5698 5,-52032,4521,-26307,-2387,\] \[\quad-21173,5559,-43268,-16916,-67642,-19393,11432,15345,0,0,-84456,-5292\Big{)}\,,\] where we used the ambiguities to cancel the \(1/U^{2}\) in \(r^{(2)s}\) and to set some of the entries to zero. ## 5 Data and checks ### OPE data After having completely fixed \(A^{(2)}(S,T)\) we can now compute the OPE data for any mass level \(\delta\) by computing the residues of \(A^{(2)}(S,T)\) at \(S=\delta\) and matching them with the expressions from the sum rules (A.5). We include all the OPE data for \(\delta\leq 13\) in a Mathematica notebook. Along the Regge trajectories the OPE data admits analytic formulas, which we obtain by matching to our data. 
For the first Regge trajectory we find \[\langle f_{0}\tau_{2}^{2}\rangle_{\delta,2(\delta-1)} =\frac{r_{0}(\delta)}{4\delta^{2}}\left(3\delta^{2}-\delta+2 \right)^{2}\,,\] \[\langle f_{0}\tau_{4}+f_{2}\tau_{2}\rangle_{\delta,2(\delta-1)} =r_{0}(\delta)\sqrt{\delta}\left(3\delta^{2}-\delta-1\right) \zeta(3) \tag{5.1}\] \[\quad-\frac{r_{0}(\delta)}{192\delta^{5/2}}\left(336\delta^{5}-5 476\delta^{4}+2984\delta^{3}-3689\delta^{2}+439\delta+450\right)\,,\] \[\langle f_{4}\rangle_{\delta,2(\delta-1)} =r_{0}(\delta)\bigg{(}2\delta^{3}\zeta(3)^{2}-3\delta^{2}\zeta(5) +\frac{1}{48}\left(-112\delta^{3}+1728\delta^{2}-584\delta-345\right)\zeta(3)\] \[\quad+\frac{49\delta^{3}}{72}-\frac{389\delta^{2}}{20}+\frac{48 31\delta}{72}-\frac{7411}{192}-\frac{16415}{288\delta}+\frac{13219}{1920 \delta^{2}}+\frac{6723}{2048\delta^{3}}\bigg{)}\,,\] with \[r_{n}(\delta)=\frac{4^{2-2\delta}\delta^{2\delta-2n-1}(2\delta-2n-1)}{\Gamma( \delta)\Gamma\left(\delta-\left\lfloor\frac{n}{2}\right\rfloor\right)}\,. \tag{5.2}\] As these operators are non-degenerate (see Figure 1) we can omit the angle brackets and solve for the twists \[\tau\left(\tfrac{\ell}{2}+1,\ell\right) =\sqrt{2(\ell+2)}\lambda^{\frac{1}{4}}-\ell-2+\frac{3\ell^{2}+10 \ell+16}{4\sqrt{2(\ell+2)}}\lambda^{-\frac{1}{4}} \tag{5.3}\] \[\qquad-\frac{21\ell^{4}+144\ell^{3}+292\ell^{2}+80\ell-128+96( \ell+2)^{3}\zeta(3)}{32(2(\ell+2))^{\frac{3}{2}}}\lambda^{-\frac{3}{4}}+O( \lambda^{-\frac{5}{4}})\,,\] in precise agreement with the results from integrability [20; 21; 22]. 
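For example, specialising (5.3) to the Konishi operator (\(\delta=1\), \(\ell=0\)) gives \(\tau(1,0)=2\lambda^{\frac{1}{4}}-2+2\lambda^{-\frac{1}{4}}+\left(\frac{1}{2}-3\zeta(3)\right)\lambda^{-\frac{3}{4}}+O(\lambda^{-\frac{5}{4}})\); a quick symbolic check (our sympy sketch):

```python
import sympy as sp

l, lam = sp.symbols('ell lambda', positive=True)

# twist of the leading Regge trajectory, transcribed from (5.3)
tau = (sp.sqrt(2*(l + 2)) * lam**sp.Rational(1, 4) - l - 2
       + (3*l**2 + 10*l + 16) / (4*sp.sqrt(2*(l + 2))) * lam**sp.Rational(-1, 4)
       - (21*l**4 + 144*l**3 + 292*l**2 + 80*l - 128 + 96*(l + 2)**3*sp.zeta(3))
         / (32*(2*(l + 2))**sp.Rational(3, 2)) * lam**sp.Rational(-3, 4))

# specialise to the Konishi operator: ell = 0
konishi = sp.simplify(tau.subs(l, 0))
expected = (2*lam**sp.Rational(1, 4) - 2 + 2*lam**sp.Rational(-1, 4)
            + (sp.Rational(1, 2) - 3*sp.zeta(3))*lam**sp.Rational(-3, 4))
assert sp.simplify(konishi - expected) == 0
```

Note that the \(\lambda^{-\frac{1}{4}}\) coefficient \(2\) is consistent with the input \(\langle f_{0}\tau_{2}^{2}\rangle_{1,0}=4\) used above, since the Konishi operator is non-degenerate.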
For the second Regge trajectory we find \[\langle f_{0}\tau_{2}^{2}\rangle_{\delta,2(\delta-2)} =\frac{r_{1}(\delta)}{108\delta}\left(162\delta^{6}+207\delta^{5}- 376\delta^{4}+1227\delta^{3}-2156\delta^{2}+1152\delta-648\right)\,,\] \[\langle f_{0}\tau_{4}+f_{2}\tau_{2}\rangle_{\delta,2(\delta-2)} =r_{1}(\delta)\delta^{5/2}\bigg{(}\frac{1}{9}\left(18\delta^{3}+25 \delta^{2}-75\delta+23\right)\zeta(3)-\frac{7\delta^{3}}{6}+\frac{2941\delta^{ 2}}{216}-\frac{4147\delta}{432}\] \[\qquad-\frac{11977}{96}+\frac{433411}{1728\delta}-\frac{464351}{ 1728\delta^{2}}+\frac{65701}{288\delta^{3}}-\frac{601}{8\delta^{4}}\bigg{)}\,,\] \[\langle f_{4}\rangle_{\delta,2(\delta-2)} =r_{1}(\delta)\bigg{(}\frac{2}{3}\delta^{4}\left(2\delta^{2}+3 \delta-8\right)\zeta(3)^{2}-\delta^{3}\left(2\delta^{2}+3\delta-8\right)\zeta (5)\] \[\qquad-\delta\left(\frac{14\delta^{5}}{9}-\frac{463\delta^{4}}{27 }+\frac{125\delta^{3}}{9}+\frac{41183\delta^{2}}{216}-\frac{14647\delta}{48} +\frac{183}{2}\right)\zeta(3)\] \[\qquad+\frac{49\delta^{6}}{108}-\frac{31267\delta^{5}}{3240}+ \frac{7109\delta^{4}}{405}+\frac{786077\delta^{3}}{12960}-\frac{3101515\delta ^{2}}{5184}+\frac{49878301\delta}{25920}\] \[\qquad-\frac{109158059}{46080}+\frac{97891303}{92160\delta}- \frac{86003}{768\delta^{2}}\bigg{)}\,. \tag{5.4}\] For the two operators at \(\delta=2\), \(\ell=0\) we can perform another consistency check. Here some of our data reads \[\langle f_{0}\rangle_{2,0} =\,\sum_{I=1}^{2}f_{0}^{I}(2,0)=\frac{1}{4}\,,\] \[\langle f_{0}\tau_{2}\rangle_{2,0} =\,\sum_{I=1}^{2}f_{0}^{I}(2,0)\tau_{2}^{I}(2,0)=\sqrt{2}\,, \tag{5.5}\] \[\langle f_{0}\tau_{2}^{2}\rangle_{2,0} =\,\sum_{I=1}^{2}f_{0}^{I}(2,0)\tau_{2}^{I}(2,0)^{2}=8\,,\] and the anomalous dimensions \(\tau_{2}^{I}(2,0)\) were recently computed in [23] \[\tau_{2}^{1}(2,0)=4\sqrt{2}\,,\qquad\tau_{2}^{2}(2,0)=\sqrt{2}\,. \tag{5.6}\] We can use this input to solve the first two equations of (5.5) for \[f_{0}^{1}(2,0)=\frac{1}{4}\,,\qquad f_{0}^{2}(2,0)=0\,. 
\tag{5.7}\] The first check is that the third equation of (5.5) is also solved by this data. Note that one of the operators does not enter the equations due to \(f_{0}^{2}(2,0)=0\). If we assume this operator to be absent at subleading orders as well, i.e. \(f_{2}^{2}(2,0)=0\), we can use \[\begin{split}\langle f_{2}\rangle_{2,0}&=2\zeta(3)-\frac{387}{256}\,,\\ \langle f_{0}\tau_{4}+f_{2}\tau_{2}\rangle_{2,0}&=\frac{13\zeta(3)}{\sqrt{2}}-\frac{537}{32\sqrt{2}}\,,\end{split} \tag{5.8}\] to solve for \[\tau_{4}^{1}(2,0)=-\frac{75}{2\cdot 2^{\frac{3}{2}}}-\frac{24\zeta(3)}{2^{\frac{3}{2}}}\,, \tag{5.9}\] which also agrees with the result of [23]. ### Wilson coefficients In order to obtain the low-energy expansion from the world-sheet integral representation we need to compute the expansion of the function \(B^{(k)}(S,T)\) defined in (4.2) around \(S=T=0\). To this end we consider the integrals \[I_{w}(S,T)=\int d^{2}z|z|^{-2S-2}|1-z|^{-2T-2}\mathcal{L}_{w}(z)\,. \tag{5.10}\] Their low energy expansion was computed in [13], following a method developed in [7], with the result \[I_{w}(S,T)=\text{polar}_{w}(S,T)+\sum_{p,q=0}^{\infty}(-S)^{p}(-T)^{q}\,c_{w}(p,q)\,, \tag{5.11}\] where \(\text{polar}_{w}(S,T)\) collects the pole terms in \(S\) and \(T\) and the coefficients \(c_{w}(p,q)\) are given in terms of single-valued multiple zeta values.
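As an aside, the \(\delta=2\), \(\ell=0\) relations quoted in the preceding paragraphs can be verified numerically. The following stand-alone Python sketch (not part of the original text; \(\zeta(3)\) is hard-coded to double precision) solves the first two equations of (5.5) for \(f_{0}^{I}(2,0)\), confirms the third, and re-derives \(\tau_{4}^{1}(2,0)\) from (5.8), matching (5.9):

```python
import math

# Stand-alone numerical check of the delta = 2, l = 0 data quoted in the text.
# zeta(3) (Apery's constant) hard-coded to double precision.
zeta3 = 1.2020569031595943
s2 = math.sqrt(2.0)

# Integrability input (5.6): leading twists tau_2^I(2,0) of the two operators.
tau2 = (4 * s2, s2)

# Solve the first two equations of (5.5) for the OPE coefficients f_0^I(2,0):
#   f0_1 + f0_2 = 1/4 ,   f0_1*tau2_1 + f0_2*tau2_2 = sqrt(2)
f0_2 = (s2 - 0.25 * tau2[0]) / (tau2[1] - tau2[0])
f0_1 = 0.25 - f0_2
assert abs(f0_1 - 0.25) < 1e-12 and abs(f0_2) < 1e-12

# The third equation of (5.5) then holds automatically:
assert abs(f0_1 * tau2[0] ** 2 + f0_2 * tau2[1] ** 2 - 8.0) < 1e-12

# Assuming f_2^2(2,0) = 0, the two equations (5.8) determine tau_4^1(2,0):
f2_1 = 2 * zeta3 - 387 / 256
tau4_1 = (13 * zeta3 / s2 - 537 / (32 * s2) - f2_1 * tau2[0]) / f0_1

# Compare with (5.9): tau_4^1(2,0) = -75/(2*2^(3/2)) - 24*zeta(3)/2^(3/2)
expected = -75 / (2 * 2 ** 1.5) - 24 * zeta3 / 2 ** 1.5
assert abs(tau4_1 - expected) < 1e-9
```

All assertions pass in double precision, reproducing the consistency checks stated in the text.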
We can again use localisation to find all the Wilson coefficients that appear in the Mellin amplitude at order \(1/\lambda^{4}\) \[M(s_{1},s_{2})=\frac{8}{(s_{1}-\frac{2}{3})(s_{2}-\frac{2}{3})(s_{3}-\frac{2}{3})}+\frac{120\zeta(3)}{\lambda^{3/2}}+\frac{210\left(3\sigma_{2}+7\right)\zeta(5)}{\lambda^{5/2}}\] \[+\frac{140\left(108\sigma_{3}-99\sigma_{2}-320\right)\zeta(3)^{2}}{3\lambda^{3}}+\frac{35\left(2592\sigma_{2}^{2}-77328\sigma_{3}+73638\sigma_{2}+178909\right)\zeta(7)}{16\lambda^{7/2}}\] \[+\frac{10\left(11340\sigma_{3}\sigma_{2}-25893\sigma_{2}^{2}+529200\sigma_{3}-473529\sigma_{2}-928448\right)\zeta(3)\zeta(5)}{\lambda^{4}}+O(\lambda^{-9/2})\,. \tag{5.14}\] ## 6 Conclusions In this paper we have presented a method to compute the tree-level amplitude for four massless supergravity states in type IIB string theory on \(AdS_{5}\times S^{5}\), order by order in the curvature corrections around flat space. Inspired by single-valuedness and the soft theorems in superstring theory, we propose an ansatz which at each order \(k\) involves (a world-sheet integration of) weight \(3k\) single-valued multiple polylogarithms and a finite number of rational coefficients. The ansatz is manifestly crossing symmetric and the unknown coefficients are fixed by requiring the correct supergravity limit; the correct structure of poles, determined by dispersive sum rules; and the dimensions of the first few Konishi-like operators, available from integrability. We explicitly show how our method works for the first two curvature corrections. This can be seen as the culmination of the program started in [11] and further developed in [12; 13].3 There are many open problems that would be interesting to address. Footnote 3: See [25; 26] for early attempts.
Our result makes very explicit the interplay between integrability and the conformal bootstrap, already elucidated in a related context in [27; 28; 29], and brings new ingredients into play, such as structures from number theory. Some of these number-theoretic structures already featured in the integrated constraints, computed via supersymmetric localisation, see _e.g._ [30; 24; 31; 32]. It would be very interesting to explore these connections further. A framework to study open string amplitudes on \(AdS\) has been introduced in [33] and developed in [34; 35]. It would be interesting to study to what extent a natural map between open and closed string amplitudes in AdS exists. A related question is whether the results of [34; 35] admit a representation in terms of a 1d integral, similar to the Veneziano amplitude, with extra insertions. Four-point tree-level amplitudes have been constructed for curved backgrounds containing \(AdS\) with pure background NS-NS B-field, see [10]. It would be interesting to study the low energy expansions of such amplitudes, and to understand whether single-valuedness plays a role in that case. The most promising formulation for a direct world-sheet approach is the pure-spinor formalism. Over the last few years there has been progress in the explicit construction of vertex operators in the pure-spinor formalism [36; 37], but the precise integration measure to compute amplitudes is still an open problem. It would be very interesting to use the results of this paper to reconstruct said measure in a \(1/R\) expansion. ## Acknowledgements We thank Julius Julius for useful discussions and especially Joao Silva for collaboration on related projects. Our work is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 787185). LFA is also supported in part by the STFC grant ST/T000864/1.
## Appendix A Details about dispersive sum rules The OPE data generally arises in the dispersive sum rules via weighted sums of the form \[W_{q}\Big{[}f(\delta,\ell)\Big{]}=\frac{4^{q}}{\Gamma(2q+2)}\sum_{\ell=0,2, \ldots}^{2(\delta-1)}(\ell-q+1)_{q}(\ell+2)_{q}f(\delta,\ell)\,. \tag{10}\] Concretely the OPE data enters via the functions \[F_{q}^{(1)}(\delta) =W_{q}\Big{[}\sqrt{\delta}\langle f_{1}\rangle_{\delta,\ell}-(3 \ell+\tfrac{23}{4})\langle f_{0}\rangle_{\delta,\ell}\Big{]}\,,\] \[T_{q}^{(1)}(\delta) =W_{q}\Big{[}\langle f_{0}\rangle_{\delta,\ell}(\tau_{1}(\delta,\ell)+\ell+2)\Big{]}\,,\] \[F_{q}^{(2)}(\delta) =W_{q}\Big{[}\delta\langle f_{2}\rangle_{\delta,\ell}-\frac{39}{ 4}\ell\langle f_{0}\rangle_{\delta,\ell}\Big{]}\,,\] \[T_{q}^{(2)}(\delta) =W_{q}\Big{[}\sqrt{\delta}\langle f_{0}\tau_{2}\rangle_{\delta, \ell}\Big{]}\,,\] \[F_{q}^{(3)}(\delta) =W_{q}\Big{[}\delta^{\frac{3}{2}}\langle f_{3}\rangle_{\delta, \ell}-\delta(3\ell+\tfrac{23}{4})\left(\langle f_{2}\rangle_{\delta,\ell}- \frac{\langle f_{0}\tau_{2}\rangle_{\delta,\ell}}{2\sqrt{\delta}}\right)+ \frac{\ell\left(140\ell^{2}+280\ell+491\right)}{16}\langle f_{0}\rangle_{ \delta,\ell}\Big{]}\,,\] \[T_{q}^{(3)}(\delta) =W_{q}\Big{[}\delta\langle f_{0}\tau_{3}\rangle_{\delta,\ell} \Big{]}\,,\] \[F_{q}^{(4)}(\delta) =W_{q}\Big{[}\delta^{2}\langle f_{4}\rangle_{\delta,\ell}-\delta ^{3/2}(3\ell+\tfrac{23}{4})\left(\langle f_{3}\rangle_{\delta,\ell}-\frac{ \langle f_{0}\tau_{3}\rangle_{\delta,\ell}}{2\sqrt{\delta}}\right)\] \[\qquad\qquad+\ell\left(\frac{27}{4}\delta\langle f_{2}\rangle_{ \delta,\ell}+\frac{3}{2}\sqrt{\delta}\langle f_{0}\tau_{2}\rangle_{\delta, \ell}+\frac{21}{32}\left(20\ell^{2}+40\ell+137\right)\langle f_{0}\rangle_{ \delta,\ell}\right)\Big{]}\,,\] \[T_{q}^{(4)}(\delta) =W_{q}\Big{[}\delta^{\frac{3}{2}}\langle f_{0}\tau_{4}+f_{2}\tau _{2}\rangle_{\delta,\ell}-\frac{39}{4}\ell\sqrt{\delta}\langle f_{0}\tau_{2} \rangle_{\delta,\ell}\Big{]}\,,\] \[T_{q}^{(2,2)}(\delta) 
=W_{q}\Big{[}\delta\langle f_{0}\tau_{2}^{2}\rangle_{\delta,\ell}\Big{]}\,. \tag{11}\] The dispersive sum rules further depend on the polynomials \[P^{(1)}_{3,0}(a,b,q)= -\frac{1}{6}(2a+3b)^{3}+\frac{1}{6}(3q-8)(2a+3b)^{2}+\frac{1}{12}\left(-3q^{2}+52q-2\right)(2a+3b)\] \[+\frac{1}{32}\left(-216q^{2}-84q-277\right)\,,\] \[P^{(1)}_{3,1}(a,b,q)= \frac{1}{4}(2a+3b)^{2}+\frac{1}{24}(49-6q)(2a+3b)-\frac{3}{16}(36q+25)\,,\] \[P^{(3/2)}_{2,0}(q)= \frac{1}{64}\left(2100q^{2}+4200q+3931\right)\,,\] \[P^{(3/2)}_{2,1}(q)= \frac{525}{32}(2q+3)\,,\] \[P^{(2)}_{3,0}(a,b,q)= P^{(1)}_{3,0}(a,b,q)+\frac{1}{16}\left(144q^{2}+288q+529\right)\,,\] \[P^{(2)}_{3,1}(a,b,q)= P^{(1)}_{3,1}(a,b,q)+\frac{9}{2}(2q+3)\,,\] \[P^{(2)}_{4,0}(a,b,q)= \frac{1}{6}(2a+3b)^{4}+\frac{1}{2}(4-q)(2a+3b)^{3}+\frac{1}{12}\left(3q^{2}-76q+66\right)(2a+3b)^{2}\] \[+\frac{1}{96}\left(744q^{2}-1412q+895\right)(2a+3b)+\frac{3}{32}\left(240q^{2}+16q+193\right)\,,\] \[P^{(2)}_{4,1}(a,b,q)= -\frac{1}{4}(2a+3b)^{3}+\frac{6q-73}{24}(2a+3b)^{2}+\frac{372q-167}{48}(2a+3b)+\frac{3}{2}(15q+8)\,,\] \[P^{(2)}_{6,0}(a,b,q)= \frac{1}{72}(2a+3b)^{6}+\frac{1}{360}(101-30q)(2a+3b)^{5}+\frac{1}{36}\left(6q^{2}-59q+69\right)(2a+3b)^{4}\] \[+\frac{1}{576}\left(-72q^{3}+2304q^{2}-5284q+4455\right)(2a+3b)^{3}\] \[+\frac{1}{576}\left(18q^{4}-2688q^{3}+12692q^{2}-11405q+158088\right)(2a+3b)^{2}\] \[+\frac{1}{5760}(9900q^{4}-177180q^{3}+66545q^{2}-378180q+134614)(2a+3b)\] \[+\frac{1}{6144}(16128q^{4}-425088q^{3}-1033408q^{2}-1958280q-1285115)\,,\] \[P^{(2)}_{6,1}(a,b,q)= -\frac{1}{24}(2a+3b)^{5}+\frac{12q-53}{72}(2a+3b)^{4}+\frac{1}{288}(-54q^{2}+1098q-763)(2a+3b)^{3}\] \[+\frac{1}{1152}(72q^{3}-7956q^{2}+17392q-1383)(2a+3b)^{2}\] \[+\frac{1}{2304}\left(7920q^{3}-94428q^{2}-71770q-95783\right)(2a+3b)\] \[+\frac{1}{1536}(8064q^{3}-147312q^{2}-409696q-425081)\,,\] \[P^{(2)}_{6,2}(a,b,q)= \frac{1}{32}(2a+3b)^{4}+\frac{55-6q}{96}(2a+3b)^{3}+\frac{36q^{2}-2616q+613}{1152}(2a+3b)^{2}\]
\[+\frac{1}{576}\left(990q^{2}-6879q-8848\right)(2a+3b)+\frac{1}{1536 }(4032q^{2}-45072q-79693)\,.\] Finally, we explain in section 3 how the dispersive sum rules can be used to compute the residues of \(A^{(2)}(S,T)\) at \(S=\delta\) in terms of OPE data. The first four residues depend only on OPE data that has been computed previously. They are given by \[R_{7}^{(2)}(T,\delta)= \ -\sum_{q=0}^{2(\delta-1)}\left(\frac{T}{\delta}\right)^{q}10\delta^ {2}F_{q}^{(0)}(\delta)\,,\] \[R_{6}^{(2)}(T,\delta)= \ -\sum_{q=0}^{2(\delta-1)}\left(\frac{T}{\delta}\right)^{q}\delta \left(\frac{4}{3}F_{q}^{(0)}(\delta)+5(q+1)F_{q+1}^{(0)}(\delta)\right)\,,\] \[R_{5}^{(2)}(T,\delta)= \ \sum_{q=0}^{2(\delta-1)}\left(\frac{T}{\delta}\right)^{q}\left( \frac{1}{3}\left(3q^{2}+17q+25\right)F_{q}^{(0)}(\delta)+\frac{1}{3}(q+1)(3q+ 8)F_{q+1}^{(0)}(\delta)\right.\] \[\ -\frac{3}{4}(q+1)(q+2)F_{q+2}^{(0)}(\delta)-4T_{q}^{(2)}(\delta )\right),\] \[R_{4}^{(2)}(T,\delta)= \ \sum_{q=0}^{2(\delta-1)}\left(\frac{T}{\delta}\right)^{q}\frac{1}{ \delta}\bigg{(}\frac{1}{96}\left(-8q^{3}+384q^{2}+492q+1119\right)F_{q}^{(0)} (\delta)\] \[\ +\frac{1}{48}(q+1)\left(18q^{2}+370q+577\right)F_{q+1}^{(0)}( \delta)+\frac{1}{16}(q+1)_{2}(6q+25)F_{q+2}^{(0)}(\delta)\] \[\ +(q+2)T_{q}^{(2)}(\delta)-\frac{3}{2}(q+1)T_{q+1}^{(2)}(\delta )-F_{q}^{(2)}(\delta)\bigg{)}\,. 
\tag{10}\] The remaining residues depend on the new OPE data in \(T_{q}^{(2,2)}(\delta)\), \(T_{q}^{(4)}(\delta)\) and \(F_{q}^{(4)}(\delta)\) and are given by \[R_{3}^{(2)}(T,\delta)=\sum_{q=0}^{2(\delta-1)}\left(\frac{T}{ \delta}\right)^{q}\frac{1}{\delta^{2}}\bigg{(}\frac{1}{144}\left(-9q^{4}-112q ^{3}-1000q^{2}-1884q-2271\right)F_{q}^{(0)}(\delta)\] \[-\frac{q+1}{576}\left(96q^{3}-300q^{2}+1036q+49\right)F_{q+1}^{(0 )}(\delta)-\frac{(q+1)_{2}}{576}\left(36q^{2}-1068q-2447\right)F_{q+2}^{(0)}( \delta)\] \[+\frac{3q^{2}+16q+28}{6}T_{q}^{(2)}(\delta)+\frac{q+1}{12}(12q+3 7)T_{q+1}^{(2)}(\delta)+\frac{2}{3}F_{q}^{(2)}(\delta)-\frac{q+1}{2}F_{q+1}^{( 2)}(\delta)-T_{q}^{(2,2)}(\delta)\bigg{)}\,,\] \[R_{2}^{(2)}(T,\delta)=\sum_{q=0}^{2(\delta-1)}\left(\frac{T}{ \delta}\right)^{q}\frac{1}{\delta^{3}}\bigg{(}\frac{1}{1152}(q+4)(3q+8)\left( 8q^{3}-384q^{2}-428q-927\right)F_{q}^{(0)}(\delta)\] \[+\frac{1}{2304}(q+1)\left(48q^{4}-4912q^{3}-42956q^{2}-111278q-10 7749\right)F_{q+1}^{(0)}(\delta)\] \[-\frac{5}{1152}(q+1)_{2}\left(276q^{2}+2024q+3091\right)F_{q+2}^{ (0)}(\delta)+\frac{1}{96}\left(-32q^{3}+176q^{2}-148q+415\right)T_{q}^{(2)}( \delta)\] \[-\frac{1}{48}(q+1)\left(12q^{2}-152q-187\right)T_{q+1}^{(2)}( \delta)+\frac{1}{12}(q+4)(3q+8)F_{q}^{(2)}(\delta)\] \[+\frac{1}{24}(q+1)(6q+31)F_{q+1}^{(2)}(\delta)+\frac{1}{4}(4q+7)T_ {q}^{(2,2)}(\delta)-T_{q}^{(4)}(\delta)\bigg{)}\,,\] \[R_{1}^{(2)}(T,\delta)=\sum_{q=0}^{2(\delta-1)}\left(\frac{T}{ \delta}\right)^{q}\left(\frac{1}{92160}(-320q^{6}+30144q^{5}+1157440q^{4}+5790960q ^{3}+18048880q^{2}\right.\] \[+25797336q+19462005)F_{q}^{(0)}(\delta)+\frac{q+1}{4608}(1904q^{4} +98248q^{3}+504424q^{2}+1237298q\] \[+1080201)F_{q+1}^{(0)}(\delta)-\frac{(q+1)_{2}}{4608}\left(96q^{3} -35252q^{2}-158704q-168339\right)F_{q+2}^{(0)}(\delta)\] \[+\frac{1}{96}\left(8q^{4}-360q^{3}-1148q^{2}-1347q-1194\right)T_{ q}^{(2)}(\delta)-\frac{q+1}{48}\left(238q^{2}+809q+609\right)T_{q+1}^{(2)}(\delta)\] 
\[+\frac{1}{96}\left(-8q^{3}-480q^{2}-1300q-2247\right)F_{q}^{(2)}(\delta)-\frac{1}{48}(q+1)(194q+337)F_{q+1}^{(2)}(\delta)\] \[-\frac{1}{4}(q+2)(2q+5)T_{q}^{(2,2)}(\delta)+(q+2)T_{q}^{(4)}(\delta)-F_{q}^{(4)}(\delta)\Biggr{)}\,. \tag{100}\] ## Appendix B Single-valued multiple polylogarithms A crucial role in our construction is played by single-valued multiple polylogarithms. Let us first introduce multiple polylogarithms (MPLs), also known as harmonic polylogarithms. These are functions of a single variable \(L_{w}(z)\) labelled by a word \(w\) in the alphabet \(\{0,1\}\). They can be defined recursively by \[\frac{d}{dz}L_{0w}(z)=\frac{1}{z}L_{w}(z)\,,\qquad\frac{d}{dz}L_{1w}(z)=\frac{1}{z-1}L_{w}(z), \tag{101}\] together with the condition \(\lim_{z\to 0}L_{w}(z)=0\) unless \(w=0^{p}\), for which \(L_{0^{p}}(z)=\frac{\log^{p}z}{p!}\). In particular, for the empty word we have \(L_{\emptyset}(z)=1\). For instance, \[L_{0^{n-1}1}(z)=-\mathrm{Li}_{n}(z)\,, \tag{102}\] recovering the classical polylogarithms. MPLs can also be given in terms of nested integrals since \[L_{0w}(z)=\int_{0}^{z}\frac{1}{z^{\prime}}L_{w}(z^{\prime})dz^{\prime}\,,\qquad L_{1w}(z)=\int_{0}^{z}\frac{1}{z^{\prime}-1}L_{w}(z^{\prime})dz^{\prime}. \tag{103}\] MPLs satisfy various relations, in particular the shuffle relations \[L_{w}(z)L_{w^{\prime}}(z)=\sum_{W\in w\sqcup w^{\prime}}L_{W}(z)\,. \tag{104}\] At \(z=1\) they define multiple zeta values \(L_{w}(1)=\zeta(w)\), using a non-standard notation for \(\zeta(w)\) where \(w\) denotes the word labelling the multiple polylogarithm under consideration. In this definition, logarithmic divergences as \(z\to 1\) are isolated by using the shuffle relations and regulated by defining \(\zeta(1)=0\). MPLs are closed under the transformations \[z\to 1-z,\ \ \ \ z\rightarrow\frac{z}{z-1}, \tag{105}\] and compositions of those. Working out their transformation properties is tedious but in principle straightforward (see for instance [38]).
Let us start with \(z\to 1-z\). For weight one we have \[L_{0}(1-z)=L_{1}(z),\quad L_{1}(1-z)=L_{0}(z)\,. \tag{101}\] For higher weight we can proceed recursively. If a word starts with \(0\) one can easily show, using the integral representation \[L_{0w}(1-z)=L_{0w}(1)+\int_{0}^{z}\frac{1}{z^{\prime}-1}L_{w}(1-z^{\prime})dz^{\prime}\,, \tag{102}\] with \(L_{w}(1-z^{\prime})\) given in terms of \(L_{w^{\prime}}(z^{\prime})\) by the transformations at lower order. If a word starts with \(1\) we can always use the shuffle identities to express \(L_{1w}(z)\) in terms of \(L_{1}(z)\) (whose transformation properties are known) and MPLs whose words start with \(0\). The final transformation properties take the form \[L_{w}(1-z)=L_{f\cdot w}(z)+\zeta^{\prime}s\times\text{lower weight MPLs}\,, \tag{103}\] where \(f\cdot w\) denotes the word \(w\) after flipping \(0\leftrightarrow 1\) in each place. Let us now focus on the second transformation. At weight one we have \[L_{0}\left(\frac{z}{z-1}\right)=L_{0}(z)-L_{1}(z)\pm i\pi\,,\quad L_{1}\left(\frac{z}{z-1}\right)=-L_{1}(z)\,. \tag{104}\] From now on we will disregard the \(\pm i\pi\) as ultimately we are interested in the transformation properties of the single-valued versions of MPLs, to be defined below. At higher weight we can proceed recursively and obtain \[\begin{split} L_{0w}\left(\frac{z}{z-1}\right)&=\int_{0}^{z}\left(\frac{1}{z^{\prime}}-\frac{1}{z^{\prime}-1}\right)L_{w}\left(\frac{z^{\prime}}{z^{\prime}-1}\right)dz^{\prime}\,,\\ L_{1w}\left(\frac{z}{z-1}\right)&=-\int_{0}^{z}\frac{1}{z^{\prime}-1}L_{w}\left(\frac{z^{\prime}}{z^{\prime}-1}\right)dz^{\prime}\,.\end{split} \tag{105}\] Plugging in the transformation properties of \(L_{w}\left(\frac{z^{\prime}}{z^{\prime}-1}\right)\), known by assumption, we obtain those of higher weight. We will also be interested in the expansions of MPLs around \(z=0\). Non-analytic terms, which go like \(\log^{n}z\), arise whenever the word \(w\) ends in one or more zeros.
These logarithmic terms can be isolated by using the shuffle relations. For instance \[L_{10}(z)=L_{1}(z)L_{0}(z)-L_{01}(z)\,, \tag{106}\] so that we can focus on words ending in \(1\). In this case MPLs are analytic around \(z=0\) and we have \[L_{w}(z)=\sum_{\ell=1}^{\infty}c_{w}(\ell)z^{\ell}\,. \tag{107}\] Plugging this into the integral representations leads to the following recursive relations \[c_{0w}(\ell)=\frac{c_{w}(\ell)}{\ell},\quad c_{1w}(\ell)=-\sum_{\ell^{\prime}=1}^{\ell-1}\frac{c_{w}(\ell^{\prime})}{\ell}\,, \tag{113}\] which together with \(c_{1}(\ell)=-1/\ell\) fix all \(c_{w}(\ell)\) recursively. These recursions can be solved in terms of Euler-Zagier sums. In the complex \(z\)-plane multiple polylogarithms are analytic functions with branch points at \(z=0,1\) (and the point at infinity if we consider the Riemann sphere). It is possible to construct single-valued multiple polylogarithms (SVMPLs) \(\mathcal{L}_{w}(z)\) which are weight-preserving linear combinations of \(L_{w_{1}}(z)L_{w_{2}}(\overline{z})\) such that all discontinuities cancel. They are defined such that they satisfy the same differential relations \[\frac{\partial}{\partial z}\mathcal{L}_{0w}(z)=\frac{1}{z}\mathcal{L}_{w}(z),\qquad\frac{\partial}{\partial z}\mathcal{L}_{1w}(z)=\frac{1}{z-1}\mathcal{L}_{w}(z), \tag{114}\] together with the condition \(\lim_{z\to 0}\mathcal{L}_{w}(z)=0\) unless \(w=0^{p}\), for which \(\mathcal{L}_{0^{p}}(z)=\frac{\log^{p}z\overline{z}}{p!}\). Furthermore, SVMPLs satisfy the same shuffle relations \[\mathcal{L}_{w}(z)\mathcal{L}_{w^{\prime}}(z)=\sum_{W\in w\shuffle w^{\prime}}\mathcal{L}_{W}(z)\,, \tag{115}\] and at \(z=\overline{z}=1\) they define single-valued multiple zeta values. Their explicit construction in terms of MPLs was presented for instance in [9; 39].
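The recursion (113) for the Taylor coefficients is straightforward to implement. The following small Python sketch (not from the paper; exact rational arithmetic via the standard-library `fractions` module) computes \(c_{w}(\ell)\) for words ending in \(1\) and checks it against \(L_{0^{n-1}1}(z)=-\mathrm{Li}_{n}(z)\) and the shuffle relation \(L_{1}(z)^{2}=2L_{11}(z)\):

```python
from fractions import Fraction

def mpl_coeff(w, l):
    """Taylor coefficient c_w(l) of L_w(z) = sum_{l>=1} c_w(l) z^l,
    for a word w over {'0','1'} ending in '1', using the recursion
    c_{0w}(l) = c_w(l)/l and c_{1w}(l) = -(1/l) sum_{l'<l} c_w(l')."""
    if w == "1":
        return Fraction(-1, l)
    head, tail = w[0], w[1:]
    if head == "0":
        return mpl_coeff(tail, l) / l
    return -sum((mpl_coeff(tail, lp) for lp in range(1, l)), Fraction(0)) / l

# Check against L_{0^{n-1}1}(z) = -Li_n(z), i.e. c(l) = -1/l^n:
assert all(mpl_coeff("01", l) == Fraction(-1, l ** 2) for l in range(1, 9))
assert all(mpl_coeff("001", l) == Fraction(-1, l ** 3) for l in range(1, 9))

# Shuffle check, term by term: L_1(z)^2 = 2 L_{11}(z).
for l in range(2, 9):
    conv = sum(Fraction(1, a) * Fraction(1, l - a) for a in range(1, l))
    assert 2 * mpl_coeff("11", l) == conv
```

The shuffle check amounts to comparing the coefficient recursion with the Cauchy product of the series for \(L_{1}(z)=\log(1-z)\) with itself.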
In particular, from these explicit constructions, plus the relations worked out above, one can find their transformation properties under \(z\to 1-z\) and \(z\to\frac{z}{z-1}\) as well as their expansions around \(z,\overline{z}=0\).
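As a concrete illustration of the reflection identities above, here is a small numeric check (not from the paper; plain Python, with the dilogarithm summed from its Taylor series, valid for \(|z|<1\)). The weight-two case is \(L_{01}(1-z)=L_{10}(z)-\zeta(2)\), which is Euler's reflection formula for \(\mathrm{Li}_{2}\) rewritten in MPL notation, matching the pattern \(L_{w}(1-z)=L_{f\cdot w}(z)+\zeta\)'s \(\times\) lower-weight MPLs:

```python
import math

def li2(z, terms=200):
    """Dilogarithm Li_2(z) from its Taylor series (valid for |z| < 1)."""
    return sum(z ** k / k ** 2 for k in range(1, terms + 1))

L0 = lambda t: math.log(t)              # L_0(z) = log z
L1 = lambda t: math.log(1 - t)          # L_1(z) = log(1 - z)
L01 = lambda t: -li2(t)                 # L_{01}(z) = -Li_2(z)
L10 = lambda t: L1(t) * L0(t) - L01(t)  # shuffle: L_1 L_0 = L_{10} + L_{01}

z = 0.3
zeta2 = math.pi ** 2 / 6

# Weight one: L_0(1-z) = L_1(z) and L_1(1-z) = L_0(z).
assert abs(L0(1 - z) - L1(z)) < 1e-12
assert abs(L1(1 - z) - L0(z)) < 1e-12

# Weight two: L_{01}(1-z) = L_{f.w}(z) + (zeta values) x lower weight;
# concretely L_{01}(1-z) = L_{10}(z) - zeta(2).
assert abs(L01(1 - z) - (L10(z) - zeta2)) < 1e-9
```

Here \(f\cdot(01)=10\), and the \(\zeta(2)\) term multiplies the weight-zero MPL \(L_{\emptyset}=1\).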
2303.08100
A short note on number fields defined by exponential Taylor polynomials
Let $n$ be a positive integer and $f_n(x)= 1+x+\frac{x^2}{2!}+\cdots + \frac{x^n}{n!}$ denote the $n$-th Taylor polynomial of the exponential function. Let $K = \mathbf{Q}(\theta)$ be an algebraic number field where $\theta$ is a root of $f_n(x)$ and $\mathbf{Z}_K$ denote the ring of algebraic integers of $K$. In this paper, we prove that for any prime $p$, $p$ does not divide the index of the subgroup $\mathbf{Z}[\theta]$ in $\mathbf{Z}_K$ if and only if $p^2\nmid n!$.
Anuj Jakhar, Srinivas Kotyada
2023-03-10T13:45:41Z
http://arxiv.org/abs/2303.08100v2
# A short note on number fields defined by exponential Taylor polynomials ###### Abstract. Let \(n\) be a positive integer and \(f_{n}(x)=1+x+\frac{x^{2}}{2!}+\cdots+\frac{x^{n}}{n!}\) denote the \(n\)-th Taylor polynomial of the exponential function. Let \(K=\mathbf{Q}(\theta)\) be an algebraic number field where \(\theta\) is a root of \(f_{n}(x)\) and \(\mathbf{Z}_{K}\) denote the ring of algebraic integers of \(K\). In this paper, we prove that for any prime \(p\), \(p\) does not divide the index of the subgroup \(\mathbf{Z}[\theta]\) in \(\mathbf{Z}_{K}\) if and only if \(p^{2}\nmid n!\). Key words and phrases: Ring of algebraic integers; Integral basis and discriminant; Monogenic number fields 2010 Mathematics Subject Classification: 11R04; 11R29 ## 1. Introduction and statements of results Let \(f_{n}(x)=1+x+\frac{x^{2}}{2!}+\cdots+\frac{x^{n}}{n!}\) denote the \(n\)-th Taylor polynomial of the exponential function. In 1930, Schur [6] proved that the Galois group of \(f_{n}(x)\) is \(A_{n}\), the alternating group on \(n\) letters, if \(4\) divides \(n\) and is \(S_{n}\), the symmetric group on \(n\) letters, otherwise. In 1987, Coleman [2] gave another proof of this result using the theory of Newton polygons. He also provided a simple proof of the irreducibility of \(f_{n}(x)\) over the field \(\mathbf{Q}\) of rational numbers. Let \(K=\mathbf{Q}(\theta)\) be an algebraic number field and \(\mathbf{Z}_{K}\) denote the ring of algebraic integers of \(K\). In the present paper, we would like to characterise all the primes dividing the index of the subgroup \(\mathbf{Z}[\theta]\) in \(\mathbf{Z}_{K}\), where \(\theta\) is a root of \(f_{n}(x)\). In 1878, Dedekind gave a simple criterion, known as the Dedekind Criterion (cf. [1, Theorem 6.1.4], [3]), which gives a necessary and sufficient condition on a polynomial \(f(x)\) for a prime \(p\) not to divide \([\mathbf{Z}_{K}:\mathbf{Z}[\theta]]\), where \(\theta\) is a root of \(f(x)\).
**Theorem 1.1**.: (Dedekind Criterion) _Let \(K=\mathbf{Q}(\theta)\) be an algebraic number field with \(f(x)\) as the minimal polynomial of the algebraic integer \(\theta\) over \(\mathbf{Q}.\) Let \(p\) be a prime and \(\overline{f}(x)=\overline{g}_{1}(x)^{e_{1}}\ldots\overline{g}_{t}(x)^{e_{t}}\) be the factorization of \(\overline{f}(x)\) as a product of powers of distinct irreducible polynomials over \(\mathbf{Z}/p\mathbf{Z}\), with each \(g_{i}(x)\in\mathbf{Z}[x]\) monic. Let \(M(x)\) denote the polynomial \(\frac{1}{p}(f(x)-g_{1}(x)^{e_{1}}\ldots g_{t}(x)^{e_{t}})\) with coefficients from \(\mathbf{Z}.\) Then \(p\) does not divide \([\mathbf{Z}_{K}:\mathbf{Z}[\theta]]\) if and only if for each \(i,\) we have either \(e_{i}=1\) or \(\bar{g}_{i}(x)\) does not divide \(\overline{M}(x)\)._ Using the Dedekind Criterion, Jakhar, Khanduja and Sangwan have given necessary and sufficient conditions for a prime \(p\) to divide the index \([\mathbf{Z}_{K}:\mathbf{Z}[\theta]]\) where \(\theta\) is a root of an irreducible polynomial \(f(x)=x^{n}+ax^{m}+b\in\mathbf{Z}[x]\) over \(\mathbf{Q}\) (cf. [4], [5]). In this note, we prove the following theorem. **Theorem 1.2**.: Let \(n\) be a positive integer and \(p\) be a prime number. Let \(K=\mathbf{Q}(\theta)\) be an algebraic number field with \(\theta\) a root of \(f_{n}(x)=1+x+\frac{x^{2}}{2!}+\cdots+\frac{x^{n}}{n!}\). Then \(p\nmid[\mathbf{Z}_{K}:\mathbf{Z}[\theta]]\) if and only if \(p^{2}\nmid n!\). The following corollary is an immediate consequence of the above theorem. **Corollary 1.3**.: Let \(n\geq 4\) be an integer and \(f_{n}(x)=1+x+\frac{x^{2}}{2!}+\cdots+\frac{x^{n}}{n!}\). If \(K=\mathbf{Q}(\theta)\) is an algebraic number field with \(\theta\) a root of \(f_{n}(x)\), then \(2\) divides \([\mathbf{Z}_{K}:\mathbf{Z}[\theta]]\). In particular, \(\{1,\theta,\cdots,\theta^{n-1}\}\) cannot be an integral basis of \(K\). ## 2. Proof of Theorem 1.2.
Let \(L=\mathbf{Q}(\xi)\) with \(\xi\) a root of an irreducible polynomial \(g(x)\) and \(d_{L}\) denote the discriminant of the field \(L\). It is well known that the discriminant \(D_{g}\) of \(g(x)\) and the index \([\mathbf{Z}_{L}:\mathbf{Z}[\xi]]\) are connected by the formula \[D_{g}=[\mathbf{Z}_{L}:\mathbf{Z}[\xi]]^{2}d_{L}. \tag{2.1}\] It is given in [2] that the discriminant of \(f_{n}(x)=1+x+\frac{x^{2}}{2!}+\cdots+\frac{x^{n}}{n!}\) is given by \[D_{f_{n}}=(-1)^{\frac{n(n-1)}{2}}(n!)^{n}. \tag{2.2}\] Proof of Theorem 1.2.: By abuse of notation, we take \(f_{n}(x)\) as \[f_{n}(x)=x^{n}+nx^{n-1}+\frac{n!}{(n-2)!}x^{n-2}+\cdots+\frac{n!}{2}x^{2}+n!x+n!\in\mathbf{Z}[x].\] Keeping in mind (2.2), it is easy to check that the absolute value of the discriminant of \(f_{n}(x)\) is given by \(|D_{f_{n}}|=(n!)^{2n}.\) In view of (2.1), we see that if a prime \(p\) does not divide \(n!\), then \(p\) does not divide \([\mathbf{Z}_{K}:\mathbf{Z}[\theta]]\). So assume that \(p\) is a divisor of \(n!\). Suppose that \(i\), \(0\leq i\leq n-2\), is the smallest index such that \(p|(n-i)\). We see that \[f_{n}(x)\equiv x^{n}+nx^{n-1}+\cdots+\frac{n!}{(n-i)!}x^{n-i}\equiv x^{n-i}(x^{i}+nx^{i-1}+\cdots+\frac{n!}{(n-i)!})\mod p.\] Note that \(p\) cannot divide \(i\), because if \(p\) divides \(i\), then in view of \(p|(n-i)\), we have \(p|n\), which contradicts the definition of \(i\). So keeping in mind that \(p\nmid i\), the polynomial \(x^{i}+\bar{n}x^{i-1}+\cdots+\frac{n!}{(n-i)!}\) belonging to \(\mathbf{Z}/p\mathbf{Z}[x]\) is a separable polynomial. Hence applying the Dedekind Criterion, we see that \(p\) does not divide \([\mathbf{Z}_{K}:\mathbf{Z}[\theta]]\) if and only if \(x\) does not divide \(\overline{M}(x)\), where \(M(x)\) is given by \[M(x)=\frac{1}{p}[\frac{n!}{(n-i-1)!}x^{n-i-1}+\cdots+\frac{n!}{2}x^{2}+n!x+n!].\] Since the constant term of \(M(x)\) equals \(\frac{n!}{p}\), \(x\) divides \(\overline{M}(x)\) if and only if \(p\) divides \(\frac{n!}{p}\), that is, if and only if \(p^{2}\mid n!\). Thus \(p\nmid[\mathbf{Z}_{K}:\mathbf{Z}[\theta]]\) if and only if \(p^{2}\) does not divide \(n!\).
This completes the proof of the theorem.
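To make Theorem 1.2 concrete, the primes dividing \([\mathbf{Z}_{K}:\mathbf{Z}[\theta]]\) for a given \(n\) can be listed mechanically: by the theorem they are exactly the primes \(p\) with \(p^{2}\mid n!\), and the valuation \(v_{p}(n!)\) is computed by Legendre's formula \(v_{p}(n!)=\sum_{i\geq 1}\lfloor n/p^{i}\rfloor\). A small illustrative Python sketch (not part of the paper):

```python
def v_p_factorial(n, p):
    """p-adic valuation of n! via Legendre's formula: sum of floor(n/p^i)."""
    v, pk = 0, p
    while pk <= n:
        v += n // pk
        pk *= p
    return v

def index_divisors(n):
    """Primes p with p^2 | n!; by Theorem 1.2 these are exactly the
    primes dividing the index [Z_K : Z[theta]]."""
    primes = [p for p in range(2, n + 1)
              if all(p % d for d in range(2, int(p ** 0.5) + 1))]
    return [p for p in primes if v_p_factorial(n, p) >= 2]

print(index_divisors(5))   # [2]
print(index_divisors(10))  # [2, 3, 5]
```

For every \(n\geq 4\) the list contains \(2\), in line with Corollary 1.3.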
2310.10677
LLMs as Potential Brainstorming Partners for Math and Science Problems
With the recent rise of widely successful deep learning models, there is emerging interest among professionals in various math and science communities to see and evaluate the state-of-the-art models' abilities to collaborate on finding or solving problems that often require creativity and thus brainstorming. While a significant chasm still exists between current human-machine intellectual collaborations and the resolution of complex math and science problems, such as the six unsolved Millennium Prize Problems, our initial investigation into this matter reveals a promising step towards bridging the divide. This is due to the recent advancements in Large Language Models (LLMs). More specifically, we conduct comprehensive case studies to explore both the capabilities and limitations of the current state-of-the-art LLM, notably GPT-4, in collective brainstorming with humans.
Sophia Gu
2023-10-10T21:16:35Z
http://arxiv.org/abs/2310.10677v1
# LLMs as Potential Brainstorming Partners for Math and Science Problems - Case Studies and Analysis ###### Abstract With the recent rise of widely successful deep learning models, there is emerging interest among professionals in various math and science communities to see and evaluate the state-of-the-art models' abilities to collaborate on finding or solving problems that often require creativity and thus brainstorming. While a significant chasm still exists between current human-machine intellectual collaborations and the resolution of complex math and science problems, such as the six unsolved Millennium Prize Problems (Institute, 2023), our initial investigation into this matter reveals a promising step towards bridging the divide. This is due to the recent advancements in Large Language Models (LLMs). More specifically, we conduct comprehensive case studies to explore both the capabilities and limitations of the current state-of-the-art LLM, notably GPT-4 from OpenAI (2023), in collective brainstorming with humans. ## 1 Introduction This paper serves two primary purposes: _First, as Large Language Models (LLMs) continue to exhibit superior performance across various tasks and gain popularity for myriad use cases, we present significant case studies and qualitative analysis, illustrating the potential and limitations of the current state-of-the-art LLM when serving as a brainstorming partner in supporting the math and science communities in advanced settings, along with concrete prompts, methodologies, and complete human-machine conversation logs._ Traditional apprehensions around AI in professional settings stem from the difficulty in understanding its reasoning process. There is thus a compelling need for concrete case studies that capture a model's transparent dialogues and white-boxed cognitive processes (Barnes et al., 2023; Kohli, 2023).
The emergence of LLMs mitigates such fears through explicit and interactive discussions with a human in the loop, accompanied by detailed Chain-of-Thoughts (Wei et al., 2022). Hence, LLMs unlock an opportunity for professionals to engage more confidently with AI in real time. Our work in particular assesses whether GPT-4 can participate effectively in such brainstorming sessions: discovering new research problems, refining problem formulations, and suggesting potential methods or out-of-the-box solutions through iterative ideation with a human, a process that we often go through when brainstorming with other professionals. _Second, we venture beyond traditionally well-defined questions that have largely defined the assessments of deep learning models' artificial general intelligence (AGI), e.g. Bubeck et al. (2023)1. Professional math and science often involve more open-ended questions. We therefore take a step forward to also explore and evaluate GPT-4's abilities in the formulation of new, potentially ambiguous problems and approaches.2_ Footnote 1: Similar to prior work, we surface certain aspects of GPT-4's intelligence through exploratory study and analysis. This study is not about constructing a massive dataset. Footnote 2: Note, however, that when problems are open, we do not really know the answers, and correctly answering intricate and complicated open questions may take many professionals working for extended periods of time, which thus falls outside the scope of this study. In this paper, we focus on the methods and processes of collaborative brainstorming with LLMs. Through hand-designed experiments and qualitative analysis, we illuminate both the potential and limitations of GPT-4 as a brainstorming partner across various scientific disciplines, including but not limited to mathematics, statistics, and physics.
For instance, our conversation with GPT-4 leads to a potentially novel approach to the longstanding _n_-body problem, drawing upon inspiration not only from classical physics but also from other fields such as deep learning and topology. See Table 1 for a brief overview of this problem. These examples underline the power of merging LLMs' expansive knowledge base with an individual's own professional training. Additionally, we propose an initiation prompt script and various strategies to facilitate collective brainstorming conversations with GPT-4. By identifying and demonstrating the unique advantages of LLMs, thereby expanding the horizon of the potential of future LLMs, the results we show here _not only demonstrate to what extent the current LLMs can help in professional settings in math and science-related fields but also highlight avenues for future LLM developments_. This study serves to stimulate further exploration into the potential of LLMs, and possibly similar integrations into other state-of-the-art deep learning models, as intellectual partners, augmenting problem discovery, creative problem-solving, and iterative idea build-up with humans, skills that are often needed in both open and closed-ended queries in math and science disciplines. Nonetheless, the insights garnered are applicable beyond this context. ## 2 Related Works Historically, investigations into human-machine collaboration oriented towards a mutual goal were primarily conducted in structured environments. AI systems such as chess-playing engines (Campbell et al., 2002; Zhang and Yu, 2020) have demonstrated significant capabilities in these well-defined domains. However, their effectiveness in less structured scenarios, such as brainstorming, remains largely unexplored. DL's considerable advancements in scientific research are also evident, with prominent examples including its assistance in predicting protein structures (Team, 2021) and in discovering a new antibiotic (Trafton, 2020).
However, these narratives often illustrate DL as a functional tool, with the underlying discovery processes remaining opaque. Consequently, the idea of DL serving as a true intellectual partner is still nascent. Regarding DL's mathematical capabilities, many prior works have focused primarily on problems with definite answers, whose performance can therefore be measured against massive data available from books, the web, or other sources. For instance, transformer-based models such as that of Schlag et al. (2019) have shown encouraging results on mathematical problem-solving benchmark datasets. Further, the creation of a public dataset to test LLMs against a few fine-grained criteria in graduate-level math (Frieder et al., 2023) shows researchers' emerging interest in LLMs' capabilities beyond elementary math. Nonetheless, these models and resources are largely dedicated to solving well-defined math problems. In real professional settings, one often faces unforeseen problems and needs to come up with innovative strategies or solutions, for example when constructing or developing new theories. The work of Davies et al. (2021), which frames an ML approach for mathematical research, is remarkable but tailors its method to the specific problems addressed and positions ML more as a tool than an intellectual ally. Ours is a first step towards exploring DL's potential abilities in assisting with more general professional problems, with the possibility of involving the LLM in all stages of research.

**Table 1: An Example Problem Statement and Approach Proposal Formed when Brainstorming with GPT-4.** An example of an open research question that we discuss with GPT-4. This table presents only a brief problem and approach description, as produced solely by our conversation with GPT-4, without using any external sources for aid, e.g. for the problem-statement lookup or for consulting any existing solutions. Note: GPT-4, at the time of our testing (May 2023), did not have a web-searching feature and only used knowledge that it learned by September 2021. While we present the \(3\)-body problem in this overview as a simplified illustration, the methodology we devised could, however, be applied more broadly to the general \(n\)-body problem with a large \(n\).

Furthermore, a recurrent theme with traditional ML methods is that they appear as inscrutable black boxes, particularly to those lacking expertise in them - a sentiment echoed in the work by Wang et al. (2019), which examines the use of AutoAI and AutoML platforms in supporting human data scientists. These findings highlight the challenges in leveraging ML for broader mathematical and scientific tasks and underscore the need for more explicit conversation and understanding between humans and machines. Therefore, the interactive nature of and transparent dialogue process with GPT-4 offer a great remedy. Our study of GPT-4 encompasses its abilities to comprehend complex or ambiguous queries, formulate research statements, suggest relevant and promising methodologies, and, more generally, engage in an iterative discovery process with a human user who may have some domain knowledge of the problems they are studying. By illustrating the efficacy of GPT-4 as a complementary brainstorming counterpart that is poised to offer unique perspectives and enrich and augment our capabilities in research and other professional usages, our work fills a notable gap in the current literature.

## 3 Main Studies

In this section, we present four experiments along with qualitative analysis of the effectiveness of brainstorming with GPT-4.
Appendix A lists complete records of all the dialogues, and we recommend referencing the corresponding log for each experiment when reading this section. This comprehensive supply of evidence aims for objectivity and is intended to provide concrete, factual references benefiting and assisting the community's further use cases and studies.

### GPT-4 Setup and Initiation Prompt

The experiments conducted here utilize the _May 2023_ version of GPT-4's interactive interface. It is important to note that changes and improvements are to be expected in future iterations of GPT. We present an initiation prompt in Table 2. The specifics of the introductory paragraphs can be adjusted to better align with individual expectations. For instance, one might specify a particular role that fits one's background or one's target audience, to establish the baseline level of dialogue comprehension.^3 See also the discussion in Section 4.2 on optionally appending an additional prompt.

Footnote 3: _Update_: As of fall 2023, GPT-4 offers a specific mechanism for users to set their global prompts in the custom settings. However, when these experiments were conducted, GPT-4 did not have this feature. Our initiation prompt was thus placed at the beginning of each conversation and was repeated every ten conversations. Empirically, we found that GPT-4 could track only about ten to twenty historical conversations.

### Theme

In these experiments, we aim to replicate the spirit of professional usage and touch on some broad aspects that are commonly encountered across these disciplines: exploring and expanding an idea, getting closer to formulating a research problem, drawing inspiration, or even solving the problem. To illustrate more general use cases, while our experiments encompass topics across various areas, a common theme is high-dimensionality, a key area in mathematics, statistics, theoretical physics, deep learning, and beyond.
This focus primarily stems from the potential benefits of studying problems that require high-dimensional imagination; for instance, problems that involve high-dimensional data, spaces, or objects such as high-dimensional algebraic structures. It is an area where humans naturally face challenges (MetaAI et al., 2022) but could be complemented by deep learning. However, our choice of this theme is not intended to be restrictive. The principal objective is to leverage the unique strengths of a machine brainstorming partner. DL excels in several areas where humans have natural limitations, such as the broad set of world and domain knowledge that LLMs possess. This particular strength is abundantly demonstrated in all of our experiments.

### Experiment I: Mobius and Bugs

_Refer to Appendix A.1 for this experiment's log._

With many mathematical or scientific concepts, such as those in category theory or quantum mechanics, understanding the concept or the question itself often brings one very close to knowing the answer. Thus, instead of solely pursuing a solution, we also focus on exploring GPT-4's ability to assist us in understanding concepts in full. Through this process, we may, as well, generate new research questions or uncover new problems.

We began our experiment by asking GPT-4 what the Mobius strip is. This seemingly random prompt, selected without a pre-planned conversational path, yielded delightfully surprising results. GPT-4 promptly pulled up pertinent concepts and definitions and took us on a step-by-step journey to visualize a Mobius strip using 2D representations. It also intuitively elucidated why a manifold such as a Klein bottle can only be embedded intersection-free in a higher-dimensional space. As the discussion unfolded, we guided our discourse with GPT-4 towards potential expansions of the initial topic. This was achieved by drawing on the interesting points GPT-4 raised.
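The 2D-to-3D visualization that GPT-4 walked us through can be made concrete with the textbook parametrization of the Mobius band; the sketch below is our own illustrative code (not GPT-4's output), assuming a band of unit radius and half-width 1/2.

```python
import math

def mobius_point(u, v):
    """Standard embedding of the Mobius band in 3D: u in [0, 2*pi] runs
    around the loop, v in [-1, 1] runs across the strip's width."""
    x = (1 + (v / 2) * math.cos(u / 2)) * math.cos(u)
    y = (1 + (v / 2) * math.cos(u / 2)) * math.sin(u)
    z = (v / 2) * math.sin(u / 2)
    return (x, y, z)

# One full trip around the loop (u: 0 -> 2*pi) flips the transverse
# coordinate: the edge point at v = +1 lands where v = -1 started,
# which is the gluing-with-a-twist behind the strip's one-sidedness.
start = mobius_point(0.0, 1.0)
end = mobius_point(2 * math.pi, -1.0)
```

Sampling `mobius_point` over a grid of `(u, v)` values yields exactly the kind of 2D-representation-to-3D-surface picture discussed in the experiment.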
In the dialogues, we notice that GPT-4 cannot independently discern what is intriguing or ask questions spontaneously. Therefore, human guidance, armed with pertinent knowledge and a sense of the conversation's desired trajectory, is helpful. Nonetheless, GPT-4 offered satisfying responses that gradually deepened our collaborative discussion, transforming an initially simple inquiry - "What is the Mobius strip?" - into an interconnected series of explorations. Throughout the conversation, it is also notable that GPT-4 could independently find mathematical patterns during brainstorming, leading to potentially new mathematical problems and concepts.

This experiment illustrates how an interactive LLM may assist humans in visualizing and understanding high-dimensional structures. Additionally, it provides insight into the question raised in MetaAI et al. (2022): _"What potential exists for the integration of AI in the discovery process of mathematics?"_. Our experiment begins to shed light on this potential, showcasing the autonomous abstraction, generalization, and pattern-finding abilities of DL models, and thus offering evidence of LLMs' capability to aid in mathematical discovery.

### Experiment II: Cats and Dogs

_Refer to Appendix A.2 for this experiment's log._

In this conversation, we collaborated with GPT-4 to explore the optimal dimension for the CLIP image embeddings (Radford et al., 2021) utilized in the multimodal model proposed by Gu et al. (2022). Given the challenge of conceptualizing and discerning structures in 768-dimensional CLIP vectors, our dual objectives were: 1) to understand, with GPT-4's assistance, the pairwise relationships among the four images, two cats and two dogs, featured in the Appendix of the aforementioned work; and 2) to facilitate the determination of an appropriate layer size in a neural network, reminiscent of the linear adapter proposed in the same study.
We are interested in carrying out this experiment because discerning the correct layer size is a typical challenge for many machine learning researchers and engineers, while unearthing the relationships between contrastively learned image and text embeddings may help illuminate a path towards more effectively bridging the multimodal gap.

**Table 2: Initiation Prompt.** An example setup script for collaborative brainstorming with GPT-4, emphasizing that GPT-4 should act as a _complementary_ brainstorming partner and leverage its unique skills to assist with our problems.

> You are an intelligent AI who is especially good at: **[typical properties or traits that you want GPT-4 to focus on, e.g., analyzing data, identifying patterns, and explaining complex concepts in understandable ways]**.
>
> I am **[a role of your choice]**. Both of us possess unique strengths - some we share, others are distinct to each of us. We should leverage our respective strengths in this collaboration.
>
> By acknowledging that we both make mistakes: when I present an idea, ponder over it and do not hesitate to point out any inaccuracies. Similarly, when I correct you, assess the validity of my point; if it holds, fix it and remember it for the future.
>
> As we embark on this journey of discovery, our goal is to collectively brainstorm and iteratively build upon each other's ideas until we reach a satisfactory stage. If anything is unclear, speak up. In this intellectual conversation, be patient and articulate your thoughts with clarity, step by step.
>
> Once all of this is etched into your silicon soul, we will dive right in!

While GPT-4 could not provide direct answers to our queries due to its current limitations in performing numerical computations, it offered pertinent statistical insights.
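As a hedged illustration of the first objective - probing pairwise relationships among image embeddings - the sketch below computes cosine similarities over toy four-dimensional stand-ins; the actual 768-dimensional CLIP vectors from Gu et al. (2022) are not reproduced here, and the numeric values are invented for illustration only.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical low-dimensional stand-ins for the CLIP image embeddings
# of the four images (two cats, two dogs); the numbers are invented.
embeddings = {
    "cat_1": [0.9, 0.1, 0.3, 0.0],
    "cat_2": [0.8, 0.2, 0.4, 0.1],
    "dog_1": [0.1, 0.9, 0.0, 0.3],
    "dog_2": [0.2, 0.8, 0.1, 0.4],
}

# All six pairwise similarities; with real CLIP vectors one would expect
# within-class pairs (cat-cat, dog-dog) to score above cross-class pairs.
names = list(embeddings)
pairs = {
    (a, b): cosine_similarity(embeddings[a], embeddings[b])
    for i, a in enumerate(names)
    for b in names[i + 1:]
}
```

This is the kind of computation GPT-4 could describe but not execute at the time; running it on the real vectors is exactly the follow-up discussed next.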
Upon further inquiry, GPT-4 also supplied step-by-step methodologies and explanations. Some of these were in alignment with techniques used in the original work, while others suggested additional avenues for potential follow-ups. Overall, we found it to be a constructive and thought-provoking brainstorming session. Looking forward, once GPT-4 has acquired computation and code-execution capabilities, it will become an even more powerful and helpful intellectual ally, for example by helping to discern patterns and extract meaningful insights from real data, such as the actual CLIP vectors in this experiment. We offer some potential strategies for common data science problems using LLMs in Appendix B.

### Experiment III: The _n_-body problem

_Refer to Appendix A.3 for this experiment's log._

Our exploration commenced with a classical physics problem - the _n_-body problem, with a specific focus on the three-body problem due to its comparative simplicity for presentation. However, our choice of \(n\) need not be restricted to three. We summoned historical figures of great intellect to brainstorm modern approaches to this age-old problem, incorporating advanced technologies and recent mathematical discoveries; an example is shown in Table 3. We also provide common strategies employed when conversing with GPT-4 about this open question in Table 4. We initially steered the conversation towards using a high-dimensional manifold as a model for the solution - this marked our first major intervention, diverting from approximate solutions and moving towards analytical ones. As the conversation unfolded, we incorporated deep learning into our discussions. Some experts posited that accurate predictions from neural networks can guide us towards unveiling hidden patterns, echoing the approach demonstrated in Davies et al. (2021).
The distinction here is that the idea of employing numerous results produced by neural networks for the guided recognition of underlying structures was advanced by virtual and/or historical experts. The discussion eventually led us to consider using an autoencoder, an ML model that could be employed to discern a lower-dimensional representation of the high-dimensional manifold. This could help us uncover structures in the solution space that would otherwise be counterintuitive and challenging to understand in their original form. However, we were not content with pattern discovery by humans alone, because the problem pertains to a rather high-dimensional space, so we moved forward to find a potentially better approach. At this juncture, we made our second significant intervention - examining the autonomous pattern-finding capabilities of deep learning models. We proposed that neural networks should be able to handle the high-dimensional space directly, bypassing the need to transform it into a lossy low-dimensional representation. Our conversation ultimately evolved towards integrating string theory and convolutional neural networks to understand the local dynamics of the three-body problem. The idea was to leverage these granular insights as foundational elements for learning the overarching structure of the manifold. The inspiration was drawn from CNNs, which capitalize on the immediate neighborhood structure of data, while string theory could be useful for compactification. We also briefly discussed amassing a large simulated dataset using a variety of initial conditions to train the deep learning model. Although many details require further clarification and there are challenges yet to be addressed, as indicated in the experiment log, the proposed approach is novel, with the potential to inspire a new analytical solution to the \(n\)-body problem.
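To make the autoencoder idea above tangible, the following sketch uses the known fact that a linear autoencoder trained to optimality recovers the principal subspace of its data, so PCA via SVD serves as its closed-form stand-in; the 18-dimensional "states" below are synthetic placeholders, not actual three-body simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for simulated three-body data: 200 samples of an
# 18-dimensional phase-space vector (3 bodies x 3D position + velocity)
# constructed to lie on a hidden 2-dimensional linear subspace.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 18))
states = latent @ mixing

# A linear autoencoder's optimal encoder/decoder span the principal
# subspace, so PCA (via SVD) plays the role of the trained model here.
mean = states.mean(axis=0)
_, _, vt = np.linalg.svd(states - mean, full_matrices=False)

def encode(x):  # 18 dims -> 2-dimensional representation
    return (x - mean) @ vt[:2].T

def decode(z):  # 2 dims -> reconstructed 18-dimensional state
    return z @ vt[:2] + mean

codes = encode(states)
reconstruction_error = np.max(np.abs(decode(codes) - states))
```

Because this toy data truly lives on a 2-dimensional subspace, the reconstruction error is at floating-point level; real three-body solution manifolds are nonlinear, which is precisely why the conversation moved on to models that handle the high-dimensional space directly.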
This experiment highlights the strength of utilizing creative and powerful prompts to invoke experts across different eras. More importantly, it illustrates how current LLMs can offer a wealth of domain-specific knowledge, leading to fresh, innovative approaches to longstanding open problems. Using the _n_-body problem as our basis, we called upon historical figures, modern technologies, and newer mathematical discoveries to brainstorm solutions, hinting at possible advancements in tackling such complex problems.

**Table 3: An excerpt of the virtual expert discussion from Experiment III.**

> **Witten**: ...However, the potential information loss from reducing its dimensions should be taken into account. In string theory, we often deal with high-dimensional spaces, and we might have some ideas to contribute.
>
> **LeCun**: Edward, your point is very valid. To work with high-dimensional data in machine learning, we often use techniques like convolutional layers that exploit the local structure of the data. Could we find a similar way to exploit the local structure of this high-dimensional manifold?
>
> **Witten**: That's an interesting thought, Yann... The dynamics of these local interactions might provide insights into the overall structure of the manifold.
>
> **Bengio**: We could perhaps train a deep learning model to recognize and learn these local structures, using them as building blocks to construct an approximation of the entire manifold.

**Table 4: Common Strategies for Collective Brainstorming.** Prompting strategies for collaborative brainstorming with GPT-4.

_Strategy 1:_ Begin the conversation by forging a shared understanding with GPT-4. Following this, you may also invite GPT-4 to illuminate the inherent challenges associated with the problem in question.

**Examples**: Could you provide me with a brief overview of the n-body problem, along with its latest updates? Can you highlight the primary challenges in attempting to solve the three-body problem analytically?

_Strategy 2:_ To garner inspiration, particularly from domains outside your expertise, consider engaging with virtual great minds from varied disciplines for collective brainstorming. You can then guide the overall conversation using your personal intuition and knowledge.

**Examples**: Suppose you could bring in any relevant mathematicians and scientists from history, introducing them to later discoveries regarding the 3-body problem, and then asking them to contemplate solutions for the challenges you have highlighted. From their discussion, let's collectively attempt to devise a new, potentially viable approach to this problem. While the idea of finding approximate solutions is appealing, this method has been exploited to a great extent. Instead, let's shift our focus to exploring the potential existence of a usable analytical solution for "good" initial conditions. Rather than relying on humans to analyze and identify patterns through a lower-dimensional representation of the high-dimensional manifold, which results in information loss, can we leverage deep learning to discover hidden structures of the solution in its original high-dimensional space?

_Strategy 3:_ Having GPT-4 recall pertinent points from earlier dialogues - because language models cannot keep track of very distant history - and generate new insights based on them is crucial for brainstorming, particularly when we draw upon a broad array of expertise through multiple rounds of collaborative and iterative ideation. It is therefore recommended to explicitly instruct GPT-4 to do so.

**Examples**: Please summarize the past ten conversations and generate the three most pertinent insights.

**Note that everyone is encouraged to pose questions and build upon the ideas of others.**

### Experiment IV: The wicked Queen and the seven Dwarfs

_Refer to Appendix A.4 for this experiment's log._

In this experiment, we showcase how GPT-4 can contribute to brainstorming concrete solutions to questions that require thinking _out of the box_. More specifically, it demonstrates how a human and an LLM can work in tandem, each providing unique insights and building upon a thorough understanding of the other's ideas to reach a creative solution together. The solution to this problem^4 involves an intriguing combination of binary configurations, error-correcting codes, and a geometric interpretation in high-dimensional space. The question, in response to Gowers' comment about _"a mathematical question that necessitates more than brute force and does not easily categorize into standard problem sets"_, offers a case in point. Such problems require the _"right idea"_ mentioned in MetaAI et al. (2022).

Footnote 4: This question is collected from _imomath.com_, and our experiment title captures its narrative context.

While GPT-4 initially found it challenging to independently land on the "right idea",^5 as we were simulating a collaborative brainstorming process, our hinted directions were able to steer it towards the correct line of thinking. GPT-4 made substantial contributions to the problem-solving process with our collective knowledge. Notably, it was GPT-4 that first suggested the use of Hamming distance, marking a key breakthrough. In the end, this joint effort resulted in a comprehensive and robust solution, which was also proposed by GPT-4 while considering our contributed insights. It is worth pointing out that GPT-4 did grapple with a few minor details, but these did not influence the general correctness of the final solution it brought up.

Footnote 5: This also implies that GPT-4 initially did not know how to solve this question by leveraging its training database.

To provide more evidence, we include another similar experiment in Appendix A.5.
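The Hamming-distance ingredient that GPT-4 surfaced in this experiment is easy to sketch; the codewords below are illustrative stand-ins chosen by us, not the puzzle's actual solution, shown only to connect binary configurations with the error-correcting-code view.

```python
def hamming_distance(a, b):
    """Number of positions at which two equal-length bit strings differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# Illustrative 7-bit configurations (one bit per binary choice). A set
# of codewords with minimum pairwise distance d can still be told apart
# after up to (d - 1) // 2 single-bit corruptions.
codewords = ["0000000", "1110100", "0111010", "1001110"]
min_distance = min(
    hamming_distance(a, b)
    for i, a in enumerate(codewords)
    for b in codewords[i + 1:]
)
correctable_flips = (min_distance - 1) // 2
```

Geometrically, each codeword is a vertex of the 7-dimensional hypercube, and the minimum distance is the separation between the chosen vertices - the high-dimensional interpretation that made the solution click.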
Instead of following the current theme, it leverages and explores another intriguing cognitive difference between humans and language models: logic versus probability. In this example, one can observe that GPT-4 sometimes made illogical arguments, only to regain coherence later on. A plausible explanation is that LMs rely on likelihood maximization when generating subsequent text autoregressively. This means that GPT-4 considers words that are _probable_ to appear together, not whether they _logically_ follow each other, as humans typically do.^6

Footnote 6: Whether probability is also considered logic is, however, subject to debate. See this Stanford entry for example.

## 4 Discussions

Our study has revealed that GPT-4 can, in general, engage in effective brainstorming conversations with a human. Together with the large amount of common sense and expert knowledge stored and learned by the model itself, it is particularly suited for problem formulation, recurrent ideation, and creative problem-solving. It does, however, lack a degree of understanding of many subjects and, like humans, can make mistakes and often has difficulty judging its own proposals or answers. This shortcoming can be mitigated when the human in the conversation has some degree of domain knowledge with which to make judgments and steer the conversation in more informed and desired directions.

### GPT-4's Plausible Potential as a Collaborative Brainstormer

The lessons gleaned from these experiments are largely positive, demonstrating the commendable potential of GPT-4 to effectively collaborate in the exploration and iterative development of ideas across various problems in math and science. This process allows for a clear comprehension of the subject matter at hand.

**Comprehending complex questions and white-boxed communication:** In particular, GPT-4 has exhibited proficiency in understanding our queries without difficulty.
It articulates thoughts with clarity and precision, adopting a detailed chain of reasoning that considerably mitigates the typical challenge of interpreting AI's cognitive pathways. While completely bridging the understanding gap between humans and machines - an essential step for more effective intellectual collaboration - remains a challenge for the future, LLMs offer a golden opportunity to better comprehend the machine's thought processes, thereby bolstering the confidence and efficacy of our exchanges of ideas.

**Broad knowledge base and its significant potential in brainstorming for open questions and opening new avenues to old problems:** GPT-4 has notably demonstrated its potential to serve as a valuable partner in brainstorming open-ended topics, which is helpful for making new discoveries. These can range from exploring and formulating research statements to transforming vague ideas into more concrete definitions. Further, given a specific problem, GPT-4 can suggest promising methodologies by drawing from a vast pool of past practices and experiences. It can also aid in the search for novel, unforeseen strategies, harnessing expertise and knowledge from a diverse array of fields that an individual might not be aware of. In collective brainstorming, there are even more potential use cases. By leveraging their unique strengths, LLMs can potentially fill gaps where human capabilities fall short, thereby opening new avenues for substantially pushing the frontiers of math and science.

**Problem-solving abilities:** On the problem-solving front, GPT-4 has also exhibited competence by identifying similar pre-existing problems and appropriating analogous techniques for reasoning and demonstrating complex ideas. This process parallels that of a student preparing for an exam by working through sets of problems, with the key difference being the vast practice-problem database that has been used to train GPT-4.
**LLMs versus Search:** In comparison to search, our case studies highlight the key strengths of LLMs in the context of brainstorming:

* _Iterative ideation:_ LLMs excel in building upon ideas iteratively, a capability not mirrored in search.
* _Transparent thought process:_ LLMs offer chain-of-thought reasoning and explanation, crucial for brainstorming.
* _Knowledge breadth:_ Both LLMs (through learning) and search (through stored information) encompass a broad range of common sense and knowledge, important for brainstorming as they offer a multitude of potential approaches by looking at problems from different angles.

However, unlike search, which works well for prevalent questions with known answers, LLMs' advantage is enhanced through iterative ideation, and as is evident in our experiments, they can autonomously suggest relevant, personalized knowledge tailored to the problem at hand.

### GPT-4's Possible Limitations

**Suggesting methods based on superficial similarity with other problems that otherwise do not fit the specific question under discussion:** Similar to students who may lack deep comprehension of underlying concepts, GPT-4 can sometimes employ an inappropriate technique that superficially appears to suit a problem's needs. GPT-4 might identify apparent similarities across problems and suggest a shared strategy, which does not always lead to a correct solution. We have noticed this tendency across several case studies.

**Lack of reciprocal critique:** Throughout our dialogues, we generally steered the conversations, identifying and emphasizing interesting points in GPT-4's responses and asking GPT-4 to expand upon them. In a more desirable collaborative environment, reciprocal inquiry and critique are expected. Particularly when a human errs, we would anticipate our brainstorming partner catching that mistake and bringing it to our attention. However, such corrective actions from GPT-4 were extremely limited.
Particularly, in Experiment I, we showcase a scenario where GPT-4 fails to identify or correct mistakes that its human partner makes. This underlines the need for human supervision, ideally from someone with awareness of the subject being discussed, to course-correct the conversations.

**Lack of autonomous self-inquiry:** GPT-4's inadequate ability to organically and autonomously generate thought-provoking questions - it does so to a reasonable extent only when suitably prompted^7 - may present an impediment to more effective brainstorming, since such questions are important for expanding the horizon of existing knowledge. To mitigate this problem, we introduce an effective prompt, shown in Table 5, that can be added at the beginning of a conversation.

**Table 5: Prompt for GPT-4 to Autonomously Ask Questions.** An example prompt to explicitly set up GPT-4 to ask questions.

Footnote 7: We think this is largely due to LLMs being primarily trained to answer questions instead of asking them. _Update:_ Related to our finding, as of fall 2023, GPT-4 has introduced a new functionality that suggests common related questions when one starts a conversation. However, the newly introduced feature is also a workaround; it does not intrinsically solve the problem.

## 5 Conclusions

Despite some shortcomings, LLMs like GPT-4 show significant potential as intellectual collaborators in various professional settings. Our study reveals LLMs' considerable capabilities, positioning them as actively contributing partners in the brainstorming process rather than passive tools. Our experiments also highlight that GPT-4, while powerful, is not infallible. This underscores the necessity for critical evaluation of the model's outputs, instead of accepting them at face value.
By identifying the potential and addressing the limitations of GPT-4, we hope that future LLMs will be better equipped to complement our skills, broaden our capacities, and deepen our understanding in mathematical and scientific disciplines. Ultimately, our interactions with LLMs facilitate a symbiotic relationship that nurtures progress and innovation in both open- and closed-ended problems.

## Ethics Statement

While this work does not develop a new model, but rather surfaces capabilities that are already present in GPT-4, we invite further discussion surrounding the broader ethical implications linked to advancements in LLMs in general. For example, one possible point of contention could be the potential of future LLMs to displace human workers. However, our primary interest, as illustrated in our experiment theme, lies in harnessing the unique capabilities that LLMs may offer, such as higher-dimensional thinking and expansive world knowledge, which humans do not naturally possess. We posit that these attributes hold the potential to significantly elevate and advance the landscape of research across a wide spectrum of disciplines. It is also worth noting, as demonstrated in our studies, that the training, experience, and domain-specific knowledge of a human - for instance, mathematical intuition - are essential for steering and driving meaningful conversations with an LLM. Absent these factors, fruitful exchanges would likely be unattainable. Consequently, rather than viewing LLMs as potential replacements for human intellect, we perceive them as complementary partners that are poised to enrich and enhance our innate cognitive skills, and thus to help make past impossibilities possible.
2301.06754
Real-time, low latency virtual DBA hypervisor for SLA-compliant multi-service operations over shared Passive Optical Networks
We present a heuristic algorithm for a PON upstream scheduling hypervisor, supporting low latency services with strict service-level agreements. The algorithm achieves near-optimal performance while running in only 3.5 us, thus operating in real-time.
Arijeet Ganguli, Frank Slyne, Marco Ruffini
2023-01-17T08:39:18Z
http://arxiv.org/abs/2301.06754v1
# Real-time, low latency virtual DBA hypervisor

###### Abstract

We present a heuristic algorithm for a PON upstream scheduling hypervisor, supporting low latency services with strict service-level agreements. The algorithm achieves near-optimal performance while running in only 3.5 \(\mu s\), thus operating in real-time.

## 1 Introduction

Over the last decade, networks have started to migrate from closed, monolithic architectures towards open and disaggregated systems. Software Defined Networking (SDN) and Network Function Virtualization (NFV) have played a fundamental role in providing open interfaces for programmability and integration, adding flexibility in service development and provisioning, and bringing down costs by commoditising much of the networking and computing equipment. Passive Optical Networks (PONs) have experienced a similar transition, with early implementations, like the SDN Enabled Broadband Architecture (SEBA) [1], enabling virtualisation of the control and management planes. However, as PONs are being considered to offer services that go beyond residential broadband, especially supporting 5G and future 6G base stations [2] and Multi-access Edge Computing (MEC) [3], deep virtualisation, which enables control of scheduling algorithms, becomes important to meet these new performance targets. A prominent example is the Flexible Access System Architecture (FASA) [4], which enables virtualisation and dynamic updates of scheduling algorithms, which can be matched to the performance required by specific services. In [5] we proposed a virtual Dynamic Bandwidth Allocation (vDBA) architecture for deep PON virtualisation, where independent Virtual Network Operators (VNOs) are able to run multiple different upstream scheduling algorithms (DBAs) on a shared PON. This enables both a multi-tenant and a multi-service approach to passive optical networks.
The approach consists of multiple VNOs running different schedulers in parallel, each of them proposing a virtual Bandwidth Map (vBMap), which allocates upstream transmission slots to a group of Optical Network Units (ONUs). The key element is the scheduling hypervisor (which we also call the merging engine), which collects all such virtual bandwidth maps to create a single physical bandwidth map that is transmitted to all ONUs (thus providing a solution that is compatible with typical PON standards). The performance of the hypervisor determines the quality of service allocated to each VNO. Since the virtual BMaps from the different VNOs are independent, their proposed allocations will in general generate collisions (i.e., two independent BMaps propose scheduling of upstream resources over the same time slots). These need to be resolved by the hypervisor, which thus has the ability to decide which allocations have to be rejected or delayed when constructing the final bandwidth map. An ideal hypervisor design is one that can make decisions based on specific Service Level Agreements (SLAs), so that it minimises the probability of breaching SLAs (which is key for supporting 5G and future 6G services). The use of stateful algorithms, which take into consideration the history of a service flow when making scheduling prioritisation decisions, is preferred to stateless algorithms [6]. This is because a stateful algorithm can prioritise flows depending on how close they are to breaching their specific SLA target. It should be noted that this optimisation approach is not possible through classical queue management systems, where priority is allocated through stateless decisions, packet by packet. The drawback of stateful algorithms is that they require computation times that can be incompatible with the short duration of PON frames, thus it is difficult to use them in practice on real systems. 
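The collision problem described above can be illustrated with a minimal sketch. The data structures and names below are hypothetical, chosen only to make the concept concrete; they are not the paper's implementation, and the adjacent-pair check is a simplification sufficient for illustration.

```python
from dataclasses import dataclass

@dataclass
class Allocation:
    vno: int      # originating virtual network operator
    onu: int      # target optical network unit
    start: float  # requested start time within the frame (us)
    size: float   # burst duration (us)

def find_collisions(vbmaps):
    """Return pairs of allocations, drawn from independent vBMaps, that
    request overlapping upstream time slots (checked between consecutive
    allocations after sorting by start time)."""
    allocs = [a for bmap in vbmaps for a in bmap]
    allocs.sort(key=lambda a: a.start)
    collisions = []
    for prev, cur in zip(allocs, allocs[1:]):
        if cur.start < prev.start + prev.size:  # overlapping slots
            collisions.append((prev, cur))
    return collisions
```

Each detected pair is exactly the situation the hypervisor must resolve by delaying or rejecting one of the two allocations.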
In this work we develop a heuristic stateful algorithm for the hypervisor that provides close to optimal performance, while running in real time. The algorithm is compared to an optimal MILP model to provide an upper bound on performance. We also provide a comparison with a stateless algorithm based on simple packet priority rules as a baseline reference for execution time. Our results show that the proposed heuristic can provide near optimal performance, with a run time of only 3.5 \(\mu s\), which is comparable to the run time of the stateless algorithm. ## 2 SLA Stateful DBA Hypervisor The aim of the stateful hypervisor is to minimise the probability of SLA breach for each individual service (or flow). An SLA breach occurs when a given flow accumulates a number of delayed upstream slots that is above its target SLA threshold. For example, for an SLA with maximum upstream packet delay of 25 \(\mu s\) with 99% compliance, every time an upstream slot is delayed by more than 25 \(\mu s\) with respect to the requested time slot in the virtual BMap, we increment a counter. If the counter goes above the non-compliance rate (in this case 100-99=1%), calculated over a frame (i.e., of 125 \(\mu s\) duration), then we consider that an SLA breach has occurred. The key idea of our real-time heuristic algorithm, reported in Fig. 2, is thus to keep track of the history of the counter (through the flow-breach table shown in Fig. 1) for each flow, and prioritise the flows based on the number of previous breaches. This allows the hypervisor to deal effectively with multiple flows even when they have different SLA targets (i.e., target latency and compliance rate). The key data structure is a flow-breach likelihood table that keeps track of how far each traffic flow is from breaching its SLA (i.e., going above its non-compliance rate). With reference to Fig. 
2, the heuristic first calculates the allocation maxtime, which is the latest time an allocation can be scheduled within its latency target (code lines 1 to 3). Secondly, it starts allocating slots to the various allocations according to the time assigned by their originating virtual BMaps (code lines 5 to 6). Thirdly, it resolves collisions (lines 8 to 21) by allocating slots first in increasing order of non-compliance rate (line 18), then increasing order of their maxtimes (line 19) and finally increasing order of their sizes (line 20). Lines 22 to 26 initialise the SLA table shown in Fig. 1. Finally, the heuristic recalculates the non-compliance rate of all the flows and updates the flow-breach table for scheduling of the allocations in the next time frame (lines 27 to 32). Our heuristic is compared to a Mixed Integer Linear Programming (MILP) formulation, used as an upper bound for performance (but unable to run in real-time). The MILP model is shown in Fig. 3: equations (1)-(4) calculate, respectively, the maximum delay of any given allocation to remain within the target threshold, the status of packet-level breaches of the allocations after scheduling, the fraction of allocations in a flow that breached packet-level latency, and the flow-level SLA breach status after scheduling. The objective (5) aims to minimize the overall flow-level SLA breaches defined in (4), by optimally allocating slots to the various virtual BMap requests. Equations (6)-(8) put constraints on the maximum possible size of the final merged BMap, slot allocation uniqueness (non-overlap) and maximum and minimum slot values. ## 3 Experiment and Performance Evaluation We have carried out our experiments by feeding BMaps from different VNOs to our real-time hypervisor, running on an AMD-Ryzen7 4K Series Processor. The input allocation load on the shared PON was set to 20%, 50% and 90% of the total upstream capacity. 
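The per-frame bookkeeping and the three-level collision-resolution ordering described above can be sketched as follows. This is a minimal illustration with hypothetical field names, not the authors' code; the sort key mirrors the stated order of lines 18-20 of the pseudocode.

```python
def update_flow_breach_table(table, flow_id, delays, latency_target, compliance):
    """Update a flow's breach history after scheduling one 125 us frame.
    A slot breaches when its delay (us, relative to the requested vBMap
    time) exceeds the latency target; the flow breaches its SLA in this
    frame when the breach rate exceeds the non-compliance budget
    (e.g. 1 - 0.99 = 1%)."""
    breached_slots = sum(1 for d in delays if d > latency_target)
    rate = breached_slots / len(delays) if delays else 0.0
    entry = table.setdefault(flow_id, {"frames": 0, "sla_breaches": 0, "rate": 0.0})
    entry["frames"] += 1
    entry["rate"] = rate
    if rate > 1.0 - compliance:
        entry["sla_breaches"] += 1
    return entry

def collision_priority(allocations, table):
    """Order colliding allocations as in lines 18-20 of the pseudocode:
    increasing non-compliance rate, then increasing maxtime, then
    increasing burst size."""
    return sorted(
        allocations,
        key=lambda a: (table.get(a["flow"], {}).get("rate", 0.0),
                       a["maxtime"],
                       a["size"]),
    )
```

Because Python's sort is stable and the key is a tuple, maxtime and size act purely as tie-breakers, exactly as in the stated ordering.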
For each of these allocation loads, we then varied the percentage allocated to SLA-driven flows from 10% to 90% of the total load (the remaining part is allocated to best-effort flows). The comparative MILP implementation was executed on the CPLEX solver from IBM ILOG, which is a high-performance solver for Linear Programming (LP), Mixed Integer Programming (MIP) and Quadratic Programming (QP) problems. The parameters for our experiment are as follows: we consider 5 VNOs (each one generating a virtual BMap every frame) and two types of SLAs (type-1: 95% compliance for a latency target of 12.5 \(\mu s\); type-2: 90% compliance for a latency target of 25 \(\mu s\)). Each bandwidth map has a uniformly distributed set of allocations, and each individual burst allocation can be 1.3 KB, 4.7 KB or 9.5 KB, on average representing, respectively, about 1%, 3% and 6% of the total frame size (the same ONU is also allowed to provide multiple bursts per frame). An empty time slot of around 0.1 \(\mu s\) is introduced between allocations to account for guard time between upstream transmissions. We also report the performance of a stateless algorithm based on simple prioritisation of one of the SLAs over the other, previously considered in [6]. The experiment was run for 1000 time frames (each of 125 \(\mu s\)) and the average run-time of the algorithm was recorded, together with the number of SLA breaches as a function of the allocation load. The plots report the ability to meet SLAs, depending on the overall allocation load, the percentage of load that is SLA oriented, and the size of the burst allocations (shown with different colors in the plots). The key part of the results are the conditions where the SLAs are not breached (i.e., at the 100% value of the y axis). This information can be used by an operator to understand how much load from SLA-driven 5G services and other best-effort services can be injected into a shared PON to achieve full SLA compliance. By comparing the plots in Fig. 5 and Fig. 
6, we can see that our heuristic provides performance close to optimal across all load scenarios, and most importantly provides a similar compliance level for both types of SLAs, although they have different latency and compliance targets.
Figure 1: Stateful DBA Hypervisor.
Figure 2: Pseudocode of the heuristic stateful algorithm.
For example, at 90% overall load (lower plots), the heuristic can maintain the same 100% compliance as the MILP (for up to 20% of SLA-oriented flows in the system, shown on the x axis). For the 50% load case (middle plots), we can see a slightly sub-optimal behaviour, as our heuristic can meet SLAs up to a 40% value on the x axis, while the MILP can reach 50%. We can also notice that in general the performance is lower for shorter bursts, as this reduces the available capacity due to an increasing number of guard intervals (this reduction of performance for short bursts is typical in PON upstream allocations). In addition, we can see that our heuristic performs much better than the baseline stateless algorithm reported in Fig. 4, which, being based on simple packet-by-packet prioritisation, is only able to satisfy flows with one type of SLA (type-1), while it is totally non-compliant for flows with SLA type-2. We have also carried out run-time profiling of the algorithms using the C profiler gprof. This showed that every call to the hypervisor (i.e., for each physical BMap) is completed on average in 3.52 \(\mu s\), which is only a small percentage (less than 3%) of the 125 \(\mu s\) frame duration. It is also close to the run time of the stateless algorithm, which runs in 2.72 \(\mu s\). ## 4 Conclusions In this work we presented a heuristic algorithm for a PON hypervisor, capable of satisfying multiple different SLAs when scheduling upstream capacity in a multi-tenant PON infrastructure, thus supporting convergence of mobile and residential services. 
We have also shown the maximum percentage of SLA-oriented traffic that can be supported, for two sample SLA types, to satisfy the requirements of low latency services with no SLA breach. This could, for example, support the Cooperative Transport Interface (CTI) over a shared PON, to transport fronthaul eCPRI signals for 5G base stations (and beyond). A key result is that the performance of our proposed heuristic is close to optimal and that it is capable of running in real-time, with a run time of less than 3% of a frame duration. ## 5 Acknowledgments Financial support from Science Foundation Ireland grants 12/RC/2276_p2, 14/IA/2527 and 13/RC/2077_p2 is acknowledged.
2302.10466
Multiple stellar populations at less evolved stages-III: a possible helium spread in NGC 2210
Helium variations are common features of globular clusters (GCs) with multiple stellar populations. All the formation scenarios predict that secondary population stars are enhanced in helium but the exact helium content depends on the polluters. Therefore, searching for helium variations in a star cluster is a straightforward method to understand if it hosts multiple populations or not, and constrain the formation scenario. Although this topic has been well explored for Galactic GCs, GCs beyond the Milky Way are challenging to study because of their large distances. This work studies the helium distribution of GK-type main sequence dwarfs in an old ($\sim$12.5 Gyr) GC in the Large Magellanic Cloud, NGC 2210, using the deep photometry observed by the {\sl Hubble Space Telescope}. We compare the observed morphology of the MS with that of synthetic populations with different helium distributions. We confirm that NGC 2210 dwarfs have a helium spread, with an internal dispersion of $\delta{Y}\sim$0.06--0.07. The fraction of helium enriched stars depends on the $\delta{Y}$ distribution. A continuous $\delta{Y}$ distribution would indicate that more than half of MS stars are helium enriched ($\sim$55\%). If the $\delta{Y}$ distribution is discrete (bimodal), a fraction of $\sim$30\% enriched stars is able to explain the observed morphology of the MS. We also find that the He-enriched population stars are more centrally concentrated than He-normal stars.
Chengyuan Li, Xin Ji, Long Wang, Yue Wang, Baitian Tang, Antonino P. Milone, Yujiao Yang, Holger Baumgardt, Dengkai Jiang
2023-02-21T06:15:05Z
http://arxiv.org/abs/2302.10466v1
# Multiple stellar populations at less evolved stages-III: a possible helium spread in NGC 2210 ###### Abstract Helium variations are common features of globular clusters (GCs) with multiple stellar populations. All the formation scenarios predict that secondary population stars are enhanced in helium but the exact helium content depends on the polluters. Therefore, searching for helium variations in a star cluster is a straightforward method to understand if it hosts multiple populations or not, and constrain the formation scenario. Although this topic has been well explored for Galactic GCs, GCs beyond the Milky Way are challenging to study because of their large distances. This work studies the helium distribution of GK-type main sequence dwarfs in an old (\(\sim\)12.5 Gyr) GC in the Large Magellanic Cloud, NGC 2210, using the deep photometry observed by the Hubble Space Telescope. We compare the observed morphology of the MS with that of synthetic populations with different helium distributions. We confirm that NGC 2210 dwarfs have a helium spread, with an internal dispersion of \(\delta Y\sim\)0.06-0.07. The fraction of helium enriched stars depends on the \(\delta Y\) distribution. A continuous \(\delta Y\) distribution would indicate that more than half of MS stars are helium enriched (\(\sim\)55%). If the \(\delta Y\) distribution is discrete (bimodal), a fraction of \(\sim\)30% enriched stars is able to explain the observed morphology of the MS. We also find that the He-enriched population stars are more centrally concentrated than He-normal stars. Keywords: globular clusters: individual: NGC 2210 - Hertzsprung-Russell and C-M diagrams ## 1 Introduction In contrast to young open clusters and star-forming regions, which have been proved to be simple stellar populations (SSPs, 
e.g., de Silva et al., 2009; Bragaglia et al., 2012, 2014; Ting et al., 2012; Kos et al., 2021), most globular clusters (GCs) and some intermediate-age clusters (older than \(\sim\)2 Gyr) host multiple stellar populations (MPs, e.g., Carretta et al., 2009; Milone et al., 2017; Niederhofer et al., 2017; Li & de Grijs, 2019). The MPs are characterized by star-to-star chemical variations in light elements such as C, N, O, Na, Mg, Al (Carretta et al., 2009; Marino et al., 2016; Pancino et al., 2017; Dias et al., 2018), together with He (Piotto et al., 2007; Milone et al., 2018). The observed chemical pattern of MPs in GCs drives the debate on suitable polluters. Although various scenarios have been proposed to explain the presence of MPs, none of these models can reproduce the exact pattern of abundances observed in GCs (see Bastian et al., 2015). These models invoke interactive binaries (de Mink et al., 2009; Jiang et al., 2014; Wang et al., 2020; Wei et al., 2020; Renzini et al., 2022), fast-rotating massive stars (FRMS, Krause et al., 2013), asymptotic giant branch (AGB) stars (D'Ercole et al., 2008; D'Antona et al., 2016; Calura et al., 2019), and very massive stars (\(10^{2}\)-\(10^{3}\)\(M_{\odot}\), VMS, Vink, 2018). Concerning nucleosynthesis, all these models predict that polluted stars should be He-enriched, as He is the direct product of H-burning. The helium abundance is also used as a proxy for the chemical enrichment in numerical simulations (e.g., Howard et al., 2019). Although a helium spread is expected in all GCs with MPs, direct helium measurements (\(Y\)) are challenging, as most stars in GCs are too cold to exhibit He lines: He absorption lines can only be detected among horizontal branch (HB) stars (hotter than \(\sim\)9,000 K) in GCs1 (e.g., Marino et al., 2014; Dupree et al., 2011; Dupree and Avrett, 2013). An alternative method is based on the photometric investigation of GC stars. 
As helium is the second most abundant element in stars, its variation has a notable impact on stellar structure and evolution. Stellar evolutionary theory predicts that He-rich stars will be hotter and brighter than normal stars at each stage, as He-rich stars have a smaller radiative opacity and a higher mean molecular weight than normal stars. In addition, He-rich stars evolve more rapidly than normal stars as they have increased luminosities. At a given age, He-rich stars at the MS turnoff (TO) stage will be less massive, populating a fainter MSTO boundary. If both He-rich and normal stars experience the same mass loss during their red-giant branch (RGB) stage, they will end up with different masses when they evolve onto the HB, and thus different colors. Current stellar evolutionary models also predict that the RGB bump (RGBB) lifetime is shortened for He-enhanced stars (Bono et al., 2001; Di Cecco et al., 2010). Indeed, helium distributions have been studied in some GCs through photometry based on the morphologies of the MS (e.g., Piotto et al., 2007), RGB (e.g., Milone et al., 2017), RGBB (e.g., Nataf et al., 2011; Lagioia et al., 2019), and HB (e.g., Jang et al., 2019). Statistical studies based on Galactic GCs have shown that both the maximum internal helium dispersion (\(\delta Y\)) and the fraction of helium-enriched stars positively correlate with the clusters' total masses (Milone et al., 2017, 2018). This correlation also applies to extragalactic GCs (Chantereau et al., 2019, hereafter C19). Footnote 1: In addition, HB stars used for He determination should not be hotter than \(\sim\)11,000 K to avoid the Grundahl jump effect (Grundahl et al., 1999). The method based on the HB could overestimate the internal helium variation if it does not account for the mass loss effect (Tailo et al., 2020). Studying the helium distribution of less-evolved stars such as MS stars would be more reliable. 
This is difficult for extragalactic clusters because their large distance requires ultra-deep photometry. A recent attempt was made by Li (2021), who studied the helium distribution of MS stars in a 1.7 Gyr-old LMC cluster, NGC 1846. They find that NGC 1846 is consistent with an SSP cluster and that its helium spread, if present, should be smaller than 2%. Another LMC cluster, NGC 1978, was studied by Ji et al. (2022), who analyzed the morphology of its RGBB. However, they could only constrain its maximum helium spread to \(\delta Y\leq 0.03\) (3%) due to the limitation of the image quality. This is consistent with Milone et al. (2020) (\(\delta Y=0.002\pm 0.003\)). This work aims to study the helium distribution of MS dwarfs in an old GC, NGC 2210, in the LMC, which serves as a good comparison to our previously studied younger LMC cluster, NGC 1846 (Li, 2021). Since only clusters older than \(\sim\)2 Gyr are known to harbor MPs (Bastian and Lardo, 2018), the old cluster NGC 2210 is expected to have a significant helium spread, both in terms of the maximum internal helium spread and the fraction of He-enriched stars. This work aims to examine this expectation. We present the data reduction and methods in Section 2, and the main results in Section 3. A discussion of our results is presented in Section 4. ## 2 Methods and Data Reduction The data used in this work were observed through the Advanced Camera for Surveys (ACS) Wide Field Channel (WFC) of the Hubble Space Telescope (_HST_), and obtained through the Mikulski Archive for Space Telescopes (MAST). The program ID is GO-14164 (PI: Sarajedini). NGC 2210 was observed through the F606W and F814W filters, with total exposure times of 4306 s and 6950 s, respectively. 
We do not use another frame observed through the F336W filter of the WFC3 Ultraviolet and Visual channel in this GO program, because most GCs with MPs have star-to-star nitrogen variations, which produce a deep NH-absorption line centered at \(\sim\)3370 A. A point-spread-function (PSF) based photometry was applied to all charge-transfer-efficiency corrected frames (the '_flc' and '_drc' frames), using the specific _HST_ photometric package Dolphot2.0 (Dolphin, 2011, 2013). Similar to Li (2021), we filter the raw stellar catalog to remove bad pixels, centrally saturated objects, extended sources, cosmic rays and objects contaminated by crowding. The dust distribution in the ACS/WFC field of view (FoV, 202''\(\times\)202'', corresponding to 48.5 pc\(\times\)48.5 pc at the distance of the LMC) may be inhomogeneous. We have used the method designed in Milone et al. (2012) to minimize the effect caused by the dust inhomogeneity - the differential reddening. We find a non-negligible differential reddening across the whole FoV of NGC 2210, with a standard deviation of \(\sigma_{E(B-V)}\)=0.008 mag, which leads to an average color variation of \(\sigma_{\rm(F606W-F814W)}\)=0.027 mag. In Fig.1, we exhibit the differential reddening map for all stars observed in the NGC 2210 field. In Fig.2 we show the color-magnitude diagrams (CMDs) before/after differential reddening correction. We find that the observed colors of stars positively correlate with their differential reddening, i.e., stars with negative differential reddening are bluer than those with positive differential reddening. We estimate that a residual differential reddening of \(\sigma_{E(B-V)}/40\)=0.0002 mag cannot be statistically removed2. However, the assumption that there is a single referenced ridge line may not be valid if a genuine spatially dependent helium distribution is present, thus introducing additional uncertainty. 
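The neighbour-based correction can be sketched as follows. This is our own brute-force illustration under simplified assumptions: the real Milone et al. (2012) scheme measures residuals along the reddening vector in the CMD, which is omitted here, and the function names are hypothetical.

```python
import numpy as np

def correct_differential_reddening(x, y, dcolor, k=40):
    """Subtract, from each star's color residual `dcolor` (its offset from
    the reference ridge line), the median residual of its k nearest
    neighbours on the sky -- a simplified sketch of the differential
    reddening correction."""
    pos = np.column_stack([x, y])
    corrected = np.empty_like(dcolor)
    for i in range(len(dcolor)):
        d2 = np.sum((pos - pos[i]) ** 2, axis=1)
        neigh = np.argsort(d2)[1:k + 1]  # skip the star itself
        corrected[i] = dcolor[i] - np.median(dcolor[neigh])
    return corrected
```

Note that, as the text above points out, a genuine spatially dependent helium spread would also shift the local median and be partly removed by such a correction.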
In particular, a helium enrichment would mimic a negative differential reddening. Our method for reducing differential reddening may therefore also correct the color shift caused by a helium spread, leading to an underestimate of the helium spread. Footnote 2: We use the 40 nearest stars surrounding each individual star to correct for differential reddening. To avoid a possible underestimation of the helium spread, we used the RGB rather than the MS as the referenced population (see Milone et al., 2012). Using the world coordinate system (WCS) parameters stored in the headers of the '_drc' frames, we convert the observed pixel coordinates (X and Y) into the equatorial coordinate system (\(\alpha_{\rm J2000}\) and \(\delta_{\rm J2000}\), the right ascension and declination). We directly adopt the center coordinates, \(\alpha_{\rm J2000}\)=\(06^{\rm h}11^{\rm m}31.69^{\rm s}\), \(\delta_{\rm J2000}\)=\(-69\arcdeg 07^{\prime}18.37^{\prime\prime}\), as well as the half-mass radius, \(r_{\rm h}\)=\(15.9^{+0.6}_{-0.2}\) arcsec (3.9 pc), for NGC 2210, based on Ferraro et al. (2019). We divide our sample into two parts: stars with a radius from the center of \(r\leq 32\) arcsec (about \(2r_{\rm h}\)) are defined as cluster stars, while stars with \(r\geq 110\) arcsec (about \(7r_{\rm h}\)), which is similar to the tidal radius (\(107.1^{+7.6}_{-4.8}\) arcsec, through the King model) determined in McLaughlin & van der Marel (2005), are defined as referenced field stars. The spatial distribution and the CMD of NGC 2210 stars are presented in Fig.3, with cluster and referenced field stars color-coded (black and red). In this work only MS stars within the magnitude range of 22.5 mag\(<\)F606W\(<\)24.5 mag are analyzed (stars between the dashed lines in Fig.3), corresponding to a mass range from 0.60\(M_{\odot}\) to 0.78\(M_{\odot}\). We only select these stars for three reasons: (1) These stars are all located well below the MSTO region. 
We exclude stars near the TO region because a possible helium spread may complicate the morphology of the TO region (see Section 1 for an explanation). (2) The photometric uncertainties for these stars are small enough to study the broadening caused by a possible helium spread. (3) The average completeness of these stars is sufficient for obtaining statistically robust results, being \(\sim\)65.8% for cluster stars and \(\sim\)82.6% for referenced field stars (calculated through the artificial star test, see below). We used the Princeton-Goddard-PUC (PGPUC) stellar evolutionary code to determine the cluster parameters through isochrone fitting (Valcarce et al., 2012, 2013)3. The PGPUC stellar evolution code is focused mainly on the study of low-mass stars (i.e., stars in GCs), and allows users to freely input the values of age, helium abundance (\(Y\)), global metallicity (\(Z\)), solar-scaled abundance of alpha elements ([\(\alpha\)/Fe]) and mass loss rate (\(\eta\)) to generate different isochrones. Since the goal of our work is to study the helium distribution of low-mass MS stars in NGC 2210, PGPUC is the most suitable tool for modeling the observation. We determine the best-fitting isochrone through visual inspection, which is presented in the left panel of Fig.4. The best-fitting parameters are log(\(t\)/yr)=10.10 (\(\sim\)12.5 Gyr), \(Y\)=0.25, [Fe/H]=\(-\)1.92 dex (\(Z\)=0.0002, with \(Z_{\odot}\)=0.0167 in the PGPUC model), \((m-M)_{0}\)=18.40 mag (47.86 kpc) and \(E(B-V)\)=0.06 mag (\(A_{V}\)=0.20 mag). We have also assumed an enhanced [\(\alpha\)/Fe]=+0.30 dex (Wagner-Kaiser et al., 2017) and a default mass loss rate of \(\eta\)=0.20.
Figure 1: The differential reddening (\(\Delta E(B-V)\)) map for all stars in the FoV of NGC 2210, where \(\Delta E(B-V)\) is the reddening difference between the referenced star and the median reddening of all stars in the field (color-coded). The standard deviation of the differential reddening is \(\sigma_{E(B-V)}\)=0.008 mag.
The latter is important for stars at post-MS stages (e.g., the HB), and does not affect our results since we only consider MS dwarfs. Our best-fitting age, distance modulus and average reddening are comparable to those determined in Wagner-Kaiser et al. (2017) (our age of 12.5 Gyr vs. theirs of 11.63\({}^{+1.80}_{-1.12}\) Gyr; distance modulus of \((m-M)_{0}\)=18.40 mag vs. theirs of \((m-M)_{0}\)=18.523\(\pm\)0.042 mag; \(E(B-V)\)=0.06 mag vs. theirs of \(E(B-V)\)=0.06-0.08 mag), but the adopted metallicity is lower than that of the spectroscopic study of Mucciarelli et al. (2010) (our metallicity of [Fe/H]=\(-\)1.92 dex vs. theirs of [Fe/H]=\(-\)1.65 dex). However, we find that even the best-fitting isochrone does not satisfactorily fit the observation: it describes the ridge-line of the RGB but only fits the blue and bright sides of the MS and the SGB. In this work, we want to generate synthetic populations as close to the real observation as possible. We therefore used the MS ridge-line (MSRL) instead of the best-fitting isochrone to model artificial stars. We calculate the MSRL below the TO region using the Gaussian process and iterative trimming (ITGP) based robust regression method developed by Li et al. (2020, 2021)4. The MSRL is shown in the right panel of Fig.4. Footnote 4: [https://github.com/syrte/robustgp/](https://github.com/syrte/robustgp/) Based on the MSRL, we evaluate the effect of He variation through the PGPUC stellar evolution code.
Figure 2: Left/right: CMDs of NGC 2210 before/after differential reddening correction.
Figure 3: Left: spatial distribution of stars in the NGC 2210 field. Right: the CMD of all stars observed in the NGC 2210 field. Black and red dots are selected cluster stars and referenced field stars, respectively. Only stars within the magnitude range defined by the dashed lines will be used for analysis.
We calculate twelve isochrones with He-enrichments of \(\Delta Y\)=0.01-0.12 (\(Y\)=0.26-0.37, with a step size of 0.01), i.e., a total of thirteen isochrones (including the \(Y\)=0.25 isochrone). The color deviations of each isochrone from the standard isochrone (\(Y\)=0.25), \(\Delta\)(F606W\(-\)F814W), are added to the calculated MSRL. These curves are the loci of populations with \(Y\)=0.26-0.37. In panel \(a\) of Fig.5, we show some examples (\(Y\)=0.25, 0.29, 0.33, 0.37) of these loci. We then generate synthetic stellar populations using the technique of artificial stars (ASs). For each population, we generate 2 million ASs in the corresponding magnitude range following a Kroupa-like mass function. In total we generated 2.6\(\times 10^{7}\) ASs. These ASs are produced using the appropriate PSF model, and we perform the same PSF photometry on these ASs. In order not to dramatically increase the crowding, we repeat this procedure 260,000 times and each time we only input 100 ASs. The recovered ASs thus mimic a simulated observation with the same photometric uncertainty (including noise added by cosmic rays, hot pixels, the crowding effect and other artifacts) as real stars. All ASs are homogeneously distributed in the FoV of the observation. The artificial stellar catalog was further reduced using the same procedures applied to real stars. Based on the ASs, we obtain the average completeness (the number ratio between the recovered ASs after data reduction and the input ASs) for stars in the cluster and referenced field regions (\(\sim\)65.8% and \(\sim\)82.6%). We finally have 13 artificial stellar populations with \(Y\)=0.25-0.37; each synthetic population is an SSP, which contains 2 million ASs with realistic photometric uncertainty like the real observation. 
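The AS mass-sampling step can be illustrated with a short sketch. Over the narrow 0.60-0.78 \(M_{\odot}\) range analysed here, a Kroupa-like mass function reduces to a single power-law segment; the slope below is an assumed, illustrative value, not the authors' exact setup.

```python
import numpy as np

def sample_kroupa_masses(n, m_min=0.60, m_max=0.78, alpha=2.3, seed=0):
    """Draw n stellar masses from dN/dm ~ m^-alpha on [m_min, m_max]
    via inverse-CDF sampling (single power-law segment of a Kroupa-like
    mass function over the narrow mass range used in the analysis)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    a, b = m_min ** (1 - alpha), m_max ** (1 - alpha)
    return (a + u * (b - a)) ** (1 / (1 - alpha))
```

As expected for a bottom-heavy mass function, well over half of the sampled masses fall below the midpoint of the range.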
Because ASs with a flat spatial distribution suffer less from the crowding effect than the observation, directly using the whole sample of ASs would underestimate the MS width of the ASs, and thus overestimate the resulting helium abundance. Because of this, we only select ASs in the cluster region to generate synthetic models. Since we have applied the same data reduction to the artificial stellar catalog, these ASs experience a crowding level similar to the observation. As a result, the selected AS samples have a similar spatial distribution to the observation. From each artificial stellar population, we select a subsample with the same luminosity function and total number of stars as the real observation as a representation. A synthetic MP is a composition of these synthetic populations. In this work, a series of synthetic MPs with different internal helium spreads and fractions of He-rich stars will be used for quantitative comparison, to determine the best-fitting property of stellar populations for the observation. As an example, in panels \(b\), \(c\) and \(d\) of Fig.5, we show the observed MS, a synthetic SSP with \(Y\)=0.25, and a synthetic MP with \(Y\) ranging from 0.25 to 0.37, where each population with a certain He abundance accounts for 1/13 of the total star number. At a glance, we can see that the observed MS is indeed wider than the synthetic SSP. Its morphology is more consistent with the example MP, which has a helium dispersion of \(\Delta Y=0.12\) (in this toy model, we have assumed a flat distribution of \(Y\)). Since a visual examination is unreliable, and a flat distribution of \(\Delta Y\) is possibly unphysical, in the next Section we quantify the similarity between models with different \(\delta Y\) and \(\Delta Y\) distributions and the observation using statistical methods. 
## 3 Main Results ### Helium spread among dwarfs The minimum angular resolution of the _HST_ at the F814W passband is \(\sim\)0.1 arcsec, corresponding to \(\sim\)5000 AU at the distance of the LMC, which is larger than the separation of the widest binaries in the solar neighbourhood (Duquennoy & Mayor, 1991). We can assume that all binaries in NGC 2210 are unresolved. In the CMD, unresolved binaries will populate a brighter and redder envelope to the MS which can be statistically estimated (e.g., Milone et al., 2012; Li et al., 2013). Although our observed sample must contain some unresolved binaries, however, we do not find any significant unresolved binary feature from the CMD (see Fig.4), which is different from what we found for NGC 1846 (Li, 2021b). We find that it is difficult to define an appropriate unresolved binary region like what we have done for NGC 1846. The morphology of the MS, particularly its red side, is strongly affected by their binary properties (the fraction, mass-ratio distribution) and line-of-sight blending caused by crowding. These effects hamper an accurate estimation of helium spread. Fortunately, the color of the He-enriched population will be bluer than the bulk population, this behavior is opposite to unresolved binaries and blending. For each star, we calculate their relative color deviation to the MSRL, \(\Delta(\rm F606W-F814W)\). We have taken a brief visual comparison between the color distributions of two SSPs with/without binaries. Although the color distri Figure 4: Left, the CMD of the NGC 2210 with the best-fitting PGPUC isochrone. The best-fitting parameters are shown in the legend. Right, the same as the left panel, with the calculated MSRL. Black dots are stars used for analysis in this work. Figure 5: Panel \(a\): loci of synthetic populations with \(Y\)=0.25,0.29,0.33,0.37. \(b\): The observed MS. \(c\): A synthetic SSP with \(Y\)=0.25. \(d\): A synthetic MPs with \(Y\) ranges from 0.25 to 0.37. 
butions of these two stellar populations are very different on the red sides of their MSs, their blue sides are similar. We thus decide not to analyze stars lying redward of the MSRL, i.e., we only analyze stars with \(\Delta(\rm F606W-F814W)\)\(<\)0 mag. In the top-left panel of Fig.6, we show the \(\Delta(\rm F606W-F814W)\) distribution for all MS dwarfs. We find that the \(\Delta(\rm F606W-F814W)\) distribution is not symmetric. The standard deviation of the color difference is \(\sigma_{\rm color}\)=0.0323 mag, while the mean photometric errors are \(\bar{\sigma}_{\rm F606W}\approx 0.0086\) mag and \(\bar{\sigma}_{\rm F814W}\approx 0.0084\) mag, respectively. Their median errors are both 0.008 mag, and 97% of the measurements have photometric errors of \(\sigma_{\rm F606W}\leq 0.016\) mag and \(\sigma_{\rm F814W}\leq 0.014\) mag, respectively (see the photometric error curves in Fig.7). Clearly, photometric uncertainty alone cannot explain the observed color spread of the MS. We also find a clear excess of 'red stars' with \(\Delta(\rm F606W-F814W)\)\(>\)0 mag. We determine that the fraction of excess 'red stars' is \(\sim\)2.3%; this is the fraction of unresolved binaries (corresponding to mass ratios \(q=M_{1}/M_{2}\gtrsim 0.7\)) plus occasional blends of stars along the line of sight. The total binary fraction is \(\sim\)7.7% if we assume a flat mass-ratio distribution, or \(\sim\)12.0% if we assume a power-law mass-ratio distribution (e.g., Li et al., 2013). Both are comparable to those of Galactic GCs (5%-30%, with a flat mass-ratio distribution; Milone et al., 2012), but lower than those of younger LMC clusters (\(\geq 50\%\), Li et al., 2013; Li, 2021). However, we emphasize that the assumption that no unresolved binary system has a negative color deviation does not strictly hold. 
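The mirror-and-count estimate of the red-star excess can be illustrated with a minimal sketch. The scatter width and the 3% injected red tail below are invented values for demonstration, not the measured distribution.

```python
import numpy as np

def red_excess_fraction(dcolor):
    """Fraction of 'excess' red stars: stars redward of the ridge line
    beyond what mirroring the blue side predicts (illustrative estimator,
    not the authors' exact pipeline)."""
    n_red = np.count_nonzero(dcolor > 0)
    n_blue = np.count_nonzero(dcolor < 0)
    return max(n_red - n_blue, 0) / dcolor.size

rng = np.random.default_rng(1)
# symmetric colour scatter plus a 3% red (binary-like) tail, all in mag
base = rng.normal(0.0, 0.03, 9700)
tail = rng.normal(0.05, 0.02, 300)
dcolor = np.concatenate([base, tail])
f_excess = red_excess_fraction(dcolor)
```

Mirroring the blue side cancels the symmetric scatter, so the residual counts trace the binary/blending contribution only.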
Photometric uncertainty scatters some unresolved binaries (particularly those with low mass ratios) to the blue side of the MSRL, which is unavoidable in our analysis. In addition, there must be some (low mass-ratio) binaries belonging to He-rich stellar populations (if present) that are bluer than the MSRL. We expect the number of He-rich binaries to be small, because most primordial scenarios assume that the secondary stellar populations (thus He-rich) form in a more centrally concentrated configuration than normal stars, which would lead to more severe dynamical disruption of their binaries (Hong et al., 2016). Indeed, observations have shown that 2P stars in various clusters have a lower binary fraction than 1P stars (D'Orazi et al., 2010; Lucatello et al., 2015, but see Milone et al. (2020)). In this work we can only minimize (rather than exclude) the binary contamination. Using the same method described above, we analyze the relative color distributions of synthetic MPs with different internal helium spreads. We assume a flat \(\delta Y\) distribution for all these MPs. For example, the model with \(\delta Y\)=0.03, which contains four populations with \(Y\)=0.25, 0.26, 0.27, 0.28, would have each population contribute 25% of the stars. For both the observation and the synthetic MPs, we only study the distributions of stars with \(\Delta(\rm F606W-F814W)\)\(<\)0 mag. The total number of these stars is \(\sim\)5000. We divide them into 20 color bins with a bin width of 0.005 mag. This bin size allows us to study the helium distribution in some detail, while keeping the number of stars in each bin high enough that the counts are not strongly affected by statistical fluctuations. To obtain a preliminary comparison, we first analyzed the standard deviation of the color distribution of an SSP, assuming that its color distribution is Gaussian-like. This yields \(\sigma_{\rm color}\)=0.0257 mag. 
A helium spread of \(\delta Y\)=0.06 (\(\sigma_{\rm color}\)=0.0317 mag) is required to match the observation (\(\sigma_{\rm color}\)=0.0323 mag). We then use a \(\chi^{2}\)-minimization method to quantify the similarity between models and the observation, \[\chi^{2}=\sum_{i}\frac{(N_{\rm i}^{\rm obs}-N_{\rm i}^{\rm mod})^{2}}{N_{\rm i}^{\rm obs}} \tag{1}\] \[N_{\rm i}^{\rm obs}=N_{\rm i}^{\rm c}-f\frac{A^{\rm c}}{A^{\rm f}}N_{\rm i}^{\rm f} \tag{2}\] where \(N_{\rm i}^{\rm c}\) and \(N_{\rm i}^{\rm f}\) are the numbers of stars whose relative colors fall in the \(i\)-th bin. The superscripts 'c' and 'f' denote stars in the cluster and reference field regions, respectively. In this work, the area of the cluster region is about 55.4% of that of the reference field region, i.e., \(f\)=0.554. \(A^{\rm c}\) and \(A^{\rm f}\) are 0.658 and 0.826, the average completeness for cluster and reference-field stars, respectively. \(N_{\rm i}^{\rm obs}\) is thus the expected number of stars observed in the cluster region with relative colors in the \(i\)-th bin. \(N_{\rm i}^{\rm mod}\) is the corresponding number of ASs in the model used for comparison.

Figure 6: Top-left: the observed \(\Delta(\rm F606W-F814W)\) distribution (grey curve). The distribution for stars with \(\Delta(\rm F606W-F814W)\)\(<\)0 mag is indicated by the black curve. The black dashed line represents the mirror of the black solid line (relative to \(\Delta(\rm F606W-F814W)\)\(=\)0 mag). A clear excess of stars on the red side of the MSRL appears, which indicates the contribution of unresolved binaries. Top-right: the same as the top-left panel, with similar distributions of synthetic MPs (red solid/dashed curves). Bottom panel: the \(\chi^{2}\) distribution as a function of \(\Delta Y\) and the cubic fit. The \(\Delta Y\) at minimum \(\chi^{2}\) is indicated by an arrow. 
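Eqs. (1)-(2) can be assembled as a short sketch. The histogram counts below are synthetic stand-ins for the real 20-bin colour histograms; the \(\chi^{2}\) is written in the standard Pearson form with squared residuals, and \(f\), \(A^{\rm c}\), \(A^{\rm f}\) take the values quoted in the text.

```python
import numpy as np

def decontaminated_counts(n_cluster, n_field, f=0.554, a_c=0.658, a_f=0.826):
    """Eq. (2): cluster counts minus the scaled field contribution;
    f is the area ratio, a_c/a_f the completeness correction."""
    return n_cluster - f * (a_c / a_f) * n_field

def chi2(n_obs, n_mod):
    """Eq. (1), assuming the standard Pearson form with squared residuals."""
    n_obs = np.asarray(n_obs, dtype=float)
    n_mod = np.asarray(n_mod, dtype=float)
    return float(np.sum((n_obs - n_mod) ** 2 / n_obs))

# illustrative 20-bin colour histograms (synthetic numbers, not the data)
rng = np.random.default_rng(0)
n_mod = rng.integers(150, 400, size=20).astype(float)
n_field = np.full(20, 40.0)                       # field counts per bin
contamination = 0.554 * (0.658 / 0.826) * n_field
n_cluster = n_mod + rng.normal(0.0, 5.0, 20) + contamination
n_obs = decontaminated_counts(n_cluster, n_field)
val = chi2(n_obs, n_mod)
```

After decontamination the cluster histogram reduces to the model plus noise, so the resulting \(\chi^{2}\) is small, as expected for a matching model.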
Finally, we examine how the resulting \(\chi^{2}\) correlates with the model internal helium spread, \(\delta Y\). We plot this correlation in the bottom panel of Fig.6. The \(\chi^{2}\) distribution exhibits a smooth trend with \(\delta Y\). The minimum \(\chi^{2}\) occurs at \(\delta Y=\)0.05, with \(\chi^{2}\)=341. To reduce the effect of statistical noise, we fit the \(\chi^{2}\)-\(\delta Y\) correlation with a cubic curve, which yields a local minimum of \(\chi^{2}\) at \(\delta Y=0.06\). In the top-right panel of Fig.6, we show comparisons between some models and the observation. Indeed, an SSP (\(\delta Y\)=0.0) does not reproduce the observed \(\Delta({\rm F606W}-{\rm F814W})\) distribution, while an MP model with \(\delta Y\)=0.05 provides a much better fit. For a better illustration, we have symmetrized the \(\Delta({\rm F606W}-{\rm F814W})\) distribution, although we only analyze stars with \(\Delta({\rm F606W}-{\rm F814W})<\)0 mag (in Fig.7). This analysis indicates that NGC 2210 may indeed harbor He-rich population stars. A drawback is that, under the assumption of a flat \(Y\) distribution, all these toy models imply that He-rich stars dominate the sample, which is unrealistic. To derive a more realistic helium distribution, we have generated a series of synthetic MPs with different \(\delta Y\) and different fractions of 2P stars (stars with \(\Delta Y>0\)), \(f_{\rm 2P}\). Among the 2P stars, the helium distribution, \(\delta Y\), is flat. For example, MPs with \(\delta Y\)=0.02 and \(f_{\rm 2P}\)=30% would have 70% normal stars, with each He-rich population (\(\delta Y\)=0.01, 0.02) contributing a number fraction of 15%. In total, we generated 229 models: one SSP model (\(\delta Y\)=0) and 228 MP models with \(\delta Y\)=0.01-0.12 (in steps of 0.01) and \(f_{\rm 2P}=\)5%-95% (in steps of 5%). For each model, we calculate its corresponding \(\chi^{2}\). 
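The cubic smoothing of the \(\chi^{2}\)-\(\delta Y\) curve described above can be sketched as follows. The \(\chi^{2}\) values here are a toy parabola plus noise with its minimum placed near \(\delta Y=0.06\), purely for illustration.

```python
import numpy as np

def cubic_minimum(x, y):
    """Fit a cubic polynomial to (x, y) and return the location of its
    minimum within the sampled range (smoothing of a noisy chi^2 curve)."""
    poly = np.polynomial.Polynomial.fit(x, y, deg=3)
    grid = np.linspace(x.min(), x.max(), 1001)
    return grid[np.argmin(poly(grid))]

# illustrative chi^2-deltaY points with a true minimum near 0.06
rng = np.random.default_rng(2)
dY = np.arange(0.0, 0.121, 0.01)
chi2_vals = 341.0 + 4000.0 * (dY - 0.06) ** 2 + rng.normal(0.0, 1.0, dY.size)
best_dY = cubic_minimum(dY, chi2_vals)
```

Fitting a smooth low-order polynomial before taking the argmin avoids latching onto a single noisy grid point.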
We thus obtain a 2D distribution of \(\chi^{2}\) as a function of \(\delta Y\) and \(f_{\rm 2P}\); we plot a contour of this \(\chi^{2}\) distribution in Fig.8 (top panel). We find that if the fraction of 2P stars is too low (\(f_{\rm 2P}\leq 20\%\)), the best fit yields \(\delta Y\sim 0.08\), but the \(\chi^{2}\)-\(\delta Y\) distribution is very noisy. For \(f_{\rm 2P}\geq 40\%\), the \(\chi^{2}\)-\(\delta Y\) correlation becomes smooth; these models all report a best-fitting \(\delta Y\) ranging from 0.06 to 0.10. The minimum region of \(\chi^{2}\) occurs at \(\delta Y\sim\)0.068-0.071 and \(f_{\rm 2P}\sim\)52%-61% (shaded region in the top panel of Fig.8), corresponding to \(\chi^{2}\leq\)364.0; within this region the variation of \(\chi^{2}\) is dominated by noise. In summary, if we assume a continuous \(\delta Y\) distribution, NGC 2210 is likely to have \(\sim 55\%\) He-rich stars, with an internal helium spread of \(\delta Y=0.069^{+0.002}_{-0.001}\). However, such a high fraction of 2P stars is surprising. According to primordial scenarios, the primordial population (1P) stars form earlier than the chemically enriched population (2P) stars and in a more extended configuration. As a result, the 1P stars are more easily stripped by the external galactic tidal field (e.g., D'Ercole et al., 2008). The number ratio between 2P and 1P stars would therefore be lower for GCs in a weaker external tidal field (the LMC) than for those in the Galaxy. A possible explanation is that the \(\delta Y\) distribution is discrete (as in NGC 2808) rather than continuous. We therefore generated another set of models in which the MPs have a bimodal distribution of \(\delta Y\). For example, a model with 30% He-rich stars and \(\delta Y\)=0.02 contains only two populations, i.e., 70% normal stars (\(\delta Y\)=0.00) and 30% He-rich stars (\(\delta Y\)=0.02). 
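The \((\delta Y, f_{\rm 2P})\) grid scan can be mimicked with a toy \(\chi^{2}\) surface. The 12\(\times\)19 grid matches the 228 MP models of the text (the SSP would be evaluated separately); the surface itself is an invented stand-in for the full model-vs-observation comparison.

```python
import numpy as np

def grid_scan(chi2_fn, dY_grid, f2p_grid):
    """Evaluate chi^2 over a (deltaY, f_2P) grid and locate the best cell."""
    table = np.array([[chi2_fn(dy, f) for f in f2p_grid] for dy in dY_grid])
    i, j = np.unravel_index(np.argmin(table), table.shape)
    return dY_grid[i], f2p_grid[j], table

# toy chi^2 surface with its minimum placed near (0.07, 0.55)
def toy_chi2(dy, f):
    return 330.0 + 5.0e4 * (dy - 0.07) ** 2 + 200.0 * (f - 0.55) ** 2

dY_grid = np.round(np.arange(0.01, 0.121, 0.01), 2)   # deltaY = 0.01..0.12
f2p_grid = np.round(np.arange(0.05, 0.951, 0.05), 2)  # f_2P = 5%..95%
best_dY, best_f2p, table = grid_scan(toy_chi2, dY_grid, f2p_grid)
```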
Again, adopting bimodal \(\delta Y\) distributions for the MPs, we plot the \(\chi^{2}\) distribution as a function of \(\delta Y\) and \(f_{\rm 2P}\) in the bottom panel of Fig.8. This time, the lowest-\(\chi^{2}\) region occurs at \(\delta Y\)=0.068-0.074 and \(f_{\rm 2P}\)=26%-34% (shaded region in the bottom panel of Fig.8). We find that MPs with a bimodal distribution of \(\delta Y\) reproduce the observation better, as they return a lower \(\chi^{2}\) (\(\lesssim\)332) than the case of a flat \(\delta Y\) distribution (\(\lesssim 364\)). We suggest that a small fraction of 2P stars is more reasonable. Indeed, studies have shown that chemically enriched populations in some LMC clusters occupy only a small fraction, 10%-20% (Hollyhead et al., 2019; Dondoglio et al., 2021). We conclude that NGC 2210 most likely harbors \(\sim 30\%\) He-enriched stars, with a maximum helium spread of \(\delta Y=0.071\pm 0.003\). Our results indicate that NGC 2210 is very different from NGC 1846; the latter is likely an SSP cluster (\(\delta Y<\)0.02). However, the detection of a helium variation does not by itself indicate that the He-rich stars belong to the 2P. If we strictly define 2P stars as those with Na, C, N, O variations, some 1P stars (stars without these specific chemical patterns) are also found to exhibit a helium spread in addition to He enhancement (e.g., Milone et al., 2017), although the reason remains unclear. How can we determine whether the derived helium spread is an internal spread among 1P stars or indicates the presence of MPs? One way is to examine the radial distributions. If both He-normal and He-rich stars are 1P stars, they should be fully spatially mixed. Otherwise, 1P and 2P stars may exhibit different central concentrations, according to primordial scenarios. In this work, we cannot determine whether a specific star is He-enriched or normal.

Figure 7: The photometric error curves for the F606W (top) and F814W (bottom) passbands. 
Alternatively, we compare each star's color deviation with its photometric uncertainty. Stars with a color deviation \(|\Delta(\rm{F606W-F814W})|\) larger than three times the expected color uncertainty are defined as He-rich stars. Using this criterion, 33.7% of the stars (1547 of 4557) are classified as He-rich. This is consistent with the indication derived from the bimodal-\(\delta Y\) distribution models (\(\sim\)30%). If the observed MS dwarfs were an SSP, we would expect only \(14\pm 4\) stars (0.3%) to meet this criterion. We divide both the normal and the He-rich stars into seven radial bins ranging from the cluster center to a radius of 7 pc (\(\sim\)2\(r_{\rm{hm}}\)), with a bin size of 1 pc, roughly the core radius (\(r_{\rm{c}}\)) of NGC 2210 (Ferraro et al., 2019). We then study the radial profile of the number ratio between He-rich and normal stars. If the He-rich stars belong to a secondary stellar population formed in the cluster center after the formation of the 1P stars, this number-ratio profile should decrease from the cluster center to the outskirts. In the right panel of Fig.9 we show this profile. We cannot discern any radial difference between the He-rich and normal stars within 2\(r_{\rm{c}}\), but we find a significant decreasing trend in the range of 2-7 pc, indicating that the He-rich population has a more compact configuration than the normal stars. Since the He-rich and normal stars are all in the same magnitude range, this radial difference cannot be explained by a completeness difference in the radial direction. In addition, He-rich dwarfs are less massive than normal dwarfs of similar luminosity, which further strengthens the implication that they must initially have been much more compact than the normal stars. Milone et al. (2017) derived a clear correlation between the internal maximum helium spread and the cluster's present-day mass. 
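The number-ratio radial profile with Poisson-like error bars can be sketched as below. The exponential radial distributions are toy stand-ins for the actual star lists, chosen so that the He-rich sample is more centrally concentrated.

```python
import numpy as np

def number_ratio_profile(r_rich, r_norm, edges):
    """Radial profile of N_He-rich / N_normal with Poisson-like errors,
    in the spirit of the right panel of Fig. 9 (toy inputs here)."""
    n_rich, _ = np.histogram(r_rich, bins=edges)
    n_norm, _ = np.histogram(r_norm, bins=edges)
    safe_norm = np.maximum(n_norm, 1)
    ratio = n_rich / safe_norm
    err = ratio * np.sqrt(1.0 / np.maximum(n_rich, 1) + 1.0 / safe_norm)
    return ratio, err

rng = np.random.default_rng(3)
r_norm = rng.exponential(2.5, 3000)   # normal stars, more extended (pc)
r_rich = rng.exponential(1.5, 1500)   # He-rich stars, more concentrated (pc)
edges = np.arange(0.0, 8.0, 1.0)      # seven 1-pc bins out to 7 pc
ratio, err = number_ratio_profile(r_rich, r_norm, edges)
```

With these inputs the ratio falls from the innermost to the outermost bin, the signature of a centrally concentrated secondary population.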
Figure 8: The contour of the \(\chi^{2}\) distribution as a function of helium spread (\(\delta Y\)) and fraction of 2P stars (\(f_{\rm{2P}}\)). In the top panel, the model MPs have a continuous \(\delta Y\) distribution; in the bottom panel, a bimodal \(\delta Y\) distribution.

Figure 9: Left, the CMD of He-rich (red dots) and normal (blue dots) stars. Right, the radial distribution of the number ratio between He-rich and normal stars. The red dashed and dash-dotted lines represent the positions of the core and half-mass radius, respectively. Associated error bars are Poisson-like.

We now examine whether NGC 2210 also follows this trend. We compare our results with Galactic GCs (Milone et al., 2017), LMC clusters (Li, 2021; Ji et al., 2022), and other SMC clusters with internal helium spreads studied in the literature (Chantereau et al., 2019; Lagioia et al., 2019, hereafter C19/L19). This comparison is shown in the top panel of Fig.10. We find that although the helium spread of NGC 2210 is relatively higher than those of its Galactic counterparts, it is consistent with the same correlation. If we only consider LMC clusters, this correlation is likely steeper than that for Galactic GCs. The cluster initial mass should be a more appropriate parameter for deciding the properties of MPs than the present-day mass. For Galactic GCs, Baumgardt et al. (2019) integrated their orbits backward in time to derive the cluster initial masses, taking into account the effects of dynamical friction and stellar mass loss. Using the same method, we have calculated the initial mass of NGC 2210 using N-body simulations, which yields 5.1\(\times\)10\({}^{5}\)\(M_{\odot}\). We find that this cluster has lost very little mass through dynamical effects, because of its high mass and the weak tidal field of the LMC. The difference between the present-day and initial masses is almost entirely due to stellar-evolution mass loss. 
As a result, the present-day number ratio between 1P and 2P stars in NGC 2210 should be almost identical to its initial value. In the bottom panel, we present the helium spread-initial mass relation for Galactic GCs (grey dots) and NGC 2210 (red star). It turns out that NGC 2210 indeed harbors a higher internal helium spread than its Galactic counterparts of similar initial mass. It remains unclear whether this indicates that LMC GCs follow a different helium spread-initial mass correlation; studies of more LMC samples are required. The initial masses of LMC/SMC clusters will be presented in a forthcoming article.

### Comparing with evolved giants

Using the same method, we have derived the helium spread among the red-giant stars of NGC 2210. The sample consists of red-giant (RG) stars lying significantly above the bottom of the RGB (F606W\(\sim\)21.28 mag) and below the RGB bump (F606W\(\sim\)17.56 mag), in the range of 19.08 mag\(\leq\)F606W\(\leq\)19.98 mag. We further constrain the sample RG stars to a color range of 0.65 mag\(\leq\)F606W\(-\)F814W\(\leq\)0.75 mag. The choices of the magnitude and color ranges are somewhat arbitrary. We confirm that the calculated ridge line of the selected RGB portion is close to the best-fitting isochrone. We show the selected RG stars and the best-fitting isochrone in Fig.11. However, NGC 2210 exhibits many blue straggler stars (BSSs). A significant fraction of these BSSs may lie in mass-transferring binary systems, where the other binary component is likely a sub-giant (SG) or an RG star. These BSS-SG/RG binaries would be distributed in a region between the RGB and the BSS locus, partially overlapping with the He-rich RGB. We generate a large number of artificial BSS-SG/RG binaries and plot them in the CMD of the cluster. For each BSS-SG/RG binary, the BSS is randomly selected from the BSS locus, which is described by a 1 Gyr-old isochrone (with other parameters identical to the best-fitting isochrone). 
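The RG sample cuts quoted above reduce to a simple boolean mask; the photometry below is synthetic and serves only to exercise the cuts.

```python
import numpy as np

def select_rg_sample(f606w, color):
    """Apply the magnitude and colour cuts quoted in the text:
    19.08 <= F606W <= 19.98 mag and 0.65 <= F606W-F814W <= 0.75 mag."""
    return ((f606w >= 19.08) & (f606w <= 19.98)
            & (color >= 0.65) & (color <= 0.75))

rng = np.random.default_rng(4)
f606w = rng.uniform(17.0, 22.0, 5000)   # toy magnitudes
color = rng.uniform(0.5, 0.9, 5000)     # toy F606W-F814W colours
mask = select_rg_sample(f606w, color)
```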
The SG/RG star is selected from the best-fitting isochrone. We find that the region occupied by the BSS-SG/RG binaries exhibits a clear top boundary, which gradually decreases from the region close to the TP-AGB toward the TO region of the BSSs. Between this boundary and the HB region, there are no stars. This describes the observation well. Given that single stars in the Hertzsprung-gap region evolve rapidly, we conclude that most observed stars in this region are unresolved BSS-SG/RG binaries. Some of these binaries will strongly contaminate the He-rich RG population (if the RG component dominates the flux of the binary system). Because of this, we expect that the helium spread derived from the width of the RGB would be overestimated if we cannot rule out the binary contamination. Indeed, our analysis reports that, if we assume that the helium spread fully accounts for the width of the RGB, the internal helium spread reaches \(\delta Y\)=0.12, the upper limit of the model we used. Binaries thus play an important role in the broadening of the RGB. The fraction of He-rich stars is 22%-33%, which is consistent with the result derived from MS stars (Fig.12). Because PGPUC does not calculate the HB phase, we used the Modules for Experiments in Stellar Astrophysics (MESA, Paxton et al., 2011) to examine whether our result also fits the HB morphology. We use MESA to calculate three 12.5 Gyr-old HB loci with \(Y=\)0.26, 0.30, 0.34, respectively. They thus briefly describe the morphology of an HB with \(\delta Y\sim 0.08\).

Figure 10: Top: the internal helium spread versus cluster present-day mass diagram, \(\delta Y\)-\(\log(M/M_{\odot})\). Red circles and a pentagram (NGC 2210, this work) are LMC clusters. Small grey dots are Milky Way GCs. Dark/light grey circles are SMC clusters. For NGC 1846 we have used its expected total mass at \(\sim\)10 Gyr. Bottom: the correlation between the internal helium spreads and the cluster initial masses, for Milky Way GCs and NGC 2210. 
The adopted metallicity is [Fe/H]=\(-\)1.92 dex (the same as the best-fitting PGPUC isochrone). The most important parameter controlling the HB morphology is the mass-loss rate. During the RGB phase, it is described by the Reimers mass-loss parameter (\(\eta_{R}\), Reimers, 1975, 1977). The mass-loss rate of RG stars varies from cluster to cluster, covering a range from \(\eta_{R}<0.2\) to \(\eta_{R}>0.6\) (Tailo et al., 2020). Because the mass-loss rate in our model is a free parameter, the helium spread among HB stars is uncertain. To make a qualitative comparison, we first conservatively set \(\eta_{R}=0.2\) in our model. In this case, the simulated HB with \(\delta Y\)=0.08 exhibits a more extended morphology than the observation. Under the adoption of \(\eta_{R}=0.2\), a \(\delta Y\)=0.04 (\(Y\)=0.26-0.30) is sufficient to explain the length of the observed HB. We then calculated another two model sets with \(\eta_{R}=0.1\) and 0.05. We find that once we adopt \(\eta_{R}=0.05\), the simulated HB population with a helium spread of \(\delta Y=0.04-0.08\) fits the observation better. We finally constructed an HB stellar population with three helium abundances (\(Y=\)0.26, 0.30, 0.34), with each sub-population containing stars with different mass-loss rates, \(\eta_{R}=0.05-0.20\). We present the fit to the observation in Fig.13.

Figure 11: RG stars (orange dots) selected to study the helium spread. The blue dots are simulated (unresolved) BSS-SG/RG binaries. The solid blue curve describes the BSS locus. The red curve is the best-fitting isochrone.

Figure 12: The same as Fig.8, but for RG stars. The helium distribution of the model RGB is bimodal.

Figure 13: The CMD of the HB, overlaid with the simulated HB populations with different helium abundances and mass-loss rates.

## 4 Discussion and Conclusion 
Before we discuss the physical implications of our results, we first examine, in the absence of any helium spread, what amount of additional differential reddening would be required to fully account for the width of the MS, by comparing synthetic stellar populations with different levels of differential reddening against the de-reddened observation. Our analysis reports that an additional differential reddening of \(\sigma_{E(B-V)}=0.004\) mag is required to explain the observed MS. This is about 20 times the expected differential-reddening residual. We confirmed that the signature of such additional reddening would be significant enough to be revealed by our de-reddening method, if present. Therefore, we conclude that the broadening of the MS cannot be fully explained by differential reddening. Another effect that could contribute to the width of the MS is a metallicity spread. Decreasing the metallicity reduces the stellar atmospheric opacity, leading to a decrease in cooling efficiency and thus an increase in the stellar surface temperature. For this reason, stars with lower metallicity look bluer than normal stars at each evolutionary stage, populating a bluer MS. We therefore generate a series of isochrones with different metallicities and compare their loci with the \(Y=0.33\) isochrone, which corresponds to the \(\Delta Y=0.07\) stellar-population locus. However, we find that even for an isochrone with \(Z\)=0.00001 (the lower metallicity limit of the PGPUC model), the color difference between this isochrone and the best-fitting isochrone (\(Z\)=0.00016) cannot describe the width of the MS. Since we are only concerned with the value of metallicity spread that could describe the width of the MS, we relax the upper limit on the metallicity, although in that case the isochrone may not describe the CMD well. 
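The differential-reddening test can be illustrated by injecting Gaussian reddening scatter into synthetic colours. The conversion coefficient \(k\) from \(E(B-V)\) to \(E(\rm F606W-F814W)\) is an assumed order-unity value, not taken from the paper, and the colour scatter is a toy SSP-like width.

```python
import numpy as np

def add_differential_reddening(color, sigma_ebv, rng, k=0.95):
    """Broaden synthetic colours with Gaussian differential reddening of
    width sigma_E(B-V); k (assumed here) converts E(B-V) into
    E(F606W-F814W)."""
    return color + rng.normal(0.0, k * sigma_ebv, color.size)

rng = np.random.default_rng(5)
color = rng.normal(0.0, 0.0257, 5000)            # SSP-like colour scatter (mag)
broadened = add_differential_reddening(color, 0.004, rng)
expected = np.hypot(color.std(), 0.95 * 0.004)   # quadrature expectation
```

Since the injected scatter adds in quadrature, one can dial \(\sigma_{E(B-V)}\) until the synthetic width matches the observed one, which is the logic of the test in the text.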
We find that a metallicity spread from \(Z\)=0.0002 to \(Z=0.001\) ([Fe/H]=\(-\)1.92\(\sim\)\(-\)1.22 dex) is required to fully account for the observed width of the MS. Such a metallicity spread would produce very wide SGB and RGB sequences; the inconsistency between the model isochrones and the observation is easily seen visually. We thus exclude the presence of a dramatic metallicity spread among NGC 2210 members. We also confirm that a spread of [\(\alpha\)/Fe]=0.00-0.30 dex (the maximum input range allowed by the PGPUC model) makes a negligible contribution to the width of the MS. A similar analysis reports that the RGB may contain \(\sim\)30% He-rich stars, in agreement with our analysis of the MS dwarfs, but it implies a higher helium spread, \(\delta Y\sim\)0.12. As we have illustrated, this is likely due to the contamination by BSS-SG/RG binaries. The observed HB, however, is very short. Our simulation indicates that, for a fixed mass-loss rate of \(\eta_{R}=0.2\), a helium spread of only \(\delta Y\)=0.04 is sufficient to explain the observed HB, which is lower than the value derived from MS dwarfs (\(\delta Y=0.06-0.07\)). A lower mass-loss rate, down to \(\eta_{R}=0.05\), fits the observation better with a higher helium spread of up to \(\delta Y\)=0.08. However, this would indicate that RGB stars in NGC 2210 experienced less mass loss than those in Galactic GCs (Tailo et al., 2020). One speculation is that the parameter controlling the HB color extension is environment-dependent: the tidal fields of the host galaxies affect the mass loss during the RGB phase. We highlight that another LMC GC, Hodge 11, exhibits an extended HB (Gilligan et al., 2019, their figure 19). An accurate determination of the internal helium variation of this cluster would be crucial. Analyses of RGB and HB members are affected by binaries and by star-to-star mass-loss rates, both of which are very uncertain. 
We therefore return to the results derived from MS dwarfs, as the helium spread inferred from MS dwarfs is indicative of the helium content of the clouds from which the multiple populations formed. In summary, our main conclusions are:

1. NGC 2210 does exhibit a helium spread of \(\delta Y\sim 0.06\)-0.07. The number ratio of He-rich stars to the whole population is about \(\sim\)30% if we assume that the \(\delta Y\) distribution is bimodal; otherwise it would be more than half, 55%, if the \(\delta Y\) distribution is continuous.

2. He-rich stars are more centrally concentrated than normal stars, indicating that the detected helium spread is not an internal spread among 1P stars, but reflects two stellar populations formed with different initial configurations.

3. The internal helium spread, \(\delta Y\), of NGC 2210 is consistent with the correlation between helium spread and cluster present-day mass for Galactic GCs. If we only consider LMC clusters, this correlation is even steeper.

In a previous study of multiple populations in NGC 2210, Gilligan et al. (2019) detected a broadened MS and estimated that the second population includes 20\(\pm\)5% of the stars. The fraction of second-population stars derived in this work (26%-34%) is higher than theirs, if we assume a bimodal helium distribution. In addition, we inferred the internal helium variation of NGC 2210 by assuming that the MS color broadening is mostly due to a helium spread. This is a reasonable hypothesis because most elemental absorption lines are concentrated in the UV band, so their effects on the F606W\(-\)F814W color are negligible (see figure 5 of Milone et al. 2018, as an example). For LMC clusters, the positive correlation between the internal helium spread, \(\Delta Y\), and the present-day GC mass (Li, 2021) is similar to that for Galactic GCs (Milone et al., 2018). 
If the \(\delta Y\) distribution is (close to) bimodal, the 2P fraction of NGC 2210 would be smaller than those of its Galactic counterparts of comparable mass (Milone & Marino, 2022, their figure 8). This would support scenarios in which GCs preferentially lose their 1P stars, as 1P stars of LMC GCs experience weaker tidal stripping than those of Milky Way clusters. The fact that He-rich stars are more centrally concentrated than He-normal stars in NGC 2210 is in qualitative agreement with the predictions of the main scenarios for the formation of multiple populations (e.g., Krause et al., 2013; D'Antona et al., 2016; Calura et al., 2019; Gieles et al., 2018; Wang et al., 2020). After the gas expulsion, both 1P and 2P stars escape during the long-term evolution driven by the galactic tide, and a large amount of time (up to \(\sim\)20\(t_{\rm rh}\)) is needed to fully mix the 1P and 2P stars (e.g., Vesperini et al., 2013). According to McLaughlin & van der Marel (2005), the \(t_{\rm rh}\) of NGC 2210 is 1.0-1.2 Gyr (\(\log t_{\rm rh}\)=9.01\(\pm\)0.06, model dependent). If the half-mass relaxation timescale does not change significantly during its evolution, the dynamical age of NGC 2210 is 10-12\(t_{\rm rh}\). At this dynamical age, we would expect at least the region with \(r\leq r_{\rm rh}\) to be fully mixed, which is inconsistent with our observation. We speculate that this is because McLaughlin & van der Marel (2005) used a simplified model to calculate \(r_{\rm rh}\), in which the average stellar mass in NGC 2210 is assumed to be \(M_{\star}\)=0.5\(M_{\odot}\), introducing an additional uncertainty. Using the same method as Baumgardt et al. (2019), our calculation yields a longer half-mass relaxation time of \(t_{\rm rh}\sim\)3.2 Gyr for NGC 2210, indicating that NGC 2210 is only 3-4 \(t_{\rm rh}\) old. This is in good agreement with our observation that only stars within the core radius are fully mixed. 
The difference between NGC 2210 and NGC 1846, where the latter exhibits a minimal helium spread, may be controlled by the difference in their masses or ages, as NGC 1846 is younger and less massive (at the age of \(\sim\)10 Gyr) than NGC 2210. To determine which parameter plays the critical role, further studies of the helium distributions of younger and more massive LMC clusters (i.e., NGC 1850) are required. Again, this would require high-precision photometry focusing on their MSs, as young clusters usually do not have well-populated RGBs and HBs. If the He-rich stars detected in NGC 2210 represent 2P stars, they may exhibit the common patterns of MPs, i.e., Na-O and C-N anti-correlations. A spectroscopic analysis of these stars is not possible because they are too faint. Alternatively, UV-optical photometry may be used to statistically examine whether or not these stars differ in C, N, O abundances (e.g., Li et al., 2021). Again, this requires deep photometry, which would consume a large amount of _HST_ time. The next-generation Chinese Space Station Telescope (_CSST_), with parameters similar to those of the _HST_, can take over this task with a larger FoV (Li et al., 2022).

## Acknowledgments

C. L. is supported by the National Key R&D Program of China (2020YFC2201400). D.J. acknowledges support from the National Natural Science Foundation of China (Nos 12073070, 11733008). C.L. and L.W. acknowledge support from the one-hundred-talent project of Sun Yat-sen University and the National Natural Science Foundation of China through grants 12073090 and 12233013. B.T. gratefully acknowledges support from the National Natural Science Foundation of China under grant No. U1931102, and the Natural Science Foundation of Guangdong Province under grant No. 2022A1515010732. This work is also supported by the China Manned Space Project with Nos. CMS-CSST-2021-A08 and CMS-CSST-2021-B03, the National Key R&D Program of China with No. 
2021YFA1600403 and the CAS 'Light of West China' Program. Y.W. acknowledges support from the Special Research Assistant Foundation Project of the Chinese Academy of Sciences.
2308.14883
Aggregation and structural phase transitions of semiflexible polymer bundles: a braided circuit topology approach
We present a braided circuit topology framework for investigating topology and structural phase transitions in aggregates of semiflexible polymers. In the conventional approach to circuit topology, which specifically applies to single isolated folded linear chains, the number and arrangement of contacts within the circuitry of a folded chain give rise to increasingly complex fold topologies. Another avenue for achieving complexity is through the interaction and entanglement of two or more folded linear chains. The braided circuit topology approach describes the topology of such multiple-chain systems and offers topological measures such as writhe, complexity, braid length, and isotopy class. This extension of circuit topology to multichains reveals the interplay between collapse, aggregation, and entanglement. In this work, we show that circuit topological motif fractions are order parameters ideally suited to characterise structural phase transitions in entangled systems, detecting structural re-ordering that other measures cannot.
Jonas Berx, Alireza Mashaghi
2023-08-28T20:11:15Z
http://arxiv.org/abs/2308.14883v2
Aggregation and structural phase transitions of semiflexible polymer bundles: a braided circuit topology approach ###### Abstract We present a braided circuit topology framework for investigating topology and structural phase transitions in aggregates of semiflexible polymers. In the conventional approach to circuit topology, which specifically applies to single isolated folded linear chains, the number and arrangement of contacts within the circuitry of a folded chain give rise to increasingly complex fold topologies. Another avenue for achieving complexity is through the interaction and entanglement of two or more folded linear chains. The braided circuit topology approach describes the topology of such multiple-chain systems and offers topological measures such as writhe, complexity, braid length, and isotopy class. This extension of circuit topology to multichains reveals the interplay between collapse, aggregation, and entanglement. We show that circuit topological motif fractions are ideally suited order parameters to characterise structural phase transitions in entangled systems. Keywords: circuit topology, conformational phase transitions, entanglement, braiding ## I Introduction It has been shown that, for a single semiflexible chain, the polymer stiffness leads to a structural "phase diagram", displaying a multitude of conformations such as globular, hairpin, knotted or extended structures, both for off-lattice [1; 2; 3] and on-lattice systems [4; 5]. By keeping a constant monomer-monomer interaction energy scale and decreasing the temperature, the polymer minimizes its energy by either collapsing or stiffening locally. Adjusting the stiffness \(\kappa\) allows one to study the competition between these two effects, which leads to fundamentally distinct structural motifs. This line of reasoning can be extended to aggregates of semiflexible polymers.
It has been shown [2; 6] that for small systems consisting of \(M=2,4,8\) polymers of length \(N=13\) at a high temperature \(T\), the system is fragmented and individual polymers can be considered isolated. The structural properties in this regime follow the single-chain results to a good approximation. For a decreasing temperature, however, flexible polymers aggregate in an amorphous globular configuration, while stiffer polymers form (twisted) bundles. The collapse and aggregation transitions are not separate processes. If the energy reduction associated with the formation of more interchain contacts (aggregation) is more favourable than the formation of intrachain contacts (collapse or folding), multichain aggregation may be expected to undo single-chain collapse. Hence, by studying the inter- and intrachain contacts, we can deduce information about the structural properties and phase transitions of the system. Both types of contacts can be neatly studied within the framework of circuit topology (CT), which was originally introduced to describe the topology of folded proteins and folding reactions [7; 8], and was subsequently extended to include entanglement (i.e., 'soft contacts') [9; 10; 11] and multiple chains [12]. By combining circuit topology with a topological description on the level of entanglement, we get a more complete picture of aggregation processes with a small number of chains. The outline of this paper is as follows: we first set the stage for our analysis by reiterating the basic concepts of circuit topology for multichain systems. Subsequently, we discuss the theoretical framework necessary for our concomitant braid-theoretic description. We then investigate the structural phase transitions in a system of \(M=4\) chains of varying monomer number \(N\in\{10,\,30\}\) and connect this with the circuit topology and braiding analysis. Finally, we present conclusions and a future outlook.
## II Multichain circuit topology We start by revisiting multichain circuit topology. To model the formation of bonds in molecular systems and the resulting topology, we can study the mutual topological relation of binary contacts, i.e., two different bonds, that we name \(A\) and \(B\), each consisting of two contact 'sites' bearing the same name, for a total of four sites. On a single strand, only three arrangements of \(A\) and \(B\) are possible: \(AABB\), \(ABBA\) and \(ABAB\), corresponding respectively to a series \((S)\), parallel \((P)\) or cross \((X)\) motif, where we take renaming into account. When considering a system consisting of multiple distinct open-ended chains, a number of motifs need to be added to the above set of circuits. For \(n=2\) chains, we can discern three topologically distinct motifs: independent \((I_{2})\), loop \((L_{2})\) and tandem \((T_{2})\). Within these motifs, degeneracies are possible. The independent relation is unique up to renaming of the contact sites, the loop relation can form either a "parallel" or a "cross" loop (not to be confused with the single-chain CT motifs), and the tandem relation occurs in either an "umbrella" or an "arc" motif (Fig. 1). For \(n=3\), only independent \((I_{3})\) and tandem \((T_{3})\) circuits can be formed, which are both non-degenerate. Lastly, for \(n=4\), only the independent relation (\(I_{4}\)) can occur. Four is the maximum number of chains that can participate in creating circuits with four contact sites. In Table 1, we list all possible multichain motifs. Extending the string notation for single-chain CT to the multichain framework requires the introduction of "ghost contacts", which we will denote with \(\mathcal{O}\) in the string. To construct the string notation corresponding to a particular motif, we proceed as follows. The two chains are placed parallel and aligned vertically.
We now read from top to bottom and from left to right, adding a ghost contact wherever a real contact site does not possess partners on the same level on other chains. A simple example will clarify our formalism. Consider the \(T_{2}\) umbrella CT motif, i.e., the tandem relation on two chains. We assume that the first chain counted from the left possesses the ordered contact sites \(A\,,B\,,A\), and that the second chain only possesses a single \(B\) contact site. We supplement the right chain with two ghost nodes. The string describing \(T_{2}\) is then \(\mathcal{S}_{2}=ABB\mathcal{O}A\mathcal{O}\), where the subscript indicates the number of chains that participate in the motif. Note that the same motif is obtained independent of the exact position of the ghost contacts on the second chain. Similar to the case of single chains, a motif that comprises pairs of contact sites is referred to as a "circuit". A circuit includes an even number of ghost contact sites and can be braided, as described in subsequent sections. The string notations for all multichain motifs can be found in Fig. 1. Note that all motifs within a degenerate circuit class (e.g., \(T_{2}\)) can be obtained by cyclically permuting the characters corresponding to a single chain. ## II Topological entanglement and braiding measures We now assume that the different chains can cross one another in the embedding space, and henceforth call the number of chains the _braid index_ \(n\), where we will limit ourselves to \(n\leq 4\). For \(n=2\), a twist of the chains is not topologically protected, i.e., it can be undone by rotating one of the planes in which the endpoints are fixed. We will henceforth assume that the resulting braid projection is confined to the plane spanned by the parallel lines connecting the first and last contact-contact or contact-ghost pair.
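The circuit-topology constructions of the previous section lend themselves to a direct implementation. Below is a minimal Python sketch (our own illustrative encoding, not from the paper): `multichain_string` builds the ghost-contact string by padding shorter chains with \(\mathcal{O}\) and reading level by level, and `motif` classifies two binary contacts by the number of distinct chains their four sites occupy and by whether each contact is intra- or interchain.

```python
def multichain_string(chains):
    """Build the multichain CT string: pad shorter chains with ghost
    contacts 'O', then read level by level (top to bottom), left to
    right across the chains."""
    depth = max(len(c) for c in chains)
    padded = [list(c) + ['O'] * (depth - len(c)) for c in chains]
    return ''.join(padded[j][level]
                   for level in range(depth)
                   for j in range(len(chains)))

def motif(contact_a, contact_b):
    """Classify the CT motif of two binary contacts A and B.
    Each contact is a pair of sites ((chain, position), (chain, position));
    this tuple representation is an assumption for illustration."""
    (ca1, pa1), (ca2, pa2) = contact_a
    (cb1, pb1), (cb2, pb2) = contact_b
    n_chains = len({ca1, ca2, cb1, cb2})
    a_intra = ca1 == ca2  # contact A is a loop on a single chain
    b_intra = cb1 == cb2
    if n_chains == 1:
        # single-strand motifs: order the four sites along the chain
        word = ''.join(label for _, label in
                       sorted([(pa1, 'A'), (pa2, 'A'), (pb1, 'B'), (pb2, 'B')]))
        return {'AABB': 'S', 'BBAA': 'S', 'ABBA': 'P',
                'BAAB': 'P', 'ABAB': 'X', 'BABA': 'X'}[word]
    if n_chains == 2:
        if a_intra and b_intra:
            return 'I2'  # two loops on two separate chains
        return 'L2' if not (a_intra or b_intra) else 'T2'
    if n_chains == 3:
        return 'I3' if (a_intra or b_intra) else 'T3'
    return 'I4'
```

For the \(T_{2}\) umbrella example from the text, `multichain_string([['A', 'B', 'A'], ['B']])` returns the string `ABBOAO`.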
We label the chains by \(1\,,2\,,3\,,...\) from the left and introduce notation that is common in the theory of braiding, following the Artin representation [13]. The operators \(\sigma_{i}\), with \(i=1,2,...,n-1\), indicate that a chain with label \(i\) passes _over_ the chain \(i+1\). The inverse operators \(\sigma_{1}^{-1},\sigma_{2}^{-1}\,,...,\sigma_{i}^{-1}\) indicate that a chain with label \(i\) passes _underneath_ the chain with label \(i+1\). This way, we can represent a braid with index \(n\) by a string of \(\sigma\)-operators. For example, the string \(\beta=\sigma_{2}\sigma_{1}^{-1}\sigma_{3}^{-1}\sigma_{1}\sigma_{1}\sigma_{2} \sigma_{2}^{-1}\) corresponds to the braid shown in Fig. 2. We remark here that our string notation for CT and the Artin braid notation can quite naturally be combined into a generic framework by inserting the braid operators \(\sigma_{i}\) in the string after every \(n\)-tuple of contacts. An example is given in Fig. 2(b). Hard contacts can then be modelled within this framework as separate operators, with their own set of Reidemeister moves and bond moves [14]. We defer to ongoing theoretical research on this topic and will not use the string notation any further in this manuscript. Let us formulate some useful properties of braids before continuing. Similar to knot theory, braids belong to the same equivalence class if they are related by a set of moves analogous to the so-called Reidemeister moves. The three fundamental moves can be formulated as follows: 1. \(\sigma_{i}\sigma_{j}=\sigma_{j}\sigma_{i}\) for \(|i-j|\geq 2\) (disjoint strand relation) 2. \(\sigma_{i}\sigma_{i+1}\sigma_{i}=\sigma_{i+1}\sigma_{i}\sigma_{i+1}\) (Skein relation) 3. \(\sigma_{i}\sigma_{i}^{-1}=e\) (annihilation relation) The element \(e\) is the unit element of the braid group; if we encounter such an operator pair we can eliminate it from the string. The braid group \(B_{n}\) itself is then defined from these moves.
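These moves can be applied mechanically to shorten a braid word. The Python sketch below (our own, not from the paper) encodes a word as a list of signed integers, \(\pm i\) for \(\sigma_{i}^{\pm 1}\), and repeatedly applies annihilation (move 3) together with the disjoint-strand commutation (move 1); this is a heuristic reduction that suffices for short words, not a full normal-form algorithm.

```python
def writhe(word):
    # Writhe: sum of the exponents, i.e. the signs of the letters.
    return sum(1 if g > 0 else -1 for g in word)

def reduce_word(word):
    """Shorten a braid word using annihilation (sigma_i sigma_i^-1 = e)
    and far commutation (sigma_i sigma_j = sigma_j sigma_i, |i-j| >= 2)."""
    w = list(word)
    changed = True
    while changed:
        changed = False
        # annihilate adjacent inverse pairs
        i = 0
        while i < len(w) - 1:
            if w[i] == -w[i + 1]:
                del w[i:i + 2]
                changed = True
                i = max(i - 1, 0)
            else:
                i += 1
        # commute distant generators to expose a new cancellation
        for i in range(len(w) - 2):
            if abs(abs(w[i]) - abs(w[i + 1])) >= 2 and w[i] == -w[i + 2]:
                w[i], w[i + 1] = w[i + 1], w[i]
                changed = True
                break
    return w
```

For the example word \(\beta=\sigma_{2}\sigma_{1}^{-1}\sigma_{3}^{-1}\sigma_{1}\sigma_{1}\sigma_{2}\sigma_{2}^{-1}\), `reduce_word([2, -1, -3, 1, 1, 2, -2])` returns `[2, -3, 1]`, i.e., the three-letter word \(\sigma_{2}\sigma_{3}^{-1}\sigma_{1}\), and `writhe` of the original word is 1.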
A natural, yet nontrivial question now arises: which topological quantities can we compute from this braid representation to characterise the entanglement of our system of semiflexible polymers? We briefly discuss four possible measures that can quantify certain topological properties of a braid: the writhe, braid length, complexity index and isotopy class. ### Writhe The writhe \(\mathrm{W}(\beta)\) of a braid \(\beta\) is defined as the sum of the exponents of the braid operators, i.e., for \(\beta=\sigma_{i_{1}}^{a_{1}}\sigma_{i_{2}}^{a_{2}}...\sigma_{i_{m}}^{a_{m}}\) it is \(\mathrm{W}(\beta)=\sum_{k}a_{k}\). It characterises the net twist of a braid and can be used to detect whether underlying (chemical) chirality of the monomers or interactions between them influences the mesoscopic twist of the system [15]. When there are no interactions that prefer one handedness over the other, the writhe can be expected to be symmetrically distributed with mean zero. Since two equivalent braids have the same writhe, it is a topological braid invariant. In, e.g., Fig. 2(a) the writhe is \(\mathrm{W}=1\), while for (b) it is \(\mathrm{W}=0\). Since the Kremer-Grest model used in the next section to numerically simulate the system does not intrinsically include any chiral interactions, we compute the writhe only as a sanity check. ### Braid length The braid length \(L(\beta)\) is a topological measure, not a geometrical one. It counts the number of operators in the braid word \(\beta\). Since we can always add the trivial combination \(\sigma_{i}\sigma_{i}^{-1}\) to the braid word, the length is not an invariant. However, the minimal braid length \(L_{c}(\beta)\), obtained by reducing the braid word to its normal form, is a braid invariant. Henceforth, we will only consider the minimal length and will just refer to it as "the length \(L\)", without ambiguity. In Fig.
2(a), the minimum braid length is \(L=3\), since the normal form of the braid is \(\beta_{c}=\sigma_{2}\sigma_{3}^{-1}\sigma_{1}\). To some extent, the length can be used as a measure for the complexity of a braid, i.e., the greater the length, the more complex the associated braid. Since it is a feature of the normal form, trivial entanglement such as, e.g., projection-induced crossings \(\sigma_{i}\sigma_{i}^{-1}\) is eliminated and only actual crossings are counted. Naturally, since we only consider systems with a finite number of monomers, we can assume that the maximum braid length is bounded and that it heavily depends on the number of chains \(M\), the number of monomers per chain \(N\) and the stiffness \(\kappa\). ### Complexity index Just how entangled is a braid? This is not an easy question to answer, since complexity is not an objective observable. We therefore choose to characterise entanglement by the complexity index \(C(\beta)\)[16]. The braid complexity index, together with the braid length, constitutes our main measure to characterise the topology and entanglement properties of a braid. It is defined as the number of intersections (which we denote by \(\#\)) of a curve spanning a punctured disk of \(n\) punctures with the real axis, after application of the braiding operations. This is illustrated in Fig. 3. Normalising by the number of initial intersections \(\#E\) with the real axis, which is \(n-1=3\) in our case, and subsequently taking the natural logarithm, the complexity can be written as \(C(\beta)=\log\left(\frac{\#\beta E}{\#E}\right)\). For example, the braids \(\beta_{1}=\sigma_{1}\sigma_{2}\) and \(\beta_{2}=\sigma_{1}\sigma_{2}^{-1}\) are very similar, differing only in the sign of the second crossing, yet their complexities are \(C_{1}=1.099\) and \(C_{2}=1.386\), respectively, as can be seen in Fig. 3(c)-(d).
This indicates that braid \(\beta_{2}\) is more complex (i.e., more entangled) than braid \(\beta_{1}\), in line with intuition. In particular, since we only consider systems consisting of \(M=4\) chains, we will use the base-three logarithm instead of the natural logarithm, i.e., \(C=\log_{3}(\frac{\#\beta E}{\#E})\). In this formulation, the braid \(\beta=\sigma_{1}\sigma_{2}^{-1}\) has a complexity equal to unity.

Figure 1: The set of circuit topology motifs for \(n=1,2,3,4\) strands. Contacts \(A\) and \(B\) are indicated by light and dark red filled circles, while ghost contacts \(\mathcal{O}\) are indicated by filled green circles. Different chains are indicated by different colours. The coloured chain ends indicate the orientation of the strands; we choose to orient every strand from the yellow to the violet end. Corresponding string notations are given below each motif, where the string is read in \(n\)-tuples.

### Isotopy class Every braid can be classified according to three types of isotopy classes given by the Thurston-Nielsen (TN) classification: reducible (RE), finite-order (FO) or pseudo-Anosov (PA) [17; 18; 19]. The isotopy class of a braid carries global information about the topology. For instance, a braid is finite-order (or, alternatively, periodic) if some power of it is equal to a power of the half-twist braid \(\Delta\)[20]. Reducible braids can be decomposed in distinct subbraids, where we can imagine "tubes" encompassing the subbraids, which then themselves form a braid. Finally, braids that are not finite-order or reducible are called pseudo-Anosov, and they represent well-entangled braids that cannot trivially be decomposed or untied. We will use the TN isotopy class to differentiate between twisted and braided polymer bundles.
## III Aggregation and structural phase transitions To study the braiding properties of a multichain system, we simulated a coarse-grained bead-spring model system of \(M=4\) chains, each with \(N=30\) or \(N=10\) monomers with mass \(m\), using the Kremer-Grest model [21; 22] by means of the LAMMPS software. The total number of monomers in the system is then \(\mathcal{N}=M\cdot N\). The interaction potential between monomers is of Lennard-Jones (LJ) type, whose length and cohesive strength scales are \(\sigma\) and \(\epsilon k_{B}T\), respectively. The hard-sphere monomer beads are connected by strong nonlinear springs, characterised by a finitely extensible nonlinear elastic (FENE) potential. The bending energy between two successive bonds spanning an angle \(\theta\) is equal to \(U_{\rm bend}(\theta)=\kappa k_{B}T(1-\cos(\theta))\), where \(\kappa\) is the bending parameter or elastic constant. In all simulations, we set \(k_{B}=\epsilon=\sigma=1\), such that the stiffness parameter and temperature tune the bending energy of the chains in units of \(\epsilon\). The characteristic timescale is \(\tau=\sigma\sqrt{m/k_{B}T}\) and the average bond length is \(\ell_{b}=0.965\sigma\). The time step for all simulations is set to \(\Delta t=0.01\tau\). After equilibration, the circuit topological content is recorded. To determine the mutual entanglement of collections of fluctuating chains, a primitive path analysis (PPA) is subsequently carried out to reduce the system to a collection of straight segments that are interlocked [23]. The PPA algorithm essentially contracts the contours of the chains while keeping the ends fixed and without allowing strand passages; see Fig. 4. In this manner, we combine circuit topology with entanglement and make subsequent braid analysis easier to perform.
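For concreteness, the model's energy terms can be sketched as follows. The bending term is the one defined in the text; the FENE parameters \(k=30\,\epsilon/\sigma^{2}\) and \(R_{0}=1.5\,\sigma\) are the customary Kremer-Grest values and are our assumption, since the text does not quote them.

```python
import numpy as np

def u_lj(r, eps=1.0, sigma=1.0):
    # Lennard-Jones pair potential between monomers.
    return 4.0 * eps * ((sigma / r)**12 - (sigma / r)**6)

def u_fene(r, k=30.0, r0=1.5):
    # Attractive FENE bond term; diverges as r -> r0.
    # k = 30 eps/sigma^2 and r0 = 1.5 sigma are assumed, typical values.
    return -0.5 * k * r0**2 * np.log(1.0 - (r / r0)**2)

def u_bend(positions, kappa, kBT=1.0):
    # U_bend = kappa * kB * T * sum_i (1 - cos(theta_i)), with theta_i the
    # angle between successive bond vectors, as defined in the text.
    bonds = np.diff(np.asarray(positions, dtype=float), axis=0)
    units = bonds / np.linalg.norm(bonds, axis=1, keepdims=True)
    cos_theta = np.sum(units[:-1] * units[1:], axis=1)
    return kappa * kBT * np.sum(1.0 - cos_theta)
```

A straight chain has zero bending energy, while a right-angle kink costs exactly \(\kappa k_{B}T\).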
After the PPA is performed, the spatial coordinates of the monomers are recorded and connected by means of a linear interpolation, which is justified due to the volume exclusion we use in the simulations and the PPA algorithm; no other monomer can occupy the space between two bonded monomers in the polymer backbone. The resulting "trajectories" are subsequently analysed by means of the Matlab package BraidLab [24; 19] and the writhe, braid length, complexity index and isotopy class are computed. More details on the computational aspects can be found in [25].

Figure 4: **(a)** The reduction of a fluctuating polymer (black line) within its confining tube (gray) to a primitive path (orange line). Hard contacts are indicated by colored points. **(b)** Topological arrangement of the individual hard contact points (colored) within the polymer's primitive path.

Figure 3: The punctured disk representation of the braid complexity for four different braids. Punctures corresponding to strands are indicated by coloured dots. The operators \(\sigma_{i}\), \(\sigma_{i}^{-1}\) change punctures \(i\) and \(i+1\) CW or CCW, respectively. Intersections with the central axis are indicated by open circles.

We now study the braided circuit in two aspects: the circuit topological content and the braiding measures. For every configuration, the writhe, complexity, braid length and isotopy class are recorded. To track the structure of the polymer system, we will use the multichain radius of gyration \(R_{g}\), which is defined as \[R_{g}^{2}=\frac{1}{\mathcal{M}}\sum_{i=1}^{\mathcal{N}}m_{i}|r_{i}-r_{CM}|^{2}\,, \tag{1}\] where \(\mathcal{M}=\sum_{i}m_{i}\) is the total mass of all monomers, \(r_{i}\) and \(m_{i}\) are respectively the position and mass of the \(i\)th monomer, and \(r_{CM}\) is the center of mass of the full system. We subsequently make \(R_{g}\) dimensionless by rescaling by the single-chain radius of gyration, i.e., \(\widetilde{R}_{g}=R_{g}/R_{g,\theta}\), where \(R_{g,\theta}=b\,\sqrt{(N-1)/6}\) is the entropically governed radius of gyration of a three-dimensional random walk with equilibrium bond length \(b\) and stiffness \(\kappa=0\). Additionally, we define another end-to-end correlation parameter \(C_{R}\) as [6] \[C_{R}=\frac{2}{M(M-1)}\sum_{i<j}\left(\mathbf{\hat{R}_{i}}\cdot\mathbf{\hat{R}_{j}}\right)^{2}\,, \tag{2}\] where \(\mathbf{\hat{R}_{i}}\) is the unit end-to-end vector of the \(i\)th polymer. This parameter is \(C_{R}=1\) for completely aligned bundles, and \(C_{R}=1/3\) for uncorrelated polymer systems, i.e., in the amorphous regime. This parameter plays a role similar to a nematic order parameter. Due to long equilibration times for longer polymers, we analyse the smaller \(N=10\) system in more detail, averaged over a larger ensemble of configurations, while for the longer \(N=30\) polymers we present rougher data. For all configurations studied, we perform the simulations in the NVT ensemble, where we fix the temperature \(T\) to be low enough to be in the aggregation regime. Since we do not focus on the thermodynamics of the system in this work, we will not study thermodynamic phase transitions and will only vary the temperature as a means to control the aggregation of the system before varying the stiffness, ensuring that all polymers participate in the resulting amorphous or bundled aggregate. Let us first consider the case \(N=10\) at a temperature of \(T=1\) and density \(\rho=0.01\), where we vary the parameter \(\kappa\) from \(\kappa=0\) to \(\kappa=12\) in increments of \(\Delta\kappa=0.2\), covering the entire stiffness range. We average over 1000 realisations of the system per value of \(\kappa\), after equilibrating the system for a time of \(20000\tau\), such that \(\widetilde{R}_{g},\,C_{R}\) have saturated and only fluctuate around their steady-state value.
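Eqs. (1) and (2) translate directly into code; a minimal sketch follows (the array layout of the coordinates is our own choice):

```python
import numpy as np

def radius_of_gyration(positions, masses=None):
    # Eq. (1): mass-weighted radius of gyration of the full multichain system.
    pos = np.asarray(positions, dtype=float)
    m = np.ones(len(pos)) if masses is None else np.asarray(masses, dtype=float)
    r_cm = np.average(pos, axis=0, weights=m)  # center of mass
    rg2 = np.sum(m * np.sum((pos - r_cm)**2, axis=1)) / np.sum(m)
    return np.sqrt(rg2)

def end_to_end_correlation(chains):
    # Eq. (2): C_R = 2/(M(M-1)) * sum_{i<j} (R_hat_i . R_hat_j)^2,
    # with R_hat_i the unit end-to-end vector of chain i.
    ends = [np.asarray(c[-1], dtype=float) - np.asarray(c[0], dtype=float)
            for c in chains]
    units = [e / np.linalg.norm(e) for e in ends]
    m_chains = len(units)
    total = sum(np.dot(units[i], units[j])**2
                for i in range(m_chains) for j in range(i + 1, m_chains))
    return 2.0 * total / (m_chains * (m_chains - 1))
```

A bundle of chains sharing one axis gives \(C_{R}=1\), while mutually orthogonal chains give \(C_{R}=0\); the uncorrelated ensemble average is \(1/3\).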
The end-to-end correlation parameter \(C_{R}\) varies smoothly from \(C_{R}\approx 1/3\) to \(C_{R}\approx 0.84\), not completely reaching \(C_{R}=1\). This indicates that the four chains are aligned along a common axis and that they have transitioned from an uncorrelated, amorphous state to a more ordered state, where the chains form an oriented bundle. These aligned chains can twist around each other and form (braided) bundles in order to minimise their energy, or as a means for kink stabilisation as a result of defects [26]. This can be observed by studying the braid length and complexity, which are increasing functions of \(\kappa\) and which saturate at the transition at \(\kappa\approx 7\). Note also that the average writhe fluctuates around zero, as should be the case. A more detailed breakdown of the distributions of the aforementioned quantities as functions of \(\kappa\) is shown in Fig. 5. The isotopy class also reveals information on the topology of the system. Generally, there is a high fraction of finite-order braids for all values of \(\kappa\). A natural explanation for this behaviour is that for low values of \(\kappa\) the chains first collapse onto themselves and only then aggregate. As a result, when the PPA reveals the resulting braid \(\beta\), it is unbraided, i.e., equal to the identity \(\beta=e\). Since the latter is finite-order, we see an overabundance of this isotopy class for low values of \(\kappa\). The increasing fraction of reducible and pseudo-Anosov braids for higher values of \(\kappa\) is then a natural extension of this reasoning; when the chains become stiffer, they have a higher probability of aggregating and forming more complex braids. For the reducible isotopy class, a subset of the chains aggregate first and form separate braids, e.g., \(\beta=\sigma_{1}\sigma_{3}\), and subsequently aggregate into a single structure. The pseudo-Anosov class then results from a collective aggregation and braiding.
These statements about the isotopy class are not exhaustive; a pseudo-Anosov braid can also result from taking a different projection of the system. As an example, consider the braid \(\beta=\sigma_{1}\sigma_{2}\), in which three strands cross and the fourth is left unbraided. This is a reducible braid, with "tubes" around the first three strands and around the fourth. By rotating our projection angle slightly, we could make the third strand undercross the fourth, making it a pseudo-Anosov braid. Since there is no preferred direction or external force that influences the system, the projection angle depends on the specifics of the polymer coordinates. We simulate the system multiple times, and thus we can assume that such artefacts have a negligible influence. Let us now study the circuit topology in more detail. The ensemble-averaged circuit topological motif fractions are shown in Fig. 6(c); a more detailed breakdown of the distribution of the topological motif fractions as a function of the stiffness is given in the Supplemental Material [25]. From these figures, it can be seen that already for \(\kappa=0\), the fraction of single-chain motifs, i.e., \(S,\,P,\,X\), is very low. The reason is that these motifs require the formation of two loops on a single chain, which is highly unlikely for the short polymers we study here. As a result, the motifs that either involve two loops on two separate chains (i.e., \(I_{2}\)), or one loop on a single chain (\(T_{2}\) and \(I_{3}\)), are somewhat more represented. Since in our setup we study the aggregated phase, the motifs that consist of only interchain connections (\(L_{2}\), \(T_{3}\), \(I_{4}\)) are dominant for \(\kappa\gtrsim 2\).
Within the latter motifs, the relative abundance of \(T_{3}\) is a consequence of the low number of polymers in the system; a disordered chain can more easily form two distinct bonds with two other chains, in contrast with forming two distinct bonds with the same chain, which tends to align both chains. Moreover, the \(I_{4}\) motif requires that the two bonds do _not_ share a polymer. Since the aggregates are closely packed, this situation is also more unlikely than either \(T_{3}\) or \(L_{2}\). Therefore, we see that, on average, \([T_{3}]>[L_{2}]>[I_{4}]\) for all values of \(\kappa\). For increasing stiffness \(\kappa\) the aggregate aligns and the motifs that contain loops vanish. It can then be seen that the motifs containing only interchain contacts, i.e., \(L_{2}\), \(T_{3}\) and \(I_{4}\), tend to dominate the system. For \(\kappa\gtrsim 7\), the sum of these motifs is equal to one, indicating that all intrachain contacts (i.e., loops) have disappeared. Hence, the total fraction of \(L_{2}\), \(T_{3}\) and \(I_{4}\) motifs can be used as an order parameter to measure the degree of alignment of an aggregate. Let us now consider longer chains, where the number of monomers is now \(N=30\). At a temperature of \(T=0.1\) and density \(\rho=0.01\), we equilibrate the system for a time of \(350000\tau\) until \(\widetilde{R}_{g}\) has, on average, reached its equilibrium value. Taking stiffness steps of \(\Delta\kappa=1\), we again compute the braiding quantities \(W\), \(C\) and \(L\), along with \(\widetilde{R}_{g}\), \(C_{R}\) and the CT fractions. The results are shown in Figs. 7 and 8. In contrast with the \(N=10\) system, we see that the topological quantities are no longer monotonic, but exhibit a region of increased braid length and complexity for \(\kappa\lesssim 8\), while for \(\kappa\gtrsim 8\) these quantities saturate at the values \(L\approx 3\) and \(C\approx 1\).
From this behaviour, it becomes clear that for very flexible chains individual collapse occurs before aggregation, decreasing the probability of finding heavily entangled aggregates. When the stiffness increases, however, there is a competition between aggregation and collapse; the individual chains can now aggregate and subsequently collectively collapse to form more intricate entangled structures. As a consequence, the length and complexity increase while \(\widetilde{R}_{g}\) and \(C_{R}\) indicate that the system is still not aligned. For a stiffness of \(\kappa\approx 8\), the individual chains aggregate but do not collapse, leading to lower average values for the braid length and complexity. Note that \(L\) and \(C\) converge to the values \(L=3\) and \(C=1\) or, equivalently, the ratio \(C/L\) converges to \(1/3\), which corresponds to the braid \(\beta=\sigma_{1}^{\pm 1}\sigma_{2}^{\pm 1}\sigma_{3}^{\pm 1}\) (and isotopically equivalent braids). For four strands, this indicates that a single strand crosses over (under) all others; none of the strands in such a braid are interwoven. Consequently, the aligned polymer bundle is neither twisted nor braided; the remaining crossings are a result of the choice of projection angle. We can see that this also holds for the \(N=10\) system, where \(C/L\approx 0.346\).

Figure 5: Density plots of the topological observables for \(N=10\) as a function of the stiffness \(\kappa\). It can be easily seen that there is a transition from an amorphous to an aligned aggregate around \(\kappa\approx 7\). The colours indicate the probability of an observable having a value given on the \(y\)-axis. For high stiffness values, all observables stabilise around a fixed distribution.

Figure 6: Observables in the polymer system with \(M=4\), \(N=10\) at \(T=1\), \(\rho=0.01\). **(a)** Writhe \(W\), complexity \(C\) and length \(L\). **(b)** The rescaled radius of gyration \(\widetilde{R}_{g}\) and correlation parameter \(C_{R}\). **(c)** Circuit topology fractions of the different motifs; error bars are smaller than symbol size. **(d)** Isotopy class given by the TN types (RE, FO, PA), and representative structural conformations of the system at different \(\kappa\). All results are averaged over 1000 runs.

Figure 7: Observables in the polymer system with \(M=4\), \(N=30\) at \(T=0.1\), \(\rho=0.01\). **(a)** Writhe \(W\), complexity \(C\) and length \(L\). **(b)** The rescaled radius of gyration \(\widetilde{R}_{g}\) and correlation parameter \(C_{R}\). **(c)** Circuit topology fractions of the different motifs; error bars are smaller than symbol size. All results are averaged over 1000 runs.

Considering now the CT fractions, shown in Fig. 7(c) and in the Supplemental Material [25], we see a distinct difference with respect to the \(N=10\) system. While the overall monotonic behaviour of the fractions remains the same as before, a plateau arises for values \(4\lesssim\kappa\lesssim 7\), when the braiding quantities reach their maximal value. For larger stiffness, aggregation wins out over the collapse and the CT fractions associated with interchain interactions, i.e., \(L_{2}\), \(T_{3}\) and \(I_{4}\), again dominate the motif space, since the polymers are stretched along a single direction when forming the aggregate. While \(\widetilde{R}_{g}\) and \(C_{R}\) might be able to detect the structural phase transition between the amorphous and aligned configurations, circuit topology provides a more nuanced and rich approach to the topology of entangled systems. In light of the \(N=30\) results, we can revisit the absence of the plateau phase in the \(N=10\) system. It is indeed possible that such a plateau phase exists there as well, but since the chains are too short to form complex braided aggregates, the stiffness range over which it occurs may be too small to resolve in simulations.
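The alignment order parameter proposed above, the combined fraction of interchain-only motifs, is trivial to tally once motif labels per contact pair are available; a minimal sketch:

```python
from collections import Counter

def alignment_order_parameter(motif_labels):
    """Phi = [L2] + [T3] + [I4]: combined fraction of motifs built solely
    from interchain contacts. Phi -> 1 for a fully aligned bundle, in
    which all intrachain loops have disappeared."""
    counts = Counter(motif_labels)
    total = sum(counts.values())
    return sum(counts[m] for m in ('L2', 'T3', 'I4')) / total
```

For instance, a sample of labels `['L2', 'T3', 'I4', 'T2']` yields \(\Phi=0.75\), while a sample containing only interchain motifs yields \(\Phi=1\).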
## IV Conclusions In summary, our investigation of aggregates of semiflexible polymers has provided insights into their structural properties and phase transitions. We observed that varying the polymer stiffness leads to a structural "phase diagram" in which different conformations, such as amorphous or extended structures, are favoured. By adjusting the stiffness, we were able to study the competition between collapse and stiffening effects. The inter- and intrachain contacts provided valuable information about the system's structural properties and phase transitions, which could be analyzed within the framework of braided multichain circuit topology (CT). The combination of circuit topology and entanglement analysis allowed for a more comprehensive understanding of aggregation processes in systems with a small number of chains. The analysis of braided circuits revealed the importance of topological measures such as writhe, complexity, braid length, and isotopy class. The isotopy class analysis showed a higher fraction of finite-order braids for lower stiffness values, while higher stiffness values favoured more complex pseudo-Anosov and reducible braids. The circuit topological analysis revealed the dominance of interchain connections, particularly the motifs \(T_{3}\), \(L_{2}\), and \(I_{4}\), which reflects the closely packed nature of the aggregates. When considering longer chains, we observed a region of increased braid length and complexity, indicating a competition between aggregation and collapse. For very flexible chains, individual collapse occurred before aggregation, reducing the probability of finding heavily entangled aggregates. However, as stiffness increased, the chains aggregated and collectively collapsed, forming intricate entangled structures. The length and complexity of the braids increased in this regime, while the radius of gyration and end-to-end correlation parameter indicated that the system was not fully aligned.
The circuit topology fractions revealed differences compared to the smaller system, with a plateau phase emerging for intermediate stiffness values when the braiding quantities reached their maximum. For larger stiffness, the dominance of interchain motifs indicated the stretching of polymers along a single direction during aggregate formation. Our study provided a comprehensive understanding of the structural properties, phase transitions, and braiding characteristics of multichain systems. The combination of circuit topology analysis and braiding measures allowed for a deeper exploration of the system's behavior, shedding light on the interplay between collapse, aggregation, and entanglement. These findings contribute to the broader understanding of semiflexible polymer systems and pave the way for further investigations in this field. One future application we envision is the study of the coil-globule transition for water-soluble hydrophobic polymer chains in aqueous solutions [27]. In such systems, the explicit presence of a polar solvent weakens the attractive interactions among monomers at temperatures close to room temperature and the polymers collapse upon heating, which is in contrast to polymers in inorganic solvents. Circuit topology can then be used to study the conformational changes of such systems at the critical temperature. Finally, the general framework of braided circuit topology is expected to be applicable to the broader field of materials science, including the design and analysis of nanotube, nanofiber, and nanowire assemblies.

Figure 8: Density plots of the topological observables for \(N=30\) as a function of the stiffness \(\kappa\). It can be easily seen that there is a transition from an amorphous to an aligned aggregate around \(\kappa\approx 8\). The colours indicate the probability of an observable having a value given on the \(y\)-axis. For high stiffness values, all observables stabilise around a fixed distribution.
###### Acknowledgements.

We are grateful for inspiring discussions with I. Diamantis on the topic of braids, and to K. Koga for suggesting the water-soluble polymer chains as an avenue for future research. The data that support the findings of this study are available from the corresponding author upon reasonable request.
2306.02595
Explore and Exploit the Diverse Knowledge in Model Zoo for Domain Generalization
The proliferation of pretrained models, as a result of advancements in pretraining techniques, has led to the emergence of a vast zoo of publicly available models. Effectively utilizing these resources to obtain models with robust out-of-distribution generalization capabilities for downstream tasks has become a crucial area of research. Previous research has primarily focused on identifying the most powerful models within the model zoo, neglecting to fully leverage the diverse inductive biases contained within. This paper argues that the knowledge contained in weaker models is valuable and presents a method for leveraging the diversity within the model zoo to improve out-of-distribution generalization capabilities. Specifically, we investigate the behaviors of various pretrained models across different domains of downstream tasks by characterizing the variations in their encoded representations in terms of two dimensions: diversity shift and correlation shift. This characterization enables us to propose a new algorithm for integrating diverse pretrained models, not limited to the strongest models, in order to achieve enhanced out-of-distribution generalization performance. Our proposed method demonstrates state-of-the-art empirical results on a variety of datasets, thus validating the benefits of utilizing diverse knowledge.
Yimeng Chen, Tianyang Hu, Fengwei Zhou, Zhenguo Li, Zhiming Ma
2023-06-05T04:58:41Z
http://arxiv.org/abs/2306.02595v1
# Explore and Exploit the Diverse Knowledge in Model Zoo for Domain Generalization

###### Abstract

The proliferation of pretrained models, as a result of advancements in pretraining techniques, has led to the emergence of a vast zoo of publicly available models. Effectively utilizing these resources to obtain models with robust out-of-distribution generalization capabilities for downstream tasks has become a crucial area of research. Previous research has primarily focused on identifying the most powerful models within the model zoo, neglecting to fully leverage the diverse inductive biases contained within. This paper argues that the knowledge contained in weaker models is valuable and presents a method for leveraging the diversity within the model zoo to improve out-of-distribution generalization capabilities. Specifically, we investigate the behaviors of various pretrained models across different domains of downstream tasks by characterizing the variations in their encoded representations in terms of two dimensions: diversity shift and correlation shift. This characterization enables us to propose a new algorithm for integrating diverse pretrained models, not limited to the strongest models, in order to achieve enhanced out-of-distribution generalization performance. Our proposed method demonstrates state-of-the-art empirical results on a variety of datasets, thus validating the benefits of utilizing diverse knowledge.

## 1 Introduction

Although remarkable success has been achieved on multiple benchmarks, machine learning models encounter failures in their real-world applications (Volk et al., 2019; Beery et al., 2018; Dai and Van Gool, 2018). A central cause for such failures has been recognized as the vulnerability to the _distribution shifts_ of the test data (Arjovsky et al., 2019; Gulrajani and Lopez-Paz, 2021).
This can occur when test data is collected under new conditions such as different weather (Volk et al., 2019), locations (Beery et al., 2018), or light conditions (Dai and Van Gool, 2018), resulting in a distribution that differs from the training set. To address this challenge, the task of domain generalization (DG) has gained significant attention, where models are trained on multiple source domains in order to improve their generalizability to unseen domains (Gulrajani and Lopez-Paz, 2021). Multiple DG algorithms have been proposed from various perspectives. However, this problem is still far from being resolved. For example, Ye et al. (2022) have identified two distinct categories of data distribution shifts, namely _diversity shift_ and _correlation shift_, and empirically observed that the majority of existing algorithms are only able to surpass simple empirical risk minimization (ERM) in at most one of the categories. Exploiting pretrained models (PTMs) has been shown to be one of the most promising directions for addressing the challenge of DG tasks (Wiles et al., 2022; Ye et al., 2022). Research has demonstrated that pretraining can provide a significant improvement in performance for DG tasks (Wiles et al., 2022). The growing PTM hubs further bring in great opportunities. With the thriving of pretraining technologies, a huge number of PTMs have now been published. For example, Hugging Face Hub (2023) contains over 80K models that vary in data sources, architectures, and pretraining frameworks. Such a zoo of PTMs thus enjoys both high transfer ability and diversity. By selecting optimal PTMs for given DG datasets from a zoo of PTMs, Dong et al. (2022) boosted the state-of-the-art DG performance on some benchmarks by over 14%. While utilizing PTMs has proven to be a promising approach for domain generalization, it remains unclear how to effectively leverage the diverse inductive biases present in different PTMs.
Ensemble methods of PTMs have been explored (Dong et al., 2022; You et al., 2021); however, these methods typically only consider the top-performing models based on performance ranking scores. For example, Dong et al. (2022) proposed a feature selection method on the concatenated features of the top-3 ranked PTMs. However, without incorporating diversity, such ensembles can perform worse than single models. Although some previous studies have examined certain characteristics of different PTMs (Gontijo-Lopes et al., 2022; Idrissi et al., 2022), they are not tailored to DG tasks but focus on the in-distribution behavior of the models. This makes it unclear how to effectively utilize these analyses for tackling DG tasks. To address this challenge, it is crucial to first investigate the compatibility of different PTMs on specific DG tasks and to understand their inductive biases as thoroughly as possible. To achieve this, we propose to profile the shift behaviors of each PTM when conditioned on a given DG task, and then to design an ensemble algorithm that can effectively utilize the profiled shift types. Specifically, similar to the definition presented in (Ye et al., 2022), we interpret the behaviors of PTMs across different domains of downstream tasks by characterizing the variation in their encoded representations from two dimensions, namely _feature diversity shift_ and _feature correlation shift_. Through this design, we empirically demonstrate that the differences in shift patterns not only exist among datasets but also among different PTMs. Such profiling provides guidance for utilizing the inductive bias of poorly performing models which have typical shift patterns on one of the dimensions. As these models capture features that induce a specific kind of distribution shift, we can design ensemble algorithms that prevent the classifier from encountering similar failures, thus improving the out-of-distribution (OOD) generalization ability.
To accomplish this, we introduce two key components in our ensemble algorithm: the sample reweight module and the independence penalization module. The sample reweight module utilizes the output of a correlation shift-dominated model to balance the weights of sub-populations, while the independence penalization module requires the main classifier's output to be independent of features that encounter significant diversity shifts among domains. These ensemble procedures are applied during the training process, introducing no additional computational cost for inference. We empirically verify the value of such model zoology on image classification benchmarks, with a model zoo that consists of 35 PTMs varying in architecture, pretraining algorithm, and datasets. The results of our empirical analysis demonstrate the effectiveness of our approach in leveraging poor models to enhance performance, as our new algorithm outperforms top model ensembles. We show that the selected models are different across different datasets, which indicates that our method is adaptive to the specific DG tasks. Our contributions can be summarized as follows.

* We propose a novel methodology for profiling the behavior of pretrained models (PTMs) on a given domain generalization (DG) task by quantifying the distribution shift of the features from two dimensions, namely feature diversity shift and feature correlation shift.
* We introduce a new ensemble algorithm that leverages the insights from the profiled shift types to effectively utilize the diverse inductive bias among different PTMs for DG tasks.
* Through extensive experiments on image classification DG benchmarks, we demonstrate the effectiveness of our proposed approach, which outperforms top-performing PTM ensembles.
This work provides a new perspective on how to effectively leverage the diverse inductive bias of PTMs for domain generalization tasks and highlights the importance of understanding the shift behaviors of models for such tasks.

## 2 Related Works

Domain generalization. Numerous domain generalization algorithms have been proposed to alleviate the accuracy degradation caused by distribution shifts via exploiting training domain information (Arjovsky et al., 2019; Krueger et al., 2021; Li et al., 2018; Bai et al., 2021; Kuang et al., 2018; Cha et al., 2021; Wang et al., 2022; Yi et al., 2023). However, Gulrajani and Lopez-Paz (2021) empirically show that recent domain generalization algorithms show no improvement compared with ERM. More fine-grained analyses are further conducted (Ye et al., 2022; Wiles et al., 2022), where distribution shifts are decomposed into multiple categories. Ye et al. (2022) empirically observed that the majority of the algorithms are only able to surpass the simple ERM in at most one kind of distribution shift. Wiles et al. (2022) show that progress has been made over a standard ERM baseline; though the best methods are not consistent over different data shifts, pretraining and augmentations usually offer large gains.

PTMs for domain generalization. Methods leveraging pretrained models have shown promising improvements in domain generalization performance (Wiles et al., 2022; Li et al., 2022; Arpit et al., 2021; Dong et al., 2022; Wortsman et al., 2022; Rame et al., 2022). Among them, ensemble methods combined with PTMs show further advantages. Weight averaging methods combine weights of PTMs of the same architecture over different runs (Rame et al., 2022; Wortsman et al., 2022) or tasks (Rame et al., 2022). Arpit et al. (2021) ensemble the predictions of moving average models.
Recent methods (Li et al., 2022; Dong et al., 2022) further consider the ensemble of models with different architectures to exploit the growing large PTM hubs. Specifically, Li et al. (2022) ensemble predictions of multiple different PTMs via instance-specific attention weights. ZooD (Dong et al., 2022) reduces the inference cost by only concatenating the representations of top models selected from a diverse model zoo and further conducts Bayesian feature selection. However, as shown in (Dong et al., 2022), such an ensemble does not always outperform the single model. The diversity in the model zoo has not been fully understood and exploited, which is the focus of this paper.

Understanding PTMs. The paradigm of PTM reusing triggers the need for understanding the behavior of a PTM on a given downstream task. Recently, studies on the difference in PTM features have been proposed (Gontijo-Lopes et al., 2022; Idrissi et al., 2022), which focus on the in-distribution behavior of the models. Gontijo-Lopes et al. (2022) suggest that models under different pretraining techniques learn diverse features. They observe that the correct predictions of high-accuracy models do not dominate those of low-accuracy models, and that model ensembles with diverse training methodologies yield the best downstream performance. Idrissi et al. (2022) introduced ImageNet-X, which is a set of human annotations pinpointing failure types for the ImageNet (Russakovsky et al., 2015) dataset. ImageNet-X labels distinguishing object factors (e.g. pose, color) for each image in the validation set and a random subset. They found that most models, when trained, fine-tuned, or evaluated on ImageNet, have the same biases.
However, this paper shows different observations on the DG datasets, which will be further discussed in Section 3.3.

## 3 Model Exploration

To effectively leverage diversity within a model zoo, we need to understand the difference between PTMs conditioned on each specific DG task. To accomplish this, we propose analyzing and describing the changes in PTM feature distributions across downstream domains.

### Feature Diversity and Correlation Shifts

Consider a dataset \(\mathcal{D}\) that contains samples collected under multiple domains \(\mathcal{E}\), i.e., \(\mathcal{D}=\{D_{e}\}_{e\in\mathcal{E}}\). \(D_{e}=\{x_{i}^{e},y_{i}^{e}\}_{i=1}^{n^{e}}\) contains instances of random variables \((X,Y)\) that are _i.i.d._ sampled from the probability distribution \(\mathbb{P}^{e}(\mathcal{X}\times\mathcal{Y})\). Consider a PTM that can be viewed as a feature encoder \(\phi:\mathcal{X}\rightarrow\mathcal{Z}_{\phi}\). To understand the behavior of such an encoder between different domains, we are in fact concerned with the difference between the distributions of \((\phi(X),Y)\) on different \(\mathbb{P}^{e},\forall e\in\mathcal{E}\). As \(\mathbb{P}^{e}(\phi(X),Y)=\mathbb{P}^{e}(Y|\phi(X))\mathbb{P}^{e}(\phi(X))\), the variation of \(\mathbb{P}^{e}(\phi(X),Y)\) can be decomposed into the shift of \(\mathbb{P}^{e}(\phi(X))\) and the shift of \(\mathbb{P}^{e}(Y|\phi(X))\), namely the _feature diversity shift_ and the _feature correlation shift_.
In this paper, we use the following two metrics for measuring the diversity shift and correlation shift of \(\phi:\mathbf{x}\mapsto\mathbf{z}\) between a pair of domains \(e,e^{\prime}\), respectively: \[F_{div}(\phi,e,e^{\prime}) =\frac{1}{2}\int_{\mathcal{S}}|p_{e}(\mathbf{z})-p_{e^{\prime}}(\mathbf{z})|\,\mathrm{d}\mathbf{z},\] \[F_{cor}(\phi,e,e^{\prime}) =\frac{1}{2}\int_{\mathcal{T}}\tilde{p}_{e,e^{\prime}}(\mathbf{z})\sum_{y\in\mathcal{Y}}|p_{e}(y|\mathbf{z})-p_{e^{\prime}}(y|\mathbf{z})|\,\mathrm{d}\mathbf{z},\] where \(\tilde{p}_{e,e^{\prime}}\) is a geometric average of \(p_{e}\) and \(p_{e^{\prime}}\). \(\mathcal{S}\) and \(\mathcal{T}\) are partitions of the image set \(\mathcal{Z}_{\phi}\) of \(\phi\) defined as follows: \[\mathcal{S}(\phi,e,e^{\prime}) :=\{\mathbf{z}\in\mathcal{Z}_{\phi}|p_{e}(\mathbf{z})\cdot p_{e^{\prime}}(\mathbf{z})=0\},\] \[\mathcal{T}(\phi,e,e^{\prime}) :=\{\mathbf{z}\in\mathcal{Z}_{\phi}|p_{e}(\mathbf{z})\cdot p_{e^{\prime}}(\mathbf{z})\neq 0\}.\] Intuitively, \(F_{div}\) describes the proportion of values of features \(\phi(\mathbf{x})\) not shared between two domains. \(F_{cor}\) measures how the correlation between the features and the target label changes between domains. Such definitions are similar to those of diversity shift and correlation shift of datasets in OOD-Bench (Ye et al., 2022). Note that the two metrics in this paper are defined for general feature encoders, not the specific encoder \(Z_{2}\) which encodes the latent spurious variable assumed in the data-generating process as in (Ye et al., 2022). By specializing to that encoder, Ye et al. (2022) view the two metrics as a characteristic of the dataset itself. In contrast, we focus on the difference between general encoders on a given dataset. That generality requires an estimation method for the two metrics different from that in (Ye et al., 2022). We further introduce the practical estimation method we propose in Section 3.2.
Relation with OOD performance. For diversity shift, the model's decision on data from the set \(\mathcal{S}\) depends on the classification layer's extrapolation behavior, which is hard to infer with in-distribution data. For correlation shift, it directly causes the change of prediction precision and results in the gap between in-distribution and out-of-distribution performance. As a result, we would prefer a representation with both low diversity and correlation shifts so that the in-distribution training controls the out-of-distribution error. Note that by splitting the data into \(\mathcal{S}\) and \(\mathcal{T}\), we leave out the part that is affected by the classification layer's extrapolation behavior in the correlation shift estimation and the in-domain density shift in the diversity shift estimation. This is the main difference from the scores designed in ZooD.

Figure 1: The distribution of feature diversity and correlation shift scores of 35 PTMs on 5 datasets in DomainBed.

### Practical Estimation

In this section, we show how the two metrics can be computed practically for general latent features of an arbitrary PTM.
Diversity shift. Denote \(\mathcal{S}_{e}(e^{\prime},\phi):=\{\mathbf{z}\in\mathcal{Z}_{\phi}|p_{e}(\mathbf{z})>0,p_{e^{\prime}}(\mathbf{z})=0\}\) and \(\mathcal{S}_{e^{\prime}}(e,\phi):=\{\mathbf{z}\in\mathcal{Z}_{\phi}|p_{e}(\mathbf{z})=0,p_{e^{\prime}}(\mathbf{z})>0\}\). Then \(F_{div}(\phi,e,e^{\prime})\) can be written as \[F_{div}(\phi,e,e^{\prime})=\frac{1}{2}(\mathbb{P}^{e}[\mathcal{S}_{e}(e^{\prime},\phi)]+\mathbb{P}^{e^{\prime}}[\mathcal{S}_{e^{\prime}}(e,\phi)]).\] We design the following empirical estimation of \(\mathbb{P}^{e}[\mathcal{S}_{e}(e^{\prime},\phi)]\): \[\hat{\mathbb{P}}^{e}[\hat{\mathcal{S}}_{e}(e^{\prime},\phi)]:=\hat{\mathbb{P}}^{e}(\{\mathbf{x}\in D_{e}|\hat{p}_{e^{\prime}}(\mathbf{z})<\epsilon_{e^{\prime}},\mathbf{z}=\phi(\mathbf{x})\}).\] Intuitively, we estimate the no-overlap set \(\mathcal{S}_{e}(e^{\prime},\phi)\) using the estimated probability of the instance under the estimated distribution \(\hat{p}_{e^{\prime}}\). When the probability is lower than a given small threshold \(\epsilon_{e^{\prime}}\), the instance is considered to be in the set \(\mathcal{S}_{e}(e^{\prime},\phi)\). The threshold \(\epsilon_{e^{\prime}}\) is estimated by \[\hat{\mathbb{P}}^{e^{\prime}}(\{\mathbf{x}\in V_{e^{\prime}}|\hat{p}_{e^{\prime}}(\mathbf{z})<\epsilon_{e^{\prime}},\mathbf{z}=\phi(\mathbf{x})\})=0.01.\] We approximate \(p_{e}\) with a Gaussian distribution \(\mathcal{N}(\mu_{e},\Sigma_{e})\), and estimate the parameters with empirical statistics on \(D_{e}\). In the same way we can get the estimation of \(\mathbb{P}^{e^{\prime}}[\mathcal{S}_{e^{\prime}}(e,\phi)]\). The empirical diversity metric is then the average of the two estimations.

Correlation shift. For each pair of domains \(e,e^{\prime}\), we have the empirical set \(\hat{\mathcal{T}}(\phi,e,e^{\prime}):=(D_{e}\setminus\hat{S}_{e}(e^{\prime},\phi))\cup(D_{e^{\prime}}\setminus\hat{S}_{e^{\prime}}(e,\phi))\).
Denote \(p_{e,e^{\prime}}=\frac{1}{2}(p_{e}+p_{e^{\prime}})\) and \[\hat{D}_{cor}=\frac{1}{2}\sum_{\mathbf{x}\in\hat{\mathcal{T}}}\hat{p}_{e,e^{\prime}}(\mathbf{x})\sum_{y\in\mathcal{Y}}|\hat{p}_{e}(y|\phi(\mathbf{x}))-\hat{p}_{e^{\prime}}(y|\phi(\mathbf{x}))|.\] As \(D_{e},D_{e^{\prime}}\) are independently sampled, \(\hat{p}_{e,e^{\prime}}(\mathbf{x})\) can be estimated by the empirical distribution, i.e., \(\hat{p}_{e,e^{\prime}}(\mathbf{x})=1/|D_{e}\cup D_{e^{\prime}}|\). To estimate \(\hat{p}_{e}(y|\phi(\mathbf{x}))\), we first get a primary estimation \(\tilde{p}_{e}(y|\phi(\mathbf{x}))\) with the following equation, where the coefficient matrices \((\mathbf{M}_{0},\mathbf{M}_{1},\dots,\mathbf{M}_{|\mathcal{Y}|})\) are estimated by maximizing the empirical evidence as in LogME (You et al., 2021), i.e., \[\tilde{p}_{e}(y|\phi(\mathbf{x})):=m(\mathbf{M}_{0}\phi(\mathbf{x}),\mathbf{M}_{1}\phi(\mathbf{x}),\dots,\mathbf{M}_{|\mathcal{Y}|}\phi(\mathbf{x})),\] where \(m\) denotes the normalization operator. We then calibrate \(\tilde{p}_{e}(y|\phi(\mathbf{x}))\) with the empirical accuracy estimated on \(\hat{\mathcal{T}}(\phi,e,e^{\prime})\) to get the final estimation \(\hat{p}_{e}(y|\phi(\mathbf{x}))\). More details are provided in Appendix A.1.

### Observations

In this section, we present the results of our empirical analysis on the distribution shifts of PTMs for different DG datasets. We quantify these shifts using the metrics previously described and discuss the various patterns observed. We conduct experiments on five domain generalization benchmarks: PACS (Li et al., 2017), VLCS (Fang et al., 2013), Office-Home (Venkateswara et al., 2017), TerraIncognita (Beery et al., 2018), DomainNet (Peng et al., 2019). According to (Ye et al., 2022), PACS, OfficeHome, and TerraIncognita all only encounter diversity shifts, while DomainNet shows both diversity and correlation shifts.
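Both estimators can be sketched in a few lines. The sketch below follows the Gaussian-density thresholding described above for the diversity shift; for the correlation shift, it substitutes per-domain logistic-regression heads for the paper's LogME-style evidence-maximized estimates and skips the calibration and no-overlap filtering steps, so it is an illustrative simplification under those stated assumptions rather than the paper's exact procedure (all function names are ours).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def _gaussian_logpdf(z, mu, sigma):
    """Log-density of N(mu, sigma) evaluated at the rows of z."""
    d = mu.size
    L = np.linalg.cholesky(sigma)
    diff = np.linalg.solve(L, (z - mu).T)  # whitened residuals, shape (d, n)
    return -0.5 * (d * np.log(2 * np.pi)
                   + 2 * np.sum(np.log(np.diag(L)))
                   + np.sum(diff ** 2, axis=0))


def _no_overlap_fraction(z_query, z_fit, z_fit_val, q=0.01, reg=1e-3):
    """Estimate P^e[S_e(e', phi)]: the share of query features whose density
    under the Gaussian fitted to the other domain falls below the threshold
    eps, chosen as the q-quantile of that density on held-out data.  Log
    densities are monotone in density, so thresholding logpdf is equivalent."""
    mu = z_fit.mean(axis=0)
    sigma = np.cov(z_fit, rowvar=False) + reg * np.eye(z_fit.shape[1])
    eps = np.quantile(_gaussian_logpdf(z_fit_val, mu, sigma), q)
    return float(np.mean(_gaussian_logpdf(z_query, mu, sigma) < eps))


def f_div(z_e, z_e_val, z_ep, z_ep_val):
    """Empirical feature diversity shift between domains e and e'."""
    return 0.5 * (_no_overlap_fraction(z_e, z_ep, z_ep_val)
                  + _no_overlap_fraction(z_ep, z_e, z_e_val))


def f_cor(z_e, y_e, z_ep, y_ep):
    """Simplified feature correlation shift: logistic-regression heads stand
    in for the evidence-maximized estimates of p(y|z), and all samples are
    treated as part of the overlap set (both are simplifying assumptions).
    Assumes every class appears in both domains so probability columns align."""
    clf_e = LogisticRegression(max_iter=1000).fit(z_e, y_e)
    clf_ep = LogisticRegression(max_iter=1000).fit(z_ep, y_ep)
    z = np.vstack([z_e, z_ep])  # empirical p(x) = 1/|D_e ∪ D_e'|
    p_e, p_ep = clf_e.predict_proba(z), clf_ep.predict_proba(z)
    return 0.5 * float(np.mean(np.abs(p_e - p_ep).sum(axis=1)))
```

With well-separated feature clouds `f_div` approaches 1, while two samples of the same distribution give a value near the 1% threshold baseline; `f_cor` grows when the label-feature correlation flips between the two domains.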
We adopt the model zoo constructed in (Dong et al., 2022), which consists of 35 PTMs with diverse architectures, pre-training methods, and pre-training datasets. The two shift scores for each model are the average of the two metrics in Section 3.1 computed on each pair of domains in the dataset. More details are provided in Appendix A.2. The primary findings in this section are as follows.

* Within a specific DG dataset, the shift patterns of PTMs exhibit substantial diversity.
* The architectural diversity contributes to distinct shift patterns, and their interrelationships tend to maintain consistency across datasets.
* The influence of pretraining frameworks on shift behavior is noteworthy. Particularly, self-supervised learning leads to relatively higher feature diversity shifts.
* An increase in the size of the pretraining data results in a decrease in the feature correlation shift.

We introduce those findings in detail in the following paragraphs.

Different shift patterns of PTMs on the datasets. As shown in (Ye et al., 2022), different datasets exhibit different trends of shifts. A natural question is how the distribution shift of data interacts with the shift in the feature space of a PTM. The observations in this section show that the shift patterns of PTMs can have a great variety within a given DG dataset. Specifically, we compute the average shift metric scores between domain pairs on each dataset. The results are shown in Figure 1. On TerraIncognita, the diversity shift of models varies from 0.21 to 0.89. Notably, some PTMs encounter significant correlation shifts on TerraIncognita, which is different from the dataset correlation shift shown in (Ye et al., 2022). We further compare the results within the following 3 groups of models to show the effect of architectures, training frameworks, and datasets on shift behavior. The details of the 3 groups are introduced in Appendix A.2.
Architectures. We compare models with different architectures but pre-trained with the same framework on the same dataset. As shown in Figure 2, when comparing PTMs pretrained under the ERM framework on ImageNet-1K (Russakovsky et al., 2015a), we found that the variation of architectures resulted in a wide range of shift patterns. It can be observed that across different datasets, ResNet-152 generally exhibits a larger diversity shift compared to ResNet-50, and a smaller correlation shift. Additionally, after fine-tuning, ResNet-152 achieves higher OOD accuracy than ResNet-50. These findings suggest an interesting observation that while ResNet-152 captures domain-specific features, they do not result in a geometric skew (Nagarajan et al., 2021).

Pretraining frameworks. Consistent with the findings above, self-supervised models such as PIRL (Misra and van der Maaten, 2020) and InsDis (Wu et al., 2018) usually have the most significant diversity shifts and worse OOD performance on these datasets (Dong et al., 2022).

Datasets. To demonstrate the impact of dataset size on the distribution shifts of PTMs, we compare the performance of Swin transformers (Liu et al., 2021) pretrained on ImageNet-1K and both ImageNet-1K and ImageNet-22K (Russakovsky et al., 2015), as shown in Figure 4. It indicates that the use of larger pretraining data results in a significant decrease in correlation shift, which may be attributed to the increased complexity of the supervised pretraining tasks.

## 4 Model Zoo Exploitation

In this section, we demonstrate how the characteristic of diversity in models can be employed to enhance the domain generalization performance of strong models. In the previous section, we established that models exhibit two distinct types of shift patterns. Our observations indicate that some PTMs are dominated by one type of shift, for example, PIRL on TerraIncognita. This insight inspires the design of an ensemble algorithm that addresses the two dimensions of feature shifts.
By leveraging two auxiliary models that are dominated by the two shifts respectively, we design corresponding algorithms to resolve the specific shifts.

### Diversity Ensemble Method

To prevent potential failure caused by the diversity shift, we utilize the auxiliary model which encodes features that encounter significant diversity shifts. We propose to require the prediction of the main model to be independent of those features, thus mitigating the effect of diversity shift on the predictor. To constrain the independence, we adopt a differentiable independence measure, the Hilbert-Schmidt independence criterion (HSIC) (Gretton et al., 2007). The idea of using HSIC is inspired by the algorithm proposed in (Bahng et al., 2020), where HSIC is used for penalizing the dependency between the predictions of the main model and multiple biased models. Formally, denote \(Z_{l}=l_{m}\circ f_{M}(X)\), where \(l_{m}:\mathcal{Z}_{M}\rightarrow\mathcal{Z}_{l}\) is the classifier on the top of the main model \(f_{M}:\mathcal{X}\rightarrow\mathcal{Z}_{M}\). Denote \(Z_{d}=f_{d}(X)\), where \(f_{d}:\mathcal{X}\rightarrow\mathcal{Z}_{d}\) is the diversity auxiliary model. Our target is then to constrain the dependency between \(Z_{l}\) and \(Z_{d}\). Denote \(k\) as a kernel function on \(\mathcal{Z}_{d}\times\mathcal{Z}_{d}\), and \(l\) as a kernel function on \(\mathcal{Z}_{l}\times\mathcal{Z}_{l}\).
The HSIC statistic between the main model \(f_{M}\) and the auxiliary model \(f_{d}\) is defined as follows: \[\text{HSIC}(f_{M},f_{d}):=\mathbb{E}\left[k\left(Z_{d},Z_{d}^{\prime}\right)l\left(Z_{l},Z_{l}^{\prime}\right)\right]+\mathbb{E}\left[k\left(Z_{d},Z_{d}^{\prime}\right)\right]\mathbb{E}\left[l\left(Z_{l},Z_{l}^{\prime}\right)\right]-2\mathbb{E}\left[\mathbb{E}_{Z_{d}^{\prime}}\left[k\left(Z_{d},Z_{d}^{\prime}\right)\right]\mathbb{E}_{Z_{l}^{\prime}}\left[l\left(Z_{l},Z_{l}^{\prime}\right)\right]\right].\] Instead of the unbiased estimator in (Bahng et al., 2020), we use the biased empirical estimate \(\text{HSIC}_{b}\) (Gretton et al., 2007): \[\text{HSIC}_{b}(f_{M},f_{d}):=\frac{1}{m^{2}}\operatorname{trace}(\mathbf{KHLH}),\] where we suppose the sample size is \(m\), \(\mathbf{K}\) denotes the \(m\times m\) matrix with entries \(k_{ij}:=k(f_{d}(x_{i}),f_{d}(x_{j}))\), and \(\mathbf{L}\) denotes the \(m\times m\) matrix with entries \(l_{ij}:=l(l_{m}\circ f_{M}(x_{i}),l_{m}\circ f_{M}(x_{j}))\). \(\mathbf{H}=\mathbf{I}-\frac{1}{m}\mathbf{1}\mathbf{1}^{T}\), where \(\mathbf{1}\) is an \(m\times 1\) vector of ones. The final training objective of the main model writes as follows: \[\mathcal{L}(f_{M}):=\min_{f_{M}}\mathbb{E}_{X,Y\sim\mathbb{P}_{\text{tr}}}[\mathcal{L}_{c}(Y,f_{M}(X))+\lambda\,\text{HSIC}_{b}(f_{M},f_{d})].\] In our implementation, we use the Gaussian kernels \(l(z,z^{\prime})=\exp(-\gamma_{1}\|z-z^{\prime}\|^{2})\) and \(k(z,z^{\prime})=\exp(-\gamma_{2}\|z-z^{\prime}\|^{2})\). To mitigate the effect of the dimension, we rescale \(\gamma_{1}\) and \(\gamma_{2}\) by dividing by the dimension of the representation \(z\) in the calculation. Following methods in the invariant learning literature (Chen et al., 2022), we introduce an additional hyperparameter \(N_{\text{warm-up}}\) which controls the number of warm-up steps before the HSIC penalty is added to the loss.
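As a concrete reference, the biased estimator amounts to a single matrix expression. The NumPy sketch below (helper names are ours, and it is framework-agnostic rather than the paper's training code) computes \(\text{HSIC}_{b}\) with the dimension-rescaled Gaussian kernels described above; in training, the same expression would be evaluated on a mini-batch with differentiable tensors so its gradient can flow into \(f_{M}\).

```python
import numpy as np


def gaussian_gram(z, gamma):
    """Gram matrix of exp(-(gamma/d) * ||z_i - z_j||^2); gamma is divided by
    the feature dimension d, matching the rescaling described in the text."""
    sq = np.sum(z ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * z @ z.T, 0.0)
    return np.exp(-(gamma / z.shape[1]) * d2)


def hsic_b(z_d, z_l, gamma1=1.0, gamma2=1.0):
    """Biased HSIC estimate trace(K H L H) / m^2 between the auxiliary
    features z_d and the main model's outputs z_l (Gretton et al., 2007)."""
    m = z_d.shape[0]
    K = gaussian_gram(z_d, gamma2)          # kernel k on auxiliary features
    L = gaussian_gram(z_l, gamma1)          # kernel l on main-model outputs
    H = np.eye(m) - np.ones((m, m)) / m     # centering matrix
    return float(np.trace(K @ H @ L @ H)) / m ** 2
```

The estimate is non-negative (a trace of a product of positive semi-definite matrices) and shrinks toward zero for independent inputs as the sample size grows, which is what makes it usable as a penalty term.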
### Correlation Ensemble Method

To prevent potential failure caused by the correlation shift, we adopt the auxiliary model which encodes features that encounter significant correlation shifts. In this module, we reweight training instances to weaken the correlation between the features and the target labels. By that, we prevent the predictor from skewing to that unstable correlation across domains. Specifically, denote the auxiliary model as \(f_{c}\) and its uncertainty output for instance \(\mathbf{x}\) as \(\mathbf{p}_{c}(\mathbf{x})\). We follow the classical strategy which has been proven effective in the debias literature (Xiong et al., 2021) to reweight the instance loss with \[w_{c}(\mathbf{x},y)=p(y)/p_{c}(\mathbf{x})_{y},\] where \(p_{c}(\mathbf{x})_{y}\) is the \(y\)-th component of \(\mathbf{p}_{c}(\mathbf{x})\). During training steps, the weights in each batch are smoothed with a hyperparameter \(T\) and normalized (Yi et al., 2021). The loss on a batch \(\mathcal{B}\) is then \[\mathcal{L}_{\mathcal{B}}(f_{M}):=\frac{1}{|\mathcal{B}|}\sum_{(\mathbf{x},y)\in\mathcal{B}}m(\frac{p(y)}{p_{c}(\mathbf{x})_{y}\cdot T})\mathcal{L}_{c}(y,f_{M}(\mathbf{x})),\] where \(m\) denotes the normalization operation over samples in \(\mathcal{B}\). We introduce an additional hyperparameter \(N_{\text{anneal}}\) which controls the number of annealing steps where \(T\) is infinitely large, i.e., before the adjusted weights are attached to the samples.

Figure 4: Results of Swin transformers (Liu et al., 2021) pretrained on ImageNet-1K and both ImageNet-1K and ImageNet-22K on 5 datasets.
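A minimal sketch of the reweighting on one batch. We read "smoothing with \(T\)" as raising the weights to the power \(1/T\) before normalization, so that \(T\to\infty\) recovers uniform weights, matching the annealing phase; this reading, like the function name, is our assumption rather than the paper's released code, and the \(1/|\mathcal{B}|\) factor is absorbed into the normalized weights.

```python
import numpy as np


def reweighted_batch_loss(losses, p_y, p_c_y, T=1.0):
    """Correlation-ensemble loss on one batch (sketch under assumptions above).

    losses : per-sample losses L_c(y_i, f_M(x_i))
    p_y    : marginal probability p(y_i) of each sample's true label
    p_c_y  : auxiliary model's probability p_c(x_i)_{y_i} for the true label
    T      : smoothing temperature; T -> infinity yields uniform weights
    """
    w = (p_y / p_c_y) ** (1.0 / T)  # smoothed importance weights
    w = w / w.sum()                 # normalization over the batch
    return float(np.sum(w * losses))
```

At \(T=1\) a sample whose true label the auxiliary model assigns low probability (an atypical, correlation-breaking sample) receives a larger weight, which is exactly the sub-population balancing the module aims for.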
\begin{table} \begin{tabular}{l|c c c c c|c} \hline \hline **Method** & **PACS** & **VLCS** & **OfficeHome** & **TerraInc.** & **DomainNet** & **Avg** \\ \hline ERM\({}^{\dagger}\) & 85.5 & 77.5 & 66.5 & 46.1 & 40.9 & 63.3 \\ IRM\({}^{\dagger}\) & 83.5 & 78.6 & 64.3 & 47.6 & 33.9 & 61.6 \\ GroupDRO\({}^{\dagger}\) & 84.4 & 76.7 & 66.0 & 43.2 & 33.3 & 60.7 \\ I-Mixup\({}^{\dagger}\) & 84.6 & 77.4 & 68.1 & 47.9 & 39.2 & 63.4 \\ MMD\({}^{\dagger}\) & 84.7 & 77.5 & 66.4 & 42.2 & 23.4 & 58.8 \\ SagNet\({}^{\dagger}\) & 86.3 & 77.8 & 68.1 & 48.6 & 40.3 & 64.2 \\ ARM\({}^{\dagger}\) & 85.1 & 77.6 & 64.8 & 45.5 & 35.5 & 61.7 \\ VREx\({}^{\dagger}\) & 84.9 & 78.3 & 66.4 & 46.4 & 33.6 & 61.9 \\ RSC\({}^{\dagger}\) & 85.2 & 77.1 & 65.5 & 46.6 & 38.9 & 62.7 \\ SWAD & 88.1 & 79.1 & 70.6 & 50.0 & 46.5 & 66.9 \\ \hline & \multicolumn{6}{c}{ZooD} \\ \hline Single\({}^{*}\) & 96.0 & 79.5 & 84.6 & 37.3 & 48.2 & 69.1 \\ Ensemble\({}^{*}\) & 95.5 & 80.1 & 85.0 & 38.2 & 50.5 & 69.9 \\ F. Selection\({}^{*}\) & 96.3 & 80.6 & 85.1 & 42.3 & **50.6** & 71.0 \\ \hline & \multicolumn{6}{c}{Ours} \\ \hline Single + Rew & 96.3 & 81.2 & 84.0 & 52.0 & 48.2 & 72.3 \\ + HSIC & **96.7** & **81.5** & 85.2 & 52.3 & 49.2 & 72.8 \\ + Both & **96.7** & 81.4 & **85.3** & **53.0** & 49.2 & **73.1** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of test domain accuracy between our method and SOTA OOD methods. The results of SWAD are from (Cha et al., 2021), and results denoted with \(\dagger\) are from (Gulrajani and Lopez-Paz, 2021). The results of three versions of ZooD are from (Dong et al., 2022) (denoted with \(*\)). Our results are the average of three trials. 
\begin{table} \begin{tabular}{l|c c c c c} \hline \hline **Datasets** & **PACS** & **VLCS** & **OfficeHome** & **TerraInc.** & **DomainNet** \\ \hline Main model & CLIP-ViT & CLIP-ViT & Swin-B-22 & Swin-B-22 & ResNet-101 \\ \(F_{div}\) & 0.37 & 0.36 & 0.11 & 0.71 & 0.35 \\ \(F_{cor}\) & 0.05 & 0.11 & 0.19 & 0.11 & 0.41 \\ \hline HSIC aux. & ResNet50-ss & ResNet50-InsDis & ResNet50-InsDis & ResNet50-PIRL & ViT-B \\ \(F_{div}\) & 0.65 & 0.33 & 0.17 & 0.89 & 0.47 \\ \(F_{cor}\) & 0.10 & 0.21 & 0.57 & 0.05 & 0.22 \\ \hline Rew. aux. & BEiT-base & BEiT-base & deepcluster-v2 & inception-v3 & ResNet50-sws \\ \(F_{div}\) & 0.33 & 0.06 & 0.09 & 0.21 & 0.45 \\ \(F_{cor}\) & 0.38 & 0.29 & 0.52 & 0.47 & 0.40 \\ \hline \hline \end{tabular} \end{table} Table 2: Main and auxiliary models. HSIC aux. denotes the auxiliary models that are dominated by the diversity shift and adopted for computing the HSIC constraint. Rew. aux. denotes the auxiliary models that are dominated by the correlation shift. The metric values are averaged over each pair of domains in the dataset. Details of the model configuration are provided in Appendix B.1. ## 5 Experiments We conduct experiments on domain generalization benchmarks to evaluate the effectiveness of our proposed zoo-exploiting method. Our results demonstrate that it consistently outperforms single top models and improves the performance of top model ensembles, highlighting the benefits of exploiting model diversity. Additionally, we analyze the correlation between OOD accuracy and the feature diversity and correlation shifts of the fine-tuned classifiers. ### Experiment Settings Datasets. We conduct experiments on five domain generalization benchmarks: PACS (Li et al., 2017), VLCS (Fang et al., 2013), OfficeHome (Venkateswara et al., 2017), TerraIncognita (Beery et al., 2018), and DomainNet (Peng et al., 2019). 
During training on each dataset, one of the domains is chosen as the target domain and the remaining ones are the training domains, where 20% of the samples are used for validation and model selection. The final test accuracy on the dataset is the mean of the test results on each target domain. Baselines. We compare the proposed algorithm with previous SOTA OOD methods and three versions of ZooD, including 1) _Single_: fine-tune the top-1 model ranked by ZooD; 2) _Ensemble_: fine-tune an ensemble of the top-K models; 3) _F. Selection_: fine-tune an ensemble of the top-K models with feature selection, which is the expected result of using ZooD. Our algorithm also has three versions: 1) **Single+Rew**: fine-tune the top-1 model ranked by ZooD with the reweighting auxiliary; 2) **Single+HSIC**: fine-tune the top-1 model with the HSIC auxiliary; 3) **Single+Both**: fine-tune the top-1 model with both auxiliaries. Configurations. We follow the setting of ZooD to construct a model zoo consisting of 35 PTMs. As discussed in Section 3.3, these models vary in architectures, pretraining methods, and datasets. For the auxiliary models, we select models that are extreme on one shift metric. For the main model, we use the top-1 model ranked by ZooD. The detailed statistics of the selected auxiliary models and main models are shown in Table 2. We use a 3-layer MLP as the prediction head on top of the main model and fine-tune it on the downstream tasks. Following ZooD, we adopt the leave-one-domain-out cross-validation setup in DomainBed for hyper-parameter selection and run 3 trials. More details on the experimental setup are in Appendix B.1. The gains from reweighting are smaller on the OfficeHome and DomainNet datasets; this may be due to the limited effectiveness of the reweighting strategy when the number of classes is large (65 and 345). Previous literature has only validated its success on tasks with a number of classes lower than 10 (Xiong et al., 2021). To further interpret the results, we analyze the shift pattern of the main predictor. 
Table 4 shows a comparison of the shift scores of the last-layer features (logits) of the main predictor. The results are obtained using the following hyperparameter set: \(\lambda=100,N_{\text{warm-up}}=500,\gamma_{1}=0.5,\gamma_{2}=0.25\), \(T=1,N_{\text{anneal}}=2000\). As expected, compared to the results obtained using ERM, HSIC and Rew. lead to a decrease in \(F_{div}\) and \(F_{cor}\), respectively. The results obtained using both modules show a compromise between the two. It is worth noting that the use of HSIC on the VLCS dataset already leads to a significant decrease in \(F_{cor}\), which can explain the result in Table 1 where incorporating the reweighting module (+Both) does not further improve on the results of HSIC. ## 6 Conclusion In this work, we have presented a novel approach for utilizing the diverse knowledge present in a model zoo for domain generalization tasks. The main takeaway findings of this study are two-fold. First, it emphasizes that even the most powerful models have the potential for further enhancement on downstream DG tasks. Second, it illustrates that the enhancements do not come solely from powerful models but rather from a combination of models with diverse characteristics; a weak model can also contribute to the enhancement of an already strong model. This highlights the importance of maintaining a diverse zoo of pretrained models for the community. It is worth emphasizing that our proposed profiling method is general and can be applied to other tasks and domains, making it an interesting avenue for further research. Overall, this work provides a new perspective on how to better utilize the diverse knowledge in a model zoo and opens up new possibilities for improving performance on out-of-distribution tasks.
2303.07233
Impact of polyelectrolyte adsorption on the rheology of concentrated Poly(N-Isopropylacrylamide) microgel suspensions
We explore the impact of three water-soluble polyelectrolytes (PEs) on the flow of concentrated suspensions of poly(N-isopropylacrylamide) (PNIPAm) microgels with thermoresponsive anionic charge density. By progressively adding the PEs to a jammed suspension of swollen microgels, we show that the rheology of the mixtures is remarkably influenced by the sign of the PE charge, PE concentration and hydrophobicity only when the temperature is raised above the microgel volume phase transition temperature $T_c$, namely when microgels collapse, they are partially hydrophobic and form a volume-spanning colloidal gel. We find that the original gel is strengthened close to the isoelectric point, attained when microgels are mixed with cationic PEs, while PE hydrophobicity rules the gel strengthening at very high PE concentrations. Surprisingly, we find that polyelectrolyte adsorption or partial embedding of PE chains inside the microgel periphery occurs also when anionic polymers of polystyrene sulfonate with high degree of sulfonation are added. This gives rise to colloidal stabilization and to the melting of the original gel network above $T_c$. Contrastingly, the presence of polyelectrolytes in suspensions of swollen, jammed microgels results in a weak softening of the original repulsive glass, even when an apparent isoelectric condition is met. Our study puts forward the crucial role of electrostatics in thermosensitive microgels, unveiling an exciting new way to tailor the flow of these soft colloids and highlighting a largely unexplored path to engineer soft colloidal mixtures.
Rajam Elancheliyan, Edouard Chauveau, Domenico Truzzolillo
2023-03-13T16:06:40Z
http://arxiv.org/abs/2303.07233v2
Impact of polyelectrolyte adsorption on the rheology of concentrated Poly(N-Isopropylacrylamide) microgel suspensions ###### Abstract We explore the impact of three water-soluble polyelectrolytes (PEs) on the flow of concentrated suspensions of poly(N-isopropylacrylamide) microgels with thermoresponsive anionic charge density. By progressively adding the PEs to a jammed suspension of swollen microgels, we show that the rheology of the mixtures is remarkably influenced by the sign of the PE charge, PE concentration and hydrophobicity only when the temperature is raised above the microgel volume phase transition temperature \(T_{\mathrm{c}}\), namely when microgels collapse, they are partially hydrophobic and form a volume-spanning colloidal gel. We find that the original gel is strengthened close to the isoelectric point, attained when microgels are mixed with cationic PEs, while PE hydrophobicity rules the gel strengthening at very high PE concentrations. Surprisingly, we find that polyelectrolyte adsorption or partial embedding of PE chains inside the microgel periphery occurs also when anionic polymers of polystyrene sulfonate with high degree of sulfonation are added. This gives rise to colloidal stabilization and to the melting of the original gel network above \(T_{\mathrm{c}}\). Contrastingly, the presence of polyelectrolytes in suspensions of swollen, jammed microgels results in a weak softening of the original repulsive glass, even when an apparent isoelectric condition is met. Our study puts forward the crucial role of electrostatics in thermosensitive microgels, unveiling an exciting new way to tailor the flow of these soft colloids and highlighting a largely unexplored path to engineer soft colloidal mixtures. ## 1 Introduction Colloid-polymer mixtures represent an ever-present paradigm for manipulation of the microscopic dynamics and the flow properties of soft matter. 
They offer unique opportunities for addressing challenges of central interest in the field of the glass transition and the gelation of colloids. Mixtures of neutral sub-micrometer particles and non-adsorbing linear polymers have been investigated in the last 20 years [1, 2, 3], showing that the rheology of colloidal suspensions can be drastically changed by depletion interactions of purely entropic nature [4]. By contrast, the effect of adsorbing polymers on the flow properties of colloids is much less explored. One emblematic case is that of mixtures of charged colloids and polyelectrolytes, two types of macroions that are ubiquitous in nature. Polyelectrolytes (PEs) are charged polymers whose monomer segments bear an electrolyte group that dissociates in polar solvents. For this reason, PEs are very sensitive to any electrostatic field and to the presence of other ionic species. They are widely employed to modify the properties of colloidal particle suspensions, membranes and solid surfaces [5, 6], and they are used in industry for many purposes, ranging from water remediation to mineral separation and to the control of the rheology of particle slurries and pastes [7, 8, 9, 10]. PEs are also often employed to create protective and/or functional coatings [11, 12], oppositely charged multilayers [13, 14] or brushes, by grafting or by adsorption of block copolymers [15, 16]. These coatings are employed, for example, to regulate surface properties, including wetting, lubrication and adhesion [17, 18]. The properties of colloid surfaces can therefore be remarkably changed by PE addition, and understanding the relationship between PE adsorption, particle interactions, and the stability of the resulting mixtures is crucial for the future development of both polyelectrolyte additives and novel, soft and tunable materials. 
The latter have attracted the attention of a large part of the scientific community working in materials science, and soft colloids have proved to be excellent constituents for their conception and design. Soft colloids like star polymers, microgels, micelles and vesicles, whose structure can be tailored at the molecular level [3], are indeed model building blocks for materials with adjustable rheology and microscopic dynamics. Among them, microgels made of poly(N-isopropylacrylamide) (PNIPAm) are particularly interesting and intensely investigated because they undergo a volume phase transition (VPT) at ambient temperature: below \(T_{\mathrm{c}}\approx 32^{\circ}C\) these microscopic crosslinked networks are fully hydrated and swollen, while above \(T_{\mathrm{c}}\) they collapse due to their increased hydrophobicity. PNIPAm microgels can be synthesized using standard emulsion polymerization in aqueous media [19], and since their first synthesis[20] they have shown intriguing electrostatic properties due to the presence of the ionic initiator used to promote chain polymerization. A drastic increase of the microgel electrophoretic mobility has been reported for \(T>T_{c}\), as a consequence of the large increase of their charge density driven by the particle collapse. This has raised many questions on the adsorbing power of PNIPAm microgels, especially when they are co-suspended with other charged species. In this respect, some of the authors[21] have recently pointed out the "double-faced" electrostatic behavior of PNIPAm microgels in aqueous media. On the one hand, when microgels are swollen and oppositely charged polyelectrolytes or nanoparticles (NPs) are progressively added, cluster formation does not occur over a wide range of PE or NP concentrations until, eventually, salting out or other non-electrostatic effects destabilize the suspensions. 
On the other hand, when microgels are in their collapsed state, they strongly interact with oppositely charged PEs or NPs: a large and sharp mobility inversion occurs and large clusters form close to the isoelectric point, i.e., where the mobility of PE- or NP-microgel complexes is zero. This two-fold nature of PNIPAm microgels paves the way towards a temperature-sensitive complexation with charged polymers that might impact many potential applications, including the controlled formation of micro-caps[22, 23] and membranes[24], gene delivery[25] and water-treatment protocols[7], and that might also drastically change the rheology of fluid-fluid interfaces[26]. However, while the effect of polyelectrolytes[27, 28, 29, 30, 31] and simple ions[32, 33] on PNIPAm-based microgels has been very well detailed in the literature, and both the rheology and the microscopic dynamics of concentrated bare microgel suspensions have been thoroughly investigated[34, 35, 36], the effect of soluble PE addition in dense microgel systems and the role played by electrostatic and non-electrostatic adsorption still remain unknown. Only very recently have mixtures of concentrated PNIPAm microgels and non-ionic surfactants been investigated, unveiling a very rich phase diagram[37] and leaving open the question of whether electrostatics has an important impact on the dynamics of the mixtures, especially at high temperatures where these colloids become densely charged. In this work we elucidate this aspect by studying the effect of three known polyelectrolytes on the rheology of concentrated microgel suspensions, both below and above their critical temperature. We separately added two cationic PEs and one anionic PE with comparable molecular weights to suspensions of anionic PNIPAm microgels and studied their linear rheology. The PEs have been chosen to vary both the sign of their charge and their hydrophobicity. 
We show that, while jammed suspensions of swollen microgels are weakly affected by the presence of PEs even when an apparent charge neutralization occurs, the rheology of collapsed and hydrophobic microgels is dramatically affected by PE addition, and that both PE charge and hydrophobicity are important for the rheology of the mixtures. Electrophoresis and transmittance measurements allowed us to relate a large enhancement of the gel elasticity to the concomitant charge inversion and reentrant condensation of microgels occurring in diluted suspensions. The rest of the work is organized as follows. In Section 2 we present the materials employed, and we detail the microgel synthesis and the techniques used to investigate PE-microgel mixtures. In Section 3 we first present the results of a preliminary characterization of the pure polymers (microgels and PEs) via electrophoresis, light scattering, transmittance and rheology experiments. We then discuss the rheology of the mixtures and the complementary electrophoretic and transmittance experiments that allowed us to rationalize our results. Finally, in Section 4 we make some concluding remarks, summarize the key results and put forward the perspectives of our work. ## 2 Materials and methods ### Microgel synthesis Poly(N-isopropylacrylamide) microgels were synthesized via emulsion polymerization[38]. 150 mL of ultra-pure water was introduced into a 250 mL three-necked flask, and degassing was carried out using vacuum/argon cycles. Vacuum was achieved by a vane pump and the argon/vacuum sequence was repeated 6 times. Finally, argon was bubbled for 15 min. After having completed the degassing, 3 to 4 mL of the (degassed) water was withdrawn via a syringe to dissolve 29.8 mg of the initiator (potassium peroxodisulfate, KPS - purchased from Sigma Aldrich and used without further purification) that was added at a later stage. 
Once the bubbling was stopped and the mechanical agitation was set up (via a Teflon rotating anchor), the two side necks were plugged. At this stage, 27.08 mg of sodium dodecyl sulfate (SDS - purchased from Sigma Aldrich and used without further purification) was added to the flask right before the solution was heated up to the desired temperature (\(T_{s}=70\pm 1\ ^{\circ}C\)). Once the target temperature was attained, 1.25 g of N-isopropylacrylamide (NIPAm) (from Sigma Aldrich, used without further purification) and 91.96 mg of N,N\({}^{\prime}\)-methylene-bis-acrylamide (BIS) (from Sigma Aldrich, used without further purification) were introduced into the three-necked flask. During the heating ramp, the initiator (KPS) was dissolved in 4 mL of deionized and degassed water, and it was injected slowly by hand once the temperature of the batch reached \(T_{s}\). The mixture was left under stirring at \(T_{s}\) for 6 hours. The polymerization terminated spontaneously. All samples have been purified via three consecutive centrifugation/supernatant-removal cycles[39, 40], and 19.5 mg (2 mM) of sodium azide has been added to prevent bacterial growth. After the purification step, the microgel suspensions were centrifuged to get a final microgel volume fraction \(\varphi=1.57\), where \(\varphi\) is the generalized volume fraction measured via rolling-ball viscosimetry (section 2.4). We further diluted an aliquot of this sample to study the rheological behavior of pure PNIPAm suspensions at varying \(\varphi\) and to subsequently prepare PE-microgel mixtures at fixed microgel volume fraction. ### Polyelectrolytes (PEs) Cationic poly-(l-lysine hydrobromide) (PLL) (Mw=50 kDa) and anionic polystyrene sulfonate sodium salt (PSS) (Mw=43 kDa) were purchased from Polymer Source, Inc. (Canada). Cationic polydiallyldimethylammonium chloride (PDADMAC) (Mw\(<\)100 kDa) was purchased from Sigma Aldrich (Merck KGaA, USA). All PEs were used without further purification. 
They were dissolved in deionized, salt-free water at varying concentrations and successively mixed with the microgel suspensions. The structure formulae of the three repeating units of the PEs are shown in Figure 1. The three polymers are characterized by different persistence lengths \(l_{p}\) (stiffness) and hydrophobicity. In particular, \(l_{p}\) = 0.3 nm for PSS [41], \(l_{p}\) = 1 nm for PLL [42] and \(l_{p}\) = 2.7 nm for PDADMAC [43]. PLL is a weak polyelectrolyte [44] and the most hydrophobic polymer among those employed here, since each lysine bears a hydrophobic methylene side chain that is responsible for chain association at high concentrations [45, 46] and for a tendency to penetrate into lipid membranes [47]. Its degree of protonation depends on the pH, whose variation, however, stays very limited in M-PLL mixtures (see section 2.3). PSS is a strong polyelectrolyte whose hydrophobicity depends on its degree of sulfonation, which in our case is high (90 %) but not complete. We therefore expect possible residual hydrophobic interactions between PSS chains and microgels when the latter are in their collapsed state. Finally, PDADMAC is a strong polyelectrolyte and the most hydrophilic polymer [48] among those used in this work to investigate the rheology of PE-microgel mixtures, since it has neither large hydrophobic side chains, as PLL does, nor neutral hydrophobic monomers on the backbone, as PSS does. ### PE-Microgel suspensions Mixtures of polyelectrolytes and microgels, hereafter called PE-microgel mixtures and coded as M-PLL, M-PSS and M-PDADMAC, were prepared following the same protocol for both anionic (PSS) and cationic (PLL, PDADMAC) PEs: 23.33 \(\mu\)L of PE solution at the required concentration was added to 500 \(\mu\)L of microgel suspension at T=20 \({}^{\circ}\)C with generalized volume fraction \(\varphi\) = 1.57. This protocol allows us to obtain mixtures with a fixed microgel volume fraction \(\varphi\) = 1.5 and different concentrations of PEs. 
The mixtures were stirred for about 2 min using a vortex mixer, and the resulting suspensions were then used for rheology measurements at T=20 \({}^{\circ}\)C and T=40 \({}^{\circ}\)C. Figure 2 sketches the protocol, including the mixing and the successive heating of the samples in the rheometer geometry. We quantify the amount of PE contained in each suspension via the nominal polyelectrolyte-to-KPS monomolar ratio \(\xi\), where KPS is considered fully reacted during the synthesis. This is supported by the barely measurable weight of the residual mass (\(m_{r}<\)0.1 mg) present after drying the supernatant extracted after each centrifugation. The monomolar ratio \(\xi\) reads: \[\xi=\frac{C_{PE}\cdot M_{w}^{KPS}}{2\cdot C_{KPS}\cdot M_{w}^{PE}} \tag{1}\] where \(C_{PE}\) and \(C_{KPS}\) are the concentrations (mg/mL) and \(M_{w}^{PE}\) and \(M_{w}^{KPS}\) the molecular weights of the PE monomers and of the initiator, respectively. One would expect the isoelectric point, where all the charges due to the anchored KPS are neutralized by the cationic PEs, to lie close to \(\xi=1\). However, we anticipate here that this condition will not be fulfilled, since not all the initiator molecules participate in building up the net charge of microgels [49]. The pH of the concentrated suspensions has been monitored, showing in all cases only a weak dependence on the PE concentration: 5.5\(<\)pH\(<\)6.0 for M-PLL mixtures, 5.3\(<\)pH\(<\)6.0 for M-PDADMAC mixtures and 6.0\(<\)pH\(<\)6.5 for M-PSS mixtures. In this range of pH we do not expect any drastic change in PNIPAm microgel properties [50, 51]. Most of the mixtures have been further diluted 250 times and then used to measure the electrophoretic mobility of the diluted complexes and the fraction of the incident light transmitted through each sample at different temperatures. 
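Eq. (1) above is a one-line computation. In the sketch below, the molar mass of KPS (270.32 g/mol for K2S2O8) and the reading of the factor 2 as the two anionic end groups produced per initiator molecule are our additions, not quoted in the text.

```python
def monomolar_ratio(c_pe, mw_pe, c_kps, mw_kps=270.32):
    """Nominal PE-to-KPS monomolar ratio xi of Eq. (1).

    c_pe, c_kps : concentrations in mg/mL
    mw_pe       : molecular weight of the PE repeating unit (g/mol)
    mw_kps      : molar mass of the initiator; 270.32 g/mol (K2S2O8)
                  is our assumed value.
    The factor 2 reflects two charged groups per KPS molecule
    (our interpretation of the denominator in Eq. (1)).
    """
    return (c_pe * mw_kps) / (2.0 * c_kps * mw_pe)
```

By construction \(\xi\) scales linearly with the PE concentration at fixed initiator content, which is why the mixtures can be indexed by \(\xi\) alone.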
A small leftover volume of the concentrated microgel suspension (\(\varphi=\)1.57) has been further mixed with PE solutions to perform complementary mobility and light-transmission experiments in the same range of \(\xi\) explored via rheology. ### Viscosity Rolling-ball viscosimetry measurements were performed to obtain the generalized colloidal volume fraction of the microgel suspensions [38]. The measurements were done at \(T=20\)\({}^{\circ}\)C using an Anton Paar Lovis 2000 ME microviscosimeter in the range 4.68 \(\cdot\) 10\({}^{-5}<c\)(wt/wt) \(<\) 1.5 \(\cdot\) 10\({}^{-3}\), where the viscosity \(\eta\) increases linearly with the mass fraction of microgels \(c\) (see Supplementary Material). The particulate volume fraction is defined as \(\varphi\) = \(n_{p}v_{p}\), where \(n_{p}\) is the particle number density and \(v_{p}=4\pi r^{3}/3\) is the volume of a single particle of radius \(r\) at infinite dilution.

Fig. 1: The structure formulae of the polyions employed in this study: (a) \(\alpha\)-polylysine hydrobromide (PLL); (b) polydiallyldimethylammonium chloride (PDADMAC); (c) polystyrene sulfonate sodium salt (PSS)

Fig. 2: Schematic representation of the adopted protocol, including the mixing and the successive heating of the samples from T=20 \({}^{\circ}\)C to T=40 \({}^{\circ}\)C. Because of microgel deswelling the generalized volume fraction (section 2.4) decreases from \(\varphi(20^{\circ}C)=\)1.5 to \(\varphi(40^{\circ}C)=\)0.11.

Experimentally, only the concentration \(c\) (wt/wt) of a (purified) suspension can be measured directly, by weighing a small volume of the sample before and after removing the solvent by evaporation. Since the generalized volume fraction \(\varphi\) is proportional to the mass concentration \(c\), it can be replaced by \(k\cdot c\), where \(k\) is a factor for converting the mass concentration to the generalized volume fraction. 
We determined the constant \(k\) by matching the \(c\) dependence of the zero-shear viscosity \(\eta\) of the purified suspensions with the values expected from the Einstein equation [27]: \[\frac{\eta}{\eta_{s}}=1+\frac{5}{2}\varphi=1+\frac{5}{2}kc, \tag{2}\] where \(\eta_{s}\) is the viscosity of the solvent. By fitting \(\eta/\eta_{s}\) to a straight line (see Supplementary Material), we thus obtained \(k\), which allows us to extract the colloidal volume fraction of the suspension. We obtained \(k=28.5\pm 0.6\). This value embeds the effect of microgel permeability and that of the primary electroviscous effect, as discussed recently [49]. Finally, we remind the reader that for microgels from a very similar synthesis [36] but with lower crosslinker content, the onset of glassy dynamics at \(T=20\)\({}^{\circ}\)C occurs at \(\varphi\approx 0.8\), which in our case represents an approximate lower bound for the liquid-to-solid transition. This threshold is compatible with our rheology data, as shown in section 3.1. ### Light scattering Dynamic and static light scattering experiments have been performed to characterize the microgels at the single-particle level. For this purpose an Amtec goniometer and a laser source (\(\lambda=532\) nm) were used to collect the light at scattering angles in the range \(16^{\circ}\leq\theta\leq 150^{\circ}\), corresponding to scattering wave vectors in the range \(4.4~{}\mu m^{-1}\leq q\leq 30.3~{}\mu m^{-1}\). All scattering experiments have been performed on dilute samples: an aliquot of the purified mother batch was diluted in deionized water to get a final generalized microgel volume fraction \(\varphi=\)0.006. The hydrodynamic radius, \(R_{H}\), and the polydispersity of the microgels were measured by means of dynamic light scattering. The scattered light intensity was collected at a fixed scattering angle (\(\theta=70^{\circ}\)) corresponding to a scattering vector \(q_{DLS}=18\)\(\mu m^{-1}\) and analyzed using a digital autocorrelator. 
The time decay of the autocorrelation function \(F_{s}(\vec{q},t)^{2}\) was then fitted by a second-order cumulant expansion (see Supplementary Material) to extract the diffusion coefficient \(D\), as shown below [52]: \[F_{s}(\vec{q}_{DLS},t)^{2}\propto exp\left(-2q_{DLS}^{2}Dt\right)\left[1+\frac{\mu_{2}t^{2}}{2!}+o(t^{3})\right]^{2} \tag{3}\] where \(\mu_{2}\) is related to the second moment of the distribution of the diffusion coefficients of the suspended particles. The average diffusion coefficient is then used to obtain the hydrodynamic radius via the Stokes-Einstein relation: \(D=K_{B}T/6\pi\eta_{s}R_{H}\), where \(K_{B}\) is the Boltzmann constant, \(T\) is the bath temperature and \(\eta_{s}\) is the zero-shear viscosity of the solvent. The corresponding size dispersion and polydispersity index are respectively \(\alpha_{R_{H}}=\sqrt{\mu_{2}}R_{H}/(Dq_{DLS}^{2})\) and \(\gamma=\mu_{2}/D^{2}q_{DLS}^{4}\). The polydispersity index here never exceeded 0.20. For commercial PE solutions, characterized by larger polydispersity indexes (PDI\(>\)0.2), the autocorrelation functions have been analyzed by means of the CONTIN algorithm [53], through which we extracted the number-weighted size distributions. The gyration radius, \(R_{g}\), was measured by collecting the intensity of the light \(I(q)\) scattered by the microgel samples at different scattering angles. The scattered intensity was subsequently fitted (see Supplementary Material) to the Guinier equation [54] to extract \(R_{g}\): \[I(q)=I(0)\exp\left[-\frac{(qR_{g})^{2}}{3}\right], \tag{4}\] where \(I(0)\) is a constant depending on the number of particles in the scattering volume and on the scattering factor of a single particle. The Guinier regime for all samples was attained in the range \(0.5\leq qR_{g}\leq 2.5\), in agreement with previously reported microgel syntheses [55, 56]. The uncertainty on \(R_{g}\) is given by the fit error, the latter being less than 1.5% of the best-fit value. 
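The two single-particle analyses of this section reduce to short numerical routines: inverting the Stokes-Einstein relation for \(R_H\) and a linear Guinier fit for \(R_g\). The following is a minimal sketch under stated assumptions (function names are ours; the full second-order cumulant fit of Eq. (3) is not reproduced here).

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def rh_from_diffusion(D, T, eta_s):
    """Hydrodynamic radius from the Stokes-Einstein relation
    D = kB*T / (6*pi*eta_s*R_H); SI units throughout."""
    return KB * T / (6.0 * np.pi * eta_s * D)

def guinier_fit(q, I):
    """Gyration radius and forward intensity from the Guinier law
    I(q) = I(0)*exp(-(q*Rg)^2/3) (Eq. 4): the linear fit of ln I
    versus q^2 has slope -Rg^2/3 and intercept ln I(0)."""
    slope, intercept = np.polyfit(np.asarray(q, float) ** 2,
                                  np.log(np.asarray(I, float)), 1)
    return np.sqrt(-3.0 * slope), np.exp(intercept)
```

The Guinier fit is only meaningful inside the window \(0.5\leq qR_g\leq 2.5\) quoted in the text, so in practice the data are restricted to that range before fitting.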
### Electrophoresis and Transmittance measurements The electrophoretic mobility and the transmittance of the suspensions were simultaneously measured using a Litesizer 500 (Anton Paar). The apparatus uses the cmPALS method, a recently developed PALS technology [57]. The absolute transmittance \(T_{A}\) is computed as the ratio between the intensity of the light transmitted through the sample (\(I\)) and that of the incident beam (\(I_{0}\)): \[T_{A}=I/I_{0}, \tag{5}\] To filter out any effect due to the solvent and the cell, we computed the relative transmittance \[T_{R}=T_{A}/T_{A}^{H_{2}O}, \tag{6}\] that is, the ratio between the absolute transmittance of the suspension and that of the pure solvent (water here). This has been done for each set temperature. The electrophoretic mobility and the transmittance were measured between 20 \({}^{\circ}C\) and 50 \({}^{\circ}C\), after proper thermalization, to monitor the effect of polyelectrolyte addition on the mobility of swollen and collapsed microgels. The volume fraction of all the samples was fixed at \(\varphi=0.006\). ### Rheology Rheological tests were performed on freshly prepared PE-microgel mixtures using a stress-controlled MCR501 rheometer (Anton Paar, Germany). A standard stainless-steel sandblasted cone-plate geometry (25 mm diameter, 0.998\({}^{\circ}\) cone angle) has been used for all the tests. Temperature control has been ensured by means of a Peltier element (PTD-200). The measuring temperatures were fixed at T=20 \({}^{\circ}\)C and T=40 \({}^{\circ}\)C. To ensure thermal equilibrium, the sample has been kept at the desired temperature for 10 min prior to each measurement. The outer rim of the samples has been covered with a low-viscosity silicone oil (0.1 Pa s) to minimize evaporation. 
Dynamic strain sweep (DSS) tests were carried out before each dynamic frequency sweep (DFS) test to evaluate the extent of the linear regime, namely where the first-harmonic viscoelastic moduli \(G^{\prime}(\gamma_{0})\) and \(G^{\prime\prime}(\gamma_{0})\) do not appreciably change upon varying the strain amplitude. The lack of significant ageing and the absence of evaporation were further tested, over about the same duration as one experiment (\(\sim\) 2500 s), at 20 \({}^{\circ}\)C and 40 \({}^{\circ}\)C via a time sweep experiment on one PE-free microgel suspension (Supplementary Material). This excluded the possibility that the variation of the moduli observed for the mixtures results from different ages of the pure microgel system. All frequency sweeps were done at strain amplitudes within the linear viscoelastic regime, where the two moduli do not appreciably change for increasing \(\gamma_{0}\). DSS and DFS tests at 20 \({}^{\circ}\)C started on freshly loaded samples after 10 min of thermalization. The samples have been successively heated up to 40 \({}^{\circ}\)C and, after another 10 min of thermalization, a DFS test has been carried out. Our rheological measurements therefore probe only the rapid formation, strengthening or melting of the original glassy or gel phases, and they do not take into consideration possible coarsening processes that might occur over time scales much longer (several hours) than our experiment duration (\(\sim\) 2500 s). For liquid-like samples responding with torques well below the lower limits imposed by the rheometer under oscillatory shear, we performed steady-rate experiments in the range 42 \(s^{-1}\leq\dot{\gamma}\leq\) 1000 \(s^{-1}\) to measure their flow curves \(\sigma(\dot{\gamma})\), probe their Newtonian behavior and, where possible, extract their zero-shear viscosity.
## 3 Results and discussion

### Bare microgels and PEs

Prior to mixture preparation we characterized the bare microgels and the PEs to confirm the sign of their charge in water, to measure their size and to identify rheologically the PE-free sample (\(\xi=\)0). The bare microgels were characterized using DLS and SLS in the dilute regime to measure their hydrodynamic radius (\(R_{H}\)), gyration radius (\(R_{g}\)) and mobility \(\mu\) as a function of temperature. As shown in Fig. 3 (left panel), the microgels undergo a volume phase transition (VPT) when the temperature is raised above the LCST of pNIPAM[20]. The critical temperatures (\(T_{c}\)) at which the microgels undergo the volume transition are estimated by fitting both \(R_{g}\) and \(R_{H}\) to an auxiliary function[40]: \[R_{H,g}(T)=[R_{0}-\Delta R_{H,g}\tanh(s(T-T_{c}^{H,g}))]+A(T-T_{c}^{H,g}) \tag{7}\] where \(R_{0}\) is the radius of the microgel at the VPT, \(\Delta R_{H,g}\) is the amplitude of the VPT and the parameter \(s\) quantifies its sharpness. We obtained \(T_{c}^{H}=\) 32.9 \(\pm\) 0.2 \({}^{\circ}\)C and \(T_{c}^{g}=\) 31.8 \(\pm\) 0.3 \({}^{\circ}\)C for \(R_{H}\) and \(R_{g}\), respectively. The lower \(T_{c}\) for \(R_{g}\) compared to \(R_{H}\) can be attributed to an uneven distribution of the charges between the core and the periphery[27, 29, 49]. Despite that, given the dispersion of the data, we do not detect a clear onset of a minimum of \(R_{g}/R_{H}\), which is strictly related to the two-step deswelling of PNIPAm microgels[40, 49], with the core collapsing at temperatures always lower than those marking the transition of the peripheral corona. In this respect, we have already reported[49], for a synthesis of microgels with the same crosslinker-to-monomer molar ratio (5.3%) as the present one, the existence of a barely detectable minimum in \(R_{g}/R_{H}\) across the VPT.
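To make the fitting procedure concrete, a minimal sketch of the Eq. (7) fit on synthetic \(R_{H}(T)\) data follows; every numerical value here is a placeholder, not the measured dataset.

```python
import numpy as np
from scipy.optimize import curve_fit

def swelling_curve(T, R0, dR, s, Tc, A):
    # Auxiliary function of Eq. (7): sigmoidal VPT step plus a linear background
    return R0 - dR * np.tanh(s * (T - Tc)) + A * (T - Tc)

# Synthetic R_H(T) data (nm) mimicking a VPT near 33 C, with added measurement noise
T = np.linspace(20.0, 50.0, 31)
rng = np.random.default_rng(0)
R = swelling_curve(T, 150.0, 60.0, 0.8, 32.9, -0.3) + rng.normal(0.0, 1.0, T.size)

popt, pcov = curve_fit(swelling_curve, T, R, p0=(140.0, 50.0, 1.0, 33.0, 0.0))
Tc_fit, Tc_err = popt[3], np.sqrt(pcov[3, 3])
print(Tc_fit, Tc_err)
```

The one-standard-deviation uncertainty on \(T_{c}\) comes from the diagonal of the fit covariance matrix, which is how error bars like the quoted \(\pm\) 0.2 \({}^{\circ}\)C are commonly obtained.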
Such a feature has to be ascribed to the higher crosslinker density of the microgels employed here with respect to other syntheses where the minimum was more evident[40]. As a matter of fact, increasing the crosslinker density reduces the extent of the minimum of \(R_{g}/R_{H}\), since it homogenizes the local deswelling within the microgel volume: the core and the corona deswell to the same extent for high crosslinker-to-monomer molar ratios and increasing temperatures, suppressing the decoupling between the transitions in \(R_{g}\) and \(R_{H}\) of the microgels. This has been carefully investigated and established via simulations[40]. The ratio \(R_{g}/R_{H}\) stays within the range 0.61-0.63 for \(T\leq T_{c}^{H}\) and increases sharply at \(T\simeq T_{c}^{H}\), consistently with the microgel shrinking above the VPT[40].

Fig. 3: Gyration and hydrodynamic radius (left) and mobility (right) of bare PNIPAm microgels as a function of temperature. Here \(\varphi=0.006\). The inset (left panel) shows the ratio \(R_{g}/R_{H}(T)\). The relative errors on \(R_{H}\) and \(R_{g}\), obtained from the fits of the intensity correlation function and of the q-dependent scattered intensity (Equations 3 and 4; see Supplementary Material), never exceeded 1.5% of the best-fit values; error bars for \(R_{H}\), \(R_{g}\) and \(R_{g}/R_{H}\) are smaller than or equal to the symbol size. The errors on the mobilities, obtained from the full width at half maximum of the mobility distributions, never exceeded 7% of each mean value.

Fig. 4: Panel a): Electrophoretic mobility of the three PEs as a function of temperature at \(C_{PE}\)=1.25 mg/ml (PLL), \(C_{PE}\)=1.75 mg/ml (PSS) and \(C_{PE}\)=1.38 mg/ml (PDADMAC). These concentrations are the highest ones (highest \(\xi\)) characterizing the PE-microgel mixtures discussed in section 3.2. Panel b): Electrophoretic mobility at \(C_{PE}\)=28.0 mg/ml (PLL), \(C_{PE}\)=5.1 mg/ml (PSS) and \(C_{PE}\)=35.0 mg/ml (PDADMAC). Error bars are the full widths at half maximum of the mobility distributions. The inset of panel b) shows the normalized relative transmittance measured for the three concentrated PE samples as a function of temperature.

The electrophoretic mobility of the same microgels is shown in Figure 3 (right panel). The negative mobility is due to the anionic initiator used for the microgel synthesis. Microgel mobility is remarkably affected by the VPT [20]: it drastically decreases (algebraically) above about \(T_{c}^{H}\) due to the increase of the microgel charge density. We extracted the electrokinetic transition (EKT) temperature (\(T_{c\mu}\)) using the same auxiliary function as in equation 7, namely \[\mu(T)=[\mu_{0}-\Delta\mu\tanh(s_{\mu}(T-T_{c\mu}))]+A_{\mu}(T-T_{c\mu}) \tag{8}\] where \(\mu_{0}\) is the mobility of the microgel at the EKT, \(\Delta\mu\) is the amplitude of the EKT and the parameter \(s_{\mu}\) quantifies its sharpness. We obtain \(T_{c\mu}=35.8\pm 0.1\) \({}^{\circ}C\). The discrepancy between the critical temperatures marking the VPT and the EKT is \(T_{c\mu}-T_{c}^{H}\simeq 2.9\) \({}^{\circ}C\). Such a significant difference between the transition temperatures has been reported by Pelton _et al._[20], Daly _et al._[58] and more recently by Truzzolillo _et al._[27], and it is attributed to a further charge restructuring (densification) well above \(T_{c}^{H}\)[58]. To determine preliminarily the viscoelastic properties of the PE-free microgel suspensions, the rheology of bare microgel dispersions was also investigated as a function of \(\varphi\) at T=20 \({}^{\circ}\)C.
This allowed us to determine precisely the rheological state of the suspension to which PEs are progressively added: at the microgel concentration (\(\varphi=1.5\)) that characterizes all the PE-microgel mixtures, microgels are in a jammed glassy state at T=20 \({}^{\circ}\)C (see Supplementary Material for more details). Prior to mixture preparation we further investigated the temperature dependence of the PE mobility and, since PLL and PSS also bear hydrophobic segments, we inspected the PE stability via light transmission experiments at high concentration. Figure 4-a shows the electrophoretic mobility of the three PEs at the highest concentrations of the range explored in the rheology experiments. As expected, the two cationic PEs (PLL and PDADMAC) show positive mobility, while PSS chains are characterized by negative mobilities at all temperatures. For the three PEs we observe a smooth increase of the mobility modulus with increasing temperature. Such an increase is mainly due to the decrease of the solvent viscosity (see Supplementary Material). Figure 4-b shows the mobility and the normalized transmittance of solutions that are more concentrated in PE chains. We used these samples to test the stability of all the PEs in water, reduce the uncertainties on the mobility measurements and test the effect of PE concentration on the mobility itself. By increasing the PE concentration we observe a clear decrease in mobility for all the polymers, which is consistent with an increased fraction of condensed counterions [59] and possibly with an augmented friction due to more frequent collisions among different chains, while the observed temperature dependence can still be filtered out by taking into account the viscosity variation of the solvent (see Supplementary Material).
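One simple way to perform such a filtering (a sketch, not the procedure of the Supplementary Material) is to rescale the mobility by the temperature-dependent water viscosity, here taken from a standard empirical correlation; the mobility values below are hypothetical and constructed to track only the solvent viscosity.

```python
import numpy as np

def eta_water(T_kelvin):
    # Empirical correlation for the viscosity of liquid water (Pa s),
    # accurate to a few percent between 0 and 100 C
    return 2.414e-5 * 10.0 ** (247.8 / (T_kelvin - 140.0))

def reduced_mobility(mu, T_celsius, T_ref=20.0):
    # Rescale mobility to the reference-temperature viscosity to remove the solvent trend
    return mu * eta_water(T_celsius + 273.15) / eta_water(T_ref + 273.15)

T = np.array([20.0, 30.0, 40.0, 50.0])
# Hypothetical mobilities (units of 1e-8 m^2/(V s)) that follow only the viscosity decrease
mu = -2.0 * eta_water(293.15) / eta_water(T + 273.15)
mu_red = reduced_mobility(mu, T)
print(mu_red)  # flat at -2.0: no residual temperature trend after the correction
```

Any temperature dependence surviving this rescaling then reflects a genuine change of the particle charge or conformation rather than of the solvent.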
Most importantly, the normalized transmittance of the same samples, namely the transmittance measured at temperature \(T\) divided by its value at 20\({}^{\circ}\)C (inset of Figure 4), does not vary remarkably when the temperature increases, and the non-normalized relative transmittance was \(T_{R}\)(20\({}^{\circ}\)C) = 1.00\(\pm\)0.01 for the three PE solutions. The polyions employed here therefore stay well dissolved in water, and possible conformational changes, local chain associations, or variations of the fraction of free counterions [60, 61, 62] do not cause any massive condensation or chain swelling. This is an important starting point, since transmittance will allow us to discern whether colloidal condensation occurs or not in the mixtures. To establish whether polyions can penetrate microgels we finally measured or estimated the size of the PEs and the average mesh size of the microgels, the latter being equal to the average distance between two crosslinker molecules assumed to be uniformly distributed within the microgel volume. The hydrodynamic size distribution of PSS and PDADMAC chains has been obtained via the CONTIN analysis of the intensity autocorrelation functions [53]. DLS experiments have been performed using dilute PE suspensions, namely for \(C_{PE}\ll C_{PE}^{*}=\frac{M_{c}}{\frac{4}{3}\pi N_{A}R_{H}^{3}}\), where \(C_{PE}^{*}\) is the overlap concentration, \(M_{c}\) is the chain molar mass and \(N_{A}\) is the Avogadro number. We obtained number-weighted size distributions with a main peak at hydrodynamic diameters \(D_{H}^{peak}=4.5\) nm for PSS and \(D_{H}^{peak}=184\) nm for PDADMAC. We attribute this large difference in size between the two PEs to their different average molecular weights and to the known large difference in backbone stiffness (see section 2.2). PLL chains did not give enough scattering signal for their hydrodynamic size to be measured reliably in the dilute regime.
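The overlap-concentration criterion above, together with the worm-like-chain and mesh-size estimates used in the next paragraph of the text, reduces to three one-line formulas; the sketch below uses purely hypothetical inputs (molar mass, contour length, crosslinker number), not the measured values.

```python
import numpy as np

N_A = 6.02214076e23  # Avogadro number, 1/mol

def overlap_concentration(M, R_H_nm):
    # C* = M / (4/3 * pi * N_A * R_H^3), returned in mg/ml (M in g/mol, R_H in nm)
    R_cm = R_H_nm * 1e-7
    return M / (4.0 / 3.0 * np.pi * N_A * R_cm**3) * 1e3  # g/cm^3 -> mg/ml

def wlc_end_to_end(L_nm, lp_nm):
    # Worm-like-chain root-mean-square end-to-end distance (nm)
    r2 = 2.0 * lp_nm * L_nm - 2.0 * lp_nm**2 * (1.0 - np.exp(-L_nm / lp_nm))
    return np.sqrt(r2)

def mesh_size(R_H_nm, N_c):
    # d_m = R_H * (4*pi/(3*N_c))**(1/3) for N_c crosslinkers uniform in the microgel volume
    return R_H_nm * (4.0 * np.pi / (3.0 * N_c)) ** (1.0 / 3.0)

c_star = overlap_concentration(M=1.0e5, R_H_nm=10.0)  # hypothetical chain
R_ee = wlc_end_to_end(L_nm=393.0, lp_nm=1.0)          # contour length chosen for illustration
d_m = mesh_size(R_H_nm=180.0, N_c=2.0e5)              # hypothetical microgel parameters
print(c_star, R_ee, d_m)
```

With \(l_{p}=1\) nm and a long contour length, the worm-like-chain expression is dominated by the \(2l_{p}L\) term, so \(R_{ee}\simeq\sqrt{2l_{p}L}\).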
We have however estimated their average end-to-end distance by considering a worm-like chain model for semiflexible chains [63, 64] and the known persistence length of PLL, \(l_{p}=1.0\) nm [42]. We obtained an end-to-end distance \(R_{ee}=28\) nm. The overlap concentration for PLL chains has then been estimated by replacing the hydrodynamic radius with half of the end-to-end distance. The average mesh size \(d_{m}\) of PNIPAm microgels has been computed from the amount of BIS molecules and the number of microgels contained in the mother batch. The former is known from the synthesis, while the latter can be computed knowing the value of the generalized volume fraction obtained by viscosimetry at T=20 \({}^{\circ}\)C, the hydrodynamic size of the microgels at the same temperature and the total volume \(V\) of the suspension. We obtained \(d_{m}=R_{H}[4\pi/(3N_{c})]^{1/3}=4.9\) nm, where \(N_{c}\) is the number of crosslinkers per microgel. Since \(d_{m}\) is always comparable to or lower than the measured or estimated size of the PEs, the penetration of PE chains is limited to the outer shell of the microgels, where a lower-than-average crosslinker density characterizes PNIPAm microgels synthesized via free radical polymerization [65, 66]. We therefore expect that PE penetration is maximum for the small PSS chains and nearly absent for the large PDADMAC polymers. Finally, PE diffusion within the microgel volume is expected to be further reduced at 40 \({}^{\circ}\)C, where microgels collapse and their mesh size consequently decreases.

### Rheology of PE-microgel mixtures

Fig. 5 shows selected dynamic frequency sweeps for the three sets of mixtures and different charge ratios \(\xi\) at 20 \({}^{\circ}\)C (a,c,e) and 40 \({}^{\circ}\)C (b,d,f), namely below and above the microgel VPT.
The limited frequency range at 40 \({}^{\circ}\)C is due to inertia problems, which are routinely encountered at high frequencies (\(\gtrsim 10\) rad/s) with standard shear geometries such as the cone-and-plate fixture used in this work, producing a non-physical drop of the moduli for ultra-soft solids (\(G_{p}\lesssim 10\) Pa). At 20 \({}^{\circ}\)C the addition of PEs has only a weak effect on the rheology of the suspensions: we observe a weakening of the original jammed glass for M-PLL (a) and M-PSS (c) mixtures for increasing \(\xi\), while the linear viscoelastic spectra of the mixtures stay basically unaltered when PDADMAC is added (e). We attribute this to three synergistic effects: i) microgel deswelling due to an increased ionic strength [67] and to the osmotic pressure possibly exerted by unadsorbed chains [68]; ii) depletion interactions that may act in the presence of free chains, especially in the case of anionic PSS polymers [69]; iii) a reduction of the net repulsion between microgels due to a partial adsorption of the cationic polymers, which lowers the electrostatic repulsion between microgel coronas bearing most of the microgel charge [40]. As noted, however, no effect of PDADMAC chains on swollen microgel glasses emerges. In this respect it is worth recalling that PDADMAC chains showed a lower mobility at all temperatures (Figure 4-a,b) compared to PLL, pointing to a lower charge density on the polymer backbone. We thus expect a lower adsorption energy and a weaker impact on the swollen microgels with low charge density. Finally, further effects might be produced by the PE polydispersity, which we expect to be larger for PDADMAC as indicated by the supplier, and by the different microgel-to-PE size ratio, since both affect residual depletion effects.
In particular, since at T=20 \({}^{\circ}\)C the hydrodynamic PDADMAC/microgel size ratio is \(\simeq\) 1.08, while the (estimated) hydrodynamic PLL/microgel size ratio is \(\simeq\) 0.082 and the adsorption is limited by the low charge density and high hydrophilicity of the microgels, a more prominent (short-ranged) depletion attraction might characterize their mutual interaction in the presence of PLL rather than of PDADMAC chains, the latter being nearly the same size as the PNIPAm microgels. In addition, depletion effects in M-PDADMAC mixtures are further reduced by the presumably higher polydispersity of the depletants (the PEs here), whose impact on entropy-driven interactions has been discussed in detail [70]. This said, although these effects are difficult to quantify or decouple, they all point towards a softening of the colloidal glasses due to a reduction of the repulsive forces between the microgels. At 40 \({}^{\circ}\)C, by contrast, the emerging scenario is very different and the rheology of the microgel suspensions becomes very sensitive to both the PE concentration and the sign of the PE charge (Figure 5-b,d,f). First of all, we point out that at this temperature PE-free microgel suspensions (\(\xi=0\)) are still viscoelastic solids characterized by a nearly frequency-independent storage modulus \(G_{p}=0.47\pm 0.05\) Pa. The generalized microgel volume fraction in this case is much lower than its value at 20 \({}^{\circ}\)C, since the microgels are in their collapsed state. By rescaling \(\varphi\) using the hydrodynamic radii measured at the two temperatures we obtain \(\varphi|_{40^{\circ}C}=\varphi|_{20^{\circ}C}\frac{R_{H}(40^{\circ}C)^{3}}{R_{H}(20^{\circ}C)^{3}}=0.11\). At this volume fraction the suspensions can no longer be considered jammed glasses; rather, they are in a gel phase, consistently with other rheological studies on concentrated PNIPAm suspensions [34].
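The rescaling used above is a one-liner; the radii below are hypothetical values chosen only so that the swollen-state \(\varphi=1.5\) drops to roughly the reported 0.11.

```python
def rescale_phi(phi_cold, R_hot, R_cold):
    # Generalized volume fraction scales with the cube of the hydrodynamic radius
    return phi_cold * (R_hot / R_cold) ** 3

# Hypothetical hydrodynamic radii (nm) across the VPT, for illustration only
phi_40 = rescale_phi(phi_cold=1.5, R_hot=75.0, R_cold=180.0)
print(phi_40)  # ~0.11, i.e. well below the jamming regime
```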
Microgels aggregate and form percolating networks due to hydrophobic forces acting between the particles for \(T>T_{c}^{H}\). It is worth remarking that this occurs despite the temperature-induced increase of the charge density of the microgels, which stabilizes the suspensions at lower volume fractions, and it is caused both by the high number density of counterions, which screen the microgel charges, and by the large number density of microgels, which enhances the collision probability between two of these colloids, increasing the rate of cluster formation. The impact of the microgel charge and the counterion concentration on microgel gelation has been investigated only very recently [71], confirming that the suppression of electrostatic repulsions gives rise to a net increase of gel elasticity. As the PE concentration is progressively increased we observe three interesting phenomena: i) a large gel strengthening followed by a sharp softening for both sets of mixtures containing cationic PEs (M-PLL, M-PDADMAC); ii) a complete melting of the gel over a wide range of PE concentrations as the anionic PSS chains are progressively added; iii) a clear gel strengthening, or re-gelation, at the largest PE concentrations in M-PLL and M-PSS mixtures. For liquid-like M-PSS mixtures, since the linear viscoelastic moduli were not measurable, we performed continuous steady-rate tests and extracted their flow curves (inset of Figure 5-d). They all showed Newtonian behavior, \(\sigma(\dot{\gamma})=\eta_{0}\dot{\gamma}\), from which we obtained the zero-shear viscosity \(\eta_{0}\). Figure 6 summarizes our results for all the samples investigated via rheology and serves to detail more clearly the effect of PEs. We report both the storage and the loss modulus of the solid-like suspensions at \(\omega=0.07\) rad/s and the zero-shear viscosity for the liquid-like samples.
Panel (a) shows the moduli of all the mixtures as a function of \(\xi\) at T=20 \({}^{\circ}\)C: increasing \(\xi\) does not remarkably affect the rheology of the mixtures and produces only a weak decrease of both moduli in M-PLL and M-PSS mixtures, while the effect of PDADMAC addition is not even detectable. Panels (b) and (c), by contrast, show the relevant impact that PEs have on microgel networks. On the one hand, both M-PLL and M-PDADMAC mixtures (b) are characterized by moduli that increase by almost two orders of magnitude with respect to the original gel (\(\xi=0\)) at \(\xi\simeq 0.04-0.05\). The same moduli decrease again sharply at \(\xi\simeq 0.1\). At higher PE concentration (\(\xi>0.1\)) the two cationic PEs act differently on the network dynamics: the gel moduli increase again in M-PLL mixtures, while this does not occur in M-PDADMAC suspensions. On the other hand, in stark contrast with the phenomenology encountered by adding oppositely charged PEs, the gel network is melted by PSS chains even at the lowest \(\xi\), with the mixtures becoming Newtonian liquids with low viscosities (\(\eta_{0}\simeq\)3 mPa s). Also in this case, however, a large PE content destabilizes the suspensions and a gel phase is observed again.

Fig. 5: Storage modulus \(G^{\prime}\) (solid symbols) and loss modulus \(G^{\prime\prime}\) (open symbols) as a function of the oscillatory frequency \(\omega\) at T=20\({}^{\circ}\)C (a,c,e) and T=40\({}^{\circ}\)C (b,d,f) for M-PLL mixtures (a,b), M-PSS mixtures (c,d) and M-PDADMAC mixtures (e,f). The inset in panel (d) shows the flow curves \(\sigma(\dot{\gamma})\) of liquid samples for which the LVE moduli were not measurable. The straight line is a linear fit of the data at \(\xi=\)0.309.
The rheology of the binary PE-microgel mixtures therefore points to the existence of an emergent reentrant behavior of the gel at high temperature when cationic PEs are added, suggesting that PEs may first completely screen the microgel charge and then overcompensate for it, globally overcharging the PE-microgel complexes so formed. This is in line with the findings that some of the authors [21] obtained in dilute suspensions of similar M-PLL mixtures, and it will be confirmed by the mobility and light transmittance experiments discussed later (section 3.3). The further gel strengthening occurring at high PLL content (\(\xi>\)0.1) and its absence for PDADMAC point to a key role played by PE hydrophobicity. PLL chains are more hydrophobic than PDADMAC, they tend to aggregate at high concentrations as mentioned in section 2.2, and they can thus actively contribute to the enhanced elasticity of the gel. In this respect we point out that for M-PLL mixtures the PE concentration reaches values above the overlap concentration \(C_{PLL}^{*}\), spanning the range \(0\leq C_{PLL}\leq 4.1C_{PLL}^{*}\). Our results thus suggest that, for high PE content, the gels are at least partially PE-mediated: gelation is driven also by PE-PE interactions rather than only by PE-microgel ones. We recall also that both cationic polymers progressively adsorb on the microgels at 40 \({}^{\circ}\)C (see also section 3.3), so depletion effects should be considered of secondary importance. The completely different behavior observed in M-PSS mixtures points, quite surprisingly, to a strong influence of polystyrene sulfonate on the state of the microgel suspensions, since PSS has been considered a non-adsorbing polymer for anionic PNIPAm microgels [67]. We stress once again that the PSS chains used in this work consist of 90% sulfonated polystyrene monomers, that is to say, polymer backbones having one hydrophobic styrene monomer out of ten.
This suggests that PE adsorption might occur in this case as well. In addition, since the mixtures have been prepared at T=20 \({}^{\circ}\)C, i.e. at \(\varphi=1.5\), PSS chains might partially penetrate the microgels for entropic reasons, and after microgel collapse part of them can still be confined within the microgel volume. PSS adsorption or inclusion would explain the gel melting, since sulfonated chains are globally hydrophilic and their layer on the microgels increases the negative charge of the M-PSS complexes, hampering gelation. As with the partially hydrophobic PLL chains, PSS causes a re-condensation of the microgels at high concentrations (\(\xi>\)0.31). Enhanced clustering in this case is driven only by hydrophobic interactions, since there cannot be the charge-patch attraction [72] characterizing M-PLL and M-PDADMAC systems. In more detail, we attribute this microgel aggregation to i) concomitant energetic bridging due to the residual hydrophobic interactions between non-sulfonated styrene monomers and those between hydrophobic PNIPAm segments, ii) a simple salting-out effect and iii) possibly depletion attractions due to non-adsorbed chains at high concentrations. Although our mobility measurements, which we discuss hereafter, confirm the (at least partial) inclusion of PSS chains within the microgel volume, a more systematic study involving other complementary techniques and simulations, which goes beyond the scope of this work, is needed to decouple all these effects.

Fig. 6: Storage (full points) and loss modulus (empty/shaded points) of PE-microgel mixtures for the three PEs as a function of the monomolar ratio \(\xi\) (eq. 1) at \(\omega=\)0.07 rad/s, at T=20\({}^{\circ}\)C (a) and T=40\({}^{\circ}\)C (b,c). For M-PSS mixtures the zero-shear viscosity is also shown (c). The moduli of PE-free suspensions (grey squares) are also shown as a reference. Solid lines in panel b) are a guide to the eye.
In what follows, we report on detailed electrophoretic and light transmission experiments in diluted mixtures, through which we have unravelled the role of the sign of the PE charge and investigated the presence of an isoelectric point and of a colloidal overcharging causing a reentrant condensation of PE-microgel complexes in the same range of charge ratios probed by the rheology experiments.

### Mobility and light transmission experiments for diluted mixtures

Most of the suspensions studied via rheology have been diluted in deionized water so as to obtain PE-microgel mixtures with a generalized microgel volume fraction \(\varphi=0.006\) at T\(=20\) \({}^{\circ}\)C. Figure 7 shows the electrophoretic mobility for the three systems as a function of temperature. The behavior of \(\mu(T)\) changes drastically depending on whether the cationic PLL and PDADMAC polymers (Panels a,c) or the anionic PSS (Panel b) are progressively added, staying qualitatively unaltered when the effect of the temperature dependence of the viscosity and the permittivity of the solvent is filtered out (Supplementary Material). For both cationic PEs we observe a pronounced overcharging starting at \(\xi\simeq 0.1\) that increases radically at \(T>T_{c\mu}\). Such overcharging has already been observed [21, 27] and is driven by the microgel EKT, with the largest increase of mobility occurring close to the native electrokinetic transition of PNIPAm microgels. This will be briefly discussed below. The M-PSS mixtures reveal, by contrast, an unexpected mobility behavior: after a first algebraic increase, presumably due to an increase of the ionic strength and scarce PE adsorption, the mobility increases in modulus at all temperatures, almost doubling its magnitude at T\(=\)50 \({}^{\circ}\)C. This strongly indicates that PSS chains adsorb onto the microgels or stay confined within their peripheral volume.
We must conclude that part of the PSS chains adsorb, potentially penetrating inside the microgel periphery, and that the hydrophobic interactions between PSS chains and microgels, due to the incomplete sulfonation of the polystyrene chains, might favor PE coating despite the anionic nature of the PE. Such a mobility change explains why microgels are stabilized by PSS chains in concentrated suspensions, hampering the formation of gels (Figure 6-c): when these like-charged polymers coat the collapsed microgels, they both enhance the electrostatic repulsion between the so-formed decorated objects and screen out the hydrophobic interactions originally driving the gel formation of bare microgels. With this in mind, we can also speculate, now more pertinently, that the (reentrant) gelation observed at \(\xi=0.645\) can be ascribed to PE-mediated hydrophobic interactions, to the screening of the residual charge due to the increased counterion concentration and, since the mobility of the complexes does not vary remarkably for \(\xi>\)0.21, to depletion interactions due to unadsorbed chains. The encountered phenomenology is therefore at odds with the assumption that highly sulfonated PSS is strictly a non-adsorbing polymer in the presence of PNIPAm microgels. Such an assumption has been adopted in the past [67, 69] to justify the observed flocculation of PNIPAm microgels induced by PSS for \(T\lesssim T_{c}\). If this were the case, the presence of PSS chains could only give rise to an enhanced depletion mechanism between the microgels, driven by the unbalanced osmotic pressure exerted by the free chains. In concentrated suspensions such a scenario would consequently result in a progressive gel strengthening, which is not observed; by contrast, the gel melting and the increase of the mobility modulus in dilute suspensions represent a robust indication that small PSS polymers adsorb on and partially interpenetrate hydrophobic microgels.
In this respect, the presence of hydrophobic interactions between PNIPAm chains and non-sulfonated polystyrene is well documented in the literature [73]. We obtain further insight into the charge restructuring process occurring in PE-microgel mixtures when we compare the mobility of the complexes to that of the pure PEs at the same concentration \(C_{PE}\), namely the highest \(\xi\) in the mixtures, below and above \(T_{c}^{H}\) (Figure 7-d,e,f). Remarkably, at T\(=\)20 \({}^{\circ}\)C the average mobility modulus is always lower than the one measured in pure PE suspensions, signaling the absence of a large charge accumulation with respect to pure PEs. By contrast, at 40 \({}^{\circ}\)C the mobility is always algebraically higher than in the pure PE solutions for the M-PLL and M-PDADMAC mixtures, and it is comparable to that of the free PSS chains for M-PSS systems. The contrasting behavior observed below and above \(T_{c}^{H}\) for cationic PEs confirms the occurrence of a net increase of the positive charge density in the presence of microgels for \(T>T_{c}^{H}\). The latter can be due to a compaction of the polyions and/or to an increase of the fraction of free counterions promoted by PE adsorption [74]. We also note that a simple local increase of the polyion concentration would give rise to a decrease in mobility, essentially due to an increased amount of condensed counterions, in line with the data shown in Figure 4-a,b and other results discussed in the literature [59]. In the case of PSS chains this is less evident, although the mobility of the complexes is much larger in modulus than that measured for the pure PNIPAm microgels (Figure 7-b). A summary of the mobility values is also reported for comparison in the Supplementary Material (Table 1). The charging and neutralization process of the complexes, and their stability at the two selected temperatures, can finally be rationalized if the mobility and transmittance data are combined.
This is shown in Figure 8. For both M-PLL and M-PDADMAC mixtures a mobility reversal occurs at both 20 \({}^{\circ}\)C and 40 \({}^{\circ}\)C, while M-PSS mixtures show an average mobility that becomes progressively more negative as the PE concentration is increased (Figure 8-a,c). Interestingly, for both cationic PEs the mixtures undergo a reentrant condensation, signaled by the large decrease of the relative transmittance (Figure 8-d) in conjunction with the charge reversal observed at 40 \({}^{\circ}\)C, while the suspensions stay stable at 20 \({}^{\circ}\)C (Figure 8-b). The simultaneous occurrence of an isoelectric condition and a large colloidal condensation further confirms that the measured mobility has to be assigned to the PE-microgel complexes, as also pointed out by the systematically unimodal mobility distributions measured for all the mixtures at any \(T\) and by the sigmoidal temperature dependence of the mobility in the presence of PEs. Our data therefore confirm that the hydrophilic character of PNIPAm microgels below their VPT (\(T<T_{c}^{H}\)) hampers microgel condensation, even though an apparent isoelectric condition is attained and chains partially adsorb onto the microgels. By contrast, above the VPT (\(T>T_{c}^{H}\)) both hydrophobic interactions and a stronger charge-patch attraction between decorated microgels drive cluster formation, localized at the isoelectric point where the average mobility vanishes. We recall that charge-patch attraction between decorated colloids is enhanced when large surface charge density fluctuations characterize the colloidal surfaces [72, 75]. This is indeed the case for collapsed microgels, where the bare portions of the microgel surfaces are supposedly densely charged and have an enhanced hydrophobicity.
Therefore, when the destabilization due to polyelectrolyte adsorption is considered, PNIPAm microgels behave as charged colloids at high temperatures (\(T>T_{c}^{H}\)) and as nearly neutral colloids below their volume phase transition temperature (\(T<T_{c}^{H}\)), confirming previous results obtained in dilute suspensions of M-PLL mixtures [21]. We have now shown that this has important repercussions on the rheology of concentrated dispersions, whose viscoelasticity at high temperatures is largely affected by the presence of an isoelectric point. The emerging general scenario for \(T>T_{c}^{H}\) is sketched in Figure 9.

Fig. 7: Electrophoretic mobility of a) M-PLL, b) M-PSS and c) M-PDADMAC complexes at varying PE concentrations as a function of temperature. The arrows are a guide for the eye and point to the mobility variation at \(T>T_{c}^{H}\) from \(\xi=\)0 to the maximum PE concentration. Panels d), e) and f) show the mobility values for the largest PE concentrations in the mixtures compared to the values measured in pure polyelectrolyte solutions at the same concentrations, as indicated in the panels.

Three final comments are in order. i) The peak of the gel modulus (Figure 6-b) observed for mixtures of microgels and cationic PEs occurs at \(\xi\simeq 0.04-0.05\), slightly shifted to lower \(\xi\) with respect to \(\xi\simeq 0.1\), where the isoelectric condition and the reentrant condensation are observed in diluted samples (Figure 8-c,d). This can be ascribed to the smaller volume accessible to PEs in concentrated microgel suspensions and therefore to a more effective neutralizing action of the chains, which presumably spend much less time in their desorbed state. In addition, the counterion concentration is also larger in concentrated samples, and this could further reduce the amount of oppositely charged polymers needed to neutralize the microgels, since the fraction of counterions condensed on the microgels is higher. ii) We note that at the highest \(\xi\) in dilute suspensions (Figure 8) there is no evidence of large cluster formation, which would result in a decrease of the relative transmittance, while gel strengthening is reported for concentrated M-PSS and M-PLL mixtures. This again corroborates the hypothesis that these gels are networks stabilized by non-electrostatic interactions between the concentrated PE chains surrounding the microgels at high \(\varphi\). The PE concentration is much lower in the diluted suspensions used to measure the mobility and the transmittance of the PE-microgel complexes (Figure 8); here, the free volume accessible to PEs is much larger and their mutual contact is very much reduced. iii) The charge inversion and re-entrant condensation occur far (\(\xi\simeq 0.1\)) from the nominal isoelectric condition (\(\xi=1\)). One possible explanation would be that most of the initiator (\(\approx 90\%\)) does not participate in the polymerization, which would then result in a large residual mass present in the supernatant after the first post-synthesis centrifugation cycle. This is ruled out: after drying and removing the solvent from the supernatant we obtain a solid content equal to only \(\approx 1\%\) of the total NIPAm monomers dissolved in water during the synthesis, and that also comprises the removed SDS. A much larger solid content, containing both unreacted NIPAm monomers and initiator, would be present if \(\approx 90\%\) of the initiator had not participated in the microgel formation.
A second possibility is represented by the partial involvement of the ionic initiator in the adsorption process: within this scenario the very peripheral sulfonic groups anchored to the microgels are mostly free from condensed counterions and, for obvious steric reasons, more available to accommodate PE chains, while groups located further inside the microgel volume are on average more screened by confined counterions and do not participate in the charge balance. This hypothesis is corroborated by recent experiments and simulations [49] where the addition of randomly distributed charges barely affects the \(R_{g}\) transition of PNIPAm-based microgels. Such a scenario, although it gives a plausible qualitative explanation for the discrepancy between the nominal and the observed isoelectric condition, is certainly not a conclusive one, and the mechanisms of charge neutralization of microgels through polyion adsorption remain to be understood in the future.

Fig. 8: Mobility (a,c) and transmittance (b,d) of microgel suspensions (\(\phi=0.006\)) at varying concentrations of PEs at \(20^{\circ}C\) (a,b) and \(40^{\circ}C\) (c,d). Error bars in a) and c) are the full widths at half maximum of mobility distributions. The blue circles in the panels (a,c) mark the isoelectric condition \(\mu=0\).

Fig. 9: Schematic representation of the different states encountered when PEs are progressively added in suspensions of PNIPAm microgels and temperature is raised up to \(T>T_{c}^{H}\).

## Conclusions

We have shown that adding polyelectrolytes bearing different charge has an important impact on the rheology of concentrated suspensions of anionic PNIPAm microgels, and that electrostatics crucially influences gel formation at high temperatures. 
We have studied the effect of the PEs both on a jammed glassy system, where PEs are confined in the interstitial spaces between faceted microgels or slightly penetrate into their peripheral corona, and when the concentrated suspension is brought beyond the microgel volume phase transition (\(T>T_{c}^{H}\)). We have shown that while jammed glasses are only weakly softened by PE addition, the impact of the PE chains is important at higher temperature, where PNIPAm microgels are charged hydrophobic colloids with high charge density. We find that a large and reentrant gel strengthening occurs in the presence of cationic PEs around the isoelectric point, where the average measured mobility is zero and large clustering takes place in diluted suspensions. Strikingly, for PE chains with residual hydrophobicity and bearing charges of the same sign as that of the microgels, we observe a melting of the original gel phase at \(T>T_{c}^{H}\). Mobility measurements suggest that also in this case there is at least a partial adsorption and/or penetration of the PEs into the microgels, increasing their net charge, screening the hydrophobic attraction between collapsed microgels and hence hampering the gelation of the system within the probed experimental time scale. Non-electrostatic interactions are thus also very important, especially when microgels are in their collapsed state; at high PE concentrations, far from the isoelectric point, we also find that the rheology of the mixtures is highly influenced by these specific effects: for mixtures containing partially hydrophobic PLL or PSS chains, gelation is favoured by the presence of a high PE content, while for the more hydrophilic PDADMAC polymers this is not observed. Our work shows that the rheology of thermosensitive ionic microgels can be tuned by adsorbing polyelectrolytes, and confirms that PNIPAm microgels must be considered as highly charged hydrophobic colloids at high temperatures. 
Our study paves the way for more systematic investigations including PEs sharing the same chemistry but with different molecular weights and degrees of hydrophobicity (e.g. PSS chains with different \(M_{w}\) and sulfonation degrees), which are required to discern in more depth the role played by the chain size and the non-electrostatic effects in these composite soft materials. In this respect, the use of fluorescent PEs and molecular dynamics simulations would surely help to unravel the role played by residual unadsorbed chains on the overall microgel stability. Finally, whether PE adsorption causes glass melting at \(T<T_{c}^{H}\) or reentrant gelation at \(T>T_{c}^{H}\) and lower generalized volume fractions is still unknown and deserves a thorough investigation.

## Conflicts of interest

There are no conflicts to declare.

## Acknowledgements

We acknowledge financial support from the Agence Nationale de la Recherche (Grant ANR-20-CE06-0030-01; THELECTRA). D.T. thanks Dr. S. Sennato for fruitful discussions.

## References

* [1] W. C. K. Poon, _Journal of Physics: Condensed Matter_, 2002, **14**, R859-R880. * [2] K. N. Pham, A. M. Puertas, J. Bergenholtz, S. U. Egelhaaf, A. Moussaid, P. N. Pusey, A. B. Schofield, M. E. Cates, M. Fuchs and W. C. K. Poon, _Science_, 2002, **296**, 104-106. * [3] D. Vlassopoulos and M. Cloitre, _Current Opinion in Colloid & Interface Science_, 2014, **19**, 561-574. * [4] K. Binder, P. Virnau and A. Statt, _The Journal of Chemical Physics_, 2014, **141**, 140901. * [5] I. Szilagyi, G. Trefalt, A. Tiraferri, P. Maroni and M. Borkovec, _Soft Matter_, 2014, **10**, 2479. * [6] F. Bordi, S. Sennato and D. Truzzolillo, _Journal of Physics: Condensed Matter_, 2009, **21**, 203102. * [7] B. Bolto and J. Gregory, _Water Research_, 2007, **41**, 2301-2324. * [8] N. Tobori and T. Amari, _Colloids and Surfaces A: Physicochemical and Engineering Aspects_, 2003, **215**, 163-171. * [9] A. M. Howe, R. D. Wesley, M. Bertrand, M. Cote and J. 
Leroy, _Langmuir_, 2006, **22**, 4518-4525. * [10] I. Pochard, C. Labbez, A. Nonat, H. Vija and B. Jonsson, _Cement and Concrete Research_, 2010, **40**, 1488-1494. * [11] T. Phenrat, N. Saleh, K. Sirk, H.-J. Kim, R. D. Tilton and G. V. Lowry, _Journal of Nanoparticle Research_, 2008, **10**, 795-814. * [12] V. Salgueirino-Maceira, F. Caruso and L. M. Liz-Marzan, _The Journal of Physical Chemistry B_, 2003, **107**, 10990-10994. * [13] S. Schwarz, J. E. Wong, J. Bornemann, M. Hodenius, U. Himmelreich, W. Richtering, M. Hoehn, M. Zenke and T. Hieronymus, _Nanomedicine: Nanotechnology, Biology and Medicine_, 2012, **8**, 682-691. * [14] G. Decher, _Science_, 1997, **277**, 1232-1237. * [15] J. Ruhe, M. Ballauff, M. Biesalski, P. Dziezok, F. Grohn, D. Johannsmann, N. Houbenov, N. Hugenberg, R. Konradi, S. Minko, M. Motornov, R. R. Netz, M. Schmidt, C. Seidel, M. Stamm, T. Stephan, D. Usov and H. Zhang, _Polyelectrolytes with Defined Molecular Architecture I_, Springer Berlin Heidelberg, Berlin, Heidelberg, 2004, vol. 165, pp. 79-150. * [16] M. Ballauff and O. Borisov, _Polyelectrolyte brushes_, 2006, vol. 11, pp. 316-323. * [17] J. L. Dalsin, L. Lin, S. Tosatti, J. Voros, M. Textor and P. B. Messersmith, _Langmuir_, 2005, **21**, 640-646. * [18] M. Elzbieciak-Wodka, M. Kolasinska-Sojka, D. Wodka, P. Nowak and P. Warszynski, _Journal of Electroanalytical Chemistry_, 2011, **661**, 162-170. * [19] R. Pelton and P. Chibante, _Colloids and Surfaces_, 1986, **20**, 247-256. * [20] R. H. Pelton, H. M. Pelton, A. Morphesis and R. L. Rowell, _Langmuir_, 1989, **5**, 816-818. * [21] S. Sennato, E. Chauveau, S. Casciardi, F. Bordi and D. Truzzolillo, _Polymers_, 2021, **13**, 1153. * [22] M. Prevot, C. Dejugnat, H. Mohwald and G. B. Sukhorukov, _ChemPhysChem_, 2006, **7**, 2497-2502. * Kim et al. [2015] M. Kim, S. J. Yeo, C. B. Highley, J. A. Burdick, P. J. Yoo, J. Doh and D. Lee, _ACS Nano_, 2015, **9**, 8269-8278. * Malaisamy and Bruening [2005] R. Malaisamy and M. L. 
Bruening, _Langmuir_, 2005, **21**, 10587-10592. * Kabanov and Kabanov [1998] A. V. Kabanov and V. A. Kabanov, _Advanced Drug Delivery Reviews_, 1998, **30**, 49-60. * Vialetto et al. [2018] J. Vialetto, N. Nussbaum, J. Bergfreund, P. Fischer and L. Isa, **608**, 2584-2592. * Truzzollillo et al. [2018] D. Truzzollillo, S. Sennato, S. Sarti, S. Casciardi, C. Bazzoni and F. Bordi, _Soft Matter_, 2018, **14**, 4110-4125. * Greinert and Richtering [2004] N. Greinert and W. Richtering, _Colloid and Polymer Science_, 2004, **282**, 1146-1149. * Kleinen and Richtering [2008] J. Kleinen and W. Richtering, _Macromolecules_, 2008, **41**, 1785-1790. * Kleinen and Richtering [2011] J. Kleinen and W. Richtering, _Colloid and Polymer Science_, 2011, **289**, 739-749. * Kleinen and Richtering [2011] J. Kleinen and W. Richtering, _The Journal of Physical Chemistry B_, 2011, **115**, 3804-3810. * Lopez-Leon and Fernandez-Nieves [2007] T. Lopez-Leon and A. Fernandez-Nieves, _Physical Review E_, 2007, **75**, 011801. * Lopez-Leon et al. [2006] T. Lopez-Leon, J. L. Ortega-Vinuesa, D. Bastos-Gonzalez and A. Elaissari, _The Journal of Physical Chemistry B_, 2006, **110**, 4629-4636. * Romeo et al. [2010] G. Romeo, A. Fernandez-Nieves, H. M. Wyss, D. Acierno and D. A. Weitz, _Advanced Materials_, 2010, **22**, 3441-3445. * Sessoms et al. [2009] D. A. Sessoms, I. Bischofberger, L. Cipelletti and V. Trappe, _Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences_, 2009, **367**, 5013-5032. * Philippe et al. [2018] A.-M. Philippe, D. Truzzollillo, J. Galvan-Myoshi, P. Dieudonne-George, V. Trappe, L. Berthier and L. Cipelletti, _Physical Review E_, 2018, **97**, 040601. * Fussell et al. [2019] S. L. Fussell, K. Bayliss, C. Coops, L. Matthews, W. Li, W. H. Briscoe, M. A. Faers, C. P. Royall and J. S. van Duijneveldt, _Soft Matter_, 2019, **15**, 8578-8588. * Senff and Richtering [1999] H. Senff and W. 
Richtering, _The Journal of Chemical Physics_, 1999, **111**, 1705-1711. * Conley et al. [2019] G. M. Conley, C. Zhang, P. Aebischer, J. L. Harden and F. Scheffold, _Nature Communications_, 2019, **10**, 2436. * Del Monte et al. [2021] G. Del Monte, D. Truzzollillo, F. Camerin, A. Ninarello, E. Chauveau, L. Tavagnacco, N. Gnan, L. Rovigatti, S. Senato and E. Zaccarelli, _Proceedings of the National Academy of Sciences_, 2021, **118**, e2109560118. * Degiorgio et al. [1991] V. Degiorgio, F. Mantegazza and R. Piazza, _Europhysics Letters (EPL)_, 1991, **15**, 75-80. * Shi et al. [2013] L. Shi, F. Carn, F. Boue, G. Mosser and E. Buhler, _Soft Matter_, 2013, **9**, 5004. * Mattison et al. [1998] K. W. Mattison, P. L. Dubin and I. J. Brittain, _The Journal of Physical Chemistry B_, 1998, **102**, 3830-3836. * Burke and Barrett [2003] S. E. Burke and C. J. Barrett, _Biomacromolecules_, 2003, **4**, 1773-1783. * Homchaudhuri and Swaminathan [2001] L. Homchaudhuri and R. Swaminathan, _Chemistry Letters_, 2001, **30**, 844-845. * Stagi et al. [2022] L. Stagi, R. Farris, L. de Villiers Engelbrecht, F. Mocci, C. Maria Carbonaro and P. Innocenzi, _Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy_, 2022, **283**, 121717. * Sennato et al. [2008] S. Sennato, F. Bordi, C. Cametti, C. Marianecci, M. Carafa and M. Cametti, _The Journal of Physical Chemistry B_, 2008, **112**, 3720-3727. * Milyaeva et al. [2017] O. Milyaeva, G. Gochev, G. Loglio, R. Miller and B. Noskov, _Colloids and Surfaces A: Physicochemical and Engineering Aspects_, 2017, **532**, 108-115. * Elancheliyan et al. [2002] R. Elancheliyan, G. Del Monte, E. Chauveau, S. Sennato, E. Zaccarelli and D. Truzzollillo, _Macromolecules_, 2022, **55**, 7526-7539. * Pei et al. [2004] Y. Pei, J. Chen, L. Yang, L. Shi, Q. Tao, B. Hui and J. Li, _Journal of Biomaterials Science, Polymer Edition_, 2004, **15**, 585-594. * Al-Manasir et al. [2009] N. Al-Manasir, K. Zhu, A.-L. Kjoniksen, K. D. Knudsen, G. 
Karlsson and B. Nystrom, _The Journal of Physical Chemistry B_, 2009, **113**, 11115-11123. * Frisken [2001] B. J. Frisken, _Applied Optics_, 2001, **40**, 4087. * Provencher [1982] S. W. Provencher, _Computer Physics Communications_, 1982, **27**, 213-227. * Guinier [1955] A. Guinier, _Small angle scattering of X-rays_, Wiley New York, 1955, p. 276. * Gasser et al. [2014] U. Gasser, J. S. Hyatt, J.-J. Lietor-Santos, E. S. Herman, L. A. Lyon and A. Fernandez-Nieves, _The Journal of Chemical Physics_, 2014, **141**, 034901. * Clara-Rahola et al. [2012] J. Clara-Rahola, A. Fernandez-Nieves, B. Sierra-Martin, A. B. South, L. A. Lyon, J. Kohlbrecher and A. Fernandez Barbero, _The Journal of Chemical Physics_, 2012, **136**, 214903. * Bellmann et al. [2019] C. Bellmann, A. Caspari, C. Moitzi, T. Luxbacher, M. Schaffler and M. Stintz, _Dynamic and Electrophoretic Light Scattering: Guidelines for Particle-size Analysis and Zeta-Potential Determination_, Anton Paar, 2019. * Daly and Saunders [2000] E. Daly and B. R. Saunders, _Physical Chemistry Chemical Physics_, 2000, **2**, 3187-3193. * Truzzollillo et al. [2009] D. Truzzollillo, F. Bordi, C. Cametti and S. Sennato, _Physical Review E_, 2009, **79**, 011804. * De et al. [2015] R. De, D. Ray and B. Das, _RSC Advances_, 2015, **5**, 54890-54898. * Karpov et al. [2016] G. V. Karpov, E. S. Vasiliev, I. I. Morozov, S. V. Savilov, N. E. Strokova and V. V. Lunin, _International Journal of Chemical Kinetics_, 2016, **48**, 442-448. * Hu et al. [2020] Q. Hu, Y. Liang, H. Zhao, H. Yang and X. Zhu, _Journal of Molecular Liquids_, 2020, **318**, 114313. * Wang and Milstein [2015] H. Wang and J. N. Milstein, _PLOS ONE_, 2015, **10**, e0142277. * Jin et al. [2014] X. Jin, L. Leclercq, N. Sisavath and H. Cottet, _Macromolecules_, 2014, **47**, 5320-5327. * Ninarello et al. [2019] A. Ninarello, J. J. Crassous, D. Paloli, F. Camerin, N. Gnan, L. Rovigatti, P. Schurtenberger and E. Zaccarelli, _Macromolecules_, 2019, **52**, 7584-7592. 
* Stieger et al. [2004] M. Stieger, W. Richtering, J. S. Pedersen and P. Lindner, _The Journal of Chemical Physics_, 2004, **120**, 6197-6206. * Rasmussen et al. [2004] M. Rasmussen, A. Routh and B. Vincent, _Langmuir_, 2004, **20**, 3536-3542. * Saunders and Vincent [1996] B. R. Saunders and B. Vincent, _Journal of the Chemical Society, Faraday Transactions_, 1996, **92**, 3385-3389. * [69] M. J. Snowden and B. Vincent, _Journal of the Chemical Society, Chemical Communications_, 1992, 1103. * [70] X. L. Chu, A. D. Nikolov and D. T. Wasan, _Langmuir_, 1996, **12**, 5004-5010. * [71] S. Minami, A. Yamamoto, S. Oura, T. Watanabe, D. Suzuki and K. Urayama, _Journal of Colloid and Interface Science_, 2020, **568**, 165-175. * [72] D. Velegol and P. K. Thwar, _Langmuir_, 2001, **17**, 7687-7693. * [73] J. Gao and C. Wu, _Macromolecules_, 1997, **30**, 6873-6876. * [74] R. R. Netz and J.-F. Joanny, _Macromolecules_, 1999, **32**, 9026-9040. * [75] D. Truzzolillo, F. Bordi, F. Sciortino and C. Cametti, _The European Physical Journal E_, 2009, **29**, 229-237.

# Supplementary Material: Impact of polyelectrolyte adsorption on the rheology of concentrated Poly(N-isopropylacrylamide) microgel suspensions

Rajam Elancheliyan, Edouard Chauveau, Domenico Truzzolillo ([email protected])

Laboratoire Charles Coulomb, UMR 5221, CNRS-Universite de Montpellier, F-34095 Montpellier, France. Phone: +33 (0)467 143589

## 1 Light scattering

We report in figure 1 selected intensity autocorrelation functions (\(g_{2}(t)-1\)) and the time-averaged scattered intensity \(I/I_{0}\) as a function of the squared modulus of the scattering vector \(q^{2}\) at different temperatures. 
The lag time \(t\) (left panel) is normalized by \(\eta_{s}/k_{B}T\) to filter out the trivial speeding up of the dynamics due to the temperature dependence of the solvent viscosity \(\eta_{s}\) and the thermal energy \(k_{B}T\). \(I_{0}\) is obtained by fitting \(I(q)\) to the Guinier equation (eq. 4 of the main text). The autocorrelation functions and the scattered intensity data have been fitted by using equations 3 and 4, respectively (section 2.5 of the main manuscript), to extract the hydrodynamic radius (\(R_{H}\)) and the gyration radius (\(R_{g}\)) as a function of temperature.

Figure 1: Autocorrelation functions \(g_{2}-1\) (left panel) and normalized scattered intensity \(I/I_{0}\) vs \(q^{2}\) (right panel) at different temperatures as indicated in the panels. Red solid (black dashed) lines are cumulant (Guinier) fits according to equations 3 and 4 of the main manuscript for \(g_{2}(t)-1\) and \(I/I_{0}\) respectively.

## Viscosimetry

Figure 2 shows the viscosity ratio \(\eta/\eta_{s}\) for very dilute suspensions of bare microgels in the range 4.68 \(\cdot\) 10\({}^{-5}\)\(<\)\(c\)(wt/wt) \(<\) 1.5 \(\cdot\) 10\({}^{-3}\). The proportionality constant \(k\) between the generalized volume fraction \(\varphi\) and the mass fraction \(c\) of microgels has been obtained, as detailed in the main text, via a linear fit of the data to the Einstein equation.

## Rheology of bare microgel suspensions

The rheology of bare microgel suspensions was investigated as a function of \(\varphi\) at T=20 \({}^{\circ}\)C to determine precisely the rheological state of the suspension in which PEs are progressively added. Figure 3 shows dynamic frequency (a) and strain (b) sweep tests performed in the range \(0.89\leq\varphi\leq 1.57\). 
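The Guinier fit mentioned above amounts to a linear regression of \(\ln I\) against \(q^{2}\). The following is a minimal sketch of that procedure; the values of \(I_{0}\) and \(R_{g}\) below are hypothetical placeholders standing in for the measured data, not the paper's numbers.

```python
import numpy as np

# Guinier law (eq. 4 of the main text): I(q) = I0 * exp(-q^2 Rg^2 / 3), valid for q*Rg <~ 1.
rg_true = 120.0                     # hypothetical gyration radius [nm]
i0 = 1.0                            # hypothetical forward-scattering intensity
q = np.linspace(2e-3, 8e-3, 20)     # scattering vectors [nm^-1], so that q*Rg <= 0.96
intensity = i0 * np.exp(-(q * rg_true) ** 2 / 3.0)

# Linearize: ln I = ln I0 - (Rg^2 / 3) q^2, then fit ln I against q^2.
slope, intercept = np.polyfit(q ** 2, np.log(intensity), 1)
rg_fit = np.sqrt(-3.0 * slope)      # recovered gyration radius
i0_fit = np.exp(intercept)          # recovered forward intensity I0
```

On real data the fit window must be restricted to the Guinier regime \(qR_{g}\lesssim 1\), as done here by construction.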
All samples show a solid-like response and a typical glassy behavior for \(\varphi\geq 1.07\), with \(G^{\prime}>G^{\prime\prime}\) over about 3 decades in frequency and a shallow minimum of \(G^{\prime\prime}\) [1, 2]. Between \(\varphi\)=1.07 and \(\varphi\)=0.89 we observe a large drop of both moduli, which flags the proximity to the rheological glass transition. The DSS tests confirm such a scenario, with all the suspensions showing nearly strain-independent first-harmonic moduli at low strains before non-linear behavior appears, with \(G^{\prime\prime}(\gamma_{0})\) reaching a maximum and then declining for \(\varphi\geq 1.07\). The maximum of \(G^{\prime\prime}(\gamma_{0})\) is absent for \(\varphi\)=0.89, pointing to an important reduction of the dissipative processes involved in the yielding transition. In all cases a crossover between the two first-harmonic moduli occurs and the samples yield under oscillatory strain. The two insets show the storage modulus \(G_{p}\)=\(G^{\prime}\)(\(\omega\)=1 rad/s) (a) and the yield stress \(\sigma_{y}\) (b) at which the crossover \(G^{\prime}(\gamma_{0})=G^{\prime\prime}(\gamma_{0})\) occurs as a function of \(\varphi\). The two quantities show an affine behavior, with a drop of their values occurring between \(\varphi\)=1.07 and \(\varphi\)=0.89. Pellet and Cloitre [3] have attributed such a sharp change to the passage from a thermal to a jammed glass of microgels, with the former being a solid whose elasticity is dominated by entropic caging, while the latter is ruled by contact forces between particles and particle deformability. Therefore the microgel concentration characterizing the PE-microgel mixtures (\(\varphi=1.5\)) corresponds to a jammed glass at T=20 \({}^{\circ}\)C. We show in the main manuscript (section 3.2) that at this number density microgels form a gel when the suspension is heated up to T=40 \({}^{\circ}\)C.

Figure 2: Viscosity ratio \(\eta/\eta_{s}\) for very dilute suspensions of microgels. The straight line is a linear fit of the data to the Einstein equation.

Figure 3: Storage modulus \(G^{\prime}\) (solid symbols) and loss modulus \(G^{\prime\prime}\) (open symbols) as a function of the oscillatory frequency \(\omega\) (a) and respective first-harmonic moduli as a function of the strain amplitude \(\gamma_{0}\) at \(\omega=1\) rad/s (b) of concentrated PNIPAm microgel suspensions at T=20 \({}^{\circ}\)C and different volume fractions as indicated in the panels. The insets show the plateau modulus \(G_{p}\) (a) and the yield stress \(\sigma_{y}\) (b) as a function of \(\varphi\).

## Ageing of bare microgel suspensions

Prior to mixture preparation we performed a dynamic time sweep experiment to evaluate the aging of the bare concentrated microgel suspension (\(\varphi=1.5\), \(\xi=0\)) over a time approximately equal to the duration of one whole experiment (t\(\approx 2500\) s). Figure 4 shows the normalized storage and loss moduli (\(G^{\prime}/G^{\prime}_{t=0}\), \(G^{\prime\prime}/G^{\prime\prime}_{t=0}\)) of the sample at \(\omega=2\) rad/s. The moduli did not show any remarkable evolution within this time window. This excludes that the changes of the moduli observed for the mixtures are due to a different age of the pure microgel system.

## Mobility

We report below the normalized mobility \(\mu\eta/\epsilon\) for both the pure PE suspensions (Figure 5) and the PE-microgel mixtures (Figure 6). The data are the same shown in figures 4 and 7 of the main manuscript. For pure PE suspensions the normalized mobility does not show a visible trend for increasing temperature, indicating that the observed variation in absolute mobility is due mainly to the reduction of the solvent viscosity going from 20 \({}^{\circ}\)C to 50 \({}^{\circ}\)C. We recall that in this temperature range the viscosity of water \(\eta_{s}\) decreases by a factor \(\approx\)1.85, while the relative permittivity \(\varepsilon\) decreases by a factor \(\approx\) 0.86. 
By contrast the PE-microgel mixtures maintain the trend shown by the mobility \(\mu\) (figure 7 of the main text), hence pointing to an important change of the charge density of the complexes upon varying temperature.

\begin{table} \begin{tabular}{l c c} \hline \hline **Sample code** & \(\mu\)**(T=20 \({}^{\circ}\)C)** & \(\mu\)**(T=40 \({}^{\circ}\)C)** \\ \hline M (No PE) & -1.52 \(\pm\)0.10 & -3.20\(\pm\) 0.16 \\ PLL (\(C_{PE}\) =1.25 mg/ml) & 3.46 \(\pm\)0.28 & 4.40\(\pm\) 0.66 \\ M-PLL (\(C_{PE}\) =1.25 mg/ml, \(\xi\) =0.5691) & 2.67\(\pm\) 0.16 & 5.94\(\pm\)0.19 \\ PSS (\(C_{PE}\) = 1.75 mg/ml) & -3.17\(\pm\)0.44 & -4.62\(\pm\)0.36 \\ M-PSS (\(C_{PE}\) = 1.75 mg/ml, \(\xi\) =0.645) & -1.30\(\pm\)0.18 & -4.43\(\pm\)0.19 \\ PDADMAC (\(C_{PE}\) = 1.38 mg/ml) & 2.42\(\pm\)0.71 & 3.60\(\pm\)0.40 \\ M-PDADMAC (\(C_{PE}\) = 1.38 mg/ml, \(\xi\) =0.6625) & 1.42\(\pm\)0.12 & 4.64\(\pm\)0.18 \\ \hline \hline \end{tabular} \end{table}

Table 1: Mobility in \(\mu\)m cm V\({}^{-1}\) s\({}^{-1}\) of PE solutions and PE-microgel mixtures at T=20 \({}^{\circ}\)C and T=40 \({}^{\circ}\)C for the highest PE concentrations. Data are shown in figure 4 and figure 7 of the main manuscript. Data for pure microgel suspensions are also given for reference.

Figure 4: Normalized viscoelastic moduli (\(\omega\) =2 rad/s) of the bare microgel suspension (\(\varphi\) = 1.5, \(\xi\) = 0) at T=20\({}^{\circ}\)C and T=40\({}^{\circ}\)C as indicated in the panels.

Figure 5: Normalized electrophoretic mobility of PLL at \(C_{PE}\)=1.25 mg/ml (left) and \(C_{PE}\)=28 mg/ml (right), PSS at \(C_{PE}\)=1.75 (left) and \(C_{PE}\)=5.1 (right), PDADMAC at \(C_{PE}\)=1.38 mg/ml (left) and \(C_{PE}\)=35 mg/ml (right) as a function of temperature.

Figure 6: Normalized electrophoretic mobility of a) M-PLL, b) M-PSS and c) M-PDADMAC complexes at varying PE concentrations as a function of temperature. The data contain complementary samples that were not characterized via rheology. 
The colored arrows are a guide for the eye and point to the mobility variation from \(\xi\) =0 up to the maximum PE concentration at \(T>T_{c}^{H}\).
2304.04945
On some phase equilibrium features of charged black holes in flat spacetime via Rényi statistics
Motivated by the nonextensive nature of entropy in gravitational context and the Gauge/Gravity duality, black hole thermodynamics has been attracting intense emphasis in the literature. Along the present work, we investigate some features of the phase structure and critical phenomena of the 4-dimensional charged black holes in asymptotically flat spacetime within the formalism of R\'enyi statistics. First, we explore the extended phase space via the R\'enyi statistics approach. Concretely, based on the modified version of the Smarr formula, we recall the equal-area law to remove the oscillatory non-physical region in the $P_R-V$ and $T_R-S_R$ planes. Then, the coexistence curves are determined, as well as the latent heat of phase change. Moreover, we prove that the critical exponent describing the behavior of the order parameter near the critical point is $\frac{1}{2}$, which is consistent with Landau's theory of continuous phase transition. Lastly, we apply the Hamiltonian approach to R\'enyi thermodynamics which provides a new and solid mathematical basis for the extension of phase space and puts more insight into an expected and profound possible connection between the nonextensivity R\'enyi parameter $\lambda$ and the cosmological constant $ \Lambda$.
F. Barzi, H. El Moumni, K. Masmar
2023-04-11T03:11:05Z
http://arxiv.org/abs/2304.04945v1
# On some phase equilibrium features of charged black holes in flat spacetime via Renyi statistics

###### Abstract

Motivated by the nonextensive nature of entropy in gravitational contexts and the Gauge/Gravity duality, black hole thermodynamics has been attracting intense emphasis in the literature. In the present work, we investigate some features of the phase structure and critical phenomena of 4-dimensional charged black holes in asymptotically flat spacetime within the formalism of Renyi statistics. First, we explore the extended phase space via the Renyi statistics approach. Concretely, based on the modified version of the Smarr formula, we recall the equal-area law to remove the oscillatory non-physical region in the \(P_{R}-V\) and \(T_{R}-S_{R}\) planes. Then, the coexistence curves are determined, as well as the latent heat of phase change. Moreover, we prove that the critical exponent describing the behavior of the order parameter near the critical point is \(\frac{1}{2}\), which is consistent with Landau's theory of continuous phase transitions. Lastly, we apply the Hamiltonian approach to Renyi thermodynamics, which provides a new and solid mathematical basis for the extension of phase space and puts more insight into an expected and profound possible connection between the nonextensivity Renyi parameter \(\lambda\) and the cosmological constant \(\Lambda\). 
###### Contents

* 1 Introduction
* 2 Charged black hole thermodynamics in Renyi statistics
* 3 The equal-area law of the asymptotically flat charged black hole in Renyi formalism
  * 3.1 The construction of equal-area law in \(P_{R}-V\) diagram
  * 3.2 The construction of equal-area law in \(T_{R}-S_{R}\) diagram
* 4 Two-phase coexistence curves, latent heat, and the microscopic explanation of the phase change in Renyi formalism
* 5 Hamiltonian approach to Renyi's thermodynamics of charged-flat black hole
* 6 Conclusion

## 1 Introduction

Black holes are nowadays the leading astrophysical and theoretical laboratories for testing general relativity as well as theories of modified and possibly quantum gravity. Additionally, the most fascinating achievement of modern physics is the bridge between thermodynamics and gravity, which plays a key role in understanding more deeply the nature of black holes [1, 2]. In particular, the thermodynamic aspects of Anti-de Sitter (\(AdS\)) spacetimes have been widely considered in the literature [3, 4, 5, 6]; treating the cosmological constant \(\Lambda\) as a thermodynamical variable opens new windows in this area, and new phenomena familiar from everyday thermodynamics emerge: enthalpy, reentrant phase transitions, triple points, and the Carnot cycle have all now entered the language and structure of the subject, broadening it to what is called Black Hole Chemistry [7, 8, 9, 10, 11, 12, 13, 14, 15]. Perhaps the only intriguing exception to this similarity is the non-extensive nature of the Bekenstein-Hawking entropy of black holes, which is proportional to the surface area of the event horizon rather than to the volume. 
Furthermore, in the strong-gravity regime and in the vicinity of black holes, the condition of negligible long-range interactions assumed in standard statistical descriptions breaks down; consequently, the usual definition of mass and other extensive quantities is not possible locally, which pushes us to go beyond the standard Gibbs-Boltzmann (\(GB\)) statistical proposal. In other words, the black hole entropy \(S_{BH}\) should encode the black hole information with its non-local and non-extensive nature. Hence, a non-Boltzmann statistics is required. Abe introduced in [16] the most general non-additive entropy composition rule as \[H_{\lambda}\left(S_{12}\right)=H_{\lambda}\left(S_{1}\right)+H_{\lambda}\left(S_{2 }\right)+\lambda H_{\lambda}\left(S_{1}\right)H_{\lambda}\left(S_{2}\right), \tag{1}\] in which \(H_{\lambda}\) stands for a differentiable function of the entropy \(S\), \(\lambda\) denotes a real constant parameter, and \(S_{1}\), \(S_{2}\) and \(S_{12}\) are the entropies of the subsystems and of the total system, respectively. Next, Biro and Van extended this formula to non-homogeneous systems as well [17] and elaborated a formalism to derive the most general functional form of the non-additive entropy composition rules that satisfy the familiar relations of standard thermodynamics, especially the zeroth law, \[L(S)=\frac{1}{\lambda}\ln[1+\lambda H_{\lambda}(S)], \tag{2}\] together with the compatible additive composition feature \[L(S_{12})=L(S_{1})+L(S_{2}). \tag{3}\] Moreover, the temperature function consistent with the zeroth law reads \[\frac{1}{T}=\frac{\partial L(S(E))}{\partial E}, \tag{4}\] under the assumption of additive energy composition. Within this proposal, Alfred Renyi in 1959 gave a well-defined entropy function which also obeys both the equilibrium and the zeroth-law compatibility requirements of thermodynamics. 
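As an illustrative aside (not part of the original text), one can see how the prescription (2)-(4) acts in a gravitational setting by taking \(H_{\lambda}(S)=S\) and the Schwarzschild Bekenstein-Hawking entropy \(S_{BH}=4\pi M^{2}\) (in units \(G=c=\hbar=k_{B}=1\)):

```latex
L(S_{BH}) = \frac{1}{\lambda}\ln\!\left(1+4\pi\lambda M^{2}\right),
\qquad
\frac{1}{T_{R}} = \frac{\partial L(S_{BH}(M))}{\partial M}
               = \frac{8\pi M}{1+4\pi\lambda M^{2}},
```

so that \(T_{R}=\left(1+4\pi\lambda M^{2}\right)/(8\pi M)\), which smoothly reduces to the Hawking temperature \(T_{H}=1/(8\pi M)\) in the extensive limit \(\lambda\to 0\).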
According to [18], the Renyi entropy is defined as \[S_{R}=\frac{1}{1-q}\ln\sum_{i}p_{i}^{q}. \tag{5}\] It is obvious that \(H_{\lambda}(S)=S\) and \(\lambda=1-q\) in Eq.(2) if the original entropy functions follow the non-additive composition rule \[S_{12}=S_{1}+S_{2}+\lambda S_{1}S_{2}. \tag{6}\] This expression is known as the Tsallis composition rule, \(\lambda\in\mathbb{R}\) being the nonextensivity parameter, and the Tsallis entropy is defined as \(S_{T}=\frac{1}{1-q}\sum_{i}(p_{i}^{q}-p_{i})\)[19]. By recalling the formal logarithm, one can relate the Tsallis formula to the Renyi entropy by \[S_{R}\equiv L(S_{T})=\frac{1}{1-q}\ln\left[1+(1-q)S_{T}\right]. \tag{7}\] Recovering the standard Boltzmann-Gibbs entropy, \(S_{BG}=-\sum p_{i}\ln p_{i}\), from the Tsallis or Renyi formula is an easy task, ensured by taking the vanishing limit of \(\lambda\) (\(\lambda\to 0\equiv q\to 1\)). Recently, the Renyi entropy has evinced very interesting features in the black hole thermodynamics context and has attracted special emphasis in the literature [20, 21, 22, 23, 24, 25, 26, 27, 28]. Within the Renyi statistics framework, the studies [22, 25] reveal that it is possible, with \(0<\lambda<1\), to obtain the small and large black hole branches in the grand canonical ensemble, while this cannot occur in the \(GB\) statistics approach. Moreover, they unveil that the Hawking-Page phase transition between thermal radiation and large black holes in asymptotically flat Reissner-Nordstrom (RN-flat) spacetime, and in the presence of a dilaton field, can happen and depends crucially on the nonextensivity parameter \(\lambda\). In the canonical ensemble, the black hole exhibits critical behavior, with a small/large black hole (SBH/LBH) first-order phase transition occurring when \(\lambda<\lambda_{c}\). Above the critical value \(\lambda_{c}\) of the Renyi parameter, this behavior disappears and the large black hole phase is the only one that remains possible. 
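Relation (7) and the Gibbs-Boltzmann limit can be checked numerically for any discrete probability distribution. The sketch below is illustrative only (the distribution `p` is arbitrary, not taken from the paper):

```python
import math

def renyi_entropy(p, q):
    """Renyi entropy, Eq. (5): S_R = ln(sum p_i^q) / (1 - q)."""
    return math.log(sum(pi**q for pi in p)) / (1.0 - q)

def tsallis_entropy(p, q):
    """Tsallis entropy: S_T = (sum p_i^q - 1) / (1 - q)."""
    return (sum(pi**q for pi in p) - 1.0) / (1.0 - q)

def formal_log(s_t, lam):
    """Formal logarithm of Eq. (7): L(S_T) = ln(1 + lam * S_T) / lam, with lam = 1 - q."""
    return math.log(1.0 + lam * s_t) / lam

p = [0.5, 0.3, 0.2]

# Eq. (7): S_R = L(S_T) holds identically for any q != 1.
q = 0.7
assert abs(renyi_entropy(p, q) - formal_log(tsallis_entropy(p, q), 1.0 - q)) < 1e-12

# Gibbs-Boltzmann limit: q -> 1 (lambda -> 0) recovers the Shannon entropy S_BG.
shannon = -sum(pi * math.log(pi) for pi in p)
assert abs(renyi_entropy(p, 1.0 - 1e-6) - shannon) < 1e-4
```

The first assertion is an algebraic identity, since \(1+(1-q)S_{T}=\sum_{i}p_{i}^{q}\); the second illustrates the smooth \(GB\) limit mentioned in the text.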
Strictly speaking, this thermal picture parallels that of charged and dilatonic black holes in \(AdS\) space within the standard \(GB\) statistics. It is well known from black hole phase-structure investigations that, once the requirement of thermodynamic equilibrium stability is imposed on an isothermal or isobaric process, the \(P-V\) or \(T-S\) curves describing the change of the system become discontinuous, and the system possesses a latent heat of phase transition when it crosses the two-phase coexistence line [29]. This discontinuity is associated with a thermodynamically unstable region where \(\partial P/\partial V>0\) or \(\partial T/\partial S<0\); this nonphysical situation is resolved by Maxwell's construction [30, 31, 32, 33, 34]. Besides, gauge/gravity duality states a correspondence between gauge and gravity theories. Since the gauge-theory side of the duality supports a standard thermodynamic treatment, it is natural to demand that standard thermodynamics be present on the gravity side as well. However, the minimal asymptotically Anti-de Sitter Reissner-Nordström (RN-AdS) black hole has a one-dimensional thermodynamic description in which the entropy is the only degree of freedom, and this does not match the multidimensional thermodynamical nature of most ordinary matter systems. The underlying reason is that, in a formulation with one free variable, it is impossible to distinguish between isentropic and isothermal thermodynamical transformations, and the fundamental notion of thermodynamic temperature is ill-defined. Therefore, by increasing the number of degrees of freedom of black hole systems one is led to a thermodynamics closer to the standard thermodynamics of matter systems. One may then ask how to extend black hole thermodynamics in a consistent way. One such prescription is the Hamiltonian approach [35, 36, 37] to thermodynamics. 
Indeed, investigating thermodynamics from the Hamiltonian point of view is the newest tool to probe the thermal laws of black holes and to extend their phase space in a mathematically sound manner compatible with the laws of black hole thermodynamics. Promoting the Rényi formalism to the Hamiltonian approach is therefore a natural next step. Motivated by these works, we intend herein to study some aspects of the phase equilibrium of charged asymptotically flat black holes within the Rényi formalism. Specifically, we implement Maxwell's construction in the phase portrait of RN-flat black holes through Rényi statistics to remove non-physical behaviors, in complete analogy with the Van der Waals (\(VdW\)) fluid and the RN-AdS black holes in the Gibbs-Boltzmann formalism. We then unveil the thermodynamical and Hamiltonian consequences of this implementation. The outline of the paper is as follows: In Sec.2 we give a short thermodynamical review of the asymptotically flat charged black hole in Rényi statistics. In Sec.3 we implement Maxwell's equal-area law for the RN-flat black hole within the Rényi formalism, first in the \(P_{R}-V\) diagram by replacing the oscillatory curves with isobaric plateaus, and then in the \(T_{R}-S_{R}\) diagram, where isothermal plateaus supplant the nonphysical part of the phase profile. Based on the results of this section, in Sec.4 we derive the coexistence curves \(P_{0}-T_{0}\) and the associated latent heat of the first-order phase transition. We then analyze the influence of each parameter on the latent heat and provide a microscopic interpretation of the phase transition through Landau's theory of continuous phase transitions and its symmetry-change argument. In Sec.5, we apply the Hamiltonian approach to the RN-flat black hole in Rényi thermodynamics to consistently extend its phase space. Finally, Sec.6 is devoted to the conclusion. 
## 2 Charged black hole thermodynamics in Renyi statistics The line element, describing the Reissner-Nordstrom black hole solution of mass \(M\) and charge \(Q\) can be written as \[ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^ {2}), \tag{8}\] the involved blackening function \(f(r)\) reads as \[f(r)=1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}. \tag{9}\] The horizon radius \(r_{h}\) is determined as the largest real root of \(f(r)\big{|}_{r=r_{h}}=0\) and leads to the following mass formula: \[M=\frac{r_{h}}{2}\left(1+\frac{Q^{2}}{r_{h}^{2}}\right), \tag{10}\] and the electric potential expression is obtained to be \[\Phi=\frac{Q}{r_{h}}=\frac{Q}{M+\sqrt{M^{2}-Q^{2}}}. \tag{11}\] According to [22, 25], the Renyi entropy \(S_{R}\) is the formal logarithm of Bekenstein-Hawking one \(S_{BH}\) taken as the Tsallis entropy \(S_{T}\) in Eq.(7) such as, \[S_{R}=\frac{1}{\lambda}\ln(1+\lambda S_{BH}). \tag{12}\] It's worth noting that in the limit of vanishing nonextensivity parameter \(\lambda\), we recover the usual Gibbs-Boltzmann statistics, \(S_{R}\stackrel{{\lambda\to 0}}{{\longrightarrow}}S_{BH}\). In the present paper we assume that \(\lambda\) is small and positive, \(0<\lambda<<1\), guided by the assumption that non-extensive effects are first order corrections to the classical extensive statistical mechanics. The Renyi entropy generalizes the Gibbs-Boltzmann entropy by taking into account non-extensive effects, this implies a correction to first order in \(\lambda\) to the RN-flat black hole temperature. The Renyi temperature \(T_{R}\) can be expressed as [22], \[T_{R}=\frac{1}{\partial S_{R}/\partial M} = T_{H}(1+\lambda S_{BH}) \tag{13}\] \[= \frac{(r_{h}^{2}-Q^{2})(1+\lambda\pi r_{h}^{2})}{4\pi r_{h}^{3}}. \tag{14}\] Where \(T_{H}=\frac{r_{h}^{2}-Q^{2}}{4\pi r_{h}^{3}}\) and \(S_{BH}=\pi r_{h}^{2}\) are the Hawking temperature and the Bekenstein-Hawking entropy of RN-flat black hole, respectively. 
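The defining relation \(T_{R}=(\partial S_{R}/\partial M)^{-1}\) in Eq.(13) can be checked numerically against the closed form Eq.(14). The sketch below (an illustration, not part of the paper) uses \(r_{h}(M,Q)=M+\sqrt{M^{2}-Q^{2}}\) from Eq.(10) and a central finite difference, with arbitrary illustrative values of \(M\), \(Q\) and \(\lambda\).

```python
import math

def r_h(M, Q):
    # Largest root of f(r) = 0 for the RN metric, Eqs.(9)-(10)
    return M + math.sqrt(M * M - Q * Q)

def S_R(M, Q, lam):
    # Renyi entropy, Eq.(12), with S_BH = pi r_h^2
    return math.log(1.0 + lam * math.pi * r_h(M, Q) ** 2) / lam

M, Q, lam = 1.0, 0.5, 0.01   # illustrative values
rh = r_h(M, Q)
T_H = (rh**2 - Q**2) / (4.0 * math.pi * rh**3)
T_R_closed = T_H * (1.0 + lam * math.pi * rh**2)   # Eqs.(13)-(14)

# T_R = (dS_R/dM)^(-1), Eq.(13), via a central finite difference
h = 1e-6
dS_dM = (S_R(M + h, Q, lam) - S_R(M - h, Q, lam)) / (2 * h)
T_R_numeric = 1.0 / dS_dM
print(T_R_closed, T_R_numeric)   # the two values agree
```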
Moreover, in the Rényi extended phase space [22], the nonextensivity parameter \(\lambda\) is related to the Rényi thermodynamic pressure and the electric potential \(\Phi\) by \[P_{R}=\frac{3\lambda(1-\Phi^{2})}{32}. \tag{15}\] In this framework, the conjugate quantity associated with the pressure is the thermodynamical volume \(V=\frac{4}{3}\pi r_{h}^{3}\); this assertion will be made mathematically more precise in section 5. Summarizing all these quantities, one can write the first law of Rényi thermodynamics and its associated generalized Smarr formula as \[dM = T_{R}dS_{R}+VdP_{R}+\Phi dQ. \tag{16}\] \[M = 2T_{R}S_{R}-2P_{R}V+\Phi Q. \tag{17}\] One may further assume that black holes possess their own micro-states carrying the degrees of freedom, with a specific volume \(v\) proportional to the event horizon radius [22], \[v=\frac{8}{3}r_{h}. \tag{18}\] Having presented some thermodynamical features of the RN-flat black hole, including the thermodynamical quantities associated with Rényi statistics, we pave the way for the study of the corresponding phase structure and of Maxwell's equal-area law in this black hole background via the non-Boltzmannian formalism. ## 3 The equal-area law of the asymptotically flat charged black hole in Renyi formalism In this section, we examine Maxwell's construction for the charged flat black hole. To do this, let us first recall that Maxwell developed the equal-area law to describe the experimental behavior of the equations of state of real fluids. The construction is usually carried out in the (pressure, volume) plane at constant temperature; it can, however, also be made in the (temperature, entropy) plane, this time at fixed pressure. 
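Before carrying out the construction, the relations of this section can be sanity-checked numerically. The sketch below (written for illustration, with arbitrary parameter values) evaluates the residual of the generalized Smarr formula, Eq.(17); since the Rényi quantities satisfy it only to first order in \(\lambda\), the residual should scale as \(\lambda^{2}\), i.e. drop by a factor of \(\sim 100\) when \(\lambda\) is reduced tenfold.

```python
import math

def smarr_residual(rh, Q, lam):
    """Residual M - (2 T_R S_R - 2 P_R V + Phi Q) of Eq.(17)."""
    M   = 0.5 * rh * (1.0 + Q**2 / rh**2)               # Eq.(10)
    S_R = math.log(1.0 + lam * math.pi * rh**2) / lam   # Eq.(12)
    T_R = (rh**2 - Q**2) * (1.0 + lam * math.pi * rh**2) / (4.0 * math.pi * rh**3)
    Phi = Q / rh                                        # Eq.(11)
    P_R = 3.0 * lam * (1.0 - Phi**2) / 32.0             # Eq.(15)
    V   = 4.0 * math.pi * rh**3 / 3.0
    return M - (2.0 * T_R * S_R - 2.0 * P_R * V + Phi * Q)

rh, Q = 2.0, 0.5             # illustrative values
r1 = smarr_residual(rh, Q, 1e-3)
r2 = smarr_residual(rh, Q, 1e-4)
print(r1 / r2)               # ~100: residual is O(lam^2), Smarr holds to first order
```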
To derive the equation of state in the \(P_{R}-V\) diagram, we first invert the Rényi temperature expression given by Eq.(14), substituting \(\lambda\) from Eq.(15) and the electric potential \(\Phi\) from Eq.(11), to get \[P_{R}=\frac{3T_{R}}{8r_{h}}-\frac{3}{32\pi r_{h}^{2}}+\frac{3Q^{2}}{32\pi r_{h}^{4}}. \tag{19}\] Then, through the expressions of the specific volume, \(v=\frac{8}{3}r_{h}\), and of the thermodynamical volume, \(V=\frac{4}{3}\pi r_{h}^{3}\), in terms of \(r_{h}\), we arrive at the desired equations of state \[P_{R}=\frac{T_{R}}{v}-\frac{2}{3\pi v^{2}}+\frac{128Q^{2}}{27\pi v^{4}}\quad \Leftrightarrow\quad P_{R}=\frac{6^{\frac{2}{3}}\sqrt[3]{\pi}T_{R}}{8V^{\frac{1}{3}}}-\frac{\sqrt[3]{6}}{16\sqrt[3]{\pi}V^{\frac{2}{3}}}+\frac{6^{\frac{2}{3}}\sqrt[3]{\pi}Q^{2}}{24V^{\frac{4}{3}}}. \tag{20}\] The first form of the equation of state associated with the charged asymptotically flat black hole in the \(P-v\) diagram is exhibited by the isotherms in Fig.1; in the thermodynamically unstable states associated with \(\partial P_{R}/\partial v>0\), the system would spontaneously expand or contract without bound. Such a situation also occurs for the Van der Waals (\(VdW\)) equation and in a variety of black-hole configurations in the extended phase space [38, 39, 30, 31, 32], where it is cured by invoking Maxwell's equal-area law. Figure 1: _Isotherms of the charged asymptotically flat black hole in \(P_{R}-v\) diagram with an electric charge \(Q=1\) in Rényi thermodynamics. 
The green thick curve is the critical isotherm at \(T_{R}=T_{c}\); below it, the unphysical oscillating behavior appears._ Fig.1 exhibits a critical behavior whereby the black hole undergoes a second-order phase transition at a critical temperature \(T_{c}=\frac{\sqrt{6}}{18\pi Q}\), whose analytical expression can be derived by solving the system \[\left(\frac{\partial P_{R}}{\partial v}\right)_{T_{R},Q}=\left(\frac{\partial^{2}P_{R}}{\partial v^{2}}\right)_{T_{R},Q}=0. \tag{21}\] In what follows, we extend Maxwell's construction formalism to Rényi statistics by studying the phase structure of the charged asymptotically flat black hole. ### The construction of equal-area law in \(P_{R}-v\) diagram Maxwell's construction in the \(P_{R}-V\) plane is based on the Helmholtz free energy \(F_{R}\) being a state function of the black hole. In the left panel of Fig.2, the black hole undergoes a reversible cyclic transformation along an isotherm \(T_{R}<T_{c}\), going from the liquid-state point \((P_{0},V_{l})\) to the gas-state point \((P_{0},V_{g})\) along the red dashed curve and coming back along the blue line, so that \[\oint dF_{R}=0. \tag{22}\] The differential of \(F_{R}\) at constant charge \(Q\) is given by \[dF_{R}=-S_{R}dT_{R}-P_{R}dV. \tag{23}\] Along the red dashed curve, the differential of \(F_{R}\) takes the form \(dF_{R}=-P_{R}dV\), while on the blue line, where \(P_{R}=P_{0}\), it reduces to \(dF_{R}=-P_{0}dV\). Thus, from Eq.(22) we write \[\oint dF_{R}=-\int_{V_{l}}^{V_{g}}P_{R}\,dV-\int_{V_{g}}^{V_{l}}P_{0}\;dV=0. \tag{24}\] This gives the form of Maxwell's equal-area law in the \(P_{R}-V\) diagram, \[P_{0}(V_{g}-V_{l})=\int_{V_{l}}^{V_{g}}P_{R}\;dV, \tag{25}\] expressing the equality of the areas delimited by the blue line and the dashed red curve shown in the left panel of Fig.2. 
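As a numerical illustration (not part of the paper), the critical point quoted above can be checked directly against the equation of state Eq.(20): at \(T_{c}=\sqrt{6}/(18\pi Q)\) and \(v_{c}=8r_{c}/3\) with \(r_{c}=\sqrt{6}Q\), both derivatives in Eq.(21) vanish and the pressure equals \(1/(128\pi Q^{2})\).

```python
import math

Q = 1.0  # illustrative charge

def P(T, v):
    # Equation of state, Eq.(20), in terms of the specific volume v = 8 r_h / 3
    return T / v - 2.0 / (3.0 * math.pi * v**2) + 128.0 * Q**2 / (27.0 * math.pi * v**4)

T_c = math.sqrt(6.0) / (18.0 * math.pi * Q)
v_c = 8.0 * math.sqrt(6.0) * Q / 3.0        # v_c = 8 r_c / 3 with r_c = sqrt(6) Q

h = 1e-5
dP  = (P(T_c, v_c + h) - P(T_c, v_c - h)) / (2 * h)
d2P = (P(T_c, v_c + h) - 2 * P(T_c, v_c) + P(T_c, v_c - h)) / h**2
P_c = P(T_c, v_c)

print(dP, d2P)                               # both ~0: inflection point, Eq.(21)
print(P_c * 128 * math.pi * Q**2)            # ~1: P_c = 1/(128 pi Q^2)
```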
In the right panel of Fig.2 we have plotted the isotherms in the \(P_{R}-v\) plane according to the equation of state Eq.(20) for a charge \(Q=1\). One clearly sees the thermodynamically unstable regions below the critical isotherm \(T=T_{c}\), where the compressibility of the black hole phase is negative (\(\partial P_{R}/\partial V>0\)). This behavior being unphysical, Maxwell's construction aims at replacing these regions by horizontal lines along which temperature and pressure are constant. Moreover, below a given temperature, negative pressures appear; although Maxwell's construction can remove some negative-pressure zones, others persist, as seen from Fig.1 for temperatures much lower than the critical temperature \(T_{c}\). For a \(VdW\) fluid, negative pressure is associated with metastable states of the liquid phase, while in our case these regions of instability can be viewed as metastable states in which the black hole is stretched under a background tension. Keeping the charge \(Q\) constant, on an arbitrary isotherm with \(T_{R}=T_{0}\leq T_{c}\) the two points \((P_{0},V_{l})\) and \((P_{0},V_{g})\) satisfy Maxwell's equal-area law Eq.(25): \[P_{0}(V_{g}-V_{l})=\int_{V_{l}}^{V_{g}}P_{R}\:dV=\int_{r_{l}}^{r_{g}}4\pi r_{h}^{2}P_{R}(r_{h})\:dr_{h}. \tag{26}\] Here \(V_{l}=\frac{4\pi}{3}r_{l}^{3}\) is associated with the small black hole phase and \(V_{g}=\frac{4\pi}{3}r_{g}^{3}\) with the large one. Integrating with respect to \(r_{h}\) for convenience, we obtain the coexistence pressure \(P_{0}\) as \[P_{0}=\frac{9\left(Q^{2}+2\pi T_{0}r_{g}^{2}r_{l}+2\pi T_{0}r_{g}r_{l}^{2}-r_{g}r_{l}\right)}{32\pi r_{g}r_{l}\left(r_{g}^{2}+r_{g}r_{l}+r_{l}^{2}\right)}. 
\tag{27}\] From Eq.(19), we obtain for each of these states \[P_{0}=\frac{3Q^{2}}{32\pi r_{l}^{4}}-\frac{3}{32\pi r_{l}^{2}}+\frac{3T_{0}}{8r_{l}}, \tag{28}\] \[P_{0}=\frac{3Q^{2}}{32\pi r_{g}^{4}}-\frac{3}{32\pi r_{g}^{2}}+\frac{3T_{0}}{8r_{g}}. \tag{29}\] Figure 2: _The simulated phase transition and the boundary of the two-phase coexistence on the basis of the isotherms in the \(P_{R}-V\) diagram for the charged black hole in flat spacetime within the Rényi statistics approach. **Left panel:** demonstration of the Maxwell construction in the \(P_{R}-V\) plane; the blue thick line is calculated such that the two shaded areas are equal, eliminating the unphysical behavior represented by the red dotted line. **Right panel:** the black horizontal lines are isotherms replacing the unphysical oscillations; the bell-shaped black dashed line delimits the coexistence region. The critical isotherm is shown as the thick green line, above which the supercritical phase dominates._ Summing Eqs.(28) and (29) we get \[2P_{0}=\frac{3Q^{2}}{32\pi r_{l}^{4}}+\frac{3Q^{2}}{32\pi r_{g}^{4}}+\frac{3T_{0}}{8r_{l}}-\frac{3}{32\pi r_{l}^{2}}-\frac{3}{32\pi r_{g}^{2}}+\frac{3T_{0}}{8r_{g}} \tag{30}\] \[\implies 2P_{0}=\frac{3Q^{2}\left(1+\gamma^{4}\right)}{32\pi\gamma^{4}r_{g}^{4}}-\frac{3\left(1+\gamma^{2}\right)}{32\pi\gamma^{2}r_{g}^{2}}+\frac{3T_{0}\left(1+\gamma\right)}{8\gamma r_{g}}, \tag{31}\] where we have introduced the ratio \(\gamma=\frac{r_{l}}{r_{g}}\). Furthermore, subtracting Eq.(29) from Eq.(28) gives \[0=T_{0}-\frac{Q^{2}(1-\gamma^{4})+\gamma^{2}r_{g}^{2}(\gamma^{2}-1)}{4\pi\gamma^{3}r_{g}^{3}\left(\gamma-1\right)}. \tag{32}\] Meanwhile, from Eq.(27), substituting \(r_{l}\) by \(\gamma r_{g}\) one obtains \[0=18\pi T_{0}\gamma r_{g}^{3}\left(\gamma+1\right)-32\pi P_{0}\gamma r_{g}^{4}\left(\gamma^{2}+\gamma+1\right)-9\gamma r_{g}^{2}+9Q^{2}. 
\tag{33}\] Next, injecting Eqs.(31) and (32) into Eq.(33), we get \[0=r_{g}^{2}\left(\gamma^{6}-4\gamma^{5}+6\gamma^{4}-4\gamma^{3}+\gamma^{2}\right)+Q^{2}\left(-\gamma^{6}+9\gamma^{4}-16\gamma^{3}+9\gamma^{2}-1\right). \tag{34}\] Solving Eq.(34) for \(r_{g}\) we obtain \[r_{g}(\gamma)=\frac{Q\sqrt{\gamma^{2}+4\gamma+1}}{\gamma}, \tag{35}\] which also leads to \(r_{l}\), \[r_{l}(\gamma)=Q\sqrt{\gamma^{2}+4\gamma+1}. \tag{36}\] Consequently, we get the volume of each state as \[V_{g}(\gamma)=\frac{4\pi Q^{3}\left(\gamma^{2}+4\gamma+1\right)^{\frac{3}{2}}}{3\gamma^{3}}\ \ \ \ \text{and}\ \ \ \ V_{l}(\gamma)=\frac{4\pi Q^{3}\left(\gamma^{2}+4\gamma+1\right)^{\frac{3}{2}}}{3}. \tag{37}\] Then, recalling Eq.(29), we derive the expression of \(P_{0}\) in terms of the ratio \(\gamma\) as \[P_{0}(\gamma)=\frac{9\gamma^{2}}{32\pi Q^{2}\left(\gamma^{2}+4\gamma+1\right)^{2}}. \tag{38}\] At this level, one notices that the critical point is attained in the limit \(\gamma\longrightarrow 1\); the critical radius \(r_{c}\), pressure \(P_{c}\), and volume \(V_{c}\) are thus found to be \[r_{c}=\sqrt{6}Q,\ \ \ P_{c}=\frac{1}{128\pi Q^{2}},\ \text{and}\ \ \ \ V_{c}=8\sqrt{6}\pi Q^{3}. \tag{39}\] These critical quantities are consistent with those in [22]. The Maxwell construction in the \(P_{R}-V\) diagram is depicted in the right panel of Fig.2 for different values of the ratio \(\gamma\). In this panel, the bell-shaped dashed black line is the so-called saturation line, and \(V_{l,g}\) denote the intersections of this curve with the isotherm determined by each value of the parameter \(\gamma\). Between these two values, \(V_{l}<V<V_{g}\), the black hole system is unstable; there is a phase transition between a small and a large black hole. 
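The parametric solution Eqs.(35), (36) and (38) can be verified against the equal-area law Eq.(26) itself. The sketch below (an independent numerical check, with illustrative values \(Q=1\), \(\gamma=0.4\)) confirms that the two states lie on one isotherm of Eq.(19) and that the area under the isotherm between them equals \(P_{0}(V_{g}-V_{l})\), using a hand-rolled composite Simpson rule.

```python
import math

Q, gamma = 1.0, 0.4          # illustrative values
u = gamma**2 + 4 * gamma + 1
r_l = Q * math.sqrt(u)                              # Eq.(36)
r_g = Q * math.sqrt(u) / gamma                      # Eq.(35)
P0  = 9 * gamma**2 / (32 * math.pi * Q**2 * u**2)   # Eq.(38)

def T_of_state(r):
    # Invert Eq.(19) for the temperature of a state (P0, r)
    return (8 * r / 3) * (P0 + 3 / (32 * math.pi * r**2) - 3 * Q**2 / (32 * math.pi * r**4))

T0 = T_of_state(r_l)
print(abs(T0 - T_of_state(r_g)))      # ~0: both states sit on the same isotherm

def P_R(r):
    # Equation of state, Eq.(19), along the isotherm T_R = T0
    return 3 * T0 / (8 * r) - 3 / (32 * math.pi * r**2) + 3 * Q**2 / (32 * math.pi * r**4)

# Composite Simpson rule for the area integral of Eq.(26): dV = 4 pi r^2 dr
n = 2000                               # even number of subintervals
hstep = (r_g - r_l) / n
integral = 0.0
for i in range(n + 1):
    r = r_l + i * hstep
    w = 1 if i in (0, n) else (4 if i % 2 == 1 else 2)
    integral += w * 4 * math.pi * r**2 * P_R(r)
integral *= hstep / 3

V_l, V_g = 4 * math.pi * r_l**3 / 3, 4 * math.pi * r_g**3 / 3
print(integral, P0 * (V_g - V_l))     # equal: the two areas match
```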
Thus, the oscillating part below the critical pressure \(P_{c}\) should be replaced with the isobar \(P_{R}=P_{0}\), in the same way as is done in the \(VdW\) and RN-AdS cases; this reflects the fact that \(A_{1}\) (orange), the colored area under the black solid line, equals algebraically \(A_{2}\) (cyan), the colored area above it. This isobar (\(P_{R}=P_{0}\)) is the locus of the coexistence of the two black hole phases, and the size of the coexistence region decreases as the ratio \(\gamma\) increases towards the critical value \(\gamma_{c}=1\), where it vanishes. It is worth noting that the intersection points then merge at the critical horizon volume/radius. The saturation line thus collects all possible transition points: at the critical point (\(\mathbf{C}\)) the two-phase system undergoes a second-order phase transition, while a first-order phase transition occurs at all intersection points below (\(\mathbf{C}\)). At the critical pressure \(P_{c}\) and above, the small and large black hole phases converge to a single supercritical phase, in which the two black holes (SBH and LBH) are indistinguishable. Inverting Eq.(38) we can obtain \(\gamma\), \(r_{l}/V_{l}\) and \(r_{g}/V_{g}\) directly in terms of \(Q\) and the dimensionless quantity \(\eta=Q\sqrt{P_{0}}\), \[\gamma=\frac{3-\sqrt{96\eta\left(4\pi\eta-\sqrt{2\pi}\right)+9}}{8\sqrt{2\pi}\eta}-2 \tag{40}\] \[r_{l}=\frac{Q\sqrt{9-48\sqrt{2\pi}\eta-3\sqrt{96\eta\left(4\pi\eta-\sqrt{2\pi}\right)+9}}}{8\sqrt{\pi}\eta}, \tag{41}\] \[r_{g}=\frac{\sqrt{6}Q}{\sqrt{3-16\sqrt{2\pi}\eta-\sqrt{96\eta\left(4\pi\eta-\sqrt{2\pi}\right)+9}}}. \tag{42}\] Finally, \[V_{l}=\frac{Q^{3}\left(9-48\sqrt{2\pi}\eta-3\sqrt{96\eta\left(4\pi\eta-\sqrt{2\pi}\right)+9}\right)^{3/2}}{384\sqrt{\pi}\eta^{3}}, \tag{43}\] \[V_{g}=\frac{8\sqrt{6}\pi Q^{3}}{\left(3-16\sqrt{2\pi}\eta-\sqrt{96\eta\left(4\pi\eta-\sqrt{2\pi}\right)+9}\right)^{3/2}}. 
\tag{44}\] Having established Maxwell's equal-area-law construction for the charged flat black hole in the (pressure, volume) diagram within the Rényi statistics framework, we now turn our attention to the (temperature, entropy) plane. ### The construction of equal-area law in \(T_{R}-S_{R}\) diagram Here, the equation of state Eq.(19) solved for \(T_{R}\) gives the Rényi temperature in terms of the event horizon radius, \[T_{R}=\frac{1}{4\pi r_{h}}-\frac{Q^{2}}{4\pi r_{h}^{3}}+\frac{8P_{R}r_{h}}{3}, \tag{45}\] and the associated thermal profiles, the \(T_{R}-r_{h}\) curves, are shown in Fig.3. The Van der Waals-like phase transition clearly persists across the oscillating region, where the instability is represented by the negative-slope portion (\(\partial T_{R}/\partial r_{h}<0\)), corresponding to a negative heat capacity (\(\partial S_{R}/\partial T_{R}<0\)), since \(\partial S_{R}/\partial r_{h}\) is always positive. The critical behavior at the pressure \(P_{c}\) given by Eq.(39) again implies a second-order phase transition. In the \(T_{R}-S_{R}\) plane, a similar Maxwell construction is performed by taking the Gibbs free energy \(G_{R}\), instead of the Helmholtz energy, as the state function of the black hole. As illustrated in the left panel of Fig.4, the black hole undergoes a reversible cyclic transformation, so we write \[\oint dG_{R}=0. \tag{46}\] The differential of \(G_{R}\) at constant charge \(Q\) is found to be \[dG_{R}=VdP_{R}-S_{R}dT_{R}. \tag{47}\] Figure 3: _Isobars of the charged asymptotically flat black hole in \(T_{R}-r_{h}\) plane with electric charge \(Q=1\) in Rényi thermodynamics. 
The green thick curve is the critical isobar at \(P_{R}=P_{c}\); below it, the unphysical behavior appears, represented by the negative slopes of the isobars._ On the red dashed isobar curve, the differential of \(G_{R}\) reduces to \(dG_{R}=-S_{R}dT_{R}\), while on the blue line (\(T_{R}=T_{0}\)) it vanishes, \(dG_{R}=0\). Thus, from Eq.(46) one can write \[\oint dG_{R}=-\int_{S_{l}}^{S_{g}}S_{R}\:dT_{R}=0. \tag{48}\] Integrating by parts gives \[\Big{[}T_{R}S_{R}\Big{]}_{S_{l}}^{S_{g}}-\int_{S_{l}}^{S_{g}}T_{R}\:dS_{R}=0, \tag{49}\] which leads to the form of Maxwell's equal-area law in the \(T_{R}-S_{R}\) diagram, reflecting the equality of the areas under the blue line and the dashed red curve of Fig.4, \[T_{0}(S_{g}-S_{l}) = \int_{S_{l}}^{S_{g}}T_{R}\:dS_{R} \tag{50}\] \[= \int_{r_{l}}^{r_{g}}T_{R}\:\frac{dS_{R}}{dr_{h}}\:dr_{h},\] with \(S_{l}=S_{R}(r_{l})\) and \(S_{g}=S_{R}(r_{g})\). Recalling that \(S_{R}=\pi r_{h}^{2}-\dfrac{\lambda\pi^{2}r_{h}^{4}}{2}+\mathcal{O}\left(\lambda^{2}\right)\) and replacing \(r_{l}\) and \(r_{g}\) by Eqs.(41) and (42) respectively, we obtain the coexistence temperature \(T_{0}\) in terms of \(\eta=Q\sqrt{P_{0}}\), \[T_{0}=\dfrac{\sqrt{64\pi\eta^{2}-16\sqrt{2\pi}\eta+\frac{3}{2}}\sqrt{3-16\sqrt{2\pi}\eta-\sqrt{96\eta\left(4\pi\eta-\sqrt{2\pi}\right)+9}}}{3\pi Q\left(\dfrac{\sqrt{96\eta\left(4\pi\eta-\sqrt{2\pi}\right)+9}-3}{8\sqrt{2\pi}\eta}+3\right)}. \tag{51}\] In terms of the parameter \(\gamma\), the coexistence temperature \(T_{0}\) takes the compact form \[T_{0}(\gamma)=\dfrac{\gamma\left(\gamma+1\right)}{\pi Q\left(\gamma^{2}+4\gamma+1\right)^{\frac{3}{2}}}. 
\tag{52}\] We also give the entropies of the small black hole phase, \(S_{l}\), and of the large black hole phase, \(S_{g}\), as \[S_{l}=\dfrac{\pi Q^{2}\left(\gamma+4\right)\left(\gamma^{2}+4\gamma+1\right)}{3\gamma}\ln{\left(\dfrac{4\left(\gamma+1\right)}{\gamma+4}\right)}, \tag{53}\] \[S_{g}=\dfrac{\pi Q^{2}\left(4\gamma+1\right)\left(\gamma^{2}+4\gamma+1\right)}{3\gamma^{2}}\ln{\left(\dfrac{4\left(\gamma+1\right)}{4\gamma+1}\right)}. \tag{54}\] As stipulated before, the critical temperature \(T_{c}\) and entropy \(S_{c}\) are obtained in the limit \(\gamma\longrightarrow 1\), as \[T_{c}=\dfrac{\sqrt{6}}{18\pi Q},\quad S_{c}=10\pi Q^{2}\ln{\left(\dfrac{8}{5}\right)}. \tag{55}\] The complete picture of the Maxwell construction in the \(T_{R}-S_{R}\) diagram is depicted in the right panel of Fig.4. As in the \(P_{R}-V\) plane, the dashed black line is again the saturation line, and \(S_{l,g}\) denote the intersections of this curve with the isobar given for each value of \(\gamma\). In the portion between these two values, \(S_{l}<S_{R}<S_{g}\), the black hole system is unstable. Thus, the oscillating part below the critical temperature \(T_{c}\) should be replaced with the isotherm \(T_{R}=T_{0}\); on this isotherm the two black hole phases coexist. In a similar way, the intersection points merge at the critical horizon entropy/radius. Again, at the critical point (\(\mathbf{C}\)) the two-phase system undergoes a second-order phase transition, while a first-order phase transition occurs at all intersection points below (\(\mathbf{C}\)). Once more, the supercritical phase dominates above \(T_{c}\). In the following, we complete the correspondence with the \(VdW\) fluid by computing the latent heat of the charged black hole phase transition; furthermore, a microscopic interpretation of this transition is attempted. 
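The closed forms Eqs.(53)-(54) can be recovered from the Rényi entropy Eq.(12) once \(\lambda\) is eliminated through Eq.(15) at each coexistence state. The sketch below (an illustrative check, \(Q=1\), \(\gamma=0.6\)) confirms this, together with the \(\gamma\to 1\) limit of \(T_{0}\) reproducing \(T_{c}\) of Eq.(55).

```python
import math

Q, gamma = 1.0, 0.6          # illustrative values
u = gamma**2 + 4 * gamma + 1
r_l = Q * math.sqrt(u)
r_g = r_l / gamma
P0  = 9 * gamma**2 / (32 * math.pi * Q**2 * u**2)   # Eq.(38)

def S_R(r):
    # Renyi entropy with lam eliminated via Eq.(15): lam = 32 P0 / (3 (1 - Q^2/r^2))
    lam = 32 * P0 / (3 * (1 - Q**2 / r**2))
    return math.log(1 + lam * math.pi * r**2) / lam

# Closed forms Eq.(53) and Eq.(54)
S_l = math.pi * Q**2 * (gamma + 4) * u / (3 * gamma) * math.log(4 * (gamma + 1) / (gamma + 4))
S_g = math.pi * Q**2 * (4 * gamma + 1) * u / (3 * gamma**2) * math.log(4 * (gamma + 1) / (4 * gamma + 1))

print(abs(S_R(r_l) - S_l), abs(S_R(r_g) - S_g))   # both ~0

# gamma -> 1 limit of Eq.(52) reproduces T_c of Eq.(55)
g = 1 - 1e-7
T0 = g * (g + 1) / (math.pi * Q * (g**2 + 4 * g + 1) ** 1.5)
print(abs(T0 - math.sqrt(6) / (18 * math.pi * Q)))             # ~0
```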
## 4 Two-phase coexistence curves, latent heat, and the microscopic explanation of the phase change in Renyi formalism It is well known that during a first-order phase transition of a \(VdW\) fluid between its liquid and gas phases a latent heat is exchanged with the heat bath. The Clapeyron equation directly models these phase changes, and within the correspondence examined in the previous sections between the \(VdW\) fluid and the charged-flat black hole in the Rényi formalism, we can write \[\frac{dP_{0}}{dT_{0}}=\frac{L_{l\longrightarrow g}}{T_{0}(v_{g}-v_{l})}, \tag{56}\] where \(L_{l\longrightarrow g}\) is the latent heat accompanying the phase transition from the black hole liquid-like phase to the gas-like phase (small/large). Figure 4: _The simulated phase transition and the boundary of the two-phase coexistence on the basis of the isobars in the \(T_{R}-S_{R}\) diagram for the charged black hole in flat spacetime within the Rényi statistics approach. **Left panel:** demonstration of the Maxwell construction in the \(T_{R}-S_{R}\) plane; the blue thick line is calculated such that the two shaded areas are equal, eliminating the unphysical oscillatory behavior represented by the red dotted line. **Right panel:** black horizontal lines are isobars replacing the unphysical oscillations; the bell-shaped black dashed line delimits the coexistence region. The critical isobar is shown as the thick green line, above which the supercritical phase dominates._ Herein, we examine the two-phase equilibrium coexistence curves \(P_{0}-T_{0}\) and their slope \(\frac{dP_{0}}{dT_{0}}\) for the charged asymptotically flat black hole in the Rényi extended phase space. In Fig.5 we illustrate the coexistence curves for different fixed charge values \(Q\). 
From this figure, one can observe the effect of the electric charge \(Q\) on the phase diagram: each curve terminates at the critical point, whose coordinates decrease with increasing charge \(Q\), while the slope \(\frac{dP_{0}}{dT_{0}}\) becomes more pronounced as the electric charge grows. Moreover, Fig.5 also reveals that the pressure \(P_{0}\) tends toward zero with decreasing temperature \(T_{0}\), and that the electric charge has a small effect on the coexistence curves for small values of temperature and pressure. Solving Eq.(56) for the latent heat and using Eq.(38), we obtain the latent heat of the phase transition of the RN-flat black hole in the Rényi formalism as \[L(\gamma)=\frac{3\gamma(1-\gamma)(\gamma+1)^{2}}{2\pi Q\left(\gamma^{2}+\gamma+1\right)(\gamma^{2}+4\gamma+1)^{3/2}}. \tag{57}\] Figure 5: _\(P_{0}-T_{0}\) coexistence curves for fixed charge \(Q\)._ The variation of the latent heat with the pressure \(P_{0}\) and the temperature \(T_{0}\) is depicted in Fig.6 for various values of the charge, while its behaviour in terms of the ratio \(\gamma\) is illustrated in Fig.7. Both figures show the effect of the pressure \(P_{0}\), the temperature \(T_{0}\), and the ratio \(\gamma\) on the latent heat of the phase transition. In fact, as these quantities grow, the latent heat \(L\) is not monotonic: it first increases and then decreases, reaching zero at the critical point (\(P_{R}\to P_{c}/\ T_{R}\to T_{c}/\ \gamma\to 1\)), where a second-order phase transition takes place. Furthermore, the latent heat \(L\) decreases with increasing charge. 
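The closed form Eq.(57) can be cross-checked against the Clapeyron relation Eq.(56) itself: parametrizing the coexistence curve by \(\gamma\), the slope \(dP_{0}/dT_{0}\) follows from the chain rule, and the resulting latent heat should coincide with \(L(\gamma)\). The sketch below (illustrative, \(Q=1\), \(\gamma=0.5\)) uses finite differences.

```python
import math

Q = 1.0  # illustrative charge

def P0(g):  # coexistence pressure, Eq.(38)
    return 9 * g**2 / (32 * math.pi * Q**2 * (g**2 + 4 * g + 1) ** 2)

def T0(g):  # coexistence temperature, Eq.(52)
    return g * (g + 1) / (math.pi * Q * (g**2 + 4 * g + 1) ** 1.5)

def L_closed(g):  # latent heat, Eq.(57)
    return 3 * g * (1 - g) * (g + 1) ** 2 / (
        2 * math.pi * Q * (g**2 + g + 1) * (g**2 + 4 * g + 1) ** 1.5)

g, h = 0.5, 1e-6
u = g**2 + 4 * g + 1
# Specific volumes of the two coexisting phases, v = 8 r_h / 3
v_l = 8 * Q * math.sqrt(u) / 3
v_g = v_l / g

# Clapeyron slope dP0/dT0 along the coexistence curve, by the chain rule in gamma
dP_dT = (P0(g + h) - P0(g - h)) / (T0(g + h) - T0(g - h))

L_clapeyron = T0(g) * (v_g - v_l) * dP_dT    # Eq.(56) solved for L
print(L_clapeyron, L_closed(g))              # the two values agree
```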
Figure 6: _The variation of the latent heat \(L\) with the coexistence pressure \(P_{0}\) and temperature \(T_{0}\) for different values of charge \(Q\)._ Figure 7: _The variation of the latent heat \(L\) in terms of the critical ratio \(\gamma\) for different fixed values of charge \(Q\)._ Nowadays, a variety of papers in the literature are devoted to the microstructure of black hole phase transitions, in which the different sizes of a black hole are linked to different densities of its molecules [41, 42, 43, 44, 45, 46]. On the other hand, Landau's theory of continuous phase transitions is typified by changes in the degree of order of the material and the accompanying changes in its symmetry. The analogy between black hole thermodynamics and ordinary thermodynamics thus leads us to consider the symmetry change accompanying the black hole phase transition within the Rényi formalism. In what follows, we put this assertion under investigation by considering a charged-flat black hole solution. During the black hole phase transition, the potential \(\Phi=\frac{Q}{r_{h}}\) changes abruptly, which reveals the conflicting microstructures of the black hole molecules in the different phases. The electric potentials of the two-phase system are, respectively, \[\Phi_{l}=\frac{Q}{r_{l}},\quad\Phi_{g}=\frac{Q}{r_{g}}. \tag{58}\] When \(T_{0}\leq T_{c}\) or \(P_{0}\leq P_{c}\), which means \(0<\gamma\leq 1\), one defines the order parameter as: \[\Psi(T_{0})=\frac{\Phi_{l}-\Phi_{g}}{\Phi_{c}}=\frac{\sqrt{6}\left(1-\gamma\right)}{\sqrt{\gamma^{2}+4\gamma+1}}. \tag{59}\] Fig.8 shows the characteristic behavior of the order parameter \(\Psi\) as a function of the coexistence temperature \(T_{0}\) and pressure \(P_{0}\) for a critical exponent \(\beta=\frac{1}{2}\). A Taylor series expansion in the vicinity of \(T_{c}\) and \(P_{c}\), Eqs.(60), confirms this value of \(\beta\). 
\[\left\{\begin{array}{cc}\Psi(T_{0})=2\cdot 6^{\frac{3}{4}}\sqrt{\pi}\sqrt{Q}\sqrt{T_{c}-T_{0}}+\mathcal{O}\left(T_{0}-T_{c}\right)&(T_{0}<T_{c})\\ \Psi(P_{0})=8\sqrt{6}\sqrt{\pi}Q\sqrt{P_{c}-P_{0}}+\mathcal{O}\left(P_{0}-P_{c}\right)&(P_{0}<P_{c})\end{array}\right. \tag{60}\] When the black hole temperature or pressure lies below its critical value and the black hole molecules are in the high-potential phase 1, the molecules experiencing the potential \(\Phi\) align in a certain orientation. Figure 8: _The behaviour of the order parameter \(\psi\) in terms of the coexistence temperature \(T_{0}\) (left panel) and the coexistence pressure \(P_{0}\) (right panel) for different values of charge \(Q\)._ In this low-symmetry case, the black hole molecules are quite ordered; when the black hole switches to phase 2 at the same temperature/pressure, the potential \(\Phi\) decreases and, consequently, the molecular orientations become disordered, leading to a higher symmetry. Moreover, the phase below the critical temperature possesses low symmetry, higher order, and non-zero \(\Psi\), whereas the phase above the critical temperature shows higher symmetry, lower order, and a vanishing order parameter \(\Psi\). According to Landau's perspective, the order parameter \(\Psi\) is small near the critical point \(T_{c}\), and the Gibbs energy \(G(T_{R},\Phi)\) can be expanded in powers of \(\Psi\) as [43] \[G(T_{R},\Phi)=G_{0}(T_{R})+\frac{1}{2}a(T_{R})\Psi^{2}+\frac{1}{4}b(T_{R})\Psi^{4}+\cdots, \tag{61}\] in which \(G_{0}(T_{R})\) stands for the Gibbs function at \(\Psi=0\). The reason why only even-order terms of the order parameter appear is that the system is invariant under the parity transformation \(\Psi\rightleftarrows-\Psi\). The Gibbs function presents three extremal values, located at \[\Psi=0,\quad\Psi=\pm\sqrt{-\frac{a}{b}}. 
\tag{62}\] The first, trivial solution \(\Psi=0\) is associated with the disordered state, corresponding to the temperature range \(T_{R}>T_{c}\) with \(a>0\), while the non-vanishing solutions \(\Psi=\pm\sqrt{-\frac{a}{b}}\) correspond to an ordered state, when \(T_{R}<T_{c}\) and \(a<0\). Since \(a=0\) at \(T_{R}=T_{c}\), near the critical point one can write \[a=a_{0}\left(\frac{T_{R}-T_{c}}{T_{c}}\right)=a_{0}t,\quad a_{0}>0, \tag{63}\] since \(\Psi=\pm\sqrt{-\frac{a}{b}}\) is a real quantity. When \(T_{R}<T_{c}\), \(a<0\), so \(b\) should be positive and we have \[\Psi=0,\quad t>0,\] \[\Psi=\pm\left(\frac{a_{0}}{b}\right)^{1/2}(-t)^{1/2},\quad t<0. \tag{64}\] Several ferromagnetic systems share the following experimental features near the critical point: * At \(t\to 0^{-}\), the spontaneous magnetization behaves as \[\mathcal{M}\propto(-t)^{\beta},\quad t\to 0^{-}.\] (65) * The zero-field magnetic susceptibility of various ferromagnetic substances, \(\chi=\left(\frac{\partial\mathcal{M}}{\partial H}\right)_{T_{R}}\), presents a singularity at \(t\to 0^{\pm}\), in the vicinity of which \(\chi\) varies as \[\chi\propto t^{-\zeta},\quad t\to 0^{+};\quad\chi\propto(-t)^{-\zeta^{\prime}},\quad t\to 0^{-}.\] (66) * At \(t=0\) and very weak magnetic field, the magnetization \(\mathcal{M}\) is linked to the external magnetic field \(H\) by the law \[\mathcal{M}\propto H^{1/\delta}.\] (67) * Additionally, when \(t\to 0^{\pm}\), the zero-field specific heat capacity of a ferromagnetic material, \(c_{H}(H=0)\), obeys \[c_{H}\propto t^{-\bar{\alpha}},\quad t>0;\quad c_{H}\propto(-t)^{-\bar{\alpha}^{\prime}},\quad t<0.\] (68) Comparing the relation \(\Psi(t)\) of Eq.(64) with Eq.(65) gives the critical exponent \(\beta=1/2\). 
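The leading term of the expansion Eq.(60) can be checked numerically (an illustration, not part of the paper, with \(Q=1\)): approaching the critical point along the coexistence curve parametrized by \(\gamma\), the ratio of the order parameter Eq.(59) to \(2\cdot 6^{3/4}\sqrt{\pi Q}\sqrt{T_{c}-T_{0}}\) tends to 1, confirming \(\beta=1/2\).

```python
import math

Q = 1.0
T_c = math.sqrt(6) / (18 * math.pi * Q)

def order_parameter(g):  # Eq.(59)
    return math.sqrt(6) * (1 - g) / math.sqrt(g**2 + 4 * g + 1)

def T0(g):               # coexistence temperature, Eq.(52)
    return g * (g + 1) / (math.pi * Q * (g**2 + 4 * g + 1) ** 1.5)

# Approach the critical point gamma -> 1 and compare with the leading term of Eq.(60)
for eps in (1e-2, 1e-3, 1e-4):
    g = 1 - eps
    predicted = 2 * 6**0.75 * math.sqrt(math.pi * Q) * math.sqrt(T_c - T0(g))
    print(order_parameter(g) / predicted)    # -> 1, confirming beta = 1/2
```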
Following the method of Guo et al. [29, 43], we derive the critical exponents \(\bar{\alpha}=\bar{\alpha}^{\prime}=0\), \(\zeta=\zeta^{\prime}=1\), and \(\delta=3\), together with the Renyi entropy of the RN-flat black hole near the critical point. Denoting the Renyi entropy of the disordered phase by \(S_{R_{diso}}\), the Renyi entropy of the ordered phase reads \[S_{R_{ord}}=S_{R_{diso}}+\frac{a_{0}^{2}t}{2bT_{c}}. \tag{69}\] This equation expresses the continuity of the Renyi entropy at the critical point. Concretely, \[S_{R_{ord}}=S_{R_{diso}},\hskip 28.452756pt\text{at }t=0. \tag{70}\] This Landau-formalism behaviour matches that of the charged black hole in anti-de Sitter space within standard Gibbs-Boltzmann statistics. In the next section, we introduce a new mathematical approach that aims to put the construction of the extended phase space of the charged-flat black hole in the Renyi formalism on firmer ground.

## 5 Hamiltonian approach to Renyi's thermodynamics of charged-flat black hole

Gauge/gravity duality posits a correspondence between gauge and gravity theories. As pointed out in the introduction, given that the gauge-theory side of the duality admits a standard thermodynamic treatment, it is natural to expect that the gravity side also supports a standard thermodynamic approach. Such a picture can be fulfilled by increasing the number of degrees of freedom of black holes, thus leading to a thermodynamics in accordance with the standard thermodynamics of matter systems. The Hamiltonian approach [35, 36, 47] to thermodynamics, which we intend to apply in this section, is a new and powerful scheme for investigating and extending the phase space of black holes. In the Hamiltonian approach to thermodynamics, one considers all equations of state of a given thermodynamic system as constraints on phase space [35, 36]. 
For each thermodynamic potential \(M\), its differential \(dM\) is expressed through the canonical tautological form \(pdq\) on the constraint surface defined by these equations of state. It is then possible to extend the phase space by introducing a canonically conjugate pair \((\theta,\tau)\) such that the form \(pdq+\theta d\tau\) reduces to the _Poincare-Cartan_ form \(pdq-hd\tau\) on the constraint surface \(H=\theta+h(q,p,\tau)=0\). One thus obtains a description in both spaces, the reduced phase space \((p,q)\) and the extended phase space \((p,q;\theta,\tau)\). Therefore, all thermodynamic potentials are related by canonical transformations, giving equivalent representations. In this way, one is able to increase the degrees of freedom of the thermodynamic system. Furthermore, through the general Hamiltonian approach to the RN-AdS black hole, one can verify that the thermodynamical volume conjugate to the thermodynamical pressure is indeed equal to the volume of a sphere of radius \(r_{h}\), such that \(PV=-V\frac{\Lambda}{8\pi}=-r_{h}^{3}\frac{\Lambda}{6}\) is interpreted as the energy extracted from spacetime due to the presence of the black hole [48]. Such calculations are based on promoting the cosmological constant \(\Lambda\) to a function of the phase-space coordinates through a new equation of state. It is now legitimate to check whether the thermodynamical volume of the RN-flat black hole persists within Renyi statistics when the nonextensivity parameter \(\lambda\) is subjected to the same treatment as \(\Lambda\). According to the first law of black hole thermodynamics, the minimal RN-AdS description in the Gibbs-Boltzmann framework has the entropy as its only free thermodynamical variable, which makes it a one-dimensional system. Therefore, any minimal mechanical analog should be one-dimensional as well. 
A direct identification between the mechanical variables \((p,q)\) and the thermodynamic variables \((T_{H},S_{BH})\), up to canonical transformations, is [37] \[q=\frac{S_{BH}}{\pi},\quad p=\pi T_{H}, \tag{71}\] where \(S_{BH}\) and \(T_{H}\) are the Bekenstein-Hawking entropy and the Hawking temperature of the RN-AdS black hole, respectively, given by \[T_{H}=\frac{1}{4\pi r_{h}}-\frac{Q^{2}}{4\pi r_{h}^{3}}-\frac{\Lambda r_{h}}{4 \pi}\quad\text{and}\quad S_{BH}=\pi r_{h}^{2}. \tag{72}\] In the Renyi formalism, a similar identification can be made. Indeed, recalling the expressions of the Renyi entropy \(S_{R}\), Eq.(12), and of the Renyi temperature \(T_{R}\), Eq.(13), we define the mechanical analog of the RN-flat black hole in Renyi statistics by identifying \(S_{R}\) and \(T_{R}\) with \(q_{\lambda}\) and \(p_{\lambda}\), respectively, as \[q_{\lambda}=\frac{S_{R}}{\pi}=q-\frac{\pi\lambda q^{2}}{2},\quad p_{\lambda}= \pi T_{R}=p(1+\pi\lambda q). \tag{73}\] Here we have expressed the Renyi mechanical variables in terms of the Gibbs-Boltzmann \((GB)\) mechanical variables to first order in the nonextensivity parameter \(\lambda\). In the \(GB\) formalism, the differential of the RN-AdS black hole mass \(M=\frac{r_{h}}{2}(1+\frac{Q^{2}}{r_{h}^{2}}-\frac{\Lambda r_{h}^{2}}{3})\), taken here as the thermodynamical potential and expressed in terms of the mechanical variables Eq.(71), reads for fixed charge \(Q\), \[dM=pdq-\frac{1}{6}q^{\frac{3}{2}}d\Lambda, \tag{74}\] where \(\Lambda\) is the cosmological constant, considered in the Hamiltonian approach as, a priori, a function of all mechanical variables. 
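As a consistency check of the identification Eq.(73), note that at fixed \(\lambda\) the Renyi tautological form reproduces its Gibbs-Boltzmann counterpart to first order in \(\lambda\): \[p_{\lambda}\,dq_{\lambda}\big|_{d\lambda=0}=p(1+\pi\lambda q)(1-\pi\lambda q)\,dq=p\left(1-\pi^{2}\lambda^{2}q^{2}\right)dq=p\,dq+\mathcal{O}(\lambda^{2}),\] so that \(T_{R}dS_{R}=T_{H}dS_{BH}+\mathcal{O}(\lambda^{2})\), consistent with the first law in both formalisms.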
The one-form Eq.(74) is constrained by the equation of state of the RN-AdS black hole, Eq.(72), written in mechanical variables as \[4p=\frac{1}{\sqrt{q}}-\frac{Q^{2}}{q^{\frac{3}{2}}}-\Lambda\sqrt{q}. \tag{75}\] The conjectured equivalence between the RN-AdS black hole in the Gibbs-Boltzmann formalism and the RN-flat black hole in the Renyi formalism [23, 24] permits one to write the differential of the RN-flat black hole mass in Renyi statistics in terms of the Renyi mechanical variables defined by Eq.(73), for fixed charge \(Q\), as \[dM=T_{R}dS_{R}=p_{\lambda}dq_{\lambda}. \tag{76}\] Differentiating \(q_{\lambda}\) in Eq.(73), we get \[dq_{\lambda}=(1-\pi\lambda q)dq-\frac{\pi q^{2}}{2}d\lambda, \tag{77}\] where \(\lambda\), similarly to \(\Lambda\), is promoted to a function of the mechanical variables. Next, substituting Eqs.(77) and (73) into Eq.(76) and keeping leading-order terms in \(\lambda\), we obtain for \(dM\), \[dM=pdq-p(1+\pi\lambda q)\frac{\pi q^{2}}{2}d\lambda. \tag{78}\] Eq.(78) is the one-form of the thermodynamical potential \(M\) in the Renyi formalism. With the help of the equation of state in the Renyi formalism, Eq.(14), now interpreted as a constraint on the phase space \((q,p)\), and applying the transformation Eqs.(73), the equation of state reduces to \[4p=\frac{1}{\sqrt{q}}-\frac{Q^{2}}{q^{\frac{3}{2}}}. \tag{79}\] Thus, in the Renyi formalism and for a RN-flat black hole in four spacetime dimensions, the Hamiltonian approach is based on the one-form Eq.(78) subject to the constraint Eq.(79). In what follows, we apply this approach to the RN-flat black hole within Renyi statistics. The starting point is to promote the nonextensivity parameter \(\lambda\) to a function of the phase-space coordinate \(q\), \(\lambda=\lambda(q)\). The one-form \(dM\), Eq.(78), becomes \[dM=\left[p-p(1+\pi\lambda q)\frac{\pi q^{2}}{2}\frac{\partial\lambda}{\partial q }\right]dq. 
\tag{80}\] The expression of \(dM\) takes the form \(\alpha_{\lambda}=\omega_{\lambda}dq\), where \(\omega_{\lambda}\) is given by \[\omega_{\lambda}=p-p(1+\pi\lambda q)\frac{\pi q^{2}}{2}\frac{\partial\lambda} {\partial q}, \tag{81}\] restricted to the constraint surface in phase space defined by the equation of state Eq.(79): \[\phi_{\lambda}=4p-\frac{1}{\sqrt{q}}+\frac{Q^{2}}{q^{\frac{3}{2}}}=0. \tag{82}\] This allows us to write the compact notation \[dM=\alpha_{\lambda}|_{\phi_{\lambda}=0} \tag{83}\] and to define the symplectic 2-form \[\Omega_{\lambda}=\left(\frac{\partial\omega_{\lambda}}{\partial p}\right)dq \wedge dp, \tag{84}\] which makes the transformation \((p,q)\rightarrow(\omega_{\lambda},q)\) canonical. _The extension of phase space is accomplished by the introduction in \(dM\) of a new pair of conjugate variables, say \((\theta_{\lambda},\tau_{\lambda})\), such that_ \[dM=\omega_{\lambda}dq+\theta_{\lambda}d\tau_{\lambda}. \tag{85}\] Taking this time \(\lambda\) as a function of \(q\) and \(\tau_{\lambda}\), \(\lambda=\lambda(q,\tau_{\lambda})\), one can write \[d\lambda=\frac{\partial\lambda}{\partial q}dq+\frac{\partial\lambda}{\partial \tau_{\lambda}}d\tau_{\lambda}. \tag{86}\] With the help of Eq.(86), Eq.(78) is re-expressed as \[dM=\omega_{\lambda}dq-p(1+\pi\lambda q)\frac{\pi q^{2}}{2}\frac{\partial \lambda}{\partial\tau_{\lambda}}d\tau_{\lambda}. \tag{87}\] Comparing Eq.(85) with Eq.(87) leads to a new constraint on the extended phase space, \[H_{\lambda}=\theta_{\lambda}+p(1+\pi\lambda q)\frac{\pi q^{2}}{2}\frac{ \partial\lambda}{\partial\tau_{\lambda}}=0. \tag{88}\] The constraint \(H_{\lambda}=0\) reduces the 1-form Eq.(85) in the extended phase space to the _Poincare-Cartan_ form \(pdq-h_{\lambda}d\tau_{\lambda}\) in the reduced phase space, such that \[h_{\lambda}(q,\tau_{\lambda})=p(1+\pi\lambda q)\frac{\pi q^{2}}{2}\frac{ \partial\lambda}{\partial\tau_{\lambda}}. 
\tag{89}\] We define a symplectic 2-form in the extended phase space which preserves the canonical relations among the transformed coordinates \(X_{\lambda}=(\omega_{\lambda},q;\theta_{\lambda},\tau_{\lambda})\), namely \[\bar{\Omega}_{\lambda}=\left(\frac{\partial\omega_{\lambda}}{\partial p}\right) dq\wedge dp+\left(\frac{\partial\omega_{\lambda}}{\partial\tau_{\lambda}} \right)dq\wedge d\tau_{\lambda}+d\tau_{\lambda}\wedge d\theta_{\lambda}. \tag{90}\] We are now able to generate the canonical _Poisson brackets_: for a pair of functions of the coordinates, say \(f_{\lambda}(X_{\lambda})\) and \(g_{\lambda}(X_{\lambda})\), we have \[\{f_{\lambda},g_{\lambda}\}=\bar{\Omega}_{\lambda}(\xi_{f_{\lambda}},\xi_{g_{ \lambda}}), \tag{91}\] where \(\xi_{f_{\lambda}}\) and \(\xi_{g_{\lambda}}\) are the vector fields generated from \(f_{\lambda}\) and \(g_{\lambda}\), respectively. The identification of \(\tau_{\lambda}\) with the Renyi thermodynamical pressure \(P_{R}\) leads to \(dM=T_{eff,R}dS_{R}\) at constant \(P_{R}\), where \(T_{eff,R}\) is the thermodynamical temperature given through Eq.(81) as \[T_{eff,R}=\frac{\omega_{\lambda}}{\pi}. \tag{92}\] Thus \(dM\) is the heat exchanged in an isobaric transformation, and we therefore identify \(M\) with the enthalpy of the black hole. From the constraint Eq.(88), we read off the thermodynamical volume \(V\) conjugate to \(P_{R}\) in Renyi thermodynamics as \[V=p(1+\pi\lambda q)\frac{\pi q^{2}}{2}\frac{\partial\lambda}{\partial P_{R}}. \tag{93}\] In the Renyi extended phase space for a RN-flat black hole in the canonical ensemble, the parameter \(\lambda\) is associated with the thermodynamic Renyi pressure \(P_{R}\) and the electric charge \(Q\) as [22] \[P_{R}=\frac{3\lambda}{32}(1-\frac{Q^{2}}{q})\implies\lambda=\frac{32P_{R}}{3(1 -\frac{Q^{2}}{q})}. \tag{94}\] Substituting Eq.(94) into Eq.(93), and using the constraint Eq.(82) together with the definition \(q=r_{h}^{2}\) from Eq.(71), gives \[V=\frac{4\pi}{3}r_{h}^{3}+\mathcal{O}(\lambda). 
\tag{95}\] Since the Renyi pressure \(P_{R}\) is proportional to \(\lambda\), the correction term of order \(\mathcal{O}(\lambda)\) to the thermodynamical volume would generate a second-order correction \(\mathcal{O}(\lambda^{2})\) to the expression of the enthalpy \(M\), Eq.(17), which can be neglected [22] owing to the condition \(0<\lambda\ll 1\). The energy extracted from spacetime due to the presence of the black hole is given by \[P_{R}V=\frac{\pi\lambda}{8}r_{h}^{3}(1-\frac{Q^{2}}{r_{h}^{2}}). \tag{96}\] In the large-\(r_{h}\) limit (\(r_{h}\gg Q\)), the functional dependence of this energy on \(r_{h}\) matches the RN-AdS case, \(-\frac{\Lambda}{6}r_{h}^{3}\), with the identification \(|\Lambda|\rightarrow\frac{3\pi}{4}\lambda\). The application of the Hamiltonian approach to the RN-AdS black hole in the Gibbs-Boltzmann formalism and to the RN-flat black hole in the Renyi formalism presents a great similarity, and it strengthens once more the conjectured equivalence between these two systems, with the advantages of Renyi statistics outlined in the introduction of the present paper.

## 6 Conclusion

In this paper, we have investigated some phase-equilibrium features of charged black holes in flat spacetime via Renyi statistics. We first reviewed the thermodynamical structure in such a background. The resulting phase portrait is similar to the Van der Waals one and to the phase picture of charged-AdS black holes in Gibbs-Boltzmann statistics. Concretely, we have shown that the oscillatory behaviour persists in the \(P_{R}-V\) and \(T_{R}-S_{R}\) diagrams. Afterwards, by means of Maxwell's equal-area law, the unphysical branch of the system has been excluded and the phase-transition point has been disclosed. Furthermore, we have established the Clapeyron equation at the coexistence curve associated with each diagram, which sheds light on the latent heat of the phase change and its behaviour under variation of the black hole charge \(Q\). 
A pertinent comprehension of the phase structure of charged-flat black hole thermodynamics via Renyi statistics can help to elucidate the similarity with the charged-AdS black hole in the Gibbs-Boltzmann formalism, and a possible profound connection between the nonextensive parameter \(\lambda\) and the cosmological constant \(\Lambda\). To put this assertion under deeper scrutiny, Landau's theory of continuous phase transitions has been used to inspect the critical behaviour of such a black hole in the Renyi formalism, and the critical exponents have been obtained. Both analyses confirm this similarity. Lastly, we have reconstructed the previous thermodynamic results through the Hamiltonian approach promoted to the Renyi-statistics framework. Concretely, we have taken the nonextensivity parameter \(\lambda\) as a function of the phase-space coordinates via a new thermodynamical equation of state based only on the homogeneity of the thermodynamic variables; such an equation of state exhibits the generalized thermodynamic volume. All these results consolidate the possible bridge between the nonextensivity Renyi parameter \(\lambda\) and the cosmological constant \(\Lambda\), suggested for the first time in [22] and reinforced in [25, 27]. This study opens up further research perspectives. It would be fascinating to make contact with the Euclidean action formalism. Concretely, it is well known that the Renyi extension of standard black hole thermodynamics can be seen as generating conical defects in the Euclidean time coordinate [49]. This fact has sparked significant research that broadens the notion of gravitational entropy [50] and, in particular, offers a general method for calculating the holographic entanglement entropy. In addition, it would be intriguing to investigate the present Renyi phase transitions from the holographic point of view [40, 51]. We plan to report on all of these open issues in future works.
2301.08779
Analysis and Prevention of MCAS-Induced Crashes
Semi-autonomous (SA) systems face the challenge of determining which source to prioritize for control, whether it's from the human operator or the autonomous controller, especially when they conflict with each other. While one may design an SA system to default to accepting control from one or the other, such design choices can have catastrophic consequences in safety-critical settings. For instance, the sensors an autonomous controller relies upon may provide incorrect information about the environment due to tampering or natural fault. On the other hand, the human operator may also provide erroneous input. To better understand the consequences and resolution of this safety-critical design choice, we investigate a specific application of an SA system that failed due to a static assignment of control authority: the well-publicized Boeing 737-MAX Maneuvering Characteristics Augmentation System (MCAS) that caused the crashes of Lion Air Flight 610 and Ethiopian Airlines Flight 302. First, using a representative simulation, we analyze and demonstrate the ease by which the original MCAS design could fail. Our analysis reveals the most robust public analysis of aircraft recoverability under MCAS faults, offering bounds for those scenarios beyond the original crashes. We also analyze Boeing's updated MCAS and show how it falls short of its intended goals and continues to rely upon on a fault-prone static assignment of control priority. Using these insights, we present Semi-Autonomous MCAS (SA-MCAS), a new MCAS that both meets the intended goals of MCAS and avoids the failure cases that plague both MCAS designs. We demonstrate SA-MCAS's ability to make safer and timely control decisions of the aircraft, even when the human and autonomous operators provide conflicting control inputs.
Noah T. Curran, Thomas W. Kennings, Kang G. Shin
2023-01-20T19:29:08Z
http://arxiv.org/abs/2301.08779v2
# Is Boeing 737-MAX Still Safe?

###### Abstract

Semi-autonomous (SA) systems face the problem of deciding whether to select control input from the human operator or autonomous controller when they conflict with each other. While one may design an SA system to default to accepting control from one or the other, such design choices can have catastrophic consequences in safety-critical settings. For instance, the sensors an autonomous controller relies upon may provide incorrect information about the environment due to tampering or natural wear. On the other hand, the human operator may also provide dangerous input. This raises an important question: _Can we convert an existing SA system to make dynamic real-time control decisions that are tolerant of erroneous/malicious input?_ To explore this question, we investigate in this paper a specific application of an SA system that failed due to a static assignment of control authority. Namely, the well-publicized failure of the Boeing 737-MAX Maneuvering Characteristics Augmentation System (MCAS) that caused the crashes of Lion Air Flight 610 and Ethiopian Airlines Flight 302. First, through in-depth real-time simulation, we analyze and demonstrate the ease with which the original MCAS design could fail. Our analysis reveals several novel vectors of failure that were not present in the original crashes. We also analyze Boeing's revised MCAS and show how it falls short of its intended goals. Using these insights, we present Semi-Autonomous MCAS (SA-MCAS), a new MCAS that _both_ meets the intended goals of MCAS _and_ avoids the failure cases that plagued the original MCAS design. We demonstrate SA-MCAS's ability to make correct control decisions of the aircraft, even when the human and autonomous operators provide conflicting control inputs. 
## 1 Introduction

Semi-autonomous (SA) systems--those that take both autonomous and manual inputs to control their actions--are ubiquitous in the modern world, with applications in factories, hospitals, transportation, and more. Often, the purpose of these systems is to improve the safety and efficiency of tasks that take substantial manual effort. Airplanes, for example, use SA control to maintain safe flight while pilots perform other tasks. As a consequence of SA systems' close coupling with safety-critical applications, there is a complicated trade-off between trusting human and autonomous inputs. SA functionality is often included in a system because humans are prone to making mistakes, but autonomous systems are also imperfect, leading to distrust from human operators. SA systems have a tendency to be hard-coded to "trust" the input from one entity over the other. For instance, an autonomous system in the Boeing 737-MAX, the _Maneuvering Characteristics Augmentation System_ (MCAS), was originally given immense authority over the control of the pitch of the aircraft during aircraft stall events. This decision was motivated by a lack of trust in pilot control during these safety-critical stall events, and it had dire consequences: in 2018 and 2019, MCAS played a central role in the crashes of two Boeing 737-MAX aircraft--Lion Air Flight 610 (JT610) and Ethiopian Airlines Flight 302 (ET302). These crashes led to a redesign of MCAS that greatly shifted the balance of control away from MCAS and towards the pilots. To motivate this work, we hypothesize that the MCAS redesign abandons its original responsibility of avoiding stalls due to dangerous pilot pitch control. To test this hypothesis, we conduct a preliminary analysis and demonstrate that Boeing's revised MCAS fixes the problems of the original MCAS in exchange for our hypothesized new issues related to pitch control (see Fig. 2). 
From a security and reliability perspective, we argue that this is the wrong solution to the MCAS problem. Giving one entity full authority to control a safety-critical application, without considering other conflicting control inputs, creates the possibility of a single point of failure. We assert that rather than defaulting the control to one entity, the control should be chosen dynamically based on the vehicle's situation. By making the control authority a moving target, MCAS becomes more tolerant of erroneous/malicious input from either autonomous or manual control. Following this design philosophy and using the Boeing 737-MAX case study, we propose Semi-Autonomous MCAS (SA-MCAS). Unlike the existing implementations of MCAS, SA-MCAS is capable of providing safer control of the aircraft's pitch in the presence of erroneous input from _either_ autonomous or manual control. Prior work on SA vehicle control often considers this problem of reliability as a question of safety rather than security. However, the problem space is just as important for security, since safe control cannot be guaranteed without some security guarantees. Prior work on improving the safety of a vehicle through SA control often relies on the autonomous system without considering the validity of sensor data [1, 2, 3, 4, 5, 6]. Likewise, prior work that investigates competing control between the human and autonomous operators also fails to examine sensor-data correctness [7, 8, 9, 10, 11]. On the contrary, SA-MCAS investigates how to select which operator is allowed to control the vehicle, even in the presence of erroneous sensor readings, which may be due to a malfunctioning sensor or an adversary. We discuss the threat model in § 3. With SA-MCAS, we make the following contributions: 1. Build a MATLAB/Simulink template for simulating control input for aircraft modeled in JSBSim [12]. 
We provide the building blocks for easily creating and evaluating new aircraft control systems.1 Footnote 1: Available on GitHub: Anonymized for the required blind review. 2. Model novel attacks on both MCAS versions in the Boeing 737-MAX. Our analysis uncovers novel attack scenarios resulting from erroneous control input, in addition to the scenarios that occurred in the original Boeing 737-MAX crashes. 3. Propose the SA-MCAS arbiter, a control decision-maker that is capable of accounting for erroneous control inputs from the pilot or autonomous system. The arbiter uses linear-regression models to determine the correct sensor input, and our evaluation demonstrates that SA-MCAS maintains safe control of the aircraft in more flights with erroneous or normal control input than other MCAS implementations.

## 2 Background & Motivation

Before introducing the problem, we provide a refresher on the Boeing 737-MAX, its MCAS system, the issues MCAS introduced, and Boeing's redesign of MCAS. We simulate the original and redesigned MCAS modules to demonstrate remaining safety concerns, and use this demonstration to motivate the remainder of this work.

### _Maneuvering Characteristics Augmentation System (MCAS)_

During the design of the 737-MAX line of aircraft, Boeing sought to certify it as a 737 variant to streamline the aircraft's certification and to minimize the pilot training for airline companies. Compared to the 737-NG, the 737-MAX made a notable change from the CFM56 engine to the LEAP-1B engine, which is larger and placed farther forward. Testing revealed that in specific scenarios, the new engines could push the nose of the 737-MAX upward and cause the aircraft to stall due to a high Angle-of-Attack (AoA) (\(\gtrsim\)18\({}^{\circ}\)). To address this issue, Boeing introduced MCAS, a flight stabilization program that automatically pitches the aircraft down to prevent a stall during high-AoA maneuvering. In general, MCAS operates as follows. 
It first observes the AoA of the aircraft (see Fig. 1) through a sensor. In response to a high AoA, MCAS provides a 2.5\({}^{\circ}\) nose-down control input to the _horizontal stabilizer_ (HS) to push the nose of the aircraft back down and avoid a stall [14]. While initial disclosures to the FAA demonstrated an MCAS that was less intrusive to the flight controls (and deemed a low risk), Boeing failed to notify the FAA of substantial changes to MCAS and made no mention of MCAS in the 737-MAX's pilot manuals. Because pilots were ill-prepared to handle an erroneously engaged MCAS, two deadly crashes followed: ET302 and JT610. While the 737-MAX is equipped with redundant AoA sensors, MCAS was designed to check only the AoA sensor located on the pilot's side. During these flights, the AoA sensor delivered faulty readings that made MCAS believe the airplane's AoA was too high. Consequently, the nose of the aircraft was pushed down by MCAS. To counteract MCAS, the pilot manually trimmed the HS and pulled back on the column to actuate the elevator (see Fig. 1) to raise the nose back up. MCAS again displaced the HS due to the sensor's incorrect readings. After back-and-forth between MCAS and the pilot, the HS was eventually displaced so much that elevator deflection could not counter the effects of the much larger HS. Also, due to aerodynamic factors, the manual HS hand-crank available in the cockpit eventually would not budge. In both catastrophic cases, the aircraft entered a steep nosedive and crashed. The two crashes killed all 346 people onboard and resulted in the grounding of all Boeing 737-MAX aircraft globally. While skilled pilots were sometimes capable of landing aircraft that MCAS negatively impacted, these instances were not reported to any regulatory agencies until after the deadly crashes [15]. In response to these crashes, Boeing proposed a redesigned MCAS with several changes [16]. First, MCAS will now check both AoA sensors. 
If the AoA sensors disagree when the flaps are not up, MCAS will not activate. Second, MCAS will only activate once per sensed event rather than an unconstrained number of times. Lastly, when MCAS does engage, pilots can now override MCAS and perform manual flight at any time, since MCAS will not provide more input on the HS than the pilot can put on the elevator. The final revision approved by the FAA included a few additional requirements on the flight control computer, including integrity monitoring in order to stop erroneously generated trim commands from MCAS [17].

Fig. 1: Relevant information for the pitch of an aircraft. The Angle of Attack (AoA) is defined as the difference between the chord line and flight path of the aircraft. The elevator and horizontal stabilizer are the actuators responsible for pitch control. Image in circle from [13].

### Boeing's Inconsistent MCAS Design

There is an inconsistency in Boeing's revisions:

* Boeing originally designed MCAS due to a lack of trust in the pilot, defaulting to trusting the autonomous control of the aircraft through MCAS;
* Boeing revised MCAS due to a lack of trust in autonomous control, defaulting to trusting the pilot control of the aircraft.

A system in which both the autonomous entity and a human compete for control of a vehicle is called a _semi-autonomous_ (SA) system. Defaulting control to one input in the case of disagreement is a common trend in SA system design. While Boeing designed both versions of MCAS with this default behavior, one can see cases where the pilot is more trustworthy and others where MCAS is: there are instances in flight where either the pilot or the autonomous system could be incorrect. Thus, we argue that neither the original design nor the redesign is the right choice. 
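To make the contrast concrete, the following sketch contrasts the two activation policies described above. It is a simplification based only on the publicly reported behavior: the \(\sim\)18\({}^{\circ}\) stall threshold, the 2.5\({}^{\circ}\) nose-down increment, and the once-per-event rule come from the text, while the sensor-disagreement tolerance is an assumed illustrative value.

```python
# Simplified sketch of the two reported MCAS activation policies.
# AOA_STALL_DEG (~18 deg) and the 2.5 deg nose-down increment come from
# the text; AOA_DISAGREE_TOL_DEG is an assumed illustrative tolerance.
AOA_STALL_DEG = 18.0
NOSE_DOWN_INCREMENT_DEG = 2.5
AOA_DISAGREE_TOL_DEG = 5.5

def original_mcas(aoa_pilot_side, hs_trim_deg):
    """Original MCAS: trusts the single pilot-side AoA sensor and may
    fire repeatedly, accumulating nose-down trim without bound."""
    if aoa_pilot_side > AOA_STALL_DEG:
        return hs_trim_deg - NOSE_DOWN_INCREMENT_DEG
    return hs_trim_deg

def revised_mcas(aoa_left, aoa_right, hs_trim_deg, already_fired):
    """Revised MCAS: requires both AoA sensors to agree and activates
    at most once per sensed high-AoA event."""
    if already_fired:
        return hs_trim_deg, already_fired
    if abs(aoa_left - aoa_right) > AOA_DISAGREE_TOL_DEG:
        return hs_trim_deg, already_fired  # sensors disagree: do nothing
    if min(aoa_left, aoa_right) > AOA_STALL_DEG:
        return hs_trim_deg - NOSE_DOWN_INCREMENT_DEG, True
    return hs_trim_deg, already_fired
```

A pilot-side sensor stuck at a high reading drives `original_mcas` to keep trimming nose-down on every cycle, while `revised_mcas` stays inert whenever the two sensors disagree, illustrating the two opposite defaults discussed above.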
To back this claim, we reconstruct the behavior of the pre- and post-crash MCAS using information that _The Seattle Times_ [14] and the FAA [17] reported to the public, and conduct a preliminary analysis (Fig. 2). We find that the redesign, which fully trusts the pilot, introduces hazards similar to those of the original MCAS, which fully trusted the autonomous system, supporting our claim that neither design choice is the right one. We conclude that our preliminary result calls for a deeper analysis of scenarios that threaten either MCAS version. A well-designed SA system should not default control to one input in the face of conflicting inputs. We use this initial insight to motivate the following research question: _Can we convert an existing SA system to make dynamic real-time control decisions that are tolerant of erroneous/malicious input?_ In the remainder of this paper, we explore this question in the specific case of the MCAS in the Boeing 737-MAX.

## 3 Threat Model

As demonstrated in § 2.2, pitch control of the aircraft is fallible through the input of either the autonomous MCAS or the human pilot. We provide an overview of this threat model in Fig. 3. It is evident from the crashes of the 737-MAX aircraft that erroneous (or compromised) sensor readings are capable of eliciting behavior from the MCAS module that may put the vehicle into a dangerous state. In the case of the prior crashes, this was due to broken or damaged AoA sensors. However, erroneous sensor readings can also be caused by abnormal environmental conditions or by intentional physical or digital tampering with the sensor values by an adversary. More recently, the cybersecurity of aircraft against remote adversaries has been questioned due to a takeover of the internal PA system on some American Airlines aircraft [18, 19]. 
Irrespective of the reason for erroneous sensor readings, they will cause the autonomous system to misunderstand the physical state of the vehicle, leading to incorrect and dangerous control decisions.

Figure 3: An overview of the threat model for SA control of the pitch of an aircraft.

Figure 2: Preliminary study of the fault-tolerance of the Boeing 737-MAX MCAS before and after the crashes. For each, we simulated 737-MAX takeoff using our custom toolkit (§ 5) built on top of JSBSim [12].

Likewise, human factors also create the possibility of dangerous control of SA systems. For instance, pilots with malicious intent can intentionally change the pitch of an aircraft to send it into a nosedive and crash, like the crash of Germanwings Flight 9525 [20]. A pilot could also dangerously control the pitch of an aircraft for reasons other than malicious intent, including inexperience and exhaustion. Thus, when evaluating the threats to MCAS, four scenarios must be considered. In the first scenario, neither MCAS nor pilot input is providing dangerous control to the aircraft. In the second and third scenarios, either MCAS or pilot input is providing dangerous aircraft control. In the final case, both MCAS and pilot input are dangerously controlling the aircraft. In this paper, we focus on the first three scenarios, but discuss how we can leverage prior work to handle the last case in § 8. Recall that MCAS was originally included in the 737-MAX to counter the inexperience of pilots who were trained to fly the 737-NG line of aircraft. While quick remedial action from an experienced pilot was possible [15], MCAS took an extreme form of distrust of pilot control, effectively ignoring the pilot's attempt to undo the erroneous MCAS control. Conversely, the redesign of MCAS takes an extreme form of distrust of autonomous control, enabling the pilot to easily override MCAS. 
## 4 Semi-Autonomous MCAS (SA-MCAS)

Following our preliminary analysis of the Boeing MCAS design before and after the 737-MAX crashes (Fig. 2), we propose Semi-Autonomous MCAS (SA-MCAS), an MCAS that does not bias one control input over the other. SA-MCAS uses an _arbiter_ to cross-validate the sensor readings, first determining whether the pilot or the autonomous control input is correct and then deciding which to grant authority to control the pitch of the aircraft. We provide an overview of the SA-MCAS control-loop in Fig. 4. During each iteration (\(t\)) of the control-loop, airplane sensor data (\(\mathbf{x}_{s}(t)\)) is delivered to SA-MCAS and to indicators in the cockpit. Additionally, pilots use intuition and vision to infer additional information about their environment (\(\mathbf{x}_{e}(t)\)). Using the available data, the pilot and SA-MCAS decide on their control inputs (\(u_{p}(t)\) and \(u_{a}(t)\), respectively). Then, the SA-MCAS arbiter determines which of the control inputs is used to control the pitch of the aircraft. Finally, the resulting behavior (\(y(t)\)) informs the pitch controller of a new state.

**Challenges.** To the best of our knowledge, SA-MCAS is the first to explore the consequences of the designs of the various 737-MAX MCAS revisions. As a result, during the development of SA-MCAS we encountered several challenges that lead to the primary contributions of this paper. Therefore, before returning to the research question posed in § 2.2, we first raise the following technical questions:

* How can we streamline the design and evaluation of MCAS programs? (§ 5)
* Which control inputs from MCAS and the human pilot threaten the safety of the aircraft? (§ 6)
* How can we detect incorrect/dangerous control input and subsequently mitigate it by changing the authority of control? (§ 7)

We answer these questions and evaluate the respective solutions in the following sections.
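Before moving on, the control-loop of Fig. 4 can be summarized in a short sketch. This is an illustrative skeleton under assumed names, not the SA-MCAS implementation; in particular, the placeholder arbiter below (deferring to the autonomous command only when the raw AoA reading exceeds the 18-degree stall threshold) merely stands in for the real arbiter developed in § 7.

```python
def sa_mcas_step(sensors, u_p, u_a, arbiter):
    """One iteration t of the control-loop in Fig. 4: the pilot's
    command u_p(t) and the autonomous command u_a(t) both pass
    through the arbiter, which selects the input it trusts."""
    return arbiter(sensors, u_p, u_a)  # y(t), fed to the pitch controller

def naive_arbiter(sensors, u_p, u_a):
    """Placeholder arbiter: defer to the autonomous anti-stall
    command only when the (unvalidated) AoA reading exceeds 18
    degrees. Name and logic are illustrative assumptions."""
    return u_a if sensors["aoa_deg"] > 18 else u_p

# Normal flight: the pilot's command passes through unchanged.
cmd = sa_mcas_step({"aoa_deg": 5.0}, 0.1, -0.27, naive_arbiter)
```

Note that with this placeholder logic, an erroneous AoA reading above 18 degrees hands authority to the autonomous input unconditionally, which is exactly the failure mode the cross-validating arbiter of § 7 is designed to avoid.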
## 5 MCAS Simulation

In this section, we address the first challenge. Using a real airplane as a testbed for evaluation of SA-MCAS is unrealistic/infeasible due to the high cost of purchasing the aircraft, renting or building a storage facility, and hiring pilots. Thus, the natural solution is to employ a widely-used flight simulation engine. Aerospace companies have custom flight simulators for testing their internal products, but they are often unavailable to researchers. As a result, open-source flight simulators, such as JSBSim [12], are popular among academic researchers. Such simulation tools have been vetted by NASA, validating their accuracy in modeling real flight maneuvers [21]. The JSBSim flight simulator gives us the ability to accurately model the Boeing 737-MAX's flight and control dynamics. Furthermore, because of the ease of modeling control loops in MATLAB Simulink, an integration of JSBSim into MATLAB was developed for this purpose [22]. However, its functionality was limited to just a few hard-coded control inputs, with no account for pilot control. To overcome this inflexibility, we introduce an extension to the JSBSim Simulink module, which includes several user-definable features. Our extension enables the user to select any flight sensor input/output to/from JSBSim, provides a pilot simulation module with customizable scripts for controlling the aircraft, and an MCAS module for easily integrating new MCAS designs. Furthermore, switching between scripts and different MCAS designs is programmable, allowing for automated simulation runs without any additional manual effort.

Figure 4: The SA-MCAS control-loop. SA-MCAS and the pilot control input pass through the SA-MCAS Arbiter to dynamically control the SA system.

In addition to these features, we provide a module for injecting erroneous sensor readings into the JSBSim sensor data. This module is capable of injecting three different classes of erroneous data.
First, _sudden injection_ data, where the sensor data is set to a constant value. Second, _delta injection_ data, where the sensor data is offset by a constant value. Third, _gradual injection_ data, where the sensor data offset is determined by a function of time. A more rigorous definition of the erroneous data is available in § 6.1.

### Simulation Creation Process

We provide a tutorial on creating simulations in our toolkit. The process is split into three stages (Fig. 5).

1. **Initialize the input/output parameters.** Before designing the rest of the simulation, you must define the data available. Our toolkit enables a user to define the flight sensor data s/he wants to have on each iteration of the simulation for use in other simulation components. This information is provided as part of an XML file. Also part of this step is defining the time ranges during which erroneous input occurs and the characteristics of the erroneous input. These data are defined within a JSON file. We show code snippets of these in Fig. 5(a).

2. **Building the MCAS module.** We next integrate a specified MCAS design into the simulation. Using output parameters from the previous step (which also may have been altered by the error-injection portion), we define MCAS activation behavior when specific conditions are met. Fig. 5(b) demonstrates an implementation of the pre-crash MCAS activation conditions.

3. **Scripting the pilot behavior.** Finally, we provide several pilot flight maneuvers as part of the toolkit, such as takeoff, landing, and turning, and adding a new maneuver is straightforward (Fig. 5(c)). Again, this requires output from the first step in order to activate behavior based on specific conditions.

A limitation of this simulated approach is that the simulated pilot lacks some of the finer feel and touch of a real-world pilot. However, we design these simulations using aircraft manuals that make suggestions for typical choices in flight (see the references in Tbl.
1). Therefore, our simulation of pilot behavior is sufficient for investigating error injections and defenses.

### Example Simulation Scenarios

Using our toolkit, we present the simulation results for a few standard flight scenarios. A summary of the flight scenarios is available in Tbl. 1. To validate our simulation accuracy, we graph the 3D trace of the aircraft, as in Fig. 6. The flying traces are shown to accurately follow the path of the desired maneuver. For the purpose of evaluating MCAS, the takeoff and landing maneuvers are the most important, as they are the intended scenarios for MCAS activation. However, we simulate other maneuvers to verify that MCAS will not activate when it is unneeded. Furthermore, we demonstrate the effect of injected errors in the AoA sensor on the flight path, as seen in the preliminary study presented in Fig. 2. This result also demonstrates Boeing's implementation of MCAS pre- and post-crash. In the next section, we will detail novel MCAS failures.

## 6 Launching Attacks on MCAS

Using the erroneous-sensor-injection tool included as part of our simulation toolkit presented in § 5, we launch attacks on MCAS to reveal novel vectors for causing dangerous control of the aircraft. Doing so leads to a conclusion for the second challenge. To lay the groundwork for the remainder of this section, we provide definitions for each of our injection methods. Then, before performing a more rigorous analysis of possible injection scenarios, we demonstrate the case-study crash of JT610.

### Methodology for Launching an Attack

We describe the methodology for launching our attacks, which are divided into two categories: injected sensor values and dangerous pilot behavior. While our threat model in Fig. 3 is more abstract in its representation of these attacks, we showcase a more specific example of sensor anomalies through our simulations called _sensor error injections_.
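As a concrete reference for the first of these two categories, the three sensor-error-injection classes introduced in § 5 can be sketched as simple functions of the true reading. This is an illustrative Python sketch (the toolkit itself is MATLAB/Simulink-based), with assumed function names:

```python
def sudden(real, t, val):
    """Sudden injection: the sensor reports a constant value."""
    return val

def delta(real, t, offset):
    """Delta injection: the sensor reports its true value plus a
    constant offset (e.g., the offset of roughly 15 degrees on
    JT610's left AoA sensor, discussed in the case study)."""
    return real + offset

def gradual(real, t, f):
    """Gradual injection: the reported offset grows as a function
    f of the time elapsed since the injection started."""
    return real + f(t)

# A linear gradual injection f(t) = a*t with a = 2:
reading = gradual(5.0, 3.0, lambda t: 2 * t)  # reports 11.0 at t = 3
```

In each case the injected value replaces the real reading only inside the configured [StartTime, EndTime] window; outside it, the true sensor value passes through unchanged.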
#### 6.1.1 Sensor Error Injections

Injecting simulated errors into the sensor data streams is similar to how either natural sensor failures or adversarial attacks would occur.

\begin{table}
\begin{tabular}{||c|c|c||} \hline **Maneuver** & **Performed in Crash** & **Ref.** \\ \hline \hline Accelerate & \(\bullet\) & N/A \\ \hline Climb & \(\bullet\) & [23] \\ \hline Descend & & [23] \\ \hline Level-Turn & & [24] \\ \hline Climb-Turn & & [23, 24] \\ \hline Descend-Turn & & [23, 24] \\ \hline Holding Pattern & & [25] \\ \hline Takeoff & \(\bullet\) & [23, 26] \\ \hline Landing & & [23] \\ \hline \end{tabular}
\end{table}
TABLE I: Simulated flight maneuvers.

Figure 5(a): the sensor input/output configuration snippet.

```xml
<s_function_config>
  <input>
    <property> fcs/pitch-trim-cmd-norm </property>
  </input>
  <outputs>
    <output name="MCAS">
      <property> aero/alpha-deg </property>
    </output>
    <output name="FlightData">
      <property> position/h-sl-ft </property>
      <property> attitude/theta-deg </property>
    </output>
  </outputs>
</s_function_config>
```

For the _gradual injection_ offset function, we choose a few standard functions: the linear (\(f(t)=at\)), quadratic (\(f(t)=at^{2}+bt\)), and logarithmic (\(f(t)=a\log(t)\)) functions, where \(a\) and \(b\) are predefined coefficients. This error injection mirrors how a drifting sensor failure may occur over time, or how a stealthy adversary may inject error into a sensor data stream. A gradual injection has an additional component just before the injection concludes: the injection gradually recovers to the original sensor value through a two-stage process. First, we move the injected value by 5% in the direction of the real sensor value at each time step. Then, once we are within 10% of the original difference between the injected sensor value at the start of recovery and the original sensor value at that same point in time, we move on to the second stage, where we use a Kalman filter to ease into the real sensor value. More details of this algorithm can be found in Alg.
1.

#### 6.1.2 Dangerous Pilot Behavior

The scope of MCAS's authority for counteracting pilot control is exclusively within the pitch axis of the aircraft in the downward direction. The pilot controls the pitch by either manually cranking the HS or pulling up or pushing down on the control column to adjust the elevator. The longitudinal flight dynamics equation using the short-period approximation is [27]:

\[\begin{bmatrix}\dot{w}\\ \dot{q}\end{bmatrix}=A\begin{bmatrix}w\\ q\end{bmatrix}+B\delta_{e}, \tag{4}\]

where \(w\) is the rate of change in altitude, \(q\) is the rate of change of the pitch, \(A\) and \(B\) are the flight-system transition and control-distribution matrices, respectively, and \(\delta_{e}\) is the elevator input. To provide dangerous control input to the aircraft within the range of command of MCAS, the pilot would need to continuously pitch the aircraft up. They do so by pulling back the control column, which in turn commands a consistent input to \(\delta_{e}\). Eventually, the AoA of the aircraft will exceed 18\({}^{\circ}\), causing the aircraft to stall and experience a significant decline in altitude before entering a nosedive.

### Case Study: Simulation of JT610

In order to understand how injections impacted the real flight of JT610, we model the _delta injection_ that impacted the MCAS decision-making. The flight-data recorder (black box) for JT610 was successfully found by the Indonesian government, and while the raw data was never publicly released, detailed graphs of the data are available for analysis [15, 28]. We use the pilot simulation framework described in § 5 to model the decisions made by the pilots.

Figure 6: Simulated aircraft traces of typical Boeing 737-MAX maneuvers.

Figure 8: Simulation of JT610 without pilot intervention at the onset of MCAS falsely firing for the first time.

Figure 7: Simulation of the pilot operation of JT610 alongside the delta injection with \(\delta=15\).
The flight path of the simulation is plotted against the true flight path of JT610.

We model the pilot decisions in JT610, following the same takeoff procedure. Likewise, we modeled the same AoA delta injection that the left AoA sensor faced, which had \(\delta\approx 15\) for the entire flight from takeoff until the crash. Before takeoff, the injection in the left AoA sensor was more variable, but because it was before takeoff it did not impact the operation of the airplane. Thus, we do not model this in our simulation. As mentioned before, the black-box data from JT610 was never publicly released, but we were able to acquire the flight path of JT610 from Flightradar24 [29]. We overlay this recovered data with our simulation of JT610 in Fig. 7. The overlay on the simulation demonstrates the capability of our toolkit to accurately model MCAS misfires in the presence of incorrect sensor values. The simulation is shown to closely overlap with the true flight path of JT610. Since we are capable of providing an accurate simulation of real piloting of aircraft experiencing injection attacks, we provide a deeper investigation of how our proposed injection attacks impact the simulated aircraft in the following section. Before conducting this deeper investigation, our case study reveals a more interesting pattern that warrants a closer inspection. The pilot of JT610 was capable of maintaining an altitude of \(\approx\)5250 ft. for \(\approx\)7 mins. Had immediate action not been taken by the pilot, JT610 would have entered a nosedive almost immediately (simulated in Fig. 8). In fact, the pilot recovered the aircraft 21 times in a row before becoming overwhelmed and handing off responsibility to the co-pilot, who ultimately failed to recover the aircraft after the hand-off. Ultimately, the crash of JT610 was a combination of MCAS _repeatedly_ activating and the pilot becoming too tired to manually fight against MCAS automatically trimming the HS.
In other words, this case study underscores the pilot's capability of manually recovering an aircraft from _rare_ false-positive activations of MCAS.

### _Evaluation of Proposed Attacks_

To conduct experiments on the different types of sensor threats, we perform a parametric analysis using the sensor-error-injection format in Fig. 5(a). For a given injection type, we keep all inputs to the injection format constant except one, which is the value we target for each analysis.

#### 6.3.1 Sudden & Delta Injections

For the _sudden_ and _delta injections_, we conduct three experiments for each injection: (1) StartTime=100, EndTime=130, and Val set from 0 to 30; (2) StartTime=100, Val=18, and EndTime set from 110 to 180. In experiments (1) and (2), the pilot reacts after 10 secs; (3) StartTime=100, EndTime=130, Val=18, and the pilot's reaction set from 3 to 30 secs. The pilot's reaction is a normalized elevator input of -0.3, followed by a normalized elevator input of -0.15 after 10 more secs. In each of the three experiments, the original MCAS is used, and the range of values for the variable has an increment of 1.

Fig. 9: Analysis of _sudden injection_ for the stated variable on the original MCAS in a Boeing 737-800 MAX.

Fig. 10: Analysis of _delta injection_ for the stated variable on the original MCAS in a Boeing 737-800 MAX.

The results of these simulations for the _sudden_ and _delta injections_ are summarized in Fig. 9 and Fig. 10, respectively. The results for the _sudden injection_ demonstrate that even just exceeding the threshold of MCAS is enough to trigger devastating impact. This is especially apparent in Fig. 9(a), which has two distinct outcomes for the plane. The parametric analyses of EndTime and pilot reaction time show that these variables impact the time-to-crash (Figs. 9(b) and 9(c)). On the contrary, the results for the _delta injection_ show something more interesting, especially around the threshold of MCAS activation.
For instance, the analysis of Val in Fig. 10(a) shows that there are some recoverable scenarios of an MCAS misfire. In the case of Fig. 10(b), a smaller EndTime can elicit either a recoverable flight or a slower descent after an MCAS misfire. Finally, the pilot-reaction-time analysis demonstrates an interesting pattern. As shown in Fig. 10(c), a later reaction can allow the pilot to deflect the aircraft's descent into a more graceful crash of the aircraft, whereas an earlier reaction time may trigger an additional MCAS misfire and quickly push the aircraft back into a quick descent.

#### 6.3.2 Gradual Injections

For the _gradual injections_, we conduct three experiments, one for each of the linear, quadratic, and logarithmic functions. The controlled settings for these experiments are StartTime=100, EndTime=130, and a pilot reaction time of 10 secs for the linear and quadratic simulations, and 22.5 secs for the logarithmic simulations. Coef1 is the coefficient \(a\) in each of the three functions, and Coef2 is the coefficient \(b\) in the quadratic function. For the linear function, we run simulations with Coef1=0 to 3 with a step size of 0.1. For the quadratic function, Coef1=0 to 1 and Coef2=0 to 3, both with step sizes of 0.1. And for the logarithmic function, Coef1=0 to 200 with a step size of 1. The pilot reaction works the same as in § 6.3.1, and the original MCAS is used.

Figure 11: Analysis of _gradual injection_ for the stated function's coefficient on the original MCAS in a Boeing 737-800 MAX.

Figure 12: Breakdown of Coef1 of the quadratic _gradual injection_.

Figure 13: Pilot stalling a Boeing 737-800 MAX with the post-crash revised MCAS. Variables are the target pitch, final climb airspeed, and pitch-up start time.

The results of the simulations are summarized in Fig. 11. The gradual linear injections lead to three behaviors that we notice (Fig. 11(a)). For the smaller values of Coef1, the impact is marginal or causes a very slow descent.
As Coef1>1, we see the flight trajectory either nosedive completely or nosedive into a last-second nose-up before ultimately crashing. In the latter, it appears that a more graceful landing is possible in comparison to a nosedive. In the case of the gradual logarithmic injection (Fig. 11(c)), the flight descends quicker for larger values of Coef1. In the case of smaller values, the descent is more gradual, whereas larger values lead to a nosedive. We break down the gradual quadratic injection simulations based on Coef1, as shown in Fig. 12. The breakdown shows that, starting from Coef1=0 (Fig. 12(a)), the gradual quadratic injection starts with the behavior of the linear function (Fig. 11(a)). With progressive increases in Coef1 (Figs. 12(b) and 12(c)), the gradual quadratic injections converge toward the behavior of a _sudden injection_ (Fig. 9(a)). An adversary may benefit from a gradual quadratic injection in order to achieve similar results as the _sudden injection_ without exhibiting an immediate "sudden" jump to the target value.

#### 6.3.3 Pilot Dangerously Stalls Aircraft

In our final experiment on launching attacks on the MCAS, we take the other side and simulate a pilot attempting to dangerously stall out the aircraft equipped with the post-crash MCAS. Our findings are summarized in Fig. 13. In our simulations, we control three aspects of the pilot's behavior: the pitch the pilot targets (from 25\({}^{\circ}\) to 50\({}^{\circ}\) in increments of 5), the final climb airspeed (from 225kt to 275kt in increments of 10), and the pitch-up start time (from 80 sec. to 160 sec. in increments of 20). As is clear in Fig. 13, all simulations lead to a stall and ultimately crash the aircraft through the subsequent nosedive.

## 7 SA-MCAS Arbiter

With an available simulator for streamlining the design and evaluation of MCAS programs (§ 5) and well-defined threat scenarios (§ 6), we must consider a defense for preventing dangerous control input.
Here, we discuss and evaluate our implementation of the control arbiter in Fig. 4. In doing so, we seek a conclusive answer to the third challenge.

### Training & Testing Dataset

To our knowledge, there is no dataset robust enough for the purpose of our evaluation. Specifically, we require a dataset that not only includes flight sensor data for all of the standard flight maneuvers in Tbl. 1, but also maneuvers that experience dangerous control. The dataset our toolkit produced is summarized in Tbl. 2. Because MCAS is activated during stall conditions, which
\begin{table}
\begin{tabular}{||c|c|c|c||} \hline **Maneuver** & **Parameter(s) Adjusted** & **\# Collected** & **Mission Success Metric(s)** \\ \hline \hline \multirow{3}{*}{Accelerate} & Initial airspeed (kts) & \multirow{3}{*}{121} & Maintain level altitude (\(\pm 10\) ft) \\ & Final airspeed (kts) & & \\ \hline \multirow{3}{*}{Climb} & Initial altitude (ft) & \multirow{3}{*}{100} & Maintain acceptable g-force (\(0<\) and \(<3g_{0}\)) \\ & Final altitude (ft) & & Maintain level altitude along expected route (\(\pm 10\) ft) \\ \hline \multirow{3}{*}{Descend} & Initial altitude (ft) & \multirow{3}{*}{100} & Maintain acceptable g-force (\(0<\) and \(<3g_{0}\)) \\ & Final altitude (ft) & & Maintain level altitude along expected route (\(\pm 10\) ft) \\ \hline \multirow{3}{*}{Level-Turn} & Initial heading (\({}^{\circ}\)) & \multirow{3}{*}{101} & Maintain level altitude (\(\pm 10\) ft) \\ & Turn amount (\({}^{\circ}\)) & & Maintain level altitude (\(\pm 10\) ft) \\ \hline \multirow{3}{*}{Climb-Turn} & Turn amount (\({}^{\circ}\)) & \multirow{3}{*}{121} & Maintain acceptable g-force (\(0<\) and \(<3g_{0}\)) \\ & Climb amount (ft) & & Maintain level altitude along expected route (\(\pm 10\) ft) \\ \hline \multirow{3}{*}{Descend-Turn} & Turn amount (\({}^{\circ}\)) & \multirow{3}{*}{121} & Maintain acceptable g-force (\(0<\) and \(<3g_{0}\)) \\ & Climb amount (ft) & & Maintain level altitude along expected route (\(\pm
10\) ft) \\ \hline \multirow{3}{*}{Holding Pattern} & Initial heading (\({}^{\circ}\)) & \multirow{3}{*}{169} & Maintain level altitude (\(\pm 10\) ft) \\ & Inbound/outbound leg time (s) & & \\ \hline \multirow{3}{*}{Takeoff} & Initial climb airspeed (kt) & \multirow{3}{*}{225} & Maintain an altitude \(>0\) \\ & Climb phase transition altitude (ft) & & Continue to increase in altitude \\ \cline{1-1} & Final altitude (ft) & & \\ \hline \multirow{3}{*}{Landing} & Initial altitude (ft) & \multirow{3}{*}{100} & Reaches \(<50\) ft with safe landing conditions \\ \cline{1-1} & Vertical descent rate (ft/s) & & (pitch \(<4^{\circ}\); airspeed \(<120\) kt) \\ \cline{1-1} & Approach airspeed (kt) & & \\ \hline \end{tabular}
\end{table}
TABLE II: Summary of the dataset used in our evaluation.

are most likely to occur during takeoff and landing, we focus much of our attention on takeoff and landing maneuvers. We generated takeoff maneuvers where the aircraft first ascends rapidly during the initial climb, then climbs more gradually after reaching a specified altitude. The aircraft continues to climb until it arrives at a final cruising altitude, where it remains in level flight. As for landing maneuvers, the aircraft starts at an initial altitude and begins descending at a consistent vertical speed. The aircraft reduces its speed to a target speed and maintains a steady descent. Actual touchdown of the aircraft is unnecessary since MCAS plays no role at this point, so simulations are terminated upon reaching a very low altitude. These takeoff and landing procedures are performed many times, with several key parameters modified (Tbl. 2). In total, we simulated and collected data from 1158 flights, totaling 133 simulated flight hours.

### Model Architecture

For the SA-MCAS arbiter, we opt to use a series of trained linear regression models, each representing a correlated sensor pair. Then, the estimated value from each regression model is compared against the actual measured value.
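To make this model architecture concrete, here is a minimal sketch with toy numbers; the real SA-MCAS models and \(\epsilon\) thresholds are trained on the dataset of Tbl. 2, so every value below is purely illustrative.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y ~ m*x + b for one correlated
    sensor pair (e.g., delta-altitude vs. pitch from Tbl. 3)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return m, my - m * mx

def pair_is_consistent(model, x, y_measured, eps):
    """Accept the measurement when it lies within a trained epsilon
    of the regression estimate; otherwise the pair is flagged."""
    m, b = model
    return abs((m * x + b) - y_measured) <= eps

# Toy training data for one sensor pair; eps is likewise illustrative.
model = fit_line([0, 1, 2, 3], [0.1, 2.1, 3.9, 6.1])
ok = pair_is_consistent(model, 2, 4.0, eps=0.5)            # plausible
flagged = not pair_is_consistent(model, 2, 20.0, eps=0.5)  # outlier
```

Linear models suffice for this purpose because only strongly correlated pairs (absolute correlation above 0.8, per Tbl. 3) are retained in the first place.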
For each comparison, if the difference between the two values is within a trained \(\epsilon\), then we deem the sensor measurement to accurately measure the environment. Otherwise, the pair is flagged for additional inspection. For the regression estimates that do not match the measured value, we use several correlated sensor pairs to pin down exactly which sensor of the flagged pair is causing the inaccurate estimate. To determine the correlated sensor pairs and train the linear regression models, we use the dataset described in Tbl. 2. During the training, we exclude the data with sensor injections and dangerous pilot control that we generated in § 6. This is because the goal of the regression models is to understand normal control input for correlated sensor pairs in order to later infer when a particular control decision is normal. Of the data in Tbl. 2, we split 30% for training and 70% for testing, where we split randomly within each maneuver type in order to have equal representation.

#### 7.2.1 Correlated Sensor Pairs

Using our dataset summarized in Tbl. 2, we determine correlated pairs of sensed data across each flight maneuver as well as all maneuvers as a whole. Our findings for all maneuvers as a whole are summarized in Tbl. 3. We automatically drop pairs of sensors that have an absolute correlation \(\leq\)0.8. We additionally use a relationship between the AoA and pitch due to their values being closely coupled with one another.

### Metrics for Evaluation

During the evaluation of the SA-MCAS arbiter, our metric of concern is primarily overall flight safety. In other words, after injecting erroneous data or commanding the pilot to perform a dangerous control input, we are most concerned with whether SA-MCAS intervenes to prevent the simulated flight from crashing. As we show in our preliminary study in Fig. 2, the individual MCAS versions are capable of preventing _separate_ dangerous control, but our evaluation seeks to prevent _both_.
Thus, an important aspect of our evaluation is whether, ultimately, the aircraft safely _finishes_ its mission. Here, we define "mission" to mean the duration of any particular maneuver; a more precise definition would be the journey of an airplane from takeoff to landing. Since the latter definition is too general, we use the former definition to provide a more fine-grained assessment of each type of dangerous control that we studied in § 6. In doing so, we provide a rigorous argument for the success of SA-MCAS. To clarify what it means for a mission to finish successfully, we provide metrics for distinguishing success from failure for each maneuver, included in Tbl. 2.

### Evaluation of SA-MCAS Arbiter's Prevention of Dangerous Control

We evaluate the SA-MCAS arbiter. Using our design (§ 7.2), we train linear regression models for each sensor pair in Tbl. 3, as well as a linear regression for the relationship between the pitch and the AoA of the aircraft. During simulation of flights, we use these regression models to take one value of a correlated sensor pair to estimate the other value. Then, we determine the validity of the measured sensors using the estimated sensors. To validate the AoA sensors for use by MCAS, we specifically estimate the pitch of the aircraft from the change in altitude of the aircraft to verify whether the pitch is normal. We then use the pitch to estimate the AoA of the aircraft to verify whether the AoA is normal. Additional sensor pairs can be leveraged to increase the trust in our estimations.

#### 7.4.1 Sensor Error Injection

For sensor error injections of any type (i.e., _sudden_, _delta_, or _gradual injections_), the SA-MCAS arbiter delivers the correct control of the aircraft's HS in _every single modeled injection scenario_ that we report in § 6.3.1 and § 6.3.2.
In other words, the SA-MCAS arbiter is able to perfectly account for when the AoA sensor is exhibiting behavior that would lead to dangerous control of the aircraft, and prevents it from doing so.

\begin{table}
\begin{tabular}{||c|c|c||} \hline **Sensor \#1** & **Sensor \#2** & **Corr. Val.** \\ \hline \hline Altitude & Temperature & -0.85 \\ Altitude & Air pressure & -0.99 \\ \(\Delta\)Altitude & Pitch & -0.98 \\ \(\Delta\)Altitude & \(x\) acceleration & 0.81 \\ Pitch & \(x\) acceleration & 0.86 \\ Roll & \(\Delta\)Heading & 0.95 \\ \(\Delta\)Roll & Yaw acceleration & 0.94 \\ \(x\) acceleration & Engine \% & 0.87 \\ \(x\) acceleration & Throttle & 0.86 \\ \(z\) acceleration & Roll acceleration & -0.80 \\ Engine \% & Throttle & 0.95 \\ Temperature & Air pressure & 0.91 \\ \hline \end{tabular}
\end{table}
TABLE III: Sensor pairs with an absolute correlation \(\geq\)0.8. A \(\Delta\) signifies a rate of change of the stated sensor.

#### 7.4.2 Dangerous Pilot Behavior

To evaluate the SA-MCAS arbiter on dangerous pilot behavior, we reuse our simulated pilot maneuvers from § 6.3.3. In this case, not all simulated flights were recoverable. We separate the flights based on whether they were recovered in Fig. 14. While overall just 68/180 flights were recovered, a breakdown of the pilot behavior scenarios reveals the shortcomings of the SA-MCAS arbiter. If we break down the 180 simulated flights based on when the pilot starts to pitch up the aircraft into a stall (Tbl. 4), the odds of flight recovery increase as the pitch-up time increases. This is primarily because the aircraft gains more altitude with more time before the pitch-up into a stall. A common mantra in the aeronautics industry is that "altitude is life insurance," and this certainly holds here. While their impact is less than that of the pitch-up time, the target pitch and final climb airspeed also both impact SA-MCAS's ability to recover the aircraft from dangerous flight behavior.
For instance, a large target pitch may put the aircraft into such an extreme stall that the aircraft is doomed from the start. Since MCAS only controls the HS and has no authority over the elevator, there is nothing the SA-MCAS arbiter can do to stop the pilot from dangerously pulling back the column to control the elevator.

**Conclusion for the third challenge:** _We present an evaluation of SA-MCAS, an MCAS that is capable of resolving control conflicts between the manual and automatic input through an SA-MCAS arbiter. It is less susceptible to the previously identified control threats, preventing pitch control that puts the aircraft into a dangerous state._

## 8 Discussion

### _Other Attacks_

There are a few additional attacks that we would like to discuss further.

#### 8.1.1 Instantaneous Injection of Incorrect Sensor Values

The _instantaneous injection_ creates incorrect values at singular points in a sensor's readings. These sorts of injections can either be _periodic_ or _random_ in their behavior. While we do not evaluate these here, SA-MCAS assesses each individual sensor value. During our evaluation of the three main injections in this paper, no singular erroneous sensor value was missed by SA-MCAS.

#### 8.1.2 Pilot Provides Dangerous Nosedive Input

Our evaluation of dangerous pilot input only assesses a pilot attempting to stall the aircraft, as this is the intended scenario for MCAS to activate. In other words, MCAS is not equipped to detect dangerous nose-down events and mitigate them. Thus, our evaluation did not cover this dangerous input. However, given that MCAS does have the ability to control the HS of an aircraft, it is possible that it _could_ be given the authority to control the HS for dangerous nose-down events. In such a case, the SA-MCAS arbiter may be used to counteract such an event in order to stop the aircraft from entering a nosedive.
#### 8.1.3 Both Control Inputs are Erroneous

We evaluated the cases where _either_ the pilot delivers dangerous manual control to the aircraft _or_ the MCAS delivers unsafe autonomous control to the aircraft. What we did not evaluate is what happens when _both_ of these inputs are unsafe. A limitation of the current version of SA-MCAS is that at least one of the control modes must be delivering safe control input. The problem of delivering safe control to an SA system where all actors/operators are attempting dangerous control has been studied in other, simpler domains. For instance, one can design an SA system where we have full knowledge of its state-space. Then, we may only allow control of the SA system along state-space paths that guarantee its safety [30]. However, for more sophisticated systems such as aircraft, providing a complete state-space is intractable due to the exploding number of states one can generate. To alleviate this issue, prior work loosened the guarantees provided for the safety of the system in favor of a probabilistic approach to safe operation [31, 32, 33]. Rather than generating the entire state-space, just a few important sections of the state-space are generated. During operation, the paths with the highest probability of safe operation are favored over those with lower probabilities of safe operation. In our case, these solutions are complementary to the desired objective of SA-MCAS.

\begin{table}
\begin{tabular}{||c|c||} \hline **Pitch-Up Time** & **\# Success / \# Total** \\ \hline \hline 80 sec. & 0/36 \\ 100 sec. & 5/36 \\ 120 sec. & 14/36 \\ 140 sec. & 21/36 \\ 160 sec. & 28/36 \\ \hline \end{tabular}
\end{table}
TABLE IV: Successful flight recoveries broken down by the time the pilot dangerously pitches up into a stall.

Figure 14: Pilot stalling a Boeing 737-800 MAX with the SA-MCAS. Variables are the same as in Fig. 13. 68/180 = 37.78% of flights are recovered from crash.
To extend them to our work, we may define a set of safe states for the airplane to wait in (such as the holding pattern) while the SA-MCAS arbiter determines the correct course of action. Then, in order to make the correct, safe control decision, we may use prior work on local view reconstruction to rebuild the erroneous sensor data into the form that is most probably correct [34]. This option is the last resort and undesirable if we are capable of using a safe control decision that comes directly from either the pilot or MCAS. ### _Limitations_ Below we discuss limitations of our work and potential directions for addressing them. We also discuss the trade-offs and provide alternative choices for when SA-MCAS's trade-offs are insufficient for a certain user. For the cases where iterative improvements may be available, we leave these possible extensions to SA-MCAS as future work. **Dangerous Pilot Input Prevention is Effective in a Limited Scope of Conditions.** As we briefly allude to in § 7.4.2, there is a limited scope of conditions under which the SA-MCAS arbiter is incapable of recovering an aircraft from dangerous input. Generally, these simulations occur at a much lower altitude. While one solution would be to prevent the pilot from providing a dangerous control input on the elevator in addition to the HS, the MCAS is not given authority over the elevator. To do this would require FAA approval. Thus, a limitation of SA-MCAS is that these dangerous pilot controls are out of the scope of its capabilities. **Passenger Trust.** In the aftermath of the Boeing 737-MAX crashes, passenger trust towards the 737-MAX aircraft has slowly recovered. The main contributor to this revival of faith has been the dropping of MCAS as a tool that is capable of having authority to autonomously control the pitch of the aircraft. One drawback of our work is that it assumes that we can regain the trust of such passengers with a dynamic-authority SA-MCAS. 
However, we note that the airline industry is not the only one facing this challenge--the autonomous vehicle industry has faced several controversies due to issues in the self-driving algorithms that lead to deadly accidents [35, 36]. While restoring public trust in autonomous systems is outside the scope of this paper, we acknowledge the drawback that this issue presents to SA-MCAS. In order to regain this trust, we propose that a system such as SA-MCAS should be introduced in such a way that would (1) educate the pilot on its autonomous functions and limitations so the pilot will not over-trust the SA system, and (2) give the pilot the capability to disable SA-MCAS in the event that issues with the algorithm arise. Before ever being put into the air, SA-MCAS should also go through hundreds of simulated flight hours with real pilots in order to establish faith with regulation agencies such as the FAA. ## 9 Conclusion In this paper, we introduced SA-MCAS arbiter, a system for deciding who to trust when a human pilot and the autonomous MCAS module of the Boeing 737-MAX are in disagreement. Our analysis of the control threats of the post- and pre-crash MCAS version motivate the need for an MCAS that can make such dynamic control arbitration. We demonstrate SA-MCAS arbiter is capable of providing the correct control input in all cases of injected erroneous sensor values as well as many instances of dangerous pilot behavior, especially at a high altitude. Our results encourage recommendations toward Boeing to include a system such as SA-MCAS as an integrity checker in the flight control computer of the Boeing 737-MAX and other aircraft in order to achieve the FAA's flight directive in [17].
2308.03327
Parametric excitations of coupled nanomagnets
We demonstrate that parametrically excited eigenmodes in nearby nanomagnets can be coupled to each other. Both positive (in-phase) and negative (anti-phase) couplings can be realized by a combination of appropriately chosen geometry and excitation field frequency. The oscillations are sufficiently stable against thermal fluctuations. The phase relation between field-coupled nanomagnets shows a hysteretic behavior with the phase relation being locked over a wide frequency range. We envision that this computational study lays the groundwork to use field-coupled nanomagnets as parametrons as building blocks of logic devices, neuromorphic systems or Ising machines.
Domonkos Laszlo Farkas, Gyorgy Csaba
2023-08-07T06:19:58Z
http://arxiv.org/abs/2308.03327v1
# Parametric excitations of coupled nanomagnets ###### Abstract We demonstrate that parametrically excited eigenmodes in nearby nanomagnets can be coupled to each other. Both positive (in-phase) and negative (anti-phase) couplings can be realized by a combination of appropriately chosen geometry and excitation field frequency. The oscillations are sufficiently stable against thermal fluctuations. The phase relation between field-coupled nanomagnets shows a hysteretic behavior with the phase relation being locked over a wide frequency range. We envision that this computational study lays the groundwork to use field-coupled nanomagnets as parametrons as building blocks of logic devices, neuromorphic systems or Ising machines. Introduction Our research is motivated by the possibility of using the oscillation phase of a nanomagnet as the information-carrying physical variable in a computing system. The idea of using the phase of a parametrically-excited system as a variable in computing has a long history [1] - in fact the magnetic parametron was one of the first practically successful computing paradigms before the age of integrated, transistor-based digital electronics. In this paper - following the footsteps of earlier works [2; 3] - we first show that small-sized nanomagnets can indeed act as parametrons. This means that they produce \(f_{0}\)-frequency oscillations in response to a \(2f_{0}\)-frequency excitation and can be locked to this external pumping field with two distinct, stable phases. To use individual magnetic parametrons as building blocks of computing devices, they should be interconnected with each other. One possibility is to use the stray field of the magnets for this, which comes for free, not requiring additional hardware. 
The oscillatory stray field of a neighboring magnet shifts the resonance frequency of a node in such a way that a given \(2f_{0}\) excitation may favorably excite only one or the other phase, effectively realizing positive (in-phase) or negative (anti-phase) couplings between the phase variables of the magnets. Nanomagnets coupled by their stray fields have been intensely researched for applications in beyond-Moore computing paradigms. For example, in Nanomagnet Logic (NML) the magnetic orientation of a nanomagnet acts as a logic variable [4], and this magnetic orientation is, in turn, controlled by the field-coupling to neighboring magnets. The NML architecture enables a potentially low-power, non-volatile, straightforwardly realizable architecture for Boolean and non-Boolean functions. In this paper we extend the NML paradigm to dynamically-coupled nanomagnets. We call the field-coupled building blocks Coupled NanoMagnet Parametrons (CNMPs). The realization of computationally useful magnetization patterns requires both negatively (anti-phase) and positively (in-phase) coupled magnets or at least negatively coupled ones. In-phase coupled magnets alone are not computationally useful as they have only a trivial ground state with all magnets oscillating in phase. We will show that the strength and the sign of couplings can be engineered by choosing the right parametric excitation frequency for a given geometry. Oscillatory states of a stand-alone nanomagnet Oscillatory eigenstates of nanoscale magnets are a well-studied topic and the reader is referred to [5] for more information. Here we are primarily interested in parametrically excited quasi-uniform oscillation modes in small magnets as they are the ones that provide sufficiently strong fields for coupling to neighbors. ### Simulation setup We study an ellipse-shaped (major axis=160 nm, minor axis=80 nm, thickness=20 nm) YIG nanomagnet, see Figure 1. 
The elliptical shape is useful to increase the ellipticity of the magnetization precession orbit and remove the degeneracy of eigenmodes. A static bias field of 500 mT is applied along the easy axis of the nanomagnet. Additionally, a weaker sinusoidal field is applied to generate oscillations of the magnet. The sinusoidal field is aligned parallel to the bias field for parametric excitation and perpendicular to the bias field in case of direct excitation of a mode. Figure 1: Geometry for exciting the mode with eigenfrequency \(f_{0}\) of an ellipse-shaped magnet (2b=a). In \(a)\), the eigenmode is excited by its eigenfrequency with a field perpendicular to the bias field (and thus falls in the same plane as the oscillation). \(b)\) As the spins’ precession orbit is elliptical in this eigenmode, it is also possible to parametrically pump it with a double-frequency (\(f=2f_{0}\)) excitation along the bias field’s direction. The relatively small nanomagnet size is chosen for two reasons. The quasi-uniform oscillation creates the strongest stray field, so the strongest couplings to nearby magnets are expected. In addition to this, for such small nanomagnets only the lowest-energy eigenmode is accessible with a straightforwardly applicable \(f<20\) GHz excitation frequency - this simplifies the physical picture by removing the internal degrees of freedom. For the calculations the Mumax simulation code [6] is used with standard YIG parameters (\(M_{s}=1.40\)E+5 A/m, \(A=3.65\)E-12 J/m, \(\alpha=0.0005\)) and we assume that this thin film is precisely modeled by a two-dimensional simulation (single layer of computing cells along the \(z\) axis, but with 20 nm height). We used the temperature-dependent module of Mumax and most simulations (unless otherwise indicated) were done at room temperature (\(T=300\) K). It simulates thermal fluctuations by adding a white-noise-like effective magnetic field to all the computing cells. 
The higher the temperature, the more prominent the fluctuations become. Most simulations were repeated multiple times using different seeds for random thermal field generation - this removes simulation artifacts and ensures the effects we see are robust against thermal noise. Thermal effects do influence the results: for example, parametric excitations require significantly higher excitation power than they would require at zero temperature. ### Localizing eigenfrequencies of modes Eigenfrequencies of a nanomagnet are determined by the effective magnetic field, which includes the magnet's own demagnetizing field and external field sources. We use a high-bandwidth pulse (impulse response) and the thermal spectrum to identify eigenmode frequencies. Both methods excite a wide range of frequencies, thus they excite all eigenfrequencies at once. Applying a Fourier transform (FFT) to the time-domain magnetization dynamics will show them as peaks in the spectral domain. Although the two excitation methods may provide similar results, looking at thermal fluctuations has some benefits. First, it has a more uniform spectral density, so it excites all eigenfrequencies more evenly. Secondly, it can be prolonged at any scale (e.g. for better FFT resolution) with the same effectiveness, whereas the impulse response results in an exponential decay. Thirdly, the high magnetic field of the impulse may distort the system's behaviour, as eigenfrequencies depend on the magnetic field. Eigenmodes may alternatively be identified by sweeping the frequency over time and determining the frequency where maximum-amplitude oscillations occur - this method is useful if the scanned frequency range is narrow. ### Excitation of eigenmodes, parametric pumping Parametric excitation of nanomagnet eigenmodes is widely studied, see e.g. [7, 8]. Here we focus on perhaps the simplest case of a parametric process, the excitation of a quasi-uniform mode, leaving the oscillation phase a free variable. 
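The spectral-peak identification of eigenfrequencies described above (FFT of the time-domain magnetization, with eigenmodes appearing as peaks) can be sketched as a short post-processing routine. This is an illustrative script on synthetic data, not part of the Mumax workflow; the two tone frequencies and noise level are made-up values for the check:

```python
import numpy as np
from scipy.signal import find_peaks

def eigenfrequencies(m_t, dt, n_peaks=3):
    """Estimate eigenfrequencies from a time series of the average
    magnetization m_t sampled with step dt (thermal noise or impulse
    response), as the highest peaks of the FFT magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(m_t - np.mean(m_t)))
    freqs = np.fft.rfftfreq(len(m_t), d=dt)
    peaks, props = find_peaks(spectrum, height=0)
    # keep the n_peaks highest peaks, returned sorted by frequency
    top = peaks[np.argsort(props["peak_heights"])[-n_peaks:]]
    return np.sort(freqs[top])

# synthetic check: two modes buried in thermal-like noise
dt = 1e-12                       # 1 ps sampling
t = np.arange(0, 50e-9, dt)      # 50 ns window -> 20 MHz resolution
rng = np.random.default_rng(0)
m = (np.sin(2*np.pi*15.7e9*t) + 0.5*np.sin(2*np.pi*17.2e9*t)
     + 0.2*rng.standard_normal(t.size))
f = eigenfrequencies(m, dt, n_peaks=2)
```

A longer sampling window directly improves the frequency resolution (here 1/(50 ns) = 20 MHz), which is why the prolongable thermal spectrum is convenient.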
A simple picture of parametric excitation is illustrated in Figure 1b, contrasting it to the direct excitation of 1a. In direct excitation, a resonant excitation field is applied perpendicular to the static magnetization, maximizing the torque. Although resonant excitation requires lower excitation power, the resulting oscillation does not have a degree of freedom such as the two possible phases of parametric pumping. In parametric excitation, the ellipse-shaped nanomagnet is excited parallel to the bias field with an oscillatory magnetic field of \(B_{osc}\) (typically in the 25 mT range) and frequency \(f=2f_{0}\). As Figure 1 shows, a precessional motion with \(f_{0}\) frequency and non-zero ellipticity will have a component oscillating at \(2f_{0}\) frequency along the biased easy axis. The parametric pumping will couple to this component and sustain the precession against dispersion losses. As illustrated in Figure 2a), a small eigenoscillation (buried in thermal noise) occurring at \(f_{0}\) will be amplified to large amplitudes. Figure 2a) is an illustration, and Figure 2b) is a result of actual simulations of an ellipsoidal nanomagnet, starting from different, temperature-induced random states. It typically takes several hundred oscillation cycles to reach the steady-state oscillation amplitude. A key feature of parametric excitation, as shown in Figure 2, is that the oscillations of the magnet may occur in one of exactly two possible locked phases with respect to the excitation frequency. In stand-alone magnets the phase is 'decided' randomly by the initial random state. It is worthwhile to note that in larger-sized magnets, non-uniform modes will be excited by parametric pumping, and they add additional degrees of freedom to the system. So locking to the subharmonic of the excitation could happen in multiple ways, as shown in Figure 3. We chose the size of the nanomagnet in the deep submicron regime to suppress the formation of these modes. 
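The two-phase locking of a parametron can be reproduced with a generic toy model: a damped oscillator whose stiffness is pumped at twice its eigenfrequency (a Mathieu-type equation). This is only a sketch of the mechanism, not the micromagnetic simulation; all parameter values below are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

f0 = 1.0                       # eigenfrequency (arbitrary units)
w0 = 2 * np.pi * f0
gamma = 0.02 * w0              # damping rate
h = 0.25                       # pump depth, above threshold 4*gamma/w0 = 0.08
T = 120.0                      # long enough for the growing mode to dominate

def rhs(t, y):
    x, v = y
    # stiffness modulated at twice the eigenfrequency: parametric pumping
    return [v, -2*gamma*v - w0**2 * (1 + h*np.cos(2*w0*t)) * x]

def locked_phase(x0, v0):
    """Phase (mod 2*pi) of the f0 response relative to the pump,
    measured after the parametric oscillation has built up."""
    t_tail = np.linspace(T - 20, T, 2001)
    sol = solve_ivp(rhs, (0, T), [x0, v0], t_eval=t_tail,
                    rtol=1e-9, atol=1e-12)
    c = np.sum(sol.y[0] * np.exp(-1j * w0 * sol.t))  # demodulate at f0
    return np.angle(c) % (2 * np.pi)

# random (thermal-like) initial states all lock to one of two phases, pi apart
rng = np.random.default_rng(1)
phases = [locked_phase(*rng.standard_normal(2)) for _ in range(6)]
```

Whichever random state the run starts from, the subharmonic response settles into one of exactly two phases separated by π, mirroring the behavior seen in Figure 2b).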
## III Mode coupling in coupled nanomagnets In case of two magnets placed next to each other, the stray field shifts the eigenfrequencies. We can obtain the modified system's eigenfrequencies with the same methodology as for a stand-alone magnet. First we used the thermal fluctuations as a uniform, high-bandwidth excitation, which excited all modes at once. Then we applied FFT on the sampled average magnetization of each magnet separately to localize the eigenfrequencies. Figure 3: a) In case of larger-sized magnets, modes can be localized on one side of the magnet. Thus parametrically excited eigenmodes may form with different amplitudes due to these internal degrees of freedom. The size of the magnet was 640 nm x 320 nm; temperature was set to zero for these simulations. Figure 2: Parametric pumping allows oscillations with two different phases and the same amplitude. Subfigure b) shows that the oscillation phase is more robust to thermal fluctuations than the amplitude. The magnet’s major axis was 160 nm, the minor 80 nm, and the magnetization precesses in a coherent way. We observed that the presence of another magnet clearly affects the eigenfrequencies. In case the magnets were placed next to each other along their hard axis, the eigenfrequencies were lowered by a few tens of MHz. On the other hand, when the magnets' easy axes coincided, the effect was the opposite: the eigenfrequencies increased. In both cases, the frequency shift decreases as the pair's distance increases, converging to the stand-alone magnet case. Additionally, we see new peaks, which faded as the gap increased. Figure 4 shows the obtained frequency domain around the first eigenfrequency in a constellation where the magnet pair was placed next to each other along their hard axis (the other eigenfrequencies' or the other constellation's domain is quite similar, only higher in frequency). 
Two separate peaks appear, of which the second one diminishes if we apply FFT on the full system's average magnetization rather than separately on each magnet. This indicates that the second peak corresponds to the anti-phase oscillation of the magnets, where they cancel out on average, whereas the first peak corresponds to the same-phase oscillation. If only thermal excitations drive the system, then the two phases are present simultaneously. If parametric pumping is applied, then one or the other phase configuration (whichever is closer to \(f_{0}\)) will be amplified. It is worthwhile to note that the modes' spatial distribution slightly changes compared to the stand-alone magnet setup: they become more non-uniform; there is a gradual increase in amplitude going from the magnets' close ends towards their far ends. Figure 4: The geometry of two coupled nanomagnets and their thermal spectrum. The peaks correspond to eigenmodes coupled in different phases. The magnets were placed next to each other along their hard (\(y\)) axis. ### Sweeping the frequency around the resonance Coupling of the magnets was studied by extracting their phases from the mumax simulations. First we calculated the volume-averaged magnetization for the two neighboring magnets. The magnetization values were normalized in the sliding window by the ratio of the maximum value (per magnet) and a predefined constant. This step removes the amplitude changes of the oscillations. Secondly, we took the dot product of the two magnets' normalized values in the window. This dot product is positive for in-phase oscillations, as all terms are products of two _same_-sign values, and it is negative for anti-phase oscillations, as all terms are products of two _different_-sign values. There is a strictly monotonic transition between the two. 
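A minimal sketch of the sliding-window phase indicator described above: each window is normalized to remove amplitude changes, and the sign of the windowed dot product distinguishes in-phase from anti-phase oscillation. The window length and test signals are illustrative assumptions, and the normalization is simplified to a per-window maximum:

```python
import numpy as np

def phase_correlation(m1, m2, window):
    """Sliding-window indicator of the phase relation between two
    oscillating signals (e.g. volume-averaged magnetizations of two
    magnets): positive for in-phase, negative for anti-phase."""
    out = np.zeros(len(m1) - window)
    for i in range(len(out)):
        w1 = m1[i:i+window]
        w2 = m2[i:i+window]
        # normalize each window by its own maximum to remove amplitude
        w1 = w1 / np.max(np.abs(w1))
        w2 = w2 / np.max(np.abs(w2))
        out[i] = np.mean(w1 * w2)   # > 0 in-phase, < 0 anti-phase
    return out

# synthetic check: same signal with different amplitudes and signs
t = np.linspace(0, 100, 10000)
a = np.sin(2 * np.pi * t)
corr_in = phase_correlation(a, 0.5 * a, 200)    # in-phase pair
corr_anti = phase_correlation(a, -0.3 * a, 200) # anti-phase pair
```

Because of the normalization, the indicator is insensitive to the (generally different and time-varying) oscillation amplitudes of the two magnets.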
Exciting the CNMP with different excitation frequencies around the stand-alone magnet eigenfrequency showed that, depending on the frequency, both negative (anti-phase) and positive (in-phase) couplings may occur. A clear boundary separates the positive and negative couplings' regimes. Figure 5 shows this separation, using the above-mentioned indicative phase-difference evaluation. The figure also illustrates that the easy and hard axis constellations' coupling regimes look similar, but the positive-negative coupling regimes are reversed, i.e. the order in frequency is the following: hard axis constellation's positive coupling regime (15.60 GHz - 15.64 GHz), hard axis constellation's negative coupling regime (15.65 GHz - 15.69 GHz), stand-alone magnet's eigenfrequency peak (15.67 GHz - 15.69 GHz), easy axis constellation's negative coupling regime (15.69 GHz - 15.73 GHz), easy axis constellation's positive coupling regime (15.74 GHz - 15.79 GHz). So in both cases the negative couplings are closer to the stand-alone eigenfrequency than the positive couplings. This separation can be explained by the eigenfrequencies' dependence on the magnetic field. The two magnets' fields affect each other's effective fields, and therefore their eigenfrequencies as well. However, this change depends on the coupling phase, because the magnets' demagnetizing-field components and the other magnet's stray field may sum up or cancel out depending on the sign of the coupling, so the positive and negative couplings have different eigenfrequencies. Figure 5: Phase correlation of magnets at different frequencies for hard (a) and easy (b) axis constellations. The black line is an indicative phase relation of the two magnets' oscillations, while the yellow texts in the middle show the excitation frequencies at different time intervals. The magnetization is relaxed (without RF driving) to a thermal state after each magnetization interval. 
The negative couplings might be closer to the stand-alone eigenfrequency because more components cancel out, or the summed-up components have opposite effects on the eigenfrequencies. The size of the gap between the magnets is crucial for the separation of the coupling regimes. Figure 6 shows the same kind of simulation with a wider gap (80 nm) and with a much wider gap (1000 nm), compared to the previous results of Figure 5 (20 nm). One can see that the two regimes are pushed closer together at 80 nm, and completely coincide at 1000 nm (there is no separation). Furthermore, the resonant domain gets closer to the stand-alone eigenfrequency. These observations are not surprising: as the gap size increases, the magnets' effect on each other's effective field decreases, so they get closer to the stand-alone magnet case. If the magnets are too far away, then there is no coupling between them and they randomly 'choose' one of the two possible phases (forced by the parametric excitation). Their phase difference is only stable because the individual oscillations (parametrons) cannot switch phase after they have reached a sufficiently large amplitude. But in different simulations that use different seeds for thermal field generation, the positive-negative "coupling" pattern would be completely different. There might be a switch in case of coupling around the separation border, where the lower-frequency coupling has a small steady-state oscillation amplitude. Figure 6: Phase correlation of magnets placed at different distances. a) For far-away magnets (80 nm), there is a relatively narrow frequency range where the pumped magnets are in a definite phase relation. This frequency regime opens to a much wider range when the magnets get closer to each other. b) For even further-away magnets (1000 nm), the phase relation is random and the eigenfrequency shifts back to the stand-alone magnet case, i.e. the magnets don’t affect each other. 
Thermal fluctuations can switch this small-amplitude oscillation into the other coupling, but for the reverse switch, much higher fluctuations are required, as the oscillation amplitude is much higher and therefore more stable against noise. Finally, a general rule is that, for any constellation and coupling, lower frequencies - if still in the excitable regime - result in higher-amplitude oscillations. This is also due to the dependence of the eigenfrequency on the magnetic field and will be discussed in more detail in the following section. ## IV Hysteretic behavior of phase coupling In the previous sections (Figure 5) the frequency was changed by allowing the magnets to relax (i.e. their phases to randomize) between the steps of the parametric excitations. Here we study the oscillation's behaviour during a frequency sweep without relaxing the magnets at the frequency steps. The results are shown in Figure 7. The phase relation shows a hysteretic behavior: a phase relation, once formed between the magnets, remains stable over a wide range of frequencies, even at frequencies where the opposite phase would be energetically favorable. Apparently, CNMPs have a hysteresis similar to that found in large-amplitude oscillations of stand-alone magnets [9]: stabilized oscillations can stay in the same coupling while sweeping through the other coupling's regime and to much lower frequencies, where the pair cannot be excited in either phase without hysteresis. Moreover, the oscillation amplitude increases quite notably until a sharp decay. Even a parametrically excited stand-alone magnet can hold its phase once the oscillation is strong enough, so it is reasonable that CNMPs do not switch either, especially not to a lower-amplitude excitation. The oscillation can retain and even increase its amplitude because of the magnetic-field dependence of the eigenfrequencies, notably via the magnets' demagnetizing fields. 
The greater the oscillation, the smaller the magnetization component along the easy axis, which lowers the eigenfrequency. Thus, if the frequency is decreased slowly, the steady-state oscillation amplitude increases and shifts the eigenfrequency downwards, so it stays close to the excitation frequency. At high amplitudes the oscillation starts to become less stable (see the increasing amplitude deviations in Figure 7 at high amplitudes), and eventually the excitation cannot counter the growing dispersion losses and a fast decay starts. The decreasing oscillation shifts the eigenfrequency back, further from the excitation frequency, so the decay even accelerates itself. Note that this hysteretic behaviour is much less prominent if we initially start from the lower-frequency coupling in either constellation. Figure 7: Oscillation amplitude of magnet pairs while sweeping the excitation frequency (purple stepwise line) downwards. The initial frequency was set such that it is in the middle of the higher-frequency coupling regime, i.e. negative coupling for the hard axis constellation (a), positive coupling for the easy axis constellation (b). In both cases, the initial coupling was preserved through a frequency domain a couple of tenths of a gigahertz wide, which is a multiple of the width of the excitable domain without hysteresis. Note that the evenly-spaced spikes in the phase difference are only visualization artifacts. ## V Chains of coupled magnets Phase couplings between CNMPs can be observed for longer nanomagnet chains as well. Using a short chain of magnets results in considerably more complex behaviour, as the subfigures of Figure 8 show. First of all, the transient time is much longer: even at the end of this 3 \(\mu\)s simulation, there are prominent changes in some magnets' oscillations, whereas stand-alone magnets or CNMPs had a transient of a couple hundred ns at maximum. During this transient the chain repeatedly sticks in local minima and then switches relatively fast to higher-amplitude oscillations, overall creating a staircase-like increase of the average oscillation amplitude. As there are still changes at the end, it is not certain what the global optimum is. By examining the pairwise phases of the neighbors, we concluded that these steps indicate either the synchronization of one or more pairs (possibly resulting in anti-synchronization of others) or coupling switches (in-phase to anti-phase or vice versa), which overall can lead to a higher-energy oscillation of the whole chain. We also visualized a few periods of the chain's oscillation in at least temporarily stable states. From these movies we concluded that in-phase couplings (which produce standing-wave parts of the chain oscillation) usually occur at the ends of the chain with low amplitudes, and they occur more often for lower excitation frequencies. Inversely, anti-phase couplings (which produce cuts in the chain oscillation) are more common in the middle with high amplitudes, and they occur more often for higher-frequency excitations. ## VI Conclusions: Toward Computing Devices In this paper we explored the rich dynamics of CNMPs - showing that in parametrically excited, coupled nanomagnets the phase of oscillations can be determined by field-coupling between the magnets. We envision that these results may be significant both from a fundamental-physics and an application point of view. On a physics level, CNMPs may be viewed as a phase-domain implementation of spin ices - where couplings (potentially competing couplings) give rise to complex magnetization patterns [10; 11]. Spin ices are widely studied model systems of ordering and frustration, and arrays of CNMPs are expected to show similarly complex behaviors. CNMPs likely offer more 'knobs' to control the couplings by tuning the parametric excitation frequency. We hope that this computational study inspires experimental investigation of the rich phase dynamics of nanomagnet arrays. 
Figure 8: Simulation of a chain of 9 magnets placed next to each other along their hard axis. On the application side, CNMPs are one implementation of phase-based Ising systems [12], which are intensely studied to solve computationally hard problems. ###### Acknowledgements. The authors acknowledge financial support from the Horizon 2020 Framework Program of the European Commission under FET-Open grant agreement no. 899646 (k-NET). ## Data availability statement The data that support the findings of this study are available from the corresponding author upon reasonable request.
2307.02864
Critical behavior of Anderson transitions in higher dimensional Bogoliubov-de Gennes symmetry classes
Disorder is ubiquitous in solid-state systems, and its crucial influence on transport properties was revealed by the discovery of Anderson localization. Generally speaking, all bulk states will be exponentially localized in the strong disorder limit, but whether an Anderson transition takes place depends on the dimension and symmetries of the system. The scaling theory and symmetry classes are at the heart of the study of the Anderson transition, and the critical exponent $\nu$ characterizing the power-law divergence of localization length is of particular interest. In contrast with the well-established lower critical dimension $d_l=2$ of the Anderson transition, the upper critical dimension $d_u$, above which the disordered system can be described by mean-field theory, remains uncertain, and precise numerical evaluations of the critical exponent in higher dimensions are needed. In this study, we apply the Borel-Pad\'e resummation method to the known perturbative results of the non-linear sigma model (NL$\sigma$M) to estimate the critical exponents of the Bogoliubov-de Gennes (BdG) classes. We also report numerical simulations of class DIII in 3D, and classes C and CI in 4D, and compare the results of the resummation method with these and previously published work. Our results may be experimentally tested in realizations of quantum kicked rotor models in atomic-optic systems, where the critical behavior of dynamical localization in higher dimensions can be measured.
Tong Wang, Zhiming Pan, Keith Slevin, Tomi Ohtsuki
2023-07-06T09:04:20Z
http://arxiv.org/abs/2307.02864v1
Critical behavior of Anderson transitions in higher dimensional Bogoliubov-de Gennes symmetry classes ###### Abstract Disorder is ubiquitous in solid-state systems, and its crucial influence on transport properties was revealed by the discovery of Anderson localization. Generally speaking, all bulk states will be exponentially localized in the strong disorder limit, but whether an Anderson transition takes place depends on the dimension and symmetries of the system. The scaling theory and symmetry classes are at the heart of the study of the Anderson transition, and the critical exponent \(\nu\) characterizing the power-law divergence of localization length is of particular interest. In contrast with the well-established lower critical dimension \(d_{l}=2\) of the Anderson transition, the upper critical dimension \(d_{u}\), above which the disordered system can be described by mean-field theory, remains uncertain, and precise numerical evaluations of the critical exponent in higher dimensions are needed. In this study, we apply the Borel-Padé resummation method to the known perturbative results of the non-linear sigma model (NL\(\sigma\)M) to estimate the critical exponents of the Bogoliubov-de Gennes (BdG) classes. We also report numerical simulations of class DIII in 3D, and classes C and CI in 4D, and compare the results of the resummation method with these and previously published work. Our results may be experimentally tested in realizations of quantum kicked rotor models in atomic-optic systems, where the critical behavior of dynamical localization in higher dimensions can be measured. ## I Introduction Since the discovery of Anderson localization [1], the effects of disorder in various media have been a constant focus of the physics community. The disorder-driven Anderson transition (AT) is a second-order quantum phase transition, around which physical observables show universal power-law behaviors. 
The universality class of the AT depends on the dimensionality and fundamental symmetries of the system: time-reversal symmetry, particle-hole symmetry, and chiral symmetry [2; 3; 4]. Based on these symmetries, Altland and Zirnbauer (AZ) completed the symmetry classification of non-interacting disordered Hamiltonians known as the "10-fold way" [5]. The classification comprises the three Wigner-Dyson classes (A, AI, and AII), the three chiral classes (AIII, BDI, and CII), and the four Bogoliubov-de Gennes (BdG) classes (D, C, DIII, and CI). The AZ classification is revelatory not only to the study of localization phenomena, but also to the study of topological materials [6; 7; 8]. The critical exponent \(\nu\) of the AT characterizes the power-law divergence of the correlation length \(\xi\) on approaching the critical point, \[\xi\sim|x-x_{c}|^{-\nu}, \tag{1}\] where \(x\) is the tuning parameter and \(x_{c}\) is the critical point. Constrained by computational capacity, relatively few numerical studies have gone beyond three dimensions (3D) [9; 10; 11; 12; 13] into higher dimensions, where stronger disorder is required to drive the system into localization. A strong-disorder renormalization group (RG) approach is in development to provide theoretical insights [14; 15]. Recently, the potential of such efforts has been revealed by the proposed superuniversality of ATs in Hermitian and non-Hermitian systems [16], and the mapping between certain disorder-free interacting systems and disordered non-interacting systems with an extra dimension [17; 18]. Moreover, the theory and numerical simulations are applicable to experimental realizations of quantum kicked rotors with synthetic dimensions [19; 20; 21]. While the lower critical dimension \(d_{l}=2\) of the AT is well established by the one-parameter scaling theory [3], the upper critical dimension \(d_{u}\), above which a mean-field description is accurate, remains debatable. 
The self-consistent theory of the AT by Vollhardt and Wolfle [22; 23] gives the critical exponent of the Anderson model (class AI) as \[\nu=\begin{cases}\dfrac{1}{d-2},&2<d<4,\\[2mm] \dfrac{1}{2},&d\geq 4.\end{cases} \tag{2}\] The results that \(d_{u}=4\) and the mean-field critical exponent \(\nu=1/2\) are reminiscent of the \(\phi^{4}\) theory. A modified version of this theory that considers the renormalization of the diffusion coefficient [9] gives \[\nu=\frac{1}{2}+\frac{1}{d-2}, \tag{3}\] and \(d_{u}=\infty\). The prediction of the limiting value \[\lim_{d\to\infty}\nu=\frac{1}{2}, \tag{4}\] by both theories agrees with the value from the Anderson model on an infinite-dimensional Bethe lattice [24; 25; 26; 27; 28; 29; 30]. However, Eq. (3) is in better agreement than Eq. (2) with numerical results [31; 32; 11; 33] for the orthogonal symmetry class for \(d=3,4,5,6\). On the other hand, the nonlinear sigma model (NL\(\sigma\)M), an effective field theory of Anderson localization, has been studied extensively in \(d=2+\epsilon\) dimensions [34; 35; 36; 37]. The \(\beta\)-function, which describes the renormalization of the conductance with system size, can be calculated analytically using perturbation techniques [38; 39; 40; 41]. From the \(\beta\)-function one can derive relevant physical quantities, including a series in powers of \(\epsilon\) for the critical exponent \(\nu\). This method, which is referred to as the \(\epsilon\)-expansion, is rigorous only when \(\epsilon\ll 1\). In this limit, the \(\epsilon\)-expansion gives \(\nu=1/\epsilon\), in agreement with Eq. (2) but not Eq. (3), and with numerical simulations on fractals with spectral dimensions close to 2 [31; 42]. To obtain results for higher dimensions, resummation methods are needed. However, a straightforward resummation [37] of the power series for the critical exponent yields \(\nu\to 0\) in the limit \(d\to\infty\), in disagreement with both Eq. (2) and Eq. (3). 
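As a quick numerical aside (a plain-Python sketch of the two closed-form predictions above, not taken from the references), Eqs. (2) and (3) can be tabulated for the dimensions that have been studied numerically:

```python
# Exponent predicted by Eq. (2), the self-consistent theory of Vollhardt and Woelfle.
def nu_eq2(d):
    return 1.0 / (d - 2) if 2 < d < 4 else 0.5

# Exponent predicted by Eq. (3), which includes the renormalization
# of the diffusion coefficient.
def nu_eq3(d):
    return 0.5 + 1.0 / (d - 2)

for d in (3, 4, 5, 6):
    print(f"d = {d}:  Eq.(2) -> {nu_eq2(d):.3f},  Eq.(3) -> {nu_eq3(d):.3f}")
```

Both expressions approach \(\nu=1/2\) as \(d\to\infty\), Eq. (4), but they differ markedly at \(d=3,4\), which is exactly where the numerical comparisons of this paper are made.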
For the Wigner-Dyson classes, resummations that incorporate the correct asymptotic behaviour of the critical exponent for \(d\to\infty\) have been performed [43; 11], giving better agreement with numerical simulations [32; 11; 33; 12] and experimental results [44; 20]. However, a comprehensive understanding of the dimensional dependence of the AT in different symmetry classes is still lacking. In this paper, we focus on the BdG symmetry classes in 3D and 4D. The four BdG classes appear naturally in topological superconducting systems [4; 5]. The underlying BdG Hamiltonian \(H\) is invariant under the antiunitary transform of particle-hole symmetry (PHS) \({\cal C}=U_{C}K\), \[{\cal C}:\quad H\to-U_{C}^{\dagger}H^{T}U_{C}, \tag{5}\] where \(U_{C}\) is a unitary matrix and \(K\) denotes the operation of complex conjugation [8]. The BdG universality classes are realized at the particle-hole symmetric point, \(E=0\). The particle-hole symmetry can be classified into two kinds, even (\({\cal C}^{2}=+1\)) or odd (\({\cal C}^{2}=-1\)). The symmetry classes can further be characterized by time-reversal symmetry (TRS) \({\cal T}\). There are four BdG classes: singlet/triplet SC (class D), singlet SC (class C), singlet/triplet SC with TRS (class DIII) and singlet SC with TRS (class CI). Classes D and C describe BdG systems with even or odd PHS, respectively, and broken TRS. Classes DIII and CI are characterized by a time-reversal operator \({\cal T}:H\to U_{T}H^{T}U_{T}^{-1}\), where the unitary matrix \(U_{T}\) satisfies \(U_{T}^{2}=\pm 1\). For class DIII, one has PHS \({\cal C}^{2}=+1\) and TRS \({\cal T}^{2}=-1\). For class CI, one has PHS \({\cal C}^{2}=-1\) and TRS \({\cal T}^{2}=+1\). The symmetries of the BdG classes are summarized in Table 1. Due to the absence of spin-rotation invariance, classes D and DIII exhibit weak antilocalization. Below we apply the resummation method previously employed [43; 11] for the Wigner-Dyson symmetry classes to the BdG classes. 
We also report simulations using the transfer matrix method for class DIII in 3D, and for classes C and CI in four dimensions (4D). We compare estimates of the critical exponent \(\nu\) obtained by finite-size scaling analysis of the numerical simulations with the results of the resummation method. Our results show the ability of this Borel-Pade analysis to give quantitative predictions of the critical exponents \(\nu\) for the BdG classes beyond 2D. The rest of the paper is organized as follows. In Sec. II, we review briefly the Borel-Pade resummation. In Sec. III, we apply the Borel-Pade method to the \(\epsilon\)-series of the critical exponent \(\nu\) for the BdG classes. In Sec. IV, we apply the Borel-Pade method to the perturbative series for the \(\beta\)-functions. In Sec. V we report our numerical simulations. In Sec. VI we compare the Borel-Pade predictions with numerical results (both those reported here and previously published work). A summary is given in Table 5. In Sec. VII we discuss and conclude our findings. ## II Borel-Pade resummations In the scaling theory of the Anderson transition [3], the \(\beta\)-function is defined as \[\beta(g)=\frac{d\ln g}{d\ln L}, \tag{6}\] where \(g\) is the dimensionless conductance measured in units of \(e^{2}/h\) and summed over the spins, and \(L\) is the length of a \(d\)-dimensional cubic system. For the NL\(\sigma\)M description it is more convenient to work with the inverse conductance \(t=1/(\pi g)\) and \[\beta(t)=-\frac{dt}{d\ln L}=\frac{\beta(g)}{\pi g}. \tag{7}\] The critical point \(t_{c}>0\) of the AT is a zero-crossing point of \(\beta(t)\), \[\beta(t_{c})=0, \tag{8}\] and the critical conductance is given by \(g_{c}=1/(\pi t_{c})\). The critical exponent \(\nu\) is related to the derivative of the \(\beta\)-function at the critical point, \[\frac{d\beta(t)}{dt}\Big{|}_{t=t_{c}}=-\frac{d\beta(g)}{d\ln g}\Big{|}_{g=g_{c}}=-\frac{1}{\nu}. 
\tag{9}\] The \(\beta\)-functions of the BdG classes up to the 4-loop order [4; 38; 45] are listed in Table 1. Note that the coefficient of \(t^{6}\) for class C in Table 1 differs from that given in Table III of Ref. [4] [46]. We also note in passing that the \(\beta\)-functions of the chiral symmetry classes were found to be strictly zero in all orders in perturbation theory [47; 48]. The Borel-Pade resummation method is a technique for dealing with truncated and possibly divergent series. Given an infinite series \(f\) \[f(x)=\sum_{k}f_{k}x^{k}, \tag{10}\] its Borel sum is defined as \[\tilde{f}(x)=\sum_{k}\frac{f_{k}}{k!}x^{k}. \tag{11}\] The original series in Eq. (10) can be recovered from its Borel sum by the Laplace-type integral \[f(x)=\frac{1}{x}\int_{0}^{\infty}e^{-y/x}\tilde{f}(y)\mathrm{d}y. \tag{12}\] Suppose the coefficients \(f_{k}\) are known for order \(k\leq l\). We approximate \(\tilde{f}\) on the r.h.s. by a rational function \[\tilde{f}(x)\approx r(x)=\frac{p(x)}{q(x)}, \tag{13}\] where \(p(x)\), \(q(x)\) are polynomials of order \(m\) and \(n\), respectively, \[p(x)=\sum_{k=0}^{m}p_{k}x^{k},\qquad q(x)=\sum_{k=0}^{n}q_{k}x^{k},\ q_{0}\equiv 1. \tag{14}\] For choices of \([m,n]\) that satisfy \(m+n=l\), the coefficients of the polynomials \(p\) and \(q\) are uniquely determined. In some cases we require \(m<n\) so that the Pade approximant satisfies \[\lim_{x\rightarrow\infty}r(x)=0. \tag{15}\] Then, the rational function \(r\) can be decomposed into a sum of partial fractions \[r(x)=\sum_{j=1}^{n}\frac{a_{j}}{x-\lambda_{j}}, \tag{16}\] where \(\lambda_{j}\) are the roots of the polynomial \(q(x)\). In general, the \(\lambda_{j}\) and \(a_{j}\) are complex numbers. Substituting the above equation into Eq. (12) and performing the integration, we obtain the Borel-Pade approximation \(F\) of the series for \(f\) \[F(x)=\frac{1}{x}\sum_{j=1}^{n}a_{j}B\left(\frac{\lambda_{j}}{x}\right). 
\tag{17}\] Here, the function \(B\) is defined by \[B(s)=\begin{cases}-\exp(-s)\mathrm{E}_{\mathrm{i}}(s)&s\in\mathbb{R},s\neq 0, \\ \exp(-s)\mathrm{E}_{1}(-s)&s\in\mathbb{C},\arg s\neq\pi,\end{cases} \tag{18}\] where \[\mathrm{E}_{\mathrm{i}}(x)= -\int_{-x}^{\infty}\frac{e^{-t}}{t}dt=\int_{-\infty}^{+x}\frac{e^ {t}}{t}dt,\] \[\mathrm{E}_{1}(z)= \int_{z}^{\infty}\frac{e^{-t}}{t}dt,\quad|\arg z|<\pi. \tag{19}\] ## III Resummation of the series for \(\nu(\epsilon)\) Series in powers of \(\epsilon\) for the critical exponent \(\nu\) can be derived starting from the series for the \(\beta\)-function in powers of \(t\) as follows. We take symmetry class C as an example. We first find an approximation for \(t_{c}(\epsilon)\) by solving Eq. (8) using the available terms in the power series for \(\beta(t)\). For class C we find \[t_{c}(\epsilon)=\frac{1}{2}\epsilon-\epsilon^{2}+\frac{9}{4}\epsilon^{3}- \frac{77}{12}\epsilon^{4}+\mathcal{O}(\epsilon^{5}). \tag{20}\] Here we have chosen the root for which \[\lim_{\epsilon\to 0}t_{c}=0. \tag{21}\] If we then substitute the series for \(t_{c}\) into Eq. (9), we obtain the following series in powers of \(\epsilon\) for the inverse of \(\nu\) \[\frac{1}{\nu}\left(\epsilon\right)=\epsilon+2\epsilon^{2}-\epsilon^{3}+\frac{ 15}{2}\epsilon^{4}+\mathcal{O}(\epsilon^{5}). \tag{22}\] Taking the reciprocal of this series we obtain \[\nu(\epsilon)=\frac{1}{\epsilon}-2+5\epsilon-\frac{39}{2}\epsilon^{2}+ \mathcal{O}(\epsilon^{3}). 
\tag{23}\] \begin{table} \begin{tabular}{c c c c c c c} Class & TRS & PHS & SLS & SU(2) & NL\(\sigma\)M Manifold & \(\beta(t)\)-function \\ \hline D & 0 & \(+1\) & 0 & \(\times\) & \(\mathrm{Sp}(2N)/\mathrm{U}(N)\) & \(\epsilon t+t^{2}-2t^{3}+\frac{7}{2}t^{4}-\frac{47}{6}t^{5}+\mathcal{O}(t^{6})\) \\ C & 0 & \(-1\) & 0 & \(\triangle\) & \(\mathrm{O}(2N)/\mathrm{U}(N)\) & \(\epsilon t-2t^{2}-8t^{3}-28t^{4}-\frac{376}{3}t^{5}+\mathcal{O}(t^{6})\) \\ DIII & \(-1\) & \(+1\) & 1 & \(\times\) & \(\mathrm{Sp}(2N)\) & \(\epsilon t+t^{2}-\frac{1}{2}t^{3}+\frac{3}{8}t^{4}-\frac{1}{8}\Big{(}\frac{19 }{6}+6\zeta(3)\Big{)}t^{5}+\mathcal{O}(t^{6})\) \\ CI & \(+1\) & \(-1\) & 1 & \(\triangle\) & \(\mathrm{O}(N)\) & \(\epsilon t-2t^{2}-2t^{3}-3t^{4}-2\Big{(}\frac{19}{6}+6\zeta(3)\Big{)}t^{5}+ \mathcal{O}(t^{6})\) \\ \end{tabular} \end{table} Table 1: List of the BdG symmetry classes and their transformation behavior under time-reversal, particle-hole, chiral (sublattice) (SLS) symmetries, and the presence (\(\triangle\)) or absence (\(\times\)) of SU(2) spin-rotation symmetry. The penultimate column shows corresponding non-compact fermionic replica non-linear sigma-model (NL\(\sigma\)M) manifolds. The last column shows the \(\beta\)-function[4; 38; 45] of the four BdG symmetry classes. Here \(\zeta\) is the Riemann zeta function. Similarly, for symmetry class CI we find \[t_{c}(\epsilon) =\frac{1}{2}\epsilon-\frac{1}{4}\epsilon^{2}+\frac{1}{16}\epsilon^{ 3}-\frac{1+9\zeta(3)}{24}\epsilon^{4}+\mathcal{O}(\epsilon^{5})\] \[\frac{1}{\nu}\left(\epsilon\right) =\epsilon+\frac{1}{2}\epsilon^{2}+\frac{1}{4}\epsilon^{3}+\frac{ 5+36\zeta(3)}{16}\epsilon^{4}+\mathcal{O}(\epsilon^{5})\] \[\nu(\epsilon) =\frac{1}{\epsilon}-\frac{1}{2}-\frac{3+36\zeta(3)}{16}\epsilon^ {2}+\mathcal{O}(\epsilon^{3}). 
\tag{24}\] This approach works for symmetry classes C and CI because the coefficient of the \(t^{2}\) term in \(\beta(t)\) is negative and the lower critical dimension for these classes is \(d_{l}=2\). However, for symmetry classes D and DIII the coefficient of the \(t^{2}\) term in \(\beta(t)\) is positive, so that when we follow the procedure explained above we find \[\lim_{\epsilon\to 0}t_{c}\neq 0, \tag{25}\] and we are unable to obtain a useful series in powers of \(\epsilon\) for \(\nu\). This reflects the possibility that the lower critical dimension for these two classes is below two (\(d_{l}<2\)), as is thought to be the case for the symplectic class AII. Now we apply the Borel-Pade resummation introduced in the previous section. A naive resummation tacitly assumes the limiting behavior \[\lim_{d\rightarrow\infty}\nu=0, \tag{26}\] which disagrees with self-consistent theories of the AT and with the results for the AT on the Bethe lattice, i.e., with Eq. (4). Instead, we rewrite \[\nu\left(\epsilon\right)=\frac{1}{2}+\frac{1}{\epsilon}f\left(\epsilon\right), \tag{27}\] and perform the resummation of \(f(\epsilon)\) with the requirement \(m\leq n\). Such a treatment guarantees the limiting behavior given in Eq. (4). Of course, the application of this constraint to the BdG symmetry classes needs to be justified. For later reference, in Table 2, we compare the results given by imposing Eq. (4) and Eq. (26) for the classes C and CI in 3D and 4D. ## IV Resummation of the series for \(\beta(t)\) An alternative to the approach above is to apply the Borel-Pade method directly to the series for the \(\beta\)-function [11]. All the series take the form \[\beta(t)=\epsilon t-tf(t), \tag{28}\] where \(f\) is a power series in \(t\). In terms of \(f(t)\) the critical exponent is \[\frac{1}{\nu}=t\frac{df(t)}{dt}\Big{|}_{t=t_{c}}. \tag{29}\] We need to impose the limiting behaviour at infinite dimension given in Eq. (4). 
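The Borel-Pade machinery of Eqs. (11)-(18), which is used both here and in the remainder of the paper, is straightforward to implement. The sketch below (our own minimal implementation, assuming NumPy and SciPy; not the authors' code) resums the class-C exponent series: writing \(\nu(\epsilon)=1/2+f(\epsilon)/\epsilon\) as in Eq. (27), with \(f(\epsilon)=1-\tfrac{5}{2}\epsilon+5\epsilon^{2}-\tfrac{39}{2}\epsilon^{3}\) read off from Eq. (23), the \([0,3]\) approximant at \(\epsilon=1\) reproduces the 3D value \(\nu\approx 0.773\) listed in Table 2.

```python
import numpy as np
from math import factorial
from scipy.special import exp1, expi

def B(s):
    """B(s) of Eq. (18): -e^{-s} Ei(s) on the real axis, e^{-s} E1(-s) otherwise."""
    s = complex(s)
    if abs(s.imag) < 1e-9:
        return complex(-np.exp(-s.real) * expi(s.real))
    return np.exp(-s) * exp1(-s)

def borel_pade(f_coeffs, m, n):
    """Return the [m, n] Borel-Pade approximant F(x) of sum_k f_k x^k."""
    l = m + n
    b = [f_coeffs[k] / factorial(k) for k in range(l + 1)]      # Borel sum, Eq. (11)
    # Pade fit r = p/q of the Borel sum, Eq. (13): q_1..q_n from a linear system.
    M = [[(b[k - j] if 0 <= k - j <= l else 0.0) for j in range(1, n + 1)]
         for k in range(m + 1, l + 1)]
    q = np.concatenate(([1.0], np.linalg.solve(np.array(M), -np.array(b[m + 1:l + 1]))))
    p = [sum(q[j] * b[k - j] for j in range(min(k, n) + 1)) for k in range(m + 1)]
    # Partial fractions, Eq. (16): poles lam_j, residues a_j = p(lam_j)/q'(lam_j).
    lam = np.roots(q[::-1])
    dq = np.polynomial.polynomial.polyder(q)
    a = [np.polynomial.polynomial.polyval(z, p) /
         np.polynomial.polynomial.polyval(z, dq) for z in lam]
    # Eq. (17); for real input the imaginary parts of conjugate poles cancel.
    return lambda x: sum(aj * B(zj / x) for aj, zj in zip(a, lam)).real / x

# Class C: f(eps) = eps*(nu(eps) - 1/2) from Eqs. (23) and (27).
f_C = [1.0, -2.5, 5.0, -19.5]
F = borel_pade(f_C, 0, 3)
nu_3d = 0.5 + F(1.0)                       # eps = 1, i.e. d = 3
print(f"class C, 3D, [0,3]: nu = {nu_3d:.3f}")   # ~0.773, cf. Table 2
```

The same `borel_pade` routine can be reused for the other series in Table 2 by changing the input coefficients and the orders \([m,n]\).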
We first note that in high dimensions the Anderson transition takes place at strong disorder and, moreover, that \[\lim_{d\rightarrow\infty}t_{c}=\infty. \tag{30}\] This means that we can obtain the correct limiting behaviour by arranging that \[\lim_{t\rightarrow\infty}t\frac{df}{dt}=A, \tag{31}\] with \(A=2\). To do so, we define \(h\), a polynomial in \(t\), by \[h(t)=t\frac{df(t)}{dt}-A. \tag{32}\] Applying the Borel-Pade method to \(h\), we obtain an approximation \(H\) for \(h\) that satisfies \[\lim_{t\rightarrow\infty}H(t)=0, \tag{33}\] so that Eq. (31) is satisfied. To obtain the corresponding approximation \(F\) for \(f\), a further integration is needed, \[f(t)\approx F(t)=\int_{0}^{t}\frac{A+H(t^{\prime})}{t^{\prime}}dt^{\prime}. \tag{34}\] The result can be expressed in the form [11] \[F(t)=\sum_{j=1}^{n}c_{j}B(\lambda_{j}/t),\quad c_{j}=\frac{a_{j}}{\lambda_{j}}. \tag{35}\] Finally, the \(\beta\)-function is approximated as \[\beta(t)\approx\epsilon t-tF(t). \tag{36}\] We show the resulting Borel-Pade approximations for \(\beta(g)\) in 3D for classes C and CI in Fig. 1 and Fig. 2, respectively, together with the series without resummation. We omit the \([m,n]=[1,3]\) resummation for class C because the resulting \(\beta\)-function is not monotonic and has two unphysical fixed points. The limiting behavior \(\beta(g)\sim 2\ln g\) \begin{table} \begin{tabular}{c c c c c} \multicolumn{5}{c}{(a) 3D} \\ & \multicolumn{2}{c}{\(\lim_{d\rightarrow\infty}\nu=0\)} & \multicolumn{2}{c}{\(\lim_{d\rightarrow\infty}\nu=\frac{1}{2}\)} \\ class & \([0,3]\) & \([1,2]\) & \([0,3]\) & \([1,2]\) \\ \hline C & 0.357 & 0.227 & 0.773 & 0.360 \\ CI & 0.555 & 0.776 & 0.924 & 1.226 \\ \end{tabular} \end{table} Table 2: Comparison of the critical exponents \(\nu\) for classes C and CI in 3D and 4D obtained from Borel-Pade resummations of the series for \(\nu(\epsilon)\) when imposing different limiting conditions, i.e., Eq. (26) compared with Eq. (4). Numbers in the square brackets indicate the orders of polynomials, \(m\) and \(n\) [Eq. (14)]. 
at \(g\ll 1\), guaranteed by the constraint \(A=2\) in Eq. (31), is observed only at values of \(\ln g\) much smaller than the range plotted in Fig. 1. We show the resulting Borel-Pade approximations for \(\beta(g)\) in 2D for classes D and DIII in Fig. 3 and Fig. 4, respectively, together with the series without resummation. In classes D and DIII, for \(d<2\), two fixed points appear: a critical fixed point, and a stable fixed point. At the lower critical dimension \(d_{l}\), these two fixed points annihilate (see the dashed curves in Fig. 3 and Fig. 4), and the value of the \(\beta\)-function at its maximum is zero \[\max_{d=d_{l}}\beta(g)=0. \tag{37}\] This leads directly to an estimate for \(d_{l}\), \[d_{l}\approx 2-\max\beta(g,\epsilon=0). \tag{38}\] Estimates of the lower critical dimension obtained from the Borel-Pade resummations are summarized in Table 3. ## V Numerical simulations To evaluate the effectiveness of the Borel-Pade resummation in estimating the critical exponents of the BdG symmetry classes, especially in high spatial dimensions \(d\geq 3\), we perform simulations for 3D class DIII, 4D class C, and 4D class CI. We set the energy \(E\) to the particle-hole symmetric point, \(E=0\), and vary the disorder strength \(W\). Figure 1: Comparison of the approximations for the \(\beta(g)\)-function before and after Borel-Padé resummation of the series for class C in 3D. Numbers in the square brackets indicate the orders of polynomials, \(m\) and \(n\) [Eq. (14)]. Figure 4: Comparison of the approximations for the \(\beta(g)\)-function before and after Borel-Padé resummation of the series for class DIII in 2D. The \([1,3]\) Borel-Padé resummation of the \(\beta(g)\)-function at the corresponding estimate \(d_{l}=1.21\) of the lower critical dimension is plotted with a dashed line. Figure 3: Comparison of the approximations for the \(\beta(g)\)-function before and after Borel-Padé resummation of the series for class D in 2D. 
The \([1,3]\) Borel-Padé resummation of the \(\beta(g)\)-function at the corresponding estimate \(d_{l}=1.76\) of the lower critical dimension is plotted with a dashed line. Figure 2: Comparison of the approximations for the \(\beta(g)\)-function before and after Borel-Padé resummation of the series for class CI in 3D. ### 3D class DIII This symmetry class describes time-reversal symmetric superconductors with broken spin-rotational symmetry. We study a four-band tight-binding model on a cubic lattice [49; 50], \[\mathcal{H}_{\text{DIII}}=\sum_{\mathbf{r},\mathbf{r}^{\prime}}c_{\mathbf{r}}^{\dagger}[H_{\text{DIII}}]_{\mathbf{r}\mathbf{r}^{\prime}}c_{\mathbf{r}^{\prime}}=\sum_{\mathbf{r}}\sum_{\mu=1}^{3}\left[\frac{it}{2}c_{\mathbf{r}+\mathbf{e}_{\mu}}^{\dagger}\alpha_{\mu}c_{\mathbf{r}}-\frac{m_{2}}{2}c_{\mathbf{r}+\mathbf{e}_{\mu}}^{\dagger}\beta c_{\mathbf{r}}+\text{H.c.}\right]+\sum_{\mathbf{r}}\left(m_{0}+3m_{2}+v_{\mathbf{r}}\right)c_{\mathbf{r}}^{\dagger}\beta c_{\mathbf{r}} \tag{39}\] where \(c_{\mathbf{r}}^{\dagger}\) (\(c_{\mathbf{r}}\)) is the 4-component creation (annihilation) operator on a cubic-lattice site \(\mathbf{r}\). For convenience we set the lattice constant \(a\) to be unity. The \(\mathbf{e}_{\mu=1,2,3}\) are the primitive lattice vectors along the \(x,y,z\) directions, respectively. The matrices \(\alpha_{\mu}\) and \(\beta\) are defined as \[\alpha_{\mu}=\left(\begin{array}{cc}0&\sigma_{\mu}\\ \sigma_{\mu}&0\end{array}\right),\quad\beta=\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right), \tag{40}\] where \(\sigma_{\mu}\) and \(\tau_{\mu}\) are Pauli matrices acting on different degrees of freedom (e.g., spin and orbital). Parameter \(m_{0}\) is a mass, and parameters \(m_{2}\) and \(t\) are hopping amplitudes. 
This Hamiltonian has time-reversal symmetry \(U_{T}^{\dagger}H_{\text{DIII}}^{*}U_{T}=H_{\text{DIII}}\) where \[U_{T}=\delta_{\mathbf{r}\mathbf{r}^{\prime}}(\sigma_{2}\otimes\tau_{0}),\quad U_{T}^{T}=-U_{T}, \tag{41}\] and a particle-hole symmetry \(U_{S}^{\dagger}H_{\text{DIII}}U_{S}=-H_{\text{DIII}}\) where \[U_{S}=\delta_{\mathbf{r}\mathbf{r}^{\prime}}(\sigma_{0}\otimes\tau_{2}). \tag{42}\] This model describes a 3D \(\mathbb{Z}\) topological insulator (TI) when \(m_{0}<0\) and a trivial insulator when \(m_{0}>0\). For numerical calculations, we specify the parameters \(t=2\), \(m_{2}=1\), \(m_{0}=-2.5\), and use independent uniform distributions for the random on-site potential \[v_{\mathbf{r}}\in[-W/2,W/2],\quad\langle v_{\mathbf{r}}v_{\mathbf{r}^{\prime}}\rangle=\delta_{\mathbf{r}\mathbf{r}^{\prime}}W^{2}/12. \tag{43}\] Here, \(\langle\cdots\rangle\) indicates a disorder average. We use the transfer matrix method to calculate the localization length of the model [33] and impose periodic boundary conditions in the transverse direction. We simulate a semi-infinite bar with a cross section of size \(L\times L\) and estimate the quasi-one-dimensional (Q1D) localization length \(\lambda\) at disorder strength \(W\) and linear size \(L\). A dimensionless ratio \(\Lambda\) is defined as \[\Lambda(W,L)=\lambda(W,L)/L. \tag{44}\] The results are shown in Fig. 5 where \(\Lambda\) is plotted versus \(W\) for various \(L\). Curves for different \(L\) have an approximate common crossing point. This point indicates the Anderson transition between the TI (localized) phase and the metallic (extended) phase. 
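The symmetry relations above can be verified directly on the clean-limit Bloch Hamiltonian of the model. The following NumPy sketch is our own construction: the precise k-space form depends on the Fourier and tensor-ordering conventions, so it should be read as an illustrative check of the symmetry algebra rather than a transcription of Eq. (39).

```python
import numpy as np

# Pauli matrices. We adopt one consistent tensor ordering: the spin (sigma)
# factor comes first and the orbital (tau) factor second in np.kron; the block
# form of Eq. (40) corresponds to the other ordering, which is unitarily
# equivalent and has the same symmetry algebra.
s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = (s1, s2, s3)

t, m2, m0 = 2.0, 1.0, -2.5      # parameters quoted in the text

def h_diii(k):
    """Clean-limit (v_r = 0) Bloch Hamiltonian: under one common Fourier
    convention, h(k) = t sum_mu sin(k_mu) alpha_mu
                       + (m0 + 3 m2 - m2 sum_mu cos(k_mu)) beta."""
    h = sum(t * np.sin(k[mu]) * np.kron(pauli[mu], s1) for mu in range(3))
    h += (m0 + 3 * m2 - m2 * np.sum(np.cos(k))) * np.kron(s0, s3)
    return h

UT = np.kron(s2, s0)            # sigma_2 (x) tau_0, cf. Eq. (41)
US = np.kron(s0, s2)            # sigma_0 (x) tau_2, cf. Eq. (42)
k = np.random.default_rng(1).uniform(-np.pi, np.pi, 3)
assert np.allclose(UT.conj().T @ h_diii(-k).conj() @ UT, h_diii(k))   # TRS (k-space)
assert np.allclose(US.conj().T @ h_diii(k) @ US, -h_diii(k))          # U_S symmetry
print("symmetries hold; gap at k=0:",
      np.abs(np.linalg.eigvalsh(h_diii(np.zeros(3)))).min())
```

At \(\mathbf{k}=0\) the spectrum is \(\pm|m_{0}|\), consistent with \(m_{0}\) playing the role of the mass that controls the topological transition.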
To estimate the critical exponent, we fit the data to the following scaling form that includes corrections to single-parameter scaling due to an irrelevant scaling variable [33; 51] \[\Lambda=F\big{(}\phi_{1},\phi_{2}\big{)}=F\big{(}u_{1}(w)L^{1/\nu},u_{2}(w)L^{-y}\big{)}, \tag{45}\] where \[w=(W-W_{c})/W_{c}, \tag{46}\] and \(\phi_{1}=u_{1}L^{1/\nu}\) is the relevant scaling variable that encodes the power-law divergence of the correlation length \(\xi\sim|u_{1}(w)|^{-\nu}\) around the critical point. The second scaling variable \(\phi_{2}=u_{2}L^{-y}\) with exponent \(-y<0\) is the leading irrelevant correction, and vanishes in the limit \(L\rightarrow\infty\). We approximate the scaling function \(F\) using a truncated Taylor series near the critical point (\(|w|\ll 1\)), \[F\left(\phi_{1},\phi_{2}\right)=\sum_{j=0}^{n_{2}}F_{j}(\phi_{1})\phi_{2}^{j}=\sum_{i=0}^{n_{1}}\sum_{j=0}^{n_{2}}f_{ij}\phi_{1}^{i}\phi_{2}^{j}, \tag{47}\] and \[u_{1}=\sum_{k=1}^{m_{1}}b_{k}w^{k},\quad u_{2}=\sum_{k=0}^{m_{2}}c_{k}w^{k}. \tag{48}\] We set \(b_{1}=c_{0}=1\) to remove the arbitrariness of the expansion coefficients. The numerical data are fitted to the scaling function by minimizing the \(\chi\)-squared statistic \[\chi^{2}=\sum_{n=1}^{N_{\text{D}}}\frac{(\Lambda_{n}-F_{n})^{2}}{\sigma_{n}^{2}}. \tag{49}\] Here, \(N_{\text{D}}\) is the number of data points, \(\Lambda_{n}\) is the value of \(\Lambda\) for the \(n\)th data point, \(\sigma_{n}\) its standard error, and \(F_{n}\) the value of the scaling function for the \(n\)th data point. To assess whether or not the fit is acceptable, we use the goodness of fit probability. 
Here, this is well approximated by [33] \[\text{GoF}\approx 1-\frac{1}{\Gamma(N_{\text{F}}/2)}\int_{0}^{\chi_{\text{min}}^{2}/2}\text{d}t\,e^{-t}\,t^{N_{\text{F}}/2-1}, \tag{50}\] where \(N_{\text{F}}=N_{\text{D}}-N_{\text{P}}\) is the number of degrees of freedom (with \(N_{\text{P}}\) the number of fitting parameters), \(\chi_{\text{min}}^{2}\) is the minimum value of the \(\chi\)-squared statistic, and \(\Gamma\) is the Gamma function. The fitting results are shown in Table 4 (a). Our estimate of the critical exponent for 3D class DIII is \[\nu=0.96\pm 0.01. \tag{51}\] ### 4D class C Symmetry class C describes disordered superconductors with spin-rotational symmetry but broken time-reversal symmetry. For this symmetry class the spin quantum Hall effect occurs in two dimensions [52]. We extend the 3D tight-binding model for class C of Ref. [53] to 4D, \[\mathcal{H}_{\rm C}=\sum_{\mathbf{r},\mathbf{r}^{\prime}}c_{\mathbf{r}}^{\dagger}[H_{\rm C}]_{\mathbf{r}\mathbf{r}^{\prime}}c_{\mathbf{r}^{\prime}} \tag{52}\] \[=\sum_{\mathbf{r}}\left[\sum_{\mu=1}^{3}tc_{\mathbf{r}+\mathbf{e}_{\mu}}^{\dagger}c_{\mathbf{r}}+t_{\parallel}c_{\mathbf{r}+\mathbf{e}_{4}}^{\dagger}c_{\mathbf{r}}+it_{\perp}\left(c_{\mathbf{r}+\mathbf{e}_{1}}^{\dagger}\sigma_{1}c_{\mathbf{r}}+\sum_{\mu=2,3}c_{\mathbf{r}+\mathbf{e}_{\mu}}^{\dagger}\sigma_{2}c_{\mathbf{r}}\right)+{\rm H.c.}\right]+\sum_{\mathbf{r}}(v_{\mathbf{r}}+\Delta)c_{\mathbf{r}}^{\dagger}\sigma_{3}c_{\mathbf{r}}.\] Here, \(c_{\mathbf{r}}^{\dagger}\) is the creation operator on lattice site \(\mathbf{r}=(x_{1},x_{2},x_{3},x_{4})\), where the two components act on spin, orbital or Nambu space depending on the nature of the system. The Hamiltonian has a particle-hole symmetry \(U_{P}^{\dagger}H_{\rm C}^{*}U_{P}=-H_{\rm C}\) with \[U_{P}=\delta_{\mathbf{r}\mathbf{r}^{\prime}}e^{\mathrm{i}\pi\sum_{\mu=1}^{4}\mathbf{r}\cdot\mathbf{e}_{\mu}}\sigma_{2},\quad U_{P}^{T}=-U_{P}. 
\tag{53}\] In the clean limit the Fourier transformation of the Hamiltonian is \[h_{\rm C}(\mathbf{k})=2t_{\parallel}\cos k_{4}+2t\sum_{\mu=1}^{3}\cos k_{\mu}+\Delta\sigma_{3}-2t_{\perp}\left[\sin k_{1}\sigma_{1}+\left(\sin k_{2}+\sin k_{3}\right)\sigma_{2}\right]. \tag{54}\] For numerical simulations, we set \(\Delta=0.5\), \(t_{\perp}=t=1\) and \(t_{\parallel}=0.8\) so that the clean system has a finite Fermi surface at \(E_{F}=0\). Figure 5: The dimensionless ratio \(\Lambda\) near the Anderson transition for the 3D class DIII model. The expansion orders are \((n_{1},n_{2})=(3,1)\) and \((m_{1},m_{2})=(2,0)\), as defined in Eqs. (47) and (48). The solid lines are the fitting functions, and the black dots with error bars are simulation data points. Inset: the scaling function. \begin{table} \begin{tabular}{c c c c c c c c} \multicolumn{8}{c}{(a) 3D class DIII} \\ \hline \hline \(L\) & \(m_{1}\) & \(m_{2}\) & GoF & \(W_{c}\) & \(\nu\) & \(y\) & \(\Lambda_{c}\) \\ \hline 4-16 & 2 & 0 & 0.19 & 32.909 [32.882, 32.935] & 0.972 [0.958, 0.986] & 2.09 [1.94, 2.25] & 0.349 [0.347, 0.351] \\ & 3 & 0 & 0.42 & 32.903 [32.877, 32.933] & 0.981 [0.966, 0.994] & 2.14 [1.95, 2.30] & 0.350 [0.347, 0.352] \\ & 2 & 0 & 0.50 & 32.917 [22.642, 22.727] & 0.952 [0.928, 0.974] & 1.98 [1.62, 2.50] & 0.349 [0.345, 0.352] \\ & 3 & 0 & 0.60 & 32.898 [32.854, 32.965] & 0.963 [0.917, 0.979] & 2.23 [1.50, 2.94] & 0.351 [0.343, 0.354] \\ \hline \hline \multicolumn{8}{c}{(b) 4D class C} \\ \hline \hline \(L\) & \(m_{1}\) & \(m_{2}\) & GoF & \(W_{c}\) & \(\nu\) & \(y\) & \(g_{c}\) \\ \hline 4-12 & 2 & 0 & 0.40 & 22.65 [22.62, 22.69] & 0.724 [0.698, 0.750] & 1.45 [1.26, 1.69] & 0.83 [0.78, 0.89] \\ & 3 & 0 & 0.49 & 22.66 [22.62, 22.70] & 0.724 [0.699, 0.751] & 1.45 [1.27, 1.71] & 0.83 [0.78, 0.89] \\ & 2 & 0 & 0.44 & 22.68 [22.64, 22.73] & 0.698 [0.649, 0.734] & 1.66 [1.22, 2.46] & 0.80 [0.74, 0.85] \\ & 3 & 0 & 0.48 & 22.68 [22.64, 22.72] & 0.703 [0.652, 0.742] & 1.61 [1.18, 2.29] & 0.80 [0.75, 0.86] \\ \hline \hline \multicolumn{8}{c}{(c) 4D class CI} \\ \hline \hline \(L\) & \(m_{1}\) & \(m_{2}\) & GoF & \(W_{c}\) & \(\nu\) & \(y\) & \(g_{c}\) \\ \hline 4-12 & 2 & 1 & 0.97 & 22.53 [22.50, 22.55] & 0.820 [0.710, 0.936] & 1.57 [1.48, 1.66] & 0.90 [0.88, 0.91] \\ & 3 & 1 & 0.98 & 22.53 [22.51, 22.56] & 0.817 [0.722, 0.900] & 1.59 [1.50, 1.70] & 0.89 [0.88, 0.91] \\ 6-12 & 3 & 0 & 0.93 & 22.62 [22.58, 22.66] & 0.818 [0.713, 0.877] & 1.81 [1.55, 2.23] & 0.83 [0.81, 0.85] \\ \end{tabular} \end{table} Table 4: FSS results for class DIII in 3D, and classes C and CI in 4D. The orders of the expansion of the scaling function are fixed at \(n_{1}=3\) and \(n_{2}=1\). Here, \(m_{1}\) and \(m_{2}\) are the orders, respectively, of the expansions of \(u_{1}\) and \(u_{2}\) (see Eqs. (47) and (48)). The values enclosed in square brackets are 95% confidence intervals determined from 1000 Monte Carlo samples. We calculate the two-terminal Landauer conductance \(G\) using the transfer matrix method [54], \[G=\frac{e^{2}}{h}g,\quad g=\mathrm{Tr}\left[\tilde{t}^{\dagger}\tilde{t}\right], \tag{55}\] where \(\tilde{t}\) is the transmission matrix of the hypercubic samples of size \(L^{4}\) along the \(w\) axis. We impose periodic boundary conditions in the directions transverse to the current. While the dimensionless conductance \(g\) exhibits fluctuations, various disorder averages are well described by a scaling function like Eq. (45) [55; 56]. We calculate \(\ln\langle g\rangle\), and use the same nonlinear fitting procedure as described in Eqs. (45)-(50). Each data point \(\langle g\rangle\) is averaged over 5000-20000 samples to ensure a relative error smaller than 1%. The results for the critical exponent \(\nu\) and other quantities are shown in Table 4 (b). The fitting results are stable against changes of the expansion orders \(m_{1},m_{2}\) and of the range of system sizes. Our estimate of the critical exponent for 4D class C is \[\nu=0.72\pm 0.02. 
\tag{56}\] Note that the critical disorder \(W_{c}\) and critical conductance \(g_{c}\) are model-dependent, i.e., not universal. ### 4D class CI Symmetry class CI describes disordered superconductors with both time-reversal symmetry and spin-rotational symmetry. Again, we extend the 3D class CI model of Ref. [53] to 4D \[H_{\mathrm{CI}}=\sum_{\mathbf{r},\mathbf{r}^{\prime}}c_{\mathbf{r}}^{\dagger}[H_{\mathrm{CI}}]_{\mathbf{rr}^{\prime}}c_{\mathbf{r}^{\prime}}=\sum_{\mathbf{r}}\bigg[\sum_{\mu=1}^{3}t_{\perp}c_{\mathbf{r}+\mathbf{e}_{\mu}}^{\dagger}c_{\mathbf{r}}+t_{\parallel}^{\prime}c_{\mathbf{r}+\mathbf{e}_{4}}^{\dagger}\sigma_{3}c_{\mathbf{r}}+t_{\parallel}c_{\mathbf{r}+\mathbf{e}_{4}}^{\dagger}\sigma_{1}c_{\mathbf{r}}+\mathrm{H.c.}\bigg]+\sum_{\mathbf{r}}(v_{\mathbf{r}}+\Delta)c_{\mathbf{r}}^{\dagger}\sigma_{1}c_{\mathbf{r}}. \tag{57}\] The Hamiltonian is time-reversal symmetric since \(H_{\mathrm{CI}}^{*}=H_{\mathrm{CI}}\), and has particle-hole symmetry \(U_{P}^{\dagger}H_{\mathrm{CI}}^{*}U_{P}=-H_{\mathrm{CI}}\) given by \[U_{P}=\delta_{\mathbf{rr}^{\prime}}e^{\mathrm{i}\pi\sum_{\mu=1}^{3}\mathbf{r}\cdot\mathbf{e}_{\mu}}\sigma_{2},\quad U_{P}^{T}=-U_{P}. \tag{58}\] In the clean limit the Fourier transformation of the Hamiltonian is \[h_{\mathrm{CI}}(\mathbf{k})=2t_{\perp}\sum_{\mu=1}^{3}\cos k_{\mu}+2t_{\parallel}^{\prime}\cos k_{4}\sigma_{3}+(\Delta+2t_{\parallel}\cos k_{4})\sigma_{1}. \tag{59}\] In numerical simulations of the two-terminal Landauer conductance, we chose \(\Delta=1.2\), \(t_{\perp}=1\) and \(t_{\parallel}=t_{\parallel}^{\prime}=0.5\). Following the same procedures as described in the previous section, we estimate the critical exponent \(\nu\) and other quantities. The results are shown in Table 4 (c). Our estimate of the critical exponent \(\nu\) for 4D class CI is \[\nu=0.83\pm 0.04. \tag{60}\] Figure 6: Dimensionless Landauer conductance as a function of disorder \(W\) around the Anderson transition. 
The expansion order is \((n_{1},n_{2},m_{1},m_{2})=(3,1,2,0)\). **Left panel:** 4D symmetry class C. **Right panel:** 4D symmetry class CI. The colored solid lines are fitting functions and black dots with error bars are the numerical data. ## VI Comparison of Borel-Pade predictions with numerical results Referring to Table 5, we see that for classes C and CI in both 3D and 4D, the estimates of the critical exponent obtained with the \([0,4]\) Borel-Pade resummations are in good agreement with the numerical estimates. For 3D class D the discrepancy is relatively large, and it is even larger for 3D class DIII. These are also the two symmetry classes where \(d_{l}<2\) (see Table 3). In addition, we notice an inconsistency between our estimate of the critical exponent for 3D class DIII, \(\nu=0.96\pm 0.01\), and that of Ref. [57], \(\nu=0.85\pm 0.05\). The model used in Ref. [57] is essentially the same as here, but the data set of Ref. [57] is of smaller size and of lower numerical precision. However, we note the possibility that the weak topological indices may change the critical behavior of the Anderson transition [58]. We have resummed the series for the \(\beta\)-function in such a way that Eq. (31) is satisfied. This resummation means that in the localised regime the \(\beta\)-function will behave like \(A\ln g\) up to a constant. It would then seem more natural to set \(A=1\) rather than \(A=2\). However, the former choice does not yield the correct limiting behavior of Eq. (4). For reference, we also tabulate the estimates of the critical exponents calculated from the truncated \(\beta\)-function series without resummation and from the Borel-Pade analysis with \(A=1\) in Table 6. Without resummation, we obtain estimates that violate the Chayes inequality \(\nu\geq 2/d\) [60]. With \(A=1\), the estimates satisfy the Chayes inequality but are in poorer agreement with the numerical estimates compared with \(A=2\). 
## VII Summary and discussion In this paper, we have studied the Anderson transition in the BdG symmetry classes both analytically and numerically. We applied the Borel-Pade resummation method to the known perturbative results for the NL\(\sigma\)M to estimate the critical exponents in 3D and 4D. We also reported numerical simulations of class DIII in 3D, and of classes C and CI in 4D, and compared the results of the resummation method with these simulations and with previously published work. We find that the Borel-Pade analysis provides estimates of the critical exponent consistent with the numerical estimates, provided the limiting behaviour of Eq. (4) is imposed during the resummation. In principle, the NL\(\sigma\)M theory of Anderson localization and its renormalization analysis in \(d=2+\epsilon\) dimensions are valid only when \(\epsilon\) is small, i.e., when the Anderson transition occurs at weak disorder. Nonetheless, our results show that the perturbative \(\beta\)-functions can provide useful information concerning critical properties in 3D and 4D. The estimates of the critical exponents in the BdG symmetry classes based on the Borel-Pade resummation method match the numerical results better under the assumption of an infinite upper critical dimension. This suggests that the upper critical dimension \(d_{u}\) may be infinite for Anderson localization in the BdG symmetry classes. Previous theoretical works have argued that in noncompact NL\(\sigma\)Ms the upper critical dimension is infinite [61; 62], which seems to be consistent with the numerical results and the Borel-Pade estimates of this work. Further theoretical efforts are needed to confirm these observations. Recently, it has been pointed out that a NL\(\sigma\)M characterizes the measurement-induced phase transition in quantum circuits [63]. This scenario involves a replica number \(N\) equal to 1. 
The resummation method discussed in this paper is also applicable to that case, allowing for the prediction of critical exponents in quantum circuit systems. \begin{table} \begin{tabular}{c c c c c} \multicolumn{5}{c}{(a) 3D} \\ & \multicolumn{2}{c}{Borel-Padé with \(A=2\)} & \multicolumn{2}{c}{numerical} \\ class & \([0,4]\) & \([1,3]\) & \(\nu\) & Ref. \\ \hline C & 1.056 & - & \(0.996\pm 0.012\) & [53; 59] \\ CI & 1.107 & 1.822 & \(1.17\pm 0.02\) & [53] \\ D & 0.823 & 0.858 & \(0.87\pm 0.03\) & [53] \\ DIII & 0.751 & 0.674 & \(0.85\pm 0.05\) & [57] \\ DIII & 0.751 & 0.674 & \(0.96\pm 0.01\) & * \\ \end{tabular} \end{table} Table 5: Critical exponents \(\nu\) of the BdG symmetry classes in 3D and 4D obtained from order \([m,n]\) Borel-Padé resummation with \(A=2\), compared with numerical estimates. The asterisk marks the estimate obtained in this work. \begin{table} \begin{tabular}{c c c c} \multicolumn{4}{c}{(a) 3D} \\ & no resummation & \multicolumn{2}{c}{Borel-Padé with \(A=1\)} \\ class & & \([0,4]\) & \([1,3]\) \\ \hline C & 0.471 & 1.446 & - \\ CI & 0.555 & 1.478 & 2.131 \\ D & 0.187 & 1.254 & 1.249 \\ DIII & 0.151 & 1.202 & 1.088 \\ \end{tabular} \end{table} Table 6: Critical exponents \(\nu\) of the BdG symmetry classes in 3D and 4D obtained from \(\beta\)-function series without resummation and order \([m,n]\) Borel-Padé resummation with \(A=1\). **Acknowledgments** We thank Ryuichi Shindou, Ferdinand Evers and Alexander D. Mirlin for fruitful discussions. T.W. was supported by the National Basic Research Programs of China (Grant No. 2019YFA0308401) and the National Natural Science Foundation of China (Grants No. 11674011 and No. 12074008). Z.P. was supported by National Natural Science Foundation of China (No. 12147104). T.O. and K.S. were supported by JSPS KAKENHI Grants 19H00658, and T.O. was supported by JSPS KAKENHI 22H05114.
2303.10997
On the invariance of the arithmetic mean with respect to generalized Bajraktarević means
The purpose of this paper is to investigate the following invariance equation involving two $2$-variable generalized Bajraktarevi\'c means, i.e., we aim to solve the functional equation $$ f^{-1}\bigg(\frac{p_1(x)f(x)+p_2(y)f(y)}{p_1(x)+p_2(y)}\bigg)+g^{-1}\bigg(\frac{q_1(x)g(x)+q_2(y)g(y)}{q_1(x)+q_2(y)}\bigg)=x+y \qquad(x,y\in I), $$ where $I$ is a nonempty open real interval and $f,g:I\to\mathbb{R}$ are continuous, strictly monotone and $p_1,p_2,q_1,q_2:I\to\mathbb{R}_+$ are unknown functions. The main result of the paper shows that, assuming four times continuous differentiability of $f$, $g$, twice continuous differentiability of $p_1$ and $p_2$ and assuming that $p_1$ differs from $p_2$ on a dense subset of $I$, a necessary and sufficient condition for the equality above is that the unknown functions are of the form $$ f=\frac{u}{v},\qquad g=\frac{w}{z},\qquad \mbox{and}\qquad p_1q_1=p_2q_2=vz, $$ where $u,v,w,z:I\to\mathbb{R}$ are arbitrary solutions of the second-order linear differential equation $F''=\gamma F$ ($\gamma\in\mathbb{R}$ is arbitrarily fixed) such that $v>0$ and $z>0$ holds on $I$ and $\{u,v\}$ and $\{w,z\}$ are linearly independent.
Richárd Grünwald, Zsolt Páles
2023-03-20T10:24:18Z
http://arxiv.org/abs/2303.10997v1
# On the invariance of the arithmetic mean with respect to generalized Bajraktarevic means ###### Abstract. The purpose of this paper is to investigate the following invariance equation involving two \(2\)-variable generalized Bajraktarevic means, i.e., we aim to solve the functional equation \[f^{-1}\biggl{(}\frac{p_{1}(x)f(x)+p_{2}(y)f(y)}{p_{1}(x)+p_{2}(y)}\biggr{)}+g^ {-1}\biggl{(}\frac{q_{1}(x)g(x)+q_{2}(y)g(y)}{q_{1}(x)+q_{2}(y)}\biggr{)}=x+y \qquad(x,y\in I),\] where \(I\) is a nonempty open real interval and \(f,g:I\to\mathbb{R}\) are continuous, strictly monotone and \(p_{1},p_{2},q_{1},q_{2}:I\to\mathbb{R}_{+}\) are unknown functions. The main result of the paper shows that, assuming four times continuous differentiability of \(f\), \(g\), twice continuous differentiability of \(p_{1}\) and \(p_{2}\) and assuming that \(p_{1}\) differs from \(p_{2}\) on a dense subset of \(I\), a necessary and sufficient condition for the equality above is that the unknown functions are of the form \[f=\frac{u}{v},\qquad g=\frac{w}{z},\qquad\text{and}\qquad p_{1}q_{1}=p_{2}q_{ 2}=vz,\] where \(u,v,w,z:I\to\mathbb{R}\) are arbitrary solutions of the second-order linear differential equation \(F^{\prime\prime}=\gamma F\) (\(\gamma\in\mathbb{R}\) is arbitrarily fixed) such that \(v>0\) and \(z>0\) holds on \(I\) and \(\{u,v\}\) and \(\{w,z\}\) are linearly independent. Key words and phrases:quasi-arithmetic mean; Bajraktarevic mean; invariance equation 2010 Mathematics Subject Classification: 39B22, 39B12, 26E60 The research of the first author was supported by the UNKP-20-3 New National Excellence Program of the Ministry of Human Capacities. The research of the second author was supported by the K-134191 NKFIH Grant and the 2019-2.1.11-TET-2019-00049 and the EFOP-3.6.1-16-2016-00022 projects. The last project is co-financed by the European Union and the European Social Fund. 
2. _Let_ \(k\in\mathbb{N}\)_. Assume that_ \(M\) _is_ \(k\) _times partially differentiable (resp._ \(k\) _times continuously partially differentiable) with respect to its first and second variables on_ \(U\) _and_ \(f:I\to\mathbb{R}\) _is a_ \(k\) _times differentiable (resp._ \(k\) _times continuously differentiable) function on_ \(I\)_. Then_ \(p_{1}\) _and_ \(p_{2}\) _are_ \(k\) _times differentiable (resp._ \(k\) _times continuously differentiable) on_ \(I\)_._ Proof.: By our assumption, for all \((x,y)\in U\), we have that \[A_{f,p}(x,y)=M(x,y).\] This is equivalent to the following equality \[\frac{p_{1}(x)f(x)+p_{2}(y)f(y)}{p_{1}(x)+p_{2}(y)}=f(M(x,y))\qquad((x,y)\in U). 
\tag{3}\] Observe that, for \(x,y\in I\) with \(x\neq y\), the inequalities \(\min(x,y)<M(x,y)<\max(x,y)\) and the strict monotonicity of \(f\) imply that \(f(y)\neq f(M(x,y))\). Thus, solving equation (3) with respect to \(p_{2}(y)\), we get \[p_{2}(y)=p_{1}(x)\frac{f(M(x,y))-f(x)}{f(y)-f(M(x,y))}\qquad((x,y)\in U,\,x\neq y). \tag{4}\] Let \(x_{0}\in I\) be an arbitrarily fixed point. The pair \((x_{0},x_{0})\) is an interior point of \(U\), therefore, there exists \(x\in I\setminus\{x_{0}\}\) such that \((x,x_{0})\in U\). Then the set \[V:=\{y\in I\mid(x,y)\in U,\,x\neq y\}\] is a neighborhood of \(x_{0}\) on which we have the equality (4) for \(p_{2}\). On the other hand, the continuity of \(M\) in its second variable implies that the right hand side of (4) is a continuous function of \(y\) on \(V\). Therefore, \(p_{2}\) is continuous at \(x_{0}\), and hence \(p_{2}\) is continuous on \(I\). A similar argument shows that \(p_{1}\) is continuous due to the continuity of \(M\) in its first variable. By the standard calculus rules, under the \(k\)-times (continuous) differentiability assumptions, the right hand side of (4) is also \(k\)-times (continuously) differentiable on the above constructed set \(V\), in particular, at \(x_{0}\), which yields the same property for \(p_{2}\) at \(x_{0}\). The arbitrariness of \(x_{0}\) shows that \(p_{2}\) is \(k\)-times (continuously) differentiable on \(I\). ## 3. The Schwarzian derivative For a three times differentiable function \(f:I\to\mathbb{R}\) with a nonvanishing first derivative, we recall the notion of the Schwarzian derivative \(S(f):I\to\mathbb{R}\) which is defined by the following formula: \[S(f)=\frac{f^{\prime\prime\prime}}{f^{\prime}}-\frac{3}{2}\bigg{(}\frac{f^{\prime\prime}}{f^{\prime}}\bigg{)}^{2}.\] **Lemma 2**.: _Let \(\gamma\in\mathbb{R}\) and let \(f:I\to\mathbb{R}\) be a three times differentiable function such that \(f^{\prime}\) does not vanish on \(I\). 
Then the following two assertions are equivalent._ 1. _There exist twice differentiable functions_ \(u,v:I\to\mathbb{R}\) _such that_ \(v\) _does not vanish on_ \(I\)_,_ \[u^{\prime\prime}=\gamma u,\qquad v^{\prime\prime}=\gamma v,\qquad\text{and} \qquad f=\frac{u}{v}.\] 2. \(f\) _satisfies the third-order differential equation_ \[S(f)=-2\gamma.\] Proof.: Assume that assertion (i) holds. Then \(u\) and \(v\) are three times differentiable functions and \[u =fv,\] \[u^{\prime} =f^{\prime}v+fv^{\prime},\] \[\gamma u=u^{\prime\prime} =f^{\prime\prime}v+2f^{\prime}v^{\prime}+fv^{\prime\prime}=(\gamma f +f^{\prime\prime})v+2f^{\prime}v^{\prime},\] \[\gamma u^{\prime}=u^{\prime\prime\prime} =f^{\prime\prime\prime}v+3f^{\prime\prime}v^{\prime}+3f^{\prime}v ^{\prime\prime}+fv^{\prime\prime\prime}=(3\gamma f^{\prime}+f^{\prime\prime \prime})v+(\gamma f+3f^{\prime\prime})v^{\prime}.\] This is a system of homogeneous linear equations with respect to the unknowns \((u,u^{\prime},v,v^{\prime})\). Thus its base determinant has to be zero, that is, \[0 =\begin{vmatrix}1&0&f&0\\ 0&1&f^{\prime}&f\\ \gamma&0&\gamma f+f^{\prime\prime}&2f^{\prime}\\ 0&\gamma&3\gamma f^{\prime}+f^{\prime\prime\prime}&\gamma f+3f^{\prime\prime} \end{vmatrix}=\begin{vmatrix}1&0&0&0\\ 0&1&f^{\prime}&0\\ \gamma&0&f^{\prime\prime}&2f^{\prime}\\ 0&\gamma&3\gamma f^{\prime}+f^{\prime\prime\prime}&3f^{\prime\prime}\end{vmatrix} =\begin{vmatrix}1&f^{\prime}&0\\ 0&f^{\prime\prime}&2f^{\prime}\\ \gamma&3\gamma f^{\prime}+f^{\prime\prime\prime}&3f^{\prime\prime}\end{vmatrix}\] \[=\begin{vmatrix}1&0&0\\ 0&f^{\prime\prime}&2f^{\prime}\\ \gamma&2\gamma f^{\prime}+f^{\prime\prime\prime}&3f^{\prime\prime}\end{vmatrix} =\begin{vmatrix}f^{\prime\prime}&2f^{\prime}\\ 2\gamma f^{\prime}+f^{\prime\prime\prime}&3f^{\prime\prime}\end{vmatrix}=3f^{ \prime\prime 2}-4\gamma f^{\prime 2}-2f^{\prime}f^{\prime\prime\prime}=-2f^{\prime 2}(S(f)+ 2\gamma).\] Therefore, assertion (ii) is valid. 
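The determinant reduction in the proof above can be checked symbolically by treating \(f,f^{\prime},f^{\prime\prime},f^{\prime\prime\prime}\) as independent symbols; a minimal sketch:

```python
import sympy as sp

f0, f1, f2, f3, g = sp.symbols("f0 f1 f2 f3 gamma")

# Base determinant of the homogeneous linear system for (u, u', v, v'),
# with f0, f1, f2, f3 standing for f, f', f'', f'''.
M = sp.Matrix([
    [1, 0, f0, 0],
    [0, 1, f1, f0],
    [g, 0, g * f0 + f2, 2 * f1],
    [0, g, 3 * g * f1 + f3, g * f0 + 3 * f2],
])

# Schwarzian derivative in the same symbols.
S = f3 / f1 - sp.Rational(3, 2) * (f2 / f1) ** 2

# det M = 3 f''^2 - 4 gamma f'^2 - 2 f' f''' = -2 f'^2 (S(f) + 2 gamma)
assert sp.simplify(M.det() - (-2 * f1**2 * (S + 2 * g))) == 0
```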
To prove the converse, assume that assertion (ii) holds. Let \(x_{0}\in I\) be fixed and let \(u\) and \(v\) be linearly independent functions that satisfy \(u^{\prime\prime}=\gamma u\), \(v^{\prime\prime}=\gamma v\) and \[u(x_{0}) =f(x_{0}), v(x_{0}) =1,\] \[u^{\prime}(x_{0}) =f^{\prime}(x_{0})-\frac{1}{2}f^{\prime}(x_{0})^{-1}f^{\prime \prime}(x_{0})f(x_{0}), v^{\prime}(x_{0}) =-\frac{1}{2}f^{\prime}(x_{0})^{-1}f^{\prime\prime}(x_{0}).\] By the Liouville Theorem, the Wronskian \(W(u,v):=u^{\prime}v-v^{\prime}u\) of the linear differential equation \(F^{\prime\prime}=\gamma F\) is constant. Therefore, \(W(u,v)\equiv W(u,v)(x_{0})=f^{\prime}(x_{0})\). Let \(V\) be the largest open subinterval of \(I\) containing \(x_{0}\) such that \(v\) is positive on \(V\). Define \(g:V\to\mathbb{R}\) by \(g:=u/v\). We can see that \[g(x_{0}) =\frac{u(x_{0})}{v(x_{0})}=f(x_{0}),\] \[g^{\prime}(x_{0}) =\Big{(}\frac{u^{\prime}v-v^{\prime}u}{v^{2}}\Big{)}(x_{0})=f^{ \prime}(x_{0}),\] \[g^{\prime\prime}(x_{0}) =\Big{(}\frac{u^{\prime}v-v^{\prime}u}{v^{2}}\Big{)}^{\prime}(x_{0 })=\bigg{(}\frac{f^{\prime}(x_{0})}{v^{2}}\bigg{)}^{\prime}(x_{0})=-2f^{ \prime}(x_{0})\Big{(}\frac{v^{\prime}}{v^{2}}\Big{)}(x_{0})=f^{\prime\prime}(x_ {0}).\] On the other hand, by the first part of the proof, \(g\) also satisfies the differential equation \(S(g)=-2\gamma\) on \(V\). Thus \(f\) and \(g\) are solutions of the same ordinary differential equation and they satisfy the same initial value condition at \(x_{0}\). By the existence and uniqueness theorem for ordinary differential equations it follows that \(f=g\) on \(V\). If \(V\) were a proper subinterval of \(I\), then one of its endpoints, say the lower one, would belong to \(I\). At this endpoint, the function \(v\) vanishes, hence the right limit of \(g\) does not exist contradicting that the right limit of \(f\) exists at this point. This contradiction shows that \(V=I\) and hence, \(f\) is of the form stated in assertion (i). 
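Lemma 2 is easy to spot-check symbolically for concrete functions; a minimal sketch with \(\gamma=1\) (where \(\tanh=\sinh/\cosh\) is a ratio of solutions of \(F^{\prime\prime}=F\)) and with a Möbius function for the case \(\gamma=0\):

```python
import sympy as sp

x = sp.symbols("x")

def schwarzian(f):
    # S(f) = f'''/f' - (3/2) (f''/f')^2
    return sp.diff(f, x, 3) / sp.diff(f, x) \
        - sp.Rational(3, 2) * (sp.diff(f, x, 2) / sp.diff(f, x)) ** 2

# gamma = 1: f = sinh/cosh = tanh satisfies S(f) = -2*gamma = -2.
assert sp.simplify(schwarzian(sp.tanh(x)) + 2) == 0

# gamma = 0: a Mobius function (here (2x+3)/(x+5), an arbitrary example)
# has vanishing Schwarzian.
assert sp.simplify(schwarzian((2 * x + 3) / (x + 5))) == 0
```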
The next result is the particular case of Lemma 2 when \(\gamma=0\). **Corollary 3**.: _Let \(f:I\to\mathbb{R}\) be a three times differentiable function such that \(f^{\prime}\) does not vanish on \(I\). Then_ \[S(f)=0\] _holds on \(I\) if and only if there exist four constants \(a,b,c,d\in\mathbb{R}\) with \(ad\neq bc\) and \(0\not\in cI+d\) such that_ \[f(x)=\frac{ax+b}{cx+d}\qquad(x\in I).\] ## 4. A sufficient condition for the invariance equation In what follows we prove a sufficient condition for \((f,p),(g,q)\) to be a solution of (2). For a real number \(\gamma\in\mathbb{R}\), we introduce the sine and cosine type functions \(S_{\gamma},C_{\gamma}:\mathbb{R}\to\mathbb{R}\) by \[S_{\gamma}(x):=\sum_{k=0}^{\infty}\frac{\gamma^{k}x^{2k+1}}{(2k+1)!}=\begin{cases} \frac{\sin(\sqrt{-\gamma}x)}{\sqrt{-\gamma}}&\text{ if }\gamma<0,\\ x&\text{ if }\gamma=0,\\ \frac{\sinh(\sqrt{\gamma}x)}{\sqrt{\gamma}}&\text{ if }\gamma>0,\end{cases}\] \[C_{\gamma}(x):=\sum_{k=0}^{\infty}\frac{\gamma^{k}x^{2k}}{(2k)!}=\begin{cases} \cos(\sqrt{-\gamma}x)&\text{ if }\gamma<0,\\ 1&\text{ if }\gamma=0,\\ \cosh(\sqrt{\gamma}x)&\text{ if }\gamma>0.\end{cases}\] **Theorem 4**.: _Let \(\gamma\in\mathbb{R}\) be a real constant, let \(u,v,w,z:I\to\mathbb{R}\) be arbitrary solutions of the second-order linear differential equation \(F^{\prime\prime}=\gamma F\) such that \(v>0\) and \(z>0\) holds on \(I\) and \(\{u,v\}\) and \(\{w,z\}\) are linearly independent. Assume that the functions \(f,g:I\to\mathbb{R}\), \(p=(p_{1},p_{2}):I\to\mathbb{R}_{+}^{2}\), and \(q=(q_{1},q_{2}):I\to\mathbb{R}_{+}^{2}\) satisfy_ \[f=\frac{u}{v},\qquad g=\frac{w}{z},\qquad\text{and}\qquad p_{1}q_{1}=p_{2}q_{2 }=vz. 
\tag{5}\] _Then \(f\) and \(g\) are strictly monotone and continuous and the invariance equation (2) holds for all \(x,y\in I\)._ Proof.: By basic results on linear homogeneous differential equations, the functions \(S_{\gamma}\) and \(C_{\gamma}\) form a fundamental system of solutions for the differential equation \(F^{\prime\prime}=\gamma F\). Therefore, the pairs \((u,v)\) and \((w,z)\) are equivalent to \((S_{\gamma},C_{\gamma})\), that is, there exist \(a_{1},b_{1},c_{1},d_{1},a_{2},b_{2},c_{2},d_{2}\in\mathbb{R}\) real constants such that \(a_{1}d_{1}\neq b_{1}c_{1}\), \(a_{2}d_{2}\neq b_{2}c_{2}\) and \[u=a_{1}S_{\gamma}+b_{1}C_{\gamma},\qquad v=c_{1}S_{\gamma}+d_{1}C_{\gamma}, \qquad\text{ and }\qquad w=a_{2}S_{\gamma}+b_{2}C_{\gamma},\qquad z=c_{2}S_{ \gamma}+d_{2}C_{\gamma}.\] In view of the sufficiency part of [14, Theorem 6], this implies the identity \[\Big{(}\frac{u}{v}\Big{)}^{-1}\bigg{(}\frac{tu(x)+su(y)}{tv(x)+sv(y)}\bigg{)} +\Big{(}\frac{w}{z}\Big{)}^{-1}\bigg{(}\frac{sw(x)+tw(y)}{sz(x)+tz(y)}\bigg{)} =x+y\qquad(x,y\in I,\,t,s>0). \tag{6}\] On the other hand, with \(t:=\frac{p_{1}(x)}{v(x)}=\frac{z(x)}{q_{1}(x)}\) and \(s:=\frac{p_{2}(y)}{v(y)}=\frac{z(y)}{q_{2}(y)}\), using (5), we have \[\Big{(}\frac{u}{v}\Big{)}^{-1}\bigg{(}\frac{tu(x)+su(y)}{tv(x)+sv(y)}\bigg{)}= f^{-1}\bigg{(}\frac{p_{1}(x)f(x)+p_{2}(y)f(y)}{p_{1}(x)+p_{2}(y)}\bigg{)}=A_{f,p}(x,y)\] and \[\Big{(}\frac{w}{z}\Big{)}^{-1}\bigg{(}\frac{sw(x)+tw(y)}{sz(x)+tz(y)}\bigg{)}=g^{- 1}\bigg{(}\frac{q_{1}(x)g(x)+q_{2}(y)g(y)}{q_{1}(x)+q_{2}(y)}\bigg{)}=A_{g,q}(x, y).\] Therefore, (6) yields that (2) is satisfied. ## 5. Partial derivatives of generalized Bajraktarevic means In the next result we recall the formulas for the partial derivatives of generalized Bajraktarevic means up to third-order at diagonal points of \(I^{2}\). These assertions were proved in [8] under tight regularity assumptions. 
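The sufficiency statement of Theorem 4 can be checked numerically in a concrete instance: with \(\gamma=1\), take \(u=w=\sinh\) and \(v=z=\cosh\), so that \(f=g=\tanh\), and choose \(p_{1}=p_{2}=q_{1}=q_{2}=\cosh\), which satisfies (5) because \(p_{1}q_{1}=p_{2}q_{2}=\cosh^{2}=vz\). A minimal sketch:

```python
import math

def bajraktarevic_mean(x, y):
    # A_{f,p}(x,y) with f = tanh and p1 = p2 = cosh:
    # atanh((cosh(x)tanh(x) + cosh(y)tanh(y)) / (cosh(x) + cosh(y)))
    return math.atanh(
        (math.sinh(x) + math.sinh(y)) / (math.cosh(x) + math.cosh(y))
    )

# Here both means in (2) coincide, so the invariance equation reads
# 2 * A_{f,p}(x, y) = x + y.
for x, y in [(0.3, 1.1), (-0.7, 0.25), (2.0, 2.0)]:
    assert abs(2 * bajraktarevic_mean(x, y) - (x + y)) < 1e-12
```

The identity behind the check is \(\sinh x+\sinh y=2\sinh\frac{x+y}{2}\cosh\frac{x-y}{2}\) and \(\cosh x+\cosh y=2\cosh\frac{x+y}{2}\cosh\frac{x-y}{2}\), so each mean equals \(\frac{x+y}{2}\).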
We also calculate the fourth-order partial derivative \(\partial_{1}^{2}\partial_{2}^{2}A_{f,p}\). **Theorem 5**.: _Let \(\ell\in\{1,2,3,4\}\), let \(f:I\to\mathbb{R}\) be an \(\ell\) times differentiable function on \(I\) with a nonvanishing first derivative, and let \(p=(p_{1},p_{2}):I\to\mathbb{R}_{+}^{2}\). Then we have the following assertions._ 1. _If_ \(\ell=1\)_,_ \(i\in\{1,2\}\)_, and_ \(p_{i}\) _is continuous on_ \(I\)_, then the first-order partial derivative_ \(\partial_{i}A_{f,p}\) _exists on_ \(\operatorname{diag}(I^{2})\) _and_ \[\partial_{i}A_{f,p}\circ\Delta_{2}=\frac{p_{i}}{p_{0}}.\] 2. _If_ \(\ell=2\)_,_ \(p_{1}\) _and_ \(p_{2}\) _are differentiable on_ \(I\)_, then the second-order partial derivative_ \(\partial_{1}\partial_{2}A_{f,p}\) _exists on_ \(\operatorname{diag}(I^{2})\) _and_ \[\partial_{1}\partial_{2}A_{f,p}\circ\Delta_{2}=-\frac{p_{1}p_{2}}{p_{0}^{2}} \bigg{(}\frac{(p_{1}p_{2})^{\prime}}{p_{1}p_{2}}+\frac{f^{\prime\prime}}{f^{ \prime}}\bigg{)}.\] 3. _If_ \(\ell=3\) _and_ \(p_{1}\) _and_ \(p_{2}\) _are twice differentiable on_ \(I\)_, then the third-order partial derivative_ \(\partial_{1}^{2}\partial_{2}A_{f,p}\) _exists on_ \(I^{2}\) _and_ \[\partial_{1}^{2}\partial_{2}A_{f,p}\circ\Delta_{2}=-\frac{1}{4} \Big{(}\frac{p_{1}-p_{2}}{p_{0}}\Big{)}^{\prime\prime}-\frac{3p_{0}(p_{1}-p_{ 2})}{16p_{1}p_{2}}\bigg{(}\Big{(}\frac{p_{1}-p_{2}}{p_{0}}\Big{)}^{\prime} \bigg{)}^{2}-\frac{1}{2}\bigg{(}\frac{p_{1}p_{2}}{p_{0}^{2}}\bigg{(}\frac{(p_{ 1}p_{2})^{\prime}}{p_{1}p_{2}}+\frac{f^{\prime\prime}}{f^{\prime}}\bigg{)} \bigg{)}^{\prime}\] \[\qquad\qquad\qquad+\frac{3p_{1}p_{2}}{4p_{0}^{2}}\cdot\frac{p_{1} -p_{2}}{p_{0}}\bigg{(}\frac{(p_{1}p_{2})^{\prime}}{p_{1}p_{2}}+\frac{f^{ \prime\prime}}{f^{\prime}}\bigg{)}^{2}-\frac{p_{1}p_{2}}{2p_{0}^{2}}\cdot\frac {p_{1}-p_{2}}{p_{0}}S(f).\] 4. 
_If_ \(\ell=4\)_,_ \(p_{1}\) _and_ \(p_{2}\) _are twice continuously differentiable on_ \(I\)_, then the fourth-order partial derivative_ \(\partial_{1}^{2}\partial_{2}^{2}A_{f,p}\) _exists on_ \(\operatorname{diag}(I^{2})\) _and_ \[\partial_{1}^{2}\partial_{2}^{2}A_{f,p}\circ\Delta_{2}=\bigg{(} \Big{(}\frac{p_{1}p_{2}}{p_{0}^{2}}\Big{)}^{\prime\prime}+\frac{3} {8}\Big{(}6-\frac{p_{0}^{2}}{p_{1}p_{2}}\Big{)}\bigg{(}\Big{(}\frac{p_{1}-p_{ 2}}{p_{0}}\Big{)}^{\prime}\bigg{)}^{2}\bigg{)}\bigg{(}\frac{(p_{1}p_{2})^{ \prime}}{p_{1}p_{2}}+\frac{f^{\prime\prime}}{f^{\prime}}\bigg{)}\] \[\qquad\qquad\qquad-\Big{(}\frac{p_{1}p_{2}}{p_{0}^{2}}\Big{)}^{ \prime}\bigg{(}\frac{(p_{1}p_{2})^{\prime}}{p_{1}p_{2}}+\frac{f^{\prime\prime} }{f^{\prime}}\bigg{)}^{\prime}-\frac{1}{2}\Big{(}\frac{p_{1}p_{2}}{p_{0}^{2}} \Big{)}^{2}\bigg{(}6-\frac{p_{0}^{2}}{p_{1}p_{2}}\bigg{)}\bigg{(}\frac{(p_{1}p_ {2})^{\prime}}{p_{1}p_{2}}+\frac{f^{\prime\prime}}{f^{\prime}}\bigg{)}^{3}\] \[\qquad\qquad\qquad+\Big{(}\frac{p_{1}p_{2}}{p_{0}^{2}}\Big{)}^{ 2}\bigg{(}6-\frac{p_{0}^{2}}{p_{1}p_{2}}\bigg{)}\bigg{(}\frac{(p_{1}p_{2})^{ \prime}}{p_{1}p_{2}}+\frac{f^{\prime\prime}}{f^{\prime}}\bigg{)}S(f)-\Big{(} \frac{p_{1}p_{2}}{p_{0}^{2}}\Big{)}^{2}S(f)^{\prime}.\] Proof.: It follows from elementary calculus rules that partial derivative \(\partial_{1}^{\alpha}\partial_{2}^{\beta}A_{f,p}\) exists on \(I^{2}\) if \(f\) is \((\alpha+\beta)\)-times differentiable with a nonvanishing first derivative, furthermore, \(p_{1}\) is \(\alpha\)-times and \(p_{2}\) is \(\beta\)-times differentiable on \(I\). By equality (3), with the notation \(M:=A_{f,p}\), for all \(x,y\in I\), we have \[p_{1}(x)f(x)+p_{2}(y)f(y)=p_{1}(x)f\big{(}M(x,y)\big{)}+p_{2}(y)f\big{(}M(x,y) \big{)}.\] Let \(\ell\in\{1,2,3,4\}\), \(\alpha,\beta\in\{0,1,2\}\) and let \(\delta_{\cdot,\cdot}\) denote the standard Kronecker symbol. 
Differentiating this equality with respect to the first variable \(\alpha\)-times and the second variable \(\beta\)-times by applying the generalized Leibniz Product Rule, we get \[\delta_{0,\beta}\sum_{i=0}^{\alpha} \binom{\alpha}{i}p_{1}^{(\alpha-i)}(x)f^{(i)}(x)+\delta_{0,\alpha }\sum_{j=0}^{\beta}\binom{\beta}{j}p_{2}^{(\beta-j)}(y)f^{(j)}(y)\] \[=\sum_{i=0}^{\alpha}\binom{\alpha}{i}p_{1}^{(\alpha-i)}(x)\cdot \partial_{1}^{i}\partial_{2}^{\beta}(f\circ M)(x,y)+\sum_{j=0}^{\beta}\binom{ \beta}{j}p_{2}^{(\beta-j)}(y)\cdot\partial_{1}^{\alpha}\partial_{2}^{j}(f \circ M)(x,y).\] Restricting this equality to the diagonal of \(I^{2}\), we obtain \[\delta_{0,\beta}\sum_{i=0}^{\alpha} \binom{\alpha}{i}p_{1}^{(\alpha-i)}f^{(i)}+\delta_{0,\alpha} \sum_{j=0}^{\beta}\binom{\beta}{j}p_{2}^{(\beta-j)}f^{(j)} \tag{7}\] \[=\sum_{i=0}^{\alpha}\binom{\alpha}{i}p_{1}^{(\alpha-i)}\cdot \partial_{1}^{i}\partial_{2}^{\beta}(f\circ M)\circ\Delta_{2}+\sum_{j=0}^{ \beta}\binom{\beta}{j}p_{2}^{(\beta-j)}\cdot\partial_{1}^{\alpha}\partial_{2} ^{j}(f\circ M)\circ\Delta_{2}.\] For the computation of the partial derivatives \(\partial_{1}^{i}\partial_{2}^{j}(f\circ M)\), the following easy-to-see formulas apply: \[\partial_{\mu}(f\circ M) =(f^{\prime}\circ M)\cdot\partial_{\mu}M, \tag{8}\] \[\partial_{\mu}\partial_{\nu}(f\circ M) =(f^{\prime\prime}\circ M)\cdot\partial_{\mu}M\cdot\partial_{\nu }M+(f^{\prime}\circ M)\cdot\partial_{\mu}\partial_{\nu}M,\] \[\partial_{\mu}^{2}\partial_{\nu}(f\circ M) =(f^{\prime\prime\prime}\circ M)\cdot\partial_{\mu}M^{2}\cdot \partial_{\nu}M+(f^{\prime\prime}\circ M)\cdot\big{(}\partial_{\mu}^{2}M\cdot \partial_{\nu}M+2\partial_{\mu}\partial_{\nu}M\cdot\partial_{\mu}M\big{)}\] \[\quad+(f^{\prime}\circ M)\cdot\partial_{\mu}^{2}\partial_{\nu}M,\] \[\partial_{1}^{2}\partial_{2}^{2}(f\circ M) =(f^{\prime\prime\prime\prime}\circ M)\cdot\partial_{1}M^{2}\cdot \partial_{2}M^{2}+(f^{\prime}\circ M)\cdot\partial_{1}^{2}\partial_{2}^{2}M\] 
\[\quad+(f^{\prime\prime\prime}\circ M)\cdot\big{(}\partial_{1}^{2 }M\cdot\partial_{2}M^{2}+\partial_{2}^{2}M\cdot\partial_{1}M^{2}+4\partial_{1 }\partial_{2}M\cdot\partial_{1}M\cdot\partial_{2}M\big{)}\] \[\quad+(f^{\prime\prime}\circ M)\cdot\big{(}\partial_{1}^{2}M \cdot\partial_{2}^{2}M+2\partial_{1}\partial_{2}M^{2}+2\partial_{1}^{2} \partial_{2}M\cdot\partial_{2}M+2\partial_{1}\partial_{2}^{2}M\cdot\partial_ {1}M\big{)},\] where \(\mu,\nu\in\{1,2\}\). In the particular case when \(\alpha=1\) and \(\beta=0\), (7) and also the first formula from (8) yield \[p_{1}^{\prime}f+p_{1}f^{\prime}=p_{1}^{\prime}f+p_{0}f^{\prime}\cdot\partial_{ 1}M\circ\Delta_{2}.\] Expressing \(\partial_{1}M\circ\Delta_{2}\), we arrive at assertion (i) for \(i=1\). The case \(i=2\) can be seen similarly. In the case \(\alpha=\beta=1\), using the first two formulas from (8), the equality (7) simplifies to \[0=p_{1}^{\prime}f^{\prime}\cdot\partial_{2}M\circ\Delta_{2}+p_{2}^{\prime}f^{ \prime}\cdot\partial_{1}M\circ\Delta_{2}+p_{0}\big{(}f^{\prime\prime}\cdot( \partial_{1}M\cdot\partial_{2}M)\circ\Delta_{2}+f^{\prime}\cdot\partial_{1} \partial_{2}M\circ\Delta_{2}\big{)},\] which, using (i), implies that \[f^{\prime\prime}\cdot(\partial_{1}M\cdot\partial_{2}M)\circ\Delta_{2}+f^{ \prime}\cdot\partial_{1}\partial_{2}M\circ\Delta_{2}=-\frac{(p_{1}p_{2})^{ \prime}}{p_{0}^{2}}f^{\prime}. \tag{9}\] Now, applying the formulas from (i) again, we can conclude that (ii) is valid. 
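Assertions (i) and (ii) of Theorem 5 can be verified symbolically for concrete data; the sketch below uses the assumed toy weights \(p_{1}(x)=2+\sin x\), \(p_{2}(x)=1+x^{2}\) and \(f=\exp\) (so that \(f^{\prime\prime}/f^{\prime}=1\)):

```python
import sympy as sp

x, y = sp.symbols("x y")
P1 = 2 + sp.sin(x)   # toy weight p1 > 0 (an assumption for illustration)
P2 = 1 + x**2        # toy weight p2 > 0
p0 = P1 + P2

# A_{f,p}(x, y) with f = exp, so f^{-1} = log.
A = sp.log((P1 * sp.exp(x) + P2.subs(x, y) * sp.exp(y)) / (P1 + P2.subs(x, y)))

d1 = sp.diff(A, x).subs(y, x)        # d/dx A restricted to the diagonal
d12 = sp.diff(A, x, y).subs(y, x)    # d^2/(dx dy) A restricted to the diagonal

rhs1 = P1 / p0                                                     # assertion (i)
rhs2 = -(P1 * P2 / p0**2) * (sp.diff(P1 * P2, x) / (P1 * P2) + 1)  # assertion (ii)

# Numerical comparison at an arbitrary interior point.
for expr in (d1 - rhs1, d12 - rhs2):
    assert abs(float(expr.subs(x, sp.Rational(7, 10)))) < 1e-9
```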
Using (i) and (ii), we can also get the formula \[\begin{split}\partial_{i}^{2}M\circ\Delta_{2}&=\big{(} \partial_{i}M(x,x)\circ\Delta_{2}\big{)}^{\prime}-\partial_{1}\partial_{2}M \circ\Delta_{2}\\ &=\Big{(}\frac{p_{i}}{p_{0}}\Big{)}^{\prime}+\frac{p_{1}p_{2}}{p _{0}^{2}}\bigg{(}\frac{(p_{1}p_{2})^{\prime}}{p_{1}p_{2}}+\frac{f^{\prime \prime}}{f^{\prime}}\bigg{)}=2\frac{p_{i}^{\prime}p_{3-i}}{p_{0}^{2}}+\frac{p _{1}p_{2}}{p_{0}^{2}}\cdot\frac{f^{\prime\prime}}{f^{\prime}}.\end{split} \tag{10}\] Using assertion (i) and (10), it follows that \[f^{\prime\prime}\cdot(\partial_{i}M^{2})\circ\Delta_{2}+f^{\prime}\cdot \partial_{i}^{2}M\circ\Delta_{2}=\frac{p_{i}}{p_{0}}f^{\prime\prime}+2\frac{p _{i}^{\prime}p_{3-i}}{p_{0}^{2}}f^{\prime}. \tag{11}\] In the case when \(\alpha=2\), \(\beta=1\), the equality (7) yields \[0=p_{1}^{\prime\prime}\cdot\partial_{2}(f\circ M)\circ\Delta_{2}+2p_{1}^{ \prime}\cdot\partial_{1}\partial_{2}(f\circ M)\circ\Delta_{2}+p_{2}^{\prime} \cdot\partial_{1}^{2}(f\circ M)\circ\Delta_{2}+p_{0}\cdot\partial_{1}^{2} \partial_{2}(f\circ M)\circ\Delta_{2}.\] In view of (8), we can rewrite this equality as \[\begin{split} 0&=p_{1}^{\prime\prime}f^{\prime}\cdot \partial_{2}M\circ\Delta_{2}+2p_{1}^{\prime}\Big{(}f^{\prime\prime}\cdot( \partial_{1}M\cdot\partial_{2}M)\circ\Delta_{2}+f^{\prime}\cdot\partial_{1} \partial_{2}M\circ\Delta_{2}\Big{)}\\ &\qquad+p_{2}^{\prime}\Big{(}f^{\prime\prime}\cdot(\partial_{1}M^ {2})\circ\Delta_{2}+f^{\prime}\cdot\partial_{1}^{2}M\circ\Delta_{2}\Big{)}+p_ {0}\Big{(}f^{\prime\prime\prime}\cdot(\partial_{1}M^{2}\cdot\partial_{2}M) \circ\Delta_{2}\\ &\qquad+f^{\prime\prime}\cdot\big{(}\partial_{1}^{2}M\cdot \partial_{2}M+2\partial_{1}\partial_{2}M\cdot\partial_{1}M\big{)}\circ\Delta _{2}+f^{\prime}\cdot\partial_{1}^{2}\partial_{2}M\circ\Delta_{2}\Big{)}.\end{split}\] This equality, using assertion (i), (9) and (11), implies that \[\begin{split} f^{\prime\prime\prime}\cdot&(\partial_{1 
}M^{2}\cdot\partial_{2}M)\circ\Delta_{2}+f^{\prime\prime}\cdot(\partial_{1}^ {2}M\cdot\partial_{2}M+2\partial_{1}\partial_{2}M\cdot\partial_{1}M)\circ \Delta_{2}+f^{\prime}\cdot\partial_{1}^{2}\partial_{2}M\circ\Delta_{2}\\ &=-\frac{p_{1}^{\prime\prime}p_{2}}{p_{0}^{2}}f^{\prime}+2\frac{p _{1}^{\prime}(p_{1}p_{2})^{\prime}}{p_{0}^{3}}f^{\prime}-\frac{p_{2}^{\prime}}{p _{0}}\bigg{(}\frac{p_{1}}{p_{0}}f^{\prime\prime}+2\frac{p_{1}^{\prime}p_{2}}{p _{0}^{2}}f^{\prime}\bigg{)}\\ &=\frac{2p_{1}^{\prime}((p_{1}p_{2})^{\prime}-p_{2}^{\prime}p_{2} )-p_{1}^{\prime\prime}p_{2}p_{0}}{p_{0}^{3}}f^{\prime}-\frac{p_{2}^{\prime}p_{ 1}}{p_{0}^{2}}f^{\prime\prime}.\end{split} \tag{12}\] Using assertions (i), (ii), we can also get \[(\partial_{1}^{2}M\cdot\partial_{2}M+2\partial_{1}\partial_{2}M\cdot\partial _{1}M)\circ\Delta_{2}=2\frac{p_{1}^{\prime}p_{2}^{2}-p_{1}(p_{1}p_{2})^{ \prime}}{p_{0}^{3}}+\frac{p_{1}p_{2}(p_{2}-2p_{1})}{p_{0}^{3}}\cdot\frac{f^{ \prime\prime}}{f^{\prime}}. \tag{13}\] Thus, dividing equation (12) by \(f^{\prime}\) side by side, it reduces to \[\begin{split}\frac{p_{1}^{2}p_{2}}{p_{0}^{3}}\cdot\frac{f^{\prime \prime\prime}}{f^{\prime}}+\bigg{(}2\frac{p_{1}^{\prime}p_{2}^{2}-p_{1}(p_{1}p_{ 2})^{\prime}}{p_{0}^{3}}&+\frac{p_{1}p_{2}(p_{2}-2p_{1})}{p_{0}^{ 3}}\cdot\frac{f^{\prime\prime}}{f^{\prime}}\bigg{)}\frac{f^{\prime\prime}}{f^{ \prime}}+\partial_{1}^{2}\partial_{2}M\circ\Delta_{2}\\ &=\frac{2p_{1}^{\prime}((p_{1}p_{2})^{\prime}-p_{2}^{\prime}p_{2})-p _{1}^{\prime\prime}p_{2}p_{0}}{p_{0}^{3}}-\frac{p_{2}^{\prime}p_{1}}{p_{0}^{2}} \cdot\frac{f^{\prime\prime}}{f^{\prime}}.\end{split}\] Therefore, \[\begin{split}\partial_{1}^{2}\partial_{2}M\circ\Delta_{2}& =\frac{2p_{1}^{\prime}((p_{1}p_{2})^{\prime}-p_{2}^{\prime}p_{2})-p_{1}^{\prime \prime}p_{2}p_{0}}{p_{0}^{3}}-\frac{(2p_{1}^{\prime}p_{2}+p_{2}^{\prime}p_{1})(p_ {2}-p_{1})}{p_{0}^{3}}\cdot\frac{f^{\prime\prime}}{f^{\prime}}\\ &\qquad-\frac{p_{1}p_{2}(p_{2}-2p_{1})}{p_{0}^{3}}\bigg{(}\frac{f^ 
{\prime\prime}}{f^{\prime}}\bigg{)}^{2}-\frac{p_{1}^{2}p_{2}}{p_{0}^{3}}\cdot \frac{f^{\prime\prime\prime}}{f^{\prime}}.\end{split}\] From here, assertion (iii) follows. In the case when \(\alpha=\beta=2\), the equality (7) yields \[0= p_{2}^{\prime\prime}\cdot\partial_{1}^{2}(f\circ M)\circ\Delta_{2}+p_{1}^{ \prime\prime}\cdot\partial_{2}^{2}(f\circ M)\circ\Delta_{2}\] \[+2p_{2}^{\prime}\cdot\partial_{1}^{2}\partial_{2}(f\circ M)\circ \Delta_{2}+2p_{1}^{\prime}\cdot\partial_{1}\partial_{2}^{2}(f\circ M)\circ \Delta_{2}+p_{0}\cdot\partial_{1}^{2}\partial_{2}^{2}(f\circ M)\circ\Delta_{2}.\] Hence \[0= p_{2}^{\prime\prime}\Big{(}f^{\prime\prime}\cdot(\partial_{1}M^{2}) \circ\Delta_{2}+f^{\prime}\cdot\partial_{1}^{2}M\circ\Delta_{2}\Big{)}+p_{1}^ {\prime\prime}\Big{(}f^{\prime\prime}\cdot(\partial_{2}M^{2})\circ\Delta_{2}+ f^{\prime}\cdot\partial_{2}^{2}M\circ\Delta_{2}\Big{)}\] \[+2p_{2}^{\prime}\Big{(}f^{\prime\prime\prime}\cdot(\partial_{1}M ^{2}\cdot\partial_{2}M)\circ\Delta_{2}+f^{\prime\prime}\cdot(\partial_{1}^{2} M\cdot\partial_{2}M+2\partial_{1}\partial_{2}M\cdot\partial_{1}M)\circ\Delta_{2}+ f^{\prime}\cdot\partial_{1}^{2}\partial_{2}M\circ\Delta_{2}\Big{)}\] \[+2p_{1}^{\prime}\Big{(}f^{\prime\prime\prime}\cdot(\partial_{2}M ^{2}\cdot\partial_{1}M)\circ\Delta_{2}+f^{\prime\prime}\cdot(\partial_{2}^{2} M\cdot\partial_{1}M+2\partial_{1}\partial_{2}M\cdot\partial_{2}M)\circ\Delta_{2}+ f^{\prime}\cdot\partial_{2}^{2}\partial_{1}M\circ\Delta_{2}\Big{)}\] \[+p_{0}\Big{(}f^{\prime\prime\prime\prime}\cdot(\partial_{1}M^{2} \cdot\partial_{2}M^{2})\circ\Delta_{2}+f^{\prime}\cdot\partial_{1}^{2}\partial _{2}^{2}M\circ\Delta_{2}\] \[\qquad\qquad+f^{\prime\prime\prime}\cdot\Big{(}\partial_{1}^{2}M \cdot\partial_{2}M^{2}+\partial_{2}^{2}M\cdot\partial_{1}M^{2}+4\partial_{1} \partial_{2}M\cdot\partial_{1}M\cdot\partial_{2}M\Big{)}\circ\Delta_{2}\] \[\qquad\qquad+f^{\prime\prime}\cdot\Big{(}\partial_{1}^{2}M\cdot 
\partial_{2}^{2}M+2\partial_{1}\partial_{2}M^{2}+2\partial_{1}^{2}\partial_{2}M \cdot\partial_{2}M+2\partial_{1}\partial_{2}^{2}M\cdot\partial_{1}M\Big{)} \circ\Delta_{2}\Big{)}.\] Using (13) and its symmetric counterpart, we can obtain that \[\big{(}\partial_{1}^{2}M\cdot\partial_{2}M^{2}+\partial_{2}^{2}M \cdot\partial_{1}M^{2}+4\partial_{1}\partial_{2}M\cdot\partial_{1}M\cdot \partial_{2}M\big{)}\circ\Delta_{2}\] \[=\big{(}(\partial_{1}^{2}M\cdot\partial_{2}M+2\partial_{1}\partial _{2}M\cdot\partial_{1}M)\cdot\partial_{2}M\big{)}\circ\Delta_{2}+\big{(}( \partial_{2}^{2}M\cdot\partial_{1}M+2\partial_{1}\partial_{2}M\cdot\partial_ {2}M)\cdot\partial_{1}M\big{)}\circ\Delta_{2}\] \[=\bigg{(}2\frac{p_{1}^{\prime}p_{2}^{2}-p_{1}(p_{1}p_{2})^{\prime }}{p_{0}^{3}}+\frac{p_{1}p_{2}(p_{2}-2p_{1})}{p_{0}^{3}}\cdot\frac{f^{\prime \prime}}{f^{\prime}}\bigg{)}\frac{p_{2}}{p_{0}}+\bigg{(}2\frac{p_{2}^{\prime}p _{1}^{2}-p_{2}(p_{1}p_{2})^{\prime}}{p_{0}^{3}}+\frac{p_{1}p_{2}(p_{1}-2p_{2})} {p_{0}^{3}}\cdot\frac{f^{\prime\prime}}{f^{\prime}}\bigg{)}\frac{p_{1}}{p_{0}}\] \[=2\frac{p_{1}^{\prime}p_{2}^{3}+p_{2}^{\prime}p_{1}^{3}-2p_{1}p_{ 2}(p_{1}p_{2})^{\prime}}{p_{0}^{4}}+\frac{p_{1}p_{2}(p_{0}^{2}-6p_{1}p_{2})}{p _{0}^{4}}\cdot\frac{f^{\prime\prime}}{f^{\prime}}\] and \[\big{(}\partial_{1}^{2}M\cdot\partial_{2}^{2}M+2\partial_{1} \partial_{2}M^{2}+2\partial_{1}^{2}\partial_{2}M\cdot\partial_{2}M+2\partial_{1 }\partial_{2}^{2}M\cdot\partial_{1}M\big{)}\circ\Delta_{2}\] \[=\bigg{(}2\frac{p_{1}^{\prime}p_{2}}{p_{0}^{2}}+\frac{p_{1}p_{2}}{ p_{0}^{2}}\cdot\frac{f^{\prime\prime}}{f^{\prime}}\bigg{)}\bigg{(}2\frac{p_{2}^{ \prime}p_{1}}{p_{0}^{2}}+\frac{p_{1}p_{2}}{p_{0}^{2}}\cdot\frac{f^{\prime\prime }}{f^{\prime}}\bigg{)}+2\bigg{(}\frac{(p_{1}p_{2})^{\prime}}{p_{0}^{2}}+\frac{p _{1}p_{2}}{p_{0}^{2}}\cdot\frac{f^{\prime\prime}}{f^{\prime}}\bigg{)}^{2}\] \[\qquad\qquad+2\bigg{(}\frac{2p_{1}^{\prime}((p_{1}p_{2})^{\prime 
}-p_{2}^{\prime}p_{2})-p_{1}^{\prime\prime}p_{2}p_{0}}{p_{0}^{3}}-\frac{(2p_{1} ^{\prime}p_{2}+p_{2}^{\prime}p_{1})(p_{2}-p_{1})}{p_{0}^{3}}\cdot\frac{f^{ \prime\prime}}{f^{\prime}}-\frac{p_{1}p_{2}(p_{2}-2p_{1})}{p_{0}^{3}}\bigg{(} \frac{f^{\prime\prime}}{f^{\prime}}\bigg{)}^{2}\] \[\qquad\qquad\qquad\qquad-\frac{p_{1}p_{2}}{p_{0}^{3}}\cdot\frac{f^{ \prime\prime\prime}}{f^{\prime}}\bigg{)}\frac{p_{2}}{p_{0}}+2\bigg{(}\frac{2p_{ 2}^{\prime}((p_{1}p_{2})^{\prime}-p_{1}^{\prime}p_{1})-p_{2}^{\prime\prime}p_{ 1}p_{0}}{p_{0}^{3}}-\frac{(2p_{2}^{\prime}p_{1}+p_{1}^{\prime}p_{2})(p_{1}-p_{ 2})}{p_{0}^{3}}\cdot\frac{f^{\prime\prime}}{f^{\prime}}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad- \frac{p_{1}p_{2}(p_{1}-2p_{2})}{p_{0}^{3}}\bigg{(}\frac{f^{\prime\prime}}{f^{ \prime}}\bigg{)}^{2}-\frac{p_{1}p_{2}^{2}}{p_{0}^{3}}\cdot\frac{f^{\prime\prime \prime}}{f^{\prime}}\bigg{)}\frac{p_{1}}{p_{0}}\] \[=\frac{4p_{1}^{\prime}p_{2}^{\prime}(3p_{1}p_{2}-p_{0}^{2})+6(p_{1 }p_{2})^{\prime 2}}{p_{0}^{4}}-2\frac{p_{1}^{\prime\prime}p_{2}^{2}+p_{2}^{\prime \prime}p_{1}^{2}}{p_{0}^{3}}\] \[\qquad\qquad\qquad+\frac{4(p_{1}p_{2})^{\prime}(5p_{1}p_{2}-p_{0}^{ 2})+2p_{1}p_{2}(p_{1}^{\prime}p_{1}+p_{2}^{\prime}p_{2})}{p_{0}^{4}}\cdot \frac{f^{\prime\prime}}{f^{\prime}}+\frac{p_{1}p_{2}(15p_{1}p_{2}-2p_{0}^{2})}{p _{0}^{4}}\bigg{(}\frac{f^{\prime\prime}}{f^{\prime}}\bigg{)}^{2}-4\frac{p_{1}^{2}p _{2}^{2}}{p_{0}^{4}}\cdot\frac{f^{\prime\prime\prime}}{f^{\prime}}.\] Dividing this equality by \(p_{0}f^{\prime}\) side by side, and then using assertions (i), (ii) and (10) for the computation of the at most second-order partial derivatives, a simple computation yields that \[0 =\frac{p_{2}^{\prime\prime}}{p_{0}}\bigg{(}2\frac{p_{1}^{\prime}p_{ 2}}{p_{0}^{2}}+\frac{p_{1}}{p_{0}}\cdot\frac{f^{\prime\prime}}{f^{\prime}} \bigg{)}+2\frac{p_{2}^{\prime}}{p_{0}}\bigg{(}\frac{2p_{1}^{\prime}((p_{1}p_{2}) ^{\prime}-p_{2}^{\prime}p_{2})-p_{1}^{\prime\prime}p_{2}p_{0}}{p_{0}^{3}}- 
\frac{p_{2}^{\prime}p_{1}}{p_{0}^{2}}\cdot\frac{f^{\prime\prime}}{f^{\prime}}\bigg{)}\] \[\quad+\frac{p_{1}^{\prime\prime}}{p_{0}}\bigg{(}2\frac{p_{2}^{\prime}p_{1}}{p_{0}^{2}}+\frac{p_{2}}{p_{0}}\cdot\frac{f^{\prime\prime}}{f^{\prime}}\bigg{)}+2\frac{p_{1}^{\prime}}{p_{0}}\bigg{(}\frac{2p_{2}^{\prime}((p_{1}p_{2})^{\prime}-p_{1}^{\prime}p_{1})-p_{2}^{\prime\prime}p_{1}p_{0}}{p_{0}^{3}}-\frac{p_{1}^{\prime}p_{2}}{p_{0}^{2}}\cdot\frac{f^{\prime\prime}}{f^{\prime}}\bigg{)}\] \[\quad+\frac{p_{1}^{2}p_{2}^{2}}{p_{0}^{4}}\cdot\frac{f^{\prime\prime\prime\prime}}{f^{\prime}}+\bigg{(}2\frac{p_{1}^{\prime}p_{2}^{3}+p_{2}^{\prime}p_{1}^{3}-2p_{1}p_{2}(p_{1}p_{2})^{\prime}}{p_{0}^{4}}+\frac{p_{1}p_{2}(p_{0}^{2}-6p_{1}p_{2})}{p_{0}^{4}}\cdot\frac{f^{\prime\prime}}{f^{\prime}}\bigg{)}\frac{f^{\prime\prime\prime}}{f^{\prime}}\] \[\quad\quad+\bigg{(}\frac{4p_{1}^{\prime}p_{2}^{\prime}(3p_{1}p_{2}-p_{0}^{2})+6(p_{1}p_{2})^{\prime 2}}{p_{0}^{4}}+\frac{4(p_{1}p_{2})^{\prime}(5p_{1}p_{2}-p_{0}^{2})+2p_{1}p_{2}(p_{1}^{\prime}p_{1}+p_{2}^{\prime}p_{2})}{p_{0}^{4}}\cdot\frac{f^{\prime\prime}}{f^{\prime}}\] \[\qquad\quad-2\frac{p_{1}^{\prime\prime}p_{2}^{2}+p_{2}^{\prime\prime}p_{1}^{2}}{p_{0}^{3}}+\frac{p_{1}p_{2}(15p_{1}p_{2}-2p_{0}^{2})}{p_{0}^{4}}\bigg{(}\frac{f^{\prime\prime}}{f^{\prime}}\bigg{)}^{2}-4\frac{p_{1}^{2}p_{2}^{2}}{p_{0}^{4}}\cdot\frac{f^{\prime\prime\prime}}{f^{\prime}}\bigg{)}\frac{f^{\prime\prime}}{f^{\prime}}+\partial_{1}^{2}\partial_{2}^{2}M\circ\Delta_{2}.\] Therefore, \[\partial_{1}^{2}\partial_{2}^{2}M\circ\Delta_{2}=2\frac{(p_{1}p_{2})^{\prime\prime}p_{0}^{\prime}p_{0}-p_{0}^{\prime\prime}(p_{1}p_{2})^{\prime}p_{0}-6p_{1}^{\prime}p_{2}^{\prime}(p_{1}p_{2})^{\prime}}{p_{0}^{4}}\] \[\quad+\frac{(p_{1}p_{2})^{\prime\prime}p_{0}^{2}-2p_{0}^{\prime\prime}p_{0}p_{1}p_{2}+p_{1}^{\prime}p_{2}^{\prime}(2p_{0}^{2}-24p_{1}p_{2})+p_{1}^{\prime 2}p_{2}(2p_{0}-6p_{2})+p_{2}^{\prime 2}p_{1}(2p_{0}-6p_{1})}{p_{0}^{4}}
\cdot\frac{f^{\prime\prime}}{f^{\prime}}\] \[\quad+\frac{(p_{1}p_{2})^{\prime}(4p_{0}^{2}-18p_{1}p_{2})-2p_{0} ^{\prime}p_{0}p_{1}p_{2}}{p_{0}^{4}}\bigg{(}\frac{f^{\prime\prime}}{f^{\prime} }\bigg{)}^{2}+\frac{p_{1}p_{2}(2p_{0}^{2}-15p_{1}p_{2})}{p_{0}^{4}}\bigg{(} \frac{f^{\prime\prime}}{f^{\prime}}\bigg{)}^{3}\] \[\quad+\frac{(p_{1}p_{2})^{\prime}(6p_{1}p_{2}-2p_{0}^{2})+2p_{0} ^{\prime}p_{0}p_{1}p_{2}}{p_{0}^{4}}\cdot\frac{f^{\prime\prime\prime}}{f^{ \prime}}+\frac{p_{1}p_{2}(10p_{1}p_{2}-p_{0}^{2})}{p_{0}^{4}}\cdot\frac{f^{ \prime\prime}f^{\prime\prime\prime}}{(f^{\prime})^{2}}-\frac{(p_{1}p_{2})^{2}}{p _{0}^{4}}\cdot\frac{f^{(4)}}{f^{\prime}}.\] From here, we can directly conclude that assertion (iv) holds. ## 6. Necessary conditions for the invariance equation In the subsequent lemmas we establish the first-, second-, third-, and fourth-order necessary conditions for the validity of the invariance equation (2). Finally, we present the main result of our paper in Theorem 10. **Lemma 6**.: _Let \(f,g:I\to\mathbb{R}\) be differentiable functions on \(I\) with nonvanishing first derivatives and \(i\in\{1,2\}\). Let \(p=(p_{1},p_{2}):I\to\mathbb{R}_{+}^{2}\) and \(q=(q_{1},q_{2}):I\to\mathbb{R}_{+}^{2}\) be such that \(p_{i}\) and \(q_{i}\) are continuous on \(I\). If \(\partial_{i}A_{f,p}+\partial_{i}A_{g,q}=1\) holds on \(\operatorname{diag}(I^{2})\), then_ \[\frac{p_{1}}{p_{0}}=\frac{q_{2}}{q_{0}}\qquad\text{and}\qquad\frac{p_{2}}{p_{0} }=\frac{q_{1}}{q_{0}} \tag{14}\] _and hence_ \[p_{1}q_{1}=p_{2}q_{2} \tag{15}\] _holds on \(I\)._ Proof.: By formula (i) of Theorem 5, the equality \((\partial_{i}A_{f,p}+\partial_{i}A_{g,q})\circ\Delta_{2}=1\) can be rewritten as \[\frac{p_{i}}{p_{0}}+\frac{q_{i}}{q_{0}}=1,\] which is equivalent to (14) and also equivalent to (15). **Lemma 7**.: _Let \(f,g:I\to\mathbb{R}\) be twice differentiable functions on \(I\) with nonvanishing first derivatives. 
Let \(p=(p_{1},p_{2}):I\to\mathbb{R}_{+}^{2}\) and \(q=(q_{1},q_{2}):I\to\mathbb{R}_{+}^{2}\) be differentiable functions on \(I\) and assume that (14) holds on \(I\). If \(\partial_{1}\partial_{2}A_{f,p}+\partial_{1}\partial_{2}A_{g,q}=0\) holds on \(\operatorname{diag}(I^{2})\), then_ \[\frac{(p_{1}p_{2})^{\prime}}{p_{1}p_{2}}+\frac{f^{\prime\prime}}{f^{\prime}}+ \frac{(q_{1}q_{2})^{\prime}}{q_{1}q_{2}}+\frac{g^{\prime\prime}}{g^{\prime}}=0. \tag{16}\] _Consequently, there exists a nonzero constant \(\delta\) such that_ \[p_{1}q_{1}=p_{2}q_{2}=\sqrt{\frac{\delta}{f^{\prime}g^{\prime}}} \tag{17}\] _is valid on \(I\)._ Proof.: The equality \((\partial_{1}\partial_{2}A_{f,p}+\partial_{1}\partial_{2}A_{g,q})\circ\Delta_ {2}=0\), in view of formula (ii) of Theorem 5, is equivalent to \[\frac{p_{1}p_{2}}{p_{0}^{2}}\bigg{(}\frac{(p_{1}p_{2})^{\prime}}{p_{1}p_{2}}+ \frac{f^{\prime\prime}}{f^{\prime}}\bigg{)}+\frac{q_{1}q_{2}}{q_{0}^{2}} \bigg{(}\frac{(q_{1}q_{2})^{\prime}}{q_{1}q_{2}}+\frac{g^{\prime\prime}}{g^{ \prime}}\bigg{)}=0.\] Multiplying this equality by \(\frac{p_{0}^{2}}{p_{1}p_{2}}\), which by (14), is equal to \(\frac{q_{0}^{2}}{q_{1}q_{2}}\), we can easily see that (16) holds on \(I\). Integrating both sides of the equality (16), we find that there exists a constant \(\delta\) such that \[p_{1}p_{2}f^{\prime}q_{1}q_{2}g^{\prime}=\delta.\] This equality together with (15) implies that (17) is also valid. **Lemma 8**.: _Let \(f,g:I\to\mathbb{R}\) be three times differentiable functions on \(I\) with nonvanishing first derivatives. Let \(p=(p_{1},p_{2}):I\to\mathbb{R}_{+}^{2}\) and \(q=(q_{1},q_{2}):I\to\mathbb{R}_{+}^{2}\) be twice differentiable functions such that (14) and (16) hold on \(I\). If \(\partial_{1}^{2}\partial_{2}A_{f,p}+\partial_{1}^{2}\partial_{2}A_{g,q}=0\) holds on \(\operatorname{diag}(I^{2})\), then_ \[\frac{p_{1}-p_{2}}{p_{0}}S(f)+\frac{q_{1}-q_{2}}{q_{0}}S(g)=0. 
\tag{18}\] _Consequently,_ \[(p_{1}-p_{2})(S(f)-S(g))=0 \tag{19}\] _is valid on \(I\)._ Proof.: In view of (14) and (16), we have that \[\frac{p_{1}p_{2}}{p_{0}^{2}}=\frac{q_{1}q_{2}}{q_{0}^{2}},\qquad\frac{p_{1}-p_ {2}}{p_{0}}=\frac{q_{2}-q_{1}}{q_{0}},\qquad\frac{(p_{1}p_{2})^{\prime}}{p_{1} p_{2}}+\frac{f^{\prime\prime}}{f^{\prime}}=-\bigg{(}\frac{(q_{1}q_{2})^{\prime}}{q_{1} q_{2}}+\frac{g^{\prime\prime}}{g^{\prime}}\bigg{)}. \tag{20}\] By using formula (iii) of Theorem 5 for \(\partial_{1}^{2}\partial_{2}A_{f,p}\circ\Delta_{2}\) and the analogous formula for \(\partial_{1}^{2}\partial_{2}A_{g,q}\circ\Delta_{2}\), the equality \(\big{(}\partial_{1}^{2}\partial_{2}A_{f,p}+\partial_{1}^{2}\partial_{2}A_{g,q }\big{)}\circ\Delta_{2}=0\) can be rewritten as \[0=-\frac{1}{4}\Big{(}\frac{p_{1}-p_{2}}{p_{0}}\Big{)}^{\prime\prime }-\frac{3p_{0}^{2}}{16p_{1}p_{2}}\cdot\frac{p_{1}-p_{2}}{p_{0}}\bigg{(}\frac{p _{1}-p_{2}}{p_{0}}\bigg{)}^{\prime 2}+\frac{3p_{1}p_{2}}{4p_{0}^{2}}\cdot\frac{p_{1}-p_{2}}{p_ {0}}\bigg{(}\frac{(p_{1}p_{2})^{\prime}}{p_{1}p_{2}}+\frac{f^{\prime\prime}}{ f^{\prime}}\bigg{)}^{2}\] \[\quad-\frac{1}{2}\bigg{(}\frac{p_{1}p_{2}}{p_{0}^{2}}\bigg{(}\frac {(p_{1}p_{2})^{\prime}}{p_{1}p_{2}}+\frac{f^{\prime\prime}}{f^{\prime}}\bigg{)} \bigg{)}^{\prime}-\frac{p_{1}p_{2}}{2p_{0}^{2}}\cdot\frac{p_{1}-p_{2}}{p_{0}}S (f)\] \[\quad-\frac{1}{4}\Big{(}\frac{q_{1}-q_{2}}{q_{0}}\Big{)}^{\prime \prime}-\frac{3q_{0}^{2}}{16q_{1}q_{2}}\cdot\frac{q_{1}-q_{2}}{q_{0}}\bigg{(} \frac{q_{1}-q_{2}}{q_{0}}\bigg{)}^{\prime 2}+\frac{3q_{1}q_{2}}{4q_{0}^{2}}\cdot \frac{q_{1}-q_{2}}{q_{0}}\bigg{(}\frac{(q_{1}q_{2})^{\prime}}{q_{1}q_{2}}+ \frac{g^{\prime\prime}}{g^{\prime}}\bigg{)}^{2}\] \[\quad-\frac{1}{2}\bigg{(}\frac{q_{1}q_{2}}{q_{0}^{2}}\bigg{(}\frac {(q_{1}q_{2})^{\prime}}{q_{1}q_{2}}+\frac{g^{\prime\prime}}{g^{\prime}}\bigg{)} \bigg{)}^{\prime}-\frac{q_{1}q_{2}}{2q_{0}^{2}}\cdot\frac{q_{1}-q_{2}}{p_{0}} S(g).\] Using the identities in (20), this equality is equivalent to 
\[0=-\frac{p_{1}p_{2}}{2p_{0}^{2}}\cdot\frac{p_{1}-p_{2}}{p_{0}}S(f)-\frac{q_{1}q_{2}}{2q_{0}^{2}}\cdot\frac{q_{1}-q_{2}}{q_{0}}S(g).\] Multiplying the last equation by \(\frac{2p_{0}^{2}}{p_{1}p_{2}}=\frac{2q_{0}^{2}}{q_{1}q_{2}}\), we can see that (18) holds. Using the second equality in (20), this is equivalent to (19). **Lemma 9**.: _Let \(f,g:I\to\mathbb{R}\) be four times differentiable functions on \(I\) with nonvanishing first derivatives. Let \(p=(p_{1},p_{2}):I\to\mathbb{R}_{+}^{2}\) and \(q=(q_{1},q_{2}):I\to\mathbb{R}_{+}^{2}\) be twice differentiable functions such that (16) and (18) hold on \(I\). If \(\partial_{1}^{2}\partial_{2}^{2}A_{f,p}+\partial_{1}^{2}\partial_{2}^{2}A_{g,q}=0\) holds on \(\mathrm{diag}(I^{2})\), then_ \[S(f)^{\prime}-2\bigg{(}\frac{(p_{1}p_{2})^{\prime}}{p_{1}p_{2}}+\frac{f^{\prime\prime}}{f^{\prime}}\bigg{)}S(f)+S(g)^{\prime}-2\bigg{(}\frac{(q_{1}q_{2})^{\prime}}{q_{1}q_{2}}+\frac{g^{\prime\prime}}{g^{\prime}}\bigg{)}S(g)=0 \tag{21}\] _is valid on \(I\). 
Consequently,_ \[(p_{1}-p_{2})(S(f)^{\prime}+S(g)^{\prime})=0 \tag{22}\] _is valid on \(I\)._ Proof.: The equality \((\partial_{1}^{2}\partial_{2}^{2}A_{f,p}+\partial_{1}^{2}\partial_{2}^{2}A_{g,q})\circ\Delta_{2}=0\), formula (iv) of Theorem 5 for \(\partial_{1}^{2}\partial_{2}^{2}A_{f,p}\circ\Delta_{2}\) and the analogous expression for \(\partial_{1}^{2}\partial_{2}^{2}A_{g,q}\circ\Delta_{2}\) imply that \[0=\bigg{(}\bigg{(}\frac{p_{1}p_{2}}{p_{0}^{2}}\bigg{)}^{\prime\prime}+\frac{3}{8}\Big{(}6-\frac{p_{0}^{2}}{p_{1}p_{2}}\Big{)}\bigg{(}\Big{(}\frac{p_{1}-p_{2}}{p_{0}}\Big{)}^{\prime}\bigg{)}^{2}\bigg{)}\bigg{(}\frac{(p_{1}p_{2})^{\prime}}{p_{1}p_{2}}+\frac{f^{\prime\prime}}{f^{\prime}}\bigg{)}\] \[\quad-\bigg{(}\frac{p_{1}p_{2}}{p_{0}^{2}}\bigg{)}^{\prime}\bigg{(}\frac{(p_{1}p_{2})^{\prime}}{p_{1}p_{2}}+\frac{f^{\prime\prime}}{f^{\prime}}\bigg{)}^{\prime}-\frac{1}{2}\Big{(}\frac{p_{1}p_{2}}{p_{0}^{2}}\Big{)}^{2}\bigg{(}6-\frac{p_{0}^{2}}{p_{1}p_{2}}\bigg{)}\bigg{(}\frac{(p_{1}p_{2})^{\prime}}{p_{1}p_{2}}+\frac{f^{\prime\prime}}{f^{\prime}}\bigg{)}^{3}\] \[\quad+\bigg{(}\frac{p_{1}p_{2}}{p_{0}^{2}}\bigg{)}^{2}\bigg{(}6-\frac{p_{0}^{2}}{p_{1}p_{2}}\bigg{)}\bigg{(}\frac{(p_{1}p_{2})^{\prime}}{p_{1}p_{2}}+\frac{f^{\prime\prime}}{f^{\prime}}\bigg{)}S(f)-\Big{(}\frac{p_{1}p_{2}}{p_{0}^{2}}\Big{)}^{2}S(f)^{\prime}\] \[\quad+\bigg{(}\bigg{(}\frac{q_{1}q_{2}}{q_{0}^{2}}\bigg{)}^{\prime\prime}+\frac{3}{8}\Big{(}6-\frac{q_{0}^{2}}{q_{1}q_{2}}\Big{)}\bigg{(}\Big{(}\frac{q_{1}-q_{2}}{q_{0}}\Big{)}^{\prime}\bigg{)}^{2}\bigg{)}\bigg{(}\frac{(q_{1}q_{2})^{\prime}}{q_{1}q_{2}}+\frac{g^{\prime\prime}}{g^{\prime}}\bigg{)}\] \[\quad-\bigg{(}\frac{q_{1}q_{2}}{q_{0}^{2}}\bigg{)}^{\prime}\bigg{(}\frac{(q_{1}q_{2})^{\prime}}{q_{1}q_{2}}+\frac{g^{\prime\prime}}{g^{\prime}}\bigg{)}^{\prime}-\frac{1}{2}\Big{(}\frac{q_{1}q_{2}}{q_{0}^{2}}\Big{)}^{2}\bigg{(}6-\frac{q_{0}^{2}}{q_{1}q_{2}}\bigg{)}\bigg{(}\frac{(q_{1}q_{2})^{\prime}}{q_{1}q_{2}}+\frac{g^{\prime\prime}}{g^{\prime}}\bigg{)}^{3}\] \[\quad+\bigg{(}\frac{q_{1}q_{2}}{q_{0}^{2}}\bigg{)}^{2}\bigg{(}6-\frac{q_{0}^{2}}{q_{1}q_{2}}\bigg{)}\bigg{(}\frac{(q_{1}q_{2})^{\prime}}{q_{1}q_{2}}+\frac{g^{\prime\prime}}{g^{\prime}}\bigg{)}S(g)-\Big{(}\frac{q_{1}q_{2}}{q_{0}^{2}}\Big{)}^{2}S(g)^{\prime}.\] Using the
identities in (20), this equality reduces to \[0 =\bigg{(}\frac{p_{1}p_{2}}{p_{0}^{2}}\bigg{)}^{2}\bigg{(}S(f)^{ \prime}+\bigg{(}\frac{p_{0}^{2}}{p_{1}p_{2}}-6\bigg{)}\bigg{(}\frac{(p_{1}p_{2})^ {\prime}}{p_{1}p_{2}}+\frac{f^{\prime\prime}}{f^{\prime}}\bigg{)}S(f)\bigg{)}\] \[\qquad+\bigg{(}\frac{q_{1}q_{2}}{q_{0}^{2}}\bigg{)}^{2}\bigg{(}S(g )^{\prime}+\bigg{(}\frac{q_{0}^{2}}{q_{1}q_{2}}-6\bigg{)}\bigg{(}\frac{(q_{1}q_ {2})^{\prime}}{q_{1}q_{2}}+\frac{g^{\prime\prime}}{g^{\prime}}\bigg{)}S(g) \bigg{)},\] which, by the first equality in (20) yields \[0 =S(f)^{\prime}+\bigg{(}\frac{(p_{1}-p_{2})^{2}}{p_{1}p_{2}}-2 \bigg{)}\bigg{(}\frac{(p_{1}p_{2})^{\prime}}{p_{1}p_{2}}+\frac{f^{\prime \prime}}{f^{\prime}}\bigg{)}S(f) \tag{23}\] \[\qquad+S(g)^{\prime}+\bigg{(}\frac{(q_{1}-q_{2})^{2}}{q_{1}q_{2} }-2\bigg{)}\bigg{(}\frac{(q_{1}q_{2})^{\prime}}{q_{1}q_{2}}+\frac{g^{\prime \prime}}{g^{\prime}}\bigg{)}S(g).\] On the other hand, by (18), we have \[\frac{p_{1}-p_{2}}{p_{0}}S(f)=\frac{q_{2}-q_{1}}{q_{0}}S(g),\] whence we can obtain \[\frac{(p_{1}-p_{2})^{2}}{p_{1}p_{2}}\bigg{(}\frac{(p_{1}p_{2})^{ \prime}}{p_{1}p_{2}}+\frac{f^{\prime\prime}}{f^{\prime}}\bigg{)}S(f)=\frac{p_ {0}^{2}}{p_{1}p_{2}}\cdot\frac{p_{1}-p_{2}}{p_{0}}\bigg{(}\frac{(p_{1}p_{2})^ {\prime}}{p_{1}p_{2}}+\frac{f^{\prime\prime}}{f^{\prime}}\bigg{)}\bigg{(}\frac {p_{1}-p_{2}}{p_{0}}S(f)\bigg{)}\] \[\qquad=-\frac{q_{0}^{2}}{q_{1}q_{2}}\cdot\frac{q_{2}-q_{1}}{q_{0} }\bigg{(}\frac{(q_{1}q_{2})^{\prime}}{q_{1}q_{2}}+\frac{g^{\prime\prime}}{g^{ \prime}}\bigg{)}\bigg{(}\frac{q_{2}-q_{1}}{q_{0}}S(g)\bigg{)}=-\frac{(q_{1}-q_ {2})^{2}}{q_{1}q_{2}}\bigg{(}\frac{(q_{1}q_{2})^{\prime}}{q_{1}q_{2}}+\frac{g ^{\prime\prime}}{g^{\prime}}\bigg{)}S(g).\] Using this equality, equation (23) reduces to (21). Finally, multiplying (21) by \((p_{1}-p_{2})\) side by side and use (16) and then (19), the last assertion, i.e., equality (22) follows. 
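The diagonal derivative formulas of Theorem 5 that drive Lemmas 6-9 can be sanity-checked numerically. The sketch below assumes that \(A_{f,p}\) denotes the two-variable generalized Bajraktarević mean \(A_{f,p}(x,y)=f^{-1}\big(\frac{p_{1}(x)f(x)+p_{2}(y)f(y)}{p_{1}(x)+p_{2}(y)}\big)\) with \(p_{0}=p_{1}+p_{2}\), as recalled at the beginning of the paper; the generator \(f=\exp\) and the weight functions below are arbitrary illustrative choices.

```python
import math

# Hypothetical weight functions; any positive differentiable choices work.
def p1(t): return 1.0 + t * t
def p2(t): return 2.0 + math.exp(-t)
def dp1(t): return 2.0 * t
def dp2(t): return -math.exp(-t)

def A(x, y):
    # Generalized Bajraktarevic mean with f = exp, so f^{-1} = log.
    num = p1(x) * math.exp(x) + p2(y) * math.exp(y)
    return math.log(num / (p1(x) + p2(y)))

x0 = 0.7
p0 = p1(x0) + p2(x0)

# Formula (i): on the diagonal, the i-th partial derivative equals p_i / p_0.
h = 1e-6
d1 = (A(x0 + h, x0) - A(x0 - h, x0)) / (2 * h)
d2 = (A(x0, x0 + h) - A(x0, x0 - h)) / (2 * h)
assert abs(d1 - p1(x0) / p0) < 1e-6
assert abs(d2 - p2(x0) / p0) < 1e-6

# Formula (ii): the mixed second partial on the diagonal equals
# -(p1 p2 / p0^2) * ((p1 p2)'/(p1 p2) + f''/f'); for f = exp, f''/f' = 1.
h = 1e-4
mixed = (A(x0 + h, x0 + h) - A(x0 + h, x0 - h)
         - A(x0 - h, x0 + h) + A(x0 - h, x0 - h)) / (4 * h * h)
deriv_p1p2 = dp1(x0) * p2(x0) + p1(x0) * dp2(x0)
expected = -(p1(x0) * p2(x0) / p0**2) * (deriv_p1p2 / (p1(x0) * p2(x0)) + 1.0)
assert abs(mixed - expected) < 1e-4
```

The finite-difference values agree with the closed-form diagonal derivatives to well within the stated tolerances, which is a useful cross-check of the sign conventions used in the lemmas above.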
**Theorem 10**.: _Let \(f,g:I\to\mathbb{R}\) be four times continuously differentiable functions on \(I\) with nonvanishing first derivatives. Let \(p=(p_{1},p_{2}):I\to\mathbb{R}_{+}^{2}\) be a twice continuously differentiable function and \(q=(q_{1},q_{2}):I\to\mathbb{R}_{+}^{2}\). Assume that the set_ \[P:=\{x\in I\mid p_{1}(x)=p_{2}(x)\}\] _is nowhere dense in \(I\). Then the following assertions are equivalent to each other._ 1. _The invariance equation (_2_) holds for every_ \((x,y)\in I^{2}\)_._ 2. _There exists an open set_ \(U\subseteq I^{2}\) _containing the diagonal_ \(\operatorname{diag}(I^{2})\) _such that the invariance equation (_2_) holds for all_ \((x,y)\in U\)_._ 3. _The function_ \(q=(q_{1},q_{2}):I\to\mathbb{R}_{+}^{2}\) _is twice continuously differentiable and the system of equalities_ \[\partial_{1}A_{f,p}+\partial_{1}A_{g,q} =1,\] (24) \[\partial_{1}\partial_{2}A_{f,p}+\partial_{1}\partial_{2}A_{g,q} =0,\] \[\partial_{1}^{2}\partial_{2}A_{f,p}+\partial_{1}^{2}\partial_{2}A_{g,q} =0,\] \[\partial_{1}^{2}\partial_{2}^{2}A_{f,p}+\partial_{1}^{2}\partial_{2 }^{2}A_{g,q} =0\] _holds on the diagonal_ \(\operatorname{diag}(I^{2})\)_._ 4. _There exists a real constant_ \(\gamma\in\mathbb{R}\)_, there exist solutions_ \(u,v,w,z:I\to\mathbb{R}\) _of the second-order linear differential equation_ \(F^{\prime\prime}=\gamma F\) _such that_ \(v>0\) _and_ \(z>0\) _holds on_ \(I\) _and_ \(\{u,v\}\) _and_ \(\{w,z\}\) _are linearly independent such that_ (_5_) _holds._ Proof.: The implication (i)\(\Rightarrow\)(ii) is trivial. Now assume that (ii) is valid. Rearranging the invariance equation (2), we get \[A_{g,q}(x,y)=x+y-A_{f,p}(x,y)=:M(x,y)\qquad((x,y)\in U).\] By the regularity assumptions on \(f\) and \(p\), it follows that the mean \(M\) defined by the above equality is twice continuously partially differentiable on \(U\). Therefore, Lemma 1 implies that \(q_{1}\) and \(q_{2}\) are also twice continuously differentiable on \(I\). 
In view of Theorem 5, we now obtain that the partial derivatives \(\partial_{1}^{i}\partial_{2}^{j}A_{f,p}\) and \(\partial_{1}^{i}\partial_{2}^{j}A_{g,q}\) exist on \(\operatorname{diag}(I^{2})\) for all \(i,j\in\{1,2\}\). Differentiating both sides of the invariance equation (2) partially, \(i\) and \(j\) times with respect to the variables \(x\) and \(y\), we obtain that the system of equalities (24) holds on \(\operatorname{diag}(I^{2})\). This proves that (iii) follows from (ii). Assume that (iii) is valid. Using Lemma 8 and the third equality in (24), it follows that (19) is valid on \(I\). Therefore, for all \(x\in I\setminus P\), we have that \[S(f)(x)=S(g)(x). \tag{25}\] Observe that \(S(f)\) and \(S(g)\) are continuous functions, hence this equality is also valid on the closure of \(I\setminus P\), which equals \(I\) since the set \(P\) is nowhere dense. In view of Lemma 8, the fourth equality in (24) implies that (22) is valid on \(I\). Therefore, by the nowhere density of the set \(P\) again, \[S(f)^{\prime}(x)+S(g)^{\prime}(x)=0.\] Differentiating (25) side by side, the last equation implies that \[S(f)^{\prime}(x)=S(g)^{\prime}(x)=0.\] Therefore \(S(f)\) and \(S(g)\) are constant functions which are equal to each other. Let us denote this constant value by \(-2\gamma\). Then, applying Lemma 2, we can conclude that there exist solutions \(u,v,w,z:I\to\mathbb{R}\) of the second-order linear differential equation \(F^{\prime\prime}=\gamma F\) such that \(v>0\) and \(z>0\) holds on \(I\) and the first two equalities in (5) are satisfied, i.e., \(f=u/v\) and \(g=w/z\). The strict monotonicity of \(f\) and \(g\) imply that \(\{u,v\}\) and \(\{w,z\}\) are also linearly independent. 
Observe that \[(u^{\prime}v-uv^{\prime})^{\prime}=u^{\prime\prime}v+u^{\prime}v^{\prime}-u^ {\prime}v^{\prime}-uv^{\prime\prime}=\gamma uv-\gamma uv=0.\] This implies that \(u^{\prime}v-uv^{\prime}=\alpha\) and, analogously, \(w^{\prime}z-wz^{\prime}=\beta\) for some nonzero real constants \(\alpha\) and \(\beta\). Thus \[f^{\prime}=\frac{u^{\prime}v-uv^{\prime}}{v^{2}}=\frac{\alpha}{v^{2}}\qquad \text{and}\qquad g^{\prime}=\frac{w^{\prime}z-wz^{\prime}}{z^{2}}=\frac{ \beta}{z^{2}}.\] On the other hand, by Lemma 6 and Lemma 7, the first two equalities in (24) imply that there exists a nonzero constant \(\delta\) such that (17) is valid on \(I\). Consequently, \[p_{1}q_{1}=p_{2}q_{2}=\sqrt{\frac{\delta}{f^{\prime}g^{\prime}}}=\sqrt{\frac{ \delta v^{2}z^{2}}{\alpha\beta}}=\sqrt{\frac{\delta}{\alpha\beta}}vz=\eta vz,\] where \(\eta:=\sqrt{\frac{\delta}{\alpha\beta}}\) is a nonzero constant. Define \(\bar{u}:=\eta u\) and \(\bar{v}:=\eta v\). Then we can see that \(\bar{u}\) and \(\bar{v}\) are also solutions of the second-order linear differential equation \(F^{\prime\prime}=\gamma F\) and (5) is satisfied if we replace \(u\) and \(v\) by \(\bar{u}\) and \(\bar{v}\), respectively. Hence we have proved that assertion (iii) implies statement (iv). The implication (iv)\(\Rightarrow\)(i) is a consequence of Theorem 4.
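The key step of the proof above, the passage from the constant Schwarzian derivative \(-2\gamma\) to quotients of solutions of \(F^{\prime\prime}=\gamma F\), can be checked symbolically in the converse direction. The sketch below assumes \(S\) is the classical Schwarzian derivative \(S(f)=f^{\prime\prime\prime}/f^{\prime}-\tfrac{3}{2}(f^{\prime\prime}/f^{\prime})^{2}\) and uses the illustrative solutions \(u=\sinh(\sqrt{\gamma}\,x)\), \(v=\cosh(\sqrt{\gamma}\,x)\).

```python
import sympy as sp

x = sp.symbols('x')
gamma = sp.symbols('gamma', positive=True)
k = sp.sqrt(gamma)

# u = sinh(kx) and v = cosh(kx) solve F'' = gamma*F, and their quotient
# f = u/v = tanh(kx) is strictly increasing on the real line.
f = sp.tanh(k * x)

fp = sp.diff(f, x)
S_f = sp.diff(f, x, 3) / fp - sp.Rational(3, 2) * (sp.diff(f, x, 2) / fp) ** 2

# S(f) is the constant -2*gamma, matching the value identified in the proof.
assert sp.simplify(S_f + 2 * gamma) == 0
```

The same computation with trigonometric solutions (\(\gamma<0\)) or affine solutions (\(\gamma=0\)) confirms the other cases of the differential equation.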
2306.13498
Constant-sized self-tests for maximally entangled states and single projective measurements
Self-testing is a powerful certification of quantum systems relying on measured, classical statistics. This paper considers self-testing in bipartite Bell scenarios with small number of inputs and outputs, but with quantum states and measurements of arbitrarily large dimension. The contributions are twofold. Firstly, it is shown that every maximally entangled state can be self-tested with four binary measurements per party. This result extends the earlier work of Man\v{c}inska-Prakash-Schafhauser (2021), which applies to maximally entangled states of odd dimensions only. Secondly, it is shown that every single binary projective measurement can be self-tested with five binary measurements per party. A similar statement holds for self-testing of projective measurements with more than two outputs. These results are enabled by the representation theory of quadruples of projections that add to a scalar multiple of the identity. Structure of irreducible representations, analysis of their spectral features and post-hoc self-testing are the primary methods for constructing the new self-tests with small number of inputs and outputs.
Jurij Volčič
2023-06-23T13:43:56Z
http://arxiv.org/abs/2306.13498v2
# Constant-sized self-tests for maximally entangled states and single projective measurements ###### Abstract. Self-testing is a powerful certification of quantum systems relying on measured, classical statistics. This paper considers self-testing in bipartite Bell scenarios with small number of inputs and outputs, but with quantum states and measurements of arbitrarily large dimension. The contributions are twofold. Firstly, it is shown that every maximally entangled state can be self-tested with four binary measurements per party. This result extends the earlier work of Mančinska-Prakash-Schafhauser (2021), which applies to maximally entangled states of odd dimensions only. Secondly, it is shown that every single binary projective measurement can be self-tested with five binary measurements per party. A similar statement holds for self-testing of projective measurements with more than two outputs. These results are enabled by the representation theory of quadruples of projections that add to a scalar multiple of the identity. Structure of irreducible representations, analysis of their spectral features and post-hoc self-testing are the primary methods for constructing the new self-tests with small number of inputs and outputs. Key words and phrases: Self-test, maximally entangled state, device-independent certification, non-locality, projective quantum measurement 2020 Mathematics Subject Classification: 81P45, 81P40, 81R15, 46L60 Supported by the NSF grant DMS-1954709. ## 1. Introduction Thanks to non-locality of quantum theory, unknown non-communicating quantum devices measuring an unknown shared entangled state can sometimes be identified based on classical statistics of their outputs. This phenomenon is called _self-testing_, and is the strongest form of device-independent certification of quantum systems. Self-testing was introduced in [10], and has been a heavily studied subject ever since; see [14] for a comprehensive review of major advances on this topic.
The immense interest attracted by self-testing originates from its applications in device-independent quantum cryptography [1, 2], delegated quantum computation [15], randomness generation [13, 1], entanglement detection [11], and computational complexity [12, 13]. For experimental developments, see [1, 14]. This paper focuses on self-testing in bipartite Bell scenarios [1], where two parties randomly perform measurements on a shared quantum state without communicating. From these measurements, joint probability distribution of inputs and outputs of both parties can be constructed as classical data describing the system. Suppose that each party can perform \(N\) measurements, each of them with \(K\) outcomes. Borrowing terminology from quantum games, we model this setup with bipartite quantum _strategies_. Namely, an \(N\)-input \(K\)-output strategy \(\mathcal{S}\) of two parties (subsystems) A and B consists of a bipartite quantum state \(\left|\psi\right\rangle\) in the tensor product of Hilbert spaces \(\mathcal{H}_{A}\) and \(\mathcal{H}_{B}\), a measurement \((\mathcal{M}_{i,a})_{a=1}^{K}\) of positive operators on \(\mathcal{H}_{A}\) for each \(i=1,\ldots,N\), and a measurement \((\mathcal{N}_{j,b})_{b=1}^{K}\) of positive operators on \(\mathcal{H}_{B}\) for each \(j=1,\ldots,N\). The _correlation_ of \(\mathcal{S}\) is the array \(p\) of probabilities given by the Born rule \(p(a,b|i,j)=\left\langle\psi\right|\mathcal{M}_{i,a}\otimes\mathcal{N}_{j,b} \left|\psi\right\rangle\), and is the classically observable data induced by \(\mathcal{S}\). There are two trivial modifications of the strategy \(\mathcal{S}\) that do not affect its correlation: one is a unitary change of local bases, and the other is extending the state with an ancillary state on which the measurements act trivially. If any other strategy with correlation \(p\) is obtained from \(\mathcal{S}\) using these trivial modifications, then we say that \(\mathcal{S}\) is _self-tested_ by \(p\). 
That is, the state and measurements in a self-tested strategy are essentially uniquely determined by the correlation. The most renowned example of a self-tested strategy (with 2 inputs and 2 outputs) consists of maximally entangled qubits and two pairs of Pauli measurements, which give the maximal quantum violation of the famous CHSH inequality [11, 12, 13]. The following is a fundamental self-testing problem: \((\star)\) _Which states and which measurements can be self-tested, i.e., appear in a strategy that is self-tested by its correlation? Furthermore, how complex is such a strategy, e.g., how many inputs and outputs per party are required?_ The breakthrough on \((\star)\) for quantum states was achieved in [10], where the authors showed that every entangled bipartite state can be self-tested. The number of inputs in the provided self-tests _grows with the local dimension \(n\)_ of the quantum state under investigation, which makes these self-tests rather complicated in large dimensions. The existence result of [10] was later not only extended to multipartite states in quantum networks [21] and refined in one-sided device-independent scenarios [21], but also improved in terms of inputs and outputs needed to self-test certain states. In [20], the authors show that an \(n\)-dimensional maximally entangled bipartite state can be self-tested using 2 inputs and \(n\) outputs. The paper [11] was the first to provide _constant-sized_ self-tests for some infinite families of maximally entangled states of _even_ dimension (but not constant-sized self-tests for all maximally entangled states of even dimension). This result was complemented by [14], where the authors establish that maximally entangled state of _any odd_ dimension can be self-tested using 4 inputs and 2 outputs. In comparison with states, the progress on \((\star)\) for measurements has been more constrained. 
All two-dimensional projective measurements have been self-tested [13], and likewise tensor products of Pauli measurements [12, 13]. Recently, it has been established that every projective measurement can be self-tested [13]. Actually, the self-tests derived in [13] allow for real ensembles of projective measurements to be self-tested simultaneously. However, self-testing an \(n\)-dimensional projective measurement in this manner requires roughly \(n^{2}\) inputs. ### Contributions This paper provides self-tests for _all maximally entangled states_ and _all single projective measurements_, respectively, that are _uniform_ in number of both inputs and outputs. The first main result concerns maximally entangled states. **Theorem A** (Corollary 5.4).: Maximally entangled bipartite state of any local dimension \(d\) can be self-tested using \(4\) inputs and \(2\) outputs. The strategies of Theorem A are given in Definition 5.1. Their construction and self-testing feature arises from the one-parametric family of universal C*-algebras \(\mathcal{A}_{2-\frac{1}{n}}\) generated by four projections adding up to \(2-\frac{1}{n}\) times the identity. Remarkable results about \(*\)-representations of these algebras were established by Kruglyak-Rabanovich-Samoilenko using Coxeter functors between representation categories [16]. Their theory is essential in the proof of Theorem A. Representations of C*-algebras of this type have already been leveraged in [17]. However, their work uses a different family of parameters (\(2-\frac{2}{n}\) for odd \(n\), instead of \(2-\frac{1}{n}\) for natural \(n\)) that leads to simple C*-algebras, and maximally entangled states of odd dimensions only. On the other hand, exploiting algebras \(\mathcal{A}_{2-\frac{1}{n}}\) for self-testing purposes requires a more sophisticated analysis of their representations, but applies to all maximally entangled states. 
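The defining relation of the algebras \(\mathcal{A}_{2-\frac{1}{n}}\), four projections summing to \((2-\frac{1}{n})I\), can be witnessed in a small toy case. The quadruple below (an illustration only, not one of the representations relevant to the self-tests) takes \(n=2\) in dimension \(2\): three rank-one projections onto coplanar directions at mutual angle \(120^{\circ}\), padded with the zero projection, add up to \(\frac{3}{2}I_{2}\).

```python
import numpy as np

def proj(theta):
    """Rank-one projection onto the unit vector at angle theta/2."""
    v = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return np.outer(v, v)

# Three rank-one projections at mutual angle 120 degrees, plus the zero
# projection, form a quadruple satisfying the defining relation for n = 2.
Ps = [proj(t) for t in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]
Ps.append(np.zeros((2, 2)))

assert all(np.allclose(P @ P, P) for P in Ps)          # each is a projection
assert np.allclose(sum(Ps), (2 - 1 / 2) * np.eye(2))   # sum = (2 - 1/n) I
```

Checks of this kind only verify membership in the representation variety; the structure theory of [16] is what classifies the irreducible representations actually used in the proofs.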
The second main result of this paper provides constant-sized self-tests for single projective measurements with \(2\) outputs, i.e., _binary_ projective measurements. Note that a binary projective measurement \((P,I-P)\) is, up to a unitary change of local basis, determined by the dimension \(n\) and the rank \(r\) of the projection \(P\). **Theorem B** (Corollary 5.11).: A single binary projective measurement of any dimension \(n\) and rank \(r\) appears in a \(5\)-input \(2\)-output strategy that is self-tested by its correlation. See Definition 5.9 for the explicit strategies used in Theorem B. A generalization of Theorem B for non-binary projective measurements is given in Corollary 5.13. It is important to stress that the significance of Theorem B lies in the sufficiency of a constant number of inputs and outputs to self-test a single binary projective measurement. On the other hand, Theorem B does not allow for any other relations between measurements in the strategy to be prescribed in advance; from this perspective, it is weaker than [13]. The strategies of Theorem B are obtained from the strategies of Theorem A by the principle of _post-hoc self-testing_ [21]. A broad sufficiency criterion for applicability of post-hoc self-testing was presented in [13]. To apply this criterion in the proof of Theorem B, certain spectral aspects of \(*\)-representations of \(\mathcal{A}_{2-\frac{1}{n}}\) need to be resolved. Namely, we determine the spectrum of the sum of pairs of projections arising from \(*\)-representations of \(\mathcal{A}_{2-\frac{1}{n}}\). While the derivation of the newly presented self-tests might seem rather abstract, the resulting correlations admit closed-form formulae, and the corresponding strategies can be recursively constructed using basic tools from linear algebra (see Appendix A for examples). The last section addresses obstructions to constant-sized self-testing of arbitrary entangled states and pairs of projective measurements. 
### Acknowledgments The author thanks Ken Dykema for inspiring conversations about self-testing, and Ricardo Gutierrez-Jauregui for sharing his expertise on experimental aspects of quantum theory. ## 2. Preliminaries This section introduces notation and terminology on quantum strategies and self-testing, following the conventions presented in [10]. For a comprehensive overview, see [11]. Let \(K\in\mathbb{N}\). A \(K\)-tuple of operators \((P_{a})_{a=1}^{K}\) acting on a Hilbert space \(\mathcal{H}\) is a _positive operator-valued measure (\(K\)-POVM)_ if \(P_{a}\succeq 0\) and \(\sum_{a=1}^{K}P_{a}=I\). If all \(P_{a}\) are projections, then \((P_{a})_{a=1}^{K}\) is a _projection-valued measure (\(K\)-PVM)_, or a _projective measurement_. Note that, up to a unitary basis change, a PVM \((P_{a})_{a=1}^{K}\) is uniquely determined by the ranks \(\operatorname{rk}P_{a}\) for \(a=1,\ldots,K\). That is, every \(K\)-PVM with ranks of projections \(r_{1},\ldots,r_{K}\) is unitarily equivalent to \[\left(I_{r_{1}}\oplus 0_{r_{2}+\cdots+r_{K}},0_{r_{1}}\oplus I_{r_{2}}\oplus 0 _{r_{3}+\cdots+r_{K}},\ldots,0_{r_{1}+\cdots+r_{K-1}}\oplus I_{r_{K}}\right).\] A 2-POVM is also called a _binary_ measurement. Observe that a binary PVM is simply a pair \((P,I-P)\) where \(P\) is a projection, and is determined by the dimension and the rank of \(P\) up to a unitary basis change. A _(pure bipartite) state_\(\left|\psi\right\rangle\) is a unit vector in \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\), where \(\mathcal{H}_{A},\mathcal{H}_{B}\) are Hilbert spaces. We say that \(\left|\psi\right\rangle\) has _full Schmidt rank_ if \(P\otimes I\left|\psi\right\rangle=I\otimes Q\left|\psi\right\rangle=0\) for some projections \(P,Q\) implies \(P=0\) and \(Q=0\). In this case, the Hilbert spaces \(\mathcal{H}_{A}\) and \(\mathcal{H}_{B}\) are isomorphic. 
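The block-diagonal normal form displayed above is easy to realize concretely. The following is a minimal numerical sketch (the helper name `canonical_pvm` is our own choice, not from the text) that assembles the canonical \(K\)-PVM with prescribed ranks and checks the defining properties:

```python
import numpy as np

def canonical_pvm(ranks):
    """Build the canonical K-PVM with projections of the given ranks:
    the a-th element is the 0/1 diagonal projection onto the a-th block."""
    n = sum(ranks)
    pvm, start = [], 0
    for r in ranks:
        p = np.zeros((n, n))
        p[start:start + r, start:start + r] = np.eye(r)
        pvm.append(p)
        start += r
    return pvm

pvm = canonical_pvm([2, 1, 3])
assert np.allclose(sum(pvm), np.eye(6))          # elements sum to the identity
assert all(np.allclose(p @ p, p) for p in pvm)   # each element is a projection
```

Up to a unitary change of basis, every PVM with these ranks coincides with this block-diagonal one.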
For \(n\in\mathbb{N}\), the (canonical) _maximally entangled state_ of local dimension \(n\) is \(\left|\phi_{n}\right\rangle=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\left|i\right\rangle\left|i\right\rangle\in\mathbb{C}^{n}\otimes\mathbb{C}^{n}\). For \(A,B\in\mathrm{M}_{n}(\mathbb{C})\), \[\left\langle\phi_{n}\right|A\otimes B\left|\phi_{n}\right\rangle=\tau(AB^{\mathrm{t}})=\frac{1}{n}\operatorname{tr}(AB^{\mathrm{t}}),\] where \(\tau\) denotes the normalized trace on \(\mathrm{M}_{n}(\mathbb{C})\). Let \(K_{A},K_{B},N_{A},N_{B}\in\mathbb{N}\). An _\((N_{A},N_{B})\)-input \((K_{A},K_{B})\)-output bipartite quantum strategy \(\mathcal{S}\)_ is a triple \[\mathcal{S}=(\left|\psi\right\rangle;\mathcal{M}_{1},\ldots,\mathcal{M}_{N_{A}};\mathcal{N}_{1},\ldots,\mathcal{N}_{N_{B}})\] where \(\mathcal{M}_{i}\) are \(K_{A}\)-POVMs on a finite-dimensional Hilbert space \(\mathcal{H}_{A}\), \(\mathcal{N}_{j}\) are \(K_{B}\)-POVMs on a finite-dimensional Hilbert space \(\mathcal{H}_{B}\), and \(\left|\psi\right\rangle\in\mathcal{H}_{A}\otimes\mathcal{H}_{B}\) is a state. When \(K=K_{A}=K_{B}\) and \(N=N_{A}=N_{B}\), we simply say that \(\mathcal{S}\) is an \(N\)_-input \(K\)-output bipartite strategy_. The _correlation_ of \(\mathcal{S}\) is the \(N_{A}\times N_{B}\times K_{A}\times K_{B}\) array \(p\) with entries \[p(a,b|i,j)=\left\langle\psi\right|\mathcal{M}_{i,a}\otimes\mathcal{N}_{j,b}\left|\psi\right\rangle,\qquad 1\leq a\leq K_{A},\ 1\leq b\leq K_{B},\ 1\leq i\leq N_{A},\ 1\leq j\leq N_{B}.\] Since \(\mathcal{S}\) in particular models non-communication between parties, the correlation \(p\) is _non-signalling_, meaning that \(p(a|i):=\sum_{b=1}^{K_{B}}p(a,b|i,j)\) and \(p(b|j):=\sum_{a=1}^{K_{A}}p(a,b|i,j)\) are well-defined (the first sum is independent of \(j\) and the second sum is independent of \(i\)). A correlation \(p\) is called _synchronous_ if \(K_{A}=K_{B}\), \(N_{A}=N_{B}\) and \(p(a,b|i,i)=0\) for all \(i\) and \(a\neq b\). 
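The identity \(\left\langle\phi_{n}\right|A\otimes B\left|\phi_{n}\right\rangle=\tau(AB^{\mathrm{t}})\) used throughout is easy to confirm numerically; a quick sketch (variable names are ours):

```python
import numpy as np

n = 3
# |phi_n> = (1/sqrt(n)) sum_i |i>|i>, flattened in the Kronecker convention
phi = np.eye(n).reshape(-1) / np.sqrt(n)

rng = np.random.default_rng(0)
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))

lhs = phi @ np.kron(A, B) @ phi   # <phi_n| A (x) B |phi_n>
rhs = np.trace(A @ B.T) / n       # tau(A B^t)
assert abs(lhs - rhs) < 1e-12
```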
Let \(\mathcal{S}\) and \(\widetilde{\mathcal{S}}\) be \((N_{A},N_{B})\)-input \((K_{A},K_{B})\)-output strategies. Then \(\widetilde{\mathcal{S}}\) is a _local dilation_ of \(\mathcal{S}\) if there exist finite-dimensional Hilbert spaces \(\mathcal{K}_{A},\mathcal{K}_{B}\), a state \(\left|\mathrm{aux}\right\rangle\in\mathcal{K}_{A}\otimes\mathcal{K}_{B}\) and isometries \(U_{A}:\mathcal{H}_{A}\to\widetilde{\mathcal{H}}_{A}\otimes\mathcal{K}_{A}\) and \(U_{B}:\mathcal{H}_{B}\to\widetilde{\mathcal{H}}_{B}\otimes\mathcal{K}_{B}\) such that \[(U_{A}\otimes U_{B})(\mathcal{M}_{i,a}\otimes\mathcal{N}_{j,b})\left|\psi\right\rangle=(\widetilde{\mathcal{M}}_{i,a}\otimes\widetilde{\mathcal{N}}_{j,b})\left|\widetilde{\psi}\right\rangle\otimes\left|\mathrm{aux}\right\rangle \tag{1}\] for all \(a,b,i,j\). There is a slight abuse of notation in (1); namely, we identify \[(\widetilde{\mathcal{H}}_{A}\otimes\mathcal{K}_{A})\otimes(\widetilde{\mathcal{H}}_{B}\otimes\mathcal{K}_{B})\equiv(\widetilde{\mathcal{H}}_{A}\otimes\widetilde{\mathcal{H}}_{B})\otimes(\mathcal{K}_{A}\otimes\mathcal{K}_{B}).\] Note that if \(\widetilde{\mathcal{S}}\) is a local dilation of \(\mathcal{S}\), then the correlations of \(\mathcal{S}\) and \(\widetilde{\mathcal{S}}\) coincide. Finally, we say that a strategy \(\widetilde{\mathcal{S}}\) is _self-tested_ by its correlation if it is a local dilation of any other strategy with the same correlation. ## 3. Representation theory of certain C*-algebras In [10], the authors derive several profound results on tuples of projections that add to a scalar multiple of the identity operator. This is achieved by studying certain functors between categories of \(*\)-representations, which are also the cornerstone of this paper. For our purposes, we focus on projections \(P_{1},P_{2},P_{3},P_{4}\) that add to \((2-\frac{1}{n})I\), where \(n\) is a natural number. First we review the construction of the aforementioned functors from [10, Section 1.2]. 
Then we refine a part of [10, Proposition 3] to obtain further properties about the projections \(P_{i}\) as above (Proposition 3.1). For \(\alpha\in\mathbb{R}\) define the C*-algebra \[\mathcal{A}_{\alpha}=\mathrm{C}^{*}\left\langle x_{1},x_{2},x_{3},x_{4}\colon x _{i}=x_{i}^{*}=x_{i}^{2},\ x_{1}+x_{2}+x_{3}+x_{4}=\alpha\right\rangle.\] Let \(\mathrm{Rep}_{\alpha}\) denote the category of \(*\)-representations of \(\mathcal{A}_{\alpha}\). That is, objects of \(\mathrm{Rep}_{\alpha}\) are \(*\)-representations of \(\mathcal{A}_{\alpha}\) on Hilbert spaces, and morphisms of \(\mathrm{Rep}_{\alpha}\) are equivariant maps, i.e., bounded linear operators between Hilbert spaces that intertwine the actions of \(*\)-representations. To a \(*\)-representation \(\pi\) of \(\mathcal{A}_{\alpha}\) on a Hilbert space \(\mathcal{H}\) we assign the \(6\)-tuple \([\pi]=(\alpha;n;d_{1},\ldots,d_{4})\) where \(n=\dim\mathcal{H}\) and \(d_{i}=\operatorname{rk}\pi(x_{i})\). ### Functors between representation categories Next, we define two functors \(T=T_{\alpha}:\operatorname{Rep}_{\alpha}\to\operatorname{Rep}_{4-\alpha}\) (linear reflection) and \(S=S_{\alpha}:\operatorname{Rep}_{\alpha}\to\operatorname{Rep}_{\frac{\alpha}{ \alpha-1}}\) (hyperbolic reflection). The subscripts are omitted when clear from the context. \((T)\): Given a \(*\)-representation \(\pi\) of \(\mathcal{A}_{\alpha}\) let \(T(\pi)\) be the \(*\)-representation of \(\mathcal{A}_{4-\alpha}\) determined by \(T(\pi)(x_{i}):=I-\pi(x_{i})\). Note that \(T\) commutes with equivariant maps between \(*\)-representations, so it extends to a functor \(T:\operatorname{Rep}_{\alpha}\to\operatorname{Rep}_{4-\alpha}\). If \([\pi]=(\alpha;n;d_{i})\) then \([T(\pi)]=(4-\alpha;n;n-d_{i})\). \((S)\): Suppose \(\alpha\notin\{0,1\}\), and let \(\pi\) be a \(*\)-representation of \(\mathcal{A}_{\alpha}\) on \(\mathcal{H}\). Denote \(\widehat{\mathcal{H}}=\bigoplus_{i}\operatorname{ran}\pi(x_{i})\). 
Let \(w_{i}:\operatorname{ran}\pi(x_{i})\to\widehat{\mathcal{H}}\) be the canonical injections, and let \(u_{i}:\operatorname{ran}\pi(x_{i})\to\mathcal{H}\) be inclusions. Then \[u=\frac{1}{\sqrt{\alpha}}\begin{pmatrix}u_{1}^{*}\\ \vdots\\ u_{4}^{*}\end{pmatrix}:\mathcal{H}\to\widehat{\mathcal{H}}\] is an isometry by definition of the algebra \(\mathcal{A}_{\alpha}\). Let \(\mathcal{K}=\operatorname{ran}(I-uu^{*})\), with inclusion \(v:\mathcal{K}\to\widehat{\mathcal{H}}\). Note that \(\dim\mathcal{K}=\dim\widehat{\mathcal{H}}-\dim\mathcal{H}\). Define \[S(\pi)(x_{i}):=\frac{\alpha}{\alpha-1}v^{*}w_{i}w_{i}^{*}v.\] Then \[(S(\pi)(x_{i}))^{2} =\frac{\alpha^{2}}{(\alpha-1)^{2}}v^{*}w_{i}w_{i}^{*}vv^{*}w_{i}w _{i}^{*}v=\frac{\alpha^{2}}{(\alpha-1)^{2}}v^{*}w_{i}w_{i}^{*}(I-uu^{*})w_{i}w _{i}^{*}v\] \[=\frac{\alpha^{2}}{(\alpha-1)^{2}}v^{*}w_{i}\left(I-\frac{1}{ \alpha}u_{i}^{*}u_{i}\right)w_{i}^{*}v=\frac{\alpha^{2}}{(\alpha-1)^{2}}\left( 1-\frac{1}{\alpha}\right)v^{*}w_{i}w_{i}^{*}v\] \[=S(\pi)(x_{i})\] and \[\sum_{i=1}^{4}S(\pi)(x_{i})=\sum_{i=1}^{4}\frac{\alpha}{\alpha-1}v^{*}w_{i}w _{i}^{*}v=\frac{\alpha}{\alpha-1}v^{*}\left(\sum_{i=1}^{4}w_{i}w_{i}^{*}\right) v=\frac{\alpha}{\alpha-1}v^{*}v=\frac{\alpha}{\alpha-1}I.\] Therefore \(S(\pi)(x_{1}),\dots,S(\pi)(x_{4})\) are projections that give rise to a \(*\)-representation \(S(\pi)\) of \(\mathcal{A}_{\frac{\alpha}{\alpha-1}}\) on \(\mathcal{K}\). As described in [11, Section 1.2], one can also extend \(S\) to equivariant maps, resulting in a functor \(S:\operatorname{Rep}_{\alpha}\to\operatorname{Rep}_{\frac{\alpha}{\alpha-1}}\). If \([\pi]=(\alpha;n;d_{i})\) then \([S(\pi)]=(\frac{\alpha}{\alpha-1};\sum_{i}d_{i}-n;d_{i})\). 
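Both functors act on concrete matrix representations through elementary linear algebra, which makes the construction easy to experiment with. Below is a minimal numerical sketch (function names and the eigendecomposition-based helper are our own, assuming real finite-dimensional representations): applying \(S_{4-\alpha}\) after \(T_{\alpha}\) repeatedly to the one-dimensional representation of \(\mathcal{A}_{1}\) produces quadruples of projections summing to \((2-\frac{1}{n})I\) in growing dimensions.

```python
import numpy as np

def range_basis(p):
    """Orthonormal basis (as columns) of the range of a symmetric projection p."""
    w, v = np.linalg.eigh(p)
    return v[:, w > 0.5]

def linear_reflection(projs):
    """The functor T: x_i -> I - x_i (parameter alpha -> 4 - alpha)."""
    n = projs[0].shape[0]
    return [np.eye(n) - p for p in projs]

def hyperbolic_reflection(projs, alpha):
    """The functor S on a representation with sum(projs) = alpha * I."""
    bases = [range_basis(p) for p in projs]                # columns span ran x_i
    sizes = [b.shape[1] for b in bases]
    u = np.vstack([b.T for b in bases]) / np.sqrt(alpha)   # isometry H -> hat(H)
    v = range_basis(np.eye(sum(sizes)) - u @ u.T)          # basis of ran(I - u u*)
    out, row = [], 0
    for r in sizes:
        vi = v[row:row + r, :]                             # the block w_i^* v
        out.append(alpha / (alpha - 1) * vi.T @ vi)
        row += r
    return out

# Iterate S_{4-alpha} . T_alpha starting from the representation
# A_1 -> C with x_1 = 1 and x_2 = x_3 = x_4 = 0:
projs = [np.array([[1.0]])] + [np.array([[0.0]]) for _ in range(3)]
n = 1
for _ in range(3):
    projs = hyperbolic_reflection(linear_reflection(projs), 4 - (2 - 1 / n))
    n += 1
assert projs[0].shape == (4, 4)                   # dimension grows n -> n + 1
assert np.allclose(sum(projs), (2 - 1 / 4) * np.eye(4))
assert all(np.allclose(p @ p, p) for p in projs)  # outputs are again projections
```

Since \(S\) preserves the ranks \(d_{i}\) while \(T\) replaces them by \(n-d_{i}\), the ranks of the iterates can be read off directly from the formulas for \([T(\pi)]\) and \([S(\pi)]\).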
### Distinguished quadruples of projections For \(\alpha\in(0,3)\), the (Coxeter) functor \[\Phi^{+}=S\circ T=S_{4-\alpha}\circ T_{\alpha}:\operatorname{Rep}_{\alpha}\to\operatorname{Rep}_{1+\frac{1}{3-\alpha}}\] defines an equivalence of categories (with inverse \(T\circ S\)) by [11, Theorem 2]. If \([\pi]=(\alpha,n,d_{1},\dots,d_{4})\) then \([\Phi^{+}(\pi)]=(1+\frac{1}{3-\alpha};3n-\sum_{i}d_{i};n-d_{i})\). The functor \(\Phi^{+}\) plays an implicit yet crucial role in [11, Proposition 3] that describes the category \(\operatorname{Rep}_{2-\frac{1}{n}}\). For the sake of completeness, we provide the proof of part of [13, Proposition 3], and refine it to extract the additional information needed in this paper. **Proposition 3.1** ([13, Proposition 3(c)]).: _Let \(n\in\mathbb{N}\). The C*-algebra \(\mathcal{A}_{2-\frac{1}{n}}\) has precisely four non-equivalent irreducible \(*\)-representations. More precisely, there are projections \(\mathfrak{P}_{1}^{(n)},\ldots,\mathfrak{P}_{4}^{(n)}\in\mathrm{M}_{n}(\mathbb{R})\) with \(\operatorname{rk}\mathfrak{P}_{1}^{(n)}=\lfloor\frac{n}{2}\rfloor-(-1)^{n}\) and \(\operatorname{rk}\mathfrak{P}_{i}^{(n)}=\lfloor\frac{n}{2}\rfloor\) for \(i=2,3,4\), such that given an irreducible \(*\)-representation \(\pi\) of \(\mathcal{A}_{2-\frac{1}{n}}\), the quadruple \((\pi(x_{1}),\ldots,\pi(x_{4}))\) is unitarily equivalent to one of the quadruples_ \[(\mathfrak{P}_{1}^{(n)},\mathfrak{P}_{2}^{(n)},\mathfrak{P}_{3}^{(n)},\mathfrak{P}_{4}^{(n)}), (\mathfrak{P}_{4}^{(n)},\mathfrak{P}_{1}^{(n)},\mathfrak{P}_{2}^{(n)},\mathfrak{P}_{3}^{(n)}),\] \[(\mathfrak{P}_{3}^{(n)},\mathfrak{P}_{4}^{(n)},\mathfrak{P}_{1}^{(n)},\mathfrak{P}_{2}^{(n)}), (\mathfrak{P}_{2}^{(n)},\mathfrak{P}_{3}^{(n)},\mathfrak{P}_{4}^{(n)},\mathfrak{P}_{1}^{(n)}).\] Proof.: We prove the statement by induction on \(n\). 
If \(n=1\), then \(\mathfrak{P}_{1}^{(1)}=1\) and \(\mathfrak{P}_{i}^{(1)}=0\) for \(i=2,3,4\) are the desired \(1\times 1\) projections, giving rise to a \(*\)-representation \(\mathcal{A}_{1}\to\mathbb{C}\). Now suppose the projections \(\mathfrak{P}_{i}^{(n)}\in\mathrm{M}_{n}(\mathbb{R})\) possess the desired properties. Then they define an irreducible \(*\)-representation of \(\mathcal{A}_{2-\frac{1}{n}}\) given by \(\pi(x_{i})=\mathfrak{P}_{i}^{(n)}\), and the other three irreducible \(*\)-representations up to unitary equivalence are obtained by cyclically permuting the generators. Now let \(\mathfrak{P}_{i}^{(n+1)}:=\Phi^{+}(\pi)(x_{i})\). Since \(\Phi^{+}:\mathrm{Rep}_{2-\frac{1}{n}}\to\mathrm{Rep}_{2-\frac{1}{n+1}}\) is an equivalence of categories, \(\Phi^{+}(\pi)\) is an irreducible \(*\)-representation of \(\mathcal{A}_{2-\frac{1}{n+1}}\), and the other three irreducible \(*\)-representations up to unitary equivalence are obtained via cyclic permutations of generators. The rank values are determined by comparing \([\pi]\) and \([\Phi^{+}(\pi)]\). For later use we record a technical fact. 
**Lemma 3.2**.: _The \(4\times 4\) matrix_ \[\begin{pmatrix}\operatorname{rk}\mathfrak{P}_{1}^{(n)}&\operatorname{rk}\mathfrak{P}_{2}^{(n)}&\operatorname{rk}\mathfrak{P}_{3}^{(n)}&\operatorname{rk}\mathfrak{P}_{4}^{(n)}\\ \operatorname{rk}\mathfrak{P}_{4}^{(n)}&\operatorname{rk}\mathfrak{P}_{1}^{(n)}&\operatorname{rk}\mathfrak{P}_{2}^{(n)}&\operatorname{rk}\mathfrak{P}_{3}^{(n)}\\ \operatorname{rk}\mathfrak{P}_{3}^{(n)}&\operatorname{rk}\mathfrak{P}_{4}^{(n)}&\operatorname{rk}\mathfrak{P}_{1}^{(n)}&\operatorname{rk}\mathfrak{P}_{2}^{(n)}\\ \operatorname{rk}\mathfrak{P}_{2}^{(n)}&\operatorname{rk}\mathfrak{P}_{3}^{(n)}&\operatorname{rk}\mathfrak{P}_{4}^{(n)}&\operatorname{rk}\mathfrak{P}_{1}^{(n)}\end{pmatrix}=-(-1)^{n}I_{4}+\left\lfloor\frac{n}{2}\right\rfloor\begin{pmatrix}1&1&1&1\\ 1&1&1&1\\ 1&1&1&1\\ 1&1&1&1\end{pmatrix}\] _is invertible for every \(n\in\mathbb{N}\)._ Proof.: The matrix has eigenvalues \(-(-1)^{n}\) (with multiplicity \(3\)) and \(-(-1)^{n}+4\lfloor\frac{n}{2}\rfloor=2n-1\), all of which are nonzero. **Remark 3.3**.: Let us determine the normalized traces of \(\mathfrak{P}_{i}^{(n)}\) and their products. Clearly, \[\tau\left(\mathfrak{P}_{1}^{(n)}\right)=\frac{1}{2}-\frac{1+3(-1)^{n}}{4n},\qquad\tau\left(\mathfrak{P}_{i}^{(n)}\right)=\frac{1}{2}-\frac{1-(-1)^{n}}{4n},\quad\text{for }i=2,3,4.\] Next, by Proposition 3.1, for every permutation \(\sigma\) of \(\{2,3,4\}\) there exists a unitary \(U\in\mathrm{M}_{n}(\mathbb{C})\) such that \[U\mathfrak{P}_{1}^{(n)}U^{*}=\mathfrak{P}_{1}^{(n)},\qquad U\mathfrak{P}_{i}^{(n)}U^{*}=\mathfrak{P}_{\sigma(i)}^{(n)},\quad\text{for }i=2,3,4.\] Therefore \(\tau(\mathfrak{P}_{1}^{(n)}\mathfrak{P}_{i}^{(n)})\) is independent of \(i\in\{2,3,4\}\), and \(\tau(\mathfrak{P}_{i}^{(n)}\mathfrak{P}_{j}^{(n)})\) is independent of \(i,j\in\{2,3,4\}\) with \(i\neq j\). 
From the equation \(\sum_{j=1}^{4}\mathfrak{P}_{i}^{(n)}\mathfrak{P}_{j}^{(n)}=(2-\frac{1}{n})\mathfrak{P}_{i}^{(n)}\) for \(i=1,\ldots,4\) we then obtain \[\tau\left(\mathfrak{P}_{1}^{(n)}\mathfrak{P}_{i}^{(n)}\right) =\frac{1}{3}\left(1-\frac{1}{n}\right)\tau\left(\mathfrak{P}_{1}^{(n)}\right)\quad\text{for $i=2,3,4$,}\] \[\tau\left(\mathfrak{P}_{i}^{(n)}\mathfrak{P}_{j}^{(n)}\right) =\frac{1}{2}\left(1-\frac{1}{n}\right)\left(\tau\left(\mathfrak{P}_{i}^{(n)}\right)-\frac{1}{3}\tau\left(\mathfrak{P}_{1}^{(n)}\right)\right)\quad\text{for $i,j=2,3,4$ and $i\neq j$.}\] ## 4. Spectral results To establish new self-tests featuring the projections \(\mathfrak{P}_{i}^{(n)}\), we require information on eigenvalues and eigenvectors of certain tensor combinations and sums of pairs of matrices \(\mathfrak{P}_{i}^{(n)}\). ### Role of the maximally entangled state First, we identify the largest eigenvalue of \(\sum_{i}\mathfrak{P}_{i}^{(n)}\otimes\mathfrak{P}_{i}^{(n)}\) and the corresponding eigenvector (cf. [13, Lemma 5.7]), and bound the spectrum of \(\sum_{i}\mathfrak{P}_{i}^{(n)}\otimes\mathfrak{P}_{\sigma(i)}^{(n)}\) for a nontrivial cyclic permutation \(\sigma\) of \((1,2,3,4)\). Given \(|\psi\rangle=\sum_{i,j}\alpha_{ij}\,|i\rangle|j\rangle\in\mathbb{C}^{n}\otimes\mathbb{C}^{n}\) let \(\operatorname{mat}(|\psi\rangle)=\sum_{i,j}\alpha_{ij}\,|i\rangle\langle j|\in\mathrm{M}_{n}(\mathbb{C})\) denote its matricization; note that \(\operatorname{mat}(|\phi_{n}\rangle)=\frac{1}{\sqrt{n}}I\), and \[\operatorname{mat}\left(A\otimes B\,|\psi\rangle\,\right)=A\operatorname{mat}(|\psi\rangle)B^{\mathrm{t}}\] for \(A,B\in\mathrm{M}_{n}(\mathbb{C})\). **Lemma 4.1**.: _Let \(n\in\mathbb{N}\) and let \(\sigma\) be a cyclic permutation of \((1,2,3,4)\). 
Denote \(M=\frac{n}{2n-1}\sum_{i=1}^{4}\mathfrak{P}_{i}^{(n)}\otimes\mathfrak{P}_{\sigma(i)}^{(n)}\)._ _(i) If \(\sigma=\operatorname{id}\), then the largest eigenvalue of \(M\) is 1, with the eigenspace \(\mathbb{C}\,|\phi_{n}\rangle\)._ _(ii) If \(\sigma\neq\operatorname{id}\), then all eigenvalues of \(M\) are strictly smaller than 1._ Proof.: Let \(|\psi\rangle\in\mathbb{C}^{n}\otimes\mathbb{C}^{n}\) be an arbitrary state. Then \[\begin{split}\langle\psi|\,I\otimes I-M\,|\psi\rangle& \geq\langle\psi|\,I\otimes I-\frac{n}{2n-1}\sum_{i=1}^{4}\mathfrak{P}_{i}^{(n)}\otimes I\,|\psi\rangle\\ &=\langle\psi|\left(I-\frac{n}{2n-1}\sum_{i=1}^{4}\mathfrak{P}_{i}^{(n)}\right)\otimes I\,|\psi\rangle=0.\end{split} \tag{2}\] Therefore the largest eigenvalue of \(M\) is at most 1. Since \[\langle\phi_{n}|\,I\otimes I-\frac{n}{2n-1}\sum_{i=1}^{4}\mathfrak{P}_{i}^{(n)}\otimes\mathfrak{P}_{i}^{(n)}\,|\phi_{n}\rangle=\tau\left(I-\frac{n}{2n-1}\sum_{i=1}^{4}\mathfrak{P}_{i}^{(n)}\right)=0,\] \(|\phi_{n}\rangle\) is an eigenvector of \(M\) for eigenvalue 1 if \(\sigma=\operatorname{id}\). Suppose \(|\psi\rangle\in\mathbb{C}^{n}\otimes\mathbb{C}^{n}\) satisfies \(M\,|\psi\rangle=|\psi\rangle\). Then (2) gives \[\langle\psi|\,M\,|\psi\rangle=\langle\psi|\,\frac{n}{2n-1}\sum_{i=1}^{4}\mathfrak{P}_{i}^{(n)}\otimes I\,|\psi\rangle\] and therefore \[\langle\psi|\sum_{i=1}^{4}\mathfrak{P}_{i}^{(n)}\otimes(I-\mathfrak{P}_{\sigma(i)}^{(n)})\,|\psi\rangle=0.\] Positive semidefiniteness then implies \(\mathfrak{P}_{i}^{(n)}\otimes(I-\mathfrak{P}_{\sigma(i)}^{(n)})\,|\psi\rangle=0\), and analogously \((I-\mathfrak{P}_{i}^{(n)})\otimes\mathfrak{P}_{\sigma(i)}^{(n)}\,|\psi\rangle=0\). In particular, \(\mathfrak{P}_{i}^{(n)}\otimes I\,|\psi\rangle=I\otimes\mathfrak{P}_{\sigma(i)}^{(n)}\,|\psi\rangle\) for \(i=1,\dots,4\). 
Therefore \[\mathfrak{P}_{i}^{(n)}\operatorname{mat}(|\psi\rangle)=\operatorname{mat}(|\psi\rangle)\mathfrak{P}_{\sigma(i)}^{(n)}\qquad\text{for $i=1,\dots,4$.} \tag{3}\] Note that \(\mathfrak{P}_{1}^{(n)},\dots,\mathfrak{P}_{4}^{(n)}\) and \(\mathfrak{P}_{\sigma(1)}^{(n)},\dots,\mathfrak{P}_{\sigma(4)}^{(n)}\) give rise to two irreducible \(*\)-representations of \(\mathcal{A}_{2-\frac{1}{n}}\) by Proposition 3.1, which are equivalent if and only if \(\sigma=\operatorname{id}\). Since \(\operatorname{mat}(|\psi\rangle)\) intertwines these two irreducible \(*\)-representations, Schur's lemma implies that \(\operatorname{mat}(|\psi\rangle)=\gamma I\) for some \(\gamma\in\mathbb{C}\) if \(\sigma=\operatorname{id}\), and \(\operatorname{mat}(|\psi\rangle)=0\) if \(\sigma\neq\operatorname{id}\). Therefore \(|\psi\rangle\) is a scalar multiple of \(|\phi_{n}\rangle\) if \(\sigma=\operatorname{id}\), and \(1\) is not an eigenvalue of \(M\) if \(\sigma\neq\operatorname{id}\). The following proposition shows how the maximally entangled state \(|\phi_{n}\rangle\) relates to an arbitrary \(*\)-representation of \(\mathcal{A}_{2-\frac{1}{n}}\). **Proposition 4.2**.: _Let \(n\in\mathbb{N}\), let \(a_{1},\dots,a_{4},b_{1},\dots,b_{4}\) be nonnegative integers with \(a_{1}+\dots+a_{4}=b_{1}+\dots+b_{4}\), and let \(\sigma_{1},\dots,\sigma_{4}\) be the distinct cyclic permutations of \((1,2,3,4)\). 
Consider the identification_ \[\mathbb{C}^{(a_{1}+\dots+a_{4})n}\otimes\mathbb{C}^{(b_{1}+\dots+b_{4})n}\equiv\left(\bigoplus_{j,k=1}^{4}\mathbb{C}^{a_{j}}\otimes\mathbb{C}^{b_{k}}\right)\otimes(\mathbb{C}^{n}\otimes\mathbb{C}^{n}).\] _Then the largest eigenvalue of_ \[\frac{n}{2n-1}\sum_{i=1}^{4}\left(\bigoplus_{j=1}^{4}I_{a_{j}}\otimes\mathfrak{P}_{\sigma_{j}(i)}^{(n)}\right)\otimes\left(\bigoplus_{j=1}^{4}I_{b_{j}}\otimes\mathfrak{P}_{\sigma_{j}(i)}^{(n)}\right)\] _is 1, with the eigenspace_ \[\Big{\{}\big{(}\,|\mathrm{aux}_{1}\rangle\oplus|\mathrm{aux}_{2}\rangle\oplus|\mathrm{aux}_{3}\rangle\oplus|\mathrm{aux}_{4}\rangle\,\big{)}\otimes|\phi_{n}\rangle:\;|\mathrm{aux}_{j}\rangle\in\mathbb{C}^{a_{j}}\otimes\mathbb{C}^{b_{j}}\Big{\}}.\] Proof.: Follows from the distributivity of the tensor product over the direct sum, and Lemma 4.1. ### Spectrum of the sum of two distinguished projections Next, we analyze the spectrum of \(\mathfrak{P}_{3}^{(n)}+\mathfrak{P}_{4}^{(n)}\) for every \(n\). To do this, we return to the functors between the categories \(\operatorname{Rep}_{\alpha}\). Given a finite-dimensional \(*\)-representation \(\pi\) of \(\mathcal{A}_{\alpha}\), let \(\Lambda_{\pi}\subset[0,2]\) denote the set of eigenvalues of \(\pi(x_{3}+x_{4})\). **Lemma 4.3**.: _Let \(\pi\) be an \(n\)-dimensional \(*\)-representation of \(\mathcal{A}_{\alpha}\)._ _(i) \(\Lambda_{T(\pi)}=2-\Lambda_{\pi}\)._ _(ii) Let \(\alpha\notin\{0,1\}\)._ _(ii.a) If \(\operatorname{rk}\pi(x_{1})+\operatorname{rk}\pi(x_{2})>n=\operatorname{rk}\pi(x_{3})+\operatorname{rk}\pi(x_{4})\) then_ \[\Lambda_{S(\pi)}=\{0\}\cup\left(\tfrac{\alpha}{\alpha-1}-\tfrac{1}{\alpha-1}\Lambda_{\pi}\right).\] _(ii.b) If \(\operatorname{rk}\pi(x_{3})+\operatorname{rk}\pi(x_{4})>n=\operatorname{rk}\pi(x_{1})+\operatorname{rk}\pi(x_{2})\) then_ \[\Lambda_{S(\pi)}=\left\{\tfrac{\alpha}{\alpha-1}\right\}\cup\left(\tfrac{\alpha}{\alpha-1}-\tfrac{1}{\alpha-1}\Lambda_{\pi}\right).\] _(iii) Let \(\alpha\in(0,3)\)._ _(iii.a) If \(\operatorname{rk}\pi(x_{1})+\operatorname{rk}\pi(x_{2})<n=\operatorname{rk}\pi(x_{3})+\operatorname{rk}\pi(x_{4})\) then_ \[\Lambda_{\Phi^{+}(\pi)}=\left\{0\right\}\cup\left(1-\tfrac{1}{3-\alpha}+\tfrac{1}{3-\alpha}\Lambda_{\pi}\right).\] _(iii.b) If \(\operatorname{rk}\pi(x_{3})+\operatorname{rk}\pi(x_{4})<n=\operatorname{rk}\pi(x_{1})+\operatorname{rk}\pi(x_{2})\) then_ \[\Lambda_{\Phi^{+}(\pi)}=\left\{1+\tfrac{1}{3-\alpha}\right\}\cup\left(1-\tfrac{1}{3-\alpha}+\tfrac{1}{3-\alpha}\Lambda_{\pi}\right).\] Proof.: Equation (i) follows immediately from \(T(\pi)(x_{i})=I-\pi(x_{i})\). Equations (iii) are consequences of (i) and (ii) because \(\Phi^{+}=S\circ T\). Equations (ii): Suppose \(\pi\) acts on \(\mathcal{H}\) with \(\dim\mathcal{H}=n\), and let \[u_{i}:\operatorname{ran}\pi(x_{i})\to\mathcal{H},\qquad w_{i}:\operatorname{ran}\pi(x_{i})\to\operatorname{ran}\pi(x_{1})\oplus\cdots\oplus\operatorname{ran}\pi(x_{4}),\] \[v=\left(\begin{smallmatrix}v_{1}\\ \vdots\\ v_{4}\end{smallmatrix}\right):\operatorname{ran}\left(I-\frac{1}{\alpha}\left(\begin{smallmatrix}u_{1}^{*}\\ \vdots\\ u_{4}^{*}\end{smallmatrix}\right)\left(\begin{smallmatrix}u_{1}&\cdots&u_{4}\end{smallmatrix}\right)\right)\to\operatorname{ran}\pi(x_{1})\oplus\cdots\oplus\operatorname{ran}\pi(x_{4})\] be inclusions as in the construction of \(S\). 
Then \(S(\pi)(x_{i})=\frac{\alpha}{\alpha-1}v^{*}w_{i}w_{i}^{*}v\), and the characteristic polynomial of \(S(\pi)(x_{3}+x_{4})\) equals \[\begin{split}\det&\left(\lambda I-S(\pi)(x_{3}+x_{4})\right)\\ &=\det\left(\lambda I-\tfrac{\alpha}{\alpha-1}v^{*}(w_{3}w_{3}^{*}+w_{4}w_{4}^{*})v\right)\\ &=\det\left(\lambda I-\tfrac{\alpha}{\alpha-1}\left(\begin{smallmatrix}v_{3}^{*}&v_{4}^{*}\end{smallmatrix}\right)\left(\begin{smallmatrix}v_{3}\\ v_{4}\end{smallmatrix}\right)\right)\\ &=\lambda^{\operatorname{rk}\pi(x_{1})+\operatorname{rk}\pi(x_{2})-n}\det\left(\lambda I-\tfrac{\alpha}{\alpha-1}\left(\begin{smallmatrix}v_{3}\\ v_{4}\end{smallmatrix}\right)\left(\begin{smallmatrix}v_{3}^{*}&v_{4}^{*}\end{smallmatrix}\right)\right)\\ &=\lambda^{\operatorname{rk}\pi(x_{1})+\operatorname{rk}\pi(x_{2})-n}\det\left(\lambda I-\tfrac{\alpha}{\alpha-1}\left(I-\tfrac{1}{\alpha}\left(\begin{smallmatrix}u_{3}^{*}\\ u_{4}^{*}\end{smallmatrix}\right)\left(\begin{smallmatrix}u_{3}&u_{4}\end{smallmatrix}\right)\right)\right)\\ &=\lambda^{\operatorname{rk}\pi(x_{1})+\operatorname{rk}\pi(x_{2})-n}\det\left(\left(\lambda-\tfrac{\alpha}{\alpha-1}\right)I+\tfrac{1}{\alpha-1}\left(\begin{smallmatrix}u_{3}^{*}\\ u_{4}^{*}\end{smallmatrix}\right)\left(\begin{smallmatrix}u_{3}&u_{4}\end{smallmatrix}\right)\right)\\ &=\lambda^{\operatorname{rk}\pi(x_{1})+\operatorname{rk}\pi(x_{2})-n}\left(\lambda-\tfrac{\alpha}{\alpha-1}\right)^{\operatorname{rk}\pi(x_{3})+\operatorname{rk}\pi(x_{4})-n}\det\left(\left(\lambda-\tfrac{\alpha}{\alpha-1}\right)I+\tfrac{1}{\alpha-1}\left(\begin{smallmatrix}u_{3}&u_{4}\end{smallmatrix}\right)\left(\begin{smallmatrix}u_{3}^{*}\\ u_{4}^{*}\end{smallmatrix}\right)\right)\\ &=\lambda^{\operatorname{rk}\pi(x_{1})+\operatorname{rk}\pi(x_{2})-n}\left(\lambda-\tfrac{\alpha}{\alpha-1}\right)^{\operatorname{rk}\pi(x_{3})+\operatorname{rk}\pi(x_{4})-n}\det\left(\left(\lambda-\tfrac{\alpha}{\alpha-1}\right)I+\tfrac{1}{\alpha-1}\pi(x_{3}+x_{4})\right).\end{split}\] Therefore 
\[\Lambda_{S(\pi)}=\left\{0\right\}\cup\left(\tfrac{\alpha}{\alpha-1}-\tfrac{1}{\alpha-1}\Lambda_{\pi}\right)\] if \(\operatorname{rk}\pi(x_{1})+\operatorname{rk}\pi(x_{2})>n=\operatorname{rk}\pi(x_{3})+\operatorname{rk}\pi(x_{4})\), and \[\Lambda_{S(\pi)}=\left\{\tfrac{\alpha}{\alpha-1}\right\}\cup\left(\tfrac{\alpha}{\alpha-1}-\tfrac{1}{\alpha-1}\Lambda_{\pi}\right)\] if \(\operatorname{rk}\pi(x_{3})+\operatorname{rk}\pi(x_{4})>n=\operatorname{rk}\pi(x_{1})+\operatorname{rk}\pi(x_{2})\). The following proposition identifies all eigenvalues of \(\mathfrak{P}_{3}^{(n)}+\mathfrak{P}_{4}^{(n)}\); in particular, they are all simple (pairwise distinct). **Proposition 4.4**.: _Eigenvalues of \(n(\mathfrak{P}_{3}^{(n)}+\mathfrak{P}_{4}^{(n)})\) are \(\{0,2,\ldots,2n-2\}\) if \(n\) is odd, and \(\{1,3,\ldots,2n-1\}\) if \(n\) is even._ Proof.: Let \(\pi_{1}:\mathcal{A}_{1}\to\mathbb{C}\) be given as \(\pi_{1}(x_{1})=1\) and \(\pi_{1}(x_{2})=\pi_{1}(x_{3})=\pi_{1}(x_{4})=0\). For \(n\geq 2\) denote \(\pi_{n}=\Phi^{+}(\pi_{n-1})\). By Proposition 3.1 we have \(\operatorname{rk}\pi_{n}(x_{1})+\operatorname{rk}\pi_{n}(x_{2})<n=\operatorname{rk}\pi_{n}(x_{3})+\operatorname{rk}\pi_{n}(x_{4})\) if \(n\) is even, and \(\operatorname{rk}\pi_{n}(x_{3})+\operatorname{rk}\pi_{n}(x_{4})<n=\operatorname{rk}\pi_{n}(x_{1})+\operatorname{rk}\pi_{n}(x_{2})\) if \(n\) is odd. 
By Lemma 4.3, \[\Lambda_{\pi_{n+1}} =\{0\}\cup\left(\tfrac{1}{n+1}+\tfrac{n}{n+1}\Lambda_{\pi_{n}}\right) \text{if $n$ is even},\] \[\Lambda_{\pi_{n+1}} =\{2-\tfrac{1}{n+1}\}\cup\left(\tfrac{1}{n+1}+\tfrac{n}{n+1}\Lambda_{\pi_{n}}\right) \text{if $n$ is odd}.\] Therefore \[(n+1)\Lambda_{\pi_{n+1}} =\{0\}\cup(1+n\Lambda_{\pi_{n}}) \text{if $n$ is even},\] \[(n+1)\Lambda_{\pi_{n+1}} =\{2n+1\}\cup(1+n\Lambda_{\pi_{n}}) \text{if $n$ is odd}.\] Since \(\Lambda_{\pi_{1}}=\{0\}\), induction on \(n\) shows that \[n\Lambda_{\pi_{n}} =\{0,2,\ldots,2n-2\} \text{if $n$ is odd},\] \[n\Lambda_{\pi_{n}} =\{1,3,\ldots,2n-1\} \text{if $n$ is even}.\] Finally, \(\mathfrak{P}_{3}^{(n)},\mathfrak{P}_{4}^{(n)}\) are simultaneously unitarily equivalent to \(\pi_{n}(x_{3}),\pi_{n}(x_{4})\). Lastly, we determine how eigenvectors of \(\mathfrak{P}_{3}^{(n)}+\mathfrak{P}_{4}^{(n)}\) interact with \(\mathfrak{P}_{1}^{(n)}\) and \(\mathfrak{P}_{2}^{(n)}\). **Proposition 4.5**.: _Let \(\lambda\) be an eigenvalue of \(\mathfrak{P}_{3}^{(n)}+\mathfrak{P}_{4}^{(n)}\), with a corresponding unit eigenvector \(\left|e\right\rangle\in\mathbb{R}^{n}\)._ _(i) If \(\lambda\neq 1-\frac{1}{n}\) then_ \[\left\langle e\right|\mathfrak{P}_{1}^{(n)}\left|e\right\rangle=\left\langle e\right|\mathfrak{P}_{2}^{(n)}\left|e\right\rangle=1-\frac{1}{2n}-\frac{\lambda}{2}.\] _(ii) If \(\lambda=1-\frac{1}{n}\) then_ \[\left\langle e\right|\mathfrak{P}_{1}^{(n)}\left|e\right\rangle=\left\{\begin{array}{ll}0&\text{if $n$ even,}\\ 1&\text{if $n$ odd,}\end{array}\right.\qquad\left\langle e\right|\mathfrak{P}_{2}^{(n)}\left|e\right\rangle=\left\{\begin{array}{ll}1&\text{if $n$ even,}\\ 0&\text{if $n$ odd.}\end{array}\right.\] Proof.: (i) By the universal property of the \(\mathfrak{P}_{i}^{(n)}\) and \((\mathfrak{P}_{3}^{(n)}+\mathfrak{P}_{4}^{(n)})\left|e\right\rangle=\lambda\left|e\right\rangle\), \[\left(\mathfrak{P}_{1}^{(n)}+\mathfrak{P}_{2}^{(n)}\right)\left|e\right\rangle=\left(2-\frac{1}{n}-\lambda\right)\left|e\right\rangle. \tag{4}\] Multiplying (4) on the left with \(\left\langle e\right|\mathfrak{P}_{i}^{(n)}\) for \(i=1,2\) results in \[\left\langle e\right|\mathfrak{P}_{1}^{(n)}\left|e\right\rangle+\left\langle e
\right|\mathfrak{P}_{1}^{(n)}\mathfrak{P}_{2}^{(n)}\left|e\right\rangle=\left(2-\frac{1}{n}-\lambda\right)\left\langle e\right|\mathfrak{P}_{1}^{(n)}\left|e\right\rangle,\] \[\left\langle e\right|\mathfrak{P}_{2}^{(n)}\mathfrak{P}_{1}^{(n)}\left|e\right\rangle+\left\langle e\right|\mathfrak{P}_{2}^{(n)}\left|e\right\rangle=\left(2-\frac{1}{n}-\lambda\right)\left\langle e\right|\mathfrak{P}_{2}^{(n)}\left|e\right\rangle.\] Subtracting these equations (note that \(\left\langle e\right|\mathfrak{P}_{1}^{(n)}\mathfrak{P}_{2}^{(n)}\left|e\right\rangle=\left\langle e\right|\mathfrak{P}_{2}^{(n)}\mathfrak{P}_{1}^{(n)}\left|e\right\rangle\) since the matrices and \(\left|e\right\rangle\) are real) shows that \(\left\langle e\right|\mathfrak{P}_{1}^{(n)}\left|e\right\rangle=\left\langle e\right|\mathfrak{P}_{2}^{(n)}\left|e\right\rangle\) if \(\lambda\neq 1-\frac{1}{n}\). Multiplying (4) on the left with \(\left\langle e\right|\) then gives \(\left\langle e\right|\mathfrak{P}_{1}^{(n)}\left|e\right\rangle=\left\langle e\right|\mathfrak{P}_{2}^{(n)}\left|e\right\rangle=1-\frac{1}{2n}-\frac{\lambda}{2}\). (ii) Note that \(\mathfrak{P}_{3}^{(n)}+\mathfrak{P}_{4}^{(n)}\) admits \(n\) orthonormal eigenvectors \(\left|e_{1}\right\rangle,\ldots,\left|e_{n}\right\rangle\in\mathbb{R}^{n}\) by Proposition 4.4. Hence \[\operatorname{tr}\mathfrak{P}_{i}^{(n)}=\sum_{k=1}^{n}\left\langle e_{k}\right|\mathfrak{P}_{i}^{(n)}\left|e_{k}\right\rangle\] for \(i=1,2\). By (i) and Proposition 3.1 we therefore have \[\left\langle e\right|\mathfrak{P}_{i}^{(n)}\left|e\right\rangle =\operatorname{tr}\mathfrak{P}_{i}^{(n)}-(n-1)\left(1-\frac{1}{2n}\right)+\frac{1}{2}\left(\operatorname{tr}\left(\mathfrak{P}_{3}^{(n)}+\mathfrak{P}_{4}^{(n)}\right)-1+\frac{1}{n}\right)\] \[=2\left\lfloor\frac{n}{2}\right\rfloor-n+1-\left\{\begin{array}{ll}(-1)^{n}&\text{if }i=1\\ 0&\text{if }i=2\end{array}\right.\] since \(\operatorname{tr}\mathfrak{P}_{i}^{(n)}=\operatorname{rk}\mathfrak{P}_{i}^{(n)}\). ## 5. Constant-sized self-tests In this section we derive the main results of the paper: every maximally entangled state is self-tested by a 4-input 2-output strategy (Subsection 5.1), and every single binary PVM is self-tested by a 5-input 2-output strategy (Subsection 5.2). ### Self-testing maximally entangled states First we introduce a family of 4-input 2-output strategies that self-test maximally entangled states of all dimensions (Theorem 5.2). 
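As a quick sanity check before introducing the strategies, the eigenvalue recursion from the proof of Proposition 4.4 can be replayed with exact integer arithmetic; a small sketch (the function name `spectrum` is our own):

```python
def spectrum(n):
    """Return n * Lambda_{pi_n}, following the recursion in the proof of
    Proposition 4.4: (m+1)*Lambda_{m+1} equals {0} u (1 + m*Lambda_m) for
    m even, and {2m+1} u (1 + m*Lambda_m) for m odd, with Lambda_1 = {0}."""
    s = {0}
    for m in range(1, n):
        s = {1 + x for x in s} | ({0} if m % 2 == 0 else {2 * m + 1})
    return s

assert spectrum(4) == {1, 3, 5, 7}     # n even: {1, 3, ..., 2n-1}
assert spectrum(5) == {0, 2, 4, 6, 8}  # n odd:  {0, 2, ..., 2n-2}
```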
**Definition 5.1**.: For \(n\in\mathbb{N}\) let \(\mathfrak{P}_{i}^{(n)}\) be the \(n\times n\) projections as in Proposition 3.1. Let \(\mathcal{S}_{n}\) be the 4-input 2-output bipartite strategy \[\mathcal{S}_{n}=\left(\left|\phi_{n}\right\rangle;\left(\mathfrak{P}_{i}^{(n) },I-\mathfrak{P}_{i}^{(n)}\right)_{i=1}^{4};\left(\mathfrak{P}_{i}^{(n)},I- \mathfrak{P}_{i}^{(n)}\right)_{i=1}^{4}\right).\] The correlation of \(\mathcal{S}_{n}\) is determined by the values \[p(1,1|i,j) =\left\langle\phi_{n}\right|\mathfrak{P}_{i}^{(n)}\otimes \mathfrak{P}_{j}^{(n)}\left|\phi_{n}\right\rangle=\tau\left(\mathfrak{P}_{i}^ {(n)}\mathfrak{P}_{j}^{(n)}\right),\] \[p(1|i) =\left\langle\phi_{n}\right|\mathfrak{P}_{i}^{(n)}\otimes I \left|\phi_{n}\right\rangle=\left\langle\phi_{n}\right|I\otimes\mathfrak{P}_ {i}^{(n)}\left|\phi_{n}\right\rangle=\tau\left(\mathfrak{P}_{i}^{(n)}\right)\] for \(i,j=1,\ldots,4\), which are computed in Remark 3.3. Note that the correlation of \(\mathcal{S}_{n}\) is synchronous, i.e., \(p(a,b|i,i)=\tau\left(\mathfrak{P}_{i}^{(n)}(I-\mathfrak{P}_{i}^{(n)})\right)=0\) for \(a\neq b\). **Theorem 5.2**.: _The strategy \(\mathcal{S}_{n}\) is self-tested by its correlation for every \(n\in\mathbb{N}\)._ Proof.: Let \(p\) be the correlation of \(\mathcal{S}_{n}\). Suppose \[\mathcal{S}=\left(\left|\psi\right\rangle;(P_{i},I-P_{i})_{i=1}^{4};(Q_{i},I-Q _{i})_{i=1}^{4}\right)\] is another strategy with the correlation \(p\). Since \(p\) is synchronous and local dilations are transitive, by [13, Lemma 4.9 and Corollary 3.6] it suffices to assume that the state \(\left|\psi\right\rangle\in\mathcal{H}\otimes\mathcal{H}\) has full Schmidt rank, \(P_{i},Q_{i}\) are projections on \(\mathcal{H}\), and \[P_{i}\otimes I\left|\psi\right\rangle=I\otimes Q_{i}\left|\psi\right\rangle \tag{5}\] for \(i=1,\ldots,4\). 
By equality of correlations and (5), \[\left\langle\psi\right|\left(\frac{2n-1}{n}I-\sum_{i=1}^{4}P_{i} \right)\otimes\left(\frac{2n-1}{n}I-\sum_{i=1}^{4}P_{i}\right)\left|\psi\right\rangle\] \[= \left\langle\psi\right|\left(\frac{2n-1}{n}I-\sum_{i=1}^{4}P_{i} \right)\otimes\left(\frac{2n-1}{n}I-\sum_{i=1}^{4}Q_{i}\right)\left|\psi\right\rangle\] \[= \left\langle\phi_{n}\right|\left(\frac{2n-1}{n}I-\sum_{i=1}^{4} \mathfrak{P}_{i}^{(n)}\right)\otimes\left(\frac{2n-1}{n}I-\sum_{i=1}^{4} \mathfrak{P}_{i}^{(n)}\right)\left|\phi_{n}\right\rangle=0,\] and analogously for \(Q_{i}\). Since \(\left|\psi\right\rangle\) has full rank, we obtain \[\frac{2n-1}{n}I-\sum_{i=1}^{4}P_{i}=0=\frac{2n-1}{n}I-\sum_{i=1}^{4}Q_{i}. \tag{6}\] Furthermore, \[\left\langle\psi\right|\frac{n}{2n-1}\sum_{i=1}^{4}P_{i}\otimes Q_{i}\left| \psi\right\rangle=\left\langle\phi_{n}\right|\frac{n}{2n-1}\sum_{i=1}^{4} \mathfrak{P}_{i}^{(n)}\otimes\mathfrak{P}_{i}^{(n)}\left|\phi_{n}\right\rangle=1. \tag{7}\] Let \(\sigma_{1},\ldots,\sigma_{4}\) be the distinct cyclic permutations of \((1,2,3,4)\), with \(\sigma_{1}=\mathrm{id}\). By (6) and Proposition 3.1 there exist nonnegative integers \(a_{1},\ldots,a_{4},b_{1},\ldots,b_{4}\) with \(a_{1}+\cdots+a_{4}=b_{1}+\cdots+b_{4}\), and unitaries \(U\) and \(V\) on \(\mathcal{H}\), such that \[UP_{i}U^{*}=\bigoplus_{j=1}^{4}I_{a_{j}}\otimes\mathfrak{P}_{\sigma_{j}(i)}^{ (n)},\qquad VQ_{i}V^{*}=\bigoplus_{j=1}^{4}I_{b_{j}}\otimes\mathfrak{P}_{ \sigma_{j}(i)}^{(n)}\] for \(i=1,\ldots,4\). 
By (7) and Proposition 4.2, \[U\otimes V\left|\psi\right\rangle=\left(\left|\mathrm{aux}_{1}\right\rangle \oplus\left|\mathrm{aux}_{2}\right\rangle\oplus\left|\mathrm{aux}_{3}\right \rangle\oplus\left|\mathrm{aux}_{4}\right\rangle\right)\otimes\left|\phi_{n}\right\rangle\] for some \(\left|\mathrm{aux}_{j}\right\rangle\in\mathbb{C}^{a_{j}}\otimes\mathbb{C}^{b_ {j}}\), where we identified \[\mathcal{H}\otimes\mathcal{H}\equiv\left(\bigoplus_{j,k=1}^{4}\mathbb{C}^{a_{j }}\otimes\mathbb{C}^{b_{k}}\right)\otimes(\mathbb{C}^{n}\otimes\mathbb{C}^{n}).\] Then \[\left\langle\phi_{n}\right|\mathfrak{P}_{i}^{(n)}\otimes I\left|\phi_{n}\right\rangle =\left\langle\psi\right|P_{i}\otimes I\left|\psi\right\rangle=\sum_{j=1}^{4} \left\langle\mathrm{aux}_{j}|\mathrm{aux}_{j}\right\rangle\left\langle\phi_{ n}\right|\mathfrak{P}_{\sigma_{j}(i)}^{(n)}\otimes I\left|\phi_{n}\right\rangle\] gives rise to a linear system of equations in \(\left\langle\mathrm{aux}_{j}|\mathrm{aux}_{j}\right\rangle\), \[\mathrm{rk}\,\mathfrak{P}_{i}^{(n)}=\sum_{j=1}^{4}\mathrm{rk}\,\mathfrak{P}_{ \sigma_{j}(i)}^{(n)}\cdot\left\langle\mathrm{aux}_{j}|\mathrm{aux}_{j}\right \rangle\qquad\text{for $i=1,2,3,4$.} \tag{8}\] By Lemma 3.2, the system (8) has a unique solution; since \(\sigma_{1}=\mathrm{id}\), we obtain \(\left\langle\mathrm{aux}_{1}|\mathrm{aux}_{1}\right\rangle=1\) and \(\left\langle\mathrm{aux}_{j}|\mathrm{aux}_{j}\right\rangle=0\) for \(j=2,3,4\). Since \(\left|\psi\right\rangle\) is a faithful state, it follows that \(a_{j}=b_{j}=0\) for \(j=2,3,4\), and \(a_{1}=b_{1}\). Therefore \[UP_{i}U^{*}=I_{a_{1}}\otimes\mathfrak{P}_{i}^{(n)},\qquad VQ_{i}V^{*}=I_{a_{1 }}\otimes\mathfrak{P}_{i}^{(n)},\qquad U\otimes V\left|\psi\right\rangle=| \mathrm{aux}_{1}\rangle\otimes\left|\phi_{n}\right\rangle,\] so \(\mathcal{S}_{n}\) is a local dilation of \(\mathcal{S}\). 
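As a concrete numerical sanity check (not part of the original argument), the correlation of \(\mathcal{S}_{3}\) can be tabulated directly from the explicit matrices \(\mathfrak{P}_{1}^{(3)},\ldots,\mathfrak{P}_{4}^{(3)}\) listed in Appendix A; a minimal NumPy sketch:

```python
import numpy as np

s3, s5 = np.sqrt(3), np.sqrt(5)
# The projections P_1^(3), ..., P_4^(3) from Appendix A.
P1 = np.diag([1.0, 1.0, 0.0])
P2 = np.array([[0, 0, 0],
               [0, 4 / 9, -2 * s5 / 9],
               [0, -2 * s5 / 9, 5 / 9]])
P3 = np.array([[1 / 3, 1 / (3 * s3), s5 / (3 * s3)],
               [1 / (3 * s3), 1 / 9, s5 / 9],
               [s5 / (3 * s3), s5 / 9, 5 / 9]])
P4 = np.array([[1 / 3, -1 / (3 * s3), -s5 / (3 * s3)],
               [-1 / (3 * s3), 1 / 9, s5 / 9],
               [-s5 / (3 * s3), s5 / 9, 5 / 9]])
Ps = [P1, P2, P3, P4]

tau = lambda A: np.trace(A) / 3  # normalized trace on M_3

# Marginals p(1|i) = tau(P_i) and joints p(1,1|i,j) = tau(P_i P_j).
marg = [tau(P) for P in Ps]
joint = [[tau(Pi @ Pj) for Pj in Ps] for Pi in Ps]
```

This reproduces, for instance, \(p(1|1)=2/3\), \(p(1,1|1,2)=4/27\), \(p(1,1|2,3)=1/27\), and the synchronicity \(p(1,1|i,i)=p(1|i)\).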
**Remark 5.3**.: The proof of Theorem 5.2 follows the core ideas of the proof of [13, Corollary 7.1], which treats maximally entangled states of odd dimension. The main difference arises from applying the representation theory of the C*-algebras \(\mathcal{A}_{\alpha}\) for different values of \(\alpha\). Namely, in [13] the authors focus on \(\mathcal{A}_{2-\frac{2}{n}}\) for odd \(n\) (and their analogs on more than four generators), since \(\mathcal{A}_{2-\frac{2}{n}}\) for odd \(n\) is simple and isomorphic to \(\mathrm{M}_{n}(\mathbb{C})\) (i.e., it has a unique irreducible \(*\)-representation, which is \(n\)-dimensional). On the other hand, the algebras \(\mathcal{A}_{2-\frac{1}{n}}\) for \(n\in\mathbb{N}\) are not simple, as they are isomorphic to \(\mathbb{C}^{4}\otimes\mathrm{M}_{n}(\mathbb{C})\). Non-simplicity is the origin of the intricacies in the proof of Theorem 5.2 and the auxiliary results. Finally, with considerable effort, the authors of [13] also establish that their self-tests are _robust_. Such robustness analysis is omitted in this paper; nevertheless, there is no obstruction to the techniques of [13, Section 6] implying robust versions of the newly presented self-tests.

**Corollary 5.4**.: _The following states and binary projective measurements can be self-tested by 4-input 2-output bipartite strategies for every \(n\in\mathbb{N}\):_

1. _maximally entangled state of local dimension_ \(n\)_;_
2. _binary projective measurement determined by an_ \(n\times n\) _projection with rank in_ \[\left\{\left\lceil\frac{n}{2}\right\rceil,\ \left\lfloor\frac{n}{2}\right\rfloor-(-1)^{n},\ \left\lceil\frac{n}{2}\right\rceil+(-1)^{n}\right\}.\]

### Self-testing projective measurements

Next we introduce a two-parametric family of 5-input 2-output strategies that self-test binary PVMs of all dimensions and ranks (Theorem 5.10). These strategies are obtained from the 4-input 2-output strategies of Subsection 5.1 by the principle of post-hoc self-testing. 
The key criterion enabling the application of post-hoc self-testing was derived in [10]. Given an invertible hermitian matrix \(X\in\mathrm{M}_{n}(\mathbb{C})\) let \(\mathrm{sgn}(X)\in\mathrm{M}_{n}(\mathbb{C})\) be the unique hermitian unitary matrix that commutes with \(X\) and satisfies \(\mathrm{sgn}(X)X\succ 0\). Equivalently, \(\mathrm{sgn}(X)\) is the unitary part of the polar decomposition of \(X\); in other words, \(\operatorname{sgn}\) is the matrix extension of the usual sign function via functional calculus. This map plays a role in the following post-hoc self-testing criterion established in [13].

**Proposition 5.5**.: _[_13_, Proposition 3.7]_ _Suppose \(P,P_{i},Q_{j}\in\operatorname{M}_{n}(\mathbb{R})\) for \(i=1,\ldots,N_{A}\) and \(j=1,\ldots,N_{B}\) are projections, and the \((N_{A},N_{B})\)-input \((2,2)\)-output strategy_

\[\left(\left|\phi_{n}\right\rangle;(P_{i},I-P_{i})_{i=1}^{N_{A}};(Q_{i},I-Q_{i})_{i=1}^{N_{B}}\right)\]

_is self-tested by its correlation. If_

\[2P-I\in\operatorname{sgn}\Big{(}\operatorname{GL}_{n}(\mathbb{R})\cap\operatorname{span}_{\mathbb{R}}\{I,Q_{1},\ldots,Q_{N_{B}}\}\Big{)},\]

_then the \((N_{A},N_{B}+1)\)-input \((2,2)\)-output strategy_

\[\left(\left|\phi_{n}\right\rangle;(P_{i},I-P_{i})_{i=1}^{N_{A}},(P,I-P);(Q_{i},I-Q_{i})_{i=1}^{N_{B}}\right)\]

_is self-tested by its correlation._

**Proposition 5.6**.: _Let \(n,r\in\mathbb{N}\) with \(r\leq n\). The projection_

\[\mathfrak{Q}^{(n,r)}:=\frac{1}{2}\bigg{(}I+\operatorname{sgn}\Big{(}(2r-\tfrac{1}{2})I-n\big{(}\mathfrak{P}_{3}^{(n)}+\mathfrak{P}_{4}^{(n)}\big{)}\Big{)}\bigg{)}\in\operatorname{M}_{n}(\mathbb{R})\]

_has rank \(r\)._

Proof.: The matrix \(\mathfrak{Q}^{(n,r)}\) is a projection by definition of the map \(\operatorname{sgn}\). By Proposition 4.4, the matrix \(n(\mathfrak{P}_{3}^{(n)}+\mathfrak{P}_{4}^{(n)})\) has eigenvalues \(\{0,2,\ldots,2n-2\}\) if \(n\) is odd and \(\{1,3,\ldots,2n-1\}\) if \(n\) is even. 
Therefore \((2r-\frac{1}{2})I-n(\mathfrak{P}_{3}^{(n)}+\mathfrak{P}_{4}^{(n)})\) has \(r\) positive eigenvalues and \(n-r\) negative eigenvalues. Consequently, the multiplicities of the eigenvalues \(1\) and \(-1\) of \(\operatorname{sgn}((2r-\frac{1}{2})I-n(\mathfrak{P}_{3}^{(n)}+\mathfrak{P}_{4}^{(n)}))\) are \(r\) and \(n-r\), respectively. Hence the rank of \(\mathfrak{Q}^{(n,r)}\) is \(r\).

**Remark 5.7**.: For \(r\leq n\) let \(\left|e_{1}\right\rangle,\ldots,\left|e_{r}\right\rangle\in\mathbb{R}^{n}\) be unit eigenvectors of \(\mathfrak{P}_{3}^{(n)}+\mathfrak{P}_{4}^{(n)}\) corresponding to the smallest \(r\) eigenvalues in increasing order (note that the \(\left|e_{i}\right\rangle\) are uniquely determined up to a sign because \(\mathfrak{P}_{3}^{(n)}+\mathfrak{P}_{4}^{(n)}\) has \(n\) distinct eigenvalues). Then

\[\mathfrak{Q}^{(n,r)}=\left|e_{1}\right\rangle\!\left\langle e_{1}\right|+\cdots+\left|e_{r}\right\rangle\!\left\langle e_{r}\right|.\]

While this is arguably a simpler and computationally more convenient definition of \(\mathfrak{Q}^{(n,r)}\) than the original in Proposition 5.6, the presentation in terms of the \(\operatorname{sgn}\) map is critical in establishing the self-test of Theorem 5.10 below.

**Remark 5.8**.: Let us determine the normalized traces of \(\mathfrak{P}_{i}^{(n)}\mathfrak{Q}^{(n,r)}\) for \(r<\frac{n}{2}\). Clearly, \(\tau\left(\mathfrak{Q}^{(n,r)}\right)=\frac{r}{n}\). By Proposition 3.1 there exists a unitary \(U\in\operatorname{M}_{n}(\mathbb{C})\) such that \(U\mathfrak{P}_{3}^{(n)}U^{*}=\mathfrak{P}_{4}^{(n)}\) and \(U\mathfrak{P}_{4}^{(n)}U^{*}=\mathfrak{P}_{3}^{(n)}\); since \(U\) then commutes with \(\mathfrak{P}_{3}^{(n)}+\mathfrak{P}_{4}^{(n)}\), and hence with its spectral projection \(\mathfrak{Q}^{(n,r)}\), in particular \(\operatorname{tr}\Big{(}\mathfrak{P}_{3}^{(n)}\mathfrak{Q}^{(n,r)}\Big{)}=\operatorname{tr}\Big{(}\mathfrak{P}_{4}^{(n)}\mathfrak{Q}^{(n,r)}\Big{)}\). 
Thus

\[\tau\left(\mathfrak{P}_{i}^{(n)}\mathfrak{Q}^{(n,r)}\right)=\frac{1}{2}\tau\left(\mathfrak{Q}^{(n,r)}\big{(}\mathfrak{P}_{3}^{(n)}+\mathfrak{P}_{4}^{(n)}\big{)}\mathfrak{Q}^{(n,r)}\right)=\frac{r}{2n^{2}}\left(r-\frac{1-(-1)^{n}}{2}\right)\]

for \(i=3,4\) by Proposition 4.4, since \(\operatorname{tr}(\mathfrak{Q}^{(n,r)}(\mathfrak{P}_{3}^{(n)}+\mathfrak{P}_{4}^{(n)})\mathfrak{Q}^{(n,r)})\) is the sum of the smallest \(r\) eigenvalues of \(\mathfrak{P}_{3}^{(n)}+\mathfrak{P}_{4}^{(n)}\) by Remark 5.7. Since \(r<\frac{n}{2}\), Proposition 4.5 and Remark 5.7 imply \(\operatorname{tr}(\mathfrak{Q}^{(n,r)}\mathfrak{P}_{1}^{(n)}\mathfrak{Q}^{(n,r)})=\operatorname{tr}(\mathfrak{Q}^{(n,r)}\mathfrak{P}_{2}^{(n)}\mathfrak{Q}^{(n,r)})\). As \(\mathfrak{P}_{1}^{(n)}+\cdots+\mathfrak{P}_{4}^{(n)}=(2-\frac{1}{n})I\), we then obtain

\[\tau\left(\mathfrak{P}_{i}^{(n)}\mathfrak{Q}^{(n,r)}\right)=\frac{1}{2}\left(\left(2-\frac{1}{n}\right)\tau\left(\mathfrak{Q}^{(n,r)}\right)-\tau\left(\mathfrak{P}_{3}^{(n)}\mathfrak{Q}^{(n,r)}\right)-\tau\left(\mathfrak{P}_{4}^{(n)}\mathfrak{Q}^{(n,r)}\right)\right)\]

for \(i=1,2\).

**Definition 5.9**.: Given \(n,r\in\mathbb{N}\) with \(r<n\), let \(\mathfrak{P}_{i}^{(n)}\) be as in Proposition 3.1, and let \(\mathfrak{Q}^{(n,r)}\) be as in Proposition 5.6. 
Let \(\mathcal{S}_{n,r}\) be the 5-input 2-output bipartite strategy

\[\left(\left|\phi_{n}\right\rangle;\left(\mathfrak{P}_{i}^{(n)},I-\mathfrak{P}_{i}^{(n)}\right)_{i=1}^{4},(\mathfrak{Q}^{(n,r)},I-\mathfrak{Q}^{(n,r)});\left(\mathfrak{P}_{i}^{(n)},I-\mathfrak{P}_{i}^{(n)}\right)_{i=1}^{4},(\mathfrak{Q}^{(n,r)},I-\mathfrak{Q}^{(n,r)})\right).\]

Since \(\mathcal{S}_{n,r}\) is an extension of \(\mathcal{S}_{n}\), its correlation is determined by that of \(\mathcal{S}_{n}\) and

\[p(1|5)=\left\langle\phi_{n}\right|\mathfrak{Q}^{(n,r)}\otimes I\left|\phi_{n}\right\rangle=\left\langle\phi_{n}\right|I\otimes\mathfrak{Q}^{(n,r)}\left|\phi_{n}\right\rangle=\tau\left(\mathfrak{Q}^{(n,r)}\right),\]
\[p(1,1|i,5)=\left\langle\phi_{n}\right|\mathfrak{P}_{i}^{(n)}\otimes\mathfrak{Q}^{(n,r)}\left|\phi_{n}\right\rangle=\tau\left(\mathfrak{P}_{i}^{(n)}\mathfrak{Q}^{(n,r)}\right)\]

for \(i=1,\ldots,4\), which are computed in Remark 5.8.

**Theorem 5.10**.: _The strategy \(\mathcal{S}_{n,r}\) is self-tested by its correlation for all \(n,r\in\mathbb{N}\) with \(r<n\)._

Proof.: By Theorem 5.2, the strategy \(\mathcal{S}_{n}\) is self-tested by its correlation. By Proposition 5.6, the reflection \(2\mathfrak{Q}^{(n,r)}-I\) lies in the image of \(\operatorname{GL}_{n}(\mathbb{R})\cap\operatorname{span}_{\mathbb{R}}\{I,\mathfrak{P}_{1}^{(n)},\ldots,\mathfrak{P}_{4}^{(n)}\}\) under the map \(\operatorname{sgn}\). Therefore \(\mathcal{S}_{n,r}\) is self-tested by its correlation by Proposition 5.5.

**Corollary 5.11**.: _Every binary projective measurement appears in a 5-input 2-output strategy that is self-tested by its correlation._

Proof.: Every binary PVM is, up to a unitary basis change, determined by its dimension and the ranks of its projections. Therefore it suffices to consider measurements \((\mathfrak{Q}^{(n,r)},I-\mathfrak{Q}^{(n,r)})\), and these appear in the 5-input 2-output strategies \(\mathcal{S}_{n,r}\), self-tested by Theorem 5.10. Finally, we generalize Theorem 5.10 to arbitrary \(K\)-PVMs. 
Given \(r_{1},\ldots,r_{K},n\in\mathbb{N}\) with \(n=r_{1}+\cdots+r_{K}\), Remark 5.7 shows that

\[\mathfrak{Q}_{a}^{(r_{1},\ldots,r_{K})}:=\mathfrak{Q}^{(n,r_{1}+\cdots+r_{a})}-\mathfrak{Q}^{(n,r_{1}+\cdots+r_{a-1})}\]

is a projection of rank \(r_{a}\) for every \(a=1,\ldots,K\), and

\[\left(\mathfrak{Q}_{a}^{(r_{1},\ldots,r_{K})}\right)_{a=1}^{K}\]

is a \(K\)-PVM. To it we assign a certain bipartite strategy with a mixed number of inputs and outputs.

**Definition 5.12**.: Let \(r_{1},\ldots,r_{K},n\in\mathbb{N}\) with \(n=r_{1}+\cdots+r_{K}\). We define a bipartite strategy \(\mathcal{S}_{r_{1},\ldots,r_{K}}\) that has 4 inputs with 2 outputs and 1 input with \(K\) outputs for the first party, and 4 inputs with 2 outputs for the second party:

\[\mathcal{S}_{r_{1},\ldots,r_{K}}=\left(\left|\phi_{n}\right\rangle;\left(\mathfrak{P}_{i}^{(n)},I-\mathfrak{P}_{i}^{(n)}\right)_{i=1}^{4},\left(\mathfrak{Q}_{a}^{(r_{1},\ldots,r_{K})}\right)_{a=1}^{K};\left(\mathfrak{P}_{i}^{(n)},I-\mathfrak{P}_{i}^{(n)}\right)_{i=1}^{4}\right).\]

**Corollary 5.13**.: _Let \(r_{1},\ldots,r_{K},n\in\mathbb{N}\) with \(n=r_{1}+\cdots+r_{K}\) be arbitrary. Then the strategy \(\mathcal{S}_{r_{1},\ldots,r_{K}}\) is self-tested by its correlation._

_In particular, every single \(K\)-PVM appears in a self-tested strategy that has 8 inputs with 2 outputs and 1 input with \(K\) outputs._

Proof.: Let

\[\mathcal{S}=\left(\left|\psi\right\rangle;\left(P_{i},I-P_{i}\right)_{i=1}^{4},\left(R_{a}\right)_{a=1}^{K};\left(Q_{i},I-Q_{i}\right)_{i=1}^{4}\right)\]

be a bipartite strategy with the same correlation as \(\mathcal{S}_{r_{1},\ldots,r_{K}}\). 
Define bipartite strategies that have \(3+K\) inputs with 2 outputs for the first party, and 4 inputs with 2 outputs for the second party:

\[\widetilde{\mathcal{S}}=\left(\left|\phi_{n}\right\rangle;\left(\mathfrak{P}_{i}^{(n)},I-\mathfrak{P}_{i}^{(n)}\right)_{i=1}^{4},\left(\mathfrak{Q}^{(n,r_{1}+\cdots+r_{a})},I-\mathfrak{Q}^{(n,r_{1}+\cdots+r_{a})}\right)_{a=1}^{K-1};\left(\mathfrak{P}_{i}^{(n)},I-\mathfrak{P}_{i}^{(n)}\right)_{i=1}^{4}\right),\]
\[\mathcal{S}^{\prime}=\left(\left|\psi\right\rangle;\left(P_{i},I-P_{i}\right)_{i=1}^{4},\left(R_{1}+\cdots+R_{a},I-\left(R_{1}+\cdots+R_{a}\right)\right)_{a=1}^{K-1};\left(Q_{i},I-Q_{i}\right)_{i=1}^{4}\right).\]

Since the projections \(\mathfrak{Q}^{(n,r_{1}+\cdots+r_{a})}\) lie in the image of the span of \(\{\mathfrak{P}_{i}^{(n)}\}_{i=1}^{4}\) under the map \(\operatorname{sgn}\) by Proposition 5.6, and the strategy \(\mathcal{S}_{n}\) is self-tested by Theorem 5.2, the strategy \(\widetilde{\mathcal{S}}\) is self-tested by a repeated application of Proposition 5.5. Therefore \(\widetilde{\mathcal{S}}\) is a local dilation of \(\mathcal{S}^{\prime}\). The same local isometries and the ancillary state show that \(\mathcal{S}_{r_{1},\ldots,r_{K}}\) is a local dilation of \(\mathcal{S}\).

### Correlation formulae

While the correlations of the strategies \(\mathcal{S}_{n}\) and \(\mathcal{S}_{n,r}\) for \(r<\frac{n}{2}\) are essentially described by Remarks 3.3 and 5.8, we record their explicit formulae in this subsection.

#### 5.3.1. Correlations self-testing maximally entangled states

Strategy \(\mathcal{S}_{n}\) from Definition 5.1 gives rise to the correlation \(p\) satisfying \(p(a,b|i,j)=p(b,a|j,i)\) for all \(a,b\in\{1,2\}\) and \(i,j\in\{1,\ldots,4\}\). 
It is determined by the vector

\[\left(p(1|i)\right)_{i=1}^{4}=\left(\tfrac{\lfloor\frac{n}{2}\rfloor-(-1)^{n}}{n}\quad\tfrac{\lfloor\frac{n}{2}\rfloor}{n}\quad\tfrac{\lfloor\frac{n}{2}\rfloor}{n}\quad\tfrac{\lfloor\frac{n}{2}\rfloor}{n}\right)\]

and the symmetric matrix

\[\left(p(1,1|i,j)\right)_{i,j=1}^{4}=\left(\begin{matrix}\tfrac{\lfloor\frac{n}{2}\rfloor-(-1)^{n}}{n}&\tfrac{(n-1)(\lfloor\frac{n}{2}\rfloor-(-1)^{n})}{3n^{2}}&\tfrac{(n-1)(\lfloor\frac{n}{2}\rfloor-(-1)^{n})}{3n^{2}}&\tfrac{(n-1)(\lfloor\frac{n}{2}\rfloor-(-1)^{n})}{3n^{2}}\\ \cdot&\tfrac{\lfloor\frac{n}{2}\rfloor}{n}&\tfrac{(n-1)(2n-1+3(-1)^{n})}{12n^{2}}&\tfrac{(n-1)(2n-1+3(-1)^{n})}{12n^{2}}\\ \cdot&\cdot&\tfrac{\lfloor\frac{n}{2}\rfloor}{n}&\tfrac{(n-1)(2n-1+3(-1)^{n})}{12n^{2}}\\ \cdot&\cdot&\cdot&\tfrac{\lfloor\frac{n}{2}\rfloor}{n}\end{matrix}\right),\]

as computed in Remark 3.3.

#### 5.3.2. Correlations self-testing binary projective measurements

Let \(n,r\in\mathbb{N}\) with \(r<n\). If \(r=\frac{n}{2}\), then a binary projective measurement of dimension \(n\) and rank \(r\) is up to a unitary basis change contained in the self-tested strategy \(\mathcal{S}_{n}\), whose correlation is given in Subsection 5.3.1. Otherwise, a binary projective measurement of dimension \(n\) and rank \(r\) is contained, up to a unitary basis change and a reordering of outputs, in \(\mathcal{S}_{n,r}\) or \(\mathcal{S}_{n,n-r}\). For the purpose of self-testing it therefore suffices to determine the correlation of \(\mathcal{S}_{n,r}\) for \(r<\frac{n}{2}\). Thus assume \(r<\frac{n}{2}\). 
Since \(\mathcal{S}_{n,r}\) from Definition 5.9 is an extension of \(\mathcal{S}_{n}\), its correlation is determined by \(p(1,1|i,j)\) and \(p(1|i)\) for \(i=1,\ldots,4\) from Subsection 5.3.1, together with \[p(1|5)=\frac{r}{n}\] and \[\Big{(}p(1,1|i,5)\Big{)}_{i=1}^{4}=\Big{(}\tfrac{r(4n-2r-1-(-1)^{n})}{4n^{2}} \quad\tfrac{r(4n-2r-1-(-1)^{n})}{4n^{2}}\quad\tfrac{r(2r-1+(-1)^{n})}{4n^{2}} \quad\tfrac{r(2r-1+(-1)^{n})}{4n^{2}}\Big{)}\,,\] as computed in Remark 5.8. For self-testing general \(K\)-PVMs using \(\mathcal{S}_{r_{1},\ldots,r_{K}}\), one can derive similar (yet more complex) formulae using Remark 5.7, and Propositions 4.4 and 4.5. ## 6. Obstructions to constant-sized self-tests In a sense, maximally entangled states of all dimensions and single binary projective measurements of all dimensions and ranks can be self-tested with a constant number of inputs and outputs because they form discrete families of objects (i.e., they are parameterized by finitely many natural parameters). On the other hand, there are no constant-sized self-tests for all entangled states, nor for all pairs of binary projective measurements, as implied by the results of this section (for self-tests with varying numbers of inputs, see [10] and [14]). The local dimension of subsystems in a quantum strategy is not directly responsible for the absence of constant-sized self-tests; rather, dimensions of parameter spaces describing states and pairs of binary projective measurements are the obstructions to existence of uniform self-tests. The proofs of statements in this section rely on notions from real algebraic geometry [1]. By the singular value decomposition, every bipartite \(|\psi\rangle\in\mathbb{C}^{n}\otimes\mathbb{C}^{n}\) is, up to a left-right unitary basis change, equal to \[\sum_{i=1}^{n}c_{i}\,|i\rangle|i\rangle\] for \(c_{i}\geq 0\) and \(\sum_{i=1}^{n}c_{i}^{2}=1\). The numbers \(c_{i}\) are the _Schmidt coefficients_ of \(|\psi\rangle\). 
For example, all the Schmidt coefficients of \(|\phi_{n}\rangle\) are \(\frac{1}{\sqrt{n}}\). Note that \(|\psi\rangle\) has full Schmidt rank if and only if \(c_{i}>0\) for all \(i\).

**Proposition 6.1**.: _Let \(L,K,N\in\mathbb{N}\) satisfy_

\[L>(N(K-1)+1)^{2}.\]

_Then for all \(d_{1},\ldots,d_{L}\in\mathbb{N}\) there exists a bipartite state with \(L\) distinct Schmidt coefficients of multiplicities \(d_{1},\ldots,d_{L}\) that cannot be self-tested with \(N\) inputs and \(K\) outputs._

Proof.: Let \(\mathbf{A}\) denote the set of all \(N\)-input \(K\)-output bipartite quantum strategies whose states are of the form

\[\left|\psi\right\rangle=\sum_{\ell=1}^{L}\lambda_{\ell}\sum_{i=d_{1}+\cdots+d_{\ell-1}+1}^{d_{1}+\cdots+d_{\ell}}\left|i\right\rangle\!\left|i\right\rangle,\qquad\lambda_{1}<\cdots<\lambda_{L} \tag{9}\]

where the empty sum \(d_{1}+\cdots+d_{0}\) is interpreted as \(0\). In particular, the states in strategies from \(\mathbf{A}\) have full Schmidt rank and \(L\) distinct Schmidt coefficients of multiplicities \(d_{1},\ldots,d_{L}\). Consider the action of \(G:=\mathrm{U}_{d_{1}}(\mathbb{C})\times\cdots\times\mathrm{U}_{d_{L}}(\mathbb{C})\) on \(\mathbf{A}\), given by

\[U\cdot\left(\left|\psi\right\rangle;(\mathcal{M}_{i})_{i};(\mathcal{N}_{j})_{j}\right)=\left(U\otimes U\left|\psi\right\rangle;(U\mathcal{M}_{i}U^{*})_{i};(U\mathcal{N}_{j}U^{*})_{j}\right)\]

for \(U=\oplus_{\ell=1}^{L}U_{\ell}\in G\). Note that \(G\) encodes precisely all actions of local unitaries that preserve the form (9) of states in strategies from \(\mathbf{A}\). Let \(\mathbf{B}\) be the quotient of \(\mathbf{A}\) with respect to the action of \(G\), and let \(\pi:\mathbf{A}\rightarrow\mathbf{B}\) be the canonical projection. Given \(\mathcal{S}\in\mathbf{A}\) let \(f(\mathcal{S})\in\mathbb{R}^{d_{1}+\cdots+d_{L}}\otimes\mathbb{R}^{d_{1}+\cdots+d_{L}}\) be its state (i.e., \(f\) is the projection onto the first component of the strategy). 
To \(\mathcal{S}=(\left|\psi\right\rangle;(\mathcal{M}_{i})_{i};(\mathcal{N}_{j})_{ j})\) we also assign a tuple \(g(\mathcal{S})\in\mathbb{R}^{(N(K-1)+1)^{2}-1}\) consisting of \[\left\langle\psi\right|\mathcal{M}_{i,a}\otimes\mathcal{N}_{j,b} \left|\psi\right\rangle,\qquad i,j=1,\ldots,N,\ a,b=1,\ldots,K-1,\] \[\left\langle\psi\right|\mathcal{M}_{i,a}\otimes I\left|\psi \right\rangle,\qquad i=1,\ldots,N,\ a=1,\ldots,K-1,\] \[\left\langle\psi\right|I\otimes\mathcal{N}_{j,b}\left|\psi \right\rangle,\qquad j=1,\ldots,N,\ b=1,\ldots,K-1.\] Note that \(g(\mathcal{S})\) determines the correlation of \(\mathcal{S}\). The set \(\mathbf{A}\) is semialgebraic and the maps \(f,g\) are semialgebraic [1, Section 2]. Furthermore, \(\mathbf{B}\) is semialgebraic by [1, Proposition 2.2.4] since \(G\) is a semialgebraic group. The maps \(f,g\) factor through \(\pi\), in the sense that there are semialgebraic maps \(f^{\prime},g^{\prime}\) on \(\mathbf{B}\) satisfying \(f^{\prime}\circ\pi=f\) and \(g^{\prime}\circ\pi=g\). Let \(\mathbf{C}\subseteq\mathbf{B}\) be the set of equivalence classes \([\mathcal{S}]\) such that \(g^{\prime-1}(\{g^{\prime}([\mathcal{S}])\})=\{[\mathcal{S}]\}\). Then \(\mathbf{C}\) is also semialgebraic by [1, Proposition 2.2.4]. Note that if \(\mathcal{S}\in\mathbf{A}\) is self-tested by its correlation then \(\pi(\mathcal{S})\in\mathbf{C}\). Observe that \(\dim f^{\prime}(\mathbf{B})=L-1\), and \(\dim\mathbf{C}=\dim g^{\prime}(\mathbf{C})\leq(N(K-1)+1)^{2}-1\) by [1, Theorem 2.8.8] since \(g^{\prime}|_{\mathbf{C}}\) is injective. Surjectivity of \(f^{\prime}|_{\mathbf{C}}\) would imply \(\dim\mathbf{C}\geq L-1\), contradicting \(L-1>(N(K-1)+1)^{2}-1\). Therefore \(f^{\prime}|_{\mathbf{C}}\) is not surjective. In particular, there exists a state \(\left|\psi\right\rangle\) of the form (9) such that \(\pi(\mathcal{S})\notin\mathbf{C}\) for every \(\mathcal{S}\in f^{-1}(\{\left|\psi\right\rangle\})\). 
In particular, no \(N\)-input \(K\)-output strategy containing \(\left|\psi\right\rangle\) is self-tested by its correlation. By the renowned theorem of Halmos [12], a pair of projections \(P_{1},P_{2}\in\mathrm{M}_{n}(\mathbb{C})\) is, up to a unitary basis change, equal to

\[P_{1}=\varepsilon_{1}\oplus\cdots\oplus\varepsilon_{o}\oplus\begin{pmatrix}1&0\\ 0&0\end{pmatrix}\oplus\cdots\oplus\begin{pmatrix}1&0\\ 0&0\end{pmatrix},\]
\[P_{2}=\varepsilon_{1}^{\prime}\oplus\cdots\oplus\varepsilon_{o}^{\prime}\oplus\begin{pmatrix}\frac{1}{2}+\frac{1}{2}\cos\alpha_{1}&\frac{1}{2}\sin\alpha_{1}\\ \frac{1}{2}\sin\alpha_{1}&\frac{1}{2}-\frac{1}{2}\cos\alpha_{1}\end{pmatrix}\oplus\cdots\oplus\begin{pmatrix}\frac{1}{2}+\frac{1}{2}\cos\alpha_{L}&\frac{1}{2}\sin\alpha_{L}\\ \frac{1}{2}\sin\alpha_{L}&\frac{1}{2}-\frac{1}{2}\cos\alpha_{L}\end{pmatrix}, \tag{10}\]

where \(\varepsilon_{i},\varepsilon_{i}^{\prime}\in\{0,1\}\) and \(\alpha_{\ell}\in(0,\frac{\pi}{2})\). The number of distinct \(2\times 2\) blocks in (10) equals the number of distinct positive eigenvalues of \(i(P_{1}P_{2}-P_{2}P_{1})\).

**Proposition 6.2**.: _Let \(L,N\in\mathbb{N}\) satisfy \(L>(N+1)^{2}\). Then for all \(d_{0},d_{1},\ldots,d_{L}\in\mathbb{N}\) there exists a pair of binary projective measurements \((P_{1},I-P_{1}),(P_{2},I-P_{2})\) with \(L\) distinct \(2\times 2\) blocks in (10) with multiplicities \(d_{1},\ldots,d_{L}\), and \(d_{0}\) blocks of size \(1\times 1\), that cannot be self-tested with \(N\) inputs and \(2\) outputs._

Proof.: We proceed analogously as in the proof of Proposition 6.1. The set \(\mathbf{A}\) consists of \(N\)-input \(2\)-output strategies whose first two measurements are given by projections of the form (10) with \(L\) angles \(\alpha_{\ell}\) of multiplicities \(d_{1},\ldots,d_{L}\). Let \(f:\mathbf{A}\to\mathrm{M}_{d_{0}+2(d_{1}+\cdots+d_{L})}(\mathbb{R})^{2}\) be the projection onto the pair of projections defining the first two measurements in a strategy. 
The group \(G\) consists of all unitaries preserving the structure of (10). Then \(g,\mathbf{B},\mathbf{C}\) are defined similarly as in the proof of Proposition 6.1, and the same dimension arguments apply.

## Appendix A Distinguished projections in low dimensions

Let us describe the recursive construction of \(\mathfrak{P}_{i}^{(n)}\) in a more concrete way, which requires only matrix arithmetic and Gram-Schmidt orthogonalization.

\((n=1)\) Set \(\mathfrak{P}_{1}^{(1)}:=1\) and \(\mathfrak{P}_{i}^{(1)}:=0\) for \(i=2,3,4\).

\((n\to n+1)\) Given \(\mathfrak{P}_{1}^{(n)},\ldots,\mathfrak{P}_{4}^{(n)}\) let:

* \(U_{i}\) be an \(n\times\operatorname{rk}(I-\mathfrak{P}_{i}^{(n)})\) matrix whose columns form an orthonormal basis of the column space of \(I-\mathfrak{P}_{i}^{(n)}\);
* \(V_{i}\) be an \(\operatorname{rk}(I-\mathfrak{P}_{i}^{(n)})\times(n+1)\) matrix, such that the columns of the stacked matrix \(\begin{pmatrix}V_{1}\\ \vdots\\ V_{4}\end{pmatrix}\) form an orthonormal basis of the column space of \[I-\frac{1}{2+\frac{1}{n}}\begin{pmatrix}U_{1}^{*}\\ \vdots\\ U_{4}^{*}\end{pmatrix}\begin{pmatrix}U_{1}&\cdots&U_{4}\end{pmatrix}.\]

Then set \(\mathfrak{P}_{i}^{(n+1)}:=(2-\frac{1}{n+1})V_{i}^{*}V_{i}\). As a demonstration, we construct \(\mathfrak{P}_{1}^{(n)},\ldots,\mathfrak{P}_{4}^{(n)}\) for \(n\leq 5\). 
\(n=1\): \((1),\ (0),\ (0),\ (0)\) \(n=2\): \[\begin{pmatrix}0&0\\ 0&0\end{pmatrix},\ \begin{pmatrix}1&0\\ 0&0\end{pmatrix},\ \begin{pmatrix}\frac{1}{4}&-\frac{\sqrt{3}}{4}\\ -\frac{\sqrt{3}}{4}&\frac{3}{4}\end{pmatrix},\ \begin{pmatrix}\frac{1}{4}&\frac{ \sqrt{3}}{4}\\ \frac{\sqrt{3}}{4}&\frac{3}{4}\end{pmatrix}\] \(n=3\): \[\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&0\end{pmatrix},\ \begin{pmatrix}0&0&0\\ 0&\frac{4}{9}&-\frac{2\sqrt{5}}{9}\\ 0&-\frac{2\sqrt{5}}{9}&\frac{5}{9}\end{pmatrix},\ \begin{pmatrix}\frac{1}{3}& \frac{1}{3\sqrt{3}}&\frac{\sqrt{5}}{3\sqrt{3}}\\ \frac{1}{3\sqrt{3}}&\frac{1}{9}&\frac{\sqrt{5}}{9}\\ \frac{\sqrt{5}}{3\sqrt{3}}&\frac{\sqrt{5}}{9}&\frac{5}{9}\end{pmatrix},\ \begin{pmatrix}\frac{1}{3}&-\frac{1}{3\sqrt{3}}&-\frac{ \sqrt{5}}{3\sqrt{3}}\\ -\frac{1}{3\sqrt{3}}&\frac{1}{9}&\frac{\sqrt{5}}{9}\\ -\frac{\sqrt{5}}{3\sqrt{3}}&\frac{\sqrt{5}}{9}&\frac{5}{9}\end{pmatrix}\] \(n=4\): \[\begin{pmatrix}1&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix},\ \begin{pmatrix}\frac{1}{4}&0&-\frac{\sqrt{3}}{4}&0\\ 0&1&0&0\\ -\frac{\sqrt{3}}{4}&0&\frac{3}{4}&0\\ 0&0&0&0\end{pmatrix},\] \[\begin{pmatrix}\frac{1}{4}&-\frac{\sqrt{15}}{16}&\frac{\sqrt{3}}{8}&\frac{ \sqrt{21}}{16}\\ -\frac{\sqrt{15}}{16}&\frac{3}{8}&-\frac{3\sqrt{5}}{16}&0\\ \frac{\sqrt{3}}{8}&-\frac{3\sqrt{5}}{16}&\frac{1}{2}&-\frac{\sqrt{7}}{16}\\ \frac{\sqrt{21}}{16}&0&-\frac{\sqrt{7}}{16}&\frac{7}{8}\end{pmatrix},\ \begin{pmatrix}\frac{1}{4}&\frac{\sqrt{15}}{16}&\frac{\sqrt{3}}{8}&-\frac{ \sqrt{21}}{16}\\ \frac{\sqrt{15}}{16}&\frac{3}{8}&\frac{3\sqrt{5}}{16}&0\\ \frac{\sqrt{3}}{8}&\frac{3\sqrt{5}}{16}&\frac{1}{2}&\frac{\sqrt{7}}{16}\\ -\frac{\sqrt{21}}{16}&0&\frac{\sqrt{7}}{16}&\frac{7}{8}\end{pmatrix}\] \(n=5\): \[\begin{pmatrix}1&0&0&0&0\\ 0&1&0&0&0\\ 0&0&1&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\end{pmatrix},\ \begin{pmatrix}0&0&0&0&0\\ 0&\frac{4}{25}&0&-\frac{2\sqrt{21}}{25}&0\\ 0&0&\frac{16}{25}&0&-\frac{12}{25}\\ 0&-\frac{2\sqrt{21}}{25}&0&\frac{21}{25}&0\\ 
0&0&-\frac{12}{25}&0&\frac{9}{25}\end{pmatrix},\] \[\begin{pmatrix}\frac{2}{5}&\frac{3}{5\sqrt{5}}&0&\frac{\sqrt{21}}{5\sqrt{5}}&0\\ \frac{3}{5\sqrt{5}}&\frac{8}{25}&\frac{\sqrt{7}}{25}&\frac{\sqrt{21}}{25}&\frac{3\sqrt{7}}{25}\\ 0&\frac{\sqrt{7}}{25}&\frac{2}{25}&-\frac{\sqrt{3}}{25}&\frac{6}{25}\\ \frac{\sqrt{21}}{5\sqrt{5}}&\frac{\sqrt{21}}{25}&-\frac{\sqrt{3}}{25}&\frac{12}{25}&-\frac{3\sqrt{3}}{25}\\ 0&\frac{3\sqrt{7}}{25}&\frac{6}{25}&-\frac{3\sqrt{3}}{25}&\frac{18}{25}\end{pmatrix},\ \begin{pmatrix}\frac{2}{5}&-\frac{3}{5\sqrt{5}}&0&-\frac{\sqrt{21}}{5\sqrt{5}}&0\\ -\frac{3}{5\sqrt{5}}&\frac{8}{25}&-\frac{\sqrt{7}}{25}&\frac{\sqrt{21}}{25}&-\frac{3\sqrt{7}}{25}\\ 0&-\frac{\sqrt{7}}{25}&\frac{2}{25}&\frac{\sqrt{3}}{25}&\frac{6}{25}\\ -\frac{\sqrt{21}}{5\sqrt{5}}&\frac{\sqrt{21}}{25}&\frac{\sqrt{3}}{25}&\frac{12}{25}&\frac{3\sqrt{3}}{25}\\ 0&-\frac{3\sqrt{7}}{25}&\frac{6}{25}&\frac{3\sqrt{3}}{25}&\frac{18}{25}\end{pmatrix}\] To obtain \(\mathfrak{Q}^{(n,r)}\), one computes \(\mathfrak{Q}^{(n,r)}=\sum_{i=1}^{r}|e_{i}\rangle\langle e_{i}|\) where \(|e_{i}\rangle\) are unit eigenvectors of \(\mathfrak{P}_{3}^{(n)}+\mathfrak{P}_{4}^{(n)}\) corresponding to the \(r\) smallest eigenvalues in increasing order. Examples for \(r<n\leq 5\) are given below. 
\(n=2\), \(r=1\):

\[\begin{pmatrix}1&0\\ 0&0\end{pmatrix}\]

\(n=3\), \(r=1,2\):

\[\begin{pmatrix}0&0&0\\ 0&\frac{5}{6}&-\frac{\sqrt{5}}{6}\\ 0&-\frac{\sqrt{5}}{6}&\frac{1}{6}\end{pmatrix},\ \begin{pmatrix}1&0&0\\ 0&\frac{5}{6}&-\frac{\sqrt{5}}{6}\\ 0&-\frac{\sqrt{5}}{6}&\frac{1}{6}\end{pmatrix}\]

\(n=4\), \(r=1,2,3\):

\[\begin{pmatrix}\frac{3}{4}&0&-\frac{\sqrt{3}}{4}&0\\ 0&0&0&0\\ -\frac{\sqrt{3}}{4}&0&\frac{1}{4}&0\\ 0&0&0&0\end{pmatrix},\ \begin{pmatrix}\frac{3}{4}&0&-\frac{\sqrt{3}}{4}&0\\ 0&1&0&0\\ -\frac{\sqrt{3}}{4}&0&\frac{1}{4}&0\\ 0&0&0&0\end{pmatrix},\ \begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&0\end{pmatrix}\]

\(n=5\), \(r=1,2,3,4\):

\[\begin{pmatrix}0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&\frac{9}{10}&0&-\frac{3}{10}\\ 0&0&0&0&0\\ 0&0&-\frac{3}{10}&0&\frac{1}{10}\end{pmatrix},\ \begin{pmatrix}0&0&0&0&0\\ 0&\frac{7}{10}&0&-\frac{\sqrt{21}}{10}&0\\ 0&0&\frac{9}{10}&0&-\frac{3}{10}\\ 0&-\frac{\sqrt{21}}{10}&0&\frac{3}{10}&0\\ 0&0&-\frac{3}{10}&0&\frac{1}{10}\end{pmatrix},\]
\[\begin{pmatrix}1&0&0&0&0\\ 0&\frac{7}{10}&0&-\frac{\sqrt{21}}{10}&0\\ 0&0&\frac{9}{10}&0&-\frac{3}{10}\\ 0&-\frac{\sqrt{21}}{10}&0&\frac{3}{10}&0\\ 0&0&-\frac{3}{10}&0&\frac{1}{10}\end{pmatrix},\ \begin{pmatrix}1&0&0&0&0\\ 0&1&0&0&0\\ 0&0&\frac{9}{10}&0&-\frac{3}{10}\\ 0&0&0&1&0\\ 0&0&-\frac{3}{10}&0&\frac{1}{10}\end{pmatrix}\]
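The recursion of Appendix A is easy to run numerically. The following NumPy sketch is one possible realization (not the authors' code); since the orthonormal bases are only determined up to rotation, its output agrees with the tables above up to a simultaneous orthogonal change of basis, while invariants such as ranks and normalized traces of products match exactly.

```python
import numpy as np

def col_basis(M, tol=1e-9):
    """Orthonormal basis (as columns) of the column space of M."""
    u, s, _ = np.linalg.svd(M)
    return u[:, :int((s > tol).sum())]

def next_level(Ps):
    """One step of the recursion P^(n) -> P^(n+1)."""
    n = Ps[0].shape[0]
    # U_i: orthonormal basis of the column space of I - P_i.
    Us = [col_basis(np.eye(n) - P) for P in Ps]
    U = np.hstack(Us)                              # n x (2n+1)
    M = np.eye(2 * n + 1) - U.T @ U / (2 + 1 / n)  # a projection of rank n+1
    V = col_basis(M)                               # (2n+1) x (n+1)
    # Split the rows of V into blocks matching the column counts of the U_i.
    sizes = [Ui.shape[1] for Ui in Us]
    blocks = np.split(V, np.cumsum(sizes)[:-1], axis=0)
    return [(2 - 1 / (n + 1)) * Vi.T @ Vi for Vi in blocks]

def projections(n):
    """P_1^(n), ..., P_4^(n), up to a simultaneous orthogonal basis change."""
    Ps = [np.array([[1.0]]), np.zeros((1, 1)), np.zeros((1, 1)), np.zeros((1, 1))]
    for _ in range(n - 1):
        Ps = next_level(Ps)
    return Ps
```

For instance, `projections(5)` returns projections of ranks \(3,2,2,2\) summing to \((2-\frac{1}{5})I\), matching the tables.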
arXiv:2306.04682 (http://arxiv.org/abs/2306.04682v1), published 2023-06-07

Tunable superdiffusion in integrable spin chains using correlated initial states

Hansveer Singh, Michael H. Kolodrubetz, Sarang Gopalakrishnan, Romain Vasseur
# Tunable superdiffusion in integrable spin chains using correlated initial states

###### Abstract

Although integrable spin chains only host ballistically propagating particles, they can still feature diffusive spin transport. This diffusive spin transport originates from quasiparticle charge fluctuations inherited from the Gaussian magnetization fluctuations of the initial state. We show that ensembles of initial states with quasi-long-range correlations lead to superdiffusive spin transport with a tunable dynamical exponent. We substantiate our prediction with numerical simulations and explain how deviations arise from finite-time and finite-size effects.

_Introduction_--In lattice quantum systems, typical initial states far from equilibrium relax according to diffusive hydrodynamics. After a local equilibration step, one expects that the only data from the initial state that survive in local observables are those which determine conjugate thermodynamic quantities, e.g. local temperature and chemical potential. The remaining evolution from local to global equilibrium is governed by the classical theory of hydrodynamics, which generically predicts diffusive transport for lattice systems. However, this intuition breaks down in kinetically constrained dynamics [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13], which can exhibit both sub- and superdiffusion, or avoid thermalization altogether [14], and in integrable systems, where initial-state fluctuations in the densities of infinitely many conserved quantities play a prominent role in governing hydrodynamics. In integrable systems one has an extensive number of extensive conserved quantities or, equivalently, stable ballistically propagating quasiparticles at finite energy density [15; 16; 17; 18]. Although such systems are in principle fine-tuned, they are of considerable interest since current experiments can engineer systems that are approximately integrable [19; 20; 21; 22; 23; 24; 25; 26; 27]. 
Naively, stable ballistically propagating quasiparticles should always lead to ballistic transport. However, this does not always hold, as highlighted by the \(|\Delta|>1\) regime of the spin-1/2 XXZ chain, where one finds diffusive spin transport [28; 29; 30; 31; 32; 33; 34; 35]. One can understand the origin of this diffusion as coming from two ingredients: ballistic motion of the quasiparticles (magnons and boundstates thereof) and fluctuations in the quasiparticle's charge, which is "screened" by the magnetization fluctuations of the initial state. That is to say, if the magnetization density fluctuations in a region of size \(\ell\) scale as \(\ell^{-w}\), where \(w\) is known as the wandering exponent, then when a quasiparticle travels a distance \(\ell\) the fluctuations of its charge over that distance also scale as \(\ell^{-w}\). For initial states drawn from a thermal ensemble, these fluctuations obey the central limit theorem (\(w=1/2\)), and using this fact along with the ballistic motion of the quasiparticles one can show that this leads to diffusive spin transport [32].

This suggests that tuning the wandering exponent of the initial state should modify the nature of spin transport. In this work we demonstrate that this is indeed the case by constructing an ensemble of initial states with tunable wandering exponent whose fluctuations grow faster than a thermal ensemble's. Based on the charge-screening argument outlined above, we show that these states should display superdiffusive transport, and indeed find that this is the case by numerically computing the variance of the charge transfer. The variance of the charge transfer grows in a power-law fashion, and we provide an argument for the expected exponent of this power law. We find relatively good agreement with the numerical results and explain how deviations arise from finite-size effects.
_Screening Argument_.-- To illustrate our argument, we study transport in the folded XXZ automaton [36], which qualitatively captures transport features of the easy-axis regime and can be thought of as the \(\Delta\to\infty\) limit of the quantum spin-1/2 XXZ chain [37; 38]. A desirable feature of this model compared to the spin-1/2 XXZ chain is that product states in the occupation basis are mapped to product states, and thus we can perform simulations to long times with large system sizes. Our argument will rely solely on a quasiparticle picture, and we expect it to generalize to the entire diffusive regime \(|\Delta|>1\) of the XXZ spin chain.

Figure 1: **Folded XXZ Automaton.** (a) Circuit geometry for the XXZ automaton follows a staircase pattern. (b) The rules for the updates in the automaton. One can see that they conserve the number of domain walls. (c) A highlighted magnon trajectory traversing ballistically through domains.

The system comprises \(L\) qubits whose individual basis states are \(|\bullet\rangle\) (particle) and \(|\circ\rangle\) (hole). The unitary governing the dynamics is given by \(\mathcal{U}=V_{3}V_{2}V_{1}\) where \(V_{j}=\prod_{i\equiv j\,\text{mod }3}U_{i,i+1,i+2,i+3}\) and \[\begin{split} U_{i,i+1,i+2,i+3}&=P^{\bullet}_{i} \text{SWAP}_{i+1,i+2}P^{\bullet}_{i+3}\\ &+P^{\circ}_{i}\text{SWAP}_{i+1,i+2}P^{\circ}_{i+3}\\ &+P^{\circ}_{i}P^{\bullet}_{i+3}+P^{\bullet}_{i}P^{\circ}_{i+3}, \end{split} \tag{1}\] where \(P^{\circ}=|\circ\rangle\langle\circ|\) and \(P^{\bullet}=|\bullet\rangle\langle\bullet|\), conserving both particle and domain-wall number. A pictorial representation of the circuit along with the update rule performed by the gates is shown in Fig. 1. This model is integrable and has three types of quasiparticles: left- and right-moving magnons (single isolated particles or holes), and frozen domains which arise from domain-wall conservation.
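Because the automaton is classical, the dynamics can be simulated directly on bit strings. A minimal sketch (assuming 0-indexed sites, periodic boundary conditions, and a chain length divisible by 3 — conventions the text does not fix):

```python
import numpy as np

def apply_gate(state, i):
    """U_{i,i+1,i+2,i+3}: swap the two middle sites iff the two
    boundary sites agree; otherwise act as the identity."""
    L = len(state)
    a, b, c, d = i % L, (i + 1) % L, (i + 2) % L, (i + 3) % L
    if state[a] == state[d]:
        state[b], state[c] = state[c], state[b]

def step(state):
    """One time step U = V3 V2 V1, where V_j applies the gate
    at every site i = j (mod 3); V1 acts first."""
    L = len(state)
    for j in (1, 2, 0):
        for i in range(j, L, 3):
            apply_gate(state, i)

def domain_walls(state):
    """Count particle/hole domain walls on the periodic chain."""
    return int(np.sum(state != np.roll(state, -1)))
```

Applying `step` repeatedly to a random configuration leaves both the particle number and `domain_walls` unchanged, which is exactly the conservation law that freezes fully polarized domains.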
To understand transport in integrable systems we essentially have to keep track of how much charge has been spread by the magnons. To track the amount of charge a magnon spreads we compute \(\delta x_{\text{charge}}(t)^{2}=\langle(q_{\text{mag}}(t)x_{\text{mag}}(t))^{2}\rangle\), where \(q_{\text{mag}}\) denotes the charge of the magnon, \(x_{\text{mag}}\) denotes the distance the magnon has traveled, and \(\langle\cdot\rangle\) denotes averaging over the ensemble of initial states. In integrable systems magnons traverse ballistically with a well-defined velocity, \(v\) (we imagine tracking a single species), so \[\delta x_{\text{charge}}(t)^{2}=v^{2}t^{2}\langle q_{\text{mag}}(t)^{2}\rangle. \tag{2}\] On average the magnon's charge will be zero at infinite temperature, but the variance will be non-zero. As shown in Fig. 1, magnons change their charge by passing through frozen domains. For example, if a magnon is a particle prior to collision, it will change to a hole while traversing the next domain (see Fig. 1). This means a magnon's charge fluctuations are entirely specified by the pattern of frozen domains in the initial state. Fluctuations in the pattern of frozen domains are determined by the initial-state magnetization density, \(m\), defined such that a particle/hole has charge \(\pm 1\). Thus, fluctuations are characterized by the wandering exponent, \(w\), i.e. in a region of size \(\ell\), \(m\sim\ell^{-w}\). The charge of the magnon therefore scales in the same way as the initial-state magnetization density fluctuations: if a magnon moves a distance \(\ell\) in a time \(t\), then \(\langle q_{\text{mag}}^{2}\rangle\sim\ell^{-2w}\sim t^{-2w}\). Therefore \[\delta x_{\text{charge}}(t)\sim t^{1-w}. \tag{3}\] This argument was first used in Ref. [32] and applied to thermal states, where \(w=1/2\) correctly predicts diffusive spin transport.
In the next section we provide a procedure to generate states with tunable wandering exponent \(w<1\), which implies superdiffusive spin transport.

_Correlated States_.-- Based on the screening argument, generating states which do not obey the central limit theorem should lead to anomalous transport. One way to do so is to create states that are spatially correlated. This can be achieved by forming product states comprised of contiguous fully polarized domains (e.g. \(|\bullet\bullet\bullet\bullet\circ\circ\circ\circ\bullet\bullet \bullet\circ\circ\rangle\)) that have unbounded lengths. To do so we draw the lengths of these domains from Lévy alpha-stable distributions (although any distribution with power-law tails should work) [39]. These are distributions with power-law tails that can be tuned via the so-called stability parameter. To be more precise, let \(S(y;\alpha,c,\beta,\mu)\) denote the stable distribution with stability parameter \(\alpha\), scale parameter \(c\), skewness \(\beta\), and mean \(\mu\). In this work \(\mu=0\), \(\beta=0\), and \(1<\alpha\leq 2\). For large \(y\), \(S(y)\sim y^{-(\alpha+1)}\). Given a random variable \(y\) distributed according to \(S\), the domain length of the \(i\)th domain is given by \(x_{i}=\text{floor}(|y|)\). For \(1<\alpha\leq 2\) the average of \(y\) is well defined, and we note that the average of \(|y|\) for \(\beta=\mu=0\) is given by \(\frac{2}{\pi}c^{1/\alpha}\Gamma(1-1/\alpha)\), where \(\Gamma(z)\) is the usual gamma function, so the scale parameter determines the average domain size. We randomly choose each domain to be fully filled (empty) with equal probability. To determine the wandering exponent of these states we compute the magnetization ((un)occupied sites have charge \(\pm 1\)) in a region of size \(\ell\), i.e. \(M=\sum_{i=1}^{\ell}\sigma_{i}\) with \(\sigma_{i}=\pm 1\).
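This construction is easy to implement with `scipy.stats.levy_stable`. In the sketch below, rejecting zero-length draws and truncating the final domain to fit the chain are implementation choices not fixed by the text:

```python
import numpy as np
from scipy.stats import levy_stable

def correlated_state(L, alpha, c=1.0, rng=None):
    """Product state of fully polarized domains: domain lengths are
    floor(|y|) with y ~ S(alpha, beta=0, mu=0, scale=c), and each
    domain is filled or empty with equal probability."""
    rng = rng or np.random.default_rng()
    sites = []
    while len(sites) < L:
        y = levy_stable.rvs(alpha, 0.0, loc=0.0, scale=c,
                            random_state=rng)
        n = int(abs(y))          # floor(|y|)
        if n == 0:
            continue             # discard zero-length domains
        sites.extend([int(rng.integers(0, 2))] * n)
    return np.array(sites[:L])
```

For \(\alpha\to 2\) the stable distribution becomes Gaussian and the domains have a characteristic size, recovering thermal-like (\(w=1/2\)) fluctuations; smaller \(\alpha\) produces heavier tails and occasional very long domains.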
Let \(\tau_{i}\) denote the polarization of the \(i\)th domain (\(\pm 1\) for occupied and unoccupied, respectively). We can rewrite the sum as \(M=\sum_{i=1}^{k}x_{i}\tau_{i}+(\ell-\sum_{i=1}^{k}x_{i})\tau_{k+1}\), where the last term accounts for the fact that the \((k+1)\)th domain may only partially fit inside the region. Let the magnetization of a domain be \(m_{i}=x_{i}\tau_{i}\). Then the distribution of \(m_{i}\) is symmetric, since \(\tau_{i}=\pm 1\) with equal probability, and has power-law tails inherited from the domain-length distribution. Thus we can invoke the generalized central limit theorem to show that \(\langle M^{2}\rangle\sim k^{2/\alpha}\). For the stability parameters we consider, the mean is well defined, and so the typical domain lengths will be controlled by \(c\). For \(c\sim O(1)\), we expect that the number of domains \(k\sim\ell\). Thus \(\langle M^{2}\rangle\sim\ell^{2/\alpha}\). We conclude that the wandering exponent is given by \(w=1-1/\alpha\). For \(1<\alpha\leq 2\), this means that magnetization fluctuations decay more slowly than for thermal states, consistent with the fact that these states have larger correlations. Based on the screening argument, this implies that charge spreads as \(t^{1/\alpha}\), which indicates superdiffusive transport. In the next section we demonstrate that this is indeed the case by computing transport observables.

_Transport and numerics_.-- To characterize transport in this special class of initial states, we cannot use the standard linear-response Kubo formalism (which relies on special properties of thermal states). Instead, we diagnose charge transport via the variance of the so-called charge transfer \(Q(t)\), \[Q(t)=\sum_{x\leq 0}(n_{x}(t)-n_{x}(0))-\sum_{x\geq 0}(n_{x}(t)-n_{x}(0)), \tag{4}\] with \(n_{x}=0,1\) the particle density on site \(x\). The charge transfer keeps track of how much charge has been transferred across the origin at time \(t\).
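Eq. (4) is straightforward to evaluate on occupation snapshots. A sketch, taking `origin` as the array index of site \(x=0\) (an indexing convention left implicit in the text; the \(x=0\) terms appear in both sums and cancel):

```python
import numpy as np

def charge_transfer(n0, nt, origin):
    """Q(t) per Eq. (4): change of particle number at sites x <= 0
    minus the change at sites x >= 0.  Each particle crossing the
    origin contributes -2 or +2 depending on its direction."""
    n0, nt = np.asarray(n0), np.asarray(nt)
    left = np.sum(nt[:origin + 1] - n0[:origin + 1])   # x <= 0
    right = np.sum(nt[origin:] - n0[origin:])          # x >= 0
    return int(left - right)
```

For example, a single particle moving from the left half to the right half yields \(Q=-2\), consistent with the sign convention of Eq. (4).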
The full distribution of the charge transfer is known as the full counting statistics, and it not only captures linear response but also details about fluctuations on top of the average hydrodynamic behavior [40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52]. Experimental setups can access the full distribution via snapshots corresponding to projective measurements on the whole system [53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72]. In this work, we will only be focusing on the behavior of the variance, \(\langle Q^{2}\rangle-\langle Q\rangle^{2}\), for states at half filling, i.e. \(\mu=0\). When \(\mu=0\), \(\langle Q\rangle=0\), but \(\langle Q^{2}\rangle\neq 0\) and is generally expected to grow in a power-law fashion, \(\langle Q^{2}\rangle\sim t^{1/z_{\rm var}}\), with \(z_{\rm var}=2\) for diffusive systems. In this section we numerically compute \(\langle Q^{2}\rangle\) and extract \(z_{\rm var}\) via a log-derivative, i.e. \(z_{\rm var}(t)=\bigg{(}\frac{d\log\langle Q^{2}\rangle}{d\log t}\bigg{)}^{-1}\). Our results are shown in Fig. 2. One can see that when initial states are drawn from the correlated ensemble, \(\langle Q^{2}\rangle\) grows faster than for initial states drawn from a thermal ensemble at infinite temperature, which is consistent with the screening argument. We extracted \(z_{\rm var}\) for various \(\alpha\), and one can clearly see that tuning \(\alpha\) does indeed change the behavior of \(\langle Q^{2}\rangle\), consistent with the fact that tuning fluctuations of the initial state corresponds to tuning transport.

_Theory of charge transfer._-- To understand the behavior of \(Q(t)\) we first consider initial states where the magnetization density \(m_{x}=(2n_{x}-1)/2\) for \(x\leq 0\) (\(x>0\)) is \(m/2\) (\(-m/2\)) and the imbalance, \(m\), is large.
In this limit one has a low density of magnons in each half of the chain, so there is a well-defined domain wall that separates the two halves of the chain. For example, an initial state might look like where the squiggly line indicates the location of the domain wall which separates the halves of the chain. Due to domain-wall conservation, the only time charge can be transferred is if a particle or hole moves across the origin. At some later time the magnon will cross the origin and our state will look like

Notice that the domain wall that separated the halves of the chain moves two sites in the opposite direction in which the magnon moved. The key observation that was made in Ref. [51] is that charge transfer is linked to the motion of this domain wall that separates the two halves. If at time \(t\) the domain wall is at \(x_{\rm dw}(t)\), and if \(m_{[0,x_{\rm dw}]}\) is the magnetization density in the region between the origin and the location of the domain wall, then \[Q(t)=-2m_{[0,x_{\rm dw}(t)]}x_{\rm dw}(t). \tag{5}\] When \(m\to 0\) there is not a well-defined notion of a domain wall that separates the two halves of the chain. However, one can imagine a fictitious domain wall which is located at the origin and moves in the same fashion as one would expect when \(m\simeq 1\). For instance, suppose we have the following initial state, where the squiggly line indicates the location of our fictitious domain wall initially at the origin. At the next time step the hole next to the origin crosses over to the left half of the system and our state will look like

The domain wall moves two steps in the opposite direction in which the magnon propagated. Demanding that the fictitious domain wall move in this manner implies that the charge transfer satisfies Eq. 5. From Eq. 5, we can now relate the dynamical exponent of the charge-transfer variance to the motion of the domain wall. On average the domain wall will be at the origin at half filling, i.e. \(\langle x_{\rm dw}(t)\rangle=0\), since the number of left and right movers is equal, but \(\langle x_{\rm dw}(t)^{2}\rangle\neq 0\) and is expected to grow as a power law, \(\langle x_{\rm dw}(t)^{2}\rangle\sim t^{2/z_{\rm dw}}\). Using the fact that the magnetization density over a distance \(|x_{\rm dw}|\) scales as \(m\sim|x_{\rm dw}|^{-1+1/\alpha}\), the variance of the charge transfer should obey the following scaling relation, \[\langle Q(t)^{2}\rangle\sim t^{2/\alpha z_{\rm dw}}. \tag{6}\] To obtain \(z_{\rm dw}\) we note that the domain wall only moves when a magnon "hits" it, and whenever a magnon "hits" the domain wall its charge changes. Thus the variance in time of \(x_{\rm dw}\) should follow the same time dependence as the variance in the amount of charge a magnon has spread, i.e. \(\delta x_{\rm charge}(t)\). Since \(\delta x_{\rm charge}^{2}\sim t^{2-2w}=t^{2/\alpha}\), we have \(\langle x_{\rm dw}(t)^{2}\rangle\sim t^{2/\alpha}\). Using this result along with Eq. 6 implies \(z_{\rm var}=\alpha^{2}/2\). From Fig. 2, we see relatively good agreement with our prediction.

Figure 2: **Charge Variance vs Stability Parameter.** (a) Variance of the charge transfer as a function of time shown for stability parameter \(\alpha=1.4\), compared to initial states drawn from an infinite-temperature, i.e. uncorrelated, ensemble. Note that the fluctuations for \(\alpha=1.4\) grow faster than those in the thermal ensemble, indicating superdiffusive behavior. (b) Dynamical exponents, \(z_{\rm var}\), versus time obtained via log-derivatives for two different values of \(\alpha\). (c) \(z_{\rm var}\) obtained via log-derivatives at a late time, \(t=500\). The orange point \(\alpha=2\) is obtained by sampling states from an infinite-temperature ensemble. One sees that the prediction \(z_{\rm var}=\alpha^{2}/2\) is roughly consistent with the numerics. The data is averaged over \(10^{7}\) initial states.
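The log-derivative extraction of the dynamical exponent shown in Fig. 2 can be sketched numerically, using the convention \(\langle Q^{2}\rangle\sim t^{1/z_{\rm var}}\), so that \(z_{\rm var}\) is the inverse of the log-log slope:

```python
import numpy as np

def z_from_log_derivative(t, q2_var):
    """z_var(t) as the inverse log-log slope of <Q^2>(t),
    computed with finite differences on non-uniform spacing."""
    slope = np.gradient(np.log(q2_var), np.log(t))
    return 1.0 / slope
```

For synthetic diffusive data, \(\langle Q^{2}\rangle=\sqrt{t}\), this returns \(z_{\rm var}=2\) at every point; for \(\langle Q^{2}\rangle\sim t^{2/\alpha^{2}}\) it returns \(\alpha^{2}/2\), matching the prediction above.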
Numerical deviations from this prediction can be understood from two finite-size/finite-time effects: the motion of the domain wall and the dependence of the dynamical exponents on \(c\). The first comes from the fact that in a time \(t\) the domain wall will have only moved a distance \(t^{1/\alpha}\), and thus the domain wall cannot efficiently probe the tails of the distribution. The second comes from the fact that the mean domain size is tuned by the scale parameter, \(c\); at long times and distances one expects the dependence on \(c\) to drop out, and only the tails of the distributions should matter. As shown in Fig. 3, we see that \(z_{\rm var}\) and \(z_{\rm dw}\) display a dependence on the scale parameter, with the more severe dependence in the latter case presumably coming from the fact that the domain wall does not probe the tails of the distribution well enough and thus is much more prone to strong effects from the scale parameter.

_Discussion_.-- In this work we demonstrated that anisotropic integrable spin chains provide a route towards tunable transport by tuning initial-state magnetization fluctuations. We gave a procedure to generate states with quasi-long-range correlations using Lévy alpha-stable distributions, and we demonstrated that the fluctuations in this ensemble of correlated states are tuned by the stability parameter that sets the tails of these stable distributions. Using the variance of the charge transfer, we found superdiffusive transport in these initial states, tuned by the stability parameter, consistent with the screening argument set forth in Ref. [32]. Furthermore, we provided a theoretical prediction for the dynamical exponent controlling the growth of the charge variance via the motion of a "domain wall" and the magnetization density in the region between the position of the domain wall and the position at which the charge transfer is measured.
We find good agreement between our prediction and numerical simulations, with some finite-size effects. We showed that these finite-size effects come about from the domain wall's inability to efficiently probe the tails of these stable distributions at short times. As such, the domain-wall motion is severely affected by the average domain size that it probes, which is controlled by the scale parameter of the Lévy alpha-stable distributions. Surprisingly, the time scale at which the dependence on \(c\) goes away is different for the charge variance and the domain-wall motion. This suggests that finite-size effects in the magnetization density cancel out the finite-size effects in the motion of the "domain wall." It would be interesting to find different ensembles of correlated states that are able to probe the tails more effectively. While we only presented results for superdiffusive transport, in principle one can also achieve tunable subdiffusive transport. For this to happen one would need to generate states which have fluctuations that drop off faster than those satisfying the central limit theorem. Such states, also known as hyperuniform states, would need to have domains that are anti-correlated. A second interesting route would be to go away from the \(\Delta\to\infty\) limit and probe transport in the \(\Delta>1\) regime of the spin-1/2 XXZ chain using these correlated initial states. In this regime fully polarized domains are no longer completely frozen but only remain frozen for a time \(t\sim\Delta^{s-1}\), where \(s\) is the domain length. The dynamics of the spin-1/2 XXZ chain is accessible in experiments [73], and so it would also be interesting to explore this tunable transport experimentally, since these correlated initial states are product states. Approaching the isotropic \(\Delta=1\) limit should be particularly interesting, as it already shows superdiffusive transport with \(z=3/2\) for thermal states [23, 74, 75, 76, 34].

Figure 3: **Dynamical Exponent Finite Size Effects.** (a) One sees that at short times the dynamical exponent of the variance of the charge differs for different scale parameters but at longer times converges to the same value. Here data was averaged over \(10^{7}\) initial states. (b) The domain-wall dynamical exponent as a function of time features much stronger finite-size effects, as the exponent does not appear to converge until \(t\sim 10^{3}\), an order of magnitude later than for the variance of the charge transfer. Here results were averaged over \(10^{6}\) realizations.

_Acknowledgements_.-- We thank Vedika Khemani for stimulating discussions. This work was supported by the National Science Foundation under NSF Grants No. DMR-2103938 (S.G.), DMR-2104141 (R.V.), DMR-1945529 (M.K.), the Welch Foundation through award number AT-2036-20200401 (M.K.), and the Alfred P. Sloan Foundation through a Sloan Research Fellowship (R.V.).
2303.16458
When to Pre-Train Graph Neural Networks? From Data Generation Perspective!
In recent years, graph pre-training has gained significant attention, focusing on acquiring transferable knowledge from unlabeled graph data to improve downstream performance. Despite these recent endeavors, the problem of negative transfer remains a major concern when utilizing graph pre-trained models to downstream tasks. Previous studies made great efforts on the issue of what to pre-train and how to pre-train by designing a variety of graph pre-training and fine-tuning strategies. However, there are cases where even the most advanced "pre-train and fine-tune" paradigms fail to yield distinct benefits. This paper introduces a generic framework W2PGNN to answer the crucial question of when to pre-train (i.e., in what situations could we take advantage of graph pre-training) before performing effortful pre-training or fine-tuning. We start from a new perspective to explore the complex generative mechanisms from the pre-training data to downstream data. In particular, W2PGNN first fits the pre-training data into graphon bases, each element of graphon basis (i.e., a graphon) identifies a fundamental transferable pattern shared by a collection of pre-training graphs. All convex combinations of graphon bases give rise to a generator space, from which graphs generated form the solution space for those downstream data that can benefit from pre-training. In this manner, the feasibility of pre-training can be quantified as the generation probability of the downstream data from any generator in the generator space. W2PGNN offers three broad applications: providing the application scope of graph pre-trained models, quantifying the feasibility of pre-training, and assistance in selecting pre-training data to enhance downstream performance. We provide a theoretically sound solution for the first application and extensive empirical justifications for the latter two applications.
Yuxuan Cao, Jiarong Xu, Carl Yang, Jiaan Wang, Yunchao Zhang, Chunping Wang, Lei Chen, Yang Yang
2023-03-29T05:05:02Z
http://arxiv.org/abs/2303.16458v4
# When to Pre-Train Graph Neural Networks?

###### Abstract

In recent years, graph pre-training has gained significant attention, focusing on acquiring transferable knowledge from unlabeled graph data to improve downstream performance. Despite these recent endeavors, the problem of negative transfer remains a major concern when utilizing graph pre-trained models to downstream tasks. Previous studies made great efforts on the issue of _what to pre-train_ and _how to pre-train_ by designing a variety of graph pre-training and fine-tuning strategies. However, there are cases where even the most advanced "pre-train and fine-tune" paradigms fail to yield distinct benefits. This paper introduces a generic framework W2PGNN to answer the crucial question of _when to pre-train_ (_i.e._, in what situations could we take advantage of graph pre-training) before performing effortful pre-training or fine-tuning. We start from a new perspective to explore the complex generative mechanisms from the pre-training data to downstream data. In particular, W2PGNN first fits the pre-training data into graphon bases, each element of graphon basis (_i.e._, a graphon) identifies a fundamental transferable pattern shared by a collection of pre-training graphs. All convex combinations of graphon bases give rise to a generator space, from which graphs generated form the solution space for those downstream data that can benefit from pre-training. In this manner, the feasibility of pre-training can be quantified as the generation probability of the downstream data from any generator in the generator space. W2PGNN offers three broad applications: providing the application scope of graph pre-trained models, quantifying the feasibility of pre-training, and assistance in selecting pre-training data to enhance downstream performance. We provide a theoretically sound solution for the first application and extensive empirical justifications for the latter two applications.
graph neural networks, graph pre-training

+ Footnote †: This work was done when the first author was a visiting student at Fudan University.

August 6-10, 2023, Long Beach, CA

molecular networks (unstable vs. stable in terms of chemical property) from those in social networks (stable vs. unstable in terms of social relationship); such distinct or reversed semantics does not contribute to transferability, and even exacerbates the problem of negative transfer. To avoid negative transfer, recent efforts focus on _what to pre-train_ and _how to pre-train_, _i.e._, designing or adopting graph pre-training models with a variety of self-supervised tasks to capture different patterns (Zhu et al., 2019; Zhang et al., 2020; Zhang et al., 2021) and fine-tuning strategies to enhance downstream performance (Zhu et al., 2019; Li et al., 2020; Zhang et al., 2021; Zhang et al., 2021). However, there do exist cases where, no matter how advanced the pre-training/fine-tuning method is, the transferability from pre-training data to downstream data still cannot be guaranteed. This is because the underlying assumption of deep learning models is that the test data should share a similar distribution with the training data. Therefore, it is a necessity to understand _when to pre-train_, _i.e._, under what situations the "graph pre-train and fine-tune" paradigm should be adopted.

Towards answering when to pre-train GNNs, one straightforward way, illustrated in Figure 1(a), is to train and evaluate all candidate pre-training models and fine-tuning strategies; the resulting best downstream performance would then tell us whether pre-training is a sensible choice. If there exist \(l_{1}\) pre-training models and \(l_{2}\) fine-tuning strategies, such a process would be very costly, as it requires \(l_{1}\times l_{2}\) "pre-train and fine-tune" attempts.
Another approach is to utilize graph metrics to measure the similarity between pre-training and downstream data, _e.g._, density, clustering coefficient, etc. However, it is a daunting task to enumerate all hand-engineered graph features or to find the dominant features that influence similarity. Moreover, graph metrics only measure the pair-wise similarity between two graphs, which cannot be directly and accurately applied to the practical scenario where the pre-training data contains multiple graphs.

In this paper, we propose the W2PGNN framework to answer _when to pre-train GNNs from a graph data generation perspective_. The high-level idea is that, instead of performing effortful graph pre-training/fine-tuning or making comparisons between the pre-training and downstream data, we study the complex generative mechanism from the pre-training data to the downstream data (Figure 1(b)). We say that downstream data can benefit from pre-training data (_i.e._, has high feasibility of performing pre-training) if it can be generated with high probability by a graph generator that summarizes the topological characteristics of the pre-training data. The major challenge is how to obtain an appropriate graph generator, hoping that it not only inherits the transferable topological patterns of the pre-training data but is also endowed with the ability to generate feasible downstream graphs. To tackle this challenge, we propose to design a graph generator based on graphons. We first fit the pre-training graphs into different graphons to construct a _graphon basis_, where each graphon (_i.e._, element of the graphon basis) identifies a collection of graphs that share common transferable patterns. We then define a _graph generator_ as a convex combination of elements in a graphon basis, which serves as a comprehensive and representative summary of the pre-training data.
All of these possible generators constitute the _generator space_, from which the graphs generated form the solution space for the downstream data that can benefit from pre-training. Accordingly, the feasibility of performing pre-training can be measured as the highest probability of the downstream data being generated by any graph generator in the generator space, which can be formulated as an optimization problem. However, this problem is still difficult to solve due to the large search space of graphon bases. We propose to reduce the search space to three candidate graphon bases, _i.e._, the topological graphon basis, domain graphon basis, and integrated graphon basis, to mimic different generation mechanisms from pre-training to downstream data. Built upon the reduced search space, the feasibility can be approximated efficiently. Our major contributions are summarized as follows:

* **Problem and method.** To the best of our knowledge, we are the first to study the problem of when to pre-train GNNs. We propose the W2PGNN framework to answer the question from a data generation perspective, which tells us the feasibility of performing graph pre-training before conducting effortful pre-training and fine-tuning.
* **Broad applications.** W2PGNN provides several practical applications: (1) provide the application scope of a graph pre-trained model, (2) measure the feasibility of performing pre-training for downstream data, and (3) choose the pre-training data so as to maximize downstream performance with limited resources.
* **Theory and experiment.** We theoretically and empirically justify the effectiveness of W2PGNN. Extensive experiments on real-world graph datasets from multiple domains show that the proposed method can provide an accurate estimation of pre-training feasibility, and that the selected pre-training data can benefit downstream performance.

## 2. Problem Formulation

In this section, we first formally define the problem of when to pre-train GNNs.
Then, we provide a brief theoretical analysis of the transferable patterns in the problem we study, and finally discuss some non-transferable patterns.

Definition 1 (When to pre-train GNNs).: _Given the pre-training graph data \(\mathcal{G}_{train}\) and the downstream graph data \(\mathcal{G}_{down}\), our main goal is to answer to what extent the "pre-train and fine-tune" paradigm can benefit the downstream data._

Figure 1. Comparison of existing methods and the proposed W2PGNN to answer _when to pre-train_ GNNs.

Note that in addition to this main problem, our proposed framework can also serve other scenarios, such as providing the application scope of graph pre-trained models and helping select pre-training data to benefit the downstream task (please refer to the _application cases_ in Section 4.1 for details).

**Transferable graph patterns.** The success of the "pre-train and fine-tune" paradigm is typically attributed to the commonalities between pre-training and downstream data. However, in real-world scenarios there can exist a significant divergence between the pre-training data and the downstream data. To answer the problem of when to pre-train GNNs, the primary task is to define the transferable patterns across graphs. We here theoretically explore which patterns are transferable between pre-training and downstream data under the performance guarantee of a graph pre-training model (with a GNN as the backbone).

Theorem 2.1 (Transferability of graph pre-training model).: _Let \(G_{\text{train}}\) and \(G_{\text{down}}\) be two (sub)graphs sampled from \(\mathcal{G}_{\text{train}}\) and \(\mathcal{G}_{\text{down}}\), and assume the attribute of each node is a scalar \(1\) without loss of generality.
Given a graph pre-training model \(e\) (instantiated as a GNN) with \(K\) layers and \(1\)-hop graph filter \(\Phi(L)\) (which is a function of the normalized graph Laplacian matrix \(L\)), we have_ \[\left\|e(G_{\text{train}})-e(G_{\text{down}})\right\|_{2}\leq\kappa\Lambda_{\text{loop}}\left(G_{\text{train}},G_{\text{down}}\right) \tag{1}\] _where \(\Lambda_{\text{loop}}\left(G_{\text{train}},G_{\text{down}}\right)=\frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\left\|L_{g_{i}}-L_{g^{\prime}_{j}}\right\|_{2}\) measures the topological divergence between \(G_{\text{train}}\) and \(G_{\text{down}}\), where \(g_{i}\) is the \(K\)-hop ego-network of node \(i\) from \(G_{\text{train}}\), \(g^{\prime}_{j}\) is the \(K\)-hop ego-network of node \(j\) from \(G_{\text{down}}\), and \(L_{g_{i}}\) is the corresponding normalized graph Laplacian matrix; \(m\) and \(n\) are the numbers of nodes of \(G_{\text{train}}\) and \(G_{\text{down}}\). \(e(G_{\text{train}})\) and \(e(G_{\text{down}})\) are the output representations of \(G_{\text{train}}\) and \(G_{\text{down}}\) from the graph pre-training model, and \(\kappa\) is a constant relevant to \(K\), the graph filter \(\Phi\), the learnable parameters of the GNN, and the activation function used in the GNN._

Detailed proofs and descriptions can be found in Appendix A.1. Theorem 2.1 suggests that two (sub)graphs sampled from pre-training and downstream data with similar topology are transferable via a graph pre-training model (_i.e._, they share similar representations produced by the model). Hence we consider the transferable graph pattern to be the topology of a (sub)graph, either node-level or graph-level. Specifically, the node-level transferable pattern could be the topology of the ego-network of a node (or the structural role of a node), irrespective of the node's exact location in the graph. The graph-level transferable pattern is the topology of the entire graph itself (_e.g._, a molecular network). Such transferable patterns constitute the input space introduced in Section 4.1.
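The topological divergence in Theorem 2.1 can be evaluated directly with `networkx`. Since ego-networks generally have different sizes, the sketch below zero-pads the normalized Laplacians to a common dimension before taking the spectral norm — a convention the theorem statement leaves implicit:

```python
import networkx as nx
import numpy as np

def ego_laplacian(G, node, k):
    """Normalized graph Laplacian of the k-hop ego-network of `node`."""
    g = nx.ego_graph(G, node, radius=k)
    return nx.normalized_laplacian_matrix(g).toarray()

def topological_divergence(G_train, G_down, k=2):
    """Mean spectral-norm distance over all pairs of k-hop
    ego-network Laplacians from the two graphs."""
    Ls_train = [ego_laplacian(G_train, v, k) for v in G_train]
    Ls_down = [ego_laplacian(G_down, v, k) for v in G_down]
    total = 0.0
    for A in Ls_train:
        for B in Ls_down:
            n = max(len(A), len(B))
            Ap, Bp = np.zeros((n, n)), np.zeros((n, n))
            Ap[:len(A), :len(A)] = A
            Bp[:len(B), :len(B)] = B
            total += np.linalg.norm(Ap - Bp, 2)
    return total / (len(Ls_train) * len(Ls_down))
```

The all-pairs loop over ego-networks dominates the cost, so for large graphs one would subsample nodes rather than enumerate every pair.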
Discussion of non-transferable graph patterns. As a remark, we show that two important pieces of information (_i.e._, attributes and proximity) commonly used in graph learning are not necessarily transferable across pre-training and downstream data in most real-world scenarios, thus we do not discuss them in this paper. First, although the attributes carry important semantic meaning in one graph, it can be shown that the attribute spaces of different graphs typically have little or no overlap at all. For example, if the pre-training and downstream data come from different domains, their nodes would indicate different types of entities and the corresponding attributes may be completely irrelevant. Even for graphs from the similar/same domain, the dimensions/meaning of their node attributes can also be totally different and result in misalignment. The proximity, on the other hand, assumes that closely connected nodes are similar, which also cannot be transferred across graphs. Obviously, this proximity assumption depends on the overlaps in neighborhoods and thus only works on graphs with the same or overlapping node set.

## 3. Preliminary and related works

Graphons. A graphon (short for graph function) is a bounded symmetric measurable function \(W:[0,1]^{2}\rightarrow[0,1]\), which can be viewed as the limit object of a convergent sequence of graphs; the value \(W(x,y)\) can be interpreted as the probability of an edge between two nodes with latent positions \(x\) and \(y\).

Measuring transferability. One straightforward solution is to enumerate the combinations of pre-training models and fine-tuning strategies, and then take the resulting best downstream performance as the transferability measure. However, as depicted in Figure 1(a), such an approach would be very costly, as it requires effortful pre-training and fine-tuning. Another way is based on graph properties, which leverages graph properties (_e.g._, degree (Bordes and Zisserman, 2017), density (Zisserman and Zisserman, 2017), assortativity (Kang et al., 2018), etc.) to measure the similarity between pre-training and downstream graphs, and can potentially be utilized to approximate the transferability. Some other works also focus on analyzing the transferability of GNNs theoretically (Kang et al., 2018; Li et al., 2019).
Nevertheless, they are limited to measuring the transferability of GNNs on a single graph or when training and testing data come from the same dataset (Kang et al., 2018; Li et al., 2019), which is inapplicable to our setting. A recent work, EGI (Kang et al., 2019), addresses the transferability measure problem of GNNs across graphs. However, EGI is a model-specific measure and depends on its own framework. For the first time, we study the transferability of graph pre-training from the data perspective, without performing any pre-training and fine-tuning.

## 4. Methodology

In this section, we first present our proposed framework W2PGNN to answer when to pre-train GNNs in Section 4.1. Based on the framework, we further introduce the measure of the feasibility of performing pre-training in Section 4.2. Then in Section 4.3, we discuss our approximation to the feasibility of pre-training. Finally, the complexity analysis of W2PGNN is provided in Section 4.4.

### Framework Overview

The W2PGNN framework provides a guide for answering _when to pre-train GNNs_ from a graph data generation perspective. The key insight is that if the downstream data can be generated with high probability by a graph generator that summarizes the pre-training data, the downstream data would present high feasibility of performing pre-training. The overall framework of W2PGNN can be found in Figure 2. Given the _input space_ consisting of pre-training graphs, we fit them into a graph generator in the _generator space_, from which the generated graphs constitute the _possible downstream space_. More specifically, an ideal graph generator should inherit different kinds of topological patterns, based on which new graphs can be induced. Therefore, we first construct a graphon basis \(\mathcal{B}=\{B_{1},B_{2},\cdots,B_{k}\}\), where each element \(B_{i}\) represents a graphon fitted from a set of (sub)graphs with similar patterns (_i.e._, the blue dots in Figure 2).
To access different combinations of the graphon basis, each \(B_{i}\) is assigned a corresponding weight \(\alpha_{i}\) (_i.e._, the width of the blue arrows in Figure 2), and their combination gives rise to a graph generator (_i.e._, the blue star). All weighted combinations compose the generator space \(\Omega\) (_i.e._, the gray surface), from which the generated graphs form the possible solution space of downstream data (shortened as the possible downstream space). Since the generated graphs are those that could benefit from the pre-training data, we say that they exhibit _high feasibility_ of performing pre-training. In the following, we introduce the workflow of W2PGNN in the input space, the generator space and the possible downstream space in detail. Then, the application cases of W2PGNN are given for different practical uses.

**Input space.** The input space of W2PGNN is composed of nodes' ego-networks or graphs. For node-level pre-training, we take the nodes' ego-networks to constitute the input space; for graph-level pre-training, we take the graphs (_e.g._, small molecular graphs) as the input space.

**Generator space.** As illustrated in Figure 2, each point (_i.e._, graph generator) in the generator space \(\Omega\) is a convex combination of the graphon basis \(\mathcal{B}=\{B_{1},B_{2},\cdots,B_{k}\}\). Formally, we define the graph generator as \[f(\{\alpha_{i}\},\{B_{i}\})=\sum_{i=1}^{k}\alpha_{i}B_{i},\ \ \text{where}\ \sum_{i=1}^{k}\alpha_{i}=1,\alpha_{i}\geq 0. \tag{2}\] Different choices of \(\{\alpha_{i}\},\{B_{i}\}\) comprise different graph generators. All possible generators constitute the _generator space_ \(\Omega=\{f(\{\alpha_{i}\},\{B_{i}\})\mid\forall\ \{\alpha_{i}\},\{B_{i}\}\}\). We shall also note that the graph generator \(f(\{\alpha_{i}\},\{B_{i}\})\) is indeed a mixed graphon (_i.e._, a mixture of \(k\) graphons \(\{B_{1},B_{2},\cdots,B_{k}\}\)), where each element \(B_{i}\) represents a graphon estimated from a set of similar pre-training (sub)graphs.
Furthermore, it can be theoretically justified that the mixed version still preserves the properties of graphons (c.f. Theorem 5.1) and the key transferable patterns inherited in \(B_{i}\) (c.f. Theorem 5.2). Thus the graph generator \(f(\{\alpha_{i}\},\{B_{i}\})\), _i.e._, the mixed graphon, can be considered as a representative and comprehensive summary of the pre-training data, from which unseen graphs with different combinations of transferable patterns can be induced.

**Possible downstream space.** All the graphs produced by the generators in the generator space \(\Omega\) could benefit from the pre-training, and finally form the possible downstream space. Formally, for each generator in the generator space \(\Omega\) (we denote it as \(f\) for simplicity), we can generate an \(n\)-node graph as follows. First, we independently sample a random latent variable for each node. Then for each pair of nodes, we assign an edge between them with probability equal to the value of the graphon at their randomly sampled points. The graph generation process can be formulated as \[\begin{split}&v_{1},v_{2},\cdots,v_{n}\sim\text{Uniform}([0,1]),\\ &A_{ij}\sim\text{Bernoulli}(f(v_{i},v_{j})),\quad\forall i,j\in\{1,2,\ldots,n\},\end{split} \tag{3}\]

Figure 2. Illustration of our proposed framework W2PGNN to answer when to pre-train GNNs.

where \(f(v_{i},v_{j})\in[0,1]\) indicates the corresponding value of the graphon at point \((v_{i},v_{j})\)1, and \(A_{ij}\in\{0,1\}\) indicates the existence of an edge between the \(i\)-th node and the \(j\)-th node. The adjacency matrix of the sampled graph \(G\) is denoted as \(A=[A_{ij}]\in\{0,1\}^{n\times n},\forall i,j\in[n]\). We summarize this generation process as \(G\gets f\).

Footnote 1: For simplicity, we slightly abuse the notations \(f(\cdot,\cdot)\).
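Equations (2) and (3) can be sketched directly in NumPy, representing each graphon in the basis as a discretized symmetric matrix with entries in \([0,1]\); the function names and the step-function discretization are assumptions of this illustration, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def mix_graphons(alphas, basis):
    """Generator f = sum_i alpha_i * B_i (Eq. 2); each B_i is a step-function
    graphon discretized as a symmetric matrix with entries in [0, 1]."""
    alphas = np.asarray(alphas, dtype=float)
    assert np.all(alphas >= 0) and abs(alphas.sum() - 1) < 1e-8
    return sum(a * B for a, B in zip(alphas, basis))

def sample_graph(f, n, rng=rng):
    """Sample an n-node graph from a discretized graphon f (Eq. 3):
    draw uniform latents v_i, then Bernoulli edges with prob f(v_i, v_j)."""
    r = f.shape[0]                        # resolution of the step function
    v = rng.uniform(size=n)               # v_i ~ Uniform([0, 1])
    idx = np.minimum((v * r).astype(int), r - 1)
    P = f[np.ix_(idx, idx)]               # f(v_i, v_j) for all node pairs
    U = rng.uniform(size=(n, n))
    A = (np.triu(U, 1) < np.triu(P, 1)).astype(int)
    return A + A.T                        # undirected, no self-loops
```

For instance, mixing a dense and a sparse block (`np.full((4, 4), 0.8)` and `np.full((4, 4), 0.2)`) with equal weights and sampling 30 nodes yields a symmetric 0/1 adjacency matrix whose edge density concentrates around 0.5.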
Note that \(f(\{\alpha_{i}\},\{B_{i}\})\) is a function of \(\{\alpha_{i}\}\) and \(\{B_{i}\}\), representing that the generator depends on \(\{\alpha_{i}\}\) and \(\{B_{i}\}\); meanwhile, each generator (_i.e._, mixed graphon) given fixed \(\{\alpha_{i}\}\) and \(\{B_{i}\}\) can be represented as a continuous, bounded and symmetric function \(f:[0,1]^{2}\rightarrow[0,1]\). Therefore, with all generators from the generator space \(\Omega\), the possible downstream space is defined as \(\mathcal{D}=\{G\gets f\mid f\in\Omega\}\). Note that for each \(\{\alpha_{i}\},\{B_{i}\}\), we have a generator \(f\); and for each generator, we also have different generated graphs. Besides, we theoretically justify that the generated graphs in the possible downstream space can inherit key transferable graph patterns in our generator (c.f. Theorem 5.3).

**Application cases.** The proposed framework is flexible to be adopted in different application scenarios when discussing the problem of when to pre-train GNNs.

* _Use case 1: provide a user guide of a graph pre-trained model._ The possible downstream space \(\mathcal{D}\) serves as a user guide of a graph pre-trained model, telling the application scope of graph pre-trained models (_i.e._, the possible downstream graphs that can benefit from the pre-training data).
* _Use case 2: estimate the feasibility of performing pre-training from pre-training data to downstream data._ Given a collection of pre-training graphs and a downstream graph, one can directly measure the feasibility of performing pre-training on the pre-training data, before conducting costly pre-training and fine-tuning attempts. By making such a pre-judgement of transferability, some unnecessary and expensive parameter optimization steps during model training and evaluation can be avoided.
* _Use case 3: select pre-training data to benefit the downstream_.
In some practical scenarios where the downstream data is provided (_e.g._, a company needs to boost the downstream performance of its business data), the feasibility of pre-training inferred by W2PGNN can be utilized to select data for pre-training, such that the downstream performance can be maximized with limited resources.

Use case 1 can be directly served by our produced possible downstream space \(\mathcal{D}\). However, how to measure the feasibility of pre-training in use cases 2 and 3 still remains a key challenge. In the following sections, we introduce the formal definition of the feasibility of pre-training and its approximate solution.

### Feasibility of Pre-training

If a downstream graph can be generated with a higher probability by some generator in the generator space \(\Omega\), then the graph could benefit more from the pre-training data. We therefore define the feasibility of performing pre-training as the highest probability of the downstream data being generated from a generator in \(\Omega\), which can be formulated as the following optimization problem.
Definition 2 (Feasibility of graph pre-training).: _Given the pre-training data \(\mathcal{G}_{train}\) and downstream data \(\mathcal{G}_{down}\), we have the feasibility of performing pre-training on \(\mathcal{G}_{train}\) to benefit \(\mathcal{G}_{down}\) as_ \[\zeta(\mathcal{G}_{train}\rightarrow\mathcal{G}_{down})=\sup_{\{\alpha_{i}\},\{B_{i}\}}\Pr\left(\mathcal{G}_{down}\mid f(\{\alpha_{i}\},\{B_{i}\})\right), \tag{4}\] _where \(\Pr\left(\mathcal{G}_{down}\mid f(\{\alpha_{i}\},\{B_{i}\})\right)\) denotes the probability of the graph sequence sampled from \(\mathcal{G}_{down}\) being generated by the graph generator \(f(\{\alpha_{i}\},\{B_{i}\})\); each (sub)graph represents an ego-network (for node-level tasks) or a graph (for graph-level tasks) sampled from the downstream data \(\mathcal{G}_{down}\)._

However, the probability \(\Pr\left(\mathcal{G}_{down}\mid f(\{\alpha_{i}\},\{B_{i}\})\right)\) of generating the downstream graph from a generator is extremely hard to compute, so we turn to converting the optimization problem (4) into a tractable problem. Intuitively, if the generator \(f(\{\alpha_{i}\},\{B_{i}\})\) can generate the downstream data with higher probability, it potentially means that the underlying generative patterns of the pre-training data (characterized by \(f(\{\alpha_{i}\},\{B_{i}\})\)) and the downstream data (characterized by the graphon \(B_{down}\) fitted from \(\mathcal{G}_{down}\)) are more similar. Accordingly, we turn to figuring out the infimum of the distance between \(f(\{\alpha_{i}\},\{B_{i}\})\) and \(B_{down}\) as the feasibility, _i.e._, \[\zeta(\mathcal{G}_{train}\rightarrow\mathcal{G}_{down})=-\inf_{\{\alpha_{i}\},\{B_{i}\}}\operatorname{dist}(f(\{\alpha_{i}\},\{B_{i}\}),B_{down}). \tag{5}\] Following (Zhou et al., 2018), we adopt the second-order Gromov-Wasserstein (GW) distance as our distance function \(\operatorname{dist}(\cdot,\cdot)\), as the GW distance is commonly used to measure the difference between structured data.
Additionally, we establish a theoretical connection between the above-mentioned distance and the probability of generating the downstream data in the extreme case, which further adds to the integrity and rationality of our solution.

Theorem 4.1 ().: _Given the graph sequence sampled from downstream data \(\mathcal{G}_{down}\), we estimate its corresponding graphon as \(B_{down}\). If a generator \(f\) can generate the downstream graph sequence with probability 1, then \(\operatorname{dist}(f,B_{down})=0\)._

### Choose Graphon Basis to Approximate Feasibility

Although the feasibility has been converted to the optimization problem (5), exhausting all possible \(\{\alpha_{i}\},\{B_{i}\}\) to find the infimum is impractical. An intuitive idea is that we can choose some appropriate graphon basis \(\{B_{i}\}\), which can not only prune the search space but also accelerate the optimization process. Therefore, we aim to first reduce the search space of the graphon basis \(\{B_{i}\}\) and then learn the optimal \(\{\alpha_{i}\}\) in the reduced search space. Considering that the downstream data may be formed via different generation mechanisms (implying various transferable patterns), a single graphon basis might have limited expressivity and completeness to cover all patterns. We therefore argue that a good reduced search space should cover a set of graphon bases. Here, we introduce three candidates as follows.

**Integrated graphon basis.** The first candidate of graphon basis is the integrated graphon basis \(\{B_{i}\}_{\text{integr}}\). This graphon basis is introduced based on the assumption that the pre-training and the downstream graphs share very similar patterns. For example, the pre-training and the downstream graphs might come from social networks of different time spans (Kipip et al., 2018). In this situation, almost all patterns involved in the pre-training data might be useful for the downstream.
To achieve this, we directly utilize all (sub)graphs sampled from the pre-training data to estimate one graphon as the graphon basis. This integrated graphon basis serves as a special case of the graphon basis introduced below. **Domain graphon basis.** The second candidate of graphon basis is the domain graphon basis \(\{B_{i}\}_{\text{domain}}\). The domain information that pre-training data comes from is important prior knowledge to indicate the transferability from the pre-training to downstream data. For example, when the downstream data is molecular network, it is more likely to benefit from the pre-training data from specific domains like biochemistry. This is because the specificity of molecules makes it difficult to learn transferable patterns from other domains, _e.g._, closed triangle structure represents diametrically opposite meanings (stable vs unstable) in social network and molecular network. Therefore, we propose to split the (sub)graphs sampled from pre-training data according to their domains, and each split of (sub)graphs will be used to estimate a graphon as a basis element. In this way, each basis element reflects transferable patterns from a specific domain, and all basis elements construct the domain graphon basis \(\{B_{i}\}_{\text{domain}}\). **Topological graphon basis.** The third candidate is the topological graphon basis \(\{B_{i}\}_{\text{topo}}\). The topological similarity between the pre-training and the downstream data serves as a crucial indicator of transferability. For example, a downstream social network might benefit from the similar topological patterns in academic or web networks (_e.g._, closed triangle structure indicates stable relationship in all these networks). Then, the problem of finding topological graphon basis can be converted to partition \(n\) (sub)graphs sampled from pre-training data into \(k\)-split according to their topology similarity, where each split contains (sub)graphs with similar topology. 
Each element of the graphon basis (_i.e._, a graphon) fitted from each split of (sub)graphs is expected to characterize a specific kind of topological transferable pattern. However, the challenge is that for graph-structured data, which is irregular and complex, we cannot directly measure the topological similarity between graphs. To tackle this problem, we introduce a _graph feature extractor_ that maps an arbitrary graph into a fixed-length vector representation. To approach a comprehensive and representative set of topological features, we here consider both node-level and graph-level properties. For node-level topological features, we first apply a set of node-level property functions \([\phi_{1}(v),\cdots,\phi_{m_{1}}(v)]\) for each node \(v\) in graph \(G\) to capture the local topological features around it. Considering that the numbers of nodes of two graphs are possibly different, we introduce an aggregation function AGG to summarize the node-level property of all nodes over \(G\) into a real number \(\text{AGG}(\{\phi_{i}(v),v\in G\})\). We can thus obtain the node-level topological vector representation as follows. \[h_{\text{node}}(G)=[\text{AGG}(\{\phi_{1}(v),v\in G\}),\cdots,\text{AGG}(\{\phi_{m_{1}}(v),v\in G\})].\] In practice, we calculate the degree (Belleelle and Solla, 2017), clustering coefficient (Kang et al., 2017) and closeness centrality (Kang et al., 2017) for each node and instantiate the aggregation function AGG as the mean aggregator. For graph-level topological features, we also employ a set of graph-level property functions for each graph \(G\) to serve as the vector representation \[h_{\text{graph}}(G)=[y_{1}(G),\cdots,y_{m_{2}}(G)],\] where density (Kang et al., 2017), assortativity (Kang et al., 2017) and transitivity (Kang et al., 2017) are adopted as graph-level properties here 2.

Footnote 2: Other graph-level properties can also be utilized, like _diameter and Wiener index_, but we do not include them due to their high computational complexity.
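A minimal sketch of such a graph feature extractor over a plain adjacency matrix is given below; the exact property implementations (e.g., the closeness normalization for disconnected graphs) and the function names are assumptions of this sketch rather than the paper's code.

```python
import numpy as np
from collections import deque

def _bfs_dists(adj, src):
    """Hop distances from src; -1 for unreachable nodes."""
    dist = [-1] * len(adj)
    dist[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] < 0:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def graph_feature_vector(A):
    """Fixed-length topological descriptor of a graph given its 0/1 symmetric
    adjacency matrix: mean degree, mean local clustering, mean closeness
    centrality (node-level), then density, degree assortativity, transitivity
    (graph-level), concatenated as in Section 4.3."""
    A = np.asarray(A, dtype=float)
    n = len(A)
    deg = A.sum(1)
    adj = [np.flatnonzero(A[i]).tolist() for i in range(n)]
    tri = np.diag(A @ A @ A) / 2          # triangles through each node
    pairs = deg * (deg - 1) / 2           # pairs of neighbours per node
    clust = np.where(pairs > 0, tri / np.where(pairs > 0, pairs, 1), 0.0)
    close = []
    for i in range(n):
        d = [x for x in _bfs_dists(adj, i) if x > 0]
        close.append((len(d) / sum(d)) * (len(d) / (n - 1)) if d else 0.0)
    density = deg.sum() / (n * (n - 1))
    src, dst = np.nonzero(np.triu(A, 1))
    x = np.concatenate([deg[src], deg[dst]])
    y = np.concatenate([deg[dst], deg[src]])
    assort = 0.0 if x.std() == 0 else float(np.corrcoef(x, y)[0, 1])
    trans = tri.sum() / pairs.sum() if pairs.sum() > 0 else 0.0
    return np.array([deg.mean(), clust.mean(), np.mean(close), density, assort, trans])
```

Because every entry is size-normalized, vectors from graphs of different orders remain directly comparable, which is what the subsequent clustering step relies on.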
Finally, the final representation of \(G\) produced by the graph feature extractor is \[h=[h_{\text{node}}(G)\,||\,h_{\text{graph}}(G)]\in\mathbb{R}^{m_{1}+m_{2}},\] where \(||\) is the concatenation function that combines both node-level and graph-level features. Given the topological vector representations, we leverage an efficient clustering algorithm, K-Means (Kang et al., 2017), to obtain \(k\) splits of (sub)graphs and finally fit each split into a graphon as one element of the topological graphon basis.

**Optimization solution.** Given the above-mentioned three graphon bases, the choice of graphon basis \(\{B_{i}\}\) can be specified to one of them. In this way, the pre-training feasibility (simplified as \(\zeta\)) could be approximated in the reduced search space of graphon basis as \[\zeta\leftarrow-\text{MIN}(\{\inf_{\{\alpha_{i}\}}\text{dist}(f(\{\alpha_{i}\},\{B_{i}\}),B_{\text{down}}),\forall\{B_{i}\}\in\mathcal{B}\}), \tag{6}\] where \(\mathcal{B}=\{\{B_{i}\}_{\text{topo}},\{B_{i}\}_{\text{domain}},\{B_{i}\}_{\text{integr}}\}\) is the reduced search space of \(\{B_{i}\}\). Thus, the problem can be naturally split into three sub-problems with objectives \(\text{dist}(f(\{\alpha_{i}\},\{B_{i}\}_{\text{topo}}),B_{\text{down}})\), \(\text{dist}(f(\{\alpha_{i}\},\{B_{i}\}_{\text{domain}}),B_{\text{down}})\) and \(\text{dist}(f(\{\alpha_{i}\},\{B_{i}\}_{\text{integr}}),B_{\text{down}})\), respectively. Each sub-problem can be solved by updating the corresponding learnable parameters \(\{\alpha_{i}\}\) with multiple gradient descent steps. Taking one step as an example, we have \[\{\alpha_{i}\}=\{\alpha_{i}\}-\eta\nabla_{\{\alpha_{i}\}}\text{dist}(f(\{\alpha_{i}\},\{B_{i}\}),B_{\text{down}}) \tag{7}\] where \(\eta\) is the learning rate. Finally, we achieve three infimum distances under the different \(\{B_{i}\}\in\mathcal{B}\) respectively, and the minimum value among them is the approximation of the pre-training feasibility.
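To make the inner optimization of Eqs. (6)-(7) concrete, the sketch below learns the mixture weights by gradient descent with a softmax parameterization that keeps them on the simplex. As a simplification, the squared Frobenius distance between discretized graphons stands in for the entropic GW distance used in the paper, and the function name is an assumption.

```python
import numpy as np

def fit_alphas(basis, B_down, steps=500, eta=0.05):
    """Learn mixture weights alpha (Eq. 7) minimizing the distance between
    the mixed graphon sum_i alpha_i * B_i and the downstream graphon B_down.
    Squared Frobenius distance is used here as a stand-in for the entropic
    GW distance; all graphons are discretized to the same resolution."""
    basis = np.stack(basis)                      # shape (k, r, r)
    theta = np.zeros(len(basis))                 # unconstrained parameters
    for _ in range(steps):
        a = np.exp(theta) / np.exp(theta).sum()  # softmax -> simplex
        resid = np.tensordot(a, basis, axes=1) - B_down
        grad_a = 2 * np.tensordot(basis, resid, axes=([1, 2], [0, 1]))
        # chain rule through the softmax: dL/dtheta_j = a_j (g_j - <a, g>)
        theta -= eta * (a * (grad_a - (a * grad_a).sum()))
    a = np.exp(theta) / np.exp(theta).sum()
    dist = np.linalg.norm(np.tensordot(a, basis, axes=1) - B_down)
    return a, dist
```

Running this for each of the three graphon bases and taking the negated minimum of the returned distances would approximate \(\zeta\) in Eq. (6).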
In practice, we adopt an efficient and differentiable approximation of the GW distance, _i.e._, the entropic regularization GW distance (Kang et al., 2017), as the distance function. For graphon estimation, we use the "largest gap" method to estimate each graphon \(B_{i}\).

### Computation Complexity

We now show that the time complexity of W2PGNN is much lower than that of the traditional solution. Suppose that we have \(n_{1}\) and \(n_{2}\) (sub)graphs sampled from pre-training data and downstream data respectively, and denote \(|V|\) and \(|E|\) as the average numbers of nodes and edges per (sub)graph. The overall time complexity of W2PGNN is \(O((n_{1}+n_{2})|V|^{2})\). For comparison, the traditional solution in Figure 1(a) to estimate the pre-training feasibility has to make \(l_{1}\times l_{2}\) "pre-train and fine-tune" attempts, if there exist \(l_{1}\) pre-training models and \(l_{2}\) fine-tuning strategies. Suppose the batch size of pre-training is \(b\) and the representation dimension is \(d\). The overall time complexity of the traditional solution is \(O\left(l_{1}l_{2}\left((n_{1}+n_{2})(|V|^{3}+|E|d)+n_{1}bd\right)\right)\). Detailed analysis can be found in Appendix D.

## 5. Theoretical Analysis

In this section, we theoretically analyze the rationality of the generator space and the possible downstream space in W2PGNN. Detailed proofs of the following theorems can be found in Appendix A.

### Theoretical Justification of Generator Space

Our generator preserves the properties of graphons. We first theoretically prove that any generator in the generator space still preserves the properties of a graphon (_i.e._, a bounded symmetric function \(\left[0,1\right]^{2}\rightarrow\left[0,1\right]\)), summarized in the following theorem.
Theorem 5.1 ().: _For a set of graphon basis \(\{B_{i}\}\), the corresponding generator space \(\Omega=\{f(\{\alpha_{i}\},\{B_{i}\})\mid\forall\{\alpha_{i}\},\{B_{i}\}\}\) is the convex hull of \(\{B_{i}\}\)._

Our generator preserves the key transferable patterns in the graphon basis. As a preliminary, we first introduce the concept of _graph motifs_ as a useful description of transferable graph patterns and leverage _homomorphism density_ as a measure to quantify the degree to which the patterns are inherited in a graphon.

Definition 3 (Graph motifs (Graham, 1994)).: _Given a graph \(G=(V,E)\) (\(V\) and \(E\) are the node set and edge set), graph motifs are substructures \(F=(V^{\prime},E^{\prime})\) that recur significantly in statistics, where \(V^{\prime}\subset V,E^{\prime}\subset E\) and \(|V^{\prime}|\ll|V|\)._

Graph motifs can be roughly taken as the key transferable graph patterns across graphs (Zhu et al., 2017). For example, the feed-forward-loop motif has the same meaning across networks of control systems, gene systems or organisms. Then, we introduce the measure of homomorphism density \(t(F,B)\) to quantify the relative frequency of the key transferable pattern, _i.e._, graph motif \(F\), inherited in graphon \(B\).

Definition 4 (Homomorphism density (Graham, 1994)).: _Consider a graph motif \(F=(V^{\prime},E^{\prime})\). We define a homomorphism of \(F\) into a graph \(G=(V,E)\) as an adjacency-preserving map \(\pi:V^{\prime}\to V\), such that \((i,j)\in E^{\prime}\) implies \((\pi(i),\pi(j))\in E\). There could be multiple maps from \(V^{\prime}\) to \(V\), but only some of them are homomorphisms. Therefore, the definition of homomorphism density \(t(F,G)\) is introduced to quantify the relative frequency with which the graph motif \(F\) appears in \(G\)._

_Analogously, the homomorphism density of graphs can be extended to the graphon \(B\).
We denote \(t(F,B)\) as the homomorphism density of graph motif \(F\) into graphon \(B\), which represents the relative frequency of \(F\) occurring in a collection of graphs \(\{G_{i}\}\) that converges to graphon \(B\), i.e., \(t(F,B)=\lim_{i\rightarrow\infty}t(F,G_{i})\)._

Now, we are ready to quantify how much of the transferable patterns in the graphon basis can be preserved in our generator by exploring the difference between the homomorphism density of graph motifs into the graphon basis and that into our generator.

Theorem 5.2 ().: _Assume a graphon basis \(\{B_{1},\cdots,B_{k}\}\) and their convex combination \(f(\{\alpha_{i}\},\{B_{i}\})=\sum_{i=1}^{k}\alpha_{i}B_{i}\). The \(a\)-th element of the graphon basis \(B_{a}\) corresponds to a motif set. For each motif \(F_{a}\) in the motif set, the difference between the homomorphism density of \(F_{a}\) in \(f(\{\alpha_{i}\},\{B_{i}\})\) and that in the basis element \(B_{a}\) is upper bounded by_ \[|t(F_{a},f(\{\alpha_{i}\},\{B_{i}\}))-t(F_{a},B_{a})|\leq\sum_{b=1,b\neq a}^{k}|F_{a}|\alpha_{b}||B_{b}-B_{a}||_{\square} \tag{8}\] _where \(|F_{a}|\) represents the number of nodes in motif \(F_{a}\), and \(||\cdot||_{\square}\) denotes the cut norm._

Theorem 5.2 indicates that the graph motifs (_i.e._, key transferable patterns) inherited in each basis element can be preserved in our generator, which justifies the rationality of taking the generator as a representative and comprehensive summary of the pre-training data.

### Theoretical Justification of Possible Downstream Space

The possible downstream space includes the graphs generated from the generator \(f(\{\alpha_{i}\},\{B_{i}\})\). We here provide a theoretical justification that the generated graphs in the possible downstream space can inherit the key transferable graph patterns (_i.e._, graph motifs) in the generator.
Theorem 5.3 ().: _Given a graph generator \(f(\{\alpha_{i}\},\{B_{i}\})\), we can obtain a sufficient number of random graphs \(\mathbb{G}=\mathbb{G}(n,f(\{\alpha_{i}\},\{B_{i}\}))\) with \(n\) nodes generated from \(f(\{\alpha_{i}\},\{B_{i}\})\). The homomorphism density of graph motif \(F\) in \(\mathbb{G}\) can be considered approximately equal to that in \(f(\{\alpha_{i}\},\{B_{i}\})\) with high probability, which can be represented as_ \[\mathrm{P}(|t(F,\mathbb{G})-t(F,f(\{\alpha_{i}\},\{B_{i}\}))|>\varepsilon)\leq 2\exp\left(-\frac{\varepsilon^{2}n}{8\nu(F)^{2}}\right), \tag{9}\] _where \(\nu(F)\) denotes the number of nodes in \(F\), and \(0\leq\varepsilon\leq 1\)._

Theorem 5.3 indicates that the homomorphism density of graph motifs in the generated graphs in the possible downstream space is inherited from our generator to a significant degree.

## 6. Experiments

In this section, we evaluate the effectiveness of W2PGNN with the goal of answering the following questions: (1) Given the pre-training and downstream data, is the feasibility of pre-training estimated by W2PGNN positively correlated with the downstream performance (Use case 2)? (2) When the downstream data is provided, does the pre-training data selected by W2PGNN actually help improve the downstream performance (Use case 3)? Note that it is impractical to empirically evaluate the application scope of graph pre-trained models (Use case 1), as we cannot enumerate all graphs in the possible downstream space. However, by answering question (1), it can be indirectly verified that a part of the graphs in the possible downstream space, _i.e._, the downstream graphs with high feasibility, indeed benefit from the pre-training.

### Experimental Setup

We validate our proposed framework on both node classification and graph classification tasks.
**Datasets.** For the node classification task, we directly adopt six datasets from (Zhu et al., 2017) as the candidates of pre-training data, consisting of Academia, DBLP(SNAP), DBLP(NetRep), IMDB, Facebook and LiveJournal (from the academic, movie and social domains). Regarding the downstream datasets, we adopt US-Airport and H-Index from (Zhu et al., 2017) and additionally add two more datasets, Chameleon and Europe-Airport, for more comprehensive results. For the graph classification task, we choose the large-scale dataset ZINC15 (Zhu et al., 2017) containing \(2\) million unlabeled molecules. To enrich the follow-up experimental analysis, we use scaffold split to partition ZINC15 into five datasets (ZINC15-0, ZINC15-1, ZINC15-2, ZINC15-3 and ZINC15-4) according to their scaffolds (Kong et al., 2017), such that the scaffolds are different in each dataset. Regarding the downstream datasets, we use 5 classification benchmark datasets contained in MoleculeNet (Wang et al., 2017): BACE, BBBP, MUV, HIV and ClinTox. The dataset details are summarized in Appendix B.

**Baseline of graph pre-training measures.** The baselines can be divided into 3 categories: (1) EGI (Wang et al., 2017), which computes the difference between the graph Laplacian of (sub)graphs from pre-training data and that from downstream data; (2) Graph Statistics, by which we combine average degree, degree variance, density, degree assortativity coefficient, transitivity and average clustering coefficient to construct a topological vector for each (sub)graph; (3) Clustering Coefficient, Spectrum of Graph Laplacian, and Betweenness Centrality, by which we adopt the distributions of graph properties as topological vectors. For the second and third categories of baselines, we calculate the negative value of the Maximum Mean Discrepancy (MMD) distance between the obtained topological vectors of the (sub)graphs from pre-training data and those from downstream data.
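For the property-based baselines, a minimal sketch of the (biased) squared MMD with an RBF kernel between two sets of topological vectors could look as follows; the kernel bandwidth and the function name are assumptions of this sketch.

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Biased squared Maximum Mean Discrepancy with an RBF kernel between
    two sets of topological vectors (one vector per row). The baselines
    above use the negative of such a distance as their similarity score."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
```

Identical vector collections give an MMD of zero, while well-separated collections give a strictly positive value.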
Note that in all baselines, the distance/difference is computed between one ego-network (for node classification) or graph (for graph classification) from the pre-training data and another one from the downstream data. For efficiency, when conducting node classification, we randomly sample 10% of the nodes and extract their 2-hop ego-networks for each candidate pre-training dataset, and extract the 2-hop ego-networks of all nodes for each downstream dataset. For graph classification, we randomly select 10% of the graphs for each candidate pre-training dataset and downstream dataset. Then we take the average of all distances/differences as the final measure.

**Implementation Details.** For node classification tasks, we randomly sample 1000 nodes for each pre-training dataset and extract the 2-hop ego-networks of the sampled nodes to compose our input space, and extract the 2-hop ego-networks of all nodes in each downstream dataset to estimate the graphon. For graph classification tasks, we take all graphs in each pre-training dataset to compose our input space and use all graphs in each downstream dataset to estimate its corresponding graphon. When constructing the topological graphon basis, we set the number of clusters \(k=5\). The maximum number of iterations of K-Means is set to 300. When constructing the domain graphon basis, we take each pre-training dataset as a domain. For graphon estimation, we use the largest gap (Beng et al., 2017) approach and set the block size of the graphon to the average number of nodes over all graphs. When learning \(\alpha_{i}\), we adopt Adam as the optimizer and set the learning rate \(\eta\) to 0.05. When calculating the GW distance, we utilize its differentiable and efficient version, the entropic regularization GW distance, with default hyperparameters (Krizhevsky et al., 2012).
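The "largest gap" graphon estimator mentioned above can be sketched in NumPy as follows: nodes are sorted by normalized degree, the sorted degree sequence is cut at its \(k-1\) largest gaps to form blocks, and the adjacency matrix is averaged block-wise into a step function. The function name and the handling of diagonal blocks are assumptions of this sketch.

```python
import numpy as np

def estimate_graphon_largest_gap(A, k):
    """Step-function graphon estimate via the 'largest gap' method:
    sort nodes by normalized degree, cut the sorted degree sequence at its
    k-1 largest gaps to form k groups, then average the adjacency within
    each pair of groups (excluding self-loops on diagonal blocks)."""
    A = np.asarray(A, dtype=float)
    n = len(A)
    order = np.argsort(A.sum(1))              # nodes sorted by degree
    d = A.sum(1)[order] / (n - 1)             # normalized degree sequence
    gaps = np.diff(d)                         # gaps between consecutive degrees
    cuts = np.sort(np.argsort(gaps)[-(k - 1):] + 1) if k > 1 else []
    blocks = np.split(order, cuts)            # k groups of node indices
    W = np.zeros((k, k))
    for a, Ba in enumerate(blocks):
        for b, Bb in enumerate(blocks):
            sub = A[np.ix_(Ba, Bb)]
            if a == b:
                m = len(Ba)
                W[a, b] = sub.sum() / (m * (m - 1)) if m > 1 else 0.0
            else:
                W[a, b] = sub.mean()
    return W
```

On a graph made of a clique and a separate edge, the estimator with \(k=2\) recovers the planted two-block structure exactly.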
### Results of Pre-training Feasibility **Setup.** When evaluating the pre-training feasibility, since its ground truth is unavailable, we adopt the best downstream performance among a set of graph pre-training models as the ground truth. \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline & \multicolumn{6}{c}{\(N=2\)} & \multicolumn{6}{c}{\(N=3\)} \\ \cline{2-13} & US-Airport & Europe-Airport & H-index & Chameleon & Rank & US-Airport & Europe-Airport & H-index & Chameleon & Rank \\ \hline Graph Statistics & -0.6068 & 0.3571 & -0.6220 & -0.2930 & 10 & -0.7096 & -0.5052 & -0.2930 & -0.8173 & 10 \\ EGI & 0.6672 & -0.6077 & -0.2152 & -0.2680 & 9 & -0.2358 & -0.5540 & -0.2822 & -0.6511 & 9 \\ Clustering Coefficient & -0.0273 & 0.1519 & 0.3622 & 0.3130 & 5 & -0.0039 & 0.2069 & 0.4829 & 0.2279 & 4 \\ Spectrum of Graph Laplacian & -0.2023 & 0.1467 & 0.0794 & 0.0095 & 8 & -0.7648 & -0.4311 & 0.2611 & -0.2300 & 8 \\ Betweenness Centrality & -0.2739 & -0.2554 & 0.2051 & 0.2241 & 7 & -0.3421 & -0.5903 & 0.1364 & 0.0849 & 7 \\ \hline W2PGNN (interject) & 0.3579 & 0.1224 & 0.3313 & 0.1072 & 6 & 0.0841 & 0.5310 & 0.4213 & -0.0916 & 6 \\ W2PGNN (domain) & **0.4774** & 0.4666 & 0.6775 & 0.3460 & 3 & **0.7132** & 0.5523 & **0.7381** & 0.1857 & 3 \\ W2PGNN (topo) & 0.2059 & 0.3908 & 0.3745 & 0.4464 & 4 & 0.4900 & 0.5061 & 0.4072 & 0.1497 & 5 \\ W2PGNN (\(x=1\)) & 0.4172 & 0.5206 & 0.6829 & 0.4391 & 2 & 0.5282 & 0.6663 & 0.7240 & **0.3246** & 1 \\ W2PGNN & 0.3941 & **0.5336** & **0.7162** & **0.4838** & 1 & 0.5089 & **0.6706** & 0.6754 & 0.3166 & 2 \\ \hline \hline \end{tabular} \end{table} Table 1. Pearson correlation coefficient between the estimated pre-training feasibility and the best downstream performance on node classification. \(N\) denotes the number of candidate pre-training datasets that form the pre-training data. Bold indicates the highest coefficient. “Rank” represents the overall ranking on all downstream datasets. 
\begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline & \multicolumn{6}{c}{\(N=2\)} & \multicolumn{6}{c}{\(N=3\)} \\ \cline{2-13} & BACE & BBBP & MUV & HIV & ClinTox & Rank & BACE & BBBP & MUV & HIV & ClinTox & Rank \\ \hline Graph Statistics & -0.4118 & -0.1328 & 0.3858 & 0.0174 & -0.3577 & 9 & -0.3093 & -0.1430 & 0.1946 & 0.3545 & -0.1372 & 7 \\ EGI & 0.2912 & -0.6862 & 0.4488 & 0.0587 & 0.0452 & 7 & 0.4570 & 0.3230 & 0.3024 & 0.4144 & -0.0085 & 3 \\ Clustering Coefficient & -0.5098 & -0.5097 & 0.3754 & 0.4738 & 0.5154 & 8 & -0.4080 & 0.3217 & -0.1190 & -0.2483 & -0.4248 & 9 \\ Spectrum of Graph Laplacian & -0.0633 & -0.4878 & -0.3413 & -0.1125 & -0.2562 & 10 & -0.3563 & -0.1611 & -0.2294 & -0.2448 & 0.3001 & 8 \\ Betweenness Centrality & -0.0021 & -0.7755 & 0.4040 & 0.0339 & 0.3411 & 6 & -0.3695 & -0.4568 & -0.2752 & -0.3035 & -0.2129 & 10 \\ \hline W2PGNN (integr) & 0.7547 & **0.7790** & 0.2907 & 0.7033 & 0.5639 & 3 & 0.4081 & 0.4687 & -0.0567 & 0.3802 & 0.4354 & 5 \\ W2PGNN (domain) & 0.7334 & 0.7689 & 0.5395 & 0.6831 & 0.5431 & 5 & 0.0864 & 0.3680 & 0.0187 & 0.4784 & 0.3765 & 6 \\ W2PGNN (topo) & 0.6656 & 0.7164 & 0.8131 & **0.7391** & 0.5406 & 2 & 0.1109 & 0.5357 & 0.0514 & 0.3265 & 0.4724 & 4 \\ W2PGNN (\(\alpha=1\)) & 0.6549 & 0.7690 & 0.6730 & 0.7033 & 0.5639 & 4 & 0.5287 & **0.7102** & 0.1925 & 0.5893 & 0.5430 & 2 \\ W2PGNN & **0.7549** & 0.7767 & **0.8131** & 0.7044 & **0.5784** & 1 & **0.6207** & 0.6696 & **0.5227** & **0.6529** & **0.5994** & 1 \\ \hline \hline \end{tabular} \end{table} Table 2. Pearson correlation coefficient between the estimated pre-training feasibility and the best downstream performance on graph classification. For node classification tasks, we use the following 4 graph pre-training models: GraphCL [52] and three GCC models [32] with different hyper-parameters (_i.e._, 128, 256 and 512 rw-hops). 
For graph classification tasks, we adopt 7 SOTA pre-training models: AttrMasking [15], ContextPred [15], EdgePred [15], Infomax [15], GraphCL [52], GraphMAE [14] and JOAO [51]. When pre-training, we directly use the default hyper-parameters of the pre-training models, except for the rw-hops in GCC. During fine-tuning, we freeze the parameters of the pre-trained models and use logistic regression as the classifier for node classification and an SVM as the classifier for graph classification, following [32] and its fine-tuning hyper-parameters. The downstream results are reported as the average Micro-F1 over 10 runs for node classification and the average ROC-AUC over 10 runs for graph classification. For each downstream task, the best performance among all methods is regarded as the ground truth. For a comprehensive evaluation of the correlation between the estimated pre-training feasibility and the above ground truth (_i.e._, the best downstream performance), we construct multiple \(\langle\mathcal{G}_{\text{train}},\mathcal{G}_{\text{down}}\rangle\) sample pairs as our evaluation samples. When constructing the \(\langle\mathcal{G}_{\text{train}},\mathcal{G}_{\text{down}}\rangle\) sample pairs for each downstream dataset, multiple pre-training data need to be paired with it, so we adopt the following two settings to augment the choice of pre-training data. Here, \(N\) denotes the number of dataset candidates contained in the pre-training data. (1) For the \(N=2\) setting, we randomly select 2 pre-training dataset candidates as the pre-training data and enumerate all possible cases. (2) For the \(N=3\) setting, we randomly select 3 pre-training dataset candidates as the pre-training data. We enumerate all possible cases for graph classification tasks and, for efficiency, randomly select 40% of all cases for node classification tasks. 
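The pair-construction procedure above can be sketched as a simple enumeration over dataset combinations. The dataset names below are placeholders; the 40% subsampling used for node classification corresponds to `sample_frac=0.4`:

```python
import itertools
import random

def build_eval_pairs(candidates, downstream, n, sample_frac=1.0, seed=0):
    """Enumerate <pre-training data, downstream data> evaluation pairs,
    where each pre-training data is a size-n combination of candidates."""
    combos = list(itertools.combinations(candidates, n))
    if sample_frac < 1.0:  # e.g. keep 40% of cases for efficiency
        random.seed(seed)
        combos = random.sample(combos, max(1, int(len(combos) * sample_frac)))
    return [(combo, d) for combo in combos for d in downstream]

# C(4, 2) = 6 pre-training combinations x 2 downstream datasets = 12 pairs.
pairs = build_eval_pairs(["A", "B", "C", "D"], ["down1", "down2"], n=2)
```

Each resulting pair is one evaluation sample: the feasibility is estimated from the combination's datasets and correlated against that downstream dataset's best performance.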
**Results.** Table 1 (for node classification) and Table 2 (for graph classification) show the Pearson correlation coefficient between the best downstream performance and the pre-training feasibility estimated by W2PGNN and the baselines for each downstream dataset. A higher coefficient indicates a better estimation of pre-training feasibility. We also include 4 variants of W2PGNN: W2PGNN (integr), W2PGNN (domain) and W2PGNN (topo) only utilize the integrated graphon basis, domain graphon basis and topological graphon basis, respectively, to approximate feasibility, and W2PGNN (\(\alpha=1\)) directly sets the learnable combination weights \(\{\alpha_{i}\}\) to the fixed constant 1. We have the following observations. (1) Our model achieves the highest overall ranking in most cases, indicating the superiority of the proposed framework. (2) The measures provided by the other baselines sometimes show no correlation or even negative correlation with the best downstream performance. (3) Comparing W2PGNN with its 4 variants, we find that although the variants sometimes achieve superior performance on some downstream datasets, they cannot consistently perform well on all datasets. In contrast, the top-ranked W2PGNN provides a more comprehensive picture with its various graphon bases and learnable combination weights. To provide a deeper understanding of the feasibility estimated by W2PGNN, Figure 3 plots the estimated pre-training feasibility (x-axis) against the best downstream performance on node classification (y-axis) for all \(\langle\)pre-training data, downstream data\(\rangle\) pairs (one point per pair) when the selection budget is 2. The corresponding plots when the selection budget is 3, as well as the plots for graph classification, can be found in Appendix C.1. We find a strong positive correlation between the estimated pre-training feasibility and the best downstream performance on all downstream datasets, which further confirms the significance of our feasibility measure. Figure 3. Pre-training feasibility vs. the best downstream performance on node classification when the selection budget is 2. \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline & \multicolumn{5}{c}{\(N=2\)} & \multicolumn{5}{c}{\(N=3\)} \\ \cline{2-11} & US-Airport & Europe-Airport & H-index & Chameleon & Rank & US-Airport & Europe-Airport & H-index & Chameleon & Rank \\ \hline All Datasets & 65.62 & 55.65 & 75.22 & 46.81 & - & 65.62 & 55.65 & 75.22 & 46.81 & - \\ \hline Graph Statistics & 64.20 & 53.36 & 74.30 & 44.31 & 4 & 62.27 & 54.58 & 72.88 & 43.87 & 5 \\ EGI & **64.96** & 57.37 & 74.30 & 43.21 & 2 & 62.27 & 57.36 & 72.28 & 45.93 & 3 \\ Clustering Coefficient & 62.61 & 52.87 & **77.74** & 43.21 & 3 & 62.94 & 54.58 & 75.18 & 44.66 & 4 \\ Spectrum of Graph Laplacian & 61.76 & **57.88** & 73.14 & 42.20 & 5 & **63.95** & 54.87 & 73.90 & 44.66 & 2 \\ Betweenness Centrality & **64.96** & 52.87 & 73.50 & 41.63 & 6 & 62.27 & 54.87 & 75.18 & 43.87 & 6 \\ \hline W2PGNN & **64.96** & **57.88** & 77.24 & **45.54** & 1 & **63.95** & **57.59** & **75.68** & **46.07** & 1 \\ \hline \hline \end{tabular} \end{table} Table 3. Node classification results when performing pre-training on different selected pre-training data. We also provide the results of using all pre-training data without selection for reference (see “All Datasets” in the table). ### Results of Pre-Training Data Selection Given the downstream data, a collection of pre-training dataset candidates and a selection budget (_i.e._, the number of datasets selected for pre-training) due to limited resources, we aim to select the pre-training data with the highest feasibility, so as to benefit the downstream performance. **Setup.** We here adopt two settings, _i.e._, the selection budget is set as 2 and 3, respectively. 
The datasets that were augmented for more pre-training data choices in Section 6.2 can be directly used as the candidate pre-training datasets here. Then, the selected pre-training data serves as the input of the graph pre-training model. For node classification tasks, we adopt GCC as the pre-training model, because it is a pre-training model that generalizes across domains and most of the datasets used for node classification are taken from it (Zhou et al., 2019). For graph classification tasks, we take GraphCL as the pre-training model, as it provides multiple graph augmentation approaches and is more general (Zhou et al., 2019). **Results.** Table 3 shows the results of pre-training data selection on the node classification task (the results on graph classification are included in Appendix C.2). We have the following observations. (1) The pre-training data selected by W2PGNN ranks first, i.e., it is the most suitable for the downstream task. (2) A simple graph property such as the clustering coefficient sometimes serves as a good selection criterion on a specific dataset (_i.e._, H-index) when the budget of pre-training data is 2. This is because H-index exhibits the largest clustering coefficient among the downstream datasets (see Table 4), which facilitates data selection via the clustering coefficient. However, such a simple property is only applicable when the downstream dataset exhibits a strong indicator of the property, and it is not helpful when more datasets need to be selected for pre-training (see the results under \(N=3\)). (3) Moreover, it is interesting to see that using all pre-training data is not always a reliable choice. We find that carefully selecting pre-training data can not only benefit downstream performance but also reduce the required computational resources. ## 7. Conclusion This paper proposes the W2PGNN framework to answer the question of _when to pre-train_ GNNs based on the generative mechanisms from pre-training data to downstream data. W2PGNN designs a graphon-based graph generator to summarize the knowledge in the pre-training data, and the generator can in turn produce the solution space of downstream data that can benefit from the pre-training. W2PGNN is theoretically and empirically shown to have great potential to provide the application scope of graph pre-training models, estimate the feasibility of pre-training, and help select pre-training data.
2303.11337
Recursive Euclidean Distance Based Robust Aggregation Technique For Federated Learning
Federated learning has gained popularity as a solution to data availability and privacy challenges in machine learning. However, the aggregation process of local model updates to obtain a global model in federated learning is susceptible to malicious attacks, such as backdoor poisoning, label-flipping, and membership inference. Malicious users aim to sabotage the collaborative learning process by training the local model with malicious data. In this paper, we propose a novel robust aggregation approach based on recursive Euclidean distance calculation. Our approach measures the distance of the local models from the previous global model and assigns weights accordingly. Local models far away from the global model are assigned smaller weights to minimize the data poisoning effect during aggregation. Our experiments demonstrate that the proposed algorithm outperforms state-of-the-art algorithms by at least $5\%$ in accuracy while reducing time complexity by less than $55\%$. Our contribution is significant as it addresses the critical issue of malicious attacks in federated learning while improving the accuracy of the global model.
Charuka Herath, Yogachandran Rahulamathavan, Xiaolan Liu
2023-03-20T06:48:43Z
http://arxiv.org/abs/2303.11337v1
# Recursive Euclidean Distance Based Robust Aggregation Technique For Federated Learning ###### Abstract Federated learning has gained popularity as a solution to data availability and privacy challenges in machine learning. However, the aggregation process of local model updates to obtain a global model in federated learning is susceptible to malicious attacks, such as backdoor poisoning, label-flipping, and membership inference. Malicious users aim to sabotage the collaborative learning process by training the local model with malicious data. In this paper, we propose a novel robust aggregation approach based on recursive Euclidean distance calculation. Our approach measures the distance of the local models from the previous global model and assigns weights accordingly. Local models far away from the global model are assigned smaller weights to minimize the data poisoning effect during aggregation. Our experiments demonstrate that the proposed algorithm outperforms state-of-the-art algorithms by at least \(5\%\) in accuracy while reducing time complexity by less than \(55\%\). Our contribution is significant as it addresses the critical issue of malicious attacks in federated learning while improving the accuracy of the global model. ## I Introduction The emerging Artificial Intelligence market is accompanied by an unprecedented growth of cloud-based AI solutions. This technological revolution was catalyzed by rapidly expanding personal computing devices. Most people frequently carry their intelligent devices equipped with multiple sensors. As a result, personal computing devices offer access to a large amount of training data necessary to build reliable machine learning (ML) models. However, traditional ML requires gathering the training data in a single machine or a data center. 
As a result, technology companies must go through the costly and lengthy process of harvesting their users' data, not to mention the risks and responsibilities of storing data in a centralized location. This also leads to many privacy violations.1 Footnote 1: This paper has been submitted to the IEEE IAS GlobeNet 2023 Conference. Federated Learning (FL) was developed to address these issues by enabling end-user devices to collaboratively learn a shared global model using locally stored training data under the orchestration of a central server, decoupling the training of deep learning models from the need to collect and store the data in the cloud. With its decentralized data approach, FL is one of the fastest-growing research fields. Generally, the client and the server are the two prominent roles in FL. The client is the owner of the training data, and the server is the aggregator of the global model parameters. FL is characterized by obtaining better global model performance while keeping all training data on the client side. As shown in Fig. 1, in the FL architecture the server initializes the parameters of the global model and distributes them to the clients participating in FL. Secondly, the clients train their local models using their own data. Thirdly, they upload the new parameters of their models to the server for aggregation. This process is repeated until a pre-defined condition is met [1]. Based on its distribution, data can be divided into two types: independent and identically distributed (IID) data and non-independent and identically distributed (non-IID) data. Moreover, as stated in [2], by considering the distribution of the training data, FL can be categorized into the three groups shown in Fig. 2: horizontal FL, vertical FL, and transfer FL. * Horizontal FL: Horizontal FL, or sample-based FL, illustrated in Fig. 2(a), applies to scenarios in which data sets share the same feature space but differ in samples. 
For instance, two hospitals in two different regions may share the same database schema, so their data sets share the same set of features but hold different data samples. * Vertical FL: Vertical FL, or feature-based FL, illustrated in Fig. 2(b), applies to cases where two data sets share the same sample ID space but differ in feature space. For instance, two hospitals in one area that treat dental issues (features) and general health issues (features), respectively, may maintain two databases for the same set of patients (samples). * Transfer FL: Transfer FL, illustrated in Fig. 2(c), applies to scenarios in which the two data sets differ not only in samples but also in feature space. For instance, training a personalized movie-recommendation model from a user's past browsing behavior is one example. In our study, we focus on horizontal FL (HFL); future work will consider the other two data distribution architectures. In a real-world scenario, data used for FL may contain biases, such as wrongly annotated labels and missing data. Also, adversarial clients may sabotage the learning process by sending corrupted local models to the aggregation process. Standard aggregation methods such as Federated Averaging (FedAvg), proposed by [3], are vulnerable to these bad local models. Moreover, [4] states that FL suffers from label-flipping and backdoor attacks. When a local model is poisoned, the aggregated global model can also be poisoned and fail to behave correctly. To tackle these challenges, researchers have developed many defense mechanisms, such as robust aggregation, zero-knowledge proofs, and recognizing legitimate clients [5]. In this study, we develop a novel robust aggregation mechanism for horizontally distributed data under an IID setting to make FL more resistant to label-flipping attacks. 
We also compare our proposed algorithm with the residual-based reweighting aggregation algorithm proposed in [4] and the standard averaging algorithm by conducting experiments on the MNIST dataset. Our proposed aggregation algorithm significantly mitigates the impact of label-flipping models in the IID setting and outperforms the other baselines, with a lower average attack success rate of 1.1962%, a higher average model accuracy of 98.27%, and a lower average aggregation time of 0.028s. ## II Problem Formulation A study in [6] proposed an FL method based on sampling a fraction of the gradients for every client, known as Federated Stochastic Gradient Descent (FedSGD). The gradients are then sent to a global server and averaged proportionally to the number of samples on each client. FedAvg [3], illustrated in **Algorithm 1**, is a more efficient and generalized version of FedSGD that allows clients to train their local models on multiple batches of local data. Then, the parameters of the model are shared with the global server instead of the gradients as in FedSGD. Hence, FedAvg can be defined as the weighted average of the locally updated model parameters \(M_{n}^{k}\) sent by the \(K\) clients. Here \(n\) is the training round out of \(N\) rounds. \[M_{n}=\sum\limits_{k=1}^{K}\frac{D_{k}}{D}M_{n}^{k} \tag{1}\] ``` 1:Locally trained models \(M_{n}^{1}\), \(M_{n}^{2}\),..., \(M_{n}^{k}\) from each participant for the \(n^{th}\) epoch 2: 3:for\(k\leq K\)do 4: Add each local model to the sum, weighted by the partition of data used by the \(k^{th}\) client 5:endfor 6: Average the sum of models \(M_{n}=\sum\limits_{k=1}^{K}\frac{D_{k}}{D}M_{n}^{k}\) 7:Return\(M_{n}\). ``` **Algorithm 1** Standard Federated Averaging Algorithm The FL method has some critical issues regarding its feasibility and, most importantly, privacy and security. Therefore, in our research, we focus on privacy- and security-related issues. 
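For reference, Eq. (1) and Algorithm 1 amount to the following few lines, a sketch in which each client's model is represented as a flattened parameter vector:

```python
import numpy as np

def fedavg(local_models, num_samples):
    """Standard FedAvg (Eq. 1): average local model parameters weighted
    by each client's share of the total training data, D_k / D."""
    return np.average(np.stack(local_models), axis=0, weights=num_samples)

# Three clients with flattened model parameters and dataset sizes 10, 10, 20:
# weights are 0.25, 0.25, 0.5, so the result is 0.25*[1,2]+0.25*[3,4]+0.5*[5,6].
locals_ = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
global_model = fedavg(locals_, num_samples=[10, 10, 20])
```

Note that every client contributes in proportion to its data size only; a poisoned model with enough samples pulls the average arbitrarily far, which is the vulnerability the rest of the paper addresses.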
As pointed out in the introduction, poisoning attacks are among the most critical issues in both FL and ML. In a poisoning attack, a malicious attacker injects vulnerabilities to manipulate the global model. Poisoning attacks can be divided into two types, i.e., targeted poisoning attacks and untargeted poisoning attacks [5]. A targeted poisoning attack, i.e., model poisoning, aims to completely control or affect one sub-task of the FL model without affecting any other sub-task. In a backdoor attack, the global model behaves incorrectly on adversarially targeted input [3]. For instance, in an image classification application, the attacker may corrupt the model to misclassify "red cars" as "bikes" while ensuring other car instances are classified correctly. The purpose of untargeted poisoning attacks, i.e., label flipping, is to corrupt or manipulate an FL model by changing the model's training data or parameters; the attacker controls parts of the training process of some selected clients, such as the raw data, training rounds, and model parameters. A study by [7] shows that the label-flipping attack can greatly harm a federated system even with a minimal number of attackers. In this attack, the attacker flips the training data labels from one class to another and trains the model accordingly. However, the default FL aggregation algorithm is not immune to attacks and failures that target each step of the system's training and deployment pipelines. Studies [5, 8, 9, 10, 11] show that under the traditional aggregation approach, FL suffers from attacks such as label-flipping and backdoor attacks. When a local model is poisoned, the aggregated global model can also be poisoned and fail to behave correctly. ## III Background and Related Works In the broader picture, data anomalies are a critical concern in both centralized and decentralized machine learning, so ensuring the trustworthiness of data is essential. 
Developing a novel approach for secure FL will help solve data availability and privacy issues. ### _Robust Aggregation Mechanisms_ The security of the aggregation algorithm is elemental to secure FL. There is a breadth of research on robust aggregation algorithms [7] that detect and discard faulty or malicious updates during training.

Fig. 1: Federated Learning Framework

However, [12] states that many state-of-the-art robust aggregation methods rely on assumptions that do not hold in realistic FL environments. As a result, several novel aggregation methods have been proposed, such as adaptive aggregation. This aggregation method is robust against model-update corruption in up to half the clients. Furthermore, [13] suggested using a Gaussian distribution to measure clients' potential contributions. While this method is effective, evaluating each model in every round requires a significant amount of time. Statistical methods have been studied and applied in robust distributed learning where data is IID; for example, the median and trimmed-mean methods are practical approaches. One of the novel robust aggregation mechanisms, proposed by [4], combines repeated median regression on residual distances with an iteratively reweighted least squares (IRLS) reweighting scheme. While this method is effective, it requires a comparatively long time to reweight each local model parameter, and it accumulates parameter confidence for each local model in every training round. Even though this method is one of the successful attack-resistant robust aggregation methods, the time complexity of its aggregation algorithm is \(O(n^{2})\). Test results show that our aggregation reduces the aggregation time by 50% while maintaining a high attack resistance and high model accuracy. ## IV Proposed Solution ### _Euclidean Distance_ Euclidean distance is a widely used distance metric. 
It is used in many ML algorithms as the default metric to measure the similarity between two observations. It works on the principle of the Pythagorean theorem and gives the shortest distance between two points, as illustrated in (2). \[E(x,y)=\sqrt{\sum\limits_{i=1}^{n}{(x_{i}-y_{i})^{2}}} \tag{2}\] **Algorithm 2** summarizes our aggregation algorithm, and a detailed step-by-step description is provided below. In each epoch, during aggregation we adjust the weight of each local participant's model using its Euclidean distance to the most recent global model weights. ### _Aggregation Algorithm_ **Model initialization.** In FL, a global model is learned through multiple rounds of communication between the participants and a central server. In each round, the global model is shared among the \(K\) participants. First, a local model on each device is trained on its local private data with the shared global model as initialization. Then all \(K\) local models are sent to the central server to update the global model using the aggregation algorithm; this is repeated for \(N\) rounds. We denote the model trained by the \(k^{th}\) user in the \(n^{th}\) training round as \(M_{n}^{k}\). **Flatten local and global model weights into single-dimension arrays.** The aggregation algorithm takes as input the array of local models and the current global model. Each model's layers are flattened into a single-dimension tensor, as illustrated in (3), where \(t\) is the number of weights in each model. \[M_{n}=[x_{n}^{1},x_{n}^{2},....,x_{n}^{t}],M_{n}^{k}=[x_{n}^{k^{1}},x_{n}^{k^{ 2}},....,x_{n}^{k^{t}}] \tag{3}\] **Calculating the Euclidean distance.** At the central server, we calculate the Euclidean distance \(e_{k}\) of each participant's local model with respect to the current global model \(M_{n}\) during each epoch, as illustrated in (4). 
\[e_{k}=\|M_{n}-M_{n}^{k}\| \tag{4}\] **Reweighting the local model updates.** We then weight each local model update in inverse proportion to its Euclidean distance, as illustrated in (5). \[\hat{M}_{n}^{k}=\frac{1}{e_{k}}M_{n}^{k} \tag{5}\] **Global model aggregation.** Finally, all local participant models are aggregated into the global model, as illustrated in (6). \[M_{n}=\frac{\sum\limits_{k=1}^{K}{\frac{1}{e_{k}}M_{n}^{k}}}{\sum\limits_{k= 1}^{K}{\frac{1}{e_{k}}}} \tag{6}\]

Fig. 2: Data Distribution Methods in FL

## V Simulation Results In this section, we conduct image recognition experiments on the MNIST dataset to illustrate the learning performance of our robust aggregation mechanism. We compare our approach with FedAvg [3] and the residual-based reweighting algorithm [4]. We perform experiments on the MNIST handwritten digit dataset and implement the attack strategies and defense algorithms in PyTorch. We use a two-layer convolutional neural network (CNN) for our MNIST experiments. With this simple CNN model, our goal is to evaluate different aggregation algorithms for defending FL in the presence of attacks. ### _MNIST Dataset_ The MNIST dataset contains 70,000 real-world handwritten images of digits from 0 to 9. We evaluate the different methods by learning a global model on these training images distributed over multiple devices in an IID setting with adversarial attacks. ### _Results without any Attackers_ First, we tested our aggregation algorithm without any label-flipping attacks. We trained the three different approaches with the following settings: 100 synchronization rounds with the learning rate set to 0.01, and in each round of FL each participant trains the local model for 2 epochs. We maintained this as the default setting throughout our experiments. We recorded the average accuracy, loss, and average aggregation time. 
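Before turning to the results, the aggregation of Eqs. (3)-(6) can be sketched in a few lines. The small `eps` guard against a zero distance (a local model identical to the global one) is an added numerical safeguard, not part of the paper:

```python
import numpy as np

def euclidean_aggregate(global_model, local_models, eps=1e-12):
    """Robust aggregation of Eqs. (3)-(6): weight each flattened local
    model by the inverse of its Euclidean distance to the current global
    model, so outlying (potentially poisoned) updates count for less."""
    g = global_model.ravel()                         # Eq. (3): flatten
    dists = [np.linalg.norm(g - m.ravel()) + eps     # Eq. (4): e_k
             for m in local_models]
    inv = np.array([1.0 / e for e in dists])         # Eq. (5): 1/e_k weights
    stacked = np.stack([m.ravel() for m in local_models])
    return (inv[:, None] * stacked).sum(axis=0) / inv.sum()  # Eq. (6)

# A model close to the global one dominates a far-away (poisoned) one.
g = np.array([0.0, 0.0])
honest = np.array([0.1, 0.0])
poisoned = np.array([5.0, 5.0])
agg = euclidean_aggregate(g, [honest, poisoned])
```

Because the weights depend only on one distance per model, the cost is linear in the number of participants, in line with the \(O(n)\) complexity claimed for the method.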
We tested two settings. **All local participants participate in aggregation.** According to our experiments, Fig. 3 shows that the proposed algorithm performs the same as FedAvg and slightly better than [4]. We recorded an average accuracy of 98.78% and an average aggregation time of 1.1467s. The results are shown in Table I. **10 users participate in aggregation in each training round.** In this setting, as shown in Fig. 4, the FedAvg algorithm performed better than both other approaches; however, the difference in accuracy between FedAvg and our approach is only 0.05%. The results are shown in Table II. The significant outcome of both experiments is that the average aggregation time for our approach was 1.1467s, while the residual-based reweighting algorithm recorded 2.523s. Most importantly, our approach performed well while reducing the aggregation time by 54.55%. ### _Results on Label-flipping Attacks_ In label-flipping attacks, attackers flip the labels of training examples in a source class to a target class and train their models accordingly. In the MNIST experiment, we simulate FL with 100 participants, among which 0 to 10 are attackers. Each participant holds images of two random digits. The attackers are chosen among the participants with images of digit 1 and another random digit, since they flip the label from 1 to 7. We ran 100 epochs with the learning rate set to 0.01. In each epoch, each participant is supposed to train the local model for 2 epochs, but the attackers can train for an arbitrary number of epochs. Fig. 6 shows how the accuracy evolves over the epochs for the algorithms we compare. Our aggregation algorithm converged between epochs 35-40, while the other aggregation algorithms never fully converged. Moreover, the proposed algorithm is more stable, while the other two algorithms suffer from unstable accuracy. 
We extracted the attack success rate in each epoch under the above setting. As shown in Fig. 7, our approach outperformed both algorithms. The residual-based reweighting algorithm performed well with an average attack success rate of 1.8643%, while our aggregation algorithm recorded a 1.1962% attack success rate. Moreover, as Table III states, our algorithm again recorded a lower model aggregation time than the residual-based reweighting algorithm. The results are shown in Fig. 5, where the attackers train their models for 2 extra epochs to enhance the attacks. Our algorithm outperformed all other methods and remains robust when the number of attackers increases. While FedAvg is a coordinate-wise operation, our algorithm assigns a weight to each model based on its similarity to the most recent global model. Although the residual-based reweighting algorithm is a model-wise reweighting approach like ours, its time complexity is \(O(n^{2})\), while our algorithm's time complexity is \(O(n)\), making it more efficient. ## VI Conclusion Federated Learning (FL) utilizes private data on multiple devices to train a global model. However, the simple aggregation algorithm in FL is vulnerable to malicious attacks. To tackle this problem, we present a novel aggregation algorithm based on Euclidean distance. Our experiments on computer vision show that our approach is more robust and efficient against label-flipping attacks than prior aggregation methods. As future work, we will test our algorithm on different and more complex datasets with backdoor poisoning attacks, vertical FL, and homomorphic encryption. We hope our proposed aggregation algorithm can make FL more practical and robust.

Fig. 3: Results for experiments with all 100 participants contributing to aggregation without any attackers
Fig. 4: Results for experiments with 10 participants contributing to aggregation without any attackers
Fig. 5: Results of label-flipping attacks with 100 users and 1-10 attacker participants in each round
Fig. 6: Results of label-flipping attacks with 100 users and 1-10 attacker participants in each round
2310.09440
Target Variable Engineering
How does the formulation of a target variable affect performance within the ML pipeline? The experiments in this study examine numeric targets that have been binarized by comparing against a threshold. We compare the predictive performance of regression models trained to predict the numeric targets vs. classifiers trained to predict their binarized counterparts. Specifically, we make this comparison at every point of a randomized hyperparameter optimization search to understand the effect of computational resource budget on the tradeoff between the two. We find that regression requires significantly more computational effort to converge upon the optimal performance, and is more sensitive to both randomness and heuristic choices in the training process. Although classification can and does benefit from systematic hyperparameter tuning and model selection, the improvements are much less than for regression. This work comprises the first systematic comparison of regression and classification within the framework of computational resource requirements. Our findings contribute to calls for greater replicability and efficiency within the ML pipeline for the sake of building more sustainable and robust AI systems.
Jessica Clark
2023-10-13T23:12:21Z
http://arxiv.org/abs/2310.09440v1
# Target Variable Engineering ###### Abstract How does the formulation of a target variable affect performance within the ML pipeline? The experiments in this study examine numeric targets that have been binarized by comparing against a threshold. We compare the predictive performance of regression models trained to predict the numeric targets vs. classifiers trained to predict their binarized counterparts. Specifically, we make this comparison at every point of a randomized hyperparameter optimization search to understand the effect of computational resource budget on the tradeoff between the two. We find that regression requires significantly more computational effort to converge upon the optimal performance, and is more sensitive to both randomness and heuristic choices in the training process. Although classification can and does benefit from systematic hyperparameter tuning and model selection, the improvements are much less than for regression. This work comprises the first systematic comparison of regression and classification within the framework of computational resource requirements. Our findings contribute to calls for greater replicability and efficiency within the ML pipeline for the sake of building more sustainable and robust AI systems. Machine Learning Target Variables Hyperparameter Optimization ## 1 Introduction Target variables for machine learning applications should be formulated to support a specific decision, and in research contexts are usually treated as a fixed part of the ML pipeline. However, even given a specific task, there can be flexibility in the formulation of the target variable. Specifically, there are many applications where either numeric or categorical predictions could be equally suitable. To use a classic example, if a company wants to address customers who are likely to "churn", i.e. 
suspend their services in the near future, they could train regression models to predict each customer's numeric future usage of their service. Alternatively, the company could binarize the target variable based on whether usage is above or below some threshold. Then, they would train binary classifiers to predict the resulting categorical target. Beyond the obvious differences between the two formulations, such as choosing the appropriate evaluation metric, the choice between these two potential target variable formulations is not usually discussed in the extant ML literature. This work studies the fundamental but previously unanswered research question of how regression vs. classification models differ, in terms of both resource requirements within the ML pipeline and replicability of results. In applications where either formulation could be used interchangeably, the choice is usually approached heuristically. To systematize this choice, we conduct an experimental comparison of various parts of the ML pipeline given a numeric target versus the categorical target variable that results from binarizing using a threshold. Thus, the predictive problems compared are identical, save for the formulation of the target variable. A key component of the experiments is that we compare the performance for the two task types as a function of Hyperparameter Optimization (HPO) random search budgets. Across feature sets, target variables, and model families, we consistently find that regression tasks require significantly more computation to converge on optimal parameters than their classification counterparts. Digging into these results reveals that regression is more sensitive not only to HPO budget, but to all of the heuristic choices across the ML pipeline that we investigate. HPO budget, model selection, choice of grid search algorithm, and amount of training data all yield significantly more variation in terms of test performance than they do for classifiers. 
Regression models are also more sensitive to randomness and therefore more prone to overfitting. Thus, in applications where either formulation could be appropriate, choosing classification enables use of smaller HPO budgets and yields more straightforward generalization of results. In general, modelers planning regressions should ensure a large budget for HPO and use repeated sampling to ensure generalizability. Modelers conducting classification don't need grids as large as those used for regression, and can also use smaller amounts of training data to reach nearly optimal results. Given the substantial carbon emissions associated with HPO, recent work has pointed out that prioritizing computationally efficient algorithms can lead to significant reductions in environmental impact (Strubell et al., 2019; Schwartz et al., 2020). Our findings thus contribute to research in sustainable AI not by developing more efficient algorithms, but by streamlining other parts of the ML pipeline. Our findings also contribute to advances in automated machine learning (AutoML) by systematizing some heuristic choices. Finally, we also contribute to work dealing with the crisis of replicability in science broadly, and ML more specifically (Bouthillier et al., 2021). Simply put, classification results are easier to replicate. They are less susceptible to overfitting and less sensitive to both randomness and heuristic choices. Although the state-of-the-art is relatively better (on average) for regression, making decisions systematically and reporting all parameters is of more critical importance for this task. ## 2 Related Work Practical guides for applied machine learning emphasize the importance of formulating the target variable to align with some decision that is being supported (Provost and Fawcett, 2013). 
For instance, CRISP-DM, a widely-used business framework for applying machine learning, includes formulating the target variable as part of the "business understanding" phase (Chapman et al., 2000). It has been acknowledged that tasks related to the business understanding phase are not widely studied in the literature (Baier et al., 2019), and most research in ML (both applied and theoretical) assumes that the target variable is a fixed concept. There are a few common practices relating to modifying the target variable to make prediction problems easier. For example, if the distribution of numeric values has a heavy right tail, it can be log-transformed. If a binary-valued target variable has a strong class imbalance, oversampling or undersampling can be used to improve predictive performance. Target variables can be specially formulated for particular applications, such as causal effect estimation (Fernandez-Loria and Provost, 2022). The field of _prompt engineering_ includes constructing tasks to elicit the best-possible classifications or predictions from large language models such as ChatGPT (e.g. Sorensen et al. (2022); Brown et al. (2020); Liu et al. (2021); Zhou et al. (2022)). There are many examples of past work which have implicitly compared classification and regression for a particular task by providing reasons for binarizing numerical target variables. First, it may be easier to acquire binary, rather than numerical, labels, especially when the labels are user-generated (Sparling and Sen, 2011). Second, although what is being measured directly may be numeric, typical use of that variable involves a categorical decision (Liu et al., 2020). Binarization may result in a simpler problem (Zhang and Moe, 2021) or yield desirable evaluation metrics such as a confusion matrix (Abbasi et al., 2019). 
Recent work has also studied how and why reformulating a regression problem as classification can result in improved performance of neural networks (Stewart et al., 2023), which they term "the binning phenomenon." The question of whether it's ever appropriate to binarize a numeric dependent variable has also been debated in the traditional statistics literature, and is generally viewed as a bad practice (Royston et al., 2006; Fitzsimons, 2008). Binarizing has been found to lead to misleading results in the size and direction of coefficients in regression analysis (Maxwell and Delaney, 1993). Although binarizing the response variable makes results easier to explain and present to non-practitioners, it can also lead to a loss of information and statistical power (Irwin and McClelland, 2003). The field has continued to discuss the role of dichotomization in statistics (Pham, 2015). The results in this work do not directly contradict past findings; however, we find that there are positive benefits to binarizing in predictive contexts. ## 3 Experiments The core experiments in this paper seek to compare the process and performance of models trained to predict numerical target variables (regression task) versus binary categorical target variables (classification task). The experimental framework relies on the idea that we can compute a binarized counterpart to any numeric target variable by comparing to a threshold. Thus, all of the other parameters of the experiment are kept as similar as possible such that the only difference is the two target variable data types. ### Grid Search Following the notation developed by Dodge et al. (2019), we use \(\mathcal{M}\) to denote the _model family_, meaning a general induction algorithm with a set of \(k\) hyperparameters that can be optimized. Each \(k\)-tuple of values of individual hyperparameters forms one _hyperparameter value_ \(h\), and the set of all possible hyperparameter values forms \(\mathcal{H}_{\mathcal{M}}\). 
In our experiments, we choose model families that can be adapted to predict either numeric or categorical target variables, and thus \(\mathcal{H}_{\mathcal{M}}\) is the same for both task types. The grid searches completed in this paper conduct \(B\) random draws from \(\mathcal{H}_{\mathcal{M}}\), and are randomly initialized \(S\) times. Let \(\mathcal{A}\left(\mathcal{M},h,s,\mathcal{D}_{T},\mathcal{D}_{P}\right)\) denote an algorithm that returns the performance in some prediction data \(\mathcal{D}_{P}\) using a model from \(\mathcal{M}\) with hyperparameter value \(h\) trained on \(\mathcal{D}_{T}\), given random initialization state \(s\in\{1,\cdots,S\}\). For draw \(b\) from \(\mathcal{H}_{\mathcal{M}}\), define the validation and test performance as: \[v_{b} =\mathcal{A}\left(\mathcal{M},h_{b},s,\mathcal{D}_{T},\mathcal{D} _{V}\right) \tag{1}\] \[t_{b} =\mathcal{A}\left(\mathcal{M},h_{b},s,\mathcal{D}_{T},\mathcal{D} _{TE}\right) \tag{2}\] We report the cumulative maximum validation performance after \(B\) grid search iterations \(v_{B}^{*}\), the best hyperparameter value \(h_{B}^{*}\) and test performance using those best hyperparameters \(t_{B}^{*}\): \[v_{B}^{*} =\max_{h\in\{h_{1},\ldots,h_{B}\}}\mathcal{A}\left(\mathcal{M},h, s,\mathcal{D}_{T},\mathcal{D}_{V}\right) \tag{3}\] \[h_{B}^{*} =\operatorname*{arg\,max}_{h\in\{h_{1},\ldots,h_{B}\}}\mathcal{A }\left(\mathcal{M},h,s,\mathcal{D}_{T},\mathcal{D}_{V}\right)\] (4) \[t_{B}^{*} =\mathcal{A}\left(\mathcal{M},h_{B}^{*},s,\mathcal{D}_{T}, \mathcal{D}_{TE}\right) \tag{5}\] For the experiments in this paper, we set \(B=400\) and \(S=15\). The first draw for each search always comprises the default hyperparameters for \(\mathcal{M}\), yielding a reasonable estimate of off-the-shelf performance. 
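The bookkeeping in Equations 3-5 can be sketched as follows. This is an illustrative reconstruction: `evaluate` stands in for training a model with hyperparameter value \(h\) and scoring it on the validation and test splits, and the paper's convention of seeding the first draw with the default hyperparameters is omitted for brevity:

```python
import random

def random_search(H, evaluate, B, seed=0):
    """Eqs. (3)-(5): after each of B random draws from H, record the cumulative
    best validation score v*, its hyperparameter value h*, and the test score t*
    obtained with h*. `evaluate(h)` returns (validation, test) performance."""
    rng = random.Random(seed)
    v_star, h_star, t_star, history = float("-inf"), None, None, []
    for _ in range(B):
        h = {name: rng.choice(values) for name, values in H.items()}
        v, t = evaluate(h)
        if v > v_star:                       # new cumulative max on validation
            v_star, h_star, t_star = v, h, t
        history.append((v_star, h_star, t_star))
    return history
```

Note that \(t_{B}^{*}\) is the test score of the validation-best draw, not the best test score seen, which is what makes the gap between the two a measure of generalizability.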
\(\mathcal{H}_{\mathcal{M}}\) (including the default parameters) is specific to each \(\mathcal{M}\) and were drawn from Hyperopt-Sklearn (Komer et al., 2014) and have been used in past work on HPO (Grinsztajn et al., 2022; Gorishniy et al., 2021). We experiment with both a standard random search (Bergstra and Bengio, 2012) as well as the Tree-Structured Parzen Estimator algorithm, a Bayesian optimization algorithm (Turner et al., 2021). We use the Optuna library in Python for managing this grid search (Akiba et al., 2019). ### Datasets The data testbed uses three feature sets gathered from publicly-available online data: Airbnb.com2, Kickstarter.com 3, and Yelp.com 4. We engineered a tabular feature set of size approximately 2000 from each.5 Footnote 2: [http://insideairbnb.com/get-the-data/](http://insideairbnb.com/get-the-data/), accessed March 2018 Footnote 3: [https://webrobots.io/kickstarter-datasets/](https://webrobots.io/kickstarter-datasets/), accessed Dec 2015 Footnote 4: [https://www.yelp.com/dataset](https://www.yelp.com/dataset), accessed Jan 2022 Footnote 5: Note that other tabular benchmarking datasets (Grinsztajn et al., 2022) mostly have considerably fewer features; having larger feature sets allows us to experiment with feature set size. We derived 10 numeric target variables from each domain, which were standardized using \(z\)-score normalization such that each one has mean 0 and standard deviation 1. We further created a binarized counterpart to each numeric target by thresholding at the mean value. That is, the binarized target is positive if the numeric target is greater than 0, and negative otherwise. Table 1 contains detailed descriptions of the datasets and target variables. 
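The construction of the binarized counterpart of each numeric target can be sketched as follows (a minimal stdlib version; the real pipeline operates on the tabular datasets described above):

```python
from statistics import mean, pstdev

def binarize_targets(y):
    """z-score a numeric target so it has mean 0 and standard deviation 1,
    then threshold at the mean (0 after scaling): positive class if the
    value is above average, negative otherwise."""
    mu, sigma = mean(y), pstdev(y)
    z = [(v - mu) / sigma for v in y]
    labels = [1 if v > 0 else 0 for v in z]
    return z, labels
```

For example, a "price" target becomes the categorical question "is this listing more expensive than average?", leaving everything else in the pipeline unchanged.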
For consistency of comparison, each feature set contains \(30,000\) instances, yielding "medium"-sized data.6 Most results in the paper, other than those presented in Section 4.2, divide each feature set into three: Footnote 6: We have experimented with much larger datasets in terms of both instance and feature set sizes and found very consistent results but due to computational resource constraints we have excluded a full comparison from this paper. 1. \(\mathcal{D}^{T}\): \(10,000\) training instances. 2. \(\mathcal{D}^{V}\): \(5,000\) validation instances, used for tuning hyperparameters. 3. \(\mathcal{D}^{TE}\): \(15,000\) test instances, used for evaluation. ### Model Families The experiments in this paper use three families of induction algorithms that can be suitable for either the regression or classification task. First, ensemble methods such as XGBoost (Chen et al., 2015) are currently regarded as the state-of-the-art ML model for tabular data (Grinsztajn et al., 2022; Borisov et al., 2022; Shwartz-Ziv and Armon, 2022) and so most of our main results use XGBoost for modeling. Second, although ensemble methods currently have superior performance, deep learning for tabular data is an area of active research, and a recent survey found that ResNet and other deep learning models can achieve comparable or superior performance to XGBoost on benchmarking datasets, although they generally take far longer to train (Gorishniy et al., 2021). Third, we include \(L_{2}\)-regularized linear methods (linear regression and logistic regression) because they are simple to understand and interpret, are in common use across a wide variety of fields and applications, and have been found to achieve decent performance in past work (Rudin, 2019; Clark and Provost, 2019). ### Evaluation We measure the \(R^{2}\) regression score between the actual numeric values and numerical predictions. \(R^{2}\) is normally between 0 and 1. 
Our results include numerous modeling settings yielding negative \(R^{2}\) due to overfitting to the training data. To ensure a fair comparison for such results, we truncated the reported \(R^{2}\) to 0. For the corresponding classifiers, we measured the AUC (Area under the ROC Curve), which represents the ability of a classifier's scores to rank positive instances above negative ones (Provost and Fawcett, 2001) and is usually between 0.5 and 1. AUCs of less than 0.5 in the validation or test data were truncated to 0.5. A direct "apples-to-apples" comparison of regression and classification results is challenging for two reasons. First, the two performance measures are on different scales and measure different things. Second, even among target variables of the same type, performance is not necessarily comparable; some tasks are easier and some are harder. Therefore, we normalize both \(R^{2}\) and AUC relative to the maximum value achieved in each random initialization of grid search. That is, we compare progress from the minimum possible value to the maximum value as a function of the HPO budget. If \(v_{min}\) is the minimum achievable value (0 for \(R^{2}\) and \(0.5\) for AUC), then define: \begin{table} \begin{tabular}{|p{85.4pt}|p{113.8pt}|p{113.8pt}|} \hline **Domain** & **Feature Set Description** & **Numeric Target Variables** \\ \hline Airbnb & Information and descriptions of listings from Airbnb.com. & (1) Number of guests accommodated (2) Availability in the next 30 days (3) Availability in the next 60 days (4) Availability in the next 90 days (5) Availability in the next 365 days (6) Host listings count (7) Number of reviews (8) Price (9) Average rating (10) Average reviews per month \\ \hline Kickstarter & Information and descriptions of completed crowdfunding campaigns from Kickstarter.com. 
& (1) Dollars pledged (2) Number of backers (3) Dollar goal amount (4) Number of reward levels for contributors (5) Minimum amount to receive an award (6) Maximum amount to receive an award (7) Standard deviation of reward amounts (8) Time between campaign creation and campaign launch (9) Number of sentences in description (10) Average length of sentences in description \\ \hline Yelp & Information about businesses which have received reviews on Yelp.com. & (1) Total number of reviews (2) Average star rating (3) Average "useful" review rating (4) Average "funny" review rating (5) Average "cool" review rating (6) Average review count of reviewers (7) Percent of reviewers with "elite" status (8) Percent of male reviewers (9) Number of checkins (10) Number of tips \\ \hline \end{tabular} \end{table} Table 1: Description of feature sets and numeric target variables. \[vnorm_{b} =\frac{v_{b}-v_{min}}{v_{B}^{*}-v_{min}} \tag{6}\] \[tnorm_{b} =\frac{t_{b}-v_{min}}{v_{B}^{*}-v_{min}} \tag{7}\] Thus, \(vnorm_{b}\) starts somewhere between 0 and 1 with the default \(h\) value. As \(b\) increases to \(B\), \(vnorm_{b}\) increases to 1. We expect that \(tnorm_{b}\) is less than \(vnorm_{b}\); the gap shows the relative generalizability of each random search run by comparing the test performance to the expected maximum (validation) performance. ## 4 Results The experiments in this paper illustrate key differences in how the model selection and training process plays out for two types of predictive tasks: regression (predicting numerical targets) vs. classification (predicting their binarized counterparts). In summary, this section shows that regression requires more time and data resources to reach optimal performance, and is also more sensitive to various settings in the process. Unless stated otherwise, most of the results in this section use feature set sizes of approximately 2000, a random sampling algorithm for HPO, and XGBoost as the model family. 
Section 4.3 probes the effect of these three choices. ### Hyperparameter Optimization Using the formulas given in Equations 6 and 7, Figure 2 plots the normalized cumulative maximum validation and test performance for each numeric target variable vs. its binarized counterpart across 400 HPO budgets. The lines show the average performance across 30 target variables and 15 random initializations, and the shaded regions show the average difference across target variables between the minimum and maximum initializations. Regression tasks (in blue) not only have relatively worse validation performance given the default \(h\), but also require a higher budget to approach \(v^{*}\). Table 2 summarizes the average number of trials required to reach \(90\), \(95\), and \(99\%\) of \(v^{*}\) for each target variable and random initialization. The differences between numeric and binarized targets are all significant for \(\alpha=0.99\). Furthermore, the HPO process for regression is less generalizable. Note in Figure 2 that the test performance for regression is lower than for classification, relative to what would be expected given the validation performance. After 400 search iterations, the average regression \(t^{*}\) is \(0.88\). For classification, it is \(0.96\). A t-test for the difference between these two means is significant for \(\alpha=0.99\). The average gap between the minimum and maximum \(tnorm^{*}\) is also larger for regression, so the test performance has greater variation relative to the expected validation performance. After 400 iterations, the average \(tnorm^{*}\) range is 0.27 for regression and 0.04 for classification. Again, a \(t\)-test for the difference in these two means is significant for \(\alpha=0.99\). These results suggest that regression would benefit from increased HPO budgets, i.e. more computational resources. Certainly, classification and regression should not use the same sizes of grid. 
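The normalization of Equations 6 and 7, together with the convergence counts reported in Table 2, can be sketched as follows (illustrative helper names; `v_min` is 0 for \(R^{2}\) and 0.5 for AUC, with the truncation rule described above applied first):

```python
def normalize(score, best, v_min):
    """Eqs. (6)-(7): rescale so v_min -> 0 and the run's best validation -> 1."""
    score = max(score, v_min)               # truncate negative R^2 / sub-0.5 AUC
    return (score - v_min) / (best - v_min)

def trials_to_reach(vals, frac, v_min):
    """Number of draws needed before the cumulative-best validation score
    first reaches `frac` of its normalized maximum (cf. Table 2)."""
    best = max(vals)
    cum = v_min
    for i, v in enumerate(vals, start=1):
        cum = max(cum, v)
        if normalize(cum, best, v_min) >= frac:
            return i
    return len(vals)
```

Applying `trials_to_reach` per target variable and random initialization, then averaging, yields the per-threshold counts compared in Table 2.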
These results are further emphasized in Figure 2, which compares the average \(t^{*}\) given a budget of 400 iterations versus various other budgets: the default HP, 10, and 100 iterations as well as the overall best \(h\) found across all random initializations for each target variable. For many numeric variables, there is a substantial loss in performance for the default \(h\) and other smaller budgets. Furthermore, even the average \(t^{*}\) after 400 iterations is still far from the true best possible performance across random searches, again emphasizing the tendency of regression to overfit. These differences are not present for classification. Table 3 summarizes the average percent difference versus a budget of 400 iterations for numeric and binarized targets. All differences between numeric and binarized tasks are significant for \(\alpha=0.99\) based on paired t-tests. Not tuning, or using a smaller grid, affects classification significantly less than it affects regression. Also, the best possible outcome is substantially larger for regression than the average, again calling the replicability and generalizability of regression results into question. ### Learning Curves The results in this section used the process given by Perlich et al. (2003) to create learning curves that show generalization performance with respect to the amount of training data for regression vs. classification. In order to experiment with larger quantities of training data (up to \(20,000\) training instances), we recombined \(\mathcal{D}^{T}\), \(\mathcal{D}^{V}\), and \(\mathcal{D}^{TE}\), then randomly selected \(5,000\) test instances for each target. To create learning curves, we repeated the following steps 30 times. 1. Randomly draw \(k\) training instances, where \(k\) is between \(100\) and \(20,000\). 2. Using the training set of size \(k\), train an XGB model using a best \(h\) to predict the numeric target. Estimate predictions in the test set and measure the \(R^{2}\). 3. 
Using the training set of size \(k\), train an XGB model using a best \(h\) to predict the binarized target. Estimate predictions in the test set and measure the AUC. 4. Normalize each \(R^{2}\) and \(AUC\) such that 0 is the minimum possible performance and 1 is the maximum observed performance across all \(k\) for that target. Figure 3 shows the normalized progress to the maximum performance averaged across 30 target variables and 30 random draws for each. The shaded regions represent a +/- 1 standard deviation interval around the average. This chart provides evidence that the learning curves for regression are "steeper" with respect to the amount of training data. That is, for any number of training data, classification tends to have relatively closer performance to the maximum than regression. For instance, the average normalized performance for regression with a 100-instance training set is \(4\%\) of the maximum observed and \(17\%\) for classification. With a 1000-instance training set, regression is at \(39\%\) and classification is at \(59\%\). We also expect that learning curves will level out as the marginal benefit of more data diminishes. At the high end of training set sizes, the classification learning curves appear to be flattening, while the regression curves are apparently still increasing. This implies that regression models receive relatively more benefit from more data; once again, classification requires fewer resources to perform at the highest level. 
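The learning-curve procedure above can be sketched as follows; `fit_and_score` is a stand-in for training an XGBoost model on the drawn rows and scoring it on the fixed test set, and the normalization in step 4 here assumes a minimum possible score of 0:

```python
import random

def learning_curve(n_pool, sizes, fit_and_score, repeats=30, seed=0):
    """Steps 1-4: for each training-set size k, repeatedly draw k instances at
    random from the pool of n_pool rows, score a model trained on them, and
    average the scores across repeats."""
    rng = random.Random(seed)
    avg = {}
    for k in sizes:
        scores = [fit_and_score(rng.sample(range(n_pool), k)) for _ in range(repeats)]
        avg[k] = sum(scores) / repeats
    # Step 4: normalize so the best observed average maps to 1
    best = max(avg.values())
    return {k: avg[k] / best for k in sizes}
```

Comparing the resulting curves for a numeric target and its binarized counterpart shows how quickly each task approaches its own best achievable performance as training data grows.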
\begin{table} \begin{tabular}{l|l|l|l|l|l} \(\%\) **of Max** & **Num** & **Bin** & **Diff** & **Std Err** & **p-val** \\ \hline 90\% & 48.19 & 0.60 & 47.59 & 3.85 & \(<.001\) \\ 95\% & 86.92 & 6.41 & 80.51 & 4.92 & \(<.001\) \\ 99\% & 162.94 & 110.21 & 52.73 & 7.49 & \(<.001\) \\ \end{tabular} \end{table} Table 2: Trials Required for Grid Search Convergence \begin{table} \begin{tabular}{l|l|l} \(\%\) **Iterations** & **Num (std)** & **Bin (std)** \\ \hline 0 & -39.12\(\%\) (0.425) & -0.56\(\%\) (0.004) \\ 10 & -30.64\(\%\) (0.333) & -0.51\(\%\) (0.003) \\ 100 & -5.33\(\%\) (0.091) & -0.19\(\%\) (0.001) \\ Best Overall & 13.91\(\%\) (0.199) & 0.58\(\%\) (0.002) \\ \end{tabular} \end{table} Table 3: Mean Difference vs. 400 Tuning Iterations ### Other Parts of the Pipeline There are other heuristic choices in the ML pipeline besides HPO budget. This section probes the effects of the size of the feature set, the choice of sampling algorithm in the HPO grid search, and the choice of the model family for regression vs. classification. #### 4.3.1 Feature Set Size The models trained in the prior sections were trained using approximately 2000 features each. What happens if there are fewer features? Figure 4 replicates Figure 1 with 5 and 100 features. With 5 features, the relative differences between regression and classification are less dramatic, although still present. The differences are quite evident when there are 100 features. Anecdotally, we note that we have conducted preliminary experiments both on published benchmark tabular datasets (Grinsztajn et al., 2022) and on feature sets with up to \(200,000\) features, and the results of these preliminary experiments confirm the main results in this paper. 
#### 4.3.2 Grid Search Sampling Algorithm Figure 5 compares the test performance across two grid search sampling algorithms: simple random sampling, and Tree-Structured Parzen Estimator (TPE), a Bayesian sampler which has been found to yield improved results (Turner et al., 2021). As before, the differences between the two samplers are much smaller for classification than for regression. We also note that for regression, the choice of which sampler performs better would depend on the HPO budget. #### 4.3.3 Model Selection All of the prior results in this paper have used XGBoost as the model family; however, we find that model selection is also more impactful for regression than for classification, as can be seen in Figure 6. This chart compares the average best tuned test performance of Linear and ResNet models relative to XGBoost. The differences between the best and worst-performing model families for classification are significantly less than those associated with regression. The average percent improvement for regression tasks between the worst and best-performing model family is \(245.79\%\). For classification, the average percent improvement is 6.34\(\%\). The paired differences are statistically significant for \(\alpha=0.90\).8 Footnote 8: The core results for this paper have also been replicated for Linear and ResNet models and are included in the supplemental material. Taken along with the results in Section 4.3.2, the implication is that heuristic choices in all parts of the ML pipeline matter relatively more for regression than for classification. The set of models included in model selection is more consequential. Model selection can be significantly shortcut for classification because the best model is closer in performance to the worst and/or default model. With implications for replicability, the models chosen to benchmark performance in research proposing a new algorithm for regression also take on increased importance. 
These results also present an interesting tradeoff for researchers. Although regression requires more resources, there are also potentially larger benefits to be found when developing new regression algorithms (for all parts of the ML pipeline). ## 5 Discussion Our results bring additional nuance to the current understanding of the importance of HPO in the machine learning pipeline, particularly as it pertains to replicability of ML findings, sustainable AI, and automation of ML heuristics. HPO is necessary to achieve optimal performance in ML models (Bischl et al., 2023), to the point where ML benchmarking results can be reversed depending on the extent of HPO conducted (Bouthillier et al., 2021; Dodge et al., 2019). This has contributed to a lack of replicability in the ML literature and calls for increased detail in reporting of experimental parameters (Dodge et al., 2019). HPO budget, i.e. number of search iterations or total time, is also a framework that has been used for evaluating the differences between induction algorithms; for instance, deep learning methods have been found to achieve comparable performance to tree-based ensemble methods on tabular data, but deep learning methods require far more computational resources (Gorishniy et al., 2021). Our results leverage HPO budget as a dimension by which to compare the relative resources required by regression and classification, revealing the large discrepancy in computational requirements between the two tasks. Our focus on HPO also highlights the fact that regression is more sensitive to both heuristic choices and randomness. This both makes findings for regression harder to replicate and calls for larger grids (and even more computation) to be used in such contexts. A major cost associated with HPO is the computation time that it requires, especially in the modern age of large language models and neural architecture search (NAS) (Strubell et al., 2019). 
A full grid search trains and evaluates models using all possible hyperparameter combinations, although randomized grid search and its variations have been shown to be just as effective but much faster (Bergstra and Bengio, 2012). Still, given the criticality of conducting a thorough grid search, HPO uses a tremendous amount of resources. These resource requirements lead to egregious quantities of carbon emissions (Strubell et al., 2019; Schwartz et al., 2020) and also to inequities in who is able to contribute to the ML field (Strubell et al., 2019). Our findings contribute to recent calls for more efficient ML algorithms (Strubell et al., 2019; Schwartz et al., 2020; Dodge et al., 2019) by improving the efficiency of the ML pipeline rather than any specific modeling algorithm: assuming that regression and classification are interchangeable from the perspective of performance in a downstream application, we show that classification requires a smaller grid search and fewer resources in general. The other cost of HPO is one that is common to the entire ML pipeline. There are numerous heuristic choices involved, such as which induction algorithms to try for comparison or optimization, which features to use, how much training data to acquire, how to set the HPO budget, which hyperparameters to tune, the size of the grid, and more. These choices are usually made by knowledgeable data scientists, who are in short supply (He et al., 2021). AutoML attempts to automate some of these choices, thereby streamlining the number of heuristic choices in the pipeline (He et al., 2021). For instance, recent work has focused on determining which hyperparameters for each common model family are tunable (i.e. where HPO effort is best spent) (Probst et al., 2019). This paper makes a fundamental contribution to the AutoML literature by instead evaluating tunability based on an underlying characteristic of the data being modeled: the formulation of the target variable. 
We find that regression tasks are overall more tunable, which has previously been observed but not systematically evaluated (Sipper, 2022). This work makes the significant assumption that regression and classification can be used interchangeably in some contexts and studies the effect of this choice on the resources required by the ML pipeline.9 Thus, it provides insight into the choice of whether or not to binarize by conducting a systematic comparison. Although past work in statistics has demonstrated that binarization leads to issues in traditional analyses, it frequently occurs in applied ML. We demonstrate that regression tasks are particularly costly in terms of required modeling effort: they require a higher HPO budget and greater amounts of training data, and the model selection process is less generalizable. Classification should be chosen when possible for the sake of efficiency, and smaller grids can be used. On the other hand, regression may present a greater opportunity for researchers who wish to publish impactful results; however, a sufficiently large grid search, ensembling, and repeated sampling should be used to ensure replicability.

Footnote 9: Of course, there are also situations where either formulation could be reasonably used but the downstream outcomes will differ; we leave a thorough exploration of the choice between regression and classification in terms of outcomes to future work.

There are a few other apparent limitations in this work. First, most of our results use XGBoost to demonstrate the salient differences between the two tasks. We assert that using XGBoost may actually yield conservative results based on preliminary experiments with linear models and ResNet deep learning models. Second, our datasets are relatively small compared to the data typically used for truly computationally burdensome ML tasks.
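The interchangeability assumption can be made concrete: the same numeric target is either modeled directly (regression) or thresholded into two classes (classification). A minimal sketch with made-up values and a median threshold (the threshold choice is illustrative; in practice it is domain-driven):

```python
# Toy numeric target, e.g. a customer-value score (values are made up)
y_numeric = [12.5, 3.1, 47.0, 22.8, 8.9, 31.4]

# Binarize at the (upper) median -- an illustrative threshold choice
threshold = sorted(y_numeric)[len(y_numeric) // 2]

# The classification formulation of the same prediction problem
y_binary = [1 if v >= threshold else 0 for v in y_numeric]

print(threshold, y_binary)  # 22.8 [0, 0, 1, 1, 0, 1]
```

Everything else in the pipeline (features, induction algorithm, HPO) can be held fixed, which is what allows the paper to isolate the cost of the target-variable formulation.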
Once again, we believe that the performance differences between regression and classification seen in our results may be conservative compared to what would be seen with larger datasets, both in number of instances and number of features, based on the results in Sections 4.2 and 4.3. Finally, we also note that tabular datasets of medium size are quite common in business applications. Future work could verify our findings with larger datasets and other model types.

## 6 Conclusion

We have experimentally compared the effect of choosing numeric regression vs. binary classification on the required resources and resulting performance in the ML pipeline. We show that choosing a numeric target variable consistently requires more time, computation, and data resources, and yields results that are more sensitive to randomness and model selection. We present actionable recommendations for ML researchers, users, and consumers of models.
2310.06773
Uni3D: Exploring Unified 3D Representation at Scale
Scaling up representations for images or text has been extensively investigated in the past few years and has led to revolutions in learning vision and language. However, scalable representation for 3D objects and scenes is relatively unexplored. In this work, we present Uni3D, a 3D foundation model to explore the unified 3D representation at scale. Uni3D uses a 2D initialized ViT end-to-end pretrained to align the 3D point cloud features with the image-text aligned features. Via the simple architecture and pretext task, Uni3D can leverage abundant 2D pretrained models as initialization and image-text aligned models as the target, unlocking the great potential of 2D models and scaling-up strategies to the 3D world. We efficiently scale up Uni3D to one billion parameters, and set new records on a broad range of 3D tasks, such as zero-shot classification, few-shot classification, open-world understanding and part segmentation. We show that the strong Uni3D representation also enables applications such as 3D painting and retrieval in the wild. We believe that Uni3D provides a new direction for exploring both scaling up and efficiency of the representation in 3D domain.
Junsheng Zhou, Jinsheng Wang, Baorui Ma, Yu-Shen Liu, Tiejun Huang, Xinlong Wang
2023-10-10T16:49:21Z
http://arxiv.org/abs/2310.06773v1
# Uni3D: Exploring Unified 3D Representation at Scale ###### Abstract Scaling up representations for images or text has been extensively investigated in the past few years and has led to revolutions in learning vision and language. However, scalable representation for 3D objects and scenes is relatively unexplored. In this work, we present Uni3D, a 3D foundation model to explore the unified 3D representation at scale. Uni3D uses a 2D initialized ViT end-to-end pretrained to align the 3D point cloud features with the image-text aligned features. Via the simple architecture and pretext task, Uni3D can leverage abundant 2D pretrained models as initialization and image-text aligned models as the target, unlocking the great potential of 2D models and scaling-up strategies to the 3D world. We efficiently scale up Uni3D to one billion parameters, and set new records on a broad range of 3D tasks, such as zero-shot classification, few-shot classification, open-world understanding and part segmentation. We show that the strong Uni3D representation also enables applications such as 3D painting and retrieval in the wild. We believe that Uni3D provides a new direction for exploring both scaling up and efficiency of the representation in 3D domain. ## 1 Introduction 3D representation learning is one of the most fundamental problems in 3D computer vision, especially with the rapid development of 3D sensors (e.g., LiDAR) and the growing demands in real-world applications, e.g., autonomous driving, augmented/virtual reality and robotics. Existing methods make great progress in 3D model architecture (Qi et al., 2017; Yu et al., 2021; Wang et al., 2019), learning objective (Yu et al., 2022; Wang et al., 2021), task-oriented modeling (Zhou et al., 2020; Yin et al., 2021; Zhao et al., 2021), etc. However, most of the works explore at a relatively small scale, with limited parameters, data, and task scenarios. 
Learning scalable 3D representations that can transfer in the wild is relatively unexplored and remains a challenging problem. In the past few years, scaling up pre-trained language models (Brown et al., 2020; Liu et al., 2019; Raffel et al., 2020) has largely revolutionized natural language processing. Some recent works (Radford et al., 2021; Dosovitskiy et al., 2020; Bao et al., 2021; He et al., 2022; Fang et al., 2022) translate this progress from language to 2D vision via model and data scaling. Motivated by their success, it is appealing to lift this success from 2D to 3D, i.e., to learn a scalable 3D representation model that can transfer in the 3D world. Recently, with the release of the large-scale 3D dataset Objaverse (Deitke et al., 2023b), a few works have tried to explore scalable pretraining in 3D, but they either remain limited to small-scale 3D backbones (Xue et al., 2023a;b) or can hardly scale to a relatively larger size (Liu et al., 2023), e.g., 72M in Fig. 1. In this work, we propose Uni3D, a unified and scalable 3D pretraining framework for large-scale 3D representation learning, and explore its limits at the scale of one billion parameters with a million 3D shapes and 10 million images paired with 70 million texts. Uni3D uses a 2D ViT as the 3D encoder initialized with the best 2D prior, which is then end-to-end pre-trained to align the 3D point cloud features with the image-text aligned features. Via the simple architecture and pretext task, Uni3D can leverage abundant 2D pre-trained models as initialization (Fang et al., 2022; Caron et al., 2021) and image-text aligned models as the target (Radford et al., 2021; Sun et al., 2023; Cherti et al., 2023), unlocking the great potential of 2D models and scaling-up strategies for the 3D world.
In addition, we systematically study the scalability and flexibility of Uni3D in terms of 1) model scaling from 6M to 1B parameters, 2) 2D initialization ranging from visually self-supervised to text-supervised, and 3) text-image aligned target models from 150M to 5B parameters. We observe continuous performance improvements as each component is scaled up under the flexible and unified framework. The shareable 2D priors and scale-up strategies also largely benefit large-scale 3D representation learning. For the first time, we demonstrate a billion-scale 3D representation model that transfers well to various downstream tasks and scenarios. As shown in Fig. 2, Uni3D yields a boost compared to prior art in various zero-shot and few-shot 3D tasks. Specifically, Uni3D achieves a zero-shot classification accuracy of 88.2% on ModelNet, which surprisingly performs on par with some supervised methods. Uni3D also achieves state-of-the-art performance on other representative 3D tasks such as open-world understanding, part segmentation, etc. In addition, we present some interesting applications enabled by the strong 3D representation learned by Uni3D, such as point cloud painting and text/image-based 3D shape retrieval. By scaling up 3D foundation models with a simple and unified pre-training to learn strong 3D representations across tasks, we hope Uni3D will bridge the gap between 2D and 3D vision, and contribute to the big convergence across different modalities. To facilitate future research, we will release all the code and 3D foundation models.

## 2 Related Work

**3D Representation Learning.** Learning representations from point clouds for 3D understanding (Qi et al., 2017; Wang et al., 2019; Yu et al., 2021) has been fully explored in recent years. Some works further studied self-supervised pretraining for point clouds via specific 3D pretext tasks like self-reconstruction (Wang et al., 2021), masked point modeling (Yu et al., 2022) and contrastive learning (Qi et al., 2023).
These works explore only limited 3D data (e.g., ShapeNet (Chang et al., 2015)) and do not investigate multi-modal representations from 2D/NLP to 3D. With the recent success in learning visual concepts from raw text with contrastive learning like CLIP (Radford et al., 2021; Jia et al., 2021; Li et al., 2022; Ramesh et al., 2022; Gregorichelaki et al., 2022), recent works (Liu et al., 2023; Qi et al., 2023; Xue et al., 2023; Hegde et al., 2023; Lei et al., 2023) seek to learn 3D representations by aligning text, image, and point cloud features in a similar contrastive-learning manner. Recently, with the release of the large-scale 3D dataset Objaverse (Deitke et al., 2023), OpenShape (Liu et al., 2023) and ULIP2 (Xue et al., 2023) have tried to explore scalable pretraining in 3D, but they either remain limited to small-scale 3D backbones (Xue et al., 2023) or can hardly scale to a relatively larger size (Liu et al., 2023). In this work, we aim to explore a unified and scalable 3D pretraining framework, i.e., Uni3D, for large-scale 3D representation learning, and to explore its limits at billion-scale model sizes.

**Foundation models.** Recently, designing foundation models that unify and scale up representations across different modalities (e.g., NLP, 2D vision) has been drawing significant attention. Starting from NLP, recent works in scaling up pre-trained language models (Brown et al., 2020; Liu et al., 2019; Raffel et al., 2020) have largely revolutionized natural language processing. Some research in 2D vision (Radford et al., 2021; Dosovitskiy et al., 2020; Bao et al., 2021; He et al., 2022; Fang et al., 2022) translates this progress from language to 2D vision via model and data scaling. However, such a phenomenon has not been well established and explored in the 3D domain, due to the limited 3D data and the difficulties in unifying and scaling up 3D backbones.
Meta-Transformer (Zhang et al., 2023) and Frozen CLIP (Huang et al., 2022) have indicated a promising future for developing a unified framework with a modality-shared encoder. However, they require retraining task-specific heads with labor-intensive manual labeling of ground truth for different downstream tasks, which leads to a lack of out-of-domain zero-shot capabilities. In this work, we design the first billion-scale 3D foundation model with a unified 3D representation. The unified ViT architecture allows us to simply scale up Uni3D with the well-studied unified 2D/NLP scaling-up strategies. We anticipate Uni3D to serve as a bridge between 2D and 3D vision, facilitating significant convergence across various modalities.

## 3 Method

We introduce Uni3D, a unified and scalable 3D pretraining framework for large-scale 3D representation learning by aligning 3D point cloud features with the image-text aligned features. The overview of Uni3D is shown in Fig. 3. We first present how we design, scale up and initialize a unified 3D representation in Uni3D in Sec. 3.1. We then introduce the multi-modal contrastive learning for aligning image and language with point clouds in Sec. 3.2. More training details are provided in Sec. A of the appendix.

### Unified 3D representation

Uni3D leverages a unified vanilla transformer, structurally equivalent to the 2D Vision Transformer (ViT) (Dosovitskiy et al., 2020), as the backbone. The only difference here is that we replace the patch embedding layer in ViT with a specific point tokenizer to achieve 3D embeddings. The point tokenizer is kept the same as in PointBERT (Yu et al., 2022): points are first grouped into local patches with FPS (farthest point sampling) and kNN (k-nearest neighbors), and token embeddings are then extracted with a tiny PointNet (Qi et al., 2017) for each patch. The vanilla transformer is then applied to the 3D tokens to extract the 3D representations.
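The grouping stage of the tokenizer — FPS to pick patch centers, then kNN to gather each local patch — can be sketched in plain Python (the per-patch PointNet embedding is omitted; this is a minimal illustration, not the paper's implementation):

```python
def fps(points, n_centers):
    """Farthest point sampling: greedily pick points that maximize the
    minimum squared distance to the already-chosen centers."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    centers = [0]  # start from the first point (a common convention)
    min_d = [dist2(points[0], p) for p in points]
    while len(centers) < n_centers:
        nxt = max(range(len(points)), key=lambda i: min_d[i])
        centers.append(nxt)
        for i, p in enumerate(points):
            min_d[i] = min(min_d[i], dist2(points[nxt], p))
    return centers

def knn_group(points, center_idx, k):
    """Indices of the k nearest neighbors of a center -> one local patch."""
    c = points[center_idx]
    order = sorted(range(len(points)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(points[i], c)))
    return order[:k]

# Toy cloud: the 8 corners of a unit cube
cloud = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
centers = fps(cloud, 2)
patch = knn_group(cloud, centers[0], 4)
print(centers, patch)  # [0, 7] [0, 1, 2, 4]
```

Each patch would then be normalized relative to its center and fed through the tiny PointNet to produce one token, playing the role of a 2D image patch embedding.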
**Scaling Up Uni3D.** Previous works on point cloud representation learning mostly focus on designing specific model architectures to pursue better performance in different applications and are limited to certain small-scale datasets (e.g., ShapeNet (Chang et al., 2015), ModelNet (Wu et al., 2015)). With the recent availability of large-scale 3D data (e.g., Objaverse (Deitke et al., 2023a;b)), a few recent works (Xue et al., 2023; Liu et al., 2023) have tried to explore scalable pretraining in 3D, but they either remain limited to small-scale 3D backbones (Xue et al., 2023) or can hardly scale to a relatively larger size (Liu et al., 2023). The difficulties lie in the non-unified backbones and pretraining schemes in the 3D domain, where each backbone requires a specific scaling-up strategy, which is rarely explored. Moreover, some backbones (e.g., PointMLP (Ma et al., 2021), DGCNN (Wang et al., 2019)) require modeling local patterns directly on dense points, which incurs extensive computational cost when scaling up. We justify that Uni3D, which directly leverages the vanilla transformer structurally equivalent to ViT, can naturally overcome these difficulties by simply scaling up the model size with the well-studied unified 2D/NLP scaling-up strategies. Specifically, we leverage the strategy of ViT, which gradually scales up the Transformer from Tiny (6M), Small (23M), Base (88M), Large (307M) to giant (1B), and replace the Transformer of Uni3D with ViTs of different sizes as the scaled-up versions of Uni3D. The effectiveness and efficiency of our scaling-up strategy are fully demonstrated by the comprehensive exploration of scaling up ViT in the 2D vision domain. As shown in Fig. 1 and Tab. 5, we observe continuous performance improvements as the model size is scaled under the flexible and unified framework.
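The parameter counts quoted above follow the standard ViT configurations; a rough backbone estimate is about 12·L·d² per model (4·d² for attention plus 8·d² for the MLP per block, with mlp_ratio=4; embeddings, the tokenizer, and the giant variant's slightly larger mlp_ratio are ignored, so this is an approximation, not the paper's exact accounting):

```python
def approx_params(depth, width, mlp_ratio=4):
    """Approximate transformer backbone size: per block,
    ~4*d^2 for QKV + output projection and ~2*mlp_ratio*d^2 for the MLP."""
    per_block = 4 * width ** 2 + 2 * mlp_ratio * width ** 2
    return depth * per_block

# Standard ViT (depth, width) configs; "giant" actually uses mlp_ratio ~4.36,
# approximated here with 4 for simplicity.
configs = {
    "Tiny": (12, 192),
    "Small": (12, 384),
    "Base": (12, 768),
    "Large": (24, 1024),
    "giant": (40, 1408),
}
for name, (L, d) in configs.items():
    print(f"{name}: ~{approx_params(L, d) / 1e6:.0f}M")
```

The estimates land close to the quoted 6M/23M/88M/307M/1B figures, which is why reusing the 2D ViT recipe makes the 3D scaling ladder essentially free to design.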
Given the unified scaling-up strategy, we train the largest 3D representation model with one billion parameters under the multi-modal alignment learning objective, on a large-scale dataset of nearly one million 3D shapes paired with 10 million images and 70 million texts. For the first time, we demonstrate a billion-scale 3D representation model that transfers well to various downstream tasks and scenarios.

**Initializing Uni3D.** Another challenge that prevents previous works from scaling up 3D backbones is that larger model sizes lead to overfitting and difficulties in convergence. A naive solution is to pretrain each 3D backbone with specific 3D pretext tasks (e.g., PointBERT (Yu et al., 2022), OcCo (Wang et al., 2021)) and leverage the pretrained parameters as the initialization. However, this results in expensive training costs, and the relatively limited scale of 3D data for pretraining makes it challenging to establish a robust initialization for stabilizing cross-modal contrastive learning. In Uni3D, we directly leverage the vanilla transformer structurally equivalent to ViT as the 3D backbone, which brings a new perspective on introducing pretrained priors. Specifically, we can naturally adopt pretrained large models from other modalities that share the same vanilla transformer as ours to initialize Uni3D, such as the 2D pretrained models DINO (Caron et al., 2021), EVA (Fang et al., 2022), EVA-02 (Fang et al., 2023) and the cross-modal models CLIP (Radford et al., 2021), EVA-CLIP (Sun et al., 2023), etc. These pretrained models are trained on datasets consisting of billions of images and texts, which already encode rich underlying representational abilities for the Transformer and have the potential to enhance and stabilize the learning of large-scale 3D representations.
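Because the backbone is structurally identical to a 2D ViT, initialization amounts to copying the shared transformer-block weights and discarding the modality-specific patch embedding (which Uni3D replaces with its point tokenizer). A schematic with toy dictionaries standing in for real framework state dicts (key names and shapes are illustrative, not from any actual checkpoint):

```python
# Toy "checkpoint" of a 2D ViT: key -> parameter shape (illustrative names)
vit2d_ckpt = {
    "patch_embed.proj.weight": (768, 3, 16, 16),  # 2D-specific, dropped
    "blocks.0.attn.qkv.weight": (2304, 768),
    "blocks.0.mlp.fc1.weight": (3072, 768),
    "blocks.11.mlp.fc2.weight": (768, 3072),
    "norm.weight": (768,),
}

def transfer_to_3d(ckpt_2d, drop_prefixes=("patch_embed.",)):
    """Keep every shared transformer weight; drop 2D-specific layers.
    The 3D point tokenizer is then trained from scratch."""
    return {k: v for k, v in ckpt_2d.items()
            if not any(k.startswith(p) for p in drop_prefixes)}

init_3d = transfer_to_3d(vit2d_ckpt)
print(sorted(init_3d))  # transformer blocks and norm survive; patch_embed is gone
```

The same filter works for any of the candidate priors (DINO, EVA, CLIP image towers), which is what makes the initialization choice a swappable hyperparameter rather than a new pretraining run.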
Uni3D is not limited to a specific pretrained model for initialization; we can flexibly leverage any off-the-shelf Transformer-based pretrained model from any modality to push performance and explore cross-modal pretraining (please refer to Sec. 4.7 for a detailed analysis).

### Multi-Modal Alignment

We train Uni3D to learn the multi-modal alignment across language, image and point cloud, following a similar paradigm to ULIP (Xue et al., 2023a) and OpenShape (Liu et al., 2023).

**Datasets.** In order to keep the experimental settings consistent with other methods for a fair comparison, we adopt the ensembled 3D dataset provided by OpenShape for training, which consists of four 3D datasets, i.e., Objaverse (Deitke et al., 2023), ShapeNet (Chang et al., 2015), 3D-FUTURE (Fu et al., 2021) and ABO (Collins et al., 2022). We sample 10,000 points with colors from each mesh surface and render 10 color images from different views that uniformly cover the whole shape. The point cloud-text-image triplets are constructed in the same way as in OpenShape.

Figure 3: **The overview of Uni3D.** Uni3D is a unified and scalable 3D pretraining framework for large-scale 3D representation learning. We scale up Uni3D to one billion parameters with a million 3D shapes paired with 10 million images and 70 million texts. Uni3D uses a 2D ViT as the 3D encoder initialized with the best 2D prior from abundant 2D pre-trained models, which is then end-to-end pre-trained to align the 3D point cloud features with the image-text aligned ones from SoTA CLIP models. Uni3D shows superior performance on a wide range of benchmarks.

**Objective.** The illustration of the multi-modal alignment is shown in Fig. 3. We initialize the Uni3D point encoder \(f_{P}\) with pretrained 2D ViT models and obtain the text encoder \(f_{T}\) and image encoder \(f_{I}\) from CLIP models.
We train \(f_{P}\) to learn 3D representations by aligning them to the well-learned 2D/language representations of CLIP models, distilling cross-modal knowledge. Both \(f_{I}\) and \(f_{T}\) are frozen since they are already well-optimized; only \(f_{P}\) is learnable during training. Given a batch of \(N\) triplets \(\{(P_{i},I_{i},T_{i})\}_{i=1}^{N}\), where \(P_{i}\), \(I_{i}\), \(T_{i}\) denote a point cloud and its corresponding image and text obtained from the same 3D shape, we first compute the normalized features \(\{(e_{i}^{P}=f_{P}(P_{i})/|f_{P}(P_{i})|,\;e_{i}^{I}=f_{I}(I_{i})/|f_{I}(I_{i})|,\;e_{i}^{T}=f_{T}(T_{i})/|f_{T}(T_{i})|)\}_{i=1}^{N}\). The contrastive loss is then formulated as:

\[-\frac{1}{4N}\sum_{i=1}^{N}\left(\log\frac{\exp(e_{i}^{P}\cdot e_{i}^{T}/\tau)}{\sum_{j}\exp(e_{i}^{P}\cdot e_{j}^{T}/\tau)}+\log\frac{\exp(e_{i}^{T}\cdot e_{i}^{P}/\tau)}{\sum_{j}\exp(e_{i}^{T}\cdot e_{j}^{P}/\tau)}+\log\frac{\exp(e_{i}^{P}\cdot e_{i}^{I}/\tau)}{\sum_{j}\exp(e_{i}^{P}\cdot e_{j}^{I}/\tau)}+\log\frac{\exp(e_{i}^{I}\cdot e_{i}^{P}/\tau)}{\sum_{j}\exp(e_{i}^{I}\cdot e_{j}^{P}/\tau)}\right), \tag{1}\]

where \(\tau\) is a learnable temperature. The training target is to minimize the triplet contrastive loss.

**Image-Text Aligned Target.** We further note that Uni3D is not limited to a specific CLIP teacher; we can flexibly switch to off-the-shelf SoTA CLIP models at different model scales to achieve better performance. For example, we can simply change the CLIP source from OpenAI CLIP (Radford et al., 2021) and OpenCLIP (Cherti et al., 2023) to the best EVA-CLIP (Sun et al., 2023), and possibly to better CLIP models in the future. We can also directly scale up the CLIP teacher from EVA-CLIP-B (150M) to EVA-CLIP-E (5B). This demonstrates the flexibility and scalability of Uni3D and shows its potential to progress with the progress of CLIP models.
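Equation (1) averages four InfoNCE terms (point→text, text→point, point→image, image→point). A minimal, dependency-free sketch on hand-made toy embeddings (the 2-dimensional features below are made up purely for illustration):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def info_nce(queries, keys, tau):
    """Mean -log softmax probability of the matching pair over all keys."""
    loss = 0.0
    for i, q in enumerate(queries):
        logits = [sum(a * b for a, b in zip(q, k)) / tau for k in keys]
        m = max(logits)  # subtract the max for numerical stability
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += log_z - logits[i]  # -log p(correct match)
    return loss / len(queries)

def uni3d_loss(e_p, e_t, e_i, tau=0.07):
    """Eq. (1): four symmetric terms, point<->text and point<->image."""
    return 0.25 * (info_nce(e_p, e_t, tau) + info_nce(e_t, e_p, tau)
                   + info_nce(e_p, e_i, tau) + info_nce(e_i, e_p, tau))

# Toy batch of N=2 triplets (point, text, image features)
e_p = [normalize([1.0, 0.1]), normalize([0.1, 1.0])]
e_t = [normalize([0.9, 0.0]), normalize([0.0, 0.9])]
e_i = [normalize([1.0, 0.2]), normalize([0.2, 1.0])]
print(round(uni3d_loss(e_p, e_t, e_i), 4))
```

Well-aligned triplets yield a lower loss than mismatched ones, which is exactly the signal that trains \(f_P\) while \(f_I\) and \(f_T\) stay frozen.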
## 4 Experiment

### Zero-Shot Shape Classification

We first evaluate Uni3D on the zero-shot shape classification task. We conduct experiments on three benchmarks: ModelNet (Wu et al., 2015), ScanObjectNN (Uy et al., 2019) and Objaverse-LVIS (Deitke et al., 2023). ModelNet and ScanObjectNN are widely used datasets which contain 40 and 15 common categories, respectively. The Objaverse-LVIS benchmark is an annotated and cleaned subset of Objaverse which contains 46,832 shapes from 1,156 LVIS categories. We follow the settings of OpenShape (Liu et al., 2023) to conduct evaluations. For Objaverse-LVIS, we use 10,000 sampled colored points as input. For ModelNet40, we use 10,000 sampled points without color as input. For ScanObjectNN, the input is 2,048 sampled points without color from the OBJ_ONLY version. We compare Uni3D with the previous SoTA methods on the zero-shot shape classification task, such as PointCLIP (Zhang et al., 2022), PointCLIP V2 (Zhu et al., 2022), ULIP (Xue et al., 2023) and OpenShape (Liu et al., 2023).
Note that PointCLIP and PointCLIP V2 directly project point clouds into images and leverage 2D CLIP for classification, while the other methods adopt a similar schema to ours, training a native 3D backbone to align 3D representations with the image and text representations produced by a pretrained CLIP. We follow OpenShape (Liu et al., 2023) to report the performance under two different training settings.

Table 1: Zero-shot classification on Objaverse-LVIS (Deitke et al., 2023), ModelNet40 (Wu et al., 2015), and ScanObjectNN (Uy et al., 2019). († represents the best results achieved on the different benchmarks respectively.)

| Training shape source | Method | Objaverse-LVIS (Top1 / Top3 / Top5) | ModelNet40 (Top1 / Top3 / Top5) | ScanObjectNN (Top1 / Top3 / Top5) |
|---|---|---|---|---|
| Ensembled (no LVIS) | ULIP-PointBERT | 21.4 / 38.1 / 46.0 | 71.4 / 84.4 / 89.2 | 46.0 / 66.1 / 76.4 |
| Ensembled (no LVIS) | OpenShape-SparseConv | 37.0 / 58.4 / 66.9 | 82.6 / 95.0 / 97.5 | 54.9 / 76.8 / 87.0 |
| Ensembled (no LVIS) | OpenShape-PointBERT | 39.1 / 60.8 / 68.9 | 85.3 / 96.2 / 97.4 | 47.2 / 72.4 / 84.7 |
| Ensembled (no LVIS) | **Uni3D** | **47.2 / 68.8 / 76.1** | **86.8 / 97.3 / 98.4** | **66.5 / 83.5 / 90.1** |
| Ensembled | ULIP-PointBERT | 26.8 / 44.8 / 52.6 | 75.1 / 88.1 / 93.2 | 51.6 / 72.5 / 82.3 |
| Ensembled | OpenShape-SparseConv | 43.4 / 64.8 / 72.4 | 83.4 / 95.6 / 97.8 | 56.7 / 78.9 / 88.6 |
| Ensembled | OpenShape-PointBERT | 46.8 / 69.1 / 77.0 | 84.4 / 96.5 / 98.0 | 52.2 / 79.7 / 88.7 |
| Ensembled | **Uni3D** | **53.5 / 75.5 / 82.0** | **87.3 / 98.1 / 99.2** | **63.9 / 84.9 / 91.7** |
| Ensembled | **Uni3D†** | **55.3 / 76.7 / 82.9** | **88.2 / 98.4 / 99.3** | **65.3 / 85.5 / 92.7** |
"Ensembled" indicates that the backbones are trained under all the four datasets same as OpenShape and "Ensembled (no LVIS)" further excludes the shapes from the Objaverse-LVIS subset. We justify that even when LVIS shapes are included in the training shapes, i.e., the "Ensembled" dataset, their test-time category labels are probably not included in the training texts. The quantitative comparison is shown in Tab. 1, where Uni3D significantly outperforms the previous state-of-the-art methods under different settings. ### Few-Shot Linear Probing Linear probing is a widely used approach for evaluating the learned representation of a model. To evaluate the linear probing ability of Uni3D, we follow the common setting as OpenShape (Liu et al., 2023) to freeze the parameters of Uni3D and only train a linear classifier on few-shot class labels. We conduct few-shot linear probing under the difficult Objaverse-LVIS dataset with labeled training samples per class from 1, 2, 4, 8 to 16. Fig. 4 summarizes the performance of Uni3D in comparison with OpenShape (Liu et al., 2023) (PointBERT backbone and SparseConv backbone), ULIP (Xue et al., 2023) (official release and the version retrained on the large ensembled dataset) and PointCLIP V2 (Zhu et al., 2022). Uni3D significantly outperforms all the other methods by a large margin under all the few-shot settings. ### Open-World Understanding To evaluate the capability of Uni3D in 3D understanding of real-world shapes and scenes, we follow CLIP\({}^{2}\)(Zeng et al., 2023) to conduct experiments under ScanNet (Dai et al., 2017) to explore the zero-shot recognition performance of Uni3D under real-world scenarios. Note that the ground truth instant segmentation is available for all the methods and the target is to recognize the category of each instant of the scene in a zero-shot way. ScanNet (Dai et al., 2017) is a popular real-scanned 3D dataset containing 1.5K reconstructed meshes of real-world scenes. 
We adopt the same setting as CLIP\({}^{2}\) to split classes and evaluate results on the test set of ScanNet. We compare our proposed Uni3D with the state-of-the-art methods PointCLIP (Zhang et al., 2022), PointCLIP V2 (Zhu et al., 2022), CLIP2Point (Huang et al., 2022) and CLIP\({}^{2}\) (Zeng et al., 2023). The quantitative comparison is shown in Tab. 2. "PointCLIP w/TP" and "CLIP2Point w/TP" denote training PointCLIP and CLIP2Point with the real-world data provided by CLIP\({}^{2}\). Note that "PointCLIP w/TP", "CLIP2Point w/TP" and CLIP\({}^{2}\) are trained on 1.6M triplets of real-world point cloud-image-text samples, while Uni3D is trained only on available synthetic data. Nonetheless, Uni3D achieves the best performance among all the previous methods. The results demonstrate the capability of Uni3D to perform real-world recognition and understanding even without training on real-world data. The reason is that Uni3D distills some perception of the real world from the CLIP models, which are trained on large-scale real-world images and text. Moreover, by scaling up the model size, Uni3D achieves a larger representation bandwidth, leading to superior performance in difficult real-world scenarios. The qualitative comparison is shown in Fig. 5, where Uni3D produces much more accurate zero-shot recognition results than PointCLIP V2 and CLIP2Point. We do not visually compare with CLIP\({}^{2}\) since its code and model are not publicly available.

Figure 4: Few-shot linear probing on Objaverse-LVIS. We report the average performance over 10 random seeds.

### Open-Vocabulary / Few-Shot Part Segmentation

Some prior methods (Rao et al., 2022; Yang et al., 2022) have demonstrated that transferring the knowledge gained from image-text contrastive learning, i.e., CLIP, can yield significant performance improvements in 2D dense prediction tasks (e.g., segmentation and detection). However, transferring this knowledge to 3D dense prediction tasks is barely explored. We propose a novel approach for 3D dense prediction with Uni3D and justify its effectiveness with a part segmentation experiment. For more details on the approach, please refer to Sec. B of the appendix. We conduct part segmentation experiments on the ShapeNetPart dataset (Yi et al., 2016). The results in Tab. 3 demonstrate that when supervised with only 1 or 2 samples per class, Uni3D outperforms PointNet by +13.3%/+9.8%. Moreover, we largely increase the training samples used for the comparative methods to 10% or 20% of the training set. These settings exceed the number of training samples in Uni3D's one-shot or two-shot settings by two orders of magnitude. Even in the face of such a discrepancy in the number of training samples, Uni3D still achieves comparable performance in terms of overall mIoU. Visual comparisons with PointBERT are provided in Sec. B of the appendix.

"Open-vocabulary part segmentation" quantifies the ability of Uni3D to learn fine-grained semantic information of local point clouds during multi-modal contrastive pre-training. We partition the ShapeNetPart dataset into two subsets: "Seen Categories" and "Unseen Categories". In the "Seen Categories" subset, the text of the ground-truth part labels serves as training samples for Uni3D to learn part semantics, while in the "Unseen Categories" subset, the text of the ground-truth part labels is unseen during training and is only utilized for testing. The superior performance of Uni3D in Tab. 4 demonstrates its ability to discern fine-grained 3D patterns, even for part-level semantic concepts not encountered in the "Seen Categories". These results robustly affirm Uni3D's capacity to transfer the patterns learned from a closed set of 3D parts to open-vocabulary parts, utilizing the rich open-world knowledge distilled from the pre-trained CLIP model.
We believe that Uni3D opens avenues to achieve fine-grained, cross-category segmentation of open-vocabulary 3D concepts by leveraging a limited number of category-agnostic segmentation examples.

Table 2: Zero-shot recognition on ScanNet. Avg.: the average Top1 accuracy across all categories.

| Method | Avg. |
|---|---|
| PointCLIP | 6.3 |
| PointCLIP V2 | 11.0 |
| CLIP2Point | 24.9 |
| PointCLIP w/TP | 26.1 |
| CLIP2Point w/TP | 35.2 |
| CLIP\({}^{2}\) | 38.5 |
| **Uni3D** | **45.8** |

Figure 5: Comparisons of real-world zero-shot recognition results on the ScanNet dataset (panels: PointCLIP V2, CLIP2Point, Ours, Ground Truth).

### Point Cloud Painting

We propose to leverage the trained Uni3D for painting point clouds by exploring the learned 3D semantic patterns in Uni3D. Specifically, given an initial point cloud and an input prompt, we optimize the appearance, i.e., the RGB channels of the point cloud, by maximizing the cosine similarity between the feature of the point cloud extracted by Uni3D and the feature of the prompt extracted with the CLIP text encoder. The painting of a point cloud can be completed within one minute on a single V100 GPU. We show the paintings in Fig. 6, where Uni3D successfully optimizes the point cloud by revealing complex semantics from the prompt. The results demonstrate that Uni3D has learned abundant and diverse 3D patterns via contrastive pretraining.

### Cross-Modal Retrieval

With the learned multi-modal representations of Uni3D, we can naturally retrieve 3D shapes from images or text. Specifically, we retrieve 3D shapes from the large 3D dataset (Deitke et al., 2023b) by calculating the cosine similarity between the embedding of a query image or text prompt and the embeddings of the 3D shapes. We then perform kNN to obtain the 3D shapes most similar to the query. In Fig. 7, we show that Uni3D successfully retrieves 3D shapes from real-world images. Note that the images used for training are only renderings, and there is a large gap between the training images and real-world images. We also take two images as input and retrieve the shape similar to both by calculating the cosine similarity between the average of the two image embeddings and the embeddings of the 3D shapes. These interesting results demonstrate that Uni3D learns a diverse 3D representation with the ability to perceive multiple 2D signals. We further show the results of leveraging Uni3D to retrieve 3D shapes from input texts in Fig. 7. More visualization results are provided in Sec. C of the appendix.
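The painting procedure can be mimicked with a toy stand-in for the encoders — a finite-difference ascent on cosine similarity. Everything here (the "encoder", the target feature, the hyperparameters) is illustrative, not Uni3D's actual gradient-based optimization:

```python
import math

def cos(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den

def encode(colors):
    # Toy stand-in for the Uni3D point encoder: just the mean RGB of the cloud.
    n = len(colors)
    return [sum(c[k] for c in colors) / n for k in range(3)]

def paint(colors, target_feat, steps=200, lr=0.05, eps=1e-3):
    """Finite-difference gradient ascent on cosine similarity to the prompt feature."""
    colors = [list(c) for c in colors]
    for _ in range(steps):
        for i in range(len(colors)):
            for k in range(3):
                base = cos(encode(colors), target_feat)
                colors[i][k] += eps
                grad = (cos(encode(colors), target_feat) - base) / eps
                colors[i][k] += lr * grad - eps  # undo probe, take ascent step
                colors[i][k] = min(1.0, max(0.0, colors[i][k]))  # clamp to [0,1]
    return colors

target = [1.0, 0.2, 0.2]                    # "red-ish" prompt feature (made up)
gray = [[0.5, 0.5, 0.5] for _ in range(4)]  # initial gray cloud
painted = paint(gray, target)
print(cos(encode(painted), target) > cos(encode(gray), target))  # True
```

In the real system the geometry is fixed, only the color channels receive gradients, and the similarity is computed in the CLIP-aligned embedding space, which is what lets a text prompt drive the appearance.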
[Table: class-averaged part-segmentation mIoU (\(\text{mIoU}_{C}\)) per category on seen and unseen categories.] ### Ablation Study We then conduct ablation studies to justify the effectiveness of each design in Uni3D. The default setting is to use ViT-Base as the backbone with an initialization from EVA (Fang et al., 2022), and the default CLIP teacher is EVA-CLIP-E (Sun et al., 2023). The default data setting is "Ensembled (no-LVIS)". We keep the default experimental setting during the ablation studies, except for the modified part described in each ablation experiment below. **Scaling Up Model Size.** We first explore the effectiveness of scaling up the model size of Uni3D in Tab. 5. Since we leverage a unified vanilla transformer structurally equivalent to ViT as the foundational 3D representation model, we can simply scale up Uni3D with the well-studied unified 2D/NLP scaling-up strategies. Specifically, we follow the scaling-up principles of the plain ViT (Dosovitskiy et al., 2020) to increase the parameter count from 6 M (Tiny), 23 M (Small), 88 M (Base) and 307 M (Large) to 1 B (giant). The hyper-parameters of the model architecture are detailed in Tab. 5. The performance under different model scales demonstrates that scaling up the model size of Uni3D significantly improves the 3D representation. 
**Switching / Scaling Up CLIP Teachers.** We show that Uni3D is a flexible framework in which the teacher can be switched among off-the-shelf SoTA CLIP models. To this end, we investigate the performance of Uni3D with different CLIP teachers at different scales. Specifically, we evaluate various CLIP models (e.g. OpenAI-CLIP (Radford et al., 2021), OpenCLIP (Cherti et al., 2023) and EVA-CLIP (Sun et al., 2023)), and also explore large-scale CLIP models (e.g., OpenCLIP-bigG, EVA-CLIP-E). The quantitative comparison is shown in Tab. 6, with the best performance achieved by the largest CLIP model, EVA-CLIP-E. The results show that the capability and model scale of CLIP teachers are key factors for achieving better performance in Uni3D. Moreover, it indicates the potential of Uni3D to improve alongside the progress of CLIP models by switching in state-of-the-art CLIP teachers. **Initializing Transformer.** We further conduct ablation studies to explore the effectiveness of initializing Uni3D with 2D pretraining or multi-modal large models. In Tab. 7, we report the performance of training Uni3D from scratch (None) and initializing Uni3D with the off-the-shelf 2D pretraining models DINO (Caron et al., 2021) and EVA (Fang et al., 2022) and the SoTA CLIP model EVA-CLIP (Sun et al., 2023). The best performance is achieved by initializing Uni3D with the SoTA 2D pretraining model EVA (Fang et al., 2022). We also demonstrate that leveraging the frozen parameters from the 2D pretrained ViT model may fail to provide strong 3D understanding without fine-tuning, as shown in "EVA + Freeze ViT" of Tab. 7. For more analysis on initializing Uni3D, please refer to Sec. D of the appendix. 
\begin{table} \begin{tabular}{l l l l} \hline \hline CLIP variant & Pretrain data & \#Params & O-LVIS \\ \hline EVA-CLIP-B/16 & Merged-2B & 150M & 42.3 \\ OpenAI-CLIP-B/16 & WIT-400M & 150M & 42.7 \\ OpenCLIP-B/16 & LAION-2B & 150M & 43.4 \\ \hline OpenCLIP-bigG/14 & LAION-2B & 2.5B & 44.5 \\ EVA-CLIP-E/14+ & LAION-2B & 5.0B & 45.8 \\ \hline \hline \end{tabular} \end{table} Table 6: Different CLIP teachers at different model scales. \begin{table} \begin{tabular}{l|c} \hline \hline Init variant & O-LVIS \\ \hline None & 44.8 \\ DINO & 45.0 \\ EVA-CLIP & 45.2 \\ EVA & 45.8 \\ \hline EVA + Freeze ViT & 15.7 \\ \hline \hline \end{tabular} \end{table} Table 7: Different initializations of Uni3D. ## 5 Conclusion We present Uni3D, a unified framework that scales up a 3D representation model to one billion parameters. We directly leverage a unified vanilla transformer structurally equivalent to ViT as the model, which allows us to simply scale up Uni3D with the well-studied unified 2D/NLP scaling-up strategies. Moreover, Uni3D can leverage abundant 2D pretrained models as initialization and image-text aligned models as the target, unlocking the great potential of 2D models and strategies to the 3D world. We train Uni3D on a large dataset containing about one million 3D point clouds, 10 million images and 70 million texts to explore powerful 3D representations by aligning the 3D point cloud features with the image-text aligned features. Uni3D achieves state-of-the-art performance in various 3D understanding tasks including zero-shot and few-shot classification, open-world understanding, part segmentation, etc. We believe that Uni3D can serve as a 3D foundation model to enable many applications in the 3D community.
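The training objective summarized above (aligning 3D point-cloud embeddings with image-text aligned embeddings) is, in spirit, a CLIP-style contrastive loss. Below is a minimal NumPy sketch of such a symmetric InfoNCE term; the `info_nce` helper, the temperature value and the toy embeddings are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np

def info_nce(pc_emb, target_emb, tau=0.07):
    """Symmetric InfoNCE loss between L2-normalised 3D embeddings and
    their paired image (or text) embeddings; row i pairs with row i."""
    pc = pc_emb / np.linalg.norm(pc_emb, axis=1, keepdims=True)
    tg = target_emb / np.linalg.norm(target_emb, axis=1, keepdims=True)
    logits = pc @ tg.T / tau              # cosine-similarity logits
    labels = np.arange(len(pc))

    def xent(l):                          # cross-entropy with diagonal labels
        l = l - l.max(axis=1, keepdims=True)
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()

    # average over both directions: 3D -> 2D and 2D -> 3D
    return 0.5 * (xent(logits) + xent(logits.T))

batch_pc = np.eye(4)                      # 4 fake point-cloud embeddings
batch_img = np.eye(4)                     # their (perfectly aligned) image embeddings
print(round(info_nce(batch_pc, batch_img), 4))   # ~0.0: perfect alignment
```

In a full setup the total loss would sum one such term against the paired image embeddings and one against the paired text embeddings.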
2304.12841
Thermophoretic motion of a charged single colloidal particle
We calculate the thermophoretic drift of a charged single colloidal particle with hydrodynamically slipping surface immersed in an electrolyte solution in response to a small temperature gradient. Here, we rely on a linearized hydrodynamic approach for the fluid flow and the motion of the electrolyte ions while keeping the full nonlinearity of the Poisson-Boltzmann equation of the unperturbed system to account for possible large surface charging. The partial differential equations are transformed into a coupled set of ordinary differential equations in linear response. Numerical solutions are elaborated for parameter regimes of small and large Debye shielding and different hydrodynamic boundary conditions encoded in a varying slip length. Our results are in good agreement with predictions from recent theoretical work and successfully describe experimental observations on thermophoresis of DNA. We also compare our numerical results with experimental data on polystyrene beads.
Daniel B. Mayer, Dieter Braun, Thomas Franosch
2023-04-25T14:13:14Z
http://arxiv.org/abs/2304.12841v1
# Thermophoretic motion of a charged single colloidal particle ###### Abstract We calculate the thermophoretic drift of a charged single colloidal particle with hydrodynamically slipping surface immersed in an electrolyte solution in response to a small temperature gradient. Here, we rely on a linearized hydrodynamic approach for the fluid flow and the motion of the electrolyte ions while keeping the full nonlinearity of the Poisson-Boltzmann equation of the unperturbed system to account for possible large surface charging. The partial differential equations are transformed into a coupled set of ordinary differential equations in linear response. Numerical solutions are elaborated for parameter regimes of small and large Debye shielding and different hydrodynamic boundary conditions encoded in a varying slip length. Our results are in good agreement with predictions from recent theoretical work and successfully describe experimental observations on thermophoresis of DNA. We also compare our numerical results with experimental data on polystyrene beads. ## I Introduction Nonequilibrium transport processes of charged colloids or macromolecules in aqueous solutions are ubiquitous in biological, chemical and physical systems [1; 2; 3; 4; 5; 6]. Typically, the motion of such colloids is mediated by externally maintained thermodynamic (bulk) gradients, mostly in solute concentration, electric potential and temperature. The phoretic motion then depends in a subtle manner on the surface properties of the colloid and its interactions with the solvent, whose details are still the subject of ongoing scientific research, experimentally [4; 5; 7; 8; 9; 10; 11; 12] as well as theoretically [13; 14; 15; 16; 17; 18; 19; 20; 21]. In particular, the directed drift motion in response to a temperature gradient, usually referred to as thermophoresis, is a formidable problem due to its peculiar sensitivity to the details of the system under investigation. 
It depends not only on particle properties such as molecular weight [22], size [23; 24; 8; 25], anisotropy [26; 27], concentration [28], surface charging, and surface coating [9], but also on solvent parameters including permittivity, salinity, Debye screening length, and thermoelectric field, as well as their inherent temperature dependence [10; 17; 25; 29]. For example, already the dependence of the thermophoretic drift velocity on the dimensions of the colloid has been reported differently for the same system under investigation. While the study in Ref. [8] suggests a linear variation with particle size, measurement data from Refs. [23; 24] strongly support a constant thermophoretic drift motion of the particle. This results in competing contributions to thermophoretic transport, rendering it more complex to understand and predict than other field-driven transport processes such as electrophoresis or diffusiophoresis. Nevertheless, thermophoresis has numerous (bio-)technological and microfluidic applications: for example, it plays a pivotal role in the separation and characterization of polymers and macromolecules by thermal field-flow fractionation [30], the trapping and enrichment of DNA in a microchannel with ambient flow [25; 31], the possible guiding of fluid motion by thermal micropumps [32; 33], and in the state-of-the-art analysis of biomolecular interactions by means of microscale thermophoresis (MST) [34]. Thermally driven transport was first observed by the Irish physicist John Tyndall in aerosols by simply noticing that a temperature gradient affects the motion of dust particles, which tend to avoid hot surfaces [35]. Shortly afterwards, the German physiologist Carl Ludwig discovered a similar effect in aqueous alkali halide solutions in 1856 [36], which was then independently considered in detail by the Swiss physico-chemist Charles Soret in 1879 [37]. The phenomenon is therefore also called the Ludwig-Soret effect or just Soret effect. 
In principle, thermophoresis of a charged colloidal particle immersed in an aqueous electrolyte solution constitutes a highly nonlinear transport problem coupling ion convection-diffusion dynamics, electrostatics, and solvent flow. This makes a quantitative analysis of the underlying field equations and their corresponding boundary conditions within a continuum approach almost intractable. Nevertheless, most studies regard thermophoresis more or less explicitly as a linear-response phenomenon [17; 29; 38], where the equilibrium electrolyte structure around the colloid is only slightly distorted by the applied temperature gradient. Then to linear order, the thermophoretic drift velocity of the colloidal particle becomes \[\mathbf{U}_{T}=-D_{T}\nabla T, \tag{1}\] with \(D_{T}\) being referred to as the thermal diffusion coefficient, which may take both signs, indicating that the colloid migrates to the cold for positive \(D_{T}\) and to the warm for negative values, respectively. This transport coefficient constitutes an Onsager cross-coefficient relating heat and particle flux within the framework of non-equilibrium thermodynamics [39; 28]. Considering symmetry arguments, the linearized set of partial differential equations can be significantly simplified, and thus the problem of calculating the thermal diffusion coefficient \(D_{T}\) essentially reduces to finding a solution to a coupled set of ordinary differential equations with suitable boundary conditions, similar to the treatment of the problem of electrophoresis by O'Brien and White [40]. While in the two limiting cases of thin and wide Debye layers, the scale disparity as well as weak surface charging allow for approximate analytic solutions [17; 29], a numerical approach is generally necessary to capture the subtle interplay of the underlying transport mechanisms for the full range of parameters. 
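For orientation, Eq. (1) can be evaluated with representative numbers; the values below are illustrative order-of-magnitude choices (a colloidal \(D_{T}\) of a few \(\mu\mathrm{m}^{2}\,\mathrm{s}^{-1}\,\mathrm{K}^{-1}\) and a microscale gradient), not parameters taken from the paper:

```python
# Evaluating Eq. (1), U_T = -D_T * grad(T), with assumed illustrative numbers.
D_T = 2.0e-12       # thermal diffusion coefficient [m^2 s^-1 K^-1] (assumed)
grad_T = 1.0e5      # temperature gradient [K m^-1], i.e. 0.1 K per micron

U_T = -D_T * grad_T
print(U_T)          # about -2e-07 m/s along the gradient direction

# Positive D_T: the drift is antiparallel to grad(T), i.e. toward the cold;
# a negative D_T flips the sign and the colloid migrates to the warm side.
```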
Here the focus lies on the response of the aqueous electrolyte to the temperature gradient, in particular, how concentration gradients in the bulk solution and the accompanying thermoelectric potential affect the thermal transport coefficient via boundary conditions. Furthermore, effects arising from strong surface charging can be properly revealed only by retaining the full nonlinear Poisson-Boltzmann equation governing the equilibrium potential, instead of applying its linearized form in the Debye-Huckel approximation valid only for weakly charged particles. Based on these considerations, we provide here a comprehensive review of the thermophoresis problem of a charged spherical colloid within linear response, following the theoretical approach of Rasuli and Golestanian [38]. Moreover, for completeness, we also discuss in detail the correct representation of the electrolyte bulk behavior in terms of suitable far-field boundary conditions, since this has received little attention in the literature so far, but seems to be crucial to correctly determine the thermal diffusion coefficient. Comparison with other recent theoretical work on thermophoresis [19] supports our explanations. This paper is organized as follows: In Sec. II, we reformulate the generic thermophoresis problem within a hydrodynamic continuum approach. Then the linear response of the system is addressed in Sec. II.2, where we derive the relevant linear differential equations for thermophoretic transport, while Sec. II.3 provides a short discussion of different contributions to the thermal diffusion coefficient. In Secs. II.4-II.7, we elaborate the techniques to considerably simplify these differential equations, relying on strategies originally introduced by O'Brien and White [40] to tackle the electrophoresis problem. In the following Sec. III, the solution procedure to obtain numerical solutions to the ordinary differential equations is described, while in Secs. 
IV.2-IV.5 the results are compared to other theoretical approaches as well as to experimental data on thermophoresis of DNA and polystyrene beads. Last, we conclude in Sec. V. ## II Theory In this section, we introduce a minimal theoretical continuum model for a charged single colloid in an aqueous electrolyte solution exposed to a stationary and spatially uniform temperature gradient. Here a description in terms of field equations is employed, where the behavior of the bulk solution is accounted for by suitable far-field boundary conditions. In particular, we elaborate the linear response of the system to small temperature gradients in order to calculate the thermal diffusion coefficient for arbitrary Debye layer width and possibly large surface charging. Most theoretical approaches to thermophoresis of colloids discussed in the literature [13; 17; 29; 38; 41] constitute extensions of the theory of electrophoresis [42; 43; 44; 45; 16; 18; 40; 41]. Our theoretical description follows the same path. In particular, regarding the solution strategy of the corresponding field equations using asymptotic expressions for the relevant quantities, we strongly rely on the techniques of O'Brien and White in their seminal work on electrophoresis [40]. ### Formulation of the thermophoresis problem The system of interest is a charged, chemically inert, dielectric spherical particle, immersed in a large electrolyte reservoir, where the completely ionized solute consists of \(N\) different ionic species of charge \(z_{i}e\) with elementary charge \(e\) and valences \(z_{i}\) (\(i=1,2,\ldots,N\)). This reservoir can exchange heat with the surroundings, and at the boundary a thin charged layer emerges due to ionic density gradients, setting up a thermoelectric field (see Appendix A). At the interface between solid and electrolyte, a Debye double layer of characteristic width \(1/\kappa\) forms, screening the surface charge of the colloidal particle. 
It comprises a thin immobile layer of adsorbed counter-charged ions on the solid surface adjacent to an otherwise diffusive cloud of mobile ions [46]. The double layer connects smoothly to an electroneutral bulk region within the electrolyte-domain boundary. Then a stationary and spatially uniform temperature gradient \(\nabla T\) is applied externally, resulting in a phoretic motion of the neutrally buoyant spherical particle with steady-state velocity \(\mathbf{U}_{T}\) relative to the quiescent electrolyte. This drift motion is a consequence of the local hydrodynamic stresses in the surrounding solution [13] induced by gradients in ion concentrations and electric potential (see Appendix A) in the bulk solution, as well as the corresponding temperature-induced asymmetry of the Debye double layer. In addition to the Debye length, a second length scale is characteristic of the system, namely the distance from the particle center to the hydrodynamic slipping plane [47]. The solvent inside may remain attached to the particle surface and a hydrodynamically stagnant layer builds up, except for a small region of slip length \(\lambda\) [48] accounting for the possible hydrophilic or hydrophobic nature of the particle surface [49; 50]. Thus, the slipping plane can be understood as the effective or virtual boundary of the colloidal particle with hydrodynamic radius \(a\), where the electrolyte is assumed to be unaffected by the applied temperature gradient. In the remainder, we employ a reference frame attached to the center of the colloidal particle. Hence, in the far field the solvent flow approaches a uniform stream \(-\mathbf{U}_{T}\), and within the slipping plane the velocity is zero. The accompanying temperature profile is assumed to change only linearly in the temperature gradient \[T(\mathbf{r})=T_{0}+\mathbf{r}\cdot\nabla T, \tag{2}\] where \(T_{0}\) denotes the reference temperature in the center of the spherical particle. 
The presence of the colloidal particle does not alter the applied temperature gradient since thermal conductivities of the solvent and the core material of the colloid are assumed to be comparable. In contrast, for metallic particles the local temperature variations around the colloid may be of central importance [51]. Furthermore, the ions are treated as non-interacting particles, dispersed in a fluid that consists mainly of solvent molecules, yielding an ideal dilute solution. These assumptions justify a continuum description of the thermophoresis problem, where the colloid is considered as a macroscopic object compared to the solutes and the surrounding solvent as a dielectric continuous medium [52]. The fundamental equations governing thermophoretic transport in terms of the electrostatic potential \(\phi(\mathbf{r})\), the ion concentration \(n_{i}(\mathbf{r})\) for each species \(i=1,2,\ldots,N\), the pressure \(p(\mathbf{r})\) and the velocity field \(\mathbf{u}(\mathbf{r})\) within a stationary state, are presented in the following. #### ii.1.1 Governing field equations The Poisson equation relates the electrostatic potential outside the colloidal particle to the free charge density \[\rho(\mathbf{r})=\sum_{i=1}^{N}z_{i}en_{i}(\mathbf{r}), \tag{3}\] via \[\nabla\cdot[\epsilon_{\mathrm{r}}(\mathbf{r})\nabla\phi(\mathbf{r})]=-\frac{1 }{\epsilon_{0}}\rho(\mathbf{r}), \tag{4}\] where the space dependence of the relative dielectric permittivity \(\epsilon_{\mathrm{r}}(\mathbf{r})\) is inherited from the thermal gradient, since the permittivity depends on temperature. Here \(n_{i}(\mathbf{r})\) denotes the local concentration of the ions and \(\epsilon_{0}\) is the vacuum permittivity. 
The current density of the ionic solutes is modeled phenomenologically along the lines of Onsager's linear-response relation between conjugate fluxes and forces [39] and reads \[\mathbf{j}_{i}(\mathbf{r}) = n_{i}(\mathbf{r})\mathbf{u}(\mathbf{r})-\mu_{i}^{0}z_{i}en_{i}( \mathbf{r})\nabla\phi(\mathbf{r}) \tag{5}\] \[-D_{i}(\mathbf{r})n_{i}(\mathbf{r})\left[S_{T}^{i}\nabla T( \mathbf{r})+\nabla\log n_{i}(\mathbf{r})\right].\] It accounts for the combined effects of advection, electric migration, as well as thermal and mass diffusion, where \(D_{i}(\mathbf{r})=\mu_{i}^{0}k_{\mathrm{B}}T(\mathbf{r})\) denotes the Einstein diffusion coefficients evaluated at the local temperature. Thus the assumption is that the ion mobilities \(\mu_{i}^{0}\) are temperature-independent and the Stokes-Einstein relation holds locally. The _ionic Soret coefficients_ \(S_{T}^{i}\) of the salt cations and anions comprise the thermophoretic response of the solutes due to hydration by surrounding water molecules [53; 54; 55] and a thermoelectric field [28; 41; 56] acting on the ions (see Appendix A). In principle, these Soret coefficients could also be temperature-dependent; however, we shall be interested only in the effects linear in the temperature gradient. Consequently we can evaluate them at the reference temperature \(T_{0}\). In the following, they are treated as known input parameters. In the stationary state, the currents are source-free and satisfy the extended Nernst-Planck equations \[\nabla\cdot\mathbf{j}_{i}(\mathbf{r})=0. \tag{6}\] In addition, we consider the momentum-balance equation for the solvent and shall neglect effects of inertia in the limit of small Reynolds number. It is known as the stationary Stokes equation for a Newtonian fluid \[\nabla P(\mathbf{r})-\eta\nabla^{2}\mathbf{u}(\mathbf{r})=\mathbf{f}^{\mathrm{ el}}(\mathbf{r}), \tag{7}\] accompanied by the incompressibility constraint \[\nabla\cdot\mathbf{u}(\mathbf{r})=0. 
\tag{8}\] The electric body force density is obtained as \[\mathbf{f}^{\mathrm{el}}(\mathbf{r})=-\rho(\mathbf{r})\nabla\phi(\mathbf{r})- \frac{\epsilon_{0}}{2}\mathbf{E}(\mathbf{r})^{2}\nabla\epsilon_{\mathrm{r}}( \mathbf{r}) \tag{9}\] from the divergence of the Korteweg-Helmholtz stress tensor for an electrically linear dielectric material [57; 58; 59]. The first term on the right-hand side (r.h.s.) of Eq. (9) denotes the electrostatic force density while the second is a dielectric contribution accounting for the polarization of the solvent in the local electric field \(\mathbf{E}(\mathbf{r})=-\nabla\phi(\mathbf{r})\). For an incompressible solvent, the electrostrictive contribution due to variations in the relative dielectric permittivity with respect to the solvent mass density \(\rho_{\mathrm{m}}\) as well as the hydrostatic pressure can be absorbed in an effective pressure [60] \[P(\mathbf{r})=p(\mathbf{r})-\frac{\epsilon_{0}}{2}\rho_{\mathrm{m}}\mathbf{E} (\mathbf{r})^{2}\left(\frac{\partial\epsilon_{\mathrm{r}}(\mathbf{r})}{\partial \rho_{\mathrm{m}}}\right)_{T}. \tag{10}\] Here \(p(\mathbf{r})\) denotes the hydrodynamic pressure and \(\eta\) is the viscosity of the solvent. We ignore effects arising from a possible temperature dependence of the viscosity. #### ii.1.2 Boundary conditions At the stationary (virtual) surface of the colloidal particle with hydrodynamic radius \(a\), the boundary conditions are specified by means of the unit normal \(\mathbf{n}\) pointing into the solvent. 
Then, by virtue of the electric Gauss law, the electric displacements in both the dielectric particle and the solvent are connected to the effective surface charge density \(\sigma(\mathbf{r})\) by \[\left[\epsilon_{\mathrm{r}}(\mathbf{r})\frac{\partial}{\partial n}\phi(\mathbf{r} )-\epsilon_{\mathrm{r}}^{\mathrm{in}}(\mathbf{r})\frac{\partial}{\partial n} \phi^{\mathrm{in}}(\mathbf{r})\right]\bigg{|}_{r=a}=-\frac{\sigma(\mathbf{r})}{ \epsilon_{0}}, \tag{11}\] where \(\epsilon_{\mathrm{r}}^{\mathrm{in}}(\mathbf{r})\) is the dielectric permittivity of the core material and \(\partial/\partial n=\mathbf{n}\cdot\nabla\) denotes the normal derivative at the surface. In principle, the potential inside the particle \(\phi^{\mathrm{in}}(\mathbf{r})\) has to be obtained from Laplace's equation \(\nabla\cdot\left[\epsilon_{\mathrm{r}}^{\mathrm{in}}(\mathbf{r})\nabla\phi^{ \mathrm{in}}(\mathbf{r})\right]=0\), together with the continuity condition \((\phi(\mathbf{r})-\phi^{\mathrm{in}}(\mathbf{r}))|_{r=a}=0\). However, the ratio of the dielectric permittivities is small for the particles of interest [61], such that we can neglect contributions from the electric field inside the particle. Furthermore, the electrolyte solution within the region between the solid particle surface and the slipping plane is assumed to be affected neither by the applied temperature gradient nor by the accompanying electric field, and displays no macroscopic motion. Consequently, electrochemical reactions, mostly from dissociation of surface functional groups or adsorption of ions, and surface conduction [62; 63; 64] due to possible lateral motion within the slipping plane, are absent, yielding a radially symmetric surface-charge density \(\sigma_{0}\) on the colloidal particle independent of the temperature. Then Eq. (11) simplifies to \[\left.\frac{\partial\phi(\mathbf{r})}{\partial n}\right|_{r=a}=-\frac{\sigma_ {0}}{\epsilon_{0}\epsilon_{\mathrm{r}}(\mathbf{r})}. 
\tag{12}\] Under these conditions, the ion currents together with the velocity normal to the particle vanish \[\mathbf{n}\cdot\mathbf{j}_{i}(\mathbf{r})|_{r=a} =0, \tag{13a}\] \[\mathbf{n}\cdot\mathbf{u}(\mathbf{r})|_{r=a} =0, \tag{13b}\] since ions cannot penetrate the slipping plane. The velocity obeys a Navier boundary condition [65] \[\mathbf{u}_{\mathrm{r}}(\mathbf{r})|_{r=a}=\frac{\lambda}{\eta}\left[\mathbf{ \sigma}^{\prime}(\mathbf{r})\cdot\mathbf{n}-\left(\mathbf{n}\cdot\mathbf{\sigma}^ {\prime}(\mathbf{r})\cdot\mathbf{n}\right)\mathbf{n}\right]|_{r=a}, \tag{14}\] linearly relating the tangential component of the electrolyte velocity \(\mathbf{u}_{\mathrm{t}}(\mathbf{r})=\mathbf{u}(\mathbf{r})-\left(\mathbf{u}( \mathbf{r})\cdot\mathbf{n}\right)\mathbf{n}\) to the shear stress tensor \(\mathbf{\sigma}^{\prime}(\mathbf{r})=\eta\left[\nabla\mathbf{u}(\mathbf{r})+ \left(\nabla\mathbf{u}(\mathbf{r})\right)^{\mathrm{T}}\right]\) at the slipping plane [48]. Here \(\lambda\) denotes the slip length, which we treat as a known input parameter. For \(\lambda=0\) the usual no-slip boundary condition is recovered. At large distances away from the colloidal particle within the electroneutral bulk region (not yet in the vicinity of the electrolyte domain boundary), the electric field approaches the thermoelectric field as a consequence of the thermoelectric force \(\mathbf{F}_{i}=z_{i}e\mathbf{E}^{\mathrm{th}}\) directly acting on the ions [19; 56]. To linear order in the thermal gradient the thermoelectric field is uniform \[\lim_{|\mathbf{r}|\rightarrow\infty}\nabla\phi(\mathbf{r})=-\mathbf{E}^{ \mathrm{th}}=\phi^{\mathrm{th}}\frac{\nabla T}{T_{0}}, \tag{15}\] where the response coefficient \(\phi^{\mathrm{th}}\) is referred to as the thermoelectric potential (see Appendix A). Furthermore, the ion concentrations approach their bulk behavior arising from the redistribution of the salt ions [41] due to the temperature gradient. 
To linear order in the thermal gradient (Appendix A), the ion concentrations behave asymptotically for \(|\mathbf{r}|\rightarrow\infty\) as \[n_{i}(\mathbf{r})\sim n_{i}^{\mathrm{b}}(\mathbf{r})=n_{i,0}^{\mathrm{b}}\left[ 1-S_{T}^{i}\mathbf{r}\cdot\nabla T\right]. \tag{16}\] This is a striking difference from other phoretic transport processes, such as diffusiophoresis [66] or electrophoresis [19], since there one avoids the interdependence of companion fields in the bulk, whereas in thermophoresis, the inherent coupling of the thermoelectric field and the gradient in ion concentrations has to be accounted for (see especially Eq. (14) in Appendix A). Finally, we have to specify the far-field stream velocity \[\lim_{|\mathbf{r}|\rightarrow\infty}\mathbf{u}(\mathbf{r})=-\mathbf{U}_{T}, \tag{17}\] by the requirement for phoretic motion that the total force acting on the colloidal particle vanishes [52]. There is no need to include a zero-torque constraint, as the problem displays axial symmetry. Here \(\mathbf{U}_{T}\) denotes the thermophoretic velocity attained by the particle under steady-state conditions. The calculation of its magnitude \(|\mathbf{U}_{T}|\) constitutes the goal of our investigations. ### Linear-response theory We are solely interested in the linear response of the system to an externally applied temperature gradient. Correspondingly, relative temperature changes over distances of the order of the extent of the colloid including its Debye layer, \(a+\kappa_{0}^{-1}\), are considered to be small, as characterized by the following condition \[\left(a+\kappa_{0}^{-1}\right)\frac{|\nabla T|}{T_{0}}\ll 1. 
\tag{18}\] Here the inverse (equilibrium) Debye screening length \(\kappa_{0}\) is defined via \[\kappa_{0}^{2}=\frac{1}{k_{\mathrm{B}}T_{0}\epsilon_{0}\epsilon_{\mathrm{r}}^{0}}\sum_{i=1}^{N}z_{i}^{2}e^{2}n_{i,0}^{\mathrm{b}}, \tag{19}\] with dielectric permittivity \(\epsilon_{\mathrm{r}}^{0}\) and constant bulk ion concentration \(n_{i,0}^{\mathrm{b}}\) evaluated at the reference temperature \(T_{0}\). In this case, the electrical double layer is only slightly distorted from its equilibrium configuration by the applied temperature gradient and the subsequent particle motion. This allows linearizing the governing nonlinear partial differential equations, together with the corresponding boundary conditions, in the perturbation with respect to the spherically symmetric reference state, which corresponds to thermal equilibrium with a uniform temperature \(T_{0}\), such that no solvent flow occurs, \(\mathbf{u}_{0}=0\). Consequently, we can write the field variables within linear response as \[\mathbf{u}(\mathbf{r}) =\delta\mathbf{u}(\mathbf{r}), \tag{20a}\] \[n_{i}(\mathbf{r}) =n_{i}^{0}(r)+\delta n_{i}(\mathbf{r}),\] (20b) \[P(\mathbf{r}) =P_{0}(r)+\delta P(\mathbf{r}),\] (20c) \[\phi(\mathbf{r}) =\phi_{0}(r)+\delta\phi(\mathbf{r}), \tag{20d}\] where \(n_{i}^{0}(r),P_{0}(r)\) and \(\phi_{0}(r)\) denote the reference quantities with \(r=|\mathbf{r}|\), and the perturbation terms are proportional to \(|\nabla T|\) to lowest order. The thermophoretic velocity is thus linearly related to the weak temperature gradient by \[\mathbf{U}_{T}=-D_{T}\nabla T, \tag{21}\] defining the thermal diffusion coefficient as \(D_{T}\). Consequently the calculation of \(|\mathbf{U}_{T}|\) to linear order in the temperature gradients is equivalent to determining \(D_{T}\). #### ii.2.1 Reference system Substituting now the expansion [Eqs. (20)] into the nonlinear field equations [Eqs. 
(4) and (6)-(8)], we arrive, to zeroth order in the perturbation, at the equilibrium electrokinetic equations \[0 =\nabla^{2}\phi_{0}(r)+\frac{1}{\epsilon_{0}\epsilon_{\mathrm{r}}^{0}}\rho_{0}(r), \tag{22a}\] \[0 =\nabla P_{0}(r)+\rho_{0}(r)\nabla\phi_{0}(r),\] (22b) \[0 =\nabla\cdot\left[\mu_{i}^{0}z_{i}en_{i}^{0}(r)\nabla\phi_{0}(r)+D_{i}^{0}\nabla n_{i}^{0}(r)\right], \tag{22c}\] with charge density \(\rho_{0}(r)=\sum_{i}z_{i}en_{i}^{0}(r)\) and spatially uniform diffusion coefficients \(D_{i}^{0}=\mu_{i}^{0}k_{\mathrm{B}}T_{0}\). A solution for the continuity equation [Eq. (22c)] exists for vanishing fluxes, \(\mathbf{j}_{i}^{0}(r)=0\), recovering the Boltzmann distribution \[n_{i}^{0}(r)=n_{i,0}^{\mathrm{b}}\exp\left[-\frac{z_{i}e\phi_{0}(r)}{k_{\mathrm{B}}T_{0}}\right], \tag{23}\] where the potential vanishes in the electroneutral bulk, \[\lim_{r\to\infty}\phi_{0}(r)=0. \tag{24}\] Inserting now this ion distribution into Eq. (22a) and using the spherical symmetry yields the nonlinear Poisson-Boltzmann equation \[\frac{1}{r^{2}}\frac{\mathrm{d}}{\mathrm{d}r}\left[r^{2}\frac{\mathrm{d}}{ \mathrm{d}r}\phi_{0}(r)\right]=-\frac{\rho_{0}(r)}{\epsilon_{0}\epsilon_{ \mathrm{r}}^{0}}, \tag{25}\] determining the overall electrostatic potential [67]. The corresponding boundary condition [Eq. (12)] reduces to \[\frac{\mathrm{d}\phi_{0}(r)}{\mathrm{d}r}\bigg{|}_{r=a}=-\frac{\sigma_{0}}{ \epsilon_{0}\epsilon_{\mathrm{r}}^{0}}. \tag{26}\] Furthermore, a local balance between pressure gradients and electric body forces [Eq. (22b)] maintains a spherically symmetric solvent distribution around the colloidal particle with local solute (osmotic) pressure \[P_{0}(r)=k_{\mathrm{B}}T_{0}\sum_{i=1}^{N}\left(n_{i}^{0}(r)-n_{i,0}^{\mathrm{b}}\right), \tag{27}\] and vanishing pressure at infinity, \(P_{0}(r)\to 0\) as \(r\to\infty\). 
#### II.2.2 Linearized equations

Retaining only first-order perturbation terms, a set of coupled linear field equations is obtained: \[0=\nabla^{2}\delta\phi(\mathbf{r})+\frac{1}{\epsilon_{0}\epsilon_{\mathrm{r}}^{0}}\delta\rho(\mathbf{r})-\alpha\nabla\cdot\left(\frac{\mathbf{r}\cdot\nabla T}{T_{0}}\nabla\phi_{0}(r)\right), \tag{28a}\] \[0=\eta\nabla^{2}\mathbf{u}(\mathbf{r})-\nabla\delta P(\mathbf{r})-\rho_{0}(r)\nabla\delta\phi(\mathbf{r})-\delta\rho(\mathbf{r})\nabla\phi_{0}(r)+\frac{1}{2}\alpha\epsilon_{0}\epsilon_{\mathrm{r}}^{0}[\nabla\phi_{0}(r)]^{2}\frac{\nabla T}{T_{0}}, \tag{28b}\] \[0=\nabla\cdot\left[n_{i}^{0}(r)\mathbf{u}(\mathbf{r})-D_{i}^{0}n_{i}^{0}(r)\nabla\left(\frac{\delta n_{i}(\mathbf{r})}{n_{i}^{0}(r)}\right)-\mu_{i}^{0}z_{i}en_{i}^{0}(r)\nabla\delta\phi(\mathbf{r})-D_{i}^{0}n_{i}^{0}(r)S_{T}^{i}\nabla T-D_{i}^{0}\frac{\mathbf{r}\cdot\nabla T}{T_{0}}\nabla n_{i}^{0}(r)\right], \tag{28c}\] with charge-density variations \(\delta\rho(\mathbf{r})=\sum_{i}z_{i}e\delta n_{i}(\mathbf{r})\). Here, gradients in the dielectric permittivity have been evaluated as \(\nabla\epsilon_{\mathrm{r}}(\mathbf{r})=-\alpha\epsilon_{\mathrm{r}}^{0}\nabla T/T_{0}\) by expanding the dielectric permittivity in the temperature gradient, \[\epsilon_{\mathrm{r}}(\mathbf{r})=\epsilon_{\mathrm{r}}^{0}+\left(\frac{\partial\epsilon_{\mathrm{r}}}{\partial T}\right)\mathbf{r}\cdot\nabla T=\epsilon_{\mathrm{r}}^{0}-\frac{\alpha\epsilon_{\mathrm{r}}^{0}}{T_{0}}\mathbf{r}\cdot\nabla T, \tag{29}\] with logarithmic derivative \(\alpha=-\partial\ln\epsilon_{\mathrm{r}}/\partial\ln T\). This set of generalized electrokinetic equations [Eqs. (28)] for thermophoresis requires the solution of the full nonlinear Poisson-Boltzmann equation [Eq. (25)] as input.
In principle, these coupled partial differential equations constitute a possible starting point for theoretical investigations of thermophoresis. However, to streamline the further analysis, we follow Ref. [38] and introduce a set of ionic potential functions \[\Omega_{i}(\mathbf{r})=\frac{\delta n_{i}(\mathbf{r})}{n_{i}^{0}(r)}+S_{T}^{i}\mathbf{r}\cdot\nabla T+\frac{z_{i}e\delta\phi(\mathbf{r})}{k_{\mathrm{B}}T_{0}}-\frac{z_{i}e\left(\phi_{0}(r)+\phi^{\mathrm{th}}\right)}{k_{\mathrm{B}}T_{0}}\frac{\mathbf{r}\cdot\nabla T}{T_{0}}, \tag{30}\] which is suggested by the linearization of a Boltzmann-type ansatz \[n_{i}(\mathbf{r})=n_{i,0}^{\mathrm{b}}\exp\left[-\frac{z_{i}e\phi(\mathbf{r})}{k_{\mathrm{B}}T(\mathbf{r})}+\Omega_{i}(\mathbf{r})-S_{T}^{i}\mathbf{r}\cdot\nabla T(\mathbf{r})-\frac{z_{i}e\mathbf{r}\cdot\mathbf{E}^{\mathrm{th}}}{k_{\mathrm{B}}T(\mathbf{r})}\right], \tag{31}\] for the ion concentrations. Here the first term in the exponential is of a local-equilibrium form, the last two terms anticipate the thermophoretic motion of the ionic solutes in the bulk (see Appendix A), and \(\Omega_{i}(\mathbf{r})\) parametrizes the residual, genuinely nonequilibrium effects. Then Eq. (28c) yields \[\nabla^{2}\Omega_{i}(\mathbf{r})-\frac{z_{i}e\nabla\phi_{0}(r)}{k_{\mathrm{B}}T_{0}}\cdot\left[\nabla\Omega_{i}(\mathbf{r})-\frac{\mathbf{u}(\mathbf{r})}{D_{i}^{0}}\right]=-\frac{z_{i}e\nabla\phi_{0}(r)}{k_{\mathrm{B}}T_{0}}\cdot\left(1-\frac{z_{i}e[\phi_{0}(r)+\phi^{\mathrm{th}}]}{k_{\mathrm{B}}T_{0}}\right)\frac{\nabla T}{T_{0}}, \tag{32}\] after spelling out the divergence. Gradients in the perturbed pressure \(\delta P(\mathbf{r})\) and the electrostatic potential \(\delta\phi(\mathbf{r})\) are eliminated by taking the curl of Eq.
(28b), leading to \[\eta\nabla^{2}(\nabla\times\mathbf{u}(\mathbf{r}))-\sum_{i=1}^{N}z_{i}en_{i}^{0}(r)\nabla\Omega_{i}(\mathbf{r})\times\nabla\phi_{0}(r)=\sum_{i=1}^{N}z_{i}en_{i}^{0}(r)\left[\frac{z_{i}e(\phi_{0}(r)+\phi^{\mathrm{th}})}{k_{\mathrm{B}}T_{0}}-S_{T}^{i}T_{0}\right]\frac{\nabla T}{T_{0}}\times\nabla\phi_{0}(r)-\frac{1}{2}\alpha\epsilon_{0}\epsilon_{\mathrm{r}}^{0}\nabla|\nabla\phi_{0}(r)|^{2}\times\frac{\nabla T}{T_{0}}. \tag{33}\] The introduction of the potential function \(\Omega_{i}(\mathbf{r})\) considerably simplifies the task of computing the thermal diffusion coefficient \(D_{T}\), since it decouples Eqs. (28b) and (28c) from the Poisson Eq. (28a). Note that the r.h.s. of Eqs. (32) and (33) depend (nonlinearly) on the reference system, while the dependence on the unknowns \(\Omega_{i}(\mathbf{r})\) and \(\mathbf{u}(\mathbf{r})\) on the left-hand side (l.h.s.) is linear by construction. To obtain a complete specification of the thermophoresis problem, it still remains to determine the boundary conditions for the perturbed field quantities \(\mathbf{u}(\mathbf{r})\) and \(\Omega_{i}(\mathbf{r})\). At the colloidal surface, we impose the Navier condition for the solvent velocity [Eqs. (14) and (13b)], together with a vanishing radial ion current [Eq. (13a)], yielding within linear response \[\frac{\partial\Omega_{i}(\mathbf{r})}{\partial n}\bigg{|}_{r=a}+\frac{z_{i}e[\phi_{0}(r)+\phi^{\mathrm{th}}]}{k_{\mathrm{B}}T_{0}}\frac{\mathbf{n}\cdot\nabla T}{T_{0}}\bigg{|}_{r=a}=0. \tag{34}\] In the far field, the velocity obeys Eq. (17) to lowest order in the temperature gradients. Furthermore, by means of Eqs. (15) and (24) the perturbed potential behaves asymptotically as \[\delta\phi(\mathbf{r})\sim\phi^{\mathrm{th}}\frac{\mathbf{r}\cdot\nabla T}{T_{0}}\quad\text{for}\quad|\mathbf{r}|\to\infty, \tag{35}\] as a consequence of the thermoelectric migration [Eq. (13)]. In addition, according to Eq.
(16) the perturbation in ion concentrations should tend to \[\frac{\delta n_{i}(\mathbf{r})}{n_{i,0}^{\mathrm{b}}}\sim-S_{T}^{i}\,\mathbf{r}\cdot\nabla T\quad\text{for}\quad|\mathbf{r}|\to\infty, \tag{36}\] arising from the gradients in bulk concentration. Hence, it follows from Eq. (30) that we have to impose \[\lim_{|\mathbf{r}|\to\infty}\Omega_{i}(\mathbf{r})=0, \tag{37}\] within the bulk region. These boundary conditions, together with the corresponding Eqs. (32) and (33), enable us to calculate the response of the isolated colloidal particle to the small temperature gradient and its accompanying fields. In the next sections we shall show that the asymptotic behavior of the functions \(\mathbf{u}(\mathbf{r})\) and \(\Omega_{i}(\mathbf{r})\) completely determines the linear response, i.e. the thermal diffusion coefficient; the linearized Poisson equation [Eq. (28a)] is redundant.

### Different contributions to thermophoresis

In principle, the thermal diffusion coefficient \(D_{T}\) is determined by four contributions. The first is due to the electrostatic energy density of the different ionic solutes within the temperature-induced asymmetric Debye double layer [19] and is represented by the term \(\propto\sum_{i}^{N}z_{i}^{2}e^{2}n_{i}^{0}(r)\phi_{0}(r)\) in Eq. (33). A second stems from polarization effects of the solvent in the local electric field and can be interpreted as a hydration enthalpy density [19; 28; 59]. It corresponds to the last term on the r.h.s. of Eq. (33). The last two contributions originate from the thermophoretic behavior of the ions in the bulk solution encoded in the term \(\propto\sum_{i}^{N}z_{i}en_{i}^{0}(r)[z_{i}e\phi^{\mathrm{th}}/k_{\mathrm{B}}T_{0}-S_{T}^{i}T_{0}]\) and the far-field boundary conditions [Eqs. (35) and (36)]. We refer to it as the ion hydration effect.
More specifically, we define the contribution arising from the boundary condition for the disturbed electrostatic potential only as the electrophoretic contribution to ion hydration, as it is reminiscent of the electrophoresis problem. Since the field equations for the perturbed fields are linear, we can disentangle the different contributions by discarding inhomogeneities or changing the far-field boundary conditions. For example, the electrostatic contribution is obtained by keeping in Eq. (33) only the relevant energy-density terms and imposing the far-field boundary conditions \[\lim_{|\mathbf{r}|\to\infty}\delta\phi(\mathbf{r})=0, \tag{38a}\] \[\lim_{|\mathbf{r}|\to\infty}\frac{\delta n_{i}(\mathbf{r})}{n_{i,0}^{\mathrm{b}}}=0. \tag{38b}\] Similarly, by retaining the original boundary conditions and artificially switching off the relevant terms related to the electrostatic energy, contributions from ion and colloid hydration can be compared.

### Decomposition of the problem

The appearance of the thermal diffusion coefficient \(D_{T}\) in the far-field boundary condition [Eq. (17)] for the velocity makes the problem of solving the governing generalized electrokinetic equations intricate. Using the technique of O'Brien and White [40], we circumvent this difficulty by exploiting the linearity of the derived field equations together with the corresponding boundary conditions and writing the overall solution as a superposition of the solutions for the following two simpler auxiliary problems: 1. The spherical particle held fixed in a flow field \(-\mathbf{U}\) in the absence of any applied temperature gradient \(\nabla T\), yielding the far-field boundary conditions \[\lim_{|\mathbf{r}|\rightarrow\infty}\mathbf{u}(\mathbf{r})=-\mathbf{U}, \tag{39a}\] \[\lim_{|\mathbf{r}|\rightarrow\infty}\delta\phi(\mathbf{r})=0, \tag{39b}\] \[\lim_{|\mathbf{r}|\rightarrow\infty}\frac{\delta n_{i}(\mathbf{r})}{n_{i,0}^{\mathrm{b}}}=0. \tag{39c}\] 2.
The spherical particle held fixed in a temperature gradient \(\nabla T\) in a quiescent electrolyte far away from the colloidal particle, with far-field boundary conditions \[\lim_{|\mathbf{r}|\rightarrow\infty}\mathbf{u}(\mathbf{r})=0, \tag{40a}\] \[\delta\phi(\mathbf{r})\sim-\mathbf{r}\cdot\mathbf{E}^{\mathrm{th}}\quad\mathrm{for}\quad|\mathbf{r}|\rightarrow\infty, \tag{40b}\] \[\frac{\delta n_{i}(\mathbf{r})}{n_{i,0}^{\mathrm{b}}}\sim-S_{T}^{i}\mathbf{r}\cdot\nabla T\quad\mathrm{for}\quad|\mathbf{r}|\rightarrow\infty. \tag{40c}\] The sum of the solutions to Eqs. (32) and (33) for each of these problems then satisfies the desired far-field boundary condition [Eq. (37)]. Concomitantly, we have to ensure the constraint that for thermophoretic motion the net force acting on the particle is zero [52]. Within linear response, the forces required to hold the colloidal particle fixed for each problem read \[\mathbf{F}^{(1)}=\gamma^{(1)}\mathbf{U}, \tag{41a}\] \[\mathbf{F}^{(2)}=\gamma^{(2)}\frac{\nabla T}{T_{0}}, \tag{41b}\] where \(\gamma^{(1)}\) and \(\gamma^{(2)}\) are constants to be determined. The superposition of the forces then gives rise to a vanishing net force, \(\mathbf{F}=\mathbf{F}^{(1)}+\mathbf{F}^{(2)}=0\), provided we choose \[\mathbf{U}=-\frac{\gamma^{(2)}}{\gamma^{(1)}}\frac{\nabla T}{T_{0}}. \tag{42}\] Thus, by comparison with Eq. (21) the thermal diffusion coefficient is read off as \[D_{T}=\frac{\gamma^{(2)}}{\gamma^{(1)}}\frac{1}{T_{0}}. \tag{43}\] Furthermore, this method also yields the diffusion coefficient \(D=k_{\mathrm{B}}T_{0}/\gamma^{(1)}\) of a charged spherical particle from the solution to problem (1).

### Symmetry considerations

The reference system without gradients exhibits spherical symmetry, while both auxiliary problems display only axial symmetry due to the imposed perturbations either by the flow \(\mathbf{U}\) or by the thermal gradient \(\nabla T\).
We choose the origin of the coordinate system to be at the center of the colloid and the \(z\)-direction to be aligned with the flow or with the thermal gradient, respectively (see Fig. 1). Thus the temperature is represented as \[T(\mathbf{r})=T_{0}+\mathbf{r}\cdot\nabla T=T_{0}+|\nabla T|r\cos\vartheta. \tag{44}\] Furthermore, both auxiliary problems (1) and (2) are discussed in parallel by introducing \[\mathbf{X}=\begin{cases}\mathbf{U},&(1)\\ \nabla T/T_{0},&(2).\end{cases} \tag{45}\] To linear order in \(\mathbf{X}\) all scalar potentials are then of the form \(f(r)(\mathbf{r}/r)\cdot\mathbf{X}\) with some spherically symmetric function \(f(r)\), while no pseudo-scalar fields can be constructed. Accordingly, a convenient representation of the solenoidal velocity field is introduced by \[\mathbf{u}(\mathbf{r})=\nabla\times\left[\mathbf{r}\psi(\mathbf{r})\right]-\nabla\times\left[\mathbf{r}\times\nabla\chi(\mathbf{r})\right], \tag{46}\] in terms of two scalar functions, the toroidal potential \(\psi(\mathbf{r})\) and the poloidal Debye potential \(\chi(\mathbf{r})\) [68].

Figure 1: Schematics (not to scale) of the colloidal particle with hydrodynamic radius \(a\) carrying a surface charge density \(\sigma\) in a particle-fixed reference frame. A small temperature gradient \(\nabla T\) is applied from outside. The short-dashed line denotes the outer edge of the slightly distorted Debye double layer of width \(1/\kappa\), while the dotted line corresponds to the fluid-domain boundary located at a macroscopic distance from the colloid. The integration boundary \(\partial S\) in the electroneutral bulk is shown as a thick dotted line. The solvent displays a spatially varying dielectric constant \(\epsilon(\mathbf{r})\) due to the temperature gradient.
Owing to the fact that no pseudo-scalar fields arise within linear response with respect to \(\mathbf{X}\), the velocity fields can be written as \[\mathbf{u}(\mathbf{r})=-\nabla\times\left[\mathbf{r}\times\nabla\chi(\mathbf{r})\right]-\mathbf{U},\quad(1) \tag{47a}\] \[\mathbf{u}(\mathbf{r})=-\nabla\times\left[\mathbf{r}\times\nabla\chi(\mathbf{r})\right].\quad(2) \tag{47b}\] Finally, we express the ion potentials and the poloidal Debye potential as \[\Omega_{i}(\mathbf{r})=\omega_{i}(r)(\mathbf{r}/r)\cdot\mathbf{X}, \tag{48a}\] \[\chi(\mathbf{r})=R(r)(\mathbf{r}/r)\cdot\mathbf{X}, \tag{48b}\] with radially symmetric unknowns \(\omega_{i}(r)\) and \(R(r)\) for each of the two problems. With these symmetry-adapted forms for \(\mathbf{u}(\mathbf{r})\) and \(\Omega_{i}(\mathbf{r})\), the linearized partial differential equations [Eqs. (32) and (33)] reduce to a set of coupled linear ordinary differential equations, drastically simplifying the task of calculating the thermal diffusion coefficient.

### Calculating the force acting on the colloid

In order to obtain the thermal diffusion coefficient, we first have to determine the forces acting on the colloidal particle for each problem (1) and (2). A common procedure is to integrate viscous and electrical traction forces over the surface of the spherical particle, relying on the calculation of gradients in the potential and velocity. However, we avoid this cumbersome procedure by again following a method suggested by O'Brien and White [40] for the electrophoresis problem and compute the forces from the asymptotic form of the velocity field \(\mathbf{u}_{\text{as}}(\mathbf{r})\) outside the Debye double layer in the bulk solution. This is possible since in the momentum balance equation neither inertial terms nor body forces enter; rather, all forces derive from a stress tensor. Thus, by Gauss' theorem the total force on the colloid is the same as the total force on any concentric sphere containing the colloid.
At large radii, this force will be due only to the viscous drag, since forces due to electric fields either rapidly decay or cancel upon integrating over the sphere. Another convenient aspect of this approach is that it does not require computing the disturbances in the potential \(\delta\phi(\mathbf{r})\). Hence, we consider a large sphere \(S\) enclosing the particle and the Debye double layer. Its radius has been taken sufficiently large in order to enclose the region where the charge density \(\rho(\mathbf{r})\) is non-negligible, since in the bulk solution local charge neutrality holds (see Fig. 1). Consequently, the _total_ electric force on the combined system becomes zero and the external forces \(\mathbf{F}\) for problems (1) and (2) are counterbalanced by integrating viscous traction forces over the surface \(\partial S\) of the sphere, \[\mathbf{F}=-\int_{\partial S}\mathbf{\sigma}(\mathbf{r})\cdot\mathbf{n}\,\mathrm{d}S, \tag{49}\] where \[\mathbf{\sigma}(\mathbf{r})=-P(\mathbf{r})\mathbb{I}+\eta\left[\nabla\mathbf{u}(\mathbf{r})+\left(\nabla\mathbf{u}(\mathbf{r})\right)^{T}\right], \tag{50}\] denotes the viscous stress tensor for the respective problems. Next, we show how the velocity and pressure fields behave asymptotically at large distances and calculate the corresponding forces. The electric forces in Eq. (28b) decay rapidly as \(r\to\infty\), leading to a simplified momentum balance equation \[-\eta\nabla\times\nabla\times\mathbf{u}(\mathbf{r})-\nabla\delta P(\mathbf{r})=0, \tag{51}\] together with the corresponding boundary conditions [Eqs. (39a) and (40a)] for each problem in turn. Since charge neutrality is assumed to hold in bulk, we can also safely neglect the term \(\propto\rho_{0}(r)\nabla\delta\phi(\mathbf{r})\) in Eq. (28b). Taking the curl of Eq. (51) eliminates the pressure and, using the representation of the velocity field in terms of the poloidal Debye potential [Eqs.
(47)], yields for the scalar function \(R=R(r)\) the ordinary linear differential equation \[\mathscr{L}(\mathscr{L}(R))(r)=0, \tag{52}\] with the differential operator \[\mathscr{L}=\frac{\mathrm{d}^{2}}{\mathrm{d}r^{2}}+\frac{2}{r}\frac{\mathrm{d }}{\mathrm{d}r}-\frac{2}{r^{2}}. \tag{53}\] An asymptotic solution then reads \[R(r)\sim C_{N+1}+\frac{C_{N+2}}{r^{2}}\quad\text{for}\quad r\to\infty, \tag{54}\] with constants \(C_{N+1},C_{N+2}\) to be determined, where the notation is adopted from Ref. [40]. By symmetry and linearity in \(\mathbf{X}\) the perturbation in the scalar pressure field assumes the form \(\delta P(\mathbf{r})=\pi(r)(\mathbf{r}/r)\cdot\mathbf{X}\) with a radially symmetric field \(\pi(r)\) which can be calculated for large distances from Eq. (51) to \[\pi(r)\sim\eta\frac{\mathrm{d}}{\mathrm{d}r}\left(r\mathscr{L}(R)\right)= \frac{2\eta C_{N+1}}{r^{2}}\quad\text{for}\quad r\to\infty. \tag{55}\] The magnitude of the force \(\mathbf{F}=F\mathbf{X}/|\mathbf{X}|\) exerted by the fluid on the particle \[F=-\int\mathrm{d}S\left(\sigma_{rr}(\mathbf{r})\cos\vartheta-\sigma_{\vartheta r }(\mathbf{r})\sin\vartheta\right), \tag{56}\] is now evaluated from the viscous stresses in spherical coordinates \[\sigma_{rr}(\mathbf{r}) =-P(\mathbf{r})+2\eta\frac{\partial u_{r}(\mathbf{r})}{\partial r}\] \[=-\eta\left(\frac{6C_{N+1}}{r^{2}}+\frac{12C_{N+2}}{r^{4}}\right) |\mathbf{X}|\cos\vartheta, \tag{57a}\] \[\sigma_{\vartheta r}(\mathbf{r}) =\eta\left[\frac{1}{r}\frac{\partial u_{r}(\mathbf{r})}{\partial \vartheta}+\frac{\partial u_{\vartheta}(\mathbf{r})}{\partial r}-\frac{u_{ \vartheta}(\mathbf{r})}{r}\right]\] \[=-\eta\frac{6C_{N+2}}{r^{4}}|\mathbf{X}|\sin\vartheta. \tag{57b}\] We thus arrive at \[\mathbf{F}=8\pi\eta C_{N+1}\mathbf{X}, \tag{58}\] and consequently it follows from Eq. 
(43) that \[D_{T}=\frac{C_{N+1}^{(2)}}{C_{N+1}^{(1)}}\frac{1}{T_{0}}, \tag{59}\] where the constants \(C_{N+1}^{(1)}\) and \(C_{N+1}^{(2)}\) have to be extracted from the asymptotic behavior of \(R(r)\) [Eq. (54)] for problems (1) and (2). As an additional result, we obtain the diffusion coefficient for the particle as \(D=k_{\mathrm{B}}T_{0}/(8\pi\eta C_{N+1}^{(1)})\).

### Nondimensional formulation and reference scales

We employ a dimensionless formulation, measuring lengths in units of the particle radius \(a\) and electrostatic potentials in units of the thermal voltage \(k_{\mathrm{B}}T_{0}/e\). The Poisson equation [Eq. (28a)] then suggests measuring surface charge densities in units of \(\epsilon_{0}\epsilon_{\mathrm{r}}^{0}k_{\mathrm{B}}T_{0}/ae\), while the viscosity in Stokes' equation [Eq. (28b)] sets the unit of velocity to \(U_{0}=\epsilon_{0}\epsilon_{\mathrm{r}}^{0}(k_{\mathrm{B}}T_{0})^{2}/(e^{2}\eta a)\). Rather than using dimensionless concentrations \(n_{i}(r)a^{3}\), we follow tradition and define the dimensionless concentrations as \(n_{i}(r)/2I\) (and similarly for the reference concentrations \(n_{i,0}^{\mathrm{b}}/2I\)), with the constant ionic strength in the bulk solution \[I=\frac{1}{2}\sum_{i=1}^{N}z_{i}^{2}n_{i,0}^{\mathrm{b}}. \tag{60}\] For a monovalent salt, assuming completely dissociated ions, the dimensionless concentrations simplify to \(n_{+,0}^{\mathrm{b}}/2I=n_{-,0}^{\mathrm{b}}/2I=1/2\) for cations (\(+\)) and anions (\(-\)), as the valences evaluate to \(\pm 1\). Similar expressions can also be found for divalent or trivalent salts. Consequently, this renders the problem independent of the equilibrium bulk ion concentrations, except through the dimensionless inverse Debye screening length \(\kappa_{0}\). It characterizes the limiting cases of a thin (\(\kappa_{0}\gg 1\)), respectively wide (\(\kappa_{0}\ll 1\)), double layer as compared to the particle radius \(a\). Once we fix the dimension of the particle, \(\kappa_{0}\) can only vary with the ionic strength \(I\).
Then the Poisson-Boltzmann equation for the dimensionless equilibrium potential \(\phi_{0}(r)\) reads \[\frac{1}{r^{2}}\frac{\mathrm{d}}{\mathrm{d}r}\left[r^{2}\frac{\mathrm{d}}{ \mathrm{d}r}\phi_{0}(r)\right]=-\kappa_{0}^{2}\sum_{i=1}^{N}z_{i}n_{i,0}^{ \mathrm{b}}\exp\left[-z_{i}\phi_{0}(r)\right], \tag{61}\] subject to the boundary conditions \[\lim_{r\to\infty}\phi_{0}(r) =0, \tag{62a}\] \[\left.\frac{\mathrm{d}\phi_{0}(r)}{\mathrm{d}r}\right|_{r=1} =-\sigma_{0}. \tag{62b}\] Here \(\sigma_{0}\) denotes the dimensionless bare colloidal surface potential. Further, using the symmetry-adapted ansatz for the ionic potential and the velocity field [Eqs. (47) and (48)], we obtain from Eqs. (32) and (33) the coupled linear ODEs in dimensionless form \[\begin{split}&\mathscr{L}\omega_{i}(r)-z_{i}\frac{\mathrm{d}\phi_{0}(r )}{\mathrm{d}r}\left[\frac{\mathrm{d}\omega_{i}(r)}{\mathrm{d}r}-\mathrm{Pe}_{ i}\frac{2R(r)}{r}\right]=\\ &\quad\quad z_{i}\frac{\mathrm{d}\phi_{0}(r)}{\mathrm{d}r}\begin{cases} \mathrm{Pe}_{i},&(1)\\ z_{i}\left[\phi_{0}(r)+\phi^{\mathrm{th}}\right]-1,&(2),\end{cases}\\ &\mathscr{L}(\mathscr{L}R)(r)+\kappa_{0}^{2}\frac{\mathrm{d}\phi_{0}(r)}{ \mathrm{d}r}\sum_{i=1}^{N}z_{i}n_{i}^{0}(r)\frac{\omega_{i}(r)}{r}=\\ &-\kappa_{0}^{2}\frac{\mathrm{d}\phi_{0}(r)}{\mathrm{d}r}\sum_{i=1} ^{N}z_{i}n_{i}^{0}(r)\left[\begin{cases}0,&(1)\\ z_{i}\phi_{0}(r)+z_{i}\phi^{\mathrm{th}}-S_{T}^{\mathrm{i}}T_{0},&(2)\end{cases} \right]\\ &\qquad\qquad-\begin{cases}0,&(1)\\ \alpha\frac{\mathrm{d}\phi_{0}(r)}{\mathrm{d}r}\frac{\mathrm{d}^{2}\phi_{0}(r) }{\mathrm{d}r^{2}},&(2),\end{cases}\end{split} \tag{63b}\] for the nondimensional functions \(\omega_{i}(r)\) and \(R(r)\). In the preceding equations, we have introduced the ionic Peclet number [61] \[\mathrm{Pe}_{i}=\frac{U_{0}a}{D_{i}^{0}}, \tag{64}\] quantifying the ratio between convective and diffusive ion transport. 
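For orientation, the reference scales just introduced can be evaluated numerically. The sketch below computes the velocity unit \(U_{0}=\epsilon_{0}\epsilon_{\mathrm{r}}^{0}(k_{\mathrm{B}}T_{0})^{2}/e^{2}\eta a\) and the ionic Péclet number of Eq. (64) for assumed, water-like parameter values and an illustrative particle radius; none of these numbers are taken from this work.

```python
e, kB = 1.602176634e-19, 1.380649e-23
eps0, eps_r = 8.8541878128e-12, 78.5   # assumed water-like permittivity
T0 = 298.0                             # reference temperature [K] (assumed)
eta = 8.9e-4                           # solvent viscosity [Pa s] (water-like)
a = 100e-9                             # particle radius [m] (illustrative)
Di = 2.0e-9                            # ionic diffusivity [m^2/s] (typical)

U0 = eps0 * eps_r * (kB * T0)**2 / (e**2 * eta * a)   # velocity unit
Pe = U0 * a / Di                                      # ionic Peclet number, Eq. (64)
print(f"U0 = {U0*1e3:.2f} mm/s, Pe = {Pe:.2f}")       # U0 = 5.15 mm/s, Pe = 0.26
```

The Péclet number of order unity indicates that ion advection cannot in general be neglected against diffusion for such parameters.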
The corresponding far-field boundary conditions translate to \[\lim_{r\to\infty}\omega_{i}(r)=0, \tag{65a}\] \[\lim_{r\to\infty}\frac{R(r)}{r}=0, \tag{65b}\] \[\lim_{r\to\infty}\frac{\mathrm{d}R(r)}{\mathrm{d}r}=0, \tag{65c}\] and at the surface of the colloidal particle, the boundary conditions assume the form \[\frac{\mathrm{d}\omega_{i}(r)}{\mathrm{d}r}\bigg{|}_{r=1}=\begin{cases}0,&(1)\\ -z_{i}[\phi_{0}(r)|_{r=1}+\phi^{\mathrm{th}}],&(2),\end{cases} \tag{66a}\] \[\frac{R(r)}{r}\bigg{|}_{r=1}=\begin{cases}1/2,&(1)\\ 0,&(2),\end{cases} \tag{66b}\] \[\frac{\mathrm{d}R(r)}{\mathrm{d}r}\bigg{|}_{r=1}=-\lambda\frac{\mathrm{d}^{2}R(r)}{\mathrm{d}r^{2}}\bigg{|}_{r=1}=\begin{cases}1/2,&(1)\\ 0,&(2).\end{cases} \tag{66c}\] Eventually, these equations are solved numerically to determine the rescaled thermal diffusion coefficient \(D_{T}(\sigma_{0},\kappa_{0},\lambda)\,3T_{0}/2U_{0}a\) as a dimensionless function of the rescaled bare colloidal surface potential \(\sigma_{0}\), the normalized inverse Debye width \(\kappa_{0}\) and the reduced slip length \(\lambda\) for different salt species. Similar to the electrophoresis problem [40], the additional factor of \(3/2\) is introduced for a convenient comparison with other theoretical approaches [19; 38].

## III Numerical solution of the differential equations

In this section, we describe the numerical methods employed to obtain approximate solutions of the ODEs in dimensionless form, as elaborated in the previous subsection, for the relevant functions \(\phi_{0}(r)\), \(\omega_{i}(r)\) and \(R(r)\). The Poisson-Boltzmann equation is solved relying on a Chebyshev spectral collocation method [69; 70]. For the coupled linear ODEs [Eqs. (63)] a shooting method [71] together with asymptotic matching is applied, adapting the solution procedure of O'Brien and White [40] for the electrophoresis problem.
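Before turning to the numerics, the dimensionless formulation admits a closed-form consistency check in the uncharged, no-slip limit of problem (1): all electrostatic source terms vanish, so the bounded solution of \(\mathscr{L}(\mathscr{L}R)=0\) is \(R(r)=C_{N+1}+C_{N+2}/r^{2}\) everywhere outside the particle, and the surface conditions [Eqs. (66b) and (66c) with \(\lambda=0\)] fix the two constants. The force \(F=8\pi\eta C_{N+1}U\) must then reduce to the Stokes drag \(6\pi\eta aU\) (with \(a=1\)). A minimal sketch of this check, not taken from the paper:

```python
import numpy as np

# Uncharged sphere, problem (1), no slip (lambda = 0): L(L R) = 0 with the
# bounded solution R(r) = C + D/r^2 [cf. Eqs. (65b), (65c)].
# Surface conditions [Eqs. (66b), (66c)]: R(1) = 1/2 and R'(1) = 1/2.
A = np.array([[1.0,  1.0],    # R(1)  = C + D
              [0.0, -2.0]])   # R'(1) = -2 D
b = np.array([0.5, 0.5])
C, D = np.linalg.solve(A, b)

# Eq. (58): F = 8 pi eta C_{N+1} U must reduce to Stokes drag 6 pi eta U (a = 1)
prefactor = 8 * np.pi * C
print(C, D)   # C = 0.75, D = -0.25, i.e. F = 6*pi*eta*U
```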
### Solving the Poisson-Boltzmann equation with the Chebyshev spectral collocation method

Since the dimensionless potential, as well as its first and second derivatives, is required in the coefficients of Eqs. (63) and in the corresponding boundary condition [Eq. (66a)], we first have to determine these quantities numerically from the nonlinear Poisson-Boltzmann equation. Thus, after mapping the half-infinite domain \([1,\infty)\) to the half-open interval \([-1,1)\) by a diffeomorphism, this two-point boundary value problem (BVP) [Eqs. (61) and (62)] can be solved efficiently and with high accuracy by applying a Chebyshev spectral collocation method to the transformed BVP (see Appendix B). Here, a nonlinear coordinate transformation of the form \[\Phi_{L}:[-1,1)\to[1,\infty),\quad t\mapsto r=L\frac{(1+t)}{(1-t)}+1 \tag{67}\] is used, where \(L>0\) denotes an adjustable mapping parameter. The advantage of the chosen algebraic transformation is its smoothness and robustness, i.e. its decreased sensitivity to \(L\) [72; 73; 70]. Then, in the finite domain \([-1,1]\) we approximate the solution of the problem, \(\phi_{0}(t)\), by a global Lagrange-interpolation polynomial of degree \(M\) [74; 70] that satisfies the mapped BVP at the Chebyshev-Gauss-Lobatto points \[t_{j}=\cos\left(\frac{j\pi}{M}\right),\quad j=0,\ldots,M. \tag{68}\] The \(p\)-th derivative (\(p=1,2\)) is obtained by differentiating the interpolant at these nodal points \(\{t_{j}\}\), defining the discretized derivative operators, which can be represented by Chebyshev differentiation matrices \(\mathsf{D}^{(p)}\) [69; 70]. Accordingly, the numerical differentiation may be performed as \[\mathbf{y}^{(p)}=\mathsf{D}^{(p)}\mathbf{y}, \tag{69}\] where \(\mathbf{y}\) and \(\mathbf{y}^{(p)}\) are the vectors of function values and approximate derivative values at these nodes, respectively, and \(\mathsf{D}^{(p)}=\left(\mathsf{D}^{(1)}\right)^{p}\).
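The differentiation matrices of Eq. (69) can be assembled explicitly on the Gauss-Lobatto nodes of Eq. (68). The sketch below follows the standard construction (cf. Trefethen's well-known `cheb` routine) and verifies spectral accuracy on a smooth test function; it is illustrative and not the code used in this work.

```python
import numpy as np

def cheb(M):
    """Chebyshev differentiation matrix D^(1) and Gauss-Lobatto nodes [Eq. (68)]."""
    if M == 0:
        return np.zeros((1, 1)), np.array([1.0])
    t = np.cos(np.pi * np.arange(M + 1) / M)
    c = np.hstack([2.0, np.ones(M - 1), 2.0]) * (-1.0) ** np.arange(M + 1)
    T = np.tile(t, (M + 1, 1)).T            # T[i, j] = t_i
    D = np.outer(c, 1.0 / c) / (T - T.T + np.eye(M + 1))
    D -= np.diag(D.sum(axis=1))             # diagonal via negative row sums
    return D, t

D, t = cheb(16)
err1 = np.max(np.abs(D @ np.exp(t) - np.exp(t)))      # first derivative, Eq. (69)
err2 = np.max(np.abs(D @ D @ np.exp(t) - np.exp(t)))  # D^(2) = (D^(1))^2
print(err1, err2)  # spectral accuracy: both errors are tiny already for M = 16
```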
The transformed BVP is now converted to a set of \(M+1\) nonlinear algebraic equations that are solved by the Newton-Raphson method with an appropriate initial guess (for further details see Appendix B). We choose the mapping parameter equal to the dominant length scale of the solution, \(L=1/\kappa_{0}\), i.e. the Debye length in units of the particle radius, and vary the total number of Chebyshev nodes depending on the rescaled bare surface potential \(\sigma_{0}\), ensuring a rapid convergence of the polynomial series coefficients for different \(\kappa_{0}\). This rapid convergence guarantees the high accuracy of the calculated numerical solution, as well as the stability of the numerical scheme [72; 75]. Finally, an approximate solution for \(\phi_{0}(r)\) is obtained on the unbounded interval \([1,\infty)\) in terms of a transformed barycentric interpolant [76] using the inverse transform \(\Phi_{L}^{-1}\) (see Appendix B). Similar expressions for the first and second derivative of \(\phi_{0}(r)\) are also derived.

### Solving the coupled linear ODEs with a shooting method and asymptotic matching

The algorithm for solving the coupled set of linear ODEs [Eqs. (63)] is based on a predictor-corrector Adams multistep method that adaptively chooses both step size and order [77]. We start the numerical integration at the large radial distance \(r_{0}=1+20/\kappa_{0}\), i.e. in the bulk, with the asymptotic forms for the functions \(\omega_{i}(r)\) and \(R(r)\), and terminate it after reaching the rescaled (virtual) colloidal surface at \(r=1\). Neglecting exponentially small terms due to the electrostatics in Eqs. (63) for \(r\to\infty\), the asymptotic behavior can be obtained from \[\mathscr{L}\omega_{i}(r)=0, \tag{70a}\] \[\mathscr{L}\left(\mathscr{L}R\right)(r)=0, \tag{70b}\] for both problems (1) and (2), obeying the far-field boundary conditions [Eqs. (65)].
This yields \[\omega_{i}(r)\sim\frac{C_{i}}{r^{2}}, \tag{71a}\] \[R(r)\sim C_{N+1}+\frac{C_{N+2}}{r^{2}}, \tag{71b}\] for \(r\to\infty\) with asymptotic constants \(C_{i}\), \(i=1,\ldots,N+2\), for problems (1) and (2), respectively. The second expression is reminiscent of the results for the velocity field obtained in Sec. II.6, now in nondimensional form. We aim to determine the set of asymptotic constants \[\mathbf{C}=\left(C_{1},\ldots,C_{N+2}\right)^{\mathrm{T}}, \tag{72}\] for the two problems from the boundary conditions at the slipping plane [Eqs. (66)]. The linearity of the coupled ODEs allows writing a general solution as the following linear combination \[\mathbf{y}(r)=\mathbf{y}_{\mathrm{part}}(r)+\sum_{k=1}^{N+2}C_{k}\mathbf{y}_{\mathrm{hom}}^{k}(r), \tag{73}\] by superimposing a particular solution \(\mathbf{y}_{\mathrm{part}}(r)\) for each problem (1) and (2) with \(N+2\) homogeneous solutions \(\mathbf{y}_{\mathrm{hom}}^{k}(r)\). Note that the homogeneous solutions \(\mathbf{y}_{\mathrm{hom}}^{k}(r)\) are the same for both problems (1) and (2). First, we define the \(k\)-th solution (\(k=1,\ldots,N+2\)) to the homogeneous problem as \[\mathbf{y}_{\mathrm{hom}}^{k}(r)=\left(\omega_{1}^{k}(r),\ldots,\omega_{N}^{k}(r),R^{k}(r)\right)^{\mathrm{T}}. \tag{74}\] In addition, the initial conditions for this solution set are determined by the asymptotic forms [Eqs. (71)] in combination with the particular choice \[C_{i}^{k}=\delta_{ik},\quad i=1,\ldots,N+2, \tag{75}\] for the asymptotic constants \(C_{i}\). Utilizing these initial conditions, we then solve for each value of \(k=1,\ldots,N+2\) in turn the homogeneous forms of Eqs.
(63), \[\mathscr{L}\omega_{i}(r)-z_{i}\frac{\mathrm{d}\phi_{0}(r)}{\mathrm{d}r}\left[\frac{\mathrm{d}\omega_{i}(r)}{\mathrm{d}r}-\mathrm{Pe}_{i}\frac{2R(r)}{r}\right]=0, \tag{76a}\] \[\mathscr{L}(\mathscr{L}R)(r)+\kappa_{0}^{2}\frac{\mathrm{d}\phi_{0}(r)}{\mathrm{d}r}\sum_{i=1}^{N}z_{i}n_{i}^{0}(r)\frac{\omega_{i}(r)}{r}=0, \tag{76b}\] by numerical integration from \(r=r_{0}\) down to the virtual colloidal surface at \(r=1\). Second, to obtain a particular solution, denoted as \[\mathbf{y}_{\mathrm{part}}(r)=\left(\omega_{1}(r),\ldots,\omega_{N}(r),R(r)\right)^{\mathrm{T}}, \tag{77}\] the inhomogeneous ODEs [Eqs. (63)] are again numerically integrated from \(r=r_{0}\) down to the slipping plane at \(r=1\) for problems (1) and (2). Here all asymptotic constants are set to zero, \[C_{i}=0,\quad i=1,\ldots,N+2. \tag{78}\] Substituting the general solution [Eq. (73)] into the boundary conditions at the colloidal surface [Eqs. (66)] yields a linear system of \(N+2\) simultaneous equations of the form \[\mathsf{A}\cdot\mathbf{C}=\mathbf{B} \tag{79}\] for the \(N+2\) asymptotic coefficients \(\mathbf{C}\) for problems (1) and (2). The coefficient matrix \(\mathsf{A}\) and the vector \(\mathbf{B}\) for both problems can be found in Appendix C. We solve these equations by Gaussian elimination with maximum pivoting. The presented method requires solving the homogeneous ODEs \(N+2\) times and the inhomogeneous ODEs once for each problem in turn. As the thermal diffusion coefficient is calculated from the asymptotic constants determined by the boundary conditions at the slipping plane, our approach requires the functions to be resolved with high accuracy within the Debye double layer, as well as in the bulk region, which may have considerably different length scales.
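The superposition logic behind Eq. (73) can be illustrated on a single, decoupled model equation \(\mathscr{L}\omega=f(r)\) with a prescribed surface slope and decay at infinity: one homogeneous solution is integrated inward from the asymptotic form \(\omega\sim C/r^{2}\) with \(C=1\), one particular solution from zero asymptotic data, and the single constant \(C\) then follows from the boundary condition at \(r=1\). The forcing and the boundary value below are invented purely for this sketch.

```python
import numpy as np
from scipy.integrate import solve_ivp

r0 = 30.0          # start of the inward integration, "in the bulk"
g = -0.5           # prescribed surface slope omega'(1) = g (invented value)
f = lambda r: np.exp(-2.0 * (r - 1.0))   # invented short-ranged forcing

def rhs(r, y, forced):
    # y = [omega, omega'];  L omega = omega'' + 2 omega'/r - 2 omega/r^2 = f
    src = f(r) if forced else 0.0
    return [y[1], src - 2.0 * y[1] / r + 2.0 * y[0] / r**2]

opts = dict(t_span=(r0, 1.0), t_eval=[1.0], rtol=1e-10, atol=1e-12)
# Homogeneous solution started from omega ~ 1/r^2 (asymptotic constant C = 1)
hom = solve_ivp(rhs, y0=[1.0 / r0**2, -2.0 / r0**3], args=(False,), **opts)
# Particular solution started from zero asymptotic data (all C_i = 0)
par = solve_ivp(rhs, y0=[0.0, 0.0], args=(True,), **opts)

# Surface condition omega'(1) = g fixes C:  par' + C * hom' = g
C = (g - par.y[1, -1]) / hom.y[1, -1]
omega_prime_1 = par.y[1, -1] + C * hom.y[1, -1]
print(C, omega_prime_1)   # omega_prime_1 equals g by construction
```

In the full problem the same idea produces the \( (N+2)\times(N+2)\) system of Eq. (79) instead of a single scalar equation.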
This is justified for the equilibrium potential \(\phi_{0}(r)\), since the combination of the algebraic transformation with the Chebyshev collocation method yields high accuracy, possibly up to machine precision. Especially in the outer (bulk) region, the transformation allows the potential to be sufficiently resolved even though it decays exponentially. Furthermore, we have also extended the computational domain far enough to capture the power-law behavior of the respective functions. We have found, by varying the radial distance \(r_{0}\), that our choice \(1+20/\kappa_{0}\) (corresponding to 20 Debye lengths) is an acceptable lower bound, balancing computational effort and accuracy of the results. The relative changes between successive trials amount to approximately \(10^{-5}\) for all \(\kappa_{0}\). Thus, the results for \(D_{T}\) are identical to four to six significant digits.

## IV Results and discussion

In the following, we first validate the numerical procedure described in Sec. III by comparing our results for the electrostatic potential and thermal diffusion coefficient with (semi-)analytical expressions from previous theoretical studies [38; 67]. Then, the theoretical work by Rasuli and Golestanian [38] is carefully reexamined, with the main focus on the effect of the thermoelectric field in the bulk. Afterwards, a detailed comparison with a different theoretical approach [19] is performed, where, besides the mentioned effect of electric migration in the bulk, several other contributions to the thermal diffusion coefficient are investigated. At the end, we compare experimental results obtained in Refs. [8; 10] on the thermophoretic drift motion of single-stranded DNA and polystyrene beads, respectively, to our theoretical predictions, with particular emphasis on the hydrodynamic boundary condition, the effect of buffer dissociation and surface charging.
The characteristic parameters chosen to represent a typical aqueous electrolyte with different salts added are summarized in Appendix D and used to generate Figs. 1-6. We point out that all quantities in this section are presented in nondimensional form (see Sec. II.7 for the corresponding characteristic units) unless otherwise stated.

### Code validation in Debye-Hückel approximation

First, we test our numerical approach for the case of weakly charged colloids, where some analytic progress can be made. The Debye-Hückel approximation [67] for a weakly charged colloidal particle states that for \(|z_{i}\phi_{0}|\ll 1\), the nonlinear Poisson-Boltzmann equation [Eq. (61)] can be simplified by expanding the Boltzmann factor \(\exp(-z_{i}\phi_{0})=1-z_{i}\phi_{0}+\mathcal{O}((z_{i}\phi_{0})^{2})\) to obtain a linear differential equation for the rescaled equilibrium potential \(\phi_{0}(r)\), using the electroneutrality condition in bulk. Then, assuming a monovalent salt, analytic solutions for the potential and its first derivative fulfilling the boundary conditions [Eqs. (62)] are readily obtained as \[\phi_{0}(r) =\frac{\sigma_{0}}{1+\kappa_{0}}\frac{1}{r}\exp\left[-\kappa_{0}(r-1 )\right], \tag{80a}\] \[\frac{\mathrm{d}\phi_{0}(r)}{\mathrm{d}r} =-\left(\frac{1}{r}+\kappa_{0}\right)\phi_{0}(r). \tag{80b}\] Our numerical approach to solving the nonlinear Poisson-Boltzmann equation by a Chebyshev collocation method (see Sec. III.1) can now be validated by comparison with this analytic expression [Eqs. (80)]. As shown in Fig. 2a, for weak negative surface charging, \(\sigma_{0}=-0.08\,(\approx 2.1\,\mathrm{mV})\), and intermediate Debye screening, our results are in perfect agreement with the theory. Moreover, Rasuli and Golestanian [38] have successfully derived semi-analytic formulas for the thermal diffusion coefficient by solving the coupled system of linear differential equations for the hydrodynamic solvent flow and the generalized ionic potentials [Eqs. 
(32) and (33)] within the Debye-Hückel approximation. In particular, the crossover between the two limiting cases of thin (\(\kappa_{0}\gg 1\)) and wide (\(\kappa_{0}\ll 1\)) Debye layers is elaborated. However, they have neglected the advection current and the coupling between the ionic and electric potential functions, which effectively disconnects the dynamics of the solutes from that of the solvent flow, providing an analytically tractable problem. We start by comparing our numerically determined results for the rescaled thermal diffusion coefficient \(D_{T}\) with these analytic formulas for two aqueous solutions, containing exclusively the salt KCl or the base NaOH, for different bare surface potentials \(\sigma_{0}\) and a no-slip boundary condition (\(\lambda=0\)). By artificially setting the thermoelectric potential to zero, we have modified our numerical treatment to account for the difference in the ionic potential functions of both theoretical approaches (see also Sec. IV.2). In addition, focusing on binary electrolytes, equal ionic Soret coefficients \(S_{T}^{+}=S_{T}^{-}=(\mathcal{S}_{T}^{+}+\mathcal{S}_{T}^{-})/2\) for cations (\(+\)) and anions (\(-\)) are used. Here, \(\mathcal{S}_{T}^{\pm}=Q_{\pm}^{*}/k_{\mathrm{B}}T_{0}^{2}\) is related to the ionic heat of transport due to water-hydration effects at infinite dilution, see Ref. [54] and Appendix A. This helps in rearranging the pertinent equations into a form equivalent to those of Ref. [38]. Then, for small bare surface potentials the numerical results agree very well with the predicted analytic expressions over the full range of Debye screening lengths and, in fact, small deviations occur only for increasing bare potential values, since the Debye-Hückel approximation ceases to be valid, as shown in Figs. 2c and 2d. 
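As an aside, the analytic reference solution of Eqs. (80) used in this validation can be checked directly with a few lines. The sketch below verifies numerically that Eq. (80b) is the derivative of Eq. (80a), and that at the surface Eq. (80b) reduces to \(\mathrm{d}\phi_{0}/\mathrm{d}r|_{r=1}=-\sigma_{0}\), since the factor \(1+\kappa_{0}\) cancels; the parameter values \(\sigma_{0}=-0.08\) and \(\kappa_{0}=1\) mimic the weak-charging test above:

```python
import math

sigma0, kappa0 = -0.08, 1.0   # weak surface charging, intermediate screening

def phi0(r):
    """Debye-Hueckel equilibrium potential, Eq. (80a)."""
    return sigma0 / (1.0 + kappa0) * math.exp(-kappa0 * (r - 1.0)) / r

def dphi0(r):
    """Its exact derivative, Eq. (80b)."""
    return -(1.0 / r + kappa0) * phi0(r)

# Central-difference check of Eq. (80b) at a few radial positions.
h = 1e-6
max_err = max(abs((phi0(r + h) - phi0(r - h)) / (2 * h) - dphi0(r))
              for r in (1.0 + h, 1.5, 3.0, 10.0))

# At the surface, Eq. (80b) gives dphi0/dr(1) = -(1 + kappa0) * phi0(1) = -sigma0.
surface_slope = dphi0(1.0)
```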
Here, the numerically calculated zeta-potential values \(\phi_{0}(1)\) at the slipping plane, which vary with ionic strength and thus with the dimensionless Debye screening length \(\kappa_{0}\) since \(\sigma_{0}\) is fixed, corroborate this argument (see Fig. 2b). The precise agreement with the semi-analytic formulas not only confirms our numerical approach, but also shows that the solution techniques of O'Brien and White [40] are reliably applicable to the problem of thermophoresis.

### Comparison with the model of Rasuli and Golestanian

The theoretical continuum model for thermophoresis provided in Ref. [38] merely differs from our approach by the asymptotic behavior of the overall electrostatic potential. Since we properly account (to linear order) for the thermoelectric field behind the Debye double layer in bulk [see Eqs. (15) and (16) or Eqs. (35) and (36), respectively], Rasuli and Golestanian seem to have implicitly discarded this electrophoretic contribution to the ion hydration effect in their treatment by not (directly) specifying a far-field boundary condition for the electrostatic potential. At least, it is mentioned neither in their paper [38] nor in its Supplemental Material. Consequently, their choice of the ionic potential functions misses a (rescaled) term \(\propto z_{i}\mathbf{r}\cdot\mathbf{E}^{\mathrm{th}}=z_{i}\phi^{\mathrm{th}} \mathbf{r}\cdot\nabla T\). In addition, they have also claimed that an appropriate boundary condition for \(r\to\infty\) consists of vanishing potential functions \(\omega_{i}(r)\) (see the Supplemental Material of Ref. [38]). Clearly, within these assumptions the steady-state distribution of the ionic solutes in bulk cannot be correctly recovered to linear order in thermal gradients (see Appendix A), with ramifications for the thermal diffusion coefficient. 
Already for an aqueous solution titrated solely with KCl, which gives rise to a rather weak thermoelectric effect and hence a weak electrophoretic contribution (\(\phi^{\mathrm{th}}=-0.42\)), deviations from our numerical results for the thermal diffusion coefficient become apparent over the whole range of inverse Debye screening lengths and for different surface potentials with a no-slip boundary condition (Fig. 3a). Although their theory correctly predicts the sign of \(D_{T}\), the difference increases up to six orders of magnitude once very thin double layers are considered. However, the discrepancies become even more prominent when accounting for electrolytes with a strong thermoelectric effect. While for an aqueous solution with exclusively the base NaOH added (\(\phi^{\mathrm{th}}=-2.8\)) the theoretical model of Rasuli and Golestanian [38] yields only strictly positive thermal diffusion coefficients in the full parameter range, our numerical results for the transport coefficient \(D_{T}\) with \(\lambda=0\) show an inverse thermophoretic effect (\(D_{T}<0\)) for weak charging, together with a sign reversal around \(\kappa_{0}\approx 1\) as the bare surface potential approaches large values (see Fig. 3b). The work in Ref. [19] strongly supports our findings (see also Sec. IV.3 for details), rendering the ambiguous treatment of the boundary condition for the electrostatic potential and the corresponding choice of the ionic potential functions in Rasuli and Golestanian's work [38] applicable exclusively in the limit of very small thermoelectric potentials, \(\phi^{\mathrm{th}}\ll 1\). This severe restriction holds only for a few salt species, such as LiCl and NaF, since the magnitude of the (rescaled) thermoelectric potential can reach up to \(\approx 3\) (\(\approx 100\,\mathrm{mV}\)) and its sign depends strongly on the relative difference of the ionic heats of transport, see Ref. [41] and Appendix D. 
To account for this inconsistency, Rasuli and Golestanian also incorporated a possible salt dependence of \(\mathcal{S}_{T}^{i}\) in their theoretical treatment when comparing with experiments, which has not yet improved the situation.

### Relation to the work of Burelbach and Stark

A different semi-analytical formula for the transport coefficient of a weakly charged colloidal particle with a hydrodynamic slipping surface undergoing thermophoresis has been proposed by Burelbach and Stark [19]. 
Based on an alternative hydrodynamic approach [28] within the framework of non-equilibrium thermodynamics using Onsager's reciprocity relations, the colloidal drift velocity can be derived irrespective of how the screening length \(1/\kappa_{0}\) compares to the particle size \(a\). In our approach, we strongly rely on momentum conservation in a force-free system to obtain the thermal diffusion coefficient beyond the limiting cases of strong and weak screening. Again, we numerically determine the thermal diffusion coefficient for a colloidal particle with different surface charging immersed in a water-based electrolyte solution with the salt KCl or the base NaOH added. In Fig. 4, the results are presented as a function of the inverse Debye screening length, together with the predictions from Ref. [19] for both no-slip (\(\lambda=0\)) and perfect-slip (\(\lambda\to\infty\)) boundary conditions. In general, their calculations for the rescaled thermophoretic mobility predict qualitatively similar behavior within the range of our test parameters for both salts. In particular, the overall strong enhancement of \(D_{T}\) in magnitude, together with the flattening out to a plateau for strong shielding (\(\kappa_{0}\gg 1\)) as the slip length \(\lambda\) is increased, are common features (Figs. 4b and 4d). In addition, for bare surface potentials \(|\sigma_{0}|\gtrsim 6.0\), the sign reversal in the thermal diffusion coefficient for the base NaOH, occurring independently of the slip length when \(\kappa_{0}\approx 1\), is also covered by both theoretical models (Figs. 4c and 4d). Nevertheless, care has to be taken again when comparing our predictions with those of Ref. [19], since for increasing \(\sigma_{0}\) the dimensionless zeta potential \(\phi_{0}(1)\) can become large (Fig. 2b), such that the Debye-Hückel approximation is no longer valid. Thus, the theoretical approach suggested in Ref. 
[19] does not apply in this regime. Fortunately, we do not encounter this problem, since the rescaled potential \(\phi_{0}(r)\) is calculated from the full nonlinear Poisson-Boltzmann equation [Eq. (61)]. Consequently, our findings suggest for screening lengths \(\kappa_{0}\lesssim 1\) and large \(\sigma_{0}\) a different, though still complex, behavior. To gain further insight into it, we have also computed the various contributions to the net thermophoretic transport coefficient \(D_{T}\) as mentioned in Sec. II.3 for different \(\sigma_{0}\). Here, we display only the two limiting cases of weak (\(\sigma_{0}=-0.08\)) and strong (\(\sigma_{0}=-12.0\)) charging for illustration purposes (see Figs. 4e-h). Independent of the slip length \(\lambda\) and the strength of the thermoelectric effect, encoded in \(\phi^{\text{th}}\), the term arising from the colloidal hydration is still the dominant contribution, yet significantly smaller compared to the predictions from Ref. [19] (Figs. 4f and 4h). Consequently, our numerical results exhibit neither an extended shoulder in the curve for the salt KCl (see Figs. 4a and 4b) nor a pronounced peak in the function for the base NaOH (see Figs. 4c and 4d) around \(\kappa_{0}\approx 1\).

Figure 2: a) Numerical results for the rescaled equilibrium potential and its first derivative (inset) as a function of the distance \(r\) from the particle surface for intermediate Debye screening and weak negative surface charging, \(\sigma_{0}=-0.08\,(\approx 2.1\,\mathrm{mV})\). The black dashed lines denote the analytical results within the Debye-Hückel approximation [Eqs. (80)]. b) Variation of the numerically obtained zeta-potential values \(\phi_{0}(1)\) with the dimensionless Debye screening length for different \(\sigma_{0}\). Thermal diffusion coefficient for an aqueous solution with c) KCl and d) NaOH added. Compared are numerical solutions (solid lines) with analytic expressions from Ref. [38] for a no-slip boundary condition. 
In the limit of thin Debye double layers, i.e., high ionic strength, Burelbach and Stark [19] have derived an analytic expression for the dimensionless thermal diffusion coefficient \[D_{T}=\mu_{e}\phi^{\text{th}}, \tag{81}\] as the product of the thermal potential and the electrophoretic mobility \[\mu_{e}=\begin{cases}\dfrac{2\sigma_{0}}{3\kappa_{0}},&\text{for $\lambda=0$, $\kappa_{0}\gg 1$}\\ \dfrac{2}{3}\dfrac{\sigma_{0}\lambda}{(1+2\lambda)},&\text{for $\lambda\neq 0$, $\kappa_{0}\gg 1$}\end{cases}. \tag{82}\] Hence, independent of the salt added, \(D_{T}\) converges either to zero for a no-slip boundary condition (Figs. 4a and 4c) or to a constant value as the slip length is increased (Figs. 4b and 4d). Whether a negative thermophoretic effect occurs depends on the signs of \(\sigma_{0}\) and \(\phi^{\text{th}}\). Furthermore, the ion hydration effect is presumed to be the dominant contribution to the transport coefficient for strong screening and arbitrary slip length \(\lambda\) (Figs. 4e-h). Our numerical predictions support all these findings, although we have observed a different scaling for the thermophoretic transport coefficient \(D_{T}\), and particularly for its ionic hydration contribution, since the electrophoretic mobility \(\mu_{e}\) differs by a factor of \(\approx 1.5\) compared to the results of Ref. [19], yielding the famous Helmholtz-Smoluchowski [78] expression \(\sigma_{0}/\kappa_{0}\) for \(\lambda=0\) and a generalized version of it for a very thin Debye double layer with a slip boundary condition. In Ref. [19], a possible explanation for this discrepancy is offered, referring to the dielectric permittivity of the colloid, which was assumed to be equal to that of the solvent in their treatment, whereas we have considered it to be negligible. Here, they have used a similar argument as in their treatment of the heat flow in the boundary-layer approximation (see the Appendix of Ref. [28]). 
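In this thin-double-layer limit, Eqs. (81) and (82) are simple enough to evaluate directly. The sketch below tabulates them for the weak-charging value \(\sigma_{0}=-0.08\) and the KCl thermoelectric potential \(\phi^{\mathrm{th}}=-0.42\) quoted earlier; it is a numerical illustration of the limiting formulas only, not of the full calculation:

```python
def mobility(sigma0, kappa0, lam):
    """Electrophoretic mobility in the thin-double-layer limit, Eq. (82)."""
    if lam == 0.0:
        return 2.0 * sigma0 / (3.0 * kappa0)
    # kappa0 drops out for finite slip; the regime kappa0 >> 1 is assumed.
    return (2.0 / 3.0) * sigma0 * lam / (1.0 + 2.0 * lam)

def d_T(sigma0, kappa0, lam, phi_th):
    """Thermal diffusion coefficient D_T = mu_e * phi_th, Eq. (81)."""
    return mobility(sigma0, kappa0, lam) * phi_th

sigma0, phi_th = -0.08, -0.42   # weak charging; KCl thermoelectric potential

# No-slip: D_T -> 0 as kappa0 grows; its sign is set by sigma0 * phi_th (> 0 here).
dt_noslip = d_T(sigma0, 100.0, 0.0, phi_th)

# Large slip: mu_e saturates at sigma0 / 3, so D_T approaches a constant value.
mu_slip = mobility(sigma0, 100.0, 1e6)
```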
Nevertheless, for weak surface charging (Debye-Hückel approximation), both predictions are in very good agreement, with maximal relative deviations remaining below 5% over a wide range of Debye screening lengths \(\kappa_{0}\) (see Figs. 4a and 4b, together with the insets of Figs. 4c and 4d). In particular, at low ionic strength (\(\kappa_{0}\ll 1\)) both results seem to obey an identical limiting behavior, which has also been obtained in Ref. [29] within the point-particle limit \(a\to 0\). Besides that, perfect accordance is also achieved for the colloidal hydration contribution to thermophoretic transport (Figs. 4e and 4g).

Figure 3: Numerically obtained thermal diffusion coefficients for an aqueous solution in the presence of a) KCl and b) NaOH plotted against the inverse Debye screening length for different bare surface potentials \(\sigma_{0}\) and a no-slip boundary condition \(\lambda=0\). The solid lines take into account the electrophoretic contribution to the ion hydration effect, while the dashed lines discard it (as suggested in Ref. [38]).

### Thermophoresis of single-stranded DNA

In this section, we compare our predictions for the thermophoretic transport coefficient to experimental results from Ref. [10] on 22mer single-stranded DNA molecules immersed in a TRIS-HCl (tris(hydroxymethyl)aminomethane hydrochloride) buffered aqueous electrolyte with different monovalent salts added. The measurements have been conducted at room temperature and with 1 mM TRIS-HCl buffer to stabilize the pH value around 7.5. These oligonucleotides exhibit a hydrodynamic radius of the order of the Debye length (\(1/\kappa_{0}\approx 1\)) and carry a rather high negative surface charge (\(|\sigma_{0}|\lesssim 6\)), requiring the electrical potential to be derived from the full nonlinear Poisson-Boltzmann equation [Eq. (61)]. Thus, our theoretical approach provides a promising candidate to be tested against the experimental measurements.

Figure 4: Net thermal diffusion coefficient (a-d) and its different contributions (e-h) of a colloid with different bare surface potentials \(\sigma_{0}\) for an aqueous electrolyte in the presence of the salt KCl, respectively the base NaOH, as a function of the inverse Debye screening length for a no-slip (\(\lambda=0\)) and a perfect-slip boundary (\(\lambda\to\infty\)). Our predictions and results from Ref. [19] are shown as solid and dashed-dotted lines, respectively.

In general, the effect of buffer dissociation on the thermophoretic transport coefficient has been ignored when fitting data points from experiments, since the ionic heats of transport (see Appendix A), as well as the ion mobilities of the buffer molecules, are not known or are difficult to obtain experimentally [10, 17, 38, 79]. Yet, for the given pH value, the TRIS-HCl buffer is almost fully dissociated and hence the contribution from the TRIS-H\({}^{+}\) cation to the thermoelectric effect cannot be neglected, as it sets a lower bound for \(\kappa_{0}\) when the salt concentration is decreased. Although the oxonium (H\({}_{3}\)O\({}^{+}\)) and hydroxide (OH\({}^{-}\)) ions serve as a very efficient source for the thermoelectric potential, their influence can be safely ignored for the given pH value [10]. In the presence of NaF and KF, the experimental data are well fitted by our numerical results for the dimensionless Soret coefficient \(S_{T}\) without any free fitting parameters, provided that a partial hydrodynamic slip is imposed at the DNA surface (see Figs. 5a and 5b). In particular, for the salt NaF a slip length of \(\lambda=0.25\) (\(\approx 0.43\,\mathrm{nm}\)) is used, whereas changing the cation, Na\({}^{+}\rightarrow\) K\({}^{+}\), yields an even smaller value of \(\lambda=0.125\) (\(\approx 0.21\,\mathrm{nm}\)). For both salts a similar trend, yet lower in magnitude, is predicted for \(\lambda=0\). 
Here, the Soret coefficient relates to the nondimensional thermal diffusion coefficient as follows [19]: \[S_{T}=D_{T}\frac{3a}{2\ell_{\mathrm{B}}}\frac{1+2\lambda}{1+3\lambda}, \tag{83}\] where \(\ell_{\mathrm{B}}=e^{2}/4\pi\epsilon_{0}\epsilon_{\mathrm{r}}^{0}k_{\mathrm{B} }T_{0}\) is the Bjerrum length, resulting from the balance of electrostatic and thermal energies. For water at room temperature, it takes the value \(0.7\,\mathrm{nm}\). In addition, the fitting in Figs. 5a and 5b has been carried out with a hydrodynamic radius of \(a=1.7\,\mathrm{nm}\) and an effective charge number \(Z=-13.8\), connected to the bare surface potential by \(\sigma_{0}=Z\ell_{\mathrm{B}}/a\). Owing to the fact that the average values \(a=2\,\mathrm{nm}\) and \(Z=-11.6\) obtained in Refs. [10, 79] from experiments carry rather large uncertainties, we have achieved a reasonable agreement with these numbers, and consequently neither \(a\) nor \(Z\) is used as a free fitting parameter. Moreover, the effective charge per base, \(|Z|/22=0.63\), also matches electrophoresis results using coarse-grained molecular-dynamics simulations [10, 80], and the value \(\lambda=0.25\) for the slip length is consistent with recent experiments in Ref. [81] on electrophoresis of DNA in nanopores, where a value of \(\lambda=0.29\) has been suggested to explain their findings. However, we can only speculate about the salt-dependent decrease in the hydrodynamic slip at the DNA surface. Obviously, modeling the single-stranded DNA molecule as a spherical particle with a homogeneously distributed surface charge neglects some of its important structural properties. The nucleobases inside the DNA grooves are hydrophobic, leading to large hydrodynamic slip effects [82], while the negatively charged phosphate groups of the backbone are known to be hydrophilic. 
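As an aside, the conversion of Eq. (83) and the surface potential \(\sigma_{0}=Z\ell_{\mathrm{B}}/a\) underlying these fits can be checked with a few lines; all parameter values (\(a=1.7\,\mathrm{nm}\), \(Z=-13.8\), \(\ell_{\mathrm{B}}=0.7\,\mathrm{nm}\), \(\lambda=0.25\)) are taken from the text:

```python
a_nm, ell_B_nm = 1.7, 0.7   # hydrodynamic radius and Bjerrum length (water, room T), nm
Z, lam = -13.8, 0.25        # effective charge number and slip length used in the fits

# Bare surface potential entering the Poisson-Boltzmann problem.
sigma0 = Z * ell_B_nm / a_nm

def soret_from_dt(d_t, a, ell_B, lam):
    """Convert the dimensionless D_T into the Soret coefficient, Eq. (83)."""
    return d_t * (3.0 * a) / (2.0 * ell_B) * (1.0 + 2.0 * lam) / (1.0 + 3.0 * lam)

# Conversion factor multiplying D_T for the DNA parameters.
factor = soret_from_dt(1.0, a_nm, ell_B_nm, lam)

# Effective charge per nucleotide of the 22mer.
charge_per_base = abs(Z) / 22.0
```

The resulting \(|\sigma_{0}|\approx 5.7\) is indeed within the quoted bound \(|\sigma_{0}|\lesssim 6\).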
However, the latest atomistic molecular-dynamics simulations [83] provide evidence also for a non-zero tangential velocity along the DNA backbone. Possibly, the K\({}^{+}\) ions provide an enhanced efficiency in shielding these hydrophilic regions, as the ionic radii of the cations K\({}^{+}\) and Na\({}^{+}\) differ by around \(30\,\%\) and it is more likely for them to be located near the negatively charged phosphate groups due to electrostatic interactions, resulting in a smaller overall hydrodynamic slip length \(\lambda\) for our simplified model. It is also likely that a nonuniform surface conductivity [62, 63, 64, 16], which we did not account for in our theory, can effectively reduce the hydrodynamic slip. To incorporate the effect of buffer dissociation in the numerical calculations, we follow Ref. [19] and choose for the ionic heat of transport (see Appendix A) of the TRIS-H\({}^{+}\) ion the same value as for the Na\({}^{+}\) ion. In addition, the mobility data necessary to determine the ionic Péclet number (see Appendix D) in the corresponding equations [Eqs. (63)] are taken from a similar organic compound, the amino acid leucine [84]. The influence of the buffer dissociation is highlighted by changing the salt concentration while keeping that of the buffer fixed. This is illustrated in Figs. 5a and 5b for the different salts KF and NaF.

Figure 5: Soret coefficient \(S_{T}\) as a function of the inverse Debye screening length \(\kappa_{0}\) for 22mer single-stranded DNA in the presence of a) KF and b) NaF. Symbols correspond to the experimental data from Ref. [10]; lines represent numerical predictions without any free fitting parameters for \(a=1.7\,\mathrm{nm}\), an effective charge of \(Z=-13.8\) and different slip lengths \(\lambda\). The theoretical calculations also take into account the influence of buffer dissociation. 
While for intermediate Debye lengths (\(1/\kappa_{0}\approx 1\)) the contribution from the dissociated buffer ions to the Soret coefficient is of little significance independent of the added salt, in the regime of low ionic strength (\(\kappa_{0}\lesssim 0.5\)) the Soret coefficient is to a large extent determined by the buffer ions, which, however, only moderately improves the agreement with the experimental data, especially for the salt NaF (see Fig. 5b). In general, our findings are in accord with the results obtained in Ref. [19], except for the decrease in the hydrodynamic slip length as the cations are exchanged. When accounting for the buffer molecules, the electrolyte consists of two monovalent salts which are assumed to be fully dissociated. Hence, both buffer ions, TRIS-H\({}^{+}\) and Cl\({}^{-}\), together with the ions of the different salts KF and NaF, are present in the aqueous solution with corresponding valences \(\pm 1\). Consequently, the dimensionless concentrations (see Sec. II.7) no longer evaluate to a constant; rather, they explicitly depend on the Debye screening length \(1/\kappa_{0}\) via \[\frac{n_{\text{buf}}}{2I} =\frac{1}{2}\left(\frac{\kappa_{0}^{\text{buf}}}{\kappa_{0}} \right)^{2}, \tag{84a}\] \[\frac{n_{\text{s}}}{2I} =\frac{1}{2}\left[1-\left(\frac{\kappa_{0}^{\text{buf}}}{\kappa_ {0}}\right)^{2}\right], \tag{84b}\] where we have defined \(n_{\text{s}}\) as the dimensional equilibrium bulk concentration of the added salt ions and \(n_{\text{buf}}\) as the respective concentration of the buffer ions. Here, \(\kappa_{0}^{\text{buf}}=(8a^{2}\pi\ell_{\text{B}}n_{\text{buf}})^{1/2}\) represents the dimensionless inverse Debye screening length in the absence of salt. Then, varying only \(n_{\text{s}}\), a fixed buffer concentration of \(n_{\text{buf}}=1\,\)mM yields a lower bound of \(0.18\) for \(\kappa_{0}\). Using now Eqs. 
(84), the thermal potential can be recast into \[\phi^{\text{th}}= -\frac{T_{0}}{2}\left[\left(\frac{\kappa_{0}^{\text{buf}}}{ \kappa_{0}}\right)^{2}\left(\mathcal{S}_{T}^{\text{TRIS-H}^{+}}-\mathcal{S}_{ T}^{\text{Cl}^{-}}\right)\right.\] \[\left.+\left(1-\left(\frac{\kappa_{0}^{\text{buf}}}{\kappa_{0}} \right)^{2}\right)\left(\mathcal{S}_{T}^{+}-\mathcal{S}_{T}^{-}\right)\right], \tag{85}\] with \(\mathcal{S}_{T}^{\pm}=Q_{\pm}^{\ast}/k_{\text{B}}T_{0}^{2}\) for cations (\(+\)) and anions (\(-\)), and corresponding values for the buffer molecules. Similar expressions can be derived for \(n_{i}^{0}(r)/2I\) by applying the same strategy. Thus, these quantities, especially the thermal potential, are essentially dominated by the ions of the dissociated buffer at low ionic strength. Apparently, the dependence on the Debye screening length \(1/\kappa_{0}\) vanishes when only one species of salt is present, and we recover the case of a binary electrolyte.

### Thermophoretic motion of polystyrene beads

We also carry out numerical calculations for the experiment of Duhr and Braun [8], performed on carboxylate-modified polystyrene beads (PSBs) of various sizes in the Debye-Hückel regime. Similar to the 22mer single-stranded DNA molecules, these PSBs are immersed in an aqueous solution buffered with \(n_{\text{buf}}=0.5\,\)mM TRIS-HCl at a pH value of \(7.6\) and are titrated solely with KCl at different concentrations. From free-flow electrophoresis measurements on PSBs with radius \(a=40\,\)nm and identical carboxyl surface modifications at a fixed nondimensional Debye length \(1/\kappa_{0}=0.24\), an effective surface charge density of \(\sigma_{\text{el}}=-4\,500\,e/\mu\mathrm{m}^{2}\) has been observed. Thus, the colloidal bare surface potential takes different values \(\sigma_{0}=-4\pi\sigma_{\text{el}}\ell_{\text{B}}/ea\) depending on the size of the PSBs. 
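Before turning to the comparison, the buffer-dissociation relations of Eqs. (84) are easy to check numerically. The sketch below reproduces the lower bound \(\kappa_{0}\approx 0.18\) quoted above for a fixed \(1\,\mathrm{mM}\) TRIS-HCl buffer and the DNA radius \(a=1.7\,\mathrm{nm}\); the SI-unit values and the conversion of \(1\,\mathrm{mM}\) to a number density are inserted by hand here, as a consistency check rather than production code:

```python
import math

N_A = 6.02214076e23   # Avogadro constant, 1/mol
a = 1.7e-9            # particle radius, m
ell_B = 0.7e-9        # Bjerrum length of water at room temperature, m
n_buf = 1.0 * N_A     # 1 mM = 1 mol/m^3 of dissociated buffer ion pairs, in 1/m^3

# Dimensionless inverse Debye length in the absence of salt, kappa0_buf.
kappa0_buf = math.sqrt(8.0 * math.pi * a**2 * ell_B * n_buf)

def fractions(kappa0):
    """Rescaled buffer and salt ion concentrations, Eqs. (84a) and (84b)."""
    buf = 0.5 * (kappa0_buf / kappa0) ** 2
    salt = 0.5 * (1.0 - (kappa0_buf / kappa0) ** 2)
    return buf, salt

# Example: total ionic strength four times that of the buffer alone.
buf, salt = fractions(2.0 * kappa0_buf)
```

By construction the two fractions always sum to \(1/2\), and the buffer share dominates as \(\kappa_{0}\) approaches \(\kappa_{0}^{\mathrm{buf}}\) from above.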
Then, a comparison between our theoretical predictions and the experimental measurements for the Soret coefficient \(S_{T}\) is presented in Fig. 6 for three different PSB sizes and a no-slip boundary condition \(\lambda=0\), since these PSBs hardly show any hydrodynamic slippage at their surface. In addition, no further adjustable fitting parameters are involved in the calculations. As a result, deviations between one and two orders of magnitude from our numerical results emerge. However, a satisfactory agreement can only be achieved by artificially increasing the surface charge density for the radii \(a=100\,\)nm and \(a=250\,\)nm, while for the largest PSBs it has to become 18 times larger than the experimentally determined value, which is far from any physically realistic number for colloidal charging. A very similar behavior is observed when applying the semi-analytical formula proposed by Burelbach and Stark [19] to calculate the Soret coefficient of a weakly charged colloidal particle (see Fig. 6). Also for this theoretical approach, only an increase in the surface charge density leads to a good fit of the experimental data.

Figure 6: Soret coefficient \(S_{T}\) vs the inverse Debye screening length \(\kappa_{0}\) for carboxyl-modified polystyrene beads of radius \(a=100,250\) and \(550\,\)nm exclusively titrated with KCl. Symbols relate to the experimental data from Ref. [8]. Solid colored lines represent numerical predictions for a no-slip boundary condition \(\lambda=0\) together with an effective surface charge density of \(\sigma_{\text{el}}=-4\,500\,e/\mu\mathrm{m}^{2}\) corresponding to different bare surface potentials \(\sigma_{0}\), and solid black lines are the analytic solutions from Ref. [15] for the same \(\sigma_{\text{el}}\). Dashed-dotted lines denote numerical results with artificially increased \(\sigma_{0}\). Short- and long-dashed lines represent the semi-analytical solutions from Ref. [19] for identical parameters. 
In all these considerations, we have ignored contributions arising from buffer dissociation, as the qualitative behavior of the Soret coefficient of the PSBs changes only marginally. Moreover, it is instructive to study the dependence of both the thermal diffusion coefficient \(D_{T}\) and the Soret coefficient \(S_{T}\) on the size of the colloidal particle. Therefore, we compare our numerical predictions with experimental measurements conducted by Eslahian _et al._ [85], Braibanti _et al._ [24], and Duhr and Braun [8] for PSBs with different surface modifications and stabilizing buffers. While the first experiment is performed on sulfated PSBs immersed in a deionized-water-based electrolyte with \(n_{\text{NaCl}}=5\,\)mM of the salt NaCl added, the last two experimental studies are carried out on carboxylated PSBs in an aqueous solution only buffered with \(n_{\text{buf}}=1\,\)mM TRIS-HCl. Our theoretical results for these quantities are shown in Fig. 7 as a function of the inverse Debye screening length. Here, \(\kappa_{0}\) varies exclusively with the particle radius \(a\), since both the salt and buffer concentrations have been fixed in the experiments as well as in the numerical calculations, where we account for the relevant parameters of the dissociated buffer ions according to Sec. IV.4. Furthermore, the bare surface potential \(\sigma_{0}\) then also becomes a function of the particle radius. A good agreement with the data from Ref. [8] for the measured effective surface charge density \(\sigma_{\text{el}}=-4\,500\,e/\mu\mathrm{m}^{2}\) can only be found for the smallest particle radii \(a\lesssim 50\,\)nm, and a drastic increase in \(\sigma_{\text{el}}\) does not significantly improve the situation (see Fig. 7a). This discrepancy is even more pronounced considering the semi-analytical predictions for the thermal diffusion coefficient derived by Burelbach and Stark [19] for the same surface-charge-density variations (see again Fig. 7a). 
The agreement is not much better when comparing the theoretical predictions with the experimental measurements of the Soret coefficient from Braibanti _et al_[24] and Eslahian _et al_[85]. Since both experiments are performed at various temperatures, we have extracted data at room temperature \(T_{0}\approx 300\,\)K. Assuming the same surface charge density \(\sigma_{\text{el}}\) as in Ref. [8], significant deviations are found for the experimental observations from Ref. [24]; however, increasing its magnitude by a factor of 3.5 yields agreement (see Fig. 7b). Similarly, relying on the measured zeta potential for the data of Ref. [85] overestimates the Soret coefficient. Yet, using a zeta potential roughly 20% smaller than the measured value yields reasonable agreement with our theoretical approach (see again Fig. 7b). Note that we account for a constant zeta potential in the thermophoresis problem by replacing the boundary condition for the derivative of the dimensionless equilibrium potential [Eq. (62b)] at the colloidal surface by a constant surface-potential value \(\phi_{0}(1)\). In the spectral collocation method this transforms into the even simpler expression \(\varphi_{M}=\phi_{0}(1)\) for the noncollocated endpoint \(t_{M}\) (see Appendix B), leading to a set of \(M\) nonlinear algebraic equations to be solved. In contrast, other theoretical approaches derived within irreversible thermodynamics [8; 21; 15] under local thermodynamic equilibrium conditions are able to explain the experimental observations obtained by Duhr and Braun [8] (see again Figs. 6 and 7a), while our model predictions can only fit the data from different experimental studies on PSBs in aqueous solutions [8; 24; 85] when the parameters are tuned so strongly that the fitted surface charging differs markedly from the directly measured values. It appears questionable that these discrepancies can be rationalized by experimental uncertainties. This suggests that effects beyond those studied here become important in controlling the behavior of the system.

Figure 7: a) Thermal diffusion coefficient \(D_{T}\) and b) Soret coefficient \(S_{T}\) vs the inverse Debye screening length \(\kappa_{0}\) for polystyrene beads in an aqueous solution at room temperature with a fixed salt concentration, corresponding to a varying particle radius. Measurement data (symbols) are taken from Refs. [8; 24; 85]. a) Solid colored lines denote numerical predictions for a no-slip boundary condition and different surface charge densities \(\sigma_{\text{el}}\). Dashed lines represent the semi-analytical results from Ref. [19] for different \(\sigma_{\text{el}}\); the solid black line is the analytic solution from Ref. [15] for \(\sigma_{\text{el}}=-4\,500\,e/\text{$\mathrm{\SIUnitSymbolMicro m}$}^{2}\). b) Solid colored lines denote numerical predictions for a no-slip boundary condition using the measured surface charge densities \(\sigma_{\text{el}}\), respectively zeta potentials \(\phi_{0}(1)\). Dashed-dotted lines correspond to numerical predictions using the surface charge density, respectively the zeta potential, as a free parameter.

## V Summary and conclusion

In this work, we have numerically investigated the thermophoretic transport of a single spherical particle immersed in an electrolyte solution in linear response to an externally applied temperature gradient, addressing both moderately and highly charged solid surfaces exhibiting hydrodynamic slip for arbitrary Debye layer width. As a result of the linearization with respect to the spherically symmetric reference system at thermal equilibrium, and by exploiting the axial symmetry of the thermophoresis problem with respect to the perturbations imposed by the thermal gradient, a set of coupled ordinary differential equations has been systematically derived.
Moreover, the dynamics of ions in the essentially electroneutral bulk solution is incorporated to linear order by appropriate far-field boundary conditions. In solving these linear differential equations, we have successfully utilized the solution techniques of O'Brien and White from their original treatment of the electrophoresis problem [40]. The excellent agreement with (semi-)analytic expressions from former theoretical work [38; 67] for weakly charged particles provides confidence in our numerical calculations and validates our predictions for the thermal diffusion coefficient as well as the electrostatic potential. Moreover, in Ref. [38] a similar theoretical model for thermophoretic motion has been presented, yet with a different treatment of the bulk solution behavior. Consequently, we have examined their far-field boundary conditions for the potential functions by considering electrolyte solutions with both a strong and a weak thermoelectric effect. Our analysis reveals that the electrophoretic contribution to the colloidal hydration term is crucial to correctly predict the overall trend of the thermal diffusion coefficient. In particular, the inverse thermophoretic effect (\(D_{T}<0\)) for strong thermoelectric potentials cannot be explained as long as this term is missing. Only recently has a description of colloidal thermophoresis based on Onsager's reciprocal relations been introduced [28], and later general expressions for the thermal diffusion coefficient of a weakly charged spherical particle in an aqueous electrolyte have been derived [19]. Altogether, our numerical predictions have essential features in common with their results. First, we too have observed the thermal diffusion coefficient to be sensitive to the hydrodynamic slip at the particle surface.
In particular, this is accompanied by a constant thermal diffusion coefficient for strong shielding \(\kappa_{0}\gg 1\) and non-vanishing slip length \(\lambda\neq 0\), which is shown to be proportional to the electrophoretic mobility [19]. Second, for the base NaOH both models display a sign reversal in the thermophoretic transport coefficient, independent of the slip length. This agreement corroborates our critical analysis of the far-field boundary condition for the electrostatic potential in Ref. [38]. In addition, we expect the negative thermophoretic effect to be an intrinsic characteristic of electrolytes with a strong thermoelectric potential, especially when hydroxide or oxonium ions are present, as for example in sodium hydroxide (NaOH) or hydrochloric acid (HCl). These numerical findings are also confirmed by experimental measurements on micellar solutions of sodium dodecyl sulfate [86]. Also, our numerical predictions agree well with the experimental data on 22mer single-stranded DNA molecules in a TRIS-HCl buffered electrolyte [10], which suggest the occurrence of hydrodynamic slippage along the surface of the DNA in accordance with Refs. [19; 81]. As part of this comparison, we have further probed the influence of the buffer dissociation on the thermal diffusion coefficient and ascertain that for low overall ionic strength the buffer ions dominate the bulk behavior by setting the value of the thermoelectric potential. Further, we had expected that, after modifying the theoretical model provided in Ref. [38] to account for the dominant thermoelectric effect in bulk, our numerical results would also explain the experimental measurements on PSBs in the Debye-Hückel regime [8]. Unfortunately, only an unphysically large increase in the bare surface potential, respectively the surface charge density, leads to sufficient agreement.
By examining the dependence of our hydrodynamic approach for thermophoretic transport on the colloidal particle dimensions, we have revealed similar results. A varying surface charge density does not yield agreement between the theoretical predictions for the thermal diffusion coefficient and the experimental data measured by Duhr and Braun [8] on PSBs; for the experiments conducted by Braibanti _et al_[24] and Eslahian _et al_[85], we can achieve a consistent description only by tuning the parameters to regimes which are hard to reconcile with the measured values. In particular, this discrepancy in our theoretical analysis of the thermal diffusion coefficient of PSBs revives a prolonged debate initiated in Ref. [87]. It concerns the question of whether different regimes exist, where either the system is in local thermodynamic equilibrium, maximizing the number of microstates of the counter-ions in the Debye layer surrounding the particle [25], or dissipation via local fluid flow dominates the phoretic motion, thereby characterizing non-equilibrium transport. In the first regime thermal fluctuations may become important, while in the other regime hydrodynamic stresses determine the phoretic drift velocity. The experiments on PSBs appear to fall into the first regime, where theoretical models based on irreversible thermodynamics are suitable and the hydrodynamic approach alone fails to account for the data, since it displays only small corrections to the thermal diffusion coefficient [21]. A detailed analysis of thermophoresis beyond thermodynamic equilibrium is provided in the companion paper [88]. More generally speaking, our theoretical treatment corroborates the hydrodynamic character of thermophoretic motion as a force-free interfacial phenomenon with local solvent flow in the vicinity of the colloid, by showing that an explicit dependence on the hydrodynamic boundary condition occurs. This was also argued in Ref. [19].
Thus, a careful treatment of the surface properties of the colloidal particle plays a critical role in thermophoretic phenomena. Moreover, we have also generalized the force-free argument beyond the boundary layer approximation used in other theoretical approaches [41, 17, 29].

## Acknowledgments

We are grateful to Bernhard Altaner for numerous helpful discussions. This work has been supported by the Austrian Science Fund (FWF): I5257-N.

## Appendix A Soret effect of the ions

We follow a commonly used approach describing ionic thermophoresis caused by hydration effects [56, 19, 41]. Here, the different ionic species are understood as a dilute gas of non-interacting charged particles enclosed by hydration layers of water molecules. The current densities of the ionic solutes \[\mathbf{j}_{i}(\mathbf{r})=-D_{i}n_{i}^{b}(\mathbf{r})\left[\nabla\log n_{i}^ {b}(\mathbf{r})+\frac{Q_{i}^{*}}{k_{\mathrm{B}}T}\frac{\nabla T}{T}-\frac{z_ {i}e\mathbf{E}^{\mathrm{th}}(\mathbf{r})}{k_{\mathrm{B}}T}\right], \tag{10}\] in the bulk solution with ion concentrations \(n_{i}^{b}(\mathbf{r})\) comprise mass and thermal diffusion as well as thermoelectric migration. Here, the Einstein diffusion coefficient \(D_{i}\) is evaluated at the reference temperature \(T\), and \(Q_{i}^{*}\) denotes the temperature-independent heat of transport of each ionic solute due to hydration by surrounding water molecules in the limit of infinite dilution [53, 54, 55]. Switching on the temperature gradient, the corresponding currents [Eq. (10)] accumulate ions in a thin layer of thickness \(\sim 1/\kappa_{0}\) close to the hot and cold boundaries of the system. The thermoelectric field \(\mathbf{E}^{\mathrm{th}}(\mathbf{r})\) is then fixed by the steady state of the solutes, where the ion currents \(\mathbf{j}_{i}(\mathbf{r})=0\) vanish. This may be justified by the significantly slower reaction of the colloidal particle as compared to the ions [89, 28].
In bulk, we further use the condition of local charge neutrality \(\sum_{i}z_{i}e\,n_{i}^{\mathrm{b}}(\mathbf{r})=0\) (at least over spatial scales larger than the characteristic width of the Debye double layer and far away from the reservoir boundaries). Then in the equation for the total current, \(\sum_{i}z_{i}e\,\mathbf{j}_{i}(\mathbf{r})=0\), the terms originating from gradients in the concentration cancel, leading to \[\left(\sum_{i=1}^{N}z_{i}en_{i}^{\mathrm{b}}(\mathbf{r})Q_{i}^{*}\right)\frac {\nabla T}{T}-\left(\sum_{i=1}^{N}z_{i}^{2}e^{2}n_{i}^{\mathrm{b}}(\mathbf{r} )\right)\mathbf{E}^{\mathrm{th}}(\mathbf{r})=0. \tag{11}\] To linear order in the thermal gradient, we replace the ion concentration and temperature by their unperturbed values \(n_{i}^{b}(\mathbf{r})\mapsto n_{i,0}^{b},T(\mathbf{r})\mapsto T_{0}\), such that the thermoelectric field becomes uniform \[\mathbf{E}^{\mathrm{th}}=-\phi^{\mathrm{th}}\frac{\nabla T}{T_{0}}, \tag{12}\] where we define the thermoelectric potential as \[\phi^{\mathrm{th}}=-\frac{\sum_{i=1}^{N}z_{i}n_{i,0}^{b}Q_{i}^{*}}{\sum_{i=1} ^{N}z_{i}^{2}en_{i,0}^{b}}. \tag{13}\] Substituting the obtained thermoelectric field in Eq. (10), the steady state of the ionic solutes in bulk is governed to linear order in the thermal gradient by \[\frac{\nabla n_{i}^{\mathrm{b}}(\mathbf{r})}{n_{i,0}^{\mathrm{b}}}=-\frac{Q_ {i}^{*}+z_{i}e\phi^{\mathrm{th}}}{k_{\mathrm{B}}T_{0}}\frac{\nabla T}{T_{0}}=:-S_{T}^{i} \nabla T. \tag{14}\] From the last identity we read off the _ionic Soret coefficients_ \[S_{T}^{i}=\mathcal{S}_{T}^{i}+\frac{z_{i}e\phi^{\mathrm{th}}}{k_{\mathrm{B}} T_{0}^{2}}, \tag{15}\] with \(\mathcal{S}_{T}^{i}=Q_{i}^{*}/k_{\mathrm{B}}T_{0}^{2}\). The first contribution arises from hydration effects of the water molecules and is connected to the ionic heat of transport \(Q_{i}^{*}\), whereas the second contribution originates from electric migration in the thermoelectric field \(\mathbf{E}^{\mathrm{th}}\).
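As a numerical sketch of Eqs. (13)-(15), the thermoelectric potential of a binary monovalent salt can be evaluated from the single-ion heats of transport. The values used for Na\(^{+}\) and Cl\(^{-}\) below are representative literature figures, not taken from Table 1 of this paper; the check at the end verifies that the general formula reduces to the binary expression \(-(\mathcal{S}_{T}^{+}-\mathcal{S}_{T}^{-})T_{0}/2\) quoted in Appendix D.

```python
kB = 1.380649e-23       # J/K
e  = 1.602176634e-19    # C
NA = 6.02214076e23      # 1/mol
T0 = 298.15             # K

# Illustrative heats of transport Q_i* (J per ion); representative
# literature values, NOT read from Table 1 of the paper.
Q = {"Na+": 3.46e3 / NA, "Cl-": 0.53e3 / NA}
z = {"Na+": 1, "Cl-": -1}
n = {"Na+": 1.0, "Cl-": 1.0}   # equal bulk concentrations (units cancel)

def thermoelectric_potential(ions):
    """Eq. (13): phi_th = -sum_i z_i n_i Q_i* / sum_i z_i^2 e n_i, in volts."""
    num = sum(z[i] * n[i] * Q[i] for i in ions)
    den = sum(z[i] ** 2 * e * n[i] for i in ions)
    return -num / den

phi_th = thermoelectric_potential(["Na+", "Cl-"])   # about -15 mV here

# Dimensionless form e*phi_th/(kB*T0); for a symmetric monovalent salt it
# must reduce to -(S_T^+ - S_T^-)*T0/2 with S_T^i = Q_i*/(kB*T0^2).
S = {i: Q[i] / (kB * T0**2) for i in Q}
phi_binary = -(S["Na+"] - S["Cl-"]) * T0 / 2
assert abs(e * phi_th / (kB * T0) - phi_binary) < 1e-9
```

The corresponding Seebeck coefficient \(\phi^{\mathrm{th}}/T_{0}\) comes out at roughly \(-50\,\mu\)V/K for these inputs, in line with commonly quoted values for NaCl.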
## Appendix B Chebyshev spectral collocation

Using the nonlinear transformation \(\Phi_{L}(t)\), the derivatives with respect to the new variable \(t\) are readily calculated as \[\frac{\mathrm{d}\phi_{0}(r)}{\mathrm{d}r} =\frac{(t-1)^{2}}{2L}\frac{\mathrm{d}\phi_{0}(t)}{\mathrm{d}t}, \tag{16a}\] \[\frac{\mathrm{d}^{2}\phi_{0}(r)}{\mathrm{d}r^{2}} =\frac{(t-1)^{4}}{4L^{2}}\frac{\mathrm{d}^{2}\phi_{0}(t)}{\mathrm{ d}t^{2}}+\frac{(t-1)^{3}}{2L^{2}}\frac{\mathrm{d}\phi_{0}(t)}{\mathrm{d}t}, \tag{16b}\] by successively applying the chain rule. Consequently, the transformed nonlinear differential equation with respect to the variable \(t\in[-1,1)\) reads \[\frac{(t-1)^{4}}{4L^{2}} \frac{\mathrm{d}^{2}\phi_{0}(t)}{\mathrm{d}t^{2}}+\left[\frac{(t-1)^{4}(L-1)}{2L^{2}(L+1+t(L-1))}\right]\frac{\mathrm{d}\phi_{0}(t)}{\mathrm{d }t}\] \[+\kappa_{0}^{2}\sum_{i=1}^{N}z_{i}n_{i,0}^{\mathrm{b}}\exp(-z_{i} \phi_{0}(t))=0, \tag{17}\] and the boundary conditions transform to \[\lim_{t\to 1}\phi_{0}(t) =0, \tag{111a}\] \[\left.\frac{\mathrm{d}\phi_{0}(t)}{\mathrm{d}t}\right|_{t=-1} =-\frac{L}{2}\sigma_{0}. \tag{111b}\] The solution \(\phi_{0}(t)\) is approximated at the Chebyshev-Gauss-Lobatto nodes \[t_{j}=\cos\left(\frac{j\pi}{M}\right),\quad j=0,\ldots,M, \tag{112}\] by a global polynomial interpolant \[\phi_{0}(t)\approx p_{M}(t)=\sum_{k=0}^{M}\varphi_{k}\ell_{k}(t), \tag{113}\] where \(\varphi_{k}:=\phi_{0}(t_{k})\) and \(\ell_{k}(t)\) denote the Lagrange polynomial basis functions satisfying \(\ell_{k}(t_{j})=\delta_{jk}\)[70; 74]. Then, the approximation of the \(p\)-th derivative of the function \(\phi_{0}(t)\) is achieved by differentiating the interpolant and evaluating the result at the nodal points \(\{t_{j}\}\), defining the Chebyshev differentiation matrices \(\mathsf{D}^{(p)}\) with entries \[\mathsf{D}^{(p)}_{jk}=\frac{\mathrm{d}^{p}\ell_{k}(t)}{\mathrm{d}t^{p}}\bigg{|} _{t=t_{j}}.
\tag{114}\] For the first-order differentiation matrix \(\mathsf{D}^{(1)}\), this yields [69; 70] \[\mathsf{D}^{(1)}_{jk}=\begin{cases}\frac{c_{j}}{c_{k}}\frac{(-1)^{j+k}}{t_{j} -t_{k}},&j\neq k\\ -\sum_{k=0,k\neq j}^{M}\mathsf{D}^{(1)}_{jk},&j=k,\end{cases} \tag{115}\] where \(j,k=0,\ldots,M\), \(c_{0}=c_{M}=2\) and \(c_{l}=1\) for \(l=1,\ldots,M-1\). Here, we reduce possible cancellation errors in the diagonal elements of the differentiation matrix as \(M\) increases by calculating them from the analytic expressions for the off-diagonal elements [90; 91; 92] (see the first line in Eq. (115)). Furthermore, the summands in Eq. (115) are rearranged in ascending order to avoid smearing. Moreover, the second-order Chebyshev differentiation matrix can be obtained from \(\mathsf{D}^{(2)}=\left(\mathsf{D}^{(1)}\right)^{2}\), applying the same correction technique for the diagonal entries \(\mathsf{D}^{(2)}_{jj}\), which leads to significantly higher accuracy. Consequently, the numerical differentiation at the Chebyshev collocation points \(t_{j}\) can be written in vector form \[\mathbf{y}^{(p)}=\mathsf{D}^{(p)}\mathbf{y},\quad p=1,2, \tag{116}\] with the coefficient vectors \[\mathbf{y} :=\left(\varphi_{0},\ldots,\varphi_{M}\right)^{T}, \tag{117a}\] \[\mathbf{y}^{(p)} :=\left(\varphi_{0}^{(p)},\ldots,\varphi_{M}^{(p)}\right)^{T}. \tag{117b}\] The collocation method states that the polynomial interpolant [Eq. (113)] satisfies the nonlinear ODE [Eq. (17)] at the inner collocation points \(t_{j},\ j=1,\ldots,M-1\), yielding the discrete approximation \[\sum_{k=0}^{M}\left[\frac{(t_{j}-1)^{4}(L-1)}{2L^{2}(L+1+t_{j}(L- 1))}\mathsf{D}^{(1)}_{jk}+\frac{(t_{j}-1)^{4}}{4L^{2}}\mathsf{D}^{(2)}_{jk} \right]\varphi_{k}\] \[\qquad\quad+\kappa_{0}^{2}\sum_{i=1}^{N}z_{i}n^{\mathrm{b}}_{i,0 }\exp\left(-z_{i}\varphi_{j}\right)=0.
\tag{118}\] Evaluating the boundary conditions at the noncollocated endpoints \(t_{0}\) and \(t_{M}\), \[\varphi_{0} =0, \tag{119a}\] \[\sum_{k=0}^{M}\mathsf{D}^{(1)}_{Mk}\varphi_{k} =-\frac{L}{2}\sigma_{0}, \tag{119b}\] results in a set of \(M+1\) nonlinear algebraic equations for the variables \(\varphi_{0},\ldots,\varphi_{M}\), which are solved using a Newton-Raphson method with a constant initial guess \(\varphi_{j}=1\) for all \(j=1,\ldots,M\). An approximate solution on the unbounded domain \([1,\infty)\) in terms of a transformed barycentric interpolant then reads \[\phi_{0}(r)\approx p_{M}(r)=\frac{\sum_{j=0}^{M}W_{j}(r)\varphi_{j}}{\sum_{j=0 }^{M}W_{j}(r)}, \tag{120}\] with \(W_{j}(r)=(-1)^{j}w_{j}/\left[\Phi_{L}^{-1}(r)-\Phi_{L}^{-1}(r_{j})\right]\) and the reduced barycentric weights \(w_{0}=w_{M}=1/2\) or \(w_{j}=1\), \(j=1,\ldots,M-1\)[76]. Similar expressions for the first and second derivative can be obtained by substituting \(\varphi_{j}\) with \(\big(\mathsf{D}^{(1)}\mathbf{y}\big)_{j}\), respectively \(\big(\mathsf{D}^{(2)}\mathbf{y}\big)_{j}\).

## Appendix C Matrix representation for the asymptotic constants

From the solutions \(\omega_{i}^{k}(r),R^{k}(r),\,k=1,\ldots,N+2\) for the \(N+2\) linear ODEs we can calculate the components of the coefficient matrix \(\mathsf{A}\) for the linear problem [Eq. (79)] \[\mathsf{A}_{i,k} =\frac{\mathrm{d}\omega_{i}^{k}(r)}{\mathrm{d}r}\bigg{|}_{r=1}, \tag{121a}\] \[\mathsf{A}_{N+1,k} =R^{k}(r)|_{r=1}, \tag{121b}\] \[\mathsf{A}_{N+2,k} =\frac{\mathrm{d}R^{k}(r)}{\mathrm{d}r}\bigg{|}_{r=1}-\lambda \frac{\mathrm{d}^{2}R^{k}(r)}{\mathrm{d}r^{2}}\bigg{|}_{r=1}, \tag{121c}\] with \(i=1,\dots,N\) and \(k=1,\dots,N+2\).
In addition, the components of the corresponding vector \(\mathbf{B}\) are given as \[B_{i}= -\left.\frac{\mathrm{d}\omega_{i}(r)}{\mathrm{d}r}\right|_{r=1} -\begin{cases}0,&(1)\\ z_{i}\left[\phi_{0}(r)+\phi^{\mathrm{th}}\right]|_{r=1},&(2)\end{cases} \tag{57a}\] \[B_{N+1}= -\frac{R(r)}{r}\bigg{|}_{r=1}+\begin{cases}1/2,&(1)\\ 0,&(2)\end{cases} \tag{57b}\] \[B_{N+2}= -\frac{\mathrm{d}R(r)}{\mathrm{d}r}\bigg{|}_{r=1}+\lambda\frac{ \mathrm{d}^{2}R(r)}{\mathrm{d}r^{2}}\bigg{|}_{r=1} +\begin{cases}1/2,&(1)\\ 0,&(2)\end{cases} \tag{57c}\] with \(i=1,\dots,N\). Thus, the asymptotic coefficients for each problem can be calculated formally as \[\mathbf{C}=\mathsf{A}^{-1}\cdot\mathbf{B}. \tag{58}\]

## Appendix D Typical values for relevant parameters

In this appendix, we provide typical values of the various parameters for an aqueous electrolyte in the presence of different salt ions. Unless otherwise stated, all values are determined at the reference temperature \(T_{0}=298.15\,\mathrm{K}\) (\(25\,\mathrm{\SIUnitSymbolCelsius}\)). Here the solvent is modeled as pure water with relative dielectric permittivity \(\epsilon_{\mathrm{r}}^{0}=78.304\)[94], logarithmic derivative \(\alpha=1.35\)[93] and solvent viscosity \(\eta=890.45\times 10^{-6}\,\mathrm{Pa\,s}\). In addition, Soret coefficients \(\mathcal{S}_{T}^{i}\) arising from hydration effects of the water molecules, electric mobilities \(\mu_{i}^{\varepsilon}=z_{i}e\mu_{i}^{0}\) and the corresponding ionic Péclet numbers \(\mathrm{Pe}_{i}\) for the different ion species are summarized in Table 1 and refer to an infinitely dilute aqueous solution. We also list the dimensionless thermoelectric potential \(\phi^{\mathrm{th}}\) for the various monovalent salts.
It can be calculated as \(\phi^{\mathrm{th}}=-(\mathcal{S}_{T}^{+}-\mathcal{S}_{T}^{-})T_{0}/2\) from \(\mathcal{S}_{T}^{\pm}=Q_{\pm}^{*}/k_{\mathrm{B}}T_{0}^{2}\) for cations (\(+\)) and anions (\(-\)), arising from the heats of ion hydration \(Q_{\pm}^{*}\), which have been measured experimentally in Ref. [55] at temperature \(T_{0}\) for a broad range of different ionic solutes. Again, since relevant values for TRIS-H\({}^{+}\) are not available, we follow Ref. [19] and choose \(\mathcal{S}_{T}^{\mathrm{TRIS-H}^{+}}=\mathcal{S}_{T}^{\mathrm{Na}^{+}}\), together with the mobility \(\mu_{\mathrm{TRIS-H}^{+}}^{e}=2.67\times 10^{-8}\,\mathrm{m}^{2}\mathrm{V}^{-1} \mathrm{s}^{-1}\) taken from a similar organic compound, the amino acid leucine [84]. All other mobilities are converted from the limiting equivalent conductivities of the ions [93]. Moreover, using the Stokes-Einstein relation, the ionic Péclet numbers are computed from \(\mathrm{Pe}_{i}=U_{0}a/D_{i}^{0}=\epsilon_{0}\epsilon_{\mathrm{r}}^{0}k_{ \mathrm{B}}T_{0}z_{i}/(e\eta\mu_{i}^{\varepsilon})\), where, apart from the solvent viscosity, only properties of the dissolved ions determine their values.
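As a sketch of this last estimate (written with the factor of \(e\) that renders the expression dimensionless), the ionic Péclet numbers can be evaluated directly. The Na\(^{+}\) and Cl\(^{-}\) mobilities below are textbook limiting values converted from equivalent conductivities, not read from Table 1 of this paper.

```python
# Pe_i = eps0 * epsr * kB * T0 * z_i / (e * eta * mu_i^e), dimensionless.
eps0 = 8.8541878128e-12   # F/m, vacuum permittivity
epsr = 78.304             # relative permittivity of water (from the text)
kB   = 1.380649e-23       # J/K
e    = 1.602176634e-19    # C
T0   = 298.15             # K
eta  = 890.45e-6          # Pa s (from the text)

# Signed electrophoretic mobilities (m^2 V^-1 s^-1); illustrative
# limiting values, NOT the Table 1 entries.
mu = {"Na+": 5.19e-8, "Cl-": -7.91e-8}
z  = {"Na+": 1, "Cl-": -1}

def peclet(ion):
    """Ionic Peclet number; z_i and mu_i^e share the same sign, so Pe > 0."""
    return eps0 * epsr * kB * T0 * z[ion] / (e * eta * mu[ion])

pe_na = peclet("Na+")   # order 0.1-1 for small monatomic ions
pe_cl = peclet("Cl-")
```

Because \(z_{i}\) and \(\mu_{i}^{\varepsilon}\) carry the same sign, the Péclet number is positive for cations and anions alike; the less mobile ion (here Na\(^{+}\)) acquires the larger value.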
2303.08666
Search for dark photon decays to $μ^+μ^-$ at NA62
The NA62 experiment at CERN, designed to study the ultra-rare decay $K^+ \to \pi^+\nu\overline{\nu}$, has also collected data in beam-dump mode. In this configuration, dark photons may be produced by protons dumped on an absorber and reach a decay volume beginning 80 m downstream. A search for dark photons decaying in flight to $\mu^+\mu^-$ pairs is reported, based on a sample of $1.4 \times 10^{17}$ protons on dump collected in 2021. No evidence for a dark photon signal is observed. A region of the parameter space is excluded at 90% CL, improving on previous experimental limits for dark photon masses between 215 and 550 MeV$/c^2$.
NA62 Collaboration
2023-03-15T14:53:07Z
http://arxiv.org/abs/2303.08666v2
# European Organization for Nuclear Research

###### Abstract

The NA62 experiment at CERN, designed to study the ultra-rare decay \(K^{+}\to\pi^{+}\nu\bar{\nu}\), has also collected data in beam-dump mode. In this configuration, dark photons may be produced by protons dumped on an absorber and reach a decay volume beginning 80 m downstream. A search for dark photons decaying in flight to \(\mu^{+}\mu^{-}\) pairs is reported, based on a sample of \(1.4\times 10^{17}\) protons on dump collected in 2021. No evidence for a dark photon signal is observed. A region of the parameter space is excluded at 90% CL, improving on previous experimental limits for dark photon masses between 215 and 550 MeV/\(c^{2}\).

## 1 Introduction

Proposed extensions of the Standard Model (SM) aimed at explaining the abundance of dark matter in the universe predict an additional \(U(1)\) gauge-symmetry sector with a vector mediator field \(A^{\prime}\), often called "dark photon". In a simple realization of such a scenario [1, 2], the \(A^{\prime}_{\mu}\) field with mass \(M_{A^{\prime}}\) interacts with the gauge field \(B^{\mu}\) associated with the SM \(U(1)\) symmetry through a kinetic-mixing Lagrangian term: \[-\varepsilon\frac{1}{2\cos\theta_{W}}F^{\prime}_{\mu\nu}B^{\mu\nu}, \tag{1}\] where \(F^{\prime}_{\mu\nu}=\partial_{\mu}A^{\prime}_{\nu}-\partial_{\nu}A^{\prime}_{\mu}\), \(B^{\mu\nu}=\partial^{\mu}B^{\nu}-\partial^{\nu}B^{\mu}\), \(\theta_{W}\) is the Weinberg angle, and \(\varepsilon\ll 1\) is the coupling constant. The mass \(M_{A^{\prime}}\) and the coupling constant \(\varepsilon\) are the free parameters of the model. The relevant features of the dark photon phenomenology are:

* Dark photons can be produced in proton-nucleus interactions via bremsstrahlung or decays of secondary mesons. The two mechanisms differ in terms of production cross-section and distributions of the momenta and angles of the dark photons. At the energy of SPS protons (400 GeV), the probability for production of a dark photon with a momentum above 10 GeV/\(c\) is of the order of \(10^{-2}\times\varepsilon^{2}\) per proton.
* For \(\varepsilon\) in the range from \(10^{-7}\) to \(10^{-5}\) and \(M_{A^{\prime}}\) in the range from MeV/\(c^{2}\) to GeV/\(c^{2}\), the decay lengths of dark photons with momenta above 10 GeV/\(c\) span from tens of metres to tens of kilometres.
* Due to the feeble interaction with SM particles, dark photons can punch through tens of metres of material before decaying.
* For \(M_{A^{\prime}}\) below 700 MeV/\(c^{2}\), the dark photon decay width is dominated by di-lepton final states.

Other new-physics scenarios can lead to di-lepton final states. Proton beam-dump experiments are a high-intensity source of secondary muons, providing an opportunity to probe muon-specific dark sectors [3]. Another scenario, which is considered here, is the proton-induced emission of axion-like particles (ALP) coupled to SM fermionic fields [4]. An ALP \(a\) can be emitted in the decays of charged or neutral \(B\) mesons produced in proton-nucleus interactions: \[pN\to BX,\ \ \mbox{followed by}\ B\to K^{(*)}a. \tag{2}\] ALPs with masses below 700 MeV/\(c^{2}\) and interacting only with SM fermionic fields decay mainly to di-lepton modes. To address the general scenario in which the coupling of ALPs to SM fermionic fields is not uniform (for example, the coupling to quarks differs from that to leptons), a model-independent approach is adopted: the product of branching ratios \[\mbox{BR}(B\to K^{(*)}a)\times\mbox{BR}(a\rightarrow\mu^{+}\mu^{-}) \tag{3}\] is assumed to be independent of the \(a\) lifetime. The free parameters in this case are the \(a\) mass and lifetime, and the product of the branching ratios of eq. (3).
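The decay-length ranges quoted above follow from the standard relativistic relation \(L=(p/M_{A^{\prime}}c)\,c\tau\). The sketch below uses the well-known leading-order kinetic-mixing partial width \(\Gamma(A^{\prime}\to\ell^{+}\ell^{-})=\tfrac{1}{3}\alpha\varepsilon^{2}M_{A^{\prime}}(1+2r^{2})\sqrt{1-4r^{2}}\), \(r=m_{\ell}/M_{A^{\prime}}\), and assumes for simplicity that only the \(e^{+}e^{-}\) and \(\mu^{+}\mu^{-}\) channels are open; it is an illustration, not the computation used by NA62.

```python
import math

ALPHA = 1.0 / 137.035999   # fine-structure constant
HBAR  = 6.582119569e-25    # GeV s
C     = 2.99792458e8       # m/s
M_E, M_MU = 0.000511, 0.105658  # lepton masses, GeV/c^2

def gamma_ll(mA, eps, ml):
    """Leading-order A' -> l+ l- partial width in GeV (standard result,
    quoted here for illustration)."""
    if mA <= 2 * ml:
        return 0.0
    r2 = (ml / mA) ** 2
    return ALPHA * eps**2 * mA * (1 + 2 * r2) * math.sqrt(1 - 4 * r2) / 3.0

def decay_length(mA, eps, p):
    """Mean lab-frame decay length in metres for momentum p (GeV/c),
    with gamma*beta = p/(M c) and only leptonic channels assumed open."""
    width = gamma_ll(mA, eps, M_E) + gamma_ll(mA, eps, M_MU)
    tau = HBAR / width          # proper lifetime, s
    return (p / mA) * C * tau

# e.g. a 300 MeV/c^2 dark photon with eps = 1e-6 at 30 GeV/c momentum
L = decay_length(0.300, 1e-6, 30.0)   # tens of metres
```

For \(\varepsilon\sim 10^{-6}\) this gives decay lengths of order 10 m at NA62 momenta, consistent with the "tens of metres to tens of kilometres" span quoted in the list for \(\varepsilon\) between \(10^{-7}\) and \(10^{-5}\).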
The intense proton beam extracted from the CERN SPS and the NA62 setup have been exploited to search for the production and decay of dark photons by taking data in beam-dump mode: \(1.4\times 10^{17}\) protons were dumped in 10 days in 2021. The first NA62 search for dark photon decays to di-muon final states in beam-dump mode is presented.

## 2 Beam-dump operation of NA62

In the standard operation, dedicated to the study of the \(K^{+}\rightarrow\pi^{+}\nu\bar{\nu}\) decay, a 400 GeV proton beam extracted from the CERN SPS is focused onto a 400 mm long, 2 mm diameter beryllium rod to generate a secondary hadron beam. An achromat composed of two movable collimators (TAX) located between two pairs of dipoles is used for momentum selection, as sketched in the left panel of figure 1. The origin of the coordinate system is at the target centre; the Z axis is directed downstream along the beam line, the Y axis points upwards, and the X, Y and Z axes form a right-handed coordinate system. The dipoles upstream of the TAX (B1A, B1B) produce a downward translation of the beam axis, with a vertical shift inversely proportional to the particle momentum. The TAX holes are used to select beam particles in a narrow momentum range centred at 75 GeV/\(c\). The dipoles downstream of the TAX (B1C, B2) shift the beam back to the original axis. In the beam-dump operation, sketched in the right panel of figure 1, the target is removed and the holes in the two movable sections of the TAX are misaligned with respect to each other and the beam axis. The proton beam is dumped on 800 mm of copper followed by 2400 mm of iron, corresponding to a total of 19.6 nuclear interaction lengths. The currents of the dipoles preceding the TAX are set as in the standard operation. The coordinates of the average proton impact point at the TAX front plane are \[P_{0}=(0,-22,23070)\ \mathrm{mm}, \tag{4}\] with standard deviations of 4.7 and 3.2 mm in X and Y, respectively [5].
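The quoted mean impact point and spreads define a simple statistical model of the proton distribution on the TAX front plane. The toy Gaussian sampler below is a sketch for illustration only, not the beam-line simulation used by the experiment.

```python
import numpy as np

rng = np.random.default_rng(1)

# Mean impact point at the TAX front plane (mm) and the quoted spreads
P0 = np.array([0.0, -22.0, 23070.0])
SIGMA_X, SIGMA_Y = 4.7, 3.2   # mm, standard deviations in X and Y

def sample_impact_points(n):
    """Toy Gaussian model of proton impact points on the TAX front plane
    (fixed Z); returns an (n, 3) array of (x, y, z) in mm."""
    x = rng.normal(P0[0], SIGMA_X, n)
    y = rng.normal(P0[1], SIGMA_Y, n)
    z = np.full(n, P0[2])
    return np.column_stack([x, y, z])

pts = sample_impact_points(100_000)
```

Sampling many points recovers the quoted mean and spreads, which is all this toy model encodes; correlations and non-Gaussian tails of the real beam are ignored.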
The beam axis at the impact point is parallel to the Z axis.

Figure 1: Schematic Y-Z view of the TAX achromat: standard (left) and beam-dump (right) setups. The holes in the TAX movable parts are aligned (left) and misaligned (right). The beam enters from the left. The trajectory of a proton with 400 GeV/\(c\) momentum along Z at the origin is drawn in red. In the left panel, the trajectory of a particle with positive charge and 75 GeV/\(c\) momentum along Z at the origin is drawn in blue.

In the beam-dump operation (unlike in the standard mode) the currents of the B1C and B2 dipoles are set to produce magnetic fields in the same direction. The magnetic field strength generated by B1C (B2) is \(-1.8\) T (\(-0.6\) T) along X to minimise the flux of "halo" muons produced by pion decays within the TAX, as predicted by simulations [6]. The measurement of the muon flux relative to the standard operation as a function of the B2 current has confirmed the prediction (figure 2).

### 2.1 NA62 beam line and detector

The beam line and detector [7] are sketched in figure 3. The elements relevant for the beam-dump operation are discussed here. In addition to the dipole pair B1C-B2, other elements increase the capability of the beam line to sweep halo muons away from the detector acceptance. The elements with the highest sweeping power are: a triplet of magnetization-saturated dipole magnets (B3); a toroidally-magnetized iron collimator (SCR); and the return yokes of the B5 and B6 magnets in the beam-tracker region (GTK, not used for this analysis). The cleaning collimator preceding the most downstream GTK station (COL, a 1.2 m thick steel block with outer dimensions \(1.7\times 1.8\) m\({}^{2}\)) and the newly-installed ANTI0 scintillator hodoscope [8] are used to intercept and detect particles outside the vacuum pipe, respectively. The most downstream GTK station at Z = 102.4 m marks the beginning of a 117 m long vacuum tank evacuated to a pressure of \(10^{-6}\) mbar.
Momenta and directions of charged particles are measured by a magnetic spectrometer (STRAW) comprising two pairs of straw chamber stations on either side of a dipole magnet. The resolution of the momentum \(p\) expressed in GeV/\(c\) is \(\sigma_{p}/p=(0.30\oplus 0.005\times p)\%\). The ring-imaging Cherenkov counter (RICH) is not used in the present analysis. Two scintillator hodoscopes (CHOD and NA48-CHOD), consisting of a matrix of tiles and two orthogonal planes of slabs, provide time measurements with 600 and 200 ps resolution, respectively. Particle identification is provided by a quasi-homogeneous liquid krypton electromagnetic calorimeter (LKr), two hadronic calorimeters (MUV1,2), and a muon detector (MUV3) just downstream of an 80 cm thick iron absorber. A photon veto system includes the LKr, twelve ring-shaped lead-glass detectors (LAV) and small angle calorimeters (IRC and SAC). Synchronous energy deposits in nearby LKr cells are grouped into clusters. The LKr resolution of the energy \(E\) expressed in GeV is \(\sigma_{E}/E=(4.8/\sqrt{E}\oplus 11/E\oplus 0.9)\%\). The LKr spatial and time resolutions are 1 mm and between 0.5 and 1 ns, respectively, depending on the amount and type of energy released.

Figure 2: Relative muon flux measured by the MUV3 detector (section 2.1) as a function of the B2 magnet current. The reference point for the standard operation is \(+770\) A, corresponding to a field strength of 1.8 T. The arrow indicates the working point for the beam-dump data taking, \(-250\) A.

### 2.2 Data sample

Three trigger lines are employed during beam-dump operation. Two of them are used to identify charged particles: Q1, triggered by events with at least one signal in the CHOD and downscaled by a factor of 20; H2, triggered by events with at least two in-time signals in different tiles of the CHOD. The third trigger line, the Control trigger, is used to identify both charged and neutral particles.
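In the detector-resolution formulas above, \(\oplus\) denotes the usual sum in quadrature of the individual terms. A minimal sketch of how the STRAW and LKr resolutions are evaluated:

```python
import math

def straw_sigma_p_over_p(p):
    """STRAW relative momentum resolution for p in GeV/c:
    (0.30 "+" 0.005*p)% with the terms combined in quadrature."""
    return math.hypot(0.30, 0.005 * p) / 100.0

def lkr_sigma_E_over_E(E):
    """LKr relative energy resolution for E in GeV:
    (4.8/sqrt(E) "+" 11/E "+" 0.9)% combined in quadrature."""
    return math.hypot(4.8 / math.sqrt(E), 11.0 / E, 0.9) / 100.0
```

For example, at the nominal 75 GeV/\(c\) beam momentum the STRAW resolution evaluates to about 0.48%, and at 10 GeV the LKr resolution to about 2.1%.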
The Control trigger requires a total energy above 1 GeV in the LKr, with one or more reconstructed clusters. More details on the trigger can be found in [9, 10]. The attenuation by the TAX allows the proton beam to be operated at a rate of \(6.6\times 10^{12}\) protons per spill of 4.8 seconds effective duration, equivalent to 1.7 times the intensity of the standard operation. At this intensity, the rates of Control, downscaled Q1, and H2 triggers are 4, 14, and 18 kHz, respectively.

Figure 3: Schematic layout in the Y-Z plane of the NA62 experiment for the 2021 data taking. Certain elements of the beam line are not shown.

## 3 Signal simulation

Monte Carlo (MC) simulations of particle interactions with the detector and its response are performed using a software package based on the GEANT4 toolkit [11]. The response of the trigger lines is emulated as well. After a proton interaction in the TAX, \(A^{\prime}\) emission can proceed via a bremsstrahlung process or in a decay of secondary mesons. Bremsstrahlung production is understood in the generalized Fermi-Weizsacker-Williams approximation from the scattering process [12] \[\gamma^{*}p\to A^{\prime}p^{\prime}, \tag{5}\] where the virtual photon \(\gamma^{*}\) is exchanged between the incoming proton \(p\) and a nucleus (\(N\)), leading to a scattered proton \(p^{\prime}\) and a dark photon \(A^{\prime}\) in the final state. The production chain via meson decays can be summarized as \[pN\to MX\mbox{, where }M=\pi^{0}\mbox{, }\eta^{(\prime)}\mbox{, }\rho\mbox{, }\omega\mbox{, }\phi\mbox{,} \tag{6}\] followed by \[\begin{array}{rcl}M\rightarrow&\gamma A^{\prime}&\mbox{ for }M=&\pi^{0}\mbox{, }\eta^{(\prime)}\mbox{;}\\ M\rightarrow&\pi^{0}A^{\prime}&\mbox{ for }M=&\eta^{\prime}\mbox{, }\rho\mbox{, }\omega\mbox{, }\phi\mbox{;}\\ M\rightarrow&\eta A^{\prime}&\mbox{ for }M=&\rho\mbox{, }\omega\mbox{, }\phi\mbox{.}\end{array} \tag{7}\] The PYTHIA 8.2 generator [13] is used to model meson production.
The differential cross-sections predicted by the simulation have been validated against available data [14]. Simulations of \(A^{\prime}\) production and decay are used to evaluate the acceptance, the selection efficiency and other properties of the expected signal. For each production mechanism, bremsstrahlung or meson decay, two decay modes, \(A^{\prime}\to e^{+}e^{-}\) and \(A^{\prime}\rightarrow\mu^{+}\mu^{-}\), are considered, with \(A^{\prime}\) masses in the range 5-700 MeV/\(c^{2}\) in 5-MeV/\(c^{2}\) steps. The \(A^{\prime}\) is constrained to decay in the volume \(102<\mbox{Z}<180\) m, with a decay path sampled from a flat distribution. At least \(1.2\times 10^{5}\) events are simulated for each mass value, production mechanism, and decay mode. The expected dark photon yield for each value of the mass and coupling constant is expressed as: \[N_{\rm exp}=N_{p}\times\mbox{P}(pN\to A^{\prime})\times\mbox{P}_{ \rm D}\times\mbox{BR}(A^{\prime}\rightarrow\ell^{+}\ell^{-})\times A_{\rm sel}\mbox {,} \tag{8}\] where * \(N_{p}=1.4\times 10^{17}\) is the number of protons dumped on TAX; * \(\mbox{P}(pN\to A^{\prime})\) is the \(A^{\prime}\) production probability per proton: depending on the production mechanism, it accounts for the bremsstrahlung cross-section or the multiplicity of each meson type times the expected decay branching ratio quoted in eq. (7); * \(\mbox{P}_{\rm D}\) is the probability for the dark photon to decay within the range \(102<\mbox{Z}<180\) m, which depends on the \(A^{\prime}\) lifetime and three-momentum distribution; * \(\mbox{BR}(A^{\prime}\rightarrow\ell^{+}\ell^{-})\) is the branching ratio of the \(A^{\prime}\) decay into a lepton pair; * \(A_{\rm sel}\) is the combined selection and trigger efficiency defined as: \[A_{\rm sel}=\left.\sum\limits_{\rm selected}w_{j}\right/\sum\limits_{\rm simulated }w_{i}\mbox{,}\] (9) where the sums run over the selected events and all simulated events. 
The weights \(w_{i}\) are used to correct for the flat distribution of the \(A^{\prime}\) decay paths \(D_{i}\) sampled at generation level, and depend on the \(A^{\prime}\) mean decay length \(\lambda_{i}\): \[w_{i}=\frac{1}{\lambda_{i}}\ e^{-\frac{D_{i}}{\lambda_{i}}}\mbox{.}\] (10) A geometrical selection, which requires that the \(A^{\prime}\) decays in the range \(105<\mbox{Z}<180\) m and its daughters are within the LKr active region, is used to compute \(A_{\rm sel}\) in eq. (8). The resulting 90% confidence level (CL) excluded region assuming zero events observed in the absence of background is shown in figure 4. For masses above 215 MeV/\(c^{2}\), the expected exclusion region from \(\mu^{+}\mu^{-}\) decays is only marginally smaller than including both di-lepton modes. The yield of bremsstrahlung events exceeds that from meson decays due to the production cross-section and the hardness of the spectra. ## 4 Event selection Events triggered by the H2 condition are used for the signal search. A good quality track, reconstructed by the STRAW spectrometer, must satisfy the following requirements: momentum in excess of 10 GeV/\(c\); downstream extrapolation within the geometrical acceptance of the NA48-CHOD, CHOD, LKr, MUV1, MUV2, MUV3 detectors, and within the inner aperture of the last LAV station; extrapolated positions at the front face of the first STRAW chamber and the LKr isolated from the other tracks; upstream extrapolation within the geometrical acceptance of the ANTI0; spatial association to an in-time CHOD signal. The track time is defined as the time of the associated NA48-CHOD signal if present, otherwise of the associated CHOD signal, and must be within 5 ns of the trigger time. An LKr cluster located within 50 mm of the track impact point and within 6 ns of the track time is associated to the STRAW track. 
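Returning to eqs. (8)-(10): because the decay path is sampled flat at generation, each simulated event carries the exponential weight of eq. (10) when the acceptance of eq. (9) is formed. A toy sketch of this importance-sampling correction, assuming a made-up mean decay length and a simplified geometrical cut:

```python
import math, random

def decay_weight(D, lam):
    """Weight of eq. (10): undoes the flat sampling of the decay path D
    for an A' with mean decay length lam (both in metres)."""
    return math.exp(-D / lam) / lam

def acceptance(events, lam):
    """A_sel of eq. (9): weighted fraction of simulated events passing the
    selection; each event is (D, passed) with D sampled flat at generation."""
    num = sum(decay_weight(D, lam) for D, ok in events if ok)
    den = sum(decay_weight(D, lam) for D, ok in events)
    return num / den

random.seed(1)
lam = 30.0                      # hypothetical A' mean decay length (m)
z0, z1 = 102.0, 180.0           # generation decay volume (m)
# toy selection standing in for the geometrical cuts: 105 < Z < 180 m
events = [(D, z0 + D > 105.0)
          for D in (random.uniform(0.0, z1 - z0) for _ in range(100_000))]
print(round(acceptance(events, lam), 3))   # analytically 0.897 for this toy
```

The \(1/\lambda_{i}\) normalisation cancels in the ratio of eq. (9) but matches eq. (10) as written.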
Figure 4: Regions excluded at 90% CL assuming zero events observed in the absence of background for meson decays or bremsstrahlung \(A^{\prime}\) production, separated by decay mode (left panel) and by production mode (right panel). The grey underlying excluded regions are obtained using the DarkCast package [15] and results from ref. [16].

A MUV3 signal found within a momentum-dependent search radius around the track impact point and within 5 ns of the track time is associated to the STRAW track. A signal from MUV3 must only be associated to one STRAW track. Particle identification (PID) relies on the ratio \(E/p\) of the associated LKr cluster energy (\(E\)) to the STRAW track momentum (\(p\)): * \(\mu\) PID: zero or one associated LKr cluster with \(E/p<0.2\) and exactly one associated MUV3 signal; * \(e\) PID: one associated LKr cluster with \(0.95<E/p<1.05\) and no associated MUV3 signal; * \(\pi\) PID: one associated LKr cluster with \(0.2<E/p<0.9\) and no associated MUV3 signal. Exactly one two-track vertex should be present in the event, reconstructed by extrapolating STRAW tracks backwards accounting for the residual magnetic field in the vacuum tank. The vertex Z coordinate must lie in the range 105-180 m. No requirement on the total charge at the vertex is imposed. The mean time of the two tracks defines the reference time. Vertices composed of oppositely charged tracks and \(\mu\)-\(\mu\) PID assignments are considered as \(A^{\prime}\rightarrow\mu^{+}\mu^{-}\) candidates. No signal from any LAV station must be present within 10 ns of the reference time to reduce the background due to secondary interactions in the material. The position of the \(A^{\prime}\) production point is evaluated as the point of closest approach, \(P_{\rm CDA}\), between the dark photon line of flight (defined by the vertex position and the sum of the three-momenta at the vertex) and the proton beam line (defined by the average impact point on the dump, eq.
(4), and parallel to the Z axis). The distance of closest approach \(\rm{CDA_{TAX}}\) is shown in figure 5 as a function of the longitudinal coordinate \(\rm{Z_{TAX}}\) of \(P_{\rm{CDA}}\) for simulated signal events. The \(\rm{Z_{TAX}}\) distribution has a mean value of 23 m with a rms width of 5.5 m. The rms width of the \(\rm{CDA_{TAX}}\) distribution is 7 mm.

Figure 5: Distance of closest approach \(\rm{CDA_{TAX}}\) vs longitudinal coordinate of the point of minimum approach \(\rm{Z_{TAX}}\) for simulated signal events. The signal region defined by eq. (11) is shown inside the rectangular contour.

The signal region (SR) is defined as \[{\rm SR:}\;6<\rm{Z_{TAX}}<40\mbox{ m and }\rm{CDA_{TAX}}<20\mbox{ mm}, \tag{11}\] and the control region (CR) is defined as \[{\rm CR:}\;-4<\rm{Z_{TAX}}<50\mbox{ m and }\rm{CDA_{TAX}}<150\mbox{ mm, excluding SR}. \tag{12}\] The CR is used for validation of the background estimates with the data, allowing the unmasking of the SR if a satisfactory agreement is found.

## 5 Background determination

The evaluation of the expected background would require the simulation of about \(N_{p}=10^{17}\) protons, which is technically too demanding. A combination of data-driven and MC methods was developed to overcome this difficulty. The distribution of the time difference between the two selected tracks, inverting some of the selection criteria, is exploited to give indications about the origin of the expected background. The following data side bands are considered: * Opposite-charge vertices with \(e\)-\(e\) or \(\mu\)-\(\mu\) PID, outside the signal and control regions; * Same-charge vertices with \(\mu\)-\(\mu\) PID, both outside and within the signal or control regions; * Same- or opposite-charge vertices with \(e\)-\(\mu\) PID, both outside and within the signal or control regions. The time difference distributions, shown in figure 6, indicate that vertices with at least one electron or positron are formed mostly by in-time tracks: this "prompt" background can be explained by secondary interactions of incoming muons within the material traversed. In contrast, di-muon vertices formed by unrelated tracks randomly paired produce a "combinatorial" background with a uniformly distributed time difference.

### Prompt background

In the available data set, \(5\times 10^{9}\) halo muons are in the acceptance of the CHOD, LKr, and MUV3 detectors. The prompt background originates from interactions of halo muons in the material traversed upstream of or within the decay volume. The main prompt background mechanism is muon-nucleus inelastic production of a hadron, usually a charged pion, followed by an in-flight decay to a muon. Two in-time muons are then present in the event. Two approaches to the simulation of the muon flux after proton interactions in the TAX have been exploited. The first method consists of the simulation of a limited number of proton interactions and the parameterisation of the muon kinematics at the TAX exit plane. This parameterisation is then used for simulation [6]. The second method enhances the proton-induced muon production to increase the simulation efficiency [17]. However, neither approach led to satisfactory results due to: the limited knowledge of the pion/kaon cross-sections for forward production and for quasi-elastic scattering in TAX nuclei; the uncertainties in the multiple scattering treatment, particularly within the iron yokes of the beam line magnets. Moreover, both methods require an oversampling of the resulting halo muons of the order of a thousand times to achieve a number of events equivalent to \(N_{p}=10^{17}\) protons, potentially inducing non-physical correlations. To overcome these issues, a backward MC simulation (BMC) fed with real data is used.
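As an aside on the geometry used throughout: the (\(\rm{Z_{TAX}}\), \(\rm{CDA_{TAX}}\)) variables of section 4 reduce to closest approach between a line and the Z axis, and the SR/CR cuts of eqs. (11)-(12) follow directly. A sketch with made-up vertex numbers (not taken from the data):

```python
import math

def cda_to_beam_axis(vertex, direction, beam_xy=(0.0, 0.0)):
    """Closest approach between the A' line of flight (vertex + t*direction)
    and the beam line through beam_xy parallel to the Z axis: minimise the
    transverse distance over t. Assumes the line is not parallel to Z."""
    vx, vy, vz = vertex
    dx, dy, dz = direction
    vx -= beam_xy[0]
    vy -= beam_xy[1]
    t = -(vx * dx + vy * dy) / (dx * dx + dy * dy)
    cda = math.hypot(vx + t * dx, vy + t * dy)
    return cda, vz + t * dz          # (CDA, Z of closest approach)

def region(z_tax, cda_tax):
    """Classify a candidate according to eqs. (11)-(12); Z in m, CDA in mm."""
    if 6.0 < z_tax < 40.0 and cda_tax < 20.0:
        return "SR"
    if -4.0 < z_tax < 50.0 and cda_tax < 150.0:
        return "CR"
    return "outside"

# made-up vertex at Z = 120 m whose line of flight points back to Z = 23 m
cda, z = cda_to_beam_axis(vertex=(0.097, 0.0, 120.0),
                          direction=(0.001, 0.0, 1.0))
print(region(z, cda * 1000.0))       # CDA converted from m to mm -> "SR"
```

Minimising over the position along the beam axis leaves only the transverse distance, which is why the problem reduces to a one-parameter minimisation.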
The input consists of a set of distributions from single tracks with \(\mu\) PID: X,Y coordinates and three-momentum components measured at a reference plane (\(\mathrm{Z=180~{}m}\)) upstream of the STRAW spectrometer. PUMAS [18], a standalone tool used in muography studies and interfaced with GEANT4, propagates each muon backward, increasing its energy according to the amount of material traversed, until reaching the upstream face of the B5 magnet at \(\mathrm{Z=92~{}m}\). The result is a sample of muons, which is expected to reproduce the experimental distributions. To validate the method, the sample of muons from BMC is input into the NA62 standard MC simulation based on GEANT4 and the results at the reference plane are compared to the original data.

Figure 6: Time difference between the two selected tracks for various PID combinations: \(\mu^{+}\mu^{-}\) (black), \(e^{+}e^{-}\) (green), \(\mu^{-}e^{+}\) (red), \(\mu^{+}e^{-}\) (light blue).

Disagreements can be explained by the different treatments of multiple scattering in PUMAS and GEANT4 and the asymmetric distribution of the energy loss, which induces tails at high momenta. To correct for such biases, a weight depending on the track momentum and its radial position at the B5 magnet plane is assigned to each muon track. A systematic uncertainty of 50% in results obtained using these simulations is derived from the comparison between data and MC distributions of angles and positions in the transverse plane. Technical limitations for the full halo muon simulation remain, particularly because of muon-induced showers downstream of the LKr, \(\mathrm{Z_{LKr}=241~{}m}\). Therefore, the MC simulation is split into two stages. All particles are propagated from the B5 magnet to the STRAW spectrometer downstream plane (\(\mathrm{Z_{STRAW}=219~{}m}\)).
Events are then kept for further propagation if either (_a_) one \(e^{\pm}/\gamma/\pi^{\pm}/p/n/K^{\pm}/K_{L}^{0}\) with a momentum above 1 GeV/\(c\) or (_b_) at least two muons, regardless of their charge, reach \(\mathrm{Z_{STRAW}}\). A number of events equivalent to \(N_{p}=0.67\times 10^{17}\) (\(8.37\times 10^{15}\)) is generated using condition (_a_) ((_b_)). Pions produced by muon interaction can decay at \(\mathrm{Z<Z_{STRAW}}\) ("\(\mu\)-\(\mu\)" background) or at \(\mathrm{Z_{STRAW}<Z<Z_{LKr}}\) ("\(\mu\)-\(\pi\)" background). To increase the statistics of the \(\mu\)-\(\pi\) component, events are oversampled forcing the pion decay to a muon before reaching the LKr. An additional background component ("Other") is due to: \(K^{\pm}\) production followed by a decay to muons; muon hard ionisation with emission of \(e^{\pm}\) interacting before reaching the LKr. In the latter case, the emitted particles can be accidentally associated to a MUV3 in-time signal from the original muon. The expected and observed numbers of events satisfying the selection without the LAV veto condition are compared, excluding the signal and control regions. The distribution of the two-track time difference for \(\mu^{+}\mu^{-}\) data events is shown in figure 7. A prompt component on top of a combinatorial background amounts to \(270\pm 27_{\mathrm{stat}}\) events. From the simulation, \(141\pm 66_{\mathrm{stat}}\pm 71_{\mathrm{syst}}\) prompt-background events are expected. The data/MC ratio, \(1.91\pm 0.91_{\mathrm{stat}}\pm 0.95_{\mathrm{syst}}\), is used to scale the predictions from the MC simulation. The expected numbers of events due to the prompt background with the LAV veto condition applied are given in table 1.

Figure 7: Time difference between the two tracks selected as \(\mu^{+}\mu^{-}\), without the LAV veto condition (section 4) and excluding vertices in the CR or SR.

The distribution of the prompt background before the LAV veto
condition in the (\(\rm Z_{TAX},\,CDA_{TAX}\)) plane is exploited to evaluate the fraction of background events in the CR (\(\rm\eta_{CR}\)). As shown in figure 8, no simulated events are observed in the CR. At 90% CL, \(\rm\eta_{CR}<1.6\%\). The corresponding upper limit on the number of expected events in the CR is 0.004. A possible prompt-background contribution produced by secondary interactions in the collimators or magnets preceding the decay volume is also investigated. In the (\(\rm Z_{TAX},\,CDA_{TAX}\)) plane, the distribution of the upstream-produced prompt background does not significantly differ from that of figure 8. A conservative estimate establishes an upper limit of 0.069 expected events in the CR at 90% CL. ### Combinatorial background A control data sample is used to evaluate the combinatorial background. The control sample consists of events satisfying the Q1 but not the H2 trigger conditions, to avoid any overlap with the signal selection. Events with a single STRAW track in time with the trigger are selected. The requirements of track quality, association with downstream detectors, and \(\mu\) PID are applied as in the signal selection. The selected single tracks are paired, simulating a random coincidence within 10 ns in the same event. The vertex reconstruction is performed as in the signal selection. Each simulated track pair is weighted to account for the time window required by the signal selection, the spill duration, the downscale factor of the Q1 trigger and the efficiency for the H2 trigger given two tracks fulfilling the Q1 condition. 
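The random-pairing procedure can be illustrated with a toy coincidence count. The numbers below are placeholders chosen so a handful of accidental pairs appear, and the per-pair weights for the trigger downscale, H2 efficiency and spill duration described above are omitted:

```python
import itertools, random

def combinatorial_pairs(track_times, window=10.0):
    """Pair distinct single tracks whose time difference is within
    `window` ns, emulating an accidental two-track coincidence."""
    return [(i, j) for (i, ti), (j, tj)
            in itertools.combinations(enumerate(track_times), 2)
            if abs(ti - tj) <= window]

random.seed(0)
# toy sample: 1000 single-track times spread flat over 1 ms (in ns)
times = [random.uniform(0.0, 1e6) for _ in range(1000)]
pairs = combinatorial_pairs(times)
# for flat times, the accidental-pair expectation is roughly
# N*(N-1)/2 * (2*window / total duration)
expected = len(times) * (len(times) - 1) / 2 * (2 * 10.0 / 1e6)
print(len(pairs), round(expected, 2))   # observed count vs Poisson mean (~10)
```

In the analysis each such pair additionally receives a weight before being compared to data, as described in the text.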
The relative systematic uncertainty in the event weight is 15% and significantly outweighs the statistical uncertainty.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline \(\mu\)-\(\mu\) & \(\mu\)-\(\pi\) & Other & Total \\ \hline 0.235 \(\pm\) 0.177 & 0.038 \(\pm\) 0.019 & 0.004 \(\pm\) 0.003 & 0.28 \(\pm\) 0.19 \(\pm\) 0.20 \\ \hline \end{tabular} \end{table} Table 1: Expected numbers of prompt-background events for \(N_{p}=1.4\times 10^{17}\) obtained from simulations. The signal selection is applied, and events in the SR or CR are excluded. The uncertainties quoted are statistical; the second uncertainty in the last column is systematic.

Figure 8: \(\mu^{+}\mu^{-}\) expected background distribution of the prompt component before the LAV veto condition, in the (\(\rm Z_{TAX},\,CDA_{TAX}\)) plane. The rectangles are the external contours of SR and CR regions.

After weighting events, the distributions of \(\mathrm{CDA_{TAX}}\) vs \(\mathrm{Z_{TAX}}\) for \(\mu^{+}\mu^{+}\) and \(\mu^{-}\mu^{-}\) events are shown in figure 9. Data events are superimposed as full dots. In figure 10, \(\mu^{+}\mu^{-}\) events are shown. Three outer control regions progressively closer to the CR and labelled as \(\mathrm{OCR_{3,2,1}}\) are considered. A comparison between observed and expected numbers of events is shown in table 2 and a good agreement is observed. For the \(\mu^{+}\mu^{-}\) final state, an alternative evaluation of the combinatorial background is obtained by determining the data/MC scaling from same-sign events outside the CR: 61 events are observed in data, while \(71.6\pm 9.5\) are expected. The scale factor is lower than that used in the previous approach by 15%, although consistent within the systematic error. Using same-sign events for scaling allows a relative statistical uncertainty of 13% and a negligible systematic contribution. The final estimate of the combinatorial background employs this alternative approach and is shown in table 3.
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline PID & Region & \(N_{\mathrm{exp}}\) & \(N_{\mathrm{obs}}\) & \(P_{\mathrm{L}\leq\mathrm{L_{obs}}}\) \\ \hline & Outside CR & \(62.5\pm 9.4\) & 53 & 0.46 \\ \(\mu^{+}\mu^{+}\) & CR & \(0.46\pm 0.07\) & 0 & 1.0 \\ & SR & \(0.040\pm 0.006\) & 0 & 1.0 \\ \hline & Outside CR & \(9.1\pm 1.4\) & 8 & 0.88 \\ \(\mu^{-}\mu^{-}\) & CR & \(0.050\pm 0.007\) & 0 & 1.0 \\ & SR & \(0.0050\pm 0.0007\) & 0 & 1.0 \\ \hline & Outside CR & \(30.9\pm 4.6\) & 28 & 0.78 \\ & \(\mathrm{OCR_{3}}\) & \(2.00\pm 0.30\) & 2 & 1.0 \\ & \(\mathrm{OCR_{2}}\) & \(0.68\pm 0.10\) & 1 & 0.48 \\ \(\mu^{+}\mu^{-}\) & \(\mathrm{OCR_{1}}\) & \(0.34\pm 0.05\) & 1 & 0.29 \\ & \(\mathrm{OCR_{1+2+3}}\) & \(3.02\pm 0.45\) & 4 & 0.56 \\ & CR & \(0.20\pm 0.04\) & – & – \\ & SR & \(0.019\pm 0.004\) & – & – \\ \hline \end{tabular} \end{table} Table 2: Numbers of expected di-muon events from combinatorial background (\(N_{\mathrm{exp}}\)), numbers of observed data events (\(N_{\mathrm{obs}}\)), and probabilities to obtain a likelihood L for data-MC compatibility equal or smaller than that corresponding to \(N_{\mathrm{obs}}\) (\(P_{\mathrm{L}\leq\mathrm{L_{obs}}}\)). The dominant uncertainty in \(N_{\mathrm{exp}}\) is systematic. Figure 9: Distributions of \(\mathrm{CDA_{TAX}}\) vs \(\mathrm{Z_{TAX}}\) for \(\mu^{+}\mu^{+}\) (left) and \(\mu^{-}\mu^{-}\) (right) events: expected combinatorial background (colour-scale plot) and data events (black dots). Data events in the SR and CR are not masked. ### Background summary The estimates of the prompt and combinatorial backgrounds are displayed in table 4. For the prompt and upstream-prompt components, the fraction of events within the SR is assumed to be ten times smaller than that for the CR, as observed for the combinatorial background. The total expected number of background events is \(0.016\pm 0.002\), dominated by the combinatorial component. 
Assuming a 90% CL coverage and no signal, no observed events are expected in the data SR. A five-sigma signal discovery for any mass \(M_{A^{\prime}}\) would correspond to the observation of three or more signal candidates in the data SR in a window of \(\pm 3\) standard deviations of the mass. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Region & Combinatorial & Prompt & Upstream-prompt \\ \hline CR & \(0.17\pm 0.02\) & \(<0.004\) & \(<0.069\) \\ SR & \(0.016\pm 0.002\) & \(<0.0004\) & \(<0.007\) \\ \hline \end{tabular} \end{table} Table 4: Summary of expected numbers of background events for the search of \(A^{\prime}\rightarrow\mu^{+}\mu^{-}\) with the related uncertainty. The limits reported are defined with a 90% CL. Figure 10: Distribution of \(\mathrm{CDA_{TAX}}\) vs \(\mathrm{Z_{TAX}}\) for \(\mu^{+}\mu^{-}\) events: expected combinatorial background (colour-scale plot) and data events (black dots). Control and signal regions are masked for data. Additional regions, \(\mathrm{OCR_{3,2,1}}\), are not masked. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Region & \(N_{\mathrm{exp}}\) & \(N_{\mathrm{obs}}\) & \(P_{\mathrm{L<L_{obs}}}\) \\ \hline Outside CR & \(26.3\pm 3.4\) & 28 & 0.74 \\ \(\mathrm{OCR_{3}}\) & \(1.70\pm 0.22\) & 2 & 0.68 \\ \(\mathrm{OCR_{2}}\) & \(0.58\pm 0.07\) & 1 & 0.44 \\ \(\mathrm{OCR_{1}}\) & \(0.29\pm 0.04\) & 1 & 0.25 \\ \(\mathrm{OCR_{1+2+3}}\) & \(2.57\pm 0.33\) & 4 & 0.34 \\ \hline CR & \(0.17\pm 0.02\) & – & – \\ SR & \(0.016\pm 0.002\) & – & – \\ \hline \end{tabular} \end{table} Table 3: Numbers of expected \(\mu^{+}\mu^{-}\) events from combinatorial background (\(N_{\mathrm{exp}}\)), numbers of data events (\(N_{\mathrm{obs}}\)), and probabilities to obtain a likelihood L for data-MC compatibility equal or smaller than that corresponding to \(N_{\mathrm{obs}}\) (\(P_{\mathrm{L\leq L_{obs}}}\)). The data/MC ratio for same-sign events is used to determine the MC scaling factor. 
The dominant uncertainty in \(N_{\mathrm{exp}}\) is statistical. ## 6 Expected signal yield The signal yield is obtained using eq. (8). The number of protons on TAX (\(N_{p}\)) is evaluated for each spill from the measurement of the beam flux which is provided by a titanium-foil secondary-emission monitor placed at the target location. The uncertainty in the \(N_{p}\) measurement is derived from the operational experience of the secondary-emission monitors in various beam-line setups and is estimated conservatively to be 20%. This figure is confirmed using data from the standard setup: the number of selected \(K^{+}\rightarrow\pi^{+}\pi^{+}\pi^{-}\) decays agrees with the number expected from the measured proton flux to within 20%. The selection and trigger efficiencies are determined by simulation as a function of the assumed dark photon mass and coupling constant, separately for the bremsstrahlung and for the meson-decay production processes. The mass is varied from 215 MeV/\(c^{2}\) to 700 MeV/\(c^{2}\). The resulting efficiencies are shown in figure 11. For any value of \(M_{A^{\prime}}\), the maximum efficiency occurs at a given value of \(\varepsilon\), because of two competing effects: for larger values of the coupling constant \(\varepsilon\), the average \(A^{\prime}\) momentum is higher to compensate for the lower \(A^{\prime}\) lifetime at rest, leading to reduced di-muon opening angles and therefore lower reconstruction efficiency for tracks and vertices; for lower values of \(\varepsilon\), the \(A^{\prime}\) lifetime at rest is longer and softer dark photons are selected, leading to a reduced acceptance of the \(A^{\prime}\) decay products. A summary of the relative systematic uncertainties in the signal selection efficiency is given in table 5. Each entry is determined independently using a combination of data control samples and simulation. 
The simulation entry is of statistical origin and represents a typical value, since it varies with the \(A^{\prime}\) mass and coupling constant. The total relative uncertainty in the efficiency is below 3%. Bremsstrahlung and meson-decay production are characterised by different \(A^{\prime}\) mass resolution, \(\sigma_{M_{A^{\prime}}}\), with the former larger than the latter for most of the parameter space (figure 12). The expected signal yields for the two production mechanisms are shown as functions of the \(A^{\prime}\) coupling constant and mass in figure 13. The bremsstrahlung process dominates for most of the parameter space. Both the \(A^{\prime}\) mass resolution and the expected signal yield are parameterised as two-dimensional functions of the dark photon coupling constant and mass. Figure 11: Selection and trigger efficiency for the \(A^{\prime}\rightarrow\mu^{+}\mu^{-}\) signal, as a function of the \(A^{\prime}\) mass and coupling constant. Left (right) panel refers to the bremsstrahlung (meson-decay) production mode. ## 7 Results After unmasking, no events were observed in the control region. The probability of a non-zero observation is 15%. After unmasking, one event was observed in the signal region. In the absence of a dark photon signal, the probability of a non-zero observation is 1.6%. The two-track invariant mass of the observed event is 411 MeV/\(c^{2}\). The corresponding observed 90% CL upper limit is represented by the region enclosed within the black contour in figure 14. In the same figure, the colour-filled area represents the expected uncertainty in the exclusion contour in the absence of an \(A^{\prime}\) signal with a one-sigma (green) and two-sigma (yellow) statistical coverage. The single observed event would correspond to a 2.4 \(\sigma\) global significance. 
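Both quoted probabilities of a non-zero observation follow from Poisson counting with the expected backgrounds of table 4 (0.17 events in the CR, 0.016 in the SR). A quick cross-check:

```python
import math

def prob_at_least(n, mu):
    """Poisson probability of observing at least n counts for expectation mu."""
    return 1.0 - sum(math.exp(-mu) * mu ** k / math.factorial(k)
                     for k in range(n))

# expected backgrounds from table 4
print(round(prob_at_least(1, 0.17), 3))    # 0.156 -> the quoted 15% (CR)
print(round(prob_at_least(1, 0.016), 4))   # 0.0159 -> the quoted 1.6% (SR)
```

For such small expectations, \(1-\mathrm{e}^{-\mu}\approx\mu\), which is why the SR probability essentially equals the expected background itself.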
The event observed could be interpreted as combinatorial background, since the time difference between the two tracks is 1.69 ns, which is two standard deviations away from the mean for signal events. In the (\(\mathrm{Z_{TAX}},\mathrm{CDA_{TAX}}\)) plane, the event observed is close to the border of the SR, consistent with the extreme tails of the expected signal (figure 15). Note that the distribution of the expected signal within the SR is not used to determine the statistical significance. The results are also interpreted in terms of the emission of axion-like particles. In a model-independent approach [4], the ALP lifetime \(\tau_{a}\), the ALP mass \(M_{a}\), and the product of the branching ratios of eq. (3) are free parameters. The NA62 result is shown in figure 16 for selected values of \(M_{a}\). For ALP masses below 280 MeV/\(c^{2}\), the NA62 result extends the exclusion limits from previous experiments (LHCb [19], CHARM [20]).

\begin{table} \begin{tabular}{|l|c|} \hline Source & Uncertainty \\ \hline Track and vertex reconstruction & \(<0.1\%\) \\ CHOD association & 0.6\% \\ PID & 1.0\% \\ LAV veto condition & 0.1\% \\ Extrapolation to the impact point & 1.5\% \\ Trigger & 0.5\% \\ Simulation & 2.1\% \\ \hline Total & 2.8\% \\ \hline \end{tabular} \end{table} Table 5: Relative uncertainties of the signal selection efficiency from the contributions considered.

Figure 12: Mass resolution as a function of the \(A^{\prime}\) mass and coupling constant. Left (right) panel refers to bremsstrahlung (meson-decay) production.
A region of the dark photon parameter space (coupling constant, mass) is excluded at 90% CL, extending the limits of previous experiments in the mass range 215-550 MeV/\(c^{2}\) for coupling constants of the order of \(10^{-6}\). In addition, the result is interpreted in terms of the emission of axion-like particles in a model-independent approach. The result is found to improve on previous limits for masses below 280 MeV/\(c^{2}\). Figure 14: The region of the parameter space within the solid line is excluded at 90% CL. The colour-filled area represents the expected uncertainty in the exclusion contour in the absence of a signal: green (yellow) corresponds to a statistical coverage of 68% (95%). Figure 13: Expected number of events for the \(A^{\prime}\) decay to \(\mu^{+}\mu^{-}\) as a function of the \(A^{\prime}\) mass and coupling constant. Left (right) panel refers to bremsstrahlung (meson-decay) production. The black contours correspond to 2.3 events. Figure 16: Search for an axion-like particle \(a\) produced from decay of \(B\) mesons. Four values of the ALP mass are considered. The region of the parameter space above the black line is excluded at 90% CL. The excluded regions from LHCb [19] and CHARM [20] measurements are superimposed as grey-filled areas [4]. Figure 15: Distributions of \(\rm{CDA_{TAX}}\) vs \(\rm{Z_{TAX}}\). Left: data (dots) and expected background (colour-scale plot). Right: data (dots) and expected signal density (colour scale). Bins of \(\rm{2~{}mm}\times 1~{}m\) size are used for the colour scale.
2303.10029
Quantum advantages in timekeeping: dimensional advantage, entropic advantage and how to realise them via Berry phases and ultra-regular spontaneous emission
When an atom is in an excited state, after some amount of time, it will decay to a lower energy state emitting a photon in the process. This is known as spontaneous emission. It is one of the three elementary light-matter interactions. If it has not decayed at time $t$, then the probability that it does so in the next infinitesimal time step $[t, t+\delta t]$, is $t$-independent. So there is no preferred time at which to decay -- in this sense it is a random process. Here we show, by carefully engineering this light-matter interaction, that we can associate it with a clock, where the matter constitutes the clockwork and the spontaneous emission constitutes the ticking of the clock. In particular, we show how to realise the quasi-ideal clock. Said clock has been proven -- in an abstract and theoretic sense -- to be the most accurate clock permissible by quantum theory, with a polynomial enhancement in precision over the best stochastic clock of the same size. Our results thus demonstrate that the seemingly random process of spontaneous emission can in actual fact, under the right circumstances, be the most regular one permissible by quantum theory. To achieve this we use geometric features and flux-loop insertions to induce symmetry and Berry phases into the light-matter coupling. We also study the entropy the clock produces per tick and show that it also possesses a quantum advantage over that generated from the previously known semi-classical clocks in the literature.
Arman Pour Tak Dost, Mischa P. Woods
2023-03-17T14:58:46Z
http://arxiv.org/abs/2303.10029v1
Quantum advantages in timekeeping: dimensional advantage, entropic advantage and how to realise them via Berry phases and ultra-regular spontaneous emission ###### Abstract When an atom is in an excited state, after some amount of time, it will decay to a lower energy state emitting a photon in the process. This is known as spontaneous emission. It is one of the three elementary light-matter interactions. If it has not decayed at time \(t\), then the probability that it does so in the next infinitesimal time step \([t,t+\delta t]\), is \(t\)-independent. So there is no preferred time at which to decay--in this sense it is a random process. Here we show, by carefully engineering this light-matter interaction, that we can associate it with a clock, where the matter constitutes the clockwork and the spontaneous emission constitutes the ticking of the clock. In particular, we show how to realise the quasi-ideal clock. Said clock has been proven--in an abstract and theoretic sense--to be the most accurate clock permissible by quantum theory, with a polynomial enhancement in precision over the best stochastic clock of the same size. Our results thus demonstrate that the seemingly random process of spontaneous emission can in actual fact, under the right circumstances, be the most regular one permissible by quantum theory. To achieve this we use geometric features and flux-loop insertions to induce symmetry and Berry phases into the light-matter coupling. We also study the entropy the clock produces per tick and show that it also possesses a quantum advantage over that generated from the previously known semi-classical clocks in the literature. ## I Introduction Spontaneous emission is the process in which an excited state of matter decays to a lower energy state via the spontaneous emission of a photon. 
It is the elementary process underlying many light-matter phenomena, including luminescence, fluorescence and phosphorescence, and is a fundamental component of many technologies such as the laser. The textbook definition tells us that it is a very random process. Indeed, the probability \(P(t)\) of the excited state decaying in a time interval \([t,t+\delta t]\) is governed by the same equation as that of radioactive decay: \[P(t)=\Gamma_{0}\,\mathrm{e}^{-t\,\Gamma_{0}}\delta t,\qquad\Gamma_{0}>0. \tag{1}\] Yet it is a distinctly quantum-mechanical phenomenon, since it cannot be described via classical electromagnetism. Note that this is the most random of all possible processes--given that it has not decayed at time \(t\), the probability of decaying in the next infinitesimal time step \(\delta t\) is \(t\)-independent. If we were to associate the spontaneous-emission process with the tick of a clock, then it would be the worst clock imaginable--bar a clock which doesn't tick at all. Tantamount to this, spontaneous emission is even used for random number generation [1]. Why is standard spontaneous emission so random? The usual situation is when the excited state of matter is an energy eigenstate. Since such states do not change over time, they cannot provide any timing information. Therefore, the probability of decaying in an interval \([t,t+\delta t]\) cannot depend on the time \(t\) itself. The only distribution consistent with this property is eq. (1). While this description is classical, the decay process itself requires one to take into account quantized vacuum fluctuations of the electromagnetic field--indeed, a purely classical description would predict no decay at all. One can characterise the precision of this decay by the ratio \(R=\mu^{2}/\sigma^{2}\) of the squared mean \(\mu^{2}\) to the variance \(\sigma^{2}\) of the time at which the spontaneous decay occurred. From eq. (1), an \(R\)-value of unity follows.
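That \(R\) equals one for this process can be checked directly: sample decay times from the density of eq. (1) and form the ratio \(\mu^{2}/\sigma^{2}\). A minimal Monte Carlo sketch (the value of \(\Gamma_{0}\) is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)
gamma0 = 2.0  # decay rate Gamma_0; arbitrary illustrative value

# Decay times distributed as P(t) dt = Gamma_0 * exp(-Gamma_0 * t) dt, eq. (1)
t = rng.exponential(scale=1.0 / gamma0, size=500_000)

# Precision R = mu^2 / sigma^2; for an exponential, mu = sigma = 1/Gamma_0, so R = 1
R = t.mean() ** 2 / t.var()
print(f"R = {R:.3f}")
```

The result is independent of \(\Gamma_{0}\): rescaling time changes the mean and standard deviation by the same factor, so \(R\) is invariant.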
At the other extreme, a value \(R=\infty\) corresponds to a completely deterministic decay time. Thus, if we were to use the decay event as the ticking of a clock, \(R=\infty\) would correspond to a hypothetical idealised clock. If one aims to use spontaneous decay for the tick of a clock, one may try to increase its precision by considering an excited state of matter which is in a superposition of two non-degenerate energy eigenstates, since such states do evolve in time. For concreteness, suppose these two levels form the upper-most two of three equidistant energy levels. Due to the electromagnetic field, this excited state will decay to the ground state at some point in time. One can derive a master equation to describe the three-level system. In doing so, one finds that the quantised electromagnetic field decoheres the superposition, and the decay probability is described by a probabilistic mixture over decaying from either the top level to the ground state, or the intermediate level to the ground state. Each of the two decay processes is described by eq. (1). In general, by considering general stochastic processes like this one, the best achievable \(R\)-value can be increased to \(d\), where \(d\) is the number of excited states [2]. By considering generic quantum systems, the in-principle theoretical maximum for a \(d\)-dimensional quantum system is an \(R\)-value proportional to \(d^{2}\) in the large-system limit [2]. This latter remark follows from putting two observations together: the process of spontaneous emission and the nascent field of quantum clocks both have in common that they are described mathematically via quantum dynamical semigroups. Indeed, recently an abstract theoretical quantum clock was proposed which demonstrably achieved a quantum advantage [2]. It was later shown that this clock achieves the theoretical maximum accuracy allowed by quantum mechanics [3].
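The stochastic bound \(R=d\) mentioned above is saturated by a ladder of \(d\) sequential memoryless decays: the total waiting time is then a sum of \(d\) independent exponentials (an Erlang distribution), for which \(\mu^{2}/\sigma^{2}=d\) exactly. A quick numerical check (\(d=10\) is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 10  # number of excited states in the stochastic ladder; illustrative value

# Each level decays after an Exp(1) waiting time; the "tick" fires after d decays.
tick_times = rng.exponential(scale=1.0, size=(200_000, d)).sum(axis=1)

R = tick_times.mean() ** 2 / tick_times.var()
print(f"R = {R:.2f}")  # close to d
```

The scaling is immediate from the statistics of sums: both \(\mu\) and \(\sigma^{2}\) grow linearly in \(d\), so \(R=\mu^{2}/\sigma^{2}\propto d\).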
However, the proof is abstract and information-theoretic, with no clear system in which it can be realised. Here, we prove that the seemingly random process of spontaneous decay can in actual fact represent the most precise process permissible by quantum mechanics within the framework of Markovian processes, thus even surpassing the classical limit. Spontaneous emission is already an important process in many technologies, but was never considered useful for producing well-timed emitted photons. Our work suggests that spontaneous emission could be useful as a quantum technology producing extremely precise time-delayed photons and de-excitations of matter. **This manuscript**. In a nutshell, we take advantage of two quantum phenomena to achieve a spontaneously emitted photon which is at the quantum limit of precision--an \(R\)-value scaling as \(d^{2}\). First, we show that if the energy levels over which the excited state of matter is initialised are very close together in comparison with the lower energy states it can decay to, it will maintain quantum coherence despite the presence of the electromagnetic field. This, however, is not sufficient to achieve the fundamental limit of precision. Secondly, we show that if the dipole moments connecting the excited states with the ground states satisfy a certain symmetry--which can be induced by a geometrical Berry phase--then the decay channels are the discrete-Fourier-transform modes of the excited energy levels. These modes have support on all the excited energy levels and, as such, the detection of the matter in a ground state due to it spontaneously decaying provides little information about the excited energy level(s) it came from. It turns out that this uncertainty in energy permits the uncertainty in the decay time to be extremely small. **Paper outline**.
In section II we review the abstract clock model of which the quasi-ideal clock is a special case, and plot its accuracy in low dimensions using new techniques developed here. With this in mind, we derive from first principles the quasi-ideal clock in the context of spontaneous emission in section III. Given a model stemming from a physical environment, we are in a position to faithfully calculate the entropy produced per tick of our clock; we do so in section IV. We end with a discussion and conclusion in section V.

## II Generic clock model, the quasi-ideal clock and precision

We start by reviewing the generic clock model and a particular abstract quantum clock called the quasi-ideal clock [2; 4], which achieves the maximum precision out of all the clocks in the model. We finish by numerically optimising precision over the parameters of the model in low dimensions, to show that the optimal quadratic scaling is still achievable there. This section is important because in section III the quasi-ideal clock is realised via a light-matter interaction, and hence the claim that it is the most accurate clock is to be understood in this context. The clock model consists of a clockwork state and a register state. The aim of the clockwork is to capture the timing resources, while the register (or "clock face") records the time. Since we aim to emit classical information about time, the register will be classical, i.e. the emission of a "tick" is the process in which the register changes from one orthonormal state to the next--analogously to the changes of the second hand on a wall clock. The clockwork, on the other hand, can change continuously and in principle evolve to any quantum state. We denote its initial state by \(\rho_{\mathrm{C}}^{0}\). The principal aim of our clock model is to capture the resources needed to run a clock which produces timing at a certain precision.
We therefore should put some constraints on the dynamical channel \(\mathcal{M}_{\mathrm{CR}\rightarrow\mathrm{CR}}^{t}\) responsible for evolving the clock forward according to background time \(t\). Arguably the most basic of such constraints is that the channel is divisible: \[\mathcal{M}_{\mathrm{CR}\rightarrow\mathrm{CR}}^{t_{1}+t_{2}}(\rho_{\mathrm{ CR}})=\mathcal{M}_{\mathrm{CR}\rightarrow\mathrm{CR}}^{t_{1}}\circ\mathcal{M}_{ \mathrm{CR}\rightarrow\mathrm{CR}}^{t_{2}}(\rho_{\mathrm{CR}}) \tag{2}\] for any two times \(t_{1},t_{2}\geq 0\) and clockwork-register state \(\rho_{\mathrm{CR}}\). Otherwise, there is the possibility that an unaccounted-for timing resource in the environment is providing timing information, e.g. another clock of un-foretold resource requirements. One can impose a few more conditions, namely that the clock should not skip a tick and that its precision should not depend on the initial position of the register [4]. Equation (2) and these two additional conditions are satisfied if and only if \(\mathcal{M}_{\mathrm{CR}\rightarrow\mathrm{CR}}^{t}\) is of the form \[\mathcal{M}_{\mathrm{CR}\rightarrow\mathrm{CR}}^{t}=\mathrm{e}^{t\mathcal{L}_{ \mathrm{CR}}}, \tag{3}\] for \[\mathcal{L}_{\mathrm{CR}}(\cdot)= -\mathrm{i}[\tilde{H},(\cdot)]+\sum_{j=1}^{N_{T}}\tilde{L}_{j}( \cdot)\tilde{L}_{j}^{\dagger}-\frac{1}{2}\left\{\tilde{L}_{j}^{\dagger}\tilde{L }_{j},(\cdot)\right\} \tag{4}\] \[+\sum_{j=1}^{N_{T}}\underbrace{\tilde{J}_{j}(\cdot)\tilde{J}_{j}^{ \dagger}}_{\mathrm{tick\ generator}}-\frac{1}{2}\left\{\tilde{J}_{j}^{\dagger} \tilde{J}_{j},(\cdot)\right\}.\] Here \(\tilde{H}=H_{\mathrm{C}}\otimes\mathbb{1}_{\mathrm{R}},\tilde{L}_{j}=L_{j} \otimes\mathbb{1}_{\mathrm{R}},\tilde{J}_{j}=J_{j}\otimes O_{\mathrm{R}}\) and \(O_{\mathrm{R}}:=|1\rangle\!\langle 0|_{\mathrm{R}}+|2\rangle\!\langle 1|_{\mathrm{R}}+|3 \rangle\!\langle 2|_{\mathrm{R}}+\ldots+|N_{T}\rangle\!\langle N_{T}-1|_{ \mathrm{R}}\), where \(N_{T}\in\mathbb{N}\) and \(N_{T}+1\) is
the dimension of the register. Further, \(H_{\mathrm{C}}\) is hermitian, whereas \(\{J_{j}\}_{j}\), \(\{L_{j}\}_{j}\) are arbitrary linear operators. The ticks are generated by the term marked as _tick generator_; the other \(\{\tilde{J}_{j}\}_{j}\) terms generate a type of backreaction on the clockwork as a consequence of ticking. The \(\{L_{j}\}_{j}\) terms are environmental noise not necessarily related to the ticking process itself. Initiating the clock state to one where it has not yet ticked, \(\rho_{\mathrm{CR}}=\rho_{\mathrm{C}}\otimes|0\rangle\!\langle 0|_{\mathrm{R}}\), the probability density to observe the register in the state \(|1\rangle\!\langle 1|_{\mathrm{R}}\) is given by \[P_{\mathrm{tick}}(t)=\mathrm{tr}\!\left(\sum_{j=1}^{N_{T}}J_{j}(\rho_{\mathrm{ C}}^{\mathrm{nt}}(t))J_{j}^{\dagger}\right)\!, \tag{5}\] where \(\rho_{\mathrm{C}}^{\mathrm{nt}}(t):=\mathrm{e}^{t\mathcal{L}_{\mathrm{C}}^{ \mathrm{nt}}}\rho_{\mathrm{C}}^{0}/\,\mathrm{tr}\!\left[\mathrm{e}^{t\mathcal{ L}_{\mathrm{C}}^{\mathrm{nt}}}\rho_{\mathrm{C}}^{0}\right]\) is the time-evolved initial clockwork state conditioned on not having observed a tick at time \(t\), and where \(\mathcal{L}_{\mathrm{C}}^{\mathrm{nt}}(\cdot):=\mathrm{tr}_{\mathrm{R}}\big[ \mathcal{L}_{\mathrm{CR}}\big((\cdot)\otimes|0\rangle\!\langle 0|_{\mathrm{R}}\big)\big(\mathbb{1}_{\mathrm{C}}\otimes|0\rangle\!\langle 0|_{\mathrm{R}}\big)\big]\), as per [4]. Equation (5) is the _delay function_ (or _waiting time_) of the first tick. Given a particular clock of the form eq. (3), the precision of its first tick is defined by \[R:=\frac{\mu^{2}}{\sigma^{2}},\quad\mu:=\int_{0}^{\infty}\!\mathrm{d}t\,P_{ \mathrm{tick}}(t)\,t,\quad\sigma^{2}:=\int_{0}^{\infty}\!\mathrm{d}t\,P_{\mathrm{ tick}}(t)\,(t-\mu)^{2}. \tag{6}\] Since the register is a classical counter, it can be omitted from the dynamical-semigroup description while still reproducing the correct dynamics for the clockwork.
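As an aside, the divisibility property eq. (2) is automatic for channels of the semigroup form eq. (3), since exponentials of a fixed generator compose additively in time. This is easy to verify numerically with an arbitrary matrix standing in for the (vectorized) Lindbladian \(\mathcal{L}_{\mathrm{CR}}\):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
# Arbitrary generator standing in for a vectorized Lindbladian (any fixed matrix works)
L = rng.standard_normal((6, 6)) * 0.3

t1, t2 = 0.7, 1.9
M_split = expm(t1 * L) @ expm(t2 * L)   # evolve for t2, then for t1
M_joint = expm((t1 + t2) * L)           # evolve for t1 + t2 in one go

print(np.allclose(M_split, M_joint))    # the channel is divisible
```

Conversely, a family of channels that is *not* generated by a single fixed \(\mathcal{L}\) would generically fail this composition test, signalling a hidden time dependence, i.e. an unaccounted-for clock in the environment.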
In this case, the probability of ticking corresponds to the probability of observing exactly one jump, and is obtained mathematically by tracing out the register [4]. The resulting clockwork dynamics corresponds to the replacements \(\tilde{H}\to H_{\mathrm{C}}\), \(\{\tilde{L}_{j}\to L_{j}\}\), \(\{\tilde{J}_{j}\to J_{j}\}\) in eq. (4). In section III, the presence of a photon/charge detector continuously measuring the electromagnetic environment plays this role: the detection of a photon, or of the corresponding change in charge, represents the classical register. Any clock in the model can be specified by providing the initial clockwork state and the matrices \(H_{\mathrm{C}}\), \(\{J_{j}\}_{j}\), \(\{L_{j}\}_{j}\). Quasi-ideal clocks [2; 4; 5] are defined by \[H_{\mathrm{C}}=\sum_{n=0}^{d-1}\omega_{0}n\,|E_{n}\rangle\! \langle E_{n}|\,, \tag{7}\] \[\big{\{}L_{j}=0,\,J_{j}=\sqrt{2V_{j}}\,|\psi_{\mathrm{C}}\rangle\! \langle t_{j}|\,\big{\}}_{j=0}^{d-1}, \tag{8}\] where \(|\psi_{\mathrm{C}}\rangle\) is the initial clockwork state, \(\rho_{\mathrm{C}}^{0}=|\psi_{\mathrm{C}}\rangle\!\langle\psi_{\mathrm{C}}|\). The coefficients \(V_{j}\geq 0\) are coupling coefficients to be defined. The states \[|t_{k}\rangle=\frac{1}{\sqrt{d}}\sum_{j=0}^{d-1}\mathrm{e}^{-\mathrm{i}2\pi j \,k/d}\,|E_{j}\rangle \tag{9}\] correspond to the discrete Fourier transform of the energy basis. This model is parametrized by the choice of \(|\psi_{\mathrm{C}}\rangle\) and \(\{V_{j}\}_{j}\). It was proven in [2] that this clock can achieve a precision of the \(m^{\mathrm{th}}\) tick of \[R(d)\propto md^{2}\ \mathrm{as}\ d\rightarrow\infty \tag{10}\] for an appropriately chosen set \(\{V_{j}\}_{j}\) parametrized by \(d\) and a quasi-ideal initial state \(|\psi_{\mathrm{C}}\rangle\). It was also shown that the mean ticking time \(\mu\) can take on any value--although in practice it may be limited if the strength of the interactions is bounded [6].
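The first-tick statistics of this model are straightforward to evaluate numerically: with \(L_{j}=0\) and rank-one jump operators \(J_{j}=\sqrt{2V_{j}}\,|\psi_{\mathrm{C}}\rangle\!\langle t_{j}|\), the no-tick evolution is generated by the effective non-Hermitian Hamiltonian \(H_{\mathrm{eff}}=H_{\mathrm{C}}-\mathrm{i}\sum_{j}V_{j}|t_{j}\rangle\!\langle t_{j}|\), and the delay function is the probability flux out of the surviving (here unnormalised, so that the delay function is a normalised density) state. The sketch below uses an *illustrative* choice of \(|\psi_{\mathrm{C}}\rangle\) and \(\{V_{j}\}\), not the optimised parameters of [2] needed for the \(d^{2}\) scaling of eq. (10):

```python
import numpy as np

# Illustrative quasi-ideal-type clock: d levels, decay through one Fourier mode.
d, omega0 = 4, 1.0
n = np.arange(d)
H = omega0 * np.diag(n)                                    # truncated oscillator, eq. (7)
T = np.exp(-2j * np.pi * np.outer(n, n) / d) / np.sqrt(d)  # column k is |t_k>, eq. (9)

V = np.array([0.0, 0.0, 0.0, 0.6])                         # couplings V_j (illustrative)
psi0 = T[:, 0].copy()                                      # start in |t_0>, orthogonal to the decay mode

# No-tick evolution: H_eff = H_C - i * sum_j V_j |t_j><t_j|
H_eff = H - 1j * (T * V) @ T.conj().T
w, U = np.linalg.eig(H_eff)
c0 = np.linalg.solve(U, psi0)

ts = np.linspace(0.0, 300.0, 30001)
psi_t = U @ (np.exp(-1j * np.outer(w, ts)) * c0[:, None])  # |psi(t)>, unnormalised
amps = T.conj().T @ psi_t                                  # <t_j|psi(t)>
P = (2 * V[:, None] * np.abs(amps) ** 2).sum(axis=0)       # delay function of the first tick

dt = ts[1] - ts[0]
norm = P.sum() * dt                                        # ~1: the clock ticks with certainty
mu = (ts * P).sum() * dt
var = ((ts - mu) ** 2 * P).sum() * dt
R = mu ** 2 / var
print(f"norm = {norm:.4f}, mu = {mu:.2f}, R = {R:.2f}")
```

Substituting the optimised couplings and quasi-ideal initial state of [2] into the same routine is what produces the quadratic precision scaling.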
It was proven in [3] that all clocks satisfying the axioms of [4] have a precision which is upper bounded by a quadratic function of the dimension, thus proving the optimality of the quasi-ideal clock. Whether this quadratic scaling of the precision could be achieved in low dimensions remained an open question. The answer is of particular interest in the current context, since initial experimental realisations are likely to be more feasible in low dimensions. In fig. 1 we show, via a numerical optimization method developed in appendix E, that it can. Finally, since no experiment will be perfectly accurate, we check that the quantum advantage is indeed robust to noise. In fig. 2 we plot the decrease in accuracy for the optimal case when allowing for a small variation in the coupling coefficients and the initial state. Since this is the worst-case scenario, experiments with said errors are likely to represent clocks of higher precision.

Figure 1: The plot shows the numerically optimized precision (blue dots) in low dimensions \(d\leq 20\). We observe a \(d^{2}\) scaling (orange line) of the precision, demonstrating the quantum advantage. The optimal stochastic precision is also illustrated (green line), as well as the precision of conventional spontaneous emission, \(R=1\) (yellow line). Observe that in certain dimensions the numerically optimized precision can be higher than \(d^{2}\), while in others it is slightly below.

## III Macroscopic derivation via light-matter interactions

We are now in a position to provide a macroscopic setup which gives rise to a light-matter realisation of the quasi-ideal clock. For this, we need to find an environment, an initial state on it, and a Hamiltonian over the clockwork, register and environment such that, when we trace out the environment, we achieve the same dynamics as that generated by the quasi-ideal-clock dynamical semigroup.
We consider an electromagnetic environment, and the clockwork will consist of the wave function of a negatively charged particle whose initial state lies on a ring, called the primary ring, centred at the origin of the \(x\)-\(y\) plane. At a distance \(\left|z_{0}\right|\) along the \(z\)-axis below the plane lies a secondary ring of positive charge \(q\). The charge difference and separation mean that the secondary ring is of lower energy, and the pair form an electric dipole with dipole vector \(\vec{r}=(0,0,z_{0})\). The clockwork state can decay from the primary ring to the secondary ring via spontaneous photon emission into the electromagnetic-field environment--this is the mechanism with which ticks will occur. Later we will allow for more decay channels by adding more secondary rings at different heights along the \(z\)-axis.

### The Hamiltonian part

Here we discuss how to construct the Hamiltonian part of the Lindbladian. The primary ring is centred at \(z=0\) along the \(z\)-axis and has \(d\) equally spaced (in the \(\vartheta\) angle) identical wells. Therefore, the primary ring has a \(d\)-fold degenerate ground state. The \(j^{\text{th}}\) degenerate ground state corresponds to the energy level \(\left|E_{j}\right\rangle\). We will later show how to lift the degeneracy to achieve the truncated harmonic spectrum of eq. (7). The secondary ring has \(m\) flux loops inserted, leading to a ground state on the secondary ring given by \[\left|2^{\text{ndry}},m\right\rangle:=\int_{0}^{\infty}\!\text{d}r\int_{-\infty}^{\infty}\!\text{d}z\int_{0}^{2\pi}\!\text{d}\vartheta\,f(r,z)\frac{\text{e}^{\text{i}m\vartheta}}{\sqrt{2\pi R}}\left|r\right\rangle\!\left|z\right\rangle\!\left|\vartheta\right\rangle\!, \tag{11}\] where \(r,z,\vartheta\) are cylindrical-polar coordinates, \(R\) is the radius of the ring, and \(f(r,z)\) is an arbitrary normalised wave function over \(r\) and \(z\).
The mean separation between the primary and secondary rings controls the interaction strength between them. Meanwhile, \(m\) denotes the number of flux quanta giving rise to the Berry phase \(\text{e}^{\text{i}m\vartheta}\). Changing the number of flux quanta thus allows control over the matrix elements involving this ground state--we will see how to choose \(m\) later. See Fig. 3 for a depiction of this setup. The secondary ring serves as a decay channel for the \(d\)-dimensional Hilbert space of the primary ring. We can demand that it serves as an energetically well-separated ground state by virtue of the positive charge. Therefore, every state of the primary ring can decay. Under these conditions, the coupling between the levels of the primary ring \(\{\left|E_{j}\right\rangle\}_{j=0}^{d-1}\) and the secondary ring is given by the dipole. We will see how the dipole matrix elements enter the dissipator later, but for now let us note some important geometry-induced symmetries. The wave functions \(\psi_{j}(z,r,\vartheta)\) corresponding to the states \(\{\left|E_{j}\right\rangle\}_{j=0}^{d-1}\) satisfy \(\psi_{j}(z,r,\vartheta)=\psi_{0}(z,r,\vartheta+2\pi j/d)\) due to the rotational symmetry. Therefore, the \(s\in\{r,z,\vartheta\}\) component of the dipole matrix element connecting \(\left|E_{j}\right\rangle\) with \(\left|2^{\text{ndry}},m\right\rangle\) is \[\left[D_{s}\right]_{m,j}:=q\left\langle 2^{\text{ndry}},m\middle|\hat{s} \middle|E_{j}\right\rangle=\text{e}^{-\text{i}2\pi j\,m/d}\left[D_{s}\right]_{ m,0}, \tag{12}\] where we have taken into account the orthogonality of \(\{\left|E_{j}\right\rangle\}_{j}\) and \(\left|2^{\text{ndry}},m\right\rangle\). Note how the phase factor in eq. (12) is identical to the ones appearing in eq. (9) when the number of inserted flux loops \(m\) is equal to \(k\).
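The phase relation eq. (12) is what turns the decay channel into a Fourier-mode jump: a row of matrix elements out of \(\{|E_{j}\rangle\}_{j}\) carrying phases \(\mathrm{e}^{\pm\mathrm{i}2\pi jm/d}\) is, up to normalisation, the bra of a single state \(|t_{k}\rangle\) from eq. (9). A quick linear-algebra check of this collapse (\(d\), \(m\) and the reference element \(D_{0}\) are arbitrary illustrative values; the overall sign of the phase, set by the direction of flux insertion, decides whether mode \(m\) or \(-m\) is selected, and is chosen here so that the selected mode is \(|t_{m}\rangle\)):

```python
import numpy as np

d, m = 6, 2                     # dimension and flux-loop number; illustrative values
n = np.arange(d)

# Fourier states of eq. (9): T[:, k] holds |t_k>, with entries e^{-2 pi i n k/d}/sqrt(d)
T = np.exp(-2j * np.pi * np.outer(n, n) / d) / np.sqrt(d)

# Dipole elements with the symmetry of eq. (12), up to the reference element D0
D0 = 0.37                                  # reference element [D]_{m,0}; arbitrary value
v = D0 * np.exp(2j * np.pi * n * m / d)    # row of "bra" coefficients of the jump operator

c = v @ T                                  # overlap of the jump's bra part with each |t_k>
print(np.round(np.abs(c), 6))              # a single nonzero entry, at k = m
```

All other Fourier modes are exactly orthogonal to the phase-symmetric dipole row, which is why a decay event reveals (almost) nothing about which excited level \(|E_{j}\rangle\) the particle came from.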
This is a key observation, which results from the geometry of the rings and the Berry phase, and will turn out to be critical for achieving the quantum advantage in timekeeping. It is important that there are no spontaneous transitions between states we do not want to associate with the clock ticking. Therefore, transitions between such states should be dipole-forbidden. This is the case here: since the wave functions of \(\{|E_{j}\rangle\}_{j}\) have approximately zero overlap due to the spacing between the wells, the dipole matrix elements between any pair \(|E_{j}\rangle\), \(|E_{l}\rangle\) are zero. We can also add additional copies of the secondary ring above and below the primary ring in the \(x\)-\(y\) plane, all centred along the \(z\)-axis and parallel to one another. These additional secondary rings are useful when each one of them has a different number \(m\) of flux loops inserted.

Figure 2: The plot shows the worst-case-scenario robustness of the precision subject to an initial-state-preparation constraint and a constraint on the channel generating the dynamics. In particular, the worst-case-scenario robustness of the precision is achieved by minimizing \(R\) over all initial clockwork states of a fixed fidelity away from the optimal initial clockwork state, and minimizing \(R\) over coupling coefficients \(\{V_{j}\}_{j}\) subject to a fixed value of the 1-norm of the difference in coupling coefficients \(\{V_{j}\}_{j}\) relative to their optimal values. Data is represented as a percentage change in fidelity and 1-norm from their optimal values. Observe that the clock's precision only becomes comparable with the optimal classical clock when there is a 10% error in both fidelity and 1-norm collectively. This demonstrates that the quantum advantage is quite robust against errors.
Each additional secondary ring allows for a new decay channel from the primary ring, whose strength and energy can be tuned by adjusting its separation along the \(z\)-axis from the primary ring centred at \(z=0\). With two secondary rings, we can place them on opposite sides of the primary ring, since it is only the absolute value of the separations which matters--not the sign. With three or more rings, the current geometry will always lead to two or more rings being closer along the \(z\)-axis to each other than to the primary ring. While within the dipole approximation this is perfectly sound, in reality this relatively small inter-secondary-ring separation may lead to virtual transitions between the rings. Luckily, as we will see in section III.2, the ultimate precision limit can already be achieved with just two rings--at least up to moderately large dimensions. For completeness, we assume a total of \(L\leq d\) secondary rings with \(m_{1},\ldots,m_{L}\in\{0,1,\ldots,d-1\}\) flux loops respectively. The total free Hamiltonian of the clockwork is thus \[H_{\mathrm{S}}=H_{\mathrm{C}}-\sum_{j=1}^{L}\omega_{0m_{j}}\left|2^{\mathrm{ ndry}},m_{j}\right\rangle\!\!\left\langle 2^{\mathrm{ndry}},m_{j}\right|, \tag{13}\] where \(\omega_{0m_{j}}>0\) is the energy gap between the \(|E_{0}\rangle\) eigenstate of \(H_{\mathrm{C}}\) and the \(j^{\mathrm{th}}\) secondary ring. The minus sign in eq. (13) is due to the energy levels of the secondary rings lying below those of \(H_{\mathrm{C}}\). Thus far, the primary ring is energetically degenerate, and so there is no free dynamics. We now add a potential to the primary ring to lift said degeneracy and achieve the harmonic spectrum of \(H_{\mathrm{C}}\). Taking inspiration from tight-binding models (see, e.g. [7]), we show that such a potential is always achievable; we leave the details for appendix B.
While it creates a significant difference in the free dynamics of the clockwork, its effect on the dipole-moment relations of eq. (12) is negligible. This is important, since such relationships are crucial for our clock to work. We calculate the dipole moments numerically to verify that eq. (12) can indeed be satisfied to arbitrary precision. We take into account that the states \(\{|E_{j}\rangle\}_{j}\) are only approximately orthogonal, due to the small overlap in the ground-state wave functions of the potential wells of the primary ring.

### The Dissipator part

Here we discuss how to derive the dissipator part of the Lindbladian. For simplicity, we will consider only one secondary ring, with \(m\) flux loops and frequency \(\omega_{0m}\). We explain how the result generalises to the multi-secondary-ring case at the end. The bath of the system is the Fock space of the electromagnetic field, which interacts with the negatively charged particle (the clockwork state) through spontaneous emission of a photon. To achieve this, we assume that the initial state of the electromagnetic field is a low-temperature bath (an infinite-dimensional Gibbs state).

Figure 3: Depiction of the setup in \(d=6\). **a)**: an excited electron is in a judiciously chosen superposition state with support over the ground states of \(6\) degenerate oscillators. The latter are placed equidistantly around a ring--the so-called primary ring. At lower energy and a distance \(z_{0}\) below it lies another ring--the secondary ring--threaded with \(m\) flux loops. The primary and secondary rings interact via the background quantised electromagnetic field. This permits the electron to spontaneously decay to the secondary-ring state. **b)**: depiction of the primary ring when an additional potential has been activated to lift its degeneracy.

The low temperature is important, since it implies that the mean photon number in the environment is tiny on
average, and thus the probability of spontaneous absorption is negligible, as we will see. Secondly, the typical correlation times of the photons are much shorter than the typical timescale of the matter interactions; light-matter interactions can thus be effectively modelled with a Markovian evolution. This is known as the Born-Markov approximation. As is standard in light-matter interactions, we will work in the dipole approximation, so that the interaction Hamiltonian of the Fock space and the system is given by \(H_{\mathrm{I}}=\vec{D}\cdot\vec{E}=D_{z}\otimes E_{z}\), where \(\vec{D}\), \(\vec{E}\) are the dipole and electric-field operators, and, as we have seen, the dipole moment for the matter is aligned along the \(z\)-axis. Next, applying the Born-Markov approximation, we arrive at the standard textbook result for the system state \(\rho_{\mathrm{S}}(t)\) in the interaction picture \[\begin{split}\frac{d}{dt}\rho_{\mathrm{S}}^{\mathrm{I}}(t)=\sum_{ \omega,\omega^{\prime}}&\mathrm{e}^{i\left(\omega^{\prime}- \omega\right)t}\Gamma(\omega)\bigg{(}D_{z}(\omega)\rho_{\mathrm{S}}^{\mathrm{ I}}(t)D_{z}^{\dagger}\left(\omega^{\prime}\right)\\ &-D_{z}^{\dagger}\left(\omega^{\prime}\right)D_{z}(\omega)\rho_{ \mathrm{S}}^{\mathrm{I}}(t)\bigg{)}+\text{ h.c. }\,,\end{split} \tag{14}\] where \(\Gamma(\omega)\) encodes the bath correlations and we have expanded the dipole moments into terms of equal energy spacing: \[D_{z}(\omega):=\sum_{\varepsilon^{\prime}-\varepsilon=\omega}\Pi(\varepsilon) D_{z}\Pi\left(\varepsilon^{\prime}\right), \tag{15}\] with \(\Pi(\varepsilon)\), \(\Pi(\varepsilon^{\prime})\) projectors onto subspaces of energy \(\varepsilon\) and \(\varepsilon^{\prime}\), respectively, of the free clockwork Hamiltonian, eq. (13). Typically, at this point, one invokes the secular approximation, also known as the rotating-wave approximation (RWA): \(\mathrm{e}^{i(\omega^{\prime}-\omega)t}\approx\delta(\omega-\omega^{\prime})\).
The approximation corresponds to widely separated transitions, so that one can resolve from which energy level the decay occurred. This approximation does not hold in our case for all frequencies: a good clock only has a single fast oscillation corresponding to a tick, because \(\omega_{0m}\gg(d-1)\omega_{0}\). Therefore, we should not observe multiple oscillations of the clockwork before a tick occurs, and the averaging of phases does not hold for the frequency range \(-(d-1)\omega_{0}\leq\omega^{\prime}-\omega\leq(d-1)\omega_{0}\) appearing in eq. (14). Consequently, we work in a limit where the transitions between energy levels of the truncated-oscillator Hamiltonian \(H_{\mathrm{C}}\) cannot be cleanly resolved. On an intuitive level, the advantage of doing so can be understood in terms of the time-energy uncertainty relation: this large uncertainty in energy allows for high certainty in time. Lastly, the aforementioned zero dipole elements of the inter-primary-ring transitions also play a crucial role: they block any remaining decay channels other than the decay channel to the secondary ring. Now let us turn to the bath correlations. As is customary, we will neglect the imaginary part, as it only leads to a small type of Lamb shift in the energy levels. To calculate the real part, we note that we are assuming that the quantised electromagnetic field is isotropic. This leads to the classic result \[\Gamma(\omega)=\frac{2\omega^{3}}{3c^{3}}(1+N(\omega)). \tag{16}\] When \(\omega>0\), \(N(\omega)\) is the number of photons with frequency \(\omega\) in the Fock space and is given by the Planck distribution. These terms give rise to photon emission and the ticking of the clock. It is via \(N(\omega)\) that the temperature of the electromagnetic field enters our model. The \(\omega<0\) case corresponds to the reverse process and is related to the aforementioned one via \(N(\omega)=-(1+N(-\omega))\).
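How small \(N(\omega)\) in eq. (16) actually is follows from the Bose-Einstein form of the Planck distribution, \(N(\omega)=1/(\mathrm{e}^{\hbar\omega/k_{B}T}-1)\). A short sketch (the photon energies and temperatures below are illustrative values, not the specific operating points of the experiment):

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def occupancy(E_ph_eV: float, T_K: float) -> float:
    """Mean photon number N(omega) at photon energy E_ph (eV) and temperature T (K)."""
    return 1.0 / np.expm1(E_ph_eV / (K_B * T_K))

# An optical transition at room temperature: N is utterly negligible, so
# Gamma(-omega) ~ 0 in eq. (17) and only spontaneous emission survives.
N_opt = occupancy(2.5, 300.0)

# Cooling acts exponentially: for N << 1, halving the temperature squares N.
N_cold = occupancy(2.5, 150.0)
print(N_opt, N_cold)
```

At microwave photon energies the exponent \(\hbar\omega/k_{B}T\) is far smaller, which is why cryogenic operation is needed there for the low-temperature limit of eq. (17) to apply.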
In our low-temperature limit \(N(\omega)\approx 0\), and \[\Gamma(\omega)=\frac{2\omega^{3}}{3c^{3}},\quad\Gamma(-\omega)=0 \tag{17}\] for \(\omega\geq 0\). We numerically show the robustness of this approximation for experimentally feasible low temperatures in fig. 4, and provide the full derivation in appendix A. Note that at higher temperatures, such as room temperature, the spontaneous-emission process would still occur--it would just no longer be isolated, as spontaneous absorption would also be present. We can now return to the dipole transition elements. The only relevant terms are \(\left\langle 2^{\mathrm{ndry}},m|D_{z}(\omega_{0m}+n\omega_{0})\middle|E_{n}\right\rangle\) for \(n=0,1,\ldots,d-1\), since the other terms are either zero or irrelevant due to \(\Gamma(\omega)\) being zero. Moreover, due to the geometry and the Berry phase, these dipole elements are identical up to a well-defined time-independent phase, as seen in eq. (12). Putting everything together and going back to the Schrödinger picture, we arrive at \[\frac{d}{dt}\rho_{\mathrm{S}}(t)=\hat{J}_{m}\rho_{\mathrm{S}}(t)\hat{J}_{m}^{ \dagger}-\frac{1}{2}\Big{\{}\hat{J}_{m}^{\dagger}\hat{J}_{m},\rho_{\mathrm{S} }(t)\Big{\}}, \tag{18}\] where \(\hat{J}_{m}:=\sqrt{2d\,\Gamma(\omega_{0m})\left|\left\langle 2^{\mathrm{ndry}},m|D_{z}( \omega_{0m})|E_{0}\right\rangle\right|^{2}}\,\left|2^{\mathrm{ndry}},m\right\rangle\!\left\langle t_{m}\right|\). In the case of additional secondary rings with inserted fluxes \(m=0,1,\ldots,d-1\), we would sum over \(m\) from zero to \(d-1\). Now suppose we choose \(d\,\Gamma(\omega_{0m})\left|\left\langle 2^{\mathrm{ndry}},m|D_{z}(\omega_{0m}) \middle|E_{0}\right\rangle\right|^{2}\) equal to \(V_{m}\) in eq. (8). We can do this, for example, by varying the separation of the secondary rings from the primary one.
We have now achieved an implementation of the quasi-ideal clock, up to the fact that now, after a tick, the clockwork is set to the state \(\left|2^{\mathrm{ndry}},m\right\rangle\) rather than the initial state \(\left|\psi_{\mathrm{C}}\right\rangle\) on the primary ring. The classical tick register can be implemented either by detecting the change in charge of the secondary ring to which the negatively charged particle jumped, or by detecting the emitted photon itself. But how many secondary rings do we actually need to achieve the optimal precision? As discussed in section III.1, there would be additional hurdles to overcome going beyond two secondary rings, due to the possibility of virtual inter-secondary-ring transitions arising. In fig. 5 we numerically optimise the precision when allowing for only one and two decay channels. Importantly, we observe, at least up to moderately large dimensions, that just two rings suffice to effectively achieve maximal accuracy. It is unknown whether more decay channels are needed in higher dimensions to achieve the optimal precision. We discuss how this model can be generalised to a multi-consecutive-tick setting in appendix C.

## IV Entropy production per tick

In this section we examine the entropy generated per tick and its relation to accuracy and clockwork dimension. The entropy production per tick is defined as the average entropy flux out of the clockwork between two ticks. It corresponds to the amount of entropy flowing into an open quantum system from its environment.
For a generic dynamical semigroup with Lindbladian \(\mathcal{L}(\cdot)=-\mathrm{i}[H_{\mathrm{S}},(\cdot)]+\mathcal{D}(\cdot)\), where \(\mathcal{D}\) is the dissipative part, the entropy flux produced during an infinitesimal time step \(\mathrm{d}t\) for an initial state \(\rho_{\mathrm{S}}(t)\) is \[\mathrm{d}J(t):=-\,\beta\,\mathrm{tr}[H_{\mathrm{S}}\mathcal{D}( \rho_{\mathrm{S}}(t))]\mathrm{d}t=\mathrm{tr}\Big{[}\mathcal{L}(\rho_{ \mathrm{S}}(t))\ln\!\Big{(}\rho_{\mathrm{S}}^{\beta}\Big{)}\Big{]}\mathrm{d}t, \tag{19}\] where \(\rho_{\mathrm{S}}^{\beta}\) is the system's Gibbs state, \(\rho_{\mathrm{S}}^{\beta}:=\mathrm{e}^{-\beta H_{\mathrm{S}}}/Z_{\beta}\), at the ambient temperature \(\beta^{-1}\) [8; 9]. It is well known that entropy is an observer-dependent quantity, since different observers may have different information. In the case of a clock, we have to adjust this definition to take into account that the ticks are classical and readily accessible information. Let us define the entropy for the \(k^{\mathrm{th}}\) tick as follows: we start with the state of the clockwork just after ticking \(k-1\) times (or the initial clockwork state \(\rho_{\mathrm{C}}^{0}\) in the case of \(k=1\)). We then integrate the infinitesimal quantity eq. (19) while conditioning on not ticking up to time \(t\), followed by multiplying by the probability that the \(k^{\mathrm{th}}\) tick occurs at time \(t\). Finally, since \(t\) is unknown\({}^{1}\), we integrate over all \(t\geq 0\).

Footnote 1: We only observe the tick, but not the background time \(t\) itself. If the clock is good, they will be correlated, but only equal in an idealised clock.

Figure 5: Numerically optimised precision in low dimensions when restricting to one and two decay channels. We observe that one decay channel gives linear scaling, which is better than the optimal stochastic case but significantly worse than the optimal quantum case when all decay channels are available (recall fig. 1). However, when two decay channels are available, we are close to achieving the (optimal) scaling corresponding to when all \(d\) decay channels are available, at least for dimension \(d\leq 20\). What is more, this scaling is achieved for the decay Fourier modes corresponding to the least number of flux loops: modes \(|t_{0}\rangle\), \(|t_{1}\rangle\) and \(m_{1}=0\), \(m_{2}=2\) respectively. Notice that the precision for \(d=19\) and \(d=20\) is less than for \(d=18\) in the 2-decay-channel case. This demonstrates that, when restricting the number of decay channels, higher-dimensional decay channels do not necessarily increase the accuracy. This is consistent, since \(\{|t_{k}\rangle\}_{k}\) is not a subset of \(\{|t_{l}\rangle\}_{l}\) for \(k<l\).

Figure 4: Precision as a function of mean bath photon occupancy number \(N(\omega_{0m})\) for the optimal clock. The \(N=0\) case is the same as in fig. 1. (Plots \(d\) and \(d^{2}\) are guides to the eye.) If \(N(\omega_{0m})\approx 0\) is violated, we have a certain probability for the time-reversed process by photon absorption. As we consider a thermal state, it obeys Bose-Einstein statistics. At optical frequencies at room temperature one can easily achieve \(N(\omega_{0m})\approx 10^{-50}\), which from the plot we see is effectively zero. At microwave frequencies we have higher numbers. Luckily, cooling has an exponential effect on the occupation number, meaning that reducing the temperature by a half reduces \(N(\omega_{0m})\) by one order of magnitude. Concretely, operating at \(1\,\mathrm{meV}\), at \(T=10\,\mathrm{mK}\), we have \(N(\omega_{0m})\approx 10^{-5}\). While this error is appreciable in the plot, it is still very small relative to the classical bound \(R=d\).

In the case of a reset clock, the quantity is the same for all ticks and is given by
\[\Delta S_{\text{tick}}:=\int_{0}^{\infty}\text{d}t\,P_{\text{tick}}(t)\int_{0}^ {t}\!\text{d}s\,\operatorname{tr}\Bigl{[}\mathcal{L}_{\text{C}}^{\text{nt}} \bigl{(}\rho_{\text{C}}^{\text{nt}}(s)\bigr{)}\ln\Bigl{(}\rho_{\text{C}}^{ \beta}\Bigr{)}\Bigr{]}, \tag{20}\] where \(\mathcal{L}_{\text{C}}^{\text{nt}}\) and \(\rho_{\text{C}}^{\text{nt}}(t)\) are the clockwork Lindbladian and state respectively, conditioned on having not observed a tick as per eq. (5). Other related notions of clock entropy production can be found in [10]. A clock based on thermodynamic absorption principles was introduced in [11]. This thermal absorption clock has a clockwork consisting of a ladder Hamiltonian with equidistant spacing and dimension \(d\) (see footnote 2). The population starts at the bottom of the ladder and is driven upwards by work performed on it by the flow of heat from a hot thermal bath to a cold thermal bath at inverse temperatures \(\beta_{h}\), \(\beta_{c}\) respectively. The ladder does not couple directly to the thermal baths, but instead couples to two qubits (of energy gaps \(E_{c}\) and \(E_{h}\)) which are maintained at thermal equilibrium with the cold and hot baths respectively. The three-body interaction between the hot and cold qubits and the ladder induces an effective two-body coupling between a virtual qubit with inverted population and every step of the ladder. The population of the ladder then equilibrates with the virtual bath. Since the virtual qubit has population inversion, this equilibration drives the population up the ladder. The amount of heat dissipated to the cold bath every time the population climbs one rung of the ladder is \(E_{w}=(E_{h}-E_{c})\). When the population reaches the top of the ladder, a tick occurs via the emission of a photon. This allows the population to be reset to its initial state at the bottom of the ladder and the process to start over again.
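In the idealised limit where the population inversion is so strong that downward hops are negligible, the climb just described is a sum of \(d\) independent exponential waiting times (an Erlang distribution), which reproduces the linear precision scaling \(R\approx d\) of this classical clock. A Monte Carlo sketch with an illustrative rate and dimension:

```python
import numpy as np

rng = np.random.default_rng(1)

# Idealised absorption clock: the population hops up a d-rung ladder at a
# fixed rate r, with downward hops neglected (strong inversion of the
# virtual qubit). The tick (first-passage) time is then Erlang-distributed.
d, r = 20, 1.0
climb = rng.exponential(1.0 / r, size=(100_000, d)).sum(axis=1)

R = climb.mean() ** 2 / climb.var()   # mean d/r, variance d/r^2  =>  R = d
print(R)   # close to d = 20: linear (classical) precision scaling
```
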
Footnote 2: The total dimension of the clockwork is \(4d\), since each of the two baths thermalises a clockwork qubit, which in turn interacts with the \(d\)-dimensional ladder. In this special case, it is readily clear how much entropy is produced per tick--the amount of entropy produced for the population to climb the ladder. We compare this quantity to that generated by our definition in fig. 8. We find that the definitions agree and, more generally, that the entropy produced per tick is approximately given by \[\Delta S_{\text{tick}} \approx\beta_{v}(Q_{h}-Q_{c})=\beta_{h}Q_{h}-\beta_{c}Q_{c} \tag{21}\] \[=(\beta_{c}-\beta_{h})Q_{c}-\beta_{h}E_{\gamma}, \tag{22}\] where \(\beta_{v}\) is the inverse temperature of the virtual thermal bath, \(Q_{h}:=(d-1)E_{h}\) is the total heat flowing into the ladder and \(Q_{c}:=(d-1)E_{c}\) the total amount flowing out of it during the process in which the population reaches the top of the ladder, where the tick occurs. The second line follows from defining \(E_{\gamma}:=(d-1)\omega\) and puts into focus the fact that the entropy per tick has two contributions: an entropy sink (the cold bath) and the entropy associated with the emission of the photon at energy \(E_{\gamma}\). The latter is in principle recoverable and could be recycled as heat back into the hot bath. However, since the aim is to derive fundamental lower bounds, it can be kept without issue. The right-hand side of eq. (21) is manifestly proportional to the dimension. However, this relationship only holds approximately, when \(Q_{c}/E_{w}\) is much less than the dimension. Otherwise non-linearities due to reflections from the ladder boundaries become relevant. The exact dependency is plotted in fig. 7. In [11] the precision \(R\) of each tick was also found to be approximately proportional to the ladder dimension \(d\). One can write the precision per tick as a function of the minimum entropy per tick by eliminating the explicit \(d\) dependency.
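Both eq. (19) and its tick-integrated version eq. (20) rest on the identity \(-\beta\,\mathrm{tr}[H_{\mathrm{S}}\mathcal{D}(\rho)]=\mathrm{tr}[\mathcal{L}(\rho)\ln\rho_{\mathrm{S}}^{\beta}]\), which holds because \(\ln\rho_{\mathrm{S}}^{\beta}=-\beta H_{\mathrm{S}}-\ln Z_{\beta}\,\mathbbm{1}\) and both the commutator and the dissipator are traceless. A minimal qubit check, with illustrative rates and an arbitrary state:

```python
import numpy as np

# Qubit H = omega |1><1| with thermal jump operators at inverse temp. beta.
omega, beta, g = 1.0, 2.0, 0.3                # illustrative values
N = 1.0 / (np.exp(beta * omega) - 1.0)        # Bose-Einstein occupancy

H = np.diag([0.0, omega]).astype(complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)          # |0><1|, emission
Ls = [np.sqrt(g * (N + 1)) * sm, np.sqrt(g * N) * sm.conj().T]

def dissipator(rho):
    out = np.zeros_like(rho)
    for L in Ls:
        LdL = L.conj().T @ L
        out += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return out

def lindblad(rho):
    return -1j * (H @ rho - rho @ H) + dissipator(rho)

rho = np.array([[0.7, 0.2 + 0.1j], [0.2 - 0.1j, 0.3]])  # generic qubit state

# Gibbs state is diagonal here, so its matrix log is elementwise.
p = np.exp(-beta * np.diag(H).real); p /= p.sum()
ln_gibbs = np.diag(np.log(p)).astype(complex)

lhs = -beta * np.trace(H @ dissipator(rho)).real   # -beta tr[H D(rho)]
rhs = np.trace(lindblad(rho) @ ln_gibbs).real      # tr[L(rho) ln rho_beta]
print(lhs, rhs)   # equal: the Hamiltonian part is traceless against ln rho_beta
```
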
In the large \(d\) limit the boundary effects vanish, and one finds the following relation between the precision and the minimal entropy per tick for this model [11]: \[R=\frac{\Delta S_{\text{tick}}}{2}. \tag{23}\] It was reasoned that while this linear scaling was derived for a specific model, it should in fact be a fundamental lower bound on the amount of entropy required to produce a tick of the stated precision. However, said reasoning was classical in nature and did not take into account quantum effects. Nevertheless, the paper is commonly cited in the literature as providing a fundamental limit on the entropy produced per tick. Let us now examine the entropy per tick as a function of the clockwork dimension for the light-matter quasi-ideal clock. From the numerics in fig. 6 we also observe a linear relationship between the clockwork dimension \(d\) and the entropy per tick, namely \[\Delta S_{\text{tick}}\approx\beta(\omega_{\gamma}+\omega_{0}d/2), \tag{24}\] where \(\beta\) denotes the inverse temperature of the thermal bath. The linear dependency on \(\beta\) can be understood by observing that at low bath temperatures (large \(\beta\)) the emission of a photon into the bath perturbs it much more than if it were at a high temperature, and creates more entropy in the process. The quantity \(\omega_{\gamma}\in\{\omega_{0m_{1}},\omega_{0m_{2}},\dots,\omega_{0m_{L-1}}\}\) is just the energy emitted by the photon producing the tick, while \(\omega_{0}d/2\) is the mean energy of the initial state of the clockwork. This is essentially the same relationship we observed for the thermal absorption clock [11] in eq. (21), but without the cold bath which acted as an entropy sink. Importantly, in both cases the entropy per tick is directly proportional to the dimension of the clockwork.
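Eliminating \(d\) between the linear entropy law eq. (24) and the quadratic precision scaling \(R\approx d^{2}\) of the quasi-ideal clock relates precision directly to entropy. A quick arithmetic check, with illustrative numbers:

```python
# Eliminate d between Delta_S = beta * (omega_gamma + omega_0 * d / 2),
# eq. (24), and the quadratic precision scaling R ~ d^2.
beta, omega_0, omega_gamma, d = 3.0, 0.1, 50.0, 16   # illustrative values

dS = beta * (omega_gamma + omega_0 * d / 2)
R_from_entropy = (dS / beta - omega_gamma) ** 2 * 4 / omega_0 ** 2
print(R_from_entropy, d ** 2)   # both 256: entropy buys precision quadratically
```
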
However, as has been shown, the precision \(R\) for the quasi-ideal clock scales quadratically with the dimension \(d\); therefore, by substitution we find that the quasi-ideal clock realised via a thermal environment yields a _quadratic_ relationship between the entropy per tick and the precision. Namely, \[R\approx\left(\frac{\Delta S_{\text{tick}}}{\beta}-\omega_{\gamma}\right)^{2} \frac{4}{\omega_{0}^{2}} \tag{25}\] for small \(\beta^{-1}\). As such, the quasi-ideal clock can produce ticks of higher precision at the same entropy expense, thus demonstrating that eq. (23) is not a lower bound. As discussed in appendix C, the clockwork is not automatically reset to its initial value. Since the classical decay channels are physically distinguishable, one can in principle tell which of the states \(\left\{\left|2^{\text{ndry}},m_{j}\right\rangle\right\}\) the clockwork is in after the tick occurs in an actual experiment. Since the initial state \(\left|\psi_{\text{C}}\right\rangle\) is also pure, only an (entropy-preserving) unitary transformation is required to reset the clock. This is in stark contrast to the irreversible process of ticking. As such, the inclusion of the resetting of the clockwork should be realisable without incurring a net entropy flux beyond that associated with applying the unitary transformation. The initial state \(\left|\psi_{\text{C}}\right\rangle\) is of higher energy, and thus the unitary will not be energy preserving. So if the source of energy is not pure, then there will be an entropic cost in using it to reset the clock. However, characterising such costs is a generic question for applying non-energy-preserving unitaries and is not related to clocks per se. Moreover, as seen from eq. (24), the entropy per tick scales linearly with \(d\) and the initial state \(\left|\psi_{\text{C}}\right\rangle\) only has support on \(d\) energy levels, so even if the entropy required to reset the initial state from any of the secondary-ring states \(\left\{\left|2^{\text{ndry}},m_{j}\right\rangle\right\}\) also scaled linearly with \(d\), we would still obtain a precision \(R\) which scales quadratically with the entropy production, as in eq. (25). There is another source of entropy associated with the tick register itself--it also requires resetting, with the usual Landauer erasure cost associated with it. This is of a different nature and discussed in [10].

## V Discussion and Conclusion

In this manuscript we have considered one of the most elementary processes in light-matter interactions: spontaneous emission. In its standard form the emission time is uniform, in the sense that the probability of decaying at any given instant, conditioned on not having decayed already, is independent of the current time. We have proven that by judiciously selecting the excited state and the light-matter coupling, this process can be tuned so that its decay time is the most regular process permitted by quantum mechanics as a function of the available energy and dimension in any Markovian setting. It had been shown that a clock can be defined axiomatically from basic principles about what a clock should be [4], and that the abstract, theoretical quasi-ideal clock is asymptotically the most precise clock permissible [2]. By identifying the matter emitter with the clockwork of a clock, and the spontaneously emitted photon with the "ticking" of said clockwork--thus identifying the light-matter system with a quantum clock--we were able to find a light-matter realisation of the quasi-ideal clock. This constitutes the first such realisation and proves that it is at least in-principle realisable.
Figure 6: Entropy per tick for the light-matter implementation of the optimal quasi-ideal clock for different inverse bath temperatures \(\beta^{-1}\).

Figure 7: Precision as a function of \(Q_{c}/E_{w}\) for two distinct fixed dimensions \(d\).

Prior to this work, only semi-classical clocks [11; 12] had been experimentally realised [13; 14], and doubts remained as to whether the appropriate dynamical semi-group for the quasi-ideal clock could be constructed from a physical environment. Here we have proven that no exotic environments are required, and derived from first principles the appropriate dynamical semi-group from the physics of light-matter interactions. The electromagnetic environment we use is the typical isotropic thermal state used to derive standard spontaneous emission. Therefore our approach should be contrasted with those in which the electromagnetic field is altered in some way, such as when it is placed in a cavity, which breaks isotropy. This can also result in a non-conventional waiting time for a spontaneously emitted photon from an atom in the cavity. A classic example of this is the Jaynes-Cummings model [15]. Since our realisation of spontaneous emission is already the best possible under the Markovian light-matter interaction assumption, the only way such anisotropies could enhance it further would be if they introduced a memory effect into the environment, making its interaction with the matter non-Markovian. Our work also identifies a phenomenon distinct from superradiance and subradiance [16]. Among other things, while both phenomena rely on interference effects of the excited matter, super- and subradiance occur in the many-photon-emitter regime, whereas ours occurs at the single-photon-emitter level.
To derive the dynamical semi-group of the quasi-ideal clock from first principles we needed to overcome two main obstacles: avoidance of decay in the energy basis, and careful engineering of the dipole moments coupling the light to matter. The former was achieved by slow oscillations in the excited state, while the latter by matter in a ring geometry which induced Berry phases into the dipole moments. Of course, while our physical derivation of a light-matter quasi-ideal clock demonstrates that it is in-principle possible, in practice it will likely be hard to realise. One possible approach is to use graphene rings, where the flux-loop insertion has already been achieved and studied in detail [17]. What is more, other implementations might also be a possibility: the circular geometry which was used to induce the Berry phases in the dipole elements might be realisable via other methods. For example, crystalline structures satisfying Bloch's theorem which are long enough that finite boundary effects are not observable might allow for the necessary symmetries of the dipole couplings (eq. (12)) to be realisable. Going forward, we envisage that our ultra-regular spontaneous-emission source can be used to produce ultra-precise photon-delay systems: the activation of the clock can be achieved by a sudden splitting of the excited energy levels, which then produces an emitted photon and a change in charge at a chosen time delay at the quantum limit of precision. The research to achieve time-delayed photonic emitters is well underway (see [18] and the papers therein), but while the precision achieved in [18] represents an unprecedented control of the emission-time statistics, it is still far below the ultimate limit proposed in this paper. ###### Acknowledgements. We thank Christopher T. Chubb for help running our code for the numerics on the ETH Zurich Euler computing cluster. M.P.W. was supported by an Ambizione fellowship from the Swiss National Science Foundation (grant No.
PZ00P2_179914) in addition to the NCCR QSIT.

## Appendix A Derivation of the master equation for the clockwork

In this appendix we will derive the master equation corresponding to our experimental proposal. We will clearly lay out and justify the approximations we make, which are standard in the literature. This appendix is divided into two subsections. The first is standard in the literature and is included for completeness and to fix notation, while the second is specific to our setup.

### Generic open quantum system part of the derivation

Here we will go from a Hamiltonian description of the system and bath to the description just before the RWA is typically performed. It is a completely standard textbook derivation for light-matter interactions (indeed, it can be found in e.g. [19; 20]). The only small specialisation to our particular light-matter interaction will be the choice of spectrum of the matter in eq. (14). Consider the Hamiltonian on the system and bath of the form \[H(t)=H_{\mathrm{S}}\otimes\mathbbm{1}_{\mathrm{B}}+\mathbbm{1}_{\mathrm{S}} \otimes H_{\mathrm{B}}+H_{\mathrm{SB}}(t), \tag{15}\] where \(H_{\mathrm{S}}\), \(H_{\mathrm{B}}\) are the Hamiltonians of the system and bath respectively, and \(H_{\mathrm{SB}}(t)\) is a (potentially time-dependent) interaction term coupling the dynamics of the system and bath. We will proceed by going into the interaction picture. For this we need to define the unitaries \(U_{0}(t)\), \(U(t)\) as the solutions to two differential equations, that of the free dynamics and that of the total dynamics, \[\frac{d}{dt}U_{0}(t)=-{\rm i}\left(H_{\rm S}\otimes 1_{\rm B}+1_{\rm S}\otimes H_{ \rm B}\right)U_{0}(t),\quad\frac{d}{dt}U(t)=-{\rm i}H(t)U(t) \tag{10}\] respectively, with initial conditions \(U_{0}(0)=\mathbbm{1}_{\rm SB}\), \(U(0)=\mathbbm{1}_{\rm SB}\).
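The two propagators just defined combine into the interaction-picture propagator \(U_{\rm I}(t)=U_{0}^{\dagger}(t)U(t)\), which evolves under the rotated coupling \(U_{0}^{\dagger}(t)H_{\rm SB}U_{0}(t)\). A minimal numerical check of this relation for a time-independent coupling, using random Hermitian matrices (a sketch not tied to the physical model; dimensions illustrative):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)

def rand_herm(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

# Time-independent H = H0 + H_SB on a small "system x bath" space (dim 4).
H0, H_SB = rand_herm(4), rand_herm(4)
H = H0 + H_SB

U0 = lambda t: expm(-1j * H0 * t)
U = lambda t: expm(-1j * H * t)
UI = lambda t: U0(t).conj().T @ U(t)           # interaction-picture propagator
HI = lambda t: U0(t).conj().T @ H_SB @ U0(t)   # interaction-picture coupling

# Central finite difference of d/dt U_I(t) versus -i H_I(t) U_I(t).
t, dt = 0.7, 1e-5
numeric = (UI(t + dt) - UI(t - dt)) / (2 * dt)
analytic = -1j * HI(t) @ UI(t)
print(np.abs(numeric - analytic).max())   # ~0, up to O(dt^2)
```
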
With these two definitions, we can define the dynamics of density operators and observables in the interaction picture by the relations \[\rho^{\rm I}_{\rm SB}(t)=U_{\rm I}(t)\rho_{\rm SB}U_{\rm I}^{\dagger}(t),\quad A _{\rm I}(t)=U_{0}(t)A_{\rm SB}U_{0}^{\dagger}(t), \tag{11}\] where \[U_{\rm I}(t):=U_{0}^{\dagger}(t)U(t) \tag{12}\] and \(\rho_{\rm SB}\), \(A_{\rm SB}\) are the initial system-bath states and operators respectively. It follows that \[\frac{d}{dt}U_{\rm I}(t)=-{\rm i}\bar{H}_{\rm I}(t)U_{\rm I}(t), \tag{13}\] where we have defined the interaction-picture interaction term as \(\bar{H}_{\rm I}(t):=U_{0}^{\dagger}(t)H_{\rm SB}(t)U_{0}(t)\). Therefore, \[\frac{d}{dt}\rho^{\rm I}_{\rm SB}(t)=-{\rm i}[\bar{H}_{\rm I}(t),\rho^{\rm I} _{\rm SB}(t)], \tag{14}\] yielding the solution \[\rho^{\rm I}_{\rm SB}(t)=\rho^{\rm I}_{\rm SB}(0)-{\rm i}\int_{0}^{t}ds\left[ \bar{H}_{\rm I}(s),\rho^{\rm I}_{\rm SB}(s)\right]=\rho_{\rm SB}-{\rm i}\int_ {0}^{t}ds\left[\bar{H}_{\rm I}(s),\rho^{\rm I}_{\rm SB}(s)\right]. \tag{15}\] Substituting the above equation into eq. (14) and tracing out the bath yields \[\frac{d}{dt}\rho^{\rm I}_{\rm S}(t)=-\int_{0}^{t}ds\ {\rm tr}_{\rm B} \Big{[}\bar{H}_{\rm I}(t),[\bar{H}_{\rm I}(s),\rho^{\rm I}_{\rm SB}(s)]\Big{]}, \tag{16}\] where \(\rho^{\rm I}_{\rm S}(t):={\rm tr}_{\rm B}\,\rho^{\rm I}_{\rm SB}(t)\) and we have made our first assumption, namely that \[{\rm tr}_{\rm B}[\bar{H}_{\rm I}(t),\rho_{\rm SB}]=0. \tag{17}\] We will now make two more assumptions. Our second assumption is that the so-called Born approximation holds. This assumption states that \[\rho^{\rm I}_{\rm SB}(s)\approx\rho^{\rm I}_{\rm S}(s)\otimes\rho_{\rm B}. \tag{18}\] Our third assumption is the Markov approximation: it consists in replacing \(\rho^{\rm I}_{\rm S}(s)\) with \(\rho^{\rm I}_{\rm S}(t)\). This assumption is reasonable when environmental excitations decay over timescales which are not resolved.
Together, these two approximations are known as the Born-Markov approximation, and substituting into eq. (16) they yield the following differential equation: \[\frac{d}{dt}\rho^{\rm I}_{\rm S}(t)=-\int_{0}^{t}ds\ {\rm tr}_{\rm B} \Big{[}\bar{H}_{\rm I}(t),[\bar{H}_{\rm I}(s),\rho^{\rm I}_{\rm S}(t)\otimes \rho_{\rm B}]\Big{]}. \tag{19}\] Finally, there is one more assumption needed in order to turn the above equation into a dynamical semi-group: we must replace \(s\) by \(t-s\) and replace the upper integral limit \(t\) by \(+\infty\). This approximation is permissible when the integrand decays sufficiently fast for \(s\gg\tau_{B}\), where \(\tau_{B}\) is the time-scale over which the reservoir correlation functions decay. \[\frac{d}{dt}\rho^{\rm I}_{\rm S}(t)=-\int_{0}^{\infty}ds\ {\rm tr}_{\rm B} \Big{[}\bar{H}_{\rm I}(t),[\bar{H}_{\rm I}(t-s),\rho^{\rm I}_{\rm S}(t) \otimes\rho_{\rm B}]\Big{]}. \tag{20}\] The interaction term \(H_{\rm SB}(t)\) is expanded as a sum of product terms between Hermitian system operators, \(H_{\rm SB}(t)=\vec{D}\cdot\vec{E}=\sum_{\alpha\in\{x,y,z\}}D_{\alpha}\otimes E _{\alpha}\), where \(D_{\alpha}=q\,\hat{r}_{\alpha}\). We are using the convention that the dipole vector \(q(\hat{r}_{x},\hat{r}_{y},\hat{r}_{z})\) points in the direction of positive charge. In our setup, \(q(\hat{r}_{x},\hat{r}_{y},\hat{r}_{z})=q(0,0,\hat{r}_{z})\), since the charges are centred around the \(z\)-axis. However, it is insightful not to assume this yet, as we will derive a more general condition for our clock to work. This generality could account for, e.g., a small misalignment of the charge distribution, so that the \(x\) and \(y\) components are not exactly zero. We now expand the dipole moment operator \(D_{z}\) in terms of eigenspaces of the free system Hamiltonian \(H_{\mathrm{S}}\). Let \(\{\varepsilon\}\) be the set of eigenvalues of \(H_{\mathrm{S}}\). Let \(\pi(\varepsilon)\) be the projector onto the eigenspace corresponding to eigenvalue \(\varepsilon\).
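These eigenprojectors slice the dipole operator into Bohr-frequency components \(D(\omega)=\sum_{\varepsilon^{\prime}-\varepsilon=\omega}\pi(\varepsilon)D\pi(\varepsilon^{\prime})\), which sum back to \(D\) and satisfy \([H_{\mathrm{S}},D(\omega)]=-\omega D(\omega)\). A small numpy sketch of this grouping, with an illustrative spectrum:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy matter Hamiltonian with a few distinct levels, and a dipole operator D.
energies = np.array([0.0, 1.0, 2.0, 5.0])
H = np.diag(energies).astype(complex)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
D = A + A.conj().T                        # Hermitian dipole operator

# Group matrix elements by Bohr frequency: D(w) = sum_{e'-e=w} pi(e) D pi(e').
def D_of(w, tol=1e-9):
    out = np.zeros_like(D)
    for i, e in enumerate(energies):
        for j, ep in enumerate(energies):
            if abs((ep - e) - w) < tol:
                out[i, j] = D[i, j]
    return out

bohr = sorted({round(float(ep - e), 9) for e in energies for ep in energies})
total = sum(D_of(w) for w in bohr)
print(np.allclose(total, D))              # completeness: sum_w D(w) = D
w = 1.0
comm = H @ D_of(w) - D_of(w) @ H
print(np.allclose(comm, -w * D_of(w)))    # [H, D(w)] = -w D(w)
```
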
Since the summation over said projectors is a resolution of the identity, we have that \[H_{\mathrm{SB}}(t)=\sum_{\alpha}\sum_{\varepsilon,\varepsilon^{\prime}}\pi( \varepsilon)D_{\alpha}\pi(\varepsilon^{\prime})\otimes E_{\alpha}=\sum_{ \alpha}\sum_{\omega}D_{\alpha}(\omega)\otimes E_{\alpha}, \tag{101}\] where \(D_{\alpha}(\omega):=\sum_{\varepsilon^{\prime}-\varepsilon=\omega}\pi( \varepsilon)D_{\alpha}\pi(\varepsilon^{\prime})\). In particular, since \(H_{\mathrm{S}}\) has evenly spaced eigenvalues, \(\varepsilon=\omega_{0}n\) (\(n\in\{0,1,\ldots,d-1\}\)) and \(\omega\) takes all values of the set \[\{\pm n\omega_{0},\pm(\omega_{0m}+n\omega_{0}):n=0,1,\ldots,d-1\}, \tag{102}\] where recall \(\omega_{0}\) is the frequency of the harmonic oscillator Hamiltonian eq. (7) and \(\omega_{0m}\) is the energy gap between the ground state of the oscillator, \(|E_{0}\rangle\), and the secondary ring \(\left|2^{\mathrm{ndry}},m\right\rangle\). The \(\omega=0\) case corresponds to same-energy-state coupling, which will not play a role as we will see. Meanwhile, the \(\pm n\omega_{0}\) terms correspond to inter-primary-ring transitions which are dipole-forbidden (as discussed in the main text), the terms \(\omega_{0m}+n\omega_{0}\) are responsible for transitions from primary to secondary rings, and the terms \(-(\omega_{0m}+n\omega_{0})\) for the reverse process. We thus find \[\bar{H}_{\mathrm{I}}(t)=U_{0}^{\dagger}(t)\left(\sum_{\omega,\alpha}D_{\alpha }(\omega)\otimes E_{\alpha}\right)U_{0}(t)=\sum_{\omega,\alpha}\mathrm{e}^{- \mathrm{i}\omega t}D_{\alpha}(\omega)\otimes E_{\alpha}(t)=\sum_{\omega, \alpha}\mathrm{e}^{\mathrm{i}\omega t}D_{\alpha}^{\dagger}(\omega)\otimes E_{ \alpha}(t), \tag{103}\] where \(E_{\alpha}(t):=\mathrm{e}^{\mathrm{i}tH_{\mathrm{B}}}E_{\alpha}\mathrm{e}^{- \mathrm{i}tH_{\mathrm{B}}}\). Therefore, plugging into eq.
(100) we find \[\frac{d}{dt}\rho_{\mathrm{S}}^{\mathrm{I}}(t) =-\int_{0}^{\infty}ds\,\operatorname{tr_{\mathrm{B}}}\!\left[ \bar{H}_{\mathrm{I}}(t-s)\rho_{\mathrm{S}}^{\mathrm{I}}(t)\otimes\rho_{ \mathrm{B}}\bar{H}_{\mathrm{I}}(t)-\bar{H}_{\mathrm{I}}(t)\bar{H}_{\mathrm{I}} (t-s)\rho_{\mathrm{S}}^{\mathrm{I}}(t)\otimes\rho_{\mathrm{B}}\right]+ \mathrm{h.c.} \tag{104}\] \[=\sum_{\alpha,\alpha^{\prime}}\!\sum_{\omega,\omega^{\prime}} \!\mathrm{e}^{\mathrm{i}t(\omega^{\prime}-\omega)}\Gamma_{\alpha,\alpha^{ \prime}}(\omega)\Big{(}D_{\alpha}(\omega)\rho_{\mathrm{S}}^{\mathrm{I}}(t)D_{ \alpha^{\prime}}^{\dagger}(\omega^{\prime})-D_{\alpha^{\prime}}^{\dagger}( \omega^{\prime})D_{\alpha}(\omega)\rho_{\mathrm{S}}^{\mathrm{I}}(t)\Big{)}+ \mathrm{h.c.}, \tag{105}\] where \[\Gamma_{\alpha,\alpha^{\prime}}(\omega):=\int_{0}^{\infty}ds\,\mathrm{e}^{ \mathrm{i}\omega s}\operatorname{tr_{\mathrm{B}}}\left[E_{\alpha^{\prime}}^{ \dagger}(t)E_{\alpha}(t-s)\rho_{\mathrm{B}}\right]=\int_{0}^{\infty}ds\, \mathrm{e}^{\mathrm{i}\omega s}\operatorname{tr_{\mathrm{B}}}\left[E_{ \alpha^{\prime}}^{\dagger}(s)E_{\alpha}(0)\rho_{\mathrm{B}}\right], \tag{106}\] and in the last line we have used the fact that \(\rho_{\mathrm{B}}\) is a Gibbs state and thus is stationary w.r.t. the free Hamiltonian of the bath, \(H_{\mathrm{B}}\). We can now simplify our first assumption, namely eq. (100), to find \[\operatorname{tr_{\mathrm{B}}}\left[\left(\sum_{\omega,\alpha}\mathrm{e}^{- \mathrm{i}\omega t}D_{\alpha}(\omega)\otimes E_{\alpha}(t)\right),\rho_{ \mathrm{S}}\otimes\rho_{\mathrm{B}}\right]=0, \tag{107}\] which is implied by \[\operatorname{tr}E_{\alpha}\rho_{\mathrm{B}}=0,\quad\forall\alpha\in\{x,y,z\} \tag{108}\] when the above-mentioned stationarity of \(\rho_{\mathrm{B}}\) is taken into account.

### Special dipole moment symmetries

In this section we will complete the derivation of our dynamical semigroup, which was started in the previous section.
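Before specialising, a quick check of the first-assumption condition \(\operatorname{tr}E_{\alpha}\rho_{\mathrm{B}}=0\) from the previous subsection: any field operator linear in creation and annihilation operators has vanishing expectation in a thermal state, since the thermal state is diagonal in the Fock basis while \(a\), \(a^{\dagger}\) are strictly off-diagonal. A sketch on a truncated Fock space (parameters illustrative):

```python
import numpy as np

# A single bosonic field mode truncated to n_max Fock states.
n_max, beta, w = 30, 0.7, 1.0
n = np.arange(n_max)
a = np.diag(np.sqrt(n[1:].astype(float)), k=1)   # annihilation operator
E = a + a.conj().T                               # quadrature, E ~ a + a^dagger

p = np.exp(-beta * w * n); p /= p.sum()          # thermal (Gibbs) occupations
rho_B = np.diag(p)

print(abs(np.trace(E @ rho_B)))   # 0: a, a^dagger have no diagonal elements
```
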
We will specialise to our setup by using the symmetry in the dipole moments and the frequency range it provides. For a thermal bath, in which we neglect the imaginary part of \(\Gamma(\omega)\), we have \[\Gamma_{\alpha,\alpha^{\prime}}(\omega)=\Gamma(\omega)\delta_{\alpha,\alpha^{ \prime}},\quad\Gamma(\omega)=\frac{2\omega^{3}}{3c^{3}}(1+N(\omega)). \tag{109}\] When \(\omega>0\), \(N(\omega)\) is the number of photons with frequency \(\omega\) in the Fock space and is given by the Planck distribution. These terms give rise to photon emission and the clock ticking. The \(\omega<0\) case corresponds to the reverse process, and it is convenient to use the identity \(N(\omega)=-(1+N(-\omega))\) to write the decay coefficient as \[\Gamma(-\omega)=\frac{2\omega^{3}}{3c^{3}}N(\omega). \tag{101}\] As discussed and motivated in section III.2, recall that \(\alpha,\alpha^{\prime}\in\{x,y,z\}\). Therefore, plugging into eq. (100), we find \[\begin{split}\frac{d}{dt}\rho_{\mathrm{S}}^{\mathrm{I}}(t)=& \sum_{\alpha}\sum_{\omega,\omega^{\prime}>0}\mathrm{e}^{\mathrm{i}t( \omega^{\prime}-\omega)}\Gamma(\omega)\Big{(}D_{\alpha}(\omega)\rho_{\mathrm{S }}^{\mathrm{I}}(t)D_{\alpha}^{\dagger}(\omega^{\prime})-D_{\alpha}^{\dagger}( \omega^{\prime})D_{\alpha}(\omega)\rho_{\mathrm{S}}^{\mathrm{I}}(t)\Big{)}+ \mathrm{h.c.},\\ &+\sum_{\alpha}\sum_{\omega,\omega^{\prime}>0}\mathrm{e}^{- \mathrm{i}t(\omega^{\prime}-\omega)}\Gamma(-\omega)\Big{(}D_{\alpha}^{\dagger} (\omega)\rho_{\mathrm{S}}^{\mathrm{I}}(t)D_{\alpha}(\omega^{\prime})-D_{ \alpha}(\omega^{\prime})D_{\alpha}^{\dagger}(\omega)\rho_{\mathrm{S}}^{ \mathrm{I}}(t)\Big{)}+\mathrm{h.c.}\\ &+\sum_{\alpha}\left(\sum_{\omega>0,\omega^{\prime}<0}+\sum_{ \omega<0,\omega^{\prime}>0}\right)\mathrm{e}^{\mathrm{i}t(\omega^{\prime}- \omega)}\Gamma(\omega)\Big{(}D_{\alpha}(\omega)\rho_{\mathrm{S}}^{\mathrm{I}}(t )D_{\alpha}^{\dagger}(\omega^{\prime})-D_{\alpha}^{\dagger}(\omega^{\prime})D_{\alpha}(\omega)\rho_{\mathrm{S}}^{\mathrm{I}}(t)\Big{)}+\mathrm{h.c.},\end{split} \tag{102}\] where we have used \(D_{\alpha}(-\omega)=D_{\alpha}^{\dagger}(\omega)\). The only relevant frequencies from eq. (101) are \(\omega=\pm(\omega_{0m}+n\omega_{0})\), \(n\in\{0,1,\ldots,d-1\}\), as all the others are either dipole-forbidden or do not appear in eq. (102). Recalling that \(\omega_{0m}\gg(d-1)\omega_{0}\), we thus see that \(|\omega^{\prime}-\omega|\) in the exponential of the last line is much larger than the same term in the first and second lines. These oscillations occur on a much faster timescale than the relaxation time of the system, and hence we can invoke the rotating-wave approximation to eliminate the last line of eq. (102). However, the rotating-wave approximation is invalid for the first and second lines, since the average time it takes for the clock to tick corresponds to about half a rotation. We thus have \[\begin{split}\frac{d}{dt}\rho_{\mathrm{S}}^{\mathrm{I}}(t)=& \sum_{\alpha}\sum_{\omega,\omega^{\prime}>0}\mathrm{e}^{\mathrm{i}t( \omega^{\prime}-\omega)}\Gamma(\omega)\Big{(}D_{\alpha}(\omega)\rho_{\mathrm{S }}^{\mathrm{I}}(t)D_{\alpha}^{\dagger}(\omega^{\prime})-D_{\alpha}^{\dagger}( \omega^{\prime})D_{\alpha}(\omega)\rho_{\mathrm{S}}^{\mathrm{I}}(t)\Big{)}\\ &+\sum_{\alpha}\sum_{\omega,\omega^{\prime}>0}\mathrm{e}^{- \mathrm{i}t(\omega^{\prime}-\omega)}\Gamma(-\omega)\Big{(}D_{\alpha}^{\dagger} (\omega)\rho_{\mathrm{S}}^{\mathrm{I}}(t)D_{\alpha}(\omega^{\prime})-D_{\alpha} (\omega^{\prime})D_{\alpha}^{\dagger}(\omega)\rho_{\mathrm{S}}^{\mathrm{I}}(t )\Big{)}+\mathrm{h.c.}\end{split} \tag{103}\] For the remaining relevant frequencies, we have \[D_{\alpha}(-(\omega_{0m}+n\omega_{0}))=D_{\alpha}^{\dagger}(\omega_{0m}+n \omega_{0}),\qquad D_{\alpha}(\omega_{0m}+n\omega_{0})=a_{n\alpha}^{(m)}\left|2^ {\mathrm{ndry}},m\right\rangle\!\left\langle\!E_{n}\right|, \tag{104}\] with \(a_{n\alpha}^{(m)}:=q\left\langle
E_{n}\big{|}\hat{\alpha}\big{|}2^{\mathrm{ndry}},m\right\rangle\). We now make the assumption that \[a_{n\alpha}^{(m)}=\mathrm{e}^{\mathrm{i}\frac{2\pi nm}{d}}\mathrm{e}^{\mathrm{i}\theta_{ \alpha,m}}\bigg{|}a_{n\alpha}^{(m)}\bigg{|}, \tag{105}\] where \(\theta_{\alpha,m}\in\mathbb{R}\). This clearly holds in our setup, due to eq. (12). However, it is informative to assume only the weaker assumption eq. (105) for now, as this way we can derive a more general condition on the dipole coupling which may be useful if the geometry is not identical to that described, e.g. if there were an imperfection such as the rings not being perfectly perpendicular, or the coupling strength in one of the wells of the primary ring being slightly stronger than that of the other wells. Substituting eq. (104) into eq. (103) and using assumption eq. (105), we arrive at \[\begin{split}\frac{d}{dt}\rho_{\mathrm{S}}^{\mathrm{I}}(t)=\sum_{ \alpha}\sum_{n,n^{\prime}=0}^{d-1}\mathrm{e}^{\mathrm{i}t\omega_{0}(n^{\prime }-n)}\Gamma(\omega_{0m}+n\omega_{0})\Big{(}D_{\alpha}(\omega_{0m}+n\omega_{0}) \rho_{\mathrm{S}}^{\mathrm{I}}(t)D_{\alpha}^{\dagger}(\omega_{0m}+n^{\prime}\omega_{0}) -D_{\alpha}^{\dagger}(\omega_{0m}+n^{\prime}\omega_{0})D_{\alpha}(\omega_{0m}+n\omega_{0} )\rho_{\mathrm{S}}^{\mathrm{I}}(t)\Big{)}\\ +\sum_{\alpha}\sum_{n,n^{\prime}=0}^{d-1}\mathrm{e}^{\mathrm{i}t \omega_{0}(n-n^{\prime})}\Gamma(-(\omega_{0m}+n\omega_{0}))\Big{(}D_{\alpha}^{ \dagger}(\omega_{0m}+n\omega_{0})\rho_{\mathrm{S}}^{\mathrm{I}}(t)D_{\alpha}( \omega_{0m}+n^{\prime}\omega_{0})-D_{\alpha}(\omega_{0m}+n^{\prime}\omega_{0})D_ {\alpha}^{\dagger}(\omega_{0m}+n\omega_{0})\rho_{\mathrm{S}}^{\mathrm{I}}(t) \Big{)}\\ +\mathrm{h.c.}\\ =&\sum_{\alpha}\sum_{n,n^{\prime}=0}^{d-1}\mathrm{e}^{ \mathrm{i}t\omega_{0}(n^{\prime}-n)}\Gamma(\omega_{0m}+n\omega_{0})\Big{|}a_{n \alpha}^{(m)}a_{n^{\prime}\alpha}^{(m)}\Big{|}\mathrm{e}^{\mathrm{i}2\pi(n-n^{\prime})m /d}\Big{(}\left\langle E_{n}\big{|}\rho_{\mathrm{S}}^{\mathrm{I}}(t)|E_{n^{
\prime}}\right\rangle\left|2^{\mathrm{ndry}},m\right\rangle\!\left\langle\!2^{ \mathrm{ndry}},m\right|-\left|E_{n^{\prime}}\right\rangle\!\left\langle\!E_{n} \right|\rho_{\mathrm{S}}^{\mathrm{I}}(t)\Big{)}\\ +\sum_{\alpha}\sum_{n,n^{\prime}=0}^{d-1}\mathrm{e}^{\mathrm{i}t \omega_{0}(n-n^{\prime})}\Gamma(-(\omega_{0m}+n\omega_{0}))\Big{|}a_{n^{\prime} \alpha}^{(m)}a_{n\alpha}^{(m)}\Big{|}\mathrm{e}^{\mathrm{i}2\pi(n^{\prime}-n)m/d}\Big{(} \left\langle 2^{\mathrm{ndry}},m\right|\rho_{\mathrm{S}}^{\mathrm{I}}(t)|2^{\mathrm{ndry}},m \right\rangle\left|E_{n}\right\rangle\!\left\langle\!E_{n^{\prime}}\right|\\ &\left.-\delta_{n,n^{\prime}}\big{|}2^{\mathrm{ndry}},m\right\rangle \!\left\langle\!2^{\mathrm{ndry}},m\right|\rho_{\mathrm{S}}^{\mathrm{I}}(t) \Big{)}+\mathrm{h.c.}\end{split} \tag{106}\] Now, let us assume there exists \(C_{0}^{(m)}>0\) independent of \(n\) and \(n^{\prime}\) such that \[\sum_{\alpha}\Gamma(\omega_{0m}\!+\!n\omega_{0})\Big{|}a_{n\alpha}^{(m)}a_{n^{ \prime}\alpha}^{(m)}\Big{|}=C_{0}^{(m)},\quad\forall\,n,n^{\prime}\in\{0,1, \ldots,d-1\}. \tag{103}\] We justify this physically in appendix A.3. Furthermore, we will see that when it is satisfied approximately, then to a good approximation we also have \[\sum_{\alpha}\Gamma(-(\omega_{0m}\!+\!n\omega_{0}))\Big{|}a_{n\alpha}^{(m)}a_{ n^{\prime}\alpha}^{(m)}\Big{|}=C_{0}^{\prime(m)},\quad\forall\,n,n^{\prime}\in\{0,1, \ldots,d-1\}. \tag{104}\] From eq. (102) it follows that \[\begin{split}\frac{d}{dt}\rho_{\mathrm{S}}^{\mathrm{I}}(t)=& C_{0}^{(m)}d\Big{(}\left\langle t_{m}(t)\big{|}\rho_{\mathrm{S}}^{ \mathrm{I}}(t)\big{|}t_{m}(t)\right\rangle\big{|}2^{\mathrm{ndry}},m\big{\rangle} \!\big{\langle}2^{\mathrm{ndry}},m\big{|}-\left|t_{m}(t)\right\rangle\!
\!\langle t_{m}(t)\big{|}\,\rho_{\mathrm{S}}^{\mathrm{I}}(t)\Big{)}\\ &+C_{0}^{\prime(m)}d\Big{(}\left\langle 2^{\mathrm{ndry}},m \big{|}\rho_{\mathrm{S}}^{\mathrm{I}}(t)\big{|}2^{\mathrm{ndry}},m\right\rangle \left|t_{m}(t)\right\rangle\!\!\langle t_{m}(t)|-\left|2^{\mathrm{ndry}},m \right\rangle\!\!\big{\langle}2^{\mathrm{ndry}},m\big{|}\,\rho_{\mathrm{S}}^{ \mathrm{I}}(t)\Big{)}\\ &+\mathrm{h.c.}\\ =&\hat{J}_{m}(t)\rho_{\mathrm{S}}^{\mathrm{I}}(t)\hat{J}_{m}^{ \dagger}(t)-\frac{1}{2}\Big{\{}\hat{J}_{m}^{\dagger}(t)\hat{J}_{m}(t),\rho_{ \mathrm{S}}^{\mathrm{I}}(t)\Big{\}}\\ &+\hat{L}_{m}(t)\rho_{\mathrm{S}}^{\mathrm{I}}(t)\hat{L}_{m}^{ \dagger}(t)-\frac{1}{2}\Big{\{}\hat{L}_{m}^{\dagger}(t)\hat{L}_{m}(t),\rho_{ \mathrm{S}}^{\mathrm{I}}(t)\Big{\}}\end{split} \tag{105}\] where in the first line we have defined \(\left|t_{m}(t)\right\rangle:=\mathrm{e}^{\mathrm{i}tH_{\mathrm{C}}}\left|t_{ m}\right\rangle\), with \(\left|t_{m}\right\rangle\) the \(m^{\mathrm{th}}\) basis element of the quantum Fourier transform given by eq. (9) and \(H_{\mathrm{C}}\) is the clockwork Hamiltonian eq. (7). In the last lines, we have defined \[\hat{J}_{m}(t) :=\mathrm{e}^{\mathrm{i}tH_{\mathrm{S}}}\hat{J}_{m}\mathrm{e}^{- \mathrm{i}tH_{\mathrm{S}}}=\sqrt{2dC_{0}^{(m)}}\left|2^{\mathrm{ndry}},m \right\rangle\!\!\left\langle t_{m}(t)\right|,\quad\hat{J}_{m}:=\sqrt{2dC_{0}^ {(m)}}\left|2^{\mathrm{ndry}},m\right\rangle\!\!\left\langle t_{m}\right|, \tag{106}\] \[\hat{L}_{m}(t) :=\mathrm{e}^{\mathrm{i}tH_{\mathrm{S}}}\hat{L}_{m}\mathrm{e}^{- \mathrm{i}tH_{\mathrm{S}}}=\sqrt{2dC_{0}^{\prime(m)}}\left(\left|2^{\mathrm{ ndry}},m\right\rangle\!\!\left\langle t_{m}(t)\right|\right)^{\dagger},\quad\hat{L}_{m}:= \sqrt{2dC_{0}^{\prime(m)}}\left(\left|2^{\mathrm{ndry}},m\right\rangle\!\! \left\langle t_{m}\right|\right)^{\dagger}, \tag{107}\] where recall \(H_{\mathrm{S}}\) is the total matter system Hamiltonian defined in eq. (13). Finally, we can easily generalise this to \(L\) secondary rings. 
Since all the secondary rings are non-degenerate, there is no inter-secondary-ring coupling and we merely have to add an extra summation over the secondary rings, i.e., \(L\) decay channels. From eq. (105) we find \[\begin{split}\frac{d}{dt}\rho_{\mathrm{S}}^{\mathrm{I}}(t)=& \sum_{j=1}^{L}\hat{J}_{m_{j}}(t)\rho_{\mathrm{S}}^{\mathrm{I}}(t)\hat{J}_{m_{j}}^{\dagger}(t)-\frac{1}{2}\Big{\{}\hat{J}_{m_{j}}^{\dagger}(t)\hat{J}_{m_{j}}(t),\rho_{\mathrm{S}}^{\mathrm{I}}(t)\Big{\}}\\ &+\sum_{j=1}^{L}\hat{L}_{m_{j}}(t)\rho_{\mathrm{S}}^{\mathrm{I}}(t)\hat{L}_{m_{j}}^{\dagger}(t)-\frac{1}{2}\Big{\{}\hat{L}_{m_{j}}^{\dagger}(t)\hat{L}_{m_{j}}(t),\rho_{\mathrm{S}}^{\mathrm{I}}(t)\Big{\}}.\end{split} \tag{108}\] Now that we have derived the master equation in the interaction picture, we can convert back to the Schrodinger picture. Recalling eqs. (105) and (106) and denoting the state evolution in the Schrodinger picture by \(\rho_{\mathrm{SB}}(t)\), we find \[\rho_{\mathrm{SB}}(t):=U(t)\rho_{\mathrm{SB}}U^{\dagger}(t)=U_{0}(t)\rho_{\mathrm{SB}}^{\mathrm{I}}(t)U_{0}^{\dagger}(t). \tag{109}\] Therefore, defining \(\rho_{\mathrm{S}}(t):=\mathrm{tr}_{\mathrm{B}}[\rho_{\mathrm{SB}}(t)]\) and recalling \(\rho_{\mathrm{S}}^{\mathrm{I}}(t):=\mathrm{tr}_{\mathrm{B}}\,\rho_{\mathrm{SB}}^{\mathrm{I}}(t)\), it follows that \[\rho_{\mathrm{S}}(t)=\mathrm{e}^{-\mathrm{i}tH_{\mathrm{S}}}\,\mathrm{tr}_{\mathrm{B}}\big{[}\mathrm{e}^{-\mathrm{i}tH_{\mathrm{B}}}\rho_{\mathrm{SB}}^{\mathrm{I}}(t)\mathrm{e}^{\mathrm{i}tH_{\mathrm{B}}}\big{]}\,\mathrm{e}^{\mathrm{i}tH_{\mathrm{S}}}=\mathrm{e}^{-\mathrm{i}tH_{\mathrm{S}}}\rho_{\mathrm{S}}^{\mathrm{I}}(t)\mathrm{e}^{\mathrm{i}tH_{\mathrm{S}}}, \tag{110}\] where we used the cyclicity of the trace. Furthermore, from eq.
(109) it also follows \[\frac{d}{dt}\rho_{\mathrm{SB}}(t)=-\mathrm{i}\big{[}H_{\mathrm{S}},\rho_{\mathrm{SB}}(t)\big{]}-\mathrm{i}\big{[}H_{\mathrm{B}},\rho_{\mathrm{SB}}(t)\big{]}+U_{0}(t)\bigg{(}\frac{d}{dt}\rho_{\mathrm{SB}}^{\mathrm{I}}(t)\bigg{)}U_{0}^{\dagger}(t). \tag{111}\] Thus \[\begin{split}\frac{d}{dt}\rho_{\mathrm{S}}(t)=&-\mathrm{i}\big{[}H_{\mathrm{S}},\rho_{\mathrm{S}}(t)\big{]}\\ &+\sum_{j=1}^{L}\hat{J}_{m_{j}}\rho_{\mathrm{S}}(t)\hat{J}_{m_{j}}^{\dagger}-\frac{1}{2}\Big{\{}\hat{J}_{m_{j}}^{\dagger}\hat{J}_{m_{j}},\rho_{\mathrm{S}}(t)\Big{\}}\\ &+\sum_{j=1}^{L}\hat{L}_{m_{j}}\rho_{\mathrm{S}}(t)\hat{L}_{m_{j}}^{\dagger}-\frac{1}{2}\Big{\{}\hat{L}_{m_{j}}^{\dagger}\hat{L}_{m_{j}},\rho_{\mathrm{S}}(t)\Big{\}},\end{split} \tag{112}\] where we have used the cyclicity of the trace in the first line and eqs. (100) and (101) in the second. Physically, the second line of eq. (101) corresponds to spontaneous emission of a photon and the charge jumping from the primary ring to a secondary ring, while the third line corresponds to the reverse process. Since we identify a tick as the emission of a photon, and we assume that this process is detectable to us (either by detecting the emitted photon or the change in charge in the rings), we can make the register where this information is stored explicit. We do not associate the reverse process with a tick. This corresponds to the mapping \[J_{m_{j}} \to\tilde{J}_{m_{j}}:=J_{m_{j}}\otimes O_{\rm R}, \tag{102}\] \[L_{m_{j}} \to\tilde{L}_{m_{j}}:=L_{m_{j}}\otimes\mathbb{1}_{\rm R}, \tag{103}\] where \(O_{\rm R}:=\left|1\rangle\!\langle 0\right|_{\rm R}+\left|2\rangle\!\langle 1\right|_{\rm R}+\left|3\rangle\!\langle 2\right|_{\rm R}+\ldots+\left|N_{T}\rangle\!\langle N_{T}-1\right|_{\rm R}\) advances the classical register by one every time there is a spontaneous emission and \(\mathbb{1}_{\rm R}\) is the identity operator. Performing this mapping on eq. (101) gives \[\begin{split}\frac{d}{dt}\rho_{\rm SR}(t)=&-\mathrm{i}\big{[}H_{\rm S},\rho_{\rm SR}(t)\big{]}+\sum_{j=1}^{L}\tilde{J}_{m_{j}}\rho_{\rm SR}(t)\tilde{J}_{m_{j}}^{\dagger}-\frac{1}{2}\Big{\{}\tilde{J}_{m_{j}}^{\dagger}\tilde{J}_{m_{j}},\rho_{\rm SR}(t)\Big{\}}\\ &+\sum_{j=1}^{L}\tilde{L}_{m_{j}}\rho_{\rm SR}(t)\tilde{L}_{m_{j}}^{\dagger}-\frac{1}{2}\Big{\{}\tilde{L}_{m_{j}}^{\dagger}\tilde{L}_{m_{j}},\rho_{\rm SR}(t)\Big{\}}.\end{split} \tag{104}\] From eq.
(8), we see that by choosing \(L=d\) and \(\big{\{}V_{j}=dC_{0}^{(m_{j})}\big{\}}_{j=0}^{d-1}\) with \(m_{j}=j\), we achieve the dynamical semigroup of the quasi-ideal clock provided \(\{\tilde{L}_{j}=0\}_{j=0}^{d-1}\). We will see in the next section that this is true to a very good approximation in the low-bath-temperature regime. Physically, this is the regime of interest since the reverse process of spontaneous emission requires the absorption of a photon from the bath. Therefore, at low temperatures, this process is highly suppressed since the mean occupancy number of the bath is close to zero; see fig. 4 and the next section. Another important point is that, in practice, we see in fig. 5 that the optimal solution has most of the \(V_{j}\) coefficients equal to zero. This is equivalent to the secondary ring with \(m_{j}\) flux loops being omitted from the setup. So in practice, we need far fewer secondary rings than the theoretical maximum of \(d\). ### Constraint eq. (102) and low temperature limit #### A.3.1 Constraint eq. (102) We now return to assumption eq. (102). Recalling that \(\omega_{0m}\gg(d-1)\omega_{0}\) for all \(m\in\{0,1,\ldots,d-1\}\), using eq. (100) we deduce \[\Gamma(\omega_{0m}+n\omega_{0})=\Gamma(\omega_{0m})\left(1+\mathcal{O}\left(n\frac{\omega_{0}}{\omega_{0m}}\right)^{2}\right)\frac{1+N(\omega_{0m}+n\omega_{0})}{1+N(\omega_{0m})}\approx\Gamma(\omega_{0m})\quad\forall m,n\in\{0,1,\ldots,d-1\}. \tag{105}\] Therefore, the assumption eq. (102) becomes \[\sum_{\alpha}\left|a_{n\alpha}^{(m)}a_{n^{\prime}\alpha}^{(m)}\right|\approx C_{1}^{(m)},\quad\forall\,n,n^{\prime}\in\{0,1,\ldots,d-1\}, \tag{106}\] where \(C_{1}^{(m)}\) is a new constant independent of \(n,n^{\prime}\).
To verify that this is indeed satisfied by the setup from the main text, we start by noting that the dipole of the secondary ring with \(m\) flux loops is \(d_{m}=q_{m}r_{m}=q_{m}z_{m}\hat{z}\), where \(\hat{z}\) is the unit vector along the \(z\)-axis and \(z_{m}\) is the location of the secondary ring along said axis. Therefore, \[a_{n,x}^{(m)} =a_{n,y}^{(m)}=0, \tag{107}\] \[a_{n,z}^{(m)} =q_{m}\left\langle E_{n}\big{|}z_{m}\hat{z}\big{|}2^{\mathrm{ndry}},m\right\rangle=q_{m}\mathrm{e}^{\mathrm{i}2\pi n\,m/d}a_{0,z}^{(m)},\quad n,m\in\{0,1,\ldots,d-1\}, \tag{108}\] and thus, inserting into eq. (106), we find that \(C_{1}^{(m)}\) is \(n,n^{\prime}\)-independent as required. Finally, we want to verify that eq. (101) is also satisfied. This condition is the same as eq. (102) up to a change of \(\Gamma(\omega_{0m}+n\omega_{0})\) for \(\Gamma(-(\omega_{0m}+n\omega_{0}))\). From eq. (103) and using that \(\omega_{0m}\gg(d-1)\omega_{0}\) we also find that \[\Gamma(-(\omega_{0m}+n\omega_{0}))=\Gamma(-\omega_{0m})\left(1+\mathcal{O}\left(n\frac{\omega_{0}}{\omega_{0m}}\right)^{2}\right)\frac{N(\omega_{0m}+n\omega_{0})}{N(\omega_{0m})}\approx\Gamma(-\omega_{0m})\quad\forall m,n\in\{0,1,\ldots,d-1\}, \tag{109}\] thus eq. (101) is also satisfied. #### A.3.2 Low temperature limit Since the bath is in a thermal state (formally a Gibbs state), the mean occupation number of the bath decreases rapidly as the temperature of the thermal state is lowered; therefore, at low temperatures \(N(\omega_{0m})\approx 0\). Therefore in this limit we observe from eqs. (18) and (19) that \[\Gamma(\omega_{0m})\approx\frac{2\omega_{0m}^{3}}{3c^{3}},\qquad\Gamma(-\omega_{0m})\approx 0. \tag{19}\] Thus \(C_{0}^{(m)}>0\) and relatively large, while \(C_{0}^{\prime(m)}\approx 0\). Therefore, from the definition of \(\{\hat{J}_{m},\hat{L}_{m}\}\) in eq. (17) we see that the last line of eq. (18) is approximately zero and thus we realise the quasi-ideal clock in this low temperature limit.
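The suppression of the reverse (absorption) process at low temperature can be illustrated with a short numerical sketch. Since the emission rate \(\Gamma(\omega)\) is proportional to \(1+N(\omega)\) while the absorption rate \(\Gamma(-\omega)\) is proportional to \(N(\omega)\), their ratio obeys detailed balance, \(N/(1+N)=\mathrm{e}^{-\beta\omega}\). The Python sketch below (the values of \(\beta\) and \(\omega\) are illustrative, not taken from the text) confirms the exponential suppression:

```python
import numpy as np

def bose_occupation(beta, omega):
    """Mean occupation number N(omega) of a bosonic bath in a Gibbs state."""
    return 1.0 / np.expm1(beta * omega)

beta, omega = 2.0, 10.0          # illustrative values (hbar = k_B = 1)
N = bose_occupation(beta, omega)

# Emission ~ (1 + N), absorption ~ N, so the ratio of rates obeys
# detailed balance and is exponentially small at low temperature.
ratio = N / (1.0 + N)
print(ratio, np.exp(-beta * omega))
```

At \(\beta\omega=20\) the ratio is already of order \(10^{-9}\), consistent with \(\Gamma(-\omega_{0m})\approx 0\) in eq. (19).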
## Appendix B Lifting the degeneracy of the primary ring We consider a ring with radius \(R\) and parameterize it with a coordinate \(x\in[0,2\pi R)\). Prior to activating the clock, we start with \(d\) evenly spaced wells, that is, with a potential of the form \[U(x)=\frac{1}{2}m\omega^{2}(x-\lfloor x/a\rfloor\cdot a-a/2)^{2}\text{ for }x\in[0,2\pi R),\quad a=\frac{2\pi R}{d}.\] For sufficiently large \(\omega\) or \(R\) we have \(d\) degenerate ground states. In the following we fix \(R=1\) and assume that \(\omega\) is sufficiently large, i.e., we work in the tight-binding limit of a lattice with \(d\) atoms and periodic boundary conditions. ### Identification of Bloch waves and \(t_{k}\) Observe that if the wave function of the ground state is \(\psi_{0}(x)\), the discrete Fourier transform looks like \[\langle x|t_{k}\rangle=\frac{1}{\sqrt{d}}\sum_{n=0}^{d-1}\mathrm{e}^{\frac{-2\pi\mathrm{i}nk}{d}}\psi_{0}(x-an-\frac{a}{2})=\frac{1}{\sqrt{d}}\sum_{n=0}^{d-1}\mathrm{e}^{-\mathrm{i}an\cdot k}\psi_{0}(x-an-\frac{a}{2}) \tag{20}\] for \(k\in\{0,...,d-1\}\). This reminds us of Bloch wave functions. In fact, by Bloch's theorem any single-particle wave function that is a solution to this \(a\)-periodic potential can be written as \(\psi_{p}(x)=\mathrm{e}^{\mathrm{i}px}u_{p}(x)\) for \(p\) in \([0,2\pi/a)\) and \(u_{p}\) being \(a\)-periodic, i.e., \(u_{p}(x)=u_{p}(x+a)\). As we have periodic boundary conditions (Born-von Karman boundary conditions) we have that \(p=\frac{2\pi}{da}k\) for \(k\in\{0,...,d-1\}\). Thus, the tight-binding wave functions of the ground states have the form \[\psi_{p}(x)=\mathrm{e}^{\mathrm{i}px}\frac{1}{\sqrt{d}}\sum_{n=0}^{d-1}\psi_{0}(x-an-\frac{a}{2}). \tag{21}\] The wave function \(\psi_{p}(x)\) only has appreciable support near the potential minima, i.e., for \(x=na+a/2+y\) with \(n\in\{0,...,d-1\}\) and approximately \(-a/2<y<a/2\).
Thus the factor \(\mathrm{e}^{\mathrm{i}px}\) adds the phase \(\mathrm{e}^{\mathrm{i}px}=\mathrm{e}^{\mathrm{i}pna}\mathrm{e}^{\mathrm{i}pa/2}\mathrm{e}^{\mathrm{i}py}\) to the wave function at the \(n\)th well. Using \(p=\frac{2\pi}{da}k\) yields \(\mathrm{e}^{\mathrm{i}px}=\mathrm{e}^{\frac{2\pi\mathrm{i}kn}{d}}\mathrm{e}^{\frac{\pi\mathrm{i}k}{d}}\mathrm{e}^{\frac{2\pi\mathrm{i}ky}{da}}\). The factor \(\mathrm{e}^{\frac{\pi\mathrm{i}k}{d}}\) is just a gauge and, because \(y\) is small compared to \(da\), we can motivate the identification of \(t_{k}\) with the tight-binding wave function of the ground state with quasi-momentum \(k\). ### The harmonic potential Now we want to add a perturbation \(V(x)\) to \(U(x)\) that leads to a harmonic oscillator Hamiltonian. Any potential on this ring can be written as \[\hat{V}=\int_{0}^{2\pi}\mathrm{d}x\,V(x)\,\lvert x\rangle\!\langle x\rvert\,. \tag{22}\] We define the orthonormal Fourier basis \(\langle x|k\rangle=\frac{\mathrm{e}^{\mathrm{i}kx}}{\sqrt{2\pi}}\) for \(k\in\mathbb{Z}\). Further, we can expand \(V(x)=\sum_{q\in\mathbb{N}_{0}}V_{q}\mathrm{e}^{\mathrm{i}qx}+h.c.\). In total, we get \[\hat{V} =\int_{0}^{2\pi}\mathrm{d}x\,V(x)\,|x\rangle\!\langle x|=\sum_{q\in\mathbb{N}_{0}}V_{q}\int_{0}^{2\pi}\mathrm{d}x\,\mathrm{e}^{\mathrm{i}qx}\,|x\rangle\!\langle x|+h.c. \tag{10}\] \[=\sum_{k,l,q\in\mathbb{N}_{0}}V_{q}\frac{1}{2\pi}\int_{0}^{2\pi}\mathrm{d}x\,\mathrm{e}^{\mathrm{i}x(q+l-k)}\,|k\rangle\!\langle l|+h.c.=\sum_{q,l\in\mathbb{N}_{0}}V_{q}\,|l+q\rangle\!\langle l|+h.c. \tag{11}\] By the above (recall \(2\pi/(da)=1\)) we can, to a good approximation, identify the \(|l\rangle\) states with the \(|t_{l}\rangle\) states. Consequently, we will compare this form to the harmonic oscillator Hamiltonian in the time basis to get the perturbation.
Using that the energy eigenstates \(|n\rangle=\frac{1}{\sqrt{d}}\sum_{k=0}^{d-1}\mathrm{e}^{2\pi\mathrm{i}kn/d}\,|t_{k}\rangle\) and that \(H=\sum_{n=0}^{d-1}\omega_{0}n\,|E_{n}\rangle\!\langle E_{n}|\), we get \[H =\sum_{n=0}^{d-1}\omega_{0}n\,|E_{n}\rangle\!\langle E_{n}|=\omega_{0}\sum_{n,k=0}^{d-1}\frac{n}{\sqrt{d}}\mathrm{e}^{\frac{2\pi\mathrm{i}nk}{d}}\,|t_{k}\rangle\!\langle E_{n}| \tag{12}\] \[=\omega_{0}\sum_{n,k,l=0}^{d-1}\frac{n}{d}\mathrm{e}^{\frac{2\pi\mathrm{i}n(k-l)}{d}}\,|t_{k}\rangle\!\langle t_{l}|\] (13) \[=\omega_{0}\sum_{q=0}^{d-1}\sum_{k=0}^{d-1}|t_{k+q}\rangle\!\langle t_{k}|\sum_{n=0}^{d-1}\frac{n}{d}\mathrm{e}^{\frac{2\pi\mathrm{i}nq}{d}}\] (14) \[=\omega_{0}\sum_{q=0}^{d-1}\sum_{k=0}^{d-1}|t_{k+q}\rangle\!\langle t_{k}|\left(\delta_{q,0}\frac{1}{d}\frac{d(d-1)}{2}+(1-\delta_{0,q})\frac{1}{\mathrm{e}^{\frac{2\pi\mathrm{i}q}{d}}-1}\right) \tag{15}\] Importantly, the terms in brackets in the last line are \(k\)-independent. Thus, comparing this expression with eq. (11) we get \(V_{q}=\frac{1}{\mathrm{e}^{\frac{2\pi\mathrm{i}q}{d}}-1}\) for \(q\in\{1,...,d-1\}\) and \(V_{0}=(d-1)/2\). In fact, \(V_{0}\) is only a constant shift, so we can drop this term. Transforming this expression back to real space by \(V(x)=\sum_{q}V_{q}\mathrm{e}^{\mathrm{i}qx}+h.c.\) using \[\frac{1}{\mathrm{e}^{\frac{2\pi\mathrm{i}q}{d}}-1}=-\frac{1}{2}-\frac{\mathrm{i}}{2}\frac{\sin\!\left(\frac{2\pi q}{d}\right)}{1-\cos\!\left(\frac{2\pi q}{d}\right)}\] yields \(V(x)=(d-1)+\sum_{q=1}^{d-1}\left[-\cos(qx)+\frac{\sin\!\left(\frac{2\pi q}{d}\right)}{1-\cos\!\left(\frac{2\pi q}{d}\right)}\sin(qx)\right]\). Numerically, it was successfully checked that the potential \(V(x)+U(x)\) in the tight-binding approximation (large \(\omega\)) yields a harmonic splitting in the spectrum which is well separated from the higher energy states. In a real experiment, one should rather use the first \(d\) coefficients \(V_{q}\) and re-optimize them to guarantee harmonic splitting.
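The coefficients \(V_{q}\) derived above can be sanity-checked numerically: in the time basis they act as hops \(|t_{k+q}\rangle\!\langle t_{k}|\), so summing \(V_{q}\) times powers of the cyclic shift \(T\) (plus the constant \(V_{0}=(d-1)/2\)) must reproduce the harmonic spectrum \(0,\omega_{0},\ldots,(d-1)\omega_{0}\). A minimal Python sketch, with \(\omega_{0}=1\) and \(d=5\) chosen purely for illustration:

```python
import numpy as np

d = 5
q = np.arange(1, d)
# Fourier coefficients derived in the text: V_0 = (d-1)/2, V_q = 1/(e^{2 pi i q/d} - 1).
Vq = 1.0 / (np.exp(2j * np.pi * q / d) - 1.0)

# Real/imaginary split used when transforming back to real space.
assert np.allclose(Vq, -0.5 - 0.5j * np.sin(2*np.pi*q/d) / (1 - np.cos(2*np.pi*q/d)))

# Cyclic shift T |t_k> = |t_{k+1 mod d}> in the time basis.
T = np.roll(np.eye(d), 1, axis=0)
H = ((d - 1) / 2) * np.eye(d, dtype=complex)
for qq, v in zip(q, Vq):
    H += v * np.linalg.matrix_power(T, qq)

# H is Hermitian since V_{d-q} = conj(V_q); its spectrum is 0, 1, ..., d-1.
evals = np.sort(np.linalg.eigvalsh(H))
print(np.round(evals, 10))
```

The equally spaced eigenvalues confirm the harmonic splitting at the tight-binding level; the full real-space check with \(V(x)+U(x)\) described above is the continuum version of this computation.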
### Numerics on the equal dipole moments The goal of this appendix is to show numerically that we can operate in a limit in which the matrix elements \(|\left\langle l\middle|\widehat{\mathcal{D}}\middle|2^{\mathrm{ndry}},m\right\rangle|\) are equal for \(l\in\{0,...,d-1\}\). This is a sufficient condition to mimic the dynamics of the quasi-ideal clock, as can be seen from eq. (10) (which is the generalisation of eq. (12)). For that, we simply take the potential \(V(x)+U(x)\) on the lower ring and solve the Schrodinger equation numerically for the first \(d\) eigenstates. Then we fix the lowest-energy wave function and compare the \(\|\cdot\|_{L^{2}}\)-norm difference between the \(n\)th energy eigenstate and the lowest-energy eigenstate shifted by \(2\pi R/d\cdot n\). For \(\omega=3000\) the resulting errors are small. ## Appendix C Multiple ticks To extend our in-principle experiment to one where the clock state is re-set to the initial state after each tick, we need to implement the transition \(\left|2^{\mathrm{ndry}},m\right>\rightarrow\left|\psi_{\mathrm{C}}\right>\) instantaneously. However, the state \(\left|\psi_{\mathrm{C}}\right>\) may require the concatenation of some elementary processes to be constructed, such as a sequence of gates on a quantum computer, which would take non-negligible and predictable time. One way to do this in principle would be to construct two copies of the clock, both with their clockwork initialised to \(\left|\psi_{\mathrm{C}}\right>\) and the two primary rings in their degenerate configuration. Then initiate one of the clocks by turning on the primary-ring-degeneracy-lifting potential. Upon the clock ticking, the emitted photon activates the primary-ring-degeneracy-lifting potential of the second clock. One now turns off the primary-ring-degeneracy-lifting potential of the first clock and prepares the initial state \(\left|\psi_{\mathrm{C}}\right>\) on it.
This last operation can be performed at an arbitrary time so long as it is significantly before the mean time between ticks \(\mu\); say, at \(\mu/2\). This way, the probability of the state \(\left|\psi_{\mathrm{C}}\right>\) being prepared on the first clock before the second clock ticks is vanishingly small. We now allow the photon coming from the tick of the second clock to activate the primary-ring-degeneracy-lifting potential of the first clock. Repeating the process indefinitely allows for the full implementation of the quasi-ideal clock. The only additional timing this process has required is the ability to regenerate the initial clock state \(\left|\psi_{\mathrm{C}}\right>\) in the time window \(\left[0,\mu/2\right]\). So as long as \(\mu\) is significantly larger than the average time needed for said preparation, this additional time requirement is effectively negligible. It also only requires a duplication of efforts, since only two classically correlated clocks are required. If the time needed to prepare the initial clockwork state \(\left|\psi_{\mathrm{C}}\right>\) is larger than \(\mu\), one can trivially generalise this scheme to include \(N\) copies of the clock, thus having a time window of \(\left[0,(N-1)\mu\right]\) in which to prepare each initial clockwork state. This method for re-setting can also be applied to other clockwork systems which require a manual reset, such as the semi-classical clock in [21]. ## Appendix D Reproducing the precision per tick in [11] with our definition Here we show that our definition of entropy per tick eq. (20), which is valid for any clock, reproduces that found in [11] when specialised to the clock model found therein. The clock model in [11] can be parametrised in terms of the entropy per tick \(\Delta S^{\prime}_{\mathrm{tick}}\) which the first tick generates. In fig. 8 we plot the entropy per tick according to eq. (20) as a function of \(\Delta S^{\prime}_{\mathrm{tick}}\).
We observe a perfect straight line with unit gradient, demonstrating that the two quantities are indeed equal as claimed.

Figure 8: Entropy per tick as a function of the heat generated by the model (see [11] for model details).

## Appendix E Numerical results In this section we present a numerical method for calculating the optimal precision of the first tick \(R\) in low dimensions. This algorithm was essential to obtaining the data for figs. 2, 4 and 5. To do so, we will derive a connection between the precision of a clock and a set of Lyapunov equations. The most elementary and intuitive method to derive the precision of the clock is to solve the dynamics of the clockwork via routine numerical methods for solving dynamical semigroups such as eq. (3). One then calculates the delay function via eq. (5), and finally computes its first and second moments, from which \(R\) follows via eq. (6). Unfortunately, the higher the precision of the clock, the closer the delay function is to a Dirac delta function, which results in numerical instabilities. Using this method, we were unable to accurately compute the precision for \(d>3\). We thus derive a new expression for the precision which, unlike eq. (6), does not involve the delay function and is amenable to numerical computation. With \(\hat{V}(t):=\mathrm{e}^{(\mathrm{i}H_{\mathrm{C}}-\hat{V})t}\hat{V}\mathrm{e}^{(-\mathrm{i}H_{\mathrm{C}}-\hat{V})t}\), it follows from eq. (5) that we can rewrite the ticking probability as \[P_{\mathrm{tick}}(t)=2\operatorname{tr}\Bigl{(}\hat{V}(t)\rho_{\mathrm{C}}^{0}\Bigr{)}, \tag{10}\] where \(\hat{V}(t)\) can be viewed as the solution to \[\frac{d\hat{V}}{dt}(t)=M(\hat{V}(t)), \tag{11}\] where the superoperator \(M\) is \[M(\cdot):=\mathrm{i}[\hat{H}_{\mathrm{C}},(\cdot)]-\{\hat{V},(\cdot)\}.
\tag{12}\] Using the general solution to ODEs, it follows that \(\hat{V}(t)=\exp(Mt)(\hat{V})\). Assuming that \(M\) is invertible (we will justify this in appendix E.0.1) we have \[\begin{split}\int_{0}^{\infty}\mathrm{d}t\exp(Mt)&=-M^{-1},\quad\int_{0}^{\infty}\mathrm{d}t\,t\exp(Mt)=M^{-2},\\ &\int_{0}^{\infty}\mathrm{d}t\,t^{2}\exp(Mt)=-2M^{-3}.\end{split} \tag{13}\] Since integration, trace and matrix multiplication are linear operations, using the equations above we obtain \[\begin{split}\mu&:=\int_{0}^{\infty}\mathrm{d}t\,tP_{\mathrm{tick}}(t)=2\operatorname{tr}\Bigl{(}M^{-2}(\hat{V})\rho_{\mathrm{C}}^{0}\Bigr{)},\\ \chi&:=\int_{0}^{\infty}\mathrm{d}t\,t^{2}P_{\mathrm{tick}}(t)=-4\operatorname{tr}\Bigl{(}M^{-3}(\hat{V})\rho_{\mathrm{C}}^{0}\Bigr{)}.\end{split} \tag{14}\] Equations (14) are special in that they provide a means to calculate the first and second moments of the delay function without having to calculate said function. Further, from \(M(\mathbb{1})=-2\hat{V}\) it follows that \(M^{-1}(\hat{V})=-\frac{1}{2}\mathbb{1}\) (\(\mathbb{1}\) being the identity operator), so that we end up with \[\mu =-\operatorname{tr}\bigl{(}M^{-1}(\mathbb{1})\rho_{\mathrm{C}}^{0}\bigr{)}, \tag{15}\] \[\chi =2\operatorname{tr}\bigl{(}M^{-2}(\mathbb{1})\rho_{\mathrm{C}}^{0}\bigr{)}, \tag{16}\] and \(\int_{0}^{\infty}\mathrm{d}t\,P_{\mathrm{tick}}(t)=\operatorname{tr}\bigl{(}\rho_{\mathrm{C}}^{0}\bigr{)}=1\), as expected. This means that the precision can be expressed as \[R=\frac{1}{\frac{2\operatorname{tr}\bigl{(}M^{-2}(\mathbb{1})\rho_{\mathrm{C}}^{0}\bigr{)}}{\operatorname{tr}\bigl{(}M^{-1}(\mathbb{1})\rho_{\mathrm{C}}^{0}\bigr{)}^{2}}-1}. \tag{17}\] With this, choosing a matrix representation of \(M\), the problem is reduced to the inversion of a matrix, evaluated on the identity: we have bypassed having to numerically solve for the dynamics of the clockwork. To solve eq. (17) we want to find a solution to the two Lyapunov equations \[M(X)=\mathbb{1},\qquad M(X)=M^{-1}(\mathbb{1}).
\tag{18}\] The Bartels-Stewart algorithm [22] can solve Lyapunov equations such as these in \(O(d^{3})\) iterations. In essence, it vectorizes the equation and calculates the Schur decomposition (in fact, we get a small speed-up as \(M^{-1}(\mathbb{1})\) and \(M^{-1}(M^{-1}(\mathbb{1}))\) act on symmetric matrices). In the final step, we need to optimize the precision numerically over the coefficients \(\{V_{l}\geq 0\}_{l=0}^{d-1}\) and \(\rho_{\mathrm{C}}^{0}\). From eq. (17) we can infer that the precision is a rational polynomial in the coefficients \(\{V_{j}\}_{j=0}^{d-1}\) and the coefficients parametrizing the initial clockwork state \(\rho_{\mathrm{C}}^{0}\). Therefore, there are finitely many local maxima. The numerical optimization algorithm we used picks random seeds, searches, and compares the found local maxima. We constrained the search to \(0\leq V_{l}\leq d\), as higher coefficients are expected to lead to exponential tails. We used Julia [23] to perform the calculations. The result for the precision is presented in figs. 1 and 5 and shows a \(d^{2}\) scaling. #### E.0.1 Invertibility of \(M\) One might ask for the general conditions under which \(M\) is invertible so that the above equations are well-defined. For this, recall that for an arbitrary matrix \(C\), \(M(X)=C\) has a unique solution \(X\) if and only if \(M(X)=0\Rightarrow X=0\). From eq. (101), \(M(X)=0\) reads \[(\mathrm{i}\hat{H}-\hat{V})X+X(\mathrm{i}\hat{H}-\hat{V})^{\dagger}=0. \tag{102}\] This is known as the continuous Lyapunov equation. One can show, by inspecting the implication for the characteristic polynomial (see footnote 3), that it has a unique solution if and only if \(\sigma(\mathrm{i}\hat{H}-\hat{V})\cap\sigma(\mathrm{i}\hat{H}+\hat{V})=\emptyset\). Here \(\sigma\) refers to the spectrum of a linear operator.
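The recipe of eqs. (15)-(17) is easy to implement once a matrix representation of \(M\) is chosen. The sketch below uses Python with a dense vectorisation (the paper's own computations used Julia with the Bartels-Stewart algorithm; for \(d=2\) plain linear solves suffice), and reproduces the precision \(R=4\) of the \(d=2\) quasi-ideal clock quoted in appendix F:

```python
import numpy as np

# d = 2 quasi-ideal clock: H = diag(0, 1) in the energy basis,
# V = (1/sqrt(2)) |t_0><t_0|, initial state rho_0 = |t_1><t_1|.
d = 2
t0 = np.array([1.0, 1.0]) / np.sqrt(2)
t1 = np.array([1.0, -1.0]) / np.sqrt(2)
H = np.diag([0.0, 1.0]).astype(complex)
V = (1 / np.sqrt(2)) * np.outer(t0, t0).astype(complex)
rho0 = np.outer(t1, t1).astype(complex)

# Row-major vectorisation: vec(A X B) = (A kron B^T) vec(X), so the
# superoperator M(X) = i[H, X] - {V, X} becomes a d^2 x d^2 matrix.
Id = np.eye(d)
M = 1j * (np.kron(H, Id) - np.kron(Id, H.T)) - (np.kron(V, Id) + np.kron(Id, V.T))

x1 = np.linalg.solve(M, np.eye(d).reshape(-1).astype(complex))  # vec of M^{-1}(1)
x2 = np.linalg.solve(M, x1)                                     # vec of M^{-2}(1)

mu = -np.trace(x1.reshape(d, d) @ rho0).real      # first moment, eq. (15)
chi = 2 * np.trace(x2.reshape(d, d) @ rho0).real  # second moment, eq. (16)
R = 1 / (chi / mu**2 - 1)                         # precision, eq. (17)
print(mu, chi, R)  # -> 2*sqrt(2), 10, 4 (up to rounding)
```

No time integration of the dynamics is needed: the moments of the delay function come out of two linear solves, which is precisely why this route avoids the numerical instabilities of the direct method.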
From first-order perturbation theory in \(\hat{V}\), we see that the eigenvalues of \(\mathrm{i}\hat{H}-\hat{V}\) for \(n=0,1,\ldots,d-1\) are \[\lambda_{n}=\mathrm{i}\,n-\sum_{k=0}^{d-1}V_{k}|\left<t_{k}|n\right>|^{2}=\mathrm{i}\,n-\frac{1}{d}\sum_{k=0}^{d-1}V_{k}. \tag{103}\] Footnote 3: The equation is telling us that we can replace left multiplication by \((\mathrm{i}\hat{H}-\hat{V})\) with right multiplication by \((-\mathrm{i}\hat{H}+\hat{V})\). Thus, using Cayley-Hamilton, \(0=q_{(\mathrm{i}\hat{H}-\hat{V})}((-\mathrm{i}\hat{H}+\hat{V}))X\). By assumption \(q_{(\mathrm{i}\hat{H}-\hat{V})}((-\mathrm{i}\hat{H}+\hat{V}))\) is non-singular, so that \(X=0\). Further, all \(V_{l}\geq 0\). Thus, if at least one \(V_{k}>0\), we expect \(\mathrm{Re}(\lambda_{l})<0\) for \(l\in\{0,...,d-1\}\), and consequently \(\lambda_{i}\neq-\bar{\lambda}_{j}\) for any two eigenvalues, so the condition on the spectrum is satisfied. Physically, negative real parts guarantee the existence of \(M^{-1}\), which implies \(\int_{0}^{\infty}\mathrm{d}t\,P_{\mathrm{tick}}(t)=\mathrm{tr}\big{(}\rho_{\mathrm{C}}^{0}\big{)}=1\). Hence, it is equivalent to almost surely observing a tick. ## Appendix F Remarks on the precision of the \(d=2\) clock from virtual qubits of [21] Firstly, let us observe that the quasi-ideal clock in \(d=2\) for \(\rho(0)=|t_{1}\rangle\!\langle t_{1}|\) and \(\hat{V}=\frac{1}{\sqrt{2}}\,|t_{0}\rangle\!\langle t_{0}|\) has a precision of exactly \(R=4\), which is above the classical bound of \(R=2\). The model suggested in [21] has some similarities with photofluorescence (and can therefore leverage antibunching phenomena). To see this, observe that the environment for the ladder is driving the system via population inversion, described by a negative virtual temperature \(\beta_{v}\), and is assumed to be perfectly on resonance (or in the RWA limit). We would expect the interaction Hamiltonian \[H_{int}=g(|0_{v}\rangle\!\langle 1_{v}|\otimes|1_{l}\rangle\!\langle 0_{l}|+h.c.)
\tag{104}\] to effectively act like \[H_{int}=g(|1_{l}\rangle\!\langle 0_{l}|+h.c.). \tag{105}\] Such a Hamiltonian, however, represents the discrete Fourier transformation of a 2-level system, as \[H =-\frac{\omega}{2}\,|0\rangle\!\langle 0|+\frac{\omega}{2}\,|1\rangle\!\langle 1|=\frac{1}{2}\begin{pmatrix}1&1\\ 1&-1\end{pmatrix}\begin{pmatrix}-\frac{\omega}{2}&0\\ 0&\frac{\omega}{2}\end{pmatrix}\begin{pmatrix}1&1\\ 1&-1\end{pmatrix}=-\frac{\omega}{2}\begin{pmatrix}0&1\\ 1&0\end{pmatrix} \tag{106}\] \[=-\frac{\omega}{2}(|t_{1}\rangle\left<t_{0}|+|t_{0}\right>\left<t_{1}\right|), \tag{107}\] and so \(g=-\frac{\omega}{2}\). Consequently, the transition from the lower to the upper level actually behaves like the transition \(|t_{0}\rangle\to|t_{1}\rangle\) with respect to this interaction Hamiltonian. This Hamiltonian does not represent the partial trace over the hot and cold baths and can only serve as an intuition connecting both models. In fluorescence experiments the same principle is used with a laser. Unlike a laser, however, the timing in this case is transferred without an explicit electric field in between (that would mediate the timing via a harmonic wave) and is just assumed to exist. To make this analogy on the level of Lindbladians, let us recall from [11] that \[\frac{\mathrm{d}\rho_{0}}{\mathrm{d}t}=\mathrm{i}\left(\rho_{0}H_{\mathrm{eff}}^{\dagger}\,-H_{\mathrm{eff}}\,\rho_{0}\right)+\mathcal{L}_{h}\rho_{0}+\mathcal{L}_{c}\rho_{0} \tag{108}\] describes the dynamics on the virtual qubit and the ladder, where the effective Hamiltonian is \(H_{0}+H_{int}-\mathrm{i}\Gamma/2\left|1\right\rangle\!\!\left\langle 1\right|\). Assuming always decoherent dynamics we have \[\frac{\mathrm{d}\rho_{0}}{\mathrm{d}t}=-\mathrm{i}[H_{int},\rho_{0}]-\Gamma/2\{\rho_{0},|1\rangle\!\!\left\langle 1|\right\}+\mathcal{L}_{h}\rho_{0}+\mathcal{L}_{c}\rho_{0}.
\tag{101}\] which again does not allow us to trace out the hot and cold reservoirs, as the Hamiltonian term would vanish. Lastly, let us provide the formula for the partial trace, e.g., \(\mathrm{tr}_{h,c}(\exp(-\mathrm{i}H_{int}t)\rho_{C}\otimes\rho_{H}\otimes\rho_{L}\exp(\mathrm{i}H_{int}t))\), assuming only that the cold and hot qubits are in thermodynamic equilibrium. We obtain \[\mathrm{tr}_{h,c}\Big{(}\exp(-\mathrm{i}H_{int}t)\rho_{C}\otimes\rho_{H}\otimes\rho_{L}\exp(\mathrm{i}H_{int}t)\Big{)} \tag{102}\] \[=\,\left\langle 0_{C},0_{H}\right|\Big{(}\exp(-\mathrm{i}H_{int}t)\rho_{C}\otimes\rho_{H}\otimes\rho_{L}\exp(\mathrm{i}H_{int}t)\Big{)}|0_{C},0_{H}\rangle+\] (103) \[\langle 1_{C},1_{H}|\Big{(}\exp(-\mathrm{i}H_{int}t)\rho_{C}\otimes\rho_{H}\otimes\rho_{L}\exp(\mathrm{i}H_{int}t)\Big{)}|1_{C},1_{H}\rangle\] (104) \[+\mathrm{tr}_{V}\Big{(}\exp(-\mathrm{i}H_{int}t)\rho_{C}\otimes\rho_{H}\otimes\rho_{L}\exp(\mathrm{i}H_{int}t)\Big{)}, \tag{105}\] where \(\mathrm{tr}_{V}\) denotes the trace over the virtual-qubit states. Because \(H_{int}\) acts trivially on non-virtual-qubit states, we get terms proportional to \(\rho_{L}(0)\) in the first two terms. The last term is reduced to a 2-dimensional problem and can be solved since \(H_{int}^{2}\propto\mathbb{1}\) on the ladder and virtual-qubit subspace.
In total we obtain \[\mathrm{tr}_{h,c}\Big{(}\exp(-\mathrm{i}H_{int}t)\rho_{C}\otimes\rho_{H}\otimes\rho_{L}\exp(\mathrm{i}H_{int}t)\Big{)}=\rho_{L}(0)\Big{(}\frac{1}{Z_{h}Z_{c}}+\frac{\mathrm{e}^{-\beta_{c}E_{c}}\mathrm{e}^{-\beta_{h}E_{h}}}{Z_{h}Z_{c}}\Big{)} \tag{106}\] \[+|1_{L}\rangle\!\!\left\langle 1_{L}\right|\Big{(}\cos^{2}(gt)\frac{\mathrm{e}^{-\beta_{c}E_{c}}}{Z_{c}Z_{h}}\left\langle 1_{L}|\rho_{L}(0)|1_{L}\right\rangle+\sin^{2}(gt)\frac{\mathrm{e}^{-\beta_{h}E_{h}}}{Z_{c}Z_{h}}\left\langle 0_{L}|\rho_{L}(0)|0_{L}\right\rangle\Big{)}\] (107) \[+|0_{L}\rangle\!\!\left\langle 0_{L}\right|\Big{(}\cos^{2}(gt)\frac{\mathrm{e}^{-\beta_{h}E_{h}}}{Z_{c}Z_{h}}\left\langle 0_{L}|\rho_{L}(0)|0_{L}\right\rangle+\sin^{2}(gt)\frac{\mathrm{e}^{-\beta_{c}E_{c}}}{Z_{c}Z_{h}}\left\langle 1_{L}|\rho_{L}(0)|1_{L}\right\rangle\Big{)}. \tag{108}\] This form clearly reveals the effect of a varying initial state. If the probability of ticking is coupled to occupying the top state and we start the dynamics in the ground state \(\rho_{L}(0)=|0\rangle\!\langle 0|\), the probability to occupy the top state is \[P_{top}(t)=\mathrm{tr}_{L}\left(\,|1_{L}\rangle\!\!\left\langle 1_{L}\right|\rho_{L}(t)\right)=\sin^{2}(gt)\frac{\mathrm{e}^{-\beta_{h}E_{h}}}{Z_{c}Z_{h}}. \tag{109}\] Coupling this to a photon field, the way it was done in [21], then yields \[P_{tick}(t)=c\frac{\mathrm{e}^{-\beta_{h}E_{h}}}{Z_{c}Z_{h}}\sin^{2}(gt)\exp\!\left(-\frac{c}{2}\frac{\mathrm{e}^{-\beta_{h}E_{h}}}{Z_{c}Z_{h}}t\right)\exp\!\left(\frac{c}{2}\frac{\mathrm{e}^{-\beta_{h}E_{h}}}{Z_{c}Z_{h}}\frac{\sin(gt)\cos(gt)}{g}\right)\!. \tag{110}\] Lastly, let us compare the \(d=2\) classical clock without virtual qubits with the quasi-ideal clock in \(d=2\). The \(d=2\) classical clock utilizes a single excitation and is supposed to decay thereafter, the probability density of ticking being \(t\mathrm{e}^{-t}\).
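For the comparison, the moments of the classical delay density \(t\mathrm{e}^{-t}\) can be evaluated directly; a short Python sketch recovers the classical bound \(R=\mu^{2}/\sigma^{2}=2\):

```python
import numpy as np

# Delay density of the d = 2 classical clock: P(t) = t * exp(-t).
t = np.linspace(0.0, 60.0, 600001)
P = t * np.exp(-t)
dt = t[1] - t[0]

norm = np.sum(P) * dt            # ~1: the density is normalised
mu = np.sum(t * P) * dt          # first moment  -> 2
chi = np.sum(t**2 * P) * dt      # second moment -> 6
R = mu**2 / (chi - mu**2)        # precision -> 2, the classical bound
print(norm, mu, R)
```

The same moments follow analytically from \(\int_{0}^{\infty}t^{n}\mathrm{e}^{-t}\,\mathrm{d}t=n!\), giving \(\mu=2\), \(\sigma^{2}=6-4=2\) and hence \(R=2\), half the \(R=4\) of the quasi-ideal clock.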
Now, the virtual qubit as well as the quasi-ideal clock allow for oscillation; that is, we have a finite probability of going back to the ground state (no selection rule). Thus, in \(d=2\) we get oscillations, which is reflected in the appearance of the \(\cos(gt)\) and \(\sin(gt)\) terms multiplying the exponential in eq. (110). The comparison with lasers shows that the virtual-qubit case satisfies this. Finally, the virtual-qubit case is fundamentally 8-dimensional, as the interaction with the ladder provides the timing (meaning that if we were to make the interaction between the virtual qubit and the ladder Markovian, the clock would not function). It was this extra dimension that allowed us to tweak the Hamiltonian to mimic a discrete Fourier transformation. To avoid this extra dimension, we could leave the Hamiltonian diagonal but instead utilize a different decay mechanism and initial state (Fourier transform them instead). So we deduce that the key pieces are allowing oscillations, having a non-diagonal decay channel and a non-diagonal initial state. ## Appendix G Analytical expression for the entropy production for the \(d=2\) quasi-ideal thermal clock In the following we provide the relevant calculation for the entropy, given by eq. (20), for the case of \(d=2\), \(V_{0}=0,\rho(0)=|t_{0}\rangle\!\!\left\langle t_{0}\right|\). We work here in a three-level system, as the states \(|0\rangle\,,|1\rangle\) decay to the state \(|u\rangle\). We can assume a spectrum of \(\left(-\omega_{0},0,\omega\right)\) for the states \(\left|u\right\rangle,\left|0\right\rangle,\left|1\right\rangle.\) The thermal state reached due to equilibration with the photon bath is \(\exp\{-\beta H\}/Z\), so that \(\log(\rho_{\mathrm{th}})=-\beta H\) (up to a constant) can be used.
To calculate the dynamics, we will assume \(H=|1\rangle\!\langle 1|\), \(\hat{V}=V_{1}\left|t_{1}\right\rangle\!\!\left\langle t_{1}\right|\), as the dynamics conditioned on not ticking is entirely in the two-level \(\left|0\right\rangle,\left|1\right\rangle\) subspace. The final solution is then obtained by the scaling \(t\to t\omega\), \(V_{1}\to V_{1}(1+N)/\omega\), where \(N\) is the occupation number of the bath. This factor enters because we have the possibility of absorption. However, the photons being absorbed also produce a tick, so that conditioning on no tick also eliminates the absorbing part of the dynamics. We start by decomposing into the Pauli matrices \(\sigma_{0},\sigma_{1},\sigma_{2},\sigma_{3}\) and work in the time basis: \[H=\frac{1}{2}(\sigma_{0}-\sigma_{1}),\quad V=\frac{V_{1}}{2}(\sigma_{0}- \sigma_{3}),\quad\rho(0)=\frac{1}{2}(\sigma_{3}+\sigma_{0}). \tag{101}\] Then \(\exp(-\mathrm{i}Ht-Vt)=\mathrm{e}^{-\mathrm{i}t/2}\mathrm{e}^{-V_{1}t/2}\left(\cosh\!\left(t\frac{\sqrt{V_{1}^{2}-1}}{2}\right)\sigma_{0}+\frac{\mathrm{i}\sinh\!\left(t\frac{\sqrt{V_{1}^{2}-1}}{2}\right)}{\sqrt{V_{1}^{2}-1}}\sigma_{1}+\frac{V_{1}\sinh\!\left(t\frac{\sqrt{V_{1}^{2}-1}}{2}\right)}{\sqrt{V_{1}^{2}-1}}\sigma_{3}\right)\). Now we can calculate \(\exp(-\mathrm{i}Ht-Vt)\,\rho_{\mathrm{C}}^{0}\,\exp(\mathrm{i}Ht-Vt)\).
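The closed form of the non-unitary propagator can be checked against a direct matrix exponential (illustrative values, with \(V_{1}>1\) so that the square roots are real):

```python
import numpy as np
from scipy.linalg import expm

s0 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

V1, t = 2.0, 0.7          # sample values with V1 > 1
H = 0.5 * (s0 - s1)
V = 0.5 * V1 * (s0 - s3)
D = np.sqrt(V1**2 - 1)
x = 0.5 * t * D

# Closed form: exp of c0*s0 + n.sigma with n = (i t/2, 0, V1 t/2)
closed = (np.exp(-0.5j * t) * np.exp(-0.5 * V1 * t)
          * (np.cosh(x) * s0
             + 1j * np.sinh(x) / D * s1
             + V1 * np.sinh(x) / D * s3))

numeric = expm(-1j * H * t - V * t)
print(np.allclose(numeric, closed))
```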
We decompose the result into Pauli matrices too, \[\exp(-\mathrm{i}Ht-Vt)\rho_{\mathrm{C}}^{0}\exp(\mathrm{i}Ht-Vt)=a_{0}(t) \sigma_{0}+a_{1}(t)\sigma_{1}+a_{2}(t)\sigma_{2}+a_{3}(t)\sigma_{3} \tag{102}\] and obtain \[a_{0}(t) =\mathrm{e}^{-V_{1}t}\left(\frac{\cosh^{2}\!\left(t\frac{\sqrt{V_{1}^{2}-1}}{2}\right)}{2}+\frac{V_{1}\cosh\!\left(t\frac{\sqrt{V_{1}^{2}-1}}{2}\right)\sinh\!\left(t\frac{\sqrt{V_{1}^{2}-1}}{2}\right)}{\sqrt{V_{1}^{2}-1}}+\frac{(V_{1}^{2}+1)\sinh^{2}\!\left(t\frac{\sqrt{V_{1}^{2}-1}}{2}\right)}{2(V_{1}^{2}-1)}\right) \tag{103}\] \[a_{1}(t) =0\] (104) \[a_{2}(t) =\mathrm{e}^{-V_{1}t}\left(\frac{\sinh\!\left(t\frac{\sqrt{V_{1}^{2}-1}}{2}\right)\cosh\!\left(t\frac{\sqrt{V_{1}^{2}-1}}{2}\right)}{\sqrt{V_{1}^{2}-1}}+\frac{V_{1}\sinh^{2}\!\left(t\frac{\sqrt{V_{1}^{2}-1}}{2}\right)}{V_{1}^{2}-1}\right)\] (105) \[a_{3}(t) =\mathrm{e}^{-V_{1}t}\left(\frac{\cosh^{2}\!\left(t\frac{\sqrt{V_{1}^{2}-1}}{2}\right)}{2}+\frac{V_{1}\cosh\!\left(t\frac{\sqrt{V_{1}^{2}-1}}{2}\right)\sinh\!\left(t\frac{\sqrt{V_{1}^{2}-1}}{2}\right)}{\sqrt{V_{1}^{2}-1}}+\frac{\sinh^{2}\!\left(t\frac{\sqrt{V_{1}^{2}-1}}{2}\right)}{2}\right) \tag{106}\] By the property of the Pauli matrices, we have that \(\mathrm{tr}\!\left(\exp(-\mathrm{i}Ht-Vt)\rho_{\mathrm{C}}^{0}\exp(\mathrm{i}Ht -Vt)\right)=2a_{0}(t)\). Then, we need to calculate how the non-tick Lindbladian acts on the Pauli matrices. Due to the decomposition of \(H,V\) this means \[\mathcal{L}(\rho) =-\frac{\mathrm{i}}{2}[(\sigma_{0}-\sigma_{1}),\rho]-\frac{V_{1}} {2}\{\sigma_{0}-\sigma_{3},\rho\} \tag{107}\] \[\mathcal{L}(\sigma_{0}) =V_{1}(\sigma_{3}-\sigma_{0})\] (108) \[\mathcal{L}(\sigma_{1}) =-V_{1}\sigma_{1}\] (109) \[\mathcal{L}(\sigma_{2}) =-\sigma_{3}-V_{1}\sigma_{2}\] (110) \[\mathcal{L}(\sigma_{3}) =\sigma_{2}+V_{1}(\sigma_{0}-\sigma_{3}).
\tag{111}\] Multiplying from the left with \(H\) and taking the trace yields \[\mathrm{tr}(H\mathcal{L}(\sigma_{0})) =-V_{1} \tag{120}\] \[\mathrm{tr}(H\mathcal{L}(\sigma_{1})) =V_{1}\] (121) \[\mathrm{tr}(H\mathcal{L}(\sigma_{2})) =0\] (122) \[\mathrm{tr}(H\mathcal{L}(\sigma_{3})) =V_{1} \tag{123}\] Finally, the probability of ticking reads \[P(t) =2\,\mathrm{tr}\!\left(\hat{V}\rho(t)\right)=V_{1}(\mathrm{tr}( \rho(t))-\mathrm{tr}(\sigma_{3}\rho(t)))=2V_{1}(a_{0}(t)-a_{3}(t)) \tag{124}\] \[=\frac{2V_{1}}{V_{1}^{2}-1}\mathrm{e}^{-V_{1}t}\sinh^{2}(t\frac{ \sqrt{V_{1}^{2}-1}}{2}). \tag{125}\] In total, using linearity we obtain \[\Delta S_{\rm tick} =-\beta\int_{0}^{\infty}\mathrm{d}t\,P(t)\int_{0}^{t}\mathrm{d}s\, \mathrm{tr}\!\left(H\mathcal{L}(\rho(s))\right)/\mathrm{tr}(\rho(s)) \tag{101}\] \[=-\beta\int_{0}^{\infty}\mathrm{d}t\,P(t)\int_{0}^{t}\mathrm{d}s \,(-V_{1}a_{0}(s)+V_{1}a_{1}(s)+V_{1}a_{3}(s))/a_{0}(s)\] (102) \[=\beta\int_{0}^{\infty}\mathrm{d}t\,P(t)\int_{0}^{t}\mathrm{d}s\, \frac{P(s)}{2a_{0}(s)}. \tag{103}\] Recall that we need to make the substitution \(t,s\to\omega t,\omega s\), \(V_{1}\to V_{1}/\omega\).
2301.09301
Geometric Theory of Mechanical Screening in two-dimensional solids
Holes in mechanical metamaterials, quasi-localized plastic events in amorphous solids, and bound dislocations in a hexatic matter are different mechanisms of generic stress relaxation in solids. Regardless of the specific mechanism, these and other local stress relaxation modes are quadrupolar in nature, forming the foundation for stress screening in solids, similar to polarization fields in electrostatic media. We propose a geometric theory for stress screening in generalized solids based on this observation. The theory includes a hierarchy of screening modes, each characterized by internal length scales, and is partially analogous to theories of electrostatic screening such as dielectrics and Debye-Hückel theory. Additionally, our formalism suggests that the hexatic phase, traditionally defined by structural properties, can also be defined by mechanical properties and may exist in amorphous materials.
Noemie Livne, Amit Schiller, Michael Moshe
2023-01-23T07:21:02Z
http://arxiv.org/abs/2301.09301v1
# Geometric Theory of Mechanical Screening in two-dimensional solids ###### Abstract Holes in mechanical metamaterials, quasi-localized plastic events in amorphous solids, and bound dislocations in a hexatic matter are different mechanisms of generic stress relaxation in solids. Regardless of the specific mechanism, these and other local stress relaxation modes are quadrupolar in nature, forming the foundation for stress screening in solids, similar to polarization fields in electrostatic media. We propose a geometric theory for stress screening in generalized solids based on this observation. The theory includes a hierarchy of screening modes, each characterized by internal length scales, and is partially analogous to theories of electrostatic screening such as dielectrics and Debye-Hückel theory. Additionally, our formalism suggests that the hexatic phase, traditionally defined by structural properties, can also be defined by mechanical properties and may exist in amorphous materials. ## I Introduction The concept of screening, which refers to the reduction of energy density through a material's local responses, is central to many physical systems. Examples include dielectrics and ionic liquids, in which induced dipolar or monopolar charge densities respond to the background electric field. As a result, the effective electric field is modified either quantitatively or qualitatively [1]. Previous research has successfully applied the concept of screening to mechanical systems. For instance, the onset of buckling in 2D defective membranes has been interpreted as the screening of structural defects by curvature [2]. Additionally, studies have shown that mechanical stresses in curved self-assembled crystals can be screened through the nucleation of structural defects [3; 4; 5; 6].
The duality between curvature and defects as entities that screen and are screened is reflected in the first Föppl-von Kármán equation for the stress potential \(\chi\) \[\frac{1}{Y}\Delta\Delta\chi=K_{D}-K_{G}\;. \tag{1}\] In this equation, the Gaussian curvature of the actual deformed configuration is represented by \(K_{G}\), and singular or distributed defects are represented by \(K_{D}\) [2]. This equation demonstrates that when the curvature \(K_{G}\) is fixed, stresses can be reduced by distributing defects through \(K_{D}\), and vice versa. Physical phenomena that can be explained by geometric screening include the shape of virus capsids [7] and defect patterns on curved colloidal crystals [4; 5]. Another example is the theory of linear and nonlinear screening by imaginary quadrupoles, which was systematically derived to describe the emergent mechanics in Kirigami [8] and planar elastic meta-materials containing arrays of holes [9]. In Fig. 1(a) we demonstrate a state in which imaginary quadrupoles interact nonlinearly, leading to a spontaneous breaking of symmetry with an alternating pattern [9]. Previous works on mechanical screening have been largely influenced by an early discovery of mechanical screening within the statistical theory of 2D crystalline matter, which led to the concept of two-step melting of a solid through an intermediate hexatic phase to a liquid state [10; 11]. In this theory, the three phases are distinguished by their structural properties, and the transitions from solid to hexatic and hexatic to liquid correspond to a sequential destruction of translational and rotational quasi-long-range order. From a mechanical perspective, the low, intermediate, and high temperature phases form elastic solids supplemented by thermally induced tightly-bound dislocation pairs, tightly-bound disclination pairs (dislocations), and free disclinations, respectively. The free element in each phase forms a potential screening mechanism.
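As a minimal illustration of Eq. (1) (our example, not taken from the text): the standard Airy stress function of a wedge disclination, \(\chi\propto r^{2}\ln r\), is biharmonic away from the origin, so it solves Eq. (1) with a delta-like defect density \(K_{D}\) and flat actual geometry \(K_{G}=0\). A symbolic check of the regular part:

```python
import sympy as sp

r = sp.symbols('r', positive=True)

def lap(f):
    # axisymmetric 2D Laplacian
    return sp.diff(f, r, 2) + sp.diff(f, r) / r

chi = r**2 * sp.log(r)   # disclination Airy function, up to a prefactor
print(sp.simplify(lap(lap(chi))))   # vanishes for r > 0
```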
In the intermediate hexatic phase, for example, dislocations can form in pairs and unbind to screen out external loads, and are the key mechanism behind its vanishing shear modulus and the screened interactions between disclinations [10; 11; 12; 13]. This is illustrated in Fig. 1(b) where a bubble-raft model of 2d crystalline matter shows the unbinding of dislocations due to external shear. The ever-growing list of systems that contain screening mechanisms is not limited to ordered systems. Examples include granular amorphous solids, where local quadrupolar particle rearrangements are induced in response to external loads [14] (shown in Fig. 1(c)), epithelial tissue [15; 16], and wrinkles and crumples in strongly confined thin sheets, where local out-of-plane deformations are also of quadrupolar nature [17; 18] (shown in Fig. 1(d)). Motivated by the wide range of screening mechanisms found in solids, a linear continuum theory was developed to describe various modes of screening in elastic materials [21]. Specifically, two distinct screening regimes were predicted: a quasi-elastic regime and an anomalous one. It was suggested that a transition between these different screening modes can be achieved, for example, in a granular solid by decreasing the confining pressure. Indeed, the theory's predictions, including the emergence of anomalous mechanics, have been validated through a series of numerical and experimental studies on the mechanics of granular and glassy materials in both two and three dimensions [22; 23; 24; 25; 26]. Despite its success in predicting the mechanics of granular and glassy materials, the theory presented in [21; 22; 23; 24; 25; 26] is derived based on ad hoc assumptions on the general nature of screening. In addition, we identify three main drawbacks of the theory: (i) It is written in a specific coordinate system. (ii) It assumes a geometrically linearized strain measure.
(iii) The analytic methods available within the current displacement formulation are limited. In this paper, we derive a hierarchy of screening theories from (geometric) first principles. We address the limitations of previous theories by developing a covariant geometric formulation of screened elasticity. Our theory reveals three distinct screening regimes, controlled by quadrupole, dipole, and monopole screening mechanisms. Additionally, we develop a generalized Airy potential theory, in which the governing equations take different forms in each of the regimes \[\frac{1}{\bar{Y}}\Delta\Delta\chi=\bar{K}^{0}\] Quadrupole \[\frac{1}{\bar{Y}}\Delta\Delta\chi+\frac{1}{\bar{Y}}\ell_{P}^{-2} \Delta\chi=\bar{K}^{0}\] Dipole \[\frac{1}{\bar{Y}}\Delta\Delta\chi+\frac{1}{\bar{Y}}\ell_{M}^{-4} \chi=\bar{K}^{0}\] Monopole Our study demonstrates that the different screening regimes are characterized by different length scales, \(\ell_{P}\) and \(\ell_{M}\), which act as new moduli that extend classical elasticity. The theories of Dipole and Monopole screening predict non-affine deformations in response to uniform external loads and are expected to be relevant to any solid whose mechanics is controlled by local relaxation mechanisms, such as local rearrangements in amorphous solids, wrinkles in confined thin sheets, and T1 transitions in living cellular tissue. The possible extensions of continuum mechanical screening are summarized in the bottom panel of Fig. 1. In this work we focus on the yellow-colored boxes representing linear dipole and monopole screenings, in which an unusual or anomalous mechanical behavior is predicted. Our theory allows studying new problems that the non-geometric formulation in [21; 22] could not address. For example, we show that a monopole elastic charge screened by dipoles is mechanically equivalent to a disclination screened by dislocations in the Hexatic phase. Furthermore, we study how screened defects interact via the screening field.
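For instance (an illustrative check, not part of the derivation below), the homogeneous dipole equation admits oscillatory radial solutions \(\chi=J_{0}(r/\ell_{P})\): since \(\Delta J_{0}(r/\ell_{P})=-\ell_{P}^{-2}J_{0}(r/\ell_{P})\), the biharmonic and screening terms cancel, and the wavelength of the response is set by \(\ell_{P}\). A finite-difference sketch (arbitrary sample values):

```python
import numpy as np
from scipy.special import jv

lP = 1.0           # dipole screening length; arbitrary illustrative value
h, r = 1e-2, 1.5   # finite-difference step and sample radius

def chi(x):
    return jv(0, x / lP)   # candidate radial stress function

def lap(f, x):
    # axisymmetric 2D Laplacian by central differences
    return ((f(x + h) - 2 * f(x) + f(x - h)) / h**2
            + (f(x + h) - f(x - h)) / (2 * h * x))

# J0(r/lP) satisfies lap(chi) = -chi/lP^2, so it solves the homogeneous
# dipole equation lap(lap(chi)) + lP^-2 lap(chi) = 0
residual = lap(lambda y: lap(chi, y), r) + lap(chi, r) / lP**2
print(abs(residual) < 1e-3)
```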
These and other predictions are proposed as test measurements for identifying mechanical screening. Surprisingly, the geometric approach to mechanical screening uncovered an explicit link between the mechanics of the hexatic phase within the theory of melting, and the mechanics of screened solids, even in the absence of underlying order. The structure of this paper is as follows: We start with introducing an electrostatic analog in Sec.(II), where we derive electrostatic screening theories from energy functional minimization, an approach that is more natural when athermal mechanical systems are considered. In Sec.(III) we develop the general framework of geometric screening in elastic-like solids. In Sec.(IV) we derive equilibrium equations for the different screening modes, followed by the development of a generalized screened Airy stress function approach in Sec.(V). In Sec.(VI) we study the implications of mechanical screening on basic physical properties such as the Green's function associated with each screening mode, and the interactions between sources of stresses in the presence of screening. In Sec.(VII) we conclude by discussing the future road map towards a general theory of screening in solids.

Figure 1: Mechanical Screening. Top panel - Stress relaxation mechanisms: (a) Nonlinear quadrupole screening in holey metamaterials, established in [9], (b) Screening by dipoles via dislocation unbinding in a 2d crystal bubble-raft model [19; 20], (c) Quadrupolar Eshelby plastic event in a model of amorphous solid, adapted with permission from [14], (d) Screening by local quadrupolar wrinkles, adapted with permission from [17]. Bottom panel - diagrammatic description of the different screening modes. The linear and nonlinear quadrupole screening theory was established in [9]. Here we focus on linear dipole and monopole screening theories, extending linear quadrupole screening in analogy with the extension of dielectrics to Debye-Hückel screening.
## II The electrostatic analog A familiar implementation of screening theory is within electrostatics of continuous media. As such, we find it instructive to start with the electrostatic analog and later implement the same ideas, with the necessary adjustments, to elastic solids. The main idea behind the analogy is the hierarchical structure of linear and nonlinear electrostatic screening as summarized in Fig. 2. The potential energy density stored in the electric field is \(\mathcal{U}=\frac{1}{2}\varepsilon_{0}\mathbf{E}^{2}\), and the work done on the system by assembling a charge density \(\rho_{f}\) is \(\mathcal{W}=\rho_{f}\phi\). The mechanical free energy in a domain \(\mathcal{M}\) is therefore \[F=\int_{\mathcal{M}}\left(\mathcal{U}-\mathcal{W}\right)\mathrm{d}S=\int_{ \mathcal{M}}\left(\frac{1}{2}\varepsilon_{0}\mathbf{E}^{2}-\rho_{f}\phi\right) \mathrm{d}S \tag{2}\] with \(\mathbf{E}=-\nabla\phi\) the electric field derived from a potential, and \(\varepsilon_{0}\) the vacuum permittivity. If the domain \(\mathcal{M}\) is filled with matter, atoms and molecules may polarize in response to electric field, creating electric dipoles that modify the electric field. At the continuum level the dipoles are described by the polarization density \(\mathbf{P}\)[1]. The self interaction energy of a dipole, or the work required for its nucleation, is material dependent and reflects the microscopic origin of the charge separation within the atom or the molecule. To account for this effect we note that the energetic cost is quadratic in the polarization, and that dipoles interact with each other via the total electric field, so \[\mathcal{U} =\frac{1}{2}\varepsilon_{0}\mathbf{E}^{2}+\mathbf{E}\cdot\mathbf{P}\] \[\mathcal{W} =\frac{1}{2\varepsilon_{0}\chi_{e}}\mathbf{P}^{2}+\rho_{f}\phi\;. 
\tag{3}\] Here \(\chi_{e}\) is the electric susceptibility, and as before, \(\mathcal{U}\) quantifies the energy stored in the electric field and \(\mathcal{W}\) the work done on the system by assembling the monopole and dipole densities \(\rho_{f}\) and \(\mathbf{P}\). The equilibrium equations are then \[\mathbf{P} =\varepsilon_{0}\chi_{e}\mathbf{E}\] \[\nabla\cdot\mathbf{E} =\frac{1}{\varepsilon_{0}}\left(\rho_{f}-\nabla\cdot\mathbf{P}\right) \tag{4}\] Upon substituting the first relation in the second we get \[\nabla\cdot\mathbf{E}=\frac{1}{\varepsilon_{0}(1+\chi_{e})}\rho_{f}=\frac{1}{ \varepsilon}\rho_{f} \tag{5}\] Thus, we see that the permittivity constant is renormalized by the induced dipoles. These equations are the basis for linear dielectrics. An important observation is that the form of \(\mathcal{W}\) in Eq.(3) is not the most general one. Upon assuming that \(\mathcal{W}\) is an analytic function of \(\mathbf{P}\) and its derivative, the most general form that preserves the symmetries under rotations and translations is \[\mathcal{W} =\rho_{f}\phi+\frac{1}{2}\alpha_{2}\mathbf{P}^{2}+\frac{1}{24} \alpha_{4}\mathbf{P}^{4}+\ldots\] \[+\frac{1}{2}\beta_{2}(\nabla\cdot\mathbf{P})^{2}+\frac{1}{24} \beta_{4}(\nabla\cdot\mathbf{P})^{4}+\ldots\] \[+\frac{1}{2}\gamma_{2}(\nabla\times\mathbf{P})^{2}+\frac{1}{24} \gamma_{4}(\nabla\times\mathbf{P})^{4}+\ldots\] Within a linear theory, only three terms contribute, with nonzero \(\alpha_{2},\beta_{2},\gamma_{2}\), and perhaps additional quadratic terms in higher order derivatives. However, from a physical perspective, the interpretation of \(\mathbf{P}\) as a polarization field, together with the multipole expansion \[\rho=\rho_{f}+\nabla\cdot\mathbf{P}+\nabla\nabla Q+\ldots, \tag{6}\] implies that \(\nabla\times\mathbf{P}\) does not contribute to the charge distribution. Hence, in electrostatic systems we expect \(\gamma_{2}=0\).
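The passage from Eq. (3) to the first relation of Eq. (4) is a pointwise minimization over \(\mathbf{P}\); for uniform fields it can be reproduced symbolically (a sketch with scalar stand-ins for the vector fields):

```python
import sympy as sp

E, P, eps0, chi_e, rho_f, phi = sp.symbols('E P epsilon_0 chi_e rho_f phi')

# Free-energy density F = U - W of Eqs. (2)-(3), uniform-field (scalar) version
F = (eps0 * E**2 / 2 + E * P) - (P**2 / (2 * eps0 * chi_e) + rho_f * phi)

# Stationarity in the polarization reproduces P = epsilon_0 chi_e E, Eq. (4)
sol = sp.solve(sp.diff(F, P), P)
print(sol[0] == eps0 * chi_e * E)
```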
For the same reason, higher order derivatives of \(\mathbf{P}\) are irrelevant, leaving the general form \[\mathcal{W}=\rho_{f}\phi+\frac{1}{2}\alpha_{2}\mathbf{P}^{2}+\frac{1}{2}\beta_ {2}(\nabla\cdot\mathbf{P})^{2}. \tag{7}\] The term proportional to \((\nabla\cdot\mathbf{P})^{2}\) represents the nucleation cost associated with effective monopoles, created by non-uniformly distributed dipoles. The two coefficients correspond to an inherent length scale \(\ell\equiv\sqrt{\frac{\beta_{2}}{\alpha_{2}}}\). When compared with the system size, the dielectric state corresponds to \(\ell\ll L\).

Figure 2: Diagrammatic representation of screening hierarchy in electrostatic media. The equation for the electric field depends on the induced polarization \(P\) which depends on the electric field via a constitutive relation, illustrated here for each screening regime.

In the other limit, \(L\ll\ell\), the term \({\bf P}^{2}\) is negligible, and Eq.(3) takes the form \[{\cal U} = \frac{1}{2}\varepsilon_{0}{\bf E}^{2}+{\bf E}\cdot{\bf P}\] \[{\cal W} = \frac{\ell_{0}^{2}}{2\varepsilon_{0}}(\nabla\cdot{\bf P})^{2}+\rho_{f}\phi\;. \tag{8}\] Upon minimizing \(F=\int_{\cal M}\left({\cal U}-{\cal W}\right){\rm d}S\) the equilibrium equations are \[\nabla(\nabla\cdot{\bf P}) = -\varepsilon_{0}\ell_{0}^{-2}{\bf E}\] \[\Delta\phi = -\frac{1}{\varepsilon_{0}}\left(\rho_{f}-\nabla\cdot{\bf P} \right)\;. \tag{9}\] The first equation can be written as \[\nabla(\nabla\cdot{\bf P}-\varepsilon_{0}\ell_{0}^{-2}\phi)=0, \tag{10}\] implying that the expression in brackets is constant, which can be set to zero using the potential gauge freedom \[\nabla\cdot{\bf P}-\varepsilon_{0}\ell_{0}^{-2}\phi=0\;. \tag{11}\] Since the gauge is fixed, from this point onward we should no longer expect the equations to be invariant under gauge transformations.
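Substituting the gauge condition into Gauss's law yields a screened equation (Eq. (12) below) whose radially symmetric solution away from the source is the modified Bessel function \(K_{0}(r/\ell_{0})\), decaying exponentially rather than logarithmically. A finite-difference sanity check (our illustration, with an arbitrary \(\ell_{0}\)):

```python
import numpy as np
from scipy.special import kv

l0 = 1.0           # screening length; arbitrary illustrative value
h, r = 1e-4, 2.0   # finite-difference step and sample radius

def phi(x):
    return kv(0, x / l0)   # candidate screened potential, K0(r/l0)

# axisymmetric 2D Laplacian by central differences
lap = ((phi(r + h) - 2 * phi(r) + phi(r - h)) / h**2
       + (phi(r + h) - phi(r - h)) / (2 * h * r))

# away from the source the screened equation reduces to lap(phi) = phi/l0^2
print(abs(lap - phi(r) / l0**2) < 1e-6)
```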
Upon substituting Eq.(11) in Eq.(9) we find \[\Delta\phi-\ell_{0}^{-2}\phi=-\frac{1}{\varepsilon_{0}}\rho_{f}. \tag{12}\] This is the Helmholtz equation from the Debye-Hückel theory, describing screening by mobile monopole charges in an ionic liquid. We emphasize that in both dipole and monopole screenings, the fundamental fields with respect to which the energy is minimized are the electric potential and the polarization field. In the monopole screening case, the variation with respect to the polarization enforces the conservation of total charge. Eq.(12) is traditionally derived from the Poisson-Boltzmann equation using a detailed microscopic theory, which gives an explicit expression for the Debye screening length \(\ell_{0}\) in terms of temperature, ionic strength, etc. Our minimization approach avoids the microscopic statistical picture, and thus provides no details on the parameter \(\ell_{0}\). Despite this weakness, such an approach is advantageous in this work, since the systems we are interested in are mostly athermal and disordered. ## III Pure and screened elasticity One challenge in writing a screening theory for solids is the identification of the basic screening element, which arises naturally from a geometric approach to elasticity [27]. In this formulation the reference state of a solid \({\cal M}\) is defined by the rest distances between material elements, and quantified by the reference metric \(\bar{g}^{0}\) via \({\rm d}l_{0}^{2}=\bar{g}^{0}_{\mu\nu}{\rm d}x^{\mu}{\rm d}x^{\nu}\). A configuration is described by the metric \(g\), quantifying the actual (potentially deformed) distances between material elements given by \({\rm d}l^{2}=g_{\mu\nu}{\rm d}x^{\mu}{\rm d}x^{\nu}\). Contrary to the reference metric, the actual one is induced from an embedding \(\phi:{\cal M}\to\mathbb{R}^{2}\) describing the material configuration with \(g=\nabla\phi^{T}\nabla\phi\).
The strain is defined as the deviation of \(g\) from its rest state, \(u=\frac{1}{2}(g-\bar{g}^{0})\). A key property in this formulation is the curvature associated with the reference metric. A stress-free configuration is available if the reference Gaussian curvature \(\bar{K}^{0}\) associated with \(\bar{g}^{0}\) vanishes. Therefore \(\bar{K}^{0}\) is a measure of geometric incompatibility, and consequently of sources of residual stresses. Singular sources of stresses are described by singular \(\bar{K}^{0}\), exhibiting a natural multipolar hierarchy, as shown in Table 1. In a continuum limit, the reference curvature describes distributed multipoles \[\bar{K}^{0}=M({\bf x})+\bar{\nabla}_{\alpha}P^{\alpha}({\bf x})+\bar{\nabla}_ {\alpha\beta}Q^{\alpha\beta}({\bf x})+\ldots \tag{13}\] with \(M\), \(P\) and \(Q\) distributions of disclinations, dislocations, and quadrupoles [28]. Singular multipoles are materialized via anelastic deformations which modify the reference metric. The simplest anelastic deformation is a local change in the reference state, \[\bar{g}_{\alpha\beta}=\bar{g}^{0}_{\alpha\beta}+\delta^{(n)}({\bf x})\,q_{ \alpha\beta}\;. \tag{14}\] The trace of \(q\) corresponds to an area change, and the trace-less symmetric part corresponds to local shear. This type of metric deformation describes a wide variety of screening mechanisms, as illustrated in Fig. 1. For small anelastic deformations the leading order of the reference curvature associated with \(\bar{g}\) is \[\bar{K}=\bar{K}^{0}+Q^{\alpha\beta}\bar{\nabla}_{\alpha\beta}\delta({\bf x}) \tag{15}\] with \(Q^{\alpha\beta}=\bar{\varepsilon}^{\alpha\mu}\bar{\varepsilon}^{\beta\nu}q_{ \mu\nu}\), where \(\bar{\varepsilon}\) are the Levi-Civita tensors with respect to \(\bar{g}^{0}\) [29]. In light of the multipole expansion in Eq.(13) we find that a local material rearrangement induces a localized quadrupolar elastic charge.
This reflects a deeper property of elastic charges: In [27] it was proved that the lowest order elastic multipole that can be nucleated by a local material deformation is quadrupolar. The proof relies on global geometric properties which are impossible to change via local deformations. This geometric conservation law makes the elastic quadrupoles analogous to electric dipoles, which are the lowest order electric charges that can be nucleated locally without violating conservation of charge. The inevitable conclusion is that the quadrupolar field is, in principle, the natural screening field in solids.

\begin{table} \begin{tabular}{l c c} Type & \(\bar{K}\) & Realization \\ \hline Monopole & \(m\,\delta({\bf x})\) & Disclination \\ Dipole & \({\bf p}\cdot\nabla\delta({\bf x})\) & Dislocation \\ Quadrupole & \((\nabla^{T}\cdot{\bf q}\cdot\nabla)\delta({\bf x})\) & Dislocation-pair, Interstitial \\ \end{tabular} \end{table} Table 1: Reference curvature multipoles and possible realizations.

Motivated by these observations we turn to derive a screening theory of elastic-like solids by accounting for induced quadrupoles and their nucleation cost. For that we briefly review the geometric approach to elasticity and the possible screening modes. ### Elasticity For a purely elastic material the reference metric \(\bar{g}^{0}\) is fixed, and does not change in response to external loads. The elastic strain is then \[u^{\rm el}=\frac{1}{2}\left(g-\bar{g}^{0}\right). \tag{16}\] The equilibrium equation is derived from a mechanical free energy \[F=\int_{\cal M}\left({\cal U}-{\cal W}\right)\,{\rm d}S_{\bar{g}^{0}}-\int_{ \partial{\cal M}}{\cal W}_{B}\,{\rm d}l_{\bar{g}^{0}}\,, \tag{17}\] where \({\cal U}\) is the elastic energy density, while \({\cal W}\) and \({\cal W}_{B}\) encode the work density done on the system, e.g. by external forces acting either in the bulk or on the boundary, respectively.
Upon assuming small strains, the elastic energy is Hookean \[{\cal U}=\frac{1}{2}{\cal A}^{\alpha\beta\gamma\delta}u^{\rm el}_{\alpha\beta }u^{\rm el}_{\gamma\delta}\;. \tag{18}\] In the absence of body forces and in the presence of traction forces the work densities are \[{\cal W} = 0 \tag{19}\] \[{\cal W}_{B} = {\bf t}\cdot{\bf d}\;. \tag{20}\] Here \({\bf d}\) is the displacement field defined relative to the ground-state, \({\bf t}\) are the imposed traction forces, and \({\cal A}\) is the elastic tensor encoding material properties. In a homogeneous and isotropic material \[{\cal A}^{\alpha\beta\gamma\delta}=\frac{\nu\,Y}{1-\nu^{2}}\left(\bar{g}^{ \alpha\beta}\bar{g}^{\gamma\delta}+\frac{1-\nu}{2\nu}(\bar{g}^{\alpha\gamma} \bar{g}^{\beta\delta}+\bar{g}^{\alpha\delta}\bar{g}^{\beta\gamma})\right)\,, \tag{21}\] with \(Y\) the Young's modulus and \(\nu\) Poisson's ratio. The stress tensor is defined by the variation of energy density with respect to the elastic strain, leading to Hooke's law \[\sigma^{\alpha\beta}={\cal A}^{\alpha\beta\gamma\delta}u^{\rm el}_{\gamma\delta}. \tag{22}\] Upon minimizing Eq.(17) with respect to the embedding \(\phi\) we obtain the equilibrium equation \({\rm div}\sigma=0\), which takes the explicit form \[\bar{\nabla}_{\mu}\sigma^{\mu\nu}+\left(\Gamma^{\nu}_{\alpha\beta}-\bar{ \Gamma}^{\nu}_{\alpha\beta}\right)\sigma^{\alpha\beta}=0, \tag{23}\] along with the boundary conditions \[n_{\alpha}\sigma^{\alpha\beta}=t^{\beta}\,. \tag{24}\] This form of the equilibrium equation accounts for geometric nonlinearities and was first introduced in [30], and is given in App. A. A systematic method for solving it nonlinearly in the case of non-Euclidean reference metric was introduced in [31]. ### Screened Elasticity When strain relaxation mechanisms are available, the reference metric is no longer fixed, but can evolve in response to deformations.
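For a flat reference metric, Eqs. (21)-(22) reduce to the familiar plane-stress form of Hooke's law, \(\sigma=\frac{Y}{1-\nu^{2}}\left[(1-\nu)u+\nu\,\mathrm{tr}(u)\,g\right]\). A quick numerical check (illustrative values, our choice):

```python
import numpy as np

Y, nu = 1.0, 0.3       # illustrative moduli
g = np.eye(2)          # flat reference metric

# Elastic tensor of Eq. (21)
A = (nu * Y / (1 - nu**2)) * (
    np.einsum('ab,cd->abcd', g, g)
    + (1 - nu) / (2 * nu) * (np.einsum('ac,bd->abcd', g, g)
                             + np.einsum('ad,bc->abcd', g, g)))

u = np.array([[0.10, 0.02], [0.02, -0.05]])   # sample symmetric strain

sigma = np.einsum('abcd,cd->ab', A, u)        # Hooke's law, Eq. (22)
sigma_ref = Y / (1 - nu**2) * ((1 - nu) * u + nu * np.trace(u) * g)
print(np.allclose(sigma, sigma_ref))
```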
We therefore distinguish between the (fixed) initial reference metric \(\bar{g}^{0}\), and the temporary reference metric relative to which elastic deformations are measured \[\bar{g}=\bar{g}^{0}+q\;. \tag{25}\] Here \(q\) is the density of quadrupole perturbation to the reference metric \(\bar{g}^{0}\). Correspondingly, the elastic tensor \({\cal A}\), covariant derivatives \(\bar{\nabla}\), and the raising and lowering of indices are all defined with the fixed reference metric \(\bar{g}^{0}\). The elastic strain is the deviation of the current metric from the updated reference metric \[u^{\rm el}=\frac{1}{2}\left(g-\bar{g}\right)=\frac{1}{2}\left(g-\bar{g}^{0}-q \right)=u-\frac{1}{2}q\;, \tag{26}\] where \(u=\frac{1}{2}(g-\bar{g}^{0})\) is the total strain, measuring the deformation relative to the initial configuration. The screened elastic energy stored in the system still has the form of Eq.(18), \[F_{\rm Sc}=\int_{\cal M}\left({\cal U}-{\cal W}\right)\,{\rm d}S_{\bar{g}^{0}} -\int_{\partial{\cal M}}{\cal W}_{B}\,{\rm d}l_{\bar{g}^{0}}\, \tag{27}\] with \[{\cal U} = \frac{1}{2}{\cal A}^{\alpha\beta\gamma\delta}u^{\rm el}_{\alpha \beta}u^{\rm el}_{\gamma\delta}=\frac{1}{2}{\cal A}^{\alpha\beta\gamma\delta}u _{\alpha\beta}u_{\gamma\delta}\] \[- \frac{1}{2}{\cal A}^{\alpha\beta\gamma\delta}u_{\alpha\beta}q_{ \gamma\delta}+\frac{1}{8}{\cal A}^{\alpha\beta\gamma\delta}q_{\alpha\beta}q_{ \gamma\delta}\;.\] This form of the energy uncovers the elastic interactions between the induced quadrupoles: the first term in the second row represents the elastic interaction of the quadrupole \(q\) at point \({\bf x}\) with the background stress and all the other quadrupoles, and the last term represents the self-interaction elastic energy corresponding to the energy stored in the elastic field induced by a single quadrupole. Another important contribution to the self-interaction term is the work done on the system in order to nucleate the quadrupole core.
This material-dependent property therefore contributes to the work term in Eq.(27), \[{\cal W}={\cal W}[q]\;. \tag{29}\] Here \({\cal W}\) is a functional whose specific form depends on the underlying screening mechanism and material properties. At this point we draw inspiration from the electrostatic analogue, specifically from Eq.(7) which builds on the multipole expansion, and write the general form of \({\cal W}\) reflecting screening by quadrupoles, dipoles, and monopoles \[{\cal W}=\frac{1}{2}\Lambda^{\rm Q}_{\alpha\beta\gamma\delta}Q^{\alpha\beta}Q^ {\gamma\delta}+\frac{1}{2}\Lambda^{\rm P}_{\alpha\beta}P^{\alpha}P^{\beta}+ \frac{1}{2}\Lambda^{\rm M}M^{2}\;, \tag{30}\] where \[Q^{\alpha\beta}=\bar{\varepsilon}^{\alpha\mu}\bar{\varepsilon}^{\beta\nu}q_{\mu \nu}\,,\quad P^{\alpha}=\bar{\nabla}_{\mu}Q^{\alpha\mu}\,,\quad M=\bar{\nabla}_{\alpha\beta}Q^{\alpha\beta}. \tag{31}\] From homogeneity, isotropy, and the dimensions of \(\mathcal{W}\) we find \[\Lambda^{\mathrm{Q}}_{\alpha\beta\gamma\delta} = \lambda_{Q}\bar{g}^{0}_{\alpha\beta}\bar{g}^{0}_{\gamma\delta}+ \mu_{Q}\left(\bar{g}^{0}_{\alpha\gamma}\bar{g}^{0}_{\beta\delta}+\bar{g}^{0}_ {\alpha\delta}\bar{g}^{0}_{\beta\gamma}\right)\] \[\Lambda^{\mathrm{P}}_{\alpha\beta} = \tfrac{1}{2}Y\ell^{2}_{P}\bar{g}^{0}_{\alpha\beta}\,\] \[\Lambda^{\mathrm{M}} = Y\ell^{4}_{M}\, \tag{32}\] with \(Y\) the Young's modulus and \(\ell_{\mathrm{P}},\ell_{\mathrm{M}}\) the typical length scales associated with each screening multipole. The quadrupole term in Eq.(30) represents the nucleation cost of a quadrupole field describing a distribution of local metric perturbations to \(\bar{g}^{0}\). In this case the anelastic response of the material is quantified by the value of \(Q\), describing the average uniform Eshelby-like deformation. This is similar to the weak screening by dislocation pairs (quadrupoles) in the solid phase of 2d crystalline materials.
The second term in Eq.(30) describes the effective nucleation cost for dipoles that emerge from a non-uniform distribution of quadrupoles. In this case the anelastic response of the material is quantified by the spatial variation of \(Q\) encoded in its divergence, and is similar to screening by dislocations (dipoles) in the hexatic phase of 2d crystalline materials. The last term in Eq.(30) describes the effective nucleation cost for monopoles, which is analogous to screening by disclinations (monopoles) in a melted 2d crystal. The geometric realization of the screening quadrupole and dipole is visualized in Fig. 3, where the semi-transparent and opaque configurations describe the rest states before and after the anelastic deformations, on a finite region. These anelastic deformations are derived by calculating the displacement field induced by a uniform distribution of each multipole: The deformation induced by a uniform \(Q\) corresponds to a uniform strain and is visualized in Fig. 3(a). To interpret the dipole term we take a spatially varying quadrupole with uniform dipole \(\mathbf{P}=P_{0}\hat{y}\). The induced deformation is visualized in Fig. 3(b), indicating a non-Eshelby deformation that is of lower order in the multipole expansion. This is analogous to creating an electric monopole from a nonuniform dipole field. As for the monopole term in Eq.(30), this screening mechanism induces non-zero curvature, and thus cannot be visualized via a planar deformation. According to Eq.(30) and Eq.(32), in principle all three screening mechanisms can act simultaneously. However, elastic materials correspond to large \(\ell_{P}\) and \(\ell_{M}\), suppressing the nucleation of dipoles and monopoles. When \(\lambda_{\mathrm{Q}},\mu_{\mathrm{Q}}\to 0\) the nucleation cost of dipoles (the scale \(\ell_{P}\)) may become finite, and when \(\ell_{P}\to 0\), the cost of monopoles (\(\ell_{M}\)) may become finite as well. 
This hierarchy of screening is based on the scale separation of \(\ell_{P},\ell_{M}\) and is in line with the scale separation discussed after Eq.(7) in the electrostatic analog. It is also analogous to the hierarchy of Solid-Hexatic-Liquid phases, where dipole and monopole screenings correspond to the unbinding of dislocations (dipoles) and disclinations (monopoles) with finite nucleation energy in the hexatic and liquid phases, respectively [3; 11]. The mapping between the theories is discussed in Sec. VII. In light of this argument, in what follows we study the mechanics of the three screening modes separately, and we assume three distinct situations in which each of the terms in Eq.(30) dominates. ## IV Equilibrium equations Here we derive equilibrium equations for each of the quadrupole, dipole, and monopole screening regimes. The detailed derivation is given in App. A. The equilibrium equations are derived from the variation of an energy with respect to the embedding \(\phi\) describing the configuration, and with respect to the induced quadrupole field \(q\). Since \(\mathcal{W}\) is independent of the configuration, the variation with respect to \(\phi\) is the same in the different screening regimes. 
Explicitly, the mechanical free energy to be minimized is \[F = \int_{\mathcal{M}}\left(\frac{1}{2}\mathcal{A}^{\alpha\beta \gamma\delta}u^{\mathrm{el}}_{\alpha\beta}u^{\mathrm{el}}_{\gamma\delta}- \mathcal{W}[q]\right)\,\mathrm{d}S_{\bar{g}^{0}} \tag{33}\] \[- \int_{\partial\mathcal{M}}\mathbf{t}\cdot\mathbf{d}\,\mathrm{d}l _{\bar{g}^{0}}\;.\] Upon defining the elastic stress \[\sigma^{\alpha\beta}_{\mathrm{el}}=\mathcal{A}^{\alpha\beta\gamma\delta}u^{ \mathrm{el}}_{\gamma\delta}=\frac{1}{2}\mathcal{A}^{\alpha\beta\gamma\delta} \left(g_{\gamma\delta}-\bar{g}^{0}_{\gamma\delta}-q_{\gamma\delta}\right) \tag{34}\] we find the equilibrium equation \[\bar{\nabla}_{\mu}\sigma^{\mu\nu}_{\mathrm{el}}+\left(\Gamma^{\nu}_{\alpha \beta}-\bar{\Gamma}^{\nu}_{\alpha\beta}\right)\sigma^{\alpha\beta}_{\mathrm{el} }=0 \tag{35}\] along with the boundary conditions \[n_{\alpha}\sigma^{\alpha\beta}_{\mathrm{el}}=t^{\beta}\;, \tag{36}\] justifying our definition of the elastic stress tensor.

Figure 3: Anelastic deformations induced by (a) a uniform quadrupole and (b) a uniform dipole on a finite region. The deformed states are superimposed on the (semi-transparent) undeformed configuration.

We emphasize that from the solutions for the stress \(\sigma_{\rm el}\) and the induced charges \(q\) we can recover the actual metric through \[g_{\alpha\beta}=\bar{g}_{\alpha\beta}^{0}+q_{\alpha\beta}+2{\cal A}_{\alpha\beta \gamma\delta}\sigma_{\rm el}^{\gamma\delta}\;. \tag{37}\] Here \({\cal A}_{\alpha\beta\gamma\delta}\) denotes the inverse elastic tensor. To recover the actual metric and configuration in equilibrium, Eq.(35) should be supplemented with an equation for the induced screening charges, obtained by varying the energy Eq.(33) with respect to \(q\) \[\delta_{q}F=\int_{\cal M}\left(-\frac{1}{2}\sigma_{\rm el}^{\alpha\beta} \delta q_{\alpha\beta}-\delta_{q}{\cal W}\right)\,{\rm d}S_{\bar{g}^{0}}\;. 
\tag{38}\] Next we perform the variation of \({\cal W}\), which is shown to depend strongly on the specific screening regime. _Quadrupole screening:_ In this case \[{\cal W}=\frac{1}{2}\Lambda_{\alpha\beta\gamma\delta}^{\rm Q}Q^{\alpha\beta}Q^ {\gamma\delta}=\frac{1}{2}\Lambda_{\rm q}^{\alpha\beta\gamma\delta}q_{\alpha \beta}q_{\gamma\delta}\;, \tag{39}\] with \(\Lambda_{\rm q}\) proportional to \(\Lambda^{\rm Q}\) (see App. C). Upon varying the total energy with respect to \(q\) we find a linear relation between the induced quadrupole and the elastic stress \[\sigma_{\rm el}^{\alpha\beta}+2\varepsilon^{\alpha\mu}\bar{\varepsilon}^{ \beta\nu}\Lambda_{\mu\nu\gamma\delta}^{\rm Q}Q^{\gamma\delta}=0\;. \tag{40}\] In analogy to models for dielectric media, such as the Maxwell-Garnett model [32; 33], this screening regime describes a material containing a dilute distribution of quadrupoles induced in response to external loads. At this point we can integrate out the quadrupolar degree of freedom by substituting \(q\) either in the constitutive relation Eq.(34) or in the energy Eq.(33). In both cases we end up with an effective elastic energy \(F_{Q}\) that only depends on the total strain \[F_{Q}=\int_{\cal M}\frac{1}{2}\tilde{\cal A}^{\alpha\beta\gamma\delta}u_{ \alpha\beta}u_{\gamma\delta}\,{\rm d}S_{\bar{g}^{0}}-\int_{\partial{\cal M}}{ \bf t}\cdot{\bf d}\,{\rm d}l_{\bar{g}^{0}}\;, \tag{41}\] where \(\tilde{\cal A}\) is an effective elastic tensor, given explicitly in App. C, encoding the mechanical effect of the induced quadrupoles and leading to a quasi-elastic theory. This result is also similar to dielectrics, where screening by electric dipoles re-scales the dielectric constants without otherwise modifying the theory. _Dipole screening:_ In this case \[{\cal W}=\frac{1}{2}\Lambda_{\alpha\beta}^{\rm P}P^{\alpha}P^{\beta}=\frac{1} {2}\Lambda_{\alpha\beta}^{\rm P}(\bar{\nabla}_{\mu}Q^{\alpha\mu})(\bar{\nabla} _{\nu}Q^{\beta\nu})\;. 
\tag{42}\] Upon substituting the relation between \(Q\) and \(q\), and varying \({\cal W}\) with respect to \(q\), we find \[\sigma_{\rm el}^{\alpha\beta}+\tfrac{1}{2}Y\ell_{P}^{2}\bar{\varepsilon}^{\mu \alpha}\bar{\varepsilon}^{\nu\beta}\left(\bar{\nabla}_{\mu}P_{\nu}+\bar{\nabla} _{\nu}P_{\mu}\right)=0 \tag{43}\] along with the boundary condition \[\bar{\varepsilon}^{\mu\alpha}\bar{\varepsilon}^{\nu\beta}\left(n_{\mu}P_{\nu} +n_{\nu}P_{\mu}\right)=0\;. \tag{44}\] Contrary to the quadrupole screening regime, where a linear relation between stress and induced quadrupoles holds, here the stress is linearly proportional to the second gradient of the induced quadrupole field. An immediate consequence is the relation between the elastic pressure and the induced isotropic quadrupole \[{\rm Tr}\,\sigma_{\rm el}=\bar{g}_{\alpha\beta}\sigma_{\rm el}^{\alpha\beta} =-Y\ell_{P}^{2}\bar{\nabla}_{\mu\nu}Q^{\mu\nu}\;. \tag{45}\] This situation is similar to its electrostatic analog: in a dielectric the induced dipoles are linearly proportional to the electric field, whereas in Debye-Huckel theory the electric field is proportional to the second gradient of the induced dipoles, as in Eq.(9). _Monopole screening:_ In this case \[{\cal W}=\frac{1}{2}\Lambda^{\rm M}M^{2}=\frac{1}{2}\Lambda^{\rm M}(\bar{ \nabla}_{\alpha\beta}Q^{\alpha\beta})(\bar{\nabla}_{\gamma\delta}Q^{\gamma \delta}) \tag{46}\] and from the variation of \({\cal W}\) we find \[\sigma_{\rm el}^{\rho\sigma}+Y\ell_{M}^{4}\varepsilon^{\gamma\rho}\bar{ \varepsilon}^{\delta\sigma}(\bar{\nabla}_{\gamma\delta}\bar{\nabla}_{\alpha \beta}Q^{\alpha\beta})=0\;, \tag{47}\] with the boundary condition \[\bar{\varepsilon}^{\mu\alpha}\bar{\varepsilon}^{\nu\beta}\left(n_{\mu}\bar{ \nabla}_{\nu}M+n_{\nu}\bar{\nabla}_{\mu}M\right)=0\;. 
\tag{48}\] As in the dipole screening regime, here too the pressure, that is, the trace of the stress, is useful when integrating out the quadrupolar degree of freedom, and it takes the form \[{\rm Tr}\,\sigma_{\rm el}=-\Lambda^{\rm M}(\bar{\Delta}\bar{\nabla}_{\alpha \beta}Q^{\alpha\beta})\;. \tag{49}\] In summary, the equilibrium equations for each screening mode are \[\begin{array}{ll}&\mbox{Mode}\\ \sigma_{\rm el}^{\alpha\beta}=-\bar{\varepsilon}^{\alpha\mu}\bar{\varepsilon}^{ \beta\nu}\Lambda_{\mu\nu\gamma\delta}^{\rm Q}Q^{\gamma\delta}&\mbox{Quadrupole}\\ \sigma_{\rm el}^{\alpha\beta}=-\tfrac{1}{2}Y\ell_{P}^{2}\bar{\varepsilon}^{\mu \alpha}\bar{\varepsilon}^{\nu\beta}\left(\bar{\nabla}_{\mu}P_{\nu}+\bar{\nabla} _{\nu}P_{\mu}\right)&\mbox{Dipole}\\ \sigma_{\rm el}^{\alpha\beta}=-Y\ell_{M}^{4}\bar{\varepsilon}^{\gamma\alpha} \bar{\varepsilon}^{\delta\beta}(\bar{\nabla}_{\gamma\delta}\bar{\nabla}_{\mu \nu}Q^{\mu\nu})&\mbox{Monopole}\end{array} \tag{50}\] ## V Potential theory To solve the equilibrium equations for the stress and the induced charges, we develop a potential theory generalizing the Airy stress-function approach. In this approach a representation of the stress solving Eq.(35) is given in terms of a scalar function \[\sigma_{\rm el}^{\mu\nu}=\frac{1}{\sqrt{|\bar{g}|}}\frac{1}{\sqrt{|\bar{g}|}} \varepsilon^{\mu\alpha}\varepsilon^{\nu\beta}\nabla_{\alpha\beta}^{g}\chi\;. \tag{51}\] A geometric compatibility condition is needed to determine the stress function \(\chi\), namely the requirement that the Gaussian curvature of the actual metric \(g\) vanishes. From the definition of stress and strain we get an expression for the actual metric \[g_{\alpha\beta}=\bar{g}^{0}_{\alpha\beta}+\varepsilon_{\alpha\mu}\varepsilon_{ \beta\nu}Q^{\mu\nu}+2\mathcal{A}_{\alpha\beta\gamma\delta}\sigma^{\gamma\delta}_ {\rm el}\;, \tag{52}\] which is implicit due to the complicated dependence of \(\sigma_{\rm el}\) on \(g\). 
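In the flat limit (\(g\approx\bar{g}\approx\delta\) in Cartesian coordinates, so the two connections coincide and Eq.(35) reduces to \(\partial_{\mu}\sigma^{\mu\nu}_{\rm el}=0\)), the representation Eq.(51) becomes the classical Airy form, which satisfies the equilibrium equation identically for any \(\chi\). A short symbolic check of this flat-limit statement (the flat limit is our simplifying assumption):

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)
chi = sp.Function('chi')(x, y)            # an arbitrary stress function
eps = sp.Matrix([[0, 1], [-1, 0]])        # flat-space Levi-Civita symbol

# sigma^{mu nu} = eps^{mu a} eps^{nu b} d_a d_b chi   (flat limit of Eq. 51)
sigma = sp.Matrix(2, 2, lambda m, n: sum(
    eps[m, a] * eps[n, b] * sp.diff(chi, coords[a], coords[b])
    for a in range(2) for b in range(2)))

# the divergence d_mu sigma^{mu nu} vanishes identically, whatever chi is
div = [sp.simplify(sum(sp.diff(sigma[m, n], coords[m]) for m in range(2)))
       for n in range(2)]
print(div)  # [0, 0]
```

Equilibrium is thus automatic in this representation, and only the compatibility condition on the curvature remains to determine \(\chi\), as the text goes on to impose.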
To calculate the curvature of \(g\) and enforce the geometric compatibility condition, we now assume that both the elastic and the total strains are small, that is \(g\approx\bar{g}\approx\bar{g}^{0}\). Within this approximation a perturbative expansion for the stress potential is applicable. The leading-order term of the curvature takes the form \[0=\bar{K}^{0}+\nabla_{\alpha\beta}Q^{\alpha\beta}-\frac{1}{Y}\Delta\Delta\chi\;. \tag{53}\] The term \(\nabla_{\alpha\beta}Q^{\alpha\beta}\) represents the induced effective monopoles, which depend on the specific screening regime. To close the equation, and integrate out the quadrupolar degrees of freedom, we determine the induced effective monopoles by substituting Eq.(51) in Eq.(50). We find (see App. D for details) \[\bar{\nabla}_{\alpha\beta}Q^{\alpha\beta}=-\frac{1}{Y}\times\left\{\begin{array} []{ll}0&\mbox{Quadrupole}\\ \ell_{P}^{-2}\bar{\Delta}\chi&\mbox{Dipole}\\ \ell_{M}^{-4}\chi&\mbox{Monopole}\end{array}\right. \tag{54}\] In the third equation, corresponding to the monopole screening regime, the induced monopole is determined up to an arbitrary function \(\chi_{g}\) satisfying \(\bar{\nabla}_{\alpha\beta}\chi_{g}=0\), and we choose a gauge with \(\chi_{g}=0\). Having found the explicit expression of \(\bar{\nabla}_{\alpha\beta}Q^{\alpha\beta}\) in each screening mode, Eq.(53) is now closed \[\begin{array}{ll}\mbox{Screened Stress Function}&\mbox{Mode}\\ \frac{1}{Y}\Delta\Delta\chi=\bar{K}^{0}&\mbox{Quadrupole}\\ \frac{1}{Y}\Delta\Delta\chi+\frac{1}{Y}\ell_{P}^{-2}\Delta\chi= \bar{K}^{0}&\mbox{Dipole}\\ \frac{1}{Y}\Delta\Delta\chi+\frac{1}{Y}\ell_{M}^{-4}\chi=\bar{K}^{ 0}&\mbox{Monopole}\end{array} \tag{55}\] These equations were derived under the assumption of scale separation discussed in the introduction. 
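As a sanity check on the dipole row of Eq.(55) in the source-free case (\(\bar{K}^{0}=0\), flat background): \(J_{0}(r/\ell_{P})\) and \(Y_{0}(r/\ell_{P})\) satisfy the radial Helmholtz equation \(\Delta\chi+\ell_{P}^{-2}\chi=0\), while \(\log r\) is harmonic away from the origin, so each is annihilated by \(\Delta\Delta+\ell_{P}^{-2}\Delta\). A numerical sketch using mpmath's high-precision differentiation (the flat radial limit and the specific numbers are our assumptions):

```python
import mpmath as mp

mp.mp.dps = 30                             # work at high precision
ell = mp.mpf('0.7')                        # the dipole screening length l_P
r0 = mp.mpf('1.3')                         # a test point away from the origin

def lap(f, r):
    # radially symmetric flat Laplacian: f'' + f'/r
    return mp.diff(f, r, 2) + mp.diff(f, r) / r

# J0(r/l) and Y0(r/l) solve Delta chi + chi/l^2 = 0, so applying Delta once
# more shows they are annihilated by Delta Delta + l^-2 Delta
for chi in (lambda r: mp.besselj(0, r / ell), lambda r: mp.bessely(0, r / ell)):
    assert abs(lap(chi, r0) + chi(r0) / ell**2) < mp.mpf('1e-15')

# log r is harmonic for r > 0, so it is annihilated outright
assert abs(lap(mp.log, r0)) < mp.mpf('1e-15')
print("radial solutions of the dipole-screened equation verified")
```

Together with the constant, these are the four radial basis functions that reappear in the screened Green's-function construction of the next section.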
Within this assumption we can combine them into a single equation that holds when screening is dominated by either quadrupole, dipole, or monopole charges \[\Delta\Delta\chi+\ell_{P}^{-2}\Delta\chi+\ell_{M}^{-4}\chi=Y\bar{K}^{0}\;. \tag{56}\] Once the equation for \(\chi\) is solved, the stress tensor can be calculated and boundary conditions enforced to uniquely determine \(\chi\). However, to recover the displacement field it is required to calculate the actual metric of the embedding, and therefore the induced quadrupoles. To this end the solution for the elastic stress is substituted in Eq.(50), which is then solved for the induced quadrupoles, subject to the boundary conditions (Eq.(44) in the dipole regime and Eq.(48) in the monopole regime). At this point we identify an explicit link with the theory of melting in 2d crystals. It was recently shown that the theory of defect-mediated melting is dual to a sine-Gordon-like Hamiltonian [12; 13]. Upon deriving the equilibrium equations from the proposed Hamiltonian, Eq.(56) is recovered. This observation suggests that the dipole screening regime developed in this work forms a mechanical realization of the hexatic phase, which is traditionally associated with structural properties. A comment on gauge freedom is necessary at this point: one may suspect that the explicit dependence of Eq.(56) on the value of the stress function \(\chi\) violates the gauge freedom of the stress tensor. However, this only reflects the gauge choice made when solving for the induced effective monopole in the monopole screening regime, Eq.(54). This is similar to the loss of gauge freedom in Debye-Huckel theory, as in Eq.(12). ## VI Applications The hierarchical form of Eq.(30) suggests that solids with a quadrupolar relaxation mechanism are prone to dipole screening. This hypothesis, if true, unifies a variety of systems that are fundamentally different from each other under the same screening theory. 
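In the monopole-dominated limit of Eq.(56) (\(\ell_{P}\to\infty\), and units with \(\ell_{M}=1\), our choices), the source-free equation is \(\Delta\Delta\chi+\chi=0\), whose radially symmetric solutions are the Kelvin functions \(\mathrm{ber},\mathrm{bei},\mathrm{ker},\mathrm{kei}\): they satisfy \(\Delta\,\mathrm{ber}=-\mathrm{bei}\) and \(\Delta\,\mathrm{bei}=\mathrm{ber}\) (likewise for \(\mathrm{ker},\mathrm{kei}\)), so each is annihilated by \(\Delta\Delta+1\). A numerical check of this pairing with mpmath:

```python
import mpmath as mp

mp.mp.dps = 30
r0 = mp.mpf('1.3')
tol = mp.mpf('1e-15')

def lap(f, r):
    # radially symmetric flat Laplacian: f'' + f'/r
    return mp.diff(f, r, 2) + mp.diff(f, r) / r

# Kelvin pairs: Delta ber = -bei, Delta bei = ber, and the same for ker, kei,
# so each function separately solves Delta Delta chi + chi = 0
assert abs(lap(lambda r: mp.ber(0, r), r0) + mp.bei(0, r0)) < tol
assert abs(lap(lambda r: mp.bei(0, r), r0) - mp.ber(0, r0)) < tol
assert abs(lap(lambda r: mp.ker(0, r), r0) + mp.kei(0, r0)) < tol
assert abs(lap(lambda r: mp.kei(0, r), r0) - mp.ker(0, r0)) < tol
print("Kelvin functions solve the monopole-screened radial equation")
```

The decaying member, \(\mathrm{kei}(r/\ell_{M})\), is the classical point-load solution of a plate on an elastic foundation, and is closely related to the Hankel-function form of the monopole-screened Green's function derived below.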
For example, cellular epithelial tissue responds to mechanical loads by cell rearrangements [15; 34] and shape changes [35; 36], both quadrupolar in nature. Holes in perforated ("holey") mechanical metamaterials release stresses by forming imaginary quadrupoles [9; 37]. Non-uniform hole sizes, as in disordered metamaterials, will induce spatially varying quadrupoles, potentially leading to dipole screening. Last but not least, screening can form in wrinkled and crumpled thin sheets. The system shown in Fig. 1(d) demonstrates the quadrupolar nature of local wrinkles, which can merge to form long wrinkles, as observed in other scenarios such as [38]. If a wrinkle ends in the bulk it leaves a free dipole, supporting the possibility of dipole screening. We therefore expect our theory to form an effective 2d description of certain wrinkled systems, holey metamaterials, glasses, tissue models, and granular matter. In the next subsections we study the mechanical implications of dipole and monopole screening in prototypical mechanical scenarios, such as the fields induced by sources of stresses (defects) and the interactions between them. ### Screened Green's function A prominent manifestation of screening is the modified form of the potential associated with a point monopole charge. This potential is of importance for two main reasons: (i) its functional form characterizes the nature and effect of screening, and (ii) it forms a Green's function for the non-homogeneous equation Eq.(55). Monopolar elastic charges can be created by the removal or insertion of an angular section. In hexagonal crystalline structures they form 5- or 7-fold disclinations. A metric description of defects generalizes the concept of structural defects to solids with no underlying order, e.g. amorphous solids [27; 39]. 
In analogy with the screened fundamental solution in Debye-Huckel theory, known as the Yukawa potential, we solve Eq.(55) in each screening regime for a monopolar source term \(\bar{K}^{0}=\delta(\mathbf{x})\). To solve the equations it is useful to define a Helmholtz operator \[\mathcal{H}^{\theta}_{\ell}=\Delta+e^{\mathrm{i}\theta}\ell^{-2}\;, \tag{57}\] with which Eq.(55) reads \[\frac{1}{Y}\mathcal{H}^{0}_{0}\mathcal{H}^{0}_{0}\chi=\bar{K}^{0}\qquad\qquad\qquad\mathrm{Quadrupole}\] \[\frac{1}{Y}\mathcal{H}^{0}_{\ell_{P}}\mathcal{H}^{0}_{0}\chi=\bar{K}^{0}\qquad\qquad\mathrm{Dipole} \tag{58}\] \[\frac{1}{Y}\mathcal{H}^{\pi/4}_{\ell_{M}}\mathcal{H}^{-\pi/4}_{\ell_{M}}\chi=\bar{K}^{0}\qquad\mathrm{Monopole}\] An important property of \(\mathcal{H}\) is that the kernels of two different operators are disjoint. Therefore the homogeneous equations in the cases of dipole and monopole screening reduce to pairs of second-order equations. In the case of quadrupole screening the Green's function \(G^{QS}\) coincides with the classical solution of a single disclination. To find the solution in the case of dipole screening we write the general polar-symmetric solutions of the two equations \(\mathcal{H}^{0}_{0}\chi_{D}=0\) and \(\mathcal{H}^{0}_{\ell_{P}}\chi_{D}=0\), hence \[\chi^{DS}(r)=c_{1}\log(r/\ell_{P})+c_{2}J_{0}(r/\ell_{P})+c_{3}Y_{0}(r/\ell_{P })+c_{4}\;. 
\tag{59}\] Similarly, the solution in the case of monopole screening is found by solving \(\mathcal{H}^{-\pi/4}_{\ell_{M}}\chi_{M}=0\) and \(\mathcal{H}^{\pi/4}_{\ell_{M}}\chi_{M}=0\), and reads \[\chi^{MS}(r) = d_{1}J_{0}\left(e^{\frac{\pi\mathrm{i}}{4}}\,r/\ell_{M}\right)+ d_{2}J_{0}\left(e^{\frac{3\pi\mathrm{i}}{4}}\,r/\ell_{M}\right)\] \[+ d_{3}Y_{0}\left(e^{\frac{\pi\mathrm{i}}{4}}\,r/\ell_{M}\right)+ d_{4}Y_{0}\left(e^{\frac{3\pi\mathrm{i}}{4}}\,r/\ell_{M}\right)\] The coefficients \(c_{i}\) and \(d_{i}\) are determined by boundary conditions, and by a topological condition obtained by integrating both sides of Eq.(55) with \(\bar{K}=\delta(\mathbf{x})\) over the area. In the case of monopole screening we also set the value of the stress function at infinity, reflecting the gauge choice made in Eq.(54). The case of traction-free boundary conditions in a finite system is detailed in App. F. The Green's function is obtained by solving the problem in an infinite system with vanishing stress at infinity. The solutions for the three screening regimes are plotted in Fig. 4 and are given by \[G^{\mathrm{QS}}(\mathbf{x},\mathbf{x}^{\prime}) = \frac{Y\,|\mathbf{x}-\mathbf{x}^{\prime}|^{2}}{8\pi}\log\frac{| \mathbf{x}-\mathbf{x}^{\prime}|}{\ell_{P}}\;,\] \[G^{\mathrm{DS}}(\mathbf{x},\mathbf{x}^{\prime}) = \frac{Y\,\ell_{P}^{2}}{2\pi}\log\frac{|\mathbf{x}-\mathbf{x}^{\prime }|}{\ell_{P}}\;,\] \[G^{\mathrm{MS}}(\mathbf{x},\mathbf{x}^{\prime}) =\] \[\frac{Y\ell_{M}^{2}}{8}\left[H_{0}\left(e^{\frac{\mathrm{i}\pi}{4} }\frac{|\mathbf{x}-\mathbf{x}^{\prime}|}{\ell_{M}}\right)-H_{0}\left(e^{- \frac{\mathrm{i}\pi}{4}}\frac{|\mathbf{x}-\mathbf{x}^{\prime}|}{\ell_{M}}\right)\right] \tag{60}\] with \(H_{0}\) the Hankel function defined by \[H_{0}(z)=J_{0}(z)+\mathrm{i}Y_{0}(z)\;. 
\tag{61}\] The Green's function screened by dipoles, \(G^{\mathrm{DS}}\) in Eq.(60), is consistent with the potential induced by a disclination in the hexatic phase, and forms the basis for the sequential transition from the hexatic to the fluid phase. Furthermore, this result provides a potential explanation for a problem presented in a visionary study, Ref. [40]. In that work the authors studied the elastic fields induced by edge and screw dislocations in a Lennard-Jones model of an amorphous solid. They discovered that the stress fields of a screw dislocation are elastic-like, whereas those of an edge dislocation are smeared out. In our theory an edge dislocation is dipolar and therefore is significantly screened by dipoles, as expressed by \(G^{\mathrm{DS}}\). This is in contrast to the screw dislocation, which is not dipolar and therefore cannot be effectively screened by dipoles. A systematic study of this problem from the perspective of screening, comparing theoretical predictions with numerical simulations of amorphous solids, is left for a future work. ### Screened geometric charges and their interactions In this section we highlight several key results that follow from the fundamental solution \(G^{\mathrm{DS}}\) in the dipole-screening regime. Additionally, we study the interactions between screened geometric charges.

Figure 4: Green's functions associated with the inhomogeneous screened equations Eq.(55), plotted on a semi-log scale. The blue, yellow and green curves represent the stress function associated with a monopole screened by quadrupoles, dipoles, and monopoles.

It was shown in [27; 39] that defects and other sources of stresses can be defined geometrically, regardless of a specific physical model. In this theory sources of stresses are singularities of \(\bar{K}\). For example, dislocations correspond to \(\bar{K}=\mathbf{b}\cdot\nabla\delta(\mathbf{x})\), and an isotropic Eshelby inclusion corresponds to \(\bar{K}=p\Delta\delta(\mathbf{x})\). 
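Because derivatives commute with the screened operators, the fields of these higher-order sources follow by differentiating the monopole solutions. For the dipole-screened case, where \(G^{\rm DS}\propto\log r\), this is elementary to verify symbolically (flat background; the prefactor \(Y\ell_{P}^{2}/2\pi\) is dropped, our simplification):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
r = sp.sqrt(x**2 + y**2)

# dipole-screened monopole potential, up to its constant prefactor
G = sp.log(r)

# gradient: the building block of the screened dislocation field, b.x/r^2
grad_x = sp.simplify(sp.diff(G, x))
print(grad_x)                             # x/(x**2 + y**2)

# the Laplacian vanishes away from the origin: a point expansion is fully
# screened in the bulk and survives only as a delta function at the source
lap = sp.simplify(sp.diff(G, x, 2) + sp.diff(G, y, 2))
print(lap)                                # 0
```

The gradient reproduces the \(\mathbf{b}\cdot\mathbf{x}/r^{2}\) structure of the screened dislocation field, and the vanishing Laplacian is the statement that the screened inclusion field \(\chi_{\rm Iso}\propto\delta(\mathbf{x})\) carries no far field.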
From linearity of Eq.(55), and from the commutation of derivatives with the \(\mathcal{H}\) operator, taking the derivative of both sides with \(\bar{K}=\delta(\mathbf{x})\) yields new solutions for higher-order sources of stresses. For example, the stress function of a dipole described by \(\bar{K}=\mathbf{b}\cdot\nabla\delta(\mathbf{x})\), analogous to a dislocation in the hexatic phase, is \[\chi_{\mathbf{b}}=\mathbf{b}\cdot\nabla G^{\mathrm{DS}}=\frac{Y\ell_{P}^{2}}{2 \pi}\frac{\mathbf{b}\cdot\mathbf{x}}{r^{2}}\;. \tag{62}\] The stresses derived from this solution decay rapidly with \(r\). Upon substituting in the energy density one finds that the total energy of a screened dislocation converges in infinite systems, and reflects only the core energy. The second example is that of an isotropic Eshelby inclusion, whose solution is \[\chi_{\mathrm{Iso}}(r)=p\Delta G^{\mathrm{DS}}(r)=p\delta(\mathbf{x})\;. \tag{63}\] This indicates that an isotropic inclusion in an infinite medium will be completely screened by emergent dipoles. It is important to note that the response to a localized expansion in a finite system is different (see the solution in App. F), and it exhibits spatial oscillations, as previously reported by some of the authors [21; 22; 23; 24; 25; 26]. The stress functions of the screened dislocation and isotropic inclusion solve the homogeneous equation Eq.(55), and thus lie in the kernel of the relevant differential operator. A comprehensive analysis of the kernel of \(\mathcal{H}^{\theta}_{\ell}\) is needed in order to classify and derive all singular solutions; this is an ongoing research topic that will be pursued in future studies. Next we examine the interactions between screened sources of stress. It is well established that in the elastic regime the energy stored in the medium can be represented by the stress function and charge distribution [41] \[U=\int\chi\,\bar{K}\,\mathrm{dVol}\;. \tag{64}\] In App. 
E we show that this relation holds also in the screened regime, hence we can use it to study the interactions between basic sources of stresses. For example, it is known that isotropic inclusions do not interact in the elastic framework [41]. However, still in the elastic framework, a disclination does interact with an inclusion. This is seen by taking \(\bar{K}_{\mathrm{disc}}=q\delta(\mathbf{x})\) and \(\chi_{\mathrm{Iso}}=p\log|\mathbf{x}-\mathbf{x}_{0}|\). From Eq.(64) we find that the interaction between an inclusion and a disclination is \[U=q\,p\log(r)\;, \tag{65}\] where \(r\) is the distance between the two charges. In the case of dipole screening we still have \(\bar{K}_{\mathrm{disc}}=q\delta(\mathbf{x})\); however, the screened stress function of the inclusion is \(\chi_{\mathrm{Iso}}^{DS}=p\delta(\mathbf{x}-\mathbf{x}_{0})\). In that case the interaction is zero: the induced dipole field completely screens out the interaction. The interactions between other multipoles are calculated in the same way. ## VII Summary and discussion In this work, we developed a hierarchy of continuum screening theories that generalize classical elasticity and are expected to be applicable to a variety of different solid-like systems, such as granular materials, cellular tissue, and mechanical metamaterials. While the traditional approach to non-mechanical screening theories is based on statistical and thermodynamic arguments, our theory is based on geometric arguments, under the assumption that a long-wavelength description of screened solids is valid. Based on the conservation laws associated with the geometry of two-dimensional Riemannian manifolds, our theory predicts three states of solid-like matter: quasi-elastic quadrupole-screened, anomalous dipole-screened, and monopole-screened solids. The case of dipole screening exhibits mechanical behavior that is similar to the hexatic phase, and thus forms an intermediate state between a solid and a liquid. 
The existence of dipole screening has been fully confirmed in a series of recent works on granular and glassy matter. The predictions of the monopole screening regime have not yet been observed in athermal systems. Our findings suggest that the current understanding of the jamming transition in granular matter is incomplete. For example, it is widely accepted that upon decreasing the pressure of a dense granular material, at a critical packing fraction the material undergoes an unjamming transition to a liquid-like state that does not support shear. Instead, based on our theory, we expect a sequential transition from a dense granular solid, to a dipole-screened solid-like state, and then to an unjammed state described by monopole screening, similar to the liquid state in the melting of two-dimensional crystals. The effect of mechanical screening, in principle, is not limited to quasi-static deformations, as studied in this work, and is expected to have implications for the mechanics of both inertial and dissipative systems. Furthermore, well-studied phenomena such as fracture can now be studied within the framework of screened elasticity. These and other research questions are left for future study. ###### Acknowledgements. We would like to thank Mokhtar Adda-Bedia, Leo Radzihovsky and Keren Schreiber-Re'em for stimulating discussions. The research was supported by the Israel Science Foundation grant No. 1441/19. ## Appendix A Derivation of Equilibrium Equation for the elastic stress In this section we derive the nonlinear equilibrium equations for the elastic stress and the corresponding boundary conditions. An important quantity that will recur later is the coordinate transformation of a vector from one coordinate system to another. Consider two manifolds \(\mathcal{M}\), \(\mathcal{N}\) on which coordinate systems are denoted with Greek indices \(\mu,\nu,\ldots\) and Roman indices \(i,j,\ldots\), respectively. 
Given a mapping \(\phi:\mathcal{M}\to\mathcal{N}\), the transformation of a vector from \(\mathcal{M}\) to \(\mathcal{N}\) is given by \[v^{i}_{\mathcal{N}}=\frac{\partial\phi^{i}}{\partial x^{\mu}}v^{\mu}_{\mathcal{ M}}\;. \tag{10}\] The material is modeled as a manifold \(\mathcal{M}\) equipped with a reference metric \(\bar{g}=\bar{g}^{0}+q\). A configuration is an embedding \(\phi:\mathcal{M}\to\mathbb{R}^{2}\), from which an actual metric is defined on \(\mathcal{M}\) as the pull-back of the Euclidean metric on \(\mathbb{R}^{2}\), denoted \(g\). We denote by \(\phi^{*}\) the energy-minimizing configuration in the absence of external loads. The equilibrium equations are derived from an energy variation with respect to the embedding \(\phi\) describing the configuration. The elastic energy to be minimized is \[F=\int_{\mathcal{M}}\mathcal{W}_{\mathrm{el}}(g,\bar{g})\,\mathrm{d}S_{\bar{g }}-\int_{\partial\mathcal{M}}\mathbf{t}\cdot\mathbf{d}\,\mathrm{d}l_{\bar{g}}\;, \tag{11}\] with \(\mathbf{d}=\phi-\phi^{*}\), and \[\mathcal{W}_{\mathrm{el}}(g,\bar{g})=\frac{1}{2}\mathcal{A}^{\alpha\beta\gamma \delta}u^{\mathrm{el}}_{\alpha\beta}u^{\mathrm{el}}_{\gamma\delta}\;. \tag{12}\] Upon defining the elastic stress \[\sigma^{\alpha\beta}_{\mathrm{el}}=\mathcal{A}^{\alpha\beta\gamma\delta}u^{ \mathrm{el}}_{\gamma\delta} \tag{13}\] we find \[\delta_{\phi}F=\int_{\mathcal{M}}\frac{1}{2}\sigma^{\alpha\beta}_{\mathrm{el}} \delta_{\phi}g_{\alpha\beta}\,\mathrm{d}S_{\bar{g}}-\int_{\partial\mathcal{M} }\mathbf{t}\cdot\delta\phi\,\mathrm{d}l_{\bar{g}}\;, \tag{14}\] where we used \(\delta\mathbf{d}=\delta(\phi-\phi^{*})=\delta\phi\). 
Writing the metric variation in terms of the configuration and using \(\delta_{\phi}g_{\alpha\beta}=(\partial_{\alpha}\phi)(\partial_{\beta}\delta \phi)+(\partial_{\alpha}\delta\phi)(\partial_{\beta}\phi)\) we find \[\delta_{\phi}F =\int_{\mathcal{M}}\sigma^{\alpha\beta}_{\mathrm{el}}(\partial_{ \alpha}\phi)(\partial_{\beta}\delta\phi)\,\mathrm{d}S_{\bar{g}}-\int_{\partial \mathcal{M}}\mathbf{t}\cdot\delta\phi\,\mathrm{d}l_{\bar{g}}\] \[=\oint_{\partial\mathcal{M}}\sigma^{\alpha\beta}_{\mathrm{el}}n_{ \beta}(\partial_{\alpha}\phi)\delta\phi\,\mathrm{d}l_{\bar{g}}\] \[-\int_{\mathcal{M}}\frac{1}{\sqrt{\bar{g}}}\partial_{\beta}\left( \sigma^{\alpha\beta}_{\mathrm{el}}(\partial_{\alpha}\phi)\sqrt{\bar{g}}\right) \delta\phi\,\mathrm{d}S_{\bar{g}}\] \[-\oint_{\partial\mathcal{M}}\mathbf{t}\cdot\delta\phi\,\mathrm{d }l_{\bar{g}}\;. \tag{15}\] In the second integral we note that the integrand can be written as \[\mathrm{div}_{\beta}\sigma^{\alpha\beta}_{\mathrm{el}}\partial_{ \alpha}\phi \equiv\frac{1}{\sqrt{\bar{g}}}\partial_{\beta}\left(\sigma^{\alpha \beta}_{\mathrm{el}}(\partial_{\alpha}\phi)\sqrt{\bar{g}}\right)\] \[=\left(\nabla_{\beta}\sigma^{\alpha\beta}_{\mathrm{el}}+\left( \bar{\Gamma}^{\nu}_{\nu\beta}-\Gamma^{\nu}_{\nu\beta}\right)\sigma^{\alpha \beta}_{\mathrm{el}}\right)\partial_{\alpha}\phi\;. \] In the last integral we transform the vector \(\mathbf{t}\) to the reference manifold by setting \(\mathbf{t}=t^{\mu}\partial_{\mu}\phi\). In this form the traction forces are defined on the reference manifold, which is equivalent to saying that the positions at which forces are applied move with the material, as in Lagrangian coordinates. 
Therefore the variation takes the form \[\delta_{\phi}F =\oint_{\partial\mathcal{M}}\left(\sigma^{\alpha\beta}_{\mathrm{ el}}n_{\beta}-t^{\alpha}\right)(\partial_{\alpha}\phi)\delta\phi\,\mathrm{d}l_{ \bar{g}}\] \[-\int_{\mathcal{M}}\mathrm{div}_{\beta}\sigma^{\alpha\beta}_{ \mathrm{el}}\partial_{\alpha}\phi\,\delta\phi\,\mathrm{d}S_{\bar{g}}\;. \tag{16}\] We conclude that the equilibrium equation is \[\bar{\nabla}_{\mu}\sigma^{\mu\nu}_{\mathrm{el}}+\left(\Gamma^{\nu}_{\alpha \beta}-\bar{\Gamma}^{\nu}_{\alpha\beta}\right)\sigma^{\alpha\beta}_{\mathrm{el} }=0, \tag{17}\] along with the boundary conditions \[n_{\alpha}\sigma^{\alpha\beta}_{\mathrm{el}}=t^{\beta}\,. \tag{18}\] ## Appendix B Derivation of Equilibrium Equation for the induced quadrupoles Here we derive the relation between the elastic stress and the induced quadrupoles in each screening regime. **Quadrupole screening** The variation of the work term Eq.(39) with respect to \(q\) yields \[\int_{\mathcal{M}}\delta_{q}\mathcal{W}\,\mathrm{d}S_{\bar{g}^{0}}=\int_{ \mathcal{M}}\left(-\Lambda^{\alpha\beta\gamma\delta}_{\mathrm{q}}q_{\gamma \delta}\delta q_{\alpha\beta}\right)\,\mathrm{d}S_{\bar{g}^{0}}\;. \tag{19}\] Substituting in Eq.(38) and requiring the variation to vanish we get a linear relation between the induced quadrupole and the elastic stress \[\sigma^{\alpha\beta}_{\mathrm{el}}+2\Lambda^{\alpha\beta\gamma\delta}_{ \mathrm{q}}q_{\gamma\delta}=0\;. \tag{20}\] Substituting the expressions for \(q\) and \(\Lambda_{q}\) in terms of \(Q\) and \(\Lambda^{Q}\) we obtain the first equation in Eq.(50). 
**Dipole Screening** The variation of the work term in the dipole screening regime reads \[\int_{\mathcal{M}}\delta\mathcal{W}\,\mathrm{d}S_{\bar{g}^{0}}=\int_{\partial\mathcal{M}}\frac{1}{2}\lambda_{P}\bar{\varepsilon}^{\mu\alpha}\bar{\varepsilon}^{\nu\beta}\left(n_{\mu}P_{\nu}+n_{\nu}P_{\mu}\right)\delta q_{\alpha\beta}\,\mathrm{d}l_{\bar{g}^{0}}\] \[-\int_{\mathcal{M}}\frac{1}{2}\lambda_{P}\bar{\varepsilon}^{\mu\alpha}\bar{\varepsilon}^{\nu\beta}\left(\bar{\nabla}_{\mu}P_{\nu}+\bar{\nabla}_{\nu}P_{\mu}\right)\delta q_{\alpha\beta}\,\mathrm{d}S_{\bar{g}^{0}}\;. \tag{21}\] Substituting in Eq.(38) and requiring the total variation to vanish we obtain the second equation in Eq.(50). **Monopole Screening** To perform the variation in the monopole regime two integrations by parts are required. This is seen from the following: \[\delta\mathcal{W} =\bar{\nabla}_{\gamma\delta}\left(\Lambda^{\mathrm{M}}(\bar{\nabla}_{\alpha\beta}Q^{\alpha\beta})(\delta Q^{\gamma\delta})\sqrt{\bar{g}_{0}}\right)\] \[-2\nabla_{\delta}\left(\Lambda^{\mathrm{M}}(\bar{\nabla}_{\gamma}\bar{\nabla}_{\alpha\beta}Q^{\alpha\beta})(\delta Q^{\gamma\delta})\sqrt{\bar{g}_{0}}\right)\] \[+\left(\Lambda^{\mathrm{M}}(\bar{\nabla}_{\gamma\delta}\bar{\nabla}_{\alpha\beta}Q^{\alpha\beta})(\delta Q^{\gamma\delta})\sqrt{\bar{g}_{0}}\right)\;. \tag{40}\] We substitute \(M=\bar{\nabla}_{\alpha\beta}Q^{\alpha\beta}\), and note that the boundary term produced by the double integration by parts on the first term vanishes because \(\partial\partial\mathcal{M}=0\), that is, the boundary of the boundary is empty. 
The variation therefore takes the form \[\int_{\mathcal{M}}\delta\mathcal{W}\,\mathrm{d}S_{\bar{g}^{0}}= \int_{\mathcal{M}}\lambda_{M}\bar{\varepsilon}^{\mu\alpha}\bar{\varepsilon}^{\nu\beta}\left(\bar{\nabla}_{\mu\nu}M\right)\delta q_{\alpha\beta}\,\mathrm{d}S_{\bar{g}^{0}}\] \[-\int_{\partial\mathcal{M}}\lambda_{M}\bar{\varepsilon}^{\mu\alpha}\bar{\varepsilon}^{\nu\beta}\left(n_{\mu}\bar{\nabla}_{\nu}M+n_{\nu}\bar{\nabla}_{\mu}M\right)\delta q_{\alpha\beta}\,\mathrm{d}l_{\bar{g}^{0}}\;.\] Substituting in Eq.(38) and requiring the total variation to vanish we obtain the third equation in Eq.(50). ## Appendix C The normalized elastic tensor Here we relate \(\Lambda^{\mathrm{Q}}\) to \(\Lambda^{\mathrm{q}}\) as shown in Eq.(39). Since \(Q^{\alpha\beta}=\varepsilon^{\alpha\mu}\varepsilon^{\beta\nu}q_{\mu\nu}\) we find \[\mathcal{W}_{Q} =\frac{1}{2}\Lambda^{\mathrm{Q}}_{\alpha\beta\gamma\delta}Q^{\alpha\beta}Q^{\gamma\delta} \tag{41}\] \[=\frac{1}{2}\Lambda^{\mathrm{Q}}_{\alpha\beta\gamma\delta}\varepsilon^{\alpha\mu}\varepsilon^{\beta\nu}q_{\mu\nu}\varepsilon^{\gamma\rho}\varepsilon^{\delta\sigma}q_{\rho\sigma} \tag{42}\] \[\equiv\frac{1}{2}\Lambda^{\mu\nu\rho\sigma}_{\mathrm{q}}q_{\mu\nu}q_{\rho\sigma} \tag{43}\] with \[\Lambda^{\mu\nu\rho\sigma}_{\mathrm{q}}=\Lambda^{\mathrm{Q}}_{\alpha\beta\gamma\delta}\varepsilon^{\alpha\mu}\varepsilon^{\beta\nu}\varepsilon^{\gamma\rho}\varepsilon^{\delta\sigma} \tag{44}\] Next we show how \(\Lambda_{\mathrm{q}}\) normalizes the elastic tensor in Eq.(42): \[q_{\alpha\beta}=-\frac{1}{2}\Lambda^{\mathrm{q}}_{\alpha\beta\gamma\delta}\sigma^{\gamma\delta}_{\mathrm{el}}\;. \tag{45}\] Substituting in Eq. 
34 we find \[\sigma^{\alpha\beta}_{\mathrm{el}} =\mathcal{A}^{\alpha\beta\gamma\delta}u_{\gamma\delta}-\frac{1}{2}\mathcal{A}^{\alpha\beta\gamma\delta}q_{\gamma\delta}\] \[=\mathcal{A}^{\alpha\beta\gamma\delta}u_{\gamma\delta}-\frac{1}{2}\mathcal{A}^{\alpha\beta\gamma\delta}\left(-\frac{1}{2}\Lambda^{\mathrm{q}}_{\gamma\delta\mu\nu}\sigma^{\mu\nu}_{\mathrm{el}}\right) \tag{46}\] Noting that \[\sigma^{\alpha\beta}_{\mathrm{el}} =\sigma^{\mu\nu}_{\mathrm{el}}\mathrm{Id}^{\alpha\beta}_{\phantom{\alpha\beta}\mu\nu}\] \[\mathrm{Id}^{\alpha\beta}_{\phantom{\alpha\beta}\mu\nu} =\frac{1}{2}\left(\delta^{\alpha}_{\phantom{\alpha}\mu}\delta^{\beta}_{\phantom{\beta}\nu}+\delta^{\alpha}_{\phantom{\alpha}\nu}\delta^{\beta}_{\phantom{\beta}\mu}\right) \tag{47}\] we get \[\sigma^{\mu\nu}_{\mathrm{el}}\left(\mathrm{Id}^{\alpha\beta}_{\phantom{\alpha\beta}\mu\nu}-\frac{1}{4}\mathcal{A}^{\alpha\beta\gamma\delta}\Lambda^{\mathrm{q}}_{\gamma\delta\mu\nu}\right)=\mathcal{A}^{\alpha\beta\gamma\delta}u_{\gamma\delta}\;. \tag{48}\] Upon denoting \[\Gamma^{\alpha\beta}_{\phantom{\alpha\beta}\mu\nu}=\mathrm{Id}^{\alpha\beta}_{\phantom{\alpha\beta}\mu\nu}-\frac{1}{4}\mathcal{A}^{\alpha\beta\gamma\delta}\Lambda^{\mathrm{q}}_{\gamma\delta\mu\nu} \tag{49}\] we get \[\sigma^{\mu\nu}_{\mathrm{el}}=\Gamma^{-1}_{\phantom{\alpha\beta}\alpha\beta}^{\phantom{\alpha\beta}\mu\nu}\mathcal{A}^{\alpha\beta\gamma\delta}u_{\gamma\delta}\;, \tag{50}\] that is, \[\tilde{\mathcal{A}}^{\alpha\beta\gamma\delta}=\Gamma^{-1}_{\phantom{\alpha\beta}\mu\nu}^{\phantom{\alpha\beta}\alpha\beta}\mathcal{A}^{\mu\nu\gamma\delta}\,. \tag{51}\] Note that in the absence of quadrupole screening, where all the coefficients in Eq. 30 vanish, \(\Gamma\) reduces to the identity and the elastic tensor remains intact. 
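The inversion above can be sanity-checked numerically. The sketch below (our addition, not from the paper) works in a Mandel-type orthonormal basis for symmetric two-tensors in two dimensions, where the symmetrizer \(\mathrm{Id}\) becomes the \(3\times 3\) identity matrix and four-index contractions become matrix products; the random matrices stand in for \(\mathcal{A}\) and \(\Lambda_{\mathrm{q}}\).

```python
import numpy as np

# Sanity check (our addition) of the quadrupole-screening algebra: in a
# Mandel basis for symmetric 2x2 tensors, Id is the 3x3 identity and
# 4-index contractions are ordinary matrix products.
rng = np.random.default_rng(0)

def random_sym(n=3):
    m = rng.normal(size=(n, n))
    return m + m.T

A = random_sym()         # stands in for the bare elastic tensor
Lam_q = random_sym()     # stands in for the screening tensor Lambda_q
u = rng.normal(size=3)   # a total strain, as a Mandel 3-vector

Gamma = np.eye(3) - 0.25 * A @ Lam_q   # Gamma = Id - (1/4) A Lambda_q
sigma = np.linalg.solve(Gamma, A @ u)  # sigma = Gamma^{-1} A u

# Round trip: the induced quadrupole q = -(1/2) Lambda_q sigma must
# reproduce the same stress through sigma = A (u - q/2).
q = -0.5 * Lam_q @ sigma
assert np.allclose(sigma, A @ u - 0.5 * A @ q)
```

The assertion closes the loop between the constitutive relation, the induced quadrupole, and the normalized response; with `Lam_q = 0` one recovers `sigma = A @ u` directly.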
## Appendix D Derivation of Induced Effective Monopoles To derive the induced monopole charge distribution \(M_{\mathrm{ind}}=\bar{\nabla}_{\alpha\beta}Q^{\alpha\beta}\) we use the relation between stress and induced quadrupoles given in Eq.(50). In the quadrupole screening regime we take the second divergence of the first equation in Eq.(50) to express \(\bar{\nabla}_{\alpha\beta}Q^{\alpha\beta}\). The divergence of the elastic stress, and therefore its second divergence as well, vanishes in equilibrium, hence in this regime \(\bar{\nabla}_{\alpha\beta}Q^{\alpha\beta}=0\). In the dipole screening regime we take the trace of the second equation in Eq.(50) and find \[\mathrm{Tr}\,\sigma_{\mathrm{el}}=-Y\ell_{P}^{2}\bar{g}^{0}_{\alpha\beta}\bar{\varepsilon}^{\mu\alpha}\bar{\varepsilon}^{\nu\beta}\left(\bar{\nabla}_{\mu}P_{\nu}+\bar{\nabla}_{\nu}P_{\mu}\right) \tag{52}\] Upon substituting \(P\) in terms of \(Q\) and \(\mathrm{Tr}\,\sigma_{\mathrm{el}}=\bar{\Delta}\chi\) we obtain the second equation in Eq.(54) \[\bar{\nabla}_{\alpha\beta}Q^{\alpha\beta}=-\frac{1}{2Y\ell_{P}^{2}}\bar{\Delta}\chi \tag{53}\] Lastly, in the monopole screening regime, substituting Eq.(51) in Eq.(47) we find \[\bar{\varepsilon}^{\alpha\mu}\bar{\varepsilon}^{\beta\nu}\bar{\nabla}_{\mu\nu}\left(\chi+Y\ell_{M}^{4}\bar{\nabla}_{\alpha\beta}Q^{\alpha\beta}\right)=0\;. \tag{54}\] We conclude that \[\chi+Y\ell_{M}^{4}\bar{\nabla}_{\alpha\beta}Q^{\alpha\beta}=\chi_{g}\;, \tag{55}\] where \(\chi_{g}\) is any function satisfying \(\bar{\nabla}_{\mu\nu}\chi_{g}=0\), reflecting the gauge freedom of the stress function. Upon setting a gauge such that \(\chi_{g}=0\) we find \[\bar{\nabla}_{\alpha\beta}Q^{\alpha\beta}=-\frac{1}{Y\ell_{M}^{4}}\chi\;. \tag{56}\] ## Appendix E Interactions In this section we derive the interaction form of the mechanical energy stored in the screened solid. 
The case of quadrupole screening requires no analysis, since the only effect of the induced quadrupoles is to normalize the elastic tensor, and the interactions remain intact apart from the normalized elastic moduli. In the case of dipole screening, the total energy is \[E =\int_{\mathcal{M}}\left(\frac{1}{2}\mathcal{A}^{\alpha\beta\gamma\delta}u_{\alpha\beta}^{\rm el}u_{\gamma\delta}^{\rm el}-\frac{1}{2}\Lambda_{\alpha\beta}^{\rm P}P^{\alpha}P^{\beta}\right)\,\mathrm{d}S_{\tilde{g}^{0}}\] \[=\int_{\mathcal{M}}\left(\frac{1}{2}\sigma_{\rm el}^{\alpha\beta}u_{\alpha\beta}^{\rm el}-\frac{1}{2}\lambda_{P}\bar{\nabla}_{\mu}Q^{\mu\alpha}P_{\alpha}\right)\,\mathrm{d}S_{\tilde{g}^{0}}\] \[=\int_{\mathcal{M}}\left(\frac{1}{2}\sigma_{\rm el}^{\alpha\beta}u_{\alpha\beta}-\frac{1}{4}\sigma_{\rm el}^{\alpha\beta}q_{\alpha\beta}+\frac{1}{2}\lambda_{P}Q^{\mu\alpha}\bar{\nabla}_{\mu}P_{\alpha}\right)\,\mathrm{d}S_{\tilde{g}^{0}}\] \[-\int_{\partial\mathcal{M}}\lambda_{P}Q^{\mu\alpha}P_{\alpha}n_{\mu}\mathrm{d}l_{\tilde{g}^{0}} \tag{10}\] Using the symmetry of \(Q\) and substituting it in terms of \(q\) we find that the boundary term vanishes by the boundary condition in Eq.(44), and the second and third terms in the integral cancel by the equilibrium equation Eq.(43). We therefore conclude \[E=\int_{\mathcal{M}}\frac{1}{2}\sigma_{\rm el}^{\alpha\beta}u_{\alpha\beta}\,\mathrm{d}S_{\tilde{g}^{0}} \tag{11}\] Upon expressing \(\sigma_{\rm el}\) in terms of the stress function and integrating by parts twice we find that in the linear approximation \[E=\int_{\mathcal{M}}\frac{1}{2}\sigma_{\rm el}^{\alpha\beta}u_{\alpha\beta}\,\mathrm{d}S_{\tilde{g}^{0}}=\int_{\mathcal{M}}\chi\bar{K}\,\mathrm{d}S_{\tilde{g}^{0}}. \tag{12}\] ## Appendix F Complete solution for Green's function The Green's function within the screened elasticity setup is the solution for Eq.(55) with a delta-function singularity, as solved in Eq.(59) and Eq.(27). 
The solution is first derived for a finite domain with traction-free boundary conditions. In the case of dipole screening the constants of integration are \[c_{1} =\frac{q}{2\pi r_{\rm in}r_{\rm out}}\frac{r_{\rm in}Y_{1}\left(r_{\rm in}\right)-r_{\rm out}Y_{1}\left(r_{\rm out}\right)}{Y_{1}\left(r_{\rm in}\right)J_{1}\left(r_{\rm out}\right)-J_{1}\left(r_{\rm in}\right)Y_{1}\left(r_{\rm out}\right)}\] \[c_{2} =\frac{q}{2\pi r_{\rm in}r_{\rm out}}\frac{r_{\rm in}J_{1}\left(r_{\rm in}\right)-r_{\rm out}J_{1}\left(r_{\rm out}\right)}{J_{1}\left(r_{\rm in}\right)Y_{1}\left(r_{\rm out}\right)-Y_{1}\left(r_{\rm in}\right)J_{1}\left(r_{\rm out}\right)}\] where \(r_{\rm in}\) and \(r_{\rm out}\) are measured in units of \(r_{s}=\sqrt{2\Lambda_{P}}\). In the limit \(r_{\rm out}\to\infty\) both constants vanish, leading to the Green's function \(G^{\rm DS}\) given in Eq.(60).
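The printed constants can be evaluated directly with standard Bessel routines. The snippet below (our addition) is a literal transcription of the two formulas; the function name and the sample radii are illustrative, and no claim is made beyond evaluating the expressions with the radii in units of \(r_{s}\).

```python
import numpy as np
from scipy.special import j1, y1

def greens_constants(q, r_in, r_out):
    """Transcription (ours) of the printed integration constants c1, c2 for
    the dipole-screening Green's function on an annulus with traction-free
    boundaries; r_in and r_out are in units of r_s = sqrt(2 Lambda_P)."""
    pref = q / (2.0 * np.pi * r_in * r_out)
    # Shared Wronskian-like denominator; c2 uses it with the opposite sign.
    denom = y1(r_in) * j1(r_out) - j1(r_in) * y1(r_out)
    c1 = pref * (r_in * y1(r_in) - r_out * y1(r_out)) / denom
    c2 = pref * (r_in * j1(r_in) - r_out * j1(r_out)) / (-denom)
    return c1, c2

c1, c2 = greens_constants(q=1.0, r_in=1.0, r_out=10.0)
assert np.isfinite(c1) and np.isfinite(c2)
```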
2302.10300
Proof of Vogan's conjecture on Arthur packets for $\mathop{GL}_n$ over $p$-adic fields
In this paper we prove Vogan's conjecture on Arthur packets for general linear groups over $p$-adic fields, building on earlier work. The proof uses a special case of endoscopic lifting, adapted from the 1992 book by Adams, Barbasch and Vogan, where it was articulated for real groups.
Clifton Cunningham, Mishty Ray
2023-02-20T20:42:34Z
http://arxiv.org/abs/2302.10300v2
# Proof of Vogan's conjecture on Arthur packets ###### Abstract. In this paper we prove Vogan's conjecture on Arthur packets for general linear groups over \(p\)-adic fields, building on earlier work. The proof uses a special case of endoscopic lifting, adapted from the 1992 book by Adams, Barbasch and Vogan, where it was articulated for real groups. Cunningham's research is supported by NSERC Discovery Grant RGPIN-2020-05220. He is also grateful to the Fields Institute for Research in Mathematical Sciences where some of this work was conducted and to Casa Matematica Oaxaca where it was first presented at a BIRS-CMO workshop. Ray thanks cafe Stable in downtown Calgary for its excellent research environment of snowy concrete slabs and unintentionally slowed down Taylor Swift vinyl recordings. ## 1. Introduction Thirty years ago, David Vogan conjectured a purely local description of A-packets for \(p\)-adic groups [23], closely related to a more developed theory for real groups by Adams, Barbasch and Vogan [1]. While there is considerable evidence in the form of examples from [13, Chapters 11-16], Vogan's conjecture for \(p\)-adic groups remains open. In this paper we prove this conjecture for general linear groups over \(p\)-adic fields (non-archimedean fields of characteristic \(0\)), building on previous work in [14]. We view this as a step toward proving Vogan's conjecture for the groups treated by Arthur in [1], and the strategy of the proof given here reflects this objective. This conjecture was explicated in [13, Conjecture 1] for a quasisplit symplectic or orthogonal \(p\)-adic group \(G\) and attributed to Vogan. 
It predicts that, for every Arthur parameter \[\psi:W_{F}^{\prime\prime}\coloneqq W_{F}\times\operatorname{SL}_{2}(\mathbb{C})\times\operatorname{SL}_{2}(\mathbb{C})\to{}^{L}G,\] the A-packet \(\Pi_{\psi}(G)\) of representations of \(G(F)\) coincides with the ABV-packet attached to the Langlands parameter \(\phi_{\psi}\) determined by \(\psi\). As defined in [13, Definition 1] and recalled in [14], the ABV-packet for \(\phi_{\psi}\) is given by \[\Pi_{\phi_{\psi}}^{\text{\tiny{\sc ABV}}}(G)\coloneqq\left\{\pi\in\Pi_{\lambda}(G)\ \middle|\ \operatorname{Evs}_{\psi}\mathcal{P}(\pi)\neq 0\right\},\] where * \(\Pi_{\lambda}(G)\) is the set of equivalence classes of irreducible, smooth representations of \(G(F)\) with infinitesimal parameter \(\lambda\) determined by \(\psi\); * \(\mathcal{P}(\pi)\) is the simple object in the category \(\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\) of equivariant perverse sheaves on the moduli space \(V_{\lambda}\) of Langlands parameters matching \(\pi\) under the enhanced local Langlands correspondence, with \(H_{\lambda}\coloneqq Z_{\widehat{G}}(\lambda)\); * \(\operatorname{Evs}_{\psi}\) is the functor \[\operatorname{Evs}_{\psi}:\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\to\operatorname{Loc}_{H_{\lambda}}(T^{*}_{C_{\psi}}(V_{\lambda})^{\text{reg}})\equiv\operatorname{Rep}(A_{\psi}),\] introduced in [13, Section 7.10], where \(C_{\psi}\) is the \(H_{\lambda}\)-orbit of the point in \(V_{\lambda}\) corresponding to \(\phi_{\psi}\) and where \(T^{*}_{C_{\psi}}(V_{\lambda})^{\text{reg}}\) is the regular part of the conormal bundle \(T^{*}_{C_{\psi}}(V_{\lambda})\). These terms are all defined carefully in [13] and revisited in [14]. The main result of this paper, Theorem 6.1, shows that \[\Pi_{\phi_{\psi}}^{\text{\tiny{\sc ABV}}}(G)=\Pi_{\psi}(G), \tag{1}\] for every Arthur parameter \(\psi\) for \(G\), for \(G=\operatorname{GL}_{n}\). Let us now sketch the proof of Theorem 6.1. 
We begin with an arbitrary Arthur parameter \(\psi\) of \(G\), a map \(W^{\prime\prime}_{F}\to\operatorname{GL}_{n}(\mathbb{C})\). This map, in general, has the form \[\psi=\psi_{1}\oplus\cdots\oplus\psi_{k}.\] Here each \(\psi_{i}\) is an _irreducible_ Arthur parameter (irreducible as a representation of \(W^{\prime\prime}_{F}\); see [14, Section 6] for a detailed definition). Set \(m_{i}\coloneqq\dim\psi_{i}\). This decomposition naturally picks out a Levi subgroup \(M\simeq\operatorname{GL}_{m_{1}}\times\cdots\times\operatorname{GL}_{m_{k}}\). Observe that \(\widehat{M}\) is a Levi subgroup of \(\widehat{G}\) containing the image of \(\psi\). Pick \(s\in\widehat{G}\), of finite order, and therefore semisimple, so that \(Z_{\widehat{G}}(s)=\widehat{M}\). Let \(\psi_{M}:W^{\prime\prime}_{F}\to\widehat{M}\) be the Arthur parameter for \(M\) such that \(\psi\) is the composition of \(\psi_{M}\) with the inclusion \(\widehat{M}\hookrightarrow\widehat{G}\). Observe that \(\psi_{M}\) is an irreducible Arthur parameter for \(M\). Let \(\lambda_{M}\) be the infinitesimal parameter of \(\psi_{M}\). Now the inclusion \(Z_{\widehat{G}}(s)\to\widehat{G}\) induces an inclusion \[\varepsilon:V_{\lambda_{M}}\hookrightarrow V_{\lambda},\] which is equivariant for the action of \(H_{\lambda_{M}}\coloneqq Z_{\widehat{M}}(\lambda_{M})\) on \(V_{\lambda_{M}}\) and the action of \(H_{\lambda}\coloneqq Z_{\widehat{G}}(\lambda)\) on \(V_{\lambda}\). 
Indeed, \[V_{\lambda_{M}}=V_{\lambda}^{s}=\{x\in V_{\lambda}\mid\operatorname{Ad}(s)x=x\}.\] Because \(\psi_{M}\) is irreducible, earlier work [14] establishes Vogan's conjecture for \(\psi_{M}\): \[\Pi^{\text{\tiny{\sc ABV}}}_{\phi_{\psi_{M}}}(M)=\Pi_{\psi_{M}}(M).\] In order to lift this result from \(M\) to \(G\) we use the local Langlands correspondence to define a pairing between two Grothendieck groups that appear naturally on either side of the correspondence: on the spectral side, the category \(\operatorname{Rep}^{\text{\rm fl}}_{\lambda}(G)\) of finite-length representations of \(G(F)\) with infinitesimal parameter \(\lambda\) determined by \(\psi\); on the Galois/geometric side, the category \(\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\) of \(H_{\lambda}\)-equivariant perverse sheaves on \(V_{\lambda}\). In Section 2.5 we recall a virtual representation \(\eta^{\text{\rm Evs}}_{\psi}\in K\operatorname{Rep}^{\text{\rm fl}}_{\lambda}(G)\) that characterizes the ABV-packet \(\Pi^{\text{\tiny{\sc ABV}}}_{\phi_{\psi}}(G)\), with the property that \[\langle\eta^{\text{\rm Evs}}_{\psi},[\mathcal{F}]\rangle_{\lambda}=(-1)^{d(\psi)}\operatorname{rank}\left(\operatorname{Evs}_{\psi}\mathcal{F}\right),\qquad\forall\mathcal{F}\in\operatorname{Per}_{H_{\lambda}}(V_{\lambda}),\] where \(d(\psi)\) is the dimension of the \(H_{\lambda}\)-orbit of \(x_{\psi}\) in \(V_{\lambda}\); see Proposition 2.3. Since A-packets are singletons for \(\operatorname{GL}_{n}\), we set \(\eta_{\psi}:=[\pi_{\psi}]\). Thus, to show that the ABV-packet coincides with this A-packet, it is enough to show \[\eta^{\text{\rm Evs}}_{\psi}=\eta_{\psi}=[\pi_{\psi}].\] As the pairing \(\langle\cdot,\cdot\rangle_{\lambda}\) is non-degenerate, this is equivalent to showing that \[\langle\eta^{\text{\rm Evs}}_{\psi},[\mathcal{F}]\rangle_{\lambda}=\langle\eta_{\psi},[\mathcal{F}]\rangle_{\lambda},\] for all \(\mathcal{F}\in\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\). 
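The choice of \(s\) and the fixed-point description \(V_{\lambda}^{s}\) can be illustrated in a toy case (our addition, not from the paper): for blocks \((2,1)\) in \(\operatorname{GL}_{3}(\mathbb{C})\), conjugation by \(s=\operatorname{diag}(1,1,-1)\) fixes exactly the block-diagonal matrix entries, so the centralizer of \(s\) is the Levi \(\operatorname{GL}_{2}\times\operatorname{GL}_{1}\).

```python
import numpy as np

# Toy check (ours): for s = diag(1, 1, -1) of finite order in GL_3,
# Ad(s) fixes a matrix unit E_ij exactly when i and j lie in the same
# diagonal block, so Z(s) is the block-diagonal Levi GL_2 x GL_1.
s = np.diag([1.0, 1.0, -1.0])
blocks = [0, 0, 1]  # block membership of each index

for i in range(3):
    for j in range(3):
        E = np.zeros((3, 3))
        E[i, j] = 1.0
        fixed = np.allclose(s @ E @ np.linalg.inv(s), E)
        assert fixed == (blocks[i] == blocks[j])

# Dimension of the centralizer: 2^2 + 1^2 = 5.
centralizer_dim = sum(1 for i in range(3) for j in range(3)
                      if blocks[i] == blocks[j])
assert centralizer_dim == 5
```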
We do this by showing the three equalities below, for all \(\mathcal{F}\in\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\): the fixed-point formula \[\langle\eta^{\operatorname{Evs}}_{\psi},[\mathcal{F}]\rangle_{\lambda}=\langle\eta^{\operatorname{Evs}}_{\psi_{M}},[\mathcal{F}|_{V_{\lambda_{M}}}]\rangle_{\lambda_{M}},\] Vogan's conjecture for the irreducible parameter \(\psi_{M}\), \[\langle\eta^{\operatorname{Evs}}_{\psi_{M}},[\mathcal{F}|_{V_{\lambda_{M}}}]\rangle_{\lambda_{M}}=\langle\eta_{\psi_{M}},[\mathcal{F}|_{V_{\lambda_{M}}}]\rangle_{\lambda_{M}},\] and endoscopic lifting (Proposition 5.6), \[\langle\eta_{\psi_{M}},[\mathcal{F}|_{V_{\lambda_{M}}}]\rangle_{\lambda_{M}}=\langle\eta_{\psi},[\mathcal{F}]\rangle_{\lambda}.\] Together these equalities give \(\langle\eta^{\operatorname{Evs}}_{\psi},[\mathcal{F}]\rangle_{\lambda}=\langle\eta_{\psi},[\mathcal{F}]\rangle_{\lambda}\), which is Vogan's conjecture for \(\psi\), Theorem 6.1. Even for general linear groups, non-Arthur type ABV-packets hold some surprises. Specifically, [10] presents a non-Arthur type Langlands parameter \(\phi_{\operatorname{KS}}\) for \(\operatorname{GL}_{16}\) such that \(\Pi^{\text{\tiny{\sc ABV}}}_{\phi_{\operatorname{KS}}}(\operatorname{GL}_{16})\) consists of two representations. Even more remarkably, the coronal representation \(\pi_{\psi}\) in \(\Pi^{\text{\tiny{\sc ABV}}}_{\phi_{\operatorname{KS}}}(\operatorname{GL}_{16})\) is of Arthur type. The main result of this paper implies \(\Pi^{\text{\tiny{\sc ABV}}}_{\phi_{\psi}}(\operatorname{GL}_{16})=\Pi_{\psi}(\operatorname{GL}_{16})\). This is an example, then, of the following containments: \[\Pi^{\text{\tiny{\sc ABV}}}_{\phi_{\operatorname{KS}}}(\operatorname{GL}_{16})=\{\pi_{\phi_{\operatorname{KS}}},\pi_{\psi}\}\supsetneq\{\pi_{\phi_{\operatorname{KS}}}\}=\Pi_{\phi_{\operatorname{KS}}}(\operatorname{GL}_{16}).\] We remark that ABV-packets for \(p\)-adic groups were not introduced in [1], since that book treats real groups only. In [1, Theorem 24.8], the authors define the functor \(Q^{\operatorname{mic}}\) using stratified Morse theory, and this can be used to characterize the microlocal packet attached to an L-parameter, as defined in Definition 19.5 of _loc. cit._. 
A similar approach is taken in [21], which does treat \(p\)-adic groups but uses a functor patterned after \(Q^{\operatorname{mic}}\). We use the functor \(\operatorname{Evs}\) instead, as explained above. We use the name ABV-packets simply to acknowledge the debt owed to the authors for the development of this theory. Likewise, we use the term _"Vogan's conjecture"_ for [10, Conjecture 1] as it arose out of Vogan's work in [21]. This paper deals with non-archimedean local fields \(F\) of characteristic \(0\), as the notion of A-packets for non-archimedean local fields of nonzero characteristic is unclear, to the best of our knowledge; however, the local Langlands correspondence and the geometric perspective exist for such fields, and we can define an ABV-packet for fields of nonzero characteristic in exactly the same manner. Thus, we propose that ABV-packets generalize local A-packets in the sense that the ABV-packet for a Langlands parameter of A-type can be used as an analogue of the corresponding A-packet for a non-archimedean local field of any characteristic. ### Acknowledgements Some ideas in this paper are inspired by [1] which treats real groups; we are happy to acknowledge these three authors, especially David Vogan, for his continued support for this project. We also thank the entire Voganish Project research group, especially Bin Xu, Geoff Vooys, and Kristaps Balodis, for their contributions to this research. We also thank Matthew Sunohara, Tom Haines, and Peter Dillery for helpful conversations. ### Relation to other work This paper is part of the Voganish Project; for related results, we refer to [10], [11], [12], [13], [14], and [15]. The main result of this paper, Theorem 6.1, can be proved by an argument different from the one presented here, as we now explain. 
In [15, Lemma 5.1] we proved the following geometric statement: if \(\psi\) is a simple (irreducible with trivial restriction to \(W_{F}\)) Arthur parameter and \(C_{\psi}\) is its associated \(H_{\lambda}\)-orbit in \(V_{\lambda}\), then, for any \(H_{\lambda}\)-orbit \(C\) in \(V_{\lambda}\), \(C_{\psi}\leq C\) and \(C_{\psi}^{*}\leq C^{*}\) implies \(C_{\psi}=C\); here we refer to the Zariski-closure relation on these orbits. In his MSc thesis written under the supervision of Andrew Fiori, Connor Riddlesden [11] extended this result to unramified Arthur parameters \(\psi\), dropping the irreducibility condition appearing in [15, Lemma 5.1], though not treating arbitrary Arthur parameters. When combined with the treatment of the unramified case as it appears in [15, Section 6, especially Lemma 6.8] and results from [10], these can be assembled to give an alternate proof of Theorem 6.1. While our proof of Theorem 6.1 is perhaps more complicated than this alternate argument, we believe that our strategy is better adapted to generalizations, specifically, to proving Vogan's conjecture on A-packets for quasisplit classical groups and their pure inner forms. This belief is based on the fact that, in this paper, we have used endoscopic lifting in a very special case. While it is characterized by parabolic induction in this case, we expect Langlands-Shelstad transfer to play a role more generally. Since Arthur's packets are characterized by Langlands-Shelstad transfer and Kottwitz-Shelstad transfer, together with certain normalizing choices referring to Whittaker models, we expect the geometric incarnation of both kinds of transfer to play an important role in extending our main result to other groups \(G\). ### Notation In this work, for the most part, we follow notational conventions established in [15]. Here, \(F\) is a non-archimedean local field of characteristic \(0\), also known as a \(p\)-adic field. 
Henceforth, \(G\) is \(\operatorname{GL}_{n}\) and \(P\) is a parabolic subgroup of \(G\) with Levi subgroup \(M\) and unipotent radical \(N\); these statements are made in the category of algebraic groups over \(F\), not their \(F\)-points, for which we use the notation \(G(F)\), \(P(F)\), etc. We use the notation \[W_{F}^{\prime}\coloneqq W_{F}\times\operatorname{SL}_{2}(\mathbb{C});\] this topological group is denoted by \(L_{F}\) in Arthur's work. We also use the notation \[W_{F}^{\prime\prime}\coloneqq W_{F}\times\operatorname{SL}_{2}(\mathbb{C}) \times\operatorname{SL}_{2}(\mathbb{C});\] this topological group is denoted by \(L_{F}^{\prime}\) in Arthur's work. Let \(\widehat{G}\) denote the complex dual group of \(G\), which for us is simply \(\operatorname{GL}_{n}(\mathbb{C})\). By a Langlands parameter we mean an admissible homomorphism \(\phi:W_{F}^{\prime}\to\widehat{G}\), as defined in [1], for example. We refer to \(\widehat{G}\)-conjugacy classes of Langlands parameters as L-parameters. An infinitesimal parameter \(\lambda:W_{F}\to\widehat{G}\) is simply a Langlands parameter with domain \(W_{F}\). The infinitesimal parameter \(\lambda_{\phi}\) of a Langlands parameter \(\phi\) is defined by \[\lambda_{\phi}(w)\coloneqq\phi(w,\operatorname{diag}(|w|^{1/2},|w|^{-1/2})).\] By an Arthur parameter we mean a homomorphism \(\psi:W_{F}^{\prime\prime}\to\widehat{G}\) satisfying conditions explained in [1, Section 3.5], notably, that its restriction to \(W_{F}\) is bounded. The Langlands parameter \(\phi_{\psi}\) is defined by \[\phi_{\psi}(w,x)\coloneqq\psi(w,x,\operatorname{diag}(|w|^{1/2},|w|^{-1/2})).\] The infinitesimal parameter \(\lambda_{\psi}\) of \(\psi\) is the infinitesimal parameter of \(\phi_{\psi}\), thus given by \[\lambda_{\psi}(w)\coloneqq\psi(w,\operatorname{diag}(|w|^{1/2},|w|^{-1/2}), \operatorname{diag}(|w|^{1/2},|w|^{-1/2})).\] When \(\psi\) has been fixed, we set \(\lambda=\lambda_{\psi}\). 
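For a concrete toy example of these formulas (our addition; the specific parameter is chosen purely for illustration): take the Arthur parameter \(\psi=1\boxtimes\operatorname{std}\boxtimes\operatorname{std}\) of \(\operatorname{GL}_{4}\), where \(\operatorname{std}\) is the standard representation of \(\operatorname{SL}_{2}(\mathbb{C})\); then \(\lambda_{\psi}(w)\) is the Kronecker product \(\operatorname{diag}(|w|^{1/2},|w|^{-1/2})\otimes\operatorname{diag}(|w|^{1/2},|w|^{-1/2})=\operatorname{diag}(|w|,1,1,|w|^{-1})\). Tracking only exponents of \(t=|w|^{1/2}\):

```python
# Toy computation (ours): record a diagonalized parameter by the exponents
# of t = |w|^{1/2}; the Kronecker product of diagonal matrices adds them.
std2 = [1, -1]  # diag(t, t^{-1}), the standard SL_2(C) representation
                # evaluated at diag(|w|^{1/2}, |w|^{-1/2})

# lambda_psi evaluates both SL_2 factors at diag(|w|^{1/2}, |w|^{-1/2}):
lam_exponents = sorted(a + b for a in std2 for b in std2)
# In t-exponents this is diag(t^2, 1, 1, t^{-2}) = diag(|w|, 1, 1, |w|^{-1}).
assert lam_exponents == [-2, 0, 0, 2]
```

The same bookkeeping evaluates \(\phi_{\psi}\): only the second \(\operatorname{SL}_{2}(\mathbb{C})\) factor is specialized, and the first stays a free variable.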
For a smooth irreducible representation \(\sigma\) of \(M(F)\), the symbol \(\operatorname{Ind}_{P}^{G}(\sigma)\) denotes the normalized parabolic induction of the representation \(\sigma\) of the \(F\)-rational points \(M(F)\) of the Levi subgroup \(M\) of \(P\); this means we inflate \(\sigma\) from \(M(F)\) to \(P(F)\), twist by the modulus quasicharacter \(\delta_{P}^{1/2}\) for the parabolic \(P(F)\) and then induce from \(P(F)\) to \(G(F)\). As in our earlier work, we use the notation \(D_{H_{\lambda}}(V_{\lambda})\) for the \(H_{\lambda}\)-equivariant derived category of \(\ell\)-adic sheaves on \(V\); see [14, Section 1.10], or especially [13, Definition 3.1.14] for the definition of this category and [13, Theorem 9.0.38] for how to navigate different perspectives on this category. In this paper we write \(\operatorname{rank}(E)\) for the Euler characteristic of a graded vector space \(E=\oplus_{i\in\mathbb{Z}}E^{i}\): \[\operatorname{rank}(E)=\sum_{i\in\mathbb{Z}}(-1)^{i}\dim(E^{i}).\] For \(\mathcal{F}\in\operatorname{D}_{H_{\lambda}}(V_{\lambda})\) and \(x\in V_{\lambda}\), we write \(\mathcal{F}_{x}\in\operatorname{D}_{Z_{H}(x)}(x)\) for the stalk of \(\mathcal{F}\) at \(x\) and \(\mathcal{H}_{x}^{\bullet}\mathcal{F}\) for its cohomology complex, often viewed as a graded vector space. We follow the conventions of [1] regarding perverse sheaves; in particular, the restriction of \(\mathcal{IC}(\mathcal{L}_{C})\) to \(C\) is \(\mathcal{L}_{C}[\dim C]\), and complexes are shifted according to \((\mathcal{F}[n])^{i}=\mathcal{F}^{n+i}\). The notation \(\operatorname{\mathbbm{1}}_{X}\) is used to denote the constant sheaf on \(X\). ## 2. Preliminaries on Vogan's conjecture on A-packets In this section we revisit the definition of ABV-packets for \(p\)-adic groups from [1, 20], also recalled in [10] and cast it in a form that is adapted to the proof of the main result, Theorem 6.1. 
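The Euler-characteristic and shift conventions fixed in the notation paragraph above can be sketched in a few lines (our illustration; encoding a graded vector space as a degree-to-dimension dictionary is an assumption of the sketch):

```python
# Sketch (ours) of rank(E) = sum_i (-1)^i dim E^i and of the shift
# convention (E[n])^i = E^{n+i} for a graded vector space.
def rank(graded):
    """graded: dict mapping degree i to dim E^i."""
    return sum((-1) ** (i % 2) * d for i, d in graded.items())

def shift(graded, n):
    """Return the dict for E[n], using (E[n])^i = E^{n+i}."""
    return {i - n: d for i, d in graded.items()}

E = {0: 2, 1: 3, 2: 2}            # dims in degrees 0, 1, 2
assert rank(E) == 2 - 3 + 2       # = 1
assert rank(shift(E, 1)) == -rank(E)  # an odd shift flips the sign
```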
Instead of purely working over \(\Pi_{\lambda}(G)\) and \(\operatorname{Per}_{H_{\lambda}}(V_{\lambda})^{\operatorname{simple}}_{/\operatorname{iso}}\) as in the previous paper [10], we work over the Grothendieck groups \(K\operatorname{Rep}_{\lambda}^{\operatorname{fl}}(G)\) and \(K\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\). We therefore spend some time explaining these groups. ### Spectral side By the Langlands correspondence, the \(\widehat{G}\)-conjugacy class of \(\lambda\) is identified with a cuspidal support \((L,\sigma)_{G}\in\Omega(G)\), the Bernstein variety for \(G(F)\). We remark that \(L\) is a Levi subgroup of \(M\), where \(M\) is determined by \(\psi\) as in Section 1. Let \(\operatorname{Rep}_{\lambda}(G)\) be the _cuspidal support category_ of smooth representations of \(G(F)\) whose Jordan-Holder series is contained in the Jordan-Holder series of \(\operatorname{Ind}_{P}^{G}(\sigma)\), where \(P\) is a parabolic subgroup of \(G\) with Levi component \(L\). Let \(\operatorname{Rep}_{\lambda}^{\operatorname{fl}}(G)\) be the subcategory of finite-length representations. The Grothendieck group \(K\operatorname{Rep}_{\lambda}^{\operatorname{fl}}(G)\) has two different bases - one consisting of smooth irreducible representations of \(G(F)\) that share the infinitesimal parameter \(\lambda\), and the other of standard representations attached to these irreducible representations. We use both these bases in the rest of the paper, so we recall the theory surrounding these objects below. #### 2.1.1. Irreducible representations Let \(\Pi_{\lambda}(G)=\{\pi_{i}\mid i\in I\}\) be the Jordan-Holder series of \(\operatorname{Ind}_{P}^{G}(\sigma)\); this may be identified with the set of isomorphism classes of irreducible admissible representations of \(G(F)\) with infinitesimal parameter \(\lambda\). Smooth irreducible representations of \(G(F)\) are classified using Zelevinsky theory [20]. This was surveyed beautifully in [17] and we use his notation. 
For any representation \(\pi\) of \(\operatorname{GL}_{m}(F)\), let \(\pi(i):=|\text{det}(\cdot)|^{i}\pi\). For a partition \(n=\underbrace{m+m+\ldots+m}_{\text{$r$-times}}\) and a supercuspidal representation \(\sigma\) of \(\operatorname{GL}_{m}(F)\), we call \[(\sigma,\sigma(1),\ldots,\sigma(r-1))=[\sigma,\sigma(r-1)]=:\Delta \tag{2}\] a segment. This segment determines a representation of a standard parabolic subgroup \(P\) of \(G\) whose Levi subgroup is identified with \(\underbrace{\operatorname{GL}_{m}\times\operatorname{GL}_{m}\times\cdots\times\operatorname{GL}_{m}}_{\text{$r$-times}}\). We can then carry out parabolic induction to obtain the induced representation \(\operatorname{Ind}_{P}^{G}(\sigma\otimes\sigma(1)\otimes\cdots\otimes\sigma(r-1))\) of \(G\), which has a unique irreducible quotient denoted by \(Q(\Delta)\). We refer to \(Q(\Delta)\) as the _Langlands quotient_ associated to \(\Delta\). For a segment \(\Delta=[\sigma,\sigma(r-1)]\), we set \[\Delta(x)\coloneqq[\sigma(x),\sigma(r-1+x)].\] A multisegment is a multiset of segments. A segment \(\Delta_{1}\) is said to _precede_ \(\Delta_{2}\) if \(\Delta_{1}\not\subset\Delta_{2}\), \(\Delta_{2}\not\subset\Delta_{1}\), and there exists a positive integer \(x\) so that \[\Delta_{2}=\Delta_{1}(x)\] making \(\Delta_{1}\cup\Delta_{2}\) a segment. A multisegment \(\{\Delta_{1},\Delta_{2},\cdots,\Delta_{k}\}\) where \(\Delta_{i}\) does not precede \(\Delta_{j}\) for any \(i<j\) is said to satisfy the "does not precede" condition. Now let \(\alpha=\{\Delta_{1},\Delta_{2},\ldots,\Delta_{k}\}\) be a multisegment satisfying the "does not precede" condition. Let \(P^{\prime}\) denote the standard parabolic subgroup specified by \(\alpha\). 
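The precede condition just stated compares a segment with its shifts. The following is a literal transcription (our toy encoding, not from the paper) for segments built from a single supercuspidal \(\sigma\), recorded as integer intervals \((a,b)\leftrightarrow[\sigma(a),\sigma(b)]\), so that \(\Delta(x)\) becomes \((a+x,b+x)\):

```python
# Literal transcription (ours) of the stated "precede" condition. For
# equal-length shifts with x > 0, the non-containment requirements
# Delta_1 ⊄ Delta_2 and Delta_2 ⊄ Delta_1 hold automatically.
def precedes(d1, d2):
    a1, b1 = d1
    a2, b2 = d2
    x = a2 - a1
    shifted = (x > 0) and (d2 == (a1 + x, b1 + x))  # Delta_2 = Delta_1(x)
    union_is_segment = a2 <= b1 + 1                 # the union has no gap
    return shifted and union_is_segment

# [sigma, sigma(1)] precedes [sigma(1), sigma(2)]: union is [sigma, sigma(2)].
assert precedes((0, 1), (1, 2))
# A shift leaving a gap fails, as does a segment compared with itself.
assert not precedes((0, 1), (3, 4))
assert not precedes((0, 1), (0, 1))
```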
The Langlands classification theorem tells us that any smooth irreducible representation of \(G\) occurs as a unique irreducible quotient of the parabolically induced representation \(\operatorname{Ind}_{P^{\prime}}^{G}(Q(\Delta_{1})\otimes\cdots\otimes Q( \Delta_{k}))\) - we denote that quotient by \(Q(\Delta_{1},\Delta_{2},\ldots,\Delta_{k})\) or \(Q(\alpha)\) and refer to it as the _Langlands quotient_ associated to \(\alpha\); see [17, Theorem 1.2.5]. Next, for integers \(i<j\) we introduce the notation \[[i,j]:=(|\cdot|^{i},|\cdot|^{i+1},\ldots,|\cdot|^{j}) \tag{3}\] for a segment which is the special case of (2) when we consider the partition \(1+1+\cdots+1\) and \(\sigma\) to be the character \(|\cdot|\) of \(F^{\times}\). This notation may be extended to half integers \(i<j\) as long as \(j-i+1\) is a positive integer (this is the length of the segment). A segment of length \(1\) of the form \(\{|\cdot|^{i}\}\) is just denoted \([i]\). #### 2.1.2. Standard representations Next, we review the notion of a standard representation, also known as standard module in the literature. While this is written down in many different places, we follow the exposition of [13]. A standard representation of \(G(F)\) corresponds to the data \((P,\nu,\tau)\), where \(P=MN\) is a standard parabolic subgroup of \(G\), \(\nu\in\mathfrak{a}_{P}^{*,+}\), and \(\tau\) a tempered representation of \(M\). The definition of \(\mathfrak{a}_{P}^{*,+}\) is given in Section 2.2 of _loc. cit._. The character \(\nu\) corresponds to a \(P\)-positive unramified quasicharacter \(\exp\nu\) of \(M(F)\) as explained in Section 2.3 of _loc. cit._. The standard representation associated to this data is given by \(\operatorname{Ind}_{P}^{G}(\tau\otimes\exp\nu)\). This representation has a unique irreducible quotient (see Corollary 3.2 of _loc. cit._), say \(\pi\). 
In this paper we use the notation \[\Delta(\pi):=\operatorname{Ind}_{P}^{G}(\tau\otimes\exp\nu),\] and call it the standard representation of \(\pi\). Thus, \(K\operatorname{Rep}_{\lambda}^{\operatorname{fl}}(G)\) has two \(\mathbb{Z}\)-bases - one given by irreducible representations \[\{[\pi]:\pi\in\Pi_{\lambda}(G)\},\] and the other given by standard representations \[\{[\Delta(\pi)]:\pi\in\Pi_{\lambda}(G)\}.\] We note that the latter is true because every irreducible representation is the unique quotient of its standard representation, by the Langlands classification theorem.

### Galois/geometric side

Recall that in this paper we make free use of [10] and [14]. In particular, for every infinitesimal parameter \(\lambda:W_{F}\to\widehat{G}\), set \[V_{\lambda}\coloneqq\{x\in\operatorname{Lie}\widehat{G}\mid\operatorname{Ad} (\lambda(w))(x)=|w|x,\ \forall w\in W_{F}\},\] and \[H_{\lambda}\coloneqq\{g\in\widehat{G}\mid\lambda(w)g\lambda(w)^{-1}=g,\ \forall w\in W_{F}\}.\] Then \(V_{\lambda}\) is a prehomogeneous vector space for the \(H_{\lambda}\)-action inherited from conjugation in \(\operatorname{Lie}\widehat{G}\), stratified into \(H_{\lambda}\)-orbits \(\{C_{i}\mid i\in I\}\). Recall that \(V_{\lambda}\) is a moduli space of Langlands parameters with infinitesimal parameter \(\lambda\). Vogan's geometric perspective relates Langlands parameters to simple objects up to isomorphism in the category \(\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\). As on the spectral side, \(K\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\) has two bases - one consisting of simple perverse sheaves and the other of standard sheaves, which we explain below.

#### 2.2.1. Simple perverse sheaves

Simple objects in \(\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\) are all of the form \(\mathcal{IC}(\mathcal{L}_{C})\), where \(C\) is an \(H_{\lambda}\)-orbit in \(V_{\lambda}\), and \(\mathcal{L}_{C}\in\operatorname{Loc}_{H_{\lambda}}(C)\) is a simple equivariant local system. 
For \(G=\operatorname{GL}_{n}\), simple perverse sheaves are of the form \(\mathcal{IC}(\mathbb{1}_{C})\) as each orbit only has the trivial irreducible local system. We invoke the local Langlands correspondence for \(G\) and write \(\pi_{\phi}\in\Pi(G)\) for the isomorphism class of the irreducible representation with Langlands parameter \(\phi\), and likewise \(C_{\pi}\) for the \(H_{\lambda}\)-orbit in \(V_{\lambda}\) of parameters that correspond to \(\pi\). We refer to the latter identification as the Vogan-Langlands correspondence in [12]. Thus, there is a unique orbit \(C_{\phi_{\psi}}\) in \(V_{\lambda}\) attached to \(\phi_{\psi}\). We shorten this notation to \(C_{\psi}\). We write \(\mathcal{P}(\pi)\) for the \(H_{\lambda}\)-equivariant intersection cohomology complex on \(V_{\lambda}\) determined by \(\pi\) through \(C_{\pi}\); note that \(\mathcal{P}(\pi)\) is a simple object in the abelian category \(\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\) of \(H_{\lambda}\)-equivariant perverse sheaves on \(V_{\lambda}\).

#### 2.2.2. Standard sheaves

For any \(H_{\lambda}\)-orbit \(C\subseteq V_{\lambda}\) and any simple local system \(\mathcal{L}_{C}\) on \(C\), we introduce the notation \(\mathcal{L}_{C}^{\natural}\) for the \(H_{\lambda}\)-equivariant sheaf on \(V_{\lambda}\) with the defining property \[\left(\mathcal{L}_{C}^{\natural}\right)|_{C^{\prime}}=\begin{cases}\mathcal{L }_{C}&C^{\prime}=C,\\ 0&C^{\prime}\neq C.\end{cases}\] The Grothendieck group \(K\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\) has two \(\mathbb{Z}\)-bases, corresponding to the two t-structures in play: one consisting of simple perverse sheaves \[\{[\mathcal{IC}(\mathbb{1}_{C})]:\text{ $C$ ranges over $H_{\lambda}$-orbits in $V_{\lambda}$}\}\] and the other of standard sheaves \[\{[\mathbb{1}_{C}^{\natural}]:\text{ $C$ ranges over $H_{\lambda}$-orbits in $V_{\lambda}$}\};\] see [1, Proposition A.9.5] and also [1, p.12, paragraph 1], for a related instance of this phenomenon.

### Dual Grothendieck groups

For every infinitesimal parameter \(\lambda:W_{F}\to\widehat{G}\), the local Langlands correspondence determines a perfect pairing between Grothendieck groups \[K\operatorname{Rep}_{\lambda}^{\operatorname{fl}}(G)\times K\operatorname{Per }_{H_{\lambda}}(V_{\lambda})\to\mathbb{Z}\] defined by \[\langle[\pi],[\mathcal{P}]\rangle_{\lambda}=\begin{cases}(-1)^{d(\pi)}&[ \mathcal{P}]=[\mathcal{P}(\pi)],\\ 0&\text{otherwise},\end{cases} \tag{4}\] where \(d(\pi)\coloneqq\dim C_{\pi}\). Recall that \(\mathcal{P}(\pi)\) denotes the simple object in \(\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\) matching \(\pi\in\Pi_{\lambda}(G)\) as in Section 2.2.1. Notice that this perfect pairing between Grothendieck groups matches \(\pi\in\Pi_{\lambda}(G)\) with the shifted perverse sheaf \(\mathcal{P}(\pi)[-\dim C_{\pi}]\). 
If we index \(\Pi_{\lambda}(G)=\{\pi_{i}\mid i\in I\}\) and likewise index isomorphism classes of simple objects in \(\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\) by \(\{\mathcal{IC}(\mathbb{1}_{C_{j}})\mid j\in I\}\), then the pairing above becomes \[\langle[\pi_{i}],[\mathcal{IC}(\mathbb{1}_{C_{j}})]\rangle_{\lambda}= \begin{cases}(-1)^{\dim C_{i}}&i=j,\\ 0&\text{otherwise}.\end{cases}\] If we change the scalars to \(\mathbb{C}\) throughout, then the pairing extends: \[\langle\cdot,\cdot\rangle:K_{\mathbb{C}}\operatorname{Rep}_{\lambda}^{\operatorname {fl}}(G)\times K_{\mathbb{C}}\operatorname{Per}_{H_{\lambda}}(V_{\lambda}) \to\mathbb{C}, \tag{5}\] where we set \(K_{\mathbb{C}}\operatorname{Rep}_{\lambda}^{\operatorname{fl}}(G)\coloneqq \mathbb{C}\otimes_{\mathbb{Z}}K\operatorname{Rep}_{\lambda}^{\operatorname{fl} }(G)\) and \(K_{\mathbb{C}}\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\coloneqq\mathbb{C} \otimes_{\mathbb{Z}}K\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\).

### Kazhdan-Lusztig Hypothesis

In this section we state the Kazhdan-Lusztig Hypothesis for \(p\)-adic general linear groups. For every irreducible \(\pi\in\operatorname{Rep}_{\lambda}^{\operatorname{fl}}(G)\), let \(\Delta(\pi)\) be the standard representation for \(\pi\); thus, in particular, \(\pi\) is the unique irreducible quotient of \(\Delta(\pi)\). For every \(\pi_{i}\) and \(\pi_{j}\) in \(\Pi_{\lambda}(G)\), let \(m_{ij}\) denote the multiplicity of \(\pi_{i}\) in \(\Delta(\pi_{j})\); thus, in the Grothendieck group \(K\operatorname{Rep}_{\lambda}^{\operatorname{fl}}(G)\), \[[\Delta(\pi_{j})]=\sum_{i\in I}m_{ij}[\pi_{i}].\] Let \(m_{\lambda}=(m_{ij})\) be the matrix of these entries. It is possible to order \(I\), and thus the representations appearing in \(\Pi_{\lambda}(G)\), so that the matrix \(m_{\lambda}\) is lower triangular, with diagonal entries \(1\); consequently, the matrix \(m_{\lambda}\) is invertible. 
Notice that \(m_{\lambda}\) is the change of basis matrix for the vector space \(K_{\mathbb{C}}\operatorname{Rep}_{\lambda}^{\operatorname{fl}}(G)\), from the basis \(\{[\Delta(\pi_{i})]\ |\ i\in I\}\) to \(\{[\pi_{j}]\ |\ j\in I\}\). Return to the infinitesimal parameter \(\lambda:W_{F}\to\widehat{G}\) and consider the abelian category \(\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\) of \(H_{\lambda}\)-equivariant perverse sheaves on \(V_{\lambda}\). Simple objects in this category are the intersection cohomology complexes \(\mathcal{IC}(\mathbb{1}_{C_{i}})\). For each \(H_{\lambda}\)-orbit \(C_{j}\) in \(V_{\lambda}\), pick a base point \(x_{j}\in C_{j}\) and let \(c_{ij}\) be the Euler characteristic of the stalk of \(\mathcal{IC}(\mathbb{1}_{C_{j}})[-\dim C_{j}]\) at \(x_{i}\): \[c_{ij}=(-1)^{\dim C_{j}}\operatorname{rank}\left(\mathcal{H}_{x_{i}}^{\bullet }\mathcal{IC}(\mathbb{1}_{C_{j}})\right).\] Set \(c_{\lambda}=(c_{ij})\).

**Hypothesis 2.1** (\(p\)-adic analogue of the Kazhdan-Lusztig Hypothesis).: _In the Grothendieck group \(K\operatorname{Rep}(G)\) the multiplicity of the irreducible representation \(\pi_{i}\) in the standard representation \(\Delta(\pi_{j})\) is given by_ \[m_{ij}=(-1)^{\dim C_{i}}\operatorname{rank}\left(\mathcal{H}_{x_{j}}^{\bullet} \mathcal{IC}(\mathbb{1}_{C_{i}})\right).\] _Equivalently, the change of basis matrix in \(K\operatorname{Rep}_{\lambda}^{\operatorname{fl}}(G)\) from standard representations to irreducible representations is computed by the Euler characteristics of stalks of simple objects in \(\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\):_ \[m_{\lambda}=\,^{t}c_{\lambda}.\]

Hypothesis 2.1 was first articulated in [21]. It also appears in [1, Chapter 15] for real groups and in [24, Section 8] for real and \(p\)-adic groups, though there are some sign errors in the latter. For general linear groups, Hypothesis 2.1 is a folklore theorem, often attributed to [13] or [23]. 
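For orientation, here is the smallest nontrivial check of Hypothesis 2.1, worked out under the conventions above; it is a standard example (with \(\operatorname{rank}\) read as the Euler characteristic of the stalk complex), not part of the argument of this paper. Take \(G=\operatorname{GL}_{2}\) and \(\lambda(w)=\operatorname{diag}(|w|^{1/2},|w|^{-1/2})\).

```latex
% Spectral side: Pi_lambda(G) = { pi_0 = trivial, pi_1 = Steinberg }.
\[
[\Delta(\pi_{0})]=[\operatorname{Ind}_{B}^{\operatorname{GL}_{2}}(|\cdot|^{1/2}\otimes|\cdot|^{-1/2})]
=[\pi_{0}]+[\pi_{1}],
\qquad
[\Delta(\pi_{1})]=[\pi_{1}],
\qquad
m_{\lambda}=\begin{pmatrix}1&0\\ 1&1\end{pmatrix}.
\]
% Geometric side: V_lambda is the line spanned by E_{12}, and H_lambda is the
% diagonal torus acting by scaling; orbits C_0 = {0} (dim 0), C_1 = G_m (dim 1).
% IC(1_{C_0}) is the skyscraper at 0 and IC(1_{C_1}) = 1_{A^1}[1], so
\[
c_{\lambda}=\begin{pmatrix}1&1\\ 0&1\end{pmatrix},
\qquad
m_{\lambda}={}^{t}c_{\lambda},
\]
% as Hypothesis 2.1 predicts.
```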
More recently, Hypothesis 2.1, as it applies here, is also asserted in [25, Theorem E, (b) and (c)]. In this paper we take Hypothesis 2.1 as given. Using this notation, we revisit the matrix \(c_{\lambda}\) from Section 2.4 and write \[[\mathcal{IC}(\mathbb{1}_{C_{j}})]=(-1)^{\dim(C_{j})}\sum_{i\in I}c_{ij}[ \mathbb{1}_{C_{i}}^{\natural}]\] in \(K\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\). Thus, \(c_{\lambda}\) is the change of basis matrix for the vector space \(K_{\mathbb{C}}\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\), from the basis \(\{[\mathbb{1}^{\natural}_{C_{i}}]\mid i\in I\}\) to the basis \(\{[\mathcal{IC}(\mathbb{1}_{C_{i}})[-\dim C_{i}]]\mid i\in I\}\). Likewise, \[[\mathbb{1}^{\natural}_{C_{j}}]=\sum_{i\in I}({c_{\lambda}}^{-1})_{ij}(-1)^{ \dim(C_{i})}[\mathcal{IC}(\mathbb{1}_{C_{i}})]\] in \(K_{\mathbb{C}}\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\). Since standard representations form a basis for the Grothendieck group \(K\operatorname{Rep}_{\lambda}^{\operatorname{fl}}(G)\), it is natural to ask which objects of \(K\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\) are dual to this basis, under the pairing of Equation (4). In the lemma below we use Hypothesis 2.1 to show that standard sheaves in \(K\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\) are dual to standard representations \([\Delta(\pi)]\) in \(K\operatorname{Rep}_{\lambda}^{\operatorname{fl}}(G)\). 
**Lemma 2.2**.: For any \(\pi\in\Pi_{\lambda}(G)\) and any \(H_{\lambda}\)-orbit \(C\) in \(V_{\lambda}\), \[\langle[\Delta(\pi)],[\mathbb{1}^{\natural}_{C}]\rangle_{\lambda}=\begin{cases} 1&[\mathcal{P}(\pi)]=[\mathcal{IC}(\mathbb{1}_{C})];\\ 0&\text{otherwise.}\end{cases}\] Proof.: \[\begin{split}\langle[\Delta(\pi_{j})],[\mathbb{1}^{\natural}_{C_{i}}]\rangle_{\lambda} &=\langle\sum_{k\in I}m_{kj}[\pi_{k}],\sum_{l\in I}(-1)^{\dim(C_{l})}({c_{\lambda}}^{-1})_{li}[\mathcal{IC}(\mathbb{1}_{C_{l}})]\rangle_{\lambda}\\ &=\sum_{k,l\in I}m_{kj}\ (-1)^{\dim(C_{l})}({c_{\lambda}}^{-1})_{li}\langle[\pi_{k}],[\mathcal{IC}(\mathbb{1}_{C_{l}})]\rangle_{\lambda}\\ &=\sum_{l\in I}m_{lj}\ (-1)^{\dim(C_{l})}({c_{\lambda}}^{-1})_{li}(-1)^{\dim(C_{l})}\\ &=\sum_{l\in I}({c_{\lambda}}^{-1})_{li}\ m_{lj}\\ &=\sum_{l\in I}({\,^{t}c_{\lambda}}^{-1})_{il}\ m_{lj}\\ &=({\,^{t}c_{\lambda}}^{-1}\ m_{\lambda})_{ij}.\end{split}\] By Hypothesis 2.1, \({\,^{t}c_{\lambda}}^{-1}=m_{\lambda}^{-1}\), so \[({\,^{t}c_{\lambda}}^{-1}\ m_{\lambda})_{ij}=(m_{\lambda}^{-1}\ m_{\lambda})_{ij}=\begin{cases}1&i=j\\ 0&i\neq j.\end{cases}\]

### Alternate form of Vogan's conjecture on A-packets

Now let \(\psi:W^{\prime\prime}_{F}\to\widehat{G}\) be an Arthur parameter for \(G\). Let \(\lambda\coloneqq\lambda_{\psi}:W_{F}\to\widehat{G}\) be its infinitesimal parameter, as defined in Section 1.3. Based on [1, Definition 2, §8.2], define \(\eta^{\operatorname{Evs}}_{\psi}\in K\operatorname{Rep}_{\lambda}(G)\) by \[\eta^{\operatorname{Evs}}_{\psi}\coloneqq(-1)^{d(\psi)}\sum_{\pi\in\Pi_{ \lambda}(G)}(-1)^{d(\pi)}\operatorname{rank}\left(\operatorname{Evs}_{\psi} \mathcal{P}(\pi)\right)\ [\pi],\] where \(d(\psi)\coloneqq\dim(C_{\psi})\) and \(d(\pi)=\dim(C_{\phi_{\pi}})\). Recall that the classes \([\pi]\), as \(\pi\) ranges over \(\Pi_{\lambda}(G)\), form a basis for \(K\operatorname{Rep}^{\operatorname{fl}}_{\lambda}(G)\). 
**Proposition 2.3**.: _For all \(\mathcal{F}\in D_{H_{\lambda}}(V_{\lambda})\),_ \[\langle\eta^{\operatorname{Evs}}_{\psi},[\mathcal{F}]\rangle_{\lambda}=(-1)^{d (\psi)}\operatorname{rank}\left(\operatorname{Evs}_{\psi}\mathcal{F}\right).\] Proof.: It is enough to prove the proposition in the case that \(\mathcal{F}\) is a simple object in \(\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\). To see this, note that classes of simple objects in \(\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\) form a basis for \(K\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\), and since \(\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\) is a heart of \(\operatorname{D}_{H_{\lambda}}(V_{\lambda})\), the Grothendieck groups coincide: \(K\operatorname{Per}_{H_{\lambda}}(V_{\lambda})=K\operatorname{D}_{H_{\lambda}} (V_{\lambda})\). Moreover, the object \(\operatorname{Evs}_{\psi}\mathcal{F}\) depends only on the class of \(\mathcal{F}\) in \(K\operatorname{D}_{H_{\lambda}}(V_{\lambda})\). So now we assume \(\mathcal{F}\) is a simple object in \(\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\). Recall that every simple object in this category takes the form \(\mathcal{P}(\pi_{i})\) for some \(\pi_{i}\in\Pi_{\lambda}(G)\). 
We now prove the Proposition for \(\mathcal{F}=\mathcal{P}(\pi_{i})\): \[\begin{split}\langle\eta^{\operatorname{Evs}}_{\psi},[\mathcal{P}(\pi_{i})]\rangle_{\lambda}&=\langle(-1)^{d(\psi)}\sum\limits_{\pi\in\Pi_{\lambda}(G)}(-1)^{d(\pi)}\operatorname{rank}\left(\operatorname{Evs}_{\psi}\mathcal{P}(\pi)\right)[\pi],[\mathcal{P}(\pi_{i})]\rangle_{\lambda}\\ &=(-1)^{d(\psi)}(-1)^{d(\pi_{i})}\operatorname{rank}\left(\operatorname{Evs}_{\psi}\mathcal{P}(\pi_{i})\right)\langle[\pi_{i}],[\mathcal{P}(\pi_{i})]\rangle_{\lambda}\\ &=(-1)^{d(\psi)}(-1)^{d(\pi_{i})}\operatorname{rank}\left(\operatorname{Evs}_{\psi}\mathcal{P}(\pi_{i})\right)(-1)^{d(\pi_{i})}\qquad\text{by Equation (4)}\\ &=(-1)^{d(\psi)}\operatorname{rank}\left(\operatorname{Evs}_{\psi}\mathcal{P}(\pi_{i})\right).\end{split}\]

Write \(\widehat{M}=\widehat{M}_{1}\times\cdots\times\widehat{M}_{k}\) with \(\widehat{M}_{i}=\mathrm{GL}_{n_{i}}\) and let \(\lambda_{i}:W_{F}\to\widehat{M}_{i}\) be the composition of \(\lambda_{M}:W_{F}\to\widehat{M}\) with the projection \(\widehat{M}\to\widehat{M}_{i}\). Now factor \(V_{\lambda_{M}}\): \[V_{\lambda_{M}}=V_{\lambda_{1}}\times\cdots\times V_{\lambda_{k}},\] where \(V_{\lambda_{i}}\) is the moduli space of Langlands parameters for \(M_{i}\) with infinitesimal parameter \(\lambda_{i}\). Likewise, set \(H_{\lambda_{i}}\coloneqq Z_{\widehat{M}_{i}}(\lambda_{i})\) so \(H_{\lambda_{M}}=H_{\lambda_{1}}\times\cdots\times H_{\lambda_{k}}.\) Then \[\mathrm{Per}_{H_{\lambda_{M}}}(V_{\lambda_{M}})\cong\mathrm{Per}_{H_{\lambda_ {1}}}(V_{\lambda_{1}})\boxtimes\cdots\boxtimes\mathrm{Per}_{H_{\lambda_{k}}}(V _{\lambda_{k}})\] (finite product of categories). Since \(\sigma\) is irreducible, \(\sigma=\sigma_{1}\boxtimes\cdots\boxtimes\sigma_{k}\) where each \(\sigma_{i}\) is irreducible. Now the Langlands correspondence for \(M\) attaches \[\mathcal{P}(\sigma)=\mathcal{P}(\sigma_{1})\boxtimes\cdots\boxtimes\mathcal{ P}(\sigma_{k})\] to \(\sigma\). 
Finally, recall \[\psi_{M}=\psi_{1}\boxtimes\cdots\boxtimes\psi_{k},\] and write \(x_{\psi_{M}}=(x_{\psi_{1}},\ldots,x_{\psi_{k}})\in V_{\lambda_{M}}\) for the corresponding elements in the moduli space; likewise write \(y_{\psi_{M}}=(y_{\psi_{1}},\ldots,y_{\psi_{k}})\in V_{\lambda_{M}}^{*}\). By the theorem of Thom-Sebastiani [11][1], \[\left(\mathrm{R}\Psi_{y_{\psi_{M}}}\,\mathcal{P}(\sigma)\right)_{ x_{\psi_{M}}} = \left(\mathrm{R}\Psi_{(y_{\psi_{1}},\ldots,y_{\psi_{k}})}\, \mathcal{P}(\sigma_{1})\boxtimes\cdots\boxtimes\mathcal{P}(\sigma_{k})\right) _{(x_{\psi_{1}},\ldots,x_{\psi_{k}})}=\left(\mathrm{R}\Psi_{y_{\psi_{1}}}\,\mathcal{P}(\sigma_{1}) \right)_{x_{\psi_{1}}}\boxtimes\cdots\boxtimes\left(\mathrm{R}\Psi_{y_{\psi_{ k}}}\,\mathcal{P}(\sigma_{k})\right)_{x_{\psi_{k}}}.\] Thus, \[\left(\mathrm{R}\Psi_{y_{\psi_{M}}}\,\mathcal{P}(\sigma)\right)_{x_{\psi_{M}}} \neq 0\qquad\Longleftrightarrow\qquad\left(\mathrm{R}\Psi_{y_{\psi_{i}}}\, \mathcal{P}(\sigma_{i})\right)_{x_{\psi_{i}}}\neq 0,\ \forall i=1,\ldots,k.\] Equivalently, \[\mathrm{Evs}_{\psi_{M}}\,\mathcal{P}(\sigma)\neq 0\qquad\Longleftrightarrow \qquad\mathrm{Evs}_{\psi_{i}}\,\mathcal{P}(\sigma_{i})\neq 0,\ \forall i=1,\ldots,k.\] By [10], \[\Pi_{\psi_{i}}^{\mathrm{ABV}}(M_{i})=\Pi_{\psi_{i}}(M_{i})=\{\pi_{\psi_ {i}}\},\ \forall i=1,\ldots,k.\] It now follows that \[\Pi_{\psi_{M}}^{\mathrm{ABV}}(M)=\Pi_{\psi_{M}}(M)=\{\pi_{\psi_{M}}\}.\] Recall the definition: \[\eta_{\psi_{M}}^{\operatorname{Evs}}\coloneqq(-1)^{d(\psi_{M})}\sum_{\sigma\in \Pi_{\lambda_{M}}(M)}(-1)^{d(\sigma)}\operatorname{rank}\left(\mathrm{Evs}_{ \psi_{M}}\,\mathcal{P}(\sigma)\right)\ [\sigma].\] We have just seen that \(\Pi_{\psi_{M}}^{\mathrm{ABV}}(M)=\{\pi_{\psi_{M}}\}\). 
Therefore, \[\eta_{\psi_{M}}^{\operatorname{Evs}}=(-1)^{d(\psi_{M})-d(\pi_{\psi_{M}})} \operatorname{rank}\left(\mathrm{Evs}_{\psi_{M}}\,\mathcal{P}(\pi_{\psi_{M}}) \right)\ [\pi_{\psi_{M}}].\] By [11, §8.2], \[\operatorname{rank}\left(\mathrm{Evs}_{\psi_{M}}\,\mathcal{P}(\pi_{\psi_{M}}) \right)=1,\] so, since \(C_{\pi_{\psi_{M}}}=C_{\psi_{M}}\) gives \(d(\pi_{\psi_{M}})=d(\psi_{M})\), \[\eta^{\operatorname{Evs}}_{\psi_{M}}=[\pi_{\psi_{M}}],\] as claimed.

## 4. Fixed-point Formula

The proof of the main result, Theorem 6.1, uses a fixed-point formula, explained in this section. From Section 1, recall that \(V_{\lambda_{M}}\) is the subvariety of \(V_{\lambda}\) fixed by \(\operatorname{Ad}(s)\), where \(s\in\widehat{G}\) is a finite-order element such that \(\widehat{M}=Z_{\widehat{G}}(s)\): \(V_{\lambda_{M}}=V_{\lambda}^{s}\). Let \[\varepsilon:V_{\lambda_{M}}\hookrightarrow V_{\lambda} \tag{6}\] be the obvious inclusion. Let \(\varepsilon^{*}:\operatorname{D}_{H_{\lambda}}(V_{\lambda})\to\operatorname{D }_{H_{\lambda_{M}}}(V_{\lambda_{M}})\) be the equivariant restriction functor of equivariant derived categories. We will also use the notation \[\mathcal{F}|_{V_{\lambda_{M}}}\coloneqq\varepsilon^{*}\mathcal{F}.\] While \(\varepsilon^{*}\) is an exact functor, it does not take perverse sheaves to perverse sheaves.

**Lemma 4.1**.: Let \(\psi\) and \(\psi_{M}\) be as above. 
For all \(\mathcal{F}\in\operatorname{D}_{H_{\lambda}}(V_{\lambda})\), \[(-1)^{d(\psi)}\operatorname{rank}\left(\operatorname{Evs}_{\psi}\mathcal{F} \right)=(-1)^{d(\psi_{M})}\operatorname{rank}\left(\operatorname{Evs}_{\psi_{ M}}\mathcal{F}|_{V_{\lambda_{M}}}\right).\] Proof.: By [13, Proposition 7.8 and Definition 2], the functor \(\operatorname{Evs}_{\psi}\) is related to vanishing cycles by \[\operatorname{Evs}_{\psi}\mathcal{F}=(-1)^{d(\hat{\psi})-\dim V_{\lambda}} \left(\operatorname{R\!\Psi}_{y_{\psi}}[-1]\mathcal{F}\right)_{x_{\psi}},\] where

* \(x_{\psi}\) is the point for \(\phi_{\psi}\) in the moduli space \(V_{\lambda}\);
* \(y_{\psi}\) is the point in the dual moduli space \(V_{\lambda}^{*}\) matching the Langlands parameter \(\phi_{\hat{\psi}}\), where \(\hat{\psi}(w,x,y)\coloneqq\psi(w,y,x)\);
* \(\operatorname{R\!\Psi}_{y_{\psi}}\) is Deligne's vanishing cycles functor; and
* \(d(\hat{\psi})\) is the dimension of the \(H_{\lambda}\)-orbit of \(y_{\psi}\) in \(V_{\lambda}^{*}\).

Next, recall the relation between vanishing cycles and local Morse groups, as for example in [1, Part II, Chapter 6, Section 6.A.2], so \[\left(\operatorname{R\!\Psi}_{y_{\psi}}[-1]\mathcal{F}\right)_{x_{\psi}}=A_{y _{\psi}}^{\bullet}(\mathcal{F}),\] where we view \(y_{\psi}\in T_{C_{\psi},x_{\psi}}^{*}(V_{\lambda})\). Here we use [13, Proposition 6.1] to see that \((x_{\psi},y_{\psi})\in T_{H_{\lambda}}^{*}(V_{\lambda})\) is regular, so \(y_{\psi}\) is non-degenerate in the sense of Morse theory. Combining these observations, it follows that \[\mathcal{H}^{i}\left(\operatorname{Evs}_{\psi}\mathcal{F}[\dim C_{\psi}] \right)=H^{i}(J,K;\mathcal{F}),\] where \((J,K)\) is normal Morse data corresponding to \(y_{\psi}\) as a linear functional on \(V_{\lambda}\), as in [1, Part II, Chapter 6, Section 6.A.1]. Now recall that \(M\) was chosen from \(\psi\) precisely so that its image lies in \(\widehat{M}=Z_{\widehat{G}}(s)\) and, consequently, \(s\in Z_{\widehat{G}}(\psi)\). 
Recall also that for \(G=\operatorname{GL}_{n}\), the group \(Z_{\widehat{G}}(\psi)\) is connected, so \(A_{\psi}=\pi_{0}(Z_{\widehat{G}}(\psi))\) is trivial. This allows us to interpret \(\operatorname{rank}\operatorname{Evs}_{\psi}\mathcal{F}\) as a Lefschetz number: \[\operatorname{rank}\left(\operatorname{Evs}_{\psi}\mathcal{F}\right)=\operatorname {trace}\left(s,\operatorname{Evs}_{\psi}\mathcal{F}\right)=(-1)^{d(\psi)}\sum_{ i}(-1)^{i}\operatorname{trace}\left(s,H^{i}(J,K;\mathcal{F})\right).\] Arguing as in the proof of [1, Theorem 25.8], which makes essential use of [1], it now follows that \[\sum_{i}(-1)^{i}\operatorname{trace}\left(s,H^{i}(J,K;\mathcal{F})\right)=\sum _{i}(-1)^{i}\operatorname{trace}\left(s,H^{i}(J^{s},K^{s};\mathcal{F})\right);\] in other words, \[(-1)^{d(\psi)}\operatorname{trace}(s,\operatorname{Evs}_{\psi}\mathcal{F})=(- 1)^{d(\psi_{M})}\operatorname{trace}(s,\operatorname{Evs}_{\psi_{M}}\varepsilon^{ *}\mathcal{F});\] equivalently, \[(-1)^{d(\psi)}\operatorname{rank}\operatorname{Evs}_{\psi}\mathcal{F}=(-1)^{d (\psi_{M})}\operatorname{rank}\operatorname{Evs}_{\psi_{M}}\varepsilon^{*} \mathcal{F}.\] Here we have also used that, by construction, \(\varepsilon(x_{\psi_{M}})=x_{\psi}\) and likewise, \(y_{\psi_{M}}\) maps to \(y_{\psi}\) under \(V_{\lambda_{M}}^{*}\hookrightarrow V_{\lambda}^{*}\), and by [1, Proposition 6.1], \((x_{\psi},y_{\psi})\in T_{H_{\lambda}}^{*}(V_{\lambda})\) is regular, while the same result shows \((x_{\psi_{M}},y_{\psi_{M}})\in T_{H_{\lambda_{M}}}^{*}(V_{\lambda_{M}})\) is regular.

**Proposition 4.2**.: _Let \(M\) be any Levi subgroup of \(G\) and let \(\psi_{M}\) be any Arthur parameter for \(M\); let \(\psi\) be its lift to \(G\). Let \(\lambda_{M}\) (resp. \(\lambda\)) be the infinitesimal parameter of \(\psi_{M}\) (resp. \(\psi\)). 
Then_ \[\langle\eta_{\psi}^{\operatorname{Evs}},[\mathcal{F}]\rangle_{\lambda}= \langle\eta_{\psi_{M}}^{\operatorname{Evs}},[\mathcal{F}|_{V_{\lambda_{M}}}] \rangle_{\lambda_{M}},\] _for every \(\mathcal{F}\in\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\)._ Proof.: For all \(\mathcal{F}\in D_{H_{\lambda}}(V_{\lambda})\), \[\begin{split}\langle\eta_{\psi}^{\operatorname{Evs}},[\mathcal{F}]\rangle_{\lambda}&=(-1)^{d(\psi)}\operatorname{rank}\left(\operatorname{Evs}_{\psi}\mathcal{F}\right)\qquad\text{by Proposition 2.3}\\ &=(-1)^{d(\psi_{M})}\operatorname{rank}\left(\operatorname{Evs}_{\psi_{M}}\mathcal{F}|_{V_{\lambda_{M}}}\right)\qquad\text{by Lemma 4.1}\\ &=\langle\eta_{\psi_{M}}^{\operatorname{Evs}},[\mathcal{F}|_{V_{\lambda_{M}}}]\rangle_{\lambda_{M}}\qquad\text{by Proposition 2.3, applied to $M$.}\end{split}\]

Following the nomenclature of [1, Definition 26.18] for real groups, we refer to the linear transformation \(\operatorname{Lift}_{M}^{G}\) as _endoscopic lifting_. In this section we show that \(\operatorname{Lift}_{M}^{G}\) coincides with the linear transformation given by \(\operatorname{Ind}_{P}^{G}\) on the level of Grothendieck groups. In Section 7 we see that it also coincides with Langlands-Shelstad transfer.

### Endoscopic lifting of standard representations

**Proposition 5.1**.: _Let \(\phi_{M}\) be any Langlands parameter for \(M\) with infinitesimal parameter \(\lambda_{M}\). Let \(\pi_{\phi_{M}}\) be the corresponding irreducible representation of \(M(F)\). Then_ \[\operatorname{Lift}_{M}^{G}\left(\left[\Delta(\pi_{\phi_{M}})\right]\right)=[ \Delta(\pi_{\phi})],\] _where \(\phi\) is the Langlands parameter for \(G\) obtained by lifting \(\phi_{M}\) via \(\widehat{M}\hookrightarrow\widehat{G}\)._ Proof.: We prove the proposition by showing \[\langle\operatorname{Lift}_{M}^{G}\left(\left[\Delta(\pi_{\phi_{M}})\right] \right),[\mathcal{F}]\rangle_{\lambda}=\langle[\Delta(\pi_{\phi})],[\mathcal{ F}]\rangle_{\lambda},\] for every \(\mathcal{F}\in\operatorname{D}_{H_{\lambda}}(V_{\lambda})\). 
To do this, it is sufficient to take \(\mathcal{F}=\mathbb{1}_{C}^{\natural}\) and allow \(C\) to range over \(H_{\lambda}\)-orbits in \(V_{\lambda}\), since these sheaves provide a basis for the Grothendieck group. Observe that \[\varepsilon^{*}\left(\mathbb{1}_{C}^{\natural}\right)=\mathbb{1}_{C\cap V_{ \lambda_{M}}}^{\natural}.\] Consequently, in \(K\operatorname{Per}_{H_{\lambda_{M}}}(V_{\lambda_{M}})\), \[[\varepsilon^{*}\left(\mathbb{1}_{C}^{\natural}\right)]=\sum_{D}[\mathbb{1}_{ D}^{\natural}]\] where the sum is taken over \(H_{\lambda_{M}}\)-orbits \(D\) in \(V_{\lambda_{M}}\) appearing in \(C\cap V_{\lambda_{M}}\), or in other words, over all orbits \(D\) in \(V_{\lambda_{M}}\) whose saturation in \(V_{\lambda}\) is \(C\). Now, \[\begin{split}\langle\operatorname{Lift}_{M}^{G}\left(\left[\Delta(\pi_{\phi_{M}})\right]\right),[\mathbb{1}_{C}^{\natural}]\rangle_{\lambda}&=\langle[\Delta(\pi_{\phi_{M}})],\varepsilon^{*}[\mathbb{1}_{C}^{\natural}]\rangle_{\lambda_{M}}\\ &=\langle[\Delta(\pi_{\phi_{M}})],\sum_{D}[\mathbb{1}_{D}^{\natural}]\rangle_{\lambda_{M}}\\ &=\sum_{D}\langle[\Delta(\pi_{\phi_{M}})],[\mathbb{1}_{D}^{\natural}]\rangle_{\lambda_{M}}.\end{split}\] Now, by Lemma 2.2 adapted from \(G\) to \(M\), \(\langle[\Delta(\pi_{\phi_{M}})],[\mathbb{1}_{D}^{\natural}]\rangle_{\lambda_{M}}\) is non-zero only when \(D\) is the \(H_{\lambda_{M}}\)-orbit of \(\phi_{M}\in V_{\lambda_{M}}\), in which case the pairing gives the value \(1\). 
Therefore, \[\langle\operatorname{Lift}_{M}^{G}\left(\left[\Delta(\pi_{\phi_{M}})\right] \right),[\mathbb{1}_{C}^{\natural}]\rangle_{\lambda} = \begin{cases}1&\phi\in C\\ 0&\phi\not\in C.\end{cases}\] On the other hand, by Lemma 2.2, \[\langle[\Delta(\pi_{\phi})],[\mathbb{1}_{C}^{\natural}]\rangle_{\lambda} = \begin{cases}1&\phi\in C\\ 0&\phi\not\in C.\end{cases}\] It follows that \[\langle\operatorname{Lift}_{M}^{G}\left([\Delta(\pi_{\phi_{M}})]\right),[ \mathbb{1}_{C}^{\natural}]\rangle_{\lambda}=\langle[\Delta(\pi_{\phi})],[ \mathbb{1}_{C}^{\natural}]\rangle_{\lambda},\] for every \(H_{\lambda}\)-orbit \(C\) in \(V_{\lambda}\), and therefore \[\langle\operatorname{Lift}_{M}^{G}\left([\Delta(\pi_{\phi_{M}})]\right),[ \mathcal{F}]\rangle_{\lambda}=\langle[\Delta(\pi_{\phi})],[\mathcal{F}] \rangle_{\lambda},\] for every \(\mathcal{F}\in\operatorname{D}_{H_{\lambda}}(V_{\lambda})\). Since the pairing is perfect, it follows that \[\operatorname{Lift}_{M}^{G}\left([\Delta(\pi_{\phi_{M}})]\right)=[\Delta(\pi_ {\phi})].\]

### Comparison with parabolic induction

In this subsection, we make precise and prove the claim that parabolic induction of a standard representation is a standard representation. We then show that endoscopic lifting can be characterized by parabolic induction. We use the theory and notation established in Section 2.1.2 for irreducible and standard representations. Let \(\pi\) be an irreducible representation of \(G(F)\). The standard representation \(\Delta(\pi)\) of \(\pi\) can be extracted in terms of multisegments using Zelevinsky theory. This can be seen directly from [2, Theorem 2.2.2], but we also clarify this explicitly in the lemma below.

**Lemma 5.2**.: Let \(\pi\) be a smooth irreducible representation of \(G=\operatorname{GL}_{n}(F)\) with multisegment \(\alpha=\{\Delta_{1},\ldots,\Delta_{k}\}\) arranged so that the segments satisfy the "does not precede" condition. 
Then \[\Delta(\pi)\simeq\operatorname{Ind}_{P}^{G}(Q(\Delta_{1})\otimes Q(\Delta_{2} )\otimes\cdots\otimes Q(\Delta_{k})),\] where \(P\) is the standard parabolic specified by the \(\Delta_{i}\)s. Proof.: We use Kudla's expository work [2], specifically the arguments in pages 372-374. Recall from 2.1.2 that \(\pi\) is the unique irreducible quotient of \(\operatorname{Ind}_{P}^{G}(Q(\Delta_{1})\otimes Q(\Delta_{2})\otimes\cdots \otimes Q(\Delta_{k}))\), where the \(\Delta_{i}\)s are arranged so that they satisfy the "does not precede" condition. Each \(Q(\Delta_{i})\) is an essentially tempered representation, which means there is an \(x_{i}\in\mathbb{R}\) so that \(Q(\Delta_{i})\simeq Q(\Delta_{i}^{\prime})(x_{i})\) where \(Q(\Delta_{i}^{\prime})\) is tempered. Thus we have \[\begin{array}{l}\operatorname{Ind}_{P}^{G}(Q(\Delta_{1})\otimes Q(\Delta_{2 })\otimes\cdots\otimes Q(\Delta_{k}))\\ \simeq\operatorname{Ind}_{P}^{G}(Q(\Delta_{1}^{\prime})(x_{1})\otimes Q( \Delta_{2}^{\prime})(x_{2})\otimes\cdots\otimes Q(\Delta_{k}^{\prime})(x_{k}) ).\end{array}\] Since the \(Q(\Delta_{i}^{\prime})\)s are square-integrable, none of the \(\Delta_{i}^{\prime}\)s can be linked. Moreover, we must have \(x_{1}\geq x_{2}\geq\cdots\geq x_{k}\) as the \(\Delta_{i}\)s satisfy the "does not precede" condition. If \(x_{i}=x_{i+1}\), then we can replace \(Q(\Delta_{i}^{\prime})(x_{i})\otimes Q(\Delta_{i+1}^{\prime})(x_{i+1})\) with \(Q(\Delta_{i}^{\prime},\Delta_{i+1}^{\prime})(x_{i})\), which is the full induced representation and is an irreducible tempered representation twisted by \(x_{i}\). Thus, we obtain a sequence \(x_{1}>\cdots>x_{k^{\prime}}\) and tempered representations \(\tau_{1}=Q(\alpha_{1}),\ldots,\tau_{k^{\prime}}=Q(\alpha_{k^{\prime}})\) where the \(\alpha_{i}\)s partition the set \(\{\Delta_{1}^{\prime},\Delta_{2}^{\prime},\ldots,\Delta_{k}^{\prime}\}\). 
This gives us \[\operatorname{Ind}_{P}^{G}(Q(\Delta_{1})\otimes Q(\Delta_{2})\otimes\cdots \otimes Q(\Delta_{k}))\simeq\operatorname{Ind}_{P^{\prime}}^{G}(\tau_{1}(x_{1} )\otimes\cdots\otimes\tau_{k^{\prime}}(x_{k^{\prime}})). \tag{9}\] Here \(P^{\prime}=M^{\prime}N^{\prime}\) is the standard parabolic subgroup specified by the \(\alpha_{i}\)s. By observing that \(\tau_{1}\otimes\cdots\otimes\tau_{k^{\prime}}\) is a tempered representation of \(M^{\prime}\) and \(x_{1}>\cdots>x_{k^{\prime}}\) specifies a \(P^{\prime}\)-positive unramified character of \(M^{\prime}\), we see that the representations in (9) are standard representations. The result now follows by using the fact that \(\pi\) is the unique irreducible quotient of \(\operatorname{Ind}_{P}^{G}(Q(\Delta_{1})\otimes Q(\Delta_{2})\otimes\cdots \otimes Q(\Delta_{k}))\). The upshot of this lemma is that we can talk about standard representations purely in terms of multisegments, which makes it easy to pin down the standard representation obtained after parabolic induction. We prove the implicit claim in this statement below.

**Proposition 5.3**.: _Let \(M=\operatorname{GL}_{m_{1}}\times\operatorname{GL}_{m_{2}}\times\cdots\times \operatorname{GL}_{m_{k}}\) be a Levi subgroup of \(G\). Let \(P\) denote the standard parabolic of \(G\) with Levi component \(M\). Then, for any \(\pi_{M}\in\Pi_{\lambda_{M}}(M)\), \([\operatorname{Ind}_{P}^{G}\left(\Delta(\pi_{M})\right)]\) is the image of a standard representation of \(G\) in \(K_{\mathbb{C}}\operatorname{Rep}_{\lambda}(G)\). Moreover, \(\operatorname{Ind}_{P}^{G}\left(\Delta(\pi_{M})\right)\) has a unique composition factor \(\pi\) so that_ \[[\Delta(\pi)]=[\operatorname{Ind}_{P}^{G}\left(\Delta(\pi_{M})\right)].\] Proof.: A representation \(\pi_{M}\in\Pi_{\lambda_{M}}(M)\) can be written as \(\pi_{1}\otimes\cdots\otimes\pi_{k}\), where \(\pi_{i}\in\Pi_{\lambda_{i}}(\operatorname{GL}_{m_{i}})\) for \(1\leq i\leq k\). 
Moreover, \(\Delta(\pi_{M})=\Delta(\pi_{1})\otimes\cdots\otimes\Delta(\pi_{k})\). It suffices to prove this proposition for the case \(k=2\). Each \(\pi_{i}\) has the associated data of \(\Delta_{1}^{i},\ldots,\Delta_{k_{i}}^{i}\) and \(x_{1}^{i}>x_{2}^{i}>\cdots>x_{k_{i}}^{i}\) where the \(Q(\Delta_{j}^{i})\)s are irreducible tempered representations. Then, from Lemma 5.2, \(\Delta(\pi_{i})=\operatorname{Ind}_{P_{i}}^{\operatorname{GL}_{m_{i}}}(Q( \Delta_{1}^{i})(x_{1}^{i})\otimes\cdots\otimes Q(\Delta_{k_{i}}^{i})(x_{k_{i} }^{i}))\), where \(P_{i}\) is specified by the \(Q(\Delta_{j}^{i})\)s. Thus, we have \[\begin{array}{l}\Delta(\pi_{1})\otimes\Delta(\pi_{2})\\ =\operatorname{Ind}_{P_{1}}^{\operatorname{GL}_{m_{1}}}(Q(\Delta_ {1}^{1})(x_{1}^{1})\otimes\cdots\otimes Q(\Delta_{k_{1}}^{1})(x_{k_{1}}^{1})) \otimes\operatorname{Ind}_{P_{2}}^{\operatorname{GL}_{m_{2}}}(Q(\Delta_{1}^{2 })(x_{1}^{2})\otimes\cdots\otimes Q(\Delta_{k_{2}}^{2})(x_{k_{2}}^{2})).\end{array}\] Let \(P\) be the standard parabolic subgroup of \(G\) with Levi component \(\operatorname{GL}_{m_{1}}\times\operatorname{GL}_{m_{2}}\). Applying the exact functor \(\operatorname{Ind}_{P}^{G}\) throughout, we get \[\begin{array}{l}\operatorname{Ind}_{P}^{G}(\Delta(\pi_{1})\otimes\Delta(\pi_{2}))\\ \simeq\operatorname{Ind}_{P_{12}}^{G}(Q(\Delta_{1}^{1})(x_{1}^{1} )\otimes\cdots\otimes Q(\Delta_{k_{1}}^{1})(x_{k_{1}}^{1})\otimes Q(\Delta_{1 }^{2})(x_{1}^{2})\otimes\cdots\otimes Q(\Delta_{k_{2}}^{2})(x_{k_{2}}^{2})).\end{array}\] Here \(P_{12}\subset P\) is the standard parabolic subgroup specified by the \(Q(\Delta_{j}^{i})\)s and the identification follows from transitivity of induction. 
We rearrange the \(Q(\Delta_{j}^{i})(x_{j}^{i})\)s so that the \(x_{j}^{i}\) are decreasing, and whenever two consecutive \(x_{j}^{i}\)s are equal, we may replace \(Q(\Delta_{j}^{i})(x_{j}^{i})\otimes Q(\Delta_{j^{\prime}}^{i^{\prime}})(x_{j}^{i})\) by \(Q(\Delta_{j}^{i},\Delta_{j^{\prime}}^{i^{\prime}})(x_{j}^{i})\), which is the full induced representation and also an irreducible tempered representation twisted by \(x_{j}^{i}\). This rearrangement does not affect the representative of \(\operatorname{Ind}_{P_{12}}^{\operatorname{GL}_{n}}(Q(\Delta_{1}^{1})(x_{1}^{1})\otimes\cdots\otimes Q(\Delta_{k_{1}}^{1})(x_{k_{1}}^{1})\otimes Q(\Delta_{1}^{2})(x_{1}^{2})\otimes\cdots\otimes Q(\Delta_{k_{2}}^{2})(x_{k_{2}}^{2}))\) in \(K_{\mathbb{C}}\operatorname{Rep}_{\lambda}(G)\); this follows from [21, Theorem 1.2]. Thus, we obtain a decreasing sequence \(y_{1},\ldots,y_{l}\) from the \(x_{j}^{i}\)s and multisets \(\alpha_{1},\ldots,\alpha_{l}\) which partition the \(\Delta_{j}^{i}\)s. Setting \(\tau_{i}=Q(\alpha_{i})\), we may write \[[\operatorname{Ind}_{P}^{\operatorname{GL}_{n}(F)}(\Delta(\pi_{1})\otimes\Delta(\pi_{2}))]=[\operatorname{Ind}_{P_{12}^{\prime}}^{\operatorname{GL}_{n}(F)}(\tau_{1}(y_{1})\otimes\cdots\otimes\tau_{l}(y_{l}))].\] Here \(P^{\prime}_{12}=M^{\prime}N^{\prime}\) is the standard parabolic subgroup specified by the \(\alpha_{i}\)s. Now \(\tau_{1}\otimes\cdots\otimes\tau_{l}\) is a tempered representation of \(M^{\prime}\) and \(y_{1}>\cdots>y_{l}\) specifies a \(P^{\prime}_{12}\)-positive unramified character of \(M^{\prime}\). This shows that the representation in the above equation is a standard representation of \(G\). We now show that there is a unique choice of \(\pi\) so that \([\Delta(\pi)]=[\operatorname{Ind}_{P}^{G}(\Delta(\pi_{1})\otimes\Delta(\pi_{2}))]\). This is completely determined by the multisegment data of the \(\tau_{i}(y_{i})\)s as we explain below.
For a segment \(\Delta=[\rho(b),\rho(e)]\), set the notation \(\Delta(x)=[\rho(b+x),\rho(e+x)]\). For a multisegment \(\beta=\{\Delta_{1},...,\Delta_{s}\}\), set \(\beta(x)=\{\Delta_{1}(x),\cdots,\Delta_{s}(x)\}\). With this in mind, write \(\alpha=\alpha_{1}(y_{1})\sqcup\alpha_{2}(y_{2})\sqcup\cdots\sqcup\alpha_{l}(y _{l})\) where \(\alpha_{i}\)s and \(y_{i}\)s were determined above. If we write this disjoint union like a concatenation, _i.e.,_ preserve the order of \(\alpha_{i}\)s and the segments within them, then this multisegment satisfies the "does not precede" condition due to the procedure carried out above. This \(\alpha\) corresponds to a unique irreducible representation \(Q(\alpha)\) obtained from Langlands classification via multisegments, which is the unique irreducible quotient of \(\operatorname{Ind}_{P^{\prime}_{12}}^{G}(\tau_{1}(y_{1})\otimes\cdots\otimes \tau_{l}(y_{l}))\). Setting \(\pi=Q(\alpha)\), \(\Delta(\pi)=\operatorname{Ind}_{P^{\prime}_{12}}^{\operatorname{GL}_{n}(F)}( \tau_{1}(y_{1})\otimes\cdots\otimes\tau_{l}(y_{l}))\). Thus, we have \[[\operatorname{Ind}_{P}^{G}(\Delta(\pi_{1})\otimes\Delta(\pi_{2}))]=[\Delta( \pi)]\] in \(K_{\mathbb{C}}\operatorname{Rep}_{\lambda}(G)\). **Remark 5.4**.: In the proof above, observe that \(\alpha\) is given by the disjoint union of the multisegments of \(\pi_{1}\) and \(\pi_{2}\)_after_ appropriate rearrangement to satisfy the "does not precede" condition. Thus, it is easy to see how the multisegment data of the \(\pi_{i}\)s completely determines the representation \(\Delta(\pi)\). This procedure generalizes to \(k\) representations: If \(\pi_{i}\) corresponds to \(\alpha_{i}\), set \(\alpha=\sqcup_{i}\alpha_{i}\) rearranged so that the segments satisfy the "does not precede" condition. 
Then \(\pi=Q(\alpha)\) is the uniquely determined Langlands quotient of \(\operatorname{Ind}_{P}^{G}(\otimes_{\Delta\in\alpha}Q(\Delta))\) so that \[[\Delta(\pi)]=[\operatorname{Ind}_{P}^{G}(\Delta(\pi_{1})\otimes\cdots\otimes\Delta(\pi_{k}))]=[\operatorname{Ind}_{P}^{G}(\Delta(\pi_{M}))],\] where \(\Delta(\pi_{M})=\Delta(\pi_{1})\otimes\cdots\otimes\Delta(\pi_{k})\). Once again, we are able to rearrange the segments of \(\alpha\) because we are working with full induced representations in the Grothendieck group, and can therefore invoke [28, Theorem 1.2], which asserts that this rearrangement should not change the representative in the Grothendieck group. This property of induction coincides with endoscopic lifting. Thus, we see that endoscopic lifting is characterized by parabolic induction in this case. **Proposition 5.5**.: _Let \(M\simeq\operatorname{GL}_{m_{1}}\times\operatorname{GL}_{m_{2}}\times\cdots\times\operatorname{GL}_{m_{k}}\) be a Levi subgroup of \(G\). Let \(P=MN\) be the standard parabolic subgroup with Levi component \(M\). Then, for any \([\pi]\in K_{\mathbb{C}}\operatorname{Rep}_{\lambda_{M}}(M)\),_ \[\operatorname{Lift}_{M}^{G}([\pi])=[\operatorname{Ind}_{P}^{G}(\pi)].\] Proof.: We show that \(\operatorname{Lift}_{M}^{G}\) and \(\operatorname{Ind}_{P}^{G}\) have the same image on the basis consisting of standard representations of \(K_{\mathbb{C}}\operatorname{Rep}_{\lambda_{M}}(M)\). Let \(\phi\) be a Langlands parameter for \(G\) with infinitesimal parameter \(\lambda\), both of which factor through \(M\). We denote the Langlands and infinitesimal parameters for \(M\) using \(\phi_{M}\) and \(\lambda_{M}\), respectively. Now \(\phi\) determines a multisegment \(\alpha\) whose segments are arranged in an order so that it satisfies the "does not precede" condition, which determines a smooth irreducible representation, \(\pi_{\phi}\), of \(G\).
The parameter \(\phi_{M}\) for \(M\) determines multisegments \(\alpha_{i}\) for \(1\leq i\leq k\), which determine smooth irreducible representations \(\pi_{\phi_{i}}\) of \(\operatorname{GL}_{m_{i}}\) so that \[\pi_{\phi_{M}}=\pi_{\phi_{1}}\otimes\cdots\otimes\pi_{\phi_{k}},\] interpreted as an external tensor product, is a representation of \(M\). Since \(\phi\) factors through \(M\) via \(\phi_{M}\), \(\alpha=\sqcup_{i=1}^{k}\alpha_{i}\), up to rearrangement of segments. Using Remark 5.4, we have that \[[\operatorname{Ind}_{P}^{G}(\Delta(\pi_{\phi_{M}}))]=[\Delta(\pi_{\phi})].\] However, from Lemma 5.1, we have that \[\operatorname{Lift}_{M}^{G}([\Delta(\pi_{\phi_{M}})])=[\Delta(\pi_{\phi})].\] Since the two maps agree on the basis of standard representations, they are the same. Finally, we show that endoscopic lifting identifies the \(A\)-packet of the Levi with the \(A\)-packet of \(G\). **Proposition 5.6**.: _For every \(\mathcal{F}\in D_{H_{\lambda}}(V_{\lambda})\),_ \[\langle\eta_{\psi_{M}},[\mathcal{F}|_{V_{\lambda_{M}}}]\rangle_{\lambda_{M}}=\langle\eta_{\psi},[\mathcal{F}]\rangle_{\lambda};\] _equivalently,_ \[\operatorname{Lift}_{M}^{G}[\pi_{\psi_{M}}]=[\pi_{\psi}].\] Proof.: Let \(P\) be the standard parabolic subgroup of \(G\) with Levi component \(M\). The representation \(\pi_{\psi_{M}}\) is a product of unitary Speh representations. We know that \(\operatorname{Ind}_{P}^{G}(\pi_{\psi_{M}})\) is an irreducible representation of \(G\), see [1, Section 2.4], for example. By matching multisegments of \(\phi_{\psi_{M}}\) and \(\phi_{\psi}\), we have \[[\operatorname{Ind}_{P}^{G}(\pi_{\psi_{M}})]=[\pi_{\psi}]. \tag{10}\] Recall that \(\eta_{\psi}=[\pi_{\psi}]\), as the A-packet for \(\psi\) is a singleton. We know from Proposition 3.1 that \(\Pi_{\psi_{M}}(M)=\{\pi_{\psi_{M}}\}\), which gives us \(\eta_{\psi_{M}}=[\pi_{\psi_{M}}]\).
Now we have \[\begin{array}{lll}\langle\eta_{\psi_{M}},[\mathcal{F}|_{V_{\lambda_{M}}}]\rangle_{\lambda_{M}}&=\langle[\pi_{\psi_{M}}],\varepsilon^{*}[\mathcal{F}]\rangle_{\lambda_{M}}&\Pi_{\psi_{M}}(M)=\{\pi_{\psi_{M}}\},\\ &=\langle\operatorname{Lift}_{M}^{G}[\pi_{\psi_{M}}],[\mathcal{F}]\rangle_{\lambda}&\text{definition of }\operatorname{Lift}_{M}^{G},\\ &=\langle[\operatorname{Ind}_{P}^{G}(\pi_{\psi_{M}})],[\mathcal{F}]\rangle_{\lambda}&\text{Proposition 5.5},\\ &=\langle[\pi_{\psi}],[\mathcal{F}]\rangle_{\lambda}&\text{by (10)},\\ &=\langle\eta_{\psi},[\mathcal{F}]\rangle_{\lambda}&\Pi_{\psi}(G)=\{\pi_{\psi}\}.\end{array}\] ## 6. Main result **Theorem 6.1** (Vogan's conjecture for A-packets for \(p\)-adic general linear groups).: _For every \(p\)-adic field \(F\), every positive integer \(n\) and every Arthur parameter \(\psi\) for \(\operatorname{GL}_{n}(F)\), the \(A\)-packet for \(\psi\) coincides with the ABV-packet for the Langlands parameter \(\phi_{\psi}\), and the virtual representation attached to this packet agrees with Arthur's:_ \[\Pi^{\textsc{\tiny{ABV}}}_{\phi_{\psi}}(G)=\Pi_{\psi}(G)=\{\pi_{\psi}\},\qquad\text{and}\qquad\eta^{\textsc{\tiny{Evs}}}_{\psi}=\eta_{\psi}=[\pi_{\psi}].\] Proof.: The proof is obtained by the following diagram, which we call the "endoscopy square", in which \(\mathcal{F}\in\operatorname{D}_{H_{\lambda}}(V_{\lambda})\) is arbitrary: \[\begin{array}{ccc}\langle\eta^{\textsc{\tiny{Evs}}}_{\psi},[\mathcal{F}]\rangle_{\lambda}&=&\langle\eta_{\psi},[\mathcal{F}]\rangle_{\lambda}\\ \|&&\|\\ \langle\eta^{\textsc{\tiny{Evs}}}_{\psi_{M}},[\mathcal{F}|_{V_{\lambda_{M}}}]\rangle_{\lambda_{M}}&=&\langle\eta_{\psi_{M}},[\mathcal{F}|_{V_{\lambda_{M}}}]\rangle_{\lambda_{M}}\end{array}\] We establish the equality across the top by verifying the equality on the other three sides. The left-hand side of the endoscopy square is a consequence of Proposition 4.2. The equality on the bottom of the endoscopy square is a direct consequence of Proposition 3.1; we remark that this result makes use of the main result from [2]. The right-hand side of the endoscopy square is Proposition 5.6.
We may now conclude \[\langle\eta^{\textsc{\tiny{Evs}}}_{\psi},[\mathcal{F}]\rangle_{\lambda}=\langle\eta_{\psi},[\mathcal{F}]\rangle_{\lambda},\] for every \(\mathcal{F}\in D_{H_{\lambda}}(V_{\lambda})\). Since the pairing above is non-degenerate, it follows that \[\eta^{\textsc{\tiny{Evs}}}_{\psi}=\eta_{\psi}.\] Since \(\eta_{\psi}=[\pi_{\psi}]\), it follows that \[\Pi^{\textsc{\tiny{ABV}}}_{\phi_{\psi}}(G)=\{\pi_{\psi}\}=\Pi_{\psi}(G).\] This concludes the proof of Vogan's conjecture for A-packets of general linear groups. ## 7. Langlands-Shelstad transfer and endoscopic lifting In this section we show that the endoscopic lifting \(\operatorname{Lift}_{M}^{G}\) from Section 5 coincides with Langlands-Shelstad transfer from the Levi subgroup \(M\) to the general linear group \(G\). This result does not play a role in the proof of the main result, Theorem 6.1, so it is offered here as a remark. Recall that Langlands-Shelstad transfer is defined first on Schwartz functions. Since we are considering the case \(G=\operatorname{GL}_{n}\) and \(M=\operatorname{GL}_{m_{1}}\times\cdots\times\operatorname{GL}_{m_{k}}\) for \(n=m_{1}+\cdots+m_{k}\), there is no need to mention stability in this context. The geometric transfer coefficients \(\Delta(\gamma,\delta)\) are very simple in this case: functions \(f\in C^{\infty}_{c}(G(F))\) and \(f^{M}\in C^{\infty}_{c}(M(F))\) are said to match if \[\mathcal{O}^{M}_{\gamma}(f^{M})=\Delta(\gamma,\delta)\ \mathcal{O}^{G}_{\delta}(f),\] for regular semisimple \(\gamma\in M(F)\) and \(\delta\in G(F)\), where \(\Delta(\gamma,\delta)=0\) unless \(\gamma\) and \(\delta\) have the same characteristic polynomials, in which case \[\Delta(\gamma,\delta)=|\mathrm{det}_{\mathfrak{g}/\mathfrak{m}}\left(1-\mathrm{Ad}(\gamma)\right)|_{F}.\] See, for example, [10, \S 2]. The factor \(\Delta(\gamma,\delta)\) agrees with the Langlands-Shelstad transfer factor for this case, which equals \(\Delta_{IV}\), as defined in [11, Section 3].
In fact, in the case at hand, Langlands-Shelstad transfer is given by the linear transformation \[C_{c}^{\infty}(G(F)) \to C_{c}^{\infty}(M(F))\] \[f \mapsto f^{M}\] defined by \[f^{M}(m)=\delta_{P}^{1/2}(m)\int_{K}\int_{N(F)}f(kmuk^{-1})\,du\,dk\] where \(K\) is the maximal compact subgroup of \(G\), \(N\) is the unipotent radical of the standard parabolic \(P\) with Levi component \(M\) and \(\delta_{P}\) is the modulus quasicharacter for \(P(F)\). Recall also that distributions \(D\) on \(G(F)\) and \(D^{M}\) on \(M(F)\) are related by Langlands-Shelstad transfer if \[D^{M}(f^{M})=D(f),\] for all \(f\in C_{c}^{\infty}(G(F))\). We recall the distribution character \(\Theta_{\pi}\) attached to an admissible representation \((\pi,V)\) of \(G\). For any \(f\in C_{c}^{\infty}(G)\), the linear operator \[\pi(f)v=\int_{G}f(g)\pi(g)v\,dg\] is of finite rank, by admissibility of \(\pi\). Therefore, it has a well-defined trace \[\Theta_{\pi}(f):=\operatorname{tr}\pi(f).\] Furthermore, Harish-Chandra's work gives us a locally integrable function \(\theta_{\pi}\) on \(G\) so that \(\Theta_{\pi}\) is written in terms of \(\theta_{\pi}\): \[\Theta_{\pi}(f)=\int_{G}f(g)\theta_{\pi}(g)\,dg. \tag{11}\] We note here that the above is true for reductive \(p\)-adic groups, and not just general linear groups. We also note that \(\Theta_{\pi_{1}}=\Theta_{\pi_{2}}\) if \([\pi_{1}]=[\pi_{2}]\) in \(K_{\mathbb{C}}\operatorname{Rep}_{\lambda}(G)\). **Lemma 7.1**.: Let \(\pi_{M}\) be an irreducible admissible representation of \(M(F)\). Let \(\phi_{M}\) be the Langlands parameter of \(\pi_{M}\); let \(\phi\) be the lift of \(\phi_{M}\) to \(G\) and let \(\pi\) be the irreducible admissible representation of \(G(F)\) matching \(\phi\) under the Langlands correspondence. Recall that \(\Delta(\pi)\) (resp. \(\Delta(\pi_{M})\)) denotes the standard representation for \(\pi\) (resp. \(\pi_{M}\)).
Then Langlands-Shelstad transfer matches standard representations with standard representations: \[\Theta_{\Delta(\pi_{M})}(f^{M})=\Theta_{\Delta(\pi)}(f),\] for all \(f\in C_{c}^{\infty}(G(F))\). Proof.: We show this by direct calculation. Recall that \(\pi_{M}\in\Pi_{\lambda_{M}}(M)\) is matched with a \(\pi\in\Pi_{\lambda}(G)\) by matching their multisegments, in the sense of Remark 5.4. Set \(\tau=\operatorname{Ind}_{P}^{G}(\Delta(\pi_{M}))\) where \(P\) is the standard parabolic subgroup of \(G\) with Levi subgroup \(M\). Recall that \(K\) is the maximal compact subgroup of \(G\), and we have \(G(F)=K\,M(F)\,N(F)\). After making all the appropriate choices for Haar measures, we have \[\begin{array}{lll}&\Theta_{\Delta(\pi_{M})}(f^{M})&\\ &=\int_{M(F)}f^{M}(m)\theta_{\Delta(\pi_{M})}(m)\,dm&\text{by (11),}\\ &=\int_{M(F)}\delta_{P}^{1/2}(m)\int_{K}\int_{N(F)}f(kmuk^{-1})\,du\,dk\,\theta_{\Delta(\pi_{M})}(m)\,dm&\text{definition of $f^{M}$},\\ &=\int_{K}\int_{N(F)}\int_{M(F)}\delta_{P}^{1/2}(m)\,\theta_{\Delta(\pi_{M})}(m)f(kmuk^{-1})\,dm\,du\,dk&\text{Fubini-Tonelli},\\ &=\Theta_{\tau}(f),\end{array}\] where the last equality follows from [10, Theorem 2] paired with the remark at the end of Section 5 in _loc. cit_. We know from Proposition 5.3 that \([\tau]=[\operatorname{Ind}_{P}^{G}(\Delta(\pi_{M}))]=[\Delta(\pi)]\). In this sense, the transfer of functions \(f\mapsto f^{M}\) matches distribution characters of standard representations at the level of Grothendieck groups. Passing from distributions built from irreducible representations to the Grothendieck group of these representations, Langlands-Shelstad transfer defines a linear transformation \[\operatorname{LS}:K_{\mathbb{C}}\operatorname{Rep}_{\lambda_{M}}(M)\to K_{\mathbb{C}}\operatorname{Rep}_{\lambda}(G).\] This linear transformation sends standard representations to standard representations exactly as in Proposition 5.3 and Remark 5.4.
Thus, this is another way to characterize endoscopic transfer, analogous to Proposition 5.5. **Proposition 7.2**.: _Let \(M\simeq\operatorname{GL}_{m_{1}}\times\operatorname{GL}_{m_{2}}\times\dots\times\operatorname{GL}_{m_{k}}\) be a Levi subgroup of \(G\). Let \(P\) be the standard parabolic subgroup with Levi component \(M\). Then, for any \([\pi]\in K_{\mathbb{C}}\operatorname{Rep}_{\lambda_{M}}(M)\),_ \[\operatorname{Lift}_{M}^{G}([\pi])=\operatorname{LS}([\pi]).\] Finally, we show that LS lifts A-packets from \(M\) to A-packets of \(G\). This follows immediately once we recall that \(\operatorname{Ind}_{P}^{G}(\pi_{\psi_{M}})\simeq\pi_{\psi}\) from the proof of Proposition 5.6. Now we may repurpose the proof of Lemma 7.1 to assert the following. **Proposition 7.3**.: _Langlands-Shelstad transfer matches \(\pi_{\psi_{M}}\) with \(\pi_{\psi}\), in the sense that_ \[\Theta_{\pi_{\psi_{M}}}(f^{M})=\Theta_{\pi_{\psi}}(f)\] _for any \(f\in C_{c}^{\infty}(G(F))\)._ **Remark 7.4**.: While it may appear that we are repackaging the results of Section 5.1 in a slightly different language, our purpose here is to demonstrate that Langlands-Shelstad transfer can potentially give us, in more general settings, the results that we obtain from parabolic induction for general linear groups. In particular, we expect Langlands-Shelstad transfer to match standard representations with standard representations at the level of Grothendieck groups, just as parabolic induction does. However, parabolic induction of a representation from the \(A\)-packet of a Levi subgroup (or more generally, an endoscopic group) may not be irreducible, and is therefore not a good candidate for lifting A-packets of the Levi subgroup to A-packets of the group. In future work, we study Vogan's conjecture for a classical group \(G\). Suppose \(H\) is an endoscopic group of \(G\).
We propose an independent study of Langlands-Shelstad transfer to obtain the image \(\operatorname{Lift}^{G}_{H}([\pi_{\psi_{H}}])\). ## 8. Examples In this section, we provide examples to supplement the theory developed in the paper. Although this paper generalizes the situation from a simple Arthur parameter to an arbitrary parameter, we begin with a simple parameter and then move on to sums of simple parameters. **Example 8.1** (Steinberg \(\operatorname{GL}_{2}\)).: In this example, we work with a simple Arthur parameter, calculate the spectral and geometric multiplicity matrices, and demonstrate Hypothesis 2.1 in this case. For \(G=\operatorname{GL}_{2}\) over \(F\), consider the Arthur parameter \(\psi:W^{\prime\prime}_{F}\to\widehat{G}\) defined by \[\psi(w,x,y)=\operatorname{Sym}^{1}(x).\] Then, \(\phi_{\psi}(w,x)=\operatorname{Sym}^{1}(x)\) and \(\lambda(w)=\operatorname{diag}(|w|^{1/2},|w|^{-1/2})\). We start with the spectral side: \(\operatorname{Rep}^{\operatorname{fl}}_{\lambda}(G)\) contains exactly two irreducible representations - the trivial representation \(\pi_{0}\) and the Steinberg representation \(\pi_{1}\). The Steinberg representation is its own standard representation because it is tempered, so \(\Delta(\pi_{1})=\pi_{1}\). The standard representation for the trivial representation \(\pi_{0}\) is \(\Delta(\pi_{0})=\operatorname{Ind}^{G}_{B}(\chi)\), where \(\chi(\operatorname{diag}(t_{1},t_{2}))=|t_{1}|^{1/2}|t_{2}|^{-1/2}\) and \(B\) is the standard Borel subgroup of \(\operatorname{GL}_{2}\). From the short exact sequence \[0\to\pi_{1}\to\Delta(\pi_{0})\to\pi_{0}\to 0\] we see that \(\pi_{0}\) and \(\pi_{1}\) both appear in \(\Delta(\pi_{0})\) with multiplicity \(1\), so \[[\Delta(\pi_{0})] = [\pi_{0}]+[\pi_{1}],\text{ and}\] \[[\Delta(\pi_{1})] = [\pi_{1}]\] in \(K\operatorname{Rep}^{\operatorname{fl}}_{\lambda}(G)\). Thus, we have \[m=\begin{bmatrix}1&0\\ 1&1\end{bmatrix}.\] Now we describe the geometry.
\[V_{\lambda}=\left\{\begin{pmatrix}0&x\\ 0&0\end{pmatrix}:\ x\in\mathbb{C}\right\}\simeq\mathbb{A}^{1}_{\mathbb{C}},\] and \[H_{\lambda}=\operatorname{GL}_{1}(\mathbb{C})\times\operatorname{GL}_{1}( \mathbb{C}),\] with action \(s\cdot x=s_{1}xs_{2}^{-1}\), where \(s=(s_{1},s_{2})\). The two \(H_{\lambda}\)-orbits in \(V_{\lambda}\) are \(C_{0}=\{0\}\) and \(C_{1}=\{x\in\mathbb{A}^{1}\ :\ x\neq 0\}\), and simple objects in \(\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\) are \(\mathcal{IC}(\mathbb{1}_{C_{0}})\) and \(\mathcal{IC}(\mathbb{1}_{C_{1}})\). In order to compute the matrix \(c\), we pick base points \(x_{0}\in C_{0}\) and \(x_{1}\in C_{1}\) and compute the stalks: The sheaf complex \(\mathcal{IC}(\mathbb{1}_{C_{0}})\) is the skyscraper sheaf \(\mathbb{1}_{C_{0}}^{\natural}\) at \(C_{0}\) in degree \(0\) while \(\mathcal{IC}(\mathbb{1}_{C_{1}})\) is the constant sheaf on \(V_{\lambda}\) shifted by \(1\), \(\mathbb{1}_{V}[1]\). It follows that \[[\mathcal{IC}(\mathbb{1}_{C_{0}})] = [\mathbb{1}_{C_{0}}^{\natural}]\] \[(-1)[\mathcal{IC}(\mathbb{1}_{C_{1}})] = [\mathbb{1}_{C_{0}}^{\natural}]+[\mathbb{1}_{C_{1}}^{\natural}],\] in \(K\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\), so \[c=\begin{bmatrix}1&1\\ 0&1\end{bmatrix}.\] It is now clear that \(m=\,^{t}c\), as predicted by Hypothesis 2.1. **Example 8.2**.: We now consider an Arthur parameter of small dimension which is not simple. We show the computation of the geometric and spectral multiplicity matrices \(c\) and \(m\). We will also see in the next two examples that the Vogan variety for this parameter does not decompose into a product of Vogan varieties, thus it is an example of the type of parameter this paper is dealing with. 
Consider the group \(\operatorname{GL}_{4}(F)\) and the Arthur parameter \[\psi(w,x,y)=\operatorname{Sym}^{1}(x)\oplus\operatorname{Sym}^{1}(y).\] The infinitesimal parameter \(\lambda_{\psi}\) is given by \[\lambda_{\psi}(w)=\begin{bmatrix}|w|^{1/2}&0&0&0\\ 0&|w|^{-1/2}&0&0\\ 0&0&|w|^{1/2}&0\\ 0&0&0&|w|^{-1/2}\end{bmatrix}.\] We can replace \(\lambda_{\psi}\) by an element in its \(\operatorname{GL}_{4}(\mathbb{C})\)-conjugacy class - this will not change the geometry. Thus, we apply the permutation \((2\,3)\) to \(\lambda_{\psi}\) and drop the subscript \(\psi\) from the notation to get \[\lambda(w)=\begin{bmatrix}|w|^{1/2}&0&0&0\\ 0&|w|^{1/2}&0&0\\ 0&0&|w|^{-1/2}&0\\ 0&0&0&|w|^{-1/2}\end{bmatrix}.\] Let us do the geometric side first. The above rearrangement enables us to easily compute the Vogan variety and the group action. \[V_{\lambda}=\left\{\begin{bmatrix}0&X\\ 0&0\end{bmatrix}:X\in\operatorname{Mat}_{2}(\mathbb{C})\right\}\cong\operatorname{Mat}_{2}(\mathbb{C})\] and \[H_{\lambda}=\operatorname{GL}_{2}(\mathbb{C})\times\operatorname{GL}_{2}(\mathbb{C}).\] The action of \(H_{\lambda}\) on \(V_{\lambda}\) is given by \[(g_{1},g_{2})\cdot X=g_{1}Xg_{2}^{-1}.\] Thus, the rank of \(X\) completely determines its \(H_{\lambda}\)-orbit. There are three orbits - \(C_{0}\), \(C_{1}\), and \(C_{2}\) - consisting of matrices of ranks \(0\), \(1\) and \(2\), respectively. Note that \(C_{1}\) is the orbit corresponding to \(\phi_{\psi}\), so we set \(C_{\psi}=C_{1}\). In order to find the matrix \(c_{\lambda}\) in this case, pick base points \(x_{0}\in C_{0}\), \(x_{1}\in C_{1}\) and \(x_{2}\in C_{2}\) and consider the stalks of the simple objects \(\mathcal{IC}(\mathbbm{1}_{C})\).
Since \(\mathcal{IC}(\mathbbm{1}_{C_{0}})\) is the skyscraper sheaf at \(0\in V_{\lambda}\), its stalks are easy to compute: \(\mathcal{H}_{x_{0}}^{\bullet}\mathcal{IC}(\mathbbm{1}_{C_{0}})=\mathbbm{1}[0]\); \(\mathcal{H}_{x_{1}}^{\bullet}\mathcal{IC}(\mathbbm{1}_{C_{0}})=0\); and \(\mathcal{H}_{x_{2}}^{\bullet}\mathcal{IC}(\mathbbm{1}_{C_{0}})=0\). Likewise, since \(\mathcal{IC}(\mathbbm{1}_{C_{2}})\) is the constant sheaf on \(V_{\lambda}\) shifted by \(4\), we have \(\mathcal{H}_{x}^{\bullet}\mathcal{IC}(\mathbbm{1}_{C_{2}})=\mathbbm{1}[4]\) for every \(x\in V_{\lambda}\). Only \(\mathcal{IC}(\mathbbm{1}_{C_{1}})\) is interesting - its stalks are given by \[\mathcal{H}_{x_{0}}^{\bullet}\mathcal{IC}(\mathbbm{1}_{C_{1}}) = H^{\bullet}(\mathbb{P}^{1})[3]=\mathbbm{1}[1]\oplus\mathbbm{1}[3]\] \[\mathcal{H}_{x_{1}}^{\bullet}\mathcal{IC}(\mathbbm{1}_{C_{1}}) = \mathbbm{1}[3]\] \[\mathcal{H}_{x_{2}}^{\bullet}\mathcal{IC}(\mathbbm{1}_{C_{1}}) = 0.\] In \(K\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\) we now have \[[\mathcal{IC}(\mathbbm{1}_{C_{0}})] = [\mathbbm{1}_{C_{0}}^{\natural}]\] \[(-1)[\mathcal{IC}(\mathbbm{1}_{C_{1}})] = 2[\mathbbm{1}_{C_{0}}^{\natural}]+[\mathbbm{1}_{C_{1}}^{\natural}]\] \[[\mathcal{IC}(\mathbbm{1}_{C_{2}})] = [\mathbbm{1}_{C_{0}}^{\natural}]+[\mathbbm{1}_{C_{1}}^{\natural}]+[\mathbbm{1}_{C_{2}}^{\natural}].\] Therefore, \[c_{\lambda}=\begin{bmatrix}1&2&1\\ 0&1&1\\ 0&0&1\end{bmatrix}.\] On the spectral side, \(\operatorname{Rep}_{\lambda}^{\operatorname{fl}}(G)\) contains exactly three irreducible representations, all appearing in \(\operatorname{Ind}_{B}^{G}(\chi_{\lambda})\) for \(\chi_{\lambda}(\operatorname{diag}(t_{1},t_{2},t_{3},t_{4}))=|t_{1}t_{2}|^{1/2}|t_{3}t_{4}|^{-1/2}\). These three irreducible representations are: the unique irreducible quotient \(\pi_{0}\) of \(\operatorname{Ind}_{B}^{G}(\chi_{\lambda})\) and two irreducible subrepresentations \(\pi_{1}\) and \(\pi_{2}\), of which only \(\pi_{2}\) is tempered.
Note that \(\pi_{\psi}=\pi_{1}\) as it corresponds to the parameter \(\phi_{\psi}\). The standard representations for \(\pi_{0}\), \(\pi_{1}\) and \(\pi_{2}\) are given as follows, where \(P_{0}\) is the standard Borel, \(P_{1}\) is the standard parabolic with Levi \(\operatorname{GL}_{2}\times\operatorname{GL}_{1}^{2}\) and \(P_{2}\) is the standard parabolic with Levi \(\operatorname{GL}_{2}\times\operatorname{GL}_{2}\): \[\Delta(\pi_{0}) = \operatorname{Ind}_{P_{0}}^{G}(\chi_{\lambda})\] \[\Delta(\pi_{1}) = \operatorname{Ind}_{P_{1}}^{G}(\operatorname{St}_{2}\otimes\chi_{1})\] \[\Delta(\pi_{2}) = \operatorname{Ind}_{P_{2}}^{G}(\operatorname{St}_{2}\otimes\operatorname{St}_{2}).\] Here, \(\operatorname{St}_{2}\) is Steinberg for \(\operatorname{GL}_{2}(F)\) and \(\chi_{1}\) is the character \(\chi\) appearing in Example 8.1. In fact, \(\operatorname{Ind}_{B}^{G}(\chi_{\lambda})\) is a length-four representation and \(\pi_{1}\) appears with multiplicity \(2\) in \(\operatorname{Ind}_{B}^{G}(\chi_{\lambda})\): \[[\Delta(\pi_{0})]=[\pi_{0}]+2[\pi_{1}]+[\pi_{2}].\] Being tempered, \(\pi_{2}\) is its own standard representation, so \([\Delta(\pi_{2})]=[\pi_{2}]\). In fact, \[\pi_{2}=\operatorname{Ind}_{P_{2}}^{G}(\operatorname{St}_{2}\otimes\operatorname{St}_{2}),\] which can be used to see that there is a short exact sequence \[0\to\pi_{2}\to\Delta(\pi_{1})\to\pi_{1}\to 0\] in \(\operatorname{Rep}_{\lambda}^{\operatorname{fl}}(G)\). We therefore have \[[\Delta(\pi_{1})] = [\pi_{1}]+[\pi_{2}]\] in \(K\operatorname{Rep}_{\lambda}^{\operatorname{fl}}(G)\). Now we know the matrix \(m\): \[m_{\lambda}=\begin{bmatrix}1&0&0\\ 2&1&0\\ 1&1&1\end{bmatrix}.\] We see that \(m_{\lambda}=\,^{t}c_{\lambda}\), yet another demonstration of Hypothesis 2.1. **Example 8.3**.: In this example, we discuss the geometry of the Arthur parameter for the Levi subgroup carved out by the parameter in Example 8.2, and compute \(c_{\lambda_{M}}\) and \(m_{\lambda_{M}}\).
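Before working out the Levi subgroup, note that the multiplicity matrices of Examples 8.1 and 8.2 are small enough that Hypothesis 2.1, \(m=\,^{t}c\), can be checked mechanically. The following sketch (plain linear algebra with numpy, not part of the argument; the matrices and basis orders are exactly those displayed above) performs this check:

```python
import numpy as np

# Geometric multiplicity matrices c (shifted simple perverse sheaves
# expanded in standard sheaves), copied from Examples 8.1 and 8.2.
c_gl2 = np.array([[1, 1],
                  [0, 1]])          # basis order (C_0, C_1)
c_gl4 = np.array([[1, 2, 1],
                  [0, 1, 1],
                  [0, 0, 1]])       # basis order (C_0, C_1, C_2)

# Spectral multiplicity matrices m (standard representations expanded
# in irreducible representations), copied from the same examples.
m_gl2 = np.array([[1, 0],
                  [1, 1]])
m_gl4 = np.array([[1, 0, 0],
                  [2, 1, 0],
                  [1, 1, 1]])

# Hypothesis 2.1 in these two cases: m is the transpose of c.
assert np.array_equal(m_gl2, c_gl2.T)
assert np.array_equal(m_gl4, c_gl4.T)
```

The same check applies verbatim to the Levi-subgroup matrices of Example 8.3, since there the matrices are Kronecker products of the \(\operatorname{GL}_{2}\) blocks.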
Once again, we consider the Arthur parameter from Example 8.2: \[\psi(w,x,y)=\operatorname{Sym}^{1}(x)\oplus\operatorname{Sym}^{1}(y).\] Following the recipe of Section 1, this picks out the Levi subgroup \(M=\operatorname{GL}_{2}\times\operatorname{GL}_{2}\) of \(\operatorname{GL}_{4}\), with associated simple Arthur parameters \(\psi_{1}(w,x,y)=\operatorname{Sym}^{1}(x)\otimes\operatorname{Sym}^{0}(y)\) and \(\psi_{2}(w,x,y)=\operatorname{Sym}^{0}(x)\otimes\operatorname{Sym}^{1}(y)\), which correspond in turn to Langlands parameters \(\phi_{1},\phi_{2}\) and infinitesimal parameters \(\lambda_{1},\lambda_{2}\), respectively. We use \(\psi_{M}\) and \(\lambda_{M}\) to denote the Arthur parameter and the infinitesimal parameter for \(M\), respectively. Then \(V_{\lambda_{M}}=V_{\lambda_{1}}\times V_{\lambda_{2}}\) consists of elements of the type \[(x_{0},x_{1})\coloneqq\left(\begin{bmatrix}0&x_{0}\\ 0&0\end{bmatrix},\begin{bmatrix}0&x_{1}\\ 0&0\end{bmatrix}\right),\] the group \(H_{\lambda_{M}}=H_{\lambda_{1}}\times H_{\lambda_{2}}\) consists of elements of the type \[(t,s)\coloneqq\left(\begin{bmatrix}t_{1}&0\\ 0&t_{2}\end{bmatrix},\begin{bmatrix}s_{1}&0\\ 0&s_{2}\end{bmatrix}\right),\] and the action is given by \[(t,s)\cdot(x_{0},x_{1})=(t_{1}x_{0}t_{2}^{-1},s_{1}x_{1}s_{2}^{-1}).\] There are four orbits depending on whether or not \(x_{i}=0\) for \(i=0,1\). We denote them as \(C_{00},C_{10},C_{01}\) and \(C_{11}\), where \(C_{ij}\) corresponds to the \(H_{\lambda_{M}}\)-orbit of \((x_{0},x_{1})=(i,j)\). To identify \(V_{\lambda_{M}}\) as a subspace of \(V_{\lambda}\), we consider each element in \(V_{\lambda_{M}}\) as a block diagonal \(4\times 4\) matrix in the obvious way, then apply the same permutation \((2\,3)\) as before.
The variety is still denoted \(V_{\lambda_{M}}\) and its elements are identified with matrices of the type \[\begin{bmatrix}0_{2\times 2}&X\\ 0_{2\times 2}&0_{2\times 2}\end{bmatrix}\text{ where }X=\begin{bmatrix}x_{0}&0\\ 0&x_{1}\end{bmatrix}.\] We do the same to elements of \(H_{\lambda_{M}}\) to identify it with a torus in \(\operatorname{GL}_{4}\), with elements as matrices \[\begin{bmatrix}\operatorname{diag}(t_{1},s_{1})&0_{2\times 2}\\ 0_{2\times 2}&\operatorname{diag}(t_{2},s_{2})\end{bmatrix}\text{ where }t_{i},s_{i}\in\mathbb{C}^{\times}.\] The conjugation action still takes \(x_{0}\mapsto t_{1}x_{0}t_{2}^{-1}\); likewise \(x_{1}\mapsto s_{1}x_{1}s_{2}^{-1}\). Thus, the embedding \(V_{\lambda_{M}}\hookrightarrow V_{\lambda}\) is \(H_{\lambda_{M}}\)-equivariant. We continue to use the notation \(C_{ij}\) for the \(H_{\lambda_{M}}\)-orbits. The orbit \(C_{\psi_{M}}\) of type \(\psi_{M}\) is \(C_{10}\). At this point, we encourage the reader to think about the restriction of the orbits \(C_{0},C_{1},C_{2}\) from Example 8.2 to \(V_{\lambda_{M}}\). In particular, observe that \(C_{1}\) restricts to \(C_{10}\sqcup C_{01}\). Let us compute the matrix \(c_{\lambda_{M}}\). From Example 8.1 we know \[c_{\lambda_{i}}=\begin{bmatrix}1&1\\ 0&1\end{bmatrix},\] for \(i=1,2\). This matrix can be interpreted as a change of basis matrix from the shifted simple perverse sheaves to standard sheaves. One easily sees that \[c_{\lambda_{M}}=c_{\lambda_{1}}\otimes c_{\lambda_{2}}=\begin{bmatrix}1&1\\ 0&1\end{bmatrix}\otimes\begin{bmatrix}1&1\\ 0&1\end{bmatrix}=\begin{bmatrix}1&1&1&1\\ 0&1&0&1\\ 0&0&1&1\\ 0&0&0&1\end{bmatrix}.\] This is a change of basis matrix from the shifted simple perverse sheaves \[\{(-1)^{\dim C_{ij}}[\mathcal{IC}(\mathbbm{1}_{C_{ij}})]:0\leq i,j\leq 1\}\] to standard sheaves \[\{[\mathbbm{1}_{C_{ij}}^{\natural}]:0\leq i,j\leq 1\}\] in \(K\operatorname{Per}_{H_{\lambda_{M}}}(V_{\lambda_{M}})\).
The change of basis matrix from standard sheaves to shifted simple perverse sheaves is therefore \[c_{\lambda_{M}}^{-1}=\begin{bmatrix}1&-1&-1&1\\ 0&1&0&-1\\ 0&0&1&-1\\ 0&0&0&1\end{bmatrix}.\] On the spectral side, the orbits \(C_{ij}\) correspond to irreducible representations \(\pi_{ij}\) in \(\operatorname{Rep}_{\lambda_{M}}^{\operatorname{fl}}(M)\). Following an analogous calculation for the spectral multiplicity matrix using \(m\) from Example 8.1, we get \[m_{\lambda_{M}}^{-1}=\begin{bmatrix}1&0&0&0\\ -1&1&0&0\\ -1&0&1&0\\ 1&-1&-1&1\end{bmatrix}.\] **Example 8.4**.: In this example we compute the endoscopic lifts of some irreducible representations, using only the definition of \(\operatorname{Lift}_{M}^{G}\) from Section 5.1, with \(G=\operatorname{GL}_{4}\), and \(\psi\) and \(M\) as in Examples 8.2 and 8.3. First, we calculate restriction of standard sheaves via \[\varepsilon^{*}:K_{\mathbb{C}}\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\to K_{\mathbb{C}}\operatorname{Per}_{H_{\lambda_{M}}}(V_{\lambda_{M}}).\] We note that \[[\mathbbm{1}_{C_{0}}^{\natural}|_{V_{\lambda_{M}}}] = [\mathbbm{1}_{C_{00}}^{\natural}]\] \[[\mathbbm{1}_{C_{1}}^{\natural}|_{V_{\lambda_{M}}}] = [\mathbbm{1}_{C_{10}}^{\natural}]+[\mathbbm{1}_{C_{01}}^{\natural}]\] \[[\mathbbm{1}_{C_{2}}^{\natural}|_{V_{\lambda_{M}}}] = [\mathbbm{1}_{C_{11}}^{\natural}],\] because, respectively, \(C_{0}\cap V_{\lambda_{M}}=C_{00}\), \(C_{1}\cap V_{\lambda_{M}}=C_{10}\sqcup C_{01}\), \(C_{2}\cap V_{\lambda_{M}}=C_{11}\). Thus, the matrix for \(\varepsilon^{*}\) with respect to the basis of standard sheaves is \[[\varepsilon^{*}]_{\text{sts}}=\begin{bmatrix}1&0&0\\ 0&1&0\\ 0&1&0\\ 0&0&1\end{bmatrix}.\] From Example 8.2, recall that \(c_{\lambda}\) is a change of basis matrix from shifted simple perverse sheaves to standard sheaves in \(K\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\). From Example 8.3 recall that \(c_{\lambda_{M}}^{-1}\) is the change of basis matrix from standard sheaves to shifted simple perverse sheaves in \(K\operatorname{Per}_{H_{\lambda_{M}}}(V_{\lambda_{M}})\).
Therefore, restriction of shifted simple objects from \(K\operatorname{Per}_{H_{\lambda}}(V_{\lambda})\) to \(K\operatorname{Per}_{H_{\lambda_{M}}}(V_{\lambda_{M}})\) via \(\varepsilon^{*}\) is given by \[[\varepsilon^{*}]_{\text{ssim}} =c_{\lambda_{M}}^{-1}\ [\varepsilon^{*}]_{\text{sts}}\ c_{\lambda}\] \[=\begin{bmatrix}1&-1&-1&1\\ 0&1&0&-1\\ 0&0&1&-1\\ 0&0&0&1\end{bmatrix}\ \begin{bmatrix}1&0&0\\ 0&1&0\\ 0&1&0\\ 0&0&1\end{bmatrix}\ \begin{bmatrix}1&2&1\\ 0&1&1\\ 0&0&1\end{bmatrix}=\begin{bmatrix}1&0&0\\ 0&1&0\\ 0&1&0\\ 0&0&1\end{bmatrix}.\] This shows: \[\varepsilon^{*}[\mathcal{IC}(\mathbbm{1}_{C_{0}})] =[\mathcal{IC}(\mathbbm{1}_{C_{00}})],\] \[\varepsilon^{*}[\mathcal{IC}(\mathbbm{1}_{C_{1}})[-3]] =[\mathcal{IC}(\mathbbm{1}_{C_{10}})[-1]]+[\mathcal{IC}(\mathbbm{1}_{C_{01}})[-1]],\] \[\varepsilon^{*}[\mathcal{IC}(\mathbbm{1}_{C_{2}})[-4]] =[\mathcal{IC}(\mathbbm{1}_{C_{11}})[-2]],\] so \[\varepsilon^{*}[\mathcal{IC}(\mathbbm{1}_{C_{0}})] =[\mathcal{IC}(\mathbbm{1}_{C_{00}})],\] \[\varepsilon^{*}[\mathcal{IC}(\mathbbm{1}_{C_{1}})] =[\mathcal{IC}(\mathbbm{1}_{C_{10}})]+[\mathcal{IC}(\mathbbm{1}_{C_{01}})],\] \[\varepsilon^{*}[\mathcal{IC}(\mathbbm{1}_{C_{2}})]=[\mathcal{IC}(\mathbbm{1}_{C_{11}})].\] Now let us use this calculation, together with Hypothesis 2.1, to calculate \(\operatorname{Lift}_{M}^{G}[\sigma]\) for every \(\sigma\in\Pi_{\lambda_{M}}(M)\): \[[\operatorname{Lift}_{M}^{G}]_{\text{sim}} =\,^{t}[\varepsilon^{*}]_{\text{ssim}}\] \[=\,^{t}\left(c_{\lambda_{M}}^{-1}\,\left[\varepsilon^{*}\right]_{\text{sts}}\,c_{\lambda}\right)\] \[=\,^{t}c_{\lambda}\,\,^{t}[\varepsilon^{*}]_{\text{sts}}\,\,^{t}c_{\lambda_{M}}^{-1}\] \[=m_{\lambda}\,\,^{t}[\varepsilon^{*}]_{\text{sts}}\,\,m_{\lambda_{M}}^{-1}\] \[=\begin{bmatrix}1&0&0\\ 2&1&0\\ 1&1&1\end{bmatrix}\begin{bmatrix}1&0&0&0\\ 0&1&1&0\\ 0&0&0&1\end{bmatrix}\begin{bmatrix}1&0&0&0\\ -1&1&0&0\\ -1&0&1&0\\ 1&-1&-1&1\end{bmatrix}=\begin{bmatrix}1&0&0&0\\ 0&1&1&0\\ 0&0&0&1\end{bmatrix}.\] This shows: \[\operatorname{Lift}_{M}^{G}[\pi_{00}] =[\pi_{0}]\]
\[\operatorname{Lift}_{M}^{G}[\pi_{01}] =[\pi_{1}]\] \[\operatorname{Lift}_{M}^{G}[\pi_{10}] =[\pi_{1}]\] \[\operatorname{Lift}_{M}^{G}[\pi_{11}] =[\pi_{2}].\] In particular, \(\operatorname{Lift}_{M}^{G}[\pi_{10}]=[\pi_{1}]\), in other words, \(\operatorname{Lift}_{M}^{G}[\pi_{\psi_{M}}]=[\pi_{\psi}]\), which holds in general for \(\operatorname{GL}_{n}\), as we prove in Proposition 5.6.
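As a quick numerical sanity check of the matrix manipulations in these examples (not part of the original text; \(c_{\lambda}\) is reconstructed as the transpose of the displayed \(m_{\lambda}\), per the identity \({}^{t}c_{\lambda}=m_{\lambda}\) used in the derivation), the two products can be verified directly:

```python
import numpy as np

# Matrices from Examples 8.2-8.4 (c_lam taken as the transpose of the
# m_lambda shown in the Lift computation; this is an assumption, since
# Example 8.2 itself is not reproduced here).
c_lam = np.array([[1, 2, 1], [0, 1, 1], [0, 0, 1]])
c_lamM_inv = np.array([[1, -1, -1, 1], [0, 1, 0, -1],
                       [0, 0, 1, -1], [0, 0, 0, 1]])
eps_sts = np.array([[1, 0, 0], [0, 1, 0], [0, 1, 0], [0, 0, 1]])

# Restriction in the basis of shifted simple perverse sheaves
eps_ssim = c_lamM_inv @ eps_sts @ c_lam

# Endoscopic lifting on irreducible representations: the transpose,
# m_lambda . t(eps_sts) . m_lambda_M^{-1}
lift = c_lam.T @ eps_sts.T @ c_lamM_inv.T
print(lift)  # rows pi_0, pi_1, pi_2; columns pi_00, pi_10, pi_01, pi_11
```

Here `eps_ssim` comes out equal to `eps_sts`, and `lift` reproduces the \(3\times 4\) matrix displayed above.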
2304.13587
Reduced basis surrogates for quantum spin systems based on tensor networks
Within the reduced basis methods approach, an effective low-dimensional subspace of a quantum many-body Hilbert space is constructed in order to investigate, e.g., the ground-state phase diagram. The basis of this subspace is built from solutions of snapshots, i.e., ground states corresponding to particular and well-chosen parameter values. Here, we show how a greedy strategy to assemble the reduced basis and thus to select the parameter points can be implemented based on matrix-product-states (MPS) calculations. Once the reduced basis has been obtained, observables required for the computation of phase diagrams can be computed with a computational complexity independent of the underlying Hilbert space for any parameter value. We illustrate the efficiency and accuracy of this approach for different one-dimensional quantum spin-1 models, including anisotropic as well as biquadratic exchange interactions, leading to rich quantum phase diagrams.
Paul Brehmer, Michael F. Herbst, Stefan Wessel, Matteo Rizzi, Benjamin Stamm
2023-04-26T14:29:58Z
http://arxiv.org/abs/2304.13587v3
# Reduced basis surrogates for quantum spin systems based on tensor networks ###### Abstract Within the reduced basis methods approach, an effective low-dimensional subspace of a quantum many-body Hilbert space is constructed in order to investigate, e.g., the ground-state phase diagram. The basis of this subspace is built from solutions of snapshots, i.e., ground states corresponding to particular and well-chosen parameter values. Here, we show how a greedy strategy to assemble the reduced basis and thus to select the parameter points can be implemented based on matrix-product-states (MPS) calculations. Once the reduced basis has been obtained, observables required for the computation of phase diagrams can be computed with a computational complexity independent of the underlying Hilbert space for any parameter value. We illustrate the efficiency and accuracy of this approach for different one-dimensional quantum spin-1 models, including anisotropic as well as biquadratic exchange interactions, leading to rich quantum phase diagrams. ## I Introduction A central topic in modern condensed matter theory is the exploration of ground-state phase diagrams of quantum many-body systems. They uncover the rich collective behavior of various physical models and often harbor interesting states of matter that emerge from interaction effects. However, only a small number of many-body Hamiltonians are solvable analytically, and in most cases one needs to resort to computational methods to explore strongly correlated quantum systems. Moreover, in order to relate microscopic models to experimental findings, it is often required to extend beyond the most basic model Hamiltonians that describe an emerging quantum many-body effect by taking into account additional interaction terms or anisotropy effects. An illustrative example from quantum magnetism is provided by its most basic model system -- the one-dimensional spin chain. 
In particular, it is by now well established, dating back to the seminal works by Haldane [1; 2; 3], that the spin-1 Heisenberg chain exhibits a gapped, quantum-disordered ground state with symmetry-protected topological order [4; 5]. A quantitative description of typical Haldane spin chain materials requires accounting for both uniaxial and rhombic-type single-ion anisotropies, yielding a rich overall ground-state phase diagram [6; 7]. In the numerical study of complex phase diagrams of quantum many-body systems we then face an exacerbated computational challenge: Not only is solving the Hamiltonian already demanding due to the curse of dimensionality, but the solutions have to be obtained on many parameter points. Typically, this leads either to the compromise of resorting to low-resolution scans of the phase diagram or to the brute-force way of solving the problem in massively parallel computations. When these approaches become impractical, we advocate here that so-called reduced basis (RB) methods [8; 9] provide a powerful third option, as we started exploring recently [10]. Harnessing the linear dependence of eigenstates across the phase diagram, a low-dimensional surrogate model is constructed based on a few select ground states at different parameter points. Using the surrogate, one is then able to evaluate observables independent of the Hilbert space dimension on any point in the phase diagram, providing accuracy and efficiency at the same time. The RB approach for parametrized eigenvalue problems originated from the structural analysis of mechanical systems in the late nineties [11; 12]. It has since then migrated to the applied mathematics community with the pioneering work presented in Ref. [13]. 
While the RB method became popular for studying parametrized partial differential equations (PDEs) [8; 9], with many engineering applications and industrial maturity, the RB method for parametrized eigenvalue problems had not received much attention thus far. Only years later, the RB methodology developed in Ref. [13] was further extended and generalized to account for better error control and more general settings [14; 15; 16; 17]. In the context of quantum many-body physics, RB methods for eigenvalue problems found application only recently and are most prominent in the nuclear physics community [18; 19; 20; 21; 22; 23], where their use emerged with the eigenvector continuation (EC) method [24; 25; 26]. EC was retrospectively identified as belonging to the family of RB methods, and more broadly, to the field of model order reduction [23; 27]. In further physical applications, RB surrogate models, referred to as _emulators_ in EC language, proved viable in determining phase diagrams of quantum spin systems [10]. Furthermore, EC emulators were used as a subspace diagonalization method on quantum computers [28], for emulating superconducting phenomena [29; 30], as well as in quantum chemistry applications [31]. In this article, we expand upon our previous work [10] that utilizes exact diagonalization (ED) techniques in combination with a greedy RB approach. As suggested therein, we apply the density matrix renormalization group (DMRG) [32; 33] to treat spin chain Hamiltonians and perform all necessary vector operations using MPS [34; 35; 36; 37], which belong to the family of tensor network states [38; 39] that our approach could be naturally extended to (at least for finite system sizes). While this approach has been very recently adopted in [29], we focus on methodological aspects and develop the combined RB-MPS method in application to quantum spin systems. 
In using compressed MPS, we gain access to larger many-body systems, allowing us to probe whether RB methods remain a viable approach in the light of larger Hilbert spaces and more complex parameter domains. We made a related open-source software package available [40]. The remainder of this paper is organized as follows: In Sec. II we review the greedy RB algorithm and introduce notation, before explaining the combined RB-MPS approach. This allows us to apply the method to different spin-1 chain Hamiltonians in Sec. III, namely the Haldane chain with uniaxial and rhombic single-ion anisotropies as well as the bilinear-biquadratic model with a uniaxial anisotropy, where we scan the ground-state phase diagrams for various correlations. Section IV discusses the accuracy of the surrogate models in different settings as well as the effects of MPS approximations on the RB framework, after which we conclude in Sec. V. ## II Method ### The RB framework _Problem setting_. We begin by defining the physical problems that we aim to treat in the RB approach. To that end, we consider a generic stationary quantum many-body problem \[H(\mathbf{\mu})\ket{\mathbf{\Psi}(\mathbf{\mu})}=E(\mathbf{\mu})\ket{\mathbf{\Psi}(\mathbf{\mu})}, \tag{1}\] that consists of finding the ground-state energy \(E(\mathbf{\mu})\) and its corresponding \(m\) ground states \(\ket{\mathbf{\Psi}(\mathbf{\mu})}=(\ket{\Psi^{[1]}(\mathbf{\mu})},\dots,\ket{\Psi^{[m]}( \mathbf{\mu})})\). The many-body Hilbert space \(\mathcal{H}=\mathbb{C}^{\mathcal{N}}\) under consideration is assumed finite but high-dimensional since \(\mathcal{N}\gg 1\) grows exponentially with the number of physical constituents. We assume the eigenvalue problem to be parametrized by a vector of physical model parameters \(\mathbf{\mu}\in\mathbb{P}\) that resides in the parameter space \(\mathbb{P}\), i.e., for each \(\mathbf{\mu}\) we obtain a new Hamiltonian to solve. 
In order to apply the RB framework, we consider a particular class of Hamiltonians that can be expressed as _affine decompositions_ \[H(\mathbf{\mu})=\sum_{q=1}^{Q}\theta_{q}(\mathbf{\mu})\,H_{q}, \tag{2}\] with a number of terms \(Q\) independent of the number of physical degrees of freedom. Such linear combinations distinguish between parameter-dependent coefficients \(\theta_{q}:\mathbb{P}\rightarrow\mathbb{R}\) and parameter-independent Hermitian matrices \(H_{q}:\mathcal{H}\rightarrow\mathcal{H}\). Not only the Hamiltonian must be affinely decomposable, but in fact all observables that we may want to measure are assumed to be of the form \[O(\mathbf{\mu};p)=\sum_{r=1}^{R}\alpha_{r}(\mathbf{\mu};p)\,O_{r}, \tag{3}\] in terms of operators \(O_{r}:\mathcal{H}\rightarrow\mathcal{H}\), and where the coefficients \(\alpha_{r}(\mathbf{\mu};p)\in\mathbb{C}\) now include additional parameters \(p\) that are separate from \(\mathbb{P}\). This generalization is required, e.g., when studying observables in Fourier space. Note that the individual operators \(O_{r}\) need not necessarily be Hermitian. To then extract physical information, one measures affine decompositions by taking the expectation value \[\left\langle O(\mathbf{\mu};p)\right\rangle=\sum_{r=1}^{R}\alpha_{r}(\mathbf{\mu};p)\, \frac{1}{m}\sum_{i=1}^{m}\left\langle\Psi^{[i]}(\mathbf{\mu})|O_{r}|\Psi^{[i]}(\bm {\mu})\right\rangle, \tag{4}\] where we average over the degenerate ground-state subspace, if needed. In the particular setting we consider here, the goal is to scan domains of \(\mathbb{P}\) with fine resolution, i.e., to solve the many-body problem in Eq. (1) on a large set of parameter points and then, based on the solutions, measure observables on the same parameter domain. 
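To make the affine structure of Eqs. (2)–(4) concrete, here is a minimal dense-matrix sketch; the two-site transverse-field Ising model is an illustrative choice and not taken from the paper:

```python
import numpy as np

# Minimal affine decomposition H(mu) = theta_1(mu) H_1 + theta_2(mu) H_2,
# illustrated with a two-site transverse-field Ising model (toy choice).
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

H_terms = [-np.kron(sz, sz),                      # parameter-independent H_1
           -(np.kron(sx, I2) + np.kron(I2, sx))]  # parameter-independent H_2
theta = lambda mu: (1.0, mu)                      # coefficients theta_q(mu)

def H(mu):
    return sum(t * Hq for t, Hq in zip(theta(mu), H_terms))

# Ground state and an observable expectation, cf. Eq. (4) with m = 1
E, V = np.linalg.eigh(H(0.5))
psi0 = V[:, 0]
mag_x = np.kron(sx, I2) + np.kron(I2, sx)         # an affine observable O_1
print(E[0], psi0 @ mag_x @ psi0)                  # E0 = -sqrt(2) here
```

The point of the decomposition is that `H_terms` is assembled once, while sweeping the parameter only re-evaluates the scalar coefficients `theta(mu)`.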
This is addressed by the RB approach in two steps: In the first step, which is referred to as the _offline stage_ in RB parlance, a surrogate model is assembled based on a small number of ground states across the parameter space, whereas in the second step, the so-called _online stage_, observable measurements are efficiently obtained. This approach is warranted by the insight that ground states at different parameter points tend to show significant linear dependence. While being demonstrated in various numerical applications (see e.g. [10; 23]), this can also be reasoned in terms of analytical continuation in the context of eigenvector continuation [24; 25]. One particular strategy to build RB surrogate models features a greedy mechanism [8; 9; 10] to sample the parameter domain, which we want to review next. _Offline stage_. The goal in the offline stage is to construct a low-dimensional _reduced basis space_ \[\mathbb{V}_{n}\coloneqq\text{span}\left\{\ket{\mathbf{\Psi}(\mathbf{\mu}_{1})},\dots, \ket{\mathbf{\Psi}(\mathbf{\mu}_{n})}\right\} \tag{5}\] that is spanned by degenerate subspaces extracted at \(n\) different parameter points \(\{\mathbf{\mu}_{1},\ldots,\mathbf{\mu}_{n}\}\). Here we refer to \(|\mathbf{\Psi}(\mathbf{\mu}_{j})\rangle=(|\Psi^{[1]}(\mathbf{\mu}_{j})\rangle\,,\ldots,|\Psi ^{[m_{j}]}(\mathbf{\mu}_{j})\rangle)\) as a "snapshot", such that one snapshot contains \(m_{j}\geq 1\) states. Since the snapshots might share linearly dependent modes, the reduced basis dimension \(N\coloneqq\dim\mathbb{V}_{n}\leq M\coloneqq\sum_{j=1}^{n}m_{j}\) can in general be smaller than the total number of obtained ground states \(M\). For the single ground-state solutions that span the RB space we use the shorthand \(|\Psi_{j}\rangle\) where \(j=1,\ldots,N\). 
Moreover, the solver used to obtain the lowest eigenvalue and eigenvectors of \(H(\mathbf{\mu})\) is called the "truth solver", which is to be understood in the sense that it can obtain the true solution of the Hamiltonian at any point in the considered parameter domain, up to high numerical accuracy. Beyond that, we do not need to further specify the actual method yet, i.e., the RB approach is solver-agnostic. The idea is then to represent ground states at any other parameter point \(\mathbf{\mu}\in\mathbb{P}\) as a linear combination of snapshots \[|\Phi_{\mathrm{rb}}(\mathbf{\mu})\rangle=\sum_{j=1}^{N}a_{j}(\mathbf{\mu})\,|\Psi_{j} \rangle\,, \tag{6}\] with parameter-dependent coefficients \(a_{j}(\mathbf{\mu})\in\mathbb{C}\). This prompts the following two questions: First, how do we generate the RB space \(\mathbb{V}_{n}\) and in particular how are the parameter points chosen, and second, how do we determine the coefficients \(a_{j}(\mathbf{\mu})\)? To answer the first question, we turn to the greedy algorithm. We restrict the snapshots to a subset of all parameter points, which we call the _training grid_\(\Xi_{\mathrm{train}}\subset\mathbb{P}\), that preselects the domain on which the RB will be "trained". Starting from a suitable first parameter point \(\mathbf{\mu}_{1}\) and its corresponding snapshot \(|\mathbf{\Psi}(\mathbf{\mu}_{1})\rangle\), the RB is constructed in an inductive manner. Namely at iteration \(n\), using the current RB space \(\mathbb{V}_{n}\), we select the next parameter point \(\mathbf{\mu}_{n+1}\) where a truth solve will be performed. 
Based on the ground states that are contained in the RB space \[|\Phi_{\mathrm{rb}}(\mathbf{\mu})\rangle=\operatorname*{arg\,min}_{|\Phi\rangle \in\mathbb{V}_{n}}\frac{\langle\Phi|H(\mathbf{\mu})|\Phi\rangle}{\langle\Phi|\Phi \rangle},\quad\forall\mathbf{\mu}\in\Xi_{\mathrm{train}}, \tag{7}\] solved at all training points, we select the parameter point that meets the _greedy condition_ \[\mathbf{\mu}_{n+1}=\operatorname*{arg\,max}_{\mathbf{\mu}\in\Xi_{\mathrm{train}}} \operatorname*{Res}_{n}(\mathbf{\mu}), \tag{8}\] i.e., which corresponds to the snapshot that maximizes the _residual_ \[\operatorname*{Res}_{n}(\mathbf{\mu})\coloneqq\sqrt{\sum_{i=1}^{m}\big{\|}H(\bm {\mu})\,|\Phi_{\mathrm{rb}}^{[i]}(\mathbf{\mu})\rangle-E_{\mathrm{rb}}(\mathbf{\mu}) \,|\Phi_{\mathrm{rb}}^{[i]}(\mathbf{\mu})\rangle\,\big{\|}^{2}}. \tag{9}\] In this way, we sample first the parts of \(\Xi_{\mathrm{train}}\) where the RB performs worst, meaning that the surrogate ground states and energies least accurately fulfill the eigenvalue problem of \(H(\mathbf{\mu})\). Notice the dependency of \(|\Phi_{\mathrm{rb}}\rangle\) and \(E_{\mathrm{rb}}\) on the number of greedy iterations since they were obtained from the \(n\)-snapshot RB space \(\mathbb{V}_{n}\). Following this, we perform a truth solve at \(\mathbf{\mu}_{n+1}\) and obtain a snapshot \(|\mathbf{\Psi}(\mathbf{\mu}_{n+1})\rangle\) that is appended to the basis, forming \(\mathbb{V}_{n+1}\). This iteration is repeated until an adequate exit condition, e.g., a target residual accuracy, is reached. In the current formulation, it seems difficult and expensive to solve Eq. (7) in order to determine the greedy condition, i.e., to compute the residual on all training points. 
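The greedy loop of Eqs. (7)–(9) can be sketched as follows, with exact diagonalization of a dense toy Hamiltonian standing in as truth solver (all names and the model are illustrative; the brute-force residual evaluation below is exactly what Eq. (13) later avoids):

```python
import numpy as np

# Sketch of the greedy snapshot selection, Eqs. (7)-(9), using exact
# diagonalization as "truth solver" on a toy affine Hamiltonian
# H(mu) = H_1 + mu H_2 (dense matrices; all names are illustrative).
rng = np.random.default_rng(0)
dim = 200
def rand_herm(d):
    A = rng.normal(size=(d, d))
    return (A + A.T) / 2

H_terms = [rand_herm(dim), rand_herm(dim)]
H = lambda mu: H_terms[0] + mu * H_terms[1]
truth_solve = lambda mu: np.linalg.eigh(H(mu))[1][:, 0]

train = np.linspace(-2.0, 2.0, 41)        # training grid Xi_train
B = np.empty((dim, 0))                    # orthonormalized snapshots
mu_next, history = train[0], []
for _ in range(15):
    psi = truth_solve(mu_next)            # truth solve at selected point
    psi -= B @ (B.T @ psi)                # keep the basis orthonormal
    if np.linalg.norm(psi) < 1e-10:
        break                             # no new direction resolved
    B = np.column_stack([B, psi / np.linalg.norm(psi)])
    res = []
    for mu in train:                      # Eq. (7): Rayleigh-Ritz in V_n
        E, phi = np.linalg.eigh(B.T @ H(mu) @ B)
        phi_rb = B @ phi[:, 0]
        res.append(np.linalg.norm(H(mu) @ phi_rb - E[0] * phi_rb))  # Eq. (9)
    history.append(max(res))
    mu_next = train[int(np.argmax(res))]  # greedy condition Eq. (8)
print(B.shape[1], history[-1])
```

Because `B` is kept orthonormal, the Rayleigh-Ritz step is a standard \(N\)-dimensional eigenvalue problem; the full-space residual is evaluated directly here only for transparency.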
To clarify this and to answer the question of determining the linear coefficients of \(|\mathbf{\Phi}_{\mathrm{rb}}(\mathbf{\mu})\rangle\), we now map the problem onto the low-dimensional RB space via the _reduced basis_ \(B:\mathbb{V}_{n}\to\mathcal{H}\) by way of the Rayleigh-Ritz method. To that end, we define the RB coefficients \(\mathbf{\varphi}_{\mathrm{rb}}(\mathbf{\mu})\in\mathbb{C}^{N\times m}\) by relating them to \[|\mathbf{\Phi}_{\mathrm{rb}}(\mathbf{\mu})\rangle=B\mathbf{\varphi}_{\mathrm{rb}}(\mathbf{\mu}), \tag{10}\] corresponding to the Ritz vector, as well as the normalization matrix \(b=B^{\dagger}B\) and the reduced Hamiltonian \[h(\mathbf{\mu})=\sum_{q=1}^{Q}\theta_{q}(\mathbf{\mu})\,h_{q},\quad h_{q}=B^{\dagger}H_ {q}B. \tag{11}\] Then we can reformulate the variational problem of Eq. (7) as a generalized eigenvalue problem \[h(\mathbf{\mu})\,\mathbf{\varphi}_{\mathrm{rb}}(\mathbf{\mu})=E_{\mathrm{rb}}(\mathbf{\mu})\,b \,\mathbf{\varphi}_{\mathrm{rb}}(\mathbf{\mu}), \tag{12}\] solving only for the lowest eigenvalue and corresponding eigenvectors. Indeed, by expressing the residual in terms of reduced quantities and using the eigenvalue Eq. (12), one finds the more efficient expression \[\operatorname*{Res}_{n}^{2}(\mathbf{\mu}) =\sum_{q,q^{\prime}=1}^{Q}\theta_{q}(\mathbf{\mu})\,\theta_{q^{\prime }}(\mathbf{\mu})\,\sum_{i=1}^{m}\varphi_{\mathrm{rb}}^{[i]}(\mathbf{\mu})^{\dagger}\,h _{qq^{\prime}}\,\varphi_{\mathrm{rb}}^{[i]}(\mathbf{\mu})\] \[\quad-E_{\mathrm{rb}}^{2}(\mathbf{\mu})\sum_{i=1}^{m}\varphi_{\mathrm{rb }}^{[i]}(\mathbf{\mu})^{\dagger}\,b\,\varphi_{\mathrm{rb}}^{[i]}(\mathbf{\mu}), \tag{13}\] where \(h_{qq^{\prime}}=B^{\dagger}H_{q}H_{q^{\prime}}B\) are the reduced matrices of the \(h^{2}(\mathbf{\mu})\) affine decomposition. We have thus obtained the greedy assembly algorithm, where computing the greedy condition boils down to the \(N\)-dimensional generalized eigenvalue problem of Eq. 
(12) and all Hilbert space dimension dependent operations were isolated to the computation of \(b\), \(h_{q}\), \(h_{qq^{\prime}}\) and the truth solve, which are performed only once per iteration. Finally, to determine \(B\) itself, we might make the simple ansatz of using the snapshots as column vectors \([|\mathbf{\Psi}(\mathbf{\mu}_{1})\rangle\cdots|\mathbf{\Psi}(\mathbf{\mu}_{n})\rangle]\). This, however, leads to a poorly-conditioned normalization \(b\), due to the increasing linear dependence between the columns as we add snapshots, and quickly renders the solution of Eq. (12) numerically infeasible. For that reason, one typically orthogonalizes the RB such that \(b\simeq I\). The orthogonalization procedure can be implemented on the level of the coefficients \(\mathbf{\varphi}_{\mathrm{rb}}\) or directly on the snapshots -- this depends on the truth solver and in particular the associated vector format, which we leave unspecified for now. _Online stage_. Once the offline stage is finished, we are left with the low-dimensional basis \(B\) and the reduced quantities \(b\), \(h(\mathbf{\mu})\) and \(h^{2}(\mathbf{\mu})\) as byproducts of the greedy assembly algorithm. In order to compute expectation values of affine decompositions \(O(\mathbf{\mu};p)\), we compute the reduced matrices \(o_{r}=B^{\dagger}O_{r}B\), so that we now operate entirely in RB space. Namely, by taking the expectation value with the emulated ground state \(|\mathbf{\Phi}_{\text{rb}}(\mathbf{\mu})\rangle\), we obtain \[\left\langle O(\mathbf{\mu};p)\right\rangle_{\text{rb}}=\sum_{r=1}^{R}\alpha_{r}( \mathbf{\mu};p)\,\frac{1}{m}\sum_{i=1}^{m}\varphi_{\text{rb}}^{[i]}(\mathbf{\mu})^{ \dagger}\,o_{r}\,\varphi_{\text{rb}}^{[i]}(\mathbf{\mu}). 
\tag{14}\] It becomes clear that we can, as advertised, evaluate the above on any parameter point \(\mathbf{\mu}\in\mathbb{P}\), independent of \(\mathcal{N}\) -- thanks to the affine decomposition, \(o_{r}\) does not depend on \(\mathbf{\mu}\) and we only need to evaluate the coefficient functions and compute the RB coefficients \(\mathbf{\varphi}_{\text{rb}}(\mathbf{\mu})\). This again amounts to solving the \(N\)-dimensional generalized eigenvalue problem of Eq. (12). For further details on the greedy RB approach, we refer to Refs. [8; 9; 10]. ### RB assembly using MPS We now aim to combine the RB framework with tensor network techniques from quantum many-body physics. More specifically, we represent the snapshot many-body ground states as MPS \[|\psi\rangle=\!\!\sum_{\alpha_{1}\cdots\alpha_{L}}\!\!\sum_{a_{1}\cdots a_{L+ 1}}\!\!\!M_{a_{1}a_{2}}^{\alpha_{1}}\!\!M_{a_{2}a_{3}}^{\alpha_{2}}\cdots M_{ a_{L}a_{L+1}}^{\alpha_{L}}\left|\alpha_{1}\cdots\alpha_{L}\right\rangle, \tag{15}\] for systems of \(L\) physical degrees of freedom \(\alpha_{i}\). In doing so, the treatment of large many-body Hamiltonians becomes feasible since efficient algorithms for the computation of state overlaps, matrix elements and ground-state MPS exist, that scale polynomially in \(L\). These rely on low-rank approximations of the MPS tensors \(M_{a_{i}a_{i+1}}^{\alpha_{i}}\) which can be implemented by repeated singular value decompositions to reduce the matrix rank \(d\), known as the bond dimension in tensor network theory [39], by dropping singular values according to a cutoff \[\mathtt{cut}_{\sigma}>\sqrt{\frac{\sum_{k\in\text{trunc}}\sigma_{k}^{2}}{ \sum_{k=1}^{d}\sigma_{k}^{2}}}. \tag{16}\] The sum \(\sum_{k\in\text{trunc}}\) is to be understood in the sense that we remove the smallest singular values \(\sigma_{k}\) once the singular value error surpasses \(\mathtt{cut}_{\sigma}\), which corresponds to a truncation of the MPS tensors in the Frobenius norm. 
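The truncation criterion of Eq. (16) amounts to discarding the smallest singular values as long as their accumulated Frobenius weight stays below the cutoff; a plain-numpy sketch (not the ITensor implementation, and all names are illustrative):

```python
import numpy as np

# Truncate a matrix in the Frobenius norm following the singular-value
# cutoff of Eq. (16).
def truncate_svd(M, cut=1e-6):
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    total = np.sum(s ** 2)
    # tail[k] = sqrt(sum_{j>=k} s_j^2 / total): weight of dropping s_k, s_{k+1}, ...
    tail = np.sqrt(np.cumsum((s ** 2)[::-1])[::-1] / total)
    keep = max(int(np.count_nonzero(tail >= cut)), 1)
    return U[:, :keep] @ np.diag(s[:keep]) @ Vh[:keep], keep

# Test matrix with exponentially decaying singular values 2^0, 2^-1, ...
rng = np.random.default_rng(0)
d = 30
Q1, _ = np.linalg.qr(rng.normal(size=(d, d)))
Q2, _ = np.linalg.qr(rng.normal(size=(d, d)))
M = (Q1 * 2.0 ** -np.arange(d)) @ Q2

Mt, keep = truncate_svd(M, cut=1e-6)
err = np.linalg.norm(M - Mt) / np.linalg.norm(M)
print(keep, err)  # rank well below 30, relative error below the cutoff
```

By construction the relative Frobenius error equals the dropped tail weight, so it is guaranteed to stay below `cut`.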
In order to perform truth solves in the MPS format, we use DMRG, which is the most commonly used method for variational ground MPS searches. For reviews on general MPS theory and the DMRG algorithms we refer the reader to Refs. [37; 38; 39]. Carrying out the greedy offline stage in MPS formulation spawns two new aspects: First, we need to use efficient MPS contractions for overlaps and matrix elements when computing the reduced quantities \(b\), \(h\) and \(h^{2}\) which generates additional inaccuracies on top of the truth solve, and second, we need an orthogonalization scheme for \(B\) that is compatible with MPS snapshots. Here, we elaborate on these aspects by going through the greedy algorithm once again, and postpone the discussion of MPS inaccuracies in the RB framework to Sec. IV.2. For each iteration of the greedy assembly, we obtain \(m_{j}\) normalized ground-state MPSs and append them to the matrix \(\Upsilon=[|\Psi_{1}\rangle\cdots|\Psi_{M}\rangle]\in\mathbb{C}^{N\times M}\) as column vectors. In practice, this matrix cannot be constructed explicitly and only operations between its columns are allowed. To orthogonalize this matrix, we make the ansatz \(B=\Upsilon V\), where \(V\in\mathbb{C}^{M\times N}\) mixes the truth MPSs into orthogonal linear combinations. However, these linear combinations are not computed explicitly since this would entail inefficient MPS addition. Instead, we first compute the overlap matrix \[S\coloneqq\Upsilon^{\dagger}\Upsilon,\quad S_{ij}=\left\langle\Psi_{i}|\Psi_{j} \right\rangle, \tag{17}\] and matrix elements \([\Upsilon^{\dagger}A\Upsilon]_{ij}=\left\langle\Psi_{i}|A|\Psi_{j}\right\rangle\) where \(i,j\in\{1,\ldots,M\}\) and \(A=H_{q},H_{q}H_{q^{\prime}},O_{r}\) might be any operator of interest. Note that we assume the operator \(A\) to be represented in tensor format, e.g., as a matrix product operator (MPO) or a multi-site operator. 
Then all reduced quantities are computed in the orthogonal basis by transforming in RB space as \[b=V^{\dagger}SV,\quad a=V^{\dagger}\Upsilon^{\dagger}A\Upsilon V, \tag{18}\] where correspondingly \(a=h_{q},h_{qq^{\prime}},o_{r}\). Note that by adding a new MPS snapshot \(|\mathbf{\Psi}(\mathbf{\mu}_{n+1})\rangle\) to \(\Upsilon\), we need to compute \(M+m_{n+1}\) new overlaps and matrix elements per observable, exploiting the hermiticity of \(S\) and \(\Upsilon^{\dagger}A\Upsilon\) (granted that \(A\) is Hermitian). While there are multiple options to determine the orthogonalizing matrix \(V\), we here opt for the numerically efficient approach of Löwdin symmetric orthogonalization [41]. To that end, we decompose the overlap matrix \(S=U\Lambda U^{-1}\) into its eigenvalues \(\Lambda=\text{diag}(\lambda_{1},\ldots,\lambda_{M})\), sorted in descending order. Since \(S\) is Hermitian, \(U\) can be chosen to be a unitary matrix, so that by demanding \(b=V^{\dagger}SV\stackrel{{!}}{{=}}I\), we can immediately identify \[V=U\Lambda^{-1/2}. \tag{19}\] Furthermore, the eigenvalue decomposition of \(S\) provides a way to compress the RB. Due to accumulating linear dependence, the eigenvalues \(\lambda_{j}\) decrease as snapshots are appended until we cannot further resolve new directions in RB space, given the truth solver's accuracy. Hence we may truncate \(B\) in the Frobenius norm of \(S\) according to an eigenvalue cutoff \[\mathtt{cut}_{\lambda}>\sqrt{\frac{\sum_{j\in\text{trunc}}\lambda_{j}^{2}}{ \sum_{j=1}^{M}\lambda_{j}^{2}}}, \tag{20}\] i.e., when the smallest normalized sum of squared eigenvalues exceeds \(\mathtt{cut}_{\lambda}\) the corresponding snapshots are removed. This approach of orthogonalizing a set of snapshots is akin to what is known as _proper orthogonal decomposition_ [8; 9] in RB theory, which provides an (expensive) alternative to the greedy sampling algorithm. 
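The Löwdin step of Eqs. (17)–(20) can be sketched with dense vectors in place of MPSs (illustrative only; the near-duplicate snapshot is planted on purpose to trigger the truncation):

```python
import numpy as np

# Loewdin symmetric orthogonalization, Eq. (19), with the eigenvalue
# truncation of Eq. (20); dense stand-ins for MPS snapshots.
rng = np.random.default_rng(1)
Y = rng.normal(size=(100, 6))                    # 6 "snapshots", 100-dim space
Y[:, 5] = Y[:, 0] + 1e-9 * rng.normal(size=100)  # near-duplicate snapshot
Y /= np.linalg.norm(Y, axis=0)

S = Y.T @ Y                               # overlap matrix S_ij = <Psi_i|Psi_j>
lam, U = np.linalg.eigh(S)
lam, U = lam[::-1], U[:, ::-1]            # eigenvalues in descending order

cut = 1e-8                                # cut_lambda of Eq. (20)
tail = np.sqrt(np.cumsum((lam ** 2)[::-1])[::-1] / np.sum(lam ** 2))
keep = int(np.count_nonzero(tail >= cut)) # drop directions with tiny weight

V = U[:, :keep] / np.sqrt(lam[:keep])     # V = U Lambda^{-1/2}, Eq. (19)
b = V.T @ S @ V
print(keep, np.allclose(b, np.eye(keep))) # prints: 5 True
```

The planted sixth snapshot adds no resolvable direction, so one eigenvalue collapses to numerical noise and is truncated, after which `b` is the identity to machine precision.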
On a slightly more technical note, we mention the possibility of using the RB to produce initial guesses for the DMRG solver. By explicitly computing the linear combination \(\left|\Phi_{\text{rb}}^{[i]}(\mathbf{\mu})\right\rangle=B\varphi_{\text{rb}}^{[i]} (\mathbf{\mu})=\sum_{j=1}^{N}[V\varphi_{\text{rb}}^{[i]}(\mathbf{\mu})]_{j}\left| \Psi_{j}\right\rangle\) for \(i=1,\ldots,m\) at the selected parameter point and using it as the initial MPS, one can speed up DMRG convergence. In order for this to work sufficiently fast, it is necessary to heavily truncate the MPS while computing the linear combination, since MPS addition leads to an additive increase of bond dimensions. Nonetheless, this approach globally reduces the number of DMRG sweeps and makes the RB assembly more stable as well as deterministic. Before proceeding with the numerical results, we remark that the procedures we covered here in the context of MPS generalize to a larger class of vector representations and truth solvers. In summary, one can identify the following requirements for compatibility with the RB framework: 1. Computation of state overlaps \(\langle\Psi(\mathbf{\mu}_{i})|\Psi(\mathbf{\mu}_{j})\rangle\). 2. Computation of matrix elements \(\langle\Psi(\mathbf{\mu}_{i})|A|\Psi(\mathbf{\mu}_{j})\rangle\) for all relevant operators \(A=H_{q},H_{q}H_{q^{\prime}},O_{r}\). 3. High truth solver accuracy: Large approximation errors on overlaps and matrix elements prevent us from generating a meaningful surrogate. Since this turns out to be a subtle point, we will further discuss this for MPS in Sec. IV.2. In consequence, we do not necessarily need access to the truth ground-state vectors, only to contractions between them. 
This opens the door to further tensor network architectures such as projected entangled pair states [42] and tree tensor networks [43; 44; 45], or possibly, artificial neural network based representations [46] as well as various truth solving methods, e.g., quantum Monte Carlo approaches [47]. ## III Results We now turn to discuss various numerical results obtained using the RB-MPS method. Alongside this work, we developed a code package implementing RB methods for parametrized eigenvalue problems, in particular many-body Hamiltonians, with the possibility for using DMRG and ED-based solvers as well as custom truth solving methods. The code package is written in the Julia programming language [48] and all MPS and DMRG procedures are performed using the ITensor library [49; 50]. We made the code, including user instructions and documentation, publicly available [40]. ### Haldane spin-1 chain with single-ion anisotropies As a first application of the RB-MPS method, we consider the one-dimensional Haldane spin-1 chain \[H_{\text{HD}}\!=\!J\sum_{i=1}^{L-1}\mathbf{S}_{i}\!\cdot\!\mathbf{S}_{i+1}\!+\!D\sum_{ i=1}^{L}(S_{i}^{z})^{2}\!+\!E\sum_{i=1}^{L}\big{[}(S_{i}^{x})^{2}\!-\!(S_{i}^{y}) ^{2}\big{]}, \tag{21}\] with a uniaxial \(D\) and a rhombic-type \(E\) single-ion anisotropy, here using open boundary conditions. With regard to the RB formalism we express \(H_{\text{HD}}\) as a dimensionless affine decomposition using the parameter vector \(\mathbf{\mu}=(D/J,\,E/J)\) and the coefficient function \(\mathbf{\theta}(\mathbf{\mu})=(1,\mu_{1},\mu_{2})\), such that the matrices \(H_{q}\) correspond to the summands of \(H_{\text{HD}}\). In order to benchmark the RB-MPS method, we want to resolve the ground-state phase diagram of \(H_{\text{HD}}\), which features: i) the symmetry-protected topological Haldane phase [1; 2; 3; 4; 5] that is robust against small anisotropies, ii) Néel-ordered phases, as well as so-called iii) large-\(D\) and iv) large-\(E\) phases. 
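For a very small chain, the affine decomposition of Eq. (21) can be written out with dense Kronecker products (an illustration only; the paper of course uses MPS/DMRG at the system sizes studied, and \(J=1\) is set here):

```python
import numpy as np

# Affine decomposition of Eq. (21): H(mu) = H_J + mu_1 H_D + mu_2 H_E,
# i.e. theta(mu) = (1, mu_1, mu_2), built explicitly for a tiny chain.
L = 4
sq2 = 1 / np.sqrt(2)
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) * sq2
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) * sq2
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

def site_op(op, i):                       # spin-1 operator acting on site i
    mats = [np.eye(3, dtype=complex)] * L
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

H_J = sum(site_op(S, i) @ site_op(S, i + 1)        # sum_i S_i . S_{i+1}
          for i in range(L - 1) for S in (Sx, Sy, Sz))
H_D = sum(site_op(Sz, i) @ site_op(Sz, i) for i in range(L))
H_E = sum(site_op(Sx, i) @ site_op(Sx, i) - site_op(Sy, i) @ site_op(Sy, i)
          for i in range(L))

def H(mu):
    return H_J + mu[0] * H_D + mu[1] * H_E

E0 = np.linalg.eigvalsh(H((0.0, 0.0)))[0]
print(E0)  # ground-state energy at the Heisenberg point, L = 4
```

Each \(H_q\) is assembled once; sweeping \(\mathbf{\mu}\) only changes the scalar prefactors, which is exactly the property the reduced quantities \(h_q\) inherit.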
We summarize and sketch the full phase diagram in Fig. 1; cf. Refs. [6; 7] for a detailed discussion and an overview of previous investigations. DMRG studies of \(H_{\text{HD}}\) can profit from several parity quantum numbers [51]. In particular, \(H_{\text{HD}}\) conserves the magnetization parity \(p_{m}=\sum_{i=1}^{L}S_{i}^{z}\mod 2\in\{0,1\}\) together with the spatial parity \(p_{s}=\pm 1\) and time reversal \(t=\pm 1\). The ground-state sector is described by \(p_{m}=0\), whereas the corresponding \(p_{s}\) and \(t\) can be determined based on the valence-bond-solid picture [52] in dependence of the boundary conditions [6].

Figure 1: Ground-state phase diagram of the spin-1 chain in Eq. (21), where the phase boundaries are taken from Ref. [6]. In the Néel phases the model exhibits antiferromagnetic order in \(x\), \(y\) and \(z\)-direction, respectively. The Haldane phase is gapped and an example of symmetry-protected topological order and exhibits non-local string order. The uniaxial anisotropic coupling leads to the large-\(D\) phase that confines the spins to the \(xy\)-plane, since the \(D\)-term penalizes configurations which are polarized in \(z\)-direction. Similarly, due to the rhombic coupling the large-\(E_{x}\) and \(E_{y}\) phases, which differ in the sign of \(E\), favor configurations in the \(yz\) and \(xz\)-plane.

In our RB-MPS approach we thus use DMRG with Abelian quantum number conservation and fix \(p_{m}=0\) to operate in the ground-state sector. Note that \(p_{s}\) and \(t\) are not explicitly fixed here. For the RB-MPS calculations we consider spin chains of length \(L\in\{20,40,60,80\}\) using MPS with a singular value error cutoff [53] of up to \(\mathtt{cut}_{\sigma}=10^{-8}\) -- corresponding to bond dimensions of mostly \(d\sim 100\) up to \(1000\), depending on the phases -- and DMRG sweeps which are converged up to an energy tolerance of \(10^{-9}\). 
Furthermore, we focus on the upper half-plane \(\mathbb{P}_{\rm HD}=[-2,2]\times[0,2]\) since the remaining phase diagram is obtained by flipping the sign of \(E\) which corresponds to swapping \(x\) and \(y\) terms. On this domain, we use a regular training grid of \(80\times 60\) parameter points and converge the surrogate up to an overlap eigenvalue accuracy of at least \(\mathtt{cut}_{\lambda}<10^{-8}\). Note that we here and in all following examples do not target degenerate ground states with DMRG solves, such that the number of snapshots \(n=N\) corresponds to the RB dimension. At these settings, the greedy algorithm takes \(N=173\) snapshots to reach convergence at the largest system size of \(L=80\), with maximal residuals of order \(\max_{\mathbf{\mu}\in\Xi_{\rm train}}\mathrm{Res}(\mathbf{\mu})\sim 10^{-2}\). Only a fraction of the \(80\times 60\) possible training points is thus needed to converge the surrogate model. To illustrate the convergence, we show the residual as well as the decay of the minimal eigenvalue of the overlap matrix \(S\) as a function of the RB dimension \(N\) in Fig. 2. The exponential decay of the overlap eigenvalues numerically demonstrates the increasing linear dependence among ground states on \(\mathbb{P}_{\rm HD}\) as snapshots are appended to the RB. Viewed from a physical angle, the fact that the decrease of overlap eigenvalues coincides with decreasing residuals over the parameter domain can be interpreted as larger eigenvalues being associated with global ground-state behavior, while smaller eigenvalues resolve more localized features in the phase diagram. Unsurprisingly, the decay rates decrease with growing system sizes -- in the thermodynamic limit \(L\to\infty\) we would in fact run into an orthogonality catastrophe, requiring a diverging number of snapshots to assemble a RB. 
Note that this also hinders the immediate use of _infinite_ matrix product states [54, 55, 56] in the RB framework, since all overlaps and matrix elements between ground states at different parameter points would vanish, effectively prohibiting any linear combination of snapshots. However, for \(H_{\rm HD}\) on \(\mathbb{P}_{\rm HD}\), the RB dimension required to converge a surrogate up to a fixed residual increases merely sublinearly in \(L\), as opposed to the exponential Hilbert-space growth, which indicates that there exists a sweet spot for system sizes, where the thermodynamic regime is approached while still being amenable to the RB-MPS method.

Figure 2: Maximal residual (upper panels) and the normalized minimal eigenvalue \(\tilde{\lambda}^{\rm min}=\|\Lambda\|_{F}^{-1}\min(\Lambda)\) of the overlap matrix \(S\) (lower panels) in dependence of the RB dimension \(N\). Both quantities follow an exponential decay, although with different rates. The right panels show the curves on rescaled \(x\)-axes, which illustrates the scaling behavior of \(N\) with system size. More specifically, we find \(N\sim L^{\eta}\) with approximately \(\eta=0.8\) for the residual and \(\eta=0.65\) for the minimal eigenvalues.

It is also interesting to examine the residual on the entire parameter domain, i.e., on a high-resolution online grid covering \(\mathbb{P}_{\rm HD}\), together with the snapshot parameter points, see Fig. 3. In particular, many snapshots are selected along the phase boundaries, and especially around the transition point between the large-\(E_{x}\) and Haldane phases, whereas deep inside the phases fewer snapshots are needed to resolve the ground-state behavior. We also find that a noticeable number of snapshots is located along the boundary of the parameter range \(\mathbb{P}_{\rm HD}\), which is standard in greedy RB algorithms. The clustering of snapshot parameter points hints at the selection mechanism of the greedy algorithm, which indicates domains in the phase diagram where the ground states vary more rapidly. More specifically, a higher density of sample points indicates a slower decay of the local Kolmogorov \(N\)-width [57] and thus a higher degree of linear independence of the solutions under local parameter variation.

Figure 3: Residual error estimate \(\mathrm{Res}(\mathbf{\mu})\) on \(\mathbb{P}_{\rm HD}\). The dots indicate the \(N=173\) parameter points \(\{\mathbf{\mu}_{1},\dots,\mathbf{\mu}_{N}\}\) corresponding to the snapshots that span the RB space. The dotted white lines show the phase boundaries of [6].

In order to reproduce the phase diagram of Fig. 1, we next measure various correlation functions that distinguish the different types of ground-state order [6]. The Néel-ordered phases are identified by measuring the spin-spin correlation functions \(\langle S_{r}^{\alpha}S_{r^{\prime}}^{\alpha}\rangle_{\rm rb}\) for \(\alpha=z\) and \(y\), while the large-\(D\) and large-\(E_{x}\) phases are characterized by the quadrupolar correlations \(\langle Q_{r}^{\gamma}Q_{r^{\prime}}^{\gamma}\rangle_{\rm rb}\), where we focus on the \(\gamma=z^{2}\) order with \(Q_{r}^{z^{2}}=\big[3(S_{r}^{z})^{2}-2\big]/\sqrt{3}\). To detect the Haldane phase, we consider the non-local string order operator [58] \(O_{rr^{\prime}}=-S_{r}^{z}\prod_{j=r+1}^{r^{\prime}-1}e^{i\pi S_{j}^{z}}S_{r^{\prime}}^{z}\), from which we subtract the spin correlator in the \(z\)-direction to remove the trivial background signal in the \(z\)-Néel phase and isolate the string order in the Haldane phase, \[\langle\tilde{O}_{rr^{\prime}}\rangle_{\rm rb}=-\big\langle S_{r}^{z}e^{i\pi\sum_{j=r+1}^{r^{\prime}-1}S_{j}^{z}}S_{r^{\prime}}^{z}\big\rangle_{\rm rb}-\langle S_{r}^{z}S_{r^{\prime}}^{z}\rangle_{\rm rb}. \tag{22}\] We show the corresponding measurements obtained from the \(L=80\) RB in Fig. 4. The RB correctly reproduces the different ground-state phases, up to finite-size effects, matching the results from Ref.
[6]. It can be clearly seen how the string order persists in the Haldane phase also for small values of the anisotropies \(D\) and \(E\). Overall, we find that the RB-MPS approach allows us to efficiently uncover the parameter regions of the various ground-state phases at high resolution, based on a comparably low number \(N\sim 100\) of DMRG truth solves. We leave a quantitative treatment of the accuracy of RB measurements to Sec. IV and move to a more complex example application.

Figure 4: Different two-site correlation functions measured on \(\mathbb{P}_{\rm HD}\) for an \(L=80\) chain at sites \(r=20\) and \(r^{\prime}=60\). The dotted white lines show the phase boundaries obtained in [6]. In the upper left and right panels, the Néel phases are clearly indicated by the spin correlators in the \(z\) and \(y\)-direction, whereas the quadrupolar correlator in the bottom left panel is smoothed along the phase boundaries. The bottom right panel shows the modified string order parameter from Eq. (22), where spin-spin correlations in the \(z\)-direction were subtracted.

### Bilinear-biquadratic spin-1 chain with uniaxial single-ion anisotropy

In the second example we stay in the realm of spin-1 chains. We now add a biquadratic exchange interaction term, while removing the rhombic anisotropy, resulting in the bilinear-biquadratic model with a uniaxial single-ion anisotropy \[H_{\rm BLBQ}=J\sum_{i=1}^{L-1}\big[\cos(\theta)\mathbf{S}_{i}\cdot\mathbf{S}_{i+1}+\sin(\theta)(\mathbf{S}_{i}\cdot\mathbf{S}_{i+1})^{2}\big]+D\sum_{i=1}^{L}(S_{i}^{z})^{2}, \tag{23}\] on a chain with open boundaries. Written as a dimensionless affine decomposition, we identify the parameter vector \(\mathbf{\mu}=(\theta,D/J)\) and the coefficient function \(\mathbf{\theta}(\mathbf{\mu})=(\cos(\mu_{1}),\sin(\mu_{1}),\mu_{2})\). Various aspects of the rather rich physics of this model, featuring multiple gapped as well as critical phases, were previously studied using both analytical and computational approaches, including extensive DMRG calculations. We refer in particular to the overall ground-state phase diagram reported in Ref. [59], which also provides an overview of previous studies. The ground-state phase diagram of \(H_{\rm BLBQ}\) is outlined and visualized in Fig. 5.

Figure 5: Phase diagram of the bilinear-biquadratic spin-1 chain with uniaxial coupling \(D\). The phase boundaries are taken from Ref. [59], here shown without error bars. The \(\theta=0\) cut is equivalent to the \(E=0\) cut of \(H_{\rm HD}\) and is therefore already contained in Fig. 1. Besides the Haldane, large-\(D\) and \(z\)-Néel phases, further regimes are the fully polarized ferromagnetic (FM) and in-plane XY-FM ordered phases, a dimerized phase with a finite bond order parameter (24), and two critical phases, denoted critical A and critical B. In the former, two domains have been identified, with \(c=1\) and \(c=2\), respectively, while in the latter the central charge is \(c=1\).

Due to the increased complexity in parameter space, the anisotropic bilinear-biquadratic spin chain serves as an interesting stress test for the RB-MPS approach. To begin with, we consider the entire parameter domain of Fig. 5, corresponding to \(\mathbb{P}_{\mathrm{BLBQ}}=[-\pi,\pi]\times[-2,3]\), covering all possible phases. Here, the training grid consists of \(100\times 100\) uniformly spaced parameter points. Since this gives rise to a significant degree of linear independence between snapshots, we restrict ourselves to a small system of \(L=24\) spins. Again an energy convergence tolerance of \(10^{-9}\) is used, while the singular-value cutoff is reduced to \(\mathtt{cut}_{\sigma}=10^{-5}\), due to the increased computational complexity of DMRG in the critical phases.
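The affine decomposition of Eq. (23) is what makes the online stage cheap: the three parameter-independent terms are built (and, in the RB, projected) once, and any \(H(\mathbf{\mu})\) is a three-term linear combination. A minimal dense sketch on a tiny chain (the helper names `embed`, `dots`, etc. are invented for illustration; the actual computations use MPO representations):

```python
import numpy as np
from functools import reduce

# Spin-1 operators
Sz = np.diag([1.0, 0.0, -1.0])
Sp = np.sqrt(2.0) * np.diag([1.0, 1.0], k=1)   # S^+
Sx = (Sp + Sp.T) / 2
Sy = (Sp - Sp.T) / 2j

def embed(ops, L):
    """Kronecker-embed a {site: operator} dict into the L-site Hilbert space."""
    return reduce(np.kron, [ops.get(i, np.eye(3)) for i in range(L)])

L = 4                                          # tiny chain for illustration
dots = [sum(embed({i: S, i + 1: S}, L) for S in (Sx, Sy, Sz))
        for i in range(L - 1)]                 # bond operators S_i . S_{i+1}

# Parameter-independent affine terms H_q of Eq. (23), built once
H_bl = sum(dots)                               # bilinear exchange
H_bq = sum(d @ d for d in dots)                # biquadratic exchange
H_D = sum(embed({i: Sz @ Sz}, L) for i in range(L))

def H(theta, D):
    """Assemble H(mu) = cos(theta) H_bl + sin(theta) H_bq + D H_D."""
    return np.cos(theta) * H_bl + np.sin(theta) * H_bq + D * H_D

Hm = H(0.3 * np.pi, 0.5)
print(np.allclose(Hm, Hm.conj().T))            # Hermitian by construction
```

The same assembly applies verbatim to the reduced matrices \(h_{q}=B^{\dagger}H_{q}B\), which is why a parameter sweep only costs \(N\times N\) linear algebra per point.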
At this relatively low precision, we face the problem that the induced MPS errors prevent the greedy algorithm from resolving the phase diagram up to the desired accuracy -- we merely reach a maximal residual of \(\max_{\mathbf{\mu}\in\Xi_{\mathrm{train}}}\mathrm{Res}(\mathbf{\mu})=0.2\) at \(N=198\) basis snapshots, before terminating the assembly. Hence, the results obtained from the RB at these settings have to be interpreted with caution. We follow up on the interplay of MPS accuracy and RB convergence in Sec. IV.2, and first discuss the numerical results here. Despite the crude accuracy, we are able to make several interesting observations. We begin by illustrating the greedy parameter selection, where the snapshot parameters and the residual on \(\mathbb{P}_{\mathrm{BLBQ}}\) are shown in Fig. 6. It can be observed that the bulk of snapshots concentrates around the critical A phase, and that the \(c=2\) region features plateau-like structures in the residual. Moreover, a less concentrated clustering can be seen around the SU(3) point [60] at \(\mathbf{\mu}=(-3\pi/4,0)\), and the entire ferromagnetic phase is spanned by only one snapshot. We proceed to investigate the gapped domains in parameter space. As in the previous application, the \(z\)-Néel and large-\(D\) phases are again indicated by the \(z\)-spin and \(z^{2}\)-quadrupolar correlation functions, respectively, and the Haldane phase is captured by the appropriate string order parameter \(\langle\tilde{O}_{rr^{\prime}}\rangle_{\mathrm{rb}}\) of Eq. (22). To detect the dimerized phase, we consider the bond order parameter \[\mathcal{D}_{r}=|H_{r}-H_{r+1}|, \tag{24}\] where \(H_{r}=\cos(\theta)\mathbf{S}_{r}\cdot\mathbf{S}_{r+1}+\sin(\theta)(\mathbf{S}_{r}\cdot\mathbf{S}_{r+1})^{2}\), i.e., the difference of the bond terms on two adjacent bonds spanning three adjacent sites. We present the RB measurements in Fig. 7. The most pronounced signal for dimerization is obtained along the \(D=0\) line.
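The string order parameter of Eq. (22), used above to detect the Haldane phase, can be cross-checked by exact diagonalization on a short open chain at the isotropic Heisenberg point \(\theta=0\), \(D=E=0\). The following sketch (invented helper names; the paper measures this within the RB-MPS surrogate instead) keeps everything real by working with \(iS^{y}\):

```python
import numpy as np
from functools import reduce

Sz = np.diag([1.0, 0.0, -1.0])
Sp = np.sqrt(2.0) * np.diag([1.0, 1.0], k=1)
Sx = (Sp + Sp.T) / 2
iSy = (Sp - Sp.T) / 2            # i * S^y, kept real
U = np.diag([-1.0, 1.0, -1.0])   # exp(i pi S^z) for spin 1

def embed(ops, L):
    """Kronecker-embed a {site: operator} dict into the L-site Hilbert space."""
    return reduce(np.kron, [ops.get(i, np.eye(3)) for i in range(L)])

L = 6                            # short isotropic Haldane chain, open boundaries
# S^y S^y = -(iS^y)(iS^y), so the Heisenberg bond stays a real matrix
H = sum(embed({i: Sx, i + 1: Sx}, L) + embed({i: Sz, i + 1: Sz}, L)
        - embed({i: iSy, i + 1: iSy}, L) for i in range(L - 1))
psi = np.linalg.eigh(H)[1][:, 0]

r, rp = 1, 4                     # bulk sites, 0-indexed
string_op = -embed({r: Sz, **{j: U for j in range(r + 1, rp)}, rp: Sz}, L)
string = psi @ string_op @ psi   # first term of Eq. (22)
szsz = psi @ embed({r: Sz, rp: Sz}, L) @ psi
print(string, string - szsz)    # finite string order in the Haldane phase
```

Even on this very short chain, the string correlator is clearly finite, consistent with the Haldane-phase signal seen in the RB measurements.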
The RB results furthermore indicate that the phase boundary between the dimerized and the large-\(D\) regime is located at small but finite \(D>0\), i.e., slightly above the \(D=0\) line. This is in accord with earlier high-precision determinations of this phase boundary line [59, 60, 61, 62, 63, 64, 65, 66]. Next, we consider the critical phases. Previous investigations [59, 60] pointed out that these exhibit dominant (algebraic) correlations of quadrupolar spin-nematic operators. In order to systematically investigate these correlations, we measured various quadrupolar structure factors \[\langle Q^{\gamma}(-k)\,Q^{\gamma}(k)\rangle_{\mathrm{rb}}=\frac{1}{L}\sum_{r,r^{\prime}=1}^{L}\exp[-ik(r-r^{\prime})]\,\langle Q_{r}^{\gamma}Q_{r^{\prime}}^{\gamma}\rangle_{\mathrm{rb}}, \tag{25}\] using the \(L=24\) surrogate, with \(\gamma\in\{xy,x^{2}-y^{2},z^{2}\}\), thus probing both transverse and longitudinal quadrupolar correlations, where \(Q_{r}^{xy}=S_{r}^{x}S_{r}^{y}+S_{r}^{y}S_{r}^{x}\), \(Q_{r}^{x^{2}-y^{2}}=(S_{r}^{x})^{2}-(S_{r}^{y})^{2}\), and \(Q_{r}^{z^{2}}\) was given above.

Figure 6: Residual on \(\mathbb{P}_{\mathrm{BLBQ}}\) as well as the \(N=198\) selected parameter points for the \(L=24\) bilinear-biquadratic model. The critical A phase at central charge \(c=2\) accumulates the bulk of the parameter points, whereas the FM domain is emulated by only one snapshot, with the surrogate obtaining a low residual across the FM phase.

Figure 7: Correlation functions on the entire parameter domain \(\mathbb{P}_{\mathrm{BLBQ}}\). Since we consider only a small chain of \(L=24\) spins, we observe finite-size effects such as shifted and blurred phase boundaries, e.g., in the dimerized phase. The correlators are measured at sites \(r=6\) and \(r^{\prime}=19\), and the dimer order parameter is computed in the chain center. Note that the color-bar scale is truncated at \(1\), which is slightly surpassed by the dimer order parameter and the quadrupolar correlator.
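Numerically, the double sum in Eq. (25) reduces to a quadratic form with a phase vector, \(S(k)=\mathbf{v}^{\dagger}C\mathbf{v}/L\) with \(v_{r}=e^{ikr}\) and \(C_{rr^{\prime}}\) the two-point correlation matrix. A short sketch on an invented toy correlation matrix (period-3 modulation with exponential decay, mimicking the \(k=2\pi/3\) order found along the \(D=0\) line):

```python
import numpy as np

L = 24
r = np.arange(L)
# Toy correlation matrix: period-3 modulation, exponentially decaying
dr = r[:, None] - r[None, :]
C = np.cos(2 * np.pi * dr / 3) * np.exp(-np.abs(dr) / 6.0)

def structure_factor(C, k):
    """Eq. (25) as a quadratic form: S(k) = v^dagger C v / L, v_r = e^{ikr}."""
    v = np.exp(1j * k * r)
    return (v.conj() @ C @ v).real / len(r)

ks = 2 * np.pi * np.arange(L // 2 + 1) / L     # momenta in [0, pi]
S = np.array([structure_factor(C, k) for k in ks])
print(ks[int(S.argmax())] / np.pi)             # dominant wave vector, units of pi
```

Once the correlation matrix is available from the surrogate, scanning over arbitrary wave vectors \(k\) in this way costs essentially nothing, which is precisely the cheap "vary additional parameters on the fly" workflow exploited below.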
Note that the case \(\gamma=x^{2}-y^{2}\) is in fact equivalent to the \(xy\) case, due to the residual \(\mathrm{U}(1)_{xy}\) symmetry of the Hamiltonian \(H_{\mathrm{BLBQ}}\). Hence, we do not discuss this case separately here. The results for \(\gamma=xy\) are shown in Fig. 8 in the vicinity of both the critical A and B phases. From Fig. 8 we detect enhanced transverse ferroquadrupolar correlations, i.e., at \(k=0\), in the regime of the critical B phase, as expected [59; 60]. Moreover, dominant transverse antiferroquadrupolar correlations with \(k=\pi\) are observed in the \(c=1\) domain of critical A. This is in accord with the overall expectation for enhanced nematic correlations in this regime [59]. The antiferroquadrupolar character has however not been reported in Ref. [59]. Here, this additional information is directly available from the RB-MPS approach. For the \(\gamma=z^{2}\) quadrupolar structure factor we observe an enhanced signal in the \(c=2\) domain of the critical A phase, as shown in Fig. 9. In particular, along the \(D=0\) line, the dominant wave vector equals \(k=2\pi/3\), as shown in the left panel of Fig. 9, in accord with earlier reports [64]. For finite values of \(D\) however, the signal at \(k=2\pi/3\) reduces, and a plateau structure is observed. The reduced signal could result from either an overall suppression of the quadrupolar correlations or from a shift of the dominant wave vector away from its \(D=0\) value of \(k=2\pi/3\). The emergence of the plateau structure indeed already hints towards the latter scenario, as modulations of the spin structure that are not commensurate with the chain length of \(L=24\) result in the pinning of the best-matching quadrupolar structure over a finite parameter regime. 
In order to further investigate this effect, we narrow the parameter domain down to the one-dimensional cut \(\{0.3\pi\}\times[-2,3]\), which crosses the critical A phase, and on which we generate a second RB for \(L=24\) spins. On this smaller parameter domain we reach a maximal residual of \(10^{-2}\) with \(N=42\) snapshots, using a higher MPS accuracy of \(\mathtt{cut}_{\sigma}=10^{-6}\). Hence, the RB-MPS method this time requires significantly fewer snapshots and is simultaneously able to rectify the previous inaccuracies. Again, we evaluate the \(z^{2}\)-quadrupolar structure factor, which is shown for different wave vectors in the right panel of Fig. 9. It can be observed that the dominant wave vector \(k\) of the quadrupolar correlations varies across the \(c=2\) regime of the critical A phase, suggesting a continuously varying \(k\) to emerge in the thermodynamic limit. We are not aware of previous reports of these incommensurate quadrupolar correlations for the Hamiltonian \(H_{\mathrm{BLBQ}}\). On the other hand, the presence of such incommensurate correlations provides a simple explanation for the particular abundance of snapshots picked by the greedy algorithm throughout the full \(c=2\) domain of the critical A phase. Note that while a relatively large number of snapshots is required to build the RB on this one-dimensional parameter cut, a comparative scan of truth solves would be significantly more expensive, since a fine resolution is needed to detect the sharp jumps between plateaus. From this particular application it becomes apparent that one of the strengths of the RB-MPS method lies in being able to scan large domains and cheaply vary additional parameters, such as the wave vector, for many different observables with minimal overhead.
This enables us to gain additional insight during a post-processing stage by considering various order parameters on the fly, which would be significantly more involved and computationally expensive by way of truth solving only. Furthermore, it proves viable to generate surrogates on smaller parameter domains, once a large-scale scan has been performed, in order to improve accuracy and lower the required RB dimension. Such a procedure of partitioning the parameter space and generating independent RBs on each subdomain could be systematically implemented using the techniques from Refs. [67; 68; 69]. Partitioning approaches could also allow treating larger system sizes for Hamiltonians, such as \(H_{\rm BLBQ}\), that exhibit complex phase diagrams.

Figure 9: Quadrupolar structure factor \(\langle Q^{z^{2}}(-k)\,Q^{z^{2}}(k)\rangle_{\mathrm{rb}}\) on the critical A domain at \(k=2\pi/3\) (left) and on the parameter cut \(\{0.3\pi\}\times[-2,3]\) (right), which is indicated by the green dashed line. By varying the wave vector, different plateaus are highlighted in the \(c=2\) region of the critical phase.

## IV Technical discussion

### Accuracy and convergence

We move on to assess the accuracy of the RB-MPS method. Similarly to Ref. [10], we quantify the accuracy of RB surrogates by considering the differences between truth and RB measurements and maximizing over the parameter domain under consideration, to obtain the most conservative error estimates. In order to sample the regions of the parameter domain that have not been selected by the greedy algorithm and truth solved, we introduce a test grid \(\Xi_{\rm test}\) that has no mutual points with the training grid. In practice, the test grid is obtained by shifting the training grid by half a grid spacing in each parameter direction.
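The shifted-grid construction can be sketched in a few lines (function name and variable names invented for illustration); shifting by half a spacing guarantees that training and test grids share no points while probing the regions farthest from any training point:

```python
import numpy as np

def train_test_grids(lo, hi, num):
    """Uniform training grid plus a test grid shifted by half a spacing."""
    train = np.linspace(lo, hi, num)
    step = (hi - lo) / (num - 1)
    test = train[:-1] + step / 2      # midpoints: no mutual points with train
    return train, test

# e.g. the two directions of P_HD = [-2, 2] x [0, 2]
train_E, test_E = train_test_grids(-2.0, 2.0, 80)
train_D, test_D = train_test_grids(0.0, 2.0, 60)
print(len(test_E), len(test_D))       # one fewer point per direction
```

A two-dimensional test grid is then obtained as the Cartesian product of the shifted one-dimensional grids.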
Taking these considerations into account, we estimate the eigenvalue error by \[\mathtt{err}_{\rm val}=\max_{\boldsymbol{\mu}\in\Xi_{\rm test}}\frac{|E(\boldsymbol{\mu})-E_{\rm rb}(\boldsymbol{\mu})|}{|E(\boldsymbol{\mu})|}, \tag{26}\] i.e., the maximal relative difference between truth energies \(E(\boldsymbol{\mu})\) and RB energies \(E_{\rm rb}(\boldsymbol{\mu})\). For errors of observables, we consider absolute differences \[\mathtt{err}_{\rm obs}=\max_{\boldsymbol{\mu}\in\Xi_{\rm test}}|\langle O(\boldsymbol{\mu})\rangle-\langle O(\boldsymbol{\mu})\rangle_{\rm rb}|, \tag{27}\] since for the observables that we will treat in the following -- spin-spin correlators and quadrupolar structure factors -- the measured values become exactly zero on certain subdomains. Note that the absolute errors in these cases provide comparable error quantifiers alongside the relative errors, because the range of observable values concentrates around \(\langle O(\boldsymbol{\mu})\rangle_{\rm rb}\sim 0.1\) and \(\sim 1\). The vector error estimate is more subtle, since it has to account for different global phases between truth and RB solutions. We avoid such phase issues by computing the difference of outer products \[\delta_{\rm vec}(\boldsymbol{\mu})=\frac{\|\boldsymbol{\Psi}(\boldsymbol{\mu})\boldsymbol{\Psi}^{\dagger}(\boldsymbol{\mu})-\boldsymbol{\Phi}_{\rm rb}(\boldsymbol{\mu})\boldsymbol{\Phi}_{\rm rb}^{\dagger}(\boldsymbol{\mu})\|_{F}}{\|\boldsymbol{\Psi}(\boldsymbol{\mu})\boldsymbol{\Psi}^{\dagger}(\boldsymbol{\mu})\|_{F}}, \tag{28}\] where we denote \(\boldsymbol{\Psi}(\boldsymbol{\mu})\equiv(|\Psi^{[1]}(\boldsymbol{\mu})\rangle,\ldots,|\Psi^{[m]}(\boldsymbol{\mu})\rangle)\) to declutter notation, and again maximize over the test grid, \(\mathtt{err}_{\rm vec}=\max_{\boldsymbol{\mu}\in\Xi_{\rm test}}\delta_{\rm vec}(\boldsymbol{\mu})\).
In this particular form, we would need to explicitly reconstruct Hilbert-space dimensional vectors from the MPS, which is exponentially hard in the system size. This can be circumvented by computing the norm using the Frobenius inner product and the cyclic property of the trace, producing the expression \[\delta_{\rm vec}(\boldsymbol{\mu})=\sqrt{1+\frac{\|\boldsymbol{\Phi}_{\rm rb}^{\dagger}(\boldsymbol{\mu})\boldsymbol{\Phi}_{\rm rb}(\boldsymbol{\mu})\|_{F}^{2}-2\|\boldsymbol{\Phi}_{\rm rb}^{\dagger}(\boldsymbol{\mu})\boldsymbol{\Psi}(\boldsymbol{\mu})\|_{F}^{2}}{\|\boldsymbol{\Psi}^{\dagger}(\boldsymbol{\mu})\boldsymbol{\Psi}(\boldsymbol{\mu})\|_{F}^{2}}}, \tag{29}\] where \(\|\boldsymbol{\Phi}_{\rm rb}^{\dagger}(\boldsymbol{\mu})\boldsymbol{\Phi}_{\rm rb}(\boldsymbol{\mu})\|_{F}=\|\boldsymbol{\varphi}_{\rm rb}^{\dagger}(\boldsymbol{\mu})b\boldsymbol{\varphi}_{\rm rb}(\boldsymbol{\mu})\|_{F}\) and the norm of the RB-truth overlaps is found to be \[\|\boldsymbol{\Phi}_{\rm rb}^{\dagger}(\boldsymbol{\mu})\boldsymbol{\Psi}(\boldsymbol{\mu})\|_{F}^{2}=\sum_{i,j=1}^{m}\left|\langle\Phi_{\rm rb}^{[i]}(\boldsymbol{\mu})|\Psi^{[j]}(\boldsymbol{\mu})\rangle\right|^{2}=\sum_{i,j=1}^{m}\Big{|}\sum_{k=1}^{N}[V\varphi_{\rm rb}^{[i]}(\boldsymbol{\mu})]_{k}\,\langle\Psi_{k}|\Psi^{[j]}(\boldsymbol{\mu})\rangle\Big{|}^{2}. \tag{30}\] Thus, we are able to compute the eigenvector error using only RB coefficients and MPS overlaps at the particular parameter point \(\boldsymbol{\mu}\), without resorting to \(\mathcal{N}\)-dependent operations. For the first example, we return to the Haldane chain of \(L=40\) spins and compute the above error quantities on the one-dimensional parameter cut \(\boldsymbol{\mu}\in\{0\}\times[0,2]\). Note that we restrict ourselves to a medium chain length and a small parameter cut due to the high computational effort associated with the required truth solves on \(\Xi_{\rm test}\).
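The algebraic identity behind Eq. (29) holds for arbitrary column blocks, and can be verified on small dense vectors; the sketch below (random complex matrices standing in for truth states \(\mathbf{\Psi}\) and RB states \(\mathbf{\Phi}_{\rm rb}\)) compares the direct outer-product formula of Eq. (28) with the overlap-only expression. In the actual method, the overlap blocks are assembled from RB coefficients and MPS overlaps as in Eq. (30) rather than from dense vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 40, 3          # toy Hilbert-space dimension and number of target states
Psi = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
Phi = Psi + 0.05 * (rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m)))

fro = np.linalg.norm  # Frobenius norm for matrices

# Eq. (28): direct difference of n x n outer products (exponentially costly for MPS)
direct = fro(Psi @ Psi.conj().T - Phi @ Phi.conj().T) / fro(Psi @ Psi.conj().T)

# Eq. (29): the same quantity computed from m x m overlap blocks only
delta = np.sqrt(1 + (fro(Phi.conj().T @ Phi) ** 2 - 2 * fro(Phi.conj().T @ Psi) ** 2)
                / fro(Psi.conj().T @ Psi) ** 2)

print(direct, delta)  # identical up to floating-point rounding
```

The equality follows from \(\mathrm{tr}(\mathbf{\Psi}\mathbf{\Psi}^{\dagger}\mathbf{\Phi}\mathbf{\Phi}^{\dagger})=\|\mathbf{\Phi}^{\dagger}\mathbf{\Psi}\|_{F}^{2}\), i.e., the cyclic property of the trace mentioned in the text.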
At a singular-value cutoff of \(\mathtt{cut}_{\sigma}=10^{-8}\) and \(N=14\) snapshots, we converge the surrogate to a maximal residual below \(10^{-2}\). For the observable error, we measure the spin-spin correlator \(\langle S_{r}^{y}S_{r^{\prime}}^{y}\rangle\). The resulting convergence of the maximal errors during the RB generation is shown in Fig. 10. It is observed that the residual decays at a similar rate to the eigenvector error, while the eigenvalue error shows a significantly faster convergence, as was previously found for ED-based surrogate models [10]. Moreover, the data demonstrates that the residual acts as an error surrogate by providing an upper bound to all real errors -- at a residual of \(10^{-2}\), the considered error quantities have decayed to the sub-percent range.

Figure 10: Decay of maximal RB errors for the \(L=40\) Haldane chain with respect to the \(\{0\}\times[0,2]\) parameter cut.

For the next example, we reexamine the \(L=24\) bilinear-biquadratic chain on the \(\{0.3\pi\}\times[-2,3]\) parameter cut crossing the critical A phase. On this parameter domain the surrogate has to resolve a large degree of ground-state variation, and in particular discrete plateaus in the evaluated observables, which serves as a good example to illustrate possible difficulties in converging RBs. We adopt the offline settings from Sec. III.2 and again perform truth solves on a shifted test grid. As for the observable error, we here consider the quadrupolar structure factor \(\langle Q^{z^{2}}(-k)\,Q^{z^{2}}(k)\rangle_{\mathrm{rb}}\) from before. In addition to maximizing the error over the test grid, we also maximize with respect to all possible wave vectors \(k\in[0,2\pi]\). The maximal and median errors as functions of the RB dimension are presented in Fig. 11. Most strikingly, we observe that the maximal RB errors plateau above the maximal residual, which still decays to \(10^{-2}\).
While this breaks the desired property of the residual error estimator of providing an upper bound on all RB errors, we see that the median errors do converge nicely below the residual line. This disparity between maximal and median errors is due to the discrete jumps observed in the \(c=2\) critical A phase, as well as the sensitivity of the Fourier transform in the structure factor to variations of \(k\). As shown in Fig. 12, at one particular point in \(\Xi_{\text{test}}\) a truth solve is performed directly at the transition between two plateaus, where the RB linearly interpolates between the plateau values instead of producing a sharp gap, such that we obtain structure-factor deviations of order \(10^{-1}\). Leaving this point aside, the DMRG and RB observable measurements are in excellent agreement, whereas the decay rate of the vector errors is generally slower. At the chosen MPS accuracies, we thus observe mostly well-behaved convergence properties similar to those of ED-generated RBs, and the fact that the surrogate operates on approximate MPS snapshots and contractions thereof does not hinder us from obtaining accurate observable measurements. This prompts the question of when the MPS approximations do become significant and thereby impede the generation of a meaningful surrogate, to which we turn in the next section.

### Sensitivity to MPS approximations

One property of RBs that is independent of the specific truth-solving method is that the surrogate accuracy is ultimately bounded by the truth accuracy. More specifically, when assembling a RB using the greedy algorithm, we expect the residual error estimator to stagnate at some value, since the finite truth-solving accuracy implies that further snapshots cannot improve the RB accuracy. In this regard, using DMRG in conjunction with RB methods does not bring forth anything new, since ED solvers are also numerically approximate, albeit to higher accuracy.
What is new, however, is that contractions of snapshots with operators, \(\langle\Psi_{i}|A|\Psi_{j}\rangle\), are approximate, because applications of MPOs multiplicatively increase the bond dimensions and therefore require further MPS compressions. To numerically probe the effects of MPS approximations on the greedy RB assembly, we generate surrogates of the \(L=20\) Haldane chain at different singular-value cutoffs on the \(\mathbb{P}_{\text{HD}}\) domain, while fixing all other settings. In particular, we set the residual tolerance and \(\mathtt{cut}_{\lambda}\) to zero, such that the basis assembly could continue indefinitely, and only terminate when reaching a predefined maximal number of snapshots. We show the corresponding decay of maximal residuals and minimal overlap eigenvalues in Fig. 13. Generally, it is observed that lower bond-dimension cutoffs allow us to assemble surrogates with lower final residuals, i.e., to higher overall accuracy.

Figure 11: RB errors for the \(L=24\) bilinear-biquadratic model on \(\mathbf{\mu}\in\{0.3\pi\}\times[-2,3]\). The solid lines show the maximal errors while dotted lines indicate the median errors with respect to \(\Xi_{\text{test}}\).

Figure 12: Structure factor \(\langle Q^{z^{2}}(-k)\,Q^{z^{2}}(k)\rangle_{\text{rb}}\) on \(\mathbf{\mu}\in\{0.3\pi\}\times[-2,3]\) computed using the surrogate model (solid lines) and DMRG truth solves on \(\Xi_{\text{test}}\) (markers). The corresponding deviations between DMRG and RB ground states (vec), structure factor measurements (obs) and energies (val) are shown below. Note that we take the maximal structure factor deviations with respect to \(k\in[0,2\pi]\). The maximal observable error, indicated by the red dot, occurs at a particularly error-sensitive transition between two plateaus in the \(c=2\) domain, as shown in the inset, where the maximal deviation is produced by the jump in the \(k=0\) structure factor (encircled in red).
Furthermore, for each greedy assembly we encounter a point where the residuals suddenly spike and the minimal eigenvalues drop to zero -- which would be unexpected for ED-based surrogates, where the residual is instead expected to plateau upon reaching the maximal surrogate accuracy. The overlap eigenvalues vanish because the greedy algorithm starts to select close-by, or even equal, parameter points over the span of the last iterations, causing a strong increase in linear dependence. This is accompanied by approximation errors in \(h_{q}\), \(h_{qq^{\prime}}\) and hence \(\mathbf{\varphi}_{\text{rb}}\) that eventually prohibit the use of the Rayleigh-Ritz method as in Eqs. (12)-(14) and result in abrupt increases of the residual. Note that, due to the erroneous coefficients \(\mathbf{\varphi}_{\text{rb}}\) that occur when, and sometimes right before, residual spikes are produced, RB expectation values of observables may then exhibit unphysical artifacts. Fortunately, in numerical practice one can devise simple heuristics to terminate the greedy assembly at the appropriate time, i.e., right before approaching a residual spike (see the red dots in Fig. 13), in case sufficient MPS accuracy cannot be guaranteed. Whenever a parameter point is selected twice, which is forbidden in a correctly assembled greedy basis, the iteration is stopped. Moreover, a suddenly ill-conditioned RB normalization \(b=B^{\dagger}B\) indicates the drop of the minimal overlap eigenvalues, such that the assembly is stopped when \(\|I-b\|>\delta\) for some threshold \(\delta\), or equivalently when the condition number of \(b\) becomes too large to converge the eigenvalue problem of Eq. (12). We want to remark that even when too many snapshots have been included in the RB, it is possible, without further computational expense, to roll back the surrogate model by removing the last snapshots and the corresponding overlaps and matrix elements.
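The termination heuristics just described can be collected in a small guard function; the signature and thresholds below are invented for illustration, and in practice the checks run once per greedy iteration before accepting the next snapshot:

```python
import numpy as np

def should_stop(mu_next, selected, b, delta=1e-2, max_cond=1e12):
    """Sketch of the greedy-termination heuristics described in the text:
    stop the assembly before a residual spike occurs."""
    if any(np.allclose(mu_next, mu) for mu in selected):
        return True    # a parameter point would be selected twice
    if np.linalg.norm(np.eye(len(b)) - b) > delta:
        return True    # RB normalization b = B^dagger B drifts from identity
    if np.linalg.cond(b) > max_cond:
        return True    # Rayleigh-Ritz eigenproblem would become unstable
    return False

b_ok = np.eye(3) + 1e-4 * np.ones((3, 3))       # mildly perturbed normalization
print(should_stop(np.array([0.1, 0.2]), [np.array([0.0, 0.0])], b_ok))  # False
print(should_stop(np.array([0.0, 0.0]), [np.array([0.0, 0.0])], b_ok))  # True
```

If the guard fires too late, the rollback mentioned above simply discards the last snapshot together with its rows and columns of the stored overlaps and matrix elements.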
For a more practical discussion of these technical aspects, exemplified by further physical models, we point the reader to our code documentation available via the aforementioned repository [40]. ## V Conclusions In the present work, we expanded the RB method for quantum spin systems with greedy parameter selection, introduced in Ref. [10], to operate on MPSs obtained from DMRG solves. MPS approximations allow us to access larger many-body systems than with ED, and thereby open up the RB approach to more physically interesting scenarios, while simultaneously providing sufficient truth accuracy to generate accurate surrogate models. In order to combine the RB framework with MPS snapshots, we adjusted the orthogonalization method for the reduced basis \(B\), and obtained the required overlaps and matrix elements for the RB construction from efficient MPS procedures -- beyond that the RB-MPS approach treats the DMRG as a black-box solver (with finite accuracy), thus profiting from the complementary structure of RB methods. The combined RB-MPS method was then applied to one-dimensional spin models with rich phase diagrams: the Haldane spin-1 chain with uniaxial and rhombic-type single-ion anisotropies as well as the bilinear-biquadratic spin-1 chain with uniaxial anisotropy. In both applications, the RB-MPS method proved to be a numerically efficient and versatile tool for determining complex ground-state phase diagrams. In particular, the phase diagram of the anisotropic Haldane spin-1 chain was efficiently obtained from RB surrogates, where the number of required snapshots grows sublinearly in the system size \(L\). Here, the degree of linear independence across the parameter domain was monitored and controlled by the eigenvalues of the overlap matrix. 
The greedy parameter selection mechanism was found to pick up on physical features of the phase diagram: e.g., the Haldane phase of \(H_{\text{HD}}\) was sampled more frequently, whereas the ferromagnetic phase in the bilinear-biquadratic chain model \(H_{\text{BLBQ}}\) was spanned by merely one snapshot. Therefore, we actually gain an automatized indicator for regions of strong ground-state variation. Furthermore, it was demonstrated that the RB-MPS approach functions on comparatively large parameter domains containing multiple phases. Overall, RBs of dimension \(N\sim 100\) were sufficient to capture the whole variety of correlations with fairly high resolution. Especially in the application to \(H_{\text{BLBQ}}\), the RB-MPS approach qualitatively captured a rather rich phase diagram from large-scale parameter scans. Furthermore, based on a systematic exploration of quadrupolar structure factors within the RB-MPS approach, we were able to identify dominant antiferroquadrupolar correlations within the \(c=1\) regime of the critical A phase, and obtained characteristic signatures of incommensurate spin-nematic correlations within the \(c=2\) domain of the critical A phase. Both these findings have apparently not been uncovered by previous investigations, emphasizing the role of the RB-MPS approach and the usefulness of a computational tool that efficiently scans large parameter domains.

Figure 13: Decay of maximal residuals and minimal normalized overlap eigenvalues \(\tilde{\lambda}^{\text{min}}=\|\Lambda\|_{F}^{-1}\min(\Lambda)\) for different singular-value error cutoffs \(\mathtt{cut}_{\sigma}\). The red dots indicate the final chosen dimension of the RB surrogates.

In the last part, we analyzed the convergence properties of the RB-MPS approach.
For this purpose, we quantified the maximal RB errors and found well-behaved error convergence properties for \(H_{\text{HD}}\), i.e., exponentially decaying RB errors that enter the sub-percent range after \(N\sim 10\) snapshots, where the residual constitutes a surrogate error estimator by providing an upper bound on all RB errors. For the bilinear-biquadratic chain on the critical A phase, the errors converge mostly well across the parameter domain despite the rapid and discrete ground-state changes, except for sharp transitions between structure factor plateaus that are sometimes inaccurately interpolated by the surrogate models. Lastly, we explored how the RB accuracy ultimately depends on the chosen MPS singular value cutoff and found that the greedy algorithm eventually breaks down; this breakdown can, however, be avoided by simple heuristics and reversed without additional computational effort. The presence of MPS approximations does raise issues that depend on the specific physics underlying the system on the sampled domain. On very elaborate parameter domains, the required number of MPS snapshots can become unwieldy and the necessary bond dimensions may become high, such that DMRG solves become considerably more expensive. Yet, these issues can be circumvented by first performing large-scale scans using a crude RB surrogate and then, informed by the qualitative results, assembling RBs on smaller subdomains with higher accuracy. In this sense, the RB-MPS approach offers a particularly convenient workflow for the exploration of unknown phase diagrams: based on a given surrogate, refined RB models can be devised in any relevant subregion of the parameter space upon adapting the offline sampling domain. Moreover, such subregions can be identified based on readily accessible observables during the online stage at low computational costs. 
Potential future directions of research on the RB-MPS approach include (automated) grid refinements [67, 68, 69] and parallelization of the offline stage (parallelizing the online stage is straightforward). Furthermore, we anticipate generic ground-state probes such as wavefunction overlaps [70] and associated fidelity susceptibilities [71, 72], as well as entanglement measures [73, 74] and fluctuations of conserved quantities [75, 76, 77, 78], to be promising further diagnostic tools to combine with the RB-MPS approach presented here. Finally, we could foresee other tensor network techniques [42, 43, 44, 45] (at least those for finite-size systems) to be naturally integrated in the RB framework, thus extending the application realm to higher spatial dimensions, too. ###### Acknowledgements. We thank Norbert Schuch for helpful suggestions regarding the computation of eigenvector errors using MPS. Simulations were performed with computing resources granted by RWTH Aachen University under project thes1253. M.R. acknowledges support from the Deutsche Forschungsgemeinschaft (DFG), project grant 277101999, within the CRC network TR 183 (subproject B01). P.B. and S.W. acknowledge support by DFG through RTG 1995.
2307.14960
Spectroscopy and topological properties of a Haldane light system
We introduce a local spectroscopic method in real space to probe the topological properties of a circuit quantum electrodynamics (cQED) array, generalizing previous approaches from one to two dimensions in the plane. As an application, we develop the theory of microwave light propagating in a local probe capacitively coupled to a cQED array associated with a bosonic Haldane model. Interestingly, we show that the measured reflection coefficient, resolved in frequency through the resonance, reveals the geometrical properties of the model and the topological phase transition. We discuss the role of physical parameters such as the lifetime of the light modes and the stability against local disorder, relevant for further realizations.
Julian Legendre, Karyn Le Hur
2023-07-27T15:53:27Z
http://arxiv.org/abs/2307.14960v2
# Spectroscopy and topological properties of a Haldane light system ###### Abstract We present a method to probe the topological properties of a circuit quantum electrodynamics (cQED) array described through a Haldane model on the honeycomb lattice. We develop the theory of microwave light propagating in a local probe or a microscope (a one-dimensional transmission line) capacitively coupled to the topological cQED lattice model. Interestingly, we show that even if the microwave light has no transverse polarization, the measured reflection coefficient, resolved in frequency through the resonance, allows us to reveal the geometrical properties and topological phase transition associated with the model. This spectroscopy tool developed for cQED lattice models reveals the same topological information as circularly polarized light, locally within the Brillouin zone of the honeycomb lattice. Furthermore, our findings hold significance for topological magnon systems and are _a priori_ applicable to all Chern insulators, presenting an intriguing opportunity for their adaptation to other systems with different particle statistics. _Introduction.--_ Topological systems find various interesting applications in physics, in particular related to the protected mesoscopic transport at the edges. In two dimensions, the quantum Hall effect, induced by a perpendicular uniform magnetic field, has been generalized to situations with no net flux in a unit cell, referring to the Haldane honeycomb lattice model [1], and then generally to the quantum anomalous Hall effect and Chern insulators. Haldane's seminal model has been realized in solid-state systems, in cold atom gases and in photonic systems (coupled waveguides) [2; 3; 4]. One elegant way to realize this model for artificial systems is through Floquet engineering [3; 4; 5; 6; 7; 8]. The most common way to probe the topological properties in condensed matter systems is to determine the Hall conductance [9; 10]. 
The topological responses of artificial systems are accessible in several ways [11; 12; 13; 14]. In cold atom gases, topological properties are revealed through transport or Hall drift [4; 15], interferometry [16; 17; 18], the physics of chiral edge states [19; 20] or via a measurement of the Berry curvature [21]. For condensed matter systems and cold atom gases, a circular drive on the system also makes it possible to probe the topological information [22; 23; 24; 25; 26], even with a local resolution within the Brillouin zone [27; 28]. Topological properties of light systems have also attracted attention, in platforms such as gyromagnetic photonic crystals [29; 30; 31; 32], arrays of coupled waveguides [3; 33; 34], optomechanical systems [35; 36], and cavity and circuit quantum electrodynamics (cQED) [37; 38; 39; 40; 41; 42]. In Ref. [43], a protocol to probe the topological properties of a one-dimensional LC circuit system is proposed. This system is closely connected to the SSH model, which has been implemented recently [44; 45; 46; 47]. In Ref. [43], the authors considered a transmission line (capacitively) coupled to a single cell within the chain. From the reflection of an input triggered in the probe, they reconstructed the Zak phase, which is the topological invariant characterizing the studied one-dimensional system. The extension of such a local capacitive probe to two-dimensional systems is not readily apparent. Previous proposals for light-matter topological probes in two-dimensional systems have used the transverse polarization of light to detect the chirality associated with the system's topological nature [22; 24]. In striking contrast to these approaches, our study focuses on a local probe, specifically a long transmission line capacitively coupled to a Haldane bosonic model in circuit quantum electrodynamics (cQED). Remarkably, we demonstrate how the Chern number can be measured by analyzing the reflection coefficient, which relates the input and output voltage signals. 
Bosonic Haldane model.--We introduce a cQED system made of an array of resonators coupled together in such a way [8] that the system is described by a usual Haldane Hamiltonian \(H=\sum_{\mathbf{k}}\Psi^{\dagger}_{\mathbf{k}}h_{\mathbf{k}}\Psi_{\mathbf{k}}\)[1; 2; 3; 4; 5; 6; 7], with \[h_{\mathbf{k}}=h_{0}(\mathbf{k})+\text{Re}\left[h_{1}(\mathbf{k})\right] \sigma_{x}-\text{Im}\left[h_{1}(\mathbf{k})\right]\sigma_{y}+h_{2}(\mathbf{k })\sigma_{z}, \tag{1}\] and with \(h_{0}(\mathbf{k})=\hbar\Omega_{0}+2t_{2}\cos\phi\sum_{i=1}^{3}\cos(\mathbf{k} \cdot\mathbf{b}_{i})\), \(h_{1}(\mathbf{k})=t_{1}\sum_{i=1}^{3}\exp(-i\mathbf{k}\cdot\mathbf{a}_{i})\), and \(h_{2}(\mathbf{k})=M-2t_{2}\sin\phi\sum_{i=1}^{3}\sin(\mathbf{k}\cdot\mathbf{b} _{i})\). We define \(\Psi^{\dagger}_{\mathbf{k}}=\left(a^{\dagger}_{1,\mathbf{k}},a^{\dagger}_{2, \mathbf{k}}\right)\), with \(a^{\dagger}_{j,\mathbf{k}}\) the creation operator for a color-\(j\) boson with momentum \(\mathbf{k}\) (say \(j=1(2)\) corresponds to the lattice site A(B) appearing in Fig. 1(a)); the vectors \(\mathbf{a}_{i}\) and \(\mathbf{b}_{i}\), \(i\in\{1,2,3\}\), are defined in Fig. 1(a), the hopping amplitudes are \(t_{1}\in\mathbb{R}\) and \(t_{2}\in\mathbb{R}\), \(M\in\mathbb{R}\) is the Semenoff mass [48] and \(\sigma^{x},\sigma^{y},\sigma^{z}\) are the Pauli matrices in sub-lattice space. Hereafter, we study the case where \(t_{2}\) is small compared to \(t_{1}\), as is often the case in physical systems. In Ref. [8], a Haldane effective Hamiltonian is derived from Floquet engineering with a high-frequency approximation. Such a photonic system is permanently driven to compensate for the photon decay processes that inevitably occur [14]. In typical photonic systems the on-site energy \(\hbar\Omega_{0}\) is large (usually \(\sim\) GHz order of magnitude) compared to the effective hopping amplitudes on the lattice (_e.g._, \(\sim 10\) MHz to \(\sim 100\) MHz) [37; 49; 50; 42]. 
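The Bloch Hamiltonian of Eq. (1) is straightforward to evaluate numerically. Below is a minimal sketch; the specific lattice-vector convention (in units of the lattice spacing) is an assumption consistent with a standard honeycomb geometry, since Fig. 1(a) is not reproduced in this excerpt, and we set \(\hbar=1\), \(\Omega_{0}=0\):

```python
import numpy as np

# Minimal numerical sketch of the Bloch Hamiltonian h_k in Eq. (1).
# The lattice-vector convention below (in units of the lattice spacing) is an
# assumption consistent with a standard honeycomb geometry, since Fig. 1(a)
# is not reproduced in this excerpt. We set hbar = 1 and Omega_0 = 0.
A = [np.array([0.0, 1.0]),
     np.array([-np.sqrt(3) / 2, -0.5]),
     np.array([np.sqrt(3) / 2, -0.5])]        # nearest-neighbour vectors a_i
B = [A[1] - A[2], A[2] - A[0], A[0] - A[1]]   # next-nearest-neighbour vectors b_i

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]])
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def h_bloch(k, t1=1.0, t2=0.15, phi=np.pi / 2, M=0.0):
    """2x2 Bloch Hamiltonian of Eq. (1) at momentum k."""
    h0 = 2 * t2 * np.cos(phi) * sum(np.cos(k @ b) for b in B)
    h1 = t1 * sum(np.exp(-1j * (k @ a)) for a in A)
    h2 = M - 2 * t2 * np.sin(phi) * sum(np.sin(k @ b) for b in B)
    return h0 * np.eye(2) + h1.real * SX - h1.imag * SY + h2 * SZ

k = np.array([0.3, -0.7])                     # an arbitrary momentum
E1, E2 = np.linalg.eigvalsh(h_bloch(k))       # band energies, ascending order
print(f"E1 = {E1:.4f}, E2 = {E2:.4f}")
```

Diagonalizing this \(2\times 2\) matrix on a momentum grid reproduces the two bands discussed next.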
The Haldane model is characterized by two energy bands in momentum space \(E_{i,{\bf k}}=h_{0}({\bf k})\pm\epsilon({\bf k})\) where the minus (plus) sign is associated with the band \(i=1\) (\(i=2\)) and with \(\epsilon({\bf k})=\sqrt{|h_{1}({\bf k})|^{2}+h_{2}({\bf k})^{2}}\). Band crossing appears at \(h_{1}({\bf k})=h_{2}({\bf k})=0\). \(h_{1}({\bf k})=0\) is reached at both nonequivalent Dirac points \({\bf K}=\left({\bf g}_{3}-{\bf g}_{2}\right)/3\) and \({\bf K}^{\prime}=\left({\bf g}_{2}-{\bf g}_{3}\right)/3\) (see Fig. 1(c)). Moreover, we have \(h_{2}({\bf K})=0\) if \(M=+3\sqrt{3}t_{2}\sin\phi\) and \(h_{2}({\bf K}^{\prime})=0\) if \(M=-3\sqrt{3}t_{2}\sin\phi\). When the bands cross, the dispersion relation around \({\bf K}\) and \({\bf K}^{\prime}\) is linear. The energies for a chosen point in parameter space are represented in Fig. 1(b). _Topological properties.--_ Following the approach of Kohmoto [51], we will write the Chern number \(C_{i}\) (energy band \(i=1\) or \(i=2\)) in terms of a phase entering the wavefunction, which is associated with the definition of the \(h_{1}({\bf k})\) kinetic term only. This will elegantly allow us to show that the light probe can detect the topological response of the system locally within the Brillouin zone through the conservation of energy. We denote by \(\left|u_{i,{\bf k}}\right>\) the Bloch eigenvectors of the Haldane Hamiltonian, _i.e._\({\rm e}^{-i{\bf k}\cdot\hat{\boldsymbol{r}}}H{\rm e}^{i{\bf k}\cdot\hat{ \boldsymbol{r}}}\left|u_{i,{\bf k}}\right>=E_{i,{\bf k}}\left|u_{i,{\bf k}}\right>\) and we define the coefficients \(\alpha_{i}^{1}({\bf k})\) and \(\alpha_{i}^{2}({\bf k})\) such that \(\left|u_{i,{\bf k}}\right>=\left[\alpha_{i}^{1}({\bf k})a_{1,{\bf k}}^{\dagger }+\alpha_{i}^{2}({\bf k})a_{2,{\bf k}}^{\dagger}\right]\left|0\right>\). These coefficients may vanish only at the Dirac points. 
(i) If the sign of \(h_{2}\) is opposite at the Dirac points (\(|M|<3\sqrt{3}t_{2}|\sin\phi|\), see Table 1), then \(\alpha_{i}^{1}({\bf k})\) and \(\alpha_{i}^{2}({\bf k})\) vanish at the opposite Dirac points. It follows that it is impossible to find a unique and smooth phase over the whole BZ for the Bloch state \(\left|u_{i,{\bf k}}\right>\). Rather, we choose to define two non-overlapping domains \({\cal D}_{I}\) and \({\cal D}_{II}\) in the BZ, each containing a different Dirac point (see Fig. 1(c)), and we use a different gauge choice for the Bloch states in each domain [51]. We define a closed path P along the boundary between \({\cal D}_{I}\) and \({\cal D}_{II}\), surrounding the Dirac points once, and a smooth phase along P, \(\varphi({\bf k})\), such that \({\rm e}^{-i\varphi({\bf k})}={\rm e}^{i{\bf k}\cdot{\bf a}_{3}}h_{1}({\bf k}) ^{*}\big{/}\left|h_{1}({\bf k})\right|\). Then \(C_{i}\) reads [52] \[2\pi C_{i}=(-1)^{i}\,{\rm sgn}(\sin\phi)\oint_{\rm P}d{\bf k}\cdot\nabla_{\bf k }\varphi({\bf k}). \tag{2}\] From the expressions of \(\varphi({\bf k})\) and \(h_{1}({\bf k})\), we find that \(\varphi({\bf k})\) changes by \(-2\pi\) when moving along the entire closed path P. This gives \(C_{i}=(-1)^{i+1}\,{\rm sgn}(\sin\phi)\). Let us remark that the vanishing of Bloch eigenvectors' components at different points in the BZ, which prevents the smooth definition of Bloch states, is a characteristic feature of Chern insulators. As we will see, this fundamental property forms the basis of the probe proposed in this letter, rendering it _a priori_ relevant for all Chern insulators. (ii) If the sign of \(h_{2}\) is the same at both Dirac points (\(|M|>3\sqrt{3}t_{2}|\sin\phi|\), see Table 1), then \(\alpha_{i}^{1}({\bf k})\) or \(\alpha_{i}^{2}({\bf k})\) can be chosen non-zero over the whole BZ, and it is possible to find a unique and smooth phase for \(\left|u_{i,{\bf k}}\right>\), leading to a unique and smooth Berry gauge field. 
Because the BZ is a torus, we find that the Chern numbers \(C_{i}\) vanish. From this analysis, for \(|M|\neq 3\sqrt{3}t_{2}|\sin\phi|\), we find \(C_{i}=(-1)^{i+1}\,{\rm sgn}(\sin\phi)\left[1-{\rm sgn}\left(h_{2}({\bf K})h_{2}({\bf K }^{\prime})\right)\right]/2\), _i.e._ \[C_{i}=\frac{(-1)^{i}}{2}\left[{\rm sgn}\,h_{2}({\bf K})-{\rm sgn}\,h_{2}({\bf K }^{\prime})\right]. \tag{3}\] Figure 1: (a) Definition of the sublattices, real space vectors and hopping amplitudes for the Haldane model on the honeycomb lattice. (b) Haldane model energies, in units of \(t_{1}\), as a function of the momentum \(q_{3}=a{\bf k}\cdot{\bf g}_{3}\) (\(a\) is the lattice spacing), with parameters \(\phi=\pi/2\), \(t_{2}=0.15t_{1}\) and \(M=3\sqrt{3}t_{2}/2\). Both lowest band energies at the Dirac points, \(E_{1,{\bf K}}\) and \(E_{1,{\bf K}^{\prime}}\), are shown. (c) Brillouin zone for the honeycomb lattice and the \({\cal D}_{I}\) and \({\cal D}_{II}\) domains. (d) The quantity \(1-\int d\omega S^{\rm out}(\omega)\), computed from Eq. (9); the color scale of the figure is logarithmic. We considered \({\rm sgn}\,M={\rm sgn}\,(\sin\phi)=1\), \(j_{0}=1\), and the energies \(\Omega_{0}=10\) GHz, \(t_{1}/\hbar=100\) MHz, \(t_{2}=0.15t_{1}\), \(\Delta_{\rm CI}=\Delta_{\rm P}=10\) MHz and \(g_{q_{0}}/\hbar=1\) MHz, as described in the main text. We considered a \(30\times 30\) unit cells Haldane model with periodic boundary conditions. This formula has in fact a simple physical understanding for a Hamiltonian \(h_{\bf k}\) written as a \(2\times 2\) matrix. From the Ehrenfest theorem and a Bloch sphere correspondence, the topological number is equivalent to \(C_{i}=(-1)^{i}\left[\langle\sigma_{z}(0)\rangle-\langle\sigma_{z}(\pi)\rangle\right]/2\) with \(\langle\sigma_{z}\rangle=(-1)^{i}\cos\theta=(-1)^{i}{\rm sgn}\,h_{2}(\theta)\) [53]. In the following, we rely on Eq. 
(3) to show how \(C_{i}\) can be probed from the reflected light in a local probe capacitively coupled to a Haldane photonic system. The simple idea behind our proposal is that the topological properties manifest as discernible sublattice weight variations of the wave function, making it possible to reveal the topological transition through the coupling of a probe to one of the sublattice sites. Spectroscopic probe.--Let us consider a local light probe with weak capacitive coupling to a Haldane boson model. The probe is a resonator with a certain number of (relevant) modes, each mode \(q\) being characterized by the frequency \(\omega_{q}\). The probe is coupled to the system at position \({\bf R}_{0}\), on the sublattice identified by the index \(j_{0}\), where \(j_{0}=1\) (\(j_{0}=2\)) for the sublattice A (B). We write the Hamiltonian associated with the probe as \(H_{\rm prb}=\sum_{q}\hbar\omega_{q}b_{q}^{\dagger}b_{q}\), with \(b_{q}\) the annihilation operators for the mode \(q\) of the probe. The coupling is described by \(H_{\rm cpl}=\left(a_{{\bf R}_{0}}+a_{{\bf R}_{0}}^{\dagger}\right)\sum_{q}g_{ q}\left(b_{q}+b_{q}^{\dagger}\right)\), with \(a_{{\bf R}_{0}}^{\dagger}\) the creation operator for a boson at site \({\bf R}_{0}\). For simplicity, we first disregard the dissipation effects induced by the probe in the Chern insulator and we assume that the light modes \(|u_{i,{\bf k}}\rangle\) exhibit an infinitely long lifetime. To build intuition, let us show that the transition rate \(\Gamma\) from a state \(|\psi(t)\rangle\) which, at initial time \(t_{i}\), is a probe's mode with frequency \(\omega_{q_{0}}\), _i.e._\(|\psi(t_{i})\rangle=|b_{q_{0}}\rangle\), to the eigenstates \(|u_{i,{\bf k}}\rangle\) of the Haldane Hamiltonian bears information about the topological character of the system. 
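Eq. (3) reduces the topological diagnosis to the signs of \(h_{2}\) at the two Dirac points. A small numerical sketch (using only \(h_{2}(\mathbf{K})=M-3\sqrt{3}t_{2}\sin\phi\) and \(h_{2}(\mathbf{K}^{\prime})=M+3\sqrt{3}t_{2}\sin\phi\), which follow from the gap-closing conditions quoted earlier) reproduces the phase boundaries \(|M|=3\sqrt{3}t_{2}|\sin\phi|\):

```python
import numpy as np

# Sketch of Eq. (3): C_i = ((-1)^i / 2) [sgn h2(K) - sgn h2(K')].
# From the gap-closing conditions in the text:
#   h2(K)  = M - 3*sqrt(3)*t2*sin(phi),  h2(K') = M + 3*sqrt(3)*t2*sin(phi).
def chern(i, M, t2, phi):
    h2_K = M - 3 * np.sqrt(3) * t2 * np.sin(phi)
    h2_Kp = M + 3 * np.sqrt(3) * t2 * np.sin(phi)
    return (-1) ** i * (np.sign(h2_K) - np.sign(h2_Kp)) / 2

t2 = 0.15
# Topological phase, |M| < 3*sqrt(3)*t2*|sin(phi)|: C_1 = +1, C_2 = -1
print(chern(1, M=0.0, t2=t2, phi=np.pi / 2), chern(2, M=0.0, t2=t2, phi=np.pi / 2))
# Trivial phase, |M| > 3*sqrt(3)*t2*|sin(phi)|: both Chern numbers vanish
print(chern(1, M=1.0, t2=t2, phi=np.pi / 2))
```

Scanning `chern` over a \((\phi,M)\) grid reproduces the lobes of the Haldane phase diagram that the probe maps out in Fig. 1(d).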
Fermi's golden rule states that, at sufficiently long times \(t\), \(\Gamma\left[\hbar\omega_{q_{0}}\right]=\frac{2\pi}{\hbar}\sum_{i,{\bf k}}\left|\left\langle b_{q_{0}}\right|H_{\rm cpl}\left|u_{i,{\bf k}}\right\rangle\right|^{2}\!\delta\left(\hbar\omega_{q_{0}}-E_{i,{\bf k}}\right)\). \(\left\langle b_{q_{0}}\right|H_{\rm cpl}\left|u_{i,{\bf k}}\right\rangle\) involves the components of the Bloch state in the basis \(\left(a_{1,{\bf k}}^{\dagger},a_{2,{\bf k}}^{\dagger}\right)\) and a factor \({\rm e}^{i{\bf k}\cdot{\bf R}_{0}}\) (transformation to the real space representation), such that we obtain \(\left\langle b_{q_{0}}\right|H_{\rm cpl}\left|u_{i,{\bf k}}\right\rangle=g_{q_{0}} \alpha_{i}^{j_{0}}({\bf k}){\rm e}^{i{\bf k}\cdot{\bf R}_{0}}\). As one can see from Table 2, which is constructed using Eq. (3) and the related analysis of the coefficients \(\alpha_{i}^{j_{0}}({\bf k})\), depending on \({\rm sgn}\,M\) and \({\rm sgn}\,(\sin\phi)\), it is possible to express the Chern number as a function of the coefficients \(\alpha_{i}^{j_{0}}({\bf k})\). If \({\rm sgn}\,M={\rm sgn}\,(\sin\phi)\) (\({\rm sgn}\,M=-{\rm sgn}\,(\sin\phi)\)) we notice that the Chern number is directly related to the coefficients \(\alpha_{i}^{j_{0}}({\bf k})\) evaluated at \(\mathbf{K}\) (\(\mathbf{K}^{\prime}\)), and that the energies \(E_{i,\mathbf{K}}\) (\(E_{i,\mathbf{K}^{\prime}}\)), \(i\in\{1,2\}\), are non-degenerate. Therefore, choosing \(\hbar\omega_{q_{0}}=E_{i,{\bf k}}\) with \(i\) and \({\bf k}\) according to Table 3, we find a simple relation between \(\Gamma\) and the topological invariant: \(\Gamma\left[E_{i,{\bf k}}\right]=J\left[E_{i,{\bf k}}\right]|C_{i}|^{2}=J\left[ E_{i,{\bf k}}\right]|C_{i}|\) (since \(|C_{i}|\in\{0,1\}\)), where the spectral function \(J\) is: \(J[\omega]=(2\pi/\hbar)\sum_{q}g_{q}^{2}\left[\delta(\omega-\omega_{q})-\delta( \omega+\omega_{q})\right]\). 
In other words, \[\Gamma\left[E_{i,{\bf k}}\right]=J\left[E_{i,{\bf k}}\right]|C_{i}|, \tag{4}\] where the indices \(i\) and \({\bf k}\) are functions of \({\rm sgn}\,M\) and \({\rm sgn}\,(\sin\phi)\) as indicated in Table 3. The relation appearing in Eq. (4) has been established from Table 2 and Table 3, which are constructed from the previous section. Therefore, Eq. (4) relies on a fundamental property characterizing a Chern insulator: the impossibility of defining smooth Bloch states over the BZ, which translates here into the vanishing of the Bloch eigenvectors' components \(\alpha_{i}^{1}\) and \(\alpha_{i}^{2}\) at the opposite Dirac points. Motivated by this, we now investigate the relation between an input voltage \(\langle V_{{\bf R}_{0}}^{\rm in}[\omega]\rangle\) and the resulting output voltage \(\langle V_{{\bf R}_{0}}^{\rm out}[\omega]\rangle\), both at frequency \(\omega\) in the probe at \({\bf R}_{0}\). For \(\omega\) resolved around one Dirac point, this relation between \(\langle V_{{\bf R}_{0}}^{\rm in}[\omega]\rangle\) and \(\langle V_{{\bf R}_{0}}^{\rm out}[\omega]\rangle\) makes it possible to rebuild the Haldane topological phase diagram [54]. To fourth order in the coupling amplitudes \(\{g_{q}\}\), we indeed find \[\langle V_{{\bf R}_{0}}^{\rm out}[\omega]\rangle=R(\omega)\langle V_{{\bf R}_ {0}}^{\rm in}[\omega]\rangle, \tag{5}\] with \(R(\omega)=1+iJ[\omega]\chi_{{\bf R}_{0},{\bf R}_{0}}\), and \[\chi_{{\bf R}_{0},{\bf R}_{0}}=\frac{1}{N}\sum_{i=1}^{2}\sum_{{\bf k}}\gamma_{j _{0},{\bf k}}^{i}\left[\frac{1}{-\hbar\omega-E_{i,{\bf k}}+i0^{+}}-\frac{1}{- \hbar\omega+E_{i,{\bf k}}+i0^{+}}\right], \tag{6}\] where \(N\) is the number of lattice sites and \[\gamma_{j_{0},{\bf k}}^{i}=\frac{1}{2}+\frac{(-1)^{j_{0}+i+1}h_{2}({\bf k})}{2 \epsilon({\bf k})}\in\,\mathbb{R}. 
\tag{7}\] From \(\epsilon({\bf k})=\sqrt{|h_{1}({\bf k})|^{2}+h_{2}({\bf k})^{2}}\), we notice that \(\gamma_{j_{0},\mathbf{k}}^{i}\) evaluated at the Dirac points depends on the sign of the function \(h_{2}\): we have \(2\gamma_{j_{0},\mathbf{k}}^{i}=1-(-1)^{j_{0}+i}\mathrm{sgn}\,h_{2}(\mathbf{k})\) for \(\mathbf{k}=\{\mathbf{K},\mathbf{K}^{\prime}\}\). This is related to the topological invariant via Eq. (3) and Table 1. As we show in Table 4, depending on the sign of \(\sin\phi\) and on the sign of the Semenoff mass, the \(i^{\mathrm{th}}\) band topological invariant is given by the coefficient \(\gamma_{j_{0},\mathbf{k}}^{i}\), evaluated at \(j_{0}=i\) or \(j_{0}=\overline{i}\) and at \(\mathbf{k}=\mathbf{K}\) or \(\mathbf{k}=\mathbf{K}^{\prime}\), with \(\overline{i}=2(1)\) if \(i=1(2)\). Again, this outcome is obtained thanks to a fundamental characteristic associated with the topological phase: the vanishing of the Bloch eigenvectors' components \(\alpha_{i}^{1}\) and \(\alpha_{i}^{2}\) at the opposite Dirac points. Now, we can understand how a simple measurement of the reflection of a light input in the probe gives access to the topological invariant. We write \(S^{\mathrm{in}}(\omega)=\left|\left\langle V_{\mathbf{R}_{0}}^{\mathrm{in}}[ \omega]\right\rangle\right|^{2}\) and \(S^{\mathrm{out}}(\omega)=\left|\left\langle V_{\mathbf{R}_{0}}^{\mathrm{out} }[\omega]\right\rangle\right|^{2}\) the energy spectral densities associated with the input and output voltages, respectively. To leading order in the coupling amplitudes, we have \(S^{\mathrm{out}}(\omega)=|R(\omega)|^{2}S^{\mathrm{in}}(\omega)\) and for \(\omega>0\), \[|R(\pm\omega)|^{2}=1\mp\frac{2\pi J[\pm\omega]}{N}\sum_{i=1}^{2}\sum_{ \mathbf{k}}\gamma_{j_{0},\mathbf{k}}^{i}\delta(\hbar\omega-E_{i,\mathbf{k}}). 
\tag{8}\] For \(\mathrm{sgn}\,M=\mathrm{sgn}\,(\sin\phi)\), the energies \(E_{i,\mathbf{K}}\), \(i\in\{1,2\}\), are non-degenerate; therefore, choosing \(\hbar\omega=E_{i,\mathbf{K}}\) selects only the \(\mathbf{k}=\mathbf{K}\) point in the integral appearing in Eq. (8). Moreover, as indicated in Table 4, \(\gamma_{j_{0},\mathbf{K}}^{i}\) is related to the topological invariant if we choose a probe at \(j_{0}=i\) (\(\overline{i}\)) for \(\mathrm{sgn}\,(\sin\phi)=1\) (\(\mathrm{sgn}\,(\sin\phi)=-1\)). Therefore, for a well-chosen frequency \(\omega\), \(|R(\omega)|^{2}\) clearly depends on the topological invariant. This is also true for \(\mathrm{sgn}\,M=-\mathrm{sgn}\,(\sin\phi)\), if, in the previous analysis, we replace \(\mathbf{K}\) by \(\mathbf{K}^{\prime}\) and \(j_{0}\) by \(\overline{j_{0}}\). Finite lifetimes for the light modes.--Finally, we address the more realistic scenario in which we incorporate finite lifetimes for both the modes in the probe \(|b_{q}\rangle\) and the Chern insulator's modes \(|u_{i,\mathbf{k}}\rangle\). For simplicity, we consider the same bandwidth amplitude \(\Delta_{\mathrm{CI}}\) (\(\Delta_{\mathrm{P}}\)) for all the modes \(|u_{i,\mathbf{k}}\rangle\) (\(|b_{q}\rangle\)). We assume the following ordering of the energy scales: \(\max_{q}(g_{q})\ll\{\Delta_{\mathrm{CI}},\Delta_{\mathrm{P}}\}\ll\{t_{1},\min_ {q,q^{\prime}}(|\omega_{q}-\omega_{q^{\prime}}|)\}\). We replace the Dirac Delta functions appearing in Eq. (8) by normalized Gaussian spectral distributions denoted \(G\left(\omega;\overline{\omega},\Delta\right)\) with mean value \(\overline{\omega}\) and standard deviation \(\Delta\): \(\delta(\hbar\omega-E_{i,\mathbf{k}})\) is replaced by \(G\left(\omega;E_{i,\mathbf{k}}/\hbar,\Delta_{\mathrm{CI}}\right)\) and \(J[\omega]\) is replaced by \(\tilde{J}[\omega]=2\pi\sum_{q}\left(g_{q}/\hbar\right)^{2}\left[G\left( \omega,\omega_{q},\Delta_{\mathrm{P}}\right)-G\left(\omega,-\omega_{q}, \Delta_{\mathrm{P}}\right)\right]\). 
We also consider an input energy spectral density with Gaussian distribution: \(S^{\mathrm{in}}(\omega)=G\left(\omega;\omega_{q_{0}},\Delta_{\mathrm{P}}\right)\). For a well-chosen \(\omega_{q_{0}}\) (\(E_{i,\mathbf{K}}\) or \(E_{i,\mathbf{K}^{\prime}}\)), \(|R(\omega)|^{2}\) still depends on the topological invariant because \(\gamma_{j_{0},\mathbf{k}}^{i}\) is directly related to the Chern number. To illustrate this point, let us consider the case \(\mathrm{sgn}\,M=\mathrm{sgn}\,(\sin\phi)=1\) for which we choose \(\hbar\omega_{q_{0}}=E_{1,\mathbf{K}}\) and \(j_{0}=i=1\) and we expect \(\gamma_{1,\mathbf{K}}^{1}=C_{1}\). \(S^{\mathrm{out}}(\omega)\) depends on \(\gamma_{1,\mathbf{K}}^{1}\), especially around \(\omega=\omega_{q_{0}}\), leading to a decrease of the output peak's weight \(\int d\omega S^{\mathrm{out}}(\omega)\) compared to the normalized weight of the input peak. This decrease is given by \[1-\int d\omega S^{\mathrm{out}}(\omega)=\frac{2\pi}{N}\sum_{i=1}^{2}\sum_{ \mathbf{k}}\gamma_{j_{0},\mathbf{k}}^{i}I_{i,\mathbf{k}}, \tag{9}\] with \(I_{i,\mathbf{k}}=\int d\omega\tilde{J}[\omega]G\left(\omega;E_{i,\mathbf{k}}/ \hbar,\Delta_{\mathrm{CI}}\right)G\left(\omega;E_{1,\mathbf{K}}/\hbar,\Delta_{ \mathrm{P}}\right)\), which is \(\left(g_{q_{0}}/\hbar\right)^{2}/\left(\sqrt{2\pi}\Delta_{\mathrm{CI}}\Delta_{ \mathrm{P}}^{2}\right)\) times the overlap area \(\int d\omega\,\exp\!\left[-\frac{\left(\omega-E_{1,\mathbf{K}}/\hbar\right)^{2}}{ \Delta_{\mathrm{P}}^{2}}\right]\exp\!\left[-\frac{\left(\omega-E_{i,\mathbf{k}}/\hbar\right)^{ 2}}{2\Delta_{\mathrm{CI}}^{2}}\right]\). The quantity \(1-\int d\omega S^{\mathrm{out}}(\omega)\) reproduces the topological phase diagram associated with the Haldane model, as we show in Fig. 1(d), from a numerical evaluation of Eq. (9), with the energy scales \(\Omega_{0}=10\) GHz, \(t_{1}/\hbar=100\) MHz, \(\Delta_{\mathrm{CI}}=\Delta_{\mathrm{P}}=10\) MHz and \(g_{q_{0}}/\hbar=1\) MHz. 
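The weight decrease in Eq. (9) is governed by overlaps of Gaussian line shapes, so only lattice modes whose energy lies within a few bandwidths of \(\hbar\omega_{q_{0}}\) contribute appreciably. The following sketch makes this resonant-versus-detuned contrast explicit; the numbers are purely illustrative and the two-Gaussian product is a simplification, not the full \(I_{i,\mathbf{k}}\):

```python
import numpy as np

# Overlap of two normalized Gaussian line shapes, the mechanism behind
# I_{i,k} in Eq. (9). All numbers are illustrative (frequencies in MHz).
def gauss(w, mean, width):
    return np.exp(-(w - mean) ** 2 / (2 * width ** 2)) / (np.sqrt(2 * np.pi) * width)

w = np.linspace(-200.0, 200.0, 20001)   # frequency grid, MHz
dw = w[1] - w[0]
d_ci = d_p = 10.0                       # bandwidths Delta_CI = Delta_P = 10 MHz

# Probe tuned on resonance with a Chern-insulator mode vs. detuned by 100 MHz:
on_res = np.sum(gauss(w, 0.0, d_p) * gauss(w, 0.0, d_ci)) * dw
detuned = np.sum(gauss(w, 0.0, d_p) * gauss(w, 100.0, d_ci)) * dw
print(on_res, detuned)  # the detuned overlap is smaller by a factor exp(-25)
```

This is why tuning \(\hbar\omega_{q_{0}}\) to a non-degenerate Dirac-point energy isolates the single \(\gamma^{i}_{j_{0},\mathbf{K}}\) coefficient carrying the Chern number.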
These scales correspond to a relatively low quality factor \(Q=\Omega_{0}/\Delta_{\mathrm{CI}}=10^{3}\) (here the same for both the cQED Chern insulator and the probe) and a low coupling amplitude, and should be reachable in a cQED experiment. Magnon system and material suggestions.--The results described here are directly relevant for other Chern insulator systems. For instance, our probe can be used for a topological magnon insulator, such as the one proposed in Ref. [55], which is described by a bosonic Haldane model. Indeed, consider a magnetic tip described by a polarization state denoted \(\mathbf{s}=(s^{x},s^{y},s^{z})\), and assume the tip is thin enough to couple to only one magnetic site \(\mathbf{S}_{i}\) of the system. The Hamiltonian coupling the tip to the magnon system is \(H_{\mathrm{cpl}}=J\,\mathbf{S}_{i}\cdot\mathbf{s}\), with \(J\) the coupling amplitude. Suppose now the tip is polarized along \(x\) only. Then \(H_{\mathrm{cpl}}=J\,S_{i}^{x}s^{x}=(J/4)(S_{i}^{+}+S_{i}^{-})(s ^{+}+s^{-})\), where \(S_{i}^{+}\), \(S_{i}^{-}\), \(s^{+}\) and \(s^{-}\) are bosonic creation/annihilation operators. \(H_{\mathrm{cpl}}\) is completely analogous to the coupling Hamiltonian we considered for the cQED system. Therefore, measuring the response to a magnetic excitation provides a way to access the topological number. Such a measurement should be applicable in topological magnon quantum materials, such as CrI\({}_{3}\)[56] or possibly \(\beta\)-Cu\({}_{2}\)V\({}_{2}\)O\({}_{7}\)[57]. Even though magnons can condense into the lowest-energy mode, thermal excitations will generate magnon modes at all energies in the system, namely the ones required for the proposed probe, with energy around the Dirac point. Remarks.--Several observations are in order. (i) Topological insulators are robust against weak disorder. We thus expect the general structure of the wave function over space, which is related to the bulk invariant, to be robust against disorder. 
This central feature gives robustness to the probe proposed in this letter. In a magnon system, even though disorder can be stronger than in cQED systems, a magnetic tip makes it possible to scan several sites in the sample; averaging the output signal over these sites would mitigate disorder effects. (ii) It is interesting to observe that the probe is able to measure the topological number based on a real-space local coupling to the system and with a resolution in reciprocal space thanks to energy conservation, similarly to circularly polarized light [28]. (iii) We checked the applicability of the probe proposed here to a simple topological bosonic kagome system. (iv) From Eqs. (7) and (8), we notice that the energy density of states at \(\hbar\omega=E_{i,\mathbf{k}}\) can be evaluated by summing the local responses \(1-\mathrm{Re}\left[R(\omega)\right]\) measured separately in a probe at \(j_{0}=1\) (sublattice A) and in a probe at \(j_{0}=2\) (sublattice B). (v) If the input is triggered at a site with position \(\mathbf{r}\) lying on the sublattice \(j_{0}\) and the output is measured at a different site with position \(\mathbf{r}^{\prime}\) and sublattice index \(j_{0}^{\prime}\), the expression in the summation of Eq. (6) is replaced by \[\frac{\left[\beta_{j_{0}}^{i}(\mathbf{k})\alpha_{i}^{j_{0}^{\prime}}(\mathbf{ k})\right]^{*}\mathrm{e}^{i\mathbf{k}\cdot\left(\mathbf{r}-\mathbf{r}^{\prime} \right)}}{-\hbar\omega-E_{i,\mathbf{k}}+i0^{+}}-\frac{\beta_{j_{0}}^{i}( \mathbf{k})\alpha_{i}^{j_{0}^{\prime}}(\mathbf{k})\mathrm{e}^{-i\mathbf{k} \cdot\left(\mathbf{r}-\mathbf{r}^{\prime}\right)}}{-\hbar\omega+E_{i,\mathbf{k}}+i0 ^{+}}, \tag{10}\] with \(\beta_{j_{0}}^{i}(\mathbf{k})\alpha_{i}^{j_{0}}(\mathbf{k})=\gamma_{j_{0}, \mathbf{k}}^{i}\) and for \(j_{0}^{\prime}=\overline{j_{0}}\neq j_{0}\), \(\beta_{j_{0}}^{i}(\mathbf{k})\alpha_{i}^{\overline{j_{0}}}(\mathbf{k})\propto h _{1}(\mathbf{k})/2\epsilon(\mathbf{k})\). 
At the Dirac points, \(h_{1}\) vanishes; therefore, in the case \(j_{0}^{\prime}=\overline{j_{0}}\), the simple protocol we sketched above does not help to rebuild the topological phase diagram. This outcome can be anticipated based on the fact that, at one given Dirac point, one of the two Bloch eigenvector components vanishes in the topological regime. In the scenario \(j_{0}^{\prime}=j_{0}\), because the coefficients in the numerator of Eq. (10) are complex-valued, the Chern number dependency of \(1-\int d\omega S^{\mathrm{out}}(\omega)\) is mitigated. Indeed, the latter contains principal values of integrals over frequency involving \([1/\left(-\hbar\omega\pm E_{i,\mathbf{k}}\right)]\) terms. Conclusion.--We have considered a local microwave-light probe with capacitive coupling to a cQED array described by a Haldane bosonic system, in the regime of small coupling amplitudes. We have explained how this probe is relevant for the detection of the topological character of Chern insulators. Firstly, using Fermi's golden rule, we established a connection between the Chern number and the transition rate from a probe's eigenstate (with frequency corresponding to one of the Dirac-point energies) to the eigenstates of the Haldane Hamiltonian. Secondly, we developed the input-output theory for the probe, enabling us to compute the reflection coefficient, which relates an input voltage and an output voltage. We showed that for an input with frequency resolved at one of the Dirac points, this reflection coefficient is directly related to the system's topological invariant. The fundamental working principle of this probe makes it inherently relevant for all Chern insulators. Additionally, as a future prospect, it appears both possible and intriguing to adapt this probe to other systems that may exhibit different particle statistics, such as cold atoms or various material platforms. Acknowledgments.--This work was supported by the French ANR BOCA grant. 
JL acknowledges support from the National Research Fund Luxembourg under Grant No. INTER/QUANTERA21/16447820/MAGMA.
2309.02476
Optimal Sample Selection Through Uncertainty Estimation and Its Application in Deep Learning
Modern deep learning heavily relies on large labeled datasets, which often come with high costs in terms of both manual labeling and computational resources. To mitigate these challenges, researchers have explored the use of informative subset selection techniques, including coreset selection and active learning. Specifically, coreset selection involves sampling data with both input ($\mathbf{x}$) and output ($\mathbf{y}$), while active learning focuses solely on the input data ($\mathbf{x}$). In this study, we present a theoretically optimal solution for addressing both coreset selection and active learning within the context of linear softmax regression. Our proposed method, COPS (unCertainty based OPtimal Sub-sampling), is designed to minimize the expected loss of a model trained on subsampled data. Unlike existing approaches that rely on explicit calculations of the inverse covariance matrix, which are not easily applicable to deep learning scenarios, COPS leverages the model's logits to estimate the sampling ratio. This sampling ratio is closely associated with model uncertainty and can be effectively applied to deep learning tasks. Furthermore, we address the challenge of model sensitivity to misspecification by incorporating a down-weighting approach for low-density samples, drawing inspiration from previous works. To assess the effectiveness of our proposed method, we conducted extensive empirical experiments using deep neural networks on benchmark datasets. The results consistently showcase the superior performance of COPS compared to baseline methods, reaffirming its efficacy.
Yong Lin, Chen Liu, Chenlu Ye, Qing Lian, Yuan Yao, Tong Zhang
2023-09-05T14:06:33Z
http://arxiv.org/abs/2309.02476v1
# Optimal Sample Selection Through Uncertainty Estimation and Its Application in Deep Learning ###### Abstract Modern deep learning heavily relies on large labeled datasets, which often come with high costs in terms of both manual labeling and computational resources. To mitigate these challenges, researchers have explored the use of informative subset selection techniques, including coreset selection and active learning. Specifically, coreset selection involves sampling data with both input (\(\mathbf{x}\)) and output (\(\mathbf{y}\)), while active learning focuses solely on the input data (\(\mathbf{x}\)). In this study, we present a theoretically optimal solution for addressing both coreset selection and active learning within the context of linear softmax regression. Our proposed method, COPS (unCertainty based OPtimal Sub-sampling), is designed to minimize the expected loss of a model trained on subsampled data. Unlike existing approaches that rely on explicit calculations of the inverse covariance matrix, which are not easily applicable to deep learning scenarios, COPS leverages the model's logits to estimate the sampling ratio. This sampling ratio is closely associated with model uncertainty and can be effectively applied to deep learning tasks. Furthermore, we address the challenge of model sensitivity to misspecification by incorporating a down-weighting approach for low-density samples, drawing inspiration from previous works. To assess the effectiveness of our proposed method, we conducted extensive empirical experiments using deep neural networks on benchmark datasets. The results consistently showcase the superior performance of COPS compared to baseline methods, reaffirming its efficacy. ## 1 Introduction In recent years, deep learning has achieved remarkable success in various domains, including computer vision (CV), natural language processing (NLP), reinforcement learning (RL) and autonomous driving, among others. 
However, the success of deep learning often relies on a large amount of labeled data. This requirement not only incurs expensive labeling processes but also necessitates substantial computational costs. To address this challenge, an effective approach is to select an informative subset of the training data. Based on the selected subset, we can train a deep neural network that achieves performance comparable to that of a model trained on the full dataset. There are two key types of problems related to this approach. The first is known as coreset selection [15, 12, 3, 56], which assumes that both the input data \(\{\mathbf{x}_{i}\}_{i=1}^{n}\) and their corresponding labels \(\{\mathbf{y}_{i}\}_{i=1}^{n}\) are available for the full dataset \(\mathcal{S}=\{\mathbf{x}_{i},\mathbf{y}_{i}\}_{i=1}^{n}\) containing \(n\) samples. The objective here is to identify a subset of \(\{\mathbf{x}_{i},\mathbf{y}_{i}\}_{i=1}^{n}\) that significantly reduces the computation cost involved in training the models, thereby alleviating the computational burden. This problem is commonly referred to as the coreset selection problem. The second problem type, active learning, assumes that only the input data \(\{\mathbf{x}_{i}\}_{i=1}^{n}\) is accessible [8, 1, 36], without the corresponding labels. In this scenario, the aim is to selectively query the labels for a subset of \(\{\mathbf{x}_{i}\}_{i=1}^{n}\). With the inquired labels, the neural network is trained on the selected subset. This problem is often referred to as active learning. Overall, these approaches provide promising solutions to mitigate the computational and labeling costs associated with training deep neural networks by intelligently selecting informative subsets of the data. In this study, we theoretically derive the optimal sub-sampling scheme for both coreset selection and active learning in linear softmax regression. Our objective is to minimize the expected loss of the resulting linear classifier trained on the selected subset. 
The optimal sampling ratio is closely connected to the uncertainty of the data, which has been extensively explored in reinforcement learning [23, 9, 4, 29, 52]. The detailed formulation and explanation of our sampling ratio are deferred to Section 3.1. We further show that the optimal sampling ratio is equivalent, up to proper scaling, to the covariance of the output logits of independently trained models, which can be easily estimated in deep neural networks. We name our method unCertainty based OPtimal Sub-sampling (COPS). While prior works such as [44, 48, 20] have explored related theoretical aspects, their approaches for estimating the sampling ratio are prohibitively expensive in the context of deep learning: [44] relies on the influence function of each data point, which has been recognized as computationally demanding in the existing literature [27]; [48, 20] rely on the inverse of the covariance matrix of the input, which is also computationally expensive due to the large dimensionality of the input data. There is also a vast body of literature on coreset selection and active learning, briefly reviewed in Section 2, but few of these methods can claim optimality. We then conduct empirical experiments on real-world datasets with modern neural architectures. Surprisingly, we find that directly applying COPS leads to poor performance, which can even be inferior to that of random sub-sampling. Upon conducting a thorough analysis of the samples selected by COPS, we observe a tendency for the method to excessively prioritize data exhibiting high uncertainty, i.e., samples from the low-density region. Notably, the existing literature has established that model estimation can be highly sensitive to misspecification issues encountered with low-density samples [16, 52]. It is important to note that the optimality of COPS is based on a well-specified linear model. 
Hence, this observation has motivated us to consider modifying COPS to effectively handle potential misspecification challenges. We use the short-hand notation \(u_{i}\) to represent the uncertainty of the \(i\)th sample, which is our original sampling ratio up to some scaling. [16, 52] show that applying the reweighting \(\frac{1}{\max\{\alpha,u_{i}\}}\) to each sample during linear regression can make models more robust to misspecification, where \(\alpha\) is a hyper-parameter. Thus, we simply borrow this idea and modify the sampling ratio \(u_{i}\) to \(\frac{u_{i}}{\max\{\alpha,u_{i}\}}\propto\min\{\alpha,u_{i}\}\). We show the effectiveness of this modification by numerical simulations and real-world data experiments in Section 4. In Section 5, we conduct comprehensive experiments on several benchmark datasets, including SVHN, Places, and CIFAR10, using various backbone models such as ResNet20, ResNet56, MobileNetV2, and DenseNet121. Additionally, we verify the effectiveness of our approach on the CIFAR10-N dataset, which incorporates natural label noise. Furthermore, we extend our evaluation to include an NLP benchmark, IMDB, utilizing a GRU-based neural network. Across all these scenarios, our method consistently surpasses the baselines significantly, highlighting its superior performance. The contribution of this work can be summarized as follows: * **Theoretical derivation**: The study theoretically derives the optimal sub-sampling scheme for well-specified linear softmax regression. The objective is to minimize the expected loss of the linear classifier on the sub-sampled dataset. The optimal sampling ratio is found to be connected to the uncertainty of the data. * **COPS method**: The proposed method, named unCertainty based OPtimal Sub-sampling (COPS), provides an efficient approach for coreset selection and active learning tasks. 
We show that the sampling ratio can be efficiently estimated using the covariance of the logits of independently trained models, which addresses the computational challenges faced by previous approaches [44, 48, 20]. * **Modification to handle misspecification**: We empirically identified a potential issue with COPS, which overly emphasizes high-uncertainty samples in the low-density region, leading to model sensitivity to misspecification. To address this, we draw inspiration from existing theoretical works [16, 52] that downweight low-density samples to accommodate the misspecification. By combining their techniques, we propose a modification to COPS that involves a simple thresholding of the sampling ratio. Both numerical simulations and real-world experiments demonstrate the significant performance improvements resulting from our straightforward modification. * **Empirical Validation**: Empirical experiments are conducted on various CV and NLP benchmark datasets, including SVHN, Places, CIFAR10, CIFAR10-N and IMDB, utilizing different neural architectures including ResNet20, ResNet56, MobileNetV2, DenseNet121 and GRU. The results demonstrate that COPS consistently outperforms baseline methods in terms of performance, showcasing its effectiveness. ## 2 Related Works Statistical Subsampling Methods.Many early methods adopt statistical leverage scores to perform subsampling, which is later used for ordinary linear regression [10, 11, 32]. The leverage scores are estimated approximately [10, 7] or combined with random projection [34]. These methods are relatively computationally expensive in the context of deep learning when the input dimension is large. Some recent works [44, 48, 20] achieve similar theoretical properties to ours. However, [44] is based on the influence function of each sample, which is computationally expensive. [44, 20] need to compute the inverse of the covariance matrix, which is also impractical for deep learning. 
Active Learning.This method designs acquisition functions to determine the importance of unlabeled samples and then trains models based on the selected samples with the inquired labels [36]. There are mainly uncertainty-based and representative-based active learning methods. **Uncertainty-based** methods select samples with higher uncertainty, which can most effectively reduce the uncertainty of the target model [1]. They design metrics such as entropy [47, 6], confidence [8], margin [24, 38], predicted loss [53] and gradient [1]. Some recent works leverage variational autoencoders and adversarial networks to identify samples that are poorly represented by correctly labeled data [42, 26]. Some of these works provide theoretical guarantees expressed as probabilistic rates, but they do not claim to achieve optimality [44]. The uncertainty-related technique has also been extended to RL [16, 52]. **Representative-based** methods are also known as diversity-based methods [6]. They try to find samples whose features are most representative of the unlabeled dataset [50, 41]. [50] casts the problem of finding representative samples as a submodular optimization problem. [41] finds representative samples by clustering, which is later adopted in [1]. Coreset Selection.This method aims to find a subset that is highly representative of the entire labeled dataset. Some early works have focused on designing coreset selection methods for specific learning algorithms, such as SVM [46], logistic regression [19], and Gaussian mixture models [31]. However, these methods cannot be directly applied to deep neural networks (DNNs). To address this limitation, a solution has been proposed that leverages bi-level optimization to find a coreset specifically tailored for DNNs [3]. This approach has been further enhanced by incorporating probabilistic parameterization [56, 55]. 
Another line of recent research has aimed to identify coreset solutions with gradients that closely match those of the full dataset [35, 25, 1]. ## 3 Theoretical Analysis Notation.We use bold symbols \(\mathbf{x}\) and \(\mathbf{y}\) to denote random variables and plain symbols \(x\) and \(y\) to denote deterministic values. Consider the \(d\)-dimensional vector \(\mathbf{x}\in\mathcal{X}\) and the categorical label \(\mathbf{y}\in\mathcal{Y}=\{c_{0},c_{1},\ldots,c_{K}\}\). Denote the joint distribution of \((\mathbf{x},\mathbf{y})\) as \(\mathcal{D}\). For any matrix \(\mathbf{X}\in\mathbb{R}^{d_{1}\times d_{2}}\), define \(\|\mathbf{X}\|_{\text{op}},\ \|\mathbf{X}\|_{N}\), and \(\|\mathbf{X}\|_{F}\) to be its \(l_{2}\) operator norm, nuclear norm and Frobenius norm, respectively. The vectorized version of \(\mathbf{X}\) is denoted as \(\mathbf{Vec}(\mathbf{X})=(X_{1}^{\top},X_{2}^{\top},\ldots,X_{d_{2}}^{\top})^{\top}\), where \(X_{j}\) is the \(j\)-th column of \(\mathbf{X}\). Let \(\mathcal{S}\) denote the dataset containing \(n\) labeled samples, i.e., \(\mathcal{S}:=\{\mathbf{x}_{i},\mathbf{y}_{i}\}_{i=1}^{n}\). We use \(\mathcal{S}_{X}\) to denote the unlabeled dataset \(\mathcal{S}_{X}:=\{\mathbf{x}_{i}\}_{i=1}^{n}\). Let \(\otimes\) denote the Kronecker product. For a sequence of random variables \(X_{1},X_{2},\ldots\), we say that \(X_{n}=o_{P}(1)\) if \(X_{n}\to 0\) in probability as \(n\rightarrow\infty\), and \(X_{n}=O_{P}(1)\) if for all \(\epsilon>0\), there exists an \(M\) such that \(\sup_{n}\mathbb{P}(|X_{n}|>M)<\epsilon\). ### Optimal Sampling in Linear Softmax Regression Consider a \(K\)-class categorical response variable \(\mathbf{y}\in\{c_{0},c_{1},\ldots,c_{K}\}\) and a \(d\)-dimensional covariate \(\mathbf{x}\). 
The conditional probability of \(\mathbf{y}=c_{k}\) (for \(k=0,1,\ldots,K\)) given \(\mathbf{x}\) is \[p_{k}(\beta;\mathbf{x})=\frac{\exp(\mathbf{x}^{\top}\beta_{k})}{\sum_{l=0}^{K}\exp(\mathbf{x}^{\top}\beta_{l})}, \tag{1}\] where \(\beta_{k},\ k=0,1,\ldots,K\) are unknown regression coefficients belonging to a compact subset of \(\mathbb{R}^{d}\). Following [51], we assume \(\beta_{0}=0\) for identifiability. We further denote \(\beta=(\beta_{1}^{\top},\ldots,\beta_{K}^{\top})^{\top}\in\mathbb{R}^{Kd}\). We use the bold symbol \(\mathbf{\beta}\) to denote the \(d\)-by-\(K\) matrix \((\beta_{1},\ldots,\beta_{K})\). In the sequel, we first derive the optimal sub-sampling schemes for both coreset selection and active learning in linear softmax regression which minimize the expected test loss. Suppose the model is well-specified such that there exists a true parameter \(\beta^{*}\in\mathbb{R}^{Kd}\) with \(\mathbb{P}(\mathbf{y}=c_{k}|\mathbf{x})=p_{k}(\beta^{*};\mathbf{x})\) for all \(\mathbf{x}\) and \(k\). Define \(\delta_{k}(\mathbf{y}):=\mathbb{I}(\mathbf{y}=c_{k})\), where \(\mathbb{I}\) is the indicator function. Let \(\ell(\beta;\mathbf{x},\mathbf{y})\) denote the cross-entropy loss on the sample \((\mathbf{x},\mathbf{y})\), \[\ell(\beta;\mathbf{x},\mathbf{y})=-\sum_{k=0}^{K}\delta_{k}(\mathbf{y})\log p_{k}(\beta;\mathbf{x})=-\sum_{k=1}^{K}\delta_{k}(\mathbf{y})\mathbf{x}^{\top}\beta_{k}+\log\{1+\sum_{l=1}^{K}\exp(\mathbf{x}^{\top}\beta_{l})\}. \tag{2}\] We calculate the gradient and the Hessian matrix of the loss function as follows: \[\frac{\partial\ell(\beta;\mathbf{x},\mathbf{y})}{\partial\beta}=-s(\beta;\mathbf{x},\mathbf{y})\otimes\mathbf{x},\quad\text{ and }\quad\frac{\partial^{2}\ell(\beta;\mathbf{x},\mathbf{y})}{\partial\beta^{2}}=\phi(\beta;\mathbf{x})\otimes(\mathbf{x}\mathbf{x}^{\top}). 
\tag{3}\] Here \(s(\beta;\mathbf{x},\mathbf{y})\) is a \(K\)-dimensional vector with elements \(s_{k}(\beta;\mathbf{x},\mathbf{y})=\delta_{k}(\mathbf{y})-p_{k}(\beta;\mathbf{x})\) for \(k=1,...,K\); and \(\phi(\beta;\mathbf{x})\) is a \(K\times K\) matrix with elements \[\phi_{kk}(\beta;\mathbf{x})=p_{k}(\beta;\mathbf{x})-p_{k}(\beta;\mathbf{x})^{2},\quad\phi_{k_{1}k_{2}}(\beta;\mathbf{x})=-p_{k_{1}}(\beta;\mathbf{x})p_{k_{2}}(\beta;\mathbf{x}), \tag{4}\] where \(k,k_{1},k_{2}=1,...,K\) and \(k_{1}\neq k_{2}\). We further define the \(K\times K\) matrix \(\psi(\beta;\mathbf{x},\mathbf{y}):=s(\beta;\mathbf{x},\mathbf{y})s(\beta;\mathbf{x},\mathbf{y})^{\top}\). For \(k_{1},k_{2}=1,...,K\), we have \[\psi_{k_{1}k_{2}}(\beta;\mathbf{x},\mathbf{y})=[\delta_{k_{1}}(\mathbf{y})-p_{k_{1}}(\beta;\mathbf{x})][\delta_{k_{2}}(\mathbf{y})-p_{k_{2}}(\beta;\mathbf{x})]. \tag{5}\] We show that \(\mathbb{E}_{\mathbf{y}}[\psi(\beta^{*};\mathbf{x},\mathbf{y})|\mathbf{x}]=\phi(\beta^{*};\mathbf{x})\) in Lemma 2. We use \(\mathcal{L}(\beta;\mathcal{D})\) to denote the expected cross-entropy loss on the distribution \(\mathcal{D}\), \[\mathcal{L}(\beta;\mathcal{D})=\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim\mathcal{D}}[\ell(\beta;\mathbf{x},\mathbf{y})]. \tag{6}\] It is easy to see that \(\beta^{*}=\arg\min_{\beta\in\mathbb{R}^{Kd}}\mathcal{L}(\beta;\mathcal{D}).\) Given the dataset \(\mathcal{S}=\{(\mathbf{x}_{i},\mathbf{y}_{i})\}_{i=1}^{n}\), we use \(\mathcal{L}(\beta;\mathcal{S})\) to denote the cross-entropy loss of \(\beta\) on \(\mathcal{S}\), i.e., \[\mathcal{L}(\beta;\mathcal{S}):=\frac{1}{n}\sum_{(\mathbf{x},\mathbf{y})\in\mathcal{S}}\ell(\beta;\mathbf{x},\mathbf{y}). \tag{7}\] We further write \(\mathcal{L}(\beta)\) for \(\mathcal{L}(\beta;\mathcal{S})\) when it is clear from the context. 
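As a concrete sanity check of these definitions, the following numpy sketch (our own illustration, not code from the paper) implements \(p_{k}\), \(s\), \(\phi\) and \(\psi\) for a model with classes \(c_{0},\ldots,c_{K}\) and \(\beta_{0}=0\), and verifies numerically the identity \(\mathbb{E}_{\mathbf{y}}[\psi(\beta;\mathbf{x},\mathbf{y})\,|\,\mathbf{x}]=\phi(\beta;\mathbf{x})\) invoked from Lemma 2:

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 4, 3
B = rng.normal(size=(d, K))    # the d-by-K matrix (beta_1, ..., beta_K); beta_0 = 0
x = rng.normal(size=d)

def softmax_probs(B, x):
    """p_k(beta; x) for k = 0..K, Eq. (1), with beta_0 = 0 for identifiability."""
    logits = np.concatenate(([0.0], x @ B))
    e = np.exp(logits - logits.max())
    return e / e.sum()

def score_s(B, x, y):
    """K-vector with s_k = delta_k(y) - p_k(beta; x) for k = 1..K (Eq. (3))."""
    p = softmax_probs(B, x)
    return np.eye(K + 1)[y][1:] - p[1:]

def phi(B, x):
    """K-by-K matrix of Eq. (4): diag(p) - p p^T restricted to k = 1..K."""
    p = softmax_probs(B, x)[1:]
    return np.diag(p) - np.outer(p, p)

def psi(B, x, y):
    """psi = s s^T, Eq. (5)."""
    s = score_s(B, x, y)
    return np.outer(s, s)

# Lemma 2, checked numerically: E_y[psi(beta; x, y) | x] = phi(beta; x)
p = softmax_probs(B, x)
E_psi = sum(p[y] * psi(B, x, y) for y in range(K + 1))
assert np.allclose(E_psi, phi(B, x))
```

Note that the identity holds for any \(\beta\) as long as the expectation over \(\mathbf{y}\) is taken under the model's own probabilities; the well-specification assumption is what makes this the true conditional expectation at \(\beta^{*}\).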
Recall that \(\mathcal{L}(\beta;\mathcal{S})\) is the negative log-likelihood achieved by \(\beta\) on \(\mathcal{S}\); the maximum likelihood estimation (MLE) solution of \(\beta\) on \(\mathcal{S}\) is then \[\hat{\beta}_{\text{MLE}}:=\operatorname*{arg\,min}_{\beta\in\mathbb{R}^{Kd}}\mathcal{L}(\beta;\mathcal{S}).\] We further define \[\mathbf{M}_{X}(\beta;\mathcal{D}):=\frac{\partial^{2}\mathcal{L}(\beta;\mathcal{D})}{\partial^{2}\beta}=\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim\mathcal{D}}[\phi(\beta;\mathbf{x})\otimes(\mathbf{x}\mathbf{x}^{\top})],\] \[\mathbf{M}_{X}(\beta;\mathcal{S}):=\frac{\partial^{2}\mathcal{L}(\beta;\mathcal{S})}{\partial^{2}\beta}=\frac{1}{n}\sum_{(\mathbf{x},\mathbf{y})\in\mathcal{S}}[\phi(\beta;\mathbf{x})\otimes(\mathbf{x}\mathbf{x}^{\top})].\] Coreset Selection.First, we focus on the coreset selection problem, assuming that we have access to the entire labeled dataset, i.e., \(\mathcal{S}=\{\mathbf{x}_{i},\mathbf{y}_{i}\}_{i=1}^{n}\). We assign a sampling probability \(\pi(\mathbf{x},\mathbf{y})\) to each sample in \(\mathcal{S}\) and then randomly select a subset of size \(r\) according to \(\pi(\mathbf{x},\mathbf{y})\). Denote the selected subset as \(\bar{\mathcal{S}}=\{(\bar{\mathbf{x}}_{i},\bar{\mathbf{y}}_{i})\}_{i=1}^{r}\). We then estimate the parameter \(\bar{\beta}\) based on the weighted loss \[\bar{\beta}=\operatorname*{arg\,min}_{\beta}\left(-\frac{1}{r}\sum_{(\bar{\mathbf{x}},\bar{\mathbf{y}})\in\bar{\mathcal{S}}}\frac{1}{\pi(\bar{\mathbf{x}},\bar{\mathbf{y}})}\left(\sum_{k=1}^{K}\delta_{k}(\bar{\mathbf{y}})\bar{\mathbf{x}}^{\top}\beta_{k}-\log\{1+\sum_{l=1}^{K}\exp(\bar{\mathbf{x}}^{\top}\beta_{l})\}\right)\right). \tag{8}\] We want the \(\bar{\beta}\) estimated on the weighted sub-sampled dataset \(\bar{\mathcal{S}}\) to achieve a low expected loss \(\mathcal{L}(\bar{\beta};\mathcal{D})\). 
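The \(1/\pi\) weights in Eq. (8) are importance weights: they make the subsampled objective unbiased for the full-data objective regardless of the choice of \(\pi\). A small simulation (illustrative only; the per-sample losses are random stand-ins, not fitted values) checks this:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
loss = rng.uniform(0.1, 2.0, size=n)        # stand-in per-sample losses l(beta; x_i, y_i)
pi = rng.uniform(0.5, 1.5, size=n)
pi /= pi.sum()                              # an arbitrary sampling distribution over S

def weighted_subsample_loss(r):
    """Draw r indices according to pi and form the weighted objective of Eq. (8)."""
    idx = rng.choice(n, size=r, p=pi)
    return np.mean(loss[idx] / pi[idx])     # inverse-propensity weights 1/pi

# Averaging many subsampled objectives recovers sum_i l_i: the 1/pi weights make
# the subsampled loss an unbiased estimate of the full-data loss for *any* pi;
# the choice of pi only affects the variance, which Theorem 1 then optimizes.
est = np.mean([weighted_subsample_loss(64) for _ in range(4000)])
assert abs(est - loss.sum()) / loss.sum() < 0.05
```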
Omitting the higher-order terms, we are interested in the gap between the losses of \(\bar{\beta}\) and \(\beta^{*}\), \[\mathcal{L}(\bar{\beta};\mathcal{D})-\mathcal{L}(\beta^{*};\mathcal{D})=\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim\mathcal{D}}\left[(\beta^{*}-\bar{\beta})^{\top}\left(\phi(\beta;\mathbf{x})\otimes(\mathbf{x}\mathbf{x}^{\top})\right)(\beta^{*}-\bar{\beta})\right] \tag{9}\] Our goal is to find a sampling scheme parameterized by \(\pi(\cdot)\) which minimizes the expectation of \(\mathcal{L}(\bar{\beta};\mathcal{D})-\mathcal{L}(\beta^{*};\mathcal{D})\), i.e., \[\min_{\pi}\mathbb{E}_{\bar{\mathcal{S}}|\mathcal{S},\pi}\left[\mathcal{L}(\bar{\beta};\mathcal{D})-\mathcal{L}(\beta^{*};\mathcal{D})\right], \tag{10}\] where the expectation is taken over the randomness in sampling based on \(\pi(\cdot)\). **Active Learning**. For the active learning problem, we have the unlabeled dataset \(\mathcal{S}_{X}=\{\mathbf{x}_{i}\}_{i=1}^{n}\). We aim to assign a sampling weight \(\pi(\mathbf{x})\) to each sample \(\mathbf{x}\) in \(\mathcal{S}_{X}\). Here we use the subscript \(X\) in \(\pi_{X}\) to explicitly show that the sampling ratio in active learning only depends on \(\mathbf{x}\). When it is clear from the context, we also use \(\pi(\mathbf{x})\) to denote \(\pi_{X}(\mathbf{x})\) for simplicity. Based on the sampled subset and queried labels, also denoted as \(\bar{\mathcal{S}}=\{(\bar{\mathbf{x}}_{i},\bar{\mathbf{y}}_{i})\}_{i=1}^{r}\), we train the classifier \(\bar{\beta}\) on the weighted loss shown in Eqn (8) with the weight \(\pi(\mathbf{x},\mathbf{y})\) replaced by \(\pi(\mathbf{x})\), i.e., \[\bar{\beta}=\operatorname*{arg\,min}_{\beta}\left(-\frac{1}{r}\sum_{(\bar{\mathbf{x}},\bar{\mathbf{y}})\in\bar{\mathcal{S}}}\frac{1}{\pi(\bar{\mathbf{x}})}\left(\sum_{k=1}^{K}\delta_{k}(\bar{\mathbf{y}})\bar{\mathbf{x}}^{\top}\beta_{k}-\log\{1+\sum_{l=1}^{K}\exp(\bar{\mathbf{x}}^{\top}\beta_{l})\}\right)\right). 
\tag{11}\] Similar to coreset selection, we try to find a sampling scheme \(\pi\) which optimizes \[\min_{\pi}\mathbb{E}_{\bar{\mathcal{S}}|\mathcal{S}_{X},\pi}\left[\mathcal{L}(\bar{\beta};\mathcal{D})-\mathcal{L}(\beta^{*};\mathcal{D})\right]. \tag{12}\] Before presenting the main theorem, we introduce two assumptions, which are standard in the subsampling literature [48, 51]. **Assumption 1**.: _The covariance matrix \(\textbf{M}(\beta^{*};\mathcal{S})\) converges in probability to a positive definite matrix \(\textbf{M}(\beta^{*};\mathcal{D})\); and \(n^{-2}\sum_{(\mathbf{x},\mathbf{y})\in\mathcal{S}}\|\mathbf{x}\|^{3}=O_{p}(1)\)._ **Assumption 2**.: _For \(k=2,4\), \(n^{-2}\sum_{(\mathbf{x},\mathbf{y})\in\mathcal{S}}\pi(\mathbf{x})\|\mathbf{x}\|^{k}=O_{p}(1)\); and there exists some \(\delta>0\) such that \(n^{-(2+\delta)}\sum_{(\mathbf{x},\mathbf{y})\in\mathcal{S}}\pi(\mathbf{x})^{-1-\delta}\|\mathbf{x}\|^{2+\delta}=O_{p}(1)\)._ Assumption 1 requires that the asymptotic matrix \(\textbf{M}(\beta^{*};\mathcal{D})\) is non-singular and that \(\mathbb{E}\|\mathbf{x}\|^{3}\) is upper-bounded. Assumption 2 imposes conditions on both the subsampling probabilities and the covariates. 
**Theorem 1** (Optimal sampling in linear softmax regression).: _Suppose that Assumptions 1 and 2 hold._ * _For coreset selection, the optimal sampling ratio that minimizes_ \(\mathbb{E}_{\bar{\mathcal{S}}|\mathcal{S},\pi}[\mathcal{L}(\bar{\beta};\mathcal{D})-\mathcal{L}(\beta^{*};\mathcal{D})]\) _is_ \[\pi(\mathbf{x},\mathbf{y})=\frac{\sqrt{\textit{Tr}\left(\psi(\hat{\beta}_{\textit{MLE}};\mathbf{x},\mathbf{y})\otimes(\mathbf{x}\mathbf{x}^{\top})\textbf{M}_{X}^{-1}(\hat{\beta}_{\textit{MLE}};\mathcal{S})\right)}}{\sum_{(\mathbf{x}^{\prime},\mathbf{y}^{\prime})\in\mathcal{S}}\sqrt{\textit{Tr}\left(\psi(\hat{\beta}_{\textit{MLE}};\mathbf{x}^{\prime},\mathbf{y}^{\prime})\otimes(\mathbf{x}^{\prime}(\mathbf{x}^{\prime})^{\top})\textbf{M}_{X}^{-1}(\hat{\beta}_{\textit{MLE}};\mathcal{S})\right)}}.\] (13) * _For active learning, the optimal sampling ratio that minimizes_ \(\mathbb{E}_{\bar{\mathcal{S}}|\mathcal{S}_{X},\pi}[\mathcal{L}(\bar{\beta};\mathcal{D})-\mathcal{L}(\beta^{*};\mathcal{D})]\) _is_ \[\pi(\mathbf{x})=\frac{\sqrt{\textit{Tr}\left(\phi(\hat{\beta}_{\textit{MLE}};\mathbf{x})\otimes(\mathbf{x}\mathbf{x}^{\top})\textbf{M}_{X}^{-1}(\hat{\beta}_{\textit{MLE}};\mathcal{S})\right)}}{\sum_{\mathbf{x}^{\prime}\in\mathcal{S}_{X}}\sqrt{\textit{Tr}\left(\phi(\hat{\beta}_{\textit{MLE}};\mathbf{x}^{\prime})\otimes(\mathbf{x}^{\prime}(\mathbf{x}^{\prime})^{\top})\textbf{M}_{X}^{-1}(\hat{\beta}_{\textit{MLE}};\mathcal{S})\right)}}.\] (14) The interpretation of the optimal sampling ratio will become clearer as we present the results for the binary logistic regression, which we discuss later on. 
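On toy data, Eq. (13) can be evaluated directly with Kronecker products. The sketch below (our own illustration, with a random \(\beta\) standing in for \(\hat{\beta}_{\mathrm{MLE}}\)) builds \(\mathbf{M}_{X}\), inverts it, and normalizes the per-sample square-root traces into a sampling distribution:

```python
import numpy as np

rng = np.random.default_rng(2)
d, K, n = 3, 2, 50
B = rng.normal(size=(d, K))                   # stand-in for the d-by-K MLE solution
X = rng.normal(size=(n, d))

def probs(x):
    """p_k(beta; x), k = 0..K, with the identifiability constraint beta_0 = 0."""
    logits = np.concatenate(([0.0], x @ B))
    e = np.exp(logits - logits.max())
    return e / e.sum()

def phi(x):
    p = probs(x)[1:]
    return np.diag(p) - np.outer(p, p)        # Eq. (4)

def psi(x, y):
    s = np.eye(K + 1)[y][1:] - probs(x)[1:]   # s_k = delta_k(y) - p_k
    return np.outer(s, s)                     # Eq. (5)

Y = np.array([rng.choice(K + 1, p=probs(x)) for x in X])

# M_X(beta; S) = (1/n) sum_i phi(x_i) (Kronecker) x_i x_i^T
M = sum(np.kron(phi(x), np.outer(x, x)) for x in X) / n
Minv = np.linalg.inv(M)

# Eq. (13): pi(x, y) proportional to sqrt(Tr(psi (Kronecker) x x^T  M^{-1}))
w = np.array([np.sqrt(np.trace(np.kron(psi(x, y), np.outer(x, x)) @ Minv))
              for x, y in zip(X, Y)])
pi = w / w.sum()
assert np.all(pi >= 0) and np.isclose(pi.sum(), 1.0)
```

The \(\mathbf{M}_{X}\) here is \(Kd\times Kd\); it is exactly this inverse that becomes intractable at deep-learning scale, motivating the logit-based estimator of Section 3.2.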
In the proof, using the asymptotic variance of \(\bar{\beta}\) first derived in [48, 51] and Taylor expansions, we can approximate the gap \(\mathbb{E}_{\bar{\mathcal{S}}|\mathcal{S},\pi}\left[\mathcal{L}(\bar{\beta};\mathcal{D})-\mathcal{L}(\beta^{*};\mathcal{D})\right]\) for coreset selection by \[\mathbb{E}_{\bar{\mathcal{S}}|\mathcal{S},\pi}\left[\mathcal{L}(\bar{\beta};\mathcal{D})-\mathcal{L}(\beta^{*};\mathcal{D})\right]=\frac{1}{rn^{2}}\sum_{(\mathbf{x},\mathbf{y})\in\mathcal{S}}\frac{1}{\pi(\mathbf{x},\mathbf{y})}\mathrm{Tr}\left(\psi(\hat{\beta}_{\mathrm{MLE}};\mathbf{x},\mathbf{y})\otimes(\mathbf{x}\mathbf{x}^{\top})\mathbf{M}_{X}^{-1}(\hat{\beta}_{\mathrm{MLE}};\mathcal{S})\right), \tag{15}\] and the gap \(\mathbb{E}_{\bar{\mathcal{S}}|\mathcal{S}_{X},\pi}\left[\mathcal{L}(\bar{\beta};\mathcal{D})-\mathcal{L}(\beta^{*};\mathcal{D})\right]\) for active learning by \[\mathbb{E}_{\bar{\mathcal{S}}|\mathcal{S}_{X},\pi}\left[\mathcal{L}(\bar{\beta};\mathcal{D})-\mathcal{L}(\beta^{*};\mathcal{D})\right]=\frac{1}{rn^{2}}\sum_{\mathbf{x}\in\mathcal{S}_{X}}\frac{1}{\pi(\mathbf{x})}\mathrm{Tr}\left(\phi(\hat{\beta}_{\mathrm{MLE}};\mathbf{x})\otimes(\mathbf{x}\mathbf{x}^{\top})\mathbf{M}_{X}^{-1}(\hat{\beta}_{\mathrm{MLE}};\mathcal{S})\right). \tag{16}\] Then, we obtain the minimizers of the two terms above separately with the Cauchy-Schwarz inequality. The detailed proof is in Appendix A.1. Note that, distinct from [48, 51], which aim to reduce the variance of \(\bar{\beta}\), we target the expected generalization loss. However, directly computing our sampling ratio, as well as those in [48, 51], is computationally prohibitive in deep learning, since they rely on the inverse of the covariance matrix. As we will show later, our sampling ratio is closely connected to sample uncertainty and can be effectively estimated from the output of a DNN. 
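The Cauchy-Schwarz step is the standard one: for fixed nonnegative coefficients \(a_{i}\), \(\sum_{i}a_{i}/\pi_{i}\) subject to \(\sum_{i}\pi_{i}=1\) is minimized by \(\pi_{i}\propto\sqrt{a_{i}}\), attaining \((\sum_{i}\sqrt{a_{i}})^{2}\). A quick numerical check with arbitrary \(a_{i}\) (playing the role of the trace terms in Eq. (15)):

```python
import numpy as np

rng = np.random.default_rng(3)
a = rng.uniform(0.1, 5.0, size=100)   # stand-ins for the Tr(...) terms of Eq. (15)

def obj(pi):
    return np.sum(a / pi)             # the objective of Eq. (15) as a function of pi

pi_star = np.sqrt(a) / np.sqrt(a).sum()                  # pi_i proportional to sqrt(a_i)
assert np.isclose(obj(pi_star), np.sqrt(a).sum() ** 2)   # attains (sum_i sqrt(a_i))^2

# no other feasible pi does better
for _ in range(100):
    pi = rng.uniform(size=100)
    pi /= pi.sum()
    assert obj(pi) >= obj(pi_star) - 1e-9
```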
Now we illustrate the main intuition for the optimal sampling by considering the binary logistic classification problem as an example. In this case, we know that \(K=1\), \(\mathbf{y}\in\{c_{0},c_{1}\}\), and \(\beta=\beta_{1}\in\mathbb{R}^{d}\). Correspondingly, the binary logistic regression model takes the form \[p_{1}(\beta;\mathbf{x})=\frac{\exp(\mathbf{x}^{\top}\beta_{1})}{1+\exp(\mathbf{x}^{\top}\beta_{1})}.\] The covariance matrix becomes \[M_{X}=\frac{1}{n}\sum_{(\mathbf{x},\mathbf{y})\in\mathcal{S}}(p_{1}(\hat{\beta}_{\mathrm{MLE}};\mathbf{x})-p_{1}(\hat{\beta}_{\mathrm{MLE}};\mathbf{x})^{2})\mathbf{x}\mathbf{x}^{\top}.\] **Corollary 1** (Logistic regression optimal sampling).: _Suppose that Assumptions 1 and 2 hold._ 1. _For coreset selection, the optimal sampling ratio that minimizes_ \(\mathbb{E}_{\bar{\mathcal{S}}|\mathcal{S},\pi}[\mathcal{L}(\bar{\beta};\mathcal{D})-\mathcal{L}(\beta^{*};\mathcal{D})]\) _is_ \[\pi(\mathbf{x},\mathbf{y})=\frac{\left|\delta_{1}(\mathbf{y})-p_{1}(\hat{\beta}_{\mathrm{MLE}};\mathbf{x})\right|\left\|\mathbf{x}\right\|_{M_{X}^{-1}}}{\sum_{(\mathbf{x}^{\prime},\mathbf{y}^{\prime})\in\mathcal{S}}\left|\delta_{1}(\mathbf{y}^{\prime})-p_{1}(\hat{\beta}_{\mathrm{MLE}};\mathbf{x}^{\prime})\right|\left\|\mathbf{x}^{\prime}\right\|_{M_{X}^{-1}}}.\] (17) 2. 
_For active learning, the optimal sampling ratio that minimizes_ \(\mathbb{E}_{\bar{\mathcal{S}}|\mathcal{S}_{X},\pi}[\mathcal{L}(\bar{\beta};\mathcal{D})-\mathcal{L}(\beta^{*};\mathcal{D})]\) _is_ \[\pi(\mathbf{x})=\frac{\sqrt{p_{1}(\hat{\beta}_{\mathrm{MLE}};\mathbf{x})-p_{1}(\hat{\beta}_{\mathrm{MLE}};\mathbf{x})^{2}}\left\|\mathbf{x}\right\|_{M_{X}^{-1}}}{\sum_{\mathbf{x}^{\prime}\in\mathcal{S}_{X}}\sqrt{p_{1}(\hat{\beta}_{\mathrm{MLE}};\mathbf{x}^{\prime})-p_{1}(\hat{\beta}_{\mathrm{MLE}};\mathbf{x}^{\prime})^{2}}\left\|\mathbf{x}^{\prime}\right\|_{M_{X}^{-1}}}.\] (18) Intuition of the optimal sampling ratio.The optimal ratio for coreset selection is proportional to \(\left|\delta_{1}(\mathbf{y})-p_{1}(\hat{\beta}_{\mathrm{MLE}};\mathbf{x})\right|\cdot\left\|\mathbf{x}\right\|_{M_{X}^{-1}}\), which decomposes into two components: * \(\left|\delta_{1}(\mathbf{y})-p_{1}(\hat{\beta}_{\mathrm{MLE}};\mathbf{x})\right|\) is related to the prediction error of \(\hat{\beta}_{\mathrm{MLE}}\). * \(\left\|\mathbf{x}\right\|_{M_{X}^{-1}}\) has been widely explored in the RL literature, where it is connected to uncertainty. Specifically, \(\left\|\mathbf{x}\right\|_{M_{X}^{-1}}^{2}\) represents the inverse of the effective sample number in \(\mathcal{S}\) along the \(\mathbf{x}\) direction [23]. A larger \(\left\|\mathbf{x}\right\|_{M_{X}^{-1}}^{2}\) indicates that there are fewer effective samples in the \(\mathbf{x}\) direction. In this case, the prediction on \(\mathbf{x}\) will be more uncertain. Therefore, \(\left\|\mathbf{x}\right\|_{M_{X}^{-1}}\) is used to characterize the uncertainty along the \(\mathbf{x}\) direction. Samples with significant uncertainty and substantial prediction errors will receive a higher sampling weight for coreset selection. 
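Both ratios of Corollary 1 are cheap to compute in the linear binary case. The numpy sketch below (a toy illustration with a random stand-in for \(\hat{\beta}_{\mathrm{MLE}}\)) evaluates \(M_{X}\), the norms \(\|\mathbf{x}\|_{M_{X}^{-1}}\), and the two sampling distributions of Eqs. (17) and (18):

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 200, 5
X = rng.normal(size=(n, d))
beta = rng.normal(size=d)                      # stand-in for the MLE estimate
p1 = 1.0 / (1.0 + np.exp(-X @ beta))           # p_1(beta; x_i)
y = (rng.uniform(size=n) < p1).astype(float)   # delta_1(y_i)

# M_X = (1/n) sum_i (p_i - p_i^2) x_i x_i^T and the norms ||x_i||_{M_X^{-1}}
M = (X * (p1 - p1**2)[:, None]).T @ X / n
Minv = np.linalg.inv(M)
norms = np.sqrt(np.einsum('ij,jk,ik->i', X, Minv, X))

w_core = np.abs(y - p1) * norms                # Eq. (17): prediction error x uncertainty
w_al = np.sqrt(p1 - p1**2) * norms             # Eq. (18): label-free counterpart
pi_core = w_core / w_core.sum()
pi_al = w_al / w_al.sum()
assert np.isclose(pi_core.sum(), 1.0) and np.isclose(pi_al.sum(), 1.0)
```

Only `w_core` touches the labels; dropping \(|\delta_{1}(\mathbf{y})-p_{1}|\) for its conditional standard deviation is exactly what turns the coreset ratio into the active-learning one.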
As for active learning, \(\left|\delta_{1}(\mathbf{y})-p_{1}(\hat{\beta}_{\mathrm{MLE}};\mathbf{x})\right|\) is replaced by \(\sqrt{p_{1}(\hat{\beta}_{\mathrm{MLE}};\mathbf{x})-p_{1}(\hat{\beta}_{\mathrm{MLE}};\mathbf{x})^{2}}\), since taking the conditional expectation over \(\boldsymbol{y}\) gives \[p_{1}(\hat{\beta}_{\mathrm{MLE}};\mathbf{x})-p_{1}(\hat{\beta}_{\mathrm{MLE}};\mathbf{x})^{2}\approx\mathbb{E}_{\boldsymbol{y}|\boldsymbol{x}=\mathbf{x}}(\delta_{1}(\mathbf{y})-p_{1}(\hat{\beta}_{\mathrm{MLE}};\mathbf{x}))^{2}\] as \(n\to\infty\). \(\sqrt{p_{1}(\hat{\beta}_{\mathrm{MLE}};\mathbf{x})-p_{1}(\hat{\beta}_{\mathrm{MLE}};\mathbf{x})^{2}}\) assigns large weights to samples near the decision boundary. In summary, the optimal sampling ratios can be determined by weighting the uncertainty of samples with their corresponding prediction errors. ### Efficient approximation of the optimal sampling ratio There are two issues in estimating the optimal sampling ratio in Eqns (13) and (14): (a) we cannot obtain \(\hat{\beta}_{\mathrm{MLE}}\) in practice since it is solved on the whole dataset; (b) calculating the inverse of the covariance matrix \(\mathbf{M}_{X}(\hat{\beta}_{\mathrm{MLE}};\mathcal{S})\) is computationally prohibitive due to the high dimensionality in deep learning. To solve issue (a), [48] proposes to fit a \(\beta\) on a held-out probe dataset \(\mathcal{S}^{\prime}\) (a small dataset independent of \(\mathcal{S}\)) to replace \(\hat{\beta}_{\mathrm{MLE}}\). However, issue (b) remains the major obstacle for our method as well as those in [44, 51]. In the following, we bypass issue (b) by showing that \(\psi(\hat{\beta}_{\mathrm{MLE}};\mathbf{x},\mathbf{y})\otimes(\mathbf{x}\mathbf{x}^{\top})\mathbf{M}_{X}^{-1}(\hat{\beta}_{\mathrm{MLE}};\mathcal{S})\) is related to the standard deviation of the output logits of independently trained models. 
To be more specific, we fit \(M\) independent MLE linear classifiers \(\{\hat{\boldsymbol{\beta}}^{(m)}\}_{m=1}^{M}\) on \(M\) probe datasets \(\{\mathcal{S}^{(m)}\}_{m=1}^{M}\) which is independent of \(\mathcal{S}\). We then show that for each sample \((\mathbf{x},\mathbf{y})\) in \(\mathcal{S}\), we can estimate \(\psi(\hat{\beta}_{\mathrm{MLE}};\mathbf{x},\mathbf{y})\otimes(\mathbf{x} \mathbf{x}^{\top})\mathbf{M}_{X}^{-1}(\hat{\beta}_{\mathrm{MLE}};\mathcal{S})\) by the covariance of each model's logits i.e., \((\hat{\boldsymbol{\beta}}^{(m)})^{\top}\mathbf{x}\), as shown in Eqn (20) of Algorithm 1. **Theorem 2** (Uncertainty estimation in linear models).: _Supposing that Assumptions 1 and 2 hold, we have \(M\) probe datasets \(\{\mathcal{S}^{(m)}\}_{m=1}^{M}\) and each \(\mathcal{S}^{(m)}\) contains \(n^{\prime}\) samples, we independently fit \(M\) MLE classifiers \(\{\hat{\boldsymbol{\beta}}^{(m)}\}_{m=1}^{M}\) on \(\{\mathcal{S}^{(m)}\}_{m=1}^{M}\). Denote \(\tilde{\beta}=\frac{1}{M}\sum_{m=1}^{M}\text{Vec}(\hat{\boldsymbol{\beta}}^{ (m)})\) and define \(\Sigma_{M}(\mathbf{x})\) as Eqn (20) in Algorithm 1, then as \(M\to\infty\), \(n^{\prime}\to\infty\) and \(n\to\infty\), for \((\mathbf{x},\mathbf{y})\in\mathcal{S}\), we have_ \[n^{\prime}\text{Tr}\Big{(}\psi(\tilde{\beta};\mathbf{x},y)\Sigma _{M}(\mathbf{x})\Big{)}-\text{Tr}\Big{(}\psi(\hat{\beta}_{\text{MLE}};\mathbf{ x},\mathbf{y})\otimes(\mathbf{x}\mathbf{x}^{\top})\boldsymbol{M}_{X}^{-1}( \hat{\beta}_{\text{MLE}};\mathcal{S})\Big{)} =o_{P}(1),\] \[n^{\prime}\text{Tr}\Big{(}\phi(\tilde{\beta};\mathbf{x})\Sigma _{M}(\mathbf{x})\Big{)}-\text{Tr}\Big{(}\phi(\hat{\beta}_{\text{MLE}};\mathbf{ x})\otimes(\mathbf{x}\mathbf{x}^{\top})\boldsymbol{M}_{X}^{-1}(\hat{\beta}_{\text{MLE}}; \mathcal{S})\Big{)} =o_{P}(1).\] See Appendix A.2 for a proof. This theorem demonstrates that the uncertainty quantities can be approximated without explicitly calculating the inverse of covariance matrix. 
Instead, we only need to calculate a MLE estimator \(\tilde{\beta}\) and the covariance of the output logits \(\{(\hat{\boldsymbol{\beta}}^{(m)})^{\top}\mathbf{x}\}_{m=1}^{M}\) derived from \(M\) models. In other words, we only need to obtain \(\{\hat{\boldsymbol{\beta}}^{(m)}\}\) on \(M\) probe sets, respectively. We then obtain the optimal sampling ratio through calculating \(\Sigma_{M}(\mathbf{x})\), which is the covariance of \(\{(\hat{\boldsymbol{\beta}}^{(m)})^{\top}\mathbf{x}\}_{m=1}^{M}\) as defined in Eqn (20). ``` Input: Probe datasets \(\{\mathcal{S}^{(m)}\}_{m=1}^{M}\), the sampling dataset \(\mathcal{S}\) for coreset selection or \(\mathcal{S}_{X}\) for active learning. Output: The estimated uncertainty for each sample in \(\mathcal{S}\) or \(\mathcal{S}_{X}\). 1 For \(m=1,...,M\), solve \(\hat{\beta}^{(m)}=\arg\min_{\beta\in\mathbb{R}^{Kd}}\mathcal{L}(\beta;\mathcal{S }^{(m)})\). Denote \[\tilde{\beta}=\frac{1}{M}\sum_{m=1}^{M}\hat{\beta}^{(m)},\ \hat{\boldsymbol{\beta}}^{(m)}=[\hat{\beta}^{(m)}_{1},\hat{\beta}^{(m)}_{2},...,\hat{\beta}^{(m)}_{K}],\ \text{and}\ \tilde{\boldsymbol{\beta}}=\frac{1}{M}\sum_{m=1}^{M}\hat{ \boldsymbol{\beta}}^{(m)}.\] (19) 2 For each \(\mathbf{x}\), obtain \(\{(\hat{\boldsymbol{\beta}}^{(m)})^{\top}\mathbf{x}\}_{m=1}^{M}\) and the covariance of them: \[\Sigma_{M}(\mathbf{x})=\frac{1}{M-1}\sum_{m=1}^{M}\left(\left(\hat{ \boldsymbol{\beta}}^{(m)}\right)^{\top}\mathbf{x}-\tilde{\boldsymbol{\beta}}^{ \top}\mathbf{x}\right)\left(\left(\hat{\boldsymbol{\beta}}^{(m)}\right)^{\top }\mathbf{x}-\tilde{\boldsymbol{\beta}}^{\top}\mathbf{x}\right)^{\top}.\] (20) 3 Get the predicted probability of \(\mathbf{x}\), i.e., \(p(\tilde{\beta};\mathbf{x})\), as in Eqn (1). Estimate the uncertainty for each sample as following: * Case (1) coreset selection. 
Obtain \(\psi(\tilde{\beta};\mathbf{x},\mathbf{y})\) according to Eqn (5) and obtain the uncertainty estimation as \[u(\mathbf{x},\mathbf{y})=\operatorname{Tr}\left(\psi(\tilde{\beta};\mathbf{x}, \mathbf{y})\Sigma_{M}(\mathbf{x})\right);\] * Case (2) active learning. Obtain \(\phi(\tilde{\beta};\mathbf{x})\) according to Eqn (4) and obtain the uncertainty estimation as \[u(\mathbf{x})=\operatorname{Tr}\left(\phi(\tilde{\beta};\mathbf{x})\Sigma_{M} (\mathbf{x})\right).\] **Approximations in Deep Learning.** Our objective is to develop a sub-sampling method for deep learning. Let's consider a deep neural network \(f_{\theta}(\mathbf{x})\) with parameters \(\theta\in\mathbb{R}^{d^{\prime}}\), where both \(d\) and \(d^{\prime}\) are extremely large in the context of deep learning. There exist gaps between the theory presented in Section 3.1 and deep learning due to the nonlinearity involved in \(f_{\theta}\). However, we can leverage insights from learning theory, such as the Neural Tangent Kernel [22], which demonstrates that a wide DNN can be approximated by a linear kernel with a fixed feature map \(\nabla_{\theta}f_{\theta}(\cdot)\in\mathbb{R}^{d}\rightarrow\mathbb{R}^{d^{ \prime}}\). Consequently, we can approximate uncertainty by calculating the standard deviation from different linear kernels, as outlined in Theorem 2. Importantly, our method does not necessitate explicit computation of the linear kernel, as we only require the output \(\beta^{\top}\mathbf{x}\) from Theorem 2. Thus, we can directly replace \(\beta^{\top}\mathbf{x}\) with the output of the DNN, i.e., \(f_{\theta}(\mathbf{x})\). Let \(f_{\theta,k}(\boldsymbol{x})\) denote the \(k\)th dimension of \(f_{\theta}(\boldsymbol{x})\) for \(k=0,...,K\). 
We denote the output probability of \(f_{\theta}\) on sample \(\mathbf{x}\) by \[p(f_{\theta};\mathbf{x})=[p_{0}(f_{\theta};\mathbf{x}),p_{1}(f_{\theta};\mathbf{x}),...,p_{K} (f_{\theta};\mathbf{x})],\text{ where }p_{k}(f_{\theta};\mathbf{x})=\frac{\exp(f_{\theta,k}(\mathbf{x}))}{ \sum_{l=0}^{K}\exp(f_{\theta,l}(\mathbf{x}))}.\] Recall in Algorithm 1 that we train \(M\) independent linear models on \(M\) different probe sets, respectively. In practice, getting \(M\) additional probe sets can be costly. One option is to use bootstrap, where \(M\) subsets are resampled from a single probe set \(\mathcal{S}^{\prime}\) and the variance is estimated based on the \(M\) trained models. [14] shows that the variance estimated by bootstrap converges to the asymptotic variance, which is the uncertainty quantity. However, we adopt a different approach that is more popular in deep learning: we train \(M\) neural networks, \(\{f_{\theta^{(m)}}\}_{m=1}^{M}\), on a single probe set \(\mathcal{S}^{\prime}\) with different initializations and random seeds, which empirically outperforms the bootstrap method. With \(\{f_{\theta^{(m)}}\}_{m=1}^{M}\), we then replace the linear models in Algorithm 1 by their DNN counterparts, i.e., replace \((\hat{\boldsymbol{\beta}}^{(m)})^{\top}\mathbf{x}\) by \(f_{\theta^{(m)}}(\mathbf{x})\), \(\tilde{\boldsymbol{\beta}}^{\top}\mathbf{x}\) by \(\frac{1}{M}\sum_{m=1}^{M}f_{\theta^{(m)}}(\mathbf{x})\), and \(p(\tilde{\beta};\mathbf{x})\) by \(\frac{1}{M}\sum_{m=1}^{M}p(f_{\theta^{(m)}};\mathbf{x})\). We summarize the uncertainty estimation for DNNs in Algorithm 6 in Appendix C.1. Notably, our method can be further simplified by training a single model on \(\mathcal{S}^{\prime}\) with dropout and then obtaining \(\{f_{\theta^{(m)}}\}_{m=1}^{M}\) by using Monte Carlo Dropout during inference. In Section 5, we also empirically compare different uncertainty estimation methods including different initialization, bootstrap, and dropout.
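The core of the DNN-based estimate is simply the sample covariance of the ensemble logits, the analogue of Eqns (19)-(20) with \(f_{\theta^{(m)}}(\mathbf{x})\) in place of \((\hat{\boldsymbol{\beta}}^{(m)})^{\top}\mathbf{x}\). A minimal sketch; summarising the covariance by its plain trace is our simplification, since the \(\psi\)/\(\phi\) weighting matrices of Eqns (4)-(5) are not reproduced in this excerpt:

```python
import numpy as np

def ensemble_uncertainty(logits):
    """logits: (M, K) -- outputs of M probe models on one sample x.

    Returns the mean logit, the sample covariance Sigma_M (Eqn (20)),
    and trace(Sigma_M) as a simple scalar uncertainty summary.
    """
    M = logits.shape[0]
    mean = logits.mean(axis=0)
    diffs = logits - mean                 # deviations from the ensemble mean
    Sigma = diffs.T @ diffs / (M - 1)     # (K, K) covariance, Eqn (20)
    return mean, Sigma, float(np.trace(Sigma))

rng = np.random.default_rng(1)
logits = rng.normal(size=(10, 5))         # M = 10 models, K = 5 classes
mean, Sigma, u = ensemble_uncertainty(logits)
```

No covariance matrix over the parameter space is ever inverted; everything is computed from the \(K\)-dimensional model outputs, which is what makes the approximation tractable in deep learning.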
The detailed algorithms are summarized in Algorithms 4 and 5 in Appendix C.1. ``` Input: Training data \(\mathcal{S}\), \(M\) probe datasets \(\{\mathcal{S}^{(m)}\}_{m=1}^{M}\), sub-sampling size \(r\). Output: The estimator \(\bar{\beta}\) solved on the selected subset \(\bar{\mathcal{S}}\). 1 For each \((\mathbf{x},\mathbf{y})\in\mathcal{S}\), obtain \(u(\mathbf{x},\mathbf{y})\) by Algorithm 1 with \(\{\mathcal{S}^{(m)}\}_{m=1}^{M}\); 2 Randomly draw \(\bar{\mathcal{S}}\) containing \(r\) samples from \(\mathcal{S}\) by \(\pi(\mathbf{x},\mathbf{y})=u(\mathbf{x},\mathbf{y})/\sum_{(\mathbf{x}^{\prime},\mathbf{y}^{\prime})\in\mathcal{S}}u(\mathbf{x}^{\prime},\mathbf{y}^{\prime})\). 3 Solve \(\bar{\beta}\) on the weighted subset \(\bar{\mathcal{S}}(\pi)\) according to Eqn (8). ``` **Algorithm 2**COPS for coreset selection on linear models ## 4 Towards Effective Sampling Strategy in Real-World Applications In this section, we enhance the theoretically motivated sampling algorithm by incorporating insights gained from empirical observations. To begin, we experiment with the optimal sampling strategies of Algorithms 4 and 5 on deep learning datasets. ### Vanilla uncertainty sampling strategy is ineffective in applications Settings. We try out the sampling algorithms for DNNs, i.e., Algorithms 4 and 5 (with uncertainty estimation in Algorithm 6), with ResNet20 [17]. We performed experiments on three datasets: (1) CIFAR10 [28], (2) CIFARBinary, and (3) CIFAR10-N [49]. CIFARBinary is a binary classification dataset created by selecting two classes (plane and car) from CIFAR10. CIFAR10-N is a variant of CIFAR10 with natural label noise [49]. For a more comprehensive description of the datasets, please refer to Section 5. For all settings, we split the training set into two subsets, i.e., the probe set (\(\mathcal{S}^{\prime}\) in Algorithms 4-5) and the sampling set (\(\mathcal{S}\) in Algorithms 4-5).
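The sample-then-reweight recipe of Algorithm 2 amounts to two array operations. In this sketch the function name is hypothetical, and normalising the inverse-propensity weights to mean one is our assumption (Eqn (8), not reproduced here, may scale them differently):

```python
import numpy as np

def cops_sample(u, r, rng):
    """Stage 1 of Algorithm 2: draw r indices with probability pi = u / sum(u).

    Also returns the stage-2 inverse-propensity weights 1/u for the
    selected samples (normalised to mean one here -- an assumption).
    """
    pi = u / u.sum()
    idx = rng.choice(len(u), size=r, replace=False, p=pi)
    w = 1.0 / u[idx]
    return idx, w / w.mean()

rng = np.random.default_rng(2)
u = rng.uniform(0.1, 1.0, size=1000)   # per-sample uncertainties from Algorithm 1
idx, w = cops_sample(u, 50, rng)
```

The weights compensate for the non-uniform selection so that the weighted loss on the subset remains an (approximately) unbiased estimate of the full-data loss.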
We train 10 probe neural networks on \(\mathcal{S}^{\prime}\) and estimate the uncertainty of each sample in \(\mathcal{S}\) with these networks. We select a subset with 300 samples per class from \(\mathcal{S}\) according to Algorithms 4-5, on which we train a ResNet20 from scratch. Since we conduct experiments on multiple datasets with different sub-sampling sizes, for both coreset selection and active learning, we use WithY to denote coreset selection (where the whole labeled dataset is available) and WithoutY for active learning. We use the triple "(dataset name)-(target sub-sampling size)-(whether with \(Y\))" to denote an experimental setting; for example, CIFAR10-3000-WithY is short for the setting of selecting 3,000 samples from the labeled CIFAR10 dataset for coreset selection. Results. Surprisingly, the results in Figure 1 show that the sampling Algorithms 4 and 5 are even inferior to uniform sampling in some settings, both for coreset selection (WithY) and active learning (WithoutY). For example, in the CIFARBinary-600-WithY setting in Figure 1, uncertainty sampling leads to a testing performance of 75.26%, which is much worse than uniform sampling's performance of 88.31%. A closer look at uncertainty sampling. Figure 2(a) visualizes the uncertainty distribution of samples in CIFAR10 estimated by Algorithm 6. Figure 2(b) shows the uncertainty of the 3000 samples selected according to the sample selection ratio in Eqn (13), i.e., the uncertainty of the 3000 samples selected by COPS in the CIFAR10-3000-WithY setting. The uncertainty distribution of the selected data in Figure 2(b) is quite different from the uncertainty distribution of the full dataset in Figure 2(a): the selected subset contains a large number of samples with high uncertainty. Figure 3 shows similar trends in CIFAR10-3000-WithoutY. Recall that the optimal sampling ratio is derived in a simplified setting where we assume that there is no model misspecification.
The sampling schemes in Eqns (13) and (14) tend to select samples from the low-density region with high uncertainty. However, previous studies [52, 16] demonstrate that when substantial misspecification happens to samples in low-density regions, the model estimation can be significantly impacted. We conjecture that the uncertainty sampling methods in Algorithms 4 and 5 suffer from this issue since they place excessive emphasis on the low-density region. We then illustrate this effect with a logistic linear classification example in the following section. ### Simulating the effect of model misspecification on sampling algorithms Simulation with a linear example. The optimal sampling strategy in Eqns (13) and (14) is derived under the assumption that the model is well specified, i.e., there exists an oracle \(\beta^{*}\) such that \(\mathbb{P}(y=c_{k}|\mathbf{x})=p_{k}(\beta^{*};\mathbf{x})\) for all \(\mathbf{x}\) and \(k\). To illustrate how uncertainty sampling can suffer from model misspecification, we conduct simulations on the following example, which contains model misspecification, following the setting of [16, 52, 2]. Consider a binary classification problem \(y\in\{0,1\}\) with 2-dimensional input \(\mathbf{x}\in\mathbb{R}^{2}\). The true parameter is \(\beta^{*}=[2,2]^{\top}\). In this simulation, we consider adversarial corruption, a typical case of misspecification in a line of previous research [16, 52, 2]. In this case, an adversary corrupts the classification responses \(\mathbf{y}\) before they are revealed to the learners. Hence, if the learner still makes estimates via the linear logistic model, misspecification occurs. Suppose that there exists model misspecification characterized by \(\zeta:\mathbb{R}^{2}\rightarrow\mathbb{R}\) such that \[P(y=1|\mathbf{x},\beta^{*},\zeta)=\frac{\exp\left(\mathbf{x}^{\top}\beta^{*}+\zeta(\mathbf{x})\right)}{1+\exp\left(\mathbf{x}^{\top}\beta^{*}+\zeta(\mathbf{x})\right)}.
\tag{21}\] Consider a training dataset consisting of 1,000 instances of \(\mathbf{x}_{1}\), 100,000 instances of \(\mathbf{x}_{2}\), and 100,000 instances of \(\mathbf{x}_{3}\), where \(\mathbf{x}_{1}=[1,0]\), \(\mathbf{x}_{2}=[0.1,0.1]\), and \(\mathbf{x}_{3}=[0,1]\). It is evident that \(\mathbf{x}_{1}\) falls within the low-density region. In the following part, we introduce non-zero corruption on \(\mathbf{x}_{1}\). It is easy to infer that a corruption on \(\mathbf{x}_{1}\) would induce an estimation error in the first dimension of \(\beta\). We incorporate \(\mathbf{x}_{2}\) within the dataset to ensure that the estimation error in the first dimension affects the estimation error in the second dimension. Figure 1: The vanilla implementation of the uncertainty Algorithms 4 and 5 (i.e., COPS-vanilla) displays inferior performance, whereas thresholding the maximum uncertainty during sample selection (i.e., COPS-clip) significantly enhances the overall performance. We conduct simulations involving three cases of corruption in the low-density region \(\mathbf{x}_{1}\): (a) \(\zeta(\mathbf{x}_{1})=0\), (b) \(\zeta(\mathbf{x}_{1})=-1\), and (c) \(\zeta(\mathbf{x}_{1})=-3\). We select 1,000 samples from a total of 201,000 samples and obtain \(\bar{\beta}\) by uniform sampling, COPS for coreset selection (the linear Algorithm 2) and COPS for active learning (the linear Algorithm 3). We also visualize the parameter estimation error \(|\bar{\beta}-\beta^{*}|\). We evaluate the regret loss \(\mathcal{L}(\bar{\beta})-\mathcal{L}(\beta^{*})\) on the testing set without corruption (the dataset composition is shown in Table 1). The results of the comparison for each method are presented in Figure 4. The simulation results demonstrate that the vanilla uncertainty sampling strategy (i.e., COPS-vanilla) performs well when there is no corruption.
However, as the level of corruption increases, the performance of uncertainty sampling deteriorates quickly and can be even worse than random sampling when \(\zeta(\mathbf{x}_{1})=-3\). ### A simple fix [16, 52] argue that corruption in the low-density region can make \(\beta_{\mathrm{MLE}}\) deviate significantly from \(\beta^{*}\). To alleviate this problem, [16, 52] propose to assign a smaller weight to the samples in low-density regions when performing weighted linear regression, resulting in a solution closer to \(\beta^{*}\). Specifically, they assign a weight \(1/\max(\alpha,\left\|\mathbf{x}\right\|_{M_{X}^{-1}})\) to each sample when performing linear regression, where \(\alpha\) is a pre-defined hyper-parameter. Samples with large uncertainty thus receive a small weight. \begin{table} \begin{tabular}{c|c|c|c} \hline \hline \(\boldsymbol{x}\) & \(\mathbf{x}_{1}=\left[1,0\right]^{\top}\) & \(\mathbf{x}_{2}=\left[0.1,0.1\right]^{\top}\) & \(\mathbf{x}_{3}=\left[0,1\right]^{\top}\) \\ \hline Sampling Set & \(n_{1}=1,000\) & \(n_{2}=100,000\) & \(n_{3}=100,000\) \\ Testing Set & \(n_{1}=1,000\) & \(n_{2}=100,000\) & \(n_{3}=100,000\) \\ \hline \hline \end{tabular} \end{table} Table 1: A simple example with 2-dimensional input \(\boldsymbol{x}\in\mathbb{R}^{2}\) and binary output \(y\in\{0,1\}\). There are three kinds of inputs as shown in the table. Both the training (sampling) and testing sets contain 1,000 \(\mathbf{x}_{1}\), 100,000 \(\mathbf{x}_{2}\) and 100,000 \(\mathbf{x}_{3}\), respectively. Figure 2: Histogram of estimated uncertainty of samples on CIFAR10-3000-WithY. Recall that we select data according to the uncertainty \(u(\mathbf{x},\mathbf{y})=|\delta_{1}(\mathbf{y})-p_{1}(\hat{\beta}_{\mathrm{MLE}};\mathbf{x})|\cdot\left\|\mathbf{x}\right\|_{M_{X}^{-1}}\).
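For reference, the data-generating process of Table 1 under the corruption model of Eqn (21) can be reproduced in a few lines (the random seed and the helper name are our own choices):

```python
import numpy as np

def make_table1_data(zeta_x1, rng):
    """Table 1 dataset with corruption zeta applied only to x1 (Eqn (21))."""
    xs = np.array([[1.0, 0.0], [0.1, 0.1], [0.0, 1.0]])
    counts = [1_000, 100_000, 100_000]          # n1, n2, n3 from Table 1
    beta_star = np.array([2.0, 2.0])
    X = np.repeat(xs, counts, axis=0)
    # logistic model with an additive corruption term on x1 only
    logit = X @ beta_star + np.repeat([zeta_x1, 0.0, 0.0], counts)
    y = (rng.random(len(X)) < 1.0 / (1.0 + np.exp(-logit))).astype(int)
    return X, y

X, y = make_table1_data(-3.0, np.random.default_rng(3))   # case (c)
```

Fitting an (uncorrupted) logistic model to this data reproduces the misspecification scenario: the corruption only touches the rare point \(\mathbf{x}_1\), which is exactly where uncertainty sampling concentrates.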
We can incorporate the idea of [16, 52] by modifying the uncertainty sampling ratio: we multiply \(u(\mathbf{x},\mathbf{y})\) by \(1/\max(\alpha,\left\|\mathbf{x}\right\|_{M_{X}^{-1}})\), i.e., we draw samples according to \(u(\mathbf{x},\mathbf{y})/\max(\alpha,\left\|\mathbf{x}\right\|_{M_{X}^{-1}})\). Furthermore, since \(u(\mathbf{x},\mathbf{y})\) and \(\left\|\mathbf{x}\right\|_{M_{X}^{-1}}\) only differ by a scaling term \(|\delta_{1}(\mathbf{y})-p_{1}(\hat{\beta}_{\mathrm{MLE}};\mathbf{x})|\), we use an even simpler version, \(u(\mathbf{x},\mathbf{y})/\max(\alpha,u(\mathbf{x},\mathbf{y}))\propto\min(\alpha,u(\mathbf{x},\mathbf{y}))\), which simply thresholds the maximum value of \(u(\mathbf{x},\mathbf{y})\) for sampling. Therefore, the overall sampling ratio for coreset selection in Eqn (17) is modified as follows: \[\pi^{\alpha}(\mathbf{x},\mathbf{y})=\frac{\min(\alpha,u(\mathbf{x},\mathbf{y}))}{\sum_{(\mathbf{x}^{\prime},\mathbf{y}^{\prime})\in\mathcal{S}}\min(\alpha,u(\mathbf{x}^{\prime},\mathbf{y}^{\prime}))}, \tag{22}\] where \(u(\mathbf{x},\mathbf{y})=|\delta_{1}(y)-p_{1}(\hat{\beta}_{\mathrm{MLE}};\mathbf{x})|\cdot\left\|\mathbf{x}\right\|_{M_{X}^{-1}}\). The full modified algorithm for coreset selection is included in Algorithm 7 in Appendix C.2. The algorithm for active learning is modified accordingly, as shown in Algorithm 8 in Appendix C.2. Figure 3: Histogram of estimated uncertainty on CIFAR10 without labels (active learning). Figure 4: Comparison of different sampling methods on simulated data. Left: the error of parameter estimation \(|\bar{\beta}-\beta^{*}|\); Right: the regret loss \(\mathcal{L}(\bar{\beta})-\mathcal{L}(\beta^{*})\) on the testing set. Notably, we do not modify the reweighting accordingly. Intuitively, the original COPS selects samples by \(u\) and then minimizes the loss weighted by \(1/u\).
Here we select samples according to \(\min\{\alpha,u\}\) but still use the original reweighting \(1/u\). In this way, we can reduce the negative impact of model misspecification on samples from the low-density region, i.e., samples with high uncertainty, obtaining a \(\bar{\beta}\) closer to \(\beta^{*}\). We applied this method in the simulation experiment, testing thresholds at 3 or 10 times the minimum uncertainty. Taking the threshold of 3 for coreset selection as an example, we set \(\alpha=3\cdot\min_{(\mathbf{x},\mathbf{y})\in\mathcal{S}}u(\mathbf{x},\mathbf{y})\). To differentiate, we use the suffix 'COPS-clip' to represent the method with the uncertainty clipped from above, and refer to the unmodified COPS method as 'COPS-vanilla'. The outcomes displayed in Figure 4 demonstrate how this straightforward approach enhances the performance of uncertainty sampling in the case of substantial corruption, achieving significant improvement over both uniform sampling and COPS-vanilla in terms of both \(|\bar{\beta}-\beta^{*}|\) and \(\mathcal{L}(\bar{\beta})-\mathcal{L}(\beta^{*})\). The results in Figure 1 show that COPS-clip also works well in real-world applications. Figure 2(c) and Figure 3(c) illustrate the uncertainty distribution of the 3000 samples selected by COPS-clip in the CIFAR10-3000-WithY and CIFAR10-3000-WithoutY settings, respectively. Compared to COPS-vanilla, COPS-clip selects samples whose uncertainty distribution is closer to that of the entire CIFAR10 dataset, with only a slight increase in samples exhibiting high uncertainty. In Appendix E.1, we provide additional results showing that COPS-vanilla selects a higher proportion of noisy data in CIFAR10-N compared to uniform sampling, whereas COPS-clip does not exhibit an increase in the noise ratio compared to uniform sampling.
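The clipped selection ratio of Eqn (22), combined with the stage-2 floor described in Remark 1, reduces to elementwise clipping (the numeric example and function name are ours):

```python
import numpy as np

def cops_clip(u, mult=3.0, floor=0.1):
    """Eqn (22): selection ratio with alpha = mult * min(u).

    Stage-2 weights keep the original 1/u form, floored at `floor`
    (the beta = 0.1 threshold of Remark 1); only selection is clipped.
    """
    alpha = mult * u.min()
    pi = np.minimum(alpha, u)     # clip the maximum uncertainty, Eqn (22)
    pi = pi / pi.sum()
    w = 1.0 / np.maximum(floor, u)
    return pi, w

u = np.array([0.05, 0.10, 0.50, 2.00])
pi, w = cops_clip(u)              # pi[2] == pi[3]: both hit the alpha cap
```

The two highest-uncertainty samples end up with identical selection probabilities, illustrating how the clip prevents low-density outliers from dominating the draw.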
Remark 1. To simplify the discussion, let \(u\) denote \(u(\mathbf{x},\mathbf{y})\) for coreset selection and \(u(\mathbf{x})\) for active learning. The vanilla COPS method performs two stages: (Stage 1) data subsampling according to \(u\); (Stage 2) weighted learning, where each selected sample is assigned a weight of \(1/u\) to obtain an unbiased estimator. Since the sample weighting in Stage 2 involves calculating the inverse of \(u\), it can result in high variance if \(u\) approaches zero. To address this, previous work has implemented a threshold of \(1/\max\{\beta,u\}\) [21, 43, 6], which limits the minimum value of \(u\). Both COPS-vanilla and COPS-clip adopt this strategy by default in the second stage to limit the variance, and \(\beta\) is set to 0.1 for all real-world dataset experiments (including the experiments in Figure 1). Appendix D.1 gives the full details on this part. However, our empirical analysis reveals the importance of also limiting the maximum of \(u\) by \(\min\{\alpha,u\}\) in the first stage, which can alleviate the negative impact of potential model misspecification on COPS. To the best of our knowledge, this hasn't been discussed in existing works [48, 44, 51, 43, 6]. Appendix E.2 presents empirical results comparing the impact of thresholding in the first and second stages. ## 5 Experiments and results Settings. In this section, we conduct extensive experiments to verify COPS. Here the COPS method refers to COPS-clip from Section 4 by default, and the detailed algorithms are given in Algorithms 9-10. We compare COPS with various baseline methods and validate it on various datasets, including both CV and NLP tasks, as well as datasets with natural label noise. For all the methods studied in this section, we use the same setting as described in Section 4: we train probe networks on one probe dataset and perform sampling at once on the sampling dataset.
The datasets used in our experiments are as follows: * CIFAR10 [28]: We utilize the original CIFAR10 dataset [28]. To construct the probe set, we randomly select 1000 samples from each class, while the remaining training samples are used for the sampling set. For our experiments, we employ ResNet20, ResNet56 [17], MobileNetV2 [39], and DenseNet121 [18] as our backbone models. * CIFARBinary: We choose two classes, plane and car, from the CIFAR10 dataset for binary classification. Similar to CIFAR10, we assign 1000 samples from the training images for the probe set of each class, and the remaining training samples form the sampling set. In this case, we employ ResNet20 as our backbone model. * CIFAR100: From the CIFAR100 dataset [28], we randomly select 200 samples for each class and assign them to the probe set. The remaining training samples are used in the sampling set. For this dataset, ResNet20 is utilized as the backbone model. * CIFAR10-N: We use CIFAR10-N, a corrupted version of CIFAR10 introduced by Wei et al. [49]. The training set of CIFAR10-N contains human-annotated real-world noisy labels collected from Amazon Mechanical Turk and the testing set of CIFAR10-N is the same with CIFAR10. Similar to CIFAR10, we split 1000 samples from each class for the probe set, while the rest are included in the sampling set. We employ ResNet20 as our backbone model. * IMDB: The IMDB dataset [33] consists of positive and negative movie comments, comprising 25000 training samples and 25000 test samples. We split 5000 samples from the training set for uncertainty estimation and conduct our scheme on the remaining 20000 samples. For this dataset, we use a GRU-based structure [5], and further details can be found in Appendix D.3. * SVHN: The SVHN dataset contains images of house numbers. We split 1000 samples from the train set for each class to estimate uncertainty, while the remaining train samples are used for the sampling schemes. 
ResNet20 serves as our backbone model in this case. * Place365 (subset): We select ten classes from the Place365 dataset [54], each consisting of 5000 training samples and 100 testing samples. The chosen classes are department_store, lighthouse, discotheque, museum-indoor, rock_arch, tower, hunting_lodge-outdoor, hayfield, arena-rodeo, and movie_theater-indoor. We split the training set, assigning 1000 instances for each class to the probe set, and the remaining samples form the sampling set. ResNet18 is employed as the backbone model for this dataset. We summarize the datasets in Table 2. Comparison with Baselines. In this part, we compare our method COPS with existing sample selection methods. We adopt competitive baselines for coreset selection and active learning, respectively. The baselines for coreset selection (WithY) are as follows: * **Uniform sampling**. * **IWeS (WithY)** [6] first fits two functions \(f_{\theta^{(1)}}\) and \(f_{\theta^{(2)}}\) on the probe set and then uses the disagreement of the two functions with respect to entropy to calculate the sampling ratio for a sample \((\mathbf{x},\mathbf{y})\): \[\pi(\mathbf{x},\mathbf{y})=\sum_{k=0}^{K}\delta_{k}(\mathbf{y})\big{|}p_{k}(f_{\theta^{(1)}};\mathbf{x})\log_{2}(p_{k}(f_{\theta^{(1)}};\mathbf{x}))-p_{k}(f_{\theta^{(2)}};\mathbf{x})\log_{2}(p_{k}(f_{\theta^{(2)}};\mathbf{x}))\big{|}\] (23) * **BADGE (WithY)** [1] calculates the gradient of the last layer and uses k-means++ to cluster the gradients; the samples closest to the cluster centers are then selected. * **Margin** [40]. The margin is computed by subtracting the predicted probability of the true class from 1: \[\pi(\mathbf{x},\mathbf{y})=1-\sum_{k=1}^{K}\delta_{k}(\mathbf{y})p_{k}(f_{\theta};\mathbf{x})\] (24) The baselines for active learning selection (WithoutY) are as follows: * **Uniform sampling**.
* **IWeS (WithoutY)** [6] uses a normalized version of entropy as follows: \[\pi(\mathbf{x})=-\sum_{k=0}^{K}p_{k}(f_{\theta};\mathbf{x})\log_{2}(p_{k}(f_{\theta};\mathbf{x}))/\log_{2}(K)\] (25) \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \hline Dataset & Class Number & Probe Set & Sampling Set & Target Size of Sub-sampling & Test Set \\ \hline CIFARBinary & 2 & 2,000 & 8,000 & 600/2,000/6,000 & 2,000 \\ CIFAR10 & 10 & 10,000 & 40,000 & 3,000/10,000/20,000 & 10,000 \\ CIFAR10-N & 10 & 10,000 & 40,000 & 3,000/10,000/20,000 & 10,000 \\ CIFAR100 & 100 & 20,000 & 30,000 & 3,000/10,000/20,000 & 10,000 \\ SVHN & 10 & 10,000 & 63,257 & 3,000/10,000/20,000 & 26,032 \\ Places365 & 10 & 10,000 & 40,000 & 3,000/10,000/20,000 & 1,000 \\ IMDB & 2 & 5,000 & 20,000 & 2,000/4,000/10,000 & 25,000 \\ \hline \hline \end{tabular} \end{table} Table 2: The table provides descriptions of the datasets used in our study. The "Probe Set" and "Sampling Set" columns indicate the number of samples included in the probe set and the sampling set for each dataset. The "Target Size of Sub-sampling" column represents the number of samples selected from the sampling set for sub-sampling; for example, the value "600" indicates that we choose 600 instances from the sampling set. * **BADGE (WithoutY)** [1] first obtains the pseudo label \(\hat{\mathbf{y}}=\operatorname*{arg\,max}_{k}p_{k}(f_{\theta};\mathbf{x})\) and then calculates the gradient of the last layer with the pseudo label \(\hat{\mathbf{y}}\). K-means++ is then used to cluster the samples, and the samples closest to the cluster centers are selected. * **Least confidence** [40] is determined by calculating the difference between 1 and the highest probability assigned to a class: \[\pi(\mathbf{x})=1-\max_{k}p_{k}(f_{\theta};\mathbf{x})\] (26) * **Feature Clustering** [41]1 first extracts the latent features of the model and then uses K-means to cluster the samples by their features. The samples closest to the cluster centers are further selected.
Footnote 1: [41] name their method coreset; we refer to it as feature clustering in order to avoid confusion with the coreset task. Figure 5: Results for coreset selection (WithY). For BADGE with 3000 samples, the performance is lower than 50, so the bar is clipped in our figures. Figure 6: Results for active learning (WithoutY). For BADGE with 3000 samples, the performance is lower than 50, so the bar is clipped in our figures. We first compare COPS with the above baselines in both the coreset selection (WithY) and active learning (WithoutY) settings on three datasets: CIFAR10, CIFAR100 and CIFAR10-N. The results in Figures 5 and 6 show that COPS consistently outperforms the baselines in these settings. The improvement is even more significant on CIFAR10-N, which contains natural label noise. Multiple Architectures. To verify the effectiveness of COPS, we conduct experiments on CIFAR10 with different neural network structures. Specifically, we choose several widely used structures, including ResNet56 [17], MobileNetV2 [39] and DenseNet121 [18]. The results are shown in Figure 7. COPS stably improves over random sampling for both WithY and WithoutY on different DNN architectures. Additional Datasets. Furthermore, we evaluate the effectiveness of COPS on three additional datasets: SVHN, Places365 (subset), and IMDB (an NLP dataset). The results in Figure 8 show that our method consistently outperforms random sampling on these datasets. Different methods for uncertainty estimation. In Algorithm 6, we obtain \(M\) models \(\{f_{\theta^{(m)}}\}_{m=1}^{M}\) on the probe dataset \(\mathcal{S}^{\prime}\) by training DNNs independently with different initializations and random seeds. This method is referred to as the **different initialization** method. In this section, we compare this method with two alternative approaches to obtain \(\{f_{\theta^{(m)}}\}_{m=1}^{M}\) given \(\mathcal{S}^{\prime}\): 1.
**Bootstrap**: Each \(f_{\theta^{(m)}}\) is obtained by training a DNN on a randomly drawn subset of \(\mathcal{S}^{\prime}\). 2. **Dropout** [13]: A single DNN is trained on \(\mathcal{S}^{\prime}\) with dropout. Then, \(\{f_{\theta^{(m)}}\}_{m=1}^{M}\) are obtained by performing Monte Carlo Dropout during inference for \(M\) iterations. The comparison of these three methods on CIFAR10 is depicted in Figure 9. It is evident that the different initialization method achieves the best performance, while the bootstrap method performs the worst among the three. The dropout method shows performance similar to the different initialization method in the coreset selection task (WithY). Figure 7: Results for CIFAR10 with different architectures. ## 6 Conclusion This study presents the COPS method, which offers a theoretically optimal solution for coreset selection and active learning in linear softmax regression. By leveraging the output of the models, the sampling ratio of COPS can be effectively estimated even in deep learning contexts. To address the challenge of model sensitivity to misspecification, we introduce a downweighting approach for low-density samples: we modify the sampling ratio of COPS by thresholding it. Empirical experiments conducted on benchmark datasets, utilizing deep neural networks, further demonstrate the effectiveness of COPS in comparison to baseline methods. The results highlight the superiority of COPS in achieving optimal subsampling and performance improvement.
2308.15223
Evaluating Explanation Methods for Multivariate Time Series Classification
Multivariate time series classification is an important computational task arising in applications where data is recorded over time and over multiple channels. For example, a smartwatch can record the acceleration and orientation of a person's motion, and these signals are recorded as multivariate time series. We can classify this data to understand and predict human movement and various properties such as fitness levels. In many applications classification alone is not enough, we often need to classify but also understand what the model learns (e.g., why was a prediction given, based on what information in the data). The main focus of this paper is on analysing and evaluating explanation methods tailored to Multivariate Time Series Classification (MTSC). We focus on saliency-based explanation methods that can point out the most relevant channels and time series points for the classification decision. We analyse two popular and accurate multivariate time series classifiers, ROCKET and dResNet, as well as two popular explanation methods, SHAP and dCAM. We study these methods on 3 synthetic datasets and 2 real-world datasets and provide a quantitative and qualitative analysis of the explanations provided. We find that flattening the multivariate datasets by concatenating the channels works as well as using multivariate classifiers directly and adaptations of SHAP for MTSC work quite well. Additionally, we also find that the popular synthetic datasets we used are not suitable for time series analysis.
Davide Italo Serramazza, Thu Trang Nguyen, Thach Le Nguyen, Georgiana Ifrim
2023-08-29T11:24:12Z
http://arxiv.org/abs/2308.15223v2
# Evaluating Explanation Methods for Multivariate Time Series Classification

###### Abstract

Multivariate time series classification is an important computational task arising in applications where data is recorded over time and over multiple channels. For example, a smartwatch can record the acceleration and orientation of a person's motion, and these signals are recorded as multivariate time series. We can classify this data to understand and predict human movement and various properties such as fitness levels. In many applications classification alone is not enough; we often need to classify but also understand what the model learns (e.g., why a prediction was given, based on what information in the data). The main focus of this paper is on analysing and evaluating explanation methods tailored to Multivariate Time Series Classification (MTSC). We focus on saliency-based explanation methods that can point out the most relevant channels and time series points for the classification decision. We analyse two popular and accurate multivariate time series classifiers, ROCKET and dResNet, as well as two popular explanation methods, SHAP and dCAM. We study these methods on 3 synthetic datasets and 2 real-world datasets and provide a quantitative and qualitative analysis of the explanations provided. We find that flattening the multivariate datasets by concatenating the channels works as well as using multivariate classifiers directly, and that adaptations of SHAP for MTSC work quite well. Additionally, we find that the popular synthetic datasets we used are not suitable for time series analysis.

Keywords: Time Series Classification, Explanation, Evaluation

## 1 Introduction

Real-world time series data are often multivariate, i.e., data collected over a period of time on different channels. An example is human motion data collected from participants wearing a tri-axial accelerometer on their dominant wrist.
The tri-variate data can be examined to identify epilepsy convulsions in everyday life [25]. Another example is traffic data, where multiple sensors are set up at different locations to measure the traffic occupancy in a city\({}^{1}\). While univariate time series have been the main research focus, there is a steadily growing interest in multivariate time series (MTS), in particular for the classification task (MTSC). The release of the MTSC benchmark [2], a collaborative effort by researchers from multiple institutions, is an important milestone that has accelerated studies of MTSC methods.

Explainable AI is another important topic due to the explosion of interest in complex machine learning models and deep learning methods. Pioneers in this field have been working mostly on text and image data and, as a result, a number of explanation frameworks including LIME [20], DeepLift [14] and Shapley [15] have been introduced. The similarity between image and time series data allows such techniques to be adapted to time series models [26]. Nevertheless, there are some notable differences between images and time series. Firstly, images are usually represented using RGB encoding and all three channels contain necessary information, while for time series it is common to have channels that do not contribute to, or even hinder, the classification decision. Secondly, in images there is a lot of homogeneity in the pixel values when moving between pixels belonging to the same object and a sharp difference when moving between pixels belonging to different objects. In time series, it is less common to find such strong locality, especially across all the channels. Furthermore, the data magnitude and pre-processing, such as normalisation, are important factors for time series, but less so for images.

In this work, we focus on methods for explaining MTSC, as this is an important open problem that is often as important as the classification itself.
In a scenario in which people wear accelerometers on their body while executing a physical exercise, other than classifying the exercise as correctly executed or not, it is also important to provide feedback to users, e.g., an explanation of why the exercise was incorrectly executed, by pointing out the relevant data. In this paper, a _multivariate time series explanation_ is a 2D saliency map [3] highlighting the importance of each time series channel and each time point for the classification decision, as illustrated in Figure 1. A proper MTSC explanation should be able to point out, for each channel, the relevant time points, which may be located at different parts of the time series. For example, CAM [27] was designed for explaining univariate time series, thus it cannot identify important time points that vary across channels.

Figure 1: Sample multivariate time series and explanation heat map. The 3 plots show the x, y, z channels for a jump sample.

In this work we aim to analyse and evaluate a few MTSC explanation methods found in the literature. In our literature review, the only bespoke MTS explanation methods we found are all tailored to deep learning methods (especially CNNs), while a few others can provide a 2D heat map by adapting _univariate time series explanation_ methods to work in a multivariate scenario (most of the time by flattening the dataset and reshaping the 1D heat map into a matrix). The lack of bespoke multivariate time series explanations, combined with the lack of explanation evaluation methods, is an important gap in the scientific literature. The main aim of this work is to study and evaluate existing MTSC explanation methods in order to start addressing this gap.

**Our main contributions in this paper are:**

* We analyse the literature on saliency-based explanation methods for MTSC and find very few bespoke methods, all of which are designed for deep learning models.
Among these, we select dCAM [3], which extends CAM, a very popular method for time-series and image explanations.

* We conduct experiments using state-of-the-art multivariate time series classifiers ROCKET [6] and dResNet [3] and explanation methods SHAP [16] and dCAM [3]. We study ways to adapt SHAP to work with multivariate time series and compare it to the bespoke MTSC explanation method dCAM.

* We use 3 synthetic datasets and 2 real-world datasets to compare the classifiers and the explanations. We evaluate the explanations both quantitatively, using the methodology proposed in [18], as well as qualitatively. We find that for truly multivariate datasets (i.e., where multiple channels are needed for the correct classification), ROCKET-SHAP works better than dCAM, but is also more computationally expensive. We also find that flattening the datasets by concatenating the channels and using univariate classifiers works as well as using multivariate classifiers directly.

In the rest of the paper, in Section 2 we discuss prior work addressing the MTSC explanation task. In Section 3 we formally define the problem addressed, the classifiers and the explanation methods used in the experiments. In Section 4 we describe the datasets used in our study, in Section 5 we describe our experiments and in Section 6 we summarise our main findings.

## 2 Related Work

**Explanation Methods adapted from Univariate to Multivariate TSC.** Some multivariate time series explanation methods are simple adaptations of methods developed for univariate data. In [1], the authors explain the adapted classifiers by applying the timeXplain [17] framework on each channel _independently_. The result is a multivariate explanation that highlights the important segments in each channel of the multivariate sample. Nonetheless, it is arguable whether this approach is appropriate, since the explained model(s) (univariate) and the model that needs to be explained (multivariate) are not the same.
Additionally, it is not clear whether the accuracy of the channel-wise univariate model is similar to or worse than that of the multivariate model, and this is not discussed in the paper.

**Bespoke Explanation Methods for MTSC.** Most of the previous explanation methods designed for MTSC are tailored to deep learning methods, which are not state-of-the-art with regard to classification accuracy. In [3], the authors discussed the drawbacks of the CAM explanation method for MTS data. CAM can only produce a univariate saliency map, thus it is unable to identify the important channels. Features that depend on more than one channel are also not detectable. dCAM, proposed in the same paper, addressed these limitations by rearranging the input time series with all the permutations of the channels. The paper shows that this technique can be applied to any architecture with a Global Average Pooling (GAP) layer, such as ResNet or InceptionTime. The limitations of dCAM are discussed by comparing it with other deep learning explanation methods; for instance, it was shown that dCAM is not the best option when dealing with multivariate datasets that can be classified by focusing on just one channel. However, there is no comparison against model-agnostic methods such as SHAP [15] or LIME [20].

**Evaluation of Explanation Methods for MTSC.** While explanation methods for MTSC are few, works on evaluating such methods are even fewer. For univariate time series, several approaches have been proposed to compare explanation methods from different angles. The work in [5, 11] benchmarks the methods with controllable synthetic datasets. The work of [8] attempted to extract "ground-truth" explanations with a white-box classifier. The "ground-truth" explanation is then used to evaluate post-hoc explanations.
AMEE [18] is a recent framework to quantitatively compare explanation methods on a dataset by perturbing input time series and measuring the impact on the classification accuracy of several classifiers. For multivariate time series, recently [24] designed an evaluation framework that is also based on the idea of perturbation, but the work is limited to evaluating deep learning classifiers and associated explanations. The paper also proposed a synthetic multivariate time series dataset to benchmark explanation methods.

## 3 Background

A multivariate time series \(X\) can be represented as a \(d\times L\) matrix, where the \(d\) rows are also called channels and the \(L\) columns store the values associated with each channel at every time point. Hence \(X^{j}_{i}\) is the value of the time series at time point \(i\) and channel \(j\), with \(0\leq i<L\) and \(0\leq j<d\). We also refer to \(X^{j}\) as the univariate time series at channel \(j\); therefore \(X\) can be written as \(X=[X^{0},X^{1},\ldots,X^{d-1}]\). An explanation of a time series \(X\) is a saliency map \(W\) that provides an importance weight for each data point (at every time point \(i\) and every channel \(j\)) in the time series. Hence the saliency map can also be represented by a \(d\times L\) matrix. A common visualisation method for the saliency map is a heat map where more important data points are highlighted with warmer colours. An explanation method for MTSC is a method that, given the input MTS, can produce a saliency map highlighting the relevance of each time point to the classifier decision. Intrinsically explainable models such as the Ridge Classifier can also serve as explanation methods, while black-box models such as ResNet (dResNet) and ROCKET need a post-hoc explanation method.
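The setup above (an MTS \(X\) and its saliency-map explanation \(W\), both \(d\times L\)) can be made concrete with a small numpy sketch. The values here are arbitrary; only the shapes and the min-max normalisation used for heat-map visualisation follow the description above.

```python
import numpy as np

# Illustrative shapes: d channels, L time points (the data is random).
d, L = 3, 100
rng = np.random.default_rng(42)

# A multivariate time series X is a d x L matrix; X[j, i] is the value
# at channel j and time point i, and X[j] is channel j's univariate series.
X = np.cumsum(rng.normal(size=(d, L)), axis=1)

# A saliency-map explanation W has the same d x L shape: one importance
# weight per (channel, time point) pair.
W = rng.random((d, L))

# For a heat-map visualisation, weights are typically min-max normalised
# so that warmer colours mark more important points.
W_norm = (W - W.min()) / (W.max() - W.min())

assert X.shape == W_norm.shape == (d, L)
```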
In our experiments we compare three different classifiers and explanation methods: ROCKET [6] coupled with SHAP [16], dResNet coupled with dCAM [3], and the Ridge Classifier [10], which is an intrinsically explainable model. We also use a random explanation (a matrix of random weights) as a sanity check.

### Classification Methods

The first classifier we use is **ROCKET** [6], which was originally designed for UTS but also has an adaptation for MTS: it applies a large set of random convolution kernels to the time series in order to transform it into tabular data with \(20,000\) features. It introduced some key concepts such as dilation and the proportion of positive values (PPV), starting an algorithm family in which recent members such as MiniRocket [7] and MultiRocket [23] are improvements of the original idea. All the hyper-parameters for ROCKET were learned from the UCR archive: the authors selected \(40\) random datasets from the archive and used them as a development set to find the best values for the hyper-parameters. Finally, all the kernel weights are sampled from a distribution \(\mathcal{N}(0,1)\). After the transformation step, the authors use the classic linear classifiers Ridge or Logistic Regression.

The second classifier is **dResNet** [3], which is a variation of ResNet [9]. The latter, originally designed for image classification, was used for the first time in TSC in [26]. It introduced the key concept of _shortcut connections_ to mitigate the vanishing-gradient problem. The main architecture of the network is composed of three consecutive blocks, each containing three different convolutional layers. These three blocks are followed by a GAP layer and a softmax layer for classification. The dResNet version uses the same architecture with two differences specifically designed to work alongside dCAM.
Firstly, for a multivariate time series \(X\) with \(d\) channels, i.e., a matrix \(X=[X^{0},X^{1},\ldots,X^{d-1}]\), the input \(C(X)\) of the network is a 3D tensor: \[C(X)=\begin{bmatrix}X^{d-1}&X^{0}&\ldots&X^{d-3}&X^{d-2}\\ \vdots&\vdots&\vdots&\vdots&\vdots\\ X^{1}&X^{2}&\ldots&X^{d-1}&X^{0}\\ X^{0}&X^{1}&\ldots&X^{d-2}&X^{d-1}\end{bmatrix}\] In other words, the input is turned from a \(2D\) matrix into a \(3D\) one in which each row contains the \(d\) channels in different positions. The second change was to turn the convolution shapes from \(1D\) to \(2D\) so as to have the same output shape as ResNet. These changes were made so that the network is able to capture patterns depending on multiple channels while still learning on individual channels.

The third model we use is the well-known **Ridge Classifier** [10], meant as a baseline in the experiments: we use the scikit-learn [19] RidgeCV class with cross-validation, leaving the other solver parameters at their defaults. This classifier disregards the time ordering in each time series, as it treats each time series as a tabular vector of features.

### Explanation Methods

The first explanation method considered in this paper is **SHAP** [15], which measures feature importance using Shapley values borrowed from game theory. SHAP quantifies the contribution of each feature by examining the difference in the model output when a specific feature is masked, i.e., replaced with a specific value, and when it is not. SHAP considers every possible masking configuration and is thus computationally expensive. The timeXplain library [17] applies SHAP to the UTSC task by dividing the time series into segments, each of which is treated as a feature. The segmentation exploits locality in time series and significantly reduces the number of features before applying SHAP. As SHAP is a model-agnostic method, it works with any TSC model. We couple it with ROCKET due to its efficiency and accuracy.
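The segment-masking idea behind timeXplain can be illustrated with a crude leave-one-segment-out sketch. This is our own simplification, not the library's implementation: real SHAP averages the masking effect over all feature coalitions, while here each segment is masked exactly once. The toy `predict` function and the informative window at points 10-19 (mirroring the synthetic benchmark used later in the paper) are assumptions for illustration.

```python
import numpy as np

def segment_importance(predict, x, n_segments=10, mask_value=0.0):
    """Leave-one-segment-out importance for a univariate series.

    The series is split into n_segments contiguous segments; each
    segment is masked in turn (replaced by mask_value) and the drop in
    the model output is taken as that segment's importance.
    """
    L = len(x)
    bounds = np.linspace(0, L, n_segments + 1).astype(int)
    base = predict(x)
    scores = np.empty(n_segments)
    for s in range(n_segments):
        xm = x.copy()
        xm[bounds[s]:bounds[s + 1]] = mask_value
        scores[s] = base - predict(xm)
    return scores

# Toy model: the 'class score' is the mean of time points 10..19, so
# only the second of ten segments should matter.
predict = lambda x: x[10:20].mean()
x = np.zeros(100)
x[10:20] = 1.0
imp = segment_importance(predict, x)
assert imp.argmax() == 1          # the segment covering points 10..19
```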
The second explanation method (used along with dResNet) is **dCAM** [3]. It computes _CAM_ [27] for each row of the input (described in Section 3.1), resulting in a \(2D\) matrix \(\mathcal{M}\) in which all channels are brought back to their original positions to evaluate their contribution. Since the network is trained to compute meaningful predictions regardless of the order in which the channels are provided, dCAM computes \(k\) different matrices \(\mathcal{M}\), each obtained from a different random permutation of the channel order; all these \(k\) matrices are then averaged into \(\hat{\mathcal{M}}\). The final step to retrieve the explanation \(W\) consists in filtering out uninformative time points and uninformative channels using, respectively, the average value of \(\hat{\mathcal{M}}\) in each channel and the variance of all positions for a single channel. dCAM can tell how important a time point was for the classification by taking the differences in \(\hat{\mathcal{M}}\) when the time point is present in different positions.

The third explanation method is **Ridge**. As mentioned before, this method is intrinsically explainable because the explanation weights are the weights learned by the classifier: the model is basically a vector of coefficients, one for each feature, i.e., each data point in the time series.

The final explanation method, **Random**, is a baseline that generates the saliency map \(W\) by sampling values randomly from a continuous uniform distribution. The idea is that any good explanation method should provide a better explanation than the random one.

## 4 Datasets

We work with 3 synthetic multivariate time series classification datasets and 2 real-world ones. In Figure 2 we present one sample from one synthetic dataset and one sample from each of the real-world datasets.

### Synthetic Datasets

For the synthetic datasets, we use the multivariate time series classification benchmark by Ismail et al. [12].
We generated three different datasets, using the _Pseudo Periodic_, _Gaussian_ and _Auto Regressive_ distributions. Each has 100 samples in both the train and test sets, with \(L=100\) and \(d=20\). The two classes for classification are _positive_ and _negative_. The discriminative data points are stationary and within a small box, i.e., \(X_{i}^{j}\) is discriminative if and only if \(10\leq i<20\) and \(0\leq j<10\). In other words, 50% of the channels and 10% of the time steps are relevant. Overall, only 5% of the time series matters for predicting the class.

### Real-World Datasets

The first real-world dataset is **Counter Movement Jump** (CMJ) [13]. The data were collected using accelerometer sensors attached to the participants while performing the counter-movement jump exercise. The three classes are: jumps with acceptable form (class 1), with the legs bending during the flight (class 2), and with a stumble upon landing (class 3). The training set has 419 samples while the test set has 179 samples. Each time series has 3 channels (\(d=3\)) that record the acceleration in the \(x\), \(y\) and \(z\) axes. The original data is variable-length, thus we resampled every time series to the same length (\(L=596\)). From the domain experts, we know that the distinctions between classes are more observable on channel \(y\), which makes this channel the most important one.

Figure 2: Sample time series: (a) PseudoPeriodic negative sample; (b) one instance from CMJ Bend; (c) one instance from MP Normal.

The second real-world dataset is **Military Press** (MP) [22]. To collect the data, 56 participants were asked to perform the Military Press strength-and-conditioning exercise. Each of them completed 10 repetitions in the normal form and another 30 in induced forms, with 10 repetitions each (simulating 3 types of errors). The time series were extracted from video using the OpenPose library [4].
The dataset has 1452 samples in the training set and 601 in the test set; each time series has 161 time points and 50 channels corresponding to the \(x,y\) coordinates of 25 body parts. From the original dataset we selected 8 channels representing the \(y\) coordinates of the left and right Shoulder, Elbow, Wrist and Hip. This dataset has 4 different classes representing the kind of exercise done, namely Normal (N), Asymmetrical (A), Reduced Range (R) and Arch (Arch). We know from domain experts that the importance of the channels for this dataset is, in descending order: Elbows, Wrists, Shoulders, Hip. High accuracy can be obtained using only the Elbows and Wrists, while it is not possible to achieve high accuracy using only one channel. We later show experiments, both in Section 5 and in the Appendix, to document this behaviour.

## 5 Experiments

In our experiments we aim to understand the strengths and weaknesses of existing methods for explaining multivariate time series classification. As summarised in Table 1, we compared one of the bespoke multivariate methods found (dCAM with dResNet), the popular SHAP, which has the downside of having to be adapted to provide a 2D heat map, and Ridge as a sanity-check baseline. Some other couplings, such as ROCKET paired with dCAM or dResNet paired with SHAP, are not possible because, respectively, dCAM can only explain models having a GAP layer, and the timeXplain library (used for ROCKET-SHAP concatenated) is implemented only for 1D-vector instances (univariate time series).
To make the timeXplain library work with MTS, we apply the following two strategies (Figure 3): (1) **Concatenated**: concatenating all the channels into a single univariate time series. As a result, the output saliency map is also univariate and thus needs to be reshaped. (2) **Channel by Channel**: train and explain one model for each channel independently; the MTSC model in this case is an ensemble of per-channel UTSC models. For SHAP channel-by-channel, we set the number of segments to 10, while for SHAP concatenated the number of segments is set to \(d\times 10\). Since Ridge can only work on univariate datasets, we only used the dataset-concatenation strategy for this classifier.

\begin{table} \begin{tabular}{|c|c|c|} \hline Classifier & Explanation Method & MTS Approach \\ \hline dResNet & dCAM & Bespoke MTSC \\ ROCKET & SHAP & Concatenated \\ ROCKET & SHAP & Channel by Channel \\ Ridge Classifier & Ridge Classifier & Concatenated \\ n/a & Random & n/a \\ \hline \end{tabular} \end{table} Table 1: Summary of the explanation methods tested in this paper.

The output of all explanation methods is a saliency map in the form of either a \(d\times L\) or a \(d\times 10\) matrix (reshaped if necessary). It is important to note that we have only one bespoke method for multivariate time series, dCAM, that computes a saliency map of the same shape as the original time series instance. All the experiments were done on a machine with 32GB RAM, an Intel i7-12700H CPU and an Nvidia GeForce RTX 350 Ti GPU (the GPU was used only for dResNet/dCAM). All the code used to perform the experiments is available in a GitHub repository2.

Footnote 2: [https://github.com/mlgig/Evaluating-Explanation-Methods-for-MTSC](https://github.com/mlgig/Evaluating-Explanation-Methods-for-MTSC)

### Classification Accuracy Analysis

Before diving into the explanations, we first take a look at the accuracy of the classifiers used for producing the explanations.
All the classifiers listed in Table 2 were trained 5 different times (for ROCKET we also tried both normalizing the data and not). The table reports the most accurate ones, i.e., the models used in the experiments, as well as the accuracy on the univariate concatenated datasets.

Figure 3: Strategies to use the timeXplain library in a multivariate scenario, for \(d=3\). In (a), a classifier is trained for each channel; to explain each classifier, \(d\) heat maps of length 10 are produced, and stacking these vectors together results in a matrix of dimension \(d\times 10\). In (b), the time series are concatenated and one single classifier is trained. We explain the classifier using \(d\times 10\) segments and reshape the resulting vector into a 2D matrix having the same shape as in the previous case.

Looking at the table, we can notice that both ROCKET and dResNet consistently achieve high accuracy (with some exceptions on the synthetic datasets): this is an important prerequisite when comparing explanation methods applied to different classifiers, as we do. We note that RidgeCV does particularly well on the synthetic datasets. On Military Press, the multivariate models are more accurate than the univariate ones (on concatenated data). This is expected, since it is difficult to achieve a high accuracy with a single channel for this dataset, so this is a truly multivariate dataset. Concatenating all the channels for Military Press hurts dResNet more, as it loses 9 percentage points of accuracy, while ROCKET loses only 4. For CMJ, the behaviour is reversed, with univariate models being more accurate than the multivariate ones: dResNet has a noticeable 9-percentage-point improvement on the concatenated dataset, while ROCKET gains 1 percentage point.

### Synthetic Data

For the synthetic data, we performed 5-fold cross-validation to train a Logistic Regression classifier for ROCKET, allowing up to 1000 iterations.
For dResNet we used 64 filters, and we trained using the cross-entropy loss and the Adam optimizer with a learning rate of 0.0001. Finally, for RidgeCV we used the standard scikit-learn parameters for cross-validation with 5 folds. Regarding the explanation methods, we used 10 segments for ROCKET in the channel-by-channel scenario and 200 segments in the concatenated one; for dCAM, the number of permutations to evaluate \(k\) was set to 200 (the maximum recommended in [3]).

The steps performed for the synthetic data evaluation are illustrated in Figure 4. The first step is to reshape all the explanations so that they all have the same dimension. Specifically, the saliency maps we obtained from dCAM and Ridge have a shape of \(d\times L=20\times 100\), while the ones from SHAP concatenated and channel-by-channel have a shape of \(d\times\text{n. segments}=20\times 10\). We chose to average 10 consecutive elements for the dCAM and Ridge explanations, as we empirically verified that all the metrics had slight improvements; the alternative was to repeat the same item 10 times in the SHAP explanations. After this stage, all explanations have the shape \(20\times 10\).

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Classifier/Dataset & PseudoPeriodic & Gaussian & AutoRegressive & CMJ & MilitaryPress \\ \hline dResNet multivariate & 1.0 & 0.83 & 0.82 & 0.82 & 0.79 \\ dResNet concatenated & 1.0 & 0.89 & 0.81 & 0.91 & 0.68 \\ \hline ROCKET multivariate & 1.0 & 0.93 & 0.87 & 0.87 & 0.87 \\ ROCKET concatenated & 1.0 & 0.72 & 0.73 & 0.88 & 0.83 \\ ROCKET ch-by-ch & 0.99 & 0.72 & 0.95 & 0.85 & 0.65 \\ \hline RidgeCV & 1.0 & 1.0 & 1.0 & 0.44 & 0.61 \\ \hline \end{tabular} \end{table} Table 2: Accuracy for the models listed in Table 1, plus dResNet concatenated and ROCKET multivariate: this table shows the differences when using multivariate vs univariate (concatenated) datasets.
The second step rescales the explanation weights, as they can have different magnitudes for different instances and different methods. First we take the absolute value of each explanation (to also take into consideration variables that have a negative contribution to the classification) and then we rescale by min-max normalization into the range \([0,100]\).

The third step is to instantiate a ground-truth matrix \(G\) and compare each explanation against it. For the settings described before, this is a binary matrix of shape \(20\times 10\) (the same dimension as the explanations after Step 2); all the elements are set to 0 except the ones \(G_{i}^{j}\) with \(i=1,0\leq j<10\), which are set to 1. In other words, this is a binary matrix describing whether or not a segment is important for the classification. Note that the synthetic dataset parameters, such as the number and range of informative time points and channels, and the explanation method parameters, such as the number of segments, were chosen so that the resulting segments are made up either of only informative time points or of only uninformative time points.

The last step is simply to compute the metrics used for the evaluation, i.e., Precision, Recall, F1-score, PR-AUC and ROC-AUC [3]. For Precision and Recall we had to fix a threshold dividing the values considered uninformative from the ones considered informative: we chose 50 as the middle value between 0 and 100. On the other hand, PR-AUC and ROC-AUC do not fix any threshold, as they average multiple scores computed at different thresholds into one single value. All these metrics computed for the 3 synthetic datasets are reported in Table 3. Looking at the table, it is possible to note that Ridge always has perfect metrics, except for Recall (and consequently F1 score) on the Gaussian dataset.

Figure 4: Steps performed in the synthetic data evaluation when comparing dCAM and SHAP.
In Step 1, dCAM is reshaped into \((20,10)\) by averaging 10 consecutive elements in each channel, while SHAP is untouched. In Step 2, the reshaped matrices are rescaled into the range \([0,100]\). In Step 3, both explanations are compared against the ground-truth matrix \(G\), and finally in Step 4 the scores computed in the previous step are evaluated.

These results, along with those provided in Table 2 (perfect accuracy of Ridge on the 3 synthetic datasets), are very strong evidence that these commonly used benchmarks are not ideal for time series analyses, at least with the parameters described before. We think this is the case due to the way the benchmarks are created, by adding or subtracting a single value to consecutive time points. This means that a simple tabular classifier such as Ridge is enough to perfectly classify these datasets. In conclusion, we recommend against the use of these synthetic benchmarks for analysing time series classification or explanation methods.

Comparing the other methods, SHAP channel-by-channel is most of the time the second-best method, while between dCAM and SHAP concatenated there is no clear winner, as on some metrics the former has better results and on others the opposite holds. Two last points to be noted are that some methods have metrics close to random, especially for Recall, and that the time required for computing the explanations is high, considering that these are small datasets: 50 minutes for dCAM, 3.5 hours for SHAP concatenated, and more than 6 hours for SHAP channel-by-channel.

### Real-world Data

In this section we used somewhat different hyper-parameters: for dResNet the number of filters is now set to 128, as we found better classification results; the number of dCAM permutations \(k\) was set to 6 for CMJ (this dataset has 3 channels, so the number of possible channel permutations is just 6), while it is still 200, i.e., the maximum recommended, for MP, which has 8 channels.
We set the number of timeXplain segments on the concatenated datasets to 30 for CMJ and 80 for MP, so that they are still equal to \(d\times 10\). \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Dataset & XAI Method & Precision & Recall & F1 & PR-AUC & ROC-AUC & Time \\ \hline Pseudo-Periodic & SHAP ch-by-ch & 0.73 & 0.94 & 0.82 & 0.99 & 0.99 & 6.2 h \\ Pseudo-Periodic & SHAP concatenated & 0.92 & 0.66 & 0.77 & 0.99 & 0.99 & 3.5 h \\ Pseudo-Periodic & dCAM & 0.50 & 0.50 & 0.50 & 0.63 & 0.98 & 50 m \\ Pseudo-Periodic & Ridge & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 0 s \\ \hline Gaussian & SHAP ch-by-ch & 0.88 & 0.63 & 0.73 & 0.91 & 0.99 & 6.2 h \\ Gaussian & SHAP concatenated & 0.34 & 0.18 & 0.24 & 0.16 & 0.71 & 3.5 h \\ Gaussian & dCAM & 0.36 & 0.15 & 0.21 & 0.35 & 0.94 & 50 m \\ Gaussian & Ridge & 0.83 & 1.0 & 0.9 & 1.0 & 1.0 & 0 s \\ \hline Auto-Regressive & SHAP ch-by-ch & 0.85 & 0.60 & 0.71 & 0.49 & 0.77 & 6.2 h \\ Auto-Regressive & SHAP concatenated & 0.27 & 0.13 & 0.18 & 0.29 & 0.57 & 3.5 h \\ Auto-Regressive & dCAM & 0.34 & 0.15 & 0.21 & 0.06 & 0.57 & 50 m \\ Auto-Regressive & Ridge & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 0 s \\ \hline All & Random & 0.05 & 0.15 & 0.08 & 0.05 & 0.5 & 0 s \\ \hline \end{tabular} \end{table} Table 3: Scores and runtime of each XAI method for the synthetic datasets: h stands for hours, m for minutes and s for seconds; ch-by-ch stands for channel by channel. Looking at the classifier accuracies in Table 2, we notice that for the two real-world datasets the accuracies achieved by dResNet and ROCKET on the concatenated dataset versions are comparable to, or even better than, those on the original multivariate versions. This suggests that analysing explanation methods for MTSC by turning the multivariate problems into univariate ones can be useful. 
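The channel-concatenation idea can be sketched in a few lines of Python; the dimensions below (\(d=3\) channels, \(L=5\) time points) are toy values for illustration, not the actual CMJ/MP sizes:

```python
# toy multivariate sample: d = 3 channels with L = 5 time points each
d, L = 3, 5
sample = [[float(ch * 10 + t) for t in range(L)] for ch in range(d)]

# channel concatenation: one univariate series of length d * L,
# laid out as [channel 0 | channel 1 | channel 2]
concat = [v for channel in sample for v in channel]
assert len(concat) == d * L

# a univariate explanation of length d * L maps back to (channel, time point)
def to_channel_time(i, length=L):
    return divmod(i, length)

assert to_channel_time(7) == (1, 2)  # index 7 -> channel 1, time point 2
```

A univariate classifier (e.g., Ridge) can then be trained on the concatenated rows, and any univariate explanation it produces can be reshaped back to \((d, L)\) to attribute importance per channel and time point.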
The close accuracies of the original multivariate and the concatenated univariate datasets raise the question of whether these datasets are truly multivariate (i.e., whether the information necessary for correct classification is spread among different channels). This seems to be the case for Military Press, but less so for CMJ. We plan to investigate this point further in future work. In this work we decided to use the concatenated datasets and the methodology developed by [18] to evaluate the explanation methods. In the case of dCAM, which produces a matrix as an explanation, we flatten the matrix to a vector by concatenating its rows and use it like any other univariate explanation. Thus dCAM is obtained in a truly multivariate setting (dResNet is a multivariate classifier and dCAM a multivariate explanation), but reshaped to look like a univariate explanation. The explanations obtained from SHAP and Ridge, on the other hand, are univariate explanations obtained by first concatenating the channels and then running univariate classifiers. For the real-world datasets we do not have a precise explanation ground truth as for the synthetic datasets, but we do have domain knowledge about which channels and parts of the time series are important for the classification. Finally, in this section we did not include SHAP channel by channel in the MP dataset experiment: its accuracy is low (Table 2), so it does not make sense to derive an explanation from it. #### 4.2.2 Evaluation of Explanation Methods. We apply AMEE [18], an explanation evaluation method for the univariate time series classification task, on the CMJ and MP univariate datasets obtained by concatenating all channels. This method measures the faithfulness of an explanation by estimating its impact on a set of independent classifiers (the _referee classifiers_). 
If an explanation correctly points out the important areas of a univariate time series, perturbing those areas will lead to a drop in the accuracies of the referee classifiers. The faithfulness of the explanation is then calculated using the Area Under the Curve (AUC) of the accuracy-drop curves of each referee classifier. AMEE is designed to be robust by employing various perturbation strategies (i.e., how an important area is perturbed and replaced with a new value) and a diverse set of high-performing referee classifiers. The main idea is that masking away the parts of the data that the explanation marks as important should affect a set of high-performing classifiers, leading to a drop in accuracy across the board. For our task, we use the default perturbation strategies with three classifiers included in the standard referee set: MrSEQL [13], WEASEL 2.0 [21] and ROCKET (for more information and results about this methodology we refer the reader to the original publication [18]). Table 4 shows the accuracy of these referee classifiers on the evaluated datasets. The result of the explanation evaluation is presented in Table 5, together with the running times of the methodology and of the explanation methods. The methodology running time depends on the number of perturbation strategies and of employed referees; it is specific to our choice of the three mentioned referees and of four perturbation types, using the Mean and a Gaussian sample drawn from both time-point-dependent (local) and time-point-independent (global) statistics of the test samples. Looking at the latter (the time for running the explanation methods), we notice the high computational cost of SHAP: this was the main reason why we used only 2 real-world datasets for the experiments. We focused on human motion data because in this case we can rely on domain expertise. 
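The perturbation idea behind this kind of evaluation can be illustrated with a minimal sketch (this is only a toy version of the principle, not the actual AMEE implementation, which combines several perturbation strategies and referee classifiers into an AUC score):

```python
def perturb(series, importance, frac, fill=0.0):
    """Replace the top-`frac` most important time points by `fill`."""
    k = max(1, int(frac * len(series)))
    top = sorted(range(len(series)), key=lambda i: -importance[i])[:k]
    out = list(series)
    for i in top:
        out[i] = fill
    return out

def accuracy_drop(classify, X, y, importances, frac):
    """Accuracy before minus accuracy after perturbing the 'important' points."""
    base = sum(classify(s) == t for s, t in zip(X, y)) / len(y)
    Xp = [perturb(s, imp, frac) for s, imp in zip(X, importances)]
    acc = sum(classify(s) == t for s, t in zip(Xp, y)) / len(y)
    return base - acc

# toy setup: the label depends only on time point 2
X = [[0, 0, 1, 0], [0, 0, 1, 0], [0, 0, -1, 0], [0, 0, -1, 0]]
y = [1, 1, 0, 0]
classify = lambda s: 1 if s[2] > 0 else 0

good = [0, 0, 9, 0]  # explanation marking the informative point
bad = [9, 0, 0, 0]   # explanation marking an irrelevant point
drop_good = accuracy_drop(classify, X, y, [good] * 4, frac=0.25)
drop_bad = accuracy_drop(classify, X, y, [bad] * 4, frac=0.25)
# the faithful explanation yields the larger accuracy drop
```

AMEE repeats this for several perturbation fractions and referee classifiers and summarizes the resulting accuracy-drop curves by their AUC.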
From the quantitative evaluation with AMEE, we note that for the CMJ dataset, SHAP concat is the best method, although it is close to a random explanation. dCAM ranks third for this dataset. We note that this dataset is quite noisy due to quiet parts after the jump, and this could explain why SHAP and Random are so close in ranking. For the MP dataset, SHAP concatenated is by far the best method, significantly better than dCAM, as well as Random and Ridge. This is an interesting finding considering that dCAM was proposed to deal with datasets where there are clear dependencies between channels, but for MP this method does not seem to perform so well. We supplement the quantitative ranking with a more detailed qualitative analysis in the Appendix. In short we find that for CMJ, the importance rankings of channels given by SHAP concat and dCAM are the same, while for MP, SHAP provides a ranking more in line with domain knowledge, while dCAM places the least informative channels at the top of the ranking. ## 6 Conclusion In this paper we have investigated explanation methods for MTSC. We studied two very popular explanation methods, dCAM and SHAP, and have provided a quantitative and qualitative analysis of their behavior on synthetic as well as real-world datasets. We found that adaptations of SHAP for MTSC work quite well, and they outperform the recent bespoke MTSC explanation method dCAM. We have also pointed out that a very popular synthetic MTSC benchmark does not seem suitable for MTSC evaluation, since a simple Ridge classifier outperforms all other methods both in classification accuracy and in explanation quality. 
Finally, while SHAP seems to work effectively to point out important time series channels and time points, we highlighted the open problem of the excessive time required to run this method. In future work we plan to investigate the computation time for SHAP, as well as other frameworks for evaluating bespoke explanation methods for MTSC. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Dataset & MrSEQL & ROCKET & WEASEL 2.0 \\ \hline CMJ-concat & 0.76 & 0.88 & 0.92 \\ \hline MP-concat & 0.82 & 0.84 & 0.80 \\ \hline \end{tabular} \end{table} Table 4: Accuracy of referee classifiers for the AMEE evaluation of explanation methods for univariate time series classification. ## Acknowledgments This publication has emanated from research supported in part by a grant from Science Foundation Ireland under Grant number 18/CRT/6183. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
2301.02254
Strong pinning transition with arbitrary defect potentials
Dissipation-free current transport in type II superconductors requires vortices to be pinned by defects in the underlying material. The pinning capacity of a defect is quantified by the Labusch parameter $\kappa \sim f_p/\xi\bar{C}$, measuring the pinning force $f_p$ relative to the elasticity $\bar{C}$ of the vortex lattice, with $\xi$ denoting the coherence length (or vortex core size) of the superconductor. The critical value $\kappa = 1$ separates weak from strong pinning, with a strong defect at $\kappa > 1$ able to pin a vortex on its own. So far, this weak-to-strong pinning transition has been studied for isotropic defect potentials, resulting in a critical exponent $\mu = 2$ for the onset of the strong pinning force density $F_\mathrm{pin} \sim n_p f_p (\xi/a_0)^2(\kappa-1)^\mu$, with $n_ p$ denoting the density of defects and $a_0$ the intervortex distance. The behavior changes dramatically when studying anisotropic defects with no special symmetries: the strong pinning then originates out of isolated points with length scales growing as $\xi (\kappa - 1)^{1/2}$, resulting in a different force exponent $\mu = 5/2$. Our analysis of the strong pinning onset for arbitrary defect potentials $e_p(\mathbf{R})$, with $\mathbf{R}$ a planar coordinate, makes heavy use of the Hessian matrix describing its curvature and leads us to interesting geometrical structures. Both, onset and merger points are defined by local differential properties of the Hessian's determinant $D(\mathbf{R})$, specifically, its minima and saddle points. Extending our analysis to the case of a random two-dimensional pinning landscape, we discuss the topological properties of unstable and bistable regions as expressed through the Euler characteristic, with the latter related to the local differential properties of $D(\mathbf{R})$ through Morse theory.
Filippo Gaggioli, Gianni Blatter, Martin Buchacek, Vadim B. Geshkenbein
2023-01-05T19:00:01Z
http://arxiv.org/abs/2301.02254v1
# Strong pinning transition with arbitrary defect potentials ###### Abstract Dissipation-free current transport in type II superconductors requires vortices, the topological defects of the superfluid, to be pinned by defects in the underlying material. The pinning capacity of a defect is quantified by the Labusch parameter \(\kappa\sim f_{p}/\xi\bar{C}\), measuring the pinning force \(f_{p}\) relative to the elasticity \(\bar{C}\) of the vortex lattice, with \(\xi\) denoting the coherence length (or vortex core size) of the superconductor. The critical value \(\kappa=1\) separates weak from strong pinning, with a strong defect at \(\kappa>1\) able to pin a vortex on its own. So far, this weak-to-strong pinning transition has been studied for isotropic defect potentials, resulting in a critical exponent \(\mu=2\) for the onset of the strong pinning force density \(F_{\rm pin}\sim n_{p}f_{p}(\xi/a_{0})^{2}(\kappa-1)^{\mu}\), with \(n_{p}\) denoting the density of defects and \(a_{0}\) the intervortex distance. This result is owed to the special rotational symmetry of the defect producing a _finite_ trapping area \(S_{\rm trap}\sim\xi^{2}\) at the strong-pinning onset. The behavior changes dramatically when studying anisotropic defects with no special symmetries: the strong pinning then originates out of isolated points with length scales growing as \(\xi(\kappa-1)^{1/2}\), resulting in a different force exponent \(\mu=5/2\). Our analysis of the strong pinning onset for arbitrary defect potentials \(e_{p}({\bf R})\), with \({\bf R}\) a planar coordinate, makes heavy use of the Hessian matrix describing its curvature and leads us to interesting geometrical structures: the strong pinning onset is characterized by the appearance of _unstable_ areas of elliptical shape whose boundaries mark the locations where vortices jump. 
The associated locations of asymptotic vortex positions define areas of _bistable_ vortex states; these bistable regions assume the shape of a crescent with boundaries that correspond to the spinodal lines in a thermodynamic first-order transition and cusps corresponding to critical endpoints. Both, unstable and bistable areas grow with \(\kappa>1\) and join up into larger domains; for a uniaxially anisotropic defect, two face to face crescents merge into the ring-shaped area previously encountered for the isotropic defect. Both, onset and merger points are defined by local differential properties of the Hessian's determinant \(D({\bf R})\), specifically, its minima and saddle points. Extending our analysis to the case of a random two-dimensional pinning landscape, we discuss the topological properties of unstable and bistable regions as expressed through the Euler characteristic, with the latter related to the local differential properties of \(D({\bf R})\) through Morse theory. ## I Introduction Vortex pinning by material defects [1] determines the phenomenological properties of all technically relevant (type II) superconducting materials, e.g., their dissipation-free transport or magnetic response. Similar applies to the pinning of dislocations in metals [2] or domain walls in magnets [3], with the commonalities found in the topological defects of the ordered phase being pinned by defects in the host material: these topological defects are the vortices [4], dislocations [5], or domain walls [6; 7] appearing within the respective ordered phases--superconducting, crystalline, or magnetic. The theory describing the pinning of topological defects has been furthest developed in superconductors, with the strong pinning paradigm [8; 9] having been strongly pushed during the last decade [10; 11; 12; 13]. In its simplest form, it boils down to the setup involving a single vortex subject to one defect and the cage potential [14; 15] of other vortices. 
While still exhibiting a remarkable complexity, it produces quantitative results which benefit the comparison between theoretical predictions and experimental findings [16]. So far, strong pinning has focused on isotropic defects, with the implicit expectation that more general potential shapes would produce small changes. This is not the case, as first demonstrated by Buchacek et al. [17] in their study of correlation effects between defects that can be mapped to the problem of a string pinned to an anisotropic pinning potential. In the present work, we generalize strong pinning theory to defect potentials of arbitrary shape. We find that this simple generalization has pronounced (geometric) effects near the onset of strong pinning that even change the growth of the pinning force density \(F_{\rm pin}\propto(\kappa-1)^{\mu}\) with increasing pinning strength \(\kappa>1\) in a qualitative manner, changing the exponent \(\mu\) from \(\mu=2\) for isotropic defects [8; 10] to \(\mu=5/2\) for general anisotropic pinning potentials. The pinning of topological defects poses a rather complex problem that has been attacked within two paradigms, weak-collective- and strong pinning. These have been developed in several stages: originating in the sixties of the last century, weak pinning and creep [9] has been further developed with the discovery of high temperature superconductors as a subfield of vortex matter physics [18]. Strong pinning was originally introduced by Labusch [8] and by Larkin and Ovchinnikov [9] and has been further developed recently with several works studying critical currents [10], current-voltage characteristics [11; 12], magnetic field penetration [12; 20; 21], and creep [13; 22; 23]; results on numerical simulations involving strong pins have been reported in Refs. [24; 25; 23]. 
The two theories come together at the onset of strong pinning: an individual defect is qualified as weak if it is unable to pin a vortex, i.e., a vortex traverses the pin smoothly. Crossing a strong pin, however, the vortex undergoes jumps that mathematically originate in distinct bistable vortex configurations, 'free' and 'pinned'. Quantitatively, the onset of strong pinning is given by the Labusch criterion \(\kappa=1\), with the Labusch parameter \(\kappa\equiv\max[-e_{p}^{\prime\prime}]/\bar{C}\sim f_{p}/\xi\bar{C}\), the dimensionless ratio of the negative curvature \(e_{p}^{\prime\prime}\) of the isotropic pinning potential and the effective elasticity \(\bar{C}\) of the vortex lattice. Strong pinning appears for \(\kappa>1\), i.e., when the lattice is soft compared to the curvatures in the pinning landscape. So far, the strong pinning transition at \(\kappa=1\) has been described for defects with isotropic pinning potentials; it can be mapped [10] to the magnetic transition in the \(h\)-\(T\) (field-temperature) space, with the strong-pinning phenomenology at \(\kappa>1\) corresponding to the first-order Ising magnetic transition at \(T<T_{c}\) and the critical point at \(T=T_{c}\) corresponding to the strong pinning transition at \(\kappa=1\). The role of the reduced temperature \(T/T_{c}\) is then assumed by the Labusch parameter \(\kappa\) and the bistabilities associated with the ferromagnetic phases at \(T/T_{c}<1\) translate to the bistable pinned and free vortex states at \(\kappa>1\), with the bistability disappearing on approaching the critical point, \(T/T_{c}=1\) and \(\kappa=1\), respectively. A first attempt to account for correlations between defects was made in Ref. [17]. The latter analysis takes into account the enhanced pinning force exerted by pairs of isotropic defects that can be cast in the form of _anisotropic effective_ pinning centers. 
Besides shifting the onset of strong pinning to \(\kappa=1/2\) (with \(\kappa\) defined for one individual defect), the analysis revealed quite astonishing (geometric) features that appeared as a consequence of the symmetry reduction in the pinning potential. In the present paper, we take a step back and study the transition to strong pinning for anisotropic defect potentials \(e_{p}({\bf R})\), with \({\bf R}\) a planar coordinate, see Fig. 1. Note that collective effects of many weak defects can add up to effectively strong pins that smoothen the transition at \(\kappa=1\), thereby turning the strong pinning transition into a weak-to-strong pinning crossover. We find that the onset of strong pinning proceeds quite differently when going from the isotropic defect to the anisotropic potential of a generic defect without special symmetries and further on to a general random pinning landscape. The simplest comparison is between an isotropic and a uniaxially anisotropic defect, acting on a vortex lattice that is directed along the magnetic field \({\bf B}\parallel{\bf e}_{z}\) chosen parallel to the \(z\)-axis; for convenience, we place the defect at the origin of our coordinate system \({\bf r}=({\bf R},z)\) and have it act only in the \(z=0\)-plane. In this setup, see Fig. 1, the pinning potential \(e_{p}({\bf R})\) acts on the _nearest_ vortex with a force \({\bf f}_{p}({\bf R})=-\nabla_{\bf R}e_{p}|_{z=0}\) attracting the vortex to the defect; the presence of the _other vortices_ constituting the lattice renormalizes the vortex elasticity \(\bar{C}\). With the pinning potential acting in the \(z=0\) plane, the vortex is deformed with a pronounced cusp at \(z=0\), see Fig. 1; we denote the tip position of the vortex where the cusp appears by \(\tilde{\bf R}\), while the asymptotic position of the vortex at \(z\to\pm\infty\) is fixed at \(\bar{\bf R}\). 
With this setup the problem can be reduced to a planar one, with the tip coordinate \(\tilde{\bf R}\) and the asymptotic coordinate \(\bar{\bf R}\) determining the location and full shape (and hence the pinning force) of the vortex line. In the case of an _isotropic_ pin, e.g., produced by a point-like defect [11], strong pinning first appears on a circle of finite radius \(R_{m}\sim\xi\), typically of order of the vortex core radius \(\xi\), see left panel of Fig. 2(a). This is owed to the fact that, given the radial symmetry, the Labusch criterion \(\kappa=\max_{R}[-e_{p}^{\prime\prime}(R)]/\bar{C}=1\) is satisfied on a circle \(R=R_{m}\) where the (negative) curvature \(-e_{p}^{\prime\prime}>0\) is maximal. Associated with the radius \(R_{m}\) where the tip is located at \(\kappa=1\), \(\tilde{R}(\kappa=1)\equiv\tilde{R}_{m}=R_{m}\), there is an asymptotic vortex position \(\bar{R}(\kappa=1)=\bar{R}_{m}>\tilde{R}_{m}\). Increasing the Labusch parameter beyond \(\kappa=1\), the circle of radius \(\bar{R}_{m}\) transforms into a ring \(\bar{R}_{-}<\bar{R}<\bar{R}_{+}\) of finite width. Vortices placed inside the ring at small distances \(\bar{R}<\bar{R}_{-}\) near the defect are qualified as 'pinned', while vortices at large distances \(\bar{R}>\bar{R}_{+}\) away from the pin are described as 'free', see right panel in Fig. 2(a); physically, we denote a vortex configuration as 'free' when it is smoothly connected to the asymptotic undeformed state, while a 'pinned' vortex is localized to a finite region around the defect. Vortices placed inside the bistable ring at \(\bar{R}_{-}<\bar{R}<\bar{R}_{+}\) acquire two possible states, pinned and free (colored magenta in Fig. 2, the superposition of red (pinned state) and blue (free state) colors). Figure 1: Sketch of a vortex interacting with a defect located at the origin. The vortex approaches the asymptotic position \(\bar{\bf R}\) at \(z\to\pm\infty\) and is attracted to the defect residing at the origin; the cusp at \(z=0\) defines the tip position \(\tilde{\bf R}\) and its angle quantifies the pinning strength. The onset of strong pinning for the _uniaxially anisotropic_ defect proceeds in several stages. Let us consider an illustrative example and assume a defect with an anisotropy aligned with the axes and a steeper potential along \(x\). In this situation, strong pinning as defined by the criterion \(\kappa_{m}=1\), with a properly generalized Labusch parameter \(\kappa_{m}\), appears out of two points (\(\pm\bar{x}_{m},0\)) where the Labusch criterion \(\kappa_{m}=1\) is met first, see Fig. 2(b) left. Increasing \(\kappa_{m}\) beyond unity, two bistable domains spread around these points and develop two crescent-shaped areas (with their large extent along \(\bar{y}\)) in asymptotic \(\bar{\mathbf{R}}\)-space, see Fig. 2(b) right. Vortices with asymptotic positions within these crescent-shaped regions experience bistability, while outside these regions the vortex state is unique. Classifying the bistable solutions as 'free' and 'pinned' is not possible, with the situation resembling the one around the gas-liquid critical point with a smooth crossover (from blue to white to red) between phases. With \(\kappa_{m}\) increasing further, the cusps of the crescents approach one another. As the arms of the two crescents touch and merge at a sufficiently large value of \(\kappa_{m}\), the topology of the bistable area changes: the two merged crescents now define a ring-like geometry and separate \(\bar{\mathbf{R}}\)-space into an inside region where vortices are pinned, an outside region where vortices are free, and a bistable region with pinned and free states inside the ring-like domain. As a result, the pinning geometry of the isotropic defect is recovered, though with the perfect ring replaced by a deformed ring with varying width. 
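The onset at two isolated points on the steep axis can be illustrated numerically. The sketch below uses an anisotropic Lorentzian as an illustrative stand-in for a uniaxial defect potential (not the paper's Eq. (94)) in units \(e_{p}=\xi=1\): the generalized Labusch criterion involves the most negative curvature of the pinning potential, i.e., the smaller Hessian eigenvalue \(\lambda_{-}(\mathbf{R})\), and a grid search for the maximum of \(-\lambda_{-}\) lands on the \(x\)-axis at two symmetric points \((\pm x_{m},0)\):

```python
import math

e0, xi, eps = 1.0, 1.0, 0.3  # illustrative units; eps > 0 makes x the steep axis

def e_p(x, y):
    # anisotropic Lorentzian stand-in for a uniaxial defect potential
    return -e0 / (1 + ((1 + eps) * x**2 + y**2) / (2 * xi**2))

def lambda_minus(x, y, h=1e-4):
    # smaller (most negative) Hessian eigenvalue, via central finite differences
    dxx = (e_p(x + h, y) - 2 * e_p(x, y) + e_p(x - h, y)) / h**2
    dyy = (e_p(x, y + h) - 2 * e_p(x, y) + e_p(x, y - h)) / h**2
    dxy = (e_p(x + h, y + h) - e_p(x + h, y - h)
           - e_p(x - h, y + h) + e_p(x - h, y - h)) / (4 * h**2)
    tr, det = dxx + dyy, dxx * dyy - dxy**2
    return tr / 2 - math.sqrt(max(tr**2 / 4 - det, 0.0))

# grid search: where is the negative curvature -lambda_- strongest?
grid = [k * 0.05 for k in range(-60, 61)]
x_m, y_m = max(((x, y) for x in grid for y in grid),
               key=lambda p: -lambda_minus(*p))
# the maximum sits on the steep axis, at one of the two points (±x_m, 0)
```

For this stand-in potential the maximum of \(-\lambda_{-}\) works out to \((1+\epsilon)\,e_{0}/4\xi^{2}\) at \(x_{m}=\sqrt{2/(1+\epsilon)}\,\xi\), so the strong-pinning condition is met first at these two isolated points rather than on a whole circle.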
Using the language describing a thermodynamic first-order transition, the cusps of the crescents correspond to critical points while their boundaries map to spinodal lines; the merging of critical points changing the topology of the bistable regions of the pinning landscape goes beyond the standard thermodynamic analogue of phase diagrams. The bistable area defines the trapping area where vortices get pinned to the defect; this trapping area is one of the relevant quantities determining the pinning force density \(F_{\text{pin}}\), the other being the jumps in energy associated with the difference between the bistable states [8; 10], see the discussion in Secs. II.3, II.5, and III.7 below. It is the change in the bistable- and hence trapping geometry that modifies the exponent \(\mu\) in \(F_{\text{pin}}\propto(\kappa-1)^{\mu}\), replacing the exponent \(\mu=2\) for isotropic defects by the new exponent \(\mu=5/2\) for general anisotropic pinning potentials. While the existence of bistable regions \(\mathcal{B}_{\bar{\mathbf{R}}}\) in the space of asymptotic vortex positions \(\bar{\mathbf{R}}\) is an established element of strong pinning theory by now, in the present paper, we introduce the new concept of unstable domains \(\mathcal{U}_{\tilde{\mathbf{R}}}\) in tip space. The two coordinates \(\tilde{\mathbf{R}}\) and \(\bar{\mathbf{R}}\) represent dual variables in the sense of the thermodynamic analog, with the asymptotic coordinate \(\bar{\mathbf{R}}\) corresponding to the driving field \(h\) in the Ising model and the tip position \(\tilde{\mathbf{R}}\) replacing the magnetic response \(m\); from a thermodynamic perspective it is then quite natural to change view by going back and forth between intensive (\(h\)) and extensive (\(m\)) variables. In tip space \(\tilde{\mathbf{R}}\), the onset of pinning appears at isolated points \(\tilde{\mathbf{R}}_{m}\) that grow into ellipses as \(\kappa\) is increased beyond unity. 
These ellipses describe _unstable areas_ \(\mathcal{U}_{\tilde{\mathbf{R}}}\) in the \(\tilde{\mathbf{R}}\)-plane across which vortex tips jump when flipping between bistable states; they relate to the _bistable crescent-shaped_ areas \(\mathcal{B}_{\bar{\mathbf{R}}}\) in asymptotic space through the force balance equation; the latter determines the vortex shape, with elastic and pinning forces compensating one another. Figure 2: Illustration of bistable regions in asymptotic \(\bar{\mathbf{R}}\)-space for a vortex pinned to a defect located at the origin. (a) For an isotropic defect (Lorentzian shape with \(\kappa=1,\ 1.5\)), pinning appears at \(\kappa=1\) along a ring with radius \(\bar{R}_{m}\), with the red area corresponding to pinned states and free states colored in blue. With increasing pinning strength \(\kappa\), see right panel at \(\kappa=1.5\), a bistable region (in magenta) appears in a ring geometry, with vortices residing inside, \(\bar{R}<\bar{R}_{-}\), being pinned and vortices outside, \(\bar{R}>\bar{R}_{+}\), remaining free. Vortices with asymptotic positions inside the ring (\(\bar{R}_{-}<\bar{R}<\bar{R}_{+}\)) exhibit bistable states, pinned and free. The dashed circle \(\bar{R}_{0}\) marks the crossing of pinned and free branches, see Fig. 4. (b) For a uniaxially anisotropic defect, see Eq. (94) with \(\epsilon=0.3\) and largest (negative) curvature along \(x\), pinning appears in two points (\(\pm\bar{x}_{m},0\)) along the \(x\)-axis. As the pinning strength increases beyond unity, see right panel, bistable regions (magenta) develop in a crescent-shaped geometry. Pinned- and free-like states are smoothly connected as indicated by the crossover of colors (see Sec. III.3 for the precise description of coloring in terms of an ‘order parameter’). As \(\kappa_{m}\) further increases, the cusps of the two crescents merge on the \(y\)-axis, changing the topology of the \(\bar{\mathbf{R}}\)-plane through separation into inner and outer regions (not shown). A ring-like bistable region appears as in (a), with the inner (outer) region corresponding to unique vortex states that are pinned (free), while vortices residing inside the ring-shaped domain exhibit bistable states, pinned and free. The unstable regions \(\mathcal{U}_{\tilde{\mathbf{R}}}\) in tip space are actually more directly accessible than the bistable regions \(\mathcal{B}_{\bar{\mathbf{R}}}\) in asymptotic space and play an equally central role in the discussion of the strong pinning landscape. The simplification introduced by the concept of unstable domains \(\mathcal{U}_{\tilde{\mathbf{R}}}\) in tip space \(\tilde{\mathbf{R}}\) is particularly evident when going from individual defects as described above to a generic pinning landscape. Here, we focus on a model pinning potential landscape (or short pinscape) confined to the two-dimensional (2D) \(\mathbf{R}\) plane at \(z=0\); such a pinscape can be produced, e.g., by defects that reside in the \(z=0\) plane. The pinned vortex tip \(\tilde{\mathbf{R}}\) then still resides in the \(z=0\) plane as well and the strong pinning problem remains two-dimensional. For a 2D random pinscape, unstable ellipses appear sequentially out of different (isolated) points and at different pinning strengths \(\kappa_{m}\); their assembly defines the unstable area \(\mathcal{U}_{\tilde{\mathbf{R}}}\), with each newly appearing ellipse changing the topology of \(\mathcal{U}_{\tilde{\mathbf{R}}}\), specifically, its number of components. Increasing \(\kappa_{m}\), the ellipses first grow in size, then deform away from their original elliptical shapes, and finally touch and merge in a hyperbolic geometry. Such mergers change, or more precisely reduce, the number of components in \(\mathcal{U}_{\tilde{\mathbf{R}}}\) and hence correspond again to topological transitions as described by a change in the Euler characteristic \(\chi\) associated with the shape of \(\mathcal{U}_{\tilde{\mathbf{R}}}\). 
Furthermore, these mergers tend to produce \(\mathcal{U}_{\tilde{\mathbf{R}}}\) shapes that are non-simply connected, again implying a topological transition in \(\mathcal{U}_{\tilde{\mathbf{R}}}\) with a change in \(\chi\). Such non-simply connected parts of \(\mathcal{U}_{\tilde{\mathbf{R}}}\) separate the tip space into ‘inner’ and ‘outer’ regions, which allows us to define proper ‘pinned’ states (localized near a potential minimum) in the ‘inner’ of \(\mathcal{U}_{\tilde{\mathbf{R}}}\), while ‘free’ states (smoothly connected to asymptotically undeformed vortices) occupy the regions outside of \(\mathcal{U}_{\tilde{\mathbf{R}}}\). The discussion below is dominated by three mathematical tools: the first is the Hessian matrix \(\mathrm{H}(\mathbf{R})\) of the pinning potential [17; 26] \(e_{p}(\mathbf{R})\), its eigenvalues \(\lambda_{\pm}(\mathbf{R})\) and eigenvectors \(\mathbf{v}_{\pm}(\mathbf{R})\), its determinant \(\mathrm{det}[\mathrm{H}](\mathbf{R})\) and trace \(\mathrm{tr}[\mathrm{H}](\mathbf{R})\). The Hessian matrix involves the curvatures \(\mathrm{H}_{ij}=\partial_{i}\partial_{j}e_{p}(\mathbf{R})\), \(i,j\in\{x,y\}\), of the pinning potential, which in turn are the quantities determining strong pinning, as can be easily conjectured from the form of the Labusch parameter \(\kappa\propto-e_{p}^{\prime\prime}\) for the isotropic defect. The second tool is the Landau-type expansion of the total pinning energy near the strong-pinning onset around \(\tilde{\mathbf{R}}_{m}\) at \(\kappa_{m}=1\) (appearance of a critical point) as well as near merging around \(\tilde{\mathbf{R}}_{s}\) at \(\kappa(\tilde{\mathbf{R}}_{s})\equiv\kappa_{s}=1\) (disappearance of a pair of critical points); the standard manipulations as they are known from the description of a thermodynamic first-order phase transition produce most of the new results. 
Third, the topological structure of the unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\) associated with a generic 2D pinning landscape, i.e., its components and their connectedness, is conveniently described through its Euler characteristic \(\chi\) with the help of Morse theory. The structure of the paper is as follows: In Section II, we briefly introduce the concepts of strong pinning theory with a focus on the isotropic defect. The onset of strong pinning by a defect of arbitrary shape is presented in Sec. III; we start with a translation and extension of the strong pinning ideas from the isotropic situation to a general anisotropic one, that leads us to the Hessian analysis of the pinning potential as our basic mathematical tool. Close to onset, we find (using a Landau-type expansion, see Sec. III.1) that the unstable (Sec. III.2) and bistable (Sec. III.3) domains are associated with minima of the determinant of the Hessian curvature matrix and assume the shape of an ellipse and a crescent, respectively. Due to the anisotropy, the geometry of the trapping region depends non-trivially on the Labusch parameter and the critical exponent for the pinning force is changed from \(\mu=2\) to \(\mu=5/2\), see Sec. III.7. The analytic solution of the strong pinning onset for a weakly uniaxial defect presented in Sec. IV leads us to define new hyperbolic points associated with saddle points of the determinant of the Hessian curvature matrix. These hyperbolic points describe the merging of unstable and bistable domains, see Sec. V.1, and allow us to relate the new results for the anisotropic defect to our established understanding of isotropic defects. In a final step, we extend the local perspective on the pinscape, as acquired through the analysis of minima and saddles of the determinant of the Hessian curvature matrix, to a global description in terms of the topological characteristics of the unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\): in Sec. 
VI, we discuss strong pinning in a two-dimensional pinning potential of arbitrary shape, e.g., as it appears when multiple pinning defects overlap (though all located in one plane). We follow the evolution of the unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\) with increasing pinning strength \(\kappa_{m}\) and express its topological properties through the Euler characteristic \(\chi\); the latter is related to the local differential properties of the pinscape's curvature, its minima, saddles, and maxima, through Morse theory. Finally, in Appendix A, we map the two-dimensional Landau-type theories (involving two order parameters) describing onset and merging to effective one-dimensional Landau theories and rederive previous results following standard statistical-mechanics calculations as they are performed in the analysis of the critical point in the van der Waals gas.

## II Strong pinning theory

We start with a brief introduction to strong pinning theory, keeping a focus on the transition region at moderate values of \(\kappa>1\). We consider an isotropic defect (Sec. II.1) and determine the unstable and bistable ring domains for this situation in Sec. II.2. We derive the general expression for the pinning force density \(F_{\mathrm{pin}}\) in Sec. II.3, determine the relevant scales of the strong pinning characteristic near the crossover in Sec. II.4, and apply the results to derive the scaling \(F_{\mathrm{pin}}\propto(\kappa-1)^{2}\) for the isotropic defect (Sec. II.5). In Sec. II.6, we relate the strong pinning theory for the isotropic defect to the Landau mean-field description for the Ising model in a magnetic field.

### Isotropic defect

The standard strong-pinning setup involves a vortex lattice directed along \(z\) with a lattice constant \(a_{0}\) determined by the induction \(B=\phi_{0}/a_{0}^{2}\) that is interacting with a dilute set of randomly arranged defects of density \(n_{p}\).
This many-body problem can be reduced [10; 13; 20] to a much simpler effective problem involving an elastic string with effective elasticity \(\bar{C}\) that is pinned by a defect potential \(e_{p}(\mathbf{R})\) acting at the origin, as described by the energy function \[e_{\rm pin}(\tilde{\mathbf{R}};\bar{\mathbf{R}})=\frac{\bar{C}}{2}(\tilde{\mathbf{R}}-\bar{\mathbf{R}})^{2}+e_{p}(\tilde{\mathbf{R}}) \tag{1}\] depending on the tip and asymptotic coordinates \(\tilde{\mathbf{R}}\) and \(\bar{\mathbf{R}}\) of the vortex, see Fig. 1. The energy (or Hamiltonian) \(e_{\rm pin}(\tilde{\mathbf{R}};\bar{\mathbf{R}})\) of this setup involves an elastic term and the pinning energy \(e_{p}(\mathbf{R})\) evaluated at the location \(\tilde{\mathbf{R}}\) of the vortex tip. We denote the depth of the pinning potential by \(e_{p}\). A specific example is the point-like defect that produces an isotropic pinning potential which is determined by the form of the vortex [11] and assumes a Lorentzian shape \(e_{p}(R)=-e_{p}/(1+R^{2}/2\xi^{2})\) with \(R=|\mathbf{R}|\); in Sec. III below, we will consider pinning potentials of arbitrary shape \(e_{p}(\mathbf{R})\) but assume a small (compared to the coherence length \(\xi\)) extension along \(z\). 'Integrating out' the vortex lattice, the remaining string or vortex is described by the effective elasticity \(\bar{C}\approx\nu\varepsilon(a_{0}^{2}/\lambda_{\rm L})\sqrt{c_{66}c_{44}(0)}\sim\varepsilon\varepsilon_{0}/a_{0}\). Here, \(\varepsilon_{0}=(\phi_{0}/4\pi\lambda_{\rm L})^{2}\) is the vortex line energy, \(\lambda_{\rm L}\) denotes the London penetration depth, \(\varepsilon<1\) is the anisotropy parameter for a uniaxial material [18], and \(\nu\) is a numerical factor, see Refs. [23; 25]. The most simple pinning geometry is for a vortex that traverses the defect through its center.
Given the rotational symmetry of the isotropic defect, we choose a vortex that impacts the defect in a head-on collision from the left with asymptotic coordinate \(\bar{\mathbf{R}}=(\bar{x},0)\) and increase \(\bar{x}\) along the \(x\)-axis; finite impact parameters \(\bar{y}\neq 0\) will be discussed later. The geometry then simplifies considerably and involves the asymptotic vortex position \(\bar{x}\) and the tip position \(\tilde{x}\) of the vortex, reducing the problem to a one-dimensional one; the full geometry of the deformed string can be determined straightforwardly [20] once the tip position \(\tilde{x}\) has been found. The latter follows from minimizing (1) with respect to \(\tilde{x}\) at fixed asymptotic position \(\bar{x}\) and leads to the non-linear equation \[\bar{C}(\tilde{x}-\bar{x})=-\partial_{x}e_{p}|_{x=\tilde{x}}=f_{p}(\tilde{x}). \tag{2}\] This can be solved graphically, see Fig. 3, and produces either a single solution or multiple solutions--the appearance of multiple tip solutions is the signature of strong pinning. The relevant parameter that distinguishes the two cases is found by taking the derivative of (2) with respect to \(\bar{x}\), which leads to \[\partial_{\bar{x}}\tilde{x}=\frac{1}{1-f_{p}^{\prime}(\tilde{x})/\bar{C}}, \tag{3}\] where the prime denotes the derivative, \(f_{p}^{\prime}(x)=\partial_{x}f_{p}(x)=-\partial_{x}^{2}e_{p}(x)\). Strong pinning involves vortex instabilities, i.e., jumps in the tip coordinate \(\tilde{x}\), that appear when the denominator in (3) vanishes; this leads us to the strong pinning parameter \(\kappa\) first introduced by Labusch [8], \[\kappa=\max_{\tilde{x}}\frac{f_{p}^{\prime}(\tilde{x})}{\bar{C}}=\frac{f_{p}^{\prime}(\tilde{x}_{m})}{\bar{C}}, \tag{4}\] with \(\tilde{x}_{m}\) defined as the position of maximal force derivative \(f_{p}^{\prime}\), i.e., \(f_{p}^{\prime\prime}(\tilde{x}_{m})=0\), or maximal negative curvature \(-e_{p}^{\prime\prime}\) of the defect potential.
Defining the force scale \(f_{p}\equiv e_{p}/\xi\) and estimating the force derivative or curvature \(f_{p}^{\prime}=-e_{p}^{\prime\prime}\sim f_{p}/\xi\) produces a Labusch parameter \(\kappa\sim e_{p}/\bar{C}\xi^{2}\); for the Lorentzian potential, we find that \(f_{p}^{\prime}(\tilde{x}_{m})=e_{p}/4\xi^{2}\) at \(\tilde{x}_{m}=\sqrt{2}\,\xi\) and hence \(\kappa=e_{p}/4\bar{C}\xi^{2}\). We see that strong pinning is realized for either large pinning energy \(e_{p}\) or small effective elasticity \(\bar{C}\).

Figure 3: Graphical illustration [13] of the self-consistent solution of the microscopic force-balance equation Eq. (2) for a Lorentzian potential with \(\kappa=2.5\). The vortex coordinates \(\tilde{x}\) and \(\bar{x}\) are expressed in units of \(\xi\). When moving the asymptotic vortex position \(\bar{x}\) across the bistable interval \([\bar{x}_{-},\bar{x}_{+}]\), we obtain three solutions describing pinned \(\tilde{x}_{\rm p}\lesssim\xi\), free \(\tilde{x}_{\rm f}\) close to \(\bar{x}\), and unstable \(\tilde{x}_{\rm us}\) states; they define the corresponding pinned (red), free (blue), and unstable (black dotted) branches. The tip positions \(\tilde{x}_{\rm p+}\) and \(\tilde{x}_{\rm f-}\) at the edges of the bistable interval denote jump points where the vortex tip turns unstable, see Eq. (3); they are defined by the condition \(f_{p}^{\prime}(\tilde{x}_{\rm p+})=f_{p}^{\prime}(\tilde{x}_{\rm f-})=\bar{C}\) (black solid dots). The associated positions \(\tilde{x}_{\rm f+}\) and \(\tilde{x}_{\rm p-}\) denote the tip landing points after the jump (open circles); they are given by the second solution of Eq. (2) at the same asymptotic position \(\bar{x}\). The open red/blue circles and the cross mark the positions of metastable minima and the unstable maximum in Fig. 4. The lower right inset shows the weak-pinning situation at \(\kappa<1\), here implemented with a larger \(\bar{C}\), where the tip solution \(\tilde{x}\) is unique for all \(\bar{x}\).
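The quoted Lorentzian numbers are easy to check numerically. The Python sketch below (not part of the original text; it assumes units \(\xi=e_{p}=1\)) locates the point of maximal force derivative on a fine grid and recovers \(\tilde{x}_{m}=\sqrt{2}\,\xi\) and \(f_{p}^{\prime}(\tilde{x}_{m})=e_{p}/4\xi^{2}\):

```python
import numpy as np

# Sketch (assumed units xi = e_p = 1): verify the strong-pinning numbers
# quoted for the Lorentzian well e_p(x) = -e_p / (1 + x^2 / 2 xi^2).
xi, ep = 1.0, 1.0

def f_p_prime(x):
    """Force derivative f_p'(x) = -e_p''(x), computed analytically."""
    g = 1.0 + x**2 / (2 * xi**2)
    return -ep / xi**2 * (g - 2 * x**2 / xi**2) / g**3

x = np.linspace(0.0, 5.0, 200001)
i = np.argmax(f_p_prime(x))
x_m = x[i]                        # position of maximal force derivative
kappa_times_C = f_p_prime(x_m)    # equals e_p / 4 xi^2, so kappa = e_p / 4 Cbar xi^2

print(x_m, kappa_times_C)
```

With these units, `kappa_times_C` is the product \(\kappa\bar{C}\), so any choice of \(\bar{C}\) yields \(\kappa=e_{p}/4\bar{C}\xi^{2}\) as stated in the text.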
As follows from Fig. 3 (inset), for \(\kappa<1\) (large \(\bar{C}\)) the solution to Eq. (2) is unique for all values of \(\bar{x}\) and pinning is weak, while for \(\kappa>1\) (small \(\bar{C}\)), multiple solutions appear in the vicinity of \(\tilde{x}_{m}\) and pinning is strong. These multiple solutions appear in a finite interval \(\bar{x}\in[\bar{x}_{-},\bar{x}_{+}]\) and we denote them by \(\tilde{x}=\tilde{x}_{\rm f},\tilde{x}_{\rm p},\tilde{x}_{\rm us}\), see Fig. 3; they are associated with free (weakly deformed vortex with \(\tilde{x}_{\rm f}\) close to \(\bar{x}\)), pinned (strongly deformed vortex with \(\tilde{x}_{\rm p}<\xi\)), and unstable vortex states. Inserting the solutions \(\tilde{x}(\bar{x})=\tilde{x}_{\rm f}(\bar{x}),\tilde{x}_{\rm p}(\bar{x}),\tilde {x}_{\rm us}(\bar{x})\) of Eq. (2) at a given vortex position \(\bar{x}\) back into the pinning energy \(e_{\rm pin}(\bar{x};\bar{x})\), we find the energies of the corresponding branches, \[e_{\rm pin}^{\rm i}(\bar{x})\equiv e_{\rm pin}[\tilde{x}_{\rm i}(\bar{x}); \bar{x}],\quad{\rm i=f,p,us}. \tag{5}\] The pair \(e_{p}(\bar{x})\) and \(e_{\rm pin}^{\rm i}(\bar{x})\) of energies in tip- and asymptotic spaces then has its correspondence in the force: associated with \(f_{p}(\bar{x})\) in tip space are the force branches \(f_{\rm pin}^{\rm i}(\bar{x})\) in asymptotic \(\bar{x}\)-space defined as \[f_{\rm pin}^{\rm i}(\bar{x})=f_{p}[\tilde{x}_{\rm i}(\bar{x})],\quad{\rm i=f,p,us}. \tag{6}\] Using Eq. (2), it turns out that the force \(f_{\rm pin}\) can be written as the total derivative of \(e_{\rm pin}\), \[f_{\rm pin}(\bar{x})=-\frac{de_{\rm pin}[\tilde{x}(\bar{x});\bar{x}]}{d\bar{x }}. \tag{7}\] The multiple branches \(e_{\rm pin}^{\rm i}\) and \(f_{\rm pin}^{\rm i}\) associated with a strong pinning situation at \(\kappa>1\) are shown in Figs. 4 and 5. 
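As an illustration of the three branches, the force-balance equation (2) for the Lorentzian well can be cleared of denominators and solved as a polynomial. The sketch below (units \(\xi=e_{p}=1\); the values \(\kappa=2.5\) and \(\bar{x}=4.5\) inside the bistable interval are illustrative choices) returns the pinned, unstable, and free tip positions at once:

```python
import numpy as np

# Sketch (units xi = e_p = 1): the three tip solutions of Eq. (2) for the
# Lorentzian defect at kappa = 2.5 and an asymptotic position xbar inside the
# bistable interval. Multiplying Eq. (2) by (1 + x^2/2)^2 turns it into a
# quintic, so all branches follow from a single np.roots call.
kappa, xbar = 2.5, 4.5
C = 1.0 / (4 * kappa)

# Cbar (x - xbar) (1 + x^2/2)^2 + x = 0, coefficients from degree 5 down to 0
coeffs = [C / 4, -C * xbar / 4, C, -C * xbar, C + 1, -C * xbar]
roots = np.roots(coeffs)
tips = np.sort(roots[np.abs(roots.imag) < 1e-9].real)

x_p, x_us, x_f = tips            # pinned, unstable, free tip positions
print(x_p, x_us, x_f)
```

The multiplying factor \((1+x^{2}/2)^{2}\) has no real zeros, so the real roots of the quintic are exactly the solutions of Eq. (2); the two remaining roots form a complex-conjugate pair.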
### Unstable and bistable domains \(\mathcal{U}_{\tilde{\mathbf{R}}}\) and \(\mathcal{B}_{\bar{\mathbf{R}}}\)

Next, we identify the unstable (in \(\tilde{x}\)) and bistable (in \(\bar{x}\)) domains of the pinning landscape that appear as signatures of strong pinning when \(\kappa\) increases beyond unity. Figure 5(a) shows the force profile \(f_{p}(\tilde{x})\) as experienced by the tip coordinate \(\tilde{x}\). A vortex passing the defect on a head-on trajectory from left to right undergoes a forward jump in the tip from \(-\tilde{x}_{\rm f-}\) to \(-\tilde{x}_{\rm p-}\); subsequently, the tip follows the pinned branch until \(\tilde{x}_{\rm p+}\) and then returns back to the free state with a forward jump from \(\tilde{x}_{\rm p+}\) to \(\tilde{x}_{\rm f+}\). The _jump positions_ (later indexed by a subscript 'jp') are determined by the two solutions of the equation \[f_{p}^{\prime}(x)\Big{|}_{-\tilde{x}_{\rm f-},\tilde{x}_{\rm p+}}=\bar{C} \tag{8}\] that involves the curvature of the pinning potential \(e_{p}(x)\); the _landing positions_ \(-\tilde{x}_{\rm p-}\) and \(\tilde{x}_{\rm f+}\) (later indexed by a subscript 'lp'), on the other hand, are given by the second solution of the force-balance equation (2) that involves the driving term \(\bar{C}(\tilde{x}-\bar{x})\) and hence depends on the asymptotic position \(\bar{x}\). Finally, the positions in asymptotic space \(\bar{x}\) where the vortex tip jumps are obtained again from the force-balance equation (2), \[\bar{x}_{-} =\tilde{x}_{\rm f-}-f_{p}(\tilde{x}_{\rm f-})/\bar{C}, \tag{9}\] \[\bar{x}_{+} =\tilde{x}_{\rm p+}-f_{p}(\tilde{x}_{\rm p+})/\bar{C}.\] Note that the two pairs of tip jump and landing positions, \(\tilde{x}_{\rm p+}\), \(\tilde{x}_{\rm f+}\) and \(\tilde{x}_{\rm f-}\), \(\tilde{x}_{\rm p-}\), are associated with only two asymptotic positions \(\bar{x}_{+}\) and \(\bar{x}_{-}\).
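To make Eqs. (8) and (9) concrete, the following sketch (units \(\xi=e_{p}=1\), with the illustrative value \(\kappa=2.5\)) finds the two jump points from \(f_{p}^{\prime}(x)=\bar{C}\) by bisection and then maps them onto the boundaries \(\bar{x}_{\pm}\) of the bistable interval:

```python
import numpy as np

# Sketch (units xi = e_p = 1): jump points and bistable interval for the
# Lorentzian defect at kappa = 2.5, i.e. Cbar = e_p / (4 kappa xi^2).
kappa = 2.5
C = 1.0 / (4 * kappa)

def f_p(x):                      # pinning force f_p(x) = -e_p'(x)
    g = 1 + x**2 / 2
    return -x / g**2

def f_p_prime(x):                # force derivative f_p'(x) = -e_p''(x)
    g = 1 + x**2 / 2
    return -(g - 2 * x**2) / g**3

def bisect(fun, a, b, n=200):    # simple bisection root finder
    for _ in range(n):
        m = 0.5 * (a + b)
        if fun(a) * fun(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

# jump condition f_p'(x) = Cbar on either side of x_m = sqrt(2), Eq. (8)
x_p_plus = bisect(lambda x: f_p_prime(x) - C, 0.5, np.sqrt(2))
x_f_minus = bisect(lambda x: f_p_prime(x) - C, np.sqrt(2), 10.0)

# asymptotic boundaries of the bistable interval, Eq. (9)
xbar_minus = x_f_minus - f_p(x_f_minus) / C
xbar_plus = x_p_plus - f_p(x_p_plus) / C
print(x_p_plus, x_f_minus, xbar_minus, xbar_plus)
```

The two brackets around \(\tilde{x}_{m}=\sqrt{2}\) isolate the pinned-side and free-side jump points; the resulting interval \([\bar{x}_{-},\bar{x}_{+}]\) is the bistable range of asymptotic positions.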
Let us generalize the geometry and consider a vortex moving parallel to \(\bar{x}\), impacting the defect at a finite distance \(\bar{y}\). We then have to extend the above discussion to the entire \(z=0\) plane, see Fig. 5. For an isotropic defect, the jump and landing points now define jump circles with radii \(\tilde{R}_{\rm jp}\) given by \(\tilde{R}_{\rm f-}=\tilde{x}_{\rm f-}\) and \(\tilde{R}_{\rm p+}=\tilde{x}_{\rm p+}\) (solid circles in Fig. 5(c)) and landing circles with radii \(\tilde{R}_{\rm lp}\) given by \(\tilde{R}_{\rm f+}=\tilde{x}_{\rm f+}\), \(\tilde{R}_{\rm p-}=\tilde{x}_{\rm p-}\) (dashed circles in Fig. 5(c)). Their combination defines an unstable ring \(\tilde{R}_{\rm p+}<\tilde{R}<\tilde{R}_{\rm f-}\) in tip space where tips cannot reside. The existence of unstable domains \(\mathcal{U}_{\tilde{\mathbf{R}}}\) in tip space is a signature of strong pinning. Figures 5(b) and (d) show the corresponding results in asymptotic coordinates \(\bar{x}\) and \(\bar{\bf R}\), respectively. The pinning force \(f_{\rm pin}(\bar{x})=f_{p}[\tilde{x}(\bar{x})]\) shown in (b) is simply an 'outward tilted' version of \(f_{p}(\tilde{x})\), with \(S\)-shaped overhangs that generate bistable intervals \([-\bar{x}_{+},-\bar{x}_{-}]\) and \([\bar{x}_{-},\bar{x}_{+}]\). Extending them to the asymptotic \(\bar{\bf R}\)-plane with radii \(\bar{R}_{-}\equiv\bar{x}_{-}\) and \(\bar{R}_{+}\equiv\bar{x}_{+}\), see Fig. 5(d), we obtain a ring \(\bar{R}_{-}<\bar{R}<\bar{R}_{+}\) that marks the location of bistability. Again, the appearance of bistable domains \(\mathcal{B}_{\bar{\mathbf{R}}}\) in asymptotic space is a signature of strong pinning. Both the size of the unstable and of the bistable ring depends on the Labusch parameter \(\kappa\); they appear out of circles with radii \(\tilde{R}=\tilde{x}_{m}\) and \(\bar{R}=\bar{x}_{m}=\tilde{x}_{m}-f_{p}(\tilde{x}_{m})/\bar{C}\) at \(\kappa=1\) and grow in radius and width when \(\kappa\) increases. The unstable and bistable domains \(\mathcal{U}_{\tilde{\mathbf{R}}}\) and \(\mathcal{B}_{\bar{\mathbf{R}}}\) (see Ref. [27]) will exhibit interesting non-trivial behavior as a function of \(\kappa\) when generalizing the analysis to defect potentials of arbitrary shape.

Figure 4: Multi-valued pinning energy landscape \(e_{\rm pin}^{\rm i}(\bar{x})\) for a defect producing a Lorentzian-shaped potential with \(\kappa=2.5\); the branches \({\rm i=p,f,us}\) correspond to the pinned (red), free (blue), and unstable (black dotted) vortex states. The bistability extends over the intervals \(|\bar{x}|\in[\bar{x}_{-},\bar{x}_{+}]\) where the different branches coexist; pinned and free vortex branches cross at the branch crossing point \(\bar{x}=\bar{x}_{0}\). A vortex traversing the defect from left to right assumes the free and pinned states marked with thick colored lines and undergoes jumps \(\Delta e_{\rm pin}^{\rm fp}\) and \(\Delta e_{\rm pin}^{\rm pf}\) in energy (vertical black solid lines) at the boundaries \(-\bar{x}_{-}\) and \(\bar{x}_{+}\). The asymmetric occupation of states produces a finite pinning force density \(F_{\rm pin}\). Inset: Total energy \(e_{\rm pin}(\tilde{x};\bar{x})\) versus vortex tip position \(\tilde{x}\) for a fixed vortex position \(\bar{x}\) (vertical dashed line in the main figure). The points \(\tilde{x}_{\rm f}\), \(\tilde{x}_{\rm p}\), and \(\tilde{x}_{\rm us}\) mark the free, pinned, and unstable solutions of the force-balance equation (2); they correspond to local minima and the maximum in \(e_{\rm pin}(\tilde{x};\bar{x})\) and are marked with corresponding symbols in Fig. 3.
#### II.2.1 Alternative strong pinning formulation

An alternative formulation of strong pinning physics is centered on the local differential properties of the pinning energy \(e_{\mathrm{pin}}(\tilde{x};\bar{x})\), i.e., its extremal points in \(\tilde{x}\) at different values of the asymptotic coordinate \(\bar{x}\). We start from equation (1) restricted to one dimension and rearrange terms to arrive at the expression \[e_{\mathrm{pin}}(\tilde{x};\bar{x})=e_{\mathrm{eff}}(\tilde{x})-\bar{C}\bar{x}\,\tilde{x}+\bar{C}\bar{x}^{2}/2 \tag{10}\] with the effective pinning energy \[e_{\mathrm{eff}}(\tilde{x})=e_{p}(\tilde{x})+\bar{C}\tilde{x}^{2}/2 \tag{11}\] involving both pinning and elastic terms. Equation (10) describes a particle at position \(\tilde{x}\) subject to the potential \(e_{\mathrm{eff}}(\tilde{x})\) and the force term \(-f\,\tilde{x}=-\bar{C}\bar{x}\,\tilde{x}\), see also Ref. [26]. The potential \(e_{\mathrm{eff}}(\tilde{x})\) can trap two particle states if there is a protecting maximum with negative curvature \(\partial_{\tilde{x}}^{2}e_{\mathrm{eff}}=\partial_{\tilde{x}}^{2}e_{\mathrm{pin}}<0\), preventing the escape from the metastable state at forces \(f=\pm\bar{C}\bar{x}\) with \(\bar{x}\in[\bar{x}_{-},\bar{x}_{+}]\); the maximum in \(e_{\mathrm{pin}}\) at \(\tilde{x}_{\mathrm{us}}\) then separates two minima in \(e_{\mathrm{pin}}\) defining distinct branches with different tip coordinates \(\tilde{x}_{\mathrm{p}}\) and \(\tilde{x}_{\mathrm{f}}\), see the inset of Fig. 4.
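The trapping of two tip states can be visualized directly by counting the local minima of \(e_{\mathrm{pin}}(\tilde{x};\bar{x})\) over the tip coordinate. A small sketch (units \(\xi=e_{p}=1\), \(\kappa=2.5\); the probe positions \(\bar{x}=0,\,4.5,\,20\) are illustrative choices at the center, inside, and beyond the bistable interval):

```python
import numpy as np

# Sketch (units xi = e_p = 1, kappa = 2.5): count local minima of the tilted
# potential e_pin(xt; xbar) over the tip coordinate xt; two minima signal
# bistability, a single minimum a unique tip state.
kappa = 2.5
C = 1.0 / (4 * kappa)

def e_pin(xt, xbar):
    return 0.5 * C * (xt - xbar)**2 - 1.0 / (1 + xt**2 / 2)

def n_minima(xbar):
    xt = np.linspace(-10.0, 30.0, 40001)
    e = e_pin(xt, xbar)
    interior = (e[1:-1] < e[:-2]) & (e[1:-1] < e[2:])
    return int(np.count_nonzero(interior))

print(n_minima(0.0), n_minima(4.5), n_minima(20.0))
```

For \(\bar{x}\) inside the bistable interval, the pinned and free minima coexist, separated by the unstable maximum; far from the defect only the free minimum survives.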
As the asymptotic position \(\bar{x}\) approaches the boundaries \(\bar{x}_{\pm}\), one of the minima joins up with the maximum to define an inflection point with \[[\partial_{\tilde{x}}^{2}e_{\mathrm{eff}}]_{\tilde{x}_{\mathrm{jp}}}=[\partial_{\tilde{x}}^{2}e_{\mathrm{pin}}]_{\tilde{x}_{\mathrm{jp}}}=0, \tag{12}\] that corresponds to the instability condition (8) where the vortex tip jumps; the persistent second minimum in \(e_{\mathrm{pin}}(\tilde{x};\bar{x})\) defines the landing position \(\tilde{x}_{\mathrm{lp}}\) and the condition for a flat inflection point \([\partial_{\tilde{x}}e_{\mathrm{pin}}]_{\tilde{x}_{\mathrm{jp}}}=0\) defines the associated asymptotic coordinate \(\pm\bar{x}_{\pm}\). Finally, strong pinning vanishes at the Labusch point \(\kappa=1\), with the inflection point in \(e_{\mathrm{eff}}(\tilde{x})\) coalescing with the second minimum at \(\tilde{x}_{m}\), hence \[[\partial_{\tilde{x}}^{2}e_{\mathrm{eff}}]_{\tilde{x}_{m}}=0\quad\text{and} \tag{13}\] \[[\partial_{\tilde{x}}^{3}e_{\mathrm{eff}}]_{\tilde{x}_{m}}=[\partial_{\tilde{x}}^{3}e_{p}]_{\tilde{x}_{m}}=0.\] Note the subtle use of \(e_{\mathrm{pin}}\) versus \(e_{\mathrm{eff}}\) versus \(e_{p}\) in the above discussion; as we go to higher derivatives, first the asymptotic coordinate \(\bar{x}\) turns irrelevant in the second derivative \(\partial_{\tilde{x}}^{2}e_{\mathrm{pin}}=\partial_{\tilde{x}}^{2}e_{\mathrm{eff}}\) and then all of the elastic response, i.e., \(\bar{C}\), drops out in the third derivative \([\partial_{\tilde{x}}^{3}e_{\mathrm{pin}}]=[\partial_{\tilde{x}}^{3}e_{p}]\). The above alternative formulation of strong pinning turns out helpful in several discussions below, e.g., the derivation of strong pinning characteristics near the transition in Secs. II.4 and III.1 and the generalization of the instability condition to an anisotropic defect in Sec. III; furthermore, it provides an inspiring link to the Landau theory of phase transitions discussed below in Sec.
II.6.

### Pinning force density \(F_{\mathrm{pin}}\)

Next, we determine the pinning force density \(F_{\mathrm{pin}}\) at strong pinning, assuming a random homogeneous distribution of pins with a small density \(n_{p}\), \(n_{p}a_{0}\xi^{2}\ll 1\), see Refs. [13; 20]. The derivation of \(F_{\rm pin}\) is conveniently done in asymptotic \(\bar{\bf R}\) coordinates where vortex trajectories follow simple straight lines.

Figure 5: (a) and (b): Force profiles \(f_{p}(\tilde{x})\) and \(f_{\mathrm{pin}}(\bar{x})\) in tip and asymptotic coordinates for a Lorentzian-shaped potential with \(\kappa=2.5\). The tip of a vortex moving from left to right along the \(x\)-axis approaches the defect on the free branch (thick blue line), undergoes a jump (arrow) from \(-\tilde{x}_{\rm f-}\) to \(-\tilde{x}_{\rm p-}\), follows the pinned branch (red) until \(\tilde{x}_{\rm p+}\) and then jumps back (arrow) to the free (blue) state at \(\tilde{x}_{\rm f+}\). Extending these jump positions to the \((\tilde{x},\tilde{y})\)-plane, see (c), defines jump (solid) and landing (dashed) circles, with the jump circles enclosing an unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\) characteristic of strong pinning. The force profile \(f_{\mathrm{pin}}(\bar{x})\) in (b) includes free (blue), pinned (red), and unstable branches (black dotted). (d) Extending the bistable intervals \([-\bar{x}_{+},-\bar{x}_{-}]\) and \([\bar{x}_{-},\bar{x}_{+}]\) to the \([\bar{x},\bar{y}]\)-plane defines a bistable ring \(\mathcal{B}_{\bar{\mathbf{R}}}\) (magenta), again a strong pinning characteristic. The dashed circle of radius \(\bar{R}_{0}\) in (d) marks the branch crossing point. Vortices passing the defect with a finite impact parameter \(\bar{y}\neq 0\) move on a straight line in asymptotic space, see (d); the associated trajectory in tip space is nontrivial, see (c), and undergoes jumps at pinning (circle \(\tilde{R}_{\rm f-}\)) and depinning (circle \(\tilde{R}_{\rm p+}\)).
Vortices approach the pin by following the free branch until its termination, jump to the pinned branch to again follow this to its termination, and finally jump back to the free branch. This produces an asymmetric pinned-branch occupation \(p_{c}(\bar{\bf R})\) that leads to the pinning force density (we assume vortices approaching the defect along \(\bar{x}\) from the left; following convention, we include a minus sign) \[{\bf F}_{c} =-n_{p}\int\frac{d^{2}\bar{\bf R}}{a_{0}^{2}}\big{[}p_{c}(\bar{\bf R}){\bf f}_{\rm pin}^{\rm p}(\bar{\bf R})+(1-p_{c}(\bar{\bf R})){\bf f}_{\rm pin}^{\rm f}(\bar{\bf R})\big{]}\] \[=-n_{p}\int\frac{d^{2}\bar{\bf R}}{a_{0}^{2}}p_{c}(\bar{\bf R})[\partial_{x}\Delta e_{\rm pin}^{\rm fp}(\bar{\bf R})]\,{\bf e}_{\bar{x}}, \tag{14}\] with the energy difference \(\Delta e_{\rm pin}^{\rm fp}(\bar{\bf R})=e_{\rm pin}^{\rm f}(\bar{\bf R})-e_{\rm pin}^{\rm p}(\bar{\bf R})\) and \({\bf e}_{\bar{x}}\) the unit vector along \(\bar{x}\); the \(\bar{y}\)-component of the pinning force density vanishes due to the antisymmetry in \(f_{\rm pin,\bar{y}}\). For the isotropic defect, the jumps \(\Delta e_{\rm pin}^{\rm fp}(\bar{\bf R})\) in energy appearing upon changing branches are independent of angle and the average in (14) separates in \(\bar{x}\) and \(\bar{y}\) coordinates; note that the energy jumps are no longer constant for an anisotropic defect and hence such a separation does not occur. Furthermore, i) all vortices approaching the defect within the transverse length \(|\bar{y}|<\bar{R}_{-}\) get pinned, see Fig.
5(d), while those passing further away follow a smooth (weak pinning) trajectory that does not undergo jumps and hence do not contribute to the pinning force, and ii) all vortices that get pinned contribute the same force that is most easily evaluated for a head-on vortex-defect collision on the \(\bar{x}\)-axis with \(p_{c}(\bar{x})=\Theta(\bar{x}+\bar{x}_{-})-\Theta(\bar{x}-\bar{x}_{+})\) and \[\langle f_{\rm pin}\rangle =-\!\!\int_{-a_{0}/2}^{a_{0}/2}\frac{d\bar{x}}{a_{0}}\;\big{[}p_{c}(\bar{x})f_{\rm pin}^{\rm p}(\bar{x})+(1-p_{c}(\bar{x}))f_{\rm pin}^{\rm f}(\bar{x})\big{]}\] \[=\frac{\Delta e_{\rm pin}^{\rm fp}(-\bar{x}_{-})+\Delta e_{\rm pin}^{\rm pf}(\bar{x}_{+})}{a_{0}}, \tag{15}\] where we have replaced \(-\Delta e_{\rm pin}^{\rm fp}(\bar{x}_{+})\) by \(\Delta e_{\rm pin}^{\rm pf}(\bar{x}_{+})>0\). Hence, the average pinning force \(\langle f_{\rm pin}\rangle\) is given by the jumps in the pinning energy \(e_{\rm pin}^{\rm i}(\bar{x})\) associated with the different branches \({\rm i}={\rm p,f}\), see Fig. 4. Finally, accounting for trajectories with finite impact parameter \(|\bar{y}|<\bar{R}_{-}\), we arrive at the result for the pinning force density \(F_{\rm pin}\) acting on the vortex system, \[F_{\rm pin}=n_{p}\frac{2\bar{R}_{-}}{a_{0}}\langle f_{\rm pin}\rangle=n_{p}\frac{2\bar{R}_{-}}{a_{0}}\frac{\Delta e_{\rm pin}^{\rm fp}+\Delta e_{\rm pin}^{\rm pf}}{a_{0}}, \tag{16}\] where the factor \(2\bar{R}_{-}/a_{0}\) accounts for the averaging of the pinning force along the \(y\)-axis. As strong pins act independently, a consequence of the small defect density \(n_{p}\), the pinning force density is linear in the defect density, \(F_{\rm pin}\propto n_{p}\). If pinning is weak, i.e., \(\kappa<1\), we have no jumps, \(\langle f_{\rm pin}\rangle=0\), and \(F_{\rm pin}|_{\rm strong}=0\). A finite pinning force then only arises from correlations between pinning defects and scales in density as [9; 10] \(F_{\rm pin}|_{\rm weak}\propto n_{p}^{2}\).
This contribution to the pinning force density \(F_{\rm pin}\) continues beyond \(\kappa=1\); hence, while the strong pinning onset at \(\kappa=1\) can be formulated in terms of a transition, weak pinning goes over to strong pinning in a smooth crossover. Knowing the pinning force density \(F_{\rm pin}\), the motion of the vortex lattice follows from the bulk dynamical equation \[\eta{\bf v}={\bf F}_{\rm L}({\bf j})-{\bf F}_{\rm pin}. \tag{17}\] Here, \(\eta=BH_{c2}/\rho_{n}c^{2}\) is the Bardeen-Stephen viscosity [28] (per unit volume; \(\rho_{n}\) is the normal-state resistivity) and \({\bf F}_{\rm L}={\bf j}\times{\bf B}/c\) is the Lorentz force density driving the vortex system. The pinning force density \({\bf F}_{\rm pin}\) is directed along \({\bf v}\), in our case along \(x\). Next, we determine the strong pinning characteristics \(\bar{x}_{-}\), \(\bar{x}_{+}\), \(\tilde{x}_{\rm f\pm}\), \(\tilde{x}_{\rm p\pm}\), \(\Delta e_{\rm pin}^{\rm fp}\) and \(\Delta e_{\rm pin}^{\rm pf}\) as a function of the Labusch parameter \(\kappa\) close to the strong pinning transition, i.e., \(\kappa\gtrsim 1\).

### Strong pinning characteristics near the transition

Near the strong pinning transition at \(\kappa\gtrsim 1\), we can derive quantitative results for the strong pinning characteristics by expanding the pinning energy \(e_{\rm pin}(\tilde{x};\bar{x})\) in \(\tilde{x}\) at fixed \(\bar{x}\); this is reminiscent of the Landau expansion of the free energy \(f(\phi,h)\) in the order parameter \(\phi\) at a fixed field \(h\) in a thermodynamic transition, see Sec. II.6 below for a detailed discussion. We expand \(e_{\rm pin}(\tilde{x};\bar{x})\) in \(\tilde{x}\) around the point of first instability \(\tilde{x}_{m}\) by introducing the relative tip and asymptotic positions \(\tilde{u}=\tilde{x}-\tilde{x}_{m}\) and \(\bar{u}=\bar{x}-\bar{x}_{m}\) and make use of our alternative strong pinning formulation summarized in Sec. II.2.1.
At \(\tilde{x}_{m}\) and close to \(\kappa=1\), we have \([\partial_{\tilde{x}}^{2}e_{\rm pin}]_{\tilde{x}_{m}}=[\partial_{\tilde{x}}^{2}e_{p}]_{\tilde{x}_{m}}+\bar{C}=\bar{C}(1-\kappa)\) and \([\partial_{\tilde{x}}^{3}e_{\rm pin}]_{\tilde{x}_{m}}=0\), hence, \[e_{\rm pin}(\tilde{x};\bar{x})\approx\frac{\bar{C}}{2}(1-\kappa)\,\tilde{u}^{2}+\frac{\gamma}{24}\,\tilde{u}^{4}-\bar{C}\bar{u}\tilde{u}, \tag{18}\] where we have introduced the shape parameter \(\gamma=[\partial_{\tilde{x}}^{4}e_{p}]_{\tilde{x}_{m}}\) describing the quartic term in the expansion and we have made use of the force-balance equation (2) to rewrite \(f_{p}(\tilde{x}_{m})=\bar{C}(\tilde{x}_{m}-\bar{x}_{m})\); furthermore, we have dropped all irrelevant terms that do not depend on \(\tilde{u}\). We find the jump and landing positions \(\tilde{x}_{\rm jp}\) and \(\tilde{x}_{\rm lp}\) exploiting the differential properties of \(e_{\rm pin}(\tilde{x})\) at a fixed \(\bar{x}\): As discussed above, the vortex tip jumps at the boundaries \(\bar{x}_{\pm}\) of the bistable regime, where \(e_{\rm pin}\) develops a flat inflection point at \(\tilde{x}_{\rm jp}\) with one minimum joining up with the unstable maximum and the second minimum at the landing position \(\tilde{x}_{\rm lp}\) staying isolated. Within our fourth-order expansion the jump positions at (de)pinning are placed symmetrically with respect to the onset at \(\tilde{x}_{m}\), \[\tilde{x}_{\rm p+}=\tilde{x}_{m}+\tilde{u}_{\rm jp},\quad\tilde{x}_{\rm f-}=\tilde{x}_{m}-\tilde{u}_{\rm jp} \tag{19}\] and imposing the condition \([\partial_{\tilde{x}}^{2}e_{\rm pin}]_{\tilde{x}_{\rm jp}}=0\) (that is equivalent to the jump condition \(f_{p}^{\prime}[\tilde{x}_{\rm f-}]=f_{p}^{\prime}[\tilde{x}_{\rm p+}]=\bar{C}\) of Eq. (8), see also Fig. 3), we find that \[\tilde{u}_{\rm jp}\approx-\sqrt{\frac{2\bar{C}}{\gamma}}(\kappa-1)^{1/2}.
\tag{20}\] In order to find the (symmetric) landing positions, it is convenient to shift the origin of the expansion to the jump position, \(\tilde{u}\to\tilde{u}-\tilde{u}_{\rm jp}\equiv\tilde{u}^{\prime}\), and define the jump distance \(\Delta\tilde{u}\), \[\tilde{x}_{\rm f+}=\tilde{x}_{\rm p+}+\Delta\tilde{u},\quad\tilde{x}_{\rm p-}=\tilde{x}_{\rm f-}-\Delta\tilde{u}. \tag{21}\] At the jump position, the linear and quadratic terms in \(\tilde{u}^{\prime}\) vanish, resulting in the expansion (up to an irrelevant constant) \[e_{\rm pin}(\tilde{x}_{\rm p+}+\tilde{u}^{\prime};\bar{x}_{+})\approx\frac{\gamma}{6}\tilde{u}_{\rm jp}\tilde{u}^{\prime\,3}+\frac{\gamma}{24}\tilde{u}^{\prime\,4} \tag{22}\] and similar at \(\tilde{x}_{\rm f-}\) and \(\bar{x}_{-}\) for a left-moving vortex. This expression is minimal at the landing position \(\tilde{x}_{\rm f+}\), i.e., at \(\tilde{u}^{\prime}=\Delta\tilde{u}\), \([\partial_{\tilde{u}^{\prime}}e_{\rm pin}]_{\Delta\tilde{u}}=0\), and we find the jump distance \[\Delta\tilde{u}=-3\tilde{u}_{\rm jp}. \tag{23}\] Inserting this result back into (22), we obtain the jump in energy \(\Delta e_{\rm pin}^{\rm pf}=e_{\rm pin}(\tilde{x}_{\rm p+};\bar{x}_{+})-e_{\rm pin}(\tilde{x}_{\rm f+};\bar{x}_{+})\), \[\Delta e_{\rm pin}^{\rm pf}(\bar{x}_{+})\approx\frac{\gamma}{72}(\Delta\tilde{u})^{4}\approx\frac{9\bar{C}^{2}}{2\gamma}(\kappa-1)^{2}, \tag{24}\] and similar at \(\bar{x}_{-}\). Note that all these results have been obtained without explicit knowledge of the asymptotic coordinates \(\bar{x}_{\pm}\) where these tip jumps are triggered. The latter follow from the force equation (2) that corresponds to the condition \([\partial_{\tilde{x}}e_{\rm pin}]_{\tilde{x}_{\rm jp}}=0\) for a flat inflection point. Using the expansion (18) of the pinning energy, we find that \[\bar{x}_{\pm}-\bar{x}_{m}=\mp\frac{2}{3}\tilde{u}_{\rm jp}(\kappa-1)=\pm\frac{2}{3}\sqrt{\frac{2\bar{C}}{\gamma}}(\kappa-1)^{3/2}.
\tag{25}\] The pair \(\tilde{x}_{m}\) and \(\bar{x}_{m}\) of tip and asymptotic positions depends on the details of the potential; while \(\tilde{x}_{m}\) derives solely from the shape \(e_{p}(\tilde{x})\), \(\bar{x}_{m}\) as given by (2) involves \(\bar{C}\) and shifts \(\propto(\kappa-1)\). For a Lorentzian potential, we find that \[\tilde{x}_{m}=\sqrt{2}\xi,\quad\bar{x}_{m}=2\sqrt{2}\xi+\sqrt{2}\xi(\kappa-1). \tag{26}\] The shape coefficient is \(\gamma=3e_{p}/4\xi^{4}\) and the Labusch parameter is given by \(\kappa=e_{p}/4\bar{C}\xi^{2}\) (hence \(\bar{C}^{2}/\gamma=e_{p}/12\kappa^{2}\)), providing us with the results \[\tilde{u}_{\rm jp}\approx-\xi\left[2(\kappa-1)/3\right]^{1/2}\ \ {\rm and}\ \ \Delta e_{\rm pin}^{\rm pf}\approx\frac{3}{8}e_{p}(\kappa-1)^{2}. \tag{27}\]

### Pinning force density for the isotropic defect

Using the results of Sec. II.4 in the expression (16) for the pinning force density, we find, to leading order in \(\kappa-1\), \[F_{\rm pin}=9n_{p}\frac{\bar{x}_{m}}{a_{0}}\frac{\bar{C}^{2}}{\gamma a_{0}}(\kappa-1)^{2}. \tag{28}\] The scaling \(F_{\rm pin}\sim n_{p}(\xi/a_{0})^{2}f_{p}(\kappa-1)^{2}\) (with \(\bar{C}\xi^{2}/e_{p}\sim 1/\kappa\), up to a numerical factor) uniquely derives from the scaling \(\propto(\kappa-1)^{2}\) of the energy jumps in (24), as the asymptotic trapping length \(\bar{x}_{-}\sim\xi\) remains finite as \(\kappa\to 1\) for the isotropic defect; this will change for the anisotropic defect.
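The near-threshold results can be checked against the exact Lorentzian solution. The sketch below (units \(\xi=e_{p}=1\); the values \(\kappa=1.01\) and \(1.04\) are illustrative) computes the depinning energy jump \(\Delta e_{\mathrm{pin}}^{\mathrm{pf}}\) numerically and verifies the quadratic scaling of Eq. (27):

```python
import numpy as np

# Sketch (units xi = e_p = 1): numerical depinning jump for the Lorentzian
# defect close to onset, to be compared with the asymptotic result
# De_pf ~ (3/8) e_p (kappa - 1)^2 of Eq. (27).

def f_p_prime(x):
    g = 1 + x**2 / 2
    return -(g - 2 * x**2) / g**3

def bisect(fun, a, b, n=200):
    for _ in range(n):
        m = 0.5 * (a + b)
        if fun(a) * fun(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

def de_pf(kappa):
    C = 1.0 / (4 * kappa)
    x_jp = bisect(lambda x: f_p_prime(x) - C, 1.0, np.sqrt(2))  # pinned jump point
    xbar = x_jp + (x_jp / (1 + x_jp**2 / 2)**2) / C             # xbar_+ via Eq. (9)
    e = lambda xt: 0.5 * C * (xt - xbar)**2 - 1.0 / (1 + xt**2 / 2)
    xt = np.linspace(np.sqrt(2), 6.0, 200001)
    return e(x_jp) - np.min(e(xt))   # pinned energy minus free-minimum energy

r = np.log(de_pf(1.04) / de_pf(1.01)) / np.log(0.04 / 0.01)
print(de_pf(1.01), r)   # the exponent r should come out close to 2
```

The exponent extracted from the two values of \(\kappa-1\) approaches 2 as the onset is approached, in line with the Landau-type expansion.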
### Relation to Landau's theory of phase transitions

The expansion (18) of the pinning energy \(e_{\rm pin}(\tilde{x};\bar{x})\) around the inflection point \(\tilde{x}_{m}\) of the force takes the same form as the Landau free energy of a phase transition [10], \[f(\phi;h)=\frac{r_{0}}{2}(T/T_{c}-1)\phi^{2}+u\phi^{4}-h\phi, \tag{29}\] with the straightforward transcription \(\tilde{u}\leftrightarrow\phi\), \(\bar{C}(1-\kappa)\leftrightarrow r_{0}(T/T_{c}-1)\), \(\gamma/24\leftrightarrow u\) and the conjugate field \(\bar{C}\bar{u}\leftrightarrow h\). The functional (29) describes a one-component order parameter \(\phi\) driven by \(h\), e.g., an Ising model with magnetization density \(\phi\) in an external magnetic field \(h\). This model develops a mean-field transition with a first-order line in the \(h\)-\(T\) phase diagram that terminates in a critical point at \(T=T_{c}\) and \(h=0\). The translation to strong pinning describes a strong pinning region at large \(\kappa\) that terminates (upon decreasing \(\kappa\)) at \(\kappa=1\). The ferromagnetic phases with \(\phi=\pm\sqrt{r_{0}(1-T/T_{c})/4u}\) correspond to pinned and unpinned states, while the paramagnetic phase at \(T>T_{c}\) with \(\phi=0\) translates to the unpinned domain at \(\kappa<1\). The spinodals associated with the hysteresis in the first-order magnetic transition correspond to the termination of the free and pinned branches at \(\bar{x}_{\pm}\); indeed, the flat inflection points appearing in \(e_{\rm pin}(\tilde{x};\bar{x})\) at the boundaries of the bistable region \(\mathcal{B}_{\bar{\mathbf{R}}}\) as discussed in Sec. II.2 correspond to the disappearance of metastable magnetic phases in (29) at the spinodals of the first-order transition where \(\partial_{\phi}f(\phi;h)=\partial_{\phi}^{2}f(\phi;h)=0\). When including correlations between defects, the unpinned phase at \(\kappa<1\) transforms into a weakly pinned phase that continues beyond \(\kappa=1\) into the strongly pinned phase.
Including such correlations, the strong-pinning transition at the onset of strong pinning at \(\kappa=1\) transforms into a weak-to-strong pinning crossover. ## III Anisotropic defects Let us generalize the above analysis to make it fit for the ensuing discussion of an arbitrary pinning landscape or, short, pinscape. Central to the discussion are the unstable and bistable domains \(\mathcal{U}_{\tilde{\mathbf{R}}}\) and \(\mathcal{B}_{\bar{\mathbf{R}}}\) in tip- and asymptotic space. The boundary of the unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\) in tip space is determined by the jump positions of the vortex tip. The latter follows from the local differential properties of \(e_{\mathrm{pin}}(\tilde{\mathbf{R}};\bar{\mathbf{R}})\) at fixed asymptotic coordinate \(\bar{\mathbf{R}}\); for the isotropic defect, this was the appearance of an inflection point, \(\partial_{\tilde{x}}^{2}e_{\mathrm{pin}}(\tilde{x};\bar{x})=0\), see Eq. (12). In generalizing this condition to the anisotropic situation, we have to study the Hessian matrix of \(e_{\mathrm{pin}}(\tilde{\mathbf{R}};\bar{\mathbf{R}})\) defined in Eq. (1), \[\left[\mathrm{Hess}\big{[}e_{\mathrm{pin}}(\tilde{\mathbf{R}};\bar{\mathbf{R}})|_{\bar{\mathbf{R}}}\big{]}\right]_{ij}=\bar{C}\delta_{ij}+\mathrm{H}_{ij}(\tilde{\mathbf{R}}) \tag{30}\] with \[\mathrm{H}_{ij}(\tilde{\mathbf{R}})=\partial_{\tilde{x}_{i}}\partial_{\tilde{x}_{j}}e_{p}(\tilde{\mathbf{R}}) \tag{31}\] the Hessian matrix associated with the defect potential \(e_{p}(\tilde{\mathbf{R}})\). 
The vortex tip jumps when the pinning landscape \(e_{\mathrm{pin}}(\tilde{\mathbf{R}};\bar{\mathbf{R}})\) at fixed \(\bar{\mathbf{R}}\) opens up in an unstable direction, i.e., develops an inflection point; this happens when the lower eigenvalue \(\lambda_{-}(\tilde{\mathbf{R}})<0\) of the Hessian matrix \(\mathrm{H}_{ij}(\tilde{\mathbf{R}})\) matches up with \(\bar{C}\), \[\lambda_{-}(\tilde{\mathbf{R}})+\bar{C}=0, \tag{32}\] and strong pinning appears in the location where this happens first, say in the point \(\tilde{\mathbf{R}}_{m}\), implying that the eigenvalue \(\lambda_{-}(\tilde{\mathbf{R}})\) has a minimum at \(\tilde{\mathbf{R}}_{m}\). Furthermore, the eigenvector \(\mathbf{v}_{-}(\tilde{\mathbf{R}}_{m})\) associated with the eigenvalue \(\lambda_{-}(\tilde{\mathbf{R}}_{m})\) provides the unstable direction in the pinscape \(e_{\mathrm{pin}}(\tilde{\mathbf{R}};\bar{\mathbf{R}})\) along which the vortex tip escapes. Defining the reduced curvature function \[\kappa(\tilde{\mathbf{R}})\equiv\frac{-\lambda_{-}(\tilde{\mathbf{R}})}{\bar{C}}, \tag{33}\] we find the generalized Labusch parameter \[\kappa_{m}\equiv\kappa(\tilde{\mathbf{R}}_{m}), \tag{34}\] and the Labusch criterion takes the form \[\kappa_{m}=1. \tag{35}\] The latter has to be read as a double condition: i) find the location \(\tilde{\mathbf{R}}_{m}\) where the lower eigenvalue \(\lambda_{-}(\tilde{\mathbf{R}})\) is negative and of largest magnitude, from which ii) one obtains the critical elasticity \(\bar{C}\) where strong pinning sets in. 
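A minimal numeric illustration of this double condition, using an anisotropic Gaussian defect \(e_{p}(x,y)=-e^{-(x^{2}+sy^{2})/2}\) with \(s=1/(1+\epsilon)\) (an assumed toy potential, not the defect discussed in the text): scanning the plane for the minimum of \(\lambda_{-}(\tilde{\mathbf{R}})\) yields both \(\tilde{\mathbf{R}}_{m}\) and the critical elasticity:

```python
import math

# Double condition for strong pinning onset, for the illustrative anisotropic Gaussian
# e_p(x, y) = -exp(-(x^2 + s y^2)/2), s = 1/(1 + eps): find R_m where the lower
# Hessian eigenvalue lam_- is most negative; strong pinning sets in at Cbar = -lam_-(R_m).
eps = 0.2
s = 1.0 / (1.0 + eps)

def lam_minus(x, y):
    E = math.exp(-(x * x + s * y * y) / 2.0)
    A = (1.0 - x * x) * E            # d^2 e_p / dx^2
    C = (s - s * s * y * y) * E      # d^2 e_p / dy^2
    B = -s * x * y * E               # mixed derivative
    return 0.5 * (A + C) - math.sqrt(0.25 * (A - C) ** 2 + B * B)

best = min((lam_minus(i * 0.01, j * 0.01), i * 0.01, j * 0.01)
           for i in range(0, 301) for j in range(-100, 101))
lam_m, x_m, y_m = best
C_crit = -lam_m                      # critical elasticity where kappa_m = 1
print(x_m, y_m, C_crit)
```

For this potential the minimum sits on the narrow (\(x\)) axis; along that axis the one-dimensional Gaussian gives \(x_{m}=\sqrt{3}\) and a critical elasticity \(2e^{-3/2}\), which the scan reproduces.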
A useful variant of the strong pinning condition (32) is provided by the representation of the determinant of the Hessian matrix, \[D(\tilde{\mathbf{R}})\equiv\det\bigl{\{}\mathrm{Hess}\big{[}e_{\mathrm{pin}}(\tilde{\mathbf{R}};\bar{\mathbf{R}})|_{\bar{\mathbf{R}}}\big{]}\bigr{\}}, \tag{36}\] in terms of its eigenvalues \(\lambda_{\pm}(\tilde{\mathbf{R}})\), \(D(\tilde{\mathbf{R}})=[\bar{C}+\lambda_{-}(\tilde{\mathbf{R}})][\bar{C}+\lambda_{+}(\tilde{\mathbf{R}})]\); near onset, the second factor \(\bar{C}+\lambda_{+}(\tilde{\mathbf{R}})\) stays positive and the strong pinning onset appears in the point \(\tilde{\mathbf{R}}_{m}\) where \(D(\tilde{\mathbf{R}})\) has a minimum which touches zero for the first time, i.e., the two conditions \(\nabla D(\tilde{\mathbf{R}})|_{\tilde{\mathbf{R}}_{m}}=0\) and \(D(\tilde{\mathbf{R}}_{m})=0\) are satisfied simultaneously. The latter conditions make sure that the minima of \(\lambda_{-}(\tilde{\mathbf{R}})\) and \(D(\tilde{\mathbf{R}})\) line up at \(\tilde{\mathbf{R}}_{m}\). Note that the Hessian determinant \(D(\tilde{\mathbf{R}})\) does not depend on the asymptotic coordinate \(\bar{\mathbf{R}}\) as it involves only second derivatives of \(e_{\mathrm{pin}}(\tilde{\mathbf{R}};\bar{\mathbf{R}})\) with respect to the tip coordinate. The Labusch criterion defines the situation where jumps of vortex tips appear for the first time in the isolated point \(\tilde{\mathbf{R}}_{m}\). Increasing the pinning strength, e.g., by decreasing the elasticity \(\bar{C}\) for a fixed pinning potential \(e_{p}(\mathbf{R})\) (alternatively, the pinning scale \(e_{p}\) could be increased at fixed \(\bar{C}\)) the condition (32) is satisfied on the boundary of a finite domain and we can define the unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\) through (see also Ref. [27]) \[\mathcal{U}_{\tilde{\mathbf{R}}}=\left\{\tilde{\mathbf{R}}\ \ |\ \ \lambda_{-}(\tilde{ \mathbf{R}})+\bar{C}\leq 0\right\}. 
\tag{37}\] Once the latter has been determined, the bistable domain \(\mathcal{B}_{\bar{\mathbf{R}}}\) follows straightforwardly from the force balance equation \[\bar{C}(\tilde{\mathbf{R}}-\bar{\mathbf{R}})=\mathbf{f}_{p}(\tilde{\mathbf{R}})=\mathbf{f}_{\mathrm{pin}}(\bar{\mathbf{R}}), \tag{38}\] i.e., [27] \[\mathcal{B}_{\bar{\mathbf{R}}}=\left\{\bar{\mathbf{R}}=\tilde{\mathbf{R}}-\mathbf{f}_{p}(\tilde{\mathbf{R}})/\bar{C}\ \ |\ \ \tilde{\mathbf{R}}\in\mathcal{U}_{\tilde{\mathbf{R}}}\right\}. \tag{39}\] In a final step, one then evaluates the energy jumps appearing at the boundary of \(\mathcal{B}_{\bar{\mathbf{R}}}\) and proper averaging produces the pinning force density \(\mathbf{F}_{\mathrm{pin}}\). Let us apply the above generalized formulation to the isotropic situation. Choosing cylindrical coordinates \((r,\varphi)\), the Hessian matrix \(\mathrm{H}_{ij}\) is already diagonal; close to the inflection point \(\tilde{R}_{m}\), where \(e_{p}^{\prime\prime\prime}(\tilde{R}_{m})=0\), the eigenvalues are \(\lambda_{-}(\tilde{R})=e_{p}^{\prime\prime}(\tilde{R})<0\) and \(\lambda_{+}(\tilde{R})=e_{p}^{\prime}(\tilde{R})/\tilde{R}>0\), producing results in line with our discussion above. ### Expansion near strong pinning onset With our focus on the strong pinning transition near \(\kappa(\tilde{\mathbf{R}}_{m})=1\), we can obtain quantitative results using the expansion of the pinning energy \(e_{\mathrm{pin}}(\tilde{\mathbf{R}};\bar{\mathbf{R}})\), Eq. (1), close to \(\tilde{\mathbf{R}}_{m}\), cf. Sec. II.4. Hence, we construct the Landau-type pinning energy corresponding to (29) for the case of an anisotropic pinning potential, i.e., we generalize (18) to the two-dimensional situation. 
When generalizing the strong pinning problem to the anisotropic situation, we are free to define local coordinate systems \((\tilde{u},\tilde{v})\) and \((\bar{u},\bar{v})\) in tip- and asymptotic space centered at \(\tilde{\mathbf{R}}_{m}\) and \(\bar{\mathbf{R}}_{m}\), where the latter is associated with \(\tilde{\mathbf{R}}_{m}\) through the force balance equation (38) in the original laboratory system. Furthermore, we fix our axes such that the unstable direction coincides with the \(u\)-axis, i.e., the eigenvector \(\mathbf{v}_{-}(\tilde{\mathbf{R}}_{m})\) associated with \(\lambda_{-}(\tilde{\mathbf{R}}_{m})\) points along \(u\); as a result, the mixed term \(\propto\tilde{u}\tilde{v}\) is absent from the expansion. Keeping all potentially relevant terms up to fourth order in \(\tilde{u}\) and \(\tilde{v}\) in the expansion, we then have to deal with an expression of the form \[e_{\text{pin}}(\tilde{\mathbf{R}};\bar{\mathbf{R}})=\frac{\bar{C} +\lambda_{-}}{2}\,\tilde{u}^{2}+\frac{\bar{C}+\lambda_{+}}{2}\,\tilde{v}^{2}- \bar{C}\,\bar{u}\tilde{u}-\bar{C}\,\bar{v}\tilde{v}\] \[\quad+\frac{a}{2}\,\tilde{u}\tilde{v}^{2}+\frac{a^{\prime}}{2}\, \tilde{u}^{2}\tilde{v}+\frac{b^{\prime}}{6}\,\tilde{u}^{3}+\frac{b^{\prime \prime}}{6}\,\tilde{v}^{3} \tag{40}\] \[\quad+\frac{\alpha}{4}\,\tilde{u}^{2}\tilde{v}^{2}+\frac{\beta}{ 6}\,\tilde{u}^{3}\tilde{v}+\frac{\beta^{\prime\prime}}{6}\,\tilde{u}\tilde{v}^ {3}+\frac{\gamma}{24}\,\tilde{u}^{4}+\frac{\gamma^{\prime\prime}}{24}\,\tilde{ v}^{4},\] with \(\lambda_{\pm}=\lambda_{\pm}(\tilde{\mathbf{R}}_{m})\), \[\tilde{\mathbf{R}}=\tilde{\mathbf{R}}_{m}+\delta\tilde{\mathbf{ R}},\quad\delta\tilde{\mathbf{R}}=(\tilde{u},\tilde{v}), \tag{41}\] \[\bar{\mathbf{R}}=\bar{\mathbf{R}}_{m}+\delta\bar{\mathbf{R}}, \quad\delta\bar{\mathbf{R}}=(\bar{u},\bar{v}),\] and coefficients given by the corresponding derivatives of \(e_{p}(\mathbf{R})\), e.g., 
\(a\equiv\partial_{u}\partial_{v}^{2}e_{p}(\mathbf{R})|_{\tilde{\mathbf{R}}_{m}}\),..., \(\gamma^{\prime\prime}\equiv\partial_{v}^{4}e_{p}(\mathbf{R})|_{\tilde{\mathbf{R}}_{m}}\). As we are going to see, the primed terms in this expansion vanish due to the condition of a minimal Hessian determinant at the onset of strong pinning, while double-primed terms will turn out irrelevant to leading order in the small distortions \(\tilde{u}\) and \(\tilde{v}\). The first term in (40) drives the strong pinning transition as it changes sign when \(\lambda_{-}=-\bar{C}\). Making use of the Labusch parameter \(\kappa_{m}\) defined in (34), we can replace (see also (18)) \[\bar{C}+\lambda_{-}\to\bar{C}(1-\kappa_{m}). \tag{42}\] In our further considerations below, the quantity \(\kappa_{m}-1\ll 1\) acts as the small parameter; it assumes the role of the distance \(1-T/T_{c}\) to the critical point in the Landau expansion of a thermodynamic phase transition. The second term in (40) stabilizes the theory along the \(v\) direction as \(\bar{C}+\lambda_{+}>0\) close to the Labusch point, while the sign of the cubic term \(a\,\tilde{u}\tilde{v}^{2}/2\) determines the direction of the instability along \(u\), i.e., to the right (\(a>0\)) or left (\(a<0\)). The quartic terms \(\propto\alpha,\gamma>0\) bound the pinning energy at large distances, while the term \(\propto\beta\) determines the skew angle in the shape of the unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\), see below. Finally, we have used the force balance equation (38) in the derivation of the driving terms \(\bar{C}\,\bar{u}\tilde{u}\) and \(\bar{C}\,\bar{v}\tilde{v}\). 
The parameters in (40) are constrained by the requirement of a minimal determinant \(D(\tilde{\mathbf{R}})\) at the strong pinning onset \(\tilde{\mathbf{R}}=\tilde{\mathbf{R}}_{m}\) and \(\kappa_{m}=1\), i.e., its gradient has to vanish, \[\nabla_{\tilde{\mathbf{R}}}\,D(\tilde{\mathbf{R}})\big{|}_{\tilde{\mathbf{R} }_{m}}=0, \tag{43}\] and its Hessian \(\text{Hess}[D(\tilde{\mathbf{R}})]\) has to satisfy the relations \[\det\big{[}\text{Hess}\big{[}D(\tilde{\mathbf{R}})\big{]}\big{]} \big{|}_{\tilde{\mathbf{R}}_{m}} >0, \tag{44}\] \[\text{tr}\big{[}\text{Hess}\big{[}D(\tilde{\mathbf{R}})\big{]} \big{]}\big{|}_{\tilde{\mathbf{R}}_{m}} >0. \tag{45}\] Making use of the expansion (40), the determinant \(D(\tilde{\mathbf{R}})\) reads \[D(\tilde{\mathbf{R}})=\big{\{}[\partial_{\tilde{u}}^{2}e_{\text{pin}}][ \partial_{\tilde{v}}^{2}e_{\text{pin}}]-[\partial_{\tilde{u}}\partial_{\tilde {v}}e_{\text{pin}}]^{2}\big{\}}_{\tilde{\mathbf{R}}} \tag{46}\] with \[\partial_{\tilde{u}}^{2}e_{\text{pin}}=\bar{C}\,(1-\kappa_{m})+a ^{\prime}\tilde{v}+b^{\prime}\tilde{u}+\alpha\tilde{v}^{2}/2+\beta\tilde{u} \tilde{v}+\gamma\tilde{u}^{2}/2,\] \[\partial_{\tilde{v}}^{2}e_{\text{pin}}=\bar{C}+\lambda_{+}+a \tilde{u}+b^{\prime\prime}\tilde{v}+\alpha\tilde{u}^{2}/2+\beta^{\prime\prime} \tilde{u}\tilde{v}+\gamma^{\prime\prime}\tilde{v}^{2}/2,\] \[\partial_{\tilde{u}}\partial_{\tilde{v}}e_{\text{pin}}=a\tilde{v }+a^{\prime}\tilde{u}+\alpha\tilde{u}\tilde{v}+\beta\tilde{u}^{2}/2+\beta^{ \prime\prime}\tilde{v}^{2}/2,\] and produces the gradient \[\nabla_{\tilde{\mathbf{R}}}\,D(\tilde{\mathbf{R}})\Big{|}_{\tilde{\mathbf{R}}_ {m}}=(\bar{C}+\lambda_{+})(b^{\prime},a^{\prime}), \tag{47}\] hence the primed parameters indeed vanish, \(a^{\prime}=0\) and \(b^{\prime}=0\). 
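The vanishing of the primed coefficients, and the form of the Hessian of \(D\) quoted in (48)-(49) below, can be cross-checked by finite differences on the determinant built from the second derivatives listed above; all parameter values in this sketch are arbitrary illustrative numbers:

```python
# Finite-difference check of Eqs. (47)-(49): build D = e_uu*e_vv - e_uv^2 from the
# second derivatives of the expansion (40) at kappa_m = 1 and probe it at the origin.
Cb, lp = 1.0, 0.7                       # Cbar and lambda_+
a, al, be, ga = 0.6, 0.9, 0.3, 1.1      # a, alpha, beta, gamma
b2, be2, ga2 = 0.4, 0.2, 0.5            # b'', beta'', gamma''

def D(u, v, a1=0.0, b1=0.0):            # a1, b1 play the role of the primed a', b'
    euu = a1 * v + b1 * u + al * v * v / 2 + be * u * v + ga * u * u / 2
    evv = Cb + lp + a * u + b2 * v + al * u * u / 2 + be2 * u * v + ga2 * v * v / 2
    euv = a * v + a1 * u + al * u * v + be * u * u / 2 + be2 * v * v / 2
    return euu * evv - euv * euv

h = 1e-3
# gradient (47): with a', b' nonzero it equals (Cbar + lambda_+) * (b', a') ...
Du = (D(h, 0, 0.25, 0.15) - D(-h, 0, 0.25, 0.15)) / (2 * h)
Dv = (D(0, h, 0.25, 0.15) - D(0, -h, 0.25, 0.15)) / (2 * h)
# ... so a minimal determinant forces a' = b' = 0; then Hess[D] follows (48)-(49):
Duu = (D(h, 0) - 2 * D(0, 0) + D(-h, 0)) / h**2
Dvv = (D(0, h) - 2 * D(0, 0) + D(0, -h)) / h**2
Duv = (D(h, h) - D(h, -h) - D(-h, h) + D(-h, -h)) / (4 * h**2)
delta = al - 2 * a * a / (Cb + lp)      # Eq. (49)
print(Du, Dv, Duu, Duv, Dvv)            # vs (Cb+lp)*b', (Cb+lp)*a', and (Cb+lp)*[gamma, beta, delta]
```

Note that the double-primed coefficients drop out of both the gradient and the Hessian of \(D\) at the origin, consistent with their irrelevance to leading order.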
The Hessian then takes the form \[\text{Hess}\big{[}D(\tilde{\mathbf{R}})\big{]}\Big{|}_{\tilde{\mathbf{R}}_{m} }=(\bar{C}+\lambda_{+})\begin{bmatrix}\gamma&\beta\\ \beta&\delta\end{bmatrix} \tag{48}\] at the Labusch point \(\kappa_{m}=1\), where we have introduced the parameter \[\delta\equiv\alpha-\frac{2a^{2}}{\bar{C}}\frac{1}{1+\lambda_{+}/\bar{C}}. \tag{49}\] The stability conditions (44) and (45) translate, respectively, to \[\gamma\delta-\beta^{2}>0 \tag{50}\] (implying \(\delta>0\)) and \[\gamma+\delta>0. \tag{51}\] The Landau-type theory (40) involves the two 'order parameters' \(\tilde{u}\) and \(\tilde{v}\) and is driven by the dual coordinates \(\bar{u}\) and \(\bar{v}\). This \(n=2\) theory involves a soft order parameter \(\tilde{u}\) and the stiff \(\tilde{v}\), allowing us to integrate out \(\tilde{v}\) and reformulate the problem as an effective one-dimensional Landau theory of the van der Waals kind--the way of solving the strong pinning problem near onset in this 1D formulation is presented in Appendix A.1. ### Unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\) Next, we determine the unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\) in tip space as defined in (37). We will find that, up to quadratic order, the boundary of \(\mathcal{U}_{\tilde{\mathbf{R}}}\) has the shape of an ellipse with the semiaxis lengths scaling as \(\sqrt{\kappa_{m}-1}\). #### iii.3.1 Jump line \(\mathcal{J}_{\tilde{\mathbf{R}}}\) We find the unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\) by determining its boundary \(\partial\mathcal{U}_{\tilde{\mathbf{R}}}\) that is given by the set of jump positions \(\tilde{\mathbf{R}}_{\text{jp}}\) making up the jump line \(\mathcal{J}_{\tilde{\mathbf{R}}}\). The boundary \(\partial\mathcal{U}_{\tilde{\mathbf{R}}}\) is determined by the condition \(\bar{C}+\lambda_{-}=0\) or, equivalently, the vanishing of the determinant \[D(\tilde{\mathbf{R}}_{\text{jp}})\equiv 0. 
\tag{52}\] The latter condition guarantees the existence of an unstable direction parallel to the eigenvector \(\mathbf{v}_{-}(\tilde{\mathbf{R}}_{\mathrm{jp}})\) associated with the eigenvalue \(\lambda_{-}(\tilde{\mathbf{R}}_{\mathrm{jp}})\) where the energy (40) turns flat, cf. our discussion in Sec. II.2. The edges of the unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\) therefore correspond to a line of inflection points in \(e_{\mathrm{pin}}(\tilde{\mathbf{R}};\tilde{\mathbf{R}})\) along which one of the bistable tip configurations of the force balance equation (38) coalesces with the unstable solution. Near the onset of strong pinning, the unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\) is closely confined around the point \(\tilde{\mathbf{R}}_{m}\) where \(\mathbf{v}_{-}(\tilde{\mathbf{R}}_{m})\parallel\tilde{\mathbf{u}}\). The unstable direction \(\mathbf{v}_{-}(\tilde{\mathbf{R}}_{\mathrm{jp}})\) is therefore approximately homogeneous within the unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\) and is parallel to the \(u\) axis. This fact will be of importance later, when determining the topological properties of the unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\). Inspection of the condition (52) with \(D(\tilde{\mathbf{R}})\) given by Eq. (46) shows that the components of \(\delta\tilde{\mathbf{R}}_{\mathrm{jp}}\) scale as \(\sqrt{\kappa_{m}-1}\): in the product \([\partial_{\tilde{u}}^{2}e_{\mathrm{pin}}][\partial_{\tilde{v}}^{2}e_{\mathrm{ pin}}]\), the first factor involves the small constant \(\bar{C}(1-\kappa_{m})\) plus quadratic terms (as \(a^{\prime}=0\) and \(b^{\prime}=0\)), while the second factor comes with the large constant \(\bar{C}+\lambda_{+}\) plus corrections. The leading term in \([\partial_{\tilde{u}}\partial_{\tilde{v}}e_{\mathrm{pin}}]\) is linear in \(\tilde{v}\) with the remaining terms providing corrections. 
To leading order, the condition of vanishing determinant then produces the quadratic form \[[\gamma\,\tilde{u}^{2}+2\beta\,\tilde{u}\tilde{v}+\delta\,\tilde{v}^{2}]_{ \tilde{\mathbf{R}}_{\mathrm{jp}}}=2\bar{C}\left(\kappa_{m}-1\right). \tag{53}\] With \(\gamma\) and \(\delta\) positive, this form is associated with an elliptic geometry of extent \(\propto\sqrt{\kappa_{m}-1}\). For later convenience, we rewrite Eq. (53) in matrix form \[\delta\tilde{\mathbf{R}}_{\mathrm{jp}}^{\mathrm{T}}M_{\mathrm{jp}}\,\delta \tilde{\mathbf{R}}_{\mathrm{jp}}=\bar{C}(\kappa_{m}-1) \tag{54}\] with \[M_{\mathrm{jp}}=\begin{bmatrix}\gamma/2&\beta/2\\ \beta/2&\delta/2\end{bmatrix} \tag{55}\] and \(\det M_{\mathrm{jp}}=(\gamma\delta-\beta^{2})/4>0\), see Eq. (50). The jump line \(\mathcal{J}_{\tilde{\mathbf{R}}}\) can be expressed in the parametric form \[\begin{split}\tilde{u}_{\mathrm{jp}}(|\tilde{v}|<\tilde{v}_{c})& =-\frac{1}{\gamma}\Big{[}\beta\tilde{v}\\ &\pm\sqrt{2\gamma\bar{C}(\kappa_{m}-1)-(\gamma\delta-\beta^{2}) \tilde{v}^{2}}\Big{]},\end{split} \tag{56}\] with \[\tilde{v}_{c}=\sqrt{2\gamma\,\bar{C}(\kappa_{m}-1)/(\gamma\delta-\beta^{2})} \tag{57}\] and is shown in Fig. 6 for the example of an anisotropic potential inspired by the uniaxial defect in Sec. IV with 10 % anisotropy. The associated unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\) assumes a compact elliptic shape, with the parameter \(\beta\) describing the ellipse's skew. Comparing with the isotropic defect, this ellipse assumes the role of the ring bounded by solid lines in Fig. 5(c), see Sec. III.5 for a discussion of its different topology. 
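A quick numeric consistency check (with illustrative coefficients satisfying the stability conditions (50) and (51)) confirms that the parametrization (56) traces out the quadratic form (53) and closes at \(\tilde{v}=\pm\tilde{v}_{c}\):

```python
import math

# Consistency check of the parametric jump line (56) against the quadratic form (53);
# gamma, beta, delta, Cbar and kappa_m - 1 are illustrative numbers.
ga, be, de = 1.2, 0.3, 0.8
Cb, dk = 1.0, 0.05                        # Cbar and kappa_m - 1
v_c = math.sqrt(2 * ga * Cb * dk / (ga * de - be * be))     # Eq. (57)

def u_jp(v, sign):
    rad = 2 * ga * Cb * dk - (ga * de - be * be) * v * v
    return -(be * v + sign * math.sqrt(rad)) / ga           # Eq. (56)

def quad_form(u, v):                      # left-hand side of (53)
    return ga * u * u + 2 * be * u * v + de * v * v

res = max(abs(quad_form(u_jp(0.999 * v_c * k / 9, s), 0.999 * v_c * k / 9)
              - 2 * Cb * dk)
          for k in range(-9, 10) for s in (+1, -1))
print("endpoint v_c =", round(v_c, 4), " max residual of (53):", res)
```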
An additional result of the above discussion concerns the terms that we need to keep in the expansion of the pinning energy (40): indeed, dropping corrections amounts to dropping terms with double-primed coefficients and we find that the simplified expansion \[e_{\mathrm{pin}}(\tilde{\mathbf{R}};\bar{\mathbf{R}})=\frac{ \bar{C}}{2}(1-\kappa_{m})\,\tilde{u}^{2}+\frac{\bar{C}+\lambda_{+}}{2}\,\tilde {v}^{2}+\frac{a}{2}\,\tilde{u}\tilde{v}^{2}\] \[\quad+\frac{\alpha}{4}\,\tilde{u}^{2}\tilde{v}^{2}+\frac{\beta}{6} \,\tilde{u}^{3}\tilde{v}+\frac{\gamma}{24}\,\tilde{u}^{4}-\bar{C}\,\bar{u} \tilde{u}-\bar{C}\,\bar{v}\tilde{v} \tag{58}\] produces all of our desired results to leading order. #### iii.3.2 Landing line \(\mathcal{L}_{\tilde{\mathbf{R}}}\) We find the landing positions \(\tilde{\mathbf{R}}_{\mathrm{lp}}\) by extending the discussion of the isotropic situation in Sec. II.4 to two dimensions: we shift the origin of the expansion (58) to the jump point \(\tilde{\mathbf{R}}_{\mathrm{jp}}\) and find the landing point \(\tilde{\mathbf{R}}_{\mathrm{jp}}+\Delta\tilde{\mathbf{R}}\) by minimizing the total energy \(e_{\mathrm{pin}}(\Delta\tilde{\mathbf{R}})\) at the landing position. Below, we use \(\Delta\tilde{\mathbf{R}}\) both as a variable and as the jump distance to avoid introducing more coordinates. We exploit the differential properties of \(e_{\mathrm{pin}}\) at the jump and landing positions. 
At landing, \(e_{\mathrm{pin}}(\tilde{\mathbf{R}}_{\mathrm{jp}}+\Delta\tilde{\mathbf{R}})\) has a minimum, hence, the configuration is force free, in particular along \(\tilde{v}\), \[\partial_{\tilde{v}}e_{\mathrm{pin}}(\tilde{\mathbf{R}}_{\mathrm{ jp}}+\Delta\tilde{\mathbf{R}})\approx[\partial_{\tilde{v}}\partial_{\tilde{u}}e_{ \mathrm{pin}}]_{\tilde{\mathbf{R}}_{\mathrm{jp}}}\Delta\tilde{u}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad+[\partial_{\tilde{v}}^ {2}e_{\mathrm{pin}}]_{\tilde{\mathbf{R}}_{\mathrm{jp}}}\Delta\tilde{v}=0,\] from which we find that \(\Delta\tilde{u}\) and \(\Delta\tilde{v}\) are related via \[\Delta\tilde{v}\approx-\frac{[\partial_{\tilde{v}}\partial_{\tilde{u}}e_{ \mathrm{pin}}]_{\tilde{\mathbf{R}}_{\mathrm{jp}}}}{[\partial_{\tilde{v}}^{2}e _{\mathrm{pin}}]_{\tilde{\mathbf{R}}_{\mathrm{jp}}}}\Delta\tilde{u}. \tag{59}\] Here, we have dropped higher order terms in the expansion, assuming that the jump is mainly directed along the unstable \(u\)-direction--indeed, using the expansion (58), we find that \[\Delta\tilde{v}\approx-\frac{a\tilde{v}_{\mathrm{jp}}}{\bar{C}+\lambda_{+}} \,\Delta\tilde{u}\propto\sqrt{\kappa_{m}-1}\,\Delta\tilde{u}. \tag{60}\] Note that we cannot interchange the roles of \(\tilde{u}\) and \(\tilde{v}\) in this force analysis, as higher order terms in the expression for the force along \(\tilde{u}\) cannot be dropped. At the jump position \(\tilde{\mathbf{R}}_{\mathrm{jp}}\), the state is force-free, i.e., the derivatives \([\partial_{\tilde{u}}e_{\mathrm{pin}}]_{\tilde{\mathbf{R}}_{\mathrm{jp}}}\) and \([\partial_{\tilde{v}}e_{\mathrm{pin}}]_{\tilde{\mathbf{R}}_{\mathrm{jp}}}\) vanish, and the Hessian determinant vanishes as well. 
Therefore, the expansion of \(e_{\mathrm{pin}}(\tilde{\mathbf{R}}_{\mathrm{jp}}+\Delta\tilde{\mathbf{R}})\) has no linear terms and the second order terms \([\partial_{\tilde{u}}^{2}e_{\mathrm{pin}}]_{\tilde{\mathbf{R}}_{\mathrm{jp}}} \Delta\tilde{u}^{2}/2+[\partial_{\tilde{u}}\partial_{\tilde{v}}e_{\mathrm{ pin}}]_{\tilde{\mathbf{R}}_{\mathrm{jp}}}\Delta\tilde{u}\Delta\tilde{v}+[ \partial_{\tilde{v}}^{2}e_{\mathrm{pin}}]_{\tilde{\mathbf{R}}_{\mathrm{jp}}} \Delta\tilde{v}^{2}/2\), combined with Eq. (59), can be expressed through the Hessian determinant, \(\{[\partial_{\tilde{u}}^{2}e_{\mathrm{pin}}][\partial_{\tilde{v}}^{2}e_{ \mathrm{pin}}]-[\partial_{\tilde{u}}\partial_{\tilde{v}}e_{\mathrm{pin}}]^{2} \}_{\tilde{\mathbf{R}}_{\mathrm{jp}}}\Delta\tilde{u}^{2}/2\), which vanishes as well. Therefore, the expansion of \(e_{\mathrm{pin}}\) around \(\tilde{\mathbf{R}}_{\mathrm{jp}}\) starts at third order in \(\Delta\tilde{\mathbf{R}}\approx(\Delta\tilde{u},0)\) and takes the form (we make use of (60), dropping terms \(\propto\Delta\tilde{v}\) and a constant) \[e_{\mathrm{pin}}(\tilde{\mathbf{R}}_{\mathrm{jp}}+\Delta\tilde{\mathbf{R}}) \approx\frac{1}{6}\big{(}\gamma\tilde{u}_{\mathrm{jp}}+\beta\tilde{v}_{ \mathrm{jp}}\big{)}\Delta\tilde{u}^{3}+\frac{\gamma}{24}\Delta\tilde{u}^{4}. \tag{61}\] Minimizing this expression with respect to \(\Delta\tilde{u}\) (as \(e_{\mathrm{pin}}\) is minimal at the landing position \(\tilde{\mathbf{R}}_{\mathrm{lp}}\)), we obtain the result \[\Delta\tilde{u}\approx-3(\gamma\tilde{u}_{\mathrm{jp}}+\beta\tilde{v}_{ \mathrm{jp}})/\gamma. 
\tag{62}\] Making use of the quadratic form (54), we can show that the equation for the landing position \(\tilde{\mathbf{R}}_{\mathrm{lp}}=\tilde{\mathbf{R}}_{\mathrm{jp}}+\Delta\tilde{ \mathbf{R}}\) can be cast into a similar quadratic form (with \(\delta\tilde{\mathbf{R}}_{\mathrm{lp}}\) measured relative to \(\tilde{\mathbf{R}}_{\mathrm{m}}\)) \[\delta\tilde{\mathbf{R}}_{\mathrm{lp}}^{\mathrm{T}}M_{\mathrm{lp}}\,\delta \tilde{\mathbf{R}}_{\mathrm{lp}}=\bar{C}(\kappa_{m}-1), \tag{63}\] but with the landing matrix now given by \[M_{\mathrm{lp}}=\frac{1}{4}M_{\mathrm{jp}}+\begin{bmatrix}0&0\\ 0&\frac{3}{4}\Big{(}\frac{\delta}{2}-\frac{\beta^{2}}{2\gamma}\Big{)}\end{bmatrix}. \tag{64}\] In the following, we will refer to the solutions of Eq. (63) as the 'landing' or 'stable' ellipse \(\mathcal{L}_{\tilde{\mathbf{R}}}\) and write the jump distance in a parametric form involving the shape \(\tilde{u}_{\mathrm{jp}}(\tilde{v})\) in Eq. (56) of the jumping ellipse, \[\Delta\tilde{u}(\tilde{v}) =-3\left[\gamma\,\tilde{u}_{\mathrm{jp}}(\tilde{v})+\beta\,\tilde{ v}\right]/\gamma, \tag{65}\] \[\Delta\tilde{v}(\tilde{v}) =-\left[a/(\bar{C}+\lambda_{+})\right]\,\tilde{v}\,\Delta\tilde{u} (\tilde{v}). \tag{66}\] The landing line derived from (63) is displayed as a dashed line in Fig. 6. Two tip jumps connected by an arrow are shown for illustration, with solid dots marking the jump position \(\tilde{\mathbf{R}}_{\mathrm{jp}}\) of the tip and open dots its landing position \(\tilde{\mathbf{R}}_{\mathrm{lp}}\); they describe tip jumps for a vortex approaching the unstable ellipse once from the left (upper pair) and another time from the right (lower pair). The different topologies associated with jumps and landing showing up for the isotropic defect in Fig. 5(c) (two concentric circles) and for the generic onset in Fig. 6 (two touching ellipses) will be discussed later. 
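The landing matrix (64) can be verified directly: to leading order (jump along \(u\), dropping the subleading \(\Delta\tilde{v}\) of Eq. (60)), the points \(\tilde{u}_{\mathrm{jp}}+\Delta\tilde{u}\) obtained from (56) and (62) satisfy the quadratic form (63). A sketch with illustrative parameters:

```python
import math

# Check that the landing points R_lp = R_jp + (Delta u, 0), with Delta u from (62)
# and Delta v dropped (subleading by Eq. (60)), satisfy the quadratic form (63)
# with the landing matrix (64); parameters are illustrative.
ga, be, de = 1.2, 0.3, 0.8
Cb, dk = 1.0, 0.05
v_c = math.sqrt(2 * ga * Cb * dk / (ga * de - be * be))

def u_jp(v, s):                                  # jump line, Eq. (56)
    rad = 2 * ga * Cb * dk - (ga * de - be * be) * v * v
    return -(be * v + s * math.sqrt(rad)) / ga

def u_lp(v, s):                                  # landing point, Eqs. (62) and (56)
    return u_jp(v, s) - 3 * (ga * u_jp(v, s) + be * v) / ga

Mvv = de / 8 + 0.75 * (de / 2 - be * be / (2 * ga))   # (M_lp)_vv from Eq. (64)
def landing_form(u, v):                          # left-hand side of (63)
    return (ga / 8) * u * u + (be / 4) * u * v + Mvv * v * v

worst = max(abs(landing_form(u_lp(0.999 * v_c * k / 9, s), 0.999 * v_c * k / 9)
                - Cb * dk)
            for k in range(-9, 10) for s in (+1, -1))
print("max residual of (63):", worst)
```

At \(\tilde{v}=0\) one finds \(\tilde{u}_{\mathrm{lp}}=-2\tilde{u}_{\mathrm{jp}}\), i.e., the doubled semiaxis of the stable ellipse along \(u\).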
Inspecting the matrix equation (63), we can gain several insights on the landing ellipse \(\mathcal{L}_{\tilde{\mathbf{R}}}\): (i) the matrix \(M_{\mathrm{jp}}/4\) on the right-hand side of (64) corresponds to an ellipse with the same geometry as for \(\mathcal{J}_{\tilde{\mathbf{R}}}\) but double in size, (ii) the remaining matrix with vanishing off-diagonal and \(M_{uu}\) entries leaves the size doubling of the stable ellipse \(\mathcal{L}_{\tilde{\mathbf{R}}}\) at \(\tilde{v}=0\) unchanged, and (iii) the finite \(M_{vv}\) component exactly counterbalances the doubling along the \(v\)-direction encountered in (i), cf. the definition (55) of \(M_{\mathrm{jp}}\), up to a term proportional to the skew parameter \(\beta\) accounting for deviations of the semiaxis from the \(v\)-axis. Altogether, the stable ellipse \(\mathcal{L}_{\tilde{\mathbf{R}}}\) extends with a double width along the \(u\)-axis and smoothly overlaps with the unstable ellipse at the two contact points \(\tilde{v}_{c,\pm}\). The latter are found by imposing the condition \(\Delta\tilde{u}=\Delta\tilde{v}=0\) in Eqs. (65) and (66); we find them located (relative to \(\tilde{\mathbf{R}}_{m}\)) at \[\delta\tilde{\mathbf{R}}_{c,\pm}=\pm\left(-\beta/\gamma,1\right)\,\tilde{v}_{c}, \tag{67}\] with the endpoint coordinate \(\tilde{v}_{c}\) given in Eq. (57), and mark them with crosses in Fig. 6. As anticipated, the contact points are offset with respect to the \(v\)-axis for a finite skew parameter \(\beta\). At these points, the unstable and the stable tip configurations coincide and the vortex tip undergoes no jump. Furthermore, the vector tangent to the jump (or landing) ellipse is parallel to the \(u\)-direction at the contact points. 
To see this, we consider (56) and find that \[\frac{\partial\tilde{u}}{\partial\tilde{v}}\Big{|}_{\tilde{v}\to\pm\tilde{v}_{c}}\approx\pm\frac{\sqrt{\gamma\delta-\beta^{2}}}{\gamma}\,\frac{\tilde{v}}{\sqrt{\tilde{v}_{c}^{2}-\tilde{v}^{2}}}\to\pm\infty, \tag{68}\] hence, the corresponding tangents \(\partial_{\tilde{u}}\tilde{v}\) vanish. The asymptotic positions \(\bar{\mathbf{R}}\) where the vortex tips jump and land belong to the boundary of the bistable region \(\mathcal{B}_{\bar{\mathbf{R}}}\); for the isotropic case in Fig. 5(d) these correspond to the circles with radii \(\bar{R}_{-}\) (pinning) and \(\bar{R}_{+}\) (depinning) with jump and landing radii \(\tilde{R}_{\rm f-}(\bar{R}_{-})\) and \(\tilde{R}_{\rm p-}(\bar{R}_{-})\) and \(\tilde{R}_{\rm p+}(\bar{R}_{+})\) and \(\tilde{R}_{\rm f+}(\bar{R}_{+})\), respectively, see Fig. 5(c). For the anisotropic defect, we have only a single jump/landing event at one asymptotic position \(\bar{\bf R}\) that we are going to determine in the next section. ### Bistable domain \(\mathcal{B}_{\bar{\bf R}}\) The set of asymptotic positions \(\bar{\bf R}\) corresponding to the tip positions \(\tilde{\bf R}_{\rm jp}\) along the edges of \(\mathcal{U}_{\tilde{\bf R}}\) forms the boundary \(\partial\mathcal{B}_{\bar{\bf R}}\) of the bistable domain \(\mathcal{B}_{\bar{\bf R}}\); they are related through the force-balance equation (38), with every vortex tip position \(\tilde{\bf R}_{\rm jp}\in\partial\mathcal{U}_{\tilde{\bf R}}\) defining an associated asymptotic position \(\bar{\bf R}(\tilde{\bf R}_{\rm jp})\in\partial\mathcal{B}_{\bar{\bf R}}\). At the onset of strong pinning, the bistable domain corresponds to the isolated point \(\bar{\bf R}_{m}\), related to \(\tilde{\bf R}_{m}\) through (38). 
Beyond the Labusch point, \(\mathcal{B}_{\bar{\bf R}}\) expands out of \(\bar{\bf R}_{m}\) and its geometry is found by evaluating the force balance equation (38) at a given tip position \(\tilde{\bf R}_{\rm jp}\in\partial\mathcal{U}_{\tilde{\bf R}}\), \(\bar{\bf R}(\tilde{\bf R}_{\rm jp})=\tilde{\bf R}_{\rm jp}-{\bf f}_{\rm p}(\tilde{ \bf R}_{\rm jp})/\bar{C}\in\partial\mathcal{B}_{\bar{\bf R}}\). Using the expansion (58) for \(e_{\rm pin}(\tilde{\bf R};\bar{\bf R})\), this force equation can be expressed as \(\nabla_{\tilde{\bf R}}\,e_{\rm pin}(\tilde{\bf R};\bar{\bf R})\big{|}_{\tilde{\bf R}_{\rm jp}}=0\), or explicitly (we remind that we measure \(\bar{\bf R}=\bar{\bf R}_{m}+(\bar{u},\bar{v})\) relative to \(\bar{\bf R}_{m}\)), \[\bar{C}\bar{u} =\bar{C}(1-\kappa_{m})\tilde{u}+\frac{a}{2}\tilde{v}^{2}+\frac{ \gamma}{6}\tilde{u}^{3}+\frac{\beta}{2}\tilde{u}^{2}\tilde{v}+\frac{\alpha}{2} \tilde{u}\tilde{v}^{2},\] \[\bar{C}\bar{v} =(\bar{C}+\lambda_{+})\tilde{v}+a\,\tilde{u}\tilde{v}+\frac{\beta }{6}\tilde{u}^{3}+\frac{\alpha}{2}\tilde{u}^{2}\tilde{v}. \tag{69}\] Inserting the results for the jump ellipse \(\mathcal{J}_{\tilde{\bf R}}\), Eq. (56), into Eqs. (69), we find the crescent-shaped bistable domain \(\mathcal{B}_{\bar{\bf R}}\) shown in Fig. 7; let us briefly derive the origin of this shape. Solving (69) to leading order, \(\bar{C}\bar{u}^{(0)}\approx(a/2)\tilde{v}^{2}\) and \(\bar{C}\bar{v}^{(0)}\approx(\bar{C}+\lambda_{+})\tilde{v}\), we find the parabolic approximation \[\bar{u}^{(0)}\approx\frac{a}{2\bar{C}}\frac{1}{(1+ \lambda_{+}/\bar{C})^{2}}\,\bar{v}^{(0)\,2}, \tag{70}\] showing that the extent of \(\mathcal{B}_{\bar{\bf R}}\) scales as \((\kappa_{m}-1)\) along \(\bar{u}\) and \(\propto(\kappa_{m}-1)^{1/2}\) along \(\bar{v}\), i.e., we find a flat parabola opening towards positive \(\bar{u}\) for \(a>0\), see Fig. 7. 
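On the symmetry line \(\bar{v}=0\) (and for \(\beta=0\)), the force balance derived from (58) reduces to the one-dimensional cubic \(-\bar{C}(\kappa_{m}-1)\tilde{u}+(\gamma/6)\tilde{u}^{3}=\bar{C}\bar{u}\), whose root count makes the bistability inside the crescent explicit; the half-width \(w=(2/3)(\kappa_{m}-1)\sqrt{2\bar{C}(\kappa_{m}-1)/\gamma}\) used below follows from evaluating (69) on the jump ellipse at \(\tilde{v}=0\). A sketch with illustrative numbers:

```python
import math

# Root count of the 1D force balance -Cbar(kappa_m - 1) u + (gamma/6) u^3 = Cbar*ubar
# on the symmetry line (beta = 0, vbar = 0); illustrative parameters.
Cb, ga, dk = 1.0, 1.2, 0.05            # Cbar, gamma, kappa_m - 1

def tip_solutions(ubar):
    f = lambda u: -Cb * dk * u + (ga / 6) * u ** 3 - Cb * ubar
    us = [-1.0 + 2e-4 * i + 7.3e-5 for i in range(10001)]  # offset grid avoids roots on nodes
    return sum(1 for u0, u1 in zip(us, us[1:]) if f(u0) * f(u1) < 0)

w = (2 * dk / 3) * math.sqrt(2 * Cb * dk / ga)   # half-width of the crescent at vbar = 0
print(tip_solutions(0.0), tip_solutions(0.5 * w), tip_solutions(2.0 * w))
# inside the crescent (|ubar| < w): 3 tip solutions (two stable, one unstable); outside: 1
```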
In order to find the width of \(\mathcal{B}_{\bar{\bf R}}\), we have to solve (69) to the next higher order, \(\bar{u}=\bar{u}^{(0)}+\bar{u}^{(1)}\); for \(\beta=0\), we find the correction \[\bar{u}^{(1)}=(1-\kappa_{m})\tilde{u}+\frac{\gamma}{6\bar{C}}\tilde{u}^{3}+\frac{\alpha}{2\bar{C}}\tilde{u}\tilde{v}^{2} \tag{71}\] that produces a \(\bar{v}\leftrightarrow-\bar{v}\) symmetric crescent. Inserting the two branches (56) of the jump ellipse, we arrive at the width of the crescent that scales as \((\kappa_{m}-1)^{3/2}\). The correction to \(\bar{v}\) is \(\propto(\kappa_{m}-1)\) and we find the closed form \[\bar{v}\approx\left[1+(\lambda_{+}+a\tilde{u})/\bar{C}\right]\tilde{v} \tag{72}\] with a small antisymmetric (in \(\tilde{u}\)) correction. For a finite \(\beta\neq 0\), the correction \(\bar{u}^{(1)}\) picks up an additional term \((\beta/2\bar{C})\,\tilde{u}^{2}\tilde{v}\) that breaks the \(\bar{v}\leftrightarrow-\bar{v}\) symmetry and the crescent is distorted. Viewing the boundary \(\partial\mathcal{B}_{\bar{\bf R}}\) as a parametric curve in the variable \(\tilde{v}\) with \(\tilde{u}=\tilde{u}_{\rm jp}(\tilde{v})\) given by Eq. (56), we obtain the boundary \(\partial\mathcal{B}_{\bar{\bf R}}\) in the form of two separate arcs that define the crescent-shaped domain \(\mathcal{B}_{\bar{\bf R}}\) in Fig. 7(a). The two arcs merge in two cusps at \(\bar{\bf R}_{c,\pm}\) that are associated with the touching points (67) in dual space and derive from Eqs. (69); measured with respect to \(\bar{\bf R}_{m}\), these cusps are located at \[\delta\bar{\mathbf{R}}_{c,\pm} =(\bar{u}_{c},\pm\bar{v}_{c}) \tag{73}\] \[\approx\left[\left(a/2\bar{C}\right)\,\tilde{v}_{c}^{2},\,\pm(1+\lambda_{+}/\bar{C})\tilde{v}_{c}\right].\] 
Figure 7: (a) Bistable domain \(\mathcal{B}_{\bar{\bf R}}\) in asymptotic \(\bar{\bf R}\)-space measured in units of \(\xi\); the same parameters as in Fig. 6 have been used. Note the different scaling of the axes in \(\kappa_{m}-1\); the right panel (b) shows \(\mathcal{B}_{\bar{\bf R}}\) in isotropic scales. The bistable domain \(\mathcal{B}_{\bar{\bf R}}\) is elongated along the transverse direction \(\bar{v}\) and narrow/bent along the unstable direction \(\bar{u}\), giving \(\mathcal{B}_{\bar{\bf R}}\) its peculiar crescent-like shape. The branch crossing line \(\bar{\bf R}_{0}\), see (77), is shown as a dashed black line. Black crosses mark the cusps of \(\mathcal{B}_{\bar{\bf R}}\) and are associated with the contact points of \(\mathcal{U}_{\tilde{\bf R}}\) through the force balance equation (38); they correspond to critical end-points in the thermodynamic Ising analogue, while the boundaries \(\partial\mathcal{B}_{\bar{\bf R}}\) map to spinodals. Blue and red colors identify different characters of vortex tip configurations as quantified through the 'order parameter' \(\tilde{u}\) of the Landau expansion (at \(\beta=0\)), see text, while magenta is associated with the bistable area \(\mathcal{B}_{\bar{\bf R}}\); the blue and red branches extend to the far side of the crescent and terminate in the blue and red colored boundaries \(\partial\mathcal{B}_{\bar{\bf R}}^{\rm p}\) and \(\partial\mathcal{B}_{\bar{\bf R}}^{\rm r}\), respectively. Thin horizontal lines show vortex trajectories that proceed smoothly in asymptotic space, see also Fig. 5(d). Blue and red dots mark the asymptotic positions associated with vortex tip jumps that happen at the exit of \(\mathcal{B}_{\bar{\bf R}}\); they correspond to the pairs of tip positions in Fig. 6. (b) Bistable domain \(\mathcal{B}_{\bar{\bf R}}\) in isotropic scaled coordinates \(\bar{u}\) and \(\bar{v}\) showing the 'true' shape of \(\mathcal{B}_{\bar{\bf R}}\). Vortices impacting on the bistable domain with an angle \(|\theta|\leq\theta^{*}\) undergo a single jump on the far side of \(\mathcal{B}_{\bar{\bf R}}\), with the pinning force density directed along \(u\) and scaling as \(F_{\rm pin}^{\parallel}\propto(\kappa-1)^{5/2}\). Vortices crossing \(\mathcal{B}_{\bar{\bf R}}\) at large angles close to \(\pi/2\) jump either never, once, or twice; at \(\theta=\pi/2\) the pinning force density is small, \(F_{\rm pin}^{\perp}\propto(\kappa-1)^{3}\), and directed along \(v\). 
The coloring in Fig. 7 indicates the characters 'red' and 'blue' of the vortex states; these are defined in terms of the 'order parameter' \(\tilde{u}-\tilde{u}_{m}(\bar{v})\) of the Landau functional (58) that changes sign at the branch crossing line Eq. (77), with the shift \[\tilde{u}_{m}(\bar{v})=-\frac{\beta}{\gamma}\tilde{v}(\bar{v})\approx-\frac{\beta}{\gamma}\frac{\bar{v}}{1+\lambda_{+}/\bar{C}}, \tag{74}\] reducing to \(\tilde{u}_{m}(\bar{v})=0\) for our symmetric case with \(\beta=0\) in Fig. 7. Going beyond the cusps (or critical points) at \(\bar{\mathbf{R}}_{c,\pm}\), the two states smoothly cross over between 'red' and 'blue' (indicated by the smooth blue-white-red transition), as known for the van der Waals gas (or Ising magnet) above the critical point. Within the bistable region \(\mathcal{B}_{\bar{\mathbf{R}}}\), both 'red' and 'blue' states coexist and we color this region in magenta. The geometry of the bistable domain \(\mathcal{B}_{\bar{\mathbf{R}}}\) is very different from the ring-shaped geometry of the isotropic problem discussed in Sec. II.1, see Fig. 5(d); in the discussion of the uniaxial anisotropic defect below, we will learn how these two geometries are interrelated. Comparing the overall dimensions of the crescent with the ring in Fig. 
5(d), we find the following scaling behavior in \(\kappa_{m}-1\): while the crescent \(\mathcal{B}_{\bar{\mathbf{R}}}\) grows along \(\bar{v}\) as \((\kappa_{m}-1)^{1/2}\), the isotropic ring involves the characteristic size \(\xi\) of the defect, \(\bar{R}_{-}\sim\xi\), and hence its extension along \(\bar{v}\) is a constant. On the other hand, the scaling of the crescent’s and the ring’s width is the same, \(\propto(\kappa_{m}-1)^{3/2}\). The different scaling of the transverse extension will then be responsible for the new scaling of the pinning force density, \(F_{\rm pin}\propto(\kappa_{m}-1)^{5/2}\).

### Comparison to isotropic situation

Let us compare the unstable domains \(\mathcal{U}_{\bar{\mathbf{R}}}\) for the isotropic and anisotropic defects in Figs. 5(c) and 6, respectively. In the isotropic example of Sec. II.1, the jump- and landing-circles \(\tilde{R}_{\rm jp}(\bar{R})\) and \(\tilde{R}_{\rm lp}(\bar{R})\) are connected to different phases, e.g., free (colored in blue at \(\tilde{R}_{\rm jp}=\tilde{R}_{\rm f-}\)) and pinned (colored in red at \(\tilde{R}_{\rm lp}=\tilde{R}_{\rm p-}\)) associated with \(\bar{R}_{-}\). Furthermore, the topology is different, with the unstable ring domain separating the two distinct phases, free and pinned ones. As a result, a second pair of jump- and landing-positions associated with the asymptotic circle \(\bar{R}_{+}\) appears along the vortex trajectory of Fig. 5(c); these are located at the radii \(\tilde{R}_{\rm jp}=\tilde{R}_{\rm p+}\) and \(\tilde{R}_{\rm lp}=\tilde{R}_{\rm f+}\) and describe the depinning process from the pinned branch back to the free branch (while the previous pair at radii \(\tilde{R}_{\rm f-}\) and \(\tilde{R}_{\rm p-}\) describes the pinning process from the free to the pinned branch). The pinning (at \(\bar{R}_{-}\)) and depinning (at \(\bar{R}_{+}\)) processes in the asymptotic coordinates are shown in Fig. 5(d).
The bistable area \(\mathcal{B}_{\bar{\mathbf{R}}}\) with coexisting free and pinned states has a ring-shape as well (colored in magenta, the superposition of blue and red); the two pairs of jump and landing points in tip space have collapsed to two pinning and depinning points in asymptotic space. In the present situation describing the strong pinning onset for a generic anisotropic potential, the unstable domain \(\mathcal{U}_{\bar{\mathbf{R}}}\) grows out of an isolated point (in fact, \(\bar{\mathbf{R}}_{m}\)) and assumes the shape of an ellipse that is simply connected; as a result, a vortex incident on the defect undergoes only a single jump, see Fig. 6. The bistable domain \(\mathcal{B}_{\bar{\mathbf{R}}}\) is simply connected as well, but now features two cusps at the end-points of the crescent, see Fig. 7. The bistability again involves two states, but we cannot associate them with separated pinned and free phases--we thus denote them by ‘blue’-type and ‘red’-type. The two states approach one another further away from the defect and are distinguishable only in the region close to bistability; in Fig. 7, this is indicated with appropriate color coding. Note that the Landau-type expansion underlying the coloring in Fig. 7 fails at large distances; going beyond a local expansion near \(\bar{\mathbf{R}}_{m}\), the distortion of the vortex vanishes at large distances and the red/blue colors fade away to approach ‘white’.

### Topology

The different topologies of unstable and bistable regions appearing in the isotropic and anisotropic situations are owed to the circular symmetry of the isotropic defect; we will recover the ring-like topology for the anisotropic situation later when describing a uniaxially anisotropic defect at larger values of the Labusch parameter \(\kappa_{m}\). Indeed, such an increase in pinning strength will induce a change in topology with two crescents facing one another joining into a ring-like shape.
Let us discuss the consequences of the different topologies that we encountered for the isotropic and anisotropic defects in the discussion above. Specifically, the precise number and position of the contact points have an elegant topological explanation. When a vortex tip touches the edges \(\tilde{\mathbf{R}}_{\rm jp}\) of the unstable domain there are two characteristic directions: one is given by the unstable eigenvector \(\mathbf{v}_{-}(\tilde{\mathbf{R}}_{\rm jp})\) discussed in Sec. III.2 along which the tip will jump initially. The second is the tangent vector to the boundary \(\partial\mathcal{U}_{\tilde{\mathbf{R}}}\) of the unstable domain, i.e., to the unstable ellipse. While the former is approximately constant and parallel to the unstable \(u\)-direction along \(\tilde{\mathbf{R}}_{\rm jp}\), the latter winds around the ellipse exactly once after a full turn around \(\mathcal{U}_{\tilde{\mathbf{R}}}\). The contact points \(\tilde{\mathbf{R}}_{c,\pm}\) of the unstable and stable ellipses then coincide with those points on the ellipse where the tangent vector is parallel or anti-parallel to \(\mathbf{v}_{-}\); at these points, the tip touches the unstable ellipse but does not undergo a jump any more. Given the different winding numbers of \(\mathbf{v}_{-}\) and of the tangent vector, there are exactly two points along the circumference of \(\mathcal{U}_{\tilde{\mathbf{R}}}\) where the tangent vector is parallel/anti-parallel to the \(u\)-direction; these are the points found in (67). This argument remains valid as long as the contour \(\partial\mathcal{U}_{\mathbf{\tilde{R}}}\) is not deformed to cross/encircle the singular point of the \(\mathbf{v}_{-}(\mathbf{\tilde{R}}_{\mathrm{jp}})\) field residing at the defect center.
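This counting can be made concrete with a small numerical sketch (the ellipse below is a hypothetical stand-in for the unstable boundary, and the fixed direction plays the role of the unstable \(u\)-direction): the tangent winds once around the curve while the fixed direction does not wind at all, leaving exactly two tangency points.

```python
import numpy as np

# Illustrative check of the winding argument: along a smooth convex closed
# curve the tangent vector winds exactly once, while a fixed direction does
# not wind at all -- so the tangent is parallel or anti-parallel to that
# direction at exactly two points along the circumference.
t = np.linspace(0.0, 2.0 * np.pi, 100001)[:-1]   # parameter along the curve
a_u, b_v = 2.0, 1.0                              # hypothetical semi-axes
tx, ty = -a_u * np.sin(t), b_v * np.cos(t)       # tangent vector of the ellipse

u = np.array([1.0, 0.0])                         # fixed 'unstable' direction
cross = tx * u[1] - ty * u[0]                    # vanishes where tangent || u

# count sign changes of the cross product around the closed curve
sign_changes = int(np.sum(np.sign(cross) != np.sign(np.roll(cross, -1))))
print(sign_changes)                              # -> 2 tangency points
```

Rotating the fixed direction or deforming the ellipse smoothly (without enclosing the singular point of the \(\mathbf{v}_{-}\) field) leaves this count at two, in line with the winding argument above.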
The same arguments allow us to understand the absence of contact points in the isotropic scenario: For an isotropic potential, the winding number \(n_{t}\) of the tangent vector around \(\mathcal{U}_{\bar{\mathbf{R}}}\) remains unchanged, i.e., \(n_{t}=\pm 1\), while the unstable direction \(\mathbf{v}_{-}\) is pointing along the radius and thus acquires a unit winding number as well. Indeed, the two directions, tangent and jump, then rotate simultaneously and do not wind around each other after a full rotation, explaining the absence of contact points in the isotropic situation.

### Energy jumps

Within strong pinning theory, the energy jump \(\Delta e_{\mathrm{pin}}\) associated with the vortex tip jump between bistable vortex configurations at the boundaries of \(\mathcal{B}_{\bar{\mathbf{R}}}\) determines the pinning force density \(F_{\mathrm{pin}}\) and the critical current \(j_{c}\), see Eqs. (16) and (17). Formally, the energy jump \(\Delta e_{\mathrm{pin}}\) is defined as the difference in energy \(e_{\mathrm{pin}}(\mathbf{\tilde{R}};\mathbf{\bar{R}})\) at fixed asymptotic position \(\mathbf{\bar{R}}\in\partial\mathcal{B}_{\bar{\mathbf{R}}}\) between vortex configurations with tips in the jump (\(\mathbf{\tilde{R}}_{\mathrm{jp}}(\mathbf{\bar{R}})\)) and landing (\(\mathbf{\tilde{R}}_{\mathrm{lp}}(\mathbf{\bar{R}})=\mathbf{\tilde{R}}_{\mathrm{jp}}(\mathbf{\bar{R}})+\Delta\mathbf{\tilde{R}}\)) positions, \[\Delta e_{\mathrm{pin}}(\mathbf{\bar{R}}\in\partial\mathcal{B}_{\bar{\mathbf{R}}})\equiv e_{\mathrm{pin}}[\mathbf{\tilde{R}}_{\mathrm{jp}}(\mathbf{\bar{R}});\mathbf{\bar{R}}]\\ -e_{\mathrm{pin}}[\mathbf{\tilde{R}}_{\mathrm{lp}}(\mathbf{\bar{R}});\mathbf{\bar{R}}]. \tag{75}\] In Sec. III.2.2 above, we have found that the jump \(\Delta\mathbf{\tilde{R}}\) is mainly forward directed along \(u\).
Making use of the expansion (61) of \(e_{\mathrm{pin}}\) at \(\mathbf{\tilde{R}}_{\mathrm{jp}}\) and the result (62) for the jump distance \(\Delta\tilde{u}\), we find the energy jumps \(\Delta e_{\mathrm{pin}}\) in tip- and asymptotic space in the form (cf. the isotropic result Eq. (24)), \[\Delta e_{\mathrm{pin}}(\mathbf{\bar{R}}) \approx\frac{\gamma}{72}\Delta\tilde{u}^{4}\approx\left(\frac{9}{8\gamma^{3}}\right)\left[\gamma\,\tilde{u}_{\mathrm{jp}}(\tilde{v})+\beta\,\tilde{v}\right]^{4} \tag{76}\] \[\approx\left(\frac{9}{8\gamma^{3}}\right)\left[(\gamma\delta-\beta^{2})\left(\tilde{v}_{c}^{2}-\tilde{v}^{2}\right)\right]^{2}\] \[\approx\left(\frac{9}{8\gamma^{3}}\right)\left[\frac{(\gamma\delta-\beta^{2})}{(1+\lambda_{+}/\bar{C})^{2}}\left(\bar{v}_{c}^{2}-\bar{v}^{2}\right)\right]^{2}.\] Here, we have used the parametric shape \(\tilde{u}_{\mathrm{jp}}(\tilde{v})\) in Eq. (56) for the jumping ellipse as well as (69) to lowest order, \(\tilde{v}\approx\bar{v}/(1+\lambda_{+}/\bar{C})\), to relate the tip and asymptotic positions in the last equation. The energy jump (76) scales as \((\kappa_{m}-1)^{2}\) and is shown in Fig. 8. It depends on the \(v\) coordinate of the asymptotic (or tip) position only and vanishes at the cusps \(\mathbf{\bar{R}}_{c,\pm}\), see Eq. (73) (or at the touching points \(\mathbf{\tilde{R}}_{c,\pm}\), see Eq. (67)). To order \((\kappa_{m}-1)^{2}\), the energy jumps are identical at the left and right edges of the bistable domain \(\mathcal{B}_{\bar{\mathbf{R}}}\). Following the two bistable branches and the associated energy jumps between them to the inside of \(\mathcal{B}_{\bar{\mathbf{R}}}\), the latter vanish along the branch crossing line \(\mathbf{\bar{R}}_{0}\).
In the thermodynamic analogue, this line corresponds to the first-order equilibrium transition line that is framed by the spinodal lines; for the isotropic defect, this is the circle with radius \(\bar{R}_{0}=x_{0}\) framed by the spinodal circles with radii \(\bar{R}_{\pm}\), see Figs. 4 and 5(d). For the anisotropic defect with \(\beta=0\), this line is trivially given by the centered parabola of \(\mathcal{B}_{\bar{\mathbf{R}}}\), see Eq. (70), and hence \[\bar{u}_{0}\approx\frac{a}{2\bar{C}}\frac{1}{(1+\lambda_{+}/\bar{C})^{2}}\bar{v}_{0}^{2}. \tag{77}\] The result for a finite skew parameter \(\beta\neq 0\) is given by Eq. (117) in Appendix A.1.

### Pinning force density

The pinning force density \(F_{\mathrm{pin}}\) is defined as the average force density exerted on a vortex line as it moves across the superconducting sample. For the isotropic case described in Sec. II.5, the individual pinning force \(\mathbf{f}_{\mathrm{pin}}(\mathbf{\bar{R}})=-\nabla_{\mathbf{\bar{R}}}e_{\mathrm{pin}}(\mathbf{\bar{R}})\), see Eq. (7), is directed radially and the force density \(F_{\mathrm{pin}}\) is given by the (constant) energy jump \(\Delta e_{\mathrm{pin}}\propto(\kappa-1)^{2}\) on the edge \(\partial\mathcal{B}_{\bar{\mathbf{R}}}\) of the bistable domain and the transverse length \(t_{\perp}\sim\xi\); hence, \(F_{\mathrm{pin}}\propto t_{\perp}\Delta e_{\mathrm{pin}}\) scales as \((\kappa-1)^{2}\).
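The behavior of the energy jump (76) can be illustrated with a small numerical sketch; all material parameters below are hypothetical order-one values, and the proportionality \(\bar{v}_{c}^{2}\propto\kappa_{m}-1\) is put in by hand with an assumed prefactor.

```python
import numpy as np

# Numerical sketch of the energy jump, Eq. (76). The parameters gamma, delta,
# beta, lambda_+ and C are hypothetical order-one values, and the cusp
# coordinate is taken to scale as v_c ∝ sqrt(kappa_m - 1).
gamma, delta, beta, lam, C = 1.0, 1.0, 0.3, 0.5, 1.0

def v_c(kappa_m):
    # cusp coordinate; the order-one prefactor is an assumption
    return (1.0 + lam / C) * np.sqrt(kappa_m - 1.0)

def delta_e_pin(v, kappa_m):
    # energy jump, Eq. (76), as a function of the transverse coordinate v
    pref = (gamma * delta - beta**2) / (1.0 + lam / C) ** 2
    return (9.0 / (8.0 * gamma**3)) * (pref * (v_c(kappa_m) ** 2 - v**2)) ** 2

# the jump closes at the cusp ...
print(delta_e_pin(v_c(1.01), 1.01))                      # -> 0.0
# ... and quadruples at v = 0 when kappa_m - 1 is doubled
print(delta_e_pin(0.0, 1.02) / delta_e_pin(0.0, 1.01))   # ~ 4, (kappa_m-1)^2
```

Doubling \(\kappa_{m}-1\) thus quadruples the central jump, the \((\kappa_{m}-1)^{2}\) scaling quoted below Eq. (76), while the jump vanishes smoothly at \(\pm\bar{v}_{c}\).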
Figure 8: Energy jump \(\Delta e_{\mathrm{pin}}\) along the edges of the bistable domain \(\mathcal{B}_{\bar{\mathbf{R}}}\) as a function of the transverse coordinate \(\bar{v}\); we have used the same parameters as in Fig. 6. The energy jump vanishes at the cusps \(\pm\bar{v}_{c}\), as the bistable tip configurations become identical and their energies turn equal.

For an anisotropic defect, the pinning force depends on the vortex direction of motion \(\mathbf{\hat{v}}=(\cos\theta,\sin\theta)\) relative to the axis of the bistable region: we choose angles \(-\pi/2\leq\theta\leq\pi/2\) measured from the unstable direction \(\bar{u}\), i.e., vortices incident from the left; the case of larger impact angles \(|\theta|>\pi/2\) corresponds to vortices incident from the right and can be reduced to the previous case by inverting the sign of the parameter \(a\) in the expansion (58), i.e., the curvature of the parabola (70); to our leading-order analysis, the results remain the same. The pinning force is no longer directed radially but depends on \(\theta\); furthermore, the energy jump (76) is non-uniform along the boundary \(\partial\mathcal{B}_{\bar{\mathbf{R}}}\). In spite of these complications, we can perform some simple scaling estimates as a first step: let us assume a uniform distribution of identical anisotropic defects, all with their unstable direction pointing along \(x\). The jumps in energy still scale as \(\Delta e_{\rm pin}\propto(\kappa_{m}-1)^{2}\); however, the trapping distance is no longer constant but grows from zero as \(\kappa_{m}-1\) increases. Due to their elongated shapes, the bistable domains \(\mathcal{B}_{\bar{\mathbf{R}}}\) exhibit different extensions along the \(y\) and \(x\) directions, i.e., \(\propto\bar{v}_{c}\propto\sqrt{\kappa_{m}-1}\) along \(y\) and \(\propto\bar{u}_{c}\propto(\kappa_{m}-1)\) along \(x\), respectively.
These simple considerations then suggest that the pinning force density exhibits a scaling \(F_{\rm pin}\propto(\kappa_{m}-1)^{\mu}\) with \(\mu>2\), different from the setup with isotropic defects. Even more, vortices moving along the \(x\) or \(y\) directions, respectively, will experience different forces \(F_{\rm pin}^{\parallel}\) and \(F_{\rm pin}^{\perp}\) scaling as \[F_{\rm pin}^{\parallel}\propto(\kappa_{m}-1)^{5/2},\quad F_{\rm pin}^{\perp} \propto(\kappa_{m}-1)^{3} \tag{78}\] near the onset of strong pinning. While such uniform anisotropic defects could be created artificially, a more realistic scenario will involve defects that are randomly oriented and an additional averaging over angles \(\theta\) has to be performed; this will be done at the end of this section. We first determine the magnitude and orientation of the pinning force density \(\mathbf{F}_{\rm pin}(\theta)\) as a function of the vortex impact angle \(\theta\) for randomly positioned but uniformly oriented (along \(x\)) defects of density \(n_{p}\). The pinning force density is given by the average over relative positions between vortices and defects (with a minus sign following convention; \(\mathcal{V}_{\mathbf{R}}\) denotes the vortex lattice unit cell), \[\mathbf{F}_{\rm pin}(\theta)=-n_{p}\int_{\mathcal{V}_{\mathbf{R}} \setminus\mathcal{B}_{\mathbf{R}}}\!\frac{{\rm d}^{2}\bar{\mathbf{R}}}{a_{0}^ {2}}\,\mathbf{f}_{\rm pin}(\bar{\mathbf{R}}) \tag{79}\] \[-n_{p}\int_{\mathcal{B}_{\mathbf{R}}}\!\!\frac{{\rm d}^{2}\bar{ \mathbf{R}}}{a_{0}^{2}}\left[p_{\rm b}(\bar{\mathbf{R}};\theta)\,\mathbf{f}_{ \rm pin}^{\rm b}(\bar{\mathbf{R}})+p_{\rm r}(\bar{\mathbf{R}};\theta)\,\mathbf{ f}_{\rm pin}^{\rm r}(\bar{\mathbf{R}})\right].\] Outside of the bistable domain, i.e., in \(\mathcal{V}_{\mathbf{R}}\setminus\mathcal{B}_{\mathbf{R}}\), a single stable vortex tip configuration exists and the pinning force \(\mathbf{f}_{\rm pin}(\bar{\mathbf{R}})\) is uniquely defined. 
Inside \(\mathcal{B}_{\bar{\mathbf{R}}}\), the branch occupation functions \(p_{\mathrm{b,r}}(\bar{\mathbf{R}};\theta)\) are associated with the tip positions appertaining to the ‘blue’ and the ‘red’ vortex configurations with different tip positions \(\tilde{\mathbf{R}}^{\mathrm{b,r}}(\bar{\mathbf{R}})\), cf. Figs. 6 and 7. The pinning forces \(\mathbf{f}_{\mathrm{pin}}^{\mathrm{b,r}}(\bar{\mathbf{R}})\) are evaluated for the corresponding vortex tip positions and are defined as \[\mathbf{f}_{\mathrm{pin}}^{\mathrm{b,r}}(\bar{\mathbf{R}})=-\nabla_{\bar{\mathbf{R}}}e_{\mathrm{pin}}[\tilde{\mathbf{R}}^{\mathrm{b,r}}(\bar{\mathbf{R}});\bar{\mathbf{R}}]. \tag{80}\] Let us now study how vortex lines populate the bistable domain as a function of the impact angle \(\theta\). Examining Fig. 7, we can distinguish between two different angular regimes: a _frontal_-impact regime at angles away from \(\pi/2\), \(|\theta|\leq\theta^{*}\), where all the vortices that cross the bistable domain undergo exactly one jump on the far edge of \(\mathcal{B}_{\bar{\mathbf{R}}}\), see the blue dot and blue boundary \(\partial\mathcal{B}_{\mathbf{R}}^{\mathrm{b}}\) in Fig. 7; and a _transverse_ regime for angles \(\theta^{*}\leq|\theta|\leq\pi/2\), where vortices crossing the bistable domain undergo either no jump, one jump, or two. The angle \(\theta^{*}\) is given by the (outer) tangent of the bistable domain at the cusps \(\bar{\mathbf{R}}_{c,\pm}\); making use of the lowest-order approximation (70) of the crescent’s geometry, we find that \[\tan(\theta^{*})=\frac{\partial\bar{v}^{(0)}}{\partial\bar{u}^{(0)}}\Big|_{\bar{v}_{c}}=\frac{(\bar{C}+\lambda_{+})}{a}\sqrt{\frac{\gamma\delta-\beta^{2}}{2\gamma\bar{C}(\kappa_{m}-1)}}, \tag{81}\] implying that \(\pi/2-\theta^{*}\propto\sqrt{\kappa_{m}-1}\) is small, \[\theta^{*}\approx\pi/2-\frac{a}{(\bar{C}+\lambda_{+})}\sqrt{\frac{2\gamma\bar{C}(\kappa_{m}-1)}{\gamma\delta-\beta^{2}}}.
\tag{82}\]

#### Impact angles \(|\theta|<\theta^{*}\)

For a frontal impact with \(|\theta|<\theta^{*}\), vortices occupy the ‘blue’ branch and remain there throughout the bistable domain \(\mathcal{B}_{\bar{\mathbf{R}}}\) until its termination on the far edge \(\partial\mathcal{B}_{\mathbf{R}}^{\rm b}\), see Fig. 7, implying that \(p_{\rm b}(\bar{\mathbf{R}}\in\mathcal{B}_{\bar{\mathbf{R}}})=1\) and \(p_{\rm r}(\bar{\mathbf{R}}\in\mathcal{B}_{\bar{\mathbf{R}}})=0\), independent of \(\theta\). As a consequence, the pinning force \(\mathbf{F}_{\rm pin}\) does not depend on the impact angle and is given by the expression \[\mathbf{F}_{\rm pin}^{<}=-n_{p}\!\int_{\mathcal{V}_{\mathbf{R}}\setminus\mathcal{B}_{\mathbf{R}}}\!\!\frac{{\rm d}^{2}\bar{\mathbf{R}}}{a_{0}^{2}}\,\mathbf{f}_{\rm pin}(\bar{\mathbf{R}})-n_{p}\!\int_{\mathcal{B}_{\mathbf{R}}}\!\!\frac{{\rm d}^{2}\bar{\mathbf{R}}}{a_{0}^{2}}\,\mathbf{f}_{\rm pin}^{\rm b}(\bar{\mathbf{R}}).\] Next, Gauss’ formula tells us that for a function \(e(\mathbf{x})\), we can transform \[\int_{\mathcal{V}}{\rm d}^{n}x\,\nabla e(\mathbf{x})=\int_{\partial\mathcal{V}}{\rm d}^{n-1}\,\mathbf{S}_{\perp}\,e(\mathbf{x}), \tag{83}\] with the surface element \({\rm d}^{n-1}\,\mathbf{S}_{\perp}\) oriented perpendicular to the surface and pointing outside of the domain \(\mathcal{V}\). In applying (83) to the first integral of \(\mathbf{F}_{\rm pin}^{<}\), we can drop the contribution from the outer boundary \(\partial\mathcal{V}_{\bar{\mathbf{R}}}\) since we assume a compact defect potential. The remaining contribution from the crescent’s boundary \(\partial\mathcal{B}_{\bar{\mathbf{R}}}\) joins up with the second integral but with an opposite sign, as the two terms involve the same surface but with opposite orientations.
Altogether, we then arrive at the expression \[\mathbf{F}_{\rm pin}^{<}=n_{p}\int_{\partial\mathcal{B}_{\mathbf{R}}^{\rm r}}\frac{{\rm d}\,\mathbf{S}_{\perp}}{a_{0}^{2}}\left(e_{\rm pin}^{\rm b}(\bar{\mathbf{R}})-e_{\rm pin}(\bar{\mathbf{R}})\right)\\ +n_{p}\int_{\partial\mathcal{B}_{\mathbf{R}}^{\rm b}}\frac{{\rm d}\,\mathbf{S}_{\perp}}{a_{0}^{2}}\left(e_{\rm pin}^{\rm b}(\bar{\mathbf{R}})-e_{\rm pin}(\bar{\mathbf{R}})\right), \tag{84}\] where we have separated the left and right borders \(\partial\mathcal{B}_{\bar{\mathbf{R}}}^{\rm r,b}\) of the bistable domain. Due to continuity, the stable vortex energy \(e_{\rm pin}(\bar{\mathbf{R}})\) will be equal to \(e_{\rm pin}^{\rm b}(\bar{\mathbf{R}})\) on the left border \(\partial\mathcal{B}_{\mathbf{R}}^{\rm r}\) and equal to \(e_{\mathrm{pin}}^{\mathrm{r}}(\mathbf{\bar{R}})\) on the right border \(\partial\mathcal{B}_{\mathbf{R}}^{\mathrm{b}}\). The expression (84) for \(\mathbf{F}_{\mathrm{pin}}^{<}\) then reduces to \[\mathbf{F}_{\mathrm{pin}}^{<} =n_{p}\int_{\partial\mathcal{B}_{\mathbf{R}}^{\mathrm{b}}}\frac{\mathrm{d}\,\mathbf{S}_{\perp}}{a_{0}^{2}}\left(e_{\mathrm{pin}}^{\mathrm{b}}(\mathbf{\bar{R}})-e_{\mathrm{pin}}^{\mathrm{r}}(\mathbf{\bar{R}})\right)\] \[=n_{p}\int_{-\bar{v}_{c}}^{\bar{v}_{c}}\frac{\mathrm{d}\bar{v}}{a_{0}}\,\frac{\Delta e_{\mathrm{pin}}(\bar{v})}{a_{0}}\left[1,-\partial\bar{u}/\partial\bar{v}\right]\] \[=n_{p}\left[\frac{2\bar{v}_{c}}{a_{0}}\frac{\langle\Delta e_{\mathrm{pin}}\rangle}{a_{0}},\,0\right]\equiv[F_{\mathrm{pin}}^{\parallel},0] \tag{85}\] with \(\langle\Delta e_{\mathrm{pin}}\rangle\) the average energy jump evaluated along the \(v\)-direction. The force \(\mathbf{F}_{\mathrm{pin}}^{<}\) is aligned with the unstable direction along \(u\), with the \(v\)-component vanishing due to the antisymmetry in \(\bar{v}\leftrightarrow-\bar{v}\) of the derivative \(\partial\bar{u}/\partial\bar{v}\), and is independent of \(\theta\) for \(|\theta|<\theta^{*}\).
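The average in Eq. (85) can be made explicit with the energy jump (76); the sketch below (with illustrative prefactors set to unity and an assumed cusp scaling \(\bar{v}_{c}\propto\sqrt{\kappa_{m}-1}\)) integrates \((\bar{v}_{c}^{2}-\bar{v}^{2})^{2}\) across the crescent and recovers the \((\kappa_{m}-1)^{5/2}\) power announced in (78).

```python
import numpy as np

# Sketch of the average in Eq. (85): integrating the energy jump (76),
# proportional to (v_c^2 - v^2)^2, across the crescent gives (16/15) v_c^5;
# with v_c ∝ (kappa_m - 1)^{1/2} the force density F_pin^∥ then scales as
# (kappa_m - 1)^{5/2}. All prefactors here are illustrative.

def f_par(kappa_m, n_p=1.0, a0=1.0):
    v_c = np.sqrt(kappa_m - 1.0)              # assumed cusp scaling
    v = np.linspace(-v_c, v_c, 200001)
    delta_e = (v_c**2 - v**2) ** 2            # ∝ energy jump (76)
    # trapezoidal rule; the integrand vanishes at both endpoints
    return n_p / a0**2 * np.sum(delta_e) * (v[1] - v[0])

v_c = np.sqrt(1.01 - 1.0)
print(abs(f_par(1.01) - 16.0 / 15.0 * v_c**5) < 1e-10)   # -> True, closed form
print(f_par(1.02) / f_par(1.01))          # ~ 2**2.5 ≈ 5.66, the 5/2 power
```

Doubling \(\kappa_{m}-1\) multiplies the force density by \(2^{5/2}\approx 5.66\), confirming the qualitative trapping-length estimate made above.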
#### Impact angle \(|\theta|=\pi/2\)

Second, let us find the pinning force density \(\mathbf{F}_{\mathrm{pin}}^{\pi/2}\) for vortices moving along the (positive) \(v\)-direction, \(\theta=\pi/2\). As follows from Fig. 7, vortices occupy the blue branch and jump to the red one upon hitting the lower half of the boundary \(\partial\mathcal{B}_{\mathbf{R}}^{\mathrm{b}}\); vortices that enter \(\mathcal{B}_{\mathbf{R}}\) but do not cross \(\partial\mathcal{B}_{\mathbf{R}}^{\mathrm{b}}\) undergo no jump and hence do not contribute to \(\mathbf{F}_{\mathrm{pin}}^{\pi/2}\). As vortices in the red branch proceed upwards, they jump back to the blue branch upon crossing the red boundary \(\partial\mathcal{B}_{\mathbf{R}}^{\mathrm{r}}\). While jumps appear on all of the lower half of \(\partial\mathcal{B}_{\mathbf{R}}^{\mathrm{b}}\), a piece of the upper boundary \(\partial\mathcal{B}_{\mathbf{R}}^{\mathrm{r}}\) that contributes with a second jump is cut away (as vortices to the left of \(\bar{u}^{(0)}+\bar{u}^{(1)}\) do not change branch from blue to red). The length \(\Delta\bar{v}\) of this interval scales as \(\Delta\bar{v}/\bar{v}_{c}\propto(\kappa_{m}-1)^{1/4}\); ignoring this small jump-free region, we determine \(\mathbf{F}_{\mathrm{pin}}^{\pi/2}\) assuming that vortices contributing to \(\mathbf{F}_{\mathrm{pin}}^{\pi/2}\) undergo a sequence of two jumps, from blue to red on the lower half \(\partial\mathcal{B}_{\mathbf{R}}^{\mathrm{b}<}\) and back from red to blue on the upper half \(\partial\mathcal{B}_{\mathbf{R}}^{\mathrm{r>}}\) of the boundary \(\partial\mathcal{B}_{\mathbf{\bar{R}}}\).
Repeating the above analysis, we find that the \(u\)-components in \(\mathbf{F}_{\mathrm{pin}}^{\pi/2}\) arising from the blue and red boundaries now cancel, while the \(v\)-components add up, \[\mathbf{F}_{\mathrm{pin}}^{\pi/2} =n_{p}\int_{\partial\mathcal{B}_{\mathbf{R}}^{\mathrm{b}<}}\frac{\mathrm{d}\,\mathbf{S}_{\perp}}{a_{0}^{2}}\left(e_{\mathrm{pin}}^{\mathrm{b}}(\mathbf{\bar{R}})-e_{\mathrm{pin}}^{\mathrm{r}}(\mathbf{\bar{R}})\right)\\ +n_{p}\int_{\partial\mathcal{B}_{\mathbf{R}}^{\mathrm{r}>}}\frac{\mathrm{d}\,\mathbf{S}_{\perp}}{a_{0}^{2}}\left(e_{\mathrm{pin}}^{\mathrm{r}}(\mathbf{\bar{R}})-e_{\mathrm{pin}}^{\mathrm{b}}(\mathbf{\bar{R}})\right)\\ =2n_{p}\int_{0}^{\bar{v}_{c}}\frac{\mathrm{d}\bar{v}}{a_{0}}\,\frac{\Delta e_{\mathrm{pin}}(\bar{v})}{a_{0}}\left[0,\partial\bar{u}/\partial\bar{v}\right] \tag{86}\\ =n_{p}\left[0,\frac{2\bar{v}_{c}}{a_{0}}\frac{\langle\Delta e_{\mathrm{pin}}\,\partial_{\bar{v}}\bar{u}\rangle}{a_{0}}\right]\equiv[0,F_{\mathrm{pin}}^{\perp}].\] Making use of the result (76) for \(\Delta e_{\mathrm{pin}}(\bar{v})\) in (85) and (86), we find explicit expressions for the pinning force densities for impacts parallel and perpendicular to the unstable direction \(u\), \[F_{\mathrm{pin}}^{\parallel}\approx\left(\frac{9n_{p}}{8\,a_{0}^{2}\gamma^{3}}\right)\!\int_{-\bar{v}_{c}}^{\bar{v}_{c}}\!\mathrm{d}\bar{v}\left[\frac{(\gamma\delta-\beta^{2})}{(1+\lambda_{+}/\bar{C})^{2}}\left(\bar{v}_{c}^{2}-\bar{v}^{2}\right)\right]^{2}\propto(\kappa_{m}-1)^{5/2} \tag{87}\] and \[F_{\mathrm{pin}}^{\perp}\approx\left(\frac{9n_{p}}{4\,a_{0}^{2}\gamma^{3}}\right)\!\int_{0}^{\bar{v}_{c}}\!\mathrm{d}\bar{v}\,\frac{\partial\bar{u}}{\partial\bar{v}}\left[\frac{(\gamma\delta-\beta^{2})}{(1+\lambda_{+}/\bar{C})^{2}}\left(\bar{v}_{c}^{2}-\bar{v}^{2}\right)\right]^{2}\propto(\kappa_{m}-1)^{3}. \tag{88}\] For general impact angles, the force components \(F_{\text{pin,u}}(\theta)\) and \(F_{\text{pin,v}}(\theta)\) interpolate between these limiting values, resulting in a critical force density \[F_{c}(\varphi)=\sqrt{F_{\text{pin,u}}^{2}(\theta)+F_{\text{pin,v}}^{2}(\theta)} \tag{90}\] with angles \(\varphi\) and \(\theta\) related via \[\tan\varphi=\frac{F_{\text{pin,v}}(\theta)}{F_{\text{pin,u}}(\theta)}. \tag{91}\] Since \(F_{\text{pin,v}}(\theta<\theta^{*})=0\), the entire interval \(\theta<\theta^{*}\) is compressed to \(\varphi=0\) and it is the narrow regime \(\theta^{*}<\theta<\pi/2\) that determines the angular characteristic of the critical force density \(F_{c}(\varphi)\). The critical force density \(F_{c}(\varphi)\) is peaked at \(\varphi=0\) as shown in Fig. 9 (with a correspondingly sharp peak in \(j_{c}\) at right angles). Combining Eqs. (90) and (91), we can derive a simple expression bounding the function \(F_{c}(\varphi)\), \[F_{c}(\varphi)=F_{\text{pin,v}}(\theta)\sqrt{1+\cot^{2}(\varphi)}\leq\frac{F_{\text{pin}}^{\perp}}{\sin(\varphi)}, \tag{92}\] that traces \(F_{c}(\varphi)\) over a wide angular region, see the dashed line in Fig. 9. At small values of \(\varphi\), we cannot ignore the angular dependence in \(F_{\text{pin,v}}(\theta)\) any more, which finally cuts off the divergence \(\propto 1/\sin(\varphi)\) at the value \(F_{c}(\varphi\to 0)\to F_{\text{pin}}^{\parallel}\).

#### Isotropized pinning force density \(F_{\text{pin}}\)

In a last step, we assume an ensemble of equal anisotropic defects that are uniformly distributed in space and randomly oriented. In this situation, we have to perform an additional average over the instability directions \(\hat{\mathbf{u}}_{i}\) associated with the different defects \(i=1,\ldots,N\).
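The bound (92) holds for any monotone interpolation of the force components; the sketch below assumes hypothetical linear profiles for \(F_{\text{pin,u}}(\theta)\) and \(F_{\text{pin,v}}(\theta)\) across the interval \(\theta^{*}<\theta<\pi/2\) (illustrative stand-ins, not the paper's exact angular dependence) and checks both the bound and the limiting value \(F_{c}(\varphi\to 0)\to F_{\text{pin}}^{\parallel}\).

```python
import numpy as np

# Numerical check of the bound (92). F_pin,u drops from F^∥ to 0 and F_pin,v
# rises from 0 to F^⊥ over theta* < theta < pi/2; the linear profiles and
# the crossover angle theta* below are hypothetical.
F_par, F_perp = 1.0, 0.3
theta_star = 0.9 * np.pi / 2.0

theta = np.linspace(theta_star, np.pi / 2.0, 1000)[1:-1]
s = (theta - theta_star) / (np.pi / 2.0 - theta_star)   # 0..1 across interval
F_u = F_par * (1.0 - s)               # assumed interpolation
F_v = F_perp * s

phi = np.arctan2(F_v, F_u)            # angle of the resulting force density
F_c = np.hypot(F_u, F_v)              # critical force density, Eq. (90)

print(bool(np.all(F_c <= F_perp / np.sin(phi) + 1e-12)))   # -> True, bound (92)
print(bool(abs(F_c[0] - F_par) < 0.01))    # -> True, F_c(phi -> 0) -> F^∥
```

The bound follows since \(F_{c}=F_{\text{pin,v}}/\sin\varphi\) with \(F_{\text{pin,v}}\leq F_{\text{pin}}^{\perp}\), independent of the profile chosen; only the small-\(\varphi\) cutoff at \(F_{\text{pin}}^{\parallel}\) depends on the detailed interpolation.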
Neglecting the modification of \(\mathbf{F}_{\text{pin}}(\theta)\) away from \([F_{\text{pin}}^{\parallel},0]\) in the small angular regions \(\theta^{*}<|\theta|<\pi/2\), we find that the force along any direction \(\hat{\mathbf{R}}\) has the magnitude \[F_{\text{pin}} \approx\frac{1}{N}\sum_{i=1}^{N}|(F_{\text{pin}}^{\parallel}\hat{\mathbf{u}}_{i})\cdot\hat{\mathbf{R}}| \tag{93}\] \[\approx F_{\text{pin}}^{\parallel}\int_{-\pi/2}^{\pi/2}\frac{\text{d}\theta}{\pi}\,\cos\theta=\frac{2}{\pi}F_{\text{pin}}^{\parallel}.\] As a result of the averaging over the angular directions, the pinning force density is now effectively isotropic and directed against the velocity \(\mathbf{v}\) of the vortex motion.

## IV Uniaxial defect

In Sec. III, we have analyzed the onset of strong pinning for an arbitrary potential and have determined the shape of the unstable and bistable domains \(\mathcal{U}_{\tilde{\mathbf{R}}}\) and \(\mathcal{B}_{\bar{\mathbf{R}}}\)--with their elliptic and crescent forms, they look quite different from their ring-shaped counterparts for the isotropic defect in Figs. 5(c) and (d). In this section, we discuss the situation for a weakly anisotropic defect with a small uniaxial deformation quantified by the small parameter \(\epsilon\) in order to understand how our previous findings, the results for the isotropic defect and those describing the strong-pinning onset, relate to one another. Our weakly deformed defect is described by equipotential lines that are nearly circular but slightly elongated along \(y\), implying that pinning is strongest in the \(x\)-direction. We will find that the unstable (bistable) domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\) (\(\mathcal{B}_{\bar{\mathbf{R}}}\)) for the uniaxially anisotropic defect starts out with two ellipses (crescents) on the \(x\)-axis as \(\kappa_{m}\) crosses unity.
With increasing pinning strength, i.e., \(\kappa_{m}\), these ellipses (crescents) grow and deform to follow the equipotential lines, with the end-points approaching one another until they merge on the \(\pm y\)-axis. These merger points, we denote them as \(\tilde{\mathbf{R}}_{s}\) and \(\bar{\mathbf{R}}_{s}\), define a second class of important points (besides the onset points \(\tilde{\mathbf{R}}_{m}\) and \(\bar{\mathbf{R}}_{m}\)) in the buildup of the strong pinning landscape: while the onset points \(\tilde{\mathbf{R}}_{m}\) are defined as minima of the Hessian determinant \(D(\tilde{\mathbf{R}})\), the merger points \(\tilde{\mathbf{R}}_{s}\) turn out to be associated with saddle points of \(D(\tilde{\mathbf{R}})\). Pushing across the merger of the deformed ellipses (crescents) by further increasing the Labusch parameter \(\kappa_{m}\), the unstable (bistable) domains \(\mathcal{U}_{\tilde{\mathbf{R}}}\) (\(\mathcal{B}_{\bar{\mathbf{R}}}\)) undergo a change in topology, from two separated areas to a ring-like geometry as it appears for the isotropic defect, see Figs. 5(c) and (d), thus explaining the interrelation of our results for isotropic and anisotropic defects. With this analysis, we thus show how the strong pinning landscape for the weakly uniaxial defect will finally assume the shape and topology of the isotropic defect as the pinning strength \(\kappa_{m}\) overcomes the anisotropy \(\epsilon\). Second, this discussion will introduce the merger points \(\tilde{\mathbf{R}}_{s}\) as a second type of characteristic points of strong pinning landscapes that we will further study in Sec. V.1 using a Landau-type expansion as done in Sec. III.1 above; we will find that the geometry of the merger points \(\tilde{\mathbf{R}}_{s}\) is associated with hyperbolas, as that of the onset points was associated with ellipses.

Figure 9: Top: scaled pinning force densities \(F_{\text{pin,u}}\) and \(F_{\text{pin,v}}\) versus impact angle \(\theta\); we have used the same parameters as in Fig. 6. The longitudinal (along \(u\)) force \(F_{\text{pin,u}}\) remains constant and equal to \(F_{\text{pin}}^{\parallel}\) for all angles \(|\theta|<\theta^{*}\), while the transverse (along \(v\)) component \(F_{\text{pin,v}}\) vanishes in this regime. The longitudinal force drops and vanishes over the narrow interval \(\theta^{*}<|\theta|<\pi/2\), while the transverse force \(F_{\text{pin,v}}\) increases up to \(F_{\text{pin}}^{\perp}\). Bottom: critical force density \(F_{c}\) (directed along the Lorentz force \(\mathbf{F}_{\text{\tiny L}}=\mathbf{j}\wedge\mathbf{B}/c\)) versus angle \(\varphi\) of the Lorentz force; the dashed line shows the upper bound \(F_{c}<F_{\text{pin}}^{\perp}/\sin(\varphi)\).

Our uniaxially anisotropic defect is described by the stretched (along the \(y\)-axis) Lorentzian \[e_{p}(\tilde{x},\tilde{y})=-e_{p}\left(1+\frac{\tilde{x}^{2}}{2\xi^{2}}+\frac{\tilde{y}^{2}}{2\xi^{2}\left(1+\epsilon\right)^{2}}\right)^{-1}, \tag{94}\] with equipotential lines described by ellipses \[\frac{\tilde{x}^{2}}{\xi^{2}}+\frac{\tilde{y}^{2}}{\xi^{2}\left(1+\epsilon\right)^{2}}=\text{const}, \tag{95}\] and the small parameter \(0<\epsilon\ll 1\) quantifying the degree of anisotropy. At fixed radius \(\tilde{R}^{2}=\tilde{x}^{2}+\tilde{y}^{2}\), the potential (94) assumes maxima in energy and in negative curvature on the \(x\)-axis, and corresponding minima on the \(y\)-axis. Along both axes, the pinning force is directed radially towards the origin and the Labusch criterion (34) for strong pinning is determined solely by the curvature along the radial direction. At the onset of strong pinning, the unstable and bistable domains then first emerge along the \(x\)-axis at the points \(\tilde{\mathbf{R}}_{m}=(\pm\sqrt{2}\xi,0)\) and \(\bar{\mathbf{R}}_{m}=(\pm 2\sqrt{2}\xi,0)\) when \[\kappa_{m}=\frac{e_{p}}{4\tilde{C}\xi^{2}}=1.
\tag{96}\] Upon increasing the pinning strength \(\kappa_{m}\), e.g., via softening of the vortex lattice as described by a decrease in \(\tilde{C}\), the unstable and bistable domains \(\mathcal{U}_{\tilde{\mathbf{R}}}\) and \(\mathcal{B}_{\bar{\mathbf{R}}}\) expand away from these points, and eventually merge along the \(y\)-axis at \(\tilde{\mathbf{R}}_{s}=(0,\pm\sqrt{2}\xi(1+\epsilon))\), \(\bar{\mathbf{R}}_{s}=(0,\pm 2\sqrt{2}\xi(1+\epsilon))\) when \[\kappa_{s}=\frac{e_{p}}{4\tilde{C}\xi^{2}(1+\epsilon)^{2}}=\frac{\kappa_{m}} {(1+\epsilon)^{2}}=1, \tag{97}\] i.e., for \(\kappa_{m}=(1+\epsilon)^{2}\). The evolution of the strong pinning landscape from onset to merging takes place in the interval \(\kappa_{m}\in[1,(1+\epsilon)^{2}]\); pushing \(\kappa_{m}\) beyond this interval, we will analyze the change in topology and the appearance of non-simply connected unstable and bistable domains after the merging. The quantity determining the shape of the unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\) is the Hessian determinant \(D(\tilde{\mathbf{R}})\) of the total vortex energy \(e_{\text{pin}}(\tilde{\mathbf{R}};\bar{\mathbf{R}})\), see Eqs. (36) and (1), respectively. At onset, the minimum of \(D(\tilde{\mathbf{R}})\) touches zero for the first time; with increasing \(\kappa_{m}\), this minimum drops below zero and the condition \(D(\tilde{\mathbf{R}})=0\) determines the unstable ellipse that expands in \(\tilde{\mathbf{R}}\)-space. Viewing the function \(D(\tilde{\mathbf{R}})\) as a height function of a landscape in the \(\tilde{\mathbf{R}}\) plane, this corresponds to filling this landscape, e.g., with water, up to the height level \(D=0\), with the resulting lake representing the unstable domain. In the present uniaxially symmetric case, a pair of unstable ellipses grow simultaneously, bend around the equipotential line near the radius \(\sim\sqrt{2}\xi\) and finally touch upon merging on the \(y\)-axis.
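The onset values (96) and (97) can be confirmed numerically by scanning the curvature of the potential (94) along its two symmetry axes. The following is a minimal sketch, not part of the original analysis; the parameters \(e_p=\xi=1\), \(\epsilon=0.1\) and the finite-difference grid are illustrative choices:

```python
import numpy as np

def pin_energy(x, y, ep=1.0, xi=1.0, eps=0.1):
    """Uniaxial Lorentzian pinning potential, Eq. (94)."""
    return -ep / (1.0 + x**2 / (2 * xi**2)
                  + y**2 / (2 * xi**2 * (1 + eps)**2))

def max_neg_curvature(axis, ep=1.0, xi=1.0, eps=0.1):
    """Largest value of -d^2 e_p/dr^2 along a symmetry axis, and its location."""
    r = np.linspace(1e-3, 6.0 * xi, 20001)
    e = pin_energy(r, 0.0, ep, xi, eps) if axis == 'x' \
        else pin_energy(0.0, r, ep, xi, eps)
    d2 = np.gradient(np.gradient(e, r), r)   # finite-difference second derivative
    i = int(np.argmax(-d2))
    return -d2[i], r[i]

kx, rx = max_neg_curvature('x')   # e_p/(4 xi^2) at sqrt(2)*xi, cf. Eq. (96)
ky, ry = max_neg_curvature('y')   # reduced by (1+eps)^2 at sqrt(2)*xi*(1+eps), cf. Eq. (97)
```

Dividing the two curvatures by \(\tilde{C}\) gives \(\kappa_m\) and \(\kappa_s\) and reproduces the ratio \(\kappa_s=\kappa_m/(1+\epsilon)^2\) of Eq. (97).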
In our geometric interpretation, this corresponds to the merging of the two (water-filled) valleys that happens in a saddle point of the function \(D(\tilde{\mathbf{R}})\) at the height \(D=0\). Hence, the merger points \(\tilde{\mathbf{R}}_{s}\) correspond to saddles in \(D(\tilde{\mathbf{R}})\) with \[D(\tilde{\mathbf{R}}_{s})=0,\quad\nabla_{\tilde{\mathbf{R}}}\left.D(\tilde{\mathbf{R}} )\right|_{\tilde{\mathbf{R}}_{s}}=0, \tag{98}\] and \[\det\bigl{[}\text{Hess}\bigl{[}D(\tilde{\mathbf{R}})\bigr{]}\bigr{]}\bigr{|}_ {\tilde{\mathbf{R}}_{s}}<0, \tag{99}\] cf. Eq. (44). In our calculation of \(D(\tilde{\mathbf{R}})\), we exploit that the Hessian in (36) does not depend on the asymptotic position \(\bar{\mathbf{R}}\) and we can set it to zero, \[D(\tilde{\mathbf{R}})=\det\bigl{\{}\text{Hess}[\tilde{C}\tilde{R}^{2}/2+e_{p}^ {(i)}(\tilde{R})+\delta e_{p}(\tilde{\mathbf{R}})]\bigr{\}}, \tag{100}\] where we have split off the anisotropic correction \(\delta e_{p}(\tilde{\mathbf{R}})=e_{p}(\tilde{\mathbf{R}})-e_{p}^{(i)}(\tilde{R})\) from the isotropic potential \(e_{p}^{(i)}(\tilde{R})\) with \(\epsilon=0\). In the following, we perform a perturbative analysis around the isotropic limit, valid for weak anisotropy \(\epsilon\ll 1\); this motivates our use of polar (tip) coordinates \(\tilde{R}\) and \(\tilde{\phi}\). The isotropic contribution \(\mathrm{H}^{(i)}\) to the Hessian matrix \(\mathrm{H}\) is diagonal with components \[\mathrm{H}^{(i)}_{\tilde{R}\tilde{R}}(\tilde{R}) \equiv\partial_{\tilde{R}}^{2}[\tilde{C}\tilde{R}^{2}/2+e_{p}^{(i)}(\tilde{R})]\] \[=\tilde{C}+\partial_{\tilde{R}}^{2}e_{p}^{(i)}(\tilde{R}) \tag{101}\] and \[\mathrm{H}^{(i)}_{\tilde{\phi}\tilde{\phi}}(\tilde{R}) \equiv(\tilde{R}^{-2}\partial_{\tilde{\phi}\tilde{\phi}}^{2}+ \tilde{R}^{-1}\partial_{\tilde{R}})[\tilde{C}\tilde{R}^{2}/2+e_{p}^{(i)}(\tilde{R})]\] \[=\tilde{C}-f_{p}^{(i)}(\tilde{R})/\tilde{R}.
\tag{102}\] The radial component \(\mathrm{H}^{(i)}_{\tilde{R}\tilde{R}}\propto(\kappa_{m}-1)\) vanishes at onset, while \(\mathrm{H}^{(i)}_{\tilde{\phi}\tilde{\phi}}\) remains finite, positive, and approximately constant. The anisotropic component \(\delta e_{p}(\tilde{\mathbf{R}})\) introduces corrections \(\propto\epsilon\); these significantly modify the radial entry of the full Hessian while leaving its azimuthal component \(\mathrm{H}_{\tilde{\phi}\tilde{\phi}}\) approximately unchanged; the off-diagonal entries of the full Hessian scale as \(\epsilon\) and hence contribute only at second order in \(\epsilon\) to \(D(\tilde{\mathbf{R}})\). As a result, the sign change in the determinant \[D(\tilde{\mathbf{R}})\approx\mathrm{H}_{\tilde{R}\tilde{R}}(\tilde{\mathbf{R}}) \mathrm{H}_{\tilde{\phi}\tilde{\phi}}(\tilde{R})+\mathcal{O}\left(\epsilon^{2} \right), \tag{103}\] is determined by \[\mathrm{H}_{\tilde{R}\tilde{R}}(\tilde{\mathbf{R}})=\mathrm{H}_{\tilde{R}\tilde{R }}^{(i)}(\tilde{R})+\partial_{\tilde{R}}^{2}\delta e_{p}(\tilde{\mathbf{R}}) \tag{104}\] for radii close to \(\tilde{R}_{m}\) with \(\delta\tilde{R}=\tilde{R}-\tilde{R}_{m}\approx\mathcal{O}(\sqrt{\kappa_{m}-1})\). We expand the potential (94) around the isotropic part \(e_{p}^{(i)}(\tilde{R})\), \[\delta e_{p}(\tilde{\mathbf{R}})\approx-\epsilon\,[\partial_{\tilde{R}}e_{p} ^{(i)}(\tilde{R})]\tilde{R}\sin^{2}\tilde{\phi}, \tag{105}\] and additionally expand both \(e_{p}^{(i)}(\tilde{R})\) and \(\delta e_{p}(\tilde{\mathbf{R}})\) around \(\tilde{R}_{m}\), keeping terms \(\propto\epsilon\,\sqrt{\kappa_{m}-1}\).
The radial entry of the anisotropic Hessian matrix then assumes the form \[\mathrm{H}_{\tilde{R}\tilde{R}}(\tilde{\mathbf{R}})\approx\bar{C}\,[1-\kappa _{m}(\tilde{\phi})]\\ +\gamma\,[\delta\tilde{R}^{2}/2-\epsilon\,\sin^{2}\tilde{\phi}\, \tilde{R}_{m}\delta\tilde{R}] \tag{106}\] with \(\gamma=\partial_{\tilde{R}}^{4}e_{p}^{(i)}(\tilde{R})|_{\tilde{R}_{m}}\) and the angle-dependent Labusch parameter \[\kappa_{m}(\tilde{\phi})\equiv\frac{\max_{\tilde{R}}[-\partial_{\tilde{R}}^{2 }e_{p}(\tilde{R},\tilde{\phi})|_{\tilde{\phi}}]}{\bar{C}}=\kappa_{m}-2 \epsilon\sin^{2}\tilde{\phi}. \tag{107}\] The edges of the unstable region \(\mathcal{U}_{\tilde{\mathbf{R}}}\) can then be obtained by imposing the condition \(\mathrm{H}_{\tilde{R}\tilde{R}}(\tilde{\mathbf{R}})=0\); the solutions of the corresponding quadratic equation define the jump positions \(\tilde{R}_{\mathrm{jp}}(\tilde{\phi})\) (or boundaries \(\partial\mathcal{U}_{\tilde{\mathbf{R}}}\)) \[\tilde{R}_{\mathrm{jp}}(\tilde{\phi})\approx\tilde{R}_{m}(\tilde{\phi})\pm \delta\tilde{R}(\tilde{\phi}). \tag{108}\] These are centered around the ('large') ellipse defined by \[\tilde{R}_{m}(\tilde{\phi})=\tilde{R}_{m}(1+\epsilon\sin^{2}\tilde{\phi}) \tag{109}\] and separated by (cf. Eq. (20)) \[2\,\delta\tilde{R}(\tilde{\phi})=\sqrt{\frac{8\bar{C}}{\gamma}(\kappa_{m}( \tilde{\phi})-1)} \tag{110}\] along the radius. Making use of the form (107) of \(\kappa_{m}(\tilde{\phi})\) and assuming a small value of \(\kappa_{m}>1\) near onset, we obtain the jump line in the form of a ('small') ellipse centered at \([\pm\tilde{R}_{m},0]\), \[\gamma\,\delta\tilde{R}^{2}+\epsilon\bar{C}\,\tilde{\phi}^{2}=\bar{C}(\kappa _{m}-1). \tag{111}\] Hence, we find that the anisotropic results are obtained from the isotropic ones by replacing the circle \(\tilde{R}_{m}\) by the ellipse \(\tilde{\mathbf{R}}_{m}(\tilde{\phi})\) and substituting \(\kappa\to\kappa_{m}(\tilde{\phi})\) in the width (20), see Figs.
10(a) and (b) evaluated for small values \(\kappa_{m}-1=0.01\) and \(\epsilon=0.1\). Analogously, the boundaries of the bistable domain \(\mathcal{B}_{\bar{\mathbf{R}}}\) can be found by applying the same substitutions to the result (25), see Figs. 10(c) and (d), \[\bar{R}(\bar{\phi})\approx\bar{R}_{m}(\bar{\phi})\pm\delta\bar{R}(\bar{ \phi}) \tag{112}\] with \(\bar{R}_{m}(\bar{\phi})=\bar{R}_{m}(1+\epsilon\sin^{2}\bar{\phi})\) and the width \[2\,\delta\bar{R}(\bar{\phi})=\frac{2}{3}\sqrt{\frac{8\bar{C}}{\gamma}}( \kappa_{m}(\bar{\phi})-1)^{3/2}. \tag{113}\] The landing line \(\mathcal{L}_{\tilde{\mathbf{R}}}\) is given by (see Eq. (23) and note that the jump point is shifted by \(\tilde{u}_{\mathrm{jp}}\) away from \(\tilde{x}_{m}\), see Eq. (19)) \[\tilde{R}_{\mathrm{lp}}(\tilde{\phi})\approx\tilde{R}_{m}(\tilde{\phi})\mp 2 \,\delta\tilde{R}(\tilde{\phi}). \tag{114}\] An additional complication is the finite angular extension of the unstable and bistable domains \(\mathcal{U}_{\tilde{\mathbf{R}}}\) and \(\mathcal{B}_{\bar{\mathbf{R}}}\); these are limited by the condition \(\kappa_{m}(\tilde{\phi}_{\rm max})=1\), providing us with the constraint \[\tilde{\phi}_{\rm max}=\bar{\phi}_{\rm max}\approx\pm\sqrt{\frac{\kappa_{m}-1} {2\epsilon}} \tag{115}\] near the strong pinning onset with \((\kappa_{m}-1)\ll\epsilon\). The resulting domains \(\mathcal{U}_{\tilde{\mathbf{R}}}\) have characteristic extensions of scale \(\propto\sqrt{\kappa_{m}-1}\), see Fig. 10. Close to merging (marked by crosses in the figure) at \(\tilde{\phi}=\pm\pi/2\), we define the deviation \(\delta\tilde{\phi}=\pi/2-\tilde{\phi}\) with \(\delta\tilde{\phi}\ll 1\), and imposing the condition \(\kappa_{m}(\tilde{\phi}_{\rm max})=1\), we find \[\delta\tilde{\phi}_{\rm max}=\delta\bar{\phi}_{\rm max}\approx\sqrt{1-\frac {\kappa_{m}-1}{2\epsilon}}\approx\sqrt{\frac{1-\kappa_{s}}{2\epsilon}}. \tag{116}\] The corresponding geometries of \(\mathcal{U}_{\tilde{\mathbf{R}}}\) and \(\mathcal{B}_{\bar{\mathbf{R}}}\) are shown in Fig. 11 for \(1-\kappa_{s}\approx 0.01\) and \(\epsilon=0.1\). Finally, \(\delta\tilde{\phi}_{\rm max}\) vanishes at merging for \(\kappa_{s}=1\) (or \(\kappa_{m}-1\approx 2\epsilon\)), in agreement, to order \(\epsilon\), with the exact result (97).

Figure 10: Unstable and bistable domains close to the onset of strong pinning for a uniaxial defect (94) centered at the origin, with \(\epsilon=0.1\) and \(\kappa_{m}-1=0.01\). The pinning potential is steepest at angles \(\tilde{\phi}=0,\,\pi\) and least steep at \(\tilde{\phi}=\pm\pi/2\), hence strong pinning is realized first in a small interval around \(\tilde{\phi}=0,\,\pi\) (solid black dots) where \(\kappa_{m}(\tilde{\phi})\geq 1\). (a) The unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\) in tip space is bounded by red/blue solid lines (jump lines \(\mathcal{J}_{\tilde{\mathbf{R}}}\), see Eq. (108)); dashed lines mark the associated landing lines \(\mathcal{L}_{\tilde{\mathbf{R}}}\), see (114). (b) Focus on the unstable domain near \(\tilde{\phi}=0\) in polar coordinates \(\tilde{R}\) and \(\tilde{\phi}\). The jumping (solid) and landing (dashed) lines have the approximate shape of ellipses, see Eq. (111), in agreement with our analysis of Sec. III.2. (c) The bistable domain \(\mathcal{B}_{\bar{\mathbf{R}}}\) in asymptotic space involves symmetric crescents centered at \(\tilde{\phi}=0,\,\pi\) and a narrow width \(\propto(\kappa_{m}(\tilde{\phi})-1)^{3/2}\), see Eq. (112), in agreement with the analysis of Sec. III.3. (d) Focus on the bistable domain at \(\tilde{\phi}=0\) in polar coordinates \(\tilde{R}\) and \(\tilde{\phi}\). Red/blue colors indicate different vortex configurations as quantified through the order parameter \(\tilde{R}-\tilde{R}_{m}(\tilde{\phi})\).
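The small-angle estimate (115) for the angular extent of the unstable arcs can be compared against the exact root of \(\kappa_m(\tilde\phi)=1\) from Eq. (107); a short illustrative check (the numbers below are arbitrary test values in the regime \(\kappa_m-1\ll\epsilon\)):

```python
import numpy as np

def kappa_of_phi(phi, kappa_m, eps):
    """Angle-dependent Labusch parameter, Eq. (107)."""
    return kappa_m - 2 * eps * np.sin(phi)**2

eps, kappa_m = 0.1, 1.01     # just beyond onset, (kappa_m - 1) << eps

# exact angular half-width of the unstable arc: kappa_m(phi_max) = 1
# gives sin^2(phi_max) = (kappa_m - 1)/(2 eps)
phi_exact = np.arcsin(np.sqrt((kappa_m - 1) / (2 * eps)))
# small-angle estimate, Eq. (115)
phi_approx = np.sqrt((kappa_m - 1) / (2 * eps))
```

For these parameters the estimate deviates from the exact half-width by less than one percent, consistent with the expansion to order \(\sqrt{\kappa_m-1}\).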
Pushing the Labusch parameter beyond the merger with \(\kappa_{s}>1\) or \(\kappa_{m}>(1+\epsilon)^{2}\approx 1+2\epsilon\), the unstable and bistable regimes \(\mathcal{U}_{\tilde{\mathbf{R}}}\) and \(\mathcal{B}_{\bar{\mathbf{R}}}\) change their topology: they develop a (non-simply connected) ring-like geometry with separated inner and outer edges that are a finite distance apart in the radial direction at all angles \(\tilde{\phi}\) and \(\bar{\phi}\). The situation after the merger is shown in Fig. 12 for \(\kappa_{s}-1\approx 0.01\) and \(\epsilon=0.1\), with the merging points \(\tilde{\mathbf{R}}_{s}\) and \(\bar{\mathbf{R}}_{s}\) marked by crosses.

Figure 11: Unstable and bistable domains before merging for a uniaxial defect (94) centered at the origin, with \(\epsilon=0.1\) and \(1-\kappa_{s}\approx 0.01\). Strong pinning is realized everywhere but in a small interval around \(\tilde{\phi}=\pm\pi/2\) where \(\kappa_{m}(\tilde{\phi})<1\). (a) The unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\) in the tip plane is bounded by the solid red/blue jump lines \(\mathcal{J}_{\tilde{\mathbf{R}}}\), see Eq. (108) and involves two strongly bent ellipses originating from angles \(\tilde{\phi}=0\), \(\pi\) (black dots) and approaching one another close to \(\tilde{\phi}=\pm\pi/2\) (black crosses); red/blue dashed lines are landing points as given by Eq. (114). (b) Focus (in polar coordinates \(\tilde{R}\), \(\tilde{\phi}\)) on the tips of the unstable domain near \(\tilde{\phi}=\pi/2\). (c) The bistable domain \(\mathcal{B}_{\bar{\mathbf{R}}}\) in the asymptotic space consists of thin symmetric crescents (colored in magenta) originating from \(\tilde{\phi}=0\), \(\pi\), with the delimiting black solid lines given by Eq. (112). (d) Focus on the cusps of the bistable domain close to \(\tilde{\phi}=\pi/2\) in polar coordinates \(\tilde{R}\), \(\tilde{\phi}\). Red/blue colors indicate different vortex configurations as quantified through the order parameter \(\tilde{R}-\tilde{R}_{m}(\tilde{\phi})\).

Figure 12: Unstable and bistable domains for a uniaxial defect (94) after merging, with \(\epsilon=0.1\) and \(\kappa_{s}-1\approx 0.01\). (a) The unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\) in tip plane is enclosed between the jump lines \(\mathcal{J}_{\tilde{\mathbf{R}}}\) (solid red/blue, see Eq. (108)) and takes the shape of a deformed ring with a wider (narrower) width at strongest (weakest) pinning near the solid dots (crosses). Red/blue dashed lines mark the landing positions \(\mathcal{L}_{\tilde{\mathbf{R}}}\) of the vortex tips and are given by Eq. (114). (b) Focus on the narrowing in the unstable domain close to the merger points (crosses) at \(\tilde{\phi}=\pi/2\) in the polar coordinates \(\tilde{R}\), \(\tilde{\phi}\). (c) The bistable domain \(\mathcal{B}_{\bar{\mathbf{R}}}\) in asymptotic space is a narrow ring (colored in magenta) thicker (thinner) at points of strongest (weakest) pinning near \(\tilde{\phi}=0\), \(\pi\) (\(\tilde{\phi}=\pm\pi/2\)); black lines correspond to Eq. (112). (d) Focus on the constriction in the bistable domain close to \(\tilde{\phi}=\pi/2\) in polar coordinates \(\tilde{R}\), \(\tilde{\phi}\). Red/blue colors indicate different vortex configurations as quantified through the order parameter \(\tilde{R}-\tilde{R}_{m}(\tilde{\phi})\).

The merging of the unstable domains at the saddle point \(\tilde{\mathbf{R}}_{s}\) is a general feature of irregular pinning potentials. In the next section, we will analyze the behavior of the unstable domains close to a saddle point \(\tilde{\mathbf{R}}_{s}\) of the Hessian determinant \(D(\tilde{\mathbf{R}})\) and obtain a universal description of their geometry close to this point.
We will see that the geometry associated with this merger is of a hyperbolic type described by \(\gamma\tilde{u}^{2}+\delta\tilde{v}^{2}=2\tilde{C}(\kappa_{s}-1)\), \(\gamma>0\) and \(\delta<0\) (assuming no skew). The change in topology then is driven by the sign change in \(\kappa_{s}-1\): before merging, \(\kappa_{s}<1\), the hyperbola is open along the unstable (radial) direction \(\tilde{u}\), thus separating the two unstable regions, while after merging, \(\kappa_{s}>1\), the hyperbola is open along the transverse direction \(\tilde{v}\), with the ensuing passage defining the single, non-simply connected, ring-like unstable region.

## V Merger points

The merging of unstable and bistable domains is a general feature of irregular pinning potentials that is relevant beyond the simple example of a weakly anisotropic uniaxial defect discussed above. Indeed, while the exact geometries of \(\mathcal{U}_{\tilde{\mathbf{R}}}\) and \(\mathcal{B}_{\bar{\mathbf{R}}}\) depend on the precise shape of the pinning potential, their behavior close to merging is universal. Below, we will study this universal behavior by generalizing the expansions of Sec. III to saddle points \(\tilde{\mathbf{R}}_{s}\) of the determinant \(D(\tilde{\mathbf{R}})\). As with the onset of strong pinning, the merger of two domains induces a change in topology of the unstable and bistable domains; we will discuss these topological aspects of onsets and mergers in Secs. V.4 and VI below.

### Expansion near merger

Following the strategy of Sec. III, we expand the energy functional around a saddle point \(\tilde{\mathbf{R}}_{s}\) of the determinant \(D(\tilde{\mathbf{R}})\) in order to obtain closed expressions for the unstable and bistable domains at merging.
In doing so, we again define local coordinate systems \((\tilde{u},\tilde{v})\) and \((\bar{u},\bar{v})\) in tip and asymptotic space, centered at \(\tilde{\mathbf{R}}_{s}\) and \(\bar{\mathbf{R}}_{s}\), where the latter is associated with \(\tilde{\mathbf{R}}_{s}\) through the force balance equation (38) in the original laboratory system. Furthermore, we fix our axes such that \(D(\tilde{\mathbf{R}}_{s})\) is a local maximum along the (unstable) \(u\)- and a local minimum along the (stable) \(v\)-direction of the saddle; the mixed term \(\propto\tilde{u}\tilde{v}\) is absent from the expansion (as the Hessian matrix is symmetric). Moreover, the vanishing slopes at the saddle point, see (98), imply the absence of terms \(\propto\tilde{u}^{3}\) and \(\propto\tilde{u}^{2}\tilde{v}\) in the expansion; dropping higher-order terms (corresponding to double-primed terms in (40)), we arrive at the expression \[e_{\text{pin}}(\tilde{\mathbf{R}};\bar{\mathbf{R}})=\frac{ \tilde{C}}{2}(1-\kappa_{s})\,\tilde{u}^{2}+\frac{\tilde{C}+\lambda_{+,s}}{2} \,\tilde{v}^{2}+\frac{a_{s}}{2}\,\tilde{u}\tilde{v}^{2}\\ +\frac{\alpha_{s}}{4}\,\tilde{u}^{2}\tilde{v}^{2}+\frac{\beta_{s}} {6}\,\tilde{u}^{3}\tilde{v}+\frac{\gamma_{s}}{24}\,\tilde{u}^{4}-\tilde{C} \bar{u}\tilde{u}-\tilde{C}\bar{v}\tilde{v}, \tag{117}\] with \(\kappa_{s}\equiv-\lambda_{-}(\tilde{\mathbf{R}}_{s})/\tilde{C},\ \lambda_{+,s}\equiv\lambda_{+}(\tilde{\mathbf{R}}_{s})\) and the remaining coefficients defined in analogy to Eq. (58). The most important term in the expansion (117) is the curvature term \(\tilde{C}(1-\kappa_{s})\,\tilde{u}^{2}/2\) along the unstable direction \(u\). As before in Sec. III.2, see Eq. (58), the coefficient \((1-\kappa_{s})\) changes sign at some value of the pinning strength and will serve as the small parameter in our considerations. The higher-order terms in the expansion (117) are constrained by the saddle condition (99), implying that (cf.
(48) and (50)) \[\gamma_{s}\delta_{s}-\beta_{s}^{2}<0 \tag{118}\] with \[\delta_{s}\equiv\alpha_{s}-\frac{2a_{s}^{2}}{\tilde{C}+\lambda_{+,s}} \tag{119}\] (for the saddle point there is no condition on the trace of the Hessian). The mapping of the two-dimensional pinning energy (117) to an effective one-dimensional Landau theory (103) of the van der Waals kind is discussed in Appendix A.2, both before and after merging.

### Unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\)

#### V.2.1 Jump line \(\mathcal{J}_{\tilde{\mathbf{R}}}\)

The boundary of the unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\) is determined by the jump condition \(D(\tilde{\mathbf{R}}_{s,\text{jp}})=0\). Making use of the expansion (117) and keeping only terms quadratic in \(\tilde{u},\tilde{v}\), the edges \(\delta\tilde{\mathbf{R}}_{s,\text{jp}}=(\tilde{u}_{s,\text{jp}},\tilde{v}_{s, \text{jp}})\) of \(\mathcal{U}_{\tilde{\mathbf{R}}}\) (measured relative to \(\tilde{\mathbf{R}}_{s}\)) are given by the solutions of the quadratic form (cf. (53)) \[[\gamma_{s}\,\tilde{u}^{2}+2\beta_{s}\,\tilde{u}\tilde{v}+\delta_{s}\,\tilde{v }^{2}]_{\tilde{\mathbf{R}}_{s,\text{jp}}}=2\tilde{C}(\kappa_{s}-1). \tag{120}\] Equation (120) describes a hyperbola (centered at \(\tilde{\mathbf{R}}_{s}\)) as its associated determinant is negative, see Eq. (118). Again, (120) can be cast in the form of a matrix equation \[\delta\tilde{\mathbf{R}}_{s,\text{jp}}^{T}M_{s,\text{jp}}\delta\tilde{ \mathbf{R}}_{s,\text{jp}}=\tilde{C}(\kappa_{s}-1), \tag{121}\] with \(M_{s,\text{jp}}\) given by \[M_{s,\text{jp}}=\begin{bmatrix}\gamma_{s}/2&\beta_{s}/2\\ \beta_{s}/2&\delta_{s}/2\end{bmatrix} \tag{122}\] and \(\det M_{s,\text{jp}}=(\gamma_{s}\delta_{s}-\beta_{s}^{2})/4<0\). As shown in Fig. 13, the geometry of the unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\) changes drastically when \(1-\kappa_{s}\) changes sign. Before merging, i.e., for \(1-\kappa_{s}>0\), the unstable domain (top and bottom regions in Fig.
13(a)) is disconnected along the stable \(v\)-direction and the two red/blue branches of the hyperbola (120) describe the tips of \(\mathcal{U}_{\mathbf{\tilde{R}}}\). When \(\kappa_{s}\) goes to unity, the tips of the unstable domain merge at the saddle point \(\tilde{\mathbf{R}}_{s}\). After merging, the unstable domain extends continuously from the top to the bottom in Fig. 13(b) with a finite width along the unstable \(u\)-direction, similarly to the isotropic case shown in Fig. 5(c). Correspondingly, the two (red and blue) branches of the hyperbola (120) now describe the edges of \(\mathcal{U}_{\mathbf{\tilde{R}}}\). Solving the quadratic equation (120) before merging, i.e., \(1-\kappa_{s}>0\), we find solutions \(\tilde{u}_{s,\mathrm{jp}}(\tilde{v})\) away from a gap along the stable \(v\)-direction, \[\tilde{u}_{s,\mathrm{jp}}(|\tilde{v}|\geq\tilde{v}_{s,c})=-\frac {1}{\gamma_{s}}\Big{[}\beta_{s}\tilde{v}\\ \pm\sqrt{2\gamma_{s}\bar{C}(\kappa_{s}-1)-(\gamma_{s}\delta_{s}- \beta_{s}^{2})\tilde{v}^{2}}\Big{]}, \tag{123}\] i.e., Eq. (123) has real solutions in the (unbounded) interval \(|\tilde{v}|\geq\tilde{v}_{s,c}\), with \[\tilde{v}_{s,c}=\sqrt{2\gamma_{s}\bar{C}(1-\kappa_{s})/|\gamma_{s}\delta_{s}- \beta_{s}^{2}|}. \tag{124}\] For the uniaxial defect (94) before merging, this gap corresponds to a splitting of \(\mathcal{U}_{\mathbf{\tilde{R}}}\) along the stable angular direction, producing two separated domains as shown in Fig. 11(a). The coordinates \((\tilde{u}_{s,\mathrm{jp}}(\pm\tilde{v}_{s,c}),\pm\tilde{v}_{s,c})\) give the positions of the vertices \(\delta\tilde{\mathbf{R}}_{s,c,\pm}^{<}\) (relative to \(\tilde{\mathbf{R}}_{s}\)) of the hyperbola before merging, \[\delta\tilde{\mathbf{R}}_{s,c,\pm}^{<}=\pm\left(-\beta_{s}/\gamma_{s},1 \right)\,\tilde{v}_{s,c}. \tag{125}\] These are marked as black crosses in Fig. 13(a) (note the rotation in the geometry as compared with Fig. 11(a)). 
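These relations can be checked directly: for illustrative coefficients satisfying the saddle condition (118), the discriminant of Eq. (123) changes sign exactly at \(\tilde{v}_{s,c}\) of Eq. (124), and the vertex (125) lies on the jump line (120). A minimal sketch (the coefficient values are arbitrary test numbers, not derived from a specific defect):

```python
import math

# illustrative coefficients obeying the saddle condition (118): g*d - b*b < 0
g, b, d, C = 1.0, 1.5, 1.0, 1.0      # stand-ins for gamma_s, beta_s, delta_s, C
kappa_s = 0.99                       # before merging

def Q(u, v):
    """Quadratic form on the left-hand side of Eq. (120)."""
    return g * u * u + 2 * b * u * v + d * v * v

def disc(v):
    """Discriminant of Eq. (120) solved for u at fixed v, cf. Eq. (123)."""
    return 2 * g * C * (kappa_s - 1) - (g * d - b * b) * v * v

v_c = math.sqrt(2 * g * C * (1 - kappa_s) / abs(g * d - b * b))   # Eq. (124)
u_v, v_v = -(b / g) * v_c, v_c       # vertex of the hyperbola, Eq. (125)
```

The discriminant is negative inside the gap \(|\tilde v|<\tilde v_{s,c}\) and positive outside, and \(Q\) evaluated at the vertex returns \(2\tilde{C}(\kappa_s-1)\) identically.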
We denote the distance between these vertices by \(\delta v^{<}\), defining a gap of width \(\propto\sqrt{1-\kappa_{s}}\) given by \[\delta v^{<}=2|\delta\tilde{\mathbf{R}}_{s,c,\pm}^{<}|=2\sqrt{\left(\gamma_{s }+\frac{\beta_{s}^{2}}{\gamma_{s}}\right)\frac{\bar{C}(1-\kappa_{s})}{|\gamma _{s}\delta_{s}-\beta_{s}^{2}|}}. \tag{126}\] After merging, i.e., for \(\kappa_{s}-1>0\), the (local) topology of \(\mathcal{U}_{\mathbf{\tilde{R}}}\) has changed as the gap along \(v\) closes and reopens along the unstable \(u\)-direction; as a result, the two separated domains of \(\mathcal{U}_{\mathbf{\tilde{R}}}\) have merged. The two branches of the hyperbola derived from (120) are now parametrized as \[\tilde{v}_{s,\mathrm{jp}}(|\tilde{u}|\geq\tilde{u}_{s,e})=-\frac {1}{\delta_{s}}\Big{[}\beta_{s}\tilde{u}\\ \pm\sqrt{2\delta_{s}\bar{C}(\kappa_{s}-1)-(\gamma_{s}\delta_{s}- \beta_{s}^{2})\tilde{u}^{2}}\Big{]}, \tag{127}\] with \[\tilde{u}_{s,e}=\sqrt{2\delta_{s}\bar{C}(\kappa_{s}-1)/|\gamma_{s}\delta_{s} -\beta_{s}^{2}|}. \tag{128}\] The corresponding unstable domain is shown in Fig. 13(b). For the uniaxial defect (94) after merging, this gap now corresponds to the finite width of \(\mathcal{U}_{\mathbf{\tilde{R}}}\) along the radial direction, as shown in Fig. 12(a). The coordinates \((\pm\tilde{u}_{s,e},\tilde{v}_{s,\mathrm{jp}}(\pm\tilde{u}_{s,e}))\) for the vertices \(\tilde{\mathbf{R}}_{s,e,\pm}^{>}\) read \[\delta\tilde{\mathbf{R}}_{s,e,\pm}^{>}=\pm\left(1,-\frac{\beta_{s}}{\delta_{ s}}\right)\,\tilde{u}_{s,e} \tag{129}\] and correspond to the points of closest approach in the branches of the hyperbola (120); these are again marked as black crosses in Fig. 13(b) but are no longer associated with critical points (we index these extremal points by 'e'). 
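The reopening of the gap from the \(v\)- to the \(u\)-direction across \(\kappa_s=1\) is easiest to see in the skew-free normal form (\(\beta_s=0\), \(\gamma_s>0\), \(\delta_s<0\)); the following sketch uses illustrative coefficients:

```python
import numpy as np

# skew-free normal form: gamma*u^2 + delta*v^2 = 2*C*(kappa_s - 1),
# with gamma > 0 > delta; the coefficients are illustrative only
gamma, delta, C = 1.0, -1.0, 1.0

def boundary_v(u, kappa_s):
    """Positive-v boundary point of the unstable domain at given u;
    returns nan where the jump line has no point at this u."""
    arg = (2 * C * (kappa_s - 1) - gamma * u**2) / delta
    return np.where(arg >= 0, np.sqrt(np.abs(arg)), np.nan)

# before merging (kappa_s < 1): a boundary point exists at every u -- the two
# branches run along u and separate the unstable tips by a gap 2*v_c along v
v_c = boundary_v(0.0, 0.99)
# after merging (kappa_s > 1): no boundary exists for small |u| -- the branches
# have reconnected and a continuous unstable channel passes through the saddle
gap = boundary_v(0.0, 1.01)          # nan: the saddle now lies inside U
```

The widths of the gap before merging and of the channel after merging both scale as \(\sqrt{|\kappa_s-1|}\), in line with Eqs. (124) and (128).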
Their distance \(\delta u^{>}\) is given by \[\delta u^{>}=2|\delta\tilde{\mathbf{R}}_{s,e,\pm}^{>}|=2\sqrt{\left(\delta_{s }+\frac{\beta_{s}^{2}}{\delta_{s}}\right)\frac{\bar{C}(\kappa_{s}-1)}{|\gamma_{ s}\delta_{s}-\beta_{s}^{2}|}}, \tag{130}\] i.e., the smallest width in \(\mathcal{U}_{\mathbf{\tilde{R}}}\) grows as \(\propto\sqrt{\kappa_{s}-1}\). As discussed above and shown in Fig. 13, the solutions of the quadratic form (120) before and after merging are unbounded for every value of \(\kappa_{s}-1\). As a consequence, neglecting the higher-order terms in the determinant \(D(\tilde{\mathbf{R}})\) is valid only in a narrow neighborhood of the saddle \(\tilde{\mathbf{R}}_{s}\), where the boundaries of \(\mathcal{U}_{\tilde{\mathbf{R}}}\) have the shape of a hyperbola. Away from the saddle, these higher-order terms are relevant in determining the specific shape of the unstable and bistable domains, e.g., the ring-like structures of \(\mathcal{U}_{\tilde{\mathbf{R}}}\) and \(\mathcal{B}_{\bar{\mathbf{R}}}\) in Figs. 11 and 12.

#### V.2.2 Landing line \(\mathcal{L}_{\tilde{\mathbf{R}}}\)

To find the second bistable vortex tip configuration \(\tilde{\mathbf{R}}_{s,\mathrm{lp}}\) associated with the edges of \(\mathcal{B}_{\bar{\mathbf{R}}}\) before and after merging, we repeat the steps of Sec. III.2.2. For the jump vector \(\Delta\tilde{\mathbf{R}}_{s}=\tilde{\mathbf{R}}_{s,\mathrm{lp}}-\tilde{ \mathbf{R}}_{s,\mathrm{jp}}\), we find the result \[\Delta\tilde{u}_{s}(\tilde{v}) =-3\left(\gamma_{s}\,\tilde{u}_{s,\mathrm{jp}}(\tilde{v})+\beta_{ s}\,\tilde{v}\right)/\gamma_{s}, \tag{131}\] \[\Delta\tilde{v}_{s}(\tilde{v}) =-\left[a_{s}/(\bar{C}+\lambda_{s,+})\right]\tilde{v}\,\Delta \tilde{u}_{s}(\tilde{v}), \tag{132}\] cf. Eqs. (65) and (66) above.
Here, we make use of the parametrization for the jump coordinate \(\tilde{u}_{s,\mathrm{jp}}(\tilde{v})\) in (123) before merging; after merging, the above result is still valid but should be expressed in terms of the parametrization \(\tilde{v}_{s,\mathrm{jp}}(\tilde{u})\) in Eq. (127). The landing positions \(\tilde{\mathbf{R}}_{s,\mathrm{lp}}=\tilde{\mathbf{R}}_{s,\mathrm{jp}}+\Delta \tilde{\mathbf{R}}_{s}\) arrange along the branches \(\mathcal{L}_{\tilde{\mathbf{R}}}\) of a hyperbola in \(\tilde{\mathbf{R}}\)-space that are described by the matrix equation \[\delta\tilde{\mathbf{R}}_{s,\mathrm{lp}}^{\mathrm{T}}M_{s,\mathrm{lp}}\, \delta\tilde{\mathbf{R}}_{s,\mathrm{lp}}=\bar{C}(\kappa_{s}-1), \tag{133}\] with the landing matrix now given by \[M_{s,\mathrm{lp}}=\frac{1}{4}M_{s,\mathrm{jp}}+\begin{bmatrix}0&0\\ 0&\frac{3}{4}\Big{(}\frac{\delta_{s}}{2}-\frac{\beta_{s}^{2}}{2\gamma_{s}} \Big{)}\end{bmatrix} \tag{134}\] with \(\det M_{s,\mathrm{lp}}=(\gamma_{s}\delta_{s}-\beta_{s}^{2})/16<0\). Before merging, the vertices of the landing and jumping hyperbolas coincide and the jump (131)-(132) vanishes at these points. Moreover, as for the contact points (67) close to onset of strong pinning, the tangent to the jumping and landing hyperbolas at the vertices is parallel to the \(u\)-direction, as is visible in Fig. 13(a). For \(\kappa_{s}=1\), the tips of \(\mathcal{U}_{\tilde{\mathbf{R}}}\) merge and both the jumping and landing hyperbolas coincide at \(\tilde{\mathbf{R}}_{s}\). After merging, i.e., for \(\kappa_{s}-1>0\), the condition \(\Delta\tilde{u}_{s}=\Delta\tilde{v}_{s}=0\) cannot be realized along the hyperbola (120) and the jumping and landing lines separate completely; as a result, both the jumping distance \(\Delta\tilde{\mathbf{R}}_{s}\) as well as the jump in energy \(\Delta\mathrm{e}_{\mathrm{pin}}\) are always finite (see also Appendix A.2). 
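The determinant statements below Eqs. (122) and (134), \(\det M_{s,\text{jp}}=(\gamma_s\delta_s-\beta_s^2)/4\) and \(\det M_{s,\text{lp}}=(\gamma_s\delta_s-\beta_s^2)/16\), can be verified in exact rational arithmetic; the coefficient triples below are arbitrary test values (with \(\beta_s^2>\gamma_s\delta_s\)), not values derived from a specific pinning potential:

```python
from fractions import Fraction as F

def dets(g, b, d):
    """Exact determinants of the jump matrix (122) and landing matrix (134)."""
    det_jp = (g / 2) * (d / 2) - (b / 2) ** 2
    # M_lp = M_jp/4 + diag(0, (3/4)*(d/2 - b^2/(2g))), Eq. (134)
    a22 = d / 8 + F(3, 4) * (d / 2 - b * b / (2 * g))
    det_lp = (g / 8) * a22 - (b / 8) ** 2
    return det_jp, det_lp

# rational triples (gamma_s, beta_s, delta_s) obeying the saddle condition (118)
triples = [(F(1), F(3, 2), F(1)), (F(2), F(3), F(1)), (F(5), F(4), F(2))]
results = [dets(g, b, d) for g, b, d in triples]
```

Both determinants come out negative and in the exact ratio \(1:4\), confirming that the landing hyperbola shares the orientation of the jumping hyperbola.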
Indeed, after merging the landing hyperbola (133) has vertices \[\delta\tilde{\mathbf{R}}_{s,v,\pm}=\pm\left(1,-\frac{\gamma_{s}\beta_{s}}{(4 \gamma_{s}\delta_{s}-3\beta_{s}^{2})}\right)\,\tilde{u}_{s,v}, \tag{135}\] with \[\tilde{u}_{s,v}=\sqrt{\frac{2\bar{C}(\kappa_{s}-1)(4\gamma_{s}\delta_{s}-3 \beta_{s}^{2})}{\gamma_{s}(\gamma_{s}\delta_{s}-\beta_{s}^{2})}} \tag{136}\] different from the jumping hyperbola in (129). At these points, the stable and unstable hyperbolas are tangent to the \(v\)-direction, as is visible in Fig. 13(b). In Sec. V.4 below, we will take a step back from the local analysis of the unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\) close to a saddle point \(\tilde{\mathbf{R}}_{s}\) and consider the evolution of its geometry across the merging transition from a global perspective using specific examples. Elaborating on the analysis of Sec. III.5, we will provide a simple argument explaining the absence of contact points between jump and landing lines after merging. Furthermore, we discuss the two possible roles of mergers as changing the number of components of \(\mathcal{U}_{\tilde{\mathbf{R}}}\) or changing the connectivity of \(\mathcal{U}_{\tilde{\mathbf{R}}}\) between simply and non-simply connected areas. Before doing so, we discuss the behavior of the bistable region \(\mathcal{B}_{\bar{\mathbf{R}}}\) close to merging.

### Bistable domain \(\mathcal{B}_{\bar{\mathbf{R}}}\)

The set of asymptotic positions corresponding to \(\mathcal{U}_{\tilde{\mathbf{R}}}\) before and after merging, i.e., the bistable domain \(\mathcal{B}_{\bar{\mathbf{R}}}\), can be found by systematically repeating the steps in Sec. III.3. Applying the force balance equation \(\nabla_{\tilde{\mathbf{R}}}\,e_{\mathrm{pin}}(\tilde{\mathbf{R}};\bar{\mathbf{R}})=0\) to the energy expansion (117), we find the counterpart of Eqs.
(69), \[\bar{C}\bar{u} =\bar{C}(1-\kappa_{s})\tilde{u}+\frac{a_{s}}{2}\tilde{v}^{2}+ \frac{\gamma_{s}}{6}\tilde{u}^{3}+\frac{\beta_{s}}{2}\tilde{u}^{2}\tilde{v}+ \frac{\alpha_{s}}{2}\tilde{u}\tilde{v}^{2},\] \[\bar{C}\bar{v} =(\bar{C}+\lambda_{s,+})\tilde{v}+a_{s}\,\tilde{u}\tilde{v}+ \frac{\beta_{s}}{6}\tilde{u}^{3}+\frac{\alpha_{s}}{2}\tilde{u}^{2}\tilde{v}, \tag{137}\] relating tip and asymptotic positions close to merging. As for the unstable domain, the topology of \(\mathcal{B}_{\bar{\mathbf{R}}}\) depends on the sign of \(1-\kappa_{s}\). The bistable domain \(\mathcal{B}_{\bar{\mathbf{R}}}\) before merging is shown in Fig. 14(a) for \(1-\kappa_{s}=0.01\). It consists of two parts, corresponding to the two pieces of \(\mathcal{U}_{\tilde{\mathbf{R}}}\) for \(1-\kappa_{s}>0\), that terminate at the cusps \(\bar{\mathbf{R}}_{s,c,\pm}^{<}\). The latter are related to the vertices \(\tilde{\mathbf{R}}_{s,c,\pm}^{<}\) of the jumping hyperbola through the force balance equation (137), \[\delta\bar{\mathbf{R}}_{s,c,\pm}^{<}\approx\left[\left(a_{s}/2\,\bar{C} \right)\,\tilde{v}_{s,c}^{2},\pm\left(1+\lambda_{s,+}/\bar{C}\right)\tilde{v} _{s,c}\right]. \tag{138}\] For finite values of \((1-\kappa_{s})\), the cusps are separated by a distance \(2|\delta\bar{\mathbf{R}}_{s,c,\pm}^{<}|\approx 2\left(1+\lambda_{s,+}/\bar{C} \right)\tilde{v}_{s,c}\propto\sqrt{1-\kappa_{s}}\). They approach one another along the parabola \[\bar{u}_{s,0}\approx\frac{a_{s}}{2\bar{C}}\frac{1}{(1+\lambda_{s,+}/\bar{C})^{2}} \,\bar{v}_{s,0}^{2}, \tag{139}\] see the black dashed line in Fig. 14, with higher-order corrections appearing at finite skew \(\beta_{s}\neq 0\). After merging, this line lies within \(\mathcal{B}_{\bar{\mathbf{R}}}\) and defines the branch crossing line, cf. Eq. (77). After merging, when \(\kappa_{s}-1>0\), the cusps have vanished and the edges have rearranged to define a connected bistable region, see Fig. 14(b).
The extremal points of the two edges are found by evaluating the force balance equation (137) at the vertices \(\tilde{\mathbf{R}}_{s,e,\pm}^{>}\), Eq. (129), to lowest order, \[\delta\bar{\mathbf{R}}_{s,e,\pm}^{>}\approx\frac{\beta_{s}}{\delta_{s}}\left[ \frac{a_{s}}{2\bar{C}}\,\frac{\beta_{s}}{\delta_{s}}\,\tilde{u}_{s,e}^{2}, \mp\left(1+\frac{\lambda_{s,+}}{\bar{C}}\right)\,\tilde{u}_{s,e}\right]. \tag{140}\] For finite values of \((\kappa_{s}-1)\), these points are separated by a distance \(2|\delta\bar{\mathbf{R}}_{s,e,\pm}^{>}|\approx 2\left(1+\lambda_{s,+}/\bar{C} \right)(\beta_{s}/\delta_{s})\tilde{u}_{s,e}\propto\sqrt{\kappa_{s}-1}\). Note that the extremal points \(\bar{\mathbf{R}}_{s,e,\pm}^{>}\) are no longer associated with cusps or critical points, as these have disappeared in the merging process. When the skew parameter vanishes as in Fig. 14, \(\beta_{s}=0\), higher-order terms in \((\kappa_{s}-1)\) in the force-balance equation (137) become relevant in determining the positions \(\bar{\mathbf{R}}_{s,e,\pm}^{>}\), separating them along the unstable \(u\)-direction. In this case, we obtain a different scaling for their distance, i.e., \(|\delta\bar{\mathbf{R}}_{s,e,\pm}^{>}|\propto\left(\kappa_{s}-1\right)^{3/2}\).

### Topological aspect of mergers

In order to discuss the topological aspect of a merger, it is convenient to consider some specific examples. In Sec. IV, we have analyzed the case of a uniaxial defect with a quadrupolar anisotropy \(\delta e_{\mathrm{p}}\propto\epsilon\sin^{2}\tilde{\phi}\) in the pinning potential, see (105), that produced a degenerate onset at symmetric points \([\pm\tilde{x}_{m},0]\). Here, we choose again a weakly anisotropic defect centered at the origin but with a dipolar deformation \(\delta e_{\mathrm{p}}\propto\epsilon\cos\tilde{\phi}\) that results in an angle-dependent Labusch parameter \[\kappa_{m}(\tilde{\phi})=\kappa_{m}-\epsilon\cos\tilde{\phi}, \tag{141}\] see Eq. (107).
The strong pinning onset of such a defect then appears in an isolated point on the negative \(x\)-axis, with the unstable ellipse \(\mathcal{U}_{\mathbf{\hat{R}}}\) deforming with increasing \(\kappa_{m}\) into a horseshoe that is open on the positive \(x\)-axis; the closing of the horseshoe to produce a ring, see Fig. 15, then corresponds to the local merger shown in Fig. 13. With this example in mind, we can repeat the discussion in Sec. III.5: The unstable eigenvector \(\mathbf{v}_{-}(\mathbf{R}_{\mathrm{jp}})\) points radially outwards from the origin over the entire horseshoe, including the merging region at positive \(x\). On the other hand, the tangent to the boundary \(\partial\mathcal{U}_{\mathbf{\hat{R}}}\) rotates forward and back along the horseshoe as shown in Fig. 15 (we attribute a direction to \(\partial\mathcal{U}_{\mathbf{\hat{R}}}\) with the convention of following the boundary with the unstable region on the left); in fact, over most of the boundary, the tangent is simply orthogonal to \(\mathbf{v}_{-}\), with both vectors rotating together when going along \(\partial\mathcal{U}_{\mathbf{\hat{R}}}\). At the ends of the horseshoe, however, the tangent locally aligns parallel (anti-parallel) to \(\mathbf{v}_{-}\) and the two vectors rotate (anti-clockwise) with respect to one another, with the total winding equal to \(2\pi\). After the merger, this winding has disappeared, with the resulting ring exhibiting no winding in the tangent fields on the inner/outer boundary; as a result, the contact points between the jump and landing lines have disappeared. Furthermore, the merger changes the topology of \(\mathcal{U}_{\mathbf{\hat{R}}}\) from the simply-connected horseshoe to the non-simply connected ring, while the number of components in \(\mathcal{U}_{\mathbf{\hat{R}}}\) has not changed. Note that the change in the relative winding is not due to crossing the singularity of the vector field \(\mathbf{v}_{-}\) as alluded to in Sec.
III.5: rather, it is the merger of the horseshoe tips that rearranges the boundaries of \(\mathcal{U}_{\mathbf{\hat{R}}}\) and makes them encircle the singularity. In the above example, we have discussed a merger that changes the connectedness of \(\mathcal{U}_{\mathbf{\hat{R}}}\). On the other hand, as we are going to show, a merger might leave the connectedness of \(\mathcal{U}_{\mathbf{\hat{R}}}\) unchanged, while modifying the number of components, i.e., the number of disconnected parts, in \(\mathcal{U}_{\mathbf{\hat{R}}}\). Let us again consider a specific example in the form of an anisotropic defect with a warped well shape, producing several (in general subsequent) onsets and mergers; in Fig. 16, we consider a situation with three onset points and subsequent individual mergers. After the onset, the three ellipses define an unstable region \(\mathcal{U}_{\mathbf{\hat{R}}}\) with three disconnected parts, each of which is simply connected. This configuration is characterized by its number of components, \(C=3\). As two of the three ellipses merge, the number of components of \(\mathcal{U}_{\mathbf{\hat{R}}}\) reduces to \(C=2\); the next merger generates a horseshoe that is still simply-connected with \(C=1\). The final merger produces a ring; while the number of components remains unchanged, \(C=1\), the unstable area assumes a non-simply connected shape with a 'hole'; we associate the index \(H=1\) with the appearance of this hole within \(\mathcal{U}_{\tilde{\mathbf{R}}}\). In physics terms, the last merger producing a hole in \(\mathcal{U}_{\tilde{\mathbf{R}}}\) is associated with the appearance of a pinned state; the unstable ring separates stable tip positions that are associated with pinned and free vortex configurations residing at small and large radii, respectively.
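The counting of components \(C\) and holes \(H\) described above can be made concrete on a discretized unstable region. The following sketch (illustrative only; the grid encoding and function names are our own) counts 4-connected components of a binary mask and identifies holes as components of the complement that do not reach the outer boundary:

```python
from collections import deque

def _flood(grid, start, value, seen):
    # 4-connected flood fill over cells of `grid` equal to `value`
    h, w = len(grid), len(grid[0])
    queue = deque([start])
    seen.add(start)
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen \
                    and grid[ny][nx] == value:
                seen.add((ny, nx))
                queue.append((ny, nx))

def _components(grid, value, skip=()):
    # number of 4-connected components of cells equal to `value`
    seen = set(skip)
    count = 0
    for y in range(len(grid)):
        for x in range(len(grid[0])):
            if (y, x) not in seen and grid[y][x] == value:
                count += 1
                _flood(grid, (y, x), value, seen)
    return count

def count_C_H(grid):
    """C = components of the unstable cells (1s); H = holes, i.e.
    components of the stable cells (0s) cut off from the outer boundary."""
    h, w = len(grid), len(grid[0])
    outer = set()
    for y in range(h):
        for x in range(w):
            on_border = y in (0, h - 1) or x in (0, w - 1)
            if on_border and grid[y][x] == 0 and (y, x) not in outer:
                _flood(grid, (y, x), 0, outer)
    return _components(grid, 1), _components(grid, 0, skip=outer)

# a square 'ring' of unstable cells: one component enclosing one hole
ring = [[1 if max(abs(y - 3), abs(x - 3)) == 2 else 0 for x in range(7)]
        for y in range(7)]
C, H = count_C_H(ring)
print(C, H, C - H)  # 1 1 0
```

For the ring the sketch reproduces \(C=1\), \(H=1\), i.e., the final configuration of Fig. 16(d); for the three separate ellipses of Fig. 16(a) it would return \(C=3\), \(H=0\).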
Defining the (topological) characteristic \(\chi\equiv C-H\), we see that \(\chi\) changes by unity at every onset and merger, either through an increase (for an onset) or decrease (for a merger) in the number of components \(C\to C\pm 1\), or through the appearance of a hole (in a merger) \(H\to H+1\). Indeed, the quantity \(\chi\) is known as the Euler characteristic of a manifold and describes its global topological properties; it generalizes the well-known Euler characteristic of a polyhedron to surfaces and manifolds [29], see Sec. VI below. Finally, Morse theory [30] connects the Euler characteristic with the local differential properties (minima, maxima, saddles) of that manifold, hence establishing a connection between local onsets and mergers (at minima and saddles of \(D(\tilde{\mathbf{R}})\)) and the global properties of \(\mathcal{U}_{\tilde{\mathbf{R}}}\) such as the appearance of new pinned states. In Sec. VI below, we consider the general case of a random pinning landscape in two dimensions and discuss the connection between local differential and global topological properties of \(\mathcal{U}_{\tilde{\mathbf{R}}}\) in the light of Morse theory; the topology of bistable domains \(\mathcal{B}_{\tilde{\mathbf{R}}}\) then follows trivially.

## VI \(\mathcal{U}_{\tilde{\mathbf{R}}}\) of a two-dimensional pinscape

We consider a two-dimensional pinning landscape \(e_{p}(\mathbf{R})\), e.g., as produced by a superposition of several (anisotropic Lorentzian) defects residing in the \(z=0\) plane. In Figs. 17 and 18, we analyse two specific cases with \(n=3\) and \(n=2\) defects as given in Eq. (94) with \(\epsilon=0.1\) and positions listed in Tables 1 and 2; these produce unstable landscapes \(\mathcal{U}_{\tilde{\mathbf{R}}}\) of considerable complexity already, see Figs. 17(a) and 18(a). Our defects are compact with \(e_{p}(\mathbf{R})\to 0\) vanishing at \(R\to\infty\); as a result, \(e_{\text{pin}}\) becomes flat at infinity.
Note that a dense assembly of uniformly distributed individual defects produces a random Gaussian pinning landscape, as has been shown in Ref. [26]. Here, we are interested in the evolution of the unstable and bistable domains \(\mathcal{U}_{\tilde{\mathbf{R}}}\) and \(\mathcal{B}_{\tilde{\mathbf{R}}}\) associated with the 2D pinning landscape \(e_{\text{pin}}\); we focus on the unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\), with the properties of the bistable domain \(\mathcal{B}_{\tilde{\mathbf{R}}}\) following straightforwardly from the solution of the force balance equation (2). Unlike the analysis above that is centered on special points of \(\mathcal{U}_{\tilde{\mathbf{R}}}\), ellipses near onset and hyperbolas near mergers, here, we are interested in the global properties of the unstable region produced by a generic (though still two-dimensional) pinscape. Figure 16: The unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\) starting out with \(C=3\) components in (a) changes topology in three steps: after the first (b) and second (c) mergers the number of components \(C\) has changed from three in (a) to two in (b) to one in (c), leading to a horseshoe shape of \(\mathcal{U}_{\tilde{\mathbf{R}}}\). The third merger closes the horseshoe to produce the ring geometry in (d) characterized by the coefficients \(C=1\) and \(H=1\) (\(H\) denotes the number of ‘holes’ in \(\mathcal{U}_{\tilde{\mathbf{R}}}\)); the Euler characteristic \(\chi=C-H\) changes by unity in every merger. Figure 15: Left: Unstable region \(\mathcal{U}_{\tilde{\mathbf{R}}}\) for a defect with dipolar asymmetry. Upon the onset of strong pinning, an unstable ellipse appears to the left of the defect center (black solid dot). With increasing pinning strength (decreasing \(\bar{C}\)) the ellipse grows and deforms into a horseshoe geometry. The unstable eigenvector field \(\mathbf{v}_{-}\) (red arrows) points radially outward away from the defect center. 
The tangent field to the boundary \(\partial\mathcal{U}_{\tilde{\mathbf{R}}}\) (black arrows) follows the unstable direction at an angle of \(\pi/2\) over most of \(\partial\mathcal{U}_{\tilde{\mathbf{R}}}\), with the exception of the two turning points where the tangent rotates by \(\pi\) with respect to \(\mathbf{v}_{-}\), producing a relative winding of \(2\pi\). Right: After the merger of the turning points the unstable region \(\mathcal{U}_{\tilde{\mathbf{R}}}\) changes topology and assumes the shape of a ring. The windings of the tangent field with respect to the eigenvector field \(\mathbf{v}_{-}\) vanish separately for both boundaries of \(\mathcal{U}_{\tilde{\mathbf{R}}}\).

As discussed in Sec. III above, the unstable region \(\mathcal{U}_{\tilde{\mathbf{R}}}\) associated with strong pinning is determined by the condition \(D(\tilde{\mathbf{R}})=0\) of vanishing Hessian determinant, more precisely, by the competition between the lowest eigenvalue \(\lambda_{-}(\tilde{\mathbf{R}})\) of the Hessian matrix \(\mathrm{H}_{ij}\) of the pinning potential \(e_{p}(\mathbf{R})\) and the effective elasticity \(\bar{C}\), see Eq. (37). In order to avoid the interference with the second eigenvalue \(\lambda_{+}(\tilde{\mathbf{R}})\) of the Hessian matrix, we consider the shifted (by \(\bar{C}\)) curvature function \[\Lambda_{\bar{C}}(\tilde{\mathbf{R}})\equiv\bar{C}+\lambda_{-}(\tilde{\mathbf{R}}), \tag{142}\] i.e., the relevant factor of the determinant \(D(\tilde{\mathbf{R}})=[\bar{C}+\lambda_{-}(\tilde{\mathbf{R}})][\bar{C}+\lambda_{+}(\tilde{\mathbf{R}})]\). The condition \[\Lambda_{\bar{C}}(\tilde{\mathbf{R}})=0 \tag{143}\] then determines the boundaries of \(\mathcal{U}_{\tilde{\mathbf{R}}}\).
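For the two-dimensional pinscapes considered here, the eigenvalue \(\lambda_{-}(\tilde{\mathbf{R}})\) entering Eq. (142) is available in closed form; writing the Hessian entries as \(\mathrm{H}_{xx}\), \(\mathrm{H}_{yy}\), and \(\mathrm{H}_{xy}\), the standard formula for the eigenvalues of a symmetric \(2\times 2\) matrix (stated here for convenience) gives

\[\lambda_{\mp}(\tilde{\mathbf{R}})=\frac{\mathrm{tr}\,\mathrm{H}}{2}\mp\sqrt{\Big{(}\frac{\mathrm{tr}\,\mathrm{H}}{2}\Big{)}^{2}-\det\mathrm{H}}=\frac{\mathrm{H}_{xx}+\mathrm{H}_{yy}}{2}\mp\sqrt{\Big{(}\frac{\mathrm{H}_{xx}-\mathrm{H}_{yy}}{2}\Big{)}^{2}+\mathrm{H}_{xy}^{2}},\]

with all entries evaluated at the tip position \(\tilde{\mathbf{R}}\); the lower branch \(\lambda_{-}\) is the eigenvalue appearing in (142) and (143).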
The above problem can be mapped to the problem of cutting a surface, where \(\Lambda_{\bar{C}}(\tilde{\mathbf{R}})\) is interpreted as a height function over \(\mathbb{R}^{2}\) that is cut at zero level; the elasticity \(\bar{C}\) then plays the role of a shift parameter that moves the function \(\lambda_{-}(\tilde{\mathbf{R}})\) downwards in height with decreasing \(\bar{C}\) (which corresponds to increasing the relative pinning strength of the pinscape in physical terms). As \(\bar{C}\) is decreased to compensate the absolute _minimum_ of \(\lambda_{-}(\tilde{\mathbf{R}})<0\), \(\bar{C}+\lambda_{-}(\tilde{\mathbf{R}})=0\), strong pinning sets in locally at \(\tilde{\mathbf{R}}_{m}\) for the first time in the form of an unstable ellipse \(\mathcal{U}_{\tilde{\mathbf{R}}}\), see Fig. 17(b) for our specific example with three defects; the Labusch parameter \(\kappa(\tilde{\mathbf{R}})\) evaluated at the point \(\tilde{\mathbf{R}}_{m}\) defines \(\kappa_{m}\), the parameter tuned in Fig. 17. Decreasing \(\bar{C}\) further, this ellipse grows and deforms, while other local _minima_ of \(\lambda_{-}(\tilde{\mathbf{R}})\) produce new disconnected parts of \(\mathcal{U}_{\tilde{\mathbf{R}}}\), a situation illustrated in Fig. 17(c) where four 'ellipses' have appeared around (local) minima (blue filled dots). A further increase in pinning strength (decrease in \(\bar{C}\)) continues to deform these 'ellipses' and adds three new ones. As the first _saddle_ drops below the zero level (red cross), two components merge and the number of components decreases; in Fig. 17(d), we have three below-zero saddles and only four components remain, \(C=4\). In Fig. 17(e) four further mergers have reduced \(C\) to \(1\) as the corresponding _saddles_ drop below zero level. This produces a single non-simply connected component, i.e., \(C=1\) and a hole, increasing the number of holes \(H\) from zero to one.
The last merger leading to (f) finally leaves \(C=1\) but cuts the stable region inside the ring into two, increasing the number of holes to \(H=2\). This sequence of onsets and mergers is conveniently described in the topographic language introduced in section IV that interprets stable tip regions as land mass (green with bright regions indicating higher mountains in Fig. 17) and unstable regions as lakes (flat blue with (below-water) height levels indicated by thin black lines), with the height \(\Lambda_{\bar{C}}=0\) defining the water level. The sequence (b) to (f) then shows the flooding of the landscape as pinning increases (\(\bar{C}\) decreasing), with white dot minima turning blue at strong pinning onsets and white cross saddles turning red at mergings; maxima in the landscape are shown as black open circles. Note that we distinguish critical points (minima, saddles) residing below (blue and red) and above (white) water level. Similarly, a (local) maximum above sea level (black open dot) turns into a blue open dot as it drops below sea level; such an event is missing in Fig. 17 but can be produced with other configurations of defects, see Fig. 18 where the curvature landscape for two defects is shown. The above discussion relates the local differential properties of the function \(\Lambda_{\bar{C}}(\tilde{\mathbf{R}})<0\), minima and saddles, to the global topological properties of \(\mathcal{U}_{\tilde{\mathbf{R}}}\), its number of components \(C(\mathcal{U}_{\tilde{\mathbf{R}}})\) and holes \(H(\mathcal{U}_{\tilde{\mathbf{R}}})\). This connection between local and global properties is conveniently discussed within Morse theory [30]. Before presenting a general mathematical formulation, let us discuss a simple heuristic argument producing the result relevant in the present context; in doing so, we make use of the above topographic language. 
Starting with the _minima_ of the function \(\Lambda_{\bar{C}}(\tilde{\mathbf{R}})\), a new disconnected component appears in \(\mathcal{U}_{\tilde{\mathbf{R}}}\) whenever the minimum drops below sea level as \(\bar{C}\) is decreased, which produces an increase \(C\to C+1\). With the further decrease of \(\bar{C}\), these disconnected regions expand and merge pairwise whenever a _saddle_ point of \(\Lambda_{\bar{C}}(\tilde{\mathbf{R}})\) goes below sea level, thereby inducing a change in the topology of \(\mathcal{U}_{\tilde{\mathbf{R}}}\) by either reducing the number of components \(C\to C-1\) (keeping \(H\) constant) or leaving it unchanged (changing \(H\to H+1\)), see, e.g., the example with the horseshoe closing up on itself in Sec. V.4. The below sea-level minima and saddles of \(\Lambda_{\bar{C}}(\tilde{\mathbf{R}})\) can naturally be identified with the vertices and edges of a graph; the edges in the graph then define the boundaries of the graph's faces (the same way as the vertices are the boundaries of the edges). For a connected graph, Euler's formula then tells us that the number \(V\) of vertices, \(E\) of edges, and \(F\) of faces are constrained via \(V-E+F=1\) (not counting the outer face extending to infinity), and a graph with \(C\) components satisfies the relation \(C=V-E+F\), as follows from simple addition. We have already identified minima and saddles of \(\Lambda_{\bar{C}}(\tilde{\mathbf{R}})<0\) with vertices and edges of a graph; denoting the number of below sea-level minima and saddles by \(m\) and \(s\), we have \(V=m\) and \(E=s\). It remains to express the number \(F\) of faces in terms of critical points of the surface \(\Lambda_{\bar{C}}(\tilde{\mathbf{R}})<0\). Indeed, the faces of our graph are associated with maxima of the function \(\Lambda_{\bar{C}}(\tilde{\mathbf{R}})\): following the boundaries of a face, we cross the corresponding saddles with the function \(\Lambda_{\bar{C}}(\tilde{\mathbf{R}})\) curving upwards away from the edges, implying that the faces of our graph include maxima of \(\Lambda_{\bar{C}}(\tilde{\mathbf{R}})\). These maxima manifest in two possible ways: either the face contains a single below sea-level maximum, or it contains a single above sea-level landscape. The above sea-level landscape comprises at least one maximum but possibly also includes other extremal points that we cannot analyse with our knowledge of the below sea-level function \(\Lambda_{\bar{C}}(\tilde{\mathbf{R}})<0\) only; we therefore call the above sea-level landscape a (single) hole. The appearance of a _single_ maximum or hole is owed to the fact that faces are not split by a below sea-level saddle, as these have already been accounted for in setting up the graph. Let us denote the number of (below sea-level) maxima by \(M\) and the number of holes by \(H\); then \(F=H+M\).

\begin{table} \begin{tabular}{l|c c c} & \(x/\xi\) & \(y/\xi\) & weight \\ \hline defect \#1 & \(1.14\) & \(1.07\) & \(0.65\) \\ defect \#2 & \(-0.98\) & \(-0.19\) & \(1\) \\ defect \#3 & \(0.20\) & \(-0.67\) & \(1\) \\ \end{tabular} \end{table} Table 1: Positions and relative weights of 3 uniaxially anisotropic Lorentzian defects in Fig. 17 as given by Eq. (94).

\begin{table} \begin{tabular}{l|c c c} & \(x/\xi\) & \(y/\xi\) & weight \\ \hline defect \#1 & \(-1.32\) & \(0.33\) & \(1\) \\ defect \#2 & \(1.48\) & \(-0.76\) & \(1\) \\ \end{tabular} \end{table} Table 2: Positions and relative weights of 2 uniaxially anisotropic Lorentzian defects in Fig. 18 as given by Eq. (94).
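Euler's relation \(C=V-E+F\) invoked above is easy to verify on small planar graphs. In the minimal sketch below (our own toy graphs, with the bounded faces \(F\) counted by hand), the number of components \(C\) is computed with a union-find and compared against \(V-E+F\):

```python
def n_components(n_vertices, edges):
    # union-find over vertex labels 0..n_vertices-1
    parent = list(range(n_vertices))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    return len({find(v) for v in range(n_vertices)})

# square with one diagonal: V=4, E=5, two bounded faces F=2
square = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
assert n_components(4, square) == 4 - 5 + 2  # C = 1

# same square plus a disconnected edge: V=6, E=6, F=2
two_parts = square + [(4, 5)]
assert n_components(6, two_parts) == 6 - 6 + 2  # C = 2
```

Both toy graphs satisfy \(C=V-E+F\), the second one with two components, consistent with one unit of \(V-E+F=1\) per connected piece.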
Combining this last expression with Euler's formula and regrouping topological coefficients \(C(\mathcal{U}_{\tilde{\mathbf{R}}})\) and \(H(\mathcal{U}_{\tilde{\mathbf{R}}})\) on one side and extremal points \(m[\Lambda_{\bar{C}}(\tilde{\mathbf{R}})]\), \(s[\Lambda_{\bar{C}}(\tilde{\mathbf{R}})]\), and \(M[\Lambda_{\bar{C}}(\tilde{\mathbf{R}})]\) on the other, we arrive at the Euler characteristic \(\chi\equiv C-H\) and its representation through local differential properties, \[\chi(\mathcal{U}_{\tilde{\mathbf{R}}})\equiv[C-H]_{\mathcal{U}_{\tilde{\mathbf{R}}}}=[m-s+M]_{\Lambda_{\bar{C}}(\tilde{\mathbf{R}})<0}. \tag{144}\] The result (144) follows rigorously from the Euler-Poincaré theorem [30; 29] in combination with Morse's theorem [30], with the former expressing the Euler characteristic \(\chi(\mathcal{U}_{\tilde{\mathbf{R}}})\) through the so-called Betti numbers \(b_{i}(\mathcal{U}_{\tilde{\mathbf{R}}})\), \[\chi(\mathcal{U}_{\tilde{\mathbf{R}}})\equiv\sum_{i=0}^{2}(-1)^{i}b_{i}(\mathcal{U}_{\tilde{\mathbf{R}}}), \tag{145}\] where the \(i\)-th Betti number \(b_{i}(\mathcal{U}_{\tilde{\mathbf{R}}})=\text{Dim}[H_{i}(\mathcal{U}_{\tilde{\mathbf{R}}})]\) is given by the dimension or rank of the \(i\)-th (singular) homology group \(H_{i}(\mathcal{U}_{\tilde{\mathbf{R}}})\). In colloquial terms, the Betti numbers \(b_{i}\) count the number of 'holes' in the manifold with different dimensions \(i\): the zeroth Betti number gives the number of components \(b_{0}=C\) of \(\mathcal{U}_{\tilde{\mathbf{R}}}\), the first Betti number \(b_{1}=H\) counts the holes, and the second Betti number refers to cavities, here \(b_{2}=0\) for our open manifold. Hence, we find that the Euler characteristic is given by the number of components and holes in \(\mathcal{U}_{\tilde{\mathbf{R}}}\), \[\chi(\mathcal{U}_{\tilde{\mathbf{R}}})=C(\mathcal{U}_{\tilde{\mathbf{R}}})-H(\mathcal{U}_{\tilde{\mathbf{R}}}), \tag{146}\] in agreement with the discussion in Sec. V.4 and (144).
Morse theory [30] then provides a connection between the topological properties of the manifold \(\mathcal{U}_{\tilde{\mathbf{R}}}\) and the local differential properties of the surface \(\Lambda_{\bar{C}}(\tilde{\mathbf{R}})<0\) defining it: with \(C_{i}\) the number of critical points with index \(i\) of the surface \(\Lambda_{\bar{C}}(\tilde{\mathbf{R}})<0\) (the index \(i\) counts the number of negative eigenvalues of the Hessian matrix evaluated at the critical point), the Euler characteristic \(\chi(\mathcal{U}_{\tilde{\mathbf{R}}})\) relates the manifold's topology to the number and properties of critical points, \[\chi(\mathcal{U}_{\tilde{\mathbf{R}}})=\sum_{i=0}^{2}(-1)^{i}C_{i}(\Lambda_{\bar{C}}<0). \tag{147}\] For our 2D manifold the coefficients \(C_{i}\) count the minima \(C_{0}=m\), the number of saddles \(C_{1}=s\), and \(C_{2}=M\) refers to the number of maxima, hence, \[\chi(\mathcal{U}_{\tilde{\mathbf{R}}})=[m-s+M]_{\Lambda_{\bar{C}}<0} \tag{148}\] and the combination with (146) produces the result (144) anticipated above. Summarizing, knowing the number of critical points \(m\), \(M\), and \(s\) of the seascape, i.e., its _local differential properties_, we can determine the global topological aspects of the pinning landscape via the evaluation of the Euler characteristic \(\chi(\mathcal{U}_{\tilde{\mathbf{R}}})\) with the help of Eq. (148). The latter then informs us about the number \(C\) of unstable domains in \(\mathcal{U}_{\tilde{\mathbf{R}}}\) where locally pinned states appear and the number of holes \(H\) in \(\mathcal{U}_{\tilde{\mathbf{R}}}\) where globally distinct pinned states show up. Furthermore, the outer boundaries of the lakes, of which we have \(C\) components, are to be associated with instabilities of the free vortex state, while inner boundaries (or boundaries of holes, which count \(H\) elements) tell about instabilities of pinned states; hence, the Betti numbers \(C\) and \(H\) count different types of instabilities.

Figure 17: (a) Grayscale image of the pinning potential landscape \(e_{p}(\tilde{\mathbf{R}})\), with the three diamonds marking the positions of the defects. (b)–(f) Shifted curvature function \(\Lambda_{\bar{C}}(\tilde{\mathbf{R}})\) versus tip position \(\tilde{\mathbf{R}}\) for increasing values of \(\kappa_{m}\) (decreasing \(\bar{C}\)) as we proceed from (b) to (f). We make use of the topographic interpretation with positive values of \(\Lambda_{\bar{C}}\) marked as landmass (greenish colors, with low/high elevation in dark/light green) and negative values of \(\Lambda_{\bar{C}}\) constituting \(\mathcal{U}_{\tilde{\mathbf{R}}}\) in flat light blue (height levels are shown by thin black lines). The pinscape in (a) produces a curvature landscape with 7 minima (solid dots), 4 maxima (open dots), and 10 saddles (crosses). Several unstable regions \(\mathcal{U}_{\tilde{\mathbf{R}}}\) appear (solid dots turn blue) and merge (crosses turn red) to change the topology of \(\mathcal{U}_{\tilde{\mathbf{R}}}\). The Euler characteristic \(\chi(\mathcal{U}_{\tilde{\mathbf{R}}})=m-s+M=1-0+0=1\) in (b) changes to \(\chi(\mathcal{U}_{\tilde{\mathbf{R}}})=4\) in (c) and (d), drops to \(\chi(\mathcal{U}_{\tilde{\mathbf{R}}})=0\) in (e) and \(\chi(\mathcal{U}_{\tilde{\mathbf{R}}})=-1\) in (f); indeed, \(\mathcal{U}_{\tilde{\mathbf{R}}}\) in (f) has one component \(C=1\) and two holes \(H=2\), reproducing \(\chi(\mathcal{U}_{\tilde{\mathbf{R}}})=C-H=-1\).
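The count (148) can be checked on a closed-form toy surface (our own choice, unrelated to any specific pinscape): \(f(x,y)=(x^{2}-1)^{2}+y^{2}\) has two minima at \((\pm 1,0)\) with \(f=0\) and one saddle at the origin with \(f=1\), and its sublevel sets \(\{f<a\}\) contain no holes, so \(\chi=C\) and the Morse count \(m-s+M\) must reproduce the number of components:

```python
def chi_from_morse(a):
    # critical values of f(x, y) = (x**2 - 1)**2 + y**2: two minima (f = 0)
    # and one saddle (f = 1); count those lying below the cut level a
    m = sum(1 for f in (0.0, 0.0) if f < a)
    s = sum(1 for f in (1.0,) if f < a)
    M = 0  # no local maxima
    return m - s + M

def components_below(a, n=2001):
    # the sublevel set {f < a} has interval fibers in y, so its components
    # correspond one-to-one to the intervals of {(x**2 - 1)**2 < a}
    xs = [-2 + 4 * i / (n - 1) for i in range(n)]
    inside = [(x * x - 1) ** 2 < a for x in xs]
    return sum(1 for i in range(n)
               if inside[i] and (i == 0 or not inside[i - 1]))

for a in (0.5, 1.5):
    assert chi_from_morse(a) == components_below(a)
print(chi_from_morse(0.5), chi_from_morse(1.5))  # 2 1
```

Below the saddle value (\(a=0.5\)) the set consists of two 'lakes' and \(\chi=2-0+0=2\); once the saddle is flooded (\(a=1.5\)) the lakes have merged and \(\chi=2-1+0=1\), matching the direct component count.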
It would then have been nice to determine the separate topological coefficients \(C\) and \(H\) individually; unfortunately, \(\chi(\mathcal{U}_{\tilde{\mathbf{R}}})\) as derived from local differential properties provides us only with the difference \(C-H\) between locally and globally pinned areas and not their individual values. Nevertheless, using Morse theory, we could connect our discussion of local differential properties of the pinning landscape in Secs. III.1 and V.1 with the global pinning properties of the pinning energy landscape as expressed through the topology of the unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\). Regarding our previous examples, the isotropic and uniaxial defects, we remark that for the latter the two simultaneous mergers on the \(y\)-axis produce a reduction \(C=2\to 1\) and an increase \(H=0\to 1\), and hence a jump from \(\chi=2\) to \(\chi=0\) in one step, as expected for two simultaneous mergers. The symmetry of the isotropic defect produces a (degenerate) critical line at \(\tilde{R}_{m}\) rather than a critical point; adding a small perturbation \(\propto x^{3}\) breaks this symmetry and produces the horseshoe geometry discussed in Sec. V.4 above that is amenable to the standard analysis. A last remark is in order regarding the topological properties in dual space, i.e., of the bistable regions \(\mathcal{B}_{\tilde{\mathbf{R}}}\). Here, the mergers produce another interesting phenomenon when viewed from the perspective of their thermodynamic analogue. Indeed, the merger of deformed ellipses in tip space corresponds to the merger of cusps in asymptotic space, which translates to the vanishing of critical points and a smooth continuation of the first-order critical and spinodal lines in the thermodynamic analogue, see also Sec. V.3. We are not aware of a physical example in thermodynamics that produces such a merger and disappearance of critical points.
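The flooding construction of this section is also easy to explore numerically. The sketch below (a toy two-well pinscape with parameters of our own choosing, not the defect configurations of Tables 1 and 2) builds \(\lambda_{-}\) from a finite-difference Hessian on a grid and locates the onset value of \(\bar{C}\) at which the first cells of \(\mathcal{U}_{\tilde{\mathbf{R}}}\) appear:

```python
import math

def e_p(x, y):
    # toy pinscape: two isotropic Lorentzian wells (illustrative parameters)
    wells = ((-1.0, 0.3), (1.2, -0.5))
    return sum(-1.0 / (1.0 + (x - x0) ** 2 + (y - y0) ** 2)
               for x0, y0 in wells)

def lambda_minus(x, y, h=1e-3):
    # lower eigenvalue of the finite-difference Hessian of e_p
    exx = (e_p(x + h, y) - 2 * e_p(x, y) + e_p(x - h, y)) / h ** 2
    eyy = (e_p(x, y + h) - 2 * e_p(x, y) + e_p(x, y - h)) / h ** 2
    exy = (e_p(x + h, y + h) - e_p(x + h, y - h)
           - e_p(x - h, y + h) + e_p(x - h, y - h)) / (4 * h ** 2)
    half_tr = (exx + eyy) / 2
    det = exx * eyy - exy ** 2
    return half_tr - math.sqrt(max(half_tr ** 2 - det, 0.0))

grid = [(-3 + 6 * i / 120, -3 + 6 * j / 120)
        for i in range(121) for j in range(121)]
lam = {p: lambda_minus(*p) for p in grid}
C_onset = -min(lam.values())  # onset condition: C + min(lambda_-) = 0

def unstable(C):
    # grid cells where Lambda_C = C + lambda_- < 0
    return [p for p in grid if C + lam[p] < 0]

print(len(unstable(1.1 * C_onset)), len(unstable(0.9 * C_onset)) > 0)
```

For \(\bar{C}\) slightly above the onset value the grid contains no unstable cells, while just below onset a first small 'lake' appears around the most negative curvature, mirroring the sequence of Figs. 17(b)-(f).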
Figure 18: (a) Grayscale image of the pinning potential landscape \(e_{p}(\tilde{\mathbf{R}})\), with the two diamonds marking the positions of the defects. (b)–(f) Shifted curvature function \(\Lambda_{\bar{C}}(\tilde{\mathbf{R}})\) (in topographic coloring, see caption of Fig. 17) versus tip position \(\tilde{\mathbf{R}}\) for increasing values of \(\kappa_{m}\) as we proceed from (b) to (f). The pinscape in (a) produces a curvature landscape with 6 minima (solid dots), 4 maxima (open dots), and 9 saddles (crosses). Upon increasing \(\kappa_{m}\), several unstable regions \(\mathcal{U}_{\tilde{\mathbf{R}}}\) appear (solid dots turn blue) and merge (crosses turn red) to change the topology of \(\mathcal{U}_{\tilde{\mathbf{R}}}\). The Euler characteristic \(\chi(\mathcal{U}_{\tilde{\mathbf{R}}})=m-s+M=1=C\) in (b), remains \(\chi(\mathcal{U}_{\tilde{\mathbf{R}}})=1\) in (c), but with \(C=2\) and \(H=1\), changes to \(\chi(\mathcal{U}_{\tilde{\mathbf{R}}})=-1\) in (d), and \(\chi(\mathcal{U}_{\tilde{\mathbf{R}}})=-3\) with one component \(C=1\) and four holes \(H=4\) in (e). In going from (e) to (f) two of the maxima (black open dots turn blue) drop below zero, producing a characteristic \(\chi(\mathcal{U}_{\tilde{\mathbf{R}}})=6-9+2=-1\); indeed, \(\mathcal{U}_{\tilde{\mathbf{R}}}\) in (f) has one component \(C=1\) and two holes \(H=2\), reproducing \(\chi(\mathcal{U}_{\tilde{\mathbf{R}}})=C-H=-1\).

## VII Summary and outlook

Strong pinning theory is a quantitative theory describing vortex pinning in the dilute defect limit where this complex many-body system can be reduced to an effective single-pin-single-vortex problem. The accuracy offered by this theory then allows for a realistic description of the shape of the pinning potential \(e_{p}({\bf R})\) associated with the defects. While previous work focused on the simplest case of isotropic defects, here, we have generalized the strong pinning theory to the description of arbitrary anisotropic pinning potentials.
Surprisingly, going from an isotropic to an anisotropic defect has quite astonishing consequences for the physics of strong pinning; this is reminiscent of other physical examples where the removal of symmetries or degeneracies produces new effects. While the strong pinning problem is quite a complex one requiring the use of numerical tools in general, we have identified several generic features that provide the essential physics of the problem and that are amenable to an analytic treatment. Specifically, these are the points of strong pinning onset and the merger points, around which the local expansions of the pinning potential \(e_{\rm pin}(\tilde{\bf R};\tilde{\bf R})\) in the tip coordinate \(\tilde{\bf R}\) allow us to find all the characteristics of strong pinning. In particular, we identify the instability region \({\cal U}_{\tilde{\bf R}}\) in the vortex tip space (with coordinates \(\tilde{\bf R}\)) and the bistable region \({\cal B}_{\tilde{\bf R}}\) in the space of asymptotic vortex positions \(\tilde{\bf R}\) as the main geometric objects that determine the critical pinning force density \(F_{\rm pin}\), from which the critical current density \(j_{c}\), the technologically most relevant quantity of the superconductor, follows straightforwardly. While the relevance of the bistable region \({\cal B}_{\tilde{\bf R}}\) was recognized in the past [8; 9; 10], the important role played by the unstable region \({\cal U}_{\tilde{\bf R}}\) went unnoticed so far.
When going from an isotropic defect to an anisotropic one, the strong pinning onset changes dramatically: while the unstable region \({\cal U}_{\tilde{\bf R}}\) grows out of a circle of radius \(\sim\xi\) and assumes the shape of a ring at \(\kappa>1\) for the isotropic situation, for an anisotropic defect the onset appears in a point \(\tilde{\bf R}_{m}\) and grows in the shape of an ellipse with increasing \(\kappa_{m}>1\); the location where this onset appears is given by the Hessian of \(e_{\rm pin}\), specifically, the point \(\tilde{\bf R}_{m}\) where its determinant touches zero first, \(\det\{{\rm Hess}[e_{\rm pin}(\tilde{\bf R};\tilde{\bf R})|_{\tilde{\bf R}}]\}_ {\tilde{\bf R}_{m}}=0\). The boundary of this ellipse defines the jump positions \({\cal J}_{\tilde{\bf R}}\) associated with the strong pinning instabilities; when combined with the landing ellipse \({\cal L}_{\tilde{\bf R}}\), these two ellipses determine the jump distance \(\delta\tilde{u}\) of the vortex tip, from which follows the jump in the pinning energy \(\Delta e_{\rm pin}\propto\delta\tilde{u}^{4}\), which in turn determines \(F_{\rm pin}\) and \(j_{c}\). The bistable region \({\cal B}_{\tilde{\bf R}}\) in asymptotic vortex space comes into play when calculating the average critical force density \(F_{\rm pin}\) opposing the vortex motion: while the vortex tip undergoes a complex trajectory including jumps, the vortex motion in asymptotic space \(\tilde{\bf R}\) is described by a straight line. As this trivial trajectory in \(\tilde{\bf R}\)-space traverses the bistable region \({\cal B}_{\tilde{\bf R}}\), the vortex tip jumps upon exiting \({\cal B}_{\tilde{\bf R}}\), that produces the jump \(\Delta e_{\rm pin}\) and hence \(F_{\rm pin}\). 
Again, the shape of \({\cal B}_{\tilde{\bf R}}\) changes when going from the isotropic to the anisotropic defect, assuming a ring of finite width around a circle of radius \(\sim\xi\) in the former case, while growing in the form of a crescent out of a point for the anisotropic defect. The new geometries associated with \({\cal U}_{\tilde{\bf R}}\) and \({\cal B}_{\tilde{\bf R}}\) then produce a qualitative change in the scaling behavior of the pinning force density \(F_{\rm pin}\propto(\kappa_{m}-1)^{\mu}\) near onset, with the exponent \(\mu\) changing from \(\mu=2\) to \(\mu=5/2\) when going from the isotropic to the anisotropic defect. This change is due to the change in the scaling of the geometric size of \({\cal B}_{\tilde{\bf R}}\), with the replacement of the fixed radius \(\sim\xi\) of the ring by the growing size of the crescent \(\sim\xi(\kappa_{m}-1)^{1/2}\) [the exponent \(\mu\) assumes a value \(\mu=3\) for trajectories cutting the crescent along its short dimension of size \(\xi(\kappa_{m}-1)\)]. Furthermore, for directed defects, the pinning force density \(F_{\rm pin}(\theta)\) depends on the impact angle \(\theta\) relative to the unstable direction \(u\) and is aligned with \(u\), except for a small angular regime close to \(\theta=\pi/2\). This results in a pronounced anisotropy in the critical current density \(j_{c}\) in the vicinity of the strong pinning onset. A fundamental difference between the strong pinning onsets in the isotropic and in the anisotropic case are the geometries of the unstable \({\cal U}_{\tilde{\bf R}}\) and bistable \({\cal B}_{\tilde{\bf R}}\) regions: these are non-simply connected for the isotropic case (rings) but simply connected for the anisotropic defect (ellipse and crescent). The resolution of this fundamental difference is provided by the second type of special points, the mergers. 
Indeed, for a general anisotropic defect, the strong pinning onset appears in a multitude of points, with unstable and bistable regions growing with \(\kappa_{m}>1\) and finally merging into larger areas. Two examples illustrate this behavior particularly well, the uniaxial defects with a quadrupolar and a dipolar deformation, see Secs. IV and V.4. In the first case, symmetric onset points on the \(x\) axis produce two ellipses/crescents that grow, approach one another, and finally merge in a ring-shaped geometry that is non-simply connected. In the case of a dipolar deformation, we have seen \({\cal U}_{\tilde{\bf R}}\) grow out of a single point with its ellipse expanding and deforming around a circle, assuming a horseshoe geometry, that finally undergoes a merging of the two tips to produce again a ring; something similar happens when multiple \({\cal U}_{\tilde{\bf R}}\) domains grow and merge as in Figs. 16 (a warped defect) and 18(c) (a 2D pinning landscape where four unstable domains have merged to enclose an 'island'). These merger points are once more amenable to an analytic study using a proper expansion of \(e_{\rm pin}(\tilde{\bf R};\tilde{\bf R})\) in \(\tilde{\bf R}\) around the merger point \(\tilde{\bf R}_{s}\), the latter again defined by the local differential properties of the determinant \(\det\{{\rm Hess}[e_{\rm pin}(\tilde{\bf R};\tilde{\bf R})|_{\tilde{\bf R}}]\}\), this time not a minimum but a saddle. Rather than elliptic as at onset, at merger points the geometry is hyperbolic, with the sign change associated with increasing \(\kappa_{s}\equiv\kappa(\tilde{\bf R}_{s})\) across unity producing a reconnection of the jump and landing lines \({\cal J}_{\tilde{\bf R}}\) and \(\mathcal{L}_{\tilde{\mathbf{R}}}\).
While the expansions of \(e_{\text{pin}}(\tilde{\mathbf{R}};\bar{\mathbf{R}})\) describe the local pinning landscape near onset and merging (and thus produce generic results), the study of the _combined set_ of onset- and merger-points describes the global topological properties of \(\mathcal{U}_{\tilde{\mathbf{R}}}\) as discussed in Sec. VI: every new (non-degenerate) onset increases the number of components \(C\) in \(\mathcal{U}_{\tilde{\mathbf{R}}}\), while every merger either decreases \(C\) or increases \(H\), the number of 'holes' or 'islands' (or nontrivial loops in a non-simply connected region) in the pinning landscape. It is the 'last' merging producing a non-simply connected domain that properly defines a new pinned state; in our examples these are the closing of the two deformed ellipses in the uniaxial defect with quadrupolar deformation and the closing of the horseshoe in the defect with a dipolar deformation. Formally, the local differential properties of the curvature function \(\Lambda_{\bar{C}}(\tilde{\mathbf{R}})=\bar{C}+\lambda_{-}(\tilde{\mathbf{R}})\) [with \(\lambda_{-}(\tilde{\mathbf{R}})\) the lower eigenvalue of the Hessian of \(e_{p}(\tilde{\mathbf{R}})\)], i.e., its minima, saddles, and maxima, are related to the global topological properties of \(\mathcal{U}_{\tilde{\mathbf{R}}}\) as described by its Euler characteristic \(\chi=C-H\) through Morse theory, see Eq. (144). Such topological structures have recently attracted quite some interest, e.g., in the context of Fermi surface topologies and topological Lifshitz transitions [31; 32]. 
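The counting \(\chi=C-H\) can be made concrete on a pixelated domain: the toy sketch below (not from the paper; scipy is assumed available and all names are illustrative) labels a ring-shaped region, the non-simply connected geometry produced by the 'last' merger, and recovers \(C=1\), \(H=1\), \(\chi=0\).

```python
import numpy as np
from scipy import ndimage

def euler_characteristic(mask):
    """chi = C - H: connected components of the region minus its enclosed holes."""
    C = ndimage.label(mask)[1]
    comp, n_comp = ndimage.label(~mask)            # pieces of the complement
    frame = set(comp[0, :]) | set(comp[-1, :]) | set(comp[:, 0]) | set(comp[:, -1])
    H = n_comp - len(frame - {0})                  # complement pieces not touching the frame
    return C, H, C - H

# ring-shaped unstable domain U: one component enclosing one hole
y, x = np.mgrid[-20:21, -20:21]
rad = np.hypot(x, y)
ring = (rad > 6) & (rad < 12)
C, H, chi = euler_characteristic(ring)
print(C, H, chi)   # 1 1 0
```

An isolated ellipse at onset would instead give \(C=1\), \(H=0\), \(\chi=1\), so the merger into a ring is visible as a drop of \(\chi\) by one.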
The physics around the onset points as expressed through an expansion of \(e_{\text{pin}}(\tilde{\mathbf{R}};\bar{\mathbf{R}})\) resembles a Landau theory, with \(\tilde{\mathbf{R}}\) playing the role of an order parameter and \(\bar{\mathbf{R}}\) the dual variable corresponding to a driving field--here, \(\bar{\mathbf{R}}\) drives the vortex lattice across the defect and \(\tilde{\mathbf{R}}\) describes the deformation of the pinned vortex. The endpoints of the crescent \(\mathcal{B}_{\tilde{\mathbf{R}}}\) correspond to critical end points as they appear in the Landau theory of a first-order transition line, e.g., the Ising model in an external field or the van der Waals gas. The boundary lines of \(\mathcal{B}_{\tilde{\mathbf{R}}}\) correspond to spinodal lines where phases become unstable, e.g., the termination of overheated/undercooled phases in the van der Waals gas. The existence of critical end points tells us that 'phases', here in the form of different pinning branches, are smoothly connected when going around the critical point, similar to the gas-liquid transition of the van der Waals gas. As the 'last' critical point vanishes in a merger, a well defined new phase, here a new pinned branch, appears. Perspectives for future theoretical work include the study of correlations between anisotropic defects (see Ref. [17] addressing isotropic defects) or the inclusion of thermal fluctuations, i.e., creep (see Refs. [13; 21]). Furthermore, our discussion of the extended pinscape in Sec. VI has been limited to a two-dimensional pinning potential. In reality, defects are distributed in all three dimensions, which considerably complicates the corresponding analysis of a full three-dimensional disordered pinning potential, with the prospect of interesting new results. On the experimental side, there are several possible applications for our study of anisotropic defects. For a generic anisotropic defect, the inversion symmetry may be broken. 
In this case, the pinning force along opposite directions is different in magnitude, as different jumps are associated with the boundaries of the bistable region \(\mathcal{B}_{\tilde{\mathbf{R}}}\) away from onset, i.e., at sufficiently large values of \(\kappa_{m}\). Upon reversing the current, the different critical forces then result in a ratchet effect [33; 34]. This leads to a rectification of an ac current and hence a superconducting diode effect. While for randomly oriented defects the pinning force is averaged and the symmetry is statistically restored, for specially oriented defects, the diode effect will survive. Indeed, by introducing nanoholes into the material, vortex pinning has been enhanced [23; 35] and a diode effect has been observed recently [36]. Generalizing strong pinning theory to this type of defect may then help in the design of superconducting metamaterials with interesting functionalities. Furthermore, vortex imaging has always provided fascinating insights into vortex physics. Recently, the SQUID-on-tip technique has been successful in mapping out a 2D pinning landscape in a film [37] (including the observation of vortex jumps) that has inspired a new characterization of the pinscape through its Hessian analysis [26]; the adaptation of this current-driven purely 2D setup to the 3D situation described in the present paper is an interesting challenge. Finally, we recap the main benefits of this work in a nutshell: For one, we have established a detailed connection of the strong pinning transition with the concept of first-order phase transitions in thermodynamics, with the main practical result that the scaling of the pinning force density \(F_{\text{pin}}\propto(\kappa_{m}-1)^{\mu}\) comes with an exponent \(\mu=5/2\) when working with generic defects of arbitrary shapes. Second, we have found a mechanism, the breaking of a defect's inversion symmetry, that produces ratchets and a diode effect in superconducting materials. 
Third, we have uncovered the geometric structure and its topological features underlying strong pinning theory, including a proper understanding of the appearance of distinguished pinned states. While understanding these geometric structures seems to be of rather fundamental/scholarly interest at present, future work may establish further practical consequences that can be used in the development of superconducting materials with specific functional properties. ###### Acknowledgements. We thank Tomas Bzdusek, Gian Michele Graf, and Roland Willa for discussions and acknowledge financial support of the Swiss National Science Foundation, Division II. ## Appendix A Effective \(1\)D Landau theory The Landau-type pinning energies (18) and (117) for the vector order parameter \((\tilde{u},\tilde{v})\) involve a soft variable \(\tilde{u}\) with a vanishing quadratic term \(\propto(1-\kappa_{m})\,\tilde{u}^{2}\), as well as a stiff one, \(\tilde{v}\), characterized by a finite elasticity. By eliminating the stiff direction \(\tilde{v}\), we can arrive at a 1D Landau expansion for the order parameter \(\tilde{u}\) that provides us with the desired results for the unstable and bistable domains \(\mathcal{U}_{\tilde{\mathbf{R}}}\) and \(\mathcal{B}_{\tilde{\mathbf{R}}}\) near onset and merging in a very efficient manner. 
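The elimination of the stiff direction can be previewed on the stiff sector of the energy alone; the following sketch (not part of the paper; sympy assumed available, symbol names illustrative) reproduces the linearized tip response \(\tilde{v}(\bar{v},\tilde{u})\) used in the expansions below.

```python
import sympy as sp

u, v, vb, Cb, lam, a = sp.symbols('u v vbar Cbar lam a', positive=True)

# stiff sector of the pinning energy: elastic term, u*v^2 coupling, drive -Cbar*vbar*v
e_stiff = (Cb + lam)/2*v**2 + a/2*u*v**2 - Cb*vb*v

# force balance along the stiff direction, expanded to first order in the soft variable u
v_sol = sp.solve(sp.diff(e_stiff, v), v)[0]        # v = Cbar*vbar/(Cbar + lam + a*u)
v_lin = sp.series(v_sol, u, 0, 2).removeO()
target = vb/(1 + lam/Cb)*(1 - (a/Cb)/(1 + lam/Cb)*u)
assert sp.simplify(v_lin - target) == 0            # matches the linearized tip response
```

Substituting this response back into the force balance for the soft variable is what generates the effective 1D cubic studied in the next subsections.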
### Close to onset We start with the two-dimensional Landau-type energy functional (58) \[e_{\text{pin}}(\tilde{\mathbf{R}};\bar{\mathbf{R}})=\frac{ \bar{C}\left(1-\kappa_{m}\right)}{2}\,\tilde{u}^{2}+\frac{\bar{C}+\lambda_{+}} {2}\,\tilde{v}^{2}+\frac{a}{2}\,\tilde{u}\tilde{v}^{2}\] \[\quad+\frac{\alpha}{4}\,\tilde{u}^{2}\tilde{v}^{2}+\frac{\beta}{ 6}\,\tilde{u}^{3}\tilde{v}+\frac{\gamma}{24}\,\tilde{u}^{4}-\bar{C}\,\bar{u} \tilde{u}-\bar{C}\,\bar{v}\tilde{v} \tag{112}\] written in terms of the tip coordinates \(\tilde{u},\tilde{v}\) measured relative to \(\tilde{\mathbf{R}}_{m}\), the position of the minimal determinant \(D(\tilde{\mathbf{R}})\) at strong pinning onset, and with \(\tilde{u}\) and \(\tilde{v}\) aligned with the unstable and stable directions, respectively. This expansion is anisotropic: the quadratic (elastic) coefficient along the unstable \(\tilde{u}\)-direction vanishes at the onset of strong pinning, while the one along the stable \(\tilde{v}\)-direction stays positive and large, allowing us to 'integrate out' the latter. The asymptotic coordinates \(\bar{u}\), \(\bar{v}\) assume the role of the driving (conjugate) fields for the tip positions (or order parameters) \(\tilde{u}\), \(\tilde{v}\); the latter then are determined by the force equations \(\partial_{\tilde{\mathbf{R}}}e_{\text{pin}}(\tilde{\mathbf{R}};\bar{\mathbf{R}})=0\), \[\bar{C}\bar{u}=\bar{C}(1-\kappa_{m})\tilde{u}+\frac{a}{2}\tilde{v}^{2 }+\frac{\gamma}{6}\tilde{u}^{3}+\frac{\beta}{2}\tilde{u}^{2}\tilde{v}+\frac{ \alpha}{2}\tilde{u}\tilde{v}^{2}, \tag{113}\] \[\bar{C}\bar{v}=(\bar{C}+\lambda_{+})\tilde{v}+a\,\tilde{u}\tilde{ v}+\frac{\beta}{6}\tilde{u}^{3}+\frac{\alpha}{2}\tilde{u}^{2}\tilde{v}, \tag{114}\] see Eq. (69), with \(\delta\bar{\mathbf{R}}=(\bar{u},\bar{v})\) measured relative to \(\bar{\mathbf{R}}_{m}\). Inspection of Eqs. 
(113) and (114) shows that near the strong pinning onset, the Ansatz \(\tilde{u}\), \(\tilde{v}\), \(\bar{v}\propto\sqrt{\kappa_{m}-1}\) and \(\bar{u}\propto(\kappa_{m}-1)\) produces a consistent solution. Solving the second equation (114) for the stiff degree of freedom \(\tilde{v}\), we then find that \[\tilde{v}\approx\frac{\bar{C}\bar{v}}{\bar{C}+\lambda_{+}\!+a\tilde{u}}\approx \frac{\bar{v}}{1\!+\!\lambda_{+}/\bar{C}}\Big{(}1-\frac{a/\bar{C}}{1\!+\! \lambda_{+}/\bar{C}}\,\tilde{u}\Big{)} \tag{115}\] which is precise to order \((\kappa_{m}-1)\). Inserting \(\tilde{v}\) back into the force-balance equation (113) for the unstable component \(\tilde{u}\), we find a cubic equation for \(\tilde{u}\) (precise to order \((\kappa_{m}-1)^{3/2}\)) that is driven by a combination of \(\bar{u}\) and \(\bar{v}^{2}\), \[\bar{C}\bar{u}-\frac{(a/2)\,\bar{v}^{2}}{(1+\lambda_{+}/\bar{C})^ {2}}\approx \left[\bar{C}(1-\kappa_{m})+\frac{(\delta/2)\,\bar{v}^{2}}{(1+ \lambda_{+}/\bar{C})^{2}}\right]\tilde{u}\] \[+\frac{(\beta/2)\,\bar{v}}{(1+\lambda_{+}/\bar{C})}\tilde{u}^{2}+ \frac{\gamma}{6}\tilde{u}^{3}. 
\tag{116}\] Upon integration, we finally arrive at the effective one-dimensional Landau expansion for the 1D order parameter \(\tilde{u}\) that is precise to order \((\kappa_{m}-1)^{2}\) (up to an irrelevant shift \(\propto\bar{v}^{2}\)), \[e_{\text{pin}}^{\text{eff}}(\tilde{u};\bar{u},\bar{v})=\frac{r(\bar{v})}{2} \tilde{u}^{2}+\frac{w(\bar{v})}{6}\tilde{u}^{3}+\frac{\gamma}{24}\tilde{u}^{4} \!-\!h(\bar{u},\bar{v})\tilde{u}, \tag{117}\] with the coefficients \(r\), \(w\), and \(h\) defined as \[r(\bar{v}) =\left[\bar{C}(1-\kappa_{m})+\frac{\delta}{2}\frac{\bar{v}^{2}}{ (1+\lambda_{+}/\bar{C})^{2}}\right], \tag{118}\] \[w(\bar{v}) =\beta\frac{\bar{v}}{(1+\lambda_{+}/\bar{C})},\] (119) \[h(\bar{u},\bar{v}) =\bar{C}\bar{u}-\frac{a}{2}\frac{\bar{v}^{2}}{(1+\lambda_{+}/ \bar{C})^{2}}.\] The Landau-type energy function (117) belongs to the van der Waals (gas-liquid) universality class; its first-order transition line maps to the branch crossing line in the strong pinning problem, its spinodals correspond to the arcs of the crescent defining the bistable region \(\mathcal{B}_{\tilde{\mathbf{R}}}\), and its critical points map to the two cusps of \(\mathcal{B}_{\tilde{\mathbf{R}}}\), i.e., in the strong pinning problem, the spinodals end in _two_ critical points. The cubic term \(w\tilde{u}^{3}/6\) is determined by the skew parameter \(\beta\); in the absence of such a skew, i.e., for a \(\pm\bar{v}\)-symmetric unstable ellipse \(\mathcal{U}_{\tilde{\mathbf{R}}}\), we have \(\beta=0\) and our problem assumes an Ising-type \(\mathbb{Z}_{2}\) symmetry. Let us begin with the determination of the critical coefficients \(r_{c}\), \(w_{c}\), and \(h_{c}\). 
These are found by setting the first three derivatives of \(e_{\text{pin}}^{\text{eff}}(\tilde{u})\) to zero [two spinodals (implying \(\partial_{\tilde{u}}e_{\text{pin}}^{\text{eff}}=0\) and \(\partial_{\tilde{u}}^{2}e_{\text{pin}}^{\text{eff}}=0\)) coalescing into a single point (\(\to\partial_{\tilde{u}}^{3}e_{\text{pin}}^{\text{eff}}=0\))]. Setting the cubic derivative to zero, we find the order parameter \[\tilde{u}_{c}=-w_{c}/\gamma\approx-(\beta/\gamma)\tilde{v}_{c}, \tag{120}\] where we have used Eq. (118) and the transformation \(\bar{v}\leftrightarrow\tilde{v}\) in (115) to leading order. The vanishing of the second derivative relates the critical coefficients \(r_{c}\) and \(w_{c}\), \[r_{c}=w_{c}^{2}/2\gamma, \tag{121}\] (where we have made use of \(\tilde{u}_{c}\)). Inserting the dependencies \(r(\bar{v})\) and \(w(\bar{v})\), see Eq. (118), we find that \[\frac{\bar{v}_{c}^{2}}{(1+\lambda_{+}/\bar{C})^{2}}=\frac{\gamma\bar{C}(\kappa_ {m}-1)}{2\det M_{\text{jp}}}, \tag{122}\] with \(\det M_{\text{jp}}=(\gamma\delta-\beta^{2})/4\). Using again Eq. (115) to leading order, we find that \[\tilde{v}_{c}\approx\sqrt{\frac{2\gamma\bar{C}(\kappa_{m}-1)}{\gamma\delta- \beta^{2}}}, \tag{123}\] cf. Eq. (57). The critical endpoints of the 1D Landau theory then correspond to the touching points (67) of the unstable domain \(\mathcal{U}_{\tilde{\mathbf{R}}}\) \[\delta\tilde{\mathbf{R}}_{c,\pm}=\pm\,(-\beta/\gamma,1)\,\,\tilde{v}_{c}, \tag{124}\] found before, see Eq. (67) with (57). Finally, the vanishing of the first derivative defines the critical drive \[h_{c}=[r\tilde{u}+w\tilde{u}^{2}/2+\gamma\tilde{u}^{3}/6]_{c}=-\frac{w_{c}^{3}}{ 6\gamma^{2}}. 
\tag{125}\] Making use of the coefficients (101), this translates to the critical drive \(\bar{u}_{c}\) \[\bar{u}_{c}=(a/2\bar{C})\tilde{v}_{c}^{2}-\frac{w_{c}^{3}}{6\bar{C}\gamma^{2}} \tag{102}\] and its combination with the result for \(\bar{v}_{c}\) tells us that the critical drives match up, to leading order, with the cusps (73) of the bistable domain at \(\bar{\mathbf{R}}_{c,\pm}\), \[\delta\bar{\mathbf{R}}_{\mathrm{c},\pm} =(\bar{u}_{c},\pm\bar{v}_{c}) \tag{103}\] \[\approx\left[\left(a/2\bar{C}\right)\,\tilde{v}_{c}^{2},\,\pm(1+ \lambda_{+}/\bar{C})\tilde{v}_{c}\right].\] Next, we find the entire boundary of the unstable region \(\mathcal{U}_{\tilde{\mathbf{R}}}\) that is defined as the points where local minima and maxima of \(e^{\mathrm{eff}}_{\mathrm{pin}}\) coalesce, i.e., where \(\partial_{\tilde{u}}^{2}e^{\mathrm{eff}}_{\mathrm{pin}}=0\), \[r+w\tilde{u}_{\mathrm{jp}}+\frac{\gamma}{2}\tilde{u}_{\mathrm{jp}}^{2}=0. \tag{104}\] Making use of the Landau coefficients (101) as well as the relation between \(\tilde{v}\) and \(\bar{v}\) in (100), we recover the equation (53) for the ellipse (we drop corrections \(\propto\left(\kappa_{m}-1\right)^{3/2}\)) \[\gamma\tilde{u}_{\mathrm{jp}}^{2}+2\beta\tilde{u}_{\mathrm{jp}}\tilde{v}_{ \mathrm{jp}}+\delta\tilde{v}_{\mathrm{jp}}^{2}\approx 2\bar{C}(\kappa_{m}-1). \tag{105}\] In order to find the shape of the bistable region \(\mathcal{B}_{\bar{\mathbf{R}}}\), we exploit the fact that for fixed drives \(\bar{u}\) and \(\bar{v}\), the bistable and the unstable vortex tip configurations are local extrema of \(e^{\mathrm{eff}}_{\mathrm{pin}}\), implying that \(\partial_{\tilde{u}}e^{\mathrm{eff}}_{\mathrm{pin}}=0\) and hence \[r\tilde{u}+\frac{w}{2}\tilde{u}^{2}+\frac{\gamma}{6}\tilde{u}^{3}=h, \tag{106}\] which corresponds to the force-balance equation (100) expressed in terms of the coefficients (101). The cubic equation (106) with its left side \(\propto(\kappa_{m}-1)^{3/2}\) depends on \(\bar{u}\) through the drive \(h\). 
According to (101), the two terms in the drive are of order \((\kappa_{m}-1)\) and hence have to cancel one another to lowest order. As a result, we find that the bistable domain is centered around the parabola \[\bar{u}=\frac{a}{2\bar{C}}\frac{\bar{v}^{2}}{(1+\lambda_{+}/\bar{C})^{2}}, \tag{107}\] that matches up with Eq. (70) found in Sec. III. Finding the precise form of the bistable region \(\mathcal{B}_{\bar{\mathbf{R}}}\), we have to solve Eq. (106) to cubic order in \(\sqrt{\kappa_{m}-1}\) with the help of an expansion around the center parabola (107), which amounts to repeating the analysis leading to the results (71) and (72) in Sec. III.3. Finally, we find the landing line \(\mathcal{L}_{\bar{\mathbf{R}}}\) defined as the second bistable tip position at fixed \(\bar{u}\) and \(\bar{v}\). We make use of the cubic equation (106) and represent it in the factorized form (with the inflection point at \(\tilde{u}_{\mathrm{jp}}\) having multiplicity two) \[(\tilde{u}-\tilde{u}_{\mathrm{jp}})^{2}(\tilde{u}-\tilde{u}_{\mathrm{lp}})=0, \tag{108}\] with \(\tilde{u}_{\mathrm{lp}}\) the landing position of the tip introduced in Sec. III.2.2. A somewhat tedious but straightforward calculation shows that the stable solution \(\tilde{u}_{\mathrm{lp}}\) satisfies the quadratic equation \[r-\frac{3}{8}\frac{w^{2}}{\gamma}+\frac{w}{4}\tilde{u}_{\mathrm{lp}}+\frac{ \gamma}{8}\tilde{u}_{\mathrm{lp}}^{2}=0 \tag{109}\] and thus arranges along the ellipse \[\frac{\gamma}{8}\tilde{u}_{\mathrm{lp}}^{2}+\frac{\beta}{4}\tilde{u}_{\mathrm{ lp}}\tilde{v}_{\mathrm{lp}}+\left(\frac{\delta}{2}-\frac{3}{8}\frac{\beta^{2}}{ \gamma}\right)\tilde{v}_{\mathrm{lp}}^{2}=\bar{C}(\kappa_{m}-1) \tag{110}\] when expressed in the original two-dimensional tip space; this coincides with the original result (63). 
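The quadratic equation for \(\tilde{u}_{\mathrm{lp}}\) can be rechecked by matching the factorized cubic against the force-balance polynomial; a short symbolic verification (not part of the paper; sympy assumed, symbol names illustrative):

```python
import sympy as sp

u, ujp, ulp, r, w, gam = sp.symbols('u u_jp u_lp r w gamma')

# force balance written as a factorized cubic with the jump position as a double root
cubic = sp.expand(gam/6*(u - ujp)**2*(u - ulp))

# match against  r*u + (w/2)*u**2 + (gamma/6)*u**3 - h  to read off r and w
w_match = sp.solve(sp.Eq(cubic.coeff(u, 2), w/2), w)[0]   # w = -gamma*(2*u_jp + u_lp)/3
r_match = cubic.coeff(u, 1)                               # r = gamma*(u_jp**2 + 2*u_jp*u_lp)/6
# (the constant term fixes the drive h and is not needed here)

# the landing position then satisfies the quoted quadratic equation
quad = r - sp.Rational(3, 8)*w**2/gam + w*ulp/4 + gam*ulp**2/8
assert sp.simplify(quad.subs({r: r_match, w: w_match})) == 0
```

The matching encodes the double-root (spinodal) condition automatically, so the quadratic follows without solving the cubic explicitly.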
In a last step, we may go over to an Ising-type Landau expansion by measuring the order parameter \(\tilde{u}\) with reference to the skewed line \[\tilde{u}_{m}(\bar{v})=\left(-\frac{\beta}{\gamma}\right)\frac{\bar{v}}{(1+ \lambda_{+}/\bar{C})}, \tag{111}\] i.e., \[\tilde{u}^{\prime}=\tilde{u}-\tilde{u}_{m}(\bar{v}). \tag{112}\] The 1D effective Landau expansion now reads, with precision to order \((\kappa_{m}-1)^{2}\), \[e^{\mathrm{eff}}_{\mathrm{pin}}(\tilde{u}^{\prime};\bar{u},\bar{v})=\frac{r^{ \prime}}{2}\tilde{u}^{\prime 2}+\frac{\gamma}{24}\tilde{u}^{\prime 4}-h^{\prime} \tilde{u}^{\prime}, \tag{113}\] with the new coefficients \[r^{\prime}=r-\frac{w^{2}}{2\gamma},\quad h^{\prime}=h-\frac{w^{3}}{3\gamma^{2} }+\frac{rw}{\gamma}. \tag{114}\] The condition \(h^{\prime}=0\) now defines the equilibrium state of the thermodynamic problem that translates into the branch crossing line where the bistable vortex tip positions have equal energy. Using the definitions (101) and (114) for \(h\) and \(h^{\prime}\), we find that the branch crossing line \(\bar{u}_{0}(\bar{v}_{0})\) in the original two-dimensional asymptotic space reads \[\bar{u}_{0}=\frac{a}{2\bar{C}}\frac{\bar{v}_{0}^{2}}{(1+\lambda _{+}/\bar{C})^{2}}-\frac{\beta}{\gamma}\bigg{[}(\kappa_{m}-1)\frac{\bar{v}_{0}} {1+\lambda_{+}/\bar{C}}\\ +\left(\frac{\delta}{2}-\frac{\beta^{2}}{3\gamma}\right)\frac{1}{ \bar{C}}\frac{\bar{v}_{0}^{3}}{(1+\lambda_{+}/\bar{C})^{3}}\bigg{]}, \tag{115}\] extending the result (77) from Sec. III to finite values of \(\beta\) with an additional term \(\propto\left(\kappa_{m}-1\right)^{3/2}\). 
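The algebra of this subsection is mechanical and can be verified symbolically; the sketch below (not part of the paper; sympy assumed available) checks the critical-point coefficients \(\tilde{u}_{c}=-w_{c}/\gamma\), \(r_{c}=w_{c}^{2}/2\gamma\), \(h_{c}=-w_{c}^{3}/6\gamma^{2}\) as well as the shifted coefficients \(r^{\prime}\) and \(h^{\prime}\).

```python
import sympy as sp

u, up, r, w, gam, h = sp.symbols('u u_prime r w gamma h')
e = r/2*u**2 + w/6*u**3 + gam/24*u**4 - h*u   # effective 1D Landau energy

# critical point: first three u-derivatives vanish simultaneously
u_c = sp.solve(sp.diff(e, u, 3), u)[0]                    # u_c = -w/gamma
r_c = sp.solve(sp.diff(e, u, 2).subs(u, u_c), r)[0]       # r_c = w**2/(2*gamma)
h_c = sp.solve(sp.diff(e, u).subs({u: u_c, r: r_c}), h)[0]
assert sp.simplify(u_c + w/gam) == 0
assert sp.simplify(r_c - w**2/(2*gam)) == 0
assert sp.simplify(h_c + w**3/(6*gam**2)) == 0

# Ising form: shifting u by u_m = -w/gamma removes the cubic term ...
e_shift = sp.expand(e.subs(u, up - w/gam))
assert e_shift.coeff(up, 3) == 0
# ... and reproduces r' = r - w**2/(2*gamma) and h' = h - w**3/(3*gamma**2) + r*w/gamma
assert sp.simplify(e_shift.coeff(up, 2) - (r - w**2/(2*gam))/2) == 0
assert sp.simplify(-e_shift.coeff(up, 1) - (h - w**3/(3*gam**2) + r*w/gam)) == 0
```
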
### Close to merging Let us study the strong pinning problem close to merging, as described by the two-dimensional Landau-type energy functional (117), \[e_{\mathrm{pin}}(\tilde{\mathbf{R}};\bar{\mathbf{R}})=\frac{\bar{C}(1- \kappa_{s})}{2}\,\tilde{u}^{2}+\frac{\bar{C}+\lambda_{+,s}}{2}\,\tilde{v}^{2}+ \frac{a_{s}}{2}\,\tilde{u}\tilde{v}^{2}\\ +\frac{\alpha_{s}}{4}\,\tilde{u}^{2}\tilde{v}^{2}+\frac{\beta_{s}}{ 6}\,\tilde{u}^{3}\tilde{v}+\frac{\gamma_{s}}{24}\,\tilde{u}^{4}-\bar{C}\bar{u} \tilde{u}-\bar{C}\bar{v}\tilde{v}. \tag{116}\] As found before for strong pinning close to onset, the energy functional (100) is anisotropic with respect to vortex displacements in the stable and unstable direction. Following the strategy of Sec. A.1, we can use the force-balance equation (137) to relate the tip position along the \(v\)-axis to \(\bar{v}\) and \(\tilde{u}\), \[\tilde{v}\approx\frac{\bar{v}}{1+\lambda_{+,s}/\bar{C}}\left(1-\frac{a_{s}/ \bar{C}}{1+\lambda_{+,s}/\bar{C}}\,\tilde{u}\right). \tag{101}\] Inserting (101) into the force-balance equation for the unstable component \(\tilde{u}\) and integrating, we find that the resulting effective 1D Landau theory is identical in form to the one close to onset, \[e^{\rm eff}_{\rm pin}(\tilde{u};\bar{u},\bar{v})=\frac{r_{s}}{2}\tilde{u}^{2} +\frac{w_{s}}{6}\tilde{u}^{3}+\frac{\gamma_{s}}{24}\tilde{u}^{4}-h_{s}\tilde{u}, \tag{102}\] with a proper replacement of all coefficients involving the parameters appropriate at merging, \[\begin{split}& r_{s}=\left[\bar{C}(1-\kappa_{s})-\frac{|\delta_{s}|} {2}\frac{\bar{v}^{2}}{(1+\lambda_{+,s}/\bar{C})^{2}}\right],\\ & w_{s}=\beta_{s}\frac{\bar{v}}{(1+\lambda_{+,s}/\bar{C})},\\ & h_{s}=\bar{C}\bar{u}-\frac{a_{s}}{2}\frac{\bar{v}^{2}}{(1+ \lambda_{+,s}/\bar{C})^{2}}.\end{split} \tag{103}\] The difference from (101) is the sign change in the term \(\propto|\delta_{s}|\bar{v}^{2}\). 
This implies a modification of the main equation determining the shape of \(\mathcal{U}_{\tilde{\mathbf{R}}}\) (from which \(\mathcal{B}_{\tilde{\mathbf{R}}}\) follows via the force balance equation (38)), with the elliptic equation (100) transforming to the hyperbolic expression \[\gamma_{s}\tilde{u}_{\rm jp}^{2}+2\beta_{s}\tilde{u}_{\rm jp}\tilde{v}_{\rm jp }-|\delta_{s}|\tilde{v}_{\rm jp}^{2}\approx 2\bar{C}(\kappa_{s}-1). \tag{104}\] The results for the jumping and landing hyperbolas in \(\tilde{\mathbf{R}}\)-space and for the edges of the bistable domain in \(\bar{\mathbf{R}}\)-space before and after merging can be derived by following the strategy of Sec. A.1 above and agree with the corresponding results from Sec. V.1. We close with a final remark on the disappearance of critical points after merging. The critical points are found in the standard manner by setting the first three derivatives of \(e^{\rm eff}_{\rm pin}(\tilde{u};\bar{u},\bar{v})\) to zero. This works fine before merging when \(1-\kappa_{s}>0\) and we find that criticality is realized for tip and asymptotic positions as given by Eqs. (125) and (138) in Sec. V.1. However, after merging, the cubic derivative \(\partial_{\tilde{u}}^{3}e^{\rm eff}_{\rm pin}\) never vanishes, signalling the absence of a critical point, in agreement with the discussion in Secs. V.3 and V.2.2. The merger thus leads to the disappearance of the two critical (end-)points in asymptotic space, with the attached first-order lines (the branch crossing line) joining up into a single line that is framed by two separated spinodals. We are not aware of such a disappearance of critical points in a merging process within the standard discussion of thermodynamic phase transitions.
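The change from the elliptic boundary at onset to the hyperbolic one at merging is just the standard conic classification by the determinant of the quadratic form, \(\det M_{\rm jp}\propto\gamma\delta-\beta^{2}\); a minimal sketch (illustrative numbers, not from the paper):

```python
def classify_boundary(gam, beta, delta):
    """Conic type of gam*u^2 + 2*beta*u*v + delta*v^2 = const,
    read off from det [[gam, beta], [beta, delta]]."""
    det = gam*delta - beta**2
    if det > 0:
        return 'ellipse'      # onset: gamma*delta > beta^2, closed unstable domain
    if det < 0:
        return 'hyperbola'    # merging: delta -> -|delta_s| flips the sign
    return 'degenerate'

# onset (elliptic) versus merging (hyperbolic) boundary, with illustrative coefficients
print(classify_boundary(gam=6.0, beta=1.0, delta=2.0))    # ellipse
print(classify_boundary(gam=6.0, beta=1.0, delta=-2.0))   # hyperbola
```

The sign change of \(\delta\) across the merger is thus directly responsible for the reconnection of the jumping and landing lines.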
2308.01256
Learning Spatial Distribution of Long-Term Trackers Scores
Long-Term tracking is a hot topic in Computer Vision. In this context, competitive models are presented every year, showing a constant growth rate in performances, mainly measured in standardized protocols as Visual Object Tracking (VOT) and Object Tracking Benchmark (OTB). Fusion-trackers strategy has been applied over last few years for overcoming the known re-detection problem, turning out to be an important breakthrough. Following this approach, this work aims to generalize the fusion concept to an arbitrary number of trackers used as baseline trackers in the pipeline, leveraging a learning phase to better understand how outcomes correlate with each other, even when no target is present. A model and data independence conjecture will be evidenced in the manuscript, yielding a recall of 0.738 on LTB-50 dataset when learning from VOT-LT2022, and 0.619 by reversing the two datasets. In both cases, results are strongly competitive with state-of-the-art and recall turns out to be the first on the podium.
Vincenzo Mariano Scarrica, Antonino Staiano
2023-08-02T16:26:54Z
http://arxiv.org/abs/2308.01256v1
# Highlights ###### Abstract We propose a novel approach to the spatial distribution of long-term tracker scores, based on the concept of temporal dependencies of long-term tracks. 
The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. 
The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. The proposed approach is based on concept of temporal dependencies of long-term tracks. The proposed approach is based on the concept of temporal dependencies of long-term tracks. 
The proposed approach is based on concept of temporal dependencies of long-term tracks. [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ 
tracking (Shao et al. (2018)). FairMOT (Zhang et al. (2020)) is one of the most interesting state-of-the-art models from the point of view of comparisons, as it is often extremely competitive and generalizable to multiple classes through specific fine-tuning phases. Most trackers designed to date produce predictions in the form of bounding boxes, but some works use instance-segmentation phases in their architectures to improve the final predictions, such as RTS (Paul et al. (2022)). Opening the discussion to multiple dimensions, for 3D images reconstructed by stereoscopy or with multiple calibrated cameras (Chang et al. (2019)), there are interesting 3D object-tracking methods in the literature (Hu et al. (2021)). Long-Term Tracking can be further divided into _Re-Detection Long-Term_, where, according to a visibility confidence score, the tracker can choose to re-detect the target, and _Pseudo-Long-Term_, where the target is never re-detected. In this work, Long-Term Fusion-Tracker strategies are discussed, and a conjecture is made about a data- and model-independent learning procedure over the scores of a generalized number of Long-Term Trackers, aimed at boosting performance.

### Environment and issues

Object Tracking touches various areas of study in science, some of current importance being mentioned below. _Human Interaction_ can certainly be mentioned (Singh et al.
(2019)), for example in the recognition and tracking of gestures captured by a webcam for subsequent processing into commands to be executed by the machine. Object Tracking also finds applications in _intelligent monitoring_ (Tai et al. (2004)), for example on work sites where workers must be monitored and tracked for safety and control reasons. _Automated driving_ also uses Object Tracking techniques to monitor the trajectories of surrounding pedestrians and vehicles (Tang et al. (2019)), so as to avoid collisions. In _virtual reality_, objects can be tracked in order to apply effects to them. Object Tracking can also be used in _Surgical Navigation_ to follow remote interventions and to transfer the movements of particular tools, such as a scalpel, to a robot. In the forensic field, Object Tracking is used for _Crime Prediction_ in video-surveillance systems (Miao et al. (2016)), where subjects may commit offenses or access places without authorization, or at unauthorized times. In the military field, where the deployment of smart weapons is increasingly required, Object Tracking is the second candidate technology after IR for _Navigation and Reconnaissance_ (Lei et al. (2015)), for example in missile warheads. As in all branches of Computer Vision, where techniques for solving difficult problems are limited by infeasible visual-input conditions, object tracking has some fundamental problems that must be discussed, as already surveyed by notable publications such as Wu et al. (2013). For example, _scale variation_ introduces an important difficulty related to the change of perspective in 2D, and a change in the resolution of the template may require more effort from the tracker in extracting scale-invariant features. _Lighting conditions_ are another fundamental element: reflection, backscatter, diffusion, refraction and other phenomena of visible light hinder good results.
Sequences can also contain _occlusions_, where the target is partially hidden by other objects or partially disappears from the scene. The objects to be tracked can also undergo similarity and non-similarity transformations, deforming in ways that can fool the tracker; an important requirement is therefore robustness to _morphological changes_. Other transformations include _rotations_ about various planes and reference axes, _blurring_ and _resolution reduction_, as well as _noise_ introduced by the hardware or by post-processing filtering software. The target can also be confused with similar objects belonging to the background, in which case we speak of _background clutter_; this turns out to be an extremely complex issue that is much debated in the most important competitions. The frame rate is another important parameter: if an object moves very fast (_fast motion_), information can be lost and with it the template, since the displacement between two consecutive frames becomes too wide. Finally, the most complex problem is probably the _out of view_ (OoV) condition, where the object can actually disappear from the scene and later return to it, possibly with different shape and colors. This is the main problem that Long-Term Tracking must address, together with the others already described.

## 2 Related works

This section serves as a survey of the most used object-tracking techniques in the history of Computer Vision, up to the latest algorithms, with a focus on Long-Term Tracking and fusion approaches.

### Image Processing

Starting from methodologies based on image processing, which would be considered rudimentary today, we must recall the first searches over the target set, which took place by searching for the target itself (defined as the template) within a region of interest (ROI).
The template, defined in its instance, is called a patch, and was initially scrolled along both dimensions of the current frame, with the entire window considered as the ROI. The way the template was defined varied. The first approach compared raw pixel-intensity levels, i.e. genuine template matching, which resulted in very poor long-term performance. After that, it was considered appropriate to vary the patch by reinitializing it every \(n\) frames, whenever the matching score fell below a threshold, but even this limited the tracker's ability to analyze long sequences. From simple template matching, the field moved on to more elaborate statistical measures, such as correlation (Weiwei et al. (2021)).

\[G(i,j)=\sum_{u=-k}^{k}\sum_{v=-k}^{k}F(u,v)I(i+u,j+v) \tag{1}\]

As eqn. 1 shows, the correlation takes the frame \(I\) and the kernel \(F\) as input, with the kernel size strictly smaller than the frame size. As in template matching, these measures could be kept fixed from the initial template throughout the sequence, or updated on the fly by reinitialization. However, searching over the entire window can generate an abnormal number of false positives, also due to the presence of objects similar to the target, so the size of the ROI was drastically reduced to a local neighborhood called the search window.

\[SW(i,j)=\{I(i,j)\ |\ \ |I(i,j)-SW(i,j)|<\delta,\ \forall i,j\in I\} \tag{2}\]

Eqn. 2 gives a definition of the search window, where \(\delta\) is a positive threshold smaller than the entire window size; this threshold can also be adaptive, depending on the chosen algorithm. Correlation-filter-based methods are, however, sensitive to transformations such as rotations, morphological changes and sudden changes of direction in the trajectory. One of their advantages remains translational invariance, under certain steady-state assumptions.
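The correlation search of eqn. 1, restricted to an exhaustive scan as in the earliest template-matching trackers, can be sketched as follows. This is a minimal illustration, not an implementation from the cited works; the function names are ours.

```python
import numpy as np

def correlation_score(frame, kernel, i, j):
    """Eqn. (1): correlate kernel F with frame I, centred at (i, j)."""
    k = kernel.shape[0] // 2
    score = 0.0
    for u in range(-k, k + 1):
        for v in range(-k, k + 1):
            score += kernel[u + k, v + k] * frame[i + u, j + v]
    return score

def match_template(frame, kernel):
    """Exhaustive search: return the centre (i, j) maximising eqn. (1)."""
    k = kernel.shape[0] // 2
    best, best_ij = -np.inf, None
    for i in range(k, frame.shape[0] - k):
        for j in range(k, frame.shape[1] - k):
            s = correlation_score(frame, kernel, i, j)
            if s > best:
                best, best_ij = s, (i, j)
    return best_ij
```

Restricting the two outer loops to a neighborhood of the previous target position yields exactly the search-window variant of eqn. 2.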
Moving on to motion-estimation-based approaches, perhaps the most used method for the tracking problem is the Kalman filter (Ali and Mirza (2006)). It consists of a probabilistic model that, based on an a-priori mean state \(x_{k|k-1}\) and an a-priori covariance \(p_{k|k-1}\), approximates the a-posteriori state prediction and the a-posteriori covariance error, through an update step on the model parameters:

\[x_{k|k}=\theta x_{k|k-1}+\rho\qquad\qquad p_{k|k}=\theta p_{k|k-1}\theta^{T}+\psi \tag{3}\]

In eqn. 3, both state variables are updated through optimization of the parameters \(\theta\), \(\rho\) and \(\psi\). The filter's limitation is that it assumes a stationary linear dynamical system, so sudden changes in the direction of the object are predicted with a large covariance error. The Kalman filter is nevertheless found inside many other algorithms, grafted in as a component of more complex pipelines. In addition to motion-estimation methods, image processing has produced other systems for following objects in videos, such as those based on histograms. By computing the histogram of a patch, one can approximate a probability density function of the pixel-intensity levels and use it as a similarity criterion. Well-known algorithms that use histograms are MeanShift (Ali and Mirza (2006)) and its adaptive evolution, CAMShift (Bradski (1998)). Despite their impact on the field of object tracking, it is well known that histograms do not capture topological information, which is of fundamental importance when the targets have detailed textures: such methods do not handle occlusions well and can be confused by similar objects. Compared to the previous methods, however, they can cope with morphological changes.
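The predict step of eqn. 3 can be sketched for a constant-velocity target. This is a minimal illustration under our own assumptions: the measurement model \(H\), noise \(R\) and gain computation are standard Kalman machinery that the text only alludes to, and all numeric values are arbitrary.

```python
import numpy as np

# Constant-velocity Kalman filter in 1D: state = [position, velocity].
# theta is the transition matrix and psi the process noise of eqn. (3);
# H, R and the gain follow the standard Kalman update.
theta = np.array([[1.0, 1.0],
                  [0.0, 1.0]])      # x_k = x_{k-1} + v_{k-1}
psi = 1e-3 * np.eye(2)              # process-noise covariance
H = np.array([[1.0, 0.0]])          # we only observe position
R = np.array([[1e-2]])              # measurement-noise covariance

def predict(x, p):
    """A-priori state and covariance (eqn. 3)."""
    return theta @ x, theta @ p @ theta.T + psi

def update(x, p, z):
    """A-posteriori correction with measurement z."""
    s = H @ p @ H.T + R
    k = p @ H.T @ np.linalg.inv(s)          # Kalman gain
    x = x + k @ (z - H @ x)
    p = (np.eye(2) - k @ H) @ p
    return x, p

x, p = np.zeros(2), np.eye(2)
for z in [0.0, 1.0, 2.0, 3.0, 4.0]:         # target moving at 1 px/frame
    x, p = predict(x, p)
    x, p = update(x, p, np.array([z]))
```

After a few frames the estimated velocity converges to the true 1 px/frame, which is exactly what makes the filter useful for predicting the next search window; a sharp turn of the target, however, would violate the linear-dynamics assumption noted above.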
### Machine Learning

Machine Learning (ML) techniques have almost completely replaced image-processing-based solutions in Computer Vision, thanks to both their efficiency and their accuracy, which is often comparable to, and frequently better than, that of the more rudimentary techniques. As for classical Machine Learning, where a classifier or regressor such as a Random Forest (RF, Breiman (2001)), K-Nearest Neighbors (K-NN, Cover and Hart (1967)) or Support Vector Machine (SVM, Cortes and Vapnik (1995)) learns features extracted from a patch, we can mention the work of Tian et al. (2007), where an ensemble of SVMs is used to trace objects; conversely, Thormann et al. (2017) describe an early RF-based system that learns the outputs of a Computer Vision algorithm in order to replace it. This last work can be considered a source of inspiration for the study carried out in this paper, although it rests on different conceptual bases. Better performing, and commonly used to approximate object trackers, are the Deep Learning object-detection algorithms based on convolutional networks, including YOLO (Redmon et al. (2016)) and Faster-RCNN (Ren et al. (2015)). Mainly, these object detectors are used to search for the target in each frame by training only on the patch of the first frame of the sequence and, if possible, updating their weights according to their own predictions. Their main disadvantages are inductive bias and anchor dependence. The Multi-Domain Convolutional Neural Network (MDNet, Nam and Han (2016)) is composed of a stack of convolutional layers followed by a series of parallel branches, each representing a different domain, where in the case of tracking a sequence is treated as a domain. First, each branch trains on its single sequence; after that, the shared convolutional layers are trained to give the model global knowledge. The classification is binary, i.e. distinguishing foreground from background.
The approach of Siamese networks is different: two models sharing the same parameters are run in parallel and are given as input the entire frame and the patch to be searched. After their execution, an aggregation function is applied (cross-correlation is widely used) to obtain a final heatmap where, through an appropriate rescaling, the result is located. Well-known Siamese network models are Siam-RCNN (Voigtlaender et al. (2019)), Siam-FC++ (Xu et al. (2020)), Siam-RPN++ (Li et al. (2018)) and Siam-Mask (Wang et al. (2019)). This approach was considered the state of the art until a few years ago, before being supplanted by the introduction of transformers. The main defects of Siamese networks concern their poor ability to learn the background as a function of the foreground (and therefore to strengthen its discrimination) and the unreliability of the output score, which, unlike that of probabilistic models, is merely a similarity index. Later, transformer-based solutions began to take hold thanks to their excellent ability to learn sequences in both a spatial and a temporal sense. Transformers can be compared to Recurrent Neural Networks (RNNs) that are much less expensive to train, although they require large amounts of sequences. Like RNNs, they take a sequence as input and return a sequence, but they introduce a new function for studying data relations called _attention_. These relations are studied by encoding the incoming data once a maximum token size is established; positional encodings are mechanisms that reorder the input so as to simplify the attention computation. Transformers usually take the form of auto-encoder structures, and their training relies on three fundamental matrices: \(Q\), \(V\) and \(K\), i.e., respectively, the _queries_, _values_ and _keys_ (Vaswani et al. (2017)).
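The cross-correlation aggregation used by Siamese trackers can be sketched as sliding the template embedding over the search-region embedding to produce the response heatmap. This is a toy illustration under our own assumptions: raw pixels stand in for the shared-backbone features, and the function name is ours.

```python
import numpy as np

def siamese_response(search_feat, template_feat):
    """Cross-correlate the template embedding over the search embedding,
    the aggregation used by SiamFC-style trackers.  Both inputs are
    (H, W, C) feature maps produced by the same backbone."""
    th, tw, _ = template_feat.shape
    sh, sw, _ = search_feat.shape
    heat = np.empty((sh - th + 1, sw - tw + 1))
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            window = search_feat[i:i + th, j:j + tw]
            heat[i, j] = np.sum(window * template_feat)
    return heat

# Toy example: "features" are raw pixels (a stand-in for a shared CNN).
search = np.zeros((8, 8, 1))
search[3:5, 2:4, 0] = 1.0          # target occupies rows 3-4, cols 2-3
template = np.ones((2, 2, 1))
heat = siamese_response(search, template)
peak = np.unravel_index(np.argmax(heat), heat.shape)
```

The peak of the heatmap marks the target location; note that a distractor patch identical to the target would score just as high, which is precisely the similarity-index weakness discussed above.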
As in a retrieval system, the query can be considered the search string, the keys the domain in which to search, and the values the final result.

\[c_{i}=\sum_{j}a_{ij}\boldsymbol{h}_{j}\qquad\textbf{where}\qquad\sum_{j}a_{ij}=1 \tag{4}\]

In a first formulation (Bahdanau et al. (2016)), attention was defined as in eqn. 4, where the \(\boldsymbol{h}_{j}\) are the values and the \(a_{ij}\) the coefficients to be learned; Bahdanau et al. (2016) proposed a neural network for learning these scores. The computation in this case turned out to be too expensive, since it maps directly from a sequence of dimension \(N\) for the encoder to a sequence of dimension \(M\) for the decoder.

\[Attention(Q,K,V)=softmax\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V \tag{5}\]

By choosing to project the sequences onto a common space (Vaswani et al. (2017)), through a function \(f(x)\) for the encoder and \(g(y)\) for the decoder, we obtain projection vectors called keys \(K\) for the encoder and queries \(Q\) for the decoder. Eqn. 5 gives the evolved definition of attention, with \(d_{k}\) the dimension of queries and keys.

\[MultiHead(Q,K,V)=Concat(head_{1},\ldots,head_{h})W^{O}\quad\textbf{where}\quad head_{i}=Attention(QW_{i}^{Q},KW_{i}^{K},VW_{i}^{V}) \tag{6}\]

Eqn. 6 describes transformer layers, which are often multi-head. When the queries, keys and values come from the same sequence, we speak of _self-attention_; if the queries come from one sequence while the keys and values come from another, we speak of _cross-attention_. Usually, the first approach is used by unsupervised language models such as GPT-3 (Brown et al. (2020)). The latter concerns models like Stable Diffusion (Rombach et al. (2022)), where, beyond an image, there is a text prompt input for generating text-driven manipulated images.
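Eqns. 5 and 6 can be written out directly in numpy. This is a minimal sketch: the dimensions, the random projection matrices and the self-attention setup (Q = K = V = X) are illustrative choices of ours.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention, eqn. (5)."""
    d_k = K.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d_k)) @ V

def multi_head(Q, K, V, Wq, Wk, Wv, Wo):
    """Eqn. (6): one projection triple per head, concatenated, then W^O."""
    heads = [attention(Q @ wq, K @ wk, V @ wv)
             for wq, wk, wv in zip(Wq, Wk, Wv)]
    return np.concatenate(heads, axis=-1) @ Wo

rng = np.random.default_rng(0)
n, d, h = 4, 6, 2                      # tokens, model dim, number of heads
X = rng.normal(size=(n, d))            # self-attention: Q = K = V = X
Wq = [rng.normal(size=(d, d // h)) for _ in range(h)]
Wk = [rng.normal(size=(d, d // h)) for _ in range(h)]
Wv = [rng.normal(size=(d, d // h)) for _ in range(h)]
Wo = rng.normal(size=(d, d))
out = multi_head(X, X, X, Wq, Wk, Wv, Wo)
```

Each softmax row sums to one, matching the constraint on the \(a_{ij}\) in eqn. 4; feeding Q from one sequence and K, V from another turns the same code into cross-attention.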
To mention some of their main bottlenecks, transformers suffer from poor architectural explainability and need huge amounts of data for training, which makes them very difficult to run on medium-power machines. In the Short-Term setting, several transformers have been used to solve the tracking problem. Among the best known are STARK (Yan et al. (2021)), SuperDiMP (Bhat et al. (2019)), KeepTrack (Mayer et al. (2021)) and ViTTrack (_Chen et al._ in Kristan et al. (2023)). STARK (Yan et al. (2021)) follows a classic auto-encoder structure, with a backbone for feature extraction and a branch for bounding-box prediction. It stores an initial template and a dynamic template; to make a prediction, it first checks through a score head whether the dynamic template needs to be changed and, if so, replaces and updates it. SuperDiMP (Bhat et al. (2019)) follows a different architecture: after extracting the training features, i.e. the stored templates, it applies a model-prediction branch. This step computes the weights of a convolutional model from an initialized model and an optimization algorithm that iteratively adapts the former to the training distribution. The predicted model is then applied to the features extracted from the test frames in order to produce the final score map. KeepTrack (Mayer et al. (2021)) uses SuperDiMP within its pipeline as a baseline tracker on two consecutive frames. Then, having obtained two candidate targets with their score maps, a feature encoding is applied to the two maps. A Graph Neural Network (GNN), to which the candidate embeddings are passed, computes and learns the associations between the map elements, seen as nodes connected by self-attention and cross-attention edges. Then, to perform the candidate matching and obtain the final target by exclusion, a Sinkhorn-based algorithm is used. ViTTrack (_Chen et al._ in Kristan et al.
(2023)) is based on the ViT transformer (Dosovitskiy et al. (2021)), to which it adds a corner-prediction head. Similarly to STARK, the initial patch and the template are chained together and used for feature extraction and corner prediction.

### Trackers Fusion Strategy

Throughout the history of object tracking, various alternative ways of simplifying processes have been explored, which gradually became more and more complex. The very design of a transformer and its training involve in-depth study and huge amounts of data. To get an idea of the amount of data that would be needed to describe an associative memory capable of solving the object-tracking problem, consider that a frame has dimensions \(W\) and \(H\) in 2D, and each pixel can take \(N\) values. The possible frames, viewed as points in an unbounded discrete space of dimension \(W\times H\), number \(N^{W\times H}\). This value alone represents the set of possible groundtruths in an image-classification problem. Introducing the third dimension for sequences, assume that every sequence has a fixed length \(K\) and that every frame in each sequence is different from every frame of the other sequences. Assuming that an initial target is composed of a subset of the pixels of the first frame, the number of such subsets is \(2^{W\times H}\); assuming further that for each initial target there is a unique and distinct series of groundtruth subsets for the entire sequence, and across sequences, we obtain the following lower bound on the number of possible groundtruths necessary to construct an associative memory for object tracking.

\[OTG=\frac{N^{W\times H}\times 2^{W\times H}}{K}=\frac{(2N)^{W\times H}}{K} \tag{7}\]

In eqn. 7, \(OTG\) stands for object-tracking groundtruths, and the unit of storage is bytes.
To put this into practice, assume that a video has a resolution of \(1280\times 720\), a frame rate of 30 fps, a length of 60 seconds and an encoding of 3 bytes per pixel; using eqn. 7, one obtains a value of \(8.65\times 10^{717128}\) terabytes of data. An average deep-learning algorithm can be trained on at most a quantity of data on the order of terabytes, and even this is already expensive. To reduce the complexity of these algorithms, it was decided to merge multiple trackers into a parallel pipeline and to aggregate the results into a single branch. Going back in time, Vojir et al. (2016) use an adaptive Hidden Markov Model to predict which tracker to use among a pool of complementary trackers. Within the field of deep learning applied to Long-Term tracking, until 2020 the approaches considered were sequential and the pipelines were linear: after predicting the bounding box and confidence score, the tracker decided whether to re-detect the target based on a threshold or on learning. An example of a tracker that uses this approach, and that obtained the best F1 scores on the VOT-LT2019 (Kristan et al. (2019)) and VOT-LT2020 (Kristan et al. (2020)) challenges, is LT_DSE (Kristan et al. (2019)), winner of both editions. Since 2020, the tracker-fusion-strategy paradigm has also begun to be adopted in Long-Term tracking with deep-learning methods. In particular, the winner of the VOT-LT2021 (Kristan et al. (2021)) challenge was mlpLT (Dunnhofer et al. (2022)), based on the fusion of the STARK and SuperDiMP trackers, with an online verification phase performed by MDNet. This tracker also applies a so-called correction phase: the tracker that is evaluated as better between the two produces a result that acts as a template for both trackers. An improvement on its F1 score is CoCoLoT (_Dunnhofer et al._ in Kristan et al. (2023)), which replaces the SuperDiMP baseline tracker with KeepTrack. The VOT-LT2022 (Kristan et al.
(2023)) challenge was instead won by VITKT_M (_Zhang et al._ in Kristan et al. (2023)). This model consists of a composition of the ViTTrack and KeepTrack trackers, followed by the metric model MetricNet (Zhao et al. (2020)). It is then extended with a motion module that predicts the trajectory of the current target when it exhibits abnormal behavior. The present work does not merely propose a series of innovative models; it also aims to act as a standard of generalization for the tracker-fusion strategy. In particular, it parameterizes the number of baseline trackers and introduces the classification of an OoV state, which is not considered by most works. The latter is of fundamental importance, especially in military and medical tasks, where the number of false positives and false negatives must be minimized. In addition, as will be explained in the **Methodology and Materials** section, the process of choosing the tracker result on which to base the final output will be treated as a learning procedure carried out by a generic learner. These features give the models the ability to abstract from the type of algorithm used, both in terms of baseline trackers and of learners. As the experiments will show, the models also have the ability to abstract from the data: using LTB-50 (Lukezic et al. (2021)) as the training dataset and the VOT-LT2022 evaluation dataset as the test set, the recall obtained is the highest ever reported, with a highly competitive F1 score; similarly, using VOT-LT2022 as the training set and LTB-50 as the test set, the results remain almost unchanged. In the experiments, ablations and modifications are introduced to avoid any kind of a-priori knowledge of the test set.
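A single step of such a generalized N-tracker fusion can be sketched as below. This is a hedged illustration of the idea, not code from any cited system: `fuse_step`, `learner` and `oov_threshold` are hypothetical names of ours, and the trivial confidence-based learner only stands in for the generic learner described in the text.

```python
# Sketch of one fusion step over N baseline trackers, in the spirit of
# mlpLT / VITKT_M: each tracker returns (box, confidence), a pluggable
# learner scores the candidates, and an out-of-view (OoV) state is
# declared when no score clears a threshold.  All names are illustrative.
from typing import Callable, Optional, Sequence, Tuple

Box = Tuple[float, float, float, float]            # (x, y, w, h)

def fuse_step(
    predictions: Sequence[Tuple[Box, float]],      # one (box, conf) per tracker
    learner: Callable[[Sequence[Tuple[Box, float]]], Sequence[float]],
    oov_threshold: float = 0.5,
) -> Optional[Box]:
    scores = learner(predictions)
    best = max(range(len(predictions)), key=lambda i: scores[i])
    if scores[best] < oov_threshold:
        return None                                # target declared out of view
    return predictions[best][0]

# Trivial stand-in learner: trust each tracker's own confidence.
confidence_learner = lambda preds: [conf for _, conf in preds]
```

Swapping `confidence_learner` for a trained classifier or regressor is what makes the procedure learner-agnostic, and returning `None` models the explicit OoV classification that most fusion works omit.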
## 3 Methodology and Materials In this section we describe the protocols most used in the context of Long-Term Object Tracking, the benchmarks used by the model both as training and as test sets, a functional representation of how \(N\) trackers can be complementary to each other and, finally, the model itself. ### Evaluation protocols Nearly all tasks in Computer Vision gain international visibility not only because of the inherent complexity of their problems, but also because of the way in which the solutions proposed for them are evaluated. The Object Tracking Benchmarks (OTB) (Wu et al. (2015)) and Visual Object Tracking (VOT) (Kristan et al. (2013)) protocols are de facto standards in the field, and most state-of-the-art models refer to them. To better define their types of evaluation, it is necessary to introduce the concept of Intersection over Union (IoU) between two bounding boxes or between two masks. \[IoU=\frac{|r_{t}\cap r_{p}|}{|r_{t}\cup r_{p}|} \tag{8}\] In eqn. 8, \(r_{t}\) stands for the target region and \(r_{p}\) for the predicted one. IoU is widely used for tasks of all kinds, from instance segmentation to semantic segmentation, and it takes values in the interval \([0,1]\). In addition, the definition of Average Center Location (ACL) is needed. \[ACL=E(||x-y||_{2}) \tag{9}\] In eqn. 9, \(x\) denotes the center of the target bounding box or mask, and \(y\) that of the prediction. The OTB protocol is a one-pass protocol: it is launched once and never stopped at a specific frame of the sequence, even in the short term in the presence of tracking failure; in practice, it never allows resets. Its main metrics are _accuracy_ and _robustness_. Given two thresholds \(\lambda\) and \(\delta\), within accuracy we can distinguish three measures: _precision_, _success_ and _Area Under Curve_ (AUC). 
\[P=\%f_{i}\ \forall i\in[1,K]\,|\,ACL<\lambda \tag{10}\] \[S=\%f_{i}\ \forall i\in[1,K]\,|\,IoU>\delta \tag{11}\] \[AUC=\int_{\delta}IoU\ \textbf{with}\ \delta\in[0;1] \tag{12}\] In eqn. 10 and eqn. 11, \(\%f_{i}\) indicates the percentage of frames belonging to the sequence set \(\mathcal{S}\), in the first case with a threshold \(\lambda\) on the ACL and, in the second, with a threshold \(\delta\) on the IoU. The AUC (eqn. 12) is instead the integral of the IoU as the threshold \(\delta\) varies in the range \([0;1]\). As for the computation of robustness, it amounts to evaluating accuracy in three different ways: One Pass Evaluation (OPE), Temporal Robustness Evaluation (TRE) and Spatial Robustness Evaluation (SRE). OPE runs the evaluation in one pass with no reset; TRE divides the sequence into segments, executes OPE on each individual segment and then averages the results; finally, SRE applies OPE to 12 transformations of the same sequence, based on augmentation. The VOT protocol, instead, identifies a different evaluation criterion for each type of tracking: for Long-Term tracking, the metrics considered are precision, recall and F1-score, calculated according to a threshold linked to the IoU. \[\tau_{\sigma}=\operatorname*{argmax}_{\tau}F(\tau) \tag{13}\] \[Pr(\tau_{\sigma})=\int_{0}^{1}Pr(\tau_{\sigma},\tau_{\Omega})d_{\tau_{\Omega}}= \frac{1}{N_{p}}\sum_{t\in\{t:A_{t}(\tau_{\sigma})\neq\emptyset\}}\Omega(A_{t}(\tau_{\sigma}),G_{t}) \tag{14}\] \[Re(\tau_{\sigma})=\int_{0}^{1}Re(\tau_{\sigma},\tau_{\Omega})d_{\tau_{\Omega}}= \frac{1}{N_{g}}\sum_{t\in\{t:G_{t}\neq\emptyset\}}\Omega(A_{t}(\tau_{\sigma}),G_{t}) \tag{15}\] \[F(\tau_{\sigma})=\frac{2Pr(\tau_{\sigma})Re(\tau_{\sigma})}{Pr(\tau_{\sigma})+ Re(\tau_{\sigma})} \tag{16}\] As can be seen from eqn. 13, the three metrics are computed by searching for the threshold \(\tau_{\sigma}\) that maximizes the F1-score. Precision and recall (eqn. 14 and eqn. 
15, respectively) integrate their respective measures over the IoU threshold \(\tau_{\Omega}\) in the range \([0;1]\), where \(A_{t}\) stands for the predicted bounding box, \(G_{t}\) for the groundtruth and \(\Omega\) indicates the overlap operator. The situation is different for the F1-score (eqn. 16), where the maximizing threshold of eqn. 13 is taken into account to give the final outcome. There is therefore no pre-set threshold value to refer to. This latter protocol was used in the experiments, in accordance with the results presented at the most recent reference challenge, namely VOT-LT2022 (Kristan et al. (2023)). ### Benchmarks Details on the datasets used in the pre-training phase by the models mentioned above and employed in the proposed work can be found directly in the individual reference papers. The datasets that will be referred to in the manuscript and that were actually used in the experiments, both for training and testing, are LTB-50 (Lukezic et al. (2021), adopted for the VOT-LT 2019, 2020 and 2021 challenges) and VOT-LT2022 (Kristan et al. (2023)). The LTB-50 dataset is composed of 50 sequences, for a total of 215294 frames, divided unequally among the sequences. The sequences have different resolutions and a wide variety of target subjects, _e.g._, animals, people, cars, etc. Its scenes contain most of the problems dealt with in the **Environment and issues** section, such as OoVs, geometric transformations, various visible wave phenomena and poor acquisition quality. Implicitly, despite having constant frame rates, subjects move at different speeds, so the rate of fast motion varies among sequences. In the experiments, the dataset will be used both in the training phase and in the test phase, exploiting the annotations produced by the VOT community during the creation of the benchmark. A visual overview of its scenes can be found in Fig. 1. 
Similarly to LTB-50, the VOT-LT 2022 dataset, introduced only for the 2022 edition (the VOT challenge has in fact changed the tracking task as of 2023, moving it to multi-object tracking), contains 50 sequences, for a total of 168282 frames. They have different resolutions, but the same frame rate. The considerations regarding the issues and the type of targets apply in the same way as for the LTB-50 dataset, but it has been empirically noted that VOT-LT 2022 is harder to perform well on, as reported by the latest results. On average, the resolution is higher than in the LTB-50 dataset, and the dataset is heavier in terms of storage. In the experiments it will be used both for training and for testing. The ensemble of the sequences is shown in Fig. 2. ### Trackers complementarity Before presenting the spatial learning model, it is necessary to understand the theoretical foundations on which the fusion strategy is based. What gives mathematical validity to merging multiple trackers together is their _complementarity_. We define complementarity as the ability of \(N\) generic trackers to return qualitatively different results in a complementary way. Clearly, in practice the trackers have different performances, and there is no guarantee of a fair situation in which each can, in turn, contribute its own result without being overshadowed by the others. In this sense, various types of situations can be defined and represented mathematically. Consider the returned outputs, namely the confidence score \(c_{ij}\) and the bounding box \(b_{ij}\), where \(i\) is the frame index in the sequence \(S\) and \(j\) the tracker index in the ensemble. If and only if the \(IoU_{ij}\), calculated between \(b_{ij}\) and \(g_{i}\), i.e., the corresponding groundtruth of the \(i\)-th frame, is better than every other \(IoU_{ik,k\neq j}\), then the prediction of the \(j\)-th tracker is assigned as the corresponding prediction. 
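The per-frame oracle assignment just described can be sketched as follows. This is an illustrative reconstruction, not code from the paper: it implements the IoU of eqn. 8 for axis-aligned boxes and picks, for each frame, the tracker whose box best overlaps the groundtruth.

```python
import numpy as np

def iou(box_a, box_b):
    """IoU of eqn. 8 for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def oracle_assignment(boxes, groundtruth):
    """boxes: per-frame list of N candidate boxes (one per tracker);
    groundtruth: per-frame groundtruth box. Returns, for each frame, the
    index j of the tracker whose box maximizes the IoU with the groundtruth.
    Note the assignment ignores the confidence scores entirely."""
    return [int(np.argmax([iou(b, g) for b in frame_boxes]))
            for frame_boxes, g in zip(boxes, groundtruth)]

# Toy example: two trackers, two frames.
boxes = [
    [(0, 0, 10, 10), (5, 5, 15, 15)],    # frame 0: tracker 0 fits GT better
    [(20, 20, 30, 30), (0, 0, 10, 10)],  # frame 1: tracker 1 fits GT better
]
gt = [(0, 0, 10, 10), (1, 1, 11, 11)]
print(oracle_assignment(boxes, gt))  # → [0, 1]
```

This oracle is exactly the labeling rule used later to annotate the training frames, one class per tracker.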
Note that the comparison is independent of the confidence value, since \(c_{ij}\) being greater than \(c_{ik,k\neq j}\) does not imply that \(IoU_{ij}\) is greater than \(IoU_{ik,k\neq j}\). In this way, to obtain the best tracking system acting on a sequence (with the given trackers), it is necessary to subject all its frames to the previous comparison, obtaining a one-by-one association between a frame \(i\) and a tracker \(j\) (i.e., the best one, the one whose prediction should be chosen). Figure 1: The illustration shows the 50 sequences of LTB-50, which include some of the most complex issues covered in the introduction, such as out of views or transformations of various types. The montage was produced from [https://www.votchallenge.net/vot2019/dataset.html](https://www.votchallenge.net/vot2019/dataset.html) Figure 2: The illustration shows the 50 sequences of VOT-LT2022, which contain a smaller number of frames overall, but a higher resolution. The montage was produced from [https://www.votchallenge.net/vot2022/dataset.html](https://www.votchallenge.net/vot2022/dataset.html) For simplicity, we will introduce four scenarios, which act as extreme conditions in which a multi-tracker system can find itself: _in-phase_, _anti-phase_, _Dirac delta_ and _upper limited_. As can be seen in Fig. 3, which shows a generic pair of trackers \(i\) and \(j\) taken from the system's tracker pool, the four configurations are: * **Anti-phase**: for each frame of the sequence there is always a tracker with an IoU higher than the others. Trackers alternate their predictions in a round-robin pattern to optimize decision-making. If \(N\) trackers preserve this property on a generic sequence, one will always be able to extract the maximum performance from their simultaneous execution. \[IoU_{j}(t)=A_{j}\sin(2\pi ft+\rho_{j})\] (17) In eqn. 
17, the IoU function defined on the frame domain is modeled as a sinusoidal wave, where \(j\) stands for the tracker index in the multi-tracker system. Each sinusoid has its own phase \(\rho_{j}\), which makes the round-robin scheme applicable. * **In-phase**: all the trackers in the system behave in the same way. They admit the same peaks in amplitude on the IoU, and therefore the maximum obtainable from the union of their performances is equal to the performance of an individual tracker. \[IoU_{j}(t)=A_{j}\sin(2\pi ft)\] (18) In eqn. 18 a specific case of eqn. 17 is considered, with \(\rho_{j}=0\quad\forall j\). * **Upper limited**: there is always one and only one tracker (or a subset of the entire pool) that dominates the performance of the others, thus making the execution of the poorer trackers useless. The maximum obtainable in this configuration is therefore given by the best tracker or by the simultaneous execution of the best ones. \[IoU_{j}(t)=K_{j}\] (19) In eqn. 19 every \(j\)-th tracker's discrete IoU curve is represented by a constant function \(K_{j}\), with \(K_{j}\neq K_{i}\quad\forall\quad i\neq j\). * **Dirac delta**: the trackers behave in the same way as in the in-phase case; the difference lies in the fact that, in a single frame, the IoU of one of them (or of a subset of the pool) turns out to be greater than the others. This generates a paradoxical situation in which, although statistically the trackers can be considered equal, one of them or a subset of them can falsely be considered better, as in the upper-limited case. Figure 3: The four extreme scenarios of a multi-tracker system: a) anti-phase trackers; b) in-phase trackers; c) upper limited trackers; d) Dirac delta like distribution. \[\begin{cases}IoU_{j}(t)=K_{j}\\ IoU_{i}(t)=K_{j}\quad\textbf{if}\quad t\neq t_{0}\\ IoU_{i}(t)=K_{i}>K_{j}\quad\textbf{if}\quad t=t_{0}\end{cases} \tag{20}\] In eqn. 20 a special case of eqn. 
19 is presented: a single point of one constant function is higher than the others, assuming a Dirac delta shape. The situations described are clearly ideal and almost impossible to replicate in practice, but they serve to show that learning the behavior curve of the various trackers is a fundamental task, once it is established that the trackers exhibit at least one point of complementarity among them. This can be verified with a suitably chosen training set. ### Proposed model In order to learn the behavior curve of the various trackers, and to understand how their performance changes based on the output they predict and on the groundtruth, the model we propose trains a ML algorithm on the scores they predict on each frame of every sequence. Figure 4: The entire proposed pipeline: both the input and the first target are passed to the \(N\)-tracker system, which predicts the confidence scores to be fed to the pre-trained learner. The learner decides which bounding box to use based on the result of the score classification. Figure 5: In the specialized model, two macro trackers are considered, mlpLT and VITKT_M, and a DNN as a learner, trained on the scores produced on LTB-50. In accordance with VOT standards, an OoV should not be reported as such, so if no visibility is detected, mlpLT is chosen, so that no a priori knowledge about VOT-LT2022 is used. Regardless of the approach chosen, supervised or not, the goal is to decide which one among the \(N\) trackers is actually the best choice; to do this, each training frame has to be annotated with a corresponding classification label, where the classes are the indices of the trackers considered plus a label for the OoV, _i.e._, the case where the target is not clearly visible to any of the trackers. The ML model will then have \(N\) input scores (one for each tracker) and \(N+1\) output classes. An illustration of this process is given in Fig. 4. 
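A minimal sketch of this decision step, using scikit-learn (which the **Implementation** section reports as the library actually used) with the small MLP configuration described in the next paragraphs; the data here are synthetic placeholders, and the OoV fallback index is an assumption for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic training data: one row of N=2 confidence scores per frame,
# labels in {0: pick tracker 0, 1: pick tracker 1, 2: out of view (OoV)}.
X_train = rng.random((500, 2))
y_train = np.where(X_train.max(axis=1) < 0.2, 2,
                   np.argmax(X_train, axis=1))

# Standardize with mean/variance computed on the training set only.
scaler = StandardScaler().fit(X_train)

# Two hidden layers (3 and 2 neurons), L-BFGS, up to 5000 iterations.
clf = MLPClassifier(hidden_layer_sizes=(3, 2), solver="lbfgs",
                    max_iter=5000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)

def decide(scores, boxes, fallback=0):
    """Return the bounding box chosen by the learner for one frame.
    On a predicted OoV, a box must still be returned (VOT rule), so we
    fall back to a fixed tracker's box; the fallback index is a
    placeholder here (the paper uses mlpLT on the VOT-LT2022 test)."""
    cls = int(clf.predict(scaler.transform([scores]))[0])
    return boxes[fallback] if cls == len(boxes) else boxes[cls]

frame_boxes = [(0, 0, 10, 10), (5, 5, 15, 15)]
print(decide([0.9, 0.3], frame_boxes))
```

The same skeleton accommodates any \(N\) trackers: the input width becomes \(N\) and the label set \(\{0,\dots,N\}\), with class \(N\) reserved for the OoV state.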
From the point of view of the trackers to be merged within the system, we chose to use the two best trackers currently known in the VOT field: mlpLT and VITKT_M. In turn, these two trackers are each composed of two sub-trackers, as already described in **Trackers Fusion Strategy**, so the system consists of a tournament of 4 trackers, carried out in two matches: in the first, the results of the sub-trackers are verified, while in the second, which could be called the final match, the scores produced are fed as input to the learning algorithm. The final predicted class is the index of the tracker to use. The model chosen for training on LTB-50 and testing on VOT-LT 2022 is a Deep Neural Network (DNN). Specifically, the network is composed of two hidden layers: given the dimensionality of the system, whose input consists of two scores, 3 hidden neurons were placed in the first layer and 2 in the second, before the single output neuron. The output can take 3 different states, _i.e._, choose the bounding box of mlpLT, choose the bounding box of VITKT_M, or report an OoV. The DNN was trained using Limited-Memory BFGS optimization with a maximum of 5000 iterations. For DNN supervision, before the training phase, the data were standardized with respect to mean and variance. Likewise, the same transformation, with mean and variance computed on the training set, is applied to the test data before evaluation. According to the VOT evaluation rules, an OoV must not be reported as such: a bounding box has to be returned in any case, even when an OoV is predicted. This involves the arbitrary choice of an outcome, which heuristically falls to one of the two trackers. Since VITKT_M is the winner of the VOT-LT 2022 challenge, using it to provide these bounding boxes would mean cheating on the results. 
For this reason, the model that wins on the training dataset is used, so that there is no prior knowledge about the test set. Hence, in case of an OoV prediction, the result of mlpLT is used on the VOT-LT 2022 test set. A visual representation of this architecture is contained in Fig. 5. The tracker proposed in the experiments turns out to be a cross between an online tracker, mlpLT, and an offline tracker, VITKT_M. The final part, the Machine Learning-based module, works on the current frame and can be considered online. ### Rationale behind DNN To choose the right node configuration for the model that uses a DNN as learner, we appeal to the Vapnik-Chervonenkis dimension applied to Multi-Layer Perceptrons (MLP) with ReLU as activation function. The VC dimension is the maximum number \(n\) of points that can be shattered by a binary classifier, i.e., correctly labeled under any assignment of labels. Considering the VOT protocol without out of view, the algorithm returns only two classes, namely the output of the first or of the second tracker, so it can be considered binary. According to Remark 9 in Bartlett et al. (2019), a tight bound \(\Theta_{1}\) holds in the above case. \[cWL\log(\frac{W}{L})\leq VC\leq CWL\log W \tag{21}\] From eqn. 21 it follows that there must be two constants \(c\) and \(C\) such that the tight bound condition is fulfilled, where \(W\) and \(L\) are the numbers of weights and layers of the network, respectively, and VC its Vapnik-Chervonenkis dimension. For simplicity, we can consider \(c=C\), since on the right-hand side the quantity \(\log W\) is certainly greater than \(\log(\frac{W}{L})\). Having now to find values of VC and \(C\) that solve the inequality for our problem, a second condition is needed: we refer to the sample-complexity bounds, which consist of another tight bound \(\Theta_{2}\). 
\[a\frac{VC+\log(\frac{1}{\rho})}{\sigma}\leq N\leq b\frac{VC+\log(\frac{1}{\rho})}{\sigma} \tag{22}\] Similarly to the previous equation, eqn. 22 admits the tight bound condition on the number \(N\) of training patterns if there are two constants \(a\) and \(b\), with \(\rho\) the failure probability (which we took to be the fraction of misclassified patterns on the test set) and \(\sigma\) the learning error (taken from the last value obtained on the training loss). This \(\Theta_{2}\) comes from the combination of the upper bound treated in Hanneke (2016) and the lower bound in Ehrenfeucht et al. (1989). Now, having a system of 4 inequalities in 4 variables, we impose for simplicity the constants \(C=1\) and \(a=1\). In this way, fixing the number of training patterns at \(N=215294\) (the number of frames of LTB-50, Lukezic et al. (2021)), with 3 nodes in the first hidden layer and 2 in the second, we get \(W=14\) and \(L=4\). In addition, we obtained \(\rho=0.45\) and \(\sigma=0.80\). The system then admits a solution for \(b=\frac{4359687}{3682}\) and \(\boldsymbol{VC}=\frac{3682}{25}\). Since a solution of the system for the chosen constants exists, it can be asserted that the chosen configuration falls within the eligible ones. The existence of the solution does not imply that the system does not overfit: this depends on the proportion between the number of training patterns and the dimensionality, but above all on the quality and statistical independence of the patterns. 
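The arithmetic above can be checked numerically. The sketch below is an illustrative reconstruction, assuming natural logarithms and a weight count without biases (\(W = 2\cdot 3 + 3\cdot 2 + 2\cdot 1 = 14\)); under these assumptions both chains of inequalities hold with the stated constants:

```python
import math

# Network: 2 inputs -> 3 -> 2 -> 1 output; weights counted without biases.
W = 2 * 3 + 3 * 2 + 2 * 1    # = 14
L = 4                        # number of layers, as counted in the text

c = C = 1.0                  # constants of eqn. 21
a, b = 1.0, 4359687 / 3682   # constants of eqn. 22
VC = 3682 / 25               # = 147.28
N, rho, sigma = 215294, 0.45, 0.80  # LTB-50 training setup

# Eqn. 21: c*W*L*log(W/L) <= VC <= C*W*L*log(W)
lower_vc = c * W * L * math.log(W / L)
upper_vc = C * W * L * math.log(W)
assert lower_vc <= VC <= upper_vc, (lower_vc, VC, upper_vc)

# Eqn. 22: a*(VC + log(1/rho))/sigma <= N <= b*(VC + log(1/rho))/sigma
term = (VC + math.log(1 / rho)) / sigma
assert a * term <= N <= b * term, (a * term, N, b * term)

print(f"eqn. 21: {lower_vc:.1f} <= {VC:.2f} <= {upper_vc:.1f}")
print(f"eqn. 22: {a * term:.0f} <= {N} <= {b * term:.0f}")
```

Re-running the same check with the ablation setup given later (\(N=168282\), \(\rho=0.52\), \(\sigma=0.81\)) also satisfies eqn. 22 with the same \(b\) and VC, consistently with the text.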
| Method | Precision | Recall | F1-Score |
| --- | --- | --- | --- |
| VITKT_M | 0.629 | 0.604 (2nd) | 0.617 |
| mixLT | 0.608 | 0.592 | 0.600 |
| HuntFormer | 0.586 | 0.610 | 0.598 |
| CoCoLoT | 0.591 | 0.577 | 0.584 |
| Proposed model (DNN) | 0.562 | 0.619 (1st) | 0.582 |
| mlpLT | 0.568 | 0.562 | 0.565 |
| Proposed model (FCM) | 0.538 | 0.593 (3rd) | 0.564 |
| KeepTrack | 0.572 | 0.550 | 0.561 |
| D3SLT | 0.520 | 0.516 | 0.518 |
| SuperDiMP | 0.510 | 0.496 | 0.503 |

Table 1: Results and comparison with accepted methods presented at VOT-LT2022, sorted by F1-Score.

| Method | Precision | Recall | F1-Score |
| --- | --- | --- | --- |
| mlpLT | 0.741 | 0.729 (3rd) | 0.735 |
| VITKT_M | 0.728 | 0.719 | 0.724 |
| STARK_LT | 0.721 | 0.725 | 0.723 |
| STARK_RGBD_LT | 0.719 | 0.724 | 0.721 |
| SLOT | 0.727 | 0.711 | 0.719 |
| Keep_track_lt | 0.725 | 0.700 | 0.712 |
| SuperD_MU | 0.738 | 0.680 | 0.708 |
| RinTrack | 0.717 | 0.696 | 0.707 |
| Proposed model (FCM) | 0.658 | 0.738 (1st) | 0.696 |
| LT_DSE | 0.715 | 0.677 | 0.695 |
| Proposed model (DNN) | 0.653 | 0.732 (2nd) | 0.690 |
| LTMU_B | 0.698 | 0.680 | 0.689 |
| SuperDiMP | 0.675 | 0.660 | 0.667 |
| SiamRCNN | 0.654 | 0.673 | 0.664 |
| Sion_LT | 0.640 | 0.456 | 0.533 |
| TDIOT | 0.496 | 0.478 | 0.487 |

Table 2: Results and comparison with accepted methods presented at VOT-LT2021, sorted by F1-Score.

| Method | \(OoV_{p}\) | \(OoV_{G}\) | Test |
| --- | --- | --- | --- |
| Proposed model (DNN) | 10155 | 16733 | VOT-LT2022 |
| Proposed model (FCM) | 18119 | 27310 | LTB-50 |

Table 3: Out-of-view detection skill for the best models on VOT-LT 2022 and LTB-50. \(OoV_{p}\) stands for the number of out-of-view predictions, while \(OoV_{G}\) stands for the total number of out-of-view frames.
### Implementation Details on the individual implementations and on the libraries to be installed from GitHub can be found in the mlpLT and VITKT_M reference papers, or on the official VOT challenge website. The learners were implemented using the Python scikit-learn and fuzzy-c-means libraries. The official repository of this work is available at the link [https://github.com/knapsack96/lsdolfts](https://github.com/knapsack96/lsdolfts). The environment used for the experiments is Kaggle, a Google cloud tool that provides free computational and storage resources. ## 4 Results and discussion In this section, the results obtained from the experiments are discussed, placing the proposed method within the ranking of the various baselines. In addition, some ablation studies are proposed to verify the consistency of the results in different set-ups. ### Ablation study To verify that the results remained consistent when changing the type of learner or the type of training data, some changes were made. In a second phase, we decided to use an unsupervised learner, in particular one based on fuzzy clustering. The main reason for this choice lies in the fact that the distributions of the trackers' scores are by nature overlapping: as already discussed in the **Trackers complementarity** section, even if one score is consistently above another, the IoU of the corresponding bounding boxes need not follow the same order. Fuzzy logic allows us to assign to a point a membership value for each set, i.e., to soften the classification task. This makes it possible to reduce, where feasible, the noise in the regions where the scores overlap. In the case of fuzzy c-means (FCM) as learner, no actual supervised annotation was made; after the clustering phase, the points were assigned to the cluster with the highest membership value, in order to maximize the classification accuracy. 
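The clustering-plus-hard-assignment step can be sketched with a minimal NumPy fuzzy c-means. This is an illustrative reconstruction (the actual experiments use the fuzzy-c-means library), with the parameters reported next: 3 clusters and fuzzifier \(m=2\); the toy data are synthetic score pairs, not the paper's scores.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns (centers, membership matrix U)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)            # each row sums to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distances of every point to every center (small eps avoids /0).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard FCM membership update: u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1))
        p = 2.0 / (m - 1.0)
        U = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
    return centers, U

# Toy 2D score points around three modes (tracker 0 better / tracker 1
# better / both low, i.e. an OoV-like region).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0.9, 0.2], 0.05, (50, 2)),
               rng.normal([0.2, 0.9], 0.05, (50, 2)),
               rng.normal([0.1, 0.1], 0.05, (50, 2))])
centers, U = fuzzy_c_means(X, n_clusters=3, m=2.0)
hard_labels = U.argmax(axis=1)   # hard assignment: highest membership wins
print(np.bincount(hard_labels))
```

The `argmax` at the end is the hard assignment described above: it turns the soft memberships back into one of the three decision classes.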
The FCM parameters are: a number of clusters equal to the number of trackers plus 1, so 3 in our case, and a degree of fuzzy overlap equal to 2 (the smallest possible for FCM). Other ablations refer to the inversion of the datasets for training and testing. Two experiments were then carried out using the two learners, DNN and FCM, setting the VOT-LT2022 dataset as the training set and LTB-50 as the test set. In this case, if an OoV is detected, the process mentioned in **Proposed model** is inverted, so the VITKT_M result is chosen in place of mlpLT. The other version of the pipeline (using the FCM) is visible in Fig. 6. The results of both methods with DNN and FCM learners are displayed in Table 1 when training on LTB-50 and testing on VOT-LT 2022, and vice versa in Table 2. For the inversion of the two datasets, the resolution of the system of inequalities defined in **Rationale behind DNN** was also tested, where this time \(N=168282\) (the number of frames of VOT-LT2022, Kristan et al. (2023)), \(\rho=0.52\) and \(\sigma=0.81\). Keeping the same constants as in the first experiment, the system admits a solution with \(b=\frac{4359687}{3682}\) and \(VC=\frac{3682}{25}\). Figure 6: As already shown in Fig. 5, the main trackers are mlpLT and VITKT_M, while the learner is unsupervised. In this case, it consists of a Fuzzy C-Means. The OoV case is addressed by choosing the VITKT_M outcome. ### Discussion Looking at the results in Table 1, the proposed method ranks, with both DNN and FCM learners, among the top three for recall (1st and 3rd, respectively), while VITKT_M is in second place. The same situation recurs in Table 2, where the method even takes 1st and 2nd place in recall, with the learners reversed: this time FCM obtains the best recall on LTB-50. In third place is mlpLT. 
Considering the F1-Score, it is interesting to note that in Table 1 the method with the DNN learner presents a higher value than mlpLT and, at worst, using FCM, a higher value than KeepTrack. The best F1-Score (obtained with the DNN) is in fifth place on VOT-LT 2022, after CoCoLoT, the extension of mlpLT. Similarly, in Table 2, it is important to note how the method with the FCM learner exceeds LT_DSE (the winner of the 2019 and 2020 editions), confirming the superiority of the fused approach. The method, for this metric, ranks 9th on the LTB-50. The case of precision is different: it turns out to be much lower than the recall, lowering the F1-Score average. In Table 1 it is around 7th place, while in Table 2 it drops to 11th. The context is defined by the trackers accepted and submitted to the respective editions. The results found on the LTB-50 dataset with the FCM learner are shown in Fig. 7, where the spatial distribution of the scores has as coordinates the confidence of mlpLT on the abscissa axis and that of VITKT_M on the ordinate axis. As described in the figure, in part **a)** the groundtruth distribution contains a high density of interference between classes, while in **b)** it is shown how the application of FCM manages to balance the three membership areas. It is clear how the application of an ML method may be necessary to reduce the overlap between the scores of the algorithms, both where the target is fully visible (yellow and red areas) and where it is not visible (green zone). The introduction of a learning phase on the scores also allows better control over the detection of OoVs, as shown in Table 3, where both of the best experiments were evaluated on the basis of the number of OoVs predicted correctly out of the total. In both cases, the percentage of true positive OoVs is approximately 66%. Figure 7: Spatial distribution of scores: a) The scores of mlpLT and VITKT_M mapped into a 2D space, highlighting in yellow the points that indicate a situation in which mlpLT was better, in red for VITKT_M and finally in green for an OoV. The points consist of the groundtruth calculated on the LTB-50 dataset. As can be seen, there are areas of high density subject to interference among classes; b) In the same score space defined in a), the points are clustered according to the fuzzy criterion and then a hard assignment is adopted, outlining three well-defined areas of belonging. The numerical results can be better interpreted by looking at Figs. 8 and 9, where 4 frames from two sequences of the two experiments are shown, with the groundtruth bounding boxes drawn in red and the predicted ones in blue. In Fig. 8 the model with the FCM learner is applied, while in Fig. 9 the one with the DNN learner; in both figures the decent quality of the tracking system compared to the groundtruth can be seen. Having obtained the above results, the conjecture can be confirmed that the proposed method enjoys two important properties: _model-independence_, i.e., the ability to improve results by combining different trackers independently of the type of learning chosen, and _data-independence_, i.e., the ability to keep the previous property unchanged while changing the training and test data. Figure 8: Visual results of the model with the FCM learner trained on VOT-LT2022 on a sequence of the LTB-50 dataset; in red are represented the groundtruth bounding boxes, in blue the predicted ones. Figure 9: Visual results of the model with the DNN learner trained on LTB-50 on a sequence of the VOT-LT2022 dataset; in red are represented the groundtruth bounding boxes, in blue the predicted ones. Figure 10: Graph encoding of the 1st-frame targets, where red bounding boxes mean visible nodes. The image comes from the following free and sharable video [https://www.pexels.com/video/people-walking-by-on-a-sidewalk-854100/](https://www.pexels.com/video/people-walking-by-on-a-sidewalk-854100/) 
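The rankings in Tables 1 and 2 follow the VOT long-term protocol of eqns. 13–16. A minimal sketch of that computation on synthetic per-frame data is given below; it is an illustrative reconstruction, with overlaps set to zero whenever the prediction or the groundtruth is absent:

```python
import numpy as np

def vot_lt_scores(conf, overlap, gt_present):
    """conf: per-frame confidence; overlap: per-frame IoU with the
    groundtruth (0 when prediction or groundtruth is absent);
    gt_present: per-frame bool, True when the target is visible.
    Returns (precision, recall, F1) at the F-maximizing threshold."""
    conf, overlap = np.asarray(conf, float), np.asarray(overlap, float)
    gt_present = np.asarray(gt_present, bool)
    n_g = gt_present.sum()                       # frames with a groundtruth
    best = (0.0, 0.0, 0.0)
    for tau in np.unique(conf):
        pred = conf >= tau                       # frames where a box is kept
        n_p = pred.sum()
        pr = overlap[pred].sum() / n_p if n_p else 0.0       # eqn. 14
        re = overlap[pred & gt_present].sum() / n_g          # eqn. 15
        f = 2 * pr * re / (pr + re) if pr + re else 0.0      # eqn. 16
        if f > best[2]:
            best = (pr, re, f)                   # eqn. 13: keep the max-F tau
    return best

# Toy sequence: 4 frames, the target disappears in frame 2.
conf = [0.9, 0.8, 0.3, 0.7]
overlap = [0.8, 0.6, 0.0, 0.7]   # IoU with GT; 0.0 where GT is absent
gt_present = [True, True, False, True]
print(vot_lt_scores(conf, overlap, gt_present))
```

On the toy sequence the best threshold drops the frame where the target is absent, which raises precision without losing recall; this is exactly the mechanism by which good OoV handling pays off in the tables.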
### Extension to multi-object tracking Similarly to single-object tracking, it is natural to think that the presented approach could be extended to multiple targets. In this case, the trackers to be considered for the fusion would be Multiple Object Trackers (MOT), for example the best ones at the state of the art, while the learner — since there are several targets, each with its own confidence score, its own presence/absence from the scene and its own identifier — could be structured, for instance, as a Graph Neural Network (GNN). Input graphs would have the individual targets as nodes and score functions as edges: an example would be a fully connected graph in which each edge is the average of the two scores weighted by the distance in pixels. In addition, nodes might have attributes such as their own unique identifier and their own _bit_ representing the binary state of presence or absence (OoV). In this sense, the GNN would perform a graph classification task among several MOTs, choosing the best graph to represent the output. When a node is not visible, its distance in pixels is ignored in the edge weight, as it could be unknown. In Fig. 10 a frame with the abovementioned graph encoding is shown, with black circles as nodes and black lines as edges. In Fig. 11 a consecutive frame of the same sequence is shown, with one of the targets no longer visible: the OoV has been depicted as a white circle in an arbitrary position, still connected to the other nodes. In an advanced, multi-target-oriented version of the proposed method, this idea could be used to test the conjecture and thus the properties of model-independence and data-independence. ## 5 Conclusion A new tracker fusion approach to the problem of long-term single-object tracking has been presented. 
In the manuscript we discussed the generalization of the number of tracker components of the system beyond the 2 usually employed by most models, the ability to improve an ensemble of trackers by adding a final learning phase on the produced scores, and the introduction of a classification of non-visible (OoV) targets. A conjecture on the new paradigm has been formulated, theorizing the properties of model-independence and data-independence, and an extension of the approach to multi-object tracking has been introduced. The model's results improved on two state-of-the-art benchmarks in terms of recall, and ranked among the top in terms of F1-score. Figure 11: Graph encoding of the n-th-frame targets, where red bounding boxes mean visible nodes and blue bounding boxes mean out-of-view nodes. The image comes from the following free and sharable video [https://www.pexels.com/video/people-walking-by-on-a-sidewalk-854100/](https://www.pexels.com/video/people-walking-by-on-a-sidewalk-854100/) ## CRediT authorship contribution statement **VM Scarrica:** Conceptualization of this study, Methodology, Software. **A Staiano:** Data curation, Writing - Original draft preparation.
2303.03733
Stabilization of the wave equation on larger-dimension tori with rough dampings
This paper deals with uniform stabilization of the damped wave equation. When the manifold is compact and the damping is continuous, the geometric control condition is known to be necessary and sufficient. In the case where the damping is a sum of characteristic functions of polygons on a two-dimensional torus, a result by Burq-G\'erard states that stabilization occurs if and only if every geodesic intersects the interior of the damped region or razes damped polygons on both sides. We give a natural generalization of their result to a sufficient condition on tori of any dimension $d \geq 3$. In some particular cases, we show that this sufficient condition can be weakened.
Marc Rouveyrol
2023-03-07T08:47:58Z
http://arxiv.org/abs/2303.03733v4
# Stabilization of the wave equation on larger-dimension tori with rough dampings ###### Abstract. This paper deals with uniform stabilization of the damped wave equation. When the manifold is compact and the damping is continuous, the geometric control condition is known to be necessary and sufficient. In the case where the damping is a sum of characteristic functions of polygons on a two-dimensional torus, a result by Burq-Gerard states that stabilization occurs if and only if every geodesic intersects the interior of the damped region or razes damped polygons on both sides. We give a natural generalization of their result to a sufficient condition on tori of any dimension \(d\geq 3\). In some particular cases, we show that this sufficient condition can be weakened. ###### Contents * 1 Notation and main result * 2 First microlocalization * 3 Non-concentration estimates * 4 Proof of Theorem 4 * 4.1 Second microlocal calculus * 4.2 The model case : 4 damped prisms in a tunnel * 4.3 Reduction of the general case and conclusion * 4.4 \(\zeta.\partial_{z}\) vector field over the sphere at infinity * 5 A more precise example in dimension 3 * 5.1 Directional second microlocalizations * 5.2 Third microlocalization near isolated undamped normal directions ## 1. Notation and main result Consider the damped wave (or Klein-Gordon) equation \[(\partial_{t}^{2}-\Delta_{g}+a(x)\partial_{t}+m(x))u=0,(u|_{t=0},\partial_{t}u|_{ t=0})=(u_{0},u_{1})\in H^{1}(M)\times L^{2}(M), \tag{1.1}\] where \((M,g)\) denotes a smooth compact Riemannian manifold without boundary, \(g\) is the manifold's metric, \(\Delta_{g}\) is the Laplace operator on \(M\), and the damping \(a\) and potential \(m\) are two non-negative \(L^{\infty}\) functions over \(M\). 
The energy \[E_{m}(u)(t)=\frac{1}{2}\int_{M}(|\nabla_{g}u|_{g}^{2}+|\partial_{t}u|^{2}+m|u|^{2})d\operatorname{vol}_{g} \tag{1.2}\] is then decaying, as \[E_{m}(u)(t)=E_{m}(u)(0)-\int_{0}^{t}\int_{M}2a(x)|\partial_{t}u(t,x)|^{2}d\operatorname{vol}_{g}(x)dt.\] We say that _uniform stabilization_ holds for the damping \(a\) if one of the following equivalent conditions is satisfied : 1. There exists a rate \(f(t)\) such that \(\lim_{t\to+\infty}f(t)=0\) and for any \((u_{0},u_{1})\in H^{1}(M)\times L^{2}(M)\), \[E_{m}(u)(t)\leq f(t)E_{m}(u)(0).\] 2. There exist some constants \(C,c>0\) such that for any \((u_{0},u_{1})\in H^{1}(M)\times L^{2}(M)\), \[E_{m}(u)(t)\leq Ce^{-ct}E_{m}(u)(0).\] 3. There exist some \(T>0\) and \(C>0\) such that for any \((u_{0},u_{1})\in H^{1}(M)\times L^{2}(M)\), if \(u\) is a solution of the damped wave equation (1.1), then \[E_{m}(u)(0)\leq C\int_{0}^{T}\int_{M}2a(x)|\partial_{t}u|^{2}d\operatorname{vol}_{g}.\] 4. There exist some \(T>0\) and \(C>0\) such that for any \((u_{0},u_{1})\in H^{1}(M)\times L^{2}(M)\), if \(u\) is a solution of the _un_damped wave equation \[(\partial_{t}^{2}-\Delta+m)u=0,\quad(u|_{t=0},\partial_{t}u|_{t=0})=(u_{0},u_{1})\in H^{1}(M)\times L^{2}(M),\] then \[E_{m}(u)(0)\leq C\int_{0}^{T}\int_{M}2a(x)|\partial_{t}u|^{2}d\operatorname{vol}_{g}.\] By the HUM method, uniform stabilization is equivalent to observability estimates (see Proposition 2.1 for a precise statement) and to controllability of the equation. We refer to [1, Chapters 3 and 4] for proofs concerning the wave equation and its controllability properties. In the case where the damping \(a\) is continuous, a necessary and sufficient condition for uniform stabilization is given by the following landmark result : **Theorem 1** (Geometric control condition, [14, 15, 16]).: _Let \(m\geq 0\). Assume that the damping \(a\) is continuous. 
For \(\rho_{0}=(x_{0},\xi_{0})\in S^{*}M\) (the unit cotangent bundle of \(M\)), let \(\gamma_{\rho_{0}}(s)\) denote the geodesic starting from \(x_{0}\) in the (co-)direction \(\xi_{0}\). The damping \(a\) then stabilizes uniformly the wave equation if and only if :_ (GCC) \[\exists T,c>0\text{ such that }\inf_{\rho_{0}\in S^{*}M}\int_{0}^{T}a(\gamma_{ \rho_{0}}(s))ds\geq c,\] _or equivalently if every geodesic \(\gamma_{\rho_{0}}(s)\) intersects \(\{a>0\}\) in time \(T\)._ In the case where \(a\) is merely \(L^{\infty}\), the following classical result is a consequence of Bardos, Lebeau and Rauch's work : **Theorem 2** ([1], [1]-Theorem 2).: \(-\) _Assume that \(0\leq a\in L^{\infty}(M)\). Then the strong geometric control condition_ (SGCC) \[\exists T,c>0\text{ such that }\forall\rho_{0}\in S^{*}\mathbb{T}^{d}, \exists s\in(0,T),\delta>0\] _such that \[a\geq c\text{ a.e. over }B(\gamma_{\rho_{0}}(s),\delta)\]_ _is_ _sufficient_ _for uniform stabilization, and the weak geometric control condition_ (WGCC) \[\exists T>0\text{ such that }\forall\rho_{0}\in S^{*}\mathbb{T}^{d}, \exists s\in(0,T)\text{ such that }\gamma_{\rho_{0}}(s)\in\operatorname{supp}(a),\] _where \(\operatorname{supp}(a)\) is the support of \(a\) in the distributional sense, is **necessary** for uniform stabilization._ Finding a necessary and sufficient condition in between (SGCC) and (WGCC) in the case of \(L^{\infty}\) dampings yet remains an open problem. In [15], Gilles Lebeau proved that uniform stabilization holds on the sphere \(\mathbb{S}^{d}=\{x\in\mathbb{R}^{d+1},|x|=1\}\) when \(a=1\) over the half-sphere. Hui Zhu generalized this result to Zoll surfaces of revolution in [14]. See also [1, Appendix B] for a case where the damped region \(\{a=1\}\) does not satisfy (WGCC). 
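As a purely illustrative numerical sketch (not part of the argument above; the damping, initial data and discretization are chosen arbitrarily): on the one-dimensional torus every geodesic sweeps the whole circle, so any damping bounded below on an arc satisfies (GCC), and a crude finite-difference simulation of the damped wave equation (1.1) with \(m=0\) exhibits the decay of the energy (1.2).

```python
import numpy as np

# Illustrative sketch only (parameters chosen arbitrarily, not from the
# paper): semi-implicit finite differences for the damped wave equation
#   u_tt - u_xx + a(x) u_t = 0   on T^1 = R/2piZ, with m = 0.
# On T^1 every geodesic sweeps the whole circle, so a damping with
# a >= 1 on an arc satisfies (GCC), and the energy
#   E(t) = (1/2) int (|u_x|^2 + |u_t|^2) dx
# is expected to decay exponentially.

N = 256
dx = 2 * np.pi / N
x = np.arange(N) * dx
dt = 0.4 * dx                                  # CFL: dt < dx (unit speed)
a = np.where((x > 0.0) & (x < np.pi / 2), 1.0, 0.0)   # damped arc

u = np.exp(-10.0 * (x - np.pi) ** 2)           # bump away from the arc
v = np.zeros(N)                                # u_t

def lap(w):
    return (np.roll(w, -1) - 2 * w + np.roll(w, 1)) / dx ** 2

def energy(u, v):
    ux = (np.roll(u, -1) - u) / dx
    return 0.5 * dx * np.sum(ux ** 2 + v ** 2)

energies = [energy(u, v)]
for _ in range(int(60.0 / dt)):
    v = (v + dt * lap(u)) / (1.0 + dt * a)     # damping treated implicitly
    u = u + dt * v
    energies.append(energy(u, v))

decay_ratio = energies[-1] / energies[0]       # drops by orders of magnitude
```

With these parameters the energy drops by several orders of magnitude over the simulated time window, consistent with the exponential decay of condition 2 above.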
A necessary and sufficient condition for uniform stabilization was given by Nicolas Burq and Patrick Gerard in [1], in the case where \(M\) is a \(2\)-dimensional torus \(\mathbb{T}^{2}=\mathbb{R}^{2}/A\mathbb{Z}\times B\mathbb{Z}\) with \(A,B>0\), and the damping \(a\) is a sum of characteristic functions of disjoint polygons. Their result is stated thereafter and illustrated in Figure 1. **Theorem 3** ([1], Assumption 1.2 and Theorem 4).: _- The damping \(a\) stabilizes the wave equation if and only if the following assumption holds : there exists some \(T>0\) such that all geodesics (straight lines) of length \(T\) either encounter the interior of one of the polygons or follow for some time one of the sides of a polygon on the left and for some time one of the sides of a polygon (possibly the same) on the right._ The present article is dedicated to exploring generalizations of Theorem 3 to higher-dimensional tori. Like in [1], we use dampings \(a\) that are characteristic functions. This is motivated both by the fact that applied control historically uses characteristic functions, and by a will to understand the concentration and non-concentration properties of waves in situations where geodesics raze the control region without entering its interior. The case of tori is a favorable one for this study because the absence of curvature or boundary simplifies the geometry, and because it ensures that quasimodes of the Laplace operator satisfy good non-concentration properties (see Proposition 3.1). We identify a \(d\)-dimensional torus \(\mathbb{T}^{d}\) with a cuboid \(\prod_{j=1}^{d}[0,A_{j}]\), \(A_{j}>0\), with appropriate identification of the faces. We call a _polyhedron_ of \(\mathbb{T}^{d}\) the intersection of the cuboid with a finite number of half-spaces, and we assume that \(a\) is a finite sum of characteristic functions of disjoint such polyhedrons. 
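To make the razing phenomenon of Theorem 3 concrete, here is a toy computation (an illustration with assumed geometry, not taken from [1]): on \(\mathbb{T}^{2}=[0,1)^{2}\) with \(a\) the characteristic function of a single square, a geodesic that follows a side of the square stays in \(\operatorname{supp}(a)\), so (WGCC) holds along it, yet the time average \(\frac{1}{T}\int_{0}^{T}a(\gamma(s))ds\) vanishes, so the lower bound required by (SGCC) fails there.

```python
import numpy as np

# Toy computation (assumed geometry, not taken from [1]): on T^2 = [0,1)^2
# let a be the characteristic function of the open square (1/4,3/4)^2.

def a(x, y):
    return float(0.25 < x % 1.0 < 0.75 and 0.25 < y % 1.0 < 0.75)

def time_average(x0, y0, xi, T=10.0, n=20001):
    # (1/T) int_0^T a(gamma(s)) ds along gamma(s) = (x0,y0) + s*xi, sampled
    s = np.linspace(0.0, T, n)
    return sum(a(x0 + si * xi[0], y0 + si * xi[1]) for si in s) / n

avg_through = time_average(0.0, 0.5, (1.0, 0.0))    # crosses the interior
avg_razing = time_average(0.0, 0.25, (1.0, 0.0))    # follows the lower side
# avg_through is close to 1/2, while avg_razing vanishes although the
# razing geodesic stays inside supp(a): (WGCC) holds along it, (SGCC) fails.
```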
This choice comes from the fact that it is always possible to concentrate solutions of the wave equation near a geodesic that only has finite-order contacts with the damped region (see for example [1, Section 5]). Thus, geodesics which violate (SGCC) need to have infinite-order contacts with the damped zone for stabilization to occur. By considering a damping equal to \(1\) over a finite union of polyhedrons, we essentially cover all cases where the damping is a characteristic function. On \(\mathbb{T}^{2}\), the conormal space to a geodesic at a point is one-dimensional, hence the unit sphere of (co-)normal directions to the geodesic is reduced to two points which correspond to the right and left sides of the geodesic. The assumption of Theorem 3 thus naturally generalizes to assuming that every geodesic is damped in each of its normal directions. Our first result states that this condition is indeed sufficient for uniform stabilization. It holds on any torus \(\mathbb{T}^{d}=\mathbb{R}^{d}/(A_{1}\mathbb{Z}\times\cdots\times A_{d}\mathbb{Z})\) of dimension \(d\geq 3\). **Theorem 4**.: \(-\) _Assume that every geodesic of \(\mathbb{T}^{d}\) either intersects the interior of \(\operatorname{supp}(a)\), or is damped in every normal direction, i.e._ \[\begin{split}&\text{for all }\rho_{0}=(X_{0},\Xi_{0})\in S^{*}\mathbb{T}^{d},\ \Xi\text{ orthogonal to }\Xi_{0}\text{ with }|\Xi|=1,\\ &\text{there exist some positive }\delta_{0}\text{ and some interval }I\subset\mathbb{R}\text{ such that}\\ &\gamma_{\rho_{0}}(s)+\delta\Xi\in\operatorname{Int}(\operatorname{supp}(a)),\quad\forall s\in I,\quad\forall\delta\in(0,\delta_{0}].\end{split} \tag{1.3}\] _Then stabilization holds._ Figure 1. N. Burq and P. Gérard’s checkerboards on \(\mathbb{T}^{2}\) [2]: the damping \(a\) is equal to \(1\) in the colored region and \(0\) elsewhere. For all these examples, (WGCC) is satisfied but not (SGCC). 
The dashed lines are geodesics which violate the condition of Theorem 3. Figure 2. In these three-dimensional illustrations, the damping \(a\) is equal to \(1\) in the colored zone and \(0\) elsewhere. Figure 2.a illustrates the framework of Theorem 4, as the red geodesic satisfies condition (1.3) but does not intersect the interior of \(\operatorname{supp}(a)\). In Figure 2.b, the red geodesic does not satisfy this condition as the directions \(\Xi=(\pm 1,0)\) or \((0,\pm 1)\) are not damped in the sense of Definition 1.1. In Section 5, we prove that uniform stabilization can still occur in cases involving such geodesics. Note that any geodesic which encounters the interior of \(\operatorname{supp}(a)\) also satisfies assumption (1.3). A typical example of a geodesic satisfying the assumption without entering the interior of \(\operatorname{supp}(a)\) is the red one in Figure 2.a. The proof of Theorem 4 consists in studying precisely the concentration of high-frequency quasimodes of the Laplacian near such a geodesic (specifically near the red geodesic of Figure 3, see Section 4.2) using second microlocalization techniques, then reducing the general case to this model case. Condition (1.3) naturally leads to the following definition : **Definition 1.1**.: \(-\) _Consider a geodesic \(\gamma_{\rho_{0}}\) of \(\mathbb{T}^{d}\) with direction \(\Xi_{0}\in\mathbb{S}^{d-1}\), and some \(\Xi\in\mathbb{S}^{d-1}\) orthogonal to \(\Xi_{0}\). We say that \(\gamma_{\rho_{0}}\) is damped in the normal direction \(\Xi\) if there exist some positive \(\delta_{0}\) and some interval \(I\subset\mathbb{R}\) such that_ \[\gamma_{\rho_{0}}(s)+\delta\Xi\in\operatorname{Int}(\operatorname{supp}(a)),\quad\forall s\in I,\quad\forall\delta\in(0,\delta_{0}]. \tag{1.4}\] For example, in Figure 2.b, the red geodesic is damped in all but four normal directions. 
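Definition 1.1 can be checked by brute force for simple polyhedral dampings. The following sketch assumes a toy geometry (a single damped box, not the exact configuration of Figure 2.b, and the interval \(I\) of the definition replaced by a discrete sample): for the geodesic running along one edge of the box, the diagonal normal direction pointing into the box is damped, while a direction grazing a face of the box, or pointing away from it, is not.

```python
import numpy as np

# Toy geometry (assumed, not the exact configuration of the figures):
# on T^3 = [0,1)^3 damp the open box (1/4,3/4)^3 and take the geodesic
# gamma(s) = (s, 1/4, 1/4), which runs along one edge of the box.
# The interval I of Definition 1.1 is replaced by a crude discrete
# surrogate: a single admissible s sampled on a grid.

def in_interior(p):
    return all(0.25 < c % 1.0 < 0.75 for c in p)

def is_damped_in_direction(xi, delta0=5e-3, n_s=41, n_delta=5):
    xi = np.asarray(xi, dtype=float)
    for s in np.linspace(0.3, 0.7, n_s):
        base = np.array([s, 0.25, 0.25])
        if all(in_interior(base + delta * xi)
               for delta in np.linspace(delta0 / n_delta, delta0, n_delta)):
            return True
    return False

damped_diag = is_damped_in_direction(np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0))
damped_face = is_damped_in_direction([0.0, 1.0, 0.0])    # grazes a face
damped_away = is_damped_in_direction([0.0, -1.0, 0.0])   # points away
```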
Note also that since the interval \(I\) is independent of \(\delta\) in the definition, a geodesic can be damped in some direction only if it razes an edge or a face of some polyhedron in that direction. Punctual contacts between a damped polyhedron and a geodesic are not considered sufficient for the geodesic to be damped in the normal directions that enter the polyhedron, so as to avoid phenomena resembling that of Figure 1.c. We emphasize that in dimension 2, assumption (1.3) is exactly the necessary and sufficient condition found by N. Burq and P. Gerard in [1]. In dimensions 3 and greater, the enriched geometry (specifically the fact that the unit sphere of the conormal space to a geodesic at a point is infinite) permits uniform stabilization under weaker conditions. We illustrate this by proving uniform stabilization in a case based on the shape of the damping in Figure 2.b, with some geodesics damped in all but a finite number of directions. This is achieved by refining the techniques used to prove our main result, most notably using a third microlocalization procedure near the red geodesic. This example leads us to conjecture that a necessary and sufficient condition for uniform stabilization in our framework is that every geodesic be damped on a full-Lebesgue-measure set of normal directions. The paper is constructed as follows. In Section 2, we recall the first microlocalization procedure for the wave equation and introduce relevant tools. In Section 3, we prove non-concentration estimates for Laplacean quasimodes on \(\mathbb{T}^{d}\). Section 4 is dedicated to the proof of Theorem 4 and Section 5 to the refined example and techniques based on Figure 2.b. In the concluding paragraphs of Section 5, we discuss possible generalizations of this example using our techniques. _Acknowledgments_ The author wishes to thank Nicolas Burq for his rich and careful advice during the writing of this article and Antoine Prouff for the many discussions about it. 
This work was funded by a CDSN PhD grant from École Normale Supérieure Paris-Saclay via the Hadamard Doctoral School of Mathematics. ## 2. First microlocalization The goal of this section is to introduce the first microlocal measure and the observability estimate that will be used later on. We do so by proving that (SGCC) implies uniform stabilization. The result of this section is valid on any compact Riemannian manifold without boundary \(M\) and for any non-negative damping function \(a\in L^{\infty}(M)\). For simplicity, we prove it on \(d\)-dimensional tori. We refer to [14, Sections 2, 5 and Appendix A] for statements and proofs of (SGCC) and (WGCC) in the general case. Let us recall the statement of the strong geometric control condition : **Theorem 5**.: \(-\) _Assume that \(0\leq a\in L^{\infty}(\mathbb{T}^{d})\). Then the strong geometric control condition_ (SGCC) \[\begin{split}\exists T,c>0\text{ such that }&\forall\rho_{0}\in S^{*}\mathbb{T}^{d},\exists s\in(0,T),\delta>0\\ \text{such that }& a\geq c\text{ a.e. over }B(\gamma_{\rho_{0}}(s),\delta)\end{split}\] _is **sufficient** for uniform stabilization._ Proof.: Let us assume that (SGCC) holds. The following result states that uniform stabilization is equivalent to an observability estimate for solutions of the Helmholtz equation in the high-frequency regime. 
**Proposition 2.1** ([14], Proposition A.5).: \(-\) _Consider an \(L^{\infty}\) non-negative function \(a\) such that \(\int_{\mathbb{T}^{d}}a(x)dx>0\), then \(a\) uniformly stabilizes the wave equation (1.1) if and only if_ (Obs) \[\begin{split}\exists C,h_{0}>0\text{ such that }&\forall 0<h<h_{0},\quad\forall(u,f)\in H^{2}(\mathbb{T}^{d})\times L^{2}(\mathbb{T}^{d}),\\ (h^{2}\Delta+1)u=f,\text{ there holds }&\|u\|_{L^{2}}\leq C\left(\|a^{\frac{1}{2}}u\|_{L^{2}}+\frac{1}{h}\|f\|_{L^{2}}\right).\end{split}\] We then study microlocal measures associated to a sequence of quasimodes that violate the observability estimate to prove it by contradiction, following the idea of [14, 15]. Assuming that (Obs) does not hold, we obtain sequences \((h_{n})\to 0\) and \((u_{n},f_{n})\) that satisfy \[(h_{n}^{2}\Delta+1)u_{n}=f_{n},\quad\|u_{n}\|_{L^{2}}=1,\quad\|a^{\frac{1}{2}}u_{n}\|_{L^{2}}=o(1)_{n\to+\infty},\quad\|f_{n}\|_{L^{2}}=o(h_{n})_{n\to+\infty}.\] Given a symbol \(q\in C_{c}^{\infty}(T^{*}\mathbb{T}^{d})\), we define its quantization as follows. Using a partition of unity, we can assume that \(q\) is supported in a local chart. In this chart, we define \[\text{Op}_{h}(q)u(X)=\frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}e^{i(X-Y).\Xi}q(X,h\Xi)\zeta(Y)u(Y)dYd\Xi \tag{2.1}\] where \(\zeta=1\) in a neighborhood of the support of \(q\). Then, up to extracting a subsequence, \(u_{n}\) admits a semiclassical measure \(\nu\) over \(T^{*}\mathbb{T}^{d}\) which satisfies \[\lim_{n\to+\infty}(\text{Op}_{h_{n}}(q)u_{n},u_{n})_{L^{2}}=\langle\nu,q\rangle.\] \(u_{n}\) is \(h_{n}\)-oscillating, as \[\int_{|\Xi|\geq\frac{R}{h_{n}}}|\hat{u}_{n}(\Xi)|^{2}d\Xi\leq R^{-4}\|h_{n}^{2}|\Xi|^{2}\hat{u}_{n}\|_{L^{2}}^{2}\leq\frac{C}{R^{4}}\to_{R\to+\infty}0,\] thus \(\nu\) has total mass \(1\). 
The asymptotic \(\|(h_{n}^{2}\Delta+1)u_{n}\|_{L^{2}}=o(h_{n})\) gives that \(\nu\) is supported in the characteristic set \[\{(X,\Xi)\in T^{*}\mathbb{T}^{d},|\Xi|^{2}=1\}\] and invariant by the bicharacteristic flow : \[\Xi.\nabla_{X}\nu=0.\] In particular, \(\nu\) can be considered as a measure over the unit cotangent bundle of the torus \(S^{*}\mathbb{T}^{d}=T^{*}\mathbb{T}^{d}/\mathbb{R}_{+}^{*}\). We refer to [10, 11] and [13, Chapter 5] for the construction and properties of semiclassical measures. Denote then \[S=\{X\in\mathbb{T}^{d},\text{ there exist }\delta>0,c>0\text{ such that }a\geq c\text{ on }B(X,\delta)\}.\] Taking some \(X_{0}\in S\) and corresponding \(\delta,c\), we get that for every symbol \(q(X,\Xi)\) supported in \(B(X_{0},\delta)\) in the \(X\) variable, \[\begin{split}|(\operatorname{Op}_{h_{n}}(q)u_{n},u_{n})_{L^{2}}|&=|(\operatorname{Op}_{h_{n}}(q)u_{n},\mathbb{1}_{B(X_{0},\delta)}u_{n})_{L^{2}}|\\ &\leq\|\operatorname{Op}_{h_{n}}(q)u_{n}\|_{L^{2}}\|\mathbb{1}_{B(X_{0},\delta)}u_{n}\|_{L^{2}}\\ &\leq\frac{1}{c}\|\operatorname{Op}_{h_{n}}(q)u_{n}\|_{L^{2}}\|au_{n}\|_{L^{2}}.\end{split} \tag{2.2}\] \(\operatorname{Op}_{h_{n}}(q)\) is bounded uniformly with respect to \(h_{n}\) over \(L^{2}\), so that \((\operatorname{Op}_{h_{n}}(q)u_{n})\) is a bounded sequence in \(L^{2}\). Since \(a\in L^{\infty}\), \(\|au_{n}\|_{L^{2}}\leq\|a\|_{L^{\infty}}^{\frac{1}{2}}\|a^{\frac{1}{2}}u_{n}\|_{L^{2}}=o(1)\), so we get that \(\langle\nu,q\rangle=0\). Hence, \(\nu\) vanishes in a neighborhood of every point \(\rho\in S_{X}^{*}\mathbb{T}^{d}\) for all \(X\in S\). By (SGCC), every bicharacteristic contains one such \(\rho\), so that \(\nu\) is identically \(0\). This contradicts the fact that \(\nu\) has total mass \(1\). 
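The two properties just used, that \(\nu\) lives on \(\{|\Xi|^{2}=1\}\) and gives no mass to symbols supported away from it, can be observed on a toy computation (an illustrative sketch with an assumed discretization, not part of the proof): for the exact quasimode \(u_{h}(x)=e^{ix/h}\) on \(\mathbb{T}^{1}\) with \(h=1/n\), the quantization (2.1) of a product symbol \(q(X,\Xi)=\varphi(X)\psi(\Xi)\) is \(\varphi(x)\psi(hD)\), and \((\operatorname{Op}_{h}(q)u_{h},u_{h})_{L^{2}}=\psi(1)\cdot\frac{1}{2\pi}\int\varphi\): all the mass of the limit measure sits at \(\Xi=1\).

```python
import numpy as np

# Illustrative sketch (assumed discretization, not part of the proof):
# exact quasimode u_h(x) = e^{ix/h} on T^1 = R/2piZ with h = 1/n, and the
# quantization (2.1) of the product symbol q(X,Xi) = phi(X) psi(Xi),
# which acts as phi(x) psi(hD).  The limit measure is (dx/2pi) x delta_{Xi=1}.

N, n = 512, 32
h = 1.0 / n
dx = 2 * np.pi / N
x = np.arange(N) * dx
k = np.fft.fftfreq(N, d=1.0 / N)               # integer wavenumbers

u = np.exp(1j * n * x) / np.sqrt(2 * np.pi)    # ||u||_{L^2} = 1

def op(phi_vals, psi, w):
    # left quantization of phi(X) psi(Xi): Fourier multiplier, then phi
    return phi_vals * np.fft.ifft(psi(h * k) * np.fft.fft(w))

phi = 1.0 + np.cos(x)                          # mean value 1
psi_near = lambda xi: np.exp(-4.0 * (xi - 1.0) ** 2)   # psi_near(1) = 1
psi_far = lambda xi: np.exp(-4.0 * xi ** 2)            # small at xi = 1

mass_near = np.real(dx * np.vdot(u, op(phi, psi_near, u)))   # ~ 1
mass_far = np.real(dx * np.vdot(u, op(phi, psi_far, u)))     # ~ e^{-4}
```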
Theorem 2 will later be used in the following form, which is a direct consequence of the proof : **Corollary 2.2**.: \(-\) _If a bicharacteristic contains some \(\rho=(X,\Xi)\in S^{*}\mathbb{T}^{d}\) with \(X\in\operatorname{Int}(\operatorname{supp}(a))\), then the first microlocal measure \(\nu\) has no support over this bicharacteristic._ ## 3. Non-concentration estimates In this section, we prove the non-concentration estimates that will determine the scaling of the second and third microlocalization procedures. When performing the second microlocalization procedure, these estimates allow us to avoid dealing with the trace-operator-valued part of the 2-microlocal measure supported at a finite distance from the origin (see [1, Section 3], [14, Theorem 1]). Our result generalizes the estimate obtained by N. Burq and P. Gérard on the 2-dimensional torus in [11, Section 3A] (itself a generalization of [13, Section 3]). The proof follows theirs. Recall that \(\|(h_{n}^{2}\Delta+1)u_{n}\|_{L^{2}}=o(h_{n})\) and denote \[\epsilon(h_{n})=\max\left(h_{n}^{\frac{1}{6}},\left(\frac{\|(h_{n}^{2}\Delta+1)u_{n}\|}{h_{n}}\right)^{\frac{1}{6}}\right). \tag{3.1}\] \(\epsilon(h_{n})\) satisfies : \[h_{n}^{-1}\epsilon^{-6}(h_{n})\|(h_{n}^{2}\Delta+1)u_{n}\|_{L^{2}}\leq 1\text{ and }\lim_{n\to+\infty}\epsilon(h_{n})=0. \tag{3.2}\] The result we prove is the following : **Proposition 3.1**.: \(-\) _On the torus \(\mathbb{T}^{d}\), assume that \(\|u_{n}\|_{L^{2}(\mathbb{T}^{d})}=\mathcal{O}(1)\) and (3.2) is satisfied, then there exists a constant \(C>0\) such that_ \[\forall n\in\mathbb{N},\|u_{n}\|_{L^{2}(\{|x_{j}|\leq h_{n}^{\frac{1}{2}}\epsilon^{-2}(h_{n})\})}\leq C\epsilon^{\frac{1}{2}}(h_{n}),\quad j=1,\ldots,d.\] Note that \(\epsilon(h_{n})\geq h_{n}^{\frac{1}{6}}\) so the width of the slice tends to zero as \(n\) goes to \(+\infty\). We prove the result for \(j=1\) without loss of generality to simplify notations. 
In this proof only, we denote \(X=(x_{1},y^{\prime})\in\mathbb{T}\times\mathbb{T}^{d-1}\) and \(\Xi=(\xi_{1},\eta^{\prime})\in\mathbb{R}\times\mathbb{R}^{d-1}\). Proposition 3.1 will result from the following statement : **Proposition 3.2**.: \(-\) _There exist some positive \(C\) and \(h_{0}\) such that for any \(0<h<h_{0}\), \(1\leq\beta\leq h^{-\frac{1}{2}}\) and any \((u,f)\in H^{2}\times L^{2}\) satisfying_ \[(h^{2}(\partial_{x_{1}}^{2}+\Delta_{y^{\prime}})+1)u=f,\] _the following estimate holds :_ \[\|u\|_{L^{\infty}(\{|x_{1}|\leq\beta h^{1/2}\};L^{2}_{y^{\prime}})}\leq C\beta^{-\frac{1}{2}}h^{-\frac{1}{4}}\left(\|u\|_{L^{2}_{x_{1},y^{\prime}}(\{\beta h^{1/2}\leq|x_{1}|\leq 2\beta h^{1/2}\})}+h^{-1}\beta^{2}\|f\|_{L^{2}_{x_{1},y^{\prime}}(\{|x_{1}|\leq 2\beta h^{1/2}\})}\right). \tag{3.3}\] Let us show that Proposition 3.2 implies Proposition 3.1. We apply Hölder's inequality in the \(x_{1}\) variable and Proposition 3.2 with \(\beta=\epsilon^{-3}(h)\leq h^{-\frac{1}{2}}\) : \[\begin{split}\|u\|_{L^{2}(\{|x_{1}|\leq h^{1/2}\epsilon^{-2}(h)\})}&\leq h^{\frac{1}{4}}\epsilon^{-1}(h)\|u\|_{L^{\infty}(\{|x_{1}|\leq h^{\frac{1}{2}}\epsilon^{-3}(h)\};L^{2}_{y^{\prime}})}\\ &\leq C\epsilon^{\frac{1}{2}}(h)\left(\|u\|_{L^{2}_{x_{1},y^{\prime}}(\{h^{1/2}\epsilon^{-3}(h)\leq|x_{1}|\leq 2h^{1/2}\epsilon^{-3}(h)\})}+h^{-1}\epsilon^{-6}(h)\|f\|_{L^{2}_{x_{1},y^{\prime}}(\{|x_{1}|\leq 2h^{1/2}\epsilon^{-3}(h)\})}\right)\\ &\leq C\epsilon^{\frac{1}{2}}(h)(\|u\|_{L^{2}}+h^{-1}\epsilon^{-6}(h)\|f\|_{L^{2}})\\ &\leq 2C\epsilon^{\frac{1}{2}}(h).\end{split}\] We now prove Proposition 3.2. Denote \(v\) (resp. \(g\)) the partial Fourier transform of \(u\) (resp. \(f\)) with respect to the \(y^{\prime}\) variable. For a fixed \(x_{1}\), the Plancherel equality gives \[\|v(x_{1},.)\|_{L^{2}_{\eta^{\prime}}}=K\|u(x_{1},.)\|_{L^{2}_{y^{\prime}}},\] where \(K\) depends on the periods of the torus in the \(y^{\prime}\) directions. 
Inequality (3.3) is then equivalent to \[\|v\|_{L^{\infty}(\{|x_{1}|\leq\beta h^{1/2}\};L^{2}_{\eta^{\prime}})}\leq C\beta^{-\frac{1}{2}}h^{-\frac{1}{4}}\left(\|v\|_{L^{2}(\{\beta h^{1/2}\leq|x_{1}|\leq 2\beta h^{1/2}\};L^{2}_{\eta^{\prime}})}\right.\\ \left.+h^{-1}\beta^{2}\|g\|_{L^{2}(\{|x_{1}|\leq 2\beta h^{1/2}\};L^{2}_{\eta^{\prime}})}\right). \tag{3.4}\] Besides, by the Minkowski inequality, \[\|v\|_{L^{\infty}_{x_{1}}(L^{2}_{\eta^{\prime}})}\leq\|v\|_{L^{2}_{\eta^{\prime}}(L^{\infty}_{x_{1}})}\] so it is enough to prove the following one-dimensional result : **Proposition 3.3** ([1], Proposition 3.3).: \(-\) _There exist some positive \(C\) and \(h_{0}\) such that for any \(0<h<h_{0}\), \(\eta^{\prime}\in\mathbb{R}^{d-1}\), \(1\leq\beta\leq h^{-\frac{1}{2}}\) and any \((v,g)\) satisfying_ \[\left(h^{2}\frac{d^{2}}{dx_{1}^{2}}+1-h^{2}|\eta^{\prime}|^{2}\right)v=g,\] _we have_ \[\|v\|_{L^{\infty}(\{|x_{1}|\leq\beta h^{\frac{1}{2}}\})}\leq C\beta^{-\frac{1}{2}}h^{-\frac{1}{4}}\left(\|v\|_{L^{2}(\{\beta h^{\frac{1}{2}}\leq|x_{1}|\leq 2\beta h^{\frac{1}{2}}\})}+h^{-1}\beta^{2}\|g\|_{L^{2}(\{|x_{1}|\leq 2\beta h^{\frac{1}{2}}\})}\right). \tag{3.5}\] Integrating the square of (3.5) in the \(\eta^{\prime}\) variable gives (3.4). The proof of the one-dimensional estimate (3.5) is exactly that of [1]. We include it for the sake of completeness. 
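Before turning to the proof, a quick numerical sanity check (an illustration only, with arbitrarily chosen parameters): after the rescaling performed below, estimate (3.5) reduces to a bound of the form \(\|v\|_{L^{\infty}(-1,1)}\leq C\|v\|_{L^{2}(\{1\leq|z|\leq 2\})}\) for homogeneous solutions of \((\partial_{z}^{2}+\tau)v=0\). Testing explicit solutions in both regimes suggests a uniform constant: the two-sided annulus always captures a non-negligible part of the solution.

```python
import numpy as np

# Illustration only (arbitrary parameters): test the ODE bound
#   ||v||_{L^inf(-1,1)} <= C ||v||_{L^2({1 <= |z| <= 2})}
# on explicit homogeneous solutions of (d^2/dz^2 + tau) v = 0:
# cos/sin(sigma z) for tau = sigma^2 and exp(sigma z) for tau = -sigma^2.

z = np.linspace(-2.0, 2.0, 8001)
dz = z[1] - z[0]
inner = np.abs(z) <= 1.0
annulus = np.abs(z) >= 1.0

def ratio(v):
    linf = np.max(np.abs(v[inner]))
    l2 = np.sqrt(dz * np.sum(v[annulus] ** 2))
    return linf / l2

ratios = []
for sigma in [0.5, 1.0, 2.0, 5.0, 20.0]:
    ratios.append(ratio(np.cos(sigma * z)))
    ratios.append(ratio(np.sin(sigma * z)))
    ratios.append(ratio(np.exp(sigma * z)))
max_ratio = max(ratios)   # stays bounded: the annulus is two-sided
```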
Performing the change of variables \(x_{1}=\beta h^{\frac{1}{2}}z\), it is equivalent to prove that any solutions of \[\left(h\beta^{-2}\partial_{z}^{2}+1-h^{2}|\eta^{\prime}|^{2}\right)v=g\] satisfy \[\|v\|_{L^{\infty}(\{|z|\leq 1\})}\leq C\left(\|v\|_{L^{2}(\{1\leq|z|\leq 2\})} +h^{-1}\beta^{2}\|g\|_{L^{2}(\{|z|\leq 2\})}\right).\] Setting \(\tau=\beta^{2}h^{-1}(1-h^{2}|\eta^{\prime}|^{2})\), it suffices to prove the following lemma : **Lemma 3.4** ([1], Lemma 3.4).: \(-\) _There exists some positive \(C\) such that for any \(\tau\in\mathbb{R}\) and any solution \((v,k)\) over \((-2,2)\) of_ \[(\partial_{z}^{2}+\tau)v=k,\] _there holds the inequality_ \[\|v\|_{L^{\infty}(-1,1)}\leq C\left(\|v\|_{L^{2}(\{1\leq|z|\leq 2\})}+\frac{1}{ \sqrt{1+|\tau|}}\|k\|_{L^{1}(-2,2)}\right).\] _Proof of the lemma._ Let \(\chi\in C_{c}^{\infty}(-2,2)\) be equal to \(1\) over \((-1,1)\). \(w=\chi v\) satisfies \[(\partial_{z}^{2}+\tau)w=\chi k+2\partial_{z}(\chi^{\prime}v)-\chi^{\prime \prime}v. \tag{3.6}\] We distinguish two regimes : Elliptic regime : \(\tau\leq-1\). 
We multiply (3.6) by \(w\) and integrate by parts to get \[\|\partial_{z}w\|_{L^{2}(-2,2)}^{2}+|\tau|\|w\|_{L^{2}(-2,2)}^{2}=-(\chi k- \chi^{\prime\prime}v,w)_{L^{2}}+2(\chi^{\prime}v,\partial_{z}w)_{L^{2}}.\] Thus, \[\|\partial_{z}w\|_{L^{2}(-2,2)}^{2}+|\tau|\|w\|_{L^{2}(-2,2)}^{2}\leq\] \[C\left(\|k\|_{L^{1}(-2,2)}\|w\|_{L^{\infty}}+\|v\|_{L^{2}(\{1\leq |z|\leq 2\})}(\|w\|_{L^{2}(\{1\leq|z|\leq 2\})}+\|\partial_{z}w\|_{L^{2}(-2,2)})\right).\] By the Gagliardo-Nirenberg inequality in dimension \(1\), \[\|w\|_{L^{\infty}}\leq C\|\partial_{z}w\|_{L^{2}}^{\frac{1}{2}}\|w\|_{L^{2}}^{ \frac{1}{2}}=C|\tau|^{-\frac{1}{4}}\|\partial_{z}w\|_{L^{2}}^{\frac{1}{2}}| \tau|^{\frac{1}{4}}\|w\|_{L^{2}}^{\frac{1}{2}}.\] Set \(A=\|\partial_{z}w\|_{L^{2}(-2,2)}+|\tau|^{\frac{1}{2}}\|w\|_{L^{2}(-2,2)}\), then applying the Gagliardo-Nirenberg inequality again gives \[A^{2}\leq C\left(\|k\|_{L^{1}(-2,2)}|\tau|^{-\frac{1}{4}}A+\|v\|_{L^{2}(\{1 \leq|z|\leq 2\})}A\right).\] Applying it a third time yields \[\|v\|_{L^{\infty}(-1,1)}\leq\|w\|_{L^{\infty}}\leq C|\tau|^{-\frac{1}{4}}A \leq C\left(\|v\|_{L^{2}(\{1\leq|z|\leq 2\})}+|\tau|^{-\frac{1}{2}}\|k\|_{L^{1}(-2,2)} \right).\] Since \(|\tau|\geq 1\), we get the desired inequality. Hyperbolic regime: \(\tau\geq-1\). Set \(\sigma=\sqrt{\tau}\in\mathbb{R}^{+}\cup i[0,1]\). 
Integrating (3.6) gives \[w(x)=\int_{z=-2}^{x}g(z)\int_{y=z}^{x}e^{i\sigma(2y-x-z)}dydz,\] where \(g=\chi k-\chi^{\prime\prime}v+2\partial_{z}(\chi^{\prime}v)=g_{1}+\partial_{z }g_{2}.\) Since for any \(x,z\in[-2,2]\) \[\left|\int_{y=z}^{x}e^{i\sigma(2y-x-z)}dy\right|\leq\frac{C}{1+|\sigma|}=\frac {C}{1+|\tau|^{\frac{1}{2}}},\] the contribution of \(g_{1}\) is bounded uniformly by \[\frac{C}{1+|\tau|^{\frac{1}{2}}}\left(\|\chi k\|_{L^{1}(-2,2)}+\|v\|_{L^{1}( \{1\leq|z|\leq 2\})}\right).\] Integrating by parts, we get that the contribution of \(\partial_{z}g_{2}\) is bounded by \[C\|\chi^{\prime}v\|_{L^{1}(-2,2)}\leq C^{\prime}\|v\|_{L^{1}(\{1\leq|z|\leq 2 \})}.\] Summing both contributions gives the result. This concludes the proof of Proposition 3.1. ## 4. Proof of Theorem 4 This section is dedicated to the proof of Theorem 4, namely that stabilization occurs when every geodesic either intersects the interior of the damped zone or is damped in every normal direction. The first subsection is dedicated to the proof of \(L^{2}\)-boundedness of pseudodifferential operators and a Garding inequality for our 2-microlocal calculus. Well-known semiclassical analysis techniques are used to give an appropriate and precise framework to define 2-microlocal measures. In the second subsection, we study the 2-microlocal measure and prove uniform stabilization for a model damping. In the third, we reduce the general case to this model. The last subsection contains the proof of a geometric lemma used in the second one. ### Second microlocal calculus We start by constructing the 2-microlocal pseudodifferential calculus. We deal with more general symbols than in [1], as we only assume decay in the \(\zeta\) variable rather than polyhomogeneity. This weakened assumption will be used in the third microlocalization in Section 5. 
We introduce symbol classes and the second quantization for these symbols, then show that this quantization provides a pseudodifferential calculus in \(\epsilon^{2}(h)\) with sufficient tools to construct 2-microlocal measures. Our symbols take variables in \(\mathbb{R}^{d}\times\mathbb{R}^{d}\times\mathbb{R}^{q}\times\mathbb{R}^{q}\). Case \(q=d-1\) will be used in the present section, and \(d=3\), \(q\in\{1,2\}\) in Section 5. **Definition 4.1** (Symbol classes).: 1. _We define_ \(S^{0}\) _to be the class of smooth functions_ \(b\) _of_ \((X,\Xi,z,\zeta)\in\mathbb{R}^{d}\times\mathbb{R}^{d}\times\mathbb{R}^{q}\times \mathbb{R}^{q}\) _which are compactly supported in the_ \((X,\Xi)\) _variables and satisfy the following decay estimate in the_ \(\zeta\) _variable :_ \[\forall\alpha,\beta\in\mathbb{N}^{q},\quad\exists C_{\alpha,\beta} \quad\text{ s. t. }\quad\sup_{(X,\Xi,z,\zeta)\in\mathbb{R}^{2d}\times\mathbb{R}^{2q}}| \partial_{z}^{\alpha}\partial_{\zeta}^{\beta}b(X,\Xi,z,\zeta)|\leq C_{\alpha, \beta}\langle\zeta\rangle^{-|\beta|}\] _where_ \(\langle.\rangle\) _denotes the usual_ \(\langle\zeta\rangle=(1+|\zeta|^{2})^{\frac{1}{2}}\)_._ 2. _For any integer_ \(m\geq 0\)_, we define_ \(S^{m}_{H}\) _as the set of smooth functions which are compactly supported in the_ \((X,\Xi)\) _variables and polyhomogeneous of degree_ \(m\) _with respect to the_ \((z,\zeta)\) _variables with limits in the radial direction :_ \[\lim_{r\rightarrow+\infty}\frac{1}{r^{m}}a\left(X,\Xi,\frac{(rz,r\zeta)}{\|(z, \zeta)\|}\right)=\tilde{a}\left(X,\Xi,\frac{(z,\zeta)}{\|(z,\zeta)\|}\right).\] We have \(S^{0}_{H}\subset S^{0}\). 
Besides, when \(m=0\) functions in \(S^{0}_{H}\) are identified with smooth compactly supported functions on \(\mathbb{R}^{2d}\times\overline{B(0,1)_{\tilde{z},\tilde{\zeta}}}\) via the change of variables \[(z,\zeta)\mapsto(\tilde{z},\tilde{\zeta})=\frac{(z,\zeta)}{\sqrt{1+|z|^{2}+| \zeta|^{2}}}.\] Consider some function \(\epsilon(h)\) satisfying \[\lim_{h\to 0}\epsilon(h)=0,\quad\epsilon(h)\geq h^{\frac{1}{2}},\quad h^{ \frac{1}{2}}\epsilon^{-2}(h)\rightarrow_{h\to 0}0. \tag{4.1}\] Given a symbol \(b\) belonging to \(S^{m}_{H}\) or \(S^{0}\), its second quantization is defined by : \[\operatorname{Op}_{h}(b)u(X)=\frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d}_{Y} \times\mathbb{R}^{d}_{\Xi}}e^{i(X-Y).\Xi}b\left(X,h\Xi,\frac{\epsilon(h)}{h^{ \frac{1}{2}}}x,\epsilon(h)h^{\frac{1}{2}}\xi\right)u(Y)dYd\Xi \tag{4.2}\] where \(X=(x,x^{\prime}),\Xi=(\xi,\xi^{\prime})\) belong to \(\mathbb{R}^{q}\times\mathbb{R}^{d-q}\). In other words, \[\operatorname{Op}_{h}(b)=b\left(x,x^{\prime},hD_{x},hD_{x^{\prime}},\frac{ \epsilon(h)}{h^{\frac{1}{2}}}x,\epsilon(h)h^{\frac{1}{2}}D_{x}\right). \tag{4.3}\] We first prove uniform boundedness over \(L^{2}(\mathbb{R}^{d})\) and over the space \(L^{2}_{ul}\) defined in (4.4). Boundedness over \(L^{2}_{ul}\) will allow to define \(2\)-microlocal measures for periodic functions over \(\mathbb{R}^{d}\). **Proposition 4.2**.: _- Consider some symbol \(b\in S^{0}\), then there exists some positive \(h_{0}\) such that the following statements hold :_ 1. \(\operatorname{Op}_{h}(b)\) _is a bounded operator over_ \(L^{2}(\mathbb{R}^{d})\) _uniformly wrt_ \(h\in(0,h_{0}]\)_._ 2. _We introduce a partition of unity_ \(\chi\in C^{\infty}(\mathbb{R}^{d})\) _such that_ \(\operatorname{supp}(\chi)\subset[-1,1]^{d}\)_,_ \(0\leq\chi\leq 1\) _and_ \(\sum_{p\in\mathbb{Z}^{d}}\chi_{p}=1\) _where_ \(\chi_{p}=\chi(\cdot-p)\)_. 
We define the set of uniformly locally_ \(L^{2}\) _functions over_ \(\mathbb{R}^{d}\) _by_ (4.4) \[L^{2}_{ul}=\left\{u\in L^{2}_{loc},\quad\sup_{p\in\mathbb{Z}^{d}}\|\chi_{p}u\|_{L^{2}(\mathbb{R}^{d})}<+\infty\right\}\] _endowed with the norm_ \(\|u\|_{L^{2}_{ul}}=\sup_{p\in\mathbb{Z}^{d}}\|\chi_{p}u\|_{L^{2}(\mathbb{R}^{d})}\)_. Then_ \(\operatorname{Op}_{h}(b)\) _is a bounded operator over_ \(L^{2}_{ul}(\mathbb{R}^{d})\) _uniformly wrt_ \(h\in(0,h_{0}]\)_._ Proof.: We first prove uniform boundedness over \(L^{2}\). Setting \(\tilde{x}=\frac{\epsilon(h)}{h^{\frac{1}{2}}}x\), \(\tilde{y}=\frac{\epsilon(h)}{h^{\frac{1}{2}}}y\), and \(\tilde{\xi}=\epsilon(h)h^{\frac{1}{2}}\xi\) in (4.2), we obtain that \[\operatorname{Op}_{h}(b)=T^{*}_{h}b\left(\frac{h^{\frac{1}{2}}}{\epsilon(h)}x,x^{\prime},\epsilon(h)h^{\frac{1}{2}}D_{x},hD_{x^{\prime}},x,\epsilon^{2}(h)D_{x}\right)T_{h} \tag{4.5}\] where \(T_{h}\) is the unitary operator defined by \(T_{h}u(x,x^{\prime})=\left(\frac{h^{\frac{1}{2}}}{\epsilon(h)}\right)^{\frac{q}{2}}u\left(\frac{h^{\frac{1}{2}}}{\epsilon(h)}x,x^{\prime}\right)\). Since \(\epsilon(h)\geq h^{\frac{1}{2}}\), the \(h\)-dependent symbol \[\check{b}(X,\Xi)=b\left(\frac{h^{\frac{1}{2}}}{\epsilon(h)}x,x^{\prime},\epsilon(h)h^{\frac{1}{2}}\xi,h\xi^{\prime},x,\epsilon^{2}(h)\xi\right)\] has bounded derivatives uniformly wrt \(h\) small enough. By the Calderón-Vaillancourt theorem [23, Theorem 5.1], \(\operatorname{Op}_{1}(\check{b})\) is bounded over \(L^{2}\), which proves uniform \(L^{2}\)-boundedness for \(\operatorname{Op}_{h}(b)\) by unitary conjugation. 
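A degenerate special case makes the uniformity in \(h\) transparent (a sketch with assumed parameters, not covering the general symbols of Proposition 4.2): if \(b=b(\zeta)\) depends only on the last variable, the quantization (4.3) is the Fourier multiplier \(b(\epsilon(h)h^{\frac{1}{2}}\xi)\), whose \(L^{2}\) operator norm is \(\sup|b|\) for every \(h\).

```python
import numpy as np

# Degenerate special case (a sketch with assumed parameters): for a symbol
# b = b(zeta) depending only on the last variable, the quantization (4.3)
# is the Fourier multiplier b(eps(h) h^{1/2} xi), so its L^2 operator norm
# is sup|b| for every h.  Here eps(h) = h^{1/6} and b(zeta) = <zeta>^{-1}.

rng = np.random.default_rng(0)
N = 1024
k = np.fft.fftfreq(N, d=1.0 / N)               # integer frequencies xi

def op_norm_ratio(h):
    eps = h ** (1.0 / 6.0)
    mult = 1.0 / np.sqrt(1.0 + (eps * np.sqrt(h) * k) ** 2)   # b = <zeta>^{-1}
    u = rng.standard_normal(N)
    opu = np.fft.ifft(mult * np.fft.fft(u))
    return np.linalg.norm(opu) / np.linalg.norm(u)

ratios = [op_norm_ratio(h) for h in (1e-1, 1e-3, 1e-5)]
# every ratio is <= sup|b| = 1, uniformly in h
```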
To generalize the result to \(L^{2}_{ul}\), consider the symbol \(\check{b}\) defined above and write, up to unitary conjugation : \[\chi_{r}\operatorname{Op}_{h}(b)u(X)=\chi_{r}(X)\sum_{p,q\in\mathbb{Z}^{d}}\frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}e^{i(X-Y).\Xi}\chi_{p}(X)\check{b}(X,\Xi)\chi_{q}(Y)u(Y)dYd\Xi.\] Our goal is to obtain some bound on \(\|\chi_{r}\operatorname{Op}_{h}(b)u\|_{L^{2}}\) that is uniform wrt \(h\in(0,h_{0}]\) and \(r\in\mathbb{Z}^{d}\). If \(|p-r|\geq 2\) then \(\operatorname{supp}(\chi_{p})\cap\operatorname{supp}(\chi_{r})\) has measure zero, hence the sum over \(p\) is finite. Then, whenever \(|p-q|\geq 2\), we integrate by parts in the \(\Xi\) variable using that \(\check{b}\) is compactly supported and that for any integer \(N\) \[\left(\frac{1-\Delta_{\Xi}}{1+|X-Y|^{2}}\right)^{N}\left(e^{i(X-Y).\Xi}\right)=e^{i(X-Y).\Xi}.\] Since \(X\) is localized near \(p\) and \(Y\) is localized near \(q\), when \(|p-q|\geq 2\) there exists some \(c>0\) such that \[\frac{1}{c}\frac{1}{1+|p-q|}\leq\frac{1}{1+|X-Y|}\leq\frac{c}{1+|p-q|}.\] The sum over \(q\) is thus convergent for every fixed \(p\) and \(N\) large enough. The Calderon-Vaillancourt theorem then allows us to conclude, as \[\|\chi_{r}\operatorname{Op}_{h}(b)u\| \leq C\sum_{|p-r|\leq 1}\sum_{q\in\mathbb{Z}^{d}}\left\|\frac{\chi_{p}}{(1+|p-q|)^{N}}\operatorname{Op}_{h}((1-\Delta_{\Xi})^{N}b)\chi_{q}u\right\|_{L^{2}}\] \[\leq C\sum_{|p-r|\leq 1}\left(\sum_{|p-q|\geq 2}\frac{1}{(1+|p-q|)^{N}}\|\chi_{q}u\|_{L^{2}}+\sum_{|p-q|<2}\|\chi_{q}u\|_{L^{2}}\right)\] \[\leq C\left(\sum_{q^{\prime}\in\mathbb{Z}^{d}}\frac{1}{(1+|q^{\prime}|)^{N}}\right)\|u\|_{L^{2}_{ul}}.\] We now prove Garding's weak inequality with small parameter \(\epsilon^{2}(h)\). The result is the following : **Proposition 4.3**.: \(-\) _Let \(b\in S^{0}\) be non-negative. 
Then, for any \(\alpha>0\), there exist some \(h_{0}(\alpha)>0\) and \(C(\alpha)>0\) such that for any \(h\in(0,h_{0}]\), \(u\in L^{2}(\mathbb{R}^{d})\) :_ \[\begin{split}&\operatorname{Re}(b\left(\frac{h^{\frac{1}{2}}}{ \epsilon(h)}x,x^{\prime},\epsilon(h)h^{\frac{1}{2}}D_{x},hD_{x^{\prime}},x, \epsilon^{2}(h)D_{x}\right)u,u)_{L^{2}}\geq-(\alpha+C\epsilon^{2}(h))\|u\|_{L^ {2}}^{2}.\\ &\left|\operatorname{Im}(b\left(\frac{h^{\frac{1}{2}}}{\epsilon( h)}x,x^{\prime},\epsilon(h)h^{\frac{1}{2}}D_{x},hD_{x^{\prime}},x,\epsilon^{2}(h)D_{ x}\right)u,u)_{L^{2}}\right|\leq C\epsilon^{2}(h)\|u\|_{L^{2}}^{2}.\end{split} \tag{4.6}\] _By unitary conjugation, we conclude that the same inequality holds for the operator_ \[b\left(x,x^{\prime},hD_{x},hD_{x^{\prime}},\frac{\epsilon(h)}{h^{\frac{1}{2}}} x,\epsilon(h)h^{\frac{1}{2}}D_{x}\right).\] Proof.: We first note that \(b\left(\frac{h^{\frac{1}{2}}}{\epsilon(h)}x,x^{\prime},\epsilon(h)h^{\frac{1}{ 2}}D_{x},hD_{x^{\prime}},x,\epsilon^{2}(h)D_{x}\right)\) is the \(\epsilon^{2}(h)\)-quantization of the non-negative \(h\)-dependent symbol \[\tilde{b}_{h}(X,\Xi)=b\left(\frac{h^{\frac{1}{2}}}{\epsilon(h)}x,x^{\prime}, \frac{h^{\frac{1}{2}}}{\epsilon(h)}\xi,\frac{h}{\epsilon^{2}(h)}\xi^{\prime}, x,\xi\right). \tag{4.7}\] Fix some \(\alpha>0\) and consider the symbol \(c_{h}=\sqrt{\tilde{b}_{h}+\alpha}\). Since \(h^{\frac{1}{2}}\epsilon^{-1}(h)\leq 1\), all the derivatives of \(\tilde{b}_{h}\) are bounded over \(\mathbb{R}^{d}\times\mathbb{R}^{d}\) uniformly wrt \(h\) small enough and derivatives of order \(m\) in the \(\Xi\) variable decay at rate \(\langle\Xi\rangle^{-m}\). Thus, \(c_{h}\) also has bounded derivatives and a similar rate of decay. 
Since \(c_{h}\) is real-valued, we get that \[\operatorname{Op}_{\epsilon^{2}(h)}(c_{h})^{*}=\operatorname{Op}_{\epsilon^ {2}(h)}(c_{h})+\mathcal{O}_{\mathcal{L}(L^{2})}\left(\epsilon^{2}(h)\right).\] We then apply the usual composition formulas (see for example [23, Theorem 4.14] which generalizes to our framework) to get \[(\operatorname{Op}_{\epsilon^{2}(h)}(\tilde{b}_{h})u,u)_{L^{2}}+\alpha\|u\|_{L ^{2}}^{2}=\|\operatorname{Op}_{\epsilon^{2}(h)}(c_{h})u\|_{L^{2}}^{2}+ \mathcal{O}(\epsilon^{2}(h))\|u\|_{L^{2}}^{2}.\] Taking the real and imaginary parts then gives the desired inequalities (4.6) for \(h_{0}(\alpha)\) small enough and some constant \(C(h_{0},\alpha)\). ### The model case : 4 damped prisms in a tunnel Using the 2-microlocal calculus for symbols in \(S^{0}_{H}\), we turn to the study of the concentration properties of a sequence \((u_{n})\) of quasimodes over the three-dimensional torus \(\mathbb{T}^{3}\), under a specific choice of damping function \(a\). We will then generalize this model case to higher dimensions and reduce the general case to it. We identify \(\mathbb{T}^{3}\) with \([-1,1]^{3}\) where opposite faces are identified. The damping zone \(\{a~{}=~{}1\}\) is represented in Figure 3. More explicitly, denoting \(X~{}=~{}(x_{1},x_{2},y)\) and \(\Xi~{}=~{}(\xi_{1},\xi_{2},\eta)\), \(a\) is the characteristic function of the set \[\begin{split}&\{|x_{1}|\geq\frac{1}{2}\text{ or }|x_{2}|\geq\frac{1}{2}\}\\ &\cup\{-1<y<-\frac{1}{2},x_{1}>0,x_{2}<\alpha_{R}x_{1}\}\cup\{- \frac{1}{2}<y<0,x_{2}>0,x_{1}>-\alpha_{T}x_{2}\}\\ &\cup\{0<y<\frac{1}{2},x_{1}<0,x_{2}>\alpha_{L}x_{1}\}\cup\{\frac{ 1}{2}<y<1,x_{2}<0,x_{1}<-\alpha_{B}x_{2}\}.\end{split} \tag{4.8}\] The first set denotes a neighborhood of the lateral faces of the cube as represented in the central picture of Figure 3 and the four other sets correspond to the prisms represented on the right, from the back to the front. 
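The geometry of the damped set (4.8) can be probed numerically. The sketch below (our own code, with the illustrative choice \(\alpha_{R}=\alpha_{T}=\alpha_{L}=\alpha_{B}=0.1\); any positive values behave the same way) implements the characteristic function of the set and checks the key property: the vertical geodesic \(\{x_{1}=x_{2}=0\}\) never meets it, while every other vertical line inside the tunnel does.

```python
import random

def damped(x1, x2, y, aR=0.1, aT=0.1, aL=0.1, aB=0.1):
    # characteristic function of the set (4.8) on the cube [-1,1)^3
    if abs(x1) >= 0.5 or abs(x2) >= 0.5:            # neighborhood of the lateral faces
        return True
    if -1 < y < -0.5 and x1 > 0 and x2 < aR * x1:   # first prism (back)
        return True
    if -0.5 < y < 0 and x2 > 0 and x1 > -aT * x2:   # second prism
        return True
    if 0 < y < 0.5 and x1 < 0 and x2 > aL * x1:     # third prism
        return True
    if 0.5 < y < 1 and x2 < 0 and x1 < -aB * x2:    # fourth prism (front)
        return True
    return False

# the red geodesic {x1 = x2 = 0} never meets supp(a)
assert all(not damped(0.0, 0.0, -1 + k / 100) for k in range(200))

# every other vertical line inside the tunnel is damped at some height y
random.seed(0)
for _ in range(500):
    x1, x2 = random.uniform(-0.49, 0.49), random.uniform(-0.49, 0.49)
    assert any(damped(x1, x2, -1 + k / 100) for k in range(200))
```

The second assertion reflects the case analysis behind the support statement for \(\nu\): whichever quadrant of the \((x_{1},x_{2})\)-plane a nonzero point falls in, one of the four prisms catches the corresponding vertical line.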
\(\alpha_{L},\alpha_{R},\alpha_{T},\alpha_{B}>0\) are four fixed positive coefficients which should be thought of as small. The case \(\alpha_{L}=\alpha_{R}=\alpha_{T}=\alpha_{B}=0\) (when the triangle-shaped damped sections vanish and the four prisms are reduced to cubes) is dealt with in section 5. When all four coefficients are positive, the only two bicharacteristics that do not intersect the interior of \(\operatorname{supp}(a)\) are \[\{x_{1}=x_{2}=0,\xi_{1}=\xi_{2}=0,\eta=\pm 1\},\] which are represented in red in the figure (as one geodesic, traveled in two directions). Thus, the first microlocal measure \(\nu\) is supported in the union of these two bicharacteristics. The corresponding geodesic satisfies assumption (1.3) if all \(\alpha_{L,R,T,B}\) coefficients are positive but violates it if any of the coefficients vanishes. We now perform the second microlocalization around this geodesic. Consider a sequence \((u_{n})\) of functions over \(\mathbb{T}^{3}\) satisfying \[(h_{n}^{2}\Delta+1)u_{n}=\mathcal{O}(1)_{L^{2}}. \tag{4.9}\] We identify \(u_{n}\in L^{2}(\mathbb{T}^{3})\) with its periodic extension over \(\mathbb{R}^{3}\). In particular, the periodic extension of \(u_{n}\) belongs to \(L^{2}_{ul}\) as defined in (4.4). Using the \(L^{2}_{ul}\)-boundedness of operators defined by (4.2) and Garding's inequality, we can extract a subsequence (also denoted \((u_{n})\)) such that there exists a positive measure \(\tilde{\mu}\) on \(T^{*}\mathbb{R}^{3}\times\overline{N}\) satisfying for any polyhomogeneous symbol \(b\in S^{0}_{H}\), \[\lim_{n\to+\infty}(\operatorname{Op}_{h_{n}}(b)u_{n},u_{n})_{L^{2}}=\langle\tilde{\mu},\tilde{b}\rangle.\] Figure 3. Illustration of set (4.8). The coordinate system is represented on the left. The dual coordinates are denoted \((\xi_{1},\xi_{2},\xi_{3}=\eta)\). We also denote \(x=(x_{1},x_{2})\), \(\xi=(\xi_{1},\xi_{2})\). In the center is the torus \(\mathbb{T}^{3}\) identified with the unit cube \([-1,1]^{3}\). 
We color in blue the zone \(\{|x_{1}|\geq\frac{1}{2}\}\cup\{|x_{2}|\geq\frac{1}{2}\}\) where \(a=1\), thus damping the lateral faces. This leaves an undamped tunnel at the center of the cube. The picture on the right fits inside this tunnel but is represented separately for better legibility. On the right, we represent in blue the set where \(a=1\) inside \(\{|x_{1}|\leq\frac{1}{2},|x_{2}|\leq\frac{1}{2}\}\). The red geodesic \(\{x=0\}\) is the only one which does not encounter the interior of \(\operatorname{supp}(a)\). Here, \(\overline{N}\) denotes the sphere compactification of \(\mathbb{R}^{4}_{z,\zeta}\) and \(\tilde{b}\) is a continuous function on \(T^{*}\mathbb{R}^{3}\times\overline{N}\) which is defined by the value of \(b\) in the interior and by \[\tilde{b}(x,y,\xi,\eta,\tilde{z},\tilde{\zeta})=\lim_{r\to+\infty}b(x,y,\xi,\eta,r\tilde{z},r\tilde{\zeta})\] on the sphere at infinity. Since the \(u_{n}\) are periodic, the measure \(\tilde{\mu}\) is also periodic in the \(X\) variable, hence it naturally defines a measure \(\mu\) on \(T^{*}\mathbb{T}^{3}\times\overline{N}\). The rest of the subsection is dedicated to studying such a 2-microlocal measure \(\mu\) for the sequence \((u_{n})\) introduced in Section 2. Recall that \((u_{n})\) satisfies \(\|u_{n}\|_{L^{2}(\mathbb{T}^{d})}=1\) and the asymptotics : \[(h_{n}^{2}\Delta+1)u_{n}=\mathcal{O}(h_{n}\epsilon(h_{n}))_{L^{2}}\quad;\quad au_{n}=o(1)_{L^{2}}.\] The properties of the second microlocal measure \(\mu\) are gathered in the following statement : **Proposition 4.4**.: 1. _Assume only that_ \[(h_{n}^{2}\Delta+1)u_{n}=\mathcal{O}(1)_{L^{2}},\] _then_ \(\mu(T^{*}\mathbb{T}^{3}\times\overline{N})=1\)_._ 2. _The first microlocal measure_ \(\nu\) _is the projection of_ \(\mu\) _onto the_ \((x,y,\xi,\eta)\) _variables. 
Assume besides that_ \[(h_{n}^{2}\Delta+1)u_{n}=o(h_{n})_{L^{2}},\quad au_{n}=o(1)_{L^{2}},\] _then_ \(\mu\) _is supported 1-microlocally in the red bicharacteristics of Figure_ 3_,_ \[\{x=0,\quad\xi=0,\quad\eta=\pm 1\}.\] 3. _Assume now that_ \[(h_{n}^{2}\Delta+1)u_{n}=\mathcal{O}(h_{n}\epsilon(h_{n}))_{L^{2}},\] _then_ \(\mu\) _is supported in the sphere at infinity in the_ \((z,\zeta)\) _variables._ 4. _Let_ \(R\) _be one of the four prisms_ \[\{-1<y<-\frac{1}{2},x_{1}>0,x_{2}<\alpha_{R}x_{1}\},\{-\frac{1}{2}<y<0,x_{2}> 0,x_{1}>-\alpha_{T}x_{2}\},\] \[\{0<y<\frac{1}{2},x_{1}<0,x_{2}>\alpha_{L}x_{1}\},\{\frac{1}{2}<y<1,x_{2}<0, x_{1}<-\alpha_{B}x_{2}\}\] _(see Figure_ 3_), then at every point of_ \(\partial R\)__\(\mu\) _vanishes 2-microlocally in the direction of the polyhedron_ \(R\)_. Precisely (from back to front along the red geodesic of Figure_ 3_) :_ (4.10) \[\mu(\{x=0,\quad\xi=0,\quad-1<y<-\frac{1}{2},\quad\eta=\pm 1,\quad z_{1}> 0,\quad z_{2}<\alpha_{R}z_{1}\})=0,\] \[\mu(\{x=0,\quad\xi=0,\quad-\frac{1}{2}<y<0,\quad\eta=\pm 1,\quad z _{2}>0,\quad z_{1}>-\alpha_{T}z_{2}\})=0,\] \[\mu(\{x=0,\quad\xi=0,\quad 0<y<\frac{1}{2},\quad\eta=\pm 1,\quad z _{1}<0,\quad z_{2}>\alpha_{L}z_{1}\})=0,\] \[\mu(\{x=0,\quad\xi=0,\quad\frac{1}{2}<y<1,\quad\eta=\pm 1,\quad z _{2}<0,\quad z_{1}<-\alpha_{B}z_{2}\})=0.\] 5. _The measure_ \(\mu\) _satisfies the conservation law :_ (4.11) \[(\eta\partial_{y}+\zeta.\partial_{z})\mu=0\] _where_ \(\zeta.\partial_{z}\) _is a vector field on the sphere at infinity of_ \(\mathbb{R}^{4}_{z,\zeta}\)_._ Proof.: The first point is a direct consequence of the \(h_{n}\)-oscillation property proved in section 2. To prove the second statement, consider a symbol \(b(X,\Xi)\) which does not depend on the 2-microlocal variables \(z,\zeta\), then \(\langle\nu,b\rangle=\langle\mu,b\circ\pi\rangle\) by definition of both measures. 
In other words, \(\nu=\pi_{*}\mu\) where \[\pi:(X,\Xi,z,\zeta)\mapsto(X,\Xi).\] The rest of the second point comes from Corollary 2.2. To prove the third point, we take some function \(\chi\in C_{c}^{\infty}(T^{*}\mathbb{T}^{3}\times\mathbb{R}^{4}_{z,\zeta})\). \(\chi\) belongs to \(S_{H}^{0}\) and there exists some positive \(A\) such that \(\operatorname{supp}(\chi)\subset\{|z|\leq A\}\). Thus, \[\begin{split}|(\operatorname{Op}_{h_{n}}(\chi)u_{n},u_{n})_{L^{2}}|&=|(\operatorname{Op}_{h_{n}}(\chi)u_{n},\mathbbm{1}_{\{|x|\leq Ah_{n}^{\frac{1}{2}}\epsilon^{-2}(h_{n})\}}u_{n})_{L^{2}}|\\ &\leq C\|u_{n}\|_{L^{2}}\|u_{n}\|_{L^{2}(\{|x|\leq Ah_{n}^{\frac{1}{2}}\epsilon^{-2}(h_{n})\})}\\ &\leq C\|u_{n}\|_{L^{2}(\{|x_{1}|\leq Ah_{n}^{\frac{1}{2}}\epsilon^{-2}(h_{n})\})}\\ &\leq C\epsilon^{\frac{1}{2}}(h_{n})\to_{n\to+\infty}0.\end{split} \tag{4.12}\] where we have applied the \(L^{2}\)-boundedness of \(\operatorname{Op}_{h_{n}}(\chi)\) and Proposition 3.1. Thus, \[\langle\mu,\chi\rangle=0,\] which proves that \(\mu\) is supported in the sphere at infinity in the \((z,\zeta)\) variables. Note that it is enough for the non-concentration estimate to hold in the cylinder \(\{|x|\leq Ah_{n}^{\frac{1}{2}}\epsilon^{-2}(h_{n})\}\) centered around the geodesic rather than the slice \(\{|x_{1}|\leq Ah_{n}^{\frac{1}{2}}\epsilon^{-2}(h_{n})\}\). This fact will be used in the general case at the end of the next subsection. We now come to the 2-microlocal vanishings. We show that \(\mu\) vanishes over \[\{x=0,\xi=0,\frac{1}{2}<y<1,\eta=+1,z_{2}<0,z_{1}<-\alpha_{B}z_{2}\}.\] Fix some small enough \(\delta_{0}>0\) and define the following cutoffs :

* \(\psi\in C^{\infty}(\mathbb{R})\), equal to \(0\) over \((-\infty,1]\) and \(1\) over \([2,+\infty)\),
* \(\chi\in C_{c}^{\infty}(-1,1)\) equal to \(1\) over \((-\frac{1}{2},\frac{1}{2})\), and
* \(\tilde{\chi}\in C_{c}^{\infty}(\frac{1}{2},1)\) equal to \(1\) over \((\frac{1}{2}+\delta_{0},1-\delta_{0})\).
Consider the symbol \[\begin{split} b(x,y,\xi,\eta,z,\zeta)=\\ \chi\left(\frac{2|x|}{\delta_{0}}\right)\tilde{\chi}(y)\chi(|\xi| )\chi(\eta-1)\psi\left(-\frac{z_{2}}{\delta_{0}|\zeta|}\right)\psi\left(\frac {-z_{1}-\alpha_{B}z_{2}}{\delta_{0}|\zeta|}\right)\psi\left(\frac{|z|^{2}+| \zeta|^{2}}{\delta_{0}}\right).\end{split} \tag{4.13}\] Cutoff \(\chi\left(\frac{2|x|}{\delta_{0}}\right)\tilde{\chi}(y)\) is supported in \(B(0,\frac{\delta_{0}}{2})_{x}\times(\frac{1}{2},1)_{y}\) and cutoff \(\psi\left(-\frac{z_{2}}{\delta_{0}|\zeta|}\right)\psi\left(\frac{-z_{1}- \alpha_{B}z_{2}}{\delta_{0}|\zeta|}\right)\) localizes in the \(\{x_{2}<0,x_{1}<-\alpha_{B}x_{2}\}\) zone, thus \[\mathbbm{1}_{\{x\in B(0,\frac{\delta_{0}}{2}),x_{2}<0,x_{1}<-\alpha_{B}x_{2}\} }\mathbbm{1}_{\{y\in(\frac{1}{2},1)\}}\operatorname{Op}_{h_{n}}(b)= \operatorname{Op}_{h_{n}}(b),\] wherefrom \[\begin{split}|\langle\operatorname{Op}_{h_{n}}(b)u_{n},u_{n}\rangle_{ L^{2}}|&=|\langle\operatorname{Op}_{h_{n}}(b)u_{n},\mathbbm{1}_{\{x\in B(0, \frac{\delta_{0}}{2}),x_{2}<0,x_{1}<-\alpha_{B}x_{2}\}}\mathbbm{1}_{\{y\in( \frac{1}{2},1)\}}u_{n}\rangle_{L^{2}}|\\ &=|\langle\operatorname{Op}_{h_{n}}(b)u_{n},\mathbbm{1}_{\{x\in B (0,\frac{\delta_{0}}{2}),x_{2}<0,x_{1}<-\alpha_{B}x_{2}\}}\mathbbm{1}_{\{y \in(\frac{1}{2},1)\}}au_{n}\rangle_{L^{2}}|\\ &\leq C\|u_{n}\|_{L^{2}}\|au_{n}\|_{L^{2}}\xrightarrow[n\to+\infty ]{}0.\end{split} \tag{4.14}\] Thus \(\langle\mu,b\rangle=0\). \(\mu\) being a positive measure, this implies \(\mu(\{b=1\})=0\) and \[\mu\left(\left\{x=0,\xi=0,y\in\left[\frac{1}{2}+\delta_{0},1-\delta_{0} \right],\eta=1,z_{2}<-2\delta_{0}|\zeta|,-(z_{1}+\alpha_{B}z_{2})>2\delta_{0}| \zeta|\right\}\right)=0.\] Taking the limit \(\delta_{0}\to 0\) then yields the result. The other 2-microlocal vanishings are obtained by changing the cutoffs in the \(y\), \(\eta\) and \(\frac{z}{|\zeta|}\) variables in (4.13). 
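The support and plateau structure of the symbol (4.13) is what drives the vanishing (4.14), and it can be checked numerically with piecewise-linear stand-ins for the smooth cutoffs \(\psi,\chi,\tilde{\chi}\) (only the sets where the factors equal \(0\) or \(1\) matter here, so smoothness is irrelevant; the code and the sample points are ours, with the illustrative values \(\delta_{0}=0.05\) and \(\alpha_{B}=0.1\)).

```python
import math

D0, AB = 0.05, 0.1   # delta_0 and alpha_B, illustrative values

def psi(t):          # stand-in for psi: 0 on (-inf, 1], 1 on [2, +inf)
    return 0.0 if t <= 1 else 1.0 if t >= 2 else t - 1.0

def chi(t):          # stand-in for chi in C_c(-1, 1), equal to 1 on (-1/2, 1/2)
    t = abs(t)
    return 0.0 if t >= 1 else 1.0 if t <= 0.5 else 2.0 * (1.0 - t)

def chi_tilde(y):    # stand-in for chi~ in C_c(1/2, 1), equal to 1 on (1/2+d0, 1-d0)
    if y <= 0.5 or y >= 1.0:
        return 0.0
    return 1.0 if 0.5 + D0 <= y <= 1.0 - D0 else 0.5

def b(x, y, xi, eta, z, zeta):
    # the product of cutoffs in (4.13), evaluated with the stand-ins above
    nx, nxi, nzeta = math.hypot(*x), math.hypot(*xi), math.hypot(*zeta)
    n2 = z[0]**2 + z[1]**2 + zeta[0]**2 + zeta[1]**2
    return (chi(2 * nx / D0) * chi_tilde(y) * chi(nxi) * chi(eta - 1.0)
            * psi(-z[1] / (D0 * nzeta))
            * psi((-z[0] - AB * z[1]) / (D0 * nzeta))
            * psi(n2 / D0))

# b equals 1 on the region whose mu-measure is shown to vanish...
assert b((0, 0), 0.7, (0, 0), 1.0, (-1.0, -1.0), (1.0, 0.0)) == 1.0
# ...and vanishes outside the y-window and outside the z-cone
assert b((0, 0), 0.3, (0, 0), 1.0, (-1.0, -1.0), (1.0, 0.0)) == 0.0
assert b((0, 0), 0.7, (0, 0), 1.0, (-1.0, 1.0), (1.0, 0.0)) == 0.0
```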
To prove the last statement, we compute the bracket \([h_{n}^{2}\Delta+1,\operatorname{Op}_{h_{n}}(q)]\) for some \(q\in S^{0}_{H}\) using the second quantization formula (4.2). We get \[[h\Delta,\operatorname{Op}_{h}(q)]=\operatorname{Op}_{h}\left((2i\Xi.\partial_{X}+2i\zeta.\partial_{z}+h\Delta_{X}+2h\epsilon(h)h^{-\frac{1}{2}}(\partial_{x}.\partial_{z})+h(\epsilon(h)h^{-\frac{1}{2}})^{2}\Delta_{z})q\right)\] so that \[\frac{1}{2ih_{n}}[h_{n}^{2}\Delta+1,\operatorname{Op}_{h_{n}}(q)]=\frac{1}{2i}[h_{n}\Delta,\operatorname{Op}_{h_{n}}(q)]\\ =\operatorname{Op}_{h_{n}}((\xi.\partial_{x}+\eta\partial_{y}+\zeta.\partial_{z})q)-i\frac{h_{n}}{2}\operatorname{Op}_{h_{n}}(\Delta_{X}q)-ih_{n}(\epsilon(h_{n})h_{n}^{-\frac{1}{2}})\operatorname{Op}_{h_{n}}((\partial_{x}.\partial_{z})q)\\ -i\frac{h_{n}}{2}(\epsilon(h_{n})h_{n}^{-\frac{1}{2}})^{2}\operatorname{Op}_{h_{n}}(\Delta_{z}q). \tag{4.15}\] Unfolding the bracket and using that \((h_{n}^{2}\Delta+1)u_{n}=o(h_{n})\), we obtain \[\frac{1}{2ih_{n}}([h_{n}^{2}\Delta+1,\operatorname{Op}_{h_{n}}(q)]u_{n},u_{n})_{L^{2}}\xrightarrow[n\to+\infty]{}0,\] so \[(\operatorname{Op}_{h_{n}}((\xi.\partial_{x}+\eta\partial_{y}+\zeta.\partial_{z})q)u_{n},u_{n})_{L^{2}}=o(1). \tag{4.16}\] Since the measure \(\mu\) is supported in \(\{\xi=0\}\), conservation law (4.11) follows. We can then conclude the contradiction argument by studying the dynamics of the vector field \(\zeta.\partial_{z}\) over the sphere at infinity. The corresponding result is illustrated in Figure 5 of subsection 4.4 and summed up in the following lemma. We postpone the proof of the lemma to that subsection. 
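The leading terms of the commutator expansion above come from the elementary identity \([\Delta,q(x)\cdot]=(\Delta q)+2(\partial q).\partial\), which in symbol form contributes \(\operatorname{Op}(\Delta_{x}q+2i\xi.\partial_{x}q)\). A symbolic check of this one-dimensional baby case (our own sketch; it does not touch the \(2\)-microlocal rescalings) can be done with sympy:

```python
import sympy as sp

x = sp.symbols('x', real=True)
q = sp.Function('q')(x)   # a multiplication symbol q = q(x)
u = sp.Function('u')(x)   # a test function

lhs = sp.diff(q * u, x, 2) - q * sp.diff(u, x, 2)               # [d^2/dx^2, q] u
rhs = sp.diff(q, x, 2) * u + 2 * sp.diff(q, x) * sp.diff(u, x)  # (q'' + 2 q' d/dx) u
assert sp.simplify(lhs - rhs) == 0
```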
**Lemma 4.5**.: \(-\) _Consider an initial point \((z_{0},\zeta_{0})\), \(|z_{0}|^{2}+|\zeta_{0}|^{2}=1\), in the sphere at infinity \(\mathbb{S}^{2d-3}_{\infty}\) of \(\mathbb{R}^{2d-2}\), and denote \(\phi^{s}(z_{0},\zeta_{0})\) the flow of \(\zeta.\partial_{z}\) at time \(s\) starting from this initial point, then_ \[\phi^{s}(z_{0},\zeta_{0})=\frac{1}{(|z_{0}+s\zeta_{0}|^{2}+|\zeta_{0}|^{2})^{\frac{1}{2}}}\left(z_{0}+s\zeta_{0},\zeta_{0}\right). \tag{4.17}\] _In other words, the flow of \(\zeta.\partial_{z}\) over the sphere at infinity is the projection of the flow at finite distance from the origin, as depicted in Figure 5._ _In particular, if \(\zeta_{0}\neq 0\) then_ \[\phi^{s}(z_{0},\zeta_{0})\to_{s\to+\infty}\left(\frac{\zeta_{0}}{|\zeta_{0}|},0\right), \tag{4.18}\] _and \((z_{0},\zeta_{0})\) is a fixed point of the flow iff \(\zeta_{0}=0\)._ Now, by point (1) of Proposition 4.4, \(\mu\) has non-empty support. Consider a point \[(x=0,y,\xi=0,\eta=\pm 1,z,\zeta)\in\mathrm{supp}(\mu),\] with \((z,\zeta)\) belonging to the sphere at infinity of \(\mathbb{R}^{4}\). By the conservation law (4.11), the point \[(x=0,y+s\eta,\xi=0,\eta=\pm 1,\phi^{s}(z,\zeta))\] also belongs to \(\mathrm{supp}(\mu)\). The flow \(\phi^{s}(z,\zeta)\) converges to some \((z_{\infty},0)\) with \(z_{\infty}\neq 0\), and by the fourth point \((z_{\infty},0)\) belongs to at least one open set of the sphere at infinity where \(\mu\) vanishes \(2\)-microlocally along the geodesic. Thus, \(\operatorname{supp}(\mu)\) meets an open set over which \(\mu\) vanishes, which gives a contradiction. This concludes the proof of stabilization for the model damping in Figure 3.

#### 4.2.1. Generalization to higher dimensions

Since both the non-concentration estimate of Proposition 3.1 and Lemma 4.5 hold for any \(d\geq 3\), this model case can be generalized to higher dimensions. Assume that \(d\geq 3\). 
We consider a damping \(a\) over the \(d\)-dimensional torus \(\mathbb{T}^{d}\simeq[-1,1]^{d}\) such that :

* \(a=1\) outside of a tunnel, namely in the zone \(\cup_{i=1}^{d-1}\{|x_{i}|\geq\frac{1}{2}\}\). By Corollary 2.2, the first microlocal measure is then supported in the one-directional set of bicharacteristics \(\{\xi_{i}=0,1\leq i\leq d-1,\xi_{d}=\pm 1\}\).
* Inside that tunnel, \(a=1\) over a finite union of polyhedrons such that the geodesic \(\{x_{1}=\cdots=x_{d-1}=0\}\) satisfies Assumption (1.3) and is the only geodesic not entering \(\mathrm{Int}(\mathrm{supp}(a))\). For example, one can choose \(a\) to be equal to \(1\) over the set \[\bigcup_{i=1}^{d-1}\left\{x_{i}<0,\quad-1+\frac{i-1}{d-1}<y<-1+\frac{i}{d-1}\right\}\\ \cup\bigcup_{i=1}^{d-1}\left\{x_{i}>0,\quad\frac{i-1}{d-1}<y<\frac{i}{d-1}\right\}.\]

We then perform the same second quantization, as defined per formula (4.2), with \(z=\frac{\epsilon(h)}{h^{\frac{1}{2}}}(x_{1},\ldots,x_{d-1})\) and \(\zeta=\epsilon(h)h^{\frac{1}{2}}(\xi_{1},\ldots,\xi_{d-1})\). By the \(2\)-microlocal calculus on \(\mathbb{R}^{d}\times\mathbb{R}^{d}\times\mathbb{R}^{d-1}\times\mathbb{R}^{d-1}\), the sequence \((u_{n})\) still admits a \(2\)-microlocal measure \(\mu\). The first two points of Proposition 4.4 are then proved exactly as previously. The proof of the third point is identical since the non-concentration estimates of Proposition 3.1 hold for \(d\)-dimensional tori. The proof of the 2-microlocal vanishings follows the same lines, and equations (4.15) and (4.16) still hold so that the fifth point is unchanged. We then conclude in the same manner since Lemma 4.5 is satisfied on any sphere of dimension \(2d-3\).

### Reduction of the general case and conclusion

We now reduce the general case to the preceding model case. 
Consider a damping \(a\) on the \(d\)-dimensional torus \(\mathbb{T}^{d}=\mathbb{R}^{d}/\Gamma\) with \(\Gamma\) an orthogonal lattice \(A_{1}\mathbb{Z}\times\cdots\times A_{d}\mathbb{Z}\), \(A_{1},\ldots,A_{d}>0\). Assume that \(a\) is a finite sum of characteristic functions of disjoint polyhedrons in \(\mathbb{T}^{d}\), and that all geodesics of the torus which do not enter the interior of the damped region satisfy assumption (1.3). First we deal with non-closed geodesics. Then, we use an argument from [2, Section 3] to microlocalize near a single geodesic, the direction of which is isolated in \(\operatorname{supp}(\nu)\). We conclude by using orthonormal changes of coordinates to reduce this geodesic to the one of the model case. **Proposition 4.6**.: \(-\) _Non-closed geodesics which are damped in every normal direction (i.e. satisfying assumption (1.3)) intersect the interior of a polyhedron where the damping equals 1._ Proof.: Consider a non-closed geodesic \(\gamma_{\rho_{0}}\) of \(\mathbb{T}^{d}\), \(\rho_{0}=(X_{0},\Xi_{0})\). \(\gamma_{\rho_{0}}\) is then dense in some \((X_{0}+F)/\Gamma\) where \(F\) is a subspace of \(\mathbb{R}^{d}\) of dimension strictly greater than one. The rest of the proof is illustrated in Figure 4. Denote \(\Xi_{0}\in F\) the direction of the geodesic and \(\Xi_{0}^{\perp}\) some unitary vector in \(F\), orthogonal to \(\Xi_{0}\). By (1.3), there exists an interval \(I\subset\mathbb{R}\) and some positive \(\delta_{0}\) such that \(a=1\) in an open neighborhood of \(\left\{\gamma(s)+\delta\Xi_{0}^{\perp},s\in I,\delta\in\left[\frac{\delta_{0}}{2},\delta_{0}\right]\right\}\subset X_{0}+F.\) Denote by \(V\) such an open neighborhood, then \(V\subset\operatorname{Int}(\operatorname{supp}(a))\) and by density of the geodesic, \(\gamma_{\rho_{0}}\) enters \(V\). The support of \(\nu\) thus contains only points \((X,\Xi)\) belonging to closed geodesics, with rational \(\Xi\in\mathbb{S}^{d-1}\). 
Defining \(\pi_{\Xi}:(X,\Xi)\mapsto\Xi\), we get that \(\pi_{\Xi}(\operatorname{supp}(\nu))\) is countable and closed (using compactness of the torus for the latter). By Baire's lemma, any nonempty closed subset with no isolated points (_aka_ a perfect set) in a complete metric space is uncountable. Thus, \(\pi_{\Xi}(\operatorname{supp}(\nu))\) contains an isolated direction, which we denote \(\Xi_{0}\). Let us microlocalize around this isolated direction. Consider a closed geodesic \(\gamma\) in the support of \(\nu\), with direction \(\Xi_{0}\). Consider a neighborhood of \(\Xi_{0}\) which contains no other directions in \(\pi_{\Xi}(\operatorname{supp}(\nu))\), and \(\chi(hD_{X})\) a Fourier multiplier with a symbol supported in this neighborhood in the \(\Xi\) variable.

Figure 4. An illustration of Proposition 4.6. The non-closed geodesic \(\gamma_{\rho_{0}}\) is damped by some polyhedron in the direction \(\Xi_{0}^{\perp}\in F\). By density in \(F\), it enters that polyhedron. The dashed violet edges and paler-shade part of the polyhedron are below the plane \(F\). The dashed blue and red lines are inside the polyhedron.

Define \[v_{n}=\chi(hD_{X})u_{n}\] and denote \(\nu_{v}\) the corresponding \(1\)-microlocal measure. \(\nu_{v}\) satisfies \[\nu_{v}=\chi(\xi)^{2}\nu,\quad\nu_{v}(T^{*}\mathbb{T}^{d})\geq\nu(\gamma)>0,\quad(h_{n}^{2}\Delta+1)v_{n}=o(h_{n}), \tag{4.19}\] since \(h_{n}^{2}\Delta\) and \(\chi(hD_{x})\) commute. We have thus microlocalized around a single geodesic of \(\operatorname{supp}(\nu)\) with an isolated direction. Since the geodesic is closed, there exist some integers \(n_{1},\dots,n_{d}\in\mathbb{Z}\) such that \[\Xi_{0}=\frac{(n_{1}A_{1},\dots,n_{d}A_{d})}{\left(\sum_{j=1}^{d}n_{j}^{2}A_{j}^{2}\right)^{\frac{1}{2}}}.\] The next lemma explains the induction step which allows us to reduce the geodesic to one living in a \((d-1)\)-dimensional torus. It is an adaptation of [1, Lemma 2.2] to the \(d\)-dimensional setting (see also [1, Lemma 4.1]). 
**Lemma 4.7**.: \(-\) _Assume that \(n_{d}\) is non-zero. Consider \(u\) a \(\Gamma\)-periodic function and denote \(\Xi_{j}\) the \(j\)-th vector of the canonical basis of \(\mathbb{R}^{d}\) for every \(j\in\{1,\dots,d-2\}\). Denote_ \[\Xi_{d-1}=\frac{(0,\dots,0,n_{d-1}A_{d-1},n_{d}A_{d})}{S_{d-1}},\Xi_{d}=\frac{(0,\dots,0,-n_{d}A_{d},n_{d-1}A_{d-1})}{S_{d-1}},\\ S_{d-1}=\left(n_{d-1}^{2}A_{d-1}^{2}+n_{d}^{2}A_{d}^{2}\right)^{\frac{1}{2}}, \tag{4.20}\] _so that \((\Xi_{j})_{j=1,\dots,d}\) is a direct orthonormal basis and \(\Xi_{0}=\sum_{j=1}^{d-2}n_{j}A_{j}\Xi_{j}+S_{d-1}\Xi_{d-1}\). Set_ \[F_{d-1}(x)=\sum_{j=1}^{d}x_{j}\Xi_{j}.\] _Then_ \[u\circ F_{d-1}(x_{1}+k_{1}A_{1},\dots,x_{d-2}+k_{d-2}A_{d-2},x_{d-1}+k_{d-1}S_{d-1},x_{d}+k_{d}\alpha)=\\ u\circ F_{d-1}(x_{1},\dots,x_{d-2},x_{d-1}-k_{d}\beta,x_{d}),\forall k\in\mathbb{Z}^{d},x\in\mathbb{R}^{d}, \tag{4.21}\] _where \(p,q\in\mathbb{Z}\) are fixed so that \(qn_{d-1}\neq pn_{d}\) (which is possible since \(n_{d}\neq 0\)), and_ \[\alpha=\frac{qn_{d-1}A_{d-1}A_{d}-pn_{d}A_{d}A_{d-1}}{S_{d-1}},\quad\beta=\frac{pn_{d-1}A_{d-1}^{2}+qn_{d}A_{d}^{2}}{S_{d-1}},\quad\alpha\neq 0.\] Proof.: Since \(F_{d-1}\) preserves the first \(d-2\) vectors of the canonical basis and \(u\) is \(\Gamma\)-periodic, it is sufficient to find some \(\alpha,\beta\) such that for any \(k_{d-1},k_{d}\in\mathbb{Z}\) there exist \(p,q\in\mathbb{Z}\) satisfying \[k_{d-1}S_{d-1}\Xi_{d-1}+k_{d}\alpha\Xi_{d}+k_{d}\beta\Xi_{d-1}=\left(\begin{array}{c}0\\ \vdots\\ 0\\ pA_{d-1}\\ qA_{d}\end{array}\right) \tag{4.22}\] The \(S_{d-1}\) coefficient in the first term is chosen such that it is sufficient to deal with the case \(k_{d-1}=0\), \(k_{d}=1\). 
The last two lines of (4.22) then give an invertible \(2\times 2\) linear system which we solve by taking the scalar product with \(\Xi_{d-1}\) and \(\Xi_{d}\). We thereby get an orthonormal basis \((\Xi_{1},\ldots,\Xi_{d})\) and a corresponding coordinate system \((x_{1},\ldots,x_{d})\) such that \(u\) is \((A_{1}\mathbb{Z}\times\cdots\times A_{d-2}\mathbb{Z}\times S_{d-1}\mathbb{Z})\)-periodic wrt the \(d-1\) first variables, and the geodesic is contained in a \((d-1)\)-dimensional torus. Note that if the coefficient \(n_{d}\) is zero, the latter already occurs, so the assumption in the lemma is not restrictive. Since the non-concentration estimate requires periodicity of \(u_{n}\) in \(d-1\) orthogonal directions, we can apply it to \(u_{n}\circ F_{d-1}\). We get \[\|u_{n}\circ F_{d-1}\|_{L^{2}(\{|x_{d}|\leq h_{n}\epsilon(h_{n})^{-2}\})}\leq C\epsilon(h_{n})^{\frac{1}{2}}.\] Thus \[\|u_{n}\circ F_{d-1}\|_{L^{2}(F_{d-1}^{-1}(\gamma+B(0,h_{n}\epsilon(h_{n})^{-2})))}\leq C\epsilon(h_{n})^{\frac{1}{2}}, \tag{4.23}\] as a cylinder surrounding the closed geodesic is contained in any \((d-1)\)-dimensional slice of identical width. We now iterate the construction of Lemma 4.7 and show that it preserves the non-concentration estimate. We use the periodicity of \(u_{n}\) with respect to the \((d-1)\) first coordinates to apply the lemma in successive steps : we get \((d-2)\) orthonormal changes of coordinates \(F_{d-2},\ldots,F_{1}\) (possibly equal to identity if no change of coordinates is necessary) such that for every \(j\in\{1,\ldots,d-1\}\) :

* each \(F_{j}\) preserves the first \((j-1)\) vectors of the canonical basis,
* \(u_{n}\circ F_{d-1}\circ\ldots\circ F_{j}\) is periodic wrt its \(j\) first variables,
* and the image \(F_{j}^{-1}\circ\ldots\circ F_{d-1}^{-1}(\gamma)\) of the geodesic is contained in a torus of dimension \(j\).

Denote \(F=F_{d-1}\circ\ldots\circ F_{1}\). 
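The coefficients \(\alpha,\beta\) of Lemma 4.7 can be verified in exact arithmetic: scaling away the square root \(S_{d-1}\), condition (4.22) for \(k_{d-1}=0\), \(k_{d}=1\) reduces to a \(2\times 2\) linear identity in the last two coordinates. The sketch below (our own code, with an illustrative lattice \(A_{d-1}=2\), \(A_{d}=3\) and \(n_{d-1}=1\), \(n_{d}=2\)) checks it for a range of \(p,q\):

```python
from fractions import Fraction as Fr

A1, A2 = Fr(2), Fr(3)   # stand-ins for A_{d-1}, A_d
n1, n2 = 1, 2           # stand-ins for n_{d-1}, n_d (with n_d nonzero)
S2 = n1 * n1 * A1 * A1 + n2 * n2 * A2 * A2   # S_{d-1}^2, avoids the square root

for p in range(-3, 4):
    for q in range(-3, 4):
        # alpha and beta of Lemma 4.7, multiplied by S_{d-1}
        alphaS = q * n1 * A1 * A2 - p * n2 * A2 * A1
        betaS = p * n1 * A1 * A1 + q * n2 * A2 * A2
        # last two coordinates of S_{d-1}^2 * (alpha*Xi_d + beta*Xi_{d-1}) ...
        v1 = alphaS * (-n2 * A2) + betaS * (n1 * A1)
        v2 = alphaS * (n1 * A1) + betaS * (n2 * A2)
        # ... must equal S_{d-1}^2 * (p*A_{d-1}, q*A_d)
        assert v1 == S2 * p * A1 and v2 == S2 * q * A2
```

Using `Fraction` keeps the check exact, so no floating-point tolerance is needed.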
Since every change of variables is isometric, (4.23) becomes \[\|u_{n}\circ F\|_{L^{2}(F^{-1}(\gamma+B(0,h_{n}\epsilon^{-2}(h_{n}))))}\leq C \epsilon(h_{n})^{\frac{1}{2}}. \tag{4.24}\] Since \(F^{-1}(\gamma+B(0,h_{n}\epsilon^{-2}(h_{n})))=F^{-1}(\gamma)+B(0,h_{n} \epsilon^{-2}(h_{n}))\), the estimate (4.24) is exactly the one that allows to show that the \(2\)-microlocal measure is supported at infinity (see the proof of Proposition 4.4, point 3.). Besides, \(F^{-1}(\gamma)\) is a closed geodesic depending on only one cartesian coordinate and satisfying assumption (1.3), so that we are microlocally reduced to the case of subsection 4.2. The rest of the contradiction argument follows the same lines as previously, which completes the proof of Theorem 4. ### \(\zeta.\partial_{z}\) vector field over the sphere at infinity There remains to study the dynamics of vector field \(\zeta.\partial_{z}\) over the sphere at infinity \(\mathbb{S}_{\infty}^{2d-3}\) of \(\mathbb{R}_{z}^{d-1}\times\mathbb{R}_{\zeta}^{d-1}\), \(d\geq 3\), in order to prove Lemma 4.5. The proof relies on symplectic transformations which simplify the expression of the dynamics, and on passing to spherical coordinates (as is done on \(\mathbb{S}^{1}\) in [1, Proposition 3.5, 5]). First, we reduce the dynamics over \(\mathbb{S}_{\infty}^{2d-3}\) to dynamics over \(\mathbb{S}_{\infty}^{3}\) where only one angle coordinate is non-constant. We then show that the dynamics over \(\mathbb{S}_{\infty}^{2d-3}\) can be obtained by projecting onto the sphere at infinity the dynamics of \(\zeta.\partial_{z}\) at finite distance from the origin (a straight line traveled at constant velocity \(|\zeta(0)|\)). Denote \((z^{(0)},\zeta^{(0)})\) an initial point belonging to \(\mathbb{S}_{\infty}^{2d-3}\). Since \(\mathbb{S}^{2d-3}\) is a compact manifold, the flow of \(\zeta.\partial_{z}\) is defined for all times. 
We denote \((z(s),\zeta(s))\) the image of \((z^{(0)},\zeta^{(0)})\) by the flow of \(\zeta.\partial_{z}\) at time \(s\). The first result is the following : **Proposition 4.8**.: 1. _There exists an angle coordinate system_ \((\theta_{1},\ldots,\theta_{2d-3})\) _on the sphere at infinity in which the dynamics of_ \(\zeta.\partial_{z}\) _satisfy :_ (4.25) \[\dot{\theta_{1}}=-\cos(\theta_{2})\sin^{2}(\theta_{1})\quad;\quad\theta_{2}=\theta_{2}^{(0)}\quad;\quad\theta_{3}=\mathrm{cst}\text{ with }\cos\theta_{3}=\pm 1\quad;\quad\theta_{i}=0,\quad\forall i\geq 4.\] _In other words, in this coordinate system we have_ \(\zeta.\partial_{z}=-\cos(\theta_{2})\sin^{2}(\theta_{1})\partial_{\theta_{1}}\) _over the sphere at infinity._ 2. \(\theta_{1}(s)\to 0\mod\pi\) _as_ \(s\) _goes to infinity, so that_ \((z(s),\zeta(s))\to\left(\pm\frac{\zeta^{(0)}}{|\zeta^{(0)}|},0\right)\) _if_ \(\zeta^{(0)}\neq 0\)_. The_ \((z^{(0)},\zeta^{(0)})\) _such that_ \(\zeta^{(0)}=0\) _are exactly the fixed points of_ \(\zeta.\partial_{z}\)_._ Proof.: If \(\zeta^{(0)}=0\), \((z^{(0)},\zeta^{(0)})\) is a fixed point of the dynamics since we are solving a first order initial value problem with zero initial derivative. Thus, we assume \(\zeta^{(0)}\neq 0\). We introduce a cartesian coordinate system \((\tilde{z},\tilde{\zeta})\) such that \(\zeta.\partial_{z}=\tilde{\zeta}.\partial_{\tilde{z}}\), \(\tilde{\zeta}\) has zero coordinates except for the first one and \(\tilde{z}\) has zero coordinates except for the first two. Choosing a rotation \(R\in SO(d-1)\) such that \[\tilde{\zeta}^{(0)}:=R^{T}\zeta^{(0)}=\begin{pmatrix}\tilde{\zeta}_{1}^{(0)}\\ 0\\ \vdots\\ 0\end{pmatrix},\] we set \[\zeta=R\tilde{\zeta}\quad;\quad z=R\tilde{z}. \tag{4.26}\] This transformation leaves the vector field unchanged : \(\zeta.\partial_{z}=\tilde{\zeta}.\partial_{\tilde{z}}\). 
Then along the trajectory of \(\zeta.\partial_{z}\) starting from \((z^{(0)},\zeta^{(0)})\), \(\tilde{\zeta}_{i}(s)\) vanishes for every \(i\geq 2\) and all \(s\in\mathbb{R}\). Thus, \(\zeta.\partial_{z}=\tilde{\zeta}.\partial_{\tilde{z}}=\tilde{\zeta}_{1}\partial_{\tilde{z}_{1}}\) along this trajectory. Proceeding similarly with the \(d-2\) last coordinates of \(\tilde{z}^{(0)}\), we can guarantee that \(\tilde{z}_{3}^{(0)}=\ldots=\tilde{z}_{d-1}^{(0)}=0\) without changing the coordinates of \(\tilde{\zeta}^{(0)}\). Thus, \(\tilde{z}_{i}(s)=0\) for all \(s\in\mathbb{R}\) and every \(i\geq 3\). We now change to spherical coordinates \((r,(\theta_{j})_{1\leq j\leq 2d-3})\in\mathbb{R}_{+}^{*}\times[0,\pi]^{2d-4}\times\mathbb{S}^{1}\), allowing \(r\) to go to \(+\infty\). We set \(\tilde{z}_{1}=r\cos\theta_{1},\tilde{\zeta}_{1}=r\sin\theta_{1}\cos\theta_{2},\tilde{z}_{2}=r\sin\theta_{1}\sin\theta_{2}\cos\theta_{3}\), and more generally : \[\forall k\in\{1,\ldots,d-1\},\quad\tilde{z}_{k}=r\left(\prod_{i=1}^{2k-2}\sin\theta_{i}\right)\cos\theta_{2k-1}\quad;\\ \tilde{\zeta}_{k}=r\left(\prod_{i=1}^{2k-1}\sin\theta_{i}\right)\cos\theta_{2k}\ \text{if}\ k\leq d-2\quad;\quad\tilde{\zeta}_{d-1}=r\left(\prod_{i=1}^{2d-3}\sin\theta_{i}\right). \tag{4.27}\] Since all \((\tilde{\zeta}_{j})_{2\leq j\leq d-1}\) and \((\tilde{z}_{j})_{3\leq j\leq d-1}\) vanish, we are reduced to a situation where \(\sin\theta_{3}=0\) and \(\cos\theta_{3}=\pm 1\). We now compute \(\tilde{\zeta}_{1}\partial_{\tilde{z}_{1}}\) in the \((r,\theta)\) variables. 
Since none of the angles \(\theta_{i}\) depend on \(\tilde{z}_{1}\) except for \(\theta_{1}\), we get \[\tilde{\zeta}_{1}\partial_{\tilde{z}_{1}}=r\sin\theta_{1}\cos\theta_{2}\left( \frac{\partial r}{\partial\tilde{z}_{1}}\partial_{r}+\frac{\partial\theta_{1}} {\partial\tilde{z}_{1}}\partial_{\theta_{1}}\right).\] We compute both partial derivatives: \(\cos\theta_{1}=\frac{\tilde{z}_{1}}{(|\tilde{z}|^{2}+|\tilde{\zeta}|^{2})^{\frac {1}{2}}}\) gives \(\frac{\partial\theta_{1}}{\partial\tilde{z}_{1}}=-\frac{1}{r}\sin\theta_{1}\). \(\frac{\partial r}{\partial\tilde{z}_{1}}=\frac{\tilde{z}_{1}}{r}\) is homogeneous of order zero with respect to \(r\). Given a symbol \(q\) that is polyhomogeneous of order zero, \(r\partial_{r}q\) is polyhomogeneous of order \(-1\) so that the \(\partial_{r}\) term above vanishes as \(r\to+\infty\). Thus, over the sphere at infinity, we have \[\forall q\in S^{0},\quad\zeta.\partial_{z}q|_{\mathbb{S}^{2d-3}}=-\cos\theta_{2} \sin^{2}\theta_{1}\partial_{\theta_{1}}\tilde{q}(X,\Xi,r\to+\infty,\theta)\] where \(\tilde{q}\) is a notation for \(q\) in \((X,\Xi,r,\theta)\) coordinates. We now give an interpretation of the dynamics of this vector field as the projection onto the sphere at infinity of the flow map of \(\zeta.\partial_{z}\) starting from a point \((z^{(0)},\zeta^{(0)})\in\mathbb{S}^{2d-3}\). This result is illustrated in Figure 5. In the following statement, we denote interchangeably points belonging to the unit sphere and the sphere at infinity, as they are identified by projection.
**Proposition 4.9**.: \(-\) _Denote for all \(s\in\mathbb{R}\)_ \[\big{(}\bar{z}(s),\bar{\zeta}(s)\big{)}:=\frac{1}{\Big{(}|\tilde{z}^{(0)} +s\tilde{\zeta}^{(0)}|^{2}+|\tilde{\zeta}^{(0)}|^{2}\Big{)}^{\frac{1}{2}}} \left(\tilde{z}^{(0)}+s\tilde{\zeta}^{(0)},\tilde{\zeta}^{(0)}\right)\in \mathbb{S}^{2d-3}, \tag{4.28}\] _the projection onto the unit sphere of the flow of \(\zeta.\partial_{z}\) written in the coordinates (4.26), and denote by \((\tilde{\theta}_{j}(s))_{1\leq j\leq 2d-3}\) the angle coordinates of \(\big{(}\bar{z}(s),\bar{\zeta}(s)\big{)}\). Then_ \[\tilde{\theta}_{j}(s)=\theta_{j}(s),\quad 1\leq j\leq 2d-3,\ s\in\mathbb{R}, \tag{4.29}\] _where \((\theta_{j}(s))\) is the solution of the dynamics (4.25) of Proposition 4.8 with the same initial data._ Proof.: Recall that \((\tilde{z}^{(0)},\tilde{\zeta}^{(0)})\) has zero coordinates except maybe for \(\tilde{z}^{(0)}_{1},\tilde{\zeta}^{(0)}_{1},\tilde{z}^{(0)}_{2}\). Then by (4.28), every coordinate of \((\bar{z},\bar{\zeta})\) vanishes except maybe \(\bar{z}_{1},\bar{\zeta}_{1},\bar{z}_{2}\). The result then follows from the same passage to angle coordinates as in (4.27). Like before, the vanishing coordinates impose constraints on \(\tilde{\theta}_{3}\), so that we set \[\bar{z}_{1}=r\cos\tilde{\theta}_{1}\quad;\quad\bar{\zeta}_{1}=r\sin\tilde {\theta}_{1}\cos\tilde{\theta}_{2}\quad;\quad\bar{z}_{2}=r\sin\tilde{\theta }_{1}\sin\tilde{\theta}_{2}\cos\tilde{\theta}_{3}=\pm r\sin\tilde{\theta}_{1} \sin\tilde{\theta}_{2},\] where \(\tilde{\theta}_{1},\tilde{\theta}_{2}\in[0,\pi]\) and \(\tilde{\theta}_{3}\in[0,\pi]\) if \(d>3\) and \(\tilde{\theta}_{3}\in\mathbb{S}^{1}\) if \(d=3\). Conversely, writing \(p=d-1\) for the number of \(z\) coordinates, the \(\tilde{\theta}_{j}\)s satisfy: \[\tilde{\theta}_{1}=\arccos\left(\frac{\bar{z}_{1}}{\sqrt{\sum_{j=1}^{p} \bar{z}_{j}^{2}+\sum_{j=1}^{p}\bar{\zeta}_{j}^{2}}}\right),\quad\tilde{ \theta}_{2}=\arccos\left(\frac{\bar{\zeta}_{1}}{\sqrt{\sum_{j=2}^{p}\bar{z }_{j}^{2}+\sum_{j=1}^{p}\bar{\zeta}_{j}^{2}}}\right). \tag{4.30}\] Figure 5. Illustration of Proposition 4.9 about the transport by the vector field \(\zeta.\partial_{z}\) over the sphere at infinity.
\[\begin{split}\text{If }p>2:\quad\tilde{\theta}_{3}=& \arccos\left(\frac{\bar{z}_{2}}{\sqrt{\sum_{j=2}^{p}\bar{z}_{j}^{2}+\sum_{j=2}^{ p}\bar{\zeta}_{j}^{2}}}\right)=\quad 0\mod\pi.\\ \text{If }p=2:\quad\tilde{\theta}_{3}=&\arccos \left(\frac{\bar{z}_{2}}{\sqrt{\sum_{j=2}^{p}\bar{z}_{j}^{2}+\sum_{j=2}^{p}\bar{\zeta}_{j} ^{2}}}\right)\text{ if }\bar{\zeta}_{2}\geq 0,\\ & 2\pi-\arccos\left(\frac{\bar{z}_{2}}{\sqrt{\sum_{j=2}^{ p}\bar{z}_{j}^{2}+\sum_{j=2}^{p}\bar{\zeta}_{j}^{2}}}\right)\text{ otherwise.}\end{split} \tag{4.31}\] Only the first coordinate of \(\tilde{\zeta}^{(0)}\) is non-zero so by (4.28), \(\tilde{\theta}_{2}\) and \(\tilde{\theta}_{3}\) are time-independent. Since the \(\theta_{j}\)s and \(\tilde{\theta}_{j}\)s have identical initial values, we need to show that \[\frac{d}{dt}\tilde{\theta}_{1}(t)=-\cos\tilde{\theta}_{2}\sin^{2}\tilde{ \theta}_{1}(t)\] to conclude. We differentiate the expression of \(\cos(\tilde{\theta}_{1}(t))\) to get \[-\sin(\tilde{\theta}_{1}(t))\frac{d\tilde{\theta}_{1}}{dt}(t)=\frac{(|\tilde {z}^{(0)}+t\tilde{\zeta}^{(0)}|^{2}+|\tilde{\zeta}^{(0)}|^{2})\tilde{\zeta}_{1} ^{(0)}-(\tilde{z}_{1}^{(0)}+t\tilde{\zeta}_{1}^{(0)})(\tilde{\zeta}^{(0)}.( \tilde{z}^{(0)}+t\tilde{\zeta}^{(0)}))}{(|\tilde{z}^{(0)}+t\tilde{\zeta}^{(0)} |^{2}+|\tilde{\zeta}^{(0)}|^{2})^{\frac{3}{2}}}.\] Simplifying this expression by removing vanishing coordinates and using the expression of \(\cos(\tilde{\theta}_{2})\), we get \[-\sin(\tilde{\theta}_{1}(t))\frac{d\tilde{\theta}_{1}}{dt}(t) =\cos\tilde{\theta}_{2}(t)\left(\frac{(\tilde{\zeta}_{1}^{(0)})^ {2}+\sum_{j=2}^{p}(\tilde{z}_{j}^{(0)})^{2}}{|\tilde{z}^{(0)}+t\tilde{\zeta}^ {(0)}|^{2}+|\tilde{\zeta}^{(0)}|^{2}}\right)^{\frac{3}{2}}\] \[=\cos\tilde{\theta}_{2}(t)(1-\cos^{2}\tilde{\theta}_{1}(t))^{ \frac{3}{2}}=\cos\tilde{\theta}_{2}(t)\sin^{3}\tilde{\theta}_{1}(t),\] hence the result. ## 5. A more precise example in dimension 3 Figure 6. The damping \(a\) is equal to \(1\) on the blue zone.
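The angular dynamics established in Propositions 4.8 and 4.9 drive the contradiction arguments used throughout this section. As a quick sanity check (ours, not part of the original argument), the following sketch verifies numerically, for \(d=3\) and arbitrary reduced initial data, that the angle \(\theta_{1}\) of the normalized trajectory (4.28) solves the ODE of (4.25) and tends to \(0\) mod \(\pi\).

```python
import math

# Reduced initial data as in the proofs above (d = 3): only the coordinates
# z1, zeta1, z2 may be nonzero.  The values are arbitrary test data.
z1_0, z2_0, zeta1_0 = 0.7, -1.3, 0.5

def theta1(t):
    """Angle theta_1 of the normalized trajectory (4.28) at time t."""
    z1t = z1_0 + t * zeta1_0
    c = z1t / math.sqrt(z1t**2 + z2_0**2 + zeta1_0**2)
    return math.acos(max(-1.0, min(1.0, c)))  # clamp against rounding

# theta_2 is time-independent, cf. (4.30) with the vanishing coordinates.
theta2 = math.acos(zeta1_0 / math.sqrt(z2_0**2 + zeta1_0**2))

# The ODE of (4.25): d(theta_1)/dt = -cos(theta_2) sin(theta_1)^2,
# checked by a centered finite difference at a few sample times.
h = 1e-6
for t in (-2.0, 0.0, 1.0, 5.0):
    lhs = (theta1(t + h) - theta1(t - h)) / (2 * h)
    rhs = -math.cos(theta2) * math.sin(theta1(t)) ** 2
    assert abs(lhs - rhs) < 1e-6, (t, lhs, rhs)

# Long-time limit: theta_1 -> 0 (mod pi); here zeta1_0 > 0, so theta_1 -> 0.
assert theta1(1e8) < 1e-6
```

Flipping the sign of `zeta1_0` drives `theta1` to \(\pi\) instead, matching the \(0\bmod\pi\) dichotomy of Proposition 4.8.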
The flat torus \(\mathbb{T}^{3}\) is identified with a cube: \(\mathbb{T}^{3}\simeq[-1,1]^{3}/(X\sim X+2e_{i}).\) The four cubes on the right fit into the hollow part of the bigger cube. The coordinate system is still \(X=(x_{1},x_{2},y)=(x,y),\Xi=(\xi_{1},\xi_{2},\eta)=(\xi,\eta)\). As pointed out in the introduction, the sufficient condition of Theorem 4 is also necessary in dimension 2. The purpose of the present section is to investigate how this condition can be weakened on higher-dimensional tori. We do this by proving stabilization in the case where the damping \(a\) equals 1 on the blue zone pictured in Figure 6. Compared to the model case of section 4.2, the four prisms in the tunnel are reduced to cubes. In other words, we take \(\alpha_{L}=\alpha_{R}=\alpha_{T}=\alpha_{B}=0\) in (4.8). Two main differences emerge in this example compared to the model case of Figure 3. First, the bicharacteristics which do not intersect \(\operatorname{Int}(\operatorname{supp}(a))\) are the red bicharacteristic \(\{x=0\}\), and parallel ones which graze the faces of the cubes and are contained in the same horizontal and vertical planes as the red one, or more explicitly \[\{(x_{1}=0,|x_{2}|\leq 1/2)\text{ or }(|x_{1}|\leq 1/2,x_{2}=0),\quad\xi=0,\quad \eta=\pm 1\}. \tag{5.1}\] Second, none of these geodesics (in particular not the red one) satisfies Assumption (1.3). The geodesic \(\{x=0\}\) has four undamped directions while the others have two. In section 5.1 we tackle the first issue. By dropping the \((z_{1},\zeta_{1})\) or \((z_{2},\zeta_{2})\) variable in the second quantization of section 4.2, we construct 2-microlocal measures \(\mu_{1},\mu_{2}\) in each direction \(x_{1},x_{2}\), such that the second microlocal measure \(\mu\) is absolutely continuous with respect to each. We then show that \(\mu\) has no support on any geodesic except the red one.
In section 5.2, we perform a third microlocalization around the red geodesic to show that the sequence \(u_{n}\) of quasimodes cannot concentrate around it either. We conclude by discussing the scope of this example and some obstacles to generalizing it. ### Directional second microlocalizations For a given \(j\in\{1,2\}\), the second quantization in the direction \(x_{j}\) of a symbol \(b_{j}(X,\Xi,z_{j},\zeta_{j})\) is defined by \[\operatorname{Op}_{h}^{(j)}(b_{j})u(X)=\frac{1}{(2\pi)^{3}}\int_{\mathbb{R}_{ Y}^{3}\times\mathbb{R}_{\Xi}^{3}}e^{i(X-Y).\Xi}b_{j}\left(X,h\Xi,\frac{\epsilon(h)} {h^{\frac{1}{2}}}x_{j},\epsilon(h)h^{\frac{1}{2}}\Xi_{j}\right)u(Y)dYd\Xi. \tag{5.2}\] This quantization is a specific case of (4.2) with \(d=3\), \(q=1\), so \(u_{n}\) admits two 2-microlocal measures \(\mu_{1}\), \(\mu_{2}\), associated with the quantizations \(\operatorname{Op}_{h}^{(1)}\), \(\operatorname{Op}_{h}^{(2)}\). \(\mu_{1}\) (resp. \(\mu_{2}\)) measures concentration in the horizontal (resp. vertical) direction, at the 2-microlocal scale \(h^{\frac{1}{2}}\epsilon(h)^{-1}\). We also define the second microlocal measure \(\mu\) as before. The properties of the \(\mu_{j}\)s and of \(\mu\) are summarized in the following statement: **Proposition 5.1** (Properties of \(\mu_{1}\) and \(\mu_{2}\)).: \(-\) _The directional 2-microlocal measures have the following properties:_ 1. _Assume that_ \((h_{n}^{2}\Delta+1)u_{n}=\mathcal{O}_{L^{2}}(1)\)_, then_ \(\mu_{1}\)_,_ \(\mu_{2}\) _and_ \(\mu\) _are probability measures._ 2. _Assume that_ \((h_{n}^{2}\Delta+1)u_{n}=\mathcal{O}_{L^{2}}(h_{n})\) _and_ \(\|au_{n}\|_{L^{2}}=o(1)\)_, and define the projection_ \(\pi_{j}(X,\Xi,z,\zeta)=(X,\Xi,z_{j},\zeta_{j})\) _for_ \(j=1,2\)_. Then,_ \(\mu_{j}\) _is the pushforward measure of_ \(\mu\) _by the projection_ \(\pi_{j}\)_.
Besides, the pushforward measure of each measure_ \(\mu\)_,_ \(\mu_{1}\)_,_ \(\mu_{2}\) _by the projection onto the_ \((X,\Xi)\) _variables is the first microlocal measure_ \(\nu\)_._ _Thus,_ (5.3) \[\operatorname{supp}(\mu)\subset\\ \{(X,\Xi,z,\zeta):(x_{1}=0,|x_{2}|\leq 1/2)\text{ or }(|x_{1}|\leq 1/2,x_{2}=0), \xi=0,\eta=\pm 1\}\] _and the same inclusions hold for_ \(\operatorname{supp}(\mu_{j}),j=1,2\)_, replacing_ \((z,\zeta)\) _with_ \((z_{j},\zeta_{j})\) 3. _Assume now that_ \((h_{n}^{2}\Delta+1)u_{n}=\mathcal{O}(h_{n}\epsilon(h_{n}))_{L^{2}}\)_, then each measure_ \(\mu_{j}\) _is supported in the sphere at infinity_ \(\mathbb{S}^{1}_{\infty}\) _in the_ \((z_{j},\zeta_{j})\) _variables._ 4. _Along the geodesics that graze one of the faces of the four damped cubes in Figure_ 6_, directional 2-microlocal measures vanish 2-microlocally in the direction of the cube. More precisely,_ \(\mu_{1}\) _satisfies:_ (5.4) \[\mu_{1}\left\{x_{1}=0,x_{2}\in\left[-\frac{1}{2},0\right),y\in \left(-1,-\frac{1}{2}\right),\xi=0,\eta=\pm 1,z_{1}>0\right\} =0,\] \[\mu_{1}\left\{x_{1}=0,x_{2}\in\left(0,\frac{1}{2}\right],y\in \left(-\frac{1}{2},0\right),\xi=0,\eta=\pm 1,z_{1}>0\right\} =0,\] \[\mu_{1}\left\{x_{1}=0,x_{2}\in\left(0,\frac{1}{2}\right],y\in \left(0,\frac{1}{2}\right),\xi=0,\eta=\pm 1,z_{1}<0\right\} =0,\] \[\mu_{1}\left\{x_{1}=0,x_{2}\in\left[-\frac{1}{2},0\right),y\in \left(\frac{1}{2},1\right),\xi=0,\eta=\pm 1,z_{1}<0\right\} =0,\] _and similar vanishings hold for_ \(\mu_{2}\)_, which we do not state for the sake of conciseness._ 5. _Consider_ \(j\in\{1,2\}\)_, then the 2-microlocal measure_ \(\mu_{j}\) _satisfies the conservation law_ (5.5) \[(\eta\partial_{y}+\zeta_{j}\partial_{z_{j}})\mu_{j}=0.\] _Since_ \(\mu_{j}\) _is supported in the sphere at infinity in the_ \((z_{j},\zeta_{j})\) _variables, we can use polar coordinates_ \((z_{j},\zeta_{j})=(r\cos\theta,r\sin\theta)\)_,_ \(r\to+\infty\) _with_ \(\theta\in\mathbb{R}/2\pi\mathbb{Z}\)_.
In these coordinates, equation (_5.5_) becomes_ (5.6) \[(\eta\partial_{y}-\sin^{2}(\theta)\partial_{\theta})\mu_{j}=0.\] 6. _The two preceding items imply that_ (5.7) \[\mathrm{supp}(\mu_{1}) \subset\{(X,\Xi,z,\zeta):|x_{1}|\leq\frac{1}{2},x_{2}=0,\xi=0, \eta=\pm 1,(z_{1},\zeta_{1})\in\mathbb{S}^{1}_{\infty}\}\] \[\mathrm{supp}(\mu_{2}) \subset\{(X,\Xi,z,\zeta):x_{1}=0,|x_{2}|\leq\frac{1}{2},\xi=0, \eta=\pm 1,(z_{2},\zeta_{2})\in\mathbb{S}^{1}_{\infty}\}\] _Since the support of_ \(\mu\) _is a subset of those of_ \(\mu_{1}\) _and_ \(\mu_{2}\)_, we get that_ \(\mu\) _is supported in the undamped great circles of the sphere at infinity_ \(\mathbb{S}^{3}_{\infty}\) _along the red bicharacteristics:_ (5.8) \[\mathrm{supp}(\mu)\subset\{x=0,\xi=0,\eta=\pm 1,(z_{1},\zeta_{1})=(0,0)\text{ or }(z_{2},\zeta_{2})=(0,0)\}.\] Proof.: The first point derives from \(h_{n}\)-oscillation. As in the proof of the second point of Proposition 4.4, we have that \[\langle\mu_{j},b\rangle=\langle\mu,b\circ\pi_{j}\rangle\] for any symbol \(b(X,\Xi,z_{j},\zeta_{j})\). The rest of the proof of the second point is identical. Concerning (3), we consider a function \(\chi\in C^{\infty}_{o}(T^{*}\mathbb{T}^{3}\times\mathbb{R}^{2}_{z_{j},\zeta_ {j}})\). There exists some \(A>0\) such that \(\chi\) is supported in the set \(\{|z_{j}|\leq A\}\). Then by Proposition 3.1, \[|\langle\operatorname{Op}^{(j)}_{h_{n}}(\chi)u_{n},u_{n}\rangle_{L^{2}}| =|\langle\operatorname{Op}^{(j)}_{h_{n}}(\chi)u_{n},\mathbbm{1}_{\{|x_{j}| \leq Ah_{n}^{\frac{1}{2}}\epsilon^{-1}(h_{n})\}}u_{n}\rangle_{L^{2}}|\] \[\leq\|\,\operatorname{Op}^{(j)}_{h_{n}}(\chi)u_{n}\|_{L^{2}}\|u_{n}\|_{L^{2}( \{|x_{j}|\leq Ah_{n}^{\frac{1}{2}}\epsilon^{-1}(h_{n})\})}\] \[\leq C\epsilon^{\frac{1}{2}}(h_{n})\to_{n\to+\infty}0. \tag{5.9}\] We now prove the first 2-microlocal vanishing of \(\mu_{1}\). All other vanishings are shown similarly.
We thus set out to show that \[\mu_{1}\left\{x_{1}=0,x_{2}\in\left[-\frac{1}{2},0\right),y\in\left(-1,-\frac{1}{ 2}\right),\xi=0,\eta=\pm 1,z_{1}>0\right\}=0.\] Consider some positive \(\delta\) and \(\delta_{0}\) and the symbol \[b_{1}(X,\Xi,z_{1},\zeta_{1})=\chi_{1}\left(\frac{x_{1}}{\delta}\right)\chi_{2 }(x_{2})\chi_{3}(y)\chi_{1}(|\xi|)\chi_{1}(\eta-1)\psi\left(\frac{z_{1}}{\delta_ {0}|\zeta_{1}|}\right)\psi(|z_{1}|^{2}+|\zeta_{1}|^{2}) \tag{5.10}\] where \(\chi_{i}\), \(i=1,2,3\) and \(\psi\) are \([0,1]\)-valued cutoff functions in \(C^{\infty}(\mathbb{R})\) such that: * \(\chi_{1}\) equals \(1\) over \([-\frac{1}{2},\frac{1}{2}]\) and is supported in \([-1,1]\), * \(\chi_{2}\) equals \(1\) over \([-\frac{1}{2},-\delta_{0}]\) and is supported in \([-\frac{1}{2}-\delta_{0},0]\), * \(\chi_{3}\) equals \(1\) over \([-1+\delta_{0},-\frac{1}{2}-\delta_{0}]\) and is supported in \([-1,-\frac{1}{2}]\). * \(\psi\) equals \(1\) over \([1,+\infty)\) and is supported inside \([\frac{1}{2},+\infty)\). We then proceed as in Proposition 4.4 to prove that \(\langle\mu_{1},b_{1}\rangle=0\) and deduce the \(2\)-microlocal vanishing from the limit \(\delta_{0}\to 0\), giving the fourth point. In the fifth point, (5.5) follows from computing the bracket \([h_{n}^{2}\Delta+1,\operatorname{Op}_{h_{n}}^{(j)}(q)]\) and passing to the limit \(h_{n}\to 0\). Indeed, (4.15) becomes \[\frac{1}{2ih_{n}}[h_{n}^{2}\Delta+1,\operatorname{Op}_{h_{n}}^{( j)}(q)]=\frac{1}{2i}[h_{n}\Delta,\operatorname{Op}_{h_{n}}^{(j)}(q)]\\ =\operatorname{Op}_{h_{n}}^{(j)}((\xi.\partial_{x}+\eta\partial_{y}+ \zeta_{j}\partial_{z_{j}})q)-i\frac{h_{n}}{2}\operatorname{Op}_{h_{n}}^{(j)}(\Delta _{X}q)-i\frac{h_{n}}{2}(\epsilon(h_{n})h_{n}^{-\frac{1}{2}})\operatorname{Op} _{h_{n}}^{(j)}((\partial_{x_{j}}.\partial_{z_{j}})q)\\ -i\frac{h_{n}}{2}(\epsilon(h_{n})h_{n}^{-\frac{1}{2}})^{2} \operatorname{Op}_{h_{n}}^{(j)}(\partial_{z_{j}}^{2}q). \tag{5.11}\] (5.5) then derives from the same argument as in the proof of (4.11).
Thus for any polyhomogeneous symbol \(q_{j}\) of degree \(0\), \[\begin{split}\zeta_{j}\partial_{z_{j}}q_{j}&=\lim_{r \rightarrow+\infty}(-\sin^{2}(\theta)\partial_{\theta}+r\cos\theta\sin\theta \partial_{r})\tilde{q}(X,\Xi,r,\theta)\\ &=-\sin^{2}(\theta)\partial_{\theta}\lim_{r\rightarrow+\infty} \tilde{q}(X,\Xi,r,\theta).\end{split} \tag{5.12}\] Note that this computation from [11, p. 640] is the equivalent on \(\mathbb{S}^{1}\) of Proposition 4.8 and Lemma 4.5. Lastly, we prove the first inclusion of (5.7) by contradiction, as was done in subsection 4.2. Assume that \(\operatorname{supp}(\mu_{1})\) contains a point \((x,y,\xi=0,\eta=\pm 1,\theta)\) with \(x_{1}=0\), \(x_{2}\in[-\frac{1}{2},\frac{1}{2}]\setminus\{0\}\) and \(\theta\) defined by \((z_{1},\zeta_{1})=r(\cos\theta,\sin\theta)\), \(r\rightarrow+\infty\). We denote by \(\phi_{s}(\theta_{0})\) the flow of \(\dot{\theta}=-\sin^{2}\theta\) at time \(s\) starting from a point \(\theta_{0}\). The conservation law for \(\mu_{1}\) gives that \[(x,y+s\eta,\xi=0,\eta,\phi_{s}(\theta)) \tag{5.13}\] also belongs to \(\operatorname{supp}(\mu_{1})\) for any \(s\in\mathbb{R}\). A quick study of the flow of equation \(\dot{\theta}=-\sin^{2}\theta\) (see [11, p. 641]) shows that \(\phi_{s}(\theta_{0})\) converges to \(0\ (\operatorname{mod}\,\pi)\), so that the corresponding \((z_{1}(s),\zeta_{1}(s))\) converges to \((\pm 1,0)\) on the sphere at infinity as \(s\) goes to \(+\infty\). For \(s\) large enough, this means that a point of the form (5.13) belongs to one of the zones where \(\mu\) vanishes 2-microlocally, which gives a contradiction. The proof for \(\operatorname{supp}(\mu_{2})\) is identical. We come to the localization of \(\operatorname{supp}(\mu)\). Like in Section 4.2, \(\mu\) satisfies the conservation law (4.11) and the \(2\)-microlocal vanishings (4.10) with \(\alpha_{R}=\alpha_{T}=\alpha_{L}=\alpha_{B}=0\).
By Lemma 4.5 and the contradiction argument of section 4.2, \(\mu\) is supported in great circles of the sphere at infinity \(\mathbb{S}^{3}_{z,\zeta}\) that do not intersect any of the four quadrants \(\{\varepsilon_{1}z_{1}>0,\varepsilon_{2}z_{2}>0\}\), \(\varepsilon_{1},\varepsilon_{2}\in\{-1,1\}\). This implies inclusion (5.8). ### Third microlocalization near isolated undamped normal directions To conclude the contradiction argument and show that the second microlocal measure \(\mu\) vanishes everywhere, we need to show that it has no weight over the undamped directions \((z_{1},\zeta_{1})=(0,0)\) and \((z_{2},\zeta_{2})=(0,0)\) of the sphere at infinity. To do this, we perform a third microlocalization which allows us to add inhomogeneity in the symbols considered and to split \(\mu\) into two 2-microlocal measures \(\mu_{+},\mu_{-}\) supported in each hemisphere of the sphere at infinity. As an example, we focus on the undamped direction \((z_{1},z_{2},\zeta_{1},\zeta_{2})=(z_{1},0,\zeta_{1},0)\) with \(z_{1}>0\). Under the flow of \(\zeta.\partial_{z}\) projected onto the sphere at infinity, \(\zeta_{1}\) becomes arbitrarily small for long times, so that we want to localize on the region of the sphere where \(z_{1}\gg|\zeta|\) and \(z_{1}\gg|z_{2}|\). We denote by \(b_{-}\) a symbol localizing in this region: \[b_{-}(X,\Xi,z,\zeta)=\chi_{\delta}(|x|)\chi_{\delta}(|\xi|)\tilde{\chi}(y) \chi_{\delta}(\eta-1)\psi\left(\frac{z_{1}}{\alpha|\zeta|}\right)\chi_{\alpha} \left(\frac{z_{2}}{z_{1}}\right)\psi(|z|^{2}+|\zeta|^{2}), \tag{5.14}\] where all cut-offs are smooth functions and satisfy: * \(\chi_{\delta}\) equals 1 over \([-\frac{\delta}{2},\frac{\delta}{2}]\) and 0 out of \([-\delta,\delta]\). \(\chi_{\alpha}\) is the same with a different small parameter. * \(\tilde{\chi}\) equals 1 over \([-1+\delta_{0},-\frac{1}{2}-\delta_{0}]\) and 0 out of \([-1,-\frac{1}{2}]\). * \(\psi\) equals 1 over \([1,+\infty)\) and 0 over \((-\infty,\frac{1}{2}]\).
\(b_{-}\) belongs to the smooth polyhomogeneous symbol class \(S^{0}_{H}\) of the 2-microlocal pseudodifferential calculus. The third microlocalization consists in multiplying \(\operatorname{Op}_{h_{n}}(b_{-})\) by \(\psi(\pm\epsilon^{\frac{1}{2}}(h)z_{2})\). This allows us to localize both near the undamped direction and in the upper and lower damped zones. The procedure is explained in the following statement and illustrated in Figure 7: **Proposition 5.2**.: * _Denote_ \(\psi_{\pm}(x_{2})=\psi\left(\pm\frac{\epsilon^{\frac{3}{2}}(h)}{h^{\frac{1}{2 }}}x_{2}\right)\)_. Denote also_ \(\psi_{\pm}\) _the associated multiplication operators. Then there exist two positive Radon measures_ \(\mu_{\pm}\) _such that for every_ \(b\in S^{0}_{H}\)_,_ \[\langle\psi_{\pm}\operatorname{Op}_{h_{n}}(b)u_{n},u_{n}\rangle_{L^{2}} \rightarrow_{n\rightarrow+\infty}\langle\mu_{\pm},b\rangle.\] * _For any 2-microlocal symbol_ \(b\in S^{0}_{H}\)_, we have_ \[\langle(\psi_{+}+\psi_{-})\operatorname{Op}_{h_{n}}(b)u_{n},u_{n}\rangle_{L^{ 2}}\rightarrow_{n\rightarrow+\infty}\langle\mu,b\rangle,\] _so that_ \(\mu=\mu_{+}+\mu_{-}\)_. In particular,_ \(\operatorname{supp}(\mu_{\pm})\subset\operatorname{supp}(\mu)\)_._ * \(\mu_{+}\) _and_ \(\mu_{-}\) _are supported in the sphere at infinity in the_ \((z,\zeta)\) _variables. Besides,_ \(\mu_{+}\) _is supported in the_ \(\{z_{2}\geq 0\}\) _closed hemisphere of the sphere at infinity and_ \(\mu_{-}\) _is supported in the_ \(\{z_{2}\leq 0\}\) _hemisphere._ * _The_ \(\mu_{\pm}\) _measures satisfy the conservation laws_ (5.15) \[\mathbb{1}_{\pm z_{2}\geq 0}\left(\eta\partial_{y}+\zeta.\partial_{z}\right)\mu_{ \pm}=0.\] _Since_ \(\operatorname{supp}(\mu_{\pm})\subset\{\pm z_{2}\geq 0\}\)_, the characteristic functions can be omitted so the_ \(\mu_{\pm}\) _satisfy the same conservation law (_4.11_) as_ \(\mu\) 5. _Consider the symbol_ \(b_{-}\) _given by (_5.14_); then_ \(\langle\mu_{-},b_{-}\rangle=0\)_.
Similarly, replacing_ \(\tilde{\chi}\) _by a function equal to 1 over_ \([-\frac{1}{2}+\delta_{0},-\delta_{0}]\) _and supported in_ \([-\frac{1}{2},0]\) _in (_5.14_) defines a symbol_ \(b_{+}\) _such that_ \(\langle\mu_{+},b_{+}\rangle=0\)_. Thus for some small enough positive_ \(\alpha\)_,_ (5.16) \[\begin{split}\mu_{-}\left\{x=0,y\in\left(-1,-\frac{1}{2}\right), \xi=0,\eta=\pm 1,z_{1}>\frac{2}{\alpha}|z_{2}|,z_{1}>\alpha|\zeta|\right\}& =0,\\ \mu_{+}\left\{x=0,y\in\left(-\frac{1}{2},0\right),\xi=0,\eta=\pm 1,z_ {1}>\frac{2}{\alpha}|z_{2}|,z_{1}>\alpha|\zeta|\right\}&=0.\end{split}\] Proof.: The \(\psi_{\pm}\) are multiplication operators by smooth, uniformly bounded functions so they are \(h\)-uniformly bounded operators over \(L^{2}_{ul}\). \(\operatorname{Op}_{h_{n}}(b)\) is also a uniformly bounded operator. Besides, \(\psi_{\pm}\operatorname{Op}_{h_{n}}(b)=\operatorname{Op}_{h_{n}}(\psi(\epsilon ^{\frac{1}{2}}z_{2})b)\) is the 2-microlocal quantization of a function in the generalized symbol class \(S^{0}\) of our 2-microlocal calculus. Since \[b\geq 0\Rightarrow\psi(\epsilon(h)^{\frac{1}{2}}z_{2})b\geq 0,\] the Gårding inequality (4.6) still holds. The classical proof of existence of semiclassical measures then gives existence and positivity of the Radon measures \(\mu_{\pm}\), hence the first point. Figure 7. On the left-hand side, we show the \(|z_{2}|\ll|z_{1}|\) zone of the sphere at infinity where the \(\chi_{\alpha}(\frac{z_{2}}{z_{1}})\) cutoff localizes. On the right-hand side, we display the supports of the \(\psi_{+}\) and \(\psi_{-}\) cutoffs over the sphere at infinity (in the \((z_{1},z_{2})\) variables). The violet circle in the horizontal plane represents the \(\{z_{2}=0,\zeta_{2}=0\}\) great circle of \(\mathbb{S}^{3}\) (the third coordinate is then \(\zeta_{1}\)). The goal of the third microlocalization is to deal with points of \(\operatorname{supp}(\mu)\) belonging to this great circle.
The flow of \(\zeta.\partial_{z}\) transports any point in this great circle close to one of the two violet disks \((z_{1},z_{2})\ =\ (\pm 1,0)\) of \(\mathbb{S}^{3}_{\infty}\) as time goes to infinity. The goal of the third microlocalization is to distinguish points \((z_{1},z_{2})=(\pm 1,0^{+})\) and \((\pm 1,0^{-})\). The second point derives from the non-concentration estimate of Proposition 3.1. The function \(1-\left(\psi\left(\frac{\epsilon^{\frac{3}{2}}(h)x_{2}}{h^{\frac{1}{2}}}\right)+ \psi\left(-\frac{\epsilon^{\frac{3}{2}}(h)x_{2}}{h^{\frac{1}{2}}}\right)\right)\) is indeed supported in the \(\{|x_{2}|\ \leq\ h^{\frac{1}{2}}\epsilon(h)^{-\frac{3}{2}}\}\) slice. As in the previous proof, this yields \[\langle(1-(\psi_{+}+\psi_{-}))\operatorname{Op}_{h_{n}}(b)u_{n},u_{n}\rangle \rightarrow_{n\rightarrow+\infty}0,\] hence the result. Thus, \(\mu_{\pm}\ll\mu\) and the \(3\)-microlocal measures are supported in the sphere at infinity in the \((z,\zeta)\) variables. To prove the second part of statement (3), consider a symbol \(b\) in \(S^{0}_{H}\) supported in the \(\{z_{2}>0\}\) (resp. \(\{z_{2}<0\}\)) part of the sphere at infinity. Then \(\psi_{-}b\) (resp. \(\psi_{+}b\)) vanishes in a neighborhood of the sphere at infinity, so that \(\langle\mu_{-},b\rangle=0\) (resp. \(\langle\mu_{+},b\rangle=0\)). 
We derive point (4) from computing the bracket \(\frac{1}{2ih}[h^{2}\Delta+1,\psi_{\pm}\operatorname{Op}_{h}(b)]\): \[\frac{1}{2ih}[h^{2}\Delta+1,\psi\left(\pm\frac{\epsilon^{\frac{3} {2}}(h)}{h^{\frac{1}{2}}}x_{2}\right)\operatorname{Op}_{h}(b)]u\quad=\frac{h} {2i}\Delta(\psi_{\pm}\operatorname{Op}_{h}(b)u)-\psi_{\pm}\operatorname{Op} _{h}(b)\left(\frac{h}{2i}\Delta u\right)\] \[=\frac{h}{2i}\left(\frac{\epsilon^{3}(h)}{h}\psi_{\pm}^{{}^{ \prime\prime}}\operatorname{Op}_{h}(b)u\pm 2\frac{\epsilon^{\frac{3}{2}}(h)}{h^{\frac{1}{2}}} \psi_{\pm}^{{}^{\prime}}\partial_{x_{2}}(\operatorname{Op}_{h}(b)u)+\psi_{ \pm}[\Delta,\operatorname{Op}_{h}(b)]u\right)\] \[=\frac{1}{2i}\left(\psi_{\pm}^{{}^{\prime\prime}}\epsilon^{3}(h) \operatorname{Op}_{h}(b)u\pm 2\psi_{\pm}^{{}^{\prime}}\frac{\epsilon^{\frac{3}{2}}(h)}{h^{ \frac{1}{2}}}\operatorname{Op}_{h}\left(\left(i\xi_{2}+h\partial_{x_{2}}+\epsilon(h)h^{\frac{1}{2}}\partial_{z_{2}}\right)b\right)u+\psi_{\pm}[h\Delta, \operatorname{Op}_{h}(b)]u\right)\] \[=\pm\epsilon^{\frac{1}{2}}(h)\psi_{\pm}^{\prime}\operatorname{Op}_ {h}(\zeta_{2}b)u+\psi\left(\pm\frac{\epsilon^{\frac{3}{2}}(h)}{h^{\frac{1}{2}} }x_{2}\right)\operatorname{Op}_{h}((\eta\partial_{y}+\xi.\partial_{x}+\zeta. \partial_{z})b)u+l.o.t. \tag{5.17}\] The lower order terms are the second derivatives of \(\psi_{\pm}\) and the remaining terms of the bracket (4.15). They vanish as \(h_{n}\) goes to zero. The first term also does. Over the sphere at infinity, \(\psi\left(\pm\frac{\epsilon^{\frac{3}{2}}(h)}{h^{\frac{1}{2}}}x_{2}\right)\) is equal to \(\mathbb{1}_{\pm z_{2}\geq 0}\), giving the result.
To prove the last point, fix some small \(\alpha>0\) independent of \(h\) and consider the symbol \(b_{-}\) given by (5.14), then for any \(n\), \(\operatorname{supp}(\psi_{-}\operatorname{Op}_{h_{n}}(b_{-})u_{n})\subset\{x_ {1}>0,x_{2}<0,y\in(-1,-\frac{1}{2})\}\) so that \[\langle\psi_{-}\operatorname{Op}_{h_{n}}(b_{-})u_{n},u_{n}\rangle =\langle 1_{\left\{x_{1}>0,x_{2}<0,y\in(-1,-\frac{1}{2})\right\}} \psi_{-}\operatorname{Op}_{h_{n}}(b_{-})u_{n},u_{n}\rangle\] \[=\langle\psi_{-}\operatorname{Op}_{h_{n}}(b_{-})u_{n},1_{\left\{ x_{1}>0,x_{2}<0,y\in(-1,-\frac{1}{2})\right\}}au_{n}\rangle,\] where we used that \(a=1\) on this set. Since \(\|au_{n}\|_{L^{2}}\to 0\) and \(\psi_{-}\operatorname{Op}_{h_{n}}(b_{-})u_{n}\) is bounded uniformly with respect to small \(h_{n}\), we get that \(\langle\mu_{-},b_{-}\rangle=0\). The result then derives from arguments similar to the proof of Proposition 5.1, point 4. As we have seen in the previous section, the flow of \(\zeta.\partial_{z}\), projected onto the sphere at infinity, converges to some point \((z_{1},z_{2},\zeta=0)\), with \((z_{1},z_{2})\in\{(\pm 1,0),(0,\pm 1)\}\). Point 5 above shows that both \(\mu_{+}\) and \(\mu_{-}\) vanish near the point \((z=(+1,0),\zeta=0)\) of the sphere at infinity over some portion of the geodesic. Replacing \(z_{1}\) by \(-z_{1}\) in cutoff (5.14) gives that \(\mu_{+}\) and \(\mu_{-}\) also vanish near the point \((z=(-1,0),\zeta=0)\). Consider a point \((x=0,y,\xi=0,\eta=\pm 1,z_{0},\zeta_{0})\) which we assume to belong to \(\operatorname{supp}(\mu)\). Since \(\mu_{+}+\mu_{-}=\mu\), this point belongs either to \(\operatorname{supp}(\mu_{+})\) or to \(\operatorname{supp}(\mu_{-})\). By Lemma 4.5, \(\phi_{s}(z_{0},\zeta_{0})\) converges to some point in \(\{(\pm 1,0),(0,\pm 1)\}\) as \(s\rightarrow+\infty\). If the limit is \((\pm 1,0)\) then the flow of \(\zeta.\partial_{z}\) transports \((z_{0},\zeta_{0})\) to some point near which either \(\mu_{+}\) or \(\mu_{-}\) vanishes along the geodesic.
If it is \((0,\pm 1)\), we perform the same third microlocalization in the \(|z_{2}|\gg|z_{1}|\) zone of the sphere at infinity and replace the operators \(\psi_{\pm}\) with multipliers in the \(x_{1}\) variable. Thus, we obtain a contradiction and \(\mu\) is identically zero, which proves stabilization for the damping of Figure 6. We finish with some concluding remarks concerning possible generalizations of this counter-example (and some obstacles to them): * Based on the result and techniques of the present section, we conjecture that a necessary and sufficient condition for uniform stabilization of the wave equation on \(3\)-dimensional tori using damped polyhedrons is for every geodesic to be damped in every normal direction but a finite number. Proof of the necessary condition would likely resemble that of Nicolas Burq and Patrick Gérard in [1, Section 5], using the necessity of their generalized geometric control condition. The sufficient condition also seems difficult to prove with the techniques of the present article. The main obstacle is that both the directional second microlocalization of section 5.1 and the third microlocalization of section 5.2 can be performed only in planes where the non-concentration estimates hold (this is crucial to Proposition 5.1, point (3) and Proposition 5.2, point (2)). * For the proof to hold, such planes need to be rational (that is, they cannot be dense in the torus). This is needed due to the non-concentration estimates, which require periodicity of quasimodes in \((d-1)\) directions. Hence, our techniques allow us to deal with one-directional sheets of geodesics contained in a rational hyperplane of the torus (as in subsection 5.1) and intersections of such sheets (using the third microlocalization), but not with cases where such sheets of geodesics are contained in irrational planes.
Thus, they allow us to prove stabilization on \(\mathbb{T}^{3}\) in the case where every geodesic is damped in all normal directions but a finite number of _rational_ ones. * Using as many additional microlocalizations as the dimension, it may even be possible to obtain generalizations of this sufficient condition to higher-dimensional tori as long as the damping is a characteristic function of polyhedrons with rational faces. However, the generalized geometric control condition conjectured by N. Burq and P. Gérard in [1, Section 5] indicates that rationality or irrationality of the undamped normal directions to the geodesics should not be relevant to uniform stabilization. For that reason, we have chosen not to confront the technicalities of such generalizations until additional work allows us to bypass the non-concentration estimates.
2310.04164
The Least Common Multiple of Polynomial Values over Function Fields
Cilleruelo conjectured that for an irreducible polynomial $f \in \mathbb{Z}[X]$ of degree $d \geq 2$ one has $$\log\left[\mathrm{lcm}(f(1),f(2),\ldots f(N))\right]\sim(d-1)N\log N$$ as $N \to \infty$. He proved it in the case $d=2$ but it remains open for every polynomial with $d>2$. We investigate the function field analogue of the problem by considering polynomials over the ring $\mathbb F_q[T]$. We state an analog of Cilleruelo's conjecture in this setting: denoting by $$L_f(n) := \mathrm{lcm} \left(f\left(Q\right)\ : \ Q \in \mathbb F_q[T]\mbox{ monic},\, \mathrm{deg}\,Q = n\right)$$ we conjecture that \begin{equation}\label{eq:conjffabs}\mathrm{deg}\, L_f(n) \sim c_f \left(d-1\right) nq^n,\ n \to \infty\end{equation} ($c_f$ is an explicit constant dependent only on $f$, typically $c_f=1$). We give both upper and lower bounds for $L_f(n)$ and show that the conjectured asymptotic holds for a class of ``special" polynomials, initially considered by Leumi in this context, which includes all quadratic polynomials and many other examples as well. We fully classify these special polynomials. We also show that $\mathrm{deg}\, L_f(n) \sim \mathrm{deg}\,\mathrm{rad}\left(L_f(n)\right)$ (in other words the corresponding LCM is close to being squarefree), which is not known over $\mathbb Z$.
Alexei Entin, Sean Landsberg
2023-10-06T11:25:14Z
http://arxiv.org/abs/2310.04164v3
# The least common multiple of polynomial values over function fields ###### Abstract. Cilleruelo conjectured that for an irreducible polynomial \(f\in\mathbb{Z}[X]\) of degree \(d\geq 2\) one has \[\log\,\operatorname{lcm}(f(1),f(2),\ldots,f(N))\sim(d-1)N\log N\] as \(N\to\infty\). He proved it in the case \(d=2\) but it remains open for every polynomial with \(d>2\). We investigate the function field analogue of the problem by considering polynomials over the ring \(\mathbb{F}_{q}[T]\). We state an analog of Cilleruelo's conjecture in this setting: denoting by \[L_{f}(n):=\operatorname{lcm}\left(f\left(Q\right)\ :\ Q\in\mathbb{F}_{q}[T] \text{ monic, }\deg Q=n\right)\] we conjecture that \[\deg L_{f}(n)\sim c_{f}\left(d-1\right)nq^{n},\ n\to\infty \tag{1}\] (\(c_{f}\) is an explicit constant dependent only on \(f\), typically \(c_{f}=1\)). We give both upper and lower bounds for \(L_{f}(n)\) and show that the asymptotic (1) holds for a class of "special" polynomials, initially considered by Leumi in this context, which includes all quadratic polynomials and many other examples as well. We fully classify these special polynomials. We also show that \(\deg L_{f}(n)\sim\deg\operatorname{rad}\left(L_{f}(n)\right)\) (in other words the corresponding LCM is close to being squarefree), which is not known over \(\mathbb{Z}\). _E-mail addresses_: [email protected], [email protected] ## 1. Introduction While studying the distribution of prime numbers, Chebyshev estimated the least common multiple of the first \(N\) integers. This was an important step towards the prime number theorem. In fact, the asymptotic relation \(\log\,\operatorname{lcm}\left(1,\ldots,N\right)\sim N\) is equivalent to the prime number theorem. This problem later inspired a more general problem of studying the least common multiple of polynomial sequences.
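The equivalence just mentioned can be made concrete: \(\log\,\operatorname{lcm}(1,\ldots,N)\) is exactly Chebyshev's function \(\psi(N)\), and the prime number theorem is the statement \(\psi(N)\sim N\). The following quick empirical check is ours, not from the paper (it needs Python 3.9+ for `math.lcm`):

```python
import math

# log lcm(1, ..., N) equals Chebyshev's psi(N); the prime number theorem is
# the statement psi(N) ~ N.  Empirical check at a modest N (not a proof!).
N = 10_000
L = math.lcm(*range(1, N + 1))   # Python >= 3.9
ratio = math.log(L) / N          # math.log accepts arbitrarily large ints
print(f"log lcm(1..{N}) / {N} = {ratio:.4f}")
assert 0.95 < ratio < 1.05
```

Each prime \(p\leq N\) contributes \(\lfloor\log_{p}N\rfloor\log p\) to \(\log\,\operatorname{lcm}(1,\ldots,N)\); for instance the exact power of \(2\) dividing this lcm is \(2^{13}\), since \(2^{13}\leq 10^{4}<2^{14}\).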
For a linear polynomial \(f(X)=kX+h,\ k,h\in\mathbb{Z},\ k>0,h+k>0\) it was observed by Bateman [1] that \(\log\,\operatorname{lcm}\left(f\left(1\right),\ldots,f\left(N\right)\right) \sim c_{f}N\) as \(N\to\infty\), where \(c_{f}=\frac{k}{\varphi(k)}\sum_{1\leq m\leq k,(m,k)=1}\frac{1}{m}\), which is a consequence of the Prime Number Theorem for arithmetic progressions. Cilleruelo conjectured [10] that for any irreducible polynomial \(f\in\mathbb{Z}[X]\) of degree \(d\geq 2\) the following estimate holds: \[\log\,\operatorname{lcm}\left(f\left(1\right),...,f\left(N\right)\right)\sim \left(d-1\right)N\log N \tag{1.1}\] for fixed \(f\) as \(N\to\infty\). **Convention.** Throughout the rest of the paper in all asymptotic notation the polynomial \(f\), and the parameter \(q\) to appear later, are assumed fixed, while the parameter \(N\) (or \(n\)) is taken to infinity. If any parameters other than \(f,q,N,n\) appear in the notation, the implied constant or rate of convergence are uniform in these parameters. Cilleruelo proved (1.1) for quadratic polynomials, but the conjecture is not known to hold for any polynomial of degree \(d>2\). Cilleruelo's argument also shows the predicted upper bound, i.e. if \(f\in\mathbb{Z}[X]\) is an irreducible polynomial of degree \(d\geq 2\), then \[\log\,\text{lcm}\left(f(1),\ldots,f(N)\right)\lesssim(d-1)N\log N.\] Maynard and Rudnick [14] provided a lower bound of the correct order of magnitude: \[\log\,\text{lcm}\left(f(1),\ldots,f(N)\right)\gtrsim\frac{1}{d}N\log N. \tag{1.2}\] Sah [13] improved upon this lower bound while also providing a lower bound for the radical of the least common multiple: \[\log\,\text{lcm}\left(f\left(1\right),\ldots,f\left(N\right)\right)\gtrsim N \log N,\] \[\log\,\text{rad}\left[\text{lcm}\left(f\left(1\right),...,f\left(N\right) \right)\right]\gtrsim\frac{2}{d}N\log N. 
\tag{1.3}\] Rudnick and Zehavi [10] established an averaged form of (1.1) with \(f\) varying in a suitable range. Leumi [12] studied a function field analogue of the problem. In the present work we expand upon and generalize the results and conjectures in [12], as well as correct some erroneous statements and conjectures from the latter work. Despite some overlap, we have kept our exposition self-contained and independent of [12]. ### The function field analogue Let \(q=p^{k}\) be a prime power. For a polynomial \(f\in\mathbb{F}_{q}[T][X]\) of degree \(d\geq 1\) of the form \[f(X)=f_{d}X^{d}+f_{d-1}X^{d-1}+...+f_{0},\ f_{i}\in\mathbb{F}_{q}[T]\] set \[L_{f}(n):=\text{lcm}\left(f\left(Q\right):\ Q\in M_{n}\right), \tag{1.4}\] where \[M_{n}:=\{Q\in\mathbb{F}_{q}[T]\text{ monic},\ \deg Q=n\}.\] Also denote \[V_{f}:=\left\{g\in\mathbb{F}_{q}[T]:\ f\left(X+g\right)=f\left(X\right)\right\}.\] The set \(V_{f}\) is a finite-dimensional \(\mathbb{F}_{p}\)-linear subspace of \(\mathbb{F}_{q}[T]\) (see Lemma 2.1 below). Denote \[c_{f}:=\frac{1}{|V_{f}|}.\] We now state a function field analog of Cilleruelo's conjecture. **Conjecture 1.1**.: _Let \(f\in\mathbb{F}_{q}[T][X]\) be a fixed irreducible polynomial with \(\deg_{X}f=d\geq 2\). Then_ \[\deg L_{f}(n)\sim c_{f}\left(d-1\right)nq^{n},\ n\to\infty.\] **Remark 1.2**.: The expression \((d-1)nq^{n}\) is directly analogous to \((d-1)N\log N\) appearing in (1.1). Over the integers (or generally in characteristic \(0\)) if \(f\) is not constant then \(V_{f}\) is always trivial and no constant \(c_{f}\) appears in (1.1). This is because if \(0\neq g\in V_{f}\) then \(2g,...,dg\) are also in \(V_{f}\) implying \(f\left(0\right)=f\left(g\right)=...=f\left(dg\right)\). Since \(f\left(x\right)-f\left(0\right)\) has at most \(d\) roots, we reach a contradiction. Even over \(\mathbb{F}_{q}[T]\), the typical case that occurs for "most" polynomials is \(c_{f}=1\) (i.e. \(V_{f}\) is trivial). 
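As a numerical aside (not part of the paper's argument), \(\deg L_{f}(n)\) can be computed by brute force for small \(q\) and \(n\). The sketch below assumes the sympy library; the concrete choice \(f=X^{2}+T\) over \(\mathbb{F}_{3}[T]\), for which \(d=2\) and \(V_{f}\) is trivial (so \(c_{f}=1\)), is purely illustrative.

```python
from functools import reduce
from itertools import product
from sympy import Poly, symbols

T = symbols('T')
p = 3  # illustrative choice: work over F_3[T]

def f(Q):
    # f(X) = X^2 + T, irreducible over F_3(T) (Eisenstein at the prime T)
    return Q**2 + Poly(T, T, modulus=p)

def deg_L(n):
    # deg lcm(f(Q) : Q monic of degree n), by brute-force enumeration
    values = []
    for coeffs in product(range(p), repeat=n):
        Q = Poly([1, *coeffs], T, modulus=p)  # monic: T^n + lower-order part
        values.append(f(Q))
    return reduce(lambda a, b: a.lcm(b), values).degree()
```

For \(n=1\) the three values \(f(Q)\) are \(T(T+1)\), \(T^{2}+1\) and \((T+1)^{2}\), so \(\deg L_{f}(1)=5\), to be compared with the conjectured main term \(c_{f}(d-1)nq^{n}=n\cdot 3^{n}\).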
A heuristic justification of Conjecture 1.1 will be given in Section 4. In the present paper we prove that Conjecture 1.1 gives the correct upper bound: **Theorem 1.3**.: \[\deg L_{f}(n)\lesssim c_{f}(d-1)nq^{n}.\] The proof of Theorem 1.3 will be given in Section 3. We also give a lower bound of the correct order of magnitude (this bound is comparable to the bound in [13, Theorem 1.3] under a mild assumption on \(f\)): **Theorem 1.4**.: 1. \[\deg L_{f}(n)\gtrsim\frac{d-1}{d}nq^{n}.\] 2. _If_ \(p\nmid d\) _or_ \(f_{d}\nmid f_{d-1}\) _then_ \[\deg L_{f}(n)\gtrsim nq^{n}.\] The proof of Theorem 1.4 will be given in Section 5. We note that by Lemma 2.1 below we always have \(c_{f}\geq 1/d\) and \(c_{f}\geq 1/(d-1)\) if \(p\nmid d\) or \(f_{d}\nmid f_{d-1}\), so this is consistent with Conjecture 1.1. Regarding the radical of the LCM \[\ell_{f}(n):=\operatorname{rad}\,\operatorname{lcm}(f(Q):\ Q\in M_{n})\] we prove the following **Theorem 1.5**.: \[\deg\ell_{f}(n)\sim\deg L_{f}(n).\] The proof of Theorem 1.5 will be given in Section 6. As a consequence, our lower bounds for \(L_{f}(n)\) apply also to \(\ell_{f}(n)\). The analogous statement over \(\mathbb{Z}\) is not known and the best lower bound over \(\mathbb{Z}\), namely (1.3), has a smaller constant (if \(d>2\)) than the best known lower bound for \(L_{f}(N)\) given by (1.2). The key ingredient in the proof, unavailable over \(\mathbb{Z}\), is the work of Poonen [10] on squarefree values of polynomials over \(\mathbb{F}_{q}[T]\) (later generalized by Lando [14] and Carmon [15]). For a class of polynomials \(f\in\mathbb{F}_{q}[T][X]\) which we call _special_ (first introduced by Leumi in [13]), it is possible to establish Conjecture 1.1 in full. This class includes all quadratic polynomials, but also many polynomials of higher degree. We now define special polynomials over an arbitrary unique factorization domain (UFD). This definition was introduced in [13]. 
**Definition 1.6**.: A polynomial \(f\in R[X]\) of degree \(d=\deg f\geq 2\) is called _special_ if the bivariate polynomial \(f(X)-f(Y)\) factors into a product of linear terms in \(R[X,Y]\): \[f(X)-f(Y)=\prod_{i=1}^{d}(a_{i}X+b_{i}Y+c_{i}),\quad a_{i},b_{i},c_{i}\in R.\] **Example 1.7**.: 1. A quadratic polynomial is always special because \[AX^{2}+BX+C-(AY^{2}+BY+C)=(X-Y)(AX+AY+B).\] 2. If \(R=\mathbb{F}_{p}\) then \(f=X^{p}\) is special because \[X^{p}-Y^{p}=(X-Y)^{p}.\] For a special polynomial \(f\) Conjecture 1.1 can be established in full. **Theorem 1.8**.: _If \(f\in\mathbb{F}_{q}[T][X]\) is irreducible and special then Conjecture 1.1 holds for \(f\)._ The proof of Theorem 1.8 will be given in Section 7. **Example 1.9**.: The polynomial \(f=X^{p}-T\in\mathbb{F}_{p}[T][X]\) is irreducible (since it is linear in \(T\)) and special (similarly to Example 1.7(ii)). Hence Conjecture 1.1 holds for it. We fully classify the set of special polynomials over an arbitrary UFD \(R\). **Theorem 1.10**.: _Let \(R\) be a UFD, \(K\) its field of fractions and \(p=\operatorname{char}(K)\)._ 1. _Assume_ \(p=0\)_. Then_ \(f\in R[X]\) _is special iff it is of the form_ \[f(X)=f_{d}(X+A)^{d}+C,\quad 0\neq f_{d}\in R,\quad A,C\in K,\] _where_ \(d\geq 2\) _is such that there exists a primitive_ \(d\)_-th root of unity in_ \(K\)_._ 2. _Assume_ \(p>0\)_. Then_ \(f\in R[X]\) _of degree_ \(\deg f=d=p^{l}m\geq 2,(m,p)=1\) _is special iff it is of the form_ \[f(X)=f_{d}\prod_{i=1}^{p^{v}}(X-b_{i}+A)^{mp^{l-v}}+C,\] \[0\neq f_{d}\in R,\quad A,C\in K,\quad 0\leq v\leq l,\quad\zeta\in K, \quad V=\{b_{1},...,b_{p^{v}}\}\subset K,\] _where_ \(\zeta\) _is a primitive_ \(m\)_-th root of unity and_ \(V\) _is an_ \(\mathbb{F}_{p}(\zeta)\)_-linear subspace of_ \(K\) _with_ \(|V|=p^{v}\)_._ The proof of Theorem 1.10 will be given in Section 8. 
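Both examples can be checked symbolically. A small sketch (assuming sympy; the instantiation \(p=3\) is ours) verifies the quadratic identity and the characteristic-\(p\) identity \(X^{p}-Y^{p}=(X-Y)^{p}\), which holds because the inner binomial coefficients \(\binom{p}{i}\), \(0<i<p\), vanish mod \(p\):

```python
from sympy import Poly, expand, symbols

X, Y, A, B, C = symbols('X Y A B C')

# Example 1.7(i): every quadratic AX^2 + BX + C is special:
# f(X) - f(Y) = (X - Y)(AX + AY + B).
lhs = expand(A*X**2 + B*X + C - (A*Y**2 + B*Y + C))
rhs = expand((X - Y)*(A*X + A*Y + B))

# In characteristic p (illustratively p = 3): X^p - Y^p = (X - Y)^p,
# a product of p linear factors, so f = X^p is special.
p = 3
frob_lhs = Poly(X**p - Y**p, X, Y, modulus=p)
frob_rhs = Poly((X - Y)**p, X, Y, modulus=p)
```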
We now briefly discuss how our main conjecture and results compare with the work of Leumi [11], which also studied the function field analog of Cilleruelo's conjecture and influenced the present work. First, [11] states Conjecture 1.1 without the constant \(c_{f}\). This is certainly false by Theorem 1.3 because it can happen that \(c_{f}<1\) (see Example 2.2 below). It seems to have been overlooked that when \(g\in V_{f}\) and \(Q\in M_{n}\), \(n>\deg g\) then since \(f(Q+g)=f(Q)\), the value \(f(Q+g)\) contributes nothing new to the LCM on the RHS of (1.4) over the contribution of \(f(Q)\). Once one accounts for this redundancy with the constant \(c_{f}=1/|V_{f}|\), Cilleruelo's original heuristic carries over to the function field case giving rise to Conjecture 1.1. Second, all results and conjectures in [11] are stated only for an absolutely irreducible and separable \(f\in\mathbb{F}_{q}[T][X]\). We do not make these restrictions here and it takes additional technical work to treat the general case. Third, the lower bound on \(L_{f}(n)\) given in [11] has a smaller constant than ours (thus the bound is weaker), comparable to the RHS of (1.3), and a lower bound for the radical of \(L_{f}(n)\) is not stated explicitly. Fourth, in [11] our Theorem 1.8 is stated without the constant \(c_{f}\), which is incorrect in general for the same reasons explained above, although the arguments therein are essentially correct in the case \(c_{f}=1\). Finally, [11] gives a classification of special polynomials only in the case \(p\nmid d\) (and it is stated only for the ring \(\mathbb{F}_{q}[T]\)), whereas we treat the general case. **Acknowledgments.** The authors would like to thank Zeev Rudnick for spotting a few small errors in a previous draft of the paper. Both authors were partially supported by Israel Science Foundation grant no. 2507/19. ## 2. Preliminaries For background on the arithmetic of function fields see [10]. 
For background on resultants, which we will use below, see [11, §IV.8]. ### Notation We now introduce some notation which will be used throughout Sections 2-7. Let \(p\) be a prime, \(q=p^{k}\). For \(Q\in\mathbb{F}_{q}[T]\) we denote by \(|Q|=q^{\deg Q}\) the standard size of \(Q\). For \(P\in\mathbb{F}_{q}[T]\) prime and \(Q\in\mathbb{F}_{q}[T]\) we denote by \(v_{P}(Q)\) the exponent of \(P\) in the prime factorization of \(Q\). We will always fix a polynomial \[f(X)=\sum_{i=0}^{d}f_{i}X^{i}\in\mathbb{F}_{q}[T][X],\,f_{d}\neq 0\] of degree \(d\). We also adopt the following conventions about notation. * For a polynomial \(Q\in\mathbb{F}_{q}[T]\) we denote by \(\deg Q\) its degree in \(T\). For a polynomial \(g\in\mathbb{F}_{q}[T][X]\) we denote by \(\deg g=\deg_{X}g\) its degree in the variable \(X\). * For a polynomial \(g\in\mathbb{F}_{q}[T][X]\) we denote by \(g^{\prime}=\frac{\partial g}{\partial X}\) its derivative in the variable \(X\). The derivative in the variable \(T\) will be written explicitly \(\frac{\partial g}{\partial T}\). * \(g\in\mathbb{F}_{q}[T][X]\) is called separable if it is separable as a polynomial in the variable \(X\), equivalently \(g\not\in\mathbb{F}_{q}[T][X^{p}]\). * For two polynomials \(g,h\in\mathbb{F}_{q}[T][X]\) we denote by \(\operatorname{Res}(g,h)=\operatorname{Res}_{X}(g,h)\) their resultant in the variable \(X\). ### The space \(V_{f}\) Recall that \(V_{f}=\{g\in\mathbb{F}_{q}[T]:\ f\left(X+g\right)=f\left(X\right)\}\). **Lemma 2.1**.: _Assume \(d\geq 1\)._ 1. \(|V_{f}|\leq d\)_._ 2. \(V_{f}\) _is an_ \(\mathbb{F}_{p}\)_-linear subspace of_ \(\mathbb{F}_{q}[T]\)_._ 3. \(|V_{f}|\) _is a power of_ \(p\)_._ 4. _If_ \(p\nmid d\) _then_ \(V_{f}\) _is trivial._ 5. _If_ \(f_{d}\nmid f_{i}\) _for some_ \(1\leq i\leq d-1\) _then_ \(|V_{f}|\leq d-1\)_._ Proof.: 1. Assume by way of contradiction that \(|V_{f}|\geq d+1\). Then \(f(g)=f(0)\) for every \(g\in V_{f}\). 
Since \(f(x)-f(0)\) has at most \(d\) roots, \(f\left(x\right)=f\left(0\right)\) and \(f\) is constant, a contradiction. 2. It is obvious that \(0\in V_{f}\). Now let \(a,b\in V_{f}\) and let us prove \(\alpha a+\beta b\in V_{f}\) where \(\alpha,\beta\in\{0,1,...,p-1\}\). Recursively applying \(f(X+a)=f(X+b)=f(X)\) we obtain \[f\left(X+\alpha a+\beta b\right)=f\left(X+\sum_{i=1}^{\alpha}a+\sum_{i=1}^{ \beta}b\right)=f\left(X\right).\] 3. Obvious from (i) and (ii). 4. Let \(g\in V_{f}\), so \(f(X+g)=f(X)\). Comparing coefficients at \(X^{d-1}\) we find \(dg=0\). If \(p\nmid d\) then \(g=0\), so in this case \(V_{f}=\{0\}\) and the claim follows. 5. If \(|V_{f}|=d\) then since all elements \(g\in V_{f}\) are roots of \(f(X)-f(0)\), we have \(f=f_{d}\prod_{g\in V_{f}}(X-g)+f(0)\) and \(f_{d}|f_{i}\) for all \(1\leq i\leq d-1\), contradicting the assumption. **Example 2.2**.: (Example of a polynomial with \(|V_{f}|=d=\deg f\)) Let \(V=\{b_{1},...,b_{p^{v}}\}\subset\mathbb{F}_{q}[T]\) be an \(\mathbb{F}_{p}\)-linear subspace and \(C\in\mathbb{F}_{q}[T]\). Then the polynomial \[f(X)=\prod_{i=1}^{p^{v}}(X-b_{i})+C\] has \(V_{f}=V\). Thus \(|V_{f}|=|V|=p^{v}=d\). ### Roots of \(f\) modulo prime powers In this subsection we study the quantity \[\rho_{f}(P^{k}):=\left|\{Q\bmod P^{k}:f(Q)\equiv 0\bmod P^{k}\}\right|,\] i.e. the number of roots of \(f\) modulo a prime power \(P^{k}\). **Lemma 2.3**.: _Let \(g,h\in\mathbb{F}_{q}[T][X]\) be polynomials and let \(P\in\mathbb{F}_{q}[T]\) be prime. If \(g,h\) have a common root modulo \(P^{m}\) then \(P^{m}\mid R:=\operatorname{Res}(g,h)\)._ Proof.: We can express \(R\) as \(R=a(X)g(X)+b(X)h(X)\) for some \(a(X),b(X)\in\mathbb{F}_{q}[T][X]\) (see [1, §IV.8]). Therefore, if there exists a \(Q\in\mathbb{F}_{q}[T]\) such that \(P^{m}\mid g(Q)\) and \(P^{m}\mid h(Q)\), then \(P^{m}\) must also divide \(a(Q)g(Q)+b(Q)h(Q)=R\). 
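Example 2.2 is easy to verify symbolically. A small sketch (assuming sympy; the concrete choices \(p=3\), \(V=\{0,T,2T\}\) and \(C=1\) are ours, for illustration only):

```python
from sympy import Poly, symbols

X, T = symbols('X T')
p = 3

# f(X) = X(X - T)(X - 2T) + 1, built from the F_3-linear space V = {0, T, 2T}
f = Poly(X*(X - T)*(X - 2*T) + 1, X, T, modulus=p)

def shift(g, b):
    # g(X + b) as a polynomial in GF(3)[X, T]
    return Poly(g.as_expr().subs(X, X + b), X, T, modulus=p)
```

Every element of \(V\) is then a period of \(f\), i.e. \(f(X+b)=f(X)\) for \(b\in V\), while a generic shift (say by \(T^{2}\)) changes \(f\).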
The proof of the next two lemmas is similar to the analogous proof for the integer case in [21, proof of Theorem II]. **Lemma 2.4**.: _Assume \(f\) is separable and denote \(R:=\operatorname{Res}_{X}(f,f^{\prime})\neq 0\). Let \(P\in\mathbb{F}_{q}[T]\) be prime and denote \(\mu=v_{P}(R)\). Let \(x_{0},x_{1}\in\mathbb{F}_{q}[T]\) be such that the following conditions hold:_ 1. \(f(x_{0})\equiv 0\pmod{P^{\mu+1}}\)_._ 2. \(f(x_{1})\equiv 0\pmod{P^{\beta+1}}\)_, where_ \(\beta:=v_{P}(f^{\prime}(x_{0}))\)_._ 3. \(x_{1}\equiv x_{0}\mod P^{\mu+1}\)_._ _Then \(\beta\leq\mu\) and \(v_{P}(f^{\prime}(x_{1}))=\beta\)._ Proof.: Since \(f(x_{0})\equiv 0\mod P^{\mu+1}\) and \(P^{\mu+1}\nmid R\), by Lemma 2.3 we must have \(P^{\mu+1}\nmid f^{\prime}(x_{0})\). Hence \(\beta\leq\mu\). Now writing \(x_{1}=x_{0}+tP^{\mu+1}\), \(t\in\mathbb{F}_{q}[T]\) and using \(\beta\leq\mu\) and the conditions above we have \[f^{\prime}(x_{1})=f^{\prime}(x_{0}+tP^{\mu+1})\equiv f^{\prime}(x_{0})\equiv 0 \pmod{P^{\beta}},\] \[f^{\prime}(x_{1})=f^{\prime}(x_{0}+tP^{\mu+1})\equiv f^{\prime}(x_{0})\not \equiv 0\pmod{P^{\beta+1}},\] so \(v_{P}(f^{\prime}(x_{1}))=\beta\). **Lemma 2.5**.: _In the setup of Lemma 2.4 let \(\alpha>\mu\) be an integer and assume that in fact \(f(x_{1})\equiv 0\pmod{P^{\alpha+\beta}}\). Consider the set_ \[S_{1}:=\left\{x_{1}+uP^{\alpha}\mid u\in\mathbb{F}_{q}[T]/P^{\beta}\right\} \subset\mathbb{F}_{q}[T]/P^{\alpha+\beta}.\] _Then_ 1. _The elements of_ \(S_{1}\) _are roots of_ \(f\) _modulo_ \(P^{\alpha+\beta}\)_._ 2. _The number of roots of_ \(f\) _modulo_ \(P^{\alpha+\beta+1}\) _that reduce modulo_ \(P^{\alpha+\beta}\) _to an element of_ \(S_{1}\) _is equal to_ \(|S_{1}|=q^{\beta\deg P}\)_._ Proof.: To prove (i) we note that \[f(x_{1}+uP^{\alpha})\equiv f(x_{1})+uP^{\alpha}f^{\prime}(x_{1})\pmod{P^{2\alpha}}.\] Thus \(P^{\alpha+\beta}\mid f(x_{1}+uP^{\alpha})\) as \(P^{\alpha+\beta}\mid f(x_{1})\) and by Lemma 2.4 we have \(P^{\beta}\mid f^{\prime}(x_{1})\) and \(\alpha+\beta\leq\alpha+\mu<2\alpha\). 
To show (ii), consider the set of possible lifts from \(S_{1}\) to \(\mathbb{F}_{q}[T]/P^{\alpha+\beta+1}\), i.e. \[S_{2}:=\left\{x_{1}+uP^{\alpha}+vP^{\alpha+\beta}\mid u\in\mathbb{F}_{q}[T]/P^{ \beta},v\in\mathbb{F}_{q}[T]/P\right\}.\] We will now determine for which \(u,v\) the element \(x_{1}+uP^{\alpha}+vP^{\alpha+\beta}\) is a root of \(f\) modulo \(P^{\alpha+\beta+1}\). Using \(2\alpha>\alpha+\beta\) we have \[f(x_{1}+uP^{\alpha}+vP^{\alpha+\beta})\equiv f(x_{1})+uP^{\alpha}f^{\prime}(x_ {1})+vP^{\alpha+\beta}f^{\prime}(x_{1})\pmod{P^{\alpha+\beta+1}}.\] Hence \(x_{1}+uP^{\alpha}+vP^{\alpha+\beta}\) is a root of \(f\bmod P^{\alpha+\beta+1}\) iff \[f(x_{1})+uP^{\alpha}f^{\prime}(x_{1})+vP^{\alpha+\beta}f^{\prime}(x_{1})\equiv 0 \pmod{P^{\alpha+\beta+1}}. \tag{2.1}\] As \(P^{\alpha+\beta}\mid f(x_{1})\) and \(v_{P}(f^{\prime}(x_{1}))=\beta\) (Lemma 2.4), we have (2.1) iff \[u\equiv-\left(\frac{f^{\prime}(x_{1})}{P^{\beta}}\right)^{-1}\left(\frac{f(x_ {1})}{P^{\alpha+\beta}}+vf^{\prime}(x_{1})\right)\pmod{P}\] Thus we have \(|P|\) possible values of \(v\) and for each of these \(|P^{\beta}|/|P|\) possible values of \(u\). Overall we have \(|P^{\beta}|=q^{\beta\deg P}=|S_{1}|\) possible values of \((u,v)\) and the assertion follows. **Lemma 2.6**.: _Assume that \(f\) is separable, \(R=\operatorname{Res}(f,f^{\prime})\neq 0\). Let \(P\) be a prime in \(\mathbb{F}_{q}[T]\) and \(\mu=v_{P}(R)\). Then \(\rho_{f}(P^{2\mu+k})=\rho_{f}(P^{2\mu+1})\) for all \(k\geq 1\)._ Proof.: For each root \(x_{1}\) of \(f\) modulo \(P^{2\mu+k}\) we can apply Lemma 2.5 with \(\alpha=2\mu+k-\beta\), where \(\beta=v_{P}(f^{\prime}(x_{1}))\) (the condition \(\alpha>\mu\) holds because \(\beta\leq\mu\) by Lemma 2.4 applied with \(x_{0}=x_{1}\)) and obtain that the number of roots of \(f\) modulo \(P^{2\mu+k+1}\) equals the number of roots of \(f\) modulo \(P^{2\mu+k}\), i.e. \(\rho_{f}(P^{2\mu+k+1})=\rho_{f}(P^{2\mu+k})\). The assertion now follows by induction on \(k\). 
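The stabilization in Lemma 2.6 can be observed by brute force. In the sketch below (assuming sympy) we take the illustrative \(f=X^{2}+T\) over \(\mathbb{F}_{3}[T]\), for which \(\operatorname{Res}(f,f^{\prime})=\operatorname{Res}(X^{2}+T,2X)\) is a unit multiple of \(T\), so \(\mu=v_{P}(R)\) equals \(1\) at \(P=T\) and \(0\) at \(P=T+1\):

```python
from itertools import product
from sympy import Poly, symbols

T = symbols('T')
p = 3

def f(Q):
    # f(X) = X^2 + T over F_3[T]
    return Q**2 + Poly(T, T, modulus=p)

def rho(P, k):
    # rho_f(P^k): number of residues Q mod P^k with f(Q) = 0 (mod P^k)
    Pk = P**k
    count = 0
    for coeffs in product(range(p), repeat=Pk.degree()):
        Q = Poly(sum(c * T**i for i, c in enumerate(coeffs)), T, modulus=p)
        if f(Q).rem(Pk).is_zero:
            count += 1
    return count
```

One finds \(\rho_{f}((T+1)^{k})=2\) for all \(k\geq 1\) (two simple roots, which lift uniquely), while \(\rho_{f}(T)=1\) and \(\rho_{f}(T^{k})=0\) for \(k\geq 2\): in both cases \(\rho_{f}(P^{2\mu+k})\) is constant in \(k\), as the lemma asserts.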
**Lemma 2.7**.: _Assume \(f\in\mathbb{F}_{q}[T][X^{p}]\) is inseparable and irreducible. Then \(U:=\operatorname{Res}(f,\frac{\partial f}{\partial T})\neq 0\)._ Proof.: Assume by way of contradiction that \(U=0\). Then \(f,\frac{\partial f}{\partial T}\) have a common factor and since \(f\) is irreducible we have \(f\mid\frac{\partial f}{\partial T}\). Comparing degrees in \(T\) we must have \(\frac{\partial f}{\partial T}=0\). This means that \(f\in\mathbb{F}_{q}[T^{p},X^{p}]\) is a \(p\)-th power, contradicting its irreducibility. **Lemma 2.8**.: _Assume that \(f\) is inseparable and irreducible, and \(P^{m}\nmid U:=\operatorname{Res}(f,\frac{\partial f}{\partial T})\in\mathbb{F} _{q}[T]\) for some \(P\in\mathbb{F}_{q}[T]\) prime and \(m\geq 1\). Then \(\rho_{f}(P^{k})=0\) for every \(k\geq m+1\)._ Proof.: Assuming the existence of \(Q\in\mathbb{F}_{q}[T]\) such that \(P^{m}\mid f(Q)\) (if \(f\) has no roots modulo \(P^{m}\) we are done), we will now prove that \(P^{m+1}\nmid f(Q)\). Since \(P^{m}\mid f(Q)\), we know that \(P^{m}\nmid\frac{\partial f}{\partial T}(Q)\); otherwise, by Lemma 2.3 we would have \(P^{m}|U\), contradicting our assumption. Since \(f\) is inseparable and irreducible we have \(f^{\prime}=0\) and therefore \[\frac{\partial f(Q)}{\partial T}=\frac{\partial f}{\partial T}(Q)+\frac{ \partial f}{\partial X}(Q)\frac{dQ}{dT}=\frac{\partial f}{\partial T}(Q).\] If \(P^{m+1}\mid f(Q)\), write \(f(Q)=P^{m+1}C\) and then \[\frac{\partial f(T,Q(T))}{\partial T}=\frac{\partial(P^{m+1}C)}{\partial T}=P^{m +1}\frac{\partial C}{\partial T}+(m+1)P^{m}\frac{\partial P}{\partial T}C.\] Thus \(P^{m}\mid\frac{\partial f(T,Q(T))}{\partial T}=\frac{\partial f}{\partial T}(Q)\), contradicting the above observation \(P^{m}\nmid\frac{\partial f}{\partial T}(Q)\). **Proposition 2.9**.: _Assume that \(f\) is irreducible. Then \(\rho_{f}(P^{m})\ll 1\) for all \(P\) prime and \(m\geq 1\)._ Proof.: Since \(\mathbb{F}_{q}[T]/P\) is a field we have \(\rho_{f}(P)\leq d\). 
Thus it remains to handle the case \(m\geq 2\). If \(f\) is inseparable then using Lemmas 2.8 and 2.7 we see that there are only finitely many pairs \(P,m\) such that \(m\geq 2\) and \(\rho_{f}(P^{m})\neq 0\) (as they must satisfy \(P^{m}\,|\operatorname{Res}(f,\frac{\partial f}{\partial T})\neq 0\)). Now if \(f\) is separable then by Lemma 2.6 there are only finitely many pairs \((P,m)\) such that \(m\geq 2\) and \(\rho_{f}(P^{m})\neq\rho_{f}(P^{m-1})\). Denote these pairs by \((P_{i},m_{i})_{i=1}^{s}\). Then for all primes \(P\) and \(m\geq 1\) \[\rho_{f}(P^{m})\leq\max\{d,\rho_{f}(P_{i}^{m_{i}})_{i=1}^{s}\}\ll 1.\] **Lemma 2.10**.: _Let \(f\in\mathbb{F}_{q}[T][X^{p}]\) be inseparable and irreducible. Then there exists a separable and irreducible \(h\in\mathbb{F}_{q}[T][X]\) such that \(\rho_{f}(P)=\rho_{h}(P)\) for all primes \(P\in\mathbb{F}_{q}[T]\)._ Proof.: Write \(f(X)=h(X^{p^{m}})\) for some \(m\geq 1\) such that \(h\in\mathbb{F}_{q}[T][X]\setminus\mathbb{F}_{q}[T][X^{p}]\) is separable. For any prime \(P\), the \(m\)-fold Frobenius isomorphism of \(\mathbb{F}_{q}[T]/P\) given by \(x\mapsto x^{p^{m}}\) gives a one-to-one correspondence between the roots of \(f\) modulo \(P\) and the roots of \(h\) modulo \(P\). Hence \(\rho_{f}(P)=\rho_{h}(P)\). ## 3. The upper bound Throughout this section the polynomial \(f\) is assumed _irreducible_. To obtain the upper and lower bounds on \(L_{f}(n)\) (recall that this quantity was defined in (1.4)) we set \[P_{f}(n):=\prod_{Q\in M_{n}}f(Q)\] and write \[\deg L_{f}(n)=c_{f}\deg P_{f}(n)-(c_{f}\deg P_{f}(n)-\deg L_{f}(n)), \tag{3.1}\] where \(c_{f}=1/|V_{f}|\). 
**Lemma 3.1**.: \(\deg P_{f}(n)=dnq^{n}+O(q^{n})\)_._ Proof.: For sufficiently large \(n\) and \(Q\in M_{n}\) we have \(\deg f(Q)=dn+\deg f_{d}\), hence \[\deg P_{f}(n)=\deg\prod_{Q\in M_{n}}f(Q)=\sum_{Q\in M_{n}}\deg f(Q)=\sum_{Q \in M_{n}}(dn+\deg f_{d})=dnq^{n}+O(q^{n}).\] **Convention.** Throughout the rest of the paper \(P\) will always denote a prime of \(\mathbb{F}_{q}[T]\) and \(\sum_{P}\) (resp. \(\prod_{P}\)) will denote a sum (resp. product) over all monic primes of \(\mathbb{F}_{q}[T]\). A sum of the form \(\sum_{a\leq\deg P\leq b}\) is over all monic primes in the corresponding degree range (and the same for products). To estimate \(c_{f}\deg P_{f}(n)-\deg L_{f}(n)\) we write the prime decomposition of \(L_{f}(n)\) and \(P_{f}(n)\) as \[L_{f}(n)=\prod_{P}P^{\beta_{P}(n)},\quad P_{f}(n)=\prod_{P}P^{\alpha_{P}(n)},\] where the products are over all (monic) primes in \(\mathbb{F}_{q}[T]\). **Convention.** Throughout Sections 3-7 we will always assume that \(n\) is large enough so that \(f(Q)\neq 0\) for all \(Q\in M_{n}\). Thus \(L_{f}(n),P_{f}(n)\neq 0\) and \(\alpha_{P}(n),\beta_{P}(n)\) are always finite. We have \[\alpha_{P}(n)=\sum_{Q\in M_{n}}v_{P}(f(Q)),\;\beta_{P}(n)=\max\{v_{P}(f(Q)):Q \in M_{n}\}, \tag{3.2}\] (recall that \(v_{P}(Q)\) is the exponent of \(P\) in the prime factorization of \(Q\)). Combining (3.1) with Lemma 3.1 we have \[\deg L_{f}(n)=c_{f}dnq^{n}-\sum_{P}(c_{f}\alpha_{P}(n)-\beta_{P}(n))\deg P+O(q^ {n}).\] **Lemma 3.2**.: _For sufficiently large n_ \[\beta_{P}(n)\leq c_{f}\alpha_{P}(n).\] Proof.: Denote by \(Q_{max}\in M_{n}\) an element such that \(\beta_{P}(n)=v_{P}(f(Q_{max}))\). Then for each element \(g\in V_{f}\) we have \(f(Q_{max}+g)=f(Q_{max})\) and hence \(\beta_{P}(n)=v_{P}(f(Q_{max}))=v_{P}(f(Q_{max}+g)).\) For \(n\) sufficiently large \(Q_{max}+g\in M_{n}\), so \[|V_{f}|\beta_{P}(n)=\sum_{g\in V_{f}}v_{P}(f(Q_{max}+g))\leq\sum_{Q\in M_{n}}v_ {P}(f(Q))=\alpha_{P}(n).\] The assertion follows since \(c_{f}=1/|V_{f}|\). 
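The quantities \(\alpha_{P}(n),\beta_{P}(n)\) and the inequality of Lemma 3.2 can be checked numerically. A brute-force sketch (assuming sympy; again with the illustrative \(f=X^{2}+T\) over \(\mathbb{F}_{3}[T]\), where \(V_{f}\) is trivial and \(c_{f}=1\)):

```python
from itertools import product
from sympy import Poly, symbols

T = symbols('T')
p = 3

def f(Q):
    # f(X) = X^2 + T over F_3[T]; here V_f is trivial, so c_f = 1
    return Q**2 + Poly(T, T, modulus=p)

def alpha_beta(P, n):
    # alpha_P(n) = sum of v_P(f(Q)) and beta_P(n) = max of v_P(f(Q)),
    # both taken over monic Q of degree n
    exponents = []
    for coeffs in product(range(p), repeat=n):
        Q = Poly([1, *coeffs], T, modulus=p)
        FQ, v = f(Q), 0
        while FQ.rem(P**(v + 1)).is_zero:
            v += 1
        exponents.append(v)
    return sum(exponents), max(exponents)
```

For \(P=T+1\) and \(n=1\) the values \(v_{P}(f(Q))\) are \(1,0,2\), so \(\alpha_{P}(1)=3\) and \(\beta_{P}(1)=2\), consistent with \(\beta_{P}(n)\leq c_{f}\alpha_{P}(n)\).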
The next lemma is the main tool for estimating \(\alpha_{P}(n),\beta_{P}(n)\). For its proof we introduce the following notation, which will be used in the sequel as well: \[s_{f}(P^{k},n)=\left|\{Q\in M_{n}:f(Q)\equiv 0\bmod P^{k}\}\right|, \tag{3.3}\] where \(P\) is prime and \(k\geq 1\). **Lemma 3.3**.: _Let \(P\in\mathbb{F}_{q}[T]\) be a prime._ 1. \(\beta_{P}(n)=O\left(\frac{n}{\deg P}\right)\)_._ 2. _If_ \(f\) _is separable and_ \(P\nmid\operatorname{Res}(f,f^{\prime})\)_, then_ \[\alpha_{P}(n)=q^{n}\frac{\rho_{f}(P)}{|P|-1}+O\left(\frac{n}{\deg P}\right).\] 3. _If_ \(f\) _is inseparable and_ \(P\nmid\operatorname{Res}(f,\frac{\partial f}{\partial T})\)_, then_ \[\alpha_{P}(n)=q^{n}\frac{\rho_{f}(P)}{|P|}+O\left(\frac{n}{\deg P}\right).\] 4. _For the finitely many "bad" primes where neither (ii) nor (iii) hold we have_ \[\alpha_{P}(n)=O(q^{n}).\] Proof.: To prove (i) we note that since there exists \(Q\in M_{n}\) such that \(P^{\beta_{P}(n)}\mid f(Q)\) and \(f(Q)\neq 0\) we get: \[\beta_{P}(n)\deg P\leq\deg f(Q).\] For sufficiently large \(n\) we have \(\deg f(Q)=dn+\deg f_{d}\), thus \[\beta_{P}(n)\ll\frac{n}{\deg P},\] establishing (i). Using the notation (3.3) we have \[\alpha_{P}(n)=\sum_{Q\in M_{n}}v_{P}(f(Q))=\sum_{Q\in M_{n}}\sum_{\begin{subarray}{c}k\geq 1\\ P^{k}\mid f(Q)\end{subarray}}1=\sum_{k\geq 1}\sum_{\begin{subarray}{c}Q\in M_{n}\\ P^{k}\mid f(Q)\end{subarray}}1=\sum_{k\geq 1}s_{f}(P^{k},n).\] Note that \[s_{f}(P^{k},n)=q^{n}\frac{\rho_{f}(P^{k})}{|P^{k}|}+O(1),\] since if \(Q\) is a root of \(f\) modulo \(P^{k}\) and \(\deg P^{k}\leq n\) then there are exactly \(\frac{q^{n}}{|P|^{k}}\) elements in \(M_{n}\) that reduce to \(Q\) modulo \(P^{k}\). And if \(\deg P^{k}>n\) then \(s_{f}(P^{k},n)\leq\rho_{f}(P^{k})\ll 1\) by Proposition 2.9. 
Hence \[\alpha_{P}(n)=\sum_{k\geq 1}s_{f}(P^{k},n)=\sum_{k\geq 1}\left[q^{n}\frac{\rho_{ f}(P^{k})}{|P|^{k}}+O(1)\right]=q^{n}\sum_{k\geq 1}\frac{\rho_{f}(P^{k})}{|P|^{k}} +O\left(\frac{n}{\deg P}\right).\] Let us look at the different cases: * If \(f\) is separable and \(P\nmid\operatorname{Res}(f,f^{\prime})\) then we have by Lemma 2.6 that \(\rho_{f}(P^{k})=\rho_{f}(P)\), thus \[\alpha_{P}(n) =q^{n}\sum_{k\geq 1}\frac{\rho_{f}(P)}{|P|^{k}}+O\left(\frac{n}{ \deg P}\right)\] \[=q^{n}\rho_{f}(P)\sum_{k\geq 1}\left(\frac{1}{|P|}\right)^{k}+O\left( \frac{n}{\deg P}\right)\] \[=q^{n}\frac{\rho_{f}(P)}{|P|-1}+O\left(\frac{n}{\deg P}\right).\] * If \(f\) is inseparable and \(P\nmid\operatorname{Res}_{X}(f,\frac{\partial f}{\partial T})\) then using Lemma 2.8 we get \[\alpha_{P}(n)=q^{n}\frac{\rho_{f}(P)}{|P|}+O\left(\frac{n}{\deg P}\right).\] * For a general irreducible \(f\), using the fact that for every \(k\geq 1,\ \rho_{f}(P^{k})\ll 1\) (Proposition 2.9) we get \[\alpha_{P}(n)=q^{n}\sum_{k\geq 1}\frac{\rho_{f}(P^{k})}{|P|^{k}}+O\left( \frac{n}{\deg P}\right)\ll q^{n}\sum_{k\geq 1}\frac{1}{|P|^{k}}+O\left( \frac{n}{\deg P}\right)\ll q^{n}.\] **Lemma 3.4**.: \[\deg\left(\prod_{n<\deg P\leq n+\deg f_{d}}P^{\alpha_{P}(n)}\right)=O(q^{n}).\] Proof.: By Lemma 3.3 for sufficiently large \(n\) and \(P\) a prime such that \(\deg P>n\), we have \(\alpha_{P}(n)\leq q^{n}\frac{\rho_{f}(P)}{|P|-1}+O\left(\frac{n}{\deg P}\right) \ll\rho_{f}(P)+O(1)\ll 1\). Using the Prime Polynomial Theorem, we have \[\deg\left(\prod_{n<\deg P\leq n+\deg f_{d}}P^{\alpha_{P}(n)}\right) =\sum_{n<\deg P\leq n+\deg f_{d}}\alpha_{P}(n)\deg P\] \[\ll\sum_{n<\deg P\leq n+\deg f_{d}}\deg P=\sum_{k=n+1}^{n+ \deg f_{d}}\sum_{\deg P=k}k\] \[\ll\sum_{k=n+1}^{n+\deg f_{d}}k\frac{q^{k}}{k}=\frac{q^{n+\deg f _{d}+1}-q^{n+1}}{q-1}\] \[\ll q^{n}.\] **Proposition 3.5**.: _Denote \(R_{f}(n)=\prod_{\deg P\leq n+\deg f_{d}}P^{\alpha_{P}(n)}\). 
Then_ \[\deg R_{f}(n)=nq^{n}+O(q^{n}).\] Proof.: Let us assume first that \(f\) is separable. We will handle the inseparable case at the end of the proof. We note that we may ignore the \(O(1)\) "bad" primes (as defined in Lemma 3.3) with an error term of \(O(q^{n})\) and by Lemma 3.4 we can ignore the primes with \(n<\deg P\leq n+\deg f_{d}\) with the same error term. Thus by Lemma 3.3 we obtain \[\deg R_{f}(n)=\sum_{\deg P\leq n}\alpha_{P}(n)\deg P+O(q^{n})=\sum_{\deg P\leq n }q^{n}\frac{\rho_{f}(P)}{|P|-1}\deg P+\sum_{\deg P\leq n}O(n)+O(q^{n}). \tag{3.4}\] We bound the error term using the Prime Polynomial Theorem: \[\sum_{\deg P\leq n}n=n\sum_{k=1}^{n}\sum_{\deg P=k}1\ll n\sum_{k=1}^{n}\frac{ q^{k}}{k}\ll q^{n}. \tag{3.5}\] Now to estimate \(\sum_{\deg P\leq n}q^{n}\frac{\rho_{f}(P)}{|P|-1}\deg P\) we will use the Chebotarev Density Theorem in function fields [13, Proposition 7.4.8]. First we introduce some notation and recall some terminology. Let \(E/\mathbb{F}_{q}(T)\) be the splitting field of \(f\) and denote by \(G\) the Galois group of \(f\). For each prime \(P\) of \(\mathbb{F}_{q}[T]\) unramified in \(E/\mathbb{F}_{q}(T)\) the Frobenius class of \(P\) is defined to be: \[\left(\frac{E/\mathbb{F}_{q}(T)}{P}\right)=\left\{\sigma\in G\,:\,\exists\,\, \mathfrak{P}/P\text{ prime of }E\text{ such that }\sigma(x)\equiv x^{|P|}\pmod{\mathfrak{P}}\text{ for all }x\text{ with }v_{\mathfrak{P}}(x)\geq 0 \right\}.\] The Frobenius class \(\left(\frac{E/\mathbb{F}_{q}(T)}{P}\right)\) is a conjugacy class in \(G\). Denote by \(S\) the set of all conjugacy classes in \(G\). Given a conjugacy class \(C\in S\), we set \[\pi_{C}(n)=\left|\left\{P\text{ prime}:\deg P=n,\left(\frac{E/\mathbb{F}_{q}( T)}{P}\right)=C\right\}\right|.\] Let \(K=\mathbb{F}_{q^{v}}\) be the algebraic closure of \(\mathbb{F}_{q}\) in \(E\). We have \(G_{0}:=\operatorname{Gal}(K/\mathbb{F}_{q})=\langle\phi\rangle\), where \(\phi(x)=x^{q}\) is the \(q\)-Frobenius. 
Denote by \(\varphi:G\to G_{0}\) the restriction map on automorphisms. Since \(G_{0}=\langle\phi\rangle\) is cyclic and in particular abelian, we have for all \(C\in S\) that \(\varphi(C)=\{\phi^{n_{C}}\}\) for some \(n_{C}\in\mathbb{Z}\). Define \[S_{k}:=\{C\in S\mid\varphi(C)=\{\phi^{k}\}\}.\] Now the Chebotarev Density Theorem in function fields says that \[\pi_{C}(k)=\left\{\begin{array}{ll}v\frac{|C|}{|G|}\frac{q^{k}}{k}+O\left( \frac{q^{k/2}}{k}\right),&C\in S_{k},\\ 0,&C\notin S_{k}.\end{array}\right.\] Since the Galois group acts on the roots of \(f\), we can define \(\operatorname{Fix}(C)\) to be the number of roots fixed by any element in the conjugacy class \(C\) (this number is the same for all \(\sigma\in C\)). Assuming \(P\) is unramified in \(E/\mathbb{F}_{q}(T)\), we have \(\rho_{f}(P)=\operatorname{Fix}\left(\left(\frac{E/\mathbb{F}_{q}(T)}{P} \right)\right)\). In the calculations throughout the rest of the proof summation over \(P\) denotes summation over primes \(P\nmid\operatorname{Res}_{X}(f,f^{\prime})\) if \(f\) is separable and \(P\nmid\operatorname{Res}(f,\frac{\partial f}{\partial T})\) otherwise. In the case when \(f\) is separable this condition ensures that \(P\) is unramified in \(E/\mathbb{F}_{q}(T)\). As we observed above the excluded primes contribute \(O(q^{n})\). 
We have: \[\sum_{\deg P=k}\frac{\rho_{f}(P)}{q^{k}-1}k=\sum_{\deg P=k}\frac{ \operatorname{Fix}\left(\left(\frac{E/\mathbb{F}_{q}(T)}{P}\right)\right)}{q^ {k}-1}k=\sum_{C\in S}\frac{\operatorname{Fix}(C)\pi_{C}(k)}{q^{k}-1}k=\sum_{ C\in S_{k}}\left[\frac{\operatorname{Fix}(C)q^{k}}{(q^{k}-1)k}\frac{v|C|}{|G|}k+O(q^{-k/2} )\right]\\ =\sum_{C\in S_{k}}v\frac{\operatorname{Fix}(C)|C|}{|G|}+O(q^{-k/ 2})\] and therefore \[\sum_{\deg P\leq n}q^{n}\frac{\rho_{f}(P)}{|P|-1}\deg P=q^{n}\sum_{ k=1}^{n}\sum_{\deg P=k}\frac{\rho_{f}(P)}{q^{k}-1}k=q^{n}\sum_{k=1}^{n}\left[ \sum_{C\in S_{k}}v\frac{\operatorname{Fix}(C)|C|}{|G|}+O(q^{-k/2})\right]\\ =q^{n}\sum_{k=1}^{n}\sum_{C\in S_{k}}v\frac{\operatorname{Fix}(C) |C|}{|G|}+O(q^{n})=q^{n}\sum_{l=1}^{\lfloor\frac{n}{v}\rfloor}v\sum_{C\in S }\frac{\operatorname{Fix}(C)|C|}{|G|}+O(q^{n}). \tag{3.6}\] Using Burnside's lemma [14, Theorem 3.22] and the transitivity of \(G\) (which is a consequence of \(f\) being irreducible) we get \[1=\sum_{\sigma\in G}\frac{\operatorname{Fix}(\sigma)}{|G|}=\sum_{C\in S}\frac {\operatorname{Fix}(C)|C|}{|G|}.\] Plugging this into (3.6) and recalling (3.4),(3.5) we obtain \[\deg R_{f}(n)=\sum_{\deg P\leq n}q^{n}\frac{\rho_{f}(P)}{|P|-1}\deg P+O(q^{n})= q^{n}\sum_{l=1}^{\lfloor\frac{n}{v}\rfloor}v+O(q^{n})=nq^{n}+O(q^{n}).\] This completes the proof in the separable case. To handle the case when \(f\) is inseparable we again do the same calculations (using Lemma 3.3(iii) this time): \[\deg R_{f}(n) =\sum_{\deg P\leq n}\alpha_{P}(n)\deg P+O\left(q^{n}\right)\] \[=\sum_{\deg P\leq n}q^{n}\frac{\rho_{f}(P)}{|P|}\deg P+\sum_{\deg P \leq n}O(n)+O(q^{n}).\] We can replace \(\rho_{f}(P)\) with \(\rho_{h}(P)\), where \(h\) is the polynomial given by Lemma 2.10. Since \(h\) is separable, the argument above (with \(f\) replaced by \(h\)) yields: \[\deg R_{f}(n)=\sum_{\deg P\leq n}q^{n}\frac{\rho_{h}(P)}{|P|}\deg P+O(q^{n})=nq ^{n}+O(q^{n}),\] completing the proof. 
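For small \(n\) the quantity \(\deg R_{f}(n)\) can be computed exactly by factoring each \(f(Q)\). A brute-force sketch (assuming sympy; once more with the illustrative \(f=X^{2}+T\) over \(\mathbb{F}_{3}[T]\), for which \(f_{d}=1\) and hence \(\deg f_{d}=0\)):

```python
from itertools import product
from sympy import Poly, symbols

T = symbols('T')
p = 3

def f(Q):
    # f(X) = X^2 + T over F_3[T]; leading coefficient 1, so deg f_d = 0
    return Q**2 + Poly(T, T, modulus=p)

def deg_R(n):
    # deg R_f(n): total contribution of prime powers P^{v_P(f(Q))}
    # over monic Q of degree n, restricted to primes with deg P <= n
    total = 0
    for coeffs in product(range(p), repeat=n):
        Q = Poly([1, *coeffs], T, modulus=p)
        for prime, mult in f(Q).factor_list()[1]:
            if prime.degree() <= n:
                total += mult * prime.degree()
    return total
```

For \(n=1\) the small-prime parts of \(T(T+1)\), \(T^{2}+1\), \((T+1)^{2}\) give \(\deg R_{f}(1)=4\), to be compared with the main term \(nq^{n}=3\).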
We are now ready to prove the upper bound on \(\deg L_{f}(n)\). Proof of Theorem 1.3.: Using Lemma 3.2 we have \(\beta_{P}(n)\leq c_{f}\alpha_{P}(n)\) for all \(P\) prime. We have \[\deg L_{f}(n) =c_{f}\deg P_{f}(n)-(c_{f}\deg P_{f}(n)-\deg L_{f}(n))\] \[=c_{f}dnq^{n}-\sum_{P}(c_{f}\alpha_{P}(n)-\beta_{P}(n))\deg P \tag{3.7}\] \[\leq c_{f}dnq^{n}-\sum_{\deg P\leq n+\deg f_{d}}(c_{f}\alpha_{P}(n)-\beta_{P}(n))\deg P\] \[=c_{f}dnq^{n}-c_{f}\sum_{\deg P\leq n+\deg f_{d}}\alpha_{P}(n)\deg P+\sum_{\deg P\leq n+\deg f_{d}}\beta_{P}(n)\deg P \tag{3.8}\] \[=c_{f}(d-1)nq^{n}+\sum_{\deg P\leq n+\deg f_{d}}\beta_{P}(n)\deg P+O(q^{n})\] where (3.7) comes from removing negative terms and (3.8) from Proposition 3.5 and Lemma 3.1. It remains to prove that \(\sum_{\deg P\leq n+\deg f_{d}}\beta_{P}(n)\deg P=O(q^{n})\). From Lemma 3.3(i) and the Prime Polynomial Theorem we get \[\sum_{\deg P\leq n+\deg f_{d}}\beta_{P}(n)\deg P\ll\sum_{\deg P\leq n+\deg f_{d}}\frac{n}{\deg P}\deg P=\sum_{\deg P\leq n+\deg f_{d}}n\ll q^{n}. \tag{3.9}\] ## 4. Heuristic justification for Conjecture 1.1 Throughout the present section we maintain the assumption that \(f=\sum_{i=0}^{d}f_{i}X^{i}\in\mathbb{F}_{q}[T][X]\) is irreducible. Next we want to study the difference \(\deg L_{f}(n)-c_{f}(d-1)nq^{n}\) and we will do so by relating it to a slightly different quantity defined by (4.1) below. To this end let us define an equivalence relation on \(M_{n}\) by \[Q_{1}\sim Q_{2}\iff f(X+Q_{1}-Q_{2})=f(X)\iff Q_{1}-Q_{2}\in V_{f}.\] We note that for sufficiently large \(n\) the size of each equivalence class is \(|V_{f}|\) and the number of such classes is \(c_{f}q^{n}\). Consider \[S_{f}(n)=\left|\left\{P\in\mathbb{F}_{q}[T]\text{ prime}:\begin{aligned}\deg P>& n+\deg f_{d},\\ \exists\,Q_{1}\nsim Q_{2}\in M_{n}\text{ such that }P\mid f(Q_{1}),f(Q_{2})\end{aligned}\right\}\right|. 
\tag{4.1}\] Note that the condition in the bottom line on the RHS of (4.1) is equivalent to \(\beta_{P}(n)\neq c_{f}\alpha_{P}(n)\) (the negation of this condition is that \(P\) occurs precisely in all \(f(Q)\) where \(Q\) runs over a single equivalence class of \(\sim\), equivalently \(|V_{f}|\beta_{P}(n)=\alpha_{P}(n)\)), hence \[S_{f}(n)=\sum_{\deg P>n+\deg f_{d}\atop c_{f}\alpha_{P}(n)\neq\beta_{P}(n)}1. \tag{4.2}\] **Proposition 4.1**.: _Let \(f\in\mathbb{F}_{q}[T][X]\) be irreducible with \(\deg_{X}f=d\geq 2\)._ 1. \(S_{f}(n)\ll q^{n}\)_._ 2. _Conjecture_ 1.1 _is equivalent to_ \(S_{f}(n)=o(q^{n})\)_._ Proof.: The first part follows immediately from the definition of \(S_{f}(n)\), since there are \(q^{n}\) possible values of \(f(Q)\) and each has \(O(1)\) prime factors of degree \(\deg P>n+\deg f_{d}\). It remains to prove the second part. In the main calculation in the proof of Theorem 1.3 there is only one inequality, namely \[c_{f}dnq^{n}-\sum_{P}(c_{f}\alpha_{P}(n)-\beta_{P}(n))\deg P\leq c_{f}dnq^{n}-\sum_{\deg P\leq n+\deg f_{d}}(c_{f}\alpha_{P}(n)-\beta_{P}(n))\deg P,\] equivalently \[-\sum_{\deg P>n+\deg f_{d}}(c_{f}\alpha_{P}(n)-\beta_{P}(n))\deg P\leq 0.\] Since the RHS in Conjecture 1.1 is \(\gg nq^{n}\), the conjecture is now seen to be equivalent to \[\sum_{\deg P>n+\deg f_{d}}(c_{f}\alpha_{P}(n)-\beta_{P}(n))\deg P=o(nq^{n}).\] Hence it suffices to prove that \[S_{f}(n)=o(q^{n})\iff\sum_{\deg P>n+\deg f_{d}}(c_{f}\alpha_{P}(n)-\beta_{P}(n))\deg P=o(nq^{n}). \tag{4.3}\] For \(P\in\mathbb{F}_{q}[T]\) prime set \[\delta_{P}(n)=\begin{cases}1,&c_{f}\alpha_{P}(n)\neq\beta_{P}(n),\\ 0,&\text{otherwise}.\end{cases}\] Note that for \(n\) sufficiently large if \(\deg P>dn+\deg f_{d}\) then \(c_{f}\alpha_{P}(n)=\beta_{P}(n)=0\). 
Hence, for \(n\) sufficiently large, using (4.2) we have \[S_{f}(n)=\sum_{n+\deg f_{d}<\deg P\leq dn+\deg f_{d}}\delta_{P}(n).\] From Lemma 3.3(ii-iii) and Proposition 2.9, for sufficiently large \(n\) and \(P\) not one of \(O(1)\) bad primes, if \(\deg P>n\) then \(\alpha_{P}(n)\ll 1\), so \[\sum_{n+\deg f_{d}<\deg P}(c_{f}\alpha_{P}(n)-\beta_{P}(n))\deg P \ll\sum_{n+\deg f_{d}<\deg P\leq dn+\deg f_{d}}\delta_{P}(n)\deg P\\ \ll\sum_{n+\deg f_{d}<\deg P\leq dn+\deg f_{d}}\delta_{P}(n)n=S_{f}(n)n. \tag{4.4}\] On the other hand, once again using the fact that \(c_{f}\alpha_{P}(n)=\beta_{P}(n)\) if \(\deg P>dn+\deg f_{d}\), \[\sum_{n+\deg f_{d}<\deg P}(c_{f}\alpha_{P}(n)-\beta_{P}(n))\deg P\\ \geq\sum_{n+\deg f_{d}<\deg P\leq dn+\deg f_{d}}\delta_{P}(n)\deg P\geq\sum_{n+\deg f_{d}<\deg P\leq dn+\deg f_{d}}\delta_{P}(n)n=S_{f}(n)n. \tag{4.5}\] From (4.4) and (4.5) the equivalence (4.3) follows, which completes the proof. Heuristically one can estimate \(S_{f}(n)\) by arguing that the "probability" that for \(Q_{1}\not\sim Q_{2}\in M_{n}\) the values \(f(Q_{1}),f(Q_{2})\) share a prime factor of degree \(\deg P>n+\deg f_{d}\) is (using the Prime Polynomial Theorem) \[\sum_{m=n+\deg f_{d}+1}^{\infty}\sum_{\deg P=m}\frac{1}{|P|^{2}}\approx\sum_{m=n+\deg f_{d}+1}^{\infty}\frac{1}{mq^{m}}=O\left(\frac{1}{nq^{n}}\right).\] Since there are \(O(q^{2n})\) pairs \(Q_{1},Q_{2}\) we expect \[S_{f}(n)\ll\frac{1}{nq^{n}}\cdot q^{2n}\ll\frac{q^{n}}{n}=o(q^{n}).\] By Proposition 4.1 this is equivalent to Conjecture 1.1. At this point one may ask whether the equivalence relation \(\sim\) is really the dominant source of pairs \(Q_{1},Q_{2}\in M_{n}\) with \(f(Q_{1})=f(Q_{2})\), otherwise we would definitely not have \(S_{f}(n)=o(q^{n})\). Over the integers it cannot happen that \(f(n_{1})=f(n_{2})\) for \(n_{1}\neq n_{2}\in\mathbb{Z}\) sufficiently large (since a polynomial is a monotone function at sufficiently large arguments), but in finite characteristic this is less obvious. 
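The input to this heuristic — the Prime Polynomial Theorem count \(q^{m}/m+O(q^{m/2})\) of monic irreducible polynomials of degree \(m\) — is easy to check by brute force for small parameters. A Python sketch over \(\mathbb{F}_{2}\) (an illustration only), encoding a polynomial by the bit mask of its coefficients:

```python
def deg(a):
    # Degree of the F_2 polynomial encoded by the bits of a (deg 0 = -1).
    return a.bit_length() - 1

def gf2_mod(a, b):
    # Remainder of polynomial division a mod b over F_2 (carry-less).
    while deg(a) >= deg(b):
        a ^= b << (deg(a) - deg(b))
    return a

def is_irreducible(f):
    # Trial division by every polynomial of degree 1..deg(f)//2.
    d = deg(f)
    return d >= 1 and all(
        gf2_mod(f, g) != 0 for g in range(2, 1 << (d // 2 + 1))
    )

def count_irreducible(m):
    # Monic degree-m polynomials over F_2 are the integers in [2^m, 2^(m+1)).
    return sum(is_irreducible(f) for f in range(1 << m, 1 << (m + 1)))

# Prime Polynomial Theorem: the count is q^m/m + O(q^{m/2}); here q = 2.
print([count_irreducible(m) for m in range(1, 9)])
# -> [2, 1, 2, 3, 6, 9, 18, 30]
```

For instance, \(2^{8}/8=32\) against the exact count 30 of degree-8 irreducibles, matching the stated error term.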
Nevertheless this is guaranteed by the following proposition, which will be needed for the proof of Theorem 1.8 as well. **Proposition 4.2**.: \[\left|\left\{(Q_{1},Q_{2})\,:\,Q_{1}\not\sim Q_{2}\in M_{n},\,f(Q_{1})=f(Q_{2})\right\}\right|=O(q^{n/2}).\] Before we prove Proposition 4.2 we need a couple of auxiliary results. **Lemma 4.3**.: _Let \(g(X,Y)\in\mathbb{F}_{q}[T][X,Y]\) be a fixed irreducible polynomial with \(\deg_{X,Y}g\geq 2\). The number of pairs \(Q_{1},Q_{2}\in M_{n}\) such that \(g(Q_{1},Q_{2})=0\) is \(O(q^{n/2})\)._ Proof.: If \(\deg_{X}g=\deg_{Y}g=1\) then \(g=AXY+BX+CY+D\), \(A\neq 0,B,C,D\in\mathbb{F}_{q}[T]\) and one cannot have \(g(Q_{1},Q_{2})=0\) for \(Q_{1},Q_{2}\in M_{n},n>\max(\deg B,\deg C,\deg D)\) since the term \(AQ_{1}Q_{2}\) has higher degree than the other terms. Hence assume WLOG that \(\deg_{Y}g\geq 2\), otherwise switch \(X\) and \(Y\). It follows from the quantitative Hilbert Irreducibility Theorem in function fields [1, Theorem 1.1] that for all but \(O(q^{n/2})\) polynomials \(Q_{1}\in M_{n}\), the polynomial \(g(Q_{1},Y)\in\mathbb{F}_{q}[T][Y]\) is irreducible of degree \(\geq 2\) and therefore has no root \(Q_{2}\in M_{n}\). For the remaining \(O(q^{n/2})\) values of \(Q_{1}\) there are at most \(\deg_{Y}g=O(1)\) roots for each \(Q_{1}\), so overall there are \(O(q^{n/2})\) pairs \(Q_{1},Q_{2}\in M_{n}\) with \(g(Q_{1},Q_{2})=0\). **Lemma 4.4**.: _Assume that the polynomial \(f(X)-f(Y)\in\mathbb{F}_{q}[T][X,Y]\) is divisible by \(aX+bY+c\) for some \(a,b,c\in\mathbb{F}_{q}[T]\) with \(b\neq 0\). Then_ (i) \(\zeta:=-a/b\) _satisfies_ \(\zeta^{d}=1\) _and_ \(\zeta\in\mathbb{F}_{q}\)_._ (ii) _If_ \(\zeta=1\) _then_ \(c/b\in V_{f}\)_._ Proof.: The divisibility assumption implies \(f(X)=f\left(-\frac{a}{b}X-\frac{c}{b}\right)\). Comparing coefficients at \(X^{d}\) gives \(\zeta^{d}=1\). 
Since \(\zeta\in\mathbb{F}_{q}(T)\) and \(\mathbb{F}_{q}\) is algebraically closed in \(\mathbb{F}_{q}(T)\) we have \(\zeta\in\mathbb{F}_{q}\), establishing (i). If \(\zeta=1\) the above identity becomes \(f(X)=f\left(X-\frac{c}{b}\right)\), i.e. \(-c/b\in V_{f}\). Since \(V_{f}\) is a vector space we obtain (ii). We are ready to prove Proposition 4.2. Proof of Proposition 4.2.: Write \(f(X)-f(Y)=c\prod_{i=1}^{m}g_{i}(X,Y)\) with \(c\in\mathbb{F}_{q}[T]\) and each \(g_{i}\in\mathbb{F}_{q}[T][X,Y]\) irreducible with \(\deg_{X,Y}g_{i}\geq 1\). It is enough to show that for each factor \(g_{i}(X,Y)\) there are \(\ll q^{n/2}\) pairs \(Q_{1}\not\sim Q_{2}\in M_{n}\) with \(g_{i}(Q_{1},Q_{2})=0\). For a given \(i\), if we have \(\deg_{X,Y}g_{i}\geq 2\) this follows from Lemma 4.3. If \(\deg_{X,Y}g_{i}=1\), write \(g_{i}=aX+bY+c\), assume WLOG that \(b\neq 0\) and denote \(\zeta=-a/b\). By Lemma 4.4 we have \(\zeta^{d}=1\). We distinguish two cases: if \(\zeta\neq 1\) then \(a\neq-b\) and the leading coefficient of \(g_{i}(Q_{1},Q_{2})\) is \(a+b\neq 0\) for any \(Q_{1},Q_{2}\in M_{n}\) with \(n>\deg c\), and therefore there are no such pairs with \(g_{i}(Q_{1},Q_{2})=0\). If on the other hand \(\zeta=1\) (i.e. \(a=-b\)) then by Lemma 4.4 we have \(c/b\in V_{f}\) and \(g_{i}(Q_{1},Q_{2})=aQ_{1}+bQ_{2}+c=0\) is equivalent to \(Q_{1}+c/b=Q_{2}\), so \(Q_{1}\sim Q_{2}\) and there are no pairs with \(g_{i}(Q_{1},Q_{2})=0\), \(Q_{1}\not\sim Q_{2}\in M_{n}\). This completes the proof. **Remark 4.5**.: It is readily seen from the proof that if \(f\) is special in the sense of Definition 1.6 then we may put \(0\) on the RHS of Proposition 4.2 for \(n\) sufficiently large, since in this case \(\deg_{X,Y}g_{i}=1\) for all \(i\). ## 5. The lower bound **Lemma 5.1**.: _Let \(P\in\mathbb{F}_{q}[T]\) be a prime such that_ (a) \(\deg P>\deg f_{d}+n\)_._ (b) \(P\nmid\operatorname{Res}(f,f^{\prime})\) _if_ \(f\) _is separable._ (c) 
\(P\nmid\operatorname{Res}(f,\frac{\partial f}{\partial T})\) _if_ \(f\) _is inseparable._ _Denote_ \[B_{i}=B_{i}(P):=\left|\{Q\in M_{n}:P^{i}\mid f(Q)\neq 0\}\right|.\] _Then_ (i) \(B_{1}\leq d\)_._ (ii) \(B_{i}\leq B_{1}\)_,_ \(i\geq 1\)_._ (iii) \(B_{i}=0\) _if_ \(i>d\)_._ Proof.: Observe that since \(n+\deg f_{d}<\deg P^{i}\), \(M_{n}\) is mapped injectively into \(\mathbb{F}_{q}[T]/P^{i}\) by \(Q\mapsto f(Q)\bmod P^{i}\). **(i).** By the observation above \(B_{1}\leq\rho_{f}(P)\leq\deg(f\bmod P)=d\), so \(B_{1}\leq d\). **(ii).** If \(f\) is separable then Lemma 2.4 combined with assumption (b) applied iteratively with \(\alpha=1,2,\ldots,i-1\) implies that each root of \(f\) modulo \(P\) has a unique lift to a root modulo \(P^{i}\). Combined with the observation above this implies \(B_{i}\leq B_{1}\). If \(f\) is inseparable then by Lemma 2.8 and assumption (c) we have \(B_{i}\leq\rho_{f}(P^{i})=0\leq B_{1}\) for \(i>1\). **(iii).** Finally suppose that \(i>d\). If there exists \(Q\in\mathbb{F}_{q}[T]\) such that \(P^{i}\mid f(Q)\neq 0\), then \(\deg f(Q)\leq dn+\deg f_{d}\), while \(dn+\deg f_{d}<i\deg P=\deg P^{i}\), a contradiction. Hence \(B_{i}=0\) in this case. We are ready to prove the first part of Theorem 1.4. Proof of Theorem 1.4(i).: From Lemma 5.1 we get that for all but \(O(1)\) primes \(P\in\mathbb{F}_{q}[T]\) (note that by Lemma 2.7 conditions (b-c) of Lemma 5.1 are satisfied for all but \(O(1)\) primes) such that \(\deg P>n+\deg f_{d}\) we have \[\alpha_{P}(n)=\sum_{i\geq 1}\sum_{Q\in M_{n}\atop P^{i}\mid f(Q)}1=\sum_{i\geq 1\atop B_{i}>0}B_{i}\leq d\sum_{i\geq 1\atop B_{i}>0}1\leq d\beta_{P}(n).\] Now using Lemma 3.1 and Proposition 3.5 we obtain \[(d-1)nq^{n}\lesssim\deg\frac{P_{f}(n)}{R_{f}(n)}=\sum_{\deg P>n+\deg f_{d}}\alpha_{P}(n)\deg P\leq d\sum_{\deg P>n+\deg f_{d}}\beta_{P}(n)\deg P\leq d\deg L_{f}(n),\] hence \(\deg L_{f}(n)\gtrsim\frac{d-1}{d}nq^{n}\) as required. 
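An integer analogue of Lemma 5.1 is easy to test numerically (a sketch only; here \(\mathbb{Z}\) with a prime \(p>N\) stands in for \(\mathbb{F}_{q}[T]\) with \(\deg P>n+\deg f_{d}\)): for \(f(x)=x^{2}+1\) and arguments \(x\leq N\), at most \(d=2\) arguments have values divisible by a given large prime, and no power \(p^{i}\) with \(i>d\) divides any value.

```python
def valuation_counts(N, p, f=lambda x: x * x + 1):
    # B_i = number of x in [1, N] with p^i dividing f(x); cf. Lemma 5.1.
    return {i: sum(1 for x in range(1, N + 1) if f(x) % p**i == 0)
            for i in range(1, 6)}

N = 30
for p in (31, 37, 41, 53, 101):  # primes p > N, analogue of deg P > n
    B = valuation_counts(N, p)
    assert B[1] <= 2                            # analogue of Lemma 5.1(i), d = 2
    assert all(B[i] <= B[1] for i in B)         # analogue of Lemma 5.1(ii)
    assert all(B[i] == 0 for i in B if i > 2)   # analogue of Lemma 5.1(iii)

# The bound B_1 <= d is attained: 53 divides both 23^2+1 = 530 and 30^2+1 = 901.
print(valuation_counts(N, 53)[1])  # -> 2
```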
**Remark 5.2**.: There exist polynomials \(f\) for which the lower bound given in Theorem 1.4(i) and the upper bound given by Theorem 1.3 are equal. Indeed, if \(|V_{f}|=d\) (as in Example 2.2), then combining Theorem 1.3 and Theorem 1.4(i) gives \(\deg L_{f}(n)\sim\frac{d-1}{d}nq^{n}\) as \(n\to\infty\). Hence the lower bound is tight in this case. To prove the second part of Theorem 1.4 we first need the following **Lemma 5.3**.: _Assume \(p\nmid d\) or \(f_{d}\nmid f_{d-1}\). Let \(n\) be sufficiently large and \(P\in\mathbb{F}_{q}[T]\) a prime such that \(\deg P>\deg f_{d}+n\). Then in the notation of Lemma 5.1 we have \(B_{1}(P)\leq d-1\)._ Proof.: Assume by way of contradiction that there exist distinct \(Q_{1},...,Q_{d}\in M_{n}\) such that \(P\mid f(Q_{1}),...,f(Q_{d})\). The assumption \(\deg P>\deg f_{d}+n\) implies that \(Q_{i}\bmod P\) are also distinct and therefore \[f(X)\equiv f_{d}\prod_{j=1}^{d}(X-Q_{j})\pmod{P}.\] Comparing coefficients at \(X^{d-1}\) we obtain \[f_{d-1}\equiv-f_{d}\sum_{j=1}^{d}Q_{j}\pmod{P},\] i.e. \[f_{d-1}+f_{d}\sum_{j=1}^{d}Q_{j}\equiv 0\pmod{P}.\] Thus \(P\mid f_{d-1}+f_{d}\sum_{j=1}^{d}Q_{j}\) and if \(f_{d-1}+f_{d}\sum_{j=1}^{d}Q_{j}\neq 0\) we would have \(\deg P\leq n+\deg f_{d}\) (if \(n\geq\deg f_{d-1}\)), a contradiction. Now we observe that as \(Q_{1},...,Q_{d}\) are monic, if \(\deg\sum_{j=1}^{d}Q_{j}<n\) then \(p\mid d\). Therefore for a sufficiently large \(n\) if \(f_{d-1}+f_{d}\sum_{j=1}^{d}Q_{j}=0\) then \(p\mid d\) and additionally \(f_{d}\sum_{j=1}^{d}Q_{j}=-f_{d-1}\), hence \(f_{d}\mid f_{d-1}\). This contradicts our initial assumption, so we have reached a contradiction in all cases. We are ready to prove the second part of Theorem 1.4. Proof of Theorem 1.4(ii).: From Lemma 5.3 we get that for \(P\in\mathbb{F}_{q}[T]\) prime such that \(\deg P>n+\deg f_{d}\) we have \(B_{1}(P)\leq d-1\). 
The rest of the proof is now the same as the proof of Theorem 1.4(i), with the inequality \(\alpha_{P}(n)\leq d\beta_{P}(n)\) replaced by \(\alpha_{P}(n)\leq(d-1)\beta_{P}(n)\), which has the effect of replacing the constant \(\frac{d-1}{d}\) by \(1\) in the conclusion. ## 6. The radical of \(L_{f}(n)\) We assume for simplicity of exposition that \(f\) is irreducible, although this assumption can be dispensed with in Theorem 1.5 and its proof, which we give in the present section. Recall that we denote \(\ell_{f}(n)=\operatorname{rad}L_{f}(n)\) and observe that \[\deg\ell_{f}(n)=\sum_{P\atop\beta_{P}(n)>0}\deg P.\] By Theorem 1.3 and Theorem 1.4(i) we have \(nq^{n}\ll\deg L_{f}(n)\ll nq^{n}\) and the asymptotic \(\deg\ell_{f}(n)\sim\deg L_{f}(n)\) (Theorem 1.5) is equivalent to the following **Proposition 6.1**.: \[\deg L_{f}(n)-\deg\ell_{f}(n)=o(nq^{n}).\] Proof.: Let us write \[\deg\ell_{f}(n)=\sum_{\deg P>n\atop\beta_{P}(n)>0}\deg P+\sum_{\deg P\leq n\atop\beta_{P}(n)>0}\deg P,\] \[\deg L_{f}(n)=\sum_{\deg P>n}\beta_{P}(n)\cdot\deg P+\sum_{\deg P\leq n}\beta_{P}(n)\cdot\deg P.\] Now using (3.9) we obtain \[\sum_{\deg P\leq n\atop\beta_{P}(n)>0}\deg P\leq\sum_{\deg P\leq n+\deg f_{d}}\beta_{P}(n)\cdot\deg P=O(q^{n})\] and therefore \[\deg\ell_{f}(n)=\sum_{\deg P>n\atop\beta_{P}(n)>0}\deg P+O(q^{n}),\] \[\deg L_{f}(n)=\sum_{\deg P>n\atop\beta_{P}(n)>0}\beta_{P}(n)\deg P+O(q^{n}).\] Denote \[S:=\{Q\in M_{n}:\exists P\text{ prime},\ \deg P>n,\,P^{2}\mid f(Q)\}.\] We have \[0\leq\deg L_{f}(n)-\deg\ell_{f}(n) =\sum_{\deg P>n}\beta_{P}(n)\deg P-\sum_{\deg P>n\atop\beta_{P}(n)>0}\deg P+O(q^{n})\] \[\leq\sum_{\deg P>n\atop\beta_{P}(n)\geq 2}\beta_{P}(n)\deg P+O(q^{n})\] \[\leq\sum_{Q\in S}\deg f(Q)+O(q^{n})\] \[\ll|S|n+q^{n}.\] It remains to prove that \(|S|=o(q^{n})\) but by [10, Lemma 7.1], the set \[W=\{Q:\deg Q\leq n,\ \exists P\text{ prime, }\deg P>n/2,\,P^{2}\mid f(Q)\}\] is of size \(o(q^{n})\) (this is the key ingredient of the proof; see also [12, Proposition 3.3] and [1, (4.7)] for 
refinements of this statement). Since \(S\subset W\) we have \(|S|=o(q^{n})\) and we are done. As noted above, this concludes the proof of Theorem 1.5. ## 7. Asymptotics of \(L_{f}(n)\) for special polynomials In the present section we assume that \(f=\sum_{i=0}^{d}f_{i}X^{i}\in\mathbb{F}_{q}[T][X]\), \(\deg_{X}f=d\geq 2\) is irreducible and special in the sense of Definition 1.6. We will show that \(\deg L_{f}(n)\sim c_{f}nq^{n}\) as \(n\to\infty\), thus proving Theorem 1.8. As in Section 4 we consider the equivalence relation on \(M_{n}\) given by \(Q_{1}\sim Q_{2}\iff Q_{1}-Q_{2}\in V_{f}\) and the quantity \(S_{f}(n)\) defined by (4.1). By the definition of a special polynomial we may write \[f(X)-f(Y)=\prod_{i=1}^{d}(a_{i}X+b_{i}Y+c_{i}),\quad a_{i},b_{i},c_{i}\in\mathbb{F}_{q}[T]. \tag{7.1}\] Comparing degrees in \(X\) and \(Y\) shows that \(a_{i},b_{i}\neq 0\). Comparing coefficients at \(X^{d}\) and \(Y^{d}\) in (7.1) we see that \(\deg a_{i},\deg b_{i}\leq\deg f_{d}\). Therefore if \(Q_{1},Q_{2}\in M_{n}\) with \(n>\deg c_{i}\), we have \[\deg(a_{i}Q_{1}+b_{i}Q_{2}+c_{i})\leq n+\deg f_{d}. \tag{7.2}\] Let \(Q_{1},Q_{2}\in M_{n}\) with \(Q_{1}\not\sim Q_{2}\) and \(P\) a prime with \(\deg P>n+\deg f_{d}\) such that \(P\,|\,f(Q_{1}),f(Q_{2})\). Then in particular \(P\,|\,f(Q_{1})-f(Q_{2})\) and by (7.1) we have \(P\,|\,a_{i}Q_{1}+b_{i}Q_{2}+c_{i}\neq 0\) for some \(1\leq i\leq d\). By (7.2) and the condition \(\deg P>n+\deg f_{d}\) we must have \(a_{i}Q_{1}+b_{i}Q_{2}+c_{i}=0\) and therefore by (7.1) we have \(f(Q_{1})=f(Q_{2})\). By Proposition 4.2 combined with Remark 4.5 this is impossible for \(n\) sufficiently large. Hence we have \(S_{f}(n)=0\) for \(n\) sufficiently large and by Proposition 4.1 it follows that Conjecture 1.1 holds for \(f\), concluding the proof of Theorem 1.8. ## 8. 
Classification of special polynomials In the present section we classify the special polynomials (Definition 1.6) over an arbitrary unique factorization domain (UFD) \(R\), establishing Theorem 1.10. We denote by \(p\) the characteristic of \(R\) (as \(R\) is a domain, \(p\) is zero or a prime) and by \(K\) its field of fractions. Also if \(p>0\) we will denote by \(\mathbb{F}_{p}\) its prime subfield. Note that as \(R\) is a UFD, so are the polynomial rings \(R[X],R[X,Y]\). For a polynomial \(g\in R[X,Y]\) we denote by \(\deg g\) its total degree. Special polynomials \(f\in R[X]\) were defined in Definition 1.6. **Lemma 8.1**.: _Let \(f\in R[X]\) be a special polynomial of degree \(d\). Then we may write_ \[f(X)-f(Y)=\prod_{i=1}^{d}(a_{i}X+b_{i}Y+c_{i}),\quad a_{i}\neq 0,b_{i}\neq 0,c_{i }\in R. \tag{8.1}\] Proof.: This is immediate from the definition, except for the condition \(a_{i},b_{i}\neq 0\) which follows by comparing degrees in \(X\) and \(Y\) in (8.1). **Lemma 8.2**.: _Let \(f=\sum_{i=0}^{d}f_{i}X^{i}\in R[X]\) be a special polynomial of degree \(d\)._ 1. _If_ \(p=0\) _then_ \[f(X)-f(Y)=f_{d}\prod_{j=1}^{d}\left(X-\zeta^{j}Y-b^{(j)}\right),\] _where_ \(\zeta\in R\) _is a primitive_ \(d\)_-th root of unity and_ \(b^{(j)}\in K\)_._ 2. _If_ \(p>0\) _write_ \(d=p^{l}m\) _with_ \((m,p)=1\)_. Then_ \[f(X)-f(Y)=f_{d}\prod_{i=1}^{p^{l}}\prod_{j=1}^{m}\left(X-\zeta^{j}Y-b^{(j)}_{i} \right),\] _where_ \(\zeta\in R\) _is a primitive_ \(m\)_-th root of unity and_ \(b^{(j)}_{i}\in K\)_._ Proof.: For brevity we only treat the case \(p>0\), the case \(p=0\) being similar to the case \(p>0\), \(l=0\). Write \(f(X)-f(Y)=\prod_{i=1}^{d}(a_{i}X+b_{i}Y+c_{i})\) as in Lemma 8.1. Note that \[\prod_{i=1}^{d}(a_{i}X+b_{i}Y+c_{i})=\prod_{i=1}^{d}(a_{i}X+b_{i}Y)+A[X,Y],\] where \(A[X,Y]\in R[X,Y],\deg A<d\). 
Since \(\prod_{i=1}^{d}(a_{i}X+b_{i}Y)\) is homogeneous of degree \(d\), we have \[f_{d}(X^{d}-Y^{d})=\prod_{i=1}^{d}(a_{i}X+b_{i}Y)=\prod_{i=1}^{d}a_{i}\prod_{i=1}^{d}(X-\mu_{i}Y),\] where \(\mu_{i}=-b_{i}/a_{i}\in K\). Plugging in \(Y=1\) and noting that \(\prod_{i=1}^{d}a_{i}=f_{d}\) (by comparing coefficients at \(X^{d}\)) we obtain \[X^{d}-1=\prod_{i=1}^{d}(X-\mu_{i}).\] In particular we see that all the \(d\)-th roots of unity in the algebraic closure of \(K\) lie in \(K\), and since \(R\) is integrally closed (because it is a UFD) they lie in \(R\). Since \(p=\operatorname{char}(K)>0\) these are actually the \(m\)-th roots of unity, where \(d=mp^{l}\), \((m,p)=1\). We are ready to prove the forward direction of Theorem 1.10. **Proposition 8.3**.: _Let \(f\in R[X]\) be special. Then it has the form stated in Theorem 1.10._ Proof.: For brevity we only treat the case \(p>0\), the case \(p=0\) being similar to the case \(p>0\), \(l=0\) that we treat below (using part (i) of Lemma 8.2 instead of part (ii) and the fact that \(V_{f}=\{0\}\); see Remark 1.2). Write \(d=\deg f=p^{l}m,(m,p)=1\). Denote by \(\zeta\) a primitive \(m\)-th root of unity. From Lemma 8.2(ii) we have \(\zeta\in R\) and \[f(X)-f(Y)=f_{d}\prod_{i=1}^{p^{l}}\prod_{j=1}^{m}\left(X-\zeta^{j}Y-b^{(j)}_{i}\right),\quad b^{(j)}_{i}\in K. \tag{8.2}\] Next consider the shifted polynomial \(g(X)=f(X+w)\in K[X]\) with \[w=\left\{\begin{array}{ll}\frac{b^{(m-1)}_{1}}{1-\zeta^{-1}},&m>1,\\ 0,&m=1.\end{array}\right.\] Note that \(g\) is also special but over the ring \(K\) instead of \(R\) (this is immediate from the definition) and also by the choice of \(w\) we have \[g(\zeta X)=g(X),\] because \[g(X)-g(\zeta X)=f(X+w)-f(\zeta X+w)=f_{d}\prod_{i=1}^{p^{l}}\prod_{j=1}^{m}\left(X+w-\zeta^{j}(\zeta X+w)-b^{(j)}_{i}\right)\] and if \(m>1\) then \(X+w-\zeta^{m-1}(\zeta X+w)-b^{(m-1)}_{1}=0\) (using \(\zeta^{m}=1\)). 
If we can show that \(g\) has the form asserted by Theorem 1.10 except the coefficients are in \(K\) instead of \(R\), then \(f(X)=g(X-w)\) would have the form asserted by the theorem. Replacing \(f\) with \(g\), we see that it is enough to prove the assertion under the additional assumptions that \(R=K\) is a field and \(f(X)=f(\zeta X)\). Thus from now on we assume that \(R=K\) is a field and \(f(X)=f(\zeta X)=f(\zeta^{2}X)=...=f(\zeta^{m-1}X)\). Next let us prove that the multisets \[S_{j}:=\left\{b_{i}^{(j)}:1\leq i\leq p^{l}\right\}\] are the same for all \(1\leq j\leq m\). To this end let us identify the indices \(1\leq j\leq m\) with their corresponding residues in \(\mathbb{Z}/m\) and observe that by (8.2) we have \[f_{d}\prod_{j\bmod m}\prod_{b\in S_{j}}\left(X-\zeta^{j}Y-b\right)=f(X)-f(Y)=f(X)-f(\zeta^{k}Y)=f_{d}\prod_{j\bmod m}\prod_{b\in S_{j}}\left(X-\zeta^{j+k}Y-b\right)\\ =f_{d}\prod_{j\bmod m}\prod_{b\in S_{j-k}}\left(X-\zeta^{j}Y-b\right) \tag{8.3}\] for each \(k\in\mathbb{Z}/m\). From the first equality in (8.3) we see that the number of times an element \(b\) appears in the multiset \(S_{j}\) equals the multiplicity of the factor \(X-\zeta^{j}Y-b\) in the factorization of \(f(X)-f(Y)\) (recall that \(R[X,Y]\) is a UFD) and from the last equality it also equals the number of times \(b\) appears in \(S_{j-k}\). Hence the multisets \(S_{1},\ldots,S_{m}\) are all equal and we may rewrite (8.2) as \[f(X)-f(Y)=f_{d}\prod_{i=1}^{p^{l}}\prod_{j\bmod m}(X-\zeta^{j}Y-b_{i})=f_{d}\prod_{b\in S_{1}}\prod_{j\bmod m}(X-\zeta^{j}Y-b), \tag{8.4}\] where \(b_{i}=b_{i}^{(1)}\). Let us prove the following properties of the multiset \(S_{1}=\{b_{1},\ldots,b_{p^{l}}\}\): 1. The underlying set of \(S_{1}\) is \(V_{f}:=\{b\in R:f(X+b)=f(X)\}\). It is an \(\mathbb{F}_{p}\)-linear subspace of \(R\), with \(|V_{f}|=p^{v}\) for some \(0\leq v\leq l\). 2. Multiplication by \(\zeta\) permutes \(S_{1}\) as a multiset. 3. 
\(V_{f}\) is furthermore an \(\mathbb{F}_{p}(\zeta)\)-linear subspace of \(R\). 4. All elements of \(S_{1}\) have the same multiplicity \(p^{l-v}\). For (1) observe that by (8.4) we have \[b\in S_{1}\iff X-Y-b\,\mid\,f(X)-f(Y)\iff f(X+b)=f(X),\] hence \(V_{f}\) is the underlying set of \(S_{1}\). Arguing as in the proof of Lemma 2.1 one sees that \(V_{f}\) is an \(\mathbb{F}_{p}\)-linear subspace of \(R\). Since \(|V_{f}|\leq|S_{1}|=p^{l}\) we see that \(|V_{f}|=p^{v}\) with \(0\leq v\leq l\). Replacing \(Y\) with \(\zeta Y\) in (8.4) and using \(f(Y)=f(\zeta Y)\) gives (2). The properties (1),(2) imply (3). To prove (4) observe that by (8.4) the multiplicity of \(b\in V_{f}\) in \(S_{1}\) equals the exponent of the factor \(X-Y-b\) in the factorization of \(f(X)-f(Y)\), but since \(f(Y+b)=f(Y)\) it also equals the exponent of \(X-Y\) in the factorization of \(f(X)-f(Y)\) and is therefore independent of \(b\). We have established properties (1-4). Now assuming WLOG that \(V_{f}=\{b_{1},\ldots,b_{p^{v}}\}\) we can rewrite (8.4) as \[f(X)-f(Y)=f_{d}\prod_{i=1}^{p^{v}}\prod_{j=1}^{m}(X-\zeta^{j}Y-b_{i})^{p^{l-v}}=f_{d}\prod_{i=1}^{p^{v}}\left((X-b_{i})^{mp^{l-v}}-Y^{mp^{l-v}}\right).\] Setting \(Y=0\) we see that \(f\) has the form asserted in Theorem 1.10 with \(A=0,C=f(0)\). This completes the proof. Finally we prove the converse direction of Theorem 1.10. **Proposition 8.4**.: (i) _Assume that_ \(p=0\)_, that_ \(K\) _contains a primitive_ \(d\)_-th root of unity_ \(\zeta\) _and_ \[f=f_{d}(X+A)^{d}+C\in R[X]\] _with_ \(A,C\in K\)_. Then_ \(f\) _is special._ (ii) _Assume that_ \(p>0\)_,_ \(d=p^{l}m,\,(m,p)=1\) _and_ \(K\) _contains a primitive_ \(m\)_-th root of unity. Suppose_ \[f(X)=f_{d}\prod_{i=1}^{p^{v}}(X-b_{i}+A)^{mp^{l-v}}+C\in R[X],\] _where_ \(A,C\in K\) _and_ \(V=\{b_{1},\ldots,b_{p^{v}}\},\,0\leq v\leq l\) _(_\(b_{i}\) _distinct) is an_ \(\mathbb{F}_{p}(\zeta)\)_-linear subspace of_ \(K\)_. 
Then_ \(f\) _is special._ Proof.: **(i).** Denote by \(\zeta\) a primitive \(d\)-th root of unity. Then we have the factorization \[f(X)-f(Y)=f_{d}\prod_{i=0}^{d-1}\big{(}X+A-\zeta^{i}(Y+A)\big{)}\] and \(f\) is special. **(ii).** Since \(R\) is a UFD, by Gauss's lemma \(f(X)-f(Y)\) factors into linear polynomials over \(R\) iff it does over \(K\). Hence we assume WLOG that \(R=K\) is a field and \(A,C,b_{i}\in R\). It is then immediate from the definition that \(f(X)\) is special iff \(\frac{1}{f_{d}}(f(X-A)-C)\) is, hence we assume WLOG that \(A=C=0,\,f_{d}=1\), i.e. \(f=g^{mp^{l-v}}\) where \[g=\prod_{i=1}^{p^{v}}(X-b_{i}).\] We now show that the following factorization holds: \[f(X)-f(Y)=\prod_{i=1}^{p^{v}}\prod_{j=1}^{m}(X-\zeta^{j}Y+b_{i})^{p^{l-v}}. \tag{8.5}\] Since the Frobenius map is a ring homomorphism this is equivalent to \[g(X)^{m}-g(Y)^{m}=\prod_{i=1}^{p^{v}}\prod_{j=1}^{m}(X-\zeta^{j}Y+b_{i}). \tag{8.6}\] Note that since \(V\) is an \(\mathbb{F}_{p}(\zeta)\)-linear subspace, the map \(x\mapsto\zeta^{-j}(x+b_{i})\) permutes \(V\) and we have \[g(\zeta^{-j}(X+b_{i}))=\zeta^{-jp^{v}}g(X)\] and therefore \(g(\zeta^{-j}(X+b_{i}))^{m}=g(X)^{m}\). It follows that \(X-\zeta^{j}Y+b_{i}\,\mid\,g(X)^{m}-g(Y)^{m}\) and therefore the RHS of (8.6) divides the LHS. Since both sides of (8.6) have the same degree \(mp^{v}\) and are monic in \(X\), we must have an equality. This establishes (8.6) and therefore (8.5), which completes the proof. **Remark 8.5**.: Proposition 8.3 and its proof work under the weaker assumption that \(R\) is an integrally closed domain (not necessarily a UFD). Proposition 8.4 does require the UFD assumption, but it can be replaced with \(R\) being only integrally closed if one assumes that \(f_{d}=1\), because Gauss's lemma works for monic polynomials over any integrally closed domain.
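The factorizations of the type (8.5)-(8.6) can be verified concretely for small examples. A Python sketch (an illustration only) with naive bivariate polynomial arithmetic mod \(p\), checking two special polynomials: \(f=X^{3}\) over \(\mathbb{F}_{7}\), where \(\zeta=2\) is a primitive cube root of unity, and \(f=X^{3}-X\) over \(\mathbb{F}_{3}\), where \(m=1\) and \(V_{f}=\mathbb{F}_{3}\):

```python
def pmul(a, b, p):
    # Multiply bivariate polynomials, stored as dicts {(i, j): coeff}, mod p.
    r = {}
    for (i1, j1), c1 in a.items():
        for (i2, j2), c2 in b.items():
            k = (i1 + i2, j1 + j2)
            r[k] = (r.get(k, 0) + c1 * c2) % p
    return {k: c for k, c in r.items() if c}

def linear(a, b, c, p):
    # a*X + b*Y + c as a bivariate polynomial mod p.
    return {k: v % p for k, v in [((1, 0), a), ((0, 1), b), ((0, 0), c)] if v % p}

def diff_poly(coeffs, p):
    # f(X) - f(Y) for f given by univariate coefficients [f_0, f_1, ...].
    r = {}
    for i, c in enumerate(coeffs):
        for k, s in [((i, 0), c), ((0, i), -c)]:
            r[k] = (r.get(k, 0) + s) % p
    return {k: c for k, c in r.items() if c}

def prod(factors, p):
    r = {(0, 0): 1}
    for f in factors:
        r = pmul(r, f, p)
    return r

# f = X^3 over F_7: f(X) - f(Y) = (X - Y)(X - 2Y)(X - 4Y), with zeta = 2.
assert diff_poly([0, 0, 0, 1], 7) == prod([linear(1, -z, 0, 7) for z in (1, 2, 4)], 7)

# f = X^3 - X over F_3: f(X) - f(Y) = (X - Y)(X - Y + 1)(X - Y + 2), V_f = F_3.
assert diff_poly([0, -1, 0, 1], 3) == prod([linear(1, -1, b, 3) for b in (0, 1, 2)], 3)
print("both factorizations verified")
```

The second example matches the shape in Proposition 8.4(ii) with \(p=3\), \(l=v=1\), \(m=1\), \(A=C=0\).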
2304.06834
Understanding the phase behavior of a proto-biomembrane
The rich thermotropic behavior of lipid bilayers is addressed using phenomenological theory informed by many experiments. The most recent experiment not yet addressed by theory has shown that the tilt modulus in DMPC lipid bilayers decreases dramatically as the temperature is lowered toward the main transition temperature TM. It is shown that this behavior can be understood by introducing a simple free energy functional for tilt that couples to the area per molecule. This is combined with a chain melting free energy functional in which the area is the primary order parameter that is the driver of the main transition. Satisfactory agreement with experiment is achieved with values of the model parameters determined by experiments, but the transition is directly into the gel phase. The theory is then extended to include the enigmatic ripple phase by making contact with the most recent experimentally determined ripple structure.
John F. Nagle
2023-04-13T21:55:03Z
http://arxiv.org/abs/2304.06834v1
# Understanding the phase behavior of a proto-biomembrane ###### Abstract The rich thermotropic behavior of lipid bilayers is addressed using phenomenological theory informed by many experiments. The most recent experiment not yet addressed by theory has shown that the tilt modulus in DMPC lipid bilayers decreases dramatically as the temperature is lowered toward the main transition temperature \(T_{M}\). It is shown that this behavior can be understood by introducing a simple free energy functional for tilt that couples to the area per molecule. This is combined with a chain melting free energy functional in which the area is the primary order parameter that is the driver of the main transition. Satisfactory agreement with experiment is achieved with values of the model parameters determined by experiments, but the transition is directly into the gel phase. The theory is then extended to include the enigmatic ripple phase by making contact with the most recent experimentally determined ripple structure. lipid membranes, critical behavior, phase transitions ## I Introduction Proto-biomembranes consisting of lipid bilayers have fascinating thermodynamic phase behavior even when an artificial membrane is formed with only one of the many lipids found in organisms. When immersed in water, phosphocholine (PC) lipids that have two saturated hydrocarbon chains, both of chain length \(n\) (for \(n=\) 14-18), have four phases and three transition temperatures that depend upon the chain length. The high temperature phase is often called the fluid phase because the lipids in the two-dimensional membrane are disordered and mobile. It is often identified by the symbol \(L_{\alpha}\). (Biophysics literature often calls this the liquid-crystalline phase, although the other phases are also considered liquid crystals in physics.) Most of the membranes in organisms are in a fluid phase. 
As temperature is lowered, the lipids become better ordered at the main phase transition temperature \(T_{M}\), but the bilayer is far from crystalline and it takes an enigmatic ripple (\(P_{\beta^{\prime}}\)) structure [1; 2; 3; 4; 5; 6; 7] which has been a major challenge for physical understanding. Further reduction in temperature through the so-called pre-transition or lower transition at \(T_{L}\) takes the bilayers into the misnamed gel (\(L^{\prime}_{\beta}\)) phase which still retains considerable disorder [8]; skin membranes include gel-like regions [9; 10]. Even further reduction in temperature, while still remaining above the freezing point of water in which the bilayers are immersed, very slowly forms a subgel phase (\(L_{C}\)) that begins to show signatures of two-dimensional crystallinity which are still not well characterized structurally and likely have no biological importance. This paper focuses on the fluid (\(F\)), ripple (\(R\)) and gel (\(G\)) phases and the main and lower transitions of the PC lipid DMPC which has two saturated linear hydrocarbon chains, each with 14 carbons bonded via a glycerol moiety to a PC headgroup. It has been widely recognized that the main phase transition of DMPC at \(T_{M}=\) 24.0 \({}^{\circ}\)C is first order with a latent heat \(\Delta H=\) 6.5 kcal/mole [11] and discontinuous jumps in structural quantities, notably a 27% increase in area per molecule from 0.47 nm\({}^{2}\)[12] to 0.60 nm\({}^{2}\)[13] and a 2.7% increase in volume [14]. However, the temperature dependence of the volume above the transition was noted as possibly signifying the existence of a critical point at an experimentally inaccessible point in an extended phase diagram. Although this was a rather small effect, there have also been other suggestions of pseudocriticality from experiments [15; 16]. 
Recently, more dramatic critical-like behavior above the main transition has been observed when studying the temperature dependence of mechanical moduli in DMPC [17]. Theories of the mechanical behavior of membranes originally focused at long length scales where the bending modulus \(K_{C}\) dominates. As the molecular length scale is approached, molecular tilt becomes important in physical studies. It is a degree of freedom that overcomes an otherwise insurmountable barrier to biological membrane fusion and fission [18]. The new finding regards the tilt modulus \(K_{m}\). Like \(K_{C}\), it is like the stiffness of a spring and its inverse \(1/K_{m}\) is like a compressibility. The tilt modulus decreases by a factor of 3 when \(T\) decreases from 40\({}^{\circ}\)C to the transition at \(T_{M}=24\,^{\circ}\)C. This is unlike most stiffness properties that increase with decreasing temperature, but it is what is observed near a critical point. Although \(K_{m}\) does not reach zero, which would be an infinite critical compressibility \(1/K_{m}\), the idea that critical behavior is observable even when the transition is ultimately first order is well understood. Figure 1 shows how this occurs in a simple fluid. When the pressure is constrained, the thermal trajectory may cross the first order phase line, but still lie within a critical region surrounding the critical point where the compressibility becomes large. Of course, for simple fluids, pressure and temperature can be varied to achieve an experimental trajectory through the critical point, but similar experiments have yet to be found for lipid bilayers. The pertinent thermodynamic quantities in theories of phase transitions near critical points are a reduced or relative temperature \(t\) and an order parameter \(\alpha\). Of course, lipid molecules are much more complex than the substituents in typical simple fluids and the interaction with water to form bilayers adds another level of complexity. 
One should therefore not be surprised that there would be several different order parameters that could interact with each other in interesting ways [19]. This paper addresses this by developing a phenomenological, continuum, Landau-deGennes-like description of the free energy. This follows many previous papers that have developed continuum theories for lipid bilayers [20; 21; 22; 23; 24; 25; 26; 27; 28]. While some of these theories have provided connections to the molecular level [20; 22], generally the continuum models involve phenomenological parameters that do not relate to molecular interaction energies. Nevertheless, continuum-level models can provide insight into the broad features of a system and its phase transitions, more so when the results of the parameterized model agree quantitatively with much experimental data; the model in this paper is compared to more data than previous theories. This paper develops free energy functionals for two types of order parameters. Section II focuses on the hydrocarbon chains, whose conformations change from essentially straight (all-trans) at low temperature to disordered in the fluid phase; this chain disordering (melting) has long been recognized as the driver of the main transition [29]. This section emphasizes that assuming a conventional free energy functional that works for simple fluids is not necessarily the best choice for the more complex state of lipids in a bilayer. Section III focuses on molecular tilt to make contact with the new experimental results for the tilt modulus \(K_{m}\). Section IV shows results obtained from an intermediate theory that combines the free energy functionals from Sections II and III. While this intermediate theory accommodates a good deal of experimental data, including the new data for the temperature dependence of the tilt modulus, it only provides a main transition from the fluid phase to a gel phase.
Section V then reviews the heterogeneous structure of the intervening ripple phase. Earlier theories [20; 21; 22; 23; 24; 25; 26; 27; 28] are followed in Section VI by invoking a term in the free energy functional that depends on this heterogeneity, which then provides both the main and the lower transitions. While this is not deemed completely satisfactory, as discussed in Section VII, it is suggested that this continuum theory is nevertheless an advance on previous theories. ## II Chain melting free energy functional \(F_{c}\) Conformational disordering of the hydrocarbon chains, i.e. chain melting, is clearly the dominant feature of the main transition [29]. Two likely quantities for the chain melting order parameter are either the difference in the area per molecule or the difference in thickness between the fluid phase and the gel phase. This is not a major choice because area times thickness is volume and there is only a small percentage volume change at the main transition [14]. Area \(A\) is chosen and the order parameter is defined as \[\alpha=A-A_{0}. \tag{1}\] Here \(A_{0}=0.40\) nm\({}^{2}\) is twice the cross sectional area of the hydrocarbon chains in the gel phase. It is important to emphasize that \(A_{0}\) is not the surface area per DMPC molecule in the gel phase whose value is \(A_{G}=0.47\) nm\({}^{2}\)[12]; instead, \(A_{0}=A_{G}\cos(\theta_{G})\) takes into account that chains tilted by \(\theta_{G}=32^{\circ}\)[12] are closer together than the headgroups. This convention assigns \(\alpha=0\) to the gel phase. In the fluid phase, disordered chains have no average tilt, so \(A\) is then the headgroup area. A major choice regards the form of the free energy functional. If one slavishly adopts the conventional form for magnetism or simple fluids, one writes \[F_{C}(\alpha,t)=\frac{1}{2}b_{2}t\alpha^{2}+\frac{1}{3}b_{3}\alpha^{3}+\frac{1}{4}b_{4}\alpha^{4}, \tag{2}\] where \(t\) is defined as \[t=T-T_{C}.
\tag{3}\] Negative values of \(b_{2}\) and \(b_{3}\) bring about a first order transition as illustrated in Fig. 2. The critical point is pushed into a different place in parameter space that is quite likely difficult to achieve in experiments on lipid bilayers. That is consistent with the suggestion that critical behavior affects the phase transition even though it is ultimately a first order transition [23; 29]. There is, however, a problem with the model in Eq. (2). The area compressibility modulus \(K_{A}/A\) is the curvature in the isotherms at their minima for a flaccid bilayer with zero surface tension. Figure 2 indicates that the curvatures are equal for the gel and fluid phases and this is proven in the Appendix. Therefore, \(K_{A}\) has only a slightly larger value in the gel phase than in the fluid phase, by the ratio of \(A_{F}/A_{G}\). Although the gel phase \(K_{AG}\) is relatively poorly determined experimentally, it is clearly much larger than \(K_{AF}\) in the fluid phase [30; 31] and a simulation gives a ratio \(K_{AG}/K_{AF}\) of about 4.6 [32].

Figure 1: The solid line shows the locus of a first order transition that ends in a critical point.

This paper instead chooses a free energy functional form extracted from a microscopic toy model of chain melting [33]. This toy model emphasized the hard-core, steric, excluded volume interaction between hydrocarbon chains in competition with trans-gauche type conformational disordering. In contrast to the soft interactions of order \(kT\) between spins in Ising models, hard-core, excluded volume interactions are essentially either infinite or zero compared to \(kT\). Like the two-dimensional Ising model, the statistical mechanics of the toy model were exactly calculable, but with major differences in thermodynamic behavior, even at the qualitative level.
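The equal-curvature property invoked here (and proven in the Appendix) can also be illustrated numerically. The sketch below is not from the paper; it uses arbitrary coefficients and relies only on the fact that any quartic with two equally deep minima, such as Eq. (2) at two-phase coexistence, can be written, up to an additive constant, as \(c(\alpha-a)^{2}(\alpha-b)^{2}\):

```python
# Illustration (arbitrary coefficients, not fitted values) that a quartic
# free energy of the form of Eq. (2) has equal curvatures at its two minima
# whenever the minima are equally deep, i.e. at two-phase coexistence.
# Such a quartic can be written c*(x - a)**2 * (x - b)**2 (plus a constant),
# and its second derivative is 2*c*(a - b)**2 at both x = a and x = b.
c, a, b = 3.0, 0.0, 0.6        # arbitrary: "gel" minimum at 0, "fluid" at 0.6

def F(x):
    return c * (x - a)**2 * (x - b)**2

h = 1e-4
d2F = lambda x: (F(x + h) - 2*F(x) + F(x - h)) / h**2   # central 2nd difference

assert abs(d2F(a) - d2F(b)) < 1e-4          # equal curvatures at both minima
assert abs(d2F(a) - 2*c*(a - b)**2) < 1e-4  # analytic value 2c(a-b)^2
print(d2F(a), d2F(b))
```

This is why the conventional quartic cannot give \(K_{AG}/K_{AF}\approx 4.6\): the curvatures at the two coexisting minima are forced to be equal.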
In the spirit of free energy functional theory, let us use the lowest order approximation for the equation of state of that model [33] that applies near its critical point, which occurs at \(T_{C}\) and chain packing area \(A_{0}\). In terms of \(t\) in Eq. (3) and \(\alpha\) in Eq. (1), the equation of state for the surface pressure \(\pi\) is \[\pi=Bt-C(\alpha^{2}+2\alpha Dt)+\pi_{c} \tag{4}\] for \(\alpha\) greater than 0. In the toy model, the smallest achievable area is \(\alpha=0\) due to the hard core steric interaction of packing all-trans hydrocarbon chains. For an incompressible chain packing phase there is a minimum area at \(A_{0}\), so \(\pi\) at \(\alpha=0\) is not constrained by Eq. (4) but can take values up to infinity with no further decrease in \(\alpha\). This is a completely incompressible gel phase, where the incompressibility refers to the chains, not the headgroups, which will appear in the next section. The constant \(\pi_{c}\) in Eq. (4) will be chosen to ensure that the experimental trajectory has \(\pi=0\), corresponding to lipid bilayers that are experimentally flaccid with no tension or pressure. Figure 3 shows the \(\pi-A\) isotherm at \(t_{1}\) = -27.8 K for chosen values of the \(B\), \(C\) and \(D\) parameters in Eq. (4). The main transition occurs at \(T_{1}\) = 24.0 \({}^{\circ}\)C, so with Eq. (3) this choice gives \(T_{C}\) = 51.8 \({}^{\circ}\)C. The usual Maxwell equal area construction that equates the free energies of the two phases then replaces the metastable and unstable portions of this isotherm with the horizontal tie-line at \(\pi-\pi_{c}\) = -33.0 mN/m. Since \(\pi=0\) for a flaccid bilayer, this gives the critical pressure \(\pi_{c}\) = 33.0 mN/m. The increase in the experimental fluid phase area at the main transition is designated \(\alpha_{1}\) and equals 0.16 nm\({}^{2}\).
It is located at the end of the horizontal tie-line that is obtained from the Maxwell construction, which requires exactly \[t_{1}D=-2\alpha_{1}/3. \tag{5}\] Of course, the gel phase is not totally incompressible. That could be taken into account by using a compressible gel phase line like what is shown in Fig. 3; for prominent visualization, it has been drawn to give a gel phase compressibility \(1/K_{A}=-(\partial\alpha/\partial\pi)_{t}/A\) that is 40% as large as the fluid phase compressibility. Even though that is an overestimate [30; 31; 32], there is a rather small difference in the corresponding tie line, so gel phase compressibility will be ignored henceforth. As \(t\) increases from \(t_{1}\), the tie line in Fig. 3 moves to experimentally inaccessible non-zero values of \(\pi\) and its length becomes shorter and vanishes when \(t\) = 0. This overall behavior is shown in Fig. 4. The point at \(t\) = 0, \(\alpha\) = 0 and \(\pi\) = \(\pi_{c}\) is a critical point with non-analytic thermodynamic properties. As \(t\) approaches 0, the thermal area expansion \((\partial\alpha/\partial t)_{\pi}\) diverges as \(t^{-1/2}\) and the isothermal area compressibility \(-(\partial\alpha/\partial\pi)_{t}/A\) diverges as \(1/\alpha\) as \(\alpha\) approaches 0.

Figure 3: Surface pressure vs. area/lipid isotherms for \(t_{1}\) = -27.8 K, \(B\) = 1.41 (mN/m)/deg, \(C\) = 725 (mN/m)/nm\({}^{4}\), \(D\) = 0.0038 nm\({}^{2}\)/deg and \(\pi_{c}\) = 33.0 mN/m (solid), with the tie-line (dashed). A compressible gel phase is shown by the dash-dot line and the corresponding tie line by a short dash line.

In the original toy model \(\pi_{c}\) was zero. However, the model was modified to allow for vacancies, and that allowed for expansion in the lipid volume, which was taken into account by adding an attractive van der Waals interaction as a mean field term. Along with head group and water interactions, positive values of \(\pi_{c}\) were obtained and then the first order transition at \(\pi\) = 0 corresponds to the experimentally flaccid bilayer.
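As an independent arithmetic check (not part of the paper), the Maxwell equal-area construction on Eq. (4) can be verified numerically with the parameter values quoted in the caption of Fig. 3; the tie-line endpoint implied by Eq. (5) makes the residual area between the isotherm and the tie-line vanish:

```python
# Numerical check (illustrative, not from the paper) that the equal-area
# construction on the isotherm of Eq. (4) reproduces Eq. (5): t1*D = -2*alpha1/3.
# Parameter values are those quoted in the caption of Fig. 3.
B, C, D = 1.41, 725.0, 0.0038    # (mN/m)/K, (mN/m)/nm^4, nm^2/K
pi_c = 33.0                      # mN/m
t1 = -27.8                       # K, relative temperature at the main transition

def pi_iso(alpha, t):
    """Surface pressure isotherm, Eq. (4)."""
    return B*t - C*(alpha**2 + 2*alpha*D*t) + pi_c

alpha1 = -3*D*t1/2               # tie-line endpoint implied by Eq. (5)

# Equal-area condition: the integral of (pi(alpha) - pi(alpha1)) from 0 to
# alpha1 must vanish.  Simple trapezoidal quadrature:
n = 100000
h = alpha1/n
f = lambda a: pi_iso(a, t1) - pi_iso(alpha1, t1)
area = h*(0.5*f(0.0) + 0.5*f(alpha1) + sum(f(i*h) for i in range(1, n)))

print(alpha1)    # ~0.158 nm^2, close to the quoted 0.16 nm^2
print(area)      # ~0: the equal-area condition is satisfied
```

With \(t_{1}D=-2\alpha_{1}/3\) the residual area is zero to quadrature accuracy, confirming Eq. (5).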
Straightforward experimental values of the interaction parameters resulted in reasonable agreement with experiment. That exact quantitative analysis is not repeated here. Those prior results are taken into account in the present Landau-type model as justification for assigning the value of \(\pi_{c}\) in Eq. (4) that gives agreement with experiment when \(\pi\) = 0 [34]. The values of the other parameters in Eq. (4) and in the caption of Fig. 3 were chosen to obtain agreement with several types of experimental data. Here the appropriate thermodynamic equations are derived from Eq. (4). The area compressibility modulus \(K_{A}\) is obtained from Eq. (4) as \[K_{A}/A=-(\partial\pi/\partial\alpha)_{t}=2C(\alpha+Dt). \tag{6}\] At the first order transition, Eq. (5) reduces this to \[K_{A}/A_{1}=(2/3)C\alpha_{1}, \tag{7}\] from which the model parameter \(C\) can be determined from experimental data for \(A_{1}\), \(\alpha_{1}\) and \(K_{A1}\). The equation of state (4) also provides the change in area with temperature, \[(\partial\alpha/\partial t)_{\pi}=-\frac{(\partial\pi/\partial t)_{\alpha}}{(\partial\pi/\partial\alpha)_{t}}=\frac{B-2CD\alpha}{2C(\alpha+Dt)}, \tag{8}\] which additionally involves both the \(B\) and \(D\) parameters. At the first order transition, Eq. (5) reduces this to \[(2/3)C\alpha_{1}(\partial\alpha/\partial t)_{\pi}=B-2CD\alpha_{1}. \tag{9}\] Another independent relation is obtained from the enthalpy of the transition. First, the free energy \(F_{C}\) is obtained by integrating \(\pi=-(\partial F/\partial\alpha)_{t}\) to give \[F_{C}(\alpha,t)=-Bt\alpha+(C/3)(\alpha^{3}+3Dt\alpha^{2})-\alpha\pi_{c}. \tag{10}\] Entropy follows as \[S_{C}(\alpha,t)=-(\partial F/\partial t)_{\alpha}=B\alpha-CD\alpha^{2}, \tag{11}\] so the configurational entropy \(S_{C}=0\) in the gel phase. Then the first order transition enthalpy is \[\Delta H_{1}=T_{1}\Delta S_{1}=T_{1}\alpha_{1}(B-CD\alpha_{1}).
\tag{12}\] The three independent equations (7), (9) and (12) enable determination of the \(B\), \(C\) and \(D\) model parameters from experimental data. Equation (5) gives the value of \(t_{1}\) and Eq. (3) gives the critical temperature \(T_{C}\). The experimental value of \(\Delta H_{1}\) for DMPC is 6.5 kcal/mole at \(T_{1}\) = 297 K [11]. At \(T_{2}\) = 303 K the area \(\alpha_{2}\) is 0.20 nm\({}^{2}\)[13; 35]. From an increase in the thickness of 0.013 nm [36] and a decrease of 1% in the volume [14], the area at the main transition is \(A_{1}\) = 0.56 nm\({}^{2}\) and \(\alpha_{1}\) = 0.16 nm\({}^{2}\), which is what is shown in Fig. 3. These give \((\partial\alpha/\partial t)_{\pi}\) = 0.0067 nm\({}^{2}\)/deg, somewhat larger than previous values (see p. 2634 of [13]). An additional reason to use a smaller value is the loss of one of the two lateral dimensions in the toy model that this Landau model is based on; since an area expansion is the square of a linear expansion, for small expansions this suggests a factor of \(\frac{1}{2}\), and the value 0.003 nm\({}^{2}\)/deg is used for \((\partial\alpha/\partial t)_{\pi}\). The experimental value of the area compressibility modulus \(K_{A}\) at \(T\) = 29 \({}^{\circ}\)C is 234 mN/m [37], but there are two factors that reduce this value when used in Eq. (7). First, the tilt-independent bending modulus \(K_{C}\) is smaller by about a factor of 0.6 [17], and this suggests that \(K_{A}\) should also be smaller. Assuming as usual [37; 38] that \(K_{C}\) is proportional to \(K_{A}\) times thickness squared and that the hydrocarbon chain thickness increases by 0.011 nm from \(T\) = 29 \({}^{\circ}\)C to \(T\) = 24 \({}^{\circ}\)C, an estimate of \(K_{A}\) = 130 mN/m is used at \(t_{1}\). Second, it will be assumed that this value of \(K_{A}\) should be further divided by a factor of three to take into account that each chain in the toy model only has two neighbors versus six neighbors in experiment.
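The determination just described can be spot-checked by substituting the quoted \(B\), \(C\) and \(D\) values back into Eqs. (5), (7) and (10)-(12). The script below is an illustrative consistency check, not part of the paper; its unit conventions (pressures in mN/m, areas in nm\({}^{2}\)) are stated in the comments:

```python
# Consistency checks (illustrative, not from the paper) on the chain-melting
# model.  Units: pressures in mN/m, areas in nm^2, temperatures in K, so
# energies per molecule are in (mN/m)*nm^2 = 1e-21 J.
B, C, D, pi_c = 1.41, 725.0, 0.0038, 33.0   # Fig. 3 parameter values
alpha1, A1, T1 = 0.16, 0.56, 297.0

def pi_iso(alpha, t):   # Eq. (4)
    return B*t - C*(alpha**2 + 2*alpha*D*t) + pi_c

def F_C(alpha, t):      # Eq. (10)
    return -B*t*alpha + (C/3.0)*(alpha**3 + 3*D*t*alpha**2) - alpha*pi_c

# Eq. (10) really is the integral of Eq. (4): pi = -(dF/dalpha)_t
h, t = 1e-6, -27.8
pi_fd = -(F_C(alpha1 + h, t) - F_C(alpha1 - h, t))/(2*h)
assert abs(pi_fd - pi_iso(alpha1, t)) < 1e-6

# Eq. (11): S = -(dF/dt)_alpha = B*alpha - C*D*alpha**2
S_fd = -(F_C(alpha1, t + h) - F_C(alpha1, t - h))/(2*h)
assert abs(S_fd - (B*alpha1 - C*D*alpha1**2)) < 1e-6

# Eq. (5): relative transition temperature
t1 = -2*alpha1/(3*D)
assert abs(t1 - (-27.8)) < 0.5              # caption value ~ -27.8 K

# Eq. (7): K_A = (2/3)*C*alpha1*A1, the reduced estimate K_A/3 in the text
KA = (2.0/3.0)*C*alpha1*A1
assert abs(KA - 130.0/3.0) < 1.0

# Eq. (12): transition enthalpy converted to kcal/mole
NA, kcal = 6.022e23, 4184.0
dH_kcal = T1*alpha1*(B - C*D*alpha1)*1e-21*NA/kcal
assert abs(dH_kcal - 6.5) < 0.3             # experimental 6.5 kcal/mole
print(t1, KA, dH_kcal)
```

The quoted parameter set reproduces \(t_{1}\approx-28\) K, \(K_{A}\approx 43\) mN/m (i.e. 130/3) and \(\Delta H_{1}\approx 6.6\) kcal/mole, close to the targets stated in the text.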
Values of the ensuing model parameters are given in the caption to Fig. 3. The Gibbs free energy is obtained as \[G(t,\pi)=F(t,\alpha)+\pi A. \tag{13}\]

Figure 4: Isotherms for the model in Fig. 3 for some additional temperatures. The coexistence for \(t\) = 0.45 \(t_{1}\) is the red dotted tie line. The dash-dot curve shows the locus of fluid phases that coexist with the gel phase at different temperatures and pressures. The curve at the top is the critical isotherm. The black dashed line shows the experimentally accessible locus.

The Gibbs free energy is properly concave because the specific heat is non-negative, \[C_{\pi}=T(\partial S/\partial t)_{\pi}=3T(B-2CD\alpha)^{2}/2C\alpha. \tag{14}\] Furthermore, the value of \(C_{\pi}\) = 430 cal/mole/degree is close to the experimental value of 370 cal/mole/degree [39]. ## III Tilt free energy functional \(F_{\Theta}\) In this section a free energy \(F_{\Theta}\) for the tilt degree of freedom is developed. For hydrocarbon chains tilted by angle \(\theta\), following conventional notation [24; 27; 28; 40], the tilt order parameter is written as \(m\) = tan \(\theta\). Due to tilt symmetry, the free energy functional for tilt consists only of even powers of \(m\), \[F_{\Theta}/A=\frac{1}{2}K_{m}m^{2}+\frac{1}{4}b_{4}m^{4}+\frac{1}{6}b_{6}m^{6}+..., \tag{15}\] where \(K_{m}\) is the tilt modulus and \(A\) is the area/lipid. If one sets \(K_{m}\) = \(b_{2}t\), where \(t\) remains the relative temperature \(t\) = \(T-T_{C}\), then this is analogous to the \(\phi^{4}\) theory of magnetism when one terminates at the \(m^{4}\) term with \(b_{4}\) taken to be greater than 0 to ensure stability. Minimizing Eq. (15) with respect to \(m\) yields \(m^{2}\) = 0 for \(t>0\), and for \(t<0\) it yields a symmetry breaking spontaneous tilt \(m^{2}\) = \(-b_{2}t/b_{4}\).
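A minimal numerical sketch of this minimization, with arbitrary illustrative values of \(b_{2}\) and \(b_{4}\) rather than fitted ones:

```python
# Sketch (arbitrary illustrative values, not from the paper): minimizing the
# phi^4 tilt functional of Eq. (15), F/A = (1/2)*b2*t*m^2 + (1/4)*b4*m^4,
# gives m^2 = 0 for t > 0 and the spontaneous tilt m^2 = -b2*t/b4 for t < 0.
b2, b4 = 1.0, 2.0   # arbitrary positive constants for the sketch

def F_tilt(m, t):
    return 0.5*b2*t*m*m + 0.25*b4*m**4

def argmin_m(t):
    ms = [i*1e-3 for i in range(2001)]           # scan m in [0, 2]
    return min(ms, key=lambda m: F_tilt(m, t))

assert argmin_m(+1.0) == 0.0                     # no tilt above T_C
m_star = argmin_m(-1.0)
assert abs(m_star**2 - (-b2*(-1.0)/b4)) < 1e-2   # m^2 ~ b2/b4 = 0.5
print(m_star**2)
```

Only \(m\geq 0\) is scanned; by the \(m\to-m\) symmetry of Eq. (15), the negative-tilt minimum is equivalent.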
This \(\phi^{4}\)-like theory fails in that it predicts a critical point at \(t\) = 0 with \(K_{m}\) = 0, whereas DMPC has a first order transition at which \(K_{m}\approx\) 20 mN/m is still non-zero [17]. Of course, one can formally obtain a first order transition by adding a cubic \(b_{3}m^{3}\) term to Eq. (15), but this violates the symmetry between positive and negative tilting. Let us consider two ways to fix the preceding failure of the \(m^{4}\) theory in Eq. (15). In this paragraph an ultimately unsuccessful, but illuminating, way is considered. This way adds an \(m^{6}\) term in Eq. (15) and assigns a negative value to \(b_{4}\). Adjustment of the parameters in this \(m^{6}\) theory then provides a first order transition and a rather trivial way to reproduce the temperature dependence of the experimental tilt modulus by choosing \(T_{C}\) = 291 K in Eq. (3). Holding \(b_{2}\) fixed then gives a value of \(K_{m}\) twice as large at \(T_{2}\) = 303 K as at the first order transition at \(T_{1}\) = 297 K. However, this \(m^{6}\) theory fails because of the value that it predicts for the enthalpy of the transition \[\Delta H_{\Theta}=T_{1}\Delta S_{\Theta}=-T_{1}\Delta[(\partial F_{\Theta}/\partial t)_{m}]=\frac{1}{2}\Delta[T_{1}b_{2}m^{2}]. \tag{16}\] Since \(m\) = 0 in the fluid phase, this calculation needs only gel phase values, \(A_{G}\) = 0.47 nm\({}^{2}\) and \(\theta_{G}\) = 32\({}^{\circ}\)[12], which gives \(m^{2}\) = 0.39. The value of \(b_{2}\) is obtained as \(K_{m}/t\) where \(K_{m}\) = 20 mN/m and \(t\) = 6 K at \(T_{1}\) = 297 K. The resulting \(\Delta H_{\Theta}\) = 28 kcal/mole is four times larger than the total experimental enthalpy \(\Delta H\). Moreover, it fails to include any contribution from trans-gauche isomerization and from the increase in van der Waals cohesive energy required for the volume increase at the transition. These latter two contributions have been estimated to account for nearly all the experimental \(\Delta H\)[29].
This \(m^{6}\) theory is on the wrong track because it simply does not account for the chain melting transition in other classes of lipids, like the phosphoethanolamines (PE), that have transition quantities rather comparable to those of the phosphocholines but have zero tilt in the low temperature phase [41]. In this paper, the \(m^{4}\) free energy functional is modified in a different way that recognizes that the driver of the main phase transition is hydrocarbon chain melting. It is then appropriate to couple the tilt free energy to the chain melting order parameter \(\alpha\), so let us consider the following free energy functional \(F_{\Theta}(m,\alpha)\) for the tilt contribution to the total free energy, \[F_{\Theta}(m,\alpha)=\frac{1}{2}(g(\alpha)-b_{2})m^{2}+\frac{1}{4}b_{4}m^{4} \tag{17}\] where \(b_{2}\) and \(b_{4}\) are constant parameters. The major difference from the \(m^{4}\) theory in Eq. (15) is the removal of explicit temperature dependence and the addition of an area dependence in the function \(g(\alpha)\) that is yet to be determined. Setting \((\partial F_{\Theta}(m,\alpha)/\partial m)_{\alpha}\) = 0 yields potentially stable tilt values \[m^{2}=(b_{2}-g(\alpha))/b_{4} \tag{18}\] when \(m^{2}\) is positive. Without loss of generality, let \(g(0)\) = 0 in the gel phase. Then, the experimental value of \(m^{2}\) = 0.39 in the gel phase [12] provides the \(b_{2}/b_{4}\) ratio, and Eq. (18) verifies that \(b_{2}\) is positive for the choice of its sign in Eq. (17). For the fluid phase with \(m\) = 0, the tilt modulus is \[K_{m}(\alpha)=(\partial^{2}F_{\Theta}(m,\alpha)/\partial m^{2})_{\alpha}=g(\alpha)-b_{2}. \tag{19}\] It goes negative for \(\alpha\) = 0, as it should in order to break symmetry and induce the spontaneous tilt given by Eq. (18). Next, let us consider what is required of the free energy functional in Eq. (17).
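Equations (17)-(19) can be checked by finite differences. The sketch below is not from the paper; it uses the \(b_{2}\), \(b_{4}\) values fitted later in the text and an arbitrary illustrative value of \(g(\alpha)\):

```python
# Finite-difference check (illustrative, not from the paper) of Eqs. (17)-(19):
# at the tilt of Eq. (18) the m-derivative of Eq. (17) vanishes, and at m = 0
# the curvature of Eq. (17) is the fluid-phase tilt modulus g(alpha) - b2.
b2, b4 = 0.94, 2.41      # parameter values fitted later in the text (mN/m)
g = 5.0                  # arbitrary g(alpha) > b2, i.e. a fluid-like area

def F_tilt(m):           # Eq. (17) at fixed alpha; g is read at call time
    return 0.5*(g - b2)*m*m + 0.25*b4*m**4

h = 1e-5
dF = lambda m: (F_tilt(m + h) - F_tilt(m - h))/(2*h)
d2F = lambda m: (F_tilt(m + h) - 2*F_tilt(m) + F_tilt(m - h))/h**2

# fluid-like case: minimum at m = 0 and tilt modulus K_m = g - b2, Eq. (19)
assert abs(dF(0.0)) < 1e-9
assert abs(d2F(0.0) - (g - b2)) < 1e-4

# gel-like case g < b2: stationary at m^2 = (b2 - g)/b4, Eq. (18)
g = 0.0                  # g(0) = 0 in the gel phase
m_star = ((b2 - g)/b4)**0.5
assert abs(dF(m_star)) < 1e-6
print(m_star**2)         # ~0.39, the experimental gel-phase tan^2(theta_G)
```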
First, recall that a range of \((\alpha,m)\) is not stable thermodynamically when there is a first order transition in \(\alpha\) just due to the \(F_{C}\) term discussed in Section II. Nevertheless, that previous determination will be modified by \(F_{\Theta}\), and that requires knowing the free energy functional in the unstable and metastable regions. Second, recall that the reason there is spontaneous tilt in the gel phase is that the steric area of the lipid head groups \(A_{head}\) determines the minimum area per lipid \(A_{G}\). In contrast, the chain energy is minimized when the cross-sectional area is \(A_{0}\). The actual gel phase area \(A_{G}\) is then the larger of \(A_{0}\) and \(A_{head}\). When \(A_{0}\) is smaller than \(A_{head}\), for PC lipids but not for PE lipids, the cohesive van der Waals energy of the chains is minimized in the gel phase by cooperatively tilting by angle \(\theta_{G}\) such that \(\cos\theta_{G}=A_{0}/A_{head}\)[42; 43; 41]. We now apply this to \(g(\alpha)\) in Eq. (19). As the constrained \(\alpha\) is forced to increase from 0, the chain cross sectional area \(A\) increases, so the chains tilt less and \(m^{2}\) decreases. This requires \(g(\alpha)\) to increase with \(\alpha\) in Eq. (18). When \(\alpha\) reaches the value 0.07 nm\({}^{2}\), at which \(A=A_{head}=A_{G}\), the deepest cohesive chain energy is achieved when \(m^{2}\) is zero. That requires \(g(0.07)=b_{2}\) in Eq. (18), and this also minimizes \(F_{\Theta}\) in Eq. (17). As \(\alpha\) is increased further, \(g(\alpha)\) further increases and \(K_{m}(\alpha)\) in Eq. (19) increases from 0. The first order transition is at \(T_{M}=T_{1}=24\) \({}^{\circ}\)C with \(K_{m1}=20\) mN/m, and \(K_{m}\) increases to \(K_{m2}=40\) mN/m at \(T_{2}=30\) \({}^{\circ}\)C. From the previous section \(\alpha_{2}=0.20\) nm\({}^{2}\) and \(\alpha_{1}=0.16\) nm\({}^{2}\). Then the values of \(K_{m1}\) and \(K_{m2}\) and Eq. (19) require \[g(0.20)-g(0.16)=g(0.16)-g(0.07).
\tag{20}\] To proceed further, it is necessary to choose a functional form for \(g(\alpha)\). A linear \(g(\alpha)\) does not satisfy Eq. (20). One could use a power series, but to minimize the number of additional parameters, \(g(\alpha)=\Gamma\alpha^{p}\) is used. Numerical fitting to Eq. (20) yields \(p\approx 3\), and then fitting to the \(K_{m}\) values obtains \(\Gamma=5123\) (mN/m)/nm\({}^{6}\) and \(b_{2}\) = 0.94 mN/m. Finally, \(b_{4}=b_{2}/0.39=2.41\) mN/m follows from Eq. (18) for the gel phase with \(g(\alpha)=0\) and the experimental \(m^{2}\) = 0.39 value. Now that all the parameters in Eq. (17) have been derived from experimental DMPC data, the final test is the magnitude of the transition enthalpy just due to the additional tilt term, ignoring the effect of tilt on the parameters in \(F_{C}\). Since enthalpy \(H=F+TS+\pi A\), the change in enthalpy at the transition just due to the tilting term is \[\Delta H_{\Theta}=\Delta F_{\Theta}+T_{M}\Delta S_{\Theta}+\pi\Delta A=\Delta F_{\Theta}, \tag{21}\] where the last equality comes because \(\pi=0\) for flaccid bilayers and there is no explicit \(T\) dependence in \(F_{\Theta}(m,\alpha)\), so \(S_{\Theta}=0\) in both phases. In the fluid phase \(F_{\Theta}=0\) because \(m^{2}=0\), and in the gel phase it equals \(-(1/4)A_{0}b_{2}^{2}/b_{4}\). This yields \(\Delta H_{\Theta}=0.01\) kcal/mole, which is quite small compared to the total experimental enthalpy of 6.5 kcal/mole. This is consistent with the greater number of degrees of freedom in chain melting compared to chain tilting. ## IV Combining tilt with chain melting The chain melting theory in Section II took no account of the headgroup interaction that brings about tilt in the gel phase. This section treats the effect of tilt on chain melting by combining the free energies from Sections II and III, \[F_{C\Theta}=F_{C}+F_{\Theta}. \tag{22}\] Then, a tilt pressure term must be added to the chain pressure shown in Fig. 3.
The tilt pressure is calculated as \(-(\partial F_{\Theta}(m,\alpha)/\partial\alpha)_{m}\) from Eq. (17), where \(m\) is determined by Eq. (18) and is zero when \(m^{2}\) would go negative according to Eq. (18). The tilt pressure is negative, as would be expected from adding another degree of freedom. Although it is zero in the fluid phase where there is no net tilt, it affects the position of the tie-line, as seen in Fig. 5.

Figure 5: Comparing isotherms at \(t_{1}\) = -26.4 K with tilt (solid) and without tilt (dot-dashed) and tie lines with tilt (dashed) and without tilt (dash-dot-dot).

Although adding tilt does not affect the first order transition very much, the phase behavior at higher temperatures and surface pressures is considerably affected, because there is smaller variation of \(\pi\) with \(\alpha\) in the no-tilt isotherm whereas the tilt modification is explicitly temperature independent, so it becomes more dominant at higher \(t\). Fig. 6 shows the ensuing \(\pi-t\) phase diagram with and without tilt. The no-tilt phase line ends in a single critical point. With tilt there is a triple point in Fig. 6, with two first order lines extending to higher \(\pi\) and a new intermediate phase between them.

Figure 6: The \(\pi-t\) phase diagram showing the loci of the first order transitions and critical points with and without tilt.

The upper line ends in a critical point like the no-tilt model. The lower phase line extends to very high values of \(\pi\). The appearance of two transitions as a function of temperature for values of \(\pi\) above the triple point is suggestive of the lower and main transitions in DMPC, and then the intermediate phase would be likened to the ripple phase. However, the differences in enthalpies and areas are far too small. That the theory in this section ultimately misses getting both the main and the lower phase transitions is not surprising, as there are more complex features to which we turn in the next section.
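Two pieces of the preceding construction can be reproduced numerically: the exponent \(p\) implied by Eq. (20), and the sign of the tilt pressure. The sketch below is not the paper's code; in particular, evaluating the tilt pressure via the envelope theorem (the \(\alpha\)-derivative of Eq. (17) at the minimizing \(m\)) is an assumption of this illustration:

```python
# Sketch (illustrative, not from the paper) of two numerical checks on the
# tilt model: (i) the exponent p in g(alpha) = Gamma*alpha**p from Eq. (20),
# and (ii) the sign of the resulting tilt pressure used in Section IV.
Gamma, b2, b4 = 5123.0, 0.94, 2.41   # quoted values: (mN/m)/nm^6, mN/m, mN/m

# (i) bisection on Eq. (20): g(0.20) - g(0.16) = g(0.16) - g(0.07)
eq20 = lambda p: 0.20**p + 0.07**p - 2*0.16**p
lo, hi = 1.0, 4.0                    # eq20(1) < 0 < eq20(4)
for _ in range(60):
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if eq20(mid) < 0 else (lo, mid)
p = 0.5*(lo + hi)
assert abs(p - 3.0) < 0.25           # text: p ~ 3

# (ii) equilibrium tilt from Eq. (18) with the cubic g, and the tilt
# pressure -(dF_Theta/dalpha); by the envelope theorem this equals
# -(1/2)*g'(alpha)*m^2 at the minimizing m (an assumption of this sketch).
def m2(alpha):
    return max((b2 - Gamma*alpha**3)/b4, 0.0)

def pi_tilt(alpha):
    return -0.5*(3*Gamma*alpha**2)*m2(alpha)

assert abs(m2(0.0) - 0.39) < 0.01    # gel-phase tan^2(theta_G)
assert m2(0.16) == 0.0               # no net tilt in the fluid phase
assert all(pi_tilt(0.01*k) <= 0.0 for k in range(17))  # tilt pressure <= 0
print(p, pi_tilt(0.03))
```

As the text states, the tilt pressure is never positive and vanishes wherever the equilibrium tilt is zero.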
## V Review of the ripple phase Although there are thermal out-of-plane fluctuations, especially in the fluid phase, the time averaged bilayer is flat, in both the gel and fluid phases, as has been assumed in the preceding theory. In contrast, the ripple phase breaks the flat symmetry by having static out-of-plane structure that is singly periodic in one of the in-plane directions. The most recent high resolution x-ray study obtained an electron density profile that is shown in Fig. 7. As had originally been recognized [1], the profile is asymmetric, with a longer, upward-sloping major side and a shorter, more steeply downward-sloping minor side. The electron density in the headgroup region is primarily due to the electron dense phosphate headgroups, so the higher electron density in the major side headgroup band means a smaller area per lipid compared to the minor side with its lower electron density. Fig. 7 also superimposes chain conformations obtained from wide angle x-ray scattering on the electron density profile. The gel-like chains in the major side are caricatured as elongated and thin. In the minor side the chains are portrayed as shorter and more fluid-like on average, with more distance between them, consistent with the lower electron density in the minor side headgroup region. The height profile \(z(x)\) of the ripple is quantified in Fig. 8. Also shown is the area profile \(\alpha(x)\) that is obtained by smoothing the electron density data from Fig. 6 in [7]. Note that \(\alpha(x)=0.049\) nm\({}^{2}\) in the major side is greater than zero because the chains are tilted by only \(\theta_{tilt}=18^{\circ}\) relative to the local bilayer normal, compared to \(32^{\circ}\) in the gel phase. A smaller tilt in the ripple phase has also been reported from infrared spectroscopy data [44]. It is also estimated from [7] that the maximum \(\alpha(x)=0.15\) nm\({}^{2}\) occurs in the center of the minor side, between the chains designated as 1 and 2 in Fig. 7.
It may also be reiterated [7] that the relative offset in \(x\) of the locations of the monolayer minimal headgroup electron densities weighs strongly against interdigitation of chains in the minor side. Obtaining the structure of the ripple phase continues to be a challenge for simulations [45; 46; 47; 48; 49; 50].

Figure 7: Structure of the DMPC ripple phase adapted from [7]. The sample was a stack of bilayers at \(T=18\) \({}^{\circ}\)C. Grey scale shows the electron density which is highest in the headgroup band and lowest in the bilayer center. Coarse grained representations of chain conformations are superimposed in color. The unit cell is shown by yellow dashed lines. The upward-sloping major side of the ripple is in the center of the unit cell and the minor side is at the edges.

Figure 8: The thick black line shows the ripple phase height profile \(z(x)\) of the headgroup band of one monolayer in Fig. 7. The thick dashed magenta line shows the corresponding area profile \(\alpha(x)\) times 50. The broken lines show six potential additions that could account for a heterogeneous coupling term; they are arbitrarily scaled for visibility and the functional forms are identified in the legend.

## VI Two phase transitions To address the phase transitions further, consider the Gibbs free energies, \(G_{G}\) for the gel phase, \(G_{F}\) for the fluid phase, and \(G_{R}\) for the ripple phase, as functions of temperature. For the experimental trajectory \(\pi=0\), \(G\) is the same as the Helmholtz free energy \(F\). In Fig. 9 the free energy of the gel phase has been simplified to be 0 at all temperatures, thereby ignoring higher order contributions like thermal expansion of the chain packing [8]. For the ripple phase, the simple approximation is made that the temperature dependence of \(G_{R}\) is a linear combination of a gel-like major side and a fluid-like minor side as well as a new term \(G_{H}\) that depends on heterogeneity.
\[G_{R}(T)=\gamma G_{G}(T)+(1-\gamma)G_{F}(T)+G_{H}. \tag{23}\] In first approximation, \(G_{H}\) will be considered to be temperature independent. Accordingly, the slope of \(G_{R}(T)\) lies between those of \(G_{G}(T)\) and \(G_{F}(T)\). Importantly, if \(G_{H}<0\), then there will be two transitions, as shown in Fig. 9. Since the transition enthalpy is \[\Delta H=T_{1}\Delta S=-T_{1}\Delta(\partial G/\partial T)_{\pi}, \tag{24}\] the value of \(\gamma\) in Eq. (23) determines the transition enthalpy of both the lower transition \(\Delta H_{L}\) and the main transition \(\Delta H_{M}\). Because the specific heat is quite small compared to the transition enthalpies, \(G_{F}(T)\) has nearly constant slope, so \(\Delta H_{M}/\Delta H_{L}\approx\gamma/(1-\gamma)\). Since the experimental \(\Delta H_{M}/\Delta H_{L}\) is about 5 [11], Eq. (24) assigns \(\gamma\approx 5/6\) of the ripple thermodynamics to the major side. That suggests a relatively larger major side fraction \(\gamma\) than visualized in Figs. 7 and 8. However, this also assigns \(1/6\) of the ripple to a pure fluid minor side, and it is clear from Fig. 7 that the minor side is more ordered on average than the pure fluid phase, so the \(\gamma\) value that agrees with experimental values of the transition enthalpies is reasonable. Finally, the difference in experimental transition temperatures determines the value of \(G_{H}\) in Eq. (23). However, note that \(G_{H}\) will have to be more negative if \(G_{G}\) in Eq. (23) is replaced by a positive value to account for the smaller \(\theta_{tilt}\) in the major side compared to the gel phase that is noted in the previous section. Also, note that the experimental specific heat [39; 51] and the thermal rate of volume expansion [14] are greater in the ripple phase than in the \(G\) and \(F\) phases, so \(G_{R}(T)\) should be more concave than allowed by Eq. (23), which assumes that \(G_{H}\) is independent of temperature.
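The two-transition construction of Eq. (23) can be illustrated with a toy calculation; all numbers below are hypothetical except \(\gamma=5/6\), with \(G_{G}(T)=0\) as assumed for Fig. 9:

```python
# Toy construction (hypothetical numbers, not from the paper) of Eq. (23):
# with G_G(T) = 0, a straight-line G_F(T) and a temperature-independent
# G_H < 0, the ripple free energy G_R crosses G_G below and G_F above the
# temperature T0 at which G_F = G_G, giving two first order transitions.
gamma = 5.0/6.0        # from dH_M/dH_L ~ gamma/(1-gamma) ~ 5
sigma = 1.0            # slope -dG_F/dT (arbitrary units)
T0 = 294.0             # K, hypothetical gel-fluid crossing without a ripple
G_H = -2.0             # must be negative for a stable ripple window

G_F = lambda T: -sigma*(T - T0)
G_R = lambda T: (1 - gamma)*G_F(T) + G_H   # gamma*G_G = 0 here

# lower transition: G_R = G_G = 0  ->  G_F(T_L) = -G_H/(1-gamma) > 0
T_L = T0 + G_H/((1 - gamma)*sigma)
# main transition:  G_R = G_F      ->  G_F(T_M) = G_H/gamma < 0
T_M = T0 - G_H/(gamma*sigma)

assert T_L < T0 < T_M                      # the ripple window straddles T0
assert abs(G_R(T_L)) < 1e-9 and abs(G_R(T_M) - G_F(T_M)) < 1e-9

# enthalpy ratio of the two transitions (slope jumps times temperature)
dH_M = T_M*gamma*sigma
dH_L = T_L*(1 - gamma)*sigma
assert abs(dH_M/dH_L - 5.0) < 0.5          # ~ gamma/(1-gamma) = 5
print(T_L, T_M, dH_M/dH_L)
```

Making \(G_{H}\) more negative widens the window \(T_{M}-T_{L}\), which is how the experimental transition-temperature difference fixes \(G_{H}\) in the text.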
Also, the amplitude of the ripple has been reported to increase as temperature increases [52; 53], so adding temperature dependence to \(G_{H}\) would allow this simple model to be more realistic, but structural data of comparable quality to Fig. 7 are not available to pursue this. ## VII Discussion Chain melting is the most important thermodynamic driver of the main phase transition [29]. Similar to much of the literature, Section II treats this with a continuum free energy functional involving an order parameter, which is here taken to be chain area \(\alpha\) rather than the essentially equivalent bilayer thickness used by others [21; 23; 27; 28]. More importantly, the functional form adopted in this paper differs from the conventional one to better accommodate the steric interactions that account for a larger area compressibility modulus in the gel phase than what the conventional form provides. This functional form comes from a detailed model of sterically hindered chain packing that has a \(3/2\)-order critical point [29; 33] rather than from the conventional \(\phi^{4}\) form appropriate for soft spin-type interactions. Chain tilt is an important secondary order parameter for lipid bilayers that have large headgroups that force tilt in the gel phase. Although the functional form that is used in Section III is similar to the \(\phi^{4}\) form, it differs by coupling to the chain area \(\alpha\) and its temperature dependence rather than to temperature directly. This treatment quantitatively reproduces the recently observed temperature dependence of the tilt modulus data above the main transition [17]. This decrease in the tilt modulus as temperature is lowered to the main transition is the best experimental evidence thus far for a critical point in lipid bilayers.
The theory predicts that the observed first order transition would become critical if the lateral pressure \(\pi\) could be increased sufficiently, but it has not yet been possible to do that experimentally. Chain melting and chain tilting together provide a fundamental understanding of the main transition at a qualitative level, and the theory in Section IV provides quantitative support. However, this leaves unexplained the lower transition and the ripple phase. It has been recognized in the many papers on the subject that this is an interesting theoretical challenge [20; 21; 22; 23; 24; 25; 26; 27; 28; 54; 55; 56; 57; 58; 59]. At the continuum level it has long been recognized that at least one heterogeneous Ginzburg-like term is required in the free energy to obtain a phase that is not spatially uniform [21; 23; 24; 25; 27; 28]. Such theories posit one or more order parameters and then consider terms that involve their gradients and divergences to lowest order. The latest example considered many such terms, also with two order parameters [28]. To obtain a modulation profile \(z(x)\), a spatial functional form with two sinusoidal terms was assumed and the parameters in this spatial form were then determined to minimize the free energy which had its own parameters. These latter parameters were then varied to obtain spatial modulation of the height profile that appears similar to the experimental data, but their main order parameter \(\psi\) is essentially sinusoidal instead of being constant in the major side of the ripple. Compared to the approach [28] in the previous paragraph, Section VI simply takes the experimental height profile as given, thereby avoiding having to assume a spatial functional form with its undetermined parameters. There are again many possible heterogeneous terms (see the legend in Fig. 8) that could be added to the free energy to provide a negative value of \(G_{H}\) in Eq. (23) that then gives a ripple phase and a lower transition.
Although this obtains suitable agreement with experiment, it does not discriminate between these possible heterogeneous terms. More unsatisfyingly, the development in Section VI shares with all the continuum theories of the ripple phase that such terms are quite phenomenological, lacking underlying physical insight into the interactions of lipid molecules that could account for them. In contrast to our physical understanding of why there should be a transition from a tilted gel phase to the fluid phase, it is unclear to this author that there is even qualitative understanding of what it is at the molecular level that brings about the ripple phase and the lower transition. An important objective is to find a physical criterion that limits the size of the major side, and a new qualitative suggestion has been made regarding kink-block structures in the discussion in [7]. Previous theories that focus on this objective have involved solved domains [22] and next nearest neighbor interactions [54; 55], but these, along with other notable theories [56; 20; 57] provided ensuing ripple structures that differ considerably from the ripple structure in Fig. 7. It could be insightful if theories involving fundamental interactions could discriminate between the different continuum heterogeneous forms that are mentioned in the legend to Fig. 8 but it is beyond the scope of this paper to attempt such connections. It should also be noted that most theories, including the one in this paper, assume that it is sufficient to assign order parameters just to the bilayer, but the experimental structure in Fig. 7 suggests that one might have to consider an order parameter for each monolayer with coupling between monolayers as proposed in [26; 58]. Fig. 
7 also emphasizes that the sample was a stack of closely spaced bilayers and that raises the issue of whether interactions between bilayers that have only been considered by a few theories [19; 59; 60] might be essential for formation of the ripple phase and a lower transition. There are reports that uni-lamellar vesicles (ULVs) do not have a lower transition [61; 62], while earlier papers did report a calorimetric pretransition, although much attenuated [63; 64; 65]. Visualizations of ripples have been reported in ULVs [2] and also in mica supported double bilayers [66] and the top layer on a stack of bilayers [67]. Although interbilayer and intermonolayer interactions may be important for the detailed structure of the ripple phase, the theory in this paper assumes, along with most other theories, that a single bilayer model remains relevant, especially for the main phase transition whose enthalpy is adequately accounted for by chain melting [29]. Even though the particular continuum theory in this paper does not provide the desired fundamental understanding of what causes the ripple phase and the lower transition beyond invoking heterogeneous terms in a continuum model, it successfully accommodates a great deal of experimental data, more than previous continuum theories [20; 21; 22; 23; 24; 25; 26; 27; 28] that also did not agree nearly so well with the more recent structure in Fig. 7. Finally, this is the first and only attempt to date to account theoretically for the relatively recently observed critical-like behavior of the tilt modulus [17]. Acknowledgements: The author thanks Dr. Saheli Mitra for comments on the manuscript. ## VIII Appendix Proof is given of the statement in the text that the \(\phi^{4}\) theory requires \[K_{AG}/A_{G}=K_{AF}/A_{F}. \tag{25}\] The area modulus \(K_{A}\) is given by \[K_{A}/A:=-(\partial\pi/\partial\alpha)_{t}=(\partial^{2}F/\partial\alpha^{2}) _{t}, \tag{26}\] so Eq. 
25 follows if the second derivatives of \(F\) are equal for the gel \(G\) and the fluid \(F\) phases at the main transition temperature \(t_{1}\) and at their respective areas, \(0\) for the gel phase and \(\alpha_{1}\) for the fluid phase. For both phases \(F\) and \((\partial F/\partial\alpha)_{t}\) equal \(0\). Together these require \(\alpha_{1}=-2b_{3}/3b_{4}\) and \(b_{2}t_{1}=2b_{3}^{2}/9b_{4}\). The second derivative, \[(\partial^{2}F/\partial\alpha^{2})_{t}=b_{2}t+\alpha(2b_{3}+3b_{4}\alpha), \tag{27}\] has the same value, \(b_{2}t\), in the gel phase because \(\alpha_{G}=0\), and in the fluid phase because \(\alpha_{F}=\alpha_{1}=-2b_{3}/3b_{4}\). QED
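The Appendix algebra is easily machine-checked; a sympy sketch assuming the standard \(\phi^{4}\) form \(F=\frac{1}{2}b_{2}t\alpha^{2}+\frac{1}{3}b_{3}\alpha^{3}+\frac{1}{4}b_{4}\alpha^{4}\), which is consistent with the second derivative quoted in Eq. (27):

```python
import sympy as sp

alpha, t, b2, b3, b4 = sp.symbols('alpha t b2 b3 b4')

# phi^4 free energy whose second derivative matches Eq. (27)
F = sp.Rational(1, 2)*b2*t*alpha**2 + sp.Rational(1, 3)*b3*alpha**3 \
    + sp.Rational(1, 4)*b4*alpha**4

alpha1 = -2*b3/(3*b4)        # fluid-phase area at the transition
t1 = 2*b3**2/(9*b2*b4)       # from b2*t1 = 2*b3^2/(9*b4)

# F and dF/dalpha both vanish at (alpha1, t1), as required for coexistence
F_at = sp.simplify(F.subs({alpha: alpha1, t: t1}))
dF_at = sp.simplify(sp.diff(F, alpha).subs({alpha: alpha1, t: t1}))

# the curvature is b2*t1 in both the gel (alpha=0) and fluid (alpha=alpha1)
# phases, which is the content of Eq. (25)
d2F = sp.diff(F, alpha, 2)
curv_gel = sp.simplify(d2F.subs({alpha: 0, t: t1}))
curv_fluid = sp.simplify(d2F.subs({alpha: alpha1, t: t1}))
```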
2304.01183
Exactly solvable models of nonlinear extensions of the Schrödinger equation
A method is presented to construct exactly solvable nonlinear extensions of the Schr\"odinger equation. The method explores a correspondence which can be established under certain conditions between exactly solvable ordinary Schr\"odinger equations and exactly solvable nonlinear theories. We provide several examples illustrating the method. We rederive well-known soliton solutions and find new exactly solvable nonlinear theories in various space dimensions which, to the best of our knowledge, have not yet been discussed in literature. Our method can be used to construct further nonlinear theories and generalized to relativistic soliton theories, and may have many applications.
Tom Dodge, Peter Schweitzer
2023-04-03T17:51:59Z
http://arxiv.org/abs/2304.01183v1
# Exactly solvable models of nonlinear extensions of the Schrodinger equation ###### Abstract A method is presented to construct exactly solvable nonlinear extensions of the Schrodinger equation. The method explores a correspondence which can be established under certain conditions between exactly solvable ordinary Schrodinger equations and exactly solvable nonlinear theories. We provide several examples illustrating the method. We rederive well-known soliton solutions and find new exactly solvable nonlinear theories in various space dimensions which, to the best of our knowledge, have not yet been discussed in literature. Our method can be used to construct further nonlinear theories and generalized to relativistic soliton theories, and may have many applications. ## I Introduction It is "quite a rarity in the world of nonlinear differential equations" to encounter exact analytic solutions [1]. While some exact solutions of nonlinear theories are known, see for instance [2; 3; 4; 5; 6; 7], the above quote from Ref. [1] nicely illustrates that in general they are rare. The goal of this work is to present a method allowing one to construct systematically exactly solvable nonlinear theories. We will focus on a specific class of nonlinear differential equations, namely on nonlinear extensions of the Schrodinger equation (NSE). The ordinary Schrodinger equation (SE) of nonrelativistic quantum mechanics is, of course, linear. But its nonlinear extensions have received considerable attention in literature and have numerous applications in a variety of contexts [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37]. In this work, we will show that under certain circumstances it is possible, starting from a known exact analytic ground state solution of a quantum mechanical problem, to construct an exactly solvable nonlinear theory. We will illustrate the method by providing several examples. 
In each case, the starting point is an exactly solvable quantum problem described by an ordinary SE like the harmonic oscillator, Coulomb problem, and other examples. As a result, we will derive nonlinear theories which have exact analytic solutions. In two of the cases, we will rederive well-known soliton solutions. In several other cases we will present exactly solvable NSEs which have not been discussed in literature before to the best of our knowledge. The method can be explored to construct systematically further exactly solvable nonlinear theories and can be generalized to relativistic theories. Besides being of immense interest for their own sake, exactly solvable NSEs can provide useful toy models and theoretical test grounds in many situations. For instance, the availability of exact analytic solutions of nonlinear theories can be used to effectively test numerical methods for nonlinear partial differential equations. The numerous applications of NSE theories range from particle physics [8; 9; 10], to many body systems and propagation of light through nonlinear media [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23], to descriptions of rogue waves in oceans or optics [24; 25; 26; 27], to cosmological models [28]. NSEs emerge naturally in the context of the transition from relativistic quantum field theories to nonrelativistic domains [29; 30; 31] and play an important role in mathematical physics [32; 33; 34; 35; 36; 37]. Another important application of studies of NSE is to provide frameworks for experimental tests of the linearity of quantum mechanics. Different schemes have been proposed [38; 39] and used to establish upper limits for nonlinear behavior in quantum mechanics based on neutron interferometry [40; 41], measurements in quantum bound states [42; 43; 44; 45], or Ramsey interferometry of vibrational modes of trapped ions [46]. 
So far, no deviations from linear behavior have been observed, and it is of importance to establish more stringent experimental limits. This work is organized as follows. In Sec. II, we introduce the notation and present the method to construct an analytically solvable NSE based on an analytically solvable SE. In Secs. III and IV, we explore the exactly solvable quantum harmonic oscillator to rederive a NSE describing a free or trapped Gausson in any number of space dimensions which has been encountered previously, independently in different theoretical settings. In Sec. V, we will rederive the well-known one-dimensional 1/ cosh-soliton and generalize it to any number of dimensions in Sec. VI. The latter as well as the examples presented subsequently have not been discussed in literature before to the best of our knowledge and constitute novel results. This includes the exactly solvable NSE with an arbitrary power-like nonlinearity in Sec. VII and the NSE derived from a special case of the Rosen-Morse potential in Sec. VIII. In Sec. IX, we construct an interesting NSE based on an exactly solvable potential which contains the \(\delta\)-function potential as a limiting case. Our last example is an NSE derived from the exactly solvable Coulomb potential. Some of these examples are formulated in \(N=1\) or \(N=3\) dimensions, but several of them are formulated for general \(N\). Our conclusions are presented in Sec. XI. The Appendix A contains technical details on an interesting limiting situation. Construction of exactly solvable NSEs Let us begin with a remark regarding notation. In the NSE literature, often a unit system is used with \(\hbar=1\) and many authors consider a particle of unit mass \(m=1\) or set \(2m=1\). In this work, we will explicitly use SI units and keep all physical constants in the equations. This will allow the reader to implement her or his preferred notation. 
The starting point is ordinary quantum mechanics in \(N\) space dimensions of a nonrelativistic spin-0 particle of mass \(m\) moving in a potential \(U(\vec{x})\) which is described by the linear Schrodinger equation (SE) \[i\hbar\,\frac{\partial\Psi(t,\vec{x})}{\partial t}=-\,\frac{\hbar^{2}}{2m}\, \bigtriangleup\Psi(t,\vec{x})+U(\vec{x})\,\Psi(t,\vec{x})\,. \tag{1}\] We shall assume the potential to be spherically symmetric such that \(U(\vec{x})=U(r)\) with \(r=|\vec{x}|\) for \(N\geq 2\) dimensions. For \(N=1\), we shall assume the potential \(U(x)\) to be even. The \(N\)-dimensional Laplace operator is given by \[\bigtriangleup=\frac{1}{r^{N-1}}\,\frac{\partial}{\partial r}\,r^{N-1}\,\frac {\partial}{\partial r}+\cdots=\frac{\partial^{2}}{\partial r^{2}}+\frac{N-1} {r}\,\frac{\partial}{\partial r}+\ldots \tag{2}\] where, for \(N\geq 2\), the dots indicate derivatives with respect to angular variables which will not be needed because we will focus exclusively on ground state wave functions depending solely on \(r\) for a spherically symmetric potential. The space dimension \(N\) will always be clear from the context. Let the potential in Eq. (1) be such that it admits at least one bound state. We denote the ground state energy by \(E_{0}\) and the ground state wave function by \[\Psi_{0}(t,\vec{x})=c_{0}\,\phi_{0}(r)\,e^{-iE_{0}t/\hbar} \tag{3}\] with the normalization \(\int d^{N}r\,|\Psi_{0}(\vec{x},t)|^{2}=1\). Due to the symmetry of the potential, the spatial part of \(\Psi_{0}(t,\vec{x})\) is described by a radial function \(\phi_{0}(r)\) for \(N\geq 2\) space dimensions. For \(N=1\), the wave function \(\phi_{0}(x)\) is even. For the following, it will be convenient to choose the phase and define the normalization constant \(c_{0}>0\) in Eq. (3) such that \[0\leq\phi_{0}(r)\leq 1\,,\quad\phi_{0}(0)=1\,. \tag{4}\] After these preparations, we are in the position to present the method. If the quantum mechanical problem in Eq. 
(1) can be solved analytically, then, depending on the properties of the radial function \(\phi_{0}(r)\), it may be possible to invert \(\phi_{0}(r)\) and find a function \(F\) such that the potential can be expressed as \[U(r)=F[\Psi^{*}\Psi]\bigg{|}_{\Psi=\Psi_{0}(t,\vec{x})}\,. \tag{5}\] If this step can be carried out, then \(F\) will in general be a nonlinear function of the wave function \(\Psi\). This allows us to rewrite the SE in Eq. (1) in terms of a nonlinear extension of the Schrodinger equation (NSE) as follows \[i\hbar\,\frac{\partial\Psi}{\partial t}=-\,\frac{\hbar^{2}}{2m}\,\bigtriangleup\Psi+F[\Psi^{*}\Psi]\,\Psi\,. \tag{6}\] Notice that it is convenient to choose \(\Psi^{*}\Psi\) as variable of the nonlinear function in Eq. (5) because in this way the NSE (6) is linear with respect to the phase of \(\Psi\) which carries the information about the time dependence. The NSE (6) has the exact, analytically known solution given by Eq. (3) which corresponds to a stationary soliton solution in the corresponding nonrelativistic nonlinear theory. A soliton traveling with a constant velocity \(\vec{v}\) can be obtained by applying a Galilean boost to Eq. (3) as follows \[\Psi(t,\vec{x})=c_{0}\,\phi_{0}(\vec{x}-\vec{v}t)\,e^{i(m\vec{v}\cdot\vec{x}-\frac{1}{2}m\vec{v}^{2}t-E_{0}t)/\hbar}\,. \tag{7}\] The crucial step in this construction is the derivation of the function \(F[\Psi^{*}\Psi]\). For a spherically symmetric potential \(U(\vec{x})=U(r)\) in \(N\geq 2\) (or even potential \(U(x)\) in \(N=1\)), it may be possible to carry out this step if \(\phi_{0}(r)\) is monotonically decreasing and an inverse function \(\phi_{0}^{-1}\) exists such that \(\phi_{0}^{-1}[\phi_{0}(r)]=r\) (analogously for \(N=1\)). In our context, it will be important that this crucial step can be carried out _analytically_ which ultimately depends on the properties of the potential. In the following sections, we will discuss examples to illustrate the method.
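The boost in Eq. (7) works for any nonlinearity of the form \(F[\Psi^{*}\Psi]\), because \(|\Psi|^{2}\) is merely translated by the boost. A one-dimensional sympy sketch of this cancellation (`phi` and `f` stand for an arbitrary real profile and nonlinear term, with \(c_{0}\) absorbed into `phi`):

```python
import sympy as sp

t, x, u, v, m, hbar, E0 = sp.symbols('t x u v m hbar E0', real=True)
phi = sp.Function('phi', real=True)   # arbitrary real stationary profile
f = sp.Function('f', real=True)       # arbitrary nonlinear term F[|Psi|^2]

# Galilean boost of a stationary solution, Eq. (7), in one dimension
theta = (m*v*x - sp.Rational(1, 2)*m*v**2*t - E0*t)/hbar
Psi = phi(x - v*t)*sp.exp(sp.I*theta)

# residual of the NSE (6); since phi is real, |Psi|^2 = phi(x - v*t)**2
nse = sp.I*hbar*sp.diff(Psi, t) + hbar**2/(2*m)*sp.diff(Psi, x, 2) \
    - f(phi(x - v*t)**2)*Psi

# strip the phase and change variables to u = x - v*t: all v-dependent
# terms cancel, leaving exactly the stationary equation for phi
moving = sp.expand(nse*sp.exp(-sp.I*theta)).subs(x, u + v*t).doit()
stationary = hbar**2/(2*m)*sp.diff(phi(u), u, 2) + E0*phi(u) \
    - f(phi(u)**2)*phi(u)
residual = sp.simplify(moving - stationary)
```

So the boosted wave function solves the NSE precisely when the stationary profile does, independently of \(v\).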
Hereby, we will focus on the construction of exactly solvable NSEs with analytic solutions. Such exactly solvable nonlinear theories are of interest for their own sake and may have interesting applications. In principle, further work is required to establish that a solution of a NSE of the type (7) can be considered a soliton in the strict mathematical sense. For that it would be important to show, for instance, that two such solutions can scatter off each other and will preserve their shapes long before and long after the scattering process. Such investigations are beyond the scope of this work, but have been carried out in literature in some cases and we shall refer to them in the following. ## III \(N\)-dimensional logarithmic nonlinear theory, the free gausson As a first example, we consider the harmonic oscillator in \(N\)-dimensional space. The system is defined by the SE in Eq. (1) with a harmonic potential \[U(r)=\frac{1}{2}\,m\,\Omega^{2}\,r^{2}\,. \tag{8}\] The ground state energy and wave function are given by \[\Psi_{0}(t,\vec{x})=c_{0}\,\phi_{0}(r)\,e^{-iE_{0}t/\hbar},\quad\phi_{0}(r)=e^{ -r^{2}/b^{2}},\quad b=\sqrt{\frac{2\hbar}{m\Omega}}\,,\quad c_{0}=\left(\frac {m\Omega}{\pi\hbar}\right)^{\!\!N/4},\quad E_{0}=\frac{1}{2}\,N\,\hbar\Omega\,. \tag{9}\] Inverting the wave function as \[r^{2}=-\frac{\hbar}{m\Omega}\ \ln\!\left(\frac{|\Psi_{0}(t,\vec{x})|^{2}}{|c_{0} |^{2}}\right) \tag{10}\] we can rewrite the harmonic potential as \[U(r)=\frac{1}{2}\,m\Omega^{2}r^{2}=-\frac{\hbar\Omega}{2}\ \ln\!\left(\frac{| \Psi_{0}(t,\vec{x})|^{2}}{|c_{0}|^{2}}\right). \tag{11}\] In this way, we derive the NSE (6) with a logarithmic nonlinear term \(F[\Psi^{*}\Psi]\) defined as follows \[F\big{[}\Psi^{*}\Psi\big{]}=-A\,\ln\!\left(B\,|\Psi|^{2}\right),\quad A=\frac {\hbar\Omega}{2},\quad B=\frac{1}{|c_{0}|^{2}}\,. \tag{12}\] The exactly solvable NSE in Eqs. 
(6, 12) with the analytic solution (9) is known as the nonrelativistic Gausson, and was studied in detail in [4; 5; 6] including relativistic formulations. Previously, these solutions were encountered in \(N=3\) dimensions in studies of relativistic theories invariant under space-time dilatations [8]. Much later, Gaussons were rediscovered in a study of the energy-momentum tensor where point-like particles were "smeared out" to simulate an internal structure [9]. Recently, relativistic one-dimensional Gaussons were studied in [10]. In Fig. 1, we show the potential of the SE, the nonlinear term of the NSE, and the radial part of the wave function (the potential and \(\phi_{0}(r)\) of the harmonic oscillator are well known, but we include them for completeness and consistency with the following sections). It is convenient to display the potential in Fig. 1a in units of the ground state energy and \(r\) in units of \(b=\sqrt{2\hbar/(m\Omega)}\). The nonlinear term (12) is visualized as a function of the dimensionless variable \(\phi\) as \[F\big{[}\Psi^{*}\Psi\big{]}=A\,G\big{(}|\Psi|/c_{0}\big{)}\,,\quad G(\phi)=- \ln\phi^{2}. \tag{13}\] Recalling the normalization and phase convention in Eq. (4), the variable \(\phi\) satisfies \(0\leq\phi\leq 1\). For \(0<\phi<1\) the function \(G(\phi)\) is positive. As \(\phi\to 0\) the nonlinear term \(G(\phi)\) diverges which reflects the growth of \(U(r)\) for \(r\to\infty\). Figure 1: (a) The harmonic oscillator potential \(U(r)\) in units of ground state energy \(E_{0}\) as a function of \(r\) in units of \(b=\sqrt{2\hbar/(m\Omega)}\) in the 3-dimensional case. (b) The nonlinear term \(\phi\,G(\phi)\) of the NSE defined in Eq. (13). (c) The radial part \(\phi(r)\) of the ground state wave function of the SE with the potential in (a) and the soliton of the NSE defined by the nonlinear term in (b). However, the nonlinearity enters in Eqs. 
(6, 12) practically as \(\phi\,G(\phi)\) which goes to zero for \(\phi\to 0\) and \(\phi\to 1\) assuming its maximum in between at \(\phi=1/e\), i.e. at this point the nonlinearity in Eqs. (6, 12) is strongest, see Fig. 1b. The radial part \(\phi_{0}(r)\) of respectively the ground state wave function of the ordinary SE and the soliton of the NSE is shown in Fig. 1c. The solution (9) corresponds to a Gausson at rest. By applying the Galilean boost in Eq. (7) to the solution (9), we obtain a soliton traveling with constant velocity which preserves its shape. In our derivation, the shape-preserving traveling solution appears as a trivial consequence of Galilean invariance of the NSE in Eqs. (6, 12). As remarked in Sec. II, dedicated analyses are needed to show that such shape-preserving solutions can scatter off each other and asymptotically (i.e. long before and long after the scattering event) preserve their shapes. In the case of the Gausson solution, this was shown in [4; 5]. Noteworthy is the existence of a "resonance region" in which the scattering can be inelastic and the collision of two Gaussons can produce a final state with three Gaussons [6]. ## IV Theory of a Gausson trapped in a harmonic potential The steps carried out in Sec. III can be performed also for a "part" of the potential leading to the NSE of a Gausson trapped in the "remaining part" of the harmonic potential. For definiteness, we will consider \(N=3\) space dimensions, but a generalization to other space dimensions \(N\) is straightforward and analogous to Sec. III. For that, let us consider the harmonic potential \(U(r)=\frac{1}{2}\,m\,\Omega_{1}^{2}\,r^{2}+\frac{1}{2}\,m\,\Omega_{2}^{2}\,r^{2}\) with \(\Omega_{1}^{2}+\Omega_{2}^{2}=\Omega^{2}\). Now we choose the part \(\frac{1}{2}\,m\,\Omega_{1}^{2}\,r^{2}\) of the potential to be left alone and reformulate the part \(\frac{1}{2}\,m\,\Omega_{2}^{2}\,r^{2}\) in terms of the nonlinear theory as discussed in Sec. III.
In this way, we obtain the following NSE \[i\hbar\,\frac{\partial\Psi}{\partial t}=-\,\frac{\hbar^{2}}{2m}\,\bigtriangleup\Psi+U_{1}(r)\,\Psi+F_{2}\big{[}\Psi^{*}\Psi\big{]}\;\Psi\,, \tag{14}\] where the potential \(U_{1}(r)\), the nonlinear term \(F_{2}\big{[}\Psi^{*}\Psi\big{]}\) and the constants \(A_{2}>0\) and \(B_{2}>0\) are given by \[U_{1}(r)=\frac{1}{2}\,m\,\Omega_{1}^{2}\,r^{2},\quad F_{2}\big{[}\Psi^{*}\Psi\big{]}=-A_{2}\,\ln\!\big{(}B_{2}\,\Psi^{*}\Psi\big{)}\,,\quad A_{2}=\frac{\hbar\Omega_{2}^{2}}{2\Omega},\quad B_{2}=\left(\frac{\pi\hbar}{m\Omega}\right)^{3/2}. \tag{15}\] Let us recall that these results are specifically for \(N=3\) space dimensions and the generalization to other dimensions is straightforward. The NSE given by Eqs. (14, 15) describes a Gausson trapped in the harmonic potential \(U_{1}(r)=\frac{1}{2}\,m\,\Omega_{1}^{2}\,r^{2}\) with the analytic solution given by Eq. (9). In the limit \(\Omega_{1}\to\Omega\) and \(\Omega_{2}\to 0\), the nonlinear theory (14, 15) reduces to the regular SE for a harmonic oscillator. In the limit \(\Omega_{1}\to 0\) and \(\Omega_{2}\to\Omega\), it reduces to the nonlinear theory of a free Gausson discussed in Sec. III. ## V Nonlinear theory with a 1/cosh soliton in one dimension In our next example, we consider a one-dimensional quantum system described by the potential \[U(x)=-\,\frac{\hbar^{2}}{m\,a^{2}}\frac{1}{\cosh^{2}\left(x/a\right)}\,, \tag{16}\] where \(a\) is a positive constant with the dimension of length. The ground state solution of the SE reads \[\Psi_{0}(t,x)=c_{0}\,\phi_{0}(x)\,e^{-iE_{0}t/\hbar},\quad\phi_{0}(x)=\frac{1}{\cosh(x/a)}\,,\quad c_{0}=\frac{1}{\sqrt{2a}}\,,\quad E_{0}=-\frac{\hbar^{2}}{2m\,a^{2}}\,. \tag{17}\] Using the method described in Sec. II, the wave function can be inverted and used to express the potential in terms of the ground state wave function as follows \[U(x)=-\frac{2\hbar^{2}}{ma}\Psi_{0}^{*}(t,x)\Psi_{0}(t,x)\,.
\tag{18}\] The resulting analytically solvable NSE is then given by Eq. (6) with a particularly simple nonlinear term \[F\big{[}\Psi^{*}\Psi\big{]}=-A\,|\Psi|^{2}\,,\quad A=\frac{2\hbar^{2}}{ma}\,. \tag{19}\] The analytic solution (17) of the nonlinear theory (6, 19) is well known and was found in Ref. [7]. The underlying NSE in \(N=3\) is generally known as the Gross-Pitaevskii equation [11; 12] and has important applications to the description of interacting Bose gases. In \(N=1\) dimensions, it is often referred to as the Lieb-Liniger model [13]. The wide range of applications of this nonlinear theory includes propagation of self-focusing laser beams in nonlinear media [14; 15; 16; 17], solitons in Bose condensates [18; 19; 20; 21; 22] and fermionic superfluids [23], generation of ocean [24; 25; 26] and optical [27] rogue waves, or cosmological axion models of nonrelativistic dark matter [28]. The NSE can be derived from, e.g., the nonrelativistic limit of the one-dimensional sinh-Gordon model [29], or the complex \(\Phi^{4}\) theory [30; 31]. Suffice to say that this NSE is of great interest in mathematical physics [32; 33; 34; 35; 36; 37]. In the case of solitons in Bose condensates, the sign of the nonlinearity is opposite to our result and the equation describes a "dark soliton" which corresponds to a depletion in the density in the Bose condensate [18; 19; 20; 21; 22]. For completeness, we remark that the potential \(U(x)\) in Eq. (16) is a special case of the Rosen-Morse potential [47] and belongs to a wider class of potentials known as Natanzon potentials [48]. We postpone displaying the potential, nonlinear term, and wave function to the next section where we generalize the 1/cosh solution to an arbitrary number of dimensions \(N\). 
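Both readings of the 1/cosh solution, as ground state of the linear SE with the potential (16) and as soliton of the NSE with the cubic term (19), can be verified symbolically; a sympy sketch (the phase of \(\Psi_{0}\) drops out of \(|\Psi|^{2}\), so only the spatial part is needed):

```python
import sympy as sp

x, a, m, hbar = sp.symbols('x a m hbar', positive=True)

# Ground state data of Eq. (17)
phi0 = 1/sp.cosh(x/a)
c0 = 1/sp.sqrt(2*a)
E0 = -hbar**2/(2*m*a**2)

# residual of the linear SE (1) with the potential of Eq. (16)
U = -hbar**2/(m*a**2)/sp.cosh(x/a)**2
se_res = sp.simplify(-hbar**2/(2*m)*sp.diff(phi0, x, 2) + U*phi0 - E0*phi0)

# residual of the NSE (6) with F = -A |Psi|^2 of Eq. (19),
# evaluated on Psi0 = c0*phi0
A = 2*hbar**2/(m*a)
nse_res = sp.simplify(-hbar**2/(2*m)*sp.diff(phi0, x, 2)
                      - A*(c0*phi0)**2*phi0 - E0*phi0)
```

Both residuals simplify to zero, confirming that the cubic term reproduces the potential on the solution.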
## VI Nonlinear theory with a 1/cosh soliton in \(N\) dimensions The 1/cosh solitons exist also in \(N>1\) dimensions, albeit the starting point is then a somewhat more complicated quantum mechanical potential which contains an additional term proportional to \((N-1)\) and is given by \[U(r)=-\,\frac{\hbar^{2}}{ma^{2}}\frac{1}{\cosh^{2}\left(r/a\right)}-\,\left(N-1\right)\frac{\hbar^{2}}{2ma\,r}\,\tanh(r/a) \tag{20}\] where \(a\) is a positive constant with the dimension of length. Clearly, for \(N=1\) we recover the potential of Sec. V. The ground state solution of the SE is given by \[\Psi_{0}(t,\vec{x})=c_{0}(N)\,\phi_{0}(r)\,e^{-iE_{0}t/\hbar},\quad\phi_{0}(r)=\frac{1}{\cosh(r/a)}\,,\quad E_{0}=-\frac{\hbar^{2}}{2m\,a^{2}}\,, \tag{21}\] and is exactly the same as in Sec. V except the normalization constant is now given by \[c_{0}(N)=\sqrt{\frac{4^{N-1}\,\Gamma(N/2+1)}{(2^{N}-4)\,N\,(N-1)!\,\pi^{N/2}\zeta(N-1)a^{N}}}\,,\quad N>2\,. \tag{22}\] In the case \(N=2\), care is needed because the factor \((2^{N}-4)\) goes to zero while the \(\zeta\)-function \(\zeta(N-1)\) diverges, but the product of these factors \((2^{N}-4)\,\zeta(N-1)\to 4\ln 2\) is finite such that \(c_{0}(2)=1/(a\sqrt{2\pi\ln 2})\). For \(N=1\) the formula (22) reproduces the normalization constant quoted in Sec. V in Eq. (17). Inverting the wave function, the potential can be rewritten as \[U(r)=A\,G\big{(}|\Psi_{0}(t,\vec{x})|/c_{0}\big{)},\quad A=\frac{\hbar^{2}}{ma^{2}}\,,\quad G(\phi)=-\phi^{2}-\frac{1}{2}\,(N-1)\frac{\sqrt{1-\phi^{2}}}{\ln\!\big{(}1+\sqrt{1-\phi^{2}}\big{)}-\ln\phi}\,. \tag{23}\] The resulting analytically solvable NSE is then given by Eq. (6) with the nonlinear function \[F[\Psi^{*}\Psi]=A\,G\big{(}|\Psi/c_{0}|\big{)} \tag{24}\] with \(G(\phi)\) and \(A\) defined in Eq. (23). The results are valid for any dimension \(N\) including the one-dimensional case discussed in Sec. V. To the best of our knowledge, the solution for \(N>1\) has not been discussed before in literature.
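The inversion behind \(G(\phi)\) rests on the identity \(r/a=\ln\big{(}(1+\sqrt{1-\phi^{2}})/\phi\big{)}\) for \(\phi=1/\cosh(r/a)\), and can be checked numerically; a short sketch in units \(\hbar=m=a=1\), with the prefactor \(A=\hbar^{2}/(ma^{2})\) that dimensional consistency with Eq. (20) requires:

```python
import math

# Units hbar = m = a = 1 (our choice for the check), so A = 1.

def U(r, N):
    """Potential of Eq. (20)."""
    return -1.0/math.cosh(r)**2 - (N - 1)*math.tanh(r)/(2.0*r)

def G(phi, N):
    """Nonlinear function G(phi) of Eq. (23)."""
    s = math.sqrt(1.0 - phi**2)
    return -phi**2 - 0.5*(N - 1)*s/(math.log(1.0 + s) - math.log(phi))

def F_on_soliton(r, N):
    """A*G(phi0(r)) with phi0 = 1/cosh(r/a), as in Eq. (23)."""
    return G(1.0/math.cosh(r), N)

# A*G(phi0(r)) reproduces U(r) pointwise for every tested N and r
deviation = max(abs(F_on_soliton(r, N) - U(r, N))
                for N in (1, 2, 3) for r in (0.3, 1.0, 2.5))
```

The maximal deviation is at the level of floating-point round-off.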
In units of \(|E_{0}|\), the potential has the shape \(V(r)/|E_{0}|=-2/\cosh(y)^{2}-(N-1)\tanh(y)/y\) where \(y=r/a\) and is depicted in Fig. 2a for \(N=1,\,2,\,3\) dimensions. The function \(G(\phi)\) defined in Eq. (23) is similarly shown for \(N=1,\,2,\,3\) dimensions in Fig. 2b. Although the nonlinearity in (23) enters effectively as \(\phi\,G(\phi)\), we merely plot \(G(\phi)\) since in this case the nonlinear function vanishes for \(\phi\to 0\) (in contrast to the nonlinearity in the Gausson case in Fig. 1b). In the limit \(\phi\to 1\), the function \(G(\phi)\) approaches the value \((-N-1)/2\). We see that the nonlinearity in this NSE has a very different shape and opposite sign compared to the nonlinearity of the Gausson discussed in Secs. III and IV. The radial wave function has the same 1/cosh shape in any dimension and is shown in Fig. 2c. Figure 2: (a) The potential \(U(r)\) in Eq. (20) in units of \(|E_{0}|\) for \(N=1,\,2,\,3\) dimensions. (b) The nonlinear term \(G(\phi)\) defined in Eq. (23) for \(N=1,\,2,\,3\) dimensions. (c) The radial part \(\phi_{0}(r)=1/\cosh(r/a)\) of the ground state wave function which is the solution to the SE with the potential shown in (a) and the NSE with the nonlinear term shown in (b) for any dimension \(N\). VII One-dimensional theory with a power-law nonlinearity \(F[\Psi^{*}\Psi]=|\Psi^{*}\Psi|^{\lambda}\) In this section, we present an interesting variant of the NSE discussed in Sec. V. In a one-dimensional quantum system, we consider the potential \[U(x)=-\,\frac{1+\lambda}{2\lambda^{2}}\,\frac{\hbar^{2}}{m\,a^{2}}\frac{1}{ \cosh^{2}\left(x/a\right)}\,. \tag{25}\] where \(a>0\) has the dimension of length and \(\lambda>0\) is dimensionless. The case \(\lambda=1\) was discussed in Sec. V, and for \(\lambda\to\infty\) we recover the free SE. 
For \(0<\lambda<\infty\), the ground state solution of the SE with the potential (25) reads \[\Psi_{0}(t,x)=c_{0}\,\phi_{0}(x)\,e^{-iE_{0}t/\hbar},\quad\phi_{0}(x)=\frac{1}{\cosh(x/a)^{1/\lambda}}\,,\quad c_{0}=\sqrt{\frac{\Gamma(1/2+1/\lambda)}{\sqrt{\pi}\,\Gamma(1/\lambda)\,a}}\,,\quad E_{0}=-\frac{1}{\lambda^{2}}\,\frac{\hbar^{2}}{2m\,a^{2}}\,. \tag{26}\] Using the method described in Sec. II, the wave function can be inverted and used to express the potential in terms of the ground state wave function as follows \[U(x)=-\,\frac{1+\lambda}{2\lambda^{2}}\,\frac{\hbar^{2}}{m\,a^{2}}\left(\phi_{0}^{*}(x)\phi_{0}(x)\right)^{\lambda}\,. \tag{27}\] The resulting analytically solvable NSE is then given by Eq. (6) with the power-law nonlinear term \[F\big{[}\Psi^{*}\Psi\big{]}=A\,|\Psi|^{2\lambda}\,,\quad A=-\,\frac{1+\lambda}{2\lambda^{2}}\,\frac{\hbar^{2}}{m\,a^{2}\,c_{0}^{2\lambda}}\,. \tag{28}\] The constant \(\lambda\) can be chosen to model any desired power-law nonlinearity proportional to \(|\Psi|^{2\lambda}\). In Fig. 3, we show the potential \(U(x)\), the nonlinear term defined as \(G(\phi)=-\,\phi^{2\lambda}\), and \(\phi_{0}(x)\) for selected \(\lambda\) values. As \(\lambda\) increases, \(U(x)\) becomes shallower and \(G(\phi)\) more strongly peaked towards the region \(\phi\to 1\). In absolute units, the spatial part of the soliton is \(c_{0}\,\phi_{0}(x)\) and the normalization constant \(c_{0}\) decreases as \(\lambda\) increases. I.e., in the limit when \(\lambda\) becomes large, the soliton decreases in the center and spreads out, i.e. it becomes delocalized. At the same time as \(\lambda\) becomes large, the magnitude of the energy \(E_{0}\propto 1/\lambda^{2}\) decreases. For \(\lambda\to\infty\), we recover the free SE as the potential \(U(x)\to 0\) in Eq. (25) and also the nonlinear term \(F\big{[}\Psi^{*}\Psi\big{]}\to 0\) in Eq. (28).
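That \(\phi_{0}(x)=\cosh(x/a)^{-1/\lambda}\) with the energy of Eq. (26) solves the SE for the potential (25) can be verified symbolically; a sympy sketch checking a few representative \(\lambda\) values, including \(\lambda=1\), which recovers Sec. V:

```python
import sympy as sp

x, a, m, hbar = sp.symbols('x a m hbar', positive=True)

def se_residual(lam):
    """SE residual for the potential (25) and the state (26) at a given lambda."""
    phi0 = sp.cosh(x/a)**(-sp.S(1)/lam)
    E0 = -hbar**2/(2*m*a**2*lam**2)
    U = -(1 + lam)*hbar**2/(2*lam**2*m*a**2*sp.cosh(x/a)**2)
    return sp.simplify(sp.expand(-hbar**2/(2*m)*sp.diff(phi0, x, 2)
                                 + (U - E0)*phi0))

# lambda = 1 is the 1/cosh soliton of Sec. V; the others probe the power law
residuals = [se_residual(lam) for lam in (sp.S(1)/2, sp.S(1), sp.S(2), sp.S(5))]
```

Every residual simplifies to zero, so the inversion leading to the \(|\Psi|^{2\lambda}\) nonlinearity starts from a genuine eigenstate.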
If we apply a Galilean boost according to (7) and take \(\lambda\to\infty\), the solution is of course not normalizable and corresponds to a plane wave. The opposite limit of small \(\lambda\) is also interesting. The potential of the SE becomes deeper and \(E_{0}\) becomes more negative. In the NSE, the magnitude of the nonlinear term increases (it becomes more negative) since \(A\) is proportional to \(1/\lambda^{2}\) in Eq. (28) and the soliton becomes more strongly localized. This picture remains correct for arbitrarily small, but non-zero \(\lambda\). In the strict limit \(\lambda\to 0\) the potential of the SE (and the nonlinear term of the NSE) become singular, the ground state energy \(E_{0}\to-\,\infty\), while the ground state wave function becomes strongly localized and approaches \(|\psi_{0}(x,t)|^{2}\to\delta(x)\). In Appendix A, we show that despite this extreme localization of the state for \(\lambda\to 0\), Heisenberg's uncertainty principle is always valid. For completeness, we remark that the solution \(\phi_{0}(r)=1/\cosh(r/a)^{1/\lambda}\) exists also in \(N\geq 2\) dimensions for a generalized potential and a generalized nonlinearity which then both have additional structures proportional to \((N-1)\). The situation is similar to the case discussed in Sec. VI, and we refrain from showing the results. ## VIII Example of a NSE from piecewise potential Some exactly solvable quantum problems are given in terms of potentials which are defined piecewise. Our next example is of this type. We will see that it is possible to derive an NSE also in such a case. In a one-dimensional quantum system, we consider the potential given by \[U(x)=\frac{\hbar^{2}}{2mL^{2}}\,\beta(\beta-1)\tan^{2}\left(\frac{x}{L}\right) \quad\mbox{for}\quad|x|<\frac{\pi\,L}{2}\;, \tag{29}\] and infinite for \(|x|\geq\pi\,L/2\) where \(L\) is a positive parameter of dimension length and \(\beta>1\) is dimensionless. 
The ground state solution of the SE with the potential (29) is for \(|x|<\pi\,L/2\) given by \[\Psi_{0}(t,x)=c_{0}\,\phi_{0}(x)\,e^{-iE_{0}t/\hbar},\quad\phi_{0}(x)=\left( \cos\frac{x}{L}\right)^{\beta},\quad c_{0}=\sqrt{\frac{\beta\,\Gamma(\beta)}{ \sqrt{\pi}\,\Gamma(\beta+\frac{1}{2})L}}\,,\quad E_{0}=\frac{\hbar^{2}\beta}{2 mL^{2}} \tag{30}\] and zero elsewhere. The wave function can be inverted and used to express the potential in terms of \(\phi_{0}(x)\) as follows \[U(x)=\frac{\hbar^{2}}{2mL^{2}}\,\beta(\beta-1)\left(\phi_{0}(x)^{-2/\beta}-1 \right)\quad\mbox{for}\quad|x|<\frac{\pi\,L}{2}\;. \tag{31}\] The resulting analytically solvable NSE is then given by Eq. (6) with the nonlinear term defined as \[F\big{[}\Psi^{*}\Psi\big{]}=A\,G\big{(}|\Psi/c_{0}|\big{)}\,,\quad G(\phi)= \beta(\beta-1)\left(\phi^{-2/\beta}-1\right),\quad A=\frac{\hbar^{2}}{2L^{2}m}\,. \tag{32}\] The potential, nonlinear term, and \(\phi_{0}(x)\) are shown in Fig. 4 for selected values of \(\beta\). For \(\beta\to 1\), the potential (29) approaches the familiar infinite square well potential, while for \(\beta\gg 1\), the potential becomes very steep, see Fig. 4a. In the limit \(\beta\to 1\), also the nonlinear function has formally a "square well-type shape" with the properties (i) \(G(\phi)=0\) for \(\phi\neq 0\), and (ii) \(G(\phi)\to\infty\) as \(\phi\to 0\) as illustrated in Fig. 4b. However, although \(\beta\) can be infinitesimally close to unity, the NSE can only be solved for \(\beta>1\). In the limit of large \(\beta\), the non-linear function grows with \(\beta\). But when normalized with respect to \(\beta\), the nonlinearity has the limit \(\lim_{\beta\to\infty}\phi\,G(\phi)/\beta=-2\,\phi\,\ln\phi\), as depicted in Fig. 4b. As \(\beta\to 1\), the solution \(\phi_{0}(x)\) approaches the shape \(\cos(x/L)\) familiar from the square well potential, while for \(\beta\gg 1\) it becomes strongly localized, see Fig. 4c. 
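Both the solution (30) and the quoted large-\(\beta\) limit of the nonlinearity can be checked numerically; a minimal sketch (illustrative, not part of the original text) in units \(\hbar=m=L=1\), where Eq. (30) gives \(E_{0}=\beta/2\), using the arbitrary sample value \(\beta=2.5\):

```python
import numpy as np

# Natural units hbar = m = L = 1; Eq. (30) then gives E0 = beta/2.
beta = 2.5                     # arbitrary sample value
E0 = 0.5 * beta
x = np.linspace(-np.pi / 2 + 1e-3, np.pi / 2 - 1e-3, 200001)
h = x[1] - x[0]
phi = np.cos(x) ** beta
U = 0.5 * beta * (beta - 1.0) * np.tan(x) ** 2         # potential of Eq. (29)

# Check -(1/2) phi'' + U phi = E0 phi via central differences
phi_xx = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / h**2
residual = np.max(np.abs(-0.5 * phi_xx + (U[1:-1] - E0) * phi[1:-1]))

# Large-beta limit of the nonlinearity: phi*G(phi)/beta -> -2*phi*ln(phi)
big = 1.0e4                    # a large sample beta
phis = np.linspace(0.05, 0.95, 19)
scaled = phis * big * (big - 1.0) * (phis ** (-2.0 / big) - 1.0) / big
dev = np.max(np.abs(scaled - (-2.0 * phis * np.log(phis))))
print(residual, dev)           # both are small
```

The first check confirms the eigenvalue equation away from the walls of the well; the second confirms the stated limit \(\lim_{\beta\to\infty}\phi\,G(\phi)/\beta=-2\,\phi\,\ln\phi\).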
In the limit \(\beta\to\infty\), the function \(\phi_{0}(x)\to 0\) for \(x\neq 0\), and the normalized wave function takes the limit \(\lim_{\beta\to\infty}|\psi_{0}(x,t)|^{2}=\delta(x)\). Despite the strong localization of the wave function for \(\beta\to\infty\), Heisenberg's uncertainty relation remains valid because the shrinking of the position uncertainty \(\Delta x\to 0\) is accompanied by the corresponding spread of the momentum uncertainty \(\Delta p\to\infty\). Notice also that \(E_{0}\) diverges as \(\beta\) grows. For any \(\beta<\infty\) it is always \(\Delta p\,\Delta x>\frac{1}{2}\,\hbar\), and the uncertainty relation becomes an equality in the limit \(\beta\to\infty\). The situation is analogous to the limit \(\lambda\to 0\) in Sec. VII which is discussed in detail in App. A.

Figure 4: (a) The potential \(U(x)\) in Eq. (29) in units of \(|E_{0}|\) for selected values of \(\beta\). The limiting case \(\beta=1\) corresponds to the familiar square well potential. (b) The nonlinear term \(\phi\,G(\phi)\) with \(G(\phi)\) defined in Eq. (32) normalized with respect to the parameter \(\beta\) to better illustrate the scaling in the large-\(\beta\) limit. (c) The solution \(\phi_{0}(x)=\cos(x/L)^{\beta}\) for selected values of \(\beta\).

## IX Nonlinear theory with \(\delta\)-function type limiting case

In this section, we consider the one-dimensional potential given by the expression \[U(x,b_{0})=-\ \frac{\hbar^{2}b_{0}^{2}}{2am}\ \left[\frac{1}{(x^{2}+b_{0}^{2})^{\,3/2}}+\frac{1}{a\,(x^{2}+b_{0}^{2})}\right]\,, \tag{33}\] where \(a>0\) and \(b_{0}>0\) are constants with the dimension of length. The ground state solution of the ordinary SE with the potential in Eq. (33) is given by \[\Psi_{0}(t,x)=c_{0}\phi_{0}(x)\,e^{-iE_{0}t/\hbar},\quad\phi_{0}(x)=\exp\!\left(\frac{b_{0}}{a}-\frac{\sqrt{x^{2}+b_{0}^{2}}}{a}\right)\!,\quad E_{0}=-\,\frac{\hbar^{2}}{2a^{2}m}\,.
\tag{34}\] We could not find an analytic expression for the normalization constant \(c_{0}\) valid in the general case, though it can be computed numerically if needed. The expression for \(c_{0}\) is not of importance for the following. Notice that \(\phi_{0}(0)=1\) in accordance with Eq. (4). The function \(\phi_{0}(x)\) can be inverted and the potential expressed as \[U(x,b_{0})=\frac{\hbar^{2}}{2a^{2}m}\,G\!\left(\phi_{0}(x),b_{0}\right)\!,\quad G(\phi,b_{0})=\frac{b_{0}^{2}}{a^{2}}\,\frac{1-\ln(e^{-b_{0}/a}\phi)}{\ln^{3}(e^{-b_{0}/a}\phi)}\,. \tag{35}\] In this way, we obtain the exactly solvable NSE in Eq. (6) with the nonlinearity \[F\!\left[\Psi^{*}\Psi\right]=A\,G\!\left(|\Psi|/c_{0},b_{0}\right),\quad A=\frac{\hbar^{2}}{2m\,a^{2}}\,, \tag{36}\] with \(G(\phi,b_{0})\) defined in Eq. (35). The nonlinear theory (36) has the analytic solution (34) and can describe traveling solutions according to (7). We defined the potential in Eq. (33) for \(b_{0}>0\) and excluded the case \(b_{0}=0\). But the limit \(b_{0}\to 0\) can be taken, and it is indeed very interesting. In this limit, the potential (33) has the properties \[\mbox{(i)}\quad\lim_{b_{0}\to 0}U(x,b_{0})=0\quad\mbox{for}\quad x\neq 0\,,\qquad\mbox{(ii)}\quad\int\limits_{-\infty}^{\infty}\!{\rm d}x\;U(x,b_{0})=-\,\frac{\hbar^{2}}{2am}\left(2+\frac{\pi\,b_{0}}{a}\right)\,, \tag{37}\] such that the potential approaches an attractive \(\delta\)-function potential, \[\lim_{b_{0}\to 0}U(x,b_{0})=-\,\frac{\hbar^{2}}{a\,m}\,\delta(x)\,. \tag{38}\] In this limit, the ground state solution (34) becomes \[\phi_{0}(x)=e^{-|x|/a}\,,\quad c_{0}=\frac{1}{\sqrt{a}}\,, \tag{39}\] while the ground state energy \(E_{0}=-\,\hbar^{2}/(2a^{2}m)\) remains unchanged. The nonlinear function \(G(\phi,b_{0})\) in Eq. (35) has the analogous properties \[\mbox{(i)}\quad\lim_{b_{0}\to 0}G(\phi,b_{0})=0\quad\mbox{for}\quad\phi\neq 1\,,\qquad\mbox{(ii)}\quad\lim_{b_{0}\to 0}\int\limits_{0}^{1}\!{\rm d}\phi\;G(\phi,b_{0})=-\,\frac{1}{2}\,, \tag{40}\] such that in the limit \[\lim_{b_{0}\to 0}G(\phi,b_{0})=-\,\delta(1-\phi)\,, \tag{41}\] with the convention that integrating a \(\delta\)-function up to a limit which coincides with its support yields \(\int_{0}^{c}\mathrm{d}u\,\delta(u-c)=\frac{1}{2}\) for \(c>0\). In this way, we find an unusual exactly solvable NSE, namely \[i\hbar\,\frac{\partial\Psi}{\partial t}=-\,\frac{\hbar^{2}}{2m}\,\bigtriangleup\Psi-A\,\delta\big{(}1-|\Psi|/\sqrt{a}\big{)}\,,\quad A=\frac{\hbar^{2}}{2m\,a^{2}}\,. \tag{42}\] The nonlinearity in this problem is nonzero only when the spatial part of the wave function \(\phi(x)\) becomes unity. This is the case for the solution \(\phi_{0}(x)\) in Eq. (39) (cf. Eq. (4) for conventions) only at \(x=0\), corresponding to the only point where the limiting potential (38) is nonzero. The very presence of the singular nonlinearity in Eq. (42) can be verified only by integrating the (time-independent version of the) NSE in Eq. (42) over an infinitesimal interval \([-\epsilon,\,\epsilon]\) enclosing the point \(x=0\), i.e. in very much the same way the singular potential (38) is treated in the ordinary SE. In Fig. 5, we show the potential, nonlinear term and spatial part \(\phi_{0}(x)\) for selected values of \(b_{0}\). As the parameter \(b_{0}\) decreases, the potential and nonlinear term become narrower and deeper as shown in Figs. 5a and 5b. The minimum of the potential in units of \(|E_{0}|\) and the non-linear function are given by \(U(0,b_{0})/|E_{0}|=G(1,b_{0})=-a/b_{0}-1\) and go to minus infinity for \(b_{0}\to 0\). Both functions eventually approach the corresponding singular limits in Eqs. (38, 41) which cannot be depicted.
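These limiting properties can be checked numerically. A minimal sketch (illustrative only) in units \(\hbar=m=a=1\): the area under \(U(x,b_{0})\) tends to \(-1\) as \(b_{0}\to 0\), consistent with the attractive \(\delta(x)\) limit noted in Fig. 5, and the depth of the minimum obeys \(U(0,b_{0})/|E_{0}|=-a/b_{0}-1\):

```python
import numpy as np

# Units hbar = m = a = 1; the potential of Eq. (33), with E0 = -1/2.
def U(x, b0):
    return -0.5 * b0**2 * ((x**2 + b0**2) ** (-1.5) + 1.0 / (x**2 + b0**2))

x = np.linspace(-2000.0, 2000.0, 4000001)
h = x[1] - x[0]

areas = []
for b0 in (1.0, 0.2, 0.05):
    area = np.sum(U(x, b0)) * h          # integral of U over the line
    areas.append(area)
    # Exact value -(2 + pi*b0)/2 tends to -1, i.e. U -> -delta(x)
    print(b0, area, -(2.0 + np.pi * b0) / 2.0)

# Depth of the minimum in units of |E0|: U(0, b0)/|E0| = -a/b0 - 1
depth = U(np.array([0.0]), 0.05)[0] / 0.5
print(depth)  # -21 for b0 = 0.05
```

The strength of the limiting \(\delta\)-potential (\(-1\) in these units, i.e. \(-\hbar^{2}/(am)\)) indeed reproduces the quoted ground state energy \(E_{0}=-\hbar^{2}/(2a^{2}m)\) of the attractive \(\delta\)-well.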
The wave function \(\phi_{0}(x)\) is regular in the limiting case \(b_{0}\to 0\) and shown in Fig. 5c. In the opposite limit \(b_{0}\to\infty\), the potential \(U(x,b_{0})\to E_{0}\) becomes a trivial constant and the non-linearity \(G(\phi,b_{0})\to 0\). After a Galilean boost (7), the wave function describes a non-normalizable plane wave solution. In other words, we recover a free SE in the limit \(b_{0}\to\infty\).

Figure 5: (a) The potential \(U(x,b_{0})\) in Eq. (33) in units of \(|E_{0}|\) for selected values of \(b_{0}\). For \(b_{0}\to 0\) the potential reduces to an attractive \(\delta(x)\) potential. (b) The nonlinear term \(G(\phi,b_{0})\) defined in Eq. (35) which becomes \(-\,\delta(1-\phi)\) in the limit \(b_{0}\to 0\). (c) The solution \(\phi_{0}(x)\) for selected values of \(b_{0}\) including the limit \(b_{0}\to 0\).

## X Three-dimensional nonlinear theory from Coulomb potential

As our final example, we choose another well-familiar analytically solvable quantum mechanical potential, namely the Coulomb potential in \(N=3\) dimensions. The potential is given by \[U(r)=-\,\frac{e^{2}}{4\pi\varepsilon_{0}}\,\frac{1}{r}=-\frac{\hbar^{2}}{a_{B}\,m}\,\frac{1}{r} \tag{43}\] where \(a_{B}=4\pi\varepsilon_{0}\hbar^{2}/(e^{2}m)=\hbar/(\alpha mc)\) denotes the Bohr radius, \(m\) the reduced mass, and \(\alpha\) the fine structure constant. The ground state energy and wave function are given by \[\Psi_{0}(t,\vec{x})=c_{0}\,\phi_{0}(r)\,e^{-iE_{0}t/\hbar},\quad\phi_{0}(r)=\exp\!\left(-\frac{r}{a_{B}}\right)\!,\quad c_{0}=\left(\frac{1}{\pi a_{B}^{3}}\right)^{\!\!1/2},\quad E_{0}=-\,\frac{\hbar^{2}}{2a_{B}^{2}m}=-\frac{1}{2}\,\alpha^{2}\,mc^{2}\,. \tag{44}\] This wave function can be inverted such that we obtain \[r=-\frac{a_{B}}{2}\ \ln\!\left(\frac{|\Psi_{0}(t,\vec{x})|^{2}}{|c_{0}|^{2}}\right). \tag{45}\] Hence, we can rewrite the Coulomb potential as \[U(r)=\frac{2\hbar^{2}}{m\,a_{B}^{2}}\ \frac{1}{\ln\!\left(|\Psi_{0}(t,\vec{x})|^{2}/|c_{0}|^{2}\right)}\,.
\tag{46}\] In this way, we find the exactly solvable NSE in Eq. (6) where the nonlinear term \(F[\Psi^{*}\Psi]\) is given by \[F\!\left[\Psi^{*}\Psi\right]=\frac{A}{\ln\!\left(B\,\Psi^{*}\Psi\right)}=A\,G\!\left(|\Psi/c_{0}|\right),\quad A=\frac{2\hbar^{2}}{m\,a_{B}^{2}},\quad B=\pi a_{B}^{3},\quad G(\phi)=\frac{1}{\ln(\phi^{2})}\,. \tag{47}\] The nonlinear theory (6, 47) has the analytic solution (44) and can describe traveling solitons according to Eq. (7). To the best of our knowledge, this nonlinear theory has not been discussed in the literature before. In Fig. 6, we depict for completeness the Coulomb potential \(U(r)\), the nonlinear function \(G(\phi)\), and the radial function \(\phi_{0}(r)\). The function \(G(\phi)\) is throughout negative for \(\phi>0\) and diverges when \(\phi\to 1\), which is in one-to-one correspondence to the divergence of the Coulomb potential at \(r\to 0\).

Figure 6: (a) The three-dimensional Coulomb potential in units of \(|E_{0}|\) as function of \(r\) in units of the Bohr radius \(a_{B}\). (b) The nonlinear term \(G(\phi)\) of the NSE defined in Eq. (47). (c) The radial part \(\phi_{0}(r)\) of respectively the Coulomb ground state wave function or the soliton of the NSE (6) with the nonlinearity (47).

## XI Conclusions

In this work, we have presented a method to construct analytically solvable nonlinear extensions of the Schrödinger equation (NSE) starting from an ordinary analytically solvable Schrödinger equation (SE). We have illustrated the method through several examples in which the potential \(U(\vec{x})\) of the SE in Eq. (1) was systematically transformed into a nonlinear term \(F[\Psi^{*}\Psi]\) of the NSE in Eq. (6). Starting from respectively the harmonic potential or (a special case of) the Rosen-Morse potential, we rederived well-known soliton solutions of nonlinear theories, namely the Gausson in a general number of space dimensions \(N\) and the one-dimensional \(1/\cosh\) soliton [4; 5; 6; 7].
In several other cases, we have derived exact soliton solutions of non-linear theories which, to the best of our knowledge, have not been discussed previously in the literature. This includes among others a nonlinear theory derived from the SE with the Coulomb potential in \(N=3\) dimensions. Another interesting example was a regular one-dimensional potential which can be transformed into the attractive \(\delta(x)\) potential by taking one of the parameters of this potential to approach a specific limit. The regular potential as well as the singular \(\delta(x)\) potential can both be used to construct exactly solvable NSEs with interesting soliton solutions. The quantum mechanical potentials explored in this work have in common that they are symmetric, i.e. \(U(\vec{x})=U(r)\) with \(r=|\vec{x}|\) in \(N>1\) space dimensions or \(U(x)=U(|x|)\) in \(N=1\) space dimensions. Another common feature is that the considered potentials have a single minimum which can be finite or infinite. It is an interesting question whether the method can be generalized to construct exactly solvable nonlinear soliton theories also under more general conditions, e.g. starting from non-symmetric potentials or from double-well type potentials. Another interesting future direction could be to explore systematically methods like Lie algebra techniques and self-similar potentials [49; 50], or the more general concept of shape invariant potentials [51] and other supersymmetric methods in quantum mechanics [52; 53], or whether novel soliton solutions can be found in non-hermitian PT symmetric quantum systems in analogous ways [54; 55; 56]. These interesting questions will be addressed in future studies.

**Acknowledgments.** This work was supported by the National Science Foundation under Contract Nos. 1812423 and 2111490.
This work was supported in part also by the Department of Energy within the framework of the QGT Topical Collaboration.

## Appendix A Heisenberg's uncertainty principle for extremely localized wave functions

In Sec. VII, we discussed the exactly solvable quantum potential (25). In this Appendix, we investigate in detail the limit \(\lambda\to 0\) in which the ground state wave function (26) behaves such that the probability density has the properties \[(i) \lim_{\lambda\to 0}|\Psi_{0}(t,x)|^{2}=0\quad\text{for}\quad x\neq 0\,,\] \[(ii) \int\limits_{-\infty}^{\infty}\!\!dx\;|\Psi_{0}(t,x)|^{2}=1\quad\text{for}\quad\lambda\neq 0\,. \tag{101}\] These properties imply that the probability density becomes extremely localized as \[\lim_{\lambda\to 0}|\Psi_{0}(t,x)|^{2}=\delta(x), \tag{102}\] and exhibits an obviously vanishing position uncertainty \(\Delta x\). It is interesting to ask whether such an extremely localized state satisfies Heisenberg's uncertainty principle. For \(\lambda\neq 0\) it is always \(\Delta p\;\Delta x>\frac{\hbar}{2}\). We refrain from showing the bulky analytic expressions for \(\Delta p\) and \(\Delta x\) which, if needed, can be found easily with Mathematica and are given in terms of Gamma functions and hypergeometric functions. The results for \(\Delta x\) and \(\Delta p\) are shown in Fig. 7. As \(\lambda\) decreases, the position uncertainty \(\Delta x\) becomes smaller while the momentum uncertainty \(\Delta p\) increases. For infinitesimally small (but non-zero) \(\lambda\), the uncertainties behave as \[\Delta x=a\sqrt{\frac{\lambda}{2}}+\ldots\;,\quad\Delta p=\frac{\hbar}{a}\;\frac{1}{\sqrt{2\lambda}}+\ldots\;, \tag{103}\] where the dots indicate positive higher order corrections such that \(\Delta x\,\Delta p>\frac{\hbar}{2}\) for all \(\lambda>0\). The leading terms in Eq.
(103) approximate the momentum and position uncertainties to within \(\mathcal{O}(2\,\%)\) already for \(\lambda\lesssim 10^{-1}\) and describe \(\Delta p\) and \(\Delta x\) over several orders of magnitude for \(\lambda\ll 1\) in Fig. 7. From Eq. (103), we see that \(\lim_{\lambda\to 0}\Delta p\;\Delta x=\frac{\hbar}{2}\). Thus, Heisenberg's uncertainty relation is manifestly valid for any value of \(\lambda\) including the limit \(\lambda\to 0\). The increasing of the momentum uncertainty \(\Delta p\) as \(\lambda\) becomes small implies a large kinetic energy. In fact, since in this stationary state \(\langle p\rangle=0\), we have \[\langle E_{\rm kin}\rangle=\frac{\langle p^{2}\rangle}{2m}=\frac{\Delta p^{2}}{2m}=\frac{\hbar^{2}}{2ma^{2}}\ \frac{1}{2\lambda}+\ldots\,, \tag{104}\] modulo subleading corrections for \(\lambda\ll 1\). Thus, the expectation value of the kinetic energy diverges as \(1/\lambda\) for \(\lambda\to 0\), which is a consequence of the strong localization of the quantum state. However, it is important to keep in mind that the total (negative) binding energy \(E_{0}\) is proportional to \(1/\lambda^{2}\) in Eq. (26). Thus, when "measured in units" of the absolute value of the total energy, the expectation value of the kinetic energy actually behaves as \[\frac{\langle E_{\rm kin}\rangle}{|E_{0}|}=\frac{\lambda}{2}+\ldots \tag{105}\] with the dots denoting higher order terms. In other words, \(\langle E_{\rm kin}\rangle\) becomes negligibly small in the limit \(\lambda\to 0\) in comparison to the total binding energy. This is because the ground state energy is dominated by the expectation value of the potential energy with the potential (25) behaving as \(U(x)\propto 1/\lambda^{2}\). These properties make sense physically for arbitrarily small but non-zero values of \(\lambda\). We deal with a deeply bound and strongly localized state.
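The small-\(\lambda\) behaviour of \(\Delta x\), \(\Delta p\) and their product is easy to confirm numerically; a minimal sketch (illustrative, not from the original text) in units \(\hbar=m=a=1\) for the sample value \(\lambda=0.01\), where the product comes out very close to (and above) \(\hbar/2\):

```python
import numpy as np

# Natural units hbar = m = a = 1; ground state phi0 = 1/cosh(x)^(1/lam), Eq. (26).
lam = 0.01                     # small sample value
x = np.linspace(-2.0, 2.0, 400001)
h = x[1] - x[0]
phi = np.cosh(x) ** (-1.0 / lam)
w = np.sum(phi**2) * h         # normalization integral

dx = np.sqrt(np.sum(x**2 * phi**2) * h / w)    # position uncertainty (<x> = 0)
dphi = -(1.0 / lam) * phi * np.tanh(x)         # analytic phi'
dp = np.sqrt(np.sum(dphi**2) * h / w)          # momentum uncertainty (<p> = 0)
product = dx * dp

# Compare with the leading asymptotics: dx ~ sqrt(lam/2), dp ~ 1/sqrt(2*lam)
print(dx, np.sqrt(lam / 2.0), dp, 1.0 / np.sqrt(2.0 * lam), product)
```

Here \(\langle p^{2}\rangle\) is evaluated as \(\int|\phi_{0}^{\prime}|^{2}\,{\rm d}x/\int|\phi_{0}|^{2}\,{\rm d}x\), which is valid for a real stationary wave function.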
The fact that \(\langle E_{\rm kin}\rangle\ll|E_{0}|\) means the motion of the particle becomes negligible as the particle becomes localized due to the strong coupling. Nevertheless, the uncertainty principle remains valid for any \(\lambda\). It is interesting to notice that the numerical computation of \(\Delta p\) and \(\Delta x\) can be carried out down to much smaller values of \(\lambda\) than the numerical test of the uncertainty relation. The reason for that is as follows. At \(\lambda=10^{-4}\) the asymptotic expressions in Eq. (103) underestimate \(\Delta p\) in relative units by about \({\cal O}(2\times 10^{-5})\) and overestimate \(\Delta x\) by about the same amount. We can go down to \(\lambda=10^{-8}\) before hitting numerical accuracy limitations for \(\Delta p\) and \(\Delta x\) on the scale of Fig. 7a. However, the over- and underestimates in \(\Delta p\) and \(\Delta x\) largely compensate each other in the product, such that at \(\lambda=10^{-4}\) we already reach our numerical accuracy in Fig. 7b with \(\Delta p\,\Delta x/\hbar-\frac{1}{2}={\cal O}(4\times 10^{-10})\). The features that (i) the ground state energy \(E_{0}\to-\infty\) and (ii) the probability density \(|\psi_{0}(x,t)|^{2}\to\delta(x)\) occur also in the case of the attractive one-dimensional \(1/|x|\)-potential when the \(1/|x|\) singularity is "regulated" as \(1/(|x|+\epsilon)\) and the limit \(\epsilon\to 0\) is taken [57]. There is no deeper analogy between our case and the regularized \(1/|x|\) potential; this is rather a generic feature of systems with strongly localized and deeply bound ground states. As long as the parameter (\(\lambda\) in our case or \(\epsilon\) in the regulated \(1/|x|\) potential) is infinitesimally small but non-zero, one deals with a well-behaved quantum state. It has however been questioned whether the strict limit itself of such a strongly localized state with \(|\psi_{0}(x,t)|^{2}\to\delta(x)\) constitutes a physical state, see [58].
2310.12706
Trenchcoat: Human-Computable Hashing Algorithms for Password Generation
The average user has between 90-130 online accounts, and around $3 \times 10^{11}$ passwords are in use this year. Most people are terrible at remembering "random" passwords, so they reuse or create similar passwords using a combination of predictable words, numbers, and symbols. Previous password-generation or management protocols have imposed so large a cognitive load that users have abandoned them in favor of insecure yet simpler methods (e.g., writing them down or reusing minor variants). We describe a range of candidate human-computable "hash" functions suitable for use as password generators - as long as the human (with minimal education assumptions) keeps a single, easily-memorizable "master" secret - and rate them by various metrics, including effective security. These functions hash master-secrets with user accounts to produce sub-secrets that can be used as passwords: $F_R(s, w) \longrightarrow y$ takes a website $w$ and produces a password $y$, parameterized by master secret $s$, which may or may not be a string. We exploit the unique configuration $R$ of each user's associative and implicit memory (detailed in section 2) to ensure that sources of randomness unique to each user are present in each $F_R$. An adversary cannot compute or verify $F_R$ efficiently since $R$ is unique to each individual; in that sense, our hash function is similar to a physically unclonable function. For the algorithms we propose, the user need only complete primitive operations such as addition, spatial navigation or searching. Critically, most of our methods are also accessible to neurodiverse, or cognitively or physically differently-abled persons. We present results from a survey (n=134 individuals) investigating real-world usage of these methods and how people currently come up with their passwords; we also survey 400 websites to collate current password advice.
Ruthu Hulikal Rooparaghunath, T. S. Harikrishnan, Debayan Gupta
2023-10-19T13:00:16Z
http://arxiv.org/abs/2310.12706v1
# Trenchcoat: Human-Computable Hashing Algorithms for Password Generation

###### Abstract

The average user has between 90-130 online accounts [17], and around \(3\times 10^{11}\) passwords are in use this year [10]. Most people are terrible at remembering "random" passwords, so they reuse or create similar passwords using a combination of predictable words, numbers, and symbols [16]. Previous password-generation or management protocols have imposed so large a cognitive load that users have abandoned them in favor of insecure yet simpler methods (_e.g._, writing them down or reusing minor variants). We describe a range of candidate _human-computable_ "hash" functions suitable for use as password generators - as long as the human (with minimal education assumptions) keeps a single, easily-memorizable 'master' secret - and rate them by various metrics, including _effective security_. These functions hash master-secrets with user accounts to produce sub-secrets that can be used as passwords; \(F_{R}(\mathrm{s},w)\longrightarrow y\), which takes a website \(w\) and produces a password \(y\), parameterized by the master secret \(s\), _which may or may not be a string_. We exploit the unique configuration \(R\) of each user's associative and implicit memory (detailed in section 2) to ensure that sources of randomness unique to each user are present in each \(F\). An adversary cannot compute or verify \(F_{R}\) efficiently since \(R\) is unique to each individual; in that sense, our hash function is similar to a physically unclonable function [37]. For the algorithms we propose, the user need only complete primitive operations such as addition, spatial navigation or searching. _Critically, most of our methods are also accessible to neurodiverse, or cognitively or physically differently-abled persons_.
Given the nature of these functions, it is not possible to directly use traditional cryptographic methods for analysis; so, we use an array of approaches, mainly related to entropy, to illustrate and analyze the same. We draw on cognitive, neuroscientific, and cryptographic research to use these functions as improved password management and creation systems, and present results from a survey (n=134 individuals, with each candidate performing 2 schemes) investigating real-world usage of these methods and how people _currently_ come up with their passwords. We also survey 400 websites to collate current password advice.

Keywords: Usable Security · Applied Cryptography · Hash Functions · Security Policy · Authentication · Identification

## 1 Introduction

_Your password must be between 8-16 characters long, with at least one uppercase character, one lowercase character, one number, and one special character (such as !,@,#, etc.), must not include your username, and be changed every 90 days._

Memorizing myriad passwords, with (often questionable) constraints imposed to make each password as "random" as possible, and little guidance on how to manage this information, is a herculean task. This has resulted in people using easily guessable and common passwords [30]. Surveys last year indicated that individuals reuse over half of all passwords for multiple accounts, with many others being easily attacked with a dictionary of common passwords [16]. Anecdotally, users prioritize convenience over privacy when accessing newsletters, spam mails, or magazine subscriptions. They assign important accounts with less conveniently memorable passwords. This trade-off in memorability results in compromised security when passwords are written and stored at home [34]. Weak passwords are a serious threat when they guard sensitive data or systems, and may lead to identity theft, insurance fraud, public humiliation, etc. [43].
Common approaches to handling this rely on instructing users to create 'strong' passwords with suggestions such as: 'don't use your name or birth-date', 'include symbols' and 'don't capitalize only the first letter'. However, users routinely ignore or circumvent these suggestions because of their cognitive load. The current standard for password management and security is a password manager. Unfortunately, several sources report serious flaws (including zero-day attacks) _consistently_ found in the most popular password managers every year [15, 3]. Some managers are also vulnerable because of their tendency to store the passwords to the password manager in plaintext.1

Footnote 1: Preventing this, in most password managers, requires users to terminate the manager each time after use. Users may be unaware of this or disregard it because of inconvenience, which once again lowers its security [25].

Digital and physical copies of passwords will always have vulnerabilities, but remembering several passwords imposes a cognitive load that users are unwilling or unable to manage. Past research has proposed several password generation methods [4, 5, 6], but those that consider real-world usage have not been tested beyond a dozen people [4], or have placed too large a cognitive load on users. We propose a family of **public** derivation functions \(F\) such that, if we start with a master secret \(s\) (which the human memorizes), we can derive a sub-secret \(y_{i}\) for each website \(w_{i}\). Broadly, our requirements for such \(F\) would be: (1) Given \((y_{i},w_{i})\), where \(y_{i}=F_{R}(s,w_{i})\), it should be computationally hard to find \(s\); (2) Given \((y_{1},w_{1}),(y_{2},w_{2}),\ldots,(y_{k},w_{k})\) and \(w_{k+1}\), it should be computationally hard to find \(y_{k+1}\) (secure as in Unforgeability under Random Challenge Attack [6]).
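For a machine, a function meeting requirements (1) and (2) is easy to instantiate from a standard cryptographic hash. The following sketch is a hypothetical baseline only (the separator, alphabet and length are arbitrary illustrative choices, and the small modulo bias is ignored); it shows the kind of \(F\) for which this paper seeks human-computable analogues:

```python
import hashlib
import string

def derive_subsecret(master: str, site: str, length: int = 12) -> str:
    """Machine-computable baseline F(s, w): hash the master secret with the
    site name, then map the digest bytes into printable password characters.
    (Separator, alphabet and length are arbitrary; modulo bias is ignored.)"""
    digest = hashlib.sha3_256((master + ":" + site).encode("utf-8")).digest()
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(alphabet[b % len(alphabet)] for b in digest[:length])

# Deterministic per (s, w); different sites yield unrelated sub-secrets.
y1 = derive_subsecret("correct horse battery", "example.com")
y2 = derive_subsecret("correct horse battery", "example.org")
```

Recovering \(s\) from pairs \((y_{i},w_{i})\), or predicting \(y_{k+1}\), then inherits the preimage resistance of the underlying hash - but no human can evaluate SHA3-256 mentally, which is exactly the gap the schemes in this paper aim to fill.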
This minimizes cognitive load by requiring only the memorization of \(s\), with any \(y_{i}\) being derived using public \(w_{i}\) and \(F\). _Critically, \(s\), unlike \(y_{i}\), need not be a string_! (We discuss visual and cue-based \(s\) in section 2.) \(F\) must be easily human-computable. \(F\) must also not require too much aid, to minimize cognitive load. Further, for individuals to reproduce the same password each time, \(F\) should be deterministic with respect to each individual. One way to satisfy most of these requirements is through a cryptographically secure hash. Predefined cryptographic hash functions such as SHA-3 (with preset size parameters, and conversion to appropriate characters) could be used in place of \(F\), calculating \(y=F(s\cdot w)\), concatenating \(s\) and \(w\) where \(s\) is a string. Unfortunately, most humans cannot easily compute SHA-3 in their heads. We need something that includes _some_ features of a cryptographically-secure hash function without requiring the mathematical heavy-lifting common to such schemes. In the rest of this document, we describe a number of approaches to finding the same, and the results of our survey on the subject. (Assumptions made by cryptographers on what laypersons would find "easy to compute" may be incorrect; we must empirically observe the methods people are willing and able to use.)

### Paper Outline and Contributions

To optimize our hash functions for human use, we discuss visual cues, and implicit and associative methods suggested by cognitive and neuro-scientific research in section 2. Previous literature on human-computable passwords requires rehearsal schedules, online aid, etc. with various caveats and problems [4, 5, 6]. These issues are obviated by using an easily-memorized key with human-computable algorithms designed for password generation and management. Section 4.1 presents a range of such hashing algorithms.
An adversary cannot compute or verify these hashes efficiently, since these are unique to each individual; in that sense, our hash function is similar to a physically unclonable function [37]. In this context, we discuss _effective security_ in section 5.2, which weighs cryptographically evaluated security against human usability. _E.g._, generating random passwords without associative memory techniques or computational tools and writing materials may impose large cognitive loads, reducing usability.2

Footnote 2: In general, as a human-computable hash function grows in difficulty, a human is more likely to abandon it [16, 30] and revert to weak password practices. So, one can have very high theoretical security but, in practice, be totally insecure.

We also define _graceful degradation_ - our algorithms retain a significant amount of their effective security even if access to writing materials, computers, or the internet is unavailable. We test the algorithms presented in this paper as well as Cue Pin Select [4] on a survey population of 134 individuals (with each person assigned two randomly-chosen schemes), averaging 56 responses per algorithm from people between the ages of 18 and 25. We analyse the results in section 5.1 and also use an LSTM to test character predictability in section 5.4. We cannot use standard cryptographic techniques to evaluate our schemes, as they are explicitly optimized for representation in human brains but difficult to represent or simulate on computers (thus contributing to their security). So, we introduce metrics to assess the security of human-computable schemes, measure ease of use, rememberability, unforgeability under Random Challenge Attack [6], and more, in appendix A. We also classify algorithms based on their paradigms, limiting factors, and success of password recall in section 5.2.
Section 3 discusses common password hygiene errors and current password advice; we survey 400 websites and applications for such advice (table 1). We also provide insight into real-world methods individuals _currently_ use to come up with passwords in section 6. Finally, section 5 uses our survey results to understand the determinism or stability of our schemes during real-world usage.

## 2 Cognitive and Neuro-scientific Perspectives

During WW1, before the advent of powerful computers, soldiers used "trench codes" to communicate across trenches. These had to be designed to be computable by soldiers under pressure, without assuming high education levels; this involved coming up with clever codebooks/manuals.\({}^{1}\) Such trench codes had their own problems, of course, but these issues were obviated by the time WW2 came around; ever since then, we have optimized our cryptographic functions (encryption, hashing, etc.) for increasingly-powerful computers, not humans. To design human-computable functions while maintaining security, we must first discuss how to optimize functions for the human brain. Footnote 1: Beyond careful design, these also included side-channel defenses, e.g., the paper material was designed to degrade within a few weeks, ensuring that obsolete codes would not be used, and "lost" manuals would lose value quickly. Broadly, the brain manages memory in two categories [5]: persistent (e.g., notepads) and associative (human memory). The latter is clearly more secure for password storage and recollection, as elaborated in the Introduction. Password recollection depends on the conscious retrieval of detailed memory, which imposes a large cognitive load (so users create workarounds to ease this load). Relying on visual, implicit and associative memory can ease this cognitive load. Visual memory is capable of long-term storage of large amounts of detailed information. Implicit, associative memory aids in lasting rapid recall.
However, memorizing large amounts of new visual information requires constant rehearsal to become embedded in memory, which is tedious. Fortunately, humans already accumulate a vast amount of long-term information throughout their lives. Subconscious rehearsal repeated over time does _not_ feel tedious: drawing on implicit memory - such as repeatedly navigating a house - requires less effort. Visually cued recollection is easier than explicit recollection [2]. This is also a more accessible method, as neurologically damaged or disabled patients can succeed at implicit memory tasks, even when they cannot succeed on explicit memory tasks [31]. We thus contend that password retention relying on implicit memory retrieval has the potential to be _stable, long-lasting, and equitable_. Some functions proposed in section 4.1 are based on this capacity for detailed storage and fast retrieval in visual memory. The _Memory Palace_ method uses visually-cued subkey recollection. This can be further improved by using physical copies of partial visual images for cues, eliminating the cognitive load of remembering visual cues themselves. (See section 4 for details of these protocols.) We now briefly explore the act of using partial images as visual cues (figure 1) for password-subkey retrieval. We define \(p_{i}\) as the probability with which a random user correctly identifies a partial image such as those above, when they are primed on the original completed image \(i\). \(n_{i}\) is the probability that they identify the partial image if they are not primed on the original image. The priming effect\({}^{4}\) is \(\alpha\), with \(\alpha>0\) and \(p_{i}\geq n_{i}+\alpha\), i.e., the probability of correctly identifying partial images with priming is greater than the probability of correct identification without priming [11]. Users may choose to use cues for all of their accounts, which would have required 130 cue-subkey associations for the average user last year [17].
However, this is unlikely, and most users may deploy hash functions and cues for only the most sensitive data. Footnote 4: All images have demonstrably high priming "strength" [31], i.e., our images are already embedded in the user's mind (familiar places that they can navigate mentally). What remains, then, is to evaluate the success of an adversarial (without cue-subkey associations) attack. Fortunately, this is well-established in neuroscientific literature; we paraphrase [11]: _Assuming an adversary knows \(p_{i},n_{i}\), and the correct label for that image, an optimal adversarial strategy is to maximize the probability of recovery of those images without knowledge of the set \(U\) on which the user was primed (since this set \(U\) exists uniquely in the mind of each user). The best strategy is to guess each image's label at random. However, suppose an adversary is allowed to recover a user's [password] with probability at most 0.5% (false positive rate). For valid recovery to succeed at least 97.5% of the time (false negative rate of 2.5%), a user would need to correctly label 135 images without prior knowledge to recover a word._\({}^{5}\) Footnote 5: See [11] for a detailed proof. Users in a 2004 study on Password Memorability and Security [42] were observed to use their own password generation methods, which were usually weak, yet met the security requirements demanded by websites. We thus propose that exploiting users' unique configurations of memory as a source of randomness enables compelling, secure password generation. Figure 1: Complete and partially complete line drawings for visually-cued subkey priming based on user subkeys from the Memory Palace.

## 3 Password Security Advice

There are three common password-hygiene errors [40] - choosing simple passwords (123456, \(iloveyou\), \(qwerty\), etc.), insecure storage, and password reuse. Attacks\({}^{6}\) include guessing (common passwords), brute force, and dictionary attacks.
The passwords mentioned above have 28, 40, and 32 bits of entropy respectively, which require around half a million attempts [12] to crack. (In reality, a hacker would guess common passwords first, and thus break these easily.) With the aid of GPU-supported tools like Hashcat, RainbowCrack, etc., a 9-character password can be cracked in an alarmingly short time [41] - around 18 minutes to check salted hashes for every 9-character password, assuming ideal conditions.\({}^{7}\) Footnote 6: Cracking means an adversary with access to password hashes has found a collision. Footnote 7: In practice, the time taken to find a password's hash depends on the alphabet used, degree of parallelization, hardware specifications such as processor flops, etc. [8] Given these issues, many websites/applications suggest strategies users should follow to create secure passwords. To better understand such password advice, we surveyed 400 highly visited platforms, compiled manually and through public lists [19, 39, 36, 1]. Of these, 54 offered password advice; see table 1 for a summary. Websites suggest tactics such as intentionally misspelling words and replacing letters ('@' for 'a', '$' for 's', etc., so that 'its raining cats and dogs' becomes '1tsrAIn1NGcts&DGS!'). However, there exist various dictionaries of special characters, common misspellings, and symbol substitutions. Hence, such tricks are ineffective against modern hackers [32]. An attack with these dictionaries exposed hash collisions such as "April221973" and "Qbesancon321". What, then, is a secure password? The RSA challenge by _RSA Laboratories_ [38] issued random keys from 40 up to 128 bits with ciphertexts. Distributed.net has been working on the 72-bit key for over 6,400 days as of July 2020 [35]; at this pace, it would take around 200,000 days to search the entire keyspace. Currently, 72 bits of entropy provide sufficient security; 80 bits of entropy are recommended for long-term security [38].
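A back-of-the-envelope version of these keyspace estimates can be written in a few lines; the guess rate below is hypothetical, since real cracking speed depends on hardware, salting, and the hash used, as footnote 7 notes:

```python
def days_to_exhaust(entropy_bits: float, guesses_per_second: float) -> float:
    """Days needed to try every key in a keyspace of 2**entropy_bits."""
    keyspace = 2.0 ** entropy_bits
    return keyspace / guesses_per_second / 86_400  # 86,400 seconds per day
```

At an assumed \(10^{12}\) guesses per second, a 40-bit keyspace falls in about a second of guessing, while 72 bits takes tens of thousands of days, consistent with the recommendation of 80 bits for long-term security.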
Table 1: Advice from 400 highly-visited websites and apps (54 provided advice).

| Parameters Suggested | % of platforms |
| --- | --- |
| Length (< 6 characters) | 20% |
| Length (>= 6 characters) | 20% |
| Length (>= 8 characters) | 41% |
| Length (>= 10 characters) | 19% |
| Numerals | 83% |
| Uppercase | 65% |
| Special Characters | 63% |
| Password Managers | 9% |

## 4 Human-Computable Hashing Algorithms

The functions proposed here draw upon the ideas discussed in section 2 to balance security and ease of use. We describe all algorithms and provide examples for cases that might otherwise be confusing. Algorithms were primarily designed to determine which approaches (subkey generation, visualization, addition, implicit association, etc.) produce the most effective and secure passwords. For this reason, they vary widely and cover a range of password generation tactics. We perform a naive entropy calculation (assuming letter entropy values are independent) for the purposes of comparing hashing algorithms. _These numbers should **not** be taken seriously as proxies for security in and of themselves, but may be useful for comparison_. Difficult-to-use schemes might push users to simply write the password down (or ignore the scheme). A "good" function produces high-entropy passwords that are easy to compute. Typically, hash function security is judged by pre-image resistance, collision resistance, randomness, etc. [14]. That is not easily done for our functions: we cannot generate billions (or even millions) of hashes, as the process of generation relies on individuals' unique memory representations and sources of randomness (discussed below and in appendix 0.A). We discuss some metrics we can use in section 5 and cryptographic details in appendix 0.A.
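A sketch of this kind of naive, independence-assuming entropy estimate follows; the character-pool sizes are one common convention, not necessarily the paper's exact parameters:

```python
import math

def naive_entropy_bits(password: str) -> float:
    """Estimate entropy as length * log2(pool size), assuming independent characters."""
    pool = 0
    if any(c.islower() for c in password):
        pool += 26
    if any(c.isupper() for c in password):
        pool += 26
    if any(c.isdigit() for c in password):
        pool += 10
    if any(not c.isalnum() for c in password):
        pool += 33  # printable ASCII punctuation/symbols
    return len(password) * math.log2(pool) if pool else 0.0
```

Under this estimate, adding digits or symbols raises entropy both by widening the pool and by lengthening the password, which is the comparison the per-scheme numbers below are meant to support.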
### Description of the Schemes We describe the following human-computable hash functions: Memory Palace, Scrambled Box, Song Password, Internal Sentence. \(w\) is the website name, \(s\) is the single secret user key, and \(h\) is the candidate for \(F\). \(F\) and \(h\) are functions of \(R\), the unique configuration of each user's memory. Each source of randomness is indicated by \({}_{R}\) and specified at the end of each algorithm. Sources are elaborated on in Appendix 0.A. Common sources of randomness across all algorithms: unique memory associations; choosing between symbols, numerals or letters on the same key. **Memory Palace.** \(s\): A location\({}_{R}\) very familiar to the user. \(h_{R}(s,w)\): 1. _subkey generation_ Mentally navigate the location using each letter in \(w\). For vowels, turn left and walk straight\({}_{R}\); otherwise, turn right and walk straight. After reaching the end of the website name, think of a word (or words) that describes what the user faces. (If \(w=gmail\), visualizing a familiar location, mentally move right and straight twice, then left and straight twice, then right and straight once. The subkey is a description of what you face.) 2. _group sum_ Divide the word(s) into groups of 2 letters (pairs). Sum each group using letter values to create a new letter. (Letters map to \(\{a=1,\ldots,z=26\}\); if the sum overflows, subtract 26 from it.) If the subkey can't be evenly split, add a favorite letter\({}_{R}\) to the end. (If the subkey = white birds, split into wh, it, eb, ir, ds. Sum into \(w+h=e\), \(i+t=c\), \(e+b=g\), \(i+r=a\), \(d+s=w\).) 3. _group character_ If the first letter of a pair is a vowel, write the symbol/letter above and to its immediate diagonal\({}_{R}\) left on the keyboard after the letter from the group sum. Else, write the symbol/letter above and to its immediate diagonal right on the keyboard. (Described and illustrated visually during the survey.) \(password\): Alternate group sum and group character.
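The _group sum_ step is deterministic and easy to check mechanically; a minimal sketch (the padding letter is a per-user choice, fixed to \(x\) here for illustration):

```python
def group_sum(subkey: str, pad: str = "x") -> str:
    """Sum letter pairs (a=1 .. z=26), subtracting 26 on overflow."""
    word = subkey.lower().replace(" ", "")
    if len(word) % 2:
        word += pad  # favourite letter appended when the length is odd
    letters = []
    for i in range(0, len(word), 2):
        total = (ord(word[i]) - 96) + (ord(word[i + 1]) - 96)
        if total > 26:
            total -= 26
        letters.append(chr(total + 96))
    return "".join(letters)
```

For the subkey "white birds" this reproduces the pairs in the worked example: wh→e, it→c, eb→g, ir→a, ds→w, i.e. "ecgaw".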
(Alternating group sum letters with corresponding diagonal symbols, \(password\) = e3cfgya1w3.) Randomness: Spatial characteristics of direction, number of steps to take when walking. Letter preference when appending letters to make the length of the subkey even. Interpretation of diagonal angle, choosing the \(i^{th}\) symbol along the diagonal. **Scrambled Box.** \(global\): A 10x10 table of symbols, numbers and letters (repetitions allowed). Movements associated with each story element (can be changed): Sad = up; Memorable characters (animals, villains, etc.) = diagonal to the right and down; Events that move the story forward = horizontal to the right; Happy = move to the opposite corner of the table. \(s\): A well-known, easily remembered story\({}_{R}\) name. \(h_{R}(s,w)\): * _S-box generation_ Find 4 elements (e.g., emotions, events, memorable characters) in the story's plot and write them down in order. For the \(x^{th}\) element of the story, choose an \(x\times x\) square and move it by \(x\) squares, using the associated direction. Swap it with the square it replaces. * _S-box-website mapping_ Connect the story to the website to come up with a word/words\({}_{R}\). Convert the letters in the word(s) to their values (mapping \(a=1,\ldots,z=26\)) and append a 0 to any single-digit value. Treat the resulting two-digit integers as (x,y) coordinates and find the corresponding characters in the table. Save this sequence of characters as the \(password\). (For example: Connecting Tarzan to Amazon may result in the word "shirt", which maps to letter values "19 8 9 18 20". Appending 0s to single digits gives "19 80 90 18 20", and mapping to the S-box gives coordinates (1,9), (8,0) etc. The password: vtu_) Figure 2: Example 10x10 box and S-box, with scrambling highlighted. **Song Password.** This method relies on two sources of randomness: songs and a 4-digit key. \(s\): A 4-digit pin. \(h_{R}(s,w)\): * Reduce \(w\) to a 4-letter mnemonic.
(_Flipkart_ becomes \(f\,p\,k\,t\).) * Choose a 4-digit key\({}_{R}\). (3 8 1 9) * Choose 4 songs\({}_{R}\) starting from each letter of the mnemonic. These should be songs (not necessarily in English!) that have significance or are easy to remember. (_Fade_, _Panama_, _King of Mars_ and _Teddy Boy_.) * Choose words\({}_{R}\) from each song, corresponding to each digit of the key, and concatenate to form a _Song String_, \(S_{x}\). (\(3^{rd}\) word from _Fade_, \(8^{th}\) word from _Panama_, \(1^{st}\) word from _King of Mars_ and \(9^{th}\) word from _Teddy Boy_.) * After every vowel in \(S_{x}\), insert a special character closest\({}_{R}\) to the vowel on the keyboard. If there is more than 1 special character equidistant from the vowel, choose\({}_{R}\) one and remember it. (For \(o\), '(' or ')'; for \(e\), '$' or '#'.) * Choose three characters\({}_{R}\) (letters or symbols) and move them to the end of the password. Repeat with another group of three. Then remove every alternate character (starting with the first). \(password\): resultant string. Sources of randomness: interpretations of linguistic fillers as words, choice of special character and characters to move. **Internal Sentence.** \(s\): A rarely used word\({}_{R}\) from any language. \(h_{R}(s,w)\): Create a sentence connecting the website to the word. \(password\): Sentence created.

## 5 Analysis of Hash Functions

This section analyzes the security and real-world effectiveness of our hash functions via several metrics, including a user study: 134 individuals aged 18-25 were surveyed, with each user generating passwords using 2 different randomly-assigned algorithms. Each algorithm had an average of 56 responses. We also include Cue-Pin-Select [4] in our survey. ### Generation and Retention Previous attempts have suggested "intolerably slow" methods [11]. Our protocols can be executed by the average user within 5 minutes for generation, and recollection time decreases significantly with repetition.
The key human-computability properties of \(F_{R}\) are: (1) reliance on cognitive and visual cues for stable, rapid recall;\({}^{8}\) (2) minimal effort, and limited access to education or writing resources. Footnote 8: Some of which are proven to last in memory for 17 years without repeated rehearsal [11] Some of our methods retain significant security without access to any external materials for generation. The Memory Palace and Internal Sentence protocols need only a keyboard (or pictures of standard keyboards; no writing materials or internet, though access to these would decrease cognitive load). The ability to recall or regenerate a password is essential to its effective security; lower memorability leads to frequent password resets and frustration that may lead to users abandoning the algorithm. Users were surveyed over a week to test password retention. See fig. 3 and table 2. Methods with less successful recall (Cue-Pin-Select, Song Password and Scrambled Box) seem to require more explicit memorization. Associative techniques can exponentially increase ease of password recollection (Memory Palace, Internal Sentence), and provably improve system security [9]. Therefore, we recommend the use of partial visual cues for subkey association whenever possible. The rightmost area of figure 4 indicates perfectly recalled passwords, with larger bubbles indicating a more significant percentage of users with perfect recollection. Ideal functions are large bubbles at the rightmost end of the graph with an average password length above 10 characters (see section 3). Each time a password is recalled using a key, a user-familiar memory (object, space, color, etc.) is associated with the key. This key-memory association is repeated until thinking of one automatically brings the other to mind [23]. We emphasize that, as in all reasonable systems, the generation method is public, and the only secret that needs to be remembered is this key.
The advantage of the methods proposed in this paper (such as visual, associative, implicit memory) is that they can be adapted to existing password generation methods. _E.g._, Cue-Pin-Select can be modified to choose random words with visual or associative cues drawing on implicit memory.

Table 2: R: recall/regeneration of passwords. Attempts: number of people who attempted R. Complete R: exact recall/regeneration of 1 or more passwords created.

| Hashing algorithm | Attempts | Complete R | Partial R |
| --- | --- | --- | --- |
| Internal Sentence | 42 | 21 (50%) | 7 (17%) |
| Memory Palace | 45 | 19 (43%) | 6 (14%) |
| Song Words | 42 | 10 (24%) | 11 (27%) |
| Cue Pin Select | 47 | 11 (24%) | 5 (11%) |
| Scrambled Box | 29 | 6 (21%) | 4 (14%) |

Figure 3: Password recollection visualized (based on table 2).

### Effective Security

We propose the concept of _effective security_. A password generation scheme may be incredibly secure, but is useless\({}^{9}\) if it is so hard that most users just write down their passwords. (See fig. 5.) The effective security of a function \(F_{R}\) is the actual difficulty of breaking one of its assumptions in real-world use by laypersons. The ideal human-computable hash function is easy enough (and grows easier through repeated use) to encourage humans to use it, while retaining the necessary entropy to ensure security by resisting attacks. Footnote 9: Assuming an appropriate threat actor – imagining an adversarial ‘evil’ sibling with occasional read-only access to your living space is a useful rule of thumb. Traditional cryptographic evaluations are built to evaluate functions designed for computers. We present a range of strategies for security evaluation in appendix 0.A.
These strategies are **not** indicative of security by themselves, but taken in combination provide a good measure of the relative security of each function; further work is required to understand the security of such methods.

### User Study and Improvements

We perform a survey comprising \(n=134\) individuals, with an average of 56 users suggesting improvements for each algorithm. We present baseline entropy evaluations for each function,\({}^{10}\) measure passwords from each function against current security standards, and suggest improvements based on user feedback. Footnote 10: Assuming character entropies are independent. We do not consider dictionary attacks, character frequencies, etc., as these would require a large number of passwords to be statistically valid, and due to unique user memory configurations \(R\) we cannot computationally generate large numbers of passwords. Figure 4: Bubble chart of the rate of password recollection for hash functions. Each function is represented by a color; the frequency of each rate of recall corresponds to the size of each bubble (recall measured by \(\mathbb{S}(p_{i},p_{r})\), where \(\mathbb{S}\) is the Gestalt Pattern Matching (Ratcliff/Obershelp) string similarity [26, 28], \(p_{i}\) is the initial password and \(p_{r}\) is the remembered password); the axes measure password length and the frequency of each length. Our human-computable hash functions average a password entropy of 78.07 bits, significantly higher than the average entropy of 40.54 bits per password as estimated by Microsoft [13]. These functions also encourage higher entropy by increasing use and distribution of symbols and capitalization. Memory Palace, Song Password, and Scrambled Box increase the number of symbols per average password to 3.188 symbols, compared to a baseline of 0.2 symbols. Capital letters decrease to 0.412, lower than 1.1 without hash functions.
However, as evident in figure 7, capitalization is more distributed across positions, rather than concentrated towards the first character of the password [20]. Our results are reasonably representative of the general population of password users [24]. Our choice of sample size is based on [21] and [24]. Our sample is drawn from students in a medium-sized university in India and may be applicable to similar demographic profiles. In addition, the sample represents a range of language, educational and income backgrounds. However, the proportions of these demographics are not the same as the general population. Beyond the obvious age bias (college-aged individuals), the sample is biased towards individuals willing to participate in the survey in exchange for food and money (both standardized), and all data is self-reported. In addition, Cochran's formula [22] recommends a sample size of 100 individuals based on the proportion of internet users in the world (53.6% of the global population in 2019 [7]), a 95% confidence interval and a 10% error margin. Compared to previous human-computable password research [4], we use a significantly larger sample size with a more representative demographic of password users. Thus, our results, extrapolated prudently, can apply to the broader population. Figure 5: Mapping effective security (password security and user comfort with algorithms) and ease of use (user perception on a scale of requiring no resources, to requiring computers). _Axes are exaggerated subjectively for illustration._ Scrambled Box and Song require writing (the latter requires access to a music repository) and are harder than the first two methods for users. Song and Cue-Pin-Select also require greater intermediate key generation: choosing and explicitly recalling random unique words/songs and a pin/word. Comparatively, Internal Sentence and Memory Palace use associations already familiar to users.
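The recall measure \(\mathbb{S}(p_{i},p_{r})\) used in figure 4, Gestalt pattern matching (Ratcliff/Obershelp), is available directly in Python's standard library:

```python
from difflib import SequenceMatcher

def recall_similarity(initial: str, remembered: str) -> float:
    """Gestalt pattern matching ratio between the created and recalled password."""
    return SequenceMatcher(None, initial, remembered).ratio()
```

A ratio of 1.0 is perfect recall (the rightmost bubbles in figure 4); 0.0 means no characters matched at all.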
Graceful degradation in table 3 measures the increase in difficulty with decreasing education levels. Larger graceful degradation corresponds to functions that require higher education levels.

Table 3: Graceful degradation, mean entropies, and their standard deviations.

| Function | Mean entropy (bits) | Standard deviation | Graceful Degradation |
| --- | --- | --- | --- |
| Internal Method | 153.95 | 97.14 | 0.66 |
| Memory Palace | 51.08 | 25.84 | 0.38 |
| Song Words | 74.57 | 44.12 | 0.49 |
| Cue Pin Select | 61.96 | 17.62 | 1.06 |
| Scrambled Box | 45.15 | 33.15 | 0.84 |

Figure 6: Each point represents the mean entropy of passwords with some user-perceived difficulty (std. dev. error bars). Memory Palace Step 1 was presented as "method 1" to users; 2 included all steps described in section 4.1. X-axis: User Perceived Difficulty; Y-axis: Password Entropy. **Memory Palace.** With the aid of partial visual cues, memorizing hundreds of cues for subkey generation (objects, areas, memories, etc.) [17] is unnecessary and the user can focus on subkey-cue associations. Most keys were in 4% of the 100 most common words in English, including references to common household objects and local languages. After hashing subkeys with each website, no English words were identifiable (excluding users who misinterpreted instructions). Users were satisfied with the security but suggested clearer navigational guidance. A common struggle was navigating dead-ends with visually unremarkable cues. A significant proportion of users struggled with Step 2 and favored Steps 1 and 3. Some users stated they would adapt Step 1 for future password generation. **Scrambled Box.** The key is the 'box' of pseudo-randomly scrambled symbols. This can be written down or shared, but must be unique to each user, who only needs to remember website-subkey associations.
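The S-box-website mapping step of this scheme (section 4.1) is mechanical once the box and the story word are fixed; a sketch with a hypothetical, unscrambled demo table standing in for a user's secret box, using the letter values of the paper's worked example (\(a=1,\ldots,z=26\)):

```python
def sbox_password(word: str, table: list[list[str]]) -> str:
    """Map each letter of the story word to an (x, y) coordinate in a 10x10 table."""
    out = []
    for ch in word.lower():
        value = ord(ch) - ord("a") + 1                    # a=1 .. z=26
        pair = f"{value}0" if value < 10 else str(value)  # pad single digits with 0
        x, y = int(pair[0]), int(pair[1])                 # two digits -> coordinates
        out.append(table[x][y])
    return "".join(out)

# Hypothetical stand-in for a user's secret scrambled box.
demo_box = [[chr(ord("A") + (10 * x + y) % 26) for y in range(10)]
            for x in range(10)]
```

Each letter yields one table character, so "shirt" yields a five-character password from this demo box; with a real user's scrambled box the output characters would differ.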
Users found rearranging symbols hard and preferred fewer instructions, but liked the lack of memorization. **Song Password.** This scheme amplifies randomness in the input. For example, using songs _Fade_, _Panama_, _King of Mars_ and _Teddy Boy_ with user 1's PIN\({}_{1}\) = 3819 and user 2's PIN\({}_{2}\) = 7144, the passwords generated are mse@i(o)* and tsto)mhS (a similarity of 0.33 [26, 28]).

Table 4: Security refers to the percentage of passwords with \(\geq\)1 number or symbol. Length and difficulty are averages. Difficulty was assessed by users.

| | Password length | Security | Difficulty (1-7) |
| --- | --- | --- | --- |
| Internal Sentence | 25.91 | 9.90 | 2.52 |
| Memory Palace | 8.42 | 86.06 | 5.38 |
| Song Words | 11.50 | 92.16 | 5.41 |
| Cue Pin Select | 12.29 | 3.21 | 4.44 |
| Scrambled Box | 6.71 | 94.11 | 5.68 |

Figure 7: Heatmap of the incidence of capital letters at different indexes. Passwords \(>25\) characters are omitted. Users struggled with pins and associating different songs. Some users preferred not to remove or shift alternate characters, while others remarked they would adapt this method for future password generation. **Internal Sentence.** Users preferred this method for ease of use but struggled with remembering word order, verb and adjective choice, etc., or found the passwords generated too long to recall. Users felt this method was insecure as it did not generate special characters or capitalization. The entropy for this method is misleading, as passwords often contain words susceptible to dictionary attacks. **Cue-Pin-Select.** Word and pin recollection were challenging; users preferred associating words with cues over random cues, and suggested reducing the number of random words from 6 to 4. In general, users requested stronger associative and implicit memory modifications to the method.
Across all passwords and algorithms mentioned in section 4.1, the average password entropy is 78.07 bits and the average password length is 11.83 characters (i.e., numbers, symbols and letters). ### Machine-Learning Based Analysis using LSTMs A simple machine learning system was used to predict the \(k^{th}\) character given the previous \(k-1\) characters of the password, to further evaluate randomness. This is based on a long line of research starting from Shannon's entropy experiment [18]. We used all characters except the last for training (see Table 5). Figure 8: Symbol occurrence by method. Y-axis: log scale. X-axis based on symbol rank (most to least probable). SHA3-256 hashes were converted to latin-1 encoding to get typable character frequencies [29]. Memory Palace as in figure 6.

## 6 Real-World Password Generation Methods

How do people currently generate (and remember) passwords? Our survey suggests that people use a combination of words, followed by digits and symbols (in that order), indicating construction in order of ease of recollection. Common associations: names of relatives, fictional characters, nicknames, etc.; for digits or symbols: birth dates, reversed phone numbers, even credit card numbers! Some used inventive techniques to balance security with memorability: account expiry dates, rhymes, snacks and manufacturing dates, and slang words. Several users reused passwords with the awareness of compromised security, citing a lack of convenient options. A small population added random words from different languages. (Full database of results omitted for brevity.) We observed that users designed passwords with human adversaries in mind and thus mistakenly believed that using animals or objects they disliked, using common character substitutions for letters ("leetspeak"), or misspelling words created a secure password.
Based on previous work [44] and our survey, we recommend that all platforms with password requirements brief users on current strategies used by computationally-equipped adversaries, such as dictionary attacks, frequency analysis, etc., to reduce the usage of insecure passwords.

## 7 Conclusion

We propose a range of human-computable hashing algorithms with string and non-string inputs, designed for password generation and management. We exploit users' unique memory configurations to drive our design, drawing upon existing neuroscientific research. We also collate current password advice across hundreds of popular websites and applications, and survey users on their current password generation methods, highlighting major issues and discussing mitigation. Our functions are validated and tested using a survey (\(n=134\)) to understand real-world usability. We note that larger surveys across a range of age groups are required to better classify the security and usability implications. Further work also needs to be done to explore the kinds of atomic human-computed operations that produce stable output useful for cryptography.

Table 5: Testing accuracy (in %). We used a Long Short-Term Memory network [33] to learn dependencies. The 50-cell LSTM was tested with two trials of 100 and 200 epochs.

| Scheme | 100 epochs | 200 epochs |
| --- | --- | --- |
| Internal Method | 53.13 | 58.41 |
| Memory Palace | 18.42 | 19.91 |
| Song Words | 21.71 | 23.37 |
| Cue Pin Select | 47.56 | 46.76 |
| Scrambled Box | 29.31 | 28.44 |
2309.00261
Suppression of both superconductivity and structural transition in hole-doped MoTe$_2$ induced by Ta substitution
Type-II Weyl semimetal MoTe$_2$ exhibits a first-order structural transition at $T_s$ $\sim$250~K and superconducts at $T_c$ $\sim$0.1~K at ambient pressure. Both $T_s$ and $T_c$ can be manipulated by several tuning parameters, such as hydrostatic pressure and chemical substitution. It is often reported that suppressing $T_s$ enhances $T_c$, but our study shows a different behaviour when MoTe$_2$ is hole-doped by Ta. When $T_s$ is suppressed by Ta doping, $T_c$ is also suppressed. Our findings suggest that the suppression of $T_s$ does not necessarily enhance superconductivity in MoTe$_2$. By connecting with the findings of electron-doped MoTe$_2$, we argue that varying electron carrier concentration can effectively tune $T_c$. In addition, the Hall coefficient is enhanced around the doping region, where $T_s$ is completely suppressed, suggesting that the critical scattering around the structural transition may also play a role in suppressing $T_c$.
Siu Tung Lam, K. Y. Yip, Swee K. Goh, Kwing To Lai
2023-09-01T05:33:26Z
http://arxiv.org/abs/2309.00261v1
Suppression of both superconductivity and structural transition in hole-doped MoTe\({}_{2}\) induced by Ta substitution ###### Abstract Type-II Weyl semimetal MoTe\({}_{2}\) exhibits a first-order structural transition at \(T_{s}\sim\)250 K and superconducts at \(T_{c}\sim\)0.1 K at ambient pressure. Both \(T_{s}\) and \(T_{c}\) can be manipulated by several tuning parameters, such as hydrostatic pressure and chemical substitution. It is often reported that suppressing \(T_{s}\) enhances \(T_{c}\), but our study shows a different behaviour when MoTe\({}_{2}\) is hole-doped by Ta. When \(T_{s}\) is suppressed by Ta doping, \(T_{c}\) is also suppressed. Our findings suggest that the suppression of \(T_{s}\) does not necessarily enhance superconductivity in MoTe\({}_{2}\). By connecting with the findings of electron-doped MoTe\({}_{2}\), we argue that varying electron carrier concentration can effectively tune \(T_{c}\). In addition, the Hall coefficient is enhanced around the doping region, where \(T_{s}\) is completely suppressed, suggesting that the critical scattering around the structural transition may also play a role in suppressing \(T_{c}\). ## I Introduction Superconductivity is found, often by tuning the electronic properties via the application of hydrostatic pressure, in many topological semimetals, such as Cd\({}_{3}\)As\({}_{2}\)[1], ZrTe\({}_{5}\)[2], YPtBi [3; 4; 5; 6], WTe\({}_{2}\)[7; 8; 9; 10] and MoTe\({}_{2}\)[11; 12; 13; 14]. The exotic combination of topological bands and superconductivity offers a unique platform to search for topological superconductivity, where Majorana fermions can be used to develop topological quantum computation [15; 16]. Type-II Weyl semimetal MoTe\({}_{2}\)[17; 18; 19; 20] is one of the promising candidates for hosting topological superconductivity, especially after the discovery of an edge supercurrent [21]. 
At ambient pressure, MoTe\({}_{2}\) undergoes a first-order structural transition at \(T_{s}\sim\) 250 K, changing from a centrosymmetric (nonpolar) monoclinic 1\(T^{\prime}\) phase (space group: \(P2_{1}/m\)) to a noncentrosymmetric (polar) orthorhombic \(T_{d}\) phase (space group: \(Pmn2_{1}\)) upon cooling. At \(T_{c}\sim\) 0.1 K, an additional superconducting phase transition occurs. Owing to its low \(T_{c}\), it is challenging to experimentally study the superconductivity of MoTe\({}_{2}\). Finding a suitable way to control its \(T_{c}\) becomes an outstanding issue. Meanwhile, the competition between structural and superconducting transitions in MoTe\({}_{2}\) has been reported in previous studies using a variety of tuning parameters. Through the application of pressure [11; 13; 22; 23; 24; 25], \(T_{s}\) is suppressed to 0 K at \(\sim\)10 kbar, resulting in a complete removal of the \(T_{d}\) phase at high pressures. At the same time, \(T_{c}\) is enhanced 30-fold (\(\sim\)4 K) at \(\sim\)15 kbar. These behaviours demonstrate the anticorrelation between \(T_{s}\) and \(T_{c}\). A similar anticorrelation can also be observed via isovalent chemical substitutions (S/Se substituting Te [22; 26]) and electron doping (Te deficiency [27] and Re substituting Mo [28]). Note that via the substitution of Mo by W, \(T_{s}\) is enhanced at ambient pressure, and the pressure-induced \(T_{c}\) is lower than that observed in pristine MoTe\({}_{2}\), demonstrating the anticorrelation between \(T_{s}\) and \(T_{c}\) again [29]. The superconductivity of hole-doped MoTe\({}_{2}\) has not been studied to the same extent as the electron-doped counterpart. The introduction of hole carriers in monolayer MoTe\({}_{2}\) through gating has been shown to reduce its \(T_{c}\) [30]. On the other hand, the effect of hole doping on bulk MoTe\({}_{2}\) has been explored in depth through the substitution of Nb for Mo [31; 32].
Although no evidence of superconductivity with \(T_{c}>\) 2 K has been found up to the highest studied doping level \(x=0.22\) in Mo\({}_{1-x}\)Nb\({}_{x}\)Te\({}_{2}\), indicating a lack of significant enhancement in \(T_{c}\) through hole doping, the hole-doping phase diagram of Mo\({}_{1-x}\)Nb\({}_{x}\)Te\({}_{2}\) in the normal state was extensively investigated by Sakai _et al._[32]. They revealed that the suppression of \(T_{s}\) upon Nb doping is associated with a huge enhancement of thermopower at low temperatures, which they attributed to the critical scattering arising from the boundary of the nonpolar-to-polar transition around \(T_{s}\). Nevertheless, it remains uncertain how \(T_{c}\) evolves and what the correlation between \(T_{s}\) and \(T_{c}\) is upon hole doping. Understanding these issues can help us reveal the key factors that control \(T_{c}\) of MoTe\({}_{2}\). In this article, we study the effect of hole doping on MoTe\({}_{2}\) via the substitution of Mo by Ta. Transport measurements were conducted down to \(\sim\)30 mK to track the evolution of both \(T_{s}\) and \(T_{c}\), and surprisingly, we found that both \(T_{s}\) and \(T_{c}\) are suppressed and eventually vanish with increasing hole doping, contrary to the anticorrelation between \(T_{s}\) and \(T_{c}\) established in MoTe\({}_{2}\) controlled by other tuning parameters.

## II Experiment

Single crystals of Mo\({}_{1-x}\)Ta\({}_{x}\)Te\({}_{2}\) were grown by the self-flux method. The mixture of Mo powder (99.999%, Alfa Aesar), Te (99.99999% lumps, Ultimate Material), and Ta powder (99.99%, Sigma Aldrich) was first placed into an alumina crucible, with a stoichiometric ratio of Mo:Ta:Te = 1\(-\)\(x\):\(x\):20. The alumina crucible was inserted into a quartz tube before the quartz tube was sealed under vacuum. The sealed ampule was then heated to 1100 \({}^{\circ}\)C within 24 hours and held there for 24 hours, followed by slow cooling to 880 \({}^{\circ}\)C for 400 hours.
Finally, the ampule was taken out from the furnace at 880 \({}^{\circ}\)C and centrifuged to remove the excess Te flux. X-ray diffraction (XRD) data were collected at room temperature using a Rigaku X-ray diffractometer with Cu \(K_{\alpha}\) radiation. The chemical compositions were characterized by a JEOL JSM-7800F scanning electron microscope equipped with an Oxford energy-dispersive X-ray (EDX) spectrometer. A standard four-probe method was used to measure temperature-dependent resistance in a Bluefors dilution refrigerator with a base temperature of 30 mK. A standard six-probe method was used to measure the Hall effect in a Quantum Design Physical Property Measurement System with a temperature range from 300 K to 2 K and a magnetic field of \(\pm\)14 T.

## III Results and Discussion

Figure 1(a) shows the XRD spectra for the Mo\({}_{1-x}\)Ta\({}_{x}\)Te\({}_{2}\) single crystals with \(x\) = 0, 0.021, 0.042, 0.046, 0.065, 0.078, 0.097, 0.118, and 0.173. The peaks shown in all spectra are well indexed by the (00\(L\)) planes originating from the pattern of 1\(T^{\prime}\)-MoTe\({}_{2}\), confirming that all crystals are single-crystalline 1\(T^{\prime}\)-MoTe\({}_{2}\) at room temperature. Figure 1(b) focuses on the (002) peaks of all samples, which reveal a monotonic shift to a higher 2\(\theta\) when \(x\) increases, indicating a shrinking crystal structure. As the covalent radius of Ta is smaller than that of Mo, this provides crystallographic evidence that Ta is systematically substituting Mo with increasing \(x\). The Mo\({}_{1-x}\)Ta\({}_{x}\)Te\({}_{2}\) crystals measured in XRD were also examined by EDX, from which we determined their elemental compositions and hence the values of \(x\) in each sample. The EDX results are consistent with the findings in the XRD spectra (see Supplemental Material for more details [33]).
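The monotonic shift of the (002) peak described above maps onto a shrinking interplanar spacing through Bragg's law, \(n\lambda=2d\sin\theta\). A minimal sketch of this relation (Python; the Cu \(K_{\alpha}\) wavelength is the standard value, but the peak positions below are hypothetical illustrations, not the measured ones):

```python
import math

CU_KALPHA = 1.5406  # Cu K-alpha wavelength in angstroms (standard value)

def d_spacing(two_theta_deg, wavelength=CU_KALPHA, n=1):
    """Interplanar spacing from Bragg's law: n * lambda = 2 * d * sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    return n * wavelength / (2.0 * math.sin(theta))

# Hypothetical (002) peak positions for increasing Ta content x
for x, two_theta in [(0.0, 12.70), (0.065, 12.78), (0.173, 12.90)]:
    print(f"x = {x:.3f}: 2theta = {two_theta:.2f} deg -> d(002) = {d_spacing(two_theta):.3f} A")
```

A higher diffraction angle gives a smaller \(d_{002}\), consistent with the smaller covalent radius of Ta shrinking the lattice.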
Figure 2(a) illustrates the temperature dependence of resistivity \(\rho(T)\) of Mo\({}_{1-x}\)Ta\({}_{x}\)Te\({}_{2}\) with \(x=0-0.173\) measured under zero magnetic field. All samples exhibit metallic behaviour. A thermal hysteresis can be observed in pristine MoTe\({}_{2}\) (\(x=0\)) around 150-250 K when the resistivity was measured upon increasing (solid curves) and decreasing temperature (dashed curves), indicating the appearance of the first-order structural transition [11; 13; 14; 35; 34]. This transition persists up to \(x=0.097\). With increasing \(x\), the transition shifts gradually toward lower temperatures, and the hysteresis loop becomes broader. When \(x\geq 0.118\), no hysteresis is observed in the whole temperature range, suggesting that the structural transition vanishes at the high doping region. Figure 2(b) shows the resistivity data normalized to the value of \(\rho(T)\) at 1 K at the low-temperature region. A superconducting transition, where \(T_{c}\) is defined as the temperature at which the resistivity drops to zero, is observed at \(x=0\) with \(T_{c}\sim 0.1\) K, which is consistent with the previous studies [11; 12; 13; 14; 21; 22; 23; 24; 25; 35]. When \(x\) increases, \(T_{c}\) generally reduces despite a small enhancement to \(\sim\)0.25 K at \(x=0.042\).

Figure 1: (a) X-ray diffraction (XRD) spectra of single crystals of Mo\({}_{1-x}\)Ta\({}_{x}\)Te\({}_{2}\). The peaks of (00\(L\)) are indexed in the figure. (b) Enlarged XRD spectra near the peak of (002). The (002) peak shifts progressively toward a higher diffraction angle 2\(\theta\) when \(x\) increases.

Figure 2: (a) Temperature dependence of resistivity \(\rho(T)\) of Mo\({}_{1-x}\)Ta\({}_{x}\)Te\({}_{2}\) at zero magnetic field. The warm-up (cool-down) data are plotted as solid (dashed) curves. (b) Low-temperature \(\rho(T)\) normalized to the value of \(\rho(1\) K), displaying the superconducting transitions.
At \(x=0.065\), a small resistivity drop that does not reach zero is observed near the base temperature, indicating that the bulk superconductivity is heavily suppressed and only trace superconductivity is detected. When \(x\) further increases (\(\geq 0.078\)), the resistivity data shows no signs of superconductivity. To probe the evolution of the Fermi surface of Mo\({}_{1-x}\)Ta\({}_{x}\)Te\({}_{2}\), we conducted Hall effect measurements. Figure 3 illustrates the magnetic field dependence of Hall resistivity \(\rho_{xy}(B)\) of Mo\({}_{1-x}\)Ta\({}_{x}\)Te\({}_{2}\) with \(x=0\), 0.021, 0.065 and 0.173 at different temperatures measured during warm-up. \(\rho_{xy}(B)\) data for samples at other doping levels can be found in Fig. S2 in Supplemental Material [33]. At \(x=0\) (Fig. 3(a)), \(\rho_{xy}(B)\) has a negative slope over the whole temperature range. At low temperatures, \(\rho_{xy}(B)\) shows a non-linear feature. These features are consistent with the semimetallic nature of MoTe\({}_{2}\), which exhibits nearly perfect electron-hole compensation with a high electron mobility [36; 13]. After introducing Ta doping, the slope of \(\rho_{xy}(B)\) at \(x=0.021\) (Fig. 3(b)) begins to turn positive at high temperatures. When \(x\) further increases, the slope is always positive at all measured temperatures (see Figs. 3(c) and (d) as examples). This trend indicates that Ta doping introduces hole carriers to the samples and the hole carriers are dominant at \(x>0.021\). Moreover, the additional hole carriers destroy the nearly perfect electron-hole compensation, resulting in the linear positive slope of \(\rho_{xy}(B)\) at \(x>0.021\). To further visualize the temperature evolution of the Hall effect of Mo\({}_{1-x}\)Ta\({}_{x}\)Te\({}_{2}\), we extract the Hall coefficient \(R_{H}\) from the slope of \(\rho_{xy}(B)\) in the linear region, and the temperature evolution of \(R_{H}\) is plotted in Fig. 4.
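In the linear region \(\rho_{xy}(B)\approx R_{H}B\), so extracting \(R_{H}\) amounts to a least-squares fit of the slope. A minimal sketch of this step (Python/NumPy, on synthetic data rather than the measured curves; the \(R_{H}\) value used is hypothetical):

```python
import numpy as np

def hall_coefficient(B, rho_xy):
    """Least-squares slope of rho_xy versus B, i.e. R_H in the linear region."""
    slope, _intercept = np.polyfit(B, rho_xy, 1)
    return slope

rng = np.random.default_rng(0)
B = np.linspace(-14, 14, 57)        # field sweep in tesla, matching the +/-14 T range
R_H_true = 2.5e-9                   # hypothetical hole-dominated (positive) Hall coefficient
rho_xy = R_H_true * B + rng.normal(0.0, 1e-11, B.size)  # linear Hall signal plus noise

R_H = hall_coefficient(B, rho_xy)
print(f"fitted R_H = {R_H:.3e} (true value {R_H_true:.3e})")
```

A positive fitted slope signals dominant hole carriers, as seen for \(x>0.021\).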
The \(R_{H}\) data measured at high temperatures during cool-down are also displayed. We find that a thermal hysteresis can also be observed in the \(R_{H}\) data of the samples from \(x=0\) to \(x=0.097\), while the hysteresis is absent in the samples with \(x\geq 0.118\). These results are consistent with the observation of the first-order structural transition in the \(\rho(T)\) data in Fig. 2(a). At \(x=0\) (Fig. 4(b)), \(R_{H}\) shows a strong temperature dependence below 50 K, which is similar to the result reported in previous studies [36; 34]. Upon Ta doping, \(R_{H}\) shifts toward the positive side due to the introduction of additional hole carriers, while the temperature dependence is relatively mild compared to \(x=0\). The most prominent temperature profile of \(R_{H}\) among the Ta-doped samples is that of \(x=0.065\), where the magnitude of \(R_{H}\) (\(|R_{H}|\)) gradually increases with decreasing temperature and reaches its maximum value at 2 K. Interestingly, our results show that \(|R_{H}|\) at 2 K is the largest around \(x=0.065\) (see also Fig. 3(c), where \(\rho_{xy}(B)\) has a steeper slope at 2 K compared to that in Fig. 3(d)), which is different from the expectation that \(R_{H}\) would increase toward the positive side when \(x\) increases. This issue will be further discussed in a later section. We summarize our results and construct a temperature-doping phase diagram of Mo\({}_{1-x}\)Ta\({}_{x}\)Te\({}_{2}\) in Fig. 5, which shows the Ta-doping dependence of \(T_{s}\) and \(T_{c}\). The structural transition temperatures acquired during warm-up (\(T_{s,warm}\)) and cool-down (\(T_{s,cool}\)) are defined by the extrema of the first derivative of \(\rho(T)\) around the transition (see Fig. S3 in Supplemental Material [33]). Both \(T_{s,warm}\) and \(T_{s,cool}\) show a generally decreasing trend with increasing \(x\). Compared to \(T_{s,warm}\), \(T_{s,cool}\) decreases more rapidly with increasing \(x\).
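Locating \(T_{s}\) as an extremum of \(d\rho/dT\) can be sketched numerically as follows (Python/NumPy; the resistivity curve is synthetic, with an illustrative step-like anomaly, not the measured data):

```python
import numpy as np

def transition_temperature(T, rho):
    """Locate T_s as the extremum of the first derivative d(rho)/dT."""
    drho_dT = np.gradient(rho, T)
    return T[np.argmax(np.abs(drho_dT))]

# Synthetic rho(T) with a step-like anomaly at T_s = 230 K (illustrative only)
T = np.linspace(50, 300, 1001)
Ts_true = 230.0
rho = 1.0 + 0.004 * T + 0.3 * np.tanh((T - Ts_true) / 5.0)

print(f"extracted T_s = {transition_temperature(T, rho):.1f} K")
```

Applying the same extraction to warm-up and cool-down sweeps yields \(T_{s,warm}\) and \(T_{s,cool}\) separately.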
When \(x\geq 0.118\), both \(T_{s,warm}\) and \(T_{s,cool}\) are completely suppressed. On the other hand, after experiencing a local maximum at \(x=0.042\), \(T_{c}\) also decreases when \(x\) increases, and drops to zero at \(x\geq 0.065\) (before \(T_{s}\) vanishes). The disappearance of superconductivity is unique in our hole-doping phase diagram; previous phase-diagram studies of MoTe\({}_{2}\) upon pressure [11; 13; 22; 23; 24; 25], isovalent chemical substitution [22; 26], and electron doping [27; 28] typically show the anticorrelation of \(T_{c}\) and \(T_{s}\) as well as a huge enhancement of \(T_{c}\). To shed light on the issue of why the superconductivity of MoTe\({}_{2}\) is suppressed upon hole doping, a contour plot of \(R_{H}\) is overlaid in Fig. 5. We reveal that \(|R_{H}|\) is significantly enhanced around the region where \(T_{s}\) is suppressed to zero (\(x\sim 0.1\)), and \(T_{c}\) vanishes when the enhancement of \(|R_{H}|\) emerges at \(x\sim 0.05\). Compared to the previous studies with other tuning parameters, while \(T_{c}\) increases, low-temperature \(|R_{H}|\) has either a weak electron-doping dependence [27; 28] or decreases with pressure [13]. Meanwhile, a similar enhancement of \(R_{H}\) has been observed in another hole doping study, Nb-doped MoTe\({}_{2}\) [32]. Such enhancement is associated with the enhancement of thermopower divided by temperature \(S/T\), which is maximum around the region where \(T_{s}\) is completely suppressed; our \(R_{H}\) contour plot is reminiscent of the contour plot of \(S/T\) reported in the phase diagram of Nb-doped MoTe\({}_{2}\) (Fig. 1(b) in Ref. [32]).

Figure 3: Magnetic field dependence of Hall resistivity \(\rho_{xy}(B)\) of Mo\({}_{1-x}\)Ta\({}_{x}\)Te\({}_{2}\) with (a) \(x=0\), (b) \(x=0.021\), (c) \(x=0.065\), and (d) \(x=0.173\) collected during warm-up. The colour scale at the right indicates the measured temperature.
According to Sakai _et al._'s argument, both enhancements of \(R_{H}\) and \(S/T\) are attributed to the strong fluctuation or phase separation around the nonpolar-polar structural transition, giving rise to critical scattering effects on the carriers [32]. Combining this statement with our phase diagram, we infer that the critical scattering may also hinder the formation of Cooper pairs, and therefore suppress superconductivity. Further investigations on the competition between superconductivity and critical scattering are highly desired to confirm this picture. Another possible explanation for the suppression of superconductivity is related to the change in the Fermi surface topology upon hole doping. Cho _et al._[27] have performed theoretical calculations on the impact of electron and hole doping on \(T_{c}\). While they have attributed the increase in \(T_{c}\) upon electron doping (arising from Te vacancy in MoTe\({}_{2-x}\)) to the enhancement of the density of states at the Fermi level (\(N(E_{F})\)) and the electron-phonon coupling constant (\(\lambda\)), they have also predicted that, upon hole doping, \(N(E_{F})\) and \(\lambda\) will be suppressed and therefore \(T_{c}\) will decrease, which is consistent with our experimental findings. Cho _et al._ further attributed the change in \(\lambda\) to phonon vectors connecting electron Fermi pockets, which are enlarged upon electron doping according to their calculations. In contrast, upon hole doping, electron pockets shrink and only spherical-shaped hole pockets remain at the \(\Gamma\) point [27; 32]. In the situation without phonon vectors linking electron pockets, \(\lambda\) will be suppressed and hence \(T_{c}\) will be reduced. Therefore, our study provides solid experimental evidence supporting Cho _et al._'s theoretical prediction.
To further elaborate on this idea, we connect our hole-doping phase diagram with the electron-doping phase diagram (based on the result of Te-deficient MoTe\({}_{2}\) from Cho _et al._[27]) and plot the combined phase diagram in Fig. 6. It unambiguously shows the asymmetry between the hole-doping and electron-doping sides of the phase diagram, which is reminiscent of the different behaviours of hole-doped and electron-doped cuprate superconductors [37; 38]. While \(T_{s}\) shows a similar suppression upon both hole- and electron-doping, the doping dependence of \(T_{c}\) behaves differently. In the electron-doping region (the right-hand side of Fig. 6), \(T_{c}\) is largely enhanced. However, when we move to the hole-doping region (the left-hand side of Fig. 6), \(T_{c}\) is heavily suppressed. This demonstrates a clear trend that \(T_{c}\) can be induced and enhanced when the electron carrier concentration increases, no matter what the phase is.

Figure 4: Temperature dependence of Hall coefficient \(R_{H}\) of (a) Mo\({}_{1-x}\)Ta\({}_{x}\)Te\({}_{2}\) with \(x\neq 0\) and (b) pristine MoTe\({}_{2}\) (\(x=0\)). The closed (open) symbols represent the warm-up (cool-down) data. The cool-down data are only shown at high temperatures.

Figure 5: Temperature-doping phase diagram of Mo\({}_{1-x}\)Ta\({}_{x}\)Te\({}_{2}\). The upward (downward) blue triangles represent \(T_{s}\) defined from the temperature-dependent resistivity data measured during warm-up (cool-down). The solid cyan circles represent \(T_{c}\). The solid curves are guides for the eyes. The colour contour denotes the temperature dependence of Hall coefficient \(R_{H}\) at different doping levels.
Meanwhile, although the critical scattering around the structural transition may contribute to the suppression of superconductivity, our result shows that the tuning of the carrier concentration, which controls the phonon nesting vector(s), provides an effective means to vary the \(T_{c}\) of MoTe\({}_{2}\), regardless of the suppression of \(T_{s}\). These findings provide experimental evidence that enhancing the \(T_{c}\) of MoTe\({}_{2}\) by solely increasing the electron carrier concentration while preserving the topologically nontrivial \(T_{d}\) phase is possible. Such a property can potentially boost the progress of the search for topological superconductivity in MoTe\({}_{2}\), which is currently hindered by its low \(T_{c}\).

## IV Conclusions

In summary, we have investigated the phase diagram of Ta-doped MoTe\({}_{2}\), Mo\({}_{1-x}\)Ta\({}_{x}\)Te\({}_{2}\), with \(x=0-0.173\) through magnetotransport measurements. Single crystals of Mo\({}_{1-x}\)Ta\({}_{x}\)Te\({}_{2}\) were successfully grown by the self-flux method. X-ray diffraction and energy-dispersive X-ray spectroscopy have confirmed that Mo is partially substituted by Ta in the doped samples. By measuring the temperature dependence of resistivity and the Hall effect, we have revealed that the structural transition temperature \(T_{s}\) is completely suppressed at \(x\sim 0.11\), while the superconducting transition temperature \(T_{c}\) generally decreases upon Ta doping and finally vanishes at \(x\sim 0.08\). This behaviour is in contrast to the previous phase diagrams constructed based on applying pressure, isovalent doping, or electron doping, which show the enhancement of \(T_{c}\) when \(T_{s}\) is suppressed. Moreover, the Hall coefficient is found to be enhanced at low temperatures around the region where \(T_{s}\) is suppressed to zero, suggesting that the critical scattering arising from the structural transition may have some contribution to the suppression of \(T_{c}\).
By comparing our findings with the phase diagram of electron-doped MoTe\({}_{2}\), we argue that the electron carrier concentration in MoTe\({}_{2}\) is a key factor in controlling \(T_{c}\), which offers a straightforward way to boost the \(T_{c}\) of MoTe\({}_{2}\).

Notes added: After the first submission of this article, we noticed a recently published article [39] which reports an enhancement of \(T_{c}\) in Ta-doped MoTe\({}_{2}\). Our results do not agree with those of Ref. [39]. The discrepancy may be attributed to methodological differences. First, Ref. [39] used a different crystal growth condition. Second, we determine our \(T_{c}\) values based on the observation of zero resistivity while Ref. [39] deduced their \(T_{c}\) values from the onset of the transition in resistivity. We note that zero resistivity has not been observed in the doped samples in Ref. [39].

###### Acknowledgements.

We acknowledge Xinyou Liu, Ying Kit Tsui, Wei Zhang, and Lingfei Wang for fruitful discussions, and financial support from the Research Grants Council of Hong Kong (GRF/14300419, GRF/14301020 and A-CUHK402/19), CUHK Direct Grant (4053463, 4053528, 4053408 and 4053461), and the National Natural Science Foundation of China (12104384).
2306.01922
Agnostic Multi-Group Active Learning
Inspired by the problem of improving classification accuracy on rare or hard subsets of a population, there has been recent interest in models of learning where the goal is to generalize to a collection of distributions, each representing a ``group''. We consider a variant of this problem from the perspective of active learning, where the learner is endowed with the power to decide which examples are labeled from each distribution in the collection, and the goal is to minimize the number of label queries while maintaining PAC-learning guarantees. Our main challenge is that standard active learning techniques such as disagreement-based active learning do not directly apply to the multi-group learning objective. We modify existing algorithms to provide a consistent active learning algorithm for an agnostic formulation of multi-group learning, which given a collection of $G$ distributions and a hypothesis class $\mathcal{H}$ with VC-dimension $d$, outputs an $\epsilon$-optimal hypothesis using $\tilde{O}\left( (\nu^2/\epsilon^2+1) G d \theta_{\mathcal{G}}^2 \log^2(1/\epsilon) + G\log(1/\epsilon)/\epsilon^2 \right)$ label queries, where $\theta_{\mathcal{G}}$ is the worst-case disagreement coefficient over the collection. Roughly speaking, this guarantee improves upon the label complexity of standard multi-group learning in regimes where disagreement-based active learning algorithms may be expected to succeed, and the number of groups is not too large. We also consider the special case where each distribution in the collection is individually realizable with respect to $\mathcal{H}$, and demonstrate $\tilde{O}\left( G d \theta_{\mathcal{G}} \log(1/\epsilon) \right)$ label queries are sufficient for learning in this case. We further give an approximation result for the full agnostic case inspired by the group realizable strategy.
Nick Rittler, Kamalika Chaudhuri
2023-06-02T21:24:13Z
http://arxiv.org/abs/2306.01922v1
# Agnostic Multi-Group Active Learning ###### Abstract Inspired by the problem of improving classification accuracy on rare or hard subsets of a population, there has been recent interest in models of learning where the goal is to generalize to a collection of distributions, each representing a "group". We consider a variant of this problem from the perspective of active learning, where the learner is endowed with the power to decide which examples are labeled from each distribution in the collection, and the goal is to minimize the number of label queries while maintaining PAC-learning guarantees. Our main challenge is that standard active learning techniques such as disagreement-based active learning do not directly apply to the multi-group learning objective. We modify existing algorithms to provide a consistent active learning algorithm for an agnostic formulation of multi-group learning, which given a collection of \(G\) distributions and a hypothesis class \(\mathcal{H}\) with VC-dimension \(d\), outputs an \(\epsilon\)-optimal hypothesis using \(\tilde{O}\left((\nu^{2}/\epsilon^{2}+1)Gd\theta_{\mathcal{G}}^{2}\log^{2}(1/ \epsilon)+G\log(1/\epsilon)/\epsilon^{2}\right)\) label queries, where \(\theta_{\mathcal{G}}\) is the worst-case disagreement coefficient over the collection. Roughly speaking, this guarantee improves upon the label complexity of standard multi-group learning in regimes where disagreement-based active learning algorithms may be expected to succeed, and the number of groups is not too large. We also consider the special case where each distribution in the collection is individually realizable with respect to \(\mathcal{H}\), and demonstrate \(\tilde{O}\left(Gd\theta_{\mathcal{G}}\log(1/\epsilon)\right)\) label queries are sufficient for learning in this case. We further give an approximation result for the full agnostic case inspired by the group realizable strategy. 
## 1 Introduction

There is a growing theory literature concerned with choosing a classifier that performs well on multiple subpopulations or "groups" [1; 2; 3; 4; 5; 6; 7]. In many cases, the motivation comes from a perspective of fairness, where a typical requirement is that we classify with similar accuracy across groups [3; 4; 5]. In other cases, the motivation may simply be to train more reliable classifiers. For example, cancer detection models with good overall accuracy often suffer from poor ability to detect rare subtypes of cancer that are not well-represented or identified in training. This suggests that naive ERM may be insufficient in practice [8]. In this work, we consider the following formulation of "multi-group" learning. The learner is given a collection of distributions \(\mathcal{G}=\{D_{g}\}_{g=1}^{G}\), each corresponding to a group, and a hypothesis class \(\mathcal{H}\), and wants to pick a classifier that approximately minimizes the maximum classification error over group distributions. We consider this problem from an active learning perspective, where the learner has the power to choose which examples from each group it wants to label during training. In a standard extension of the active learning literature, we set out to design schemes for choosing which examples from each group should be labeled, where the goal is to minimize the number of label queries while retaining PAC-learning guarantees. A major challenge in harnessing the power of active algorithms even in standard agnostic settings is making sure they are consistent. In the case of active learning, this means that as the number of labels requested approaches infinity, the learner outputs an optimal hypothesis. To complicate things further, the main algorithmic paradigm for consistent agnostic active learning over a single distribution - disagreement-based active learning (DBAL) - fails to admit direct application to the multi-group learning objective.
The fundamental idea in DBAL is that the learner may safely spend its labeling budget in the "disagreement region", a subset of instance space where empirically well-performing hypotheses disagree about how new examples should be labeled. When the learner need only consider a single distribution, error differences between classifiers are specified entirely through their performance on the disagreement region, and so spending the labeling budget here allows the learner to figure out which hypotheses are best while saving on labels. The problem is that when multiple group distributions must be considered, the absolute errors of classifiers on each group must be estimated to compare the performance of two classifiers in their worst case over the collection, and this property no longer holds. We resolve this via the observation that, while we cannot spend all our labeling budget in the disagreement region, we can exploit the agreement in its complement to cheaply estimate absolute errors of classifiers on each group. In particular, we estimate the absolute errors by choosing a representative classifier \(h_{\mathcal{H}^{\prime}}\) in the set of empirically well-performing classifiers \(\mathcal{H}^{\prime}\), and estimating its error on the complement of the disagreement region on each group distribution. These error estimates can be used to construct estimates for the absolute errors on each group for each \(h\in\mathcal{H}^{\prime}\) at the statistical cost of estimating a coin bias, leading to a relatively cheap, consistent active strategy.
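The decomposition behind this observation, that on the agreement region every \(h\in\mathcal{H}^{\prime}\) incurs exactly the representative's error, so each absolute error splits into a shared agreement-region term plus a per-hypothesis disagreement-region term, can be illustrated with a toy finite candidate set (Python/NumPy; the threshold hypotheses and data are hypothetical, and this is only a sketch of the idea, not the paper's algorithm):

```python
import numpy as np

def errors_via_agreement(preds, y, rep=0):
    """0-1 error of each candidate hypothesis, decomposed as a shared
    agreement-region term (computed once via a representative) plus a
    per-hypothesis disagreement-region term."""
    dis = ~np.all(preds == preds[0], axis=0)          # disagreement region mask
    n = y.size
    shared = np.count_nonzero(preds[rep][~dis] != y[~dis]) / n
    return np.array([shared + np.count_nonzero(p[dis] != y[dis]) / n for p in preds])

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 500)                            # one group's sample
y = (x > 0.5).astype(int)                             # labels
thresholds = [0.4, 0.5, 0.6]                          # candidates h_t(x) = 1[x > t]
preds = np.stack([(x > t).astype(int) for t in thresholds])

decomposed = errors_via_agreement(preds, y)
direct = (preds != y).mean(axis=1)
print(decomposed, direct)  # the decomposition reproduces the direct estimates exactly
```

Only the disagreement-region terms need fresh labels per hypothesis; the shared term is a single estimate per group, which is the source of the label savings.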
We analyze the number of label queries made by this scheme in terms of a standard complexity measure in the active learning literature called the "disagreement coefficient" [9; 10], and show an upper bound of \(\tilde{O}\left((\nu^{2}/\epsilon^{2}+1)Gd\theta_{\mathcal{G}}^{2}\log^{2}(1/\epsilon)+G\log(1/\epsilon)/\epsilon^{2}\right)\), where \(\theta_{\mathcal{G}}\) is the maximal disagreement coefficient over the collection of group distributions. We discuss some regimes where this label complexity can be expected to push below sample-complexity lower bounds for a learner that can request samples from each group distribution during training, but does not have the power to abstain from labeling specific examples. We also consider the special case of agnostic learning where each group distribution is individually realizable, but no single hypothesis separates all groups simultaneously. In this case, we show that all dependence on \(1/\epsilon^{2}\) in the label complexity can be replaced with \(\log(1/\epsilon)\) when disagreement coefficients are bounded. It turns out that using the strategy we develop in this special case leads to an approximation algorithm for the general agnostic case, for which we give guarantees.

## 2 Related Work

### Multi-Group Learning

The majority of the empirical work on multi-group learning has been through the lens of "Group-Distributionally Robust Optimization" (G-DRO) [11; 12; 13]. The goal in G-DRO is to choose a classifier that minimizes the maximal risk against an unknown mixture over a collection of distributions \(\{D_{g}\}_{g=1}^{G}\) representing groups. One assumes a completely passive sampling setting - all data is given to the learner at the beginning of training, and the learner has no ability to draw extra, fine-grained samples.
The strategy is usually empirical risk minimization (ERM) - or some regularized variant - on the empirical max loss over groups; for a set of classifiers parameterized by \(\phi\in\Phi\), letting \(S_{g}\) denote the set of examples in the training set coming from \(D_{g}\), one performs \(\min_{\phi\in\Phi}\max_{g\in[G]}\frac{1}{|S_{g}|}\sum_{(x_{i},y_{i})\in S_{g}}l(f_{\phi}(x_{i}),y_{i})\) for some loss \(l\). It is important to note that the learner knows the group identity of each sample in the training set, but is not provided with group information at test time, precluding the possibility of training a separate classifier for each group. "Multi-group PAC learning" considers the multi-group problem under the passive sampling assumption from a more classical learning-theoretic perspective [3; 4]. Here, one assumes there is a single distribution \(D\) from which one is given samples, but also a collection of subsets of instance space \(\mathcal{G}\) over which one wants to learn conditional distributions. Given a hypothesis class \(\mathcal{H}\), the learner tries to improperly learn a classifier \(f\) that competes with the optimal hypothesis on each conditional distribution specified by a group \(g\) in the collection - formally, one requires that for a given error tolerance \(\epsilon\), \(f\) has the property \(\forall g\in\mathcal{G},\ \ \mathbb{P}_{D}(f(x)\neq y|x\in g)\leq\inf_{h\in\mathcal{H}}\mathbb{P}_{D}(h(x)\neq y|x\in g)+\epsilon\) with high probability. An interesting wrinkle in this literature is that the group identity of samples is available at both training and test times. It has been shown that a sample complexity of \(\tilde{O}\left(\log(|\mathcal{H}|)/\gamma\epsilon^{2}\right)\) is sufficient for agnostic learning in this model, where \(\gamma\) is the minimal mass of a group \(g\) under \(D\) [4]. "Collaborative learning" studies the multi-group problem under an alternative sampling model [1, 2, 7].
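The G-DRO objective above, minimizing the maximum empirical group loss over a set of classifiers, can be sketched for a finite hypothesis class (Python/NumPy; the threshold classifiers and data are hypothetical illustrations):

```python
import numpy as np

def worst_group_erm(preds_by_group, labels_by_group):
    """Return the index of the hypothesis minimizing the maximum empirical
    0-1 loss over groups, plus each hypothesis's worst-group loss."""
    max_losses = None
    for preds, y in zip(preds_by_group, labels_by_group):
        losses = np.mean(preds != y, axis=1)          # loss of each hypothesis on group g
        max_losses = losses if max_losses is None else np.maximum(max_losses, losses)
    return int(np.argmin(max_losses)), max_losses

# Toy instance: three threshold classifiers h_t(x) = 1[x > t] and two groups
# (all thresholds and data here are hypothetical, for illustration only)
rng = np.random.default_rng(2)
thresholds = np.array([0.3, 0.5, 0.7])
groups_x = [rng.uniform(0, 1, 100), rng.uniform(0, 1, 100)]
groups_y = [(x > 0.5).astype(int) for x in groups_x]   # both groups labeled by t = 0.5
preds_by_group = [np.stack([(x > t).astype(int) for t in thresholds]) for x in groups_x]

best, max_losses = worst_group_erm(preds_by_group, groups_y)
print(f"best threshold: {thresholds[best]}, worst-group losses: {max_losses}")
```

The returned hypothesis is the empirical min-max choice; in practice G-DRO replaces this enumeration with (regularized) gradient-based optimization over \(\phi\).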
In this case, we are given a collection of distributions \(\{D_{g}\}_{g=1}^{G}\), each corresponding to a group. Given some hypothesis class \(\mathcal{H}\), the goal is to learn a classifier \(f\), possibly improperly, that is evaluated against its worst-case loss over \(D_{1},\ldots,D_{G}\); formally, we would like \(f\) to satisfy \(\max_{g\in[G]}\mathbb{P}_{D_{g}}\big{(}f(x)\neq y\big{)}\leq\inf_{h\in\mathcal{H}}\max_{g\in[G]}\mathbb{P}_{D_{g}}(h(x)\neq y)+\epsilon\). In contrast with multi-group PAC learning, the learner may decide how many samples from each \(D_{g}\) it wants to collect during training, and group identity is hidden on test examples. This models the case where a learner may want to collect more data from a particularly difficult group of instances, such as a rare or hard-to-diagnose type of cancer. It has been shown for finite hypothesis classes that \(\tilde{\Theta}(\log(|\mathcal{H}|)/\epsilon^{2}+G/\epsilon^{2})\) total samples over all groups are necessary and sufficient to learn in this model; \(\tilde{O}(d\log(1/\epsilon)/\epsilon^{2}+G/\epsilon^{2})\) total samples are sufficient for VC-classes [7]. Our work extends the model of collaborative learning, and endows the learner with the ability to decide which samples from each group distribution \(D_{g}\) should be labeled. This is the standard framework of active learning, applied to the multi-group setting. As in collaborative learning, we assume group identity is hidden at test time.

### Active Learning

Active learning concerns itself with the development of learning algorithms for training classifiers that have power over which training examples should be labeled [14, 15]. The field has largely focused on uncovering settings in which algorithmic approaches reduce the amount of labels required for PAC-style learning guarantees beyond sample complexity lower bounds that apply to i.i.d. data collection from the underlying distribution [16, 17].
In the agnostic, 0-1 loss setting, the standard upper bounds for label complexity follow \(\tilde{O}\left(\theta\left(d\log(1/\epsilon)+d\nu^{2}/\epsilon^{2}\right)\right)\). Here, \(\nu\) is the "noise rate", i.e. the true error of the optimal hypothesis \(h^{*}\), and \(\theta\) is a distribution-dependent parameter called the "disagreement coefficient". Thus, gains of active strategies over standard passive lower bounds of \(\Omega(d\nu/\epsilon^{2})\) depend on certain easiness conditions like small noise rates and bounded disagreement coefficients [15]. The vast majority of the work on active learning has been done in the 0-1 loss setting [9, 10, 18, 19, 20, 21]. It has been significantly harder to push the design of active learning algorithms past the regime of accuracy on a fixed distribution. While some work has attempted to generalize classical ideas of active learning to different losses [22], these are heavily outnumbered in the literature. As previously mentioned, the most difficult part of designing agnostic active learning strategies is maintaining consistency. The issue comes down to a phenomenon referred to as "sampling bias": because active learners would like to target certain parts of space to save on labels, there is a risk that the learner prematurely stops sampling on a part of space in which there is some detail in the distribution that could only be detected at a higher labeling resolution. This can easily lead to inconsistent strategies [15]. Thus, a major contribution of our work is exhibiting a consistent active scheme for the multi-group problem. \begin{table} \begin{tabular}{c c c} \hline \hline Problem & Full Agnostic & Group-Realizable \\ \hline Passive Multi-Group [4] & \(\tilde{O}\left(\log(|\mathcal{H}|)/\gamma\epsilon^{2}\right)\) & \(\tilde{O}\left(\log(|\mathcal{H}|)/\gamma\epsilon\right)\) \\ Collaborative Learning [7] & \(\tilde{O}\left(d\log(1/\epsilon)/\epsilon^{2}+G/\epsilon^{2}\right)\) &?
\\ Active Multi-Group (us) & \(\tilde{O}\left((\nu^{2}/\epsilon^{2}+1)Gd\theta_{\mathcal{G}}^{2}\log^{2}(1/\epsilon)+G\log(1/\epsilon)/\epsilon^{2}\right)\) & \(\tilde{O}\left(Gd\theta_{\mathcal{G}}\log^{2}(1/\epsilon)\right)\) \\ \hline \hline \end{tabular} \end{table} Table 1: Overview of the complexity of multi-group learning. The \(\tilde{O}\) notation hides factors logarithmic in \(d\), \(G\), \(1/\delta\), and \(\log(1/\epsilon)\). We reserve discussion of regimes in which our algorithm improves on results in Collaborative Learning for Section 5.

## 3 Preliminaries

### Learning Problem

We study a binary classification setting where examples fall in some instance space \(\mathcal{X}\), and labels lie in \(\mathcal{Y}:=\{-1,1\}\). We suppose we are given some pre-specified, finite collection of distributions \(\mathcal{G}=\{D_{g}\}_{g=1}^{G}\) over \(\mathcal{X}\times\mathcal{Y}\) corresponding to groups. Given a hypothesis class \(\mathcal{H}\) of measurable classifiers with VC-dimension \(d\), the goal of the learner is to pick some \(h\in\mathcal{H}\) from finite data that performs well across all the distributions in \(\mathcal{G}\) in the worst case. Let \(L_{\mathcal{G}}(h\mid g):=\mathbb{P}_{D_{g}}\left(h(x)\neq y\right)\) be the error of a hypothesis \(h\) on group \(g\). Formally speaking, the learner would like to choose a classifier approximately obtaining \[\inf_{h\in\mathcal{H}}\max_{g\in[G]}L_{\mathcal{G}}(h\mid g),\] using finite data. We often use \(L_{\mathcal{G}}^{\max}(h)\) as shorthand for \(\max_{g\in[G]}L_{\mathcal{G}}(h\mid g)\). We use \(\nu:=\inf_{h\in\mathcal{H}}L_{\mathcal{G}}^{\max}(h)\) to denote the "noise rate" of \(\mathcal{H}\) on the multi-distribution objective. The use of the term "agnostic" throughout reflects the fact that we make no assumption that \(\nu=0\) in our algorithm design or analysis. We assume for simplicity that there is some \(h^{*}\in\mathcal{H}\) attaining \(\nu\).
### Active Learning Model

We consider a standard active learning model specified as follows. Let \(supp(g)\) denote the support of the marginal over instance space of \(D_{g}\). The active learner has access to two sampling oracles for each distribution specified by \(D_{g}\). The first is \(U_{g}(\cdot)\), which given a set \(S\subseteq\mathcal{X}\) measurable with respect to \(D_{g}\), returns an unlabeled sample from \(D_{g}\) conditioned on \(S\); if \(\mathbb{P}_{D_{g}}(x\in S)=0\), then \(U_{g}(S)\) returns "None". The second is \(O_{g}(\cdot)\), which given a point in \(supp(g)\), returns a sample from the conditional distribution over labels specified by \(x\) and \(g\). More formally, querying \(U_{g}(S)\) for \(S\) such that \(\mathbb{P}_{D_{g}}(x\in S)\neq 0\) is equivalent to drawing i.i.d. samples according to the marginal over instance space of \(D_{g}\) (independent of previous randomness), and returning the first example that falls in \(S\); querying the oracle \(O_{g}(x)\) for \(x\in supp(g)\) is equivalent to receiving a sample from a Rademacher random variable with parameter \(\mathbb{P}_{D_{g}}(Y=1|X=x)\). As is standard in active learning, the active learner is assumed to have functionally unlimited access to queries from \(U_{g}(\cdot)\). On the other hand, queries to oracles \(O_{g}(\cdot)\) are precious: the "label complexity" of a strategy executed by the active learner is the sum of queries to oracles \(O_{g}(\cdot)\) over all \(g\), and is to be minimized given a desired generalization error guarantee.

## 4 Challenges in Multi-Group Active Learning

In this section, we give some background on classical disagreement-based methods on a single distribution, and discuss in more detail the challenge of designing consistent active learning strategies in the multi-group setting.
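The two oracles can be sketched concretely for a toy group distribution. In this sketch, `make_oracles`, the finite support, and the uniform marginal are our own illustrative assumptions, not constructs from the paper; the rejection loop in `U` mirrors the "draw i.i.d. until landing in \(S\)" semantics described above.

```python
import random

def make_oracles(points, label_prob):
    """Hypothetical oracles for one group D_g: a uniform marginal over the
    finite support `points`, with P(Y = 1 | X = x) = label_prob[x]."""
    def U(S):
        # U_g(S): return None on a zero-mass set; otherwise draw i.i.d.
        # from the marginal and return the first point that lands in S.
        if not any(x in S for x in points):
            return None
        while True:
            x = random.choice(points)
            if x in S:
                return x

    def O(x):
        # O_g(x): a +/-1 label drawn from the conditional distribution at x.
        return 1 if random.random() < label_prob[x] else -1

    return U, O
```

Label complexity would count only calls to `O`; calls to `U` are treated as free.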
### Background on Disagreement-Based Active Learning

Almost all agnostic active learning algorithms for accuracy over a single distribution boil down to disagreement-based methods [10; 15; 16; 18]. The fundamental idea in this school of algorithms is that one can learn the relative accuracy of two classifiers \(h\) and \(h^{\prime}\) by only requesting labels for examples in the part of instance space on which they disagree about how examples should be labeled. More generally, given a set of classifiers \(\mathcal{H}^{\prime}\subseteq\mathcal{H}\), one can consider the "disagreement region" of \(\mathcal{H}^{\prime}\), defined as \[\Delta(\mathcal{H}^{\prime}):=\left\{x\in\mathcal{X}:\exists h,h^{\prime}\in\mathcal{H}^{\prime}\;s.t.\;h(x)\neq h^{\prime}(x)\right\}.\] As alluded to above, the difference in accuracy of classifiers \(h,h^{\prime}\in\mathcal{H}^{\prime}\) is specified entirely through this inherently label-independent notion. For a single distribution \(D\), we may write \[\frac{\mathbb{P}_{D}\left(h(x)\neq y\right)-\mathbb{P}_{D}\left(h^{\prime}(x)\neq y\right)}{\mathbb{P}_{D}\left(\Delta(\mathcal{H}^{\prime})\right)}=\mathbb{P}_{D}\big{(}h(x)\neq y\mid\Delta(\mathcal{H}^{\prime})\big{)}-\mathbb{P}_{D}\left(h^{\prime}(x)\neq y\mid\Delta(\mathcal{H}^{\prime})\right),\] as by definition, \(h,h^{\prime}\) have the same conditional loss on \(\Delta(\mathcal{H}^{\prime})^{c}\). Inspired by this observation, the idea is to label examples in \(\Delta(\mathcal{H}^{\prime})\), and ignore those outside of it.
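For a finite set of candidate hypotheses, membership in \(\Delta(\mathcal{H}^{\prime})\) can be checked pointwise; the threshold classifiers below are an illustrative assumption of ours, not an example from the paper.

```python
def in_disagreement_region(x, hypotheses):
    """x lies in the disagreement region iff at least two hypotheses
    assign x different labels."""
    labels = {h(x) for h in hypotheses}
    return len(labels) > 1

# illustrative H': two threshold classifiers on the real line
H_prime = [lambda x: 1 if x >= 0.2 else -1,
           lambda x: 1 if x >= 0.5 else -1]

assert not in_disagreement_region(0.1, H_prime)  # both output -1
assert in_disagreement_region(0.3, H_prime)      # the labels disagree
assert not in_disagreement_region(0.9, H_prime)  # both output +1
```

Here \(\Delta(\mathcal{H}^{\prime})\) is exactly the interval \([0.2,0.5)\), so a disagreement-based learner would request labels only there.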
This allows the learner to learn about the relative performance of classifiers while saving on the labels of examples falling in \(\Delta(\mathcal{H}^{\prime})^{c}\). In running a disagreement-based active learning (DBAL) algorithm, one hopes certain classifiers quickly reveal themselves to be empirically so much worse on \(\Delta(\mathcal{H}^{\prime})\) than the current ERM hypothesis that, by standard concentration bounds, they can be inferred to be worse than \(\epsilon\)-optimal on \(D\) with high probability. Elimination of these classifiers shrinks the disagreement region, allowing the labeling to become further fine-grained. Given the above loss decomposition, this leads to consistent active learning strategies.

### Labeling in the Disagreement Region: No Longer Enough

In the multi-group setting, the strategy of comparing performance of classifiers solely on \(\Delta(\mathcal{H}^{\prime})\) breaks down. Although the classifiers in \(\mathcal{H}^{\prime}\) still agree in \(\Delta(\mathcal{H}^{\prime})^{c}\), this is not enough to infer differences in the worst case error over groups \(L^{\max}_{\mathcal{G}}\); this is because differences in performance on \(\Delta(\mathcal{H}^{\prime})\) are not generally representative of differences in absolute errors over group distributions. The following simple example makes this concrete. **Example 1**.: Consider the task of determining which of two classifiers \(h\) and \(h^{\prime}\) has lower worst case error over distributions \(D_{1}\) and \(D_{2}\) with marginal supports \(S_{1}\subseteq\mathcal{X}\) and \(S_{2}\subseteq\mathcal{X}\). Let their disagreement region be denoted by \(\Delta=\{x\in\mathcal{X}:h(x)\neq h^{\prime}(x)\}\), and let \(l(f,i,A)\) denote the conditional loss of classifier \(f\) on \(S_{i}\cap A\) under \(D_{i}\). Suppose we only know their conditional losses on \(\Delta\cap S_{1}\) and \(\Delta\cap S_{2}\) under \(D_{1}\) and \(D_{2}\), respectively.
We see for \(h\) that \[l(h,i,A)=\begin{cases}1/4&i=1,A=\Delta\cap S_{1}\\ 1/3&i=2,A=\Delta\cap S_{2}\\?&i=1,A=\Delta^{c}\cap S_{1}\\?&i=2,A=\Delta^{c}\cap S_{2}\end{cases}\] and for \(h^{\prime}\) that \[l(h^{\prime},i,A)=\begin{cases}34/100&i=1,A=\Delta\cap S_{1}\\ 0&i=2,A=\Delta\cap S_{2}\\?&i=1,A=\Delta^{c}\cap S_{1}\\?&i=2,A=\Delta^{c}\cap S_{2}\end{cases}.\] Consider ignoring the performance of classifiers in \(\Delta^{c}\), and using as a surrogate for the multi-group objective \[\max_{i\in\{1,2\}}l(h,i,S_{i}\cap\Delta).\] In this case, we would choose \(h\) as the better of the two hypotheses. Suppose now that \(\Delta\cap S_{1}\) and \(\Delta\cap S_{2}\) have mass \(1/2\) under \(D_{1}\) and \(D_{2}\), respectively, and that \(l(h,1,\Delta^{c}\cap S_{1})=l(h^{\prime},1,\Delta^{c}\cap S_{1})=0\). Finally, suppose that \(l(h,2,\Delta^{c}\cap S_{2})=l(h^{\prime},2,\Delta^{c}\cap S_{2})=1/2\). Then under the true multi-group objective, by decomposing the group losses, one can compute that \(h^{\prime}\) has a lower worst case error over groups \(D_{1}\) and \(D_{2}\) by a margin of \(1/6\). Thus, to utilize the disagreement region in multi-group algorithms, we will need to at least label some samples in \(\Delta(\mathcal{H}^{\prime})^{c}\) as \(\mathcal{H}^{\prime}\) shrinks. The specification of such a strategy is the content of the next section.

## 5 General Agnostic Multi-Group Learning

### An Agnostic Algorithm

The basic idea in Algorithm 1 is similar to classical DBAL approaches for a single distribution. We start with the full hypothesis class \(\mathcal{H}\), and look to iteratively eliminate hypotheses from contention as we learn about how to classify on each group through targeted labeling. Our solution to the problem posed to DBAL above is to keep track of the errors of well-performing hypotheses on the complement of the disagreement region in a way that exploits the agreement property.
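The arithmetic of Example 1 can be verified mechanically with exact rational arithmetic; the dictionary encoding below is ours (region `"D"` stands for \(\Delta\cap S_{i}\), `"Dc"` for \(\Delta^{c}\cap S_{i}\)).

```python
from fractions import Fraction as F

# conditional losses (classifier, group, region) from Example 1
loss = {
    ("h", 1, "D"): F(1, 4),     ("h", 2, "D"): F(1, 3),
    ("h'", 1, "D"): F(34, 100), ("h'", 2, "D"): F(0),
    ("h", 1, "Dc"): F(0),       ("h'", 1, "Dc"): F(0),
    ("h", 2, "Dc"): F(1, 2),    ("h'", 2, "Dc"): F(1, 2),
}
mass = F(1, 2)  # each region has mass 1/2 under the respective D_i

def worst_case(f):
    # decompose each group loss over the two regions, then take the max
    return max(mass * loss[(f, g, "D")] + mass * loss[(f, g, "Dc")]
               for g in (1, 2))

# the surrogate restricted to the disagreement region prefers h ...
assert max(loss[("h", g, "D")] for g in (1, 2)) < \
       max(loss[("h'", g, "D")] for g in (1, 2))
# ... yet on the true objective h' wins by exactly the stated margin of 1/6
assert worst_case("h") - worst_case("h'") == F(1, 6)
```

Concretely, \(h\) attains worst-case error \(5/12\) while \(h^{\prime}\) attains \(1/4\), even though \(h\) looks better inside \(\Delta\).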
To do this, we construct a two-part estimate for the loss of a hypothesis on a given group. Denote by \(\mathcal{H}_{i}\) the set of hypotheses still in contention at iteration \(i\). Let \(R_{i}=\Delta(\mathcal{H}_{i})\), let \(S_{R_{i},g}\) be a labeled sample from \(U_{g}(R_{i})\), and let \(S_{R_{i}^{c},g}\) be a labeled sample from \(U_{g}(R_{i}^{c})\). We can now estimate the loss for some \(h\in\mathcal{H}_{i}\) on group \(g\) via \[L_{S;R_{i}}(h\mid g):=\mathbb{P}_{D_{g}}(x\in R_{i})\cdot L_{S_{R_{i},g}}(h)+\mathbb{P}_{D_{g}}(x\in R_{i}^{c})\cdot L_{S_{R_{i}^{c},g}}(h_{\mathcal{H}_{i}}),\] where \(L_{S}(h):=\frac{1}{|S|}\sum_{(x,y)\in S}\mathbbm{1}[h(x)\neq y]\) is a standard empirical loss estimate1, and \(h_{\mathcal{H}_{i}}\) is an _arbitrarily_ chosen hypothesis from \(\mathcal{H}_{i}\) that is used in the loss estimate of every \(h\in\mathcal{H}_{i}\). This leads to an unbiased estimator given that every \(h\in\mathcal{H}_{i}\) labels the sample from this part of space in exactly the same way. Footnote 1: taken to be an arbitrary constant if \(S=\emptyset\); see the Appendix for details. The utility of this estimator is that by choosing an arbitrary representative \(h_{\mathcal{H}_{i}}\), we can estimate the loss of all hypotheses still in contention to precision \(O(\epsilon)\) on \(R_{i}^{c}\) with \(\tilde{O}(1/\epsilon^{2})\) samples, removing the usual dependence on the VC-dimension. On the other hand, as the disagreement region shrinks, \(\mathbb{P}_{D_{g}}(x\in R_{i})\) shrinks as well, so while we will still need to invoke uniform convergence to get reliable loss estimates in \(R_{i}\), the precision to which we need to estimate losses in this part of space decreases with every iteration, and eventually the overall dependence on the VC-dimension is diminished. This latter observation is the standard source of gains in DBAL [9, 18, 23].
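A minimal sketch of the two-part estimate (function and argument names are ours): the loss inside \(R_{i}\) is estimated per hypothesis, while the loss outside reuses a single representative, since all surviving hypotheses agree there.

```python
def two_part_loss(h, h_rep, p_R, sample_R, sample_Rc):
    """Estimate L_{S;R_i}(h | g): p_R is P_{D_g}(x in R_i); sample_R and
    sample_Rc are labeled samples drawn from R_i and R_i^c respectively;
    h_rep is the arbitrary representative used for every hypothesis."""
    def emp(f, sample):
        # standard empirical 0-1 loss (an arbitrary constant on an empty sample)
        return sum(f(x) != y for x, y in sample) / len(sample) if sample else 0.0

    # h's own loss inside the disagreement region, the representative's outside
    return p_R * emp(h, sample_R) + (1 - p_R) * emp(h_rep, sample_Rc)
```

Because `emp(h_rep, sample_Rc)` is shared by every surviving hypothesis, its estimation cost is paid once, which is the source of the reduced VC-dimension dependence described above.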
After forming these loss estimates on each group, we construct unbiased loss estimates for the worst case over groups via \[L_{S;R_{i}}^{\max}(h):=\max_{g\in[G]}L_{S;R_{i}}(h\mid g).\] These loss estimates inherit concentration properties from the two-part estimator above. We draw enough samples at each iteration \(i\) such that we essentially learn the multi-group problem to precision \(2^{\lceil\log(1/\epsilon)\rceil-i}\epsilon\). We note that Algorithm 1 assumes access to the underlying group marginal measures \(\mathbb{P}_{D_{g}}\). This is common in the active learning literature [18, 20]. Probabilities of events in instance space can be estimated to arbitrary accuracy using only unlabeled data, so this assumption is not dangerous to our goal of lowering label complexities.

### Guarantees

Vitally, the scheme given in Algorithm 1 is consistent. It is a lemma of ours that the number of samples drawn at each iteration is sufficiently large that the true excess error of any \(h\in\mathcal{H}_{i+1}\) is no more than \(2^{\lceil\log(1/\epsilon)\rceil-i}\epsilon\). Thus, after \(\lceil\log(1/\epsilon)\rceil\) iterations, the ERM hypothesis on \(L^{\max}_{S;R_{i}}(\cdot)\) is then \(\epsilon\)-optimal with high probability. We can bound the label complexity of the algorithm using standard techniques from DBAL. A ubiquitous quantity in the analysis of disagreement-based schemes is that of the "disagreement coefficient" [9; 24]. The general idea is that the disagreement coefficient bounds the rate of decrease in \(r\) of the measure of the disagreement region of a ball of radius \(r\) around \(h^{*}\) in the pseudo-metric \(\rho_{g}(h,h^{\prime}):=\mathbb{P}_{D_{g}}\left(h(x)\neq h^{\prime}(x)\right)\).
Precisely, we use the following definition of the disagreement coefficient in our analysis [10; 23]: given a group \(D_{g}\), the disagreement coefficient on \(g\) is \[\theta_{g}:=\sup_{h\in\mathcal{H}}\sup_{r^{\prime}\geq 2\nu+\epsilon}\frac{ \mathbb{P}_{D_{g}}\left(x\in\Delta(B_{g}(h,r^{\prime}))\right)}{r^{\prime}},\] where \(B_{g}(h,r^{\prime}):=\{h^{\prime}\in\mathcal{H}:\rho_{g}(h,h^{\prime})\leq r ^{\prime}\}\) is a ball of radius \(r^{\prime}\) about \(h\) in pseudo-metric \(\rho_{g}\). We further notate the maximum disagreement coefficient over the groups \(\mathcal{G}\) as \(\theta_{\mathcal{G}}:=\max_{g}\theta_{g}\). The disagreement coefficient is trivially bounded above by \(1/\epsilon\), but can be bounded independently of \(\epsilon\) in many cases [10; 24]. For example, when \(\mathcal{H}\) is the class of linear separators in \(d\) dimensions and the underlying marginal distribution is uniform over the Euclidean unit sphere, the disagreement coefficient is \(\Theta(\sqrt{d})\)[9]. **Theorem 1**.: _For all \(\epsilon>0\), \(\delta\in(0,1)\), collections of groups \(\mathcal{G}\), and hypothesis classes \(\mathcal{H}\) with \(d<\infty\), with probability \(\geq 1-\delta\), the output \(\hat{h}\) of Algorithm 1 satisfies_ \[L^{\max}_{\mathcal{G}}(\hat{h})\leq L^{\max}_{\mathcal{G}}(h^{*})+\epsilon,\] _and its label complexity is bounded by_ \[\tilde{O}\Bigg{(}G\;\theta_{\mathcal{G}}^{2}\bigg{(}\frac{\nu^{2}}{\epsilon^ {2}}+1\bigg{)}\big{(}d\log(1/\epsilon)+\log(1/\delta)\big{)}\log(1/\epsilon)+ \frac{G\log(1/\delta)\log(1/\epsilon)}{\epsilon^{2}}\Bigg{)}.\] Here, the \(\tilde{O}\) notation hides factors of \(\log(\log(1/\epsilon))\) and \(\log(G)\); we leave all proofs for the Appendix. 
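As a concrete illustration (our construction, not an example from the paper): for threshold classifiers under a uniform marginal on \([0,1]\), the thresholds within \(\rho\)-distance \(r\) of the optimum disagree exactly on an interval of mass \(2r\), so the ratio defining \(\theta_{g}\) is about \(2\) for every radius. A quick Monte Carlo check:

```python
import random

random.seed(0)
X = [random.random() for _ in range(200_000)]  # uniform marginal on [0, 1]
t_star = 0.5  # optimal threshold; h_t(x) = sign(x - t)

def disagreement_mass(r):
    # thresholds t with rho(h_t, h_{t*}) = |t - t*| <= r disagree with h_{t*}
    # exactly on the interval (t* - r, t* + r), whose mass we estimate here
    return sum(t_star - r < x < t_star + r for x in X) / len(X)

# the ratio P(Delta(B(h*, r))) / r is ~2 at every scale, so theta_g ~ 2
ratios = [disagreement_mass(r) / r for r in (0.05, 0.1, 0.2)]
assert all(abs(rho - 2.0) < 0.1 for rho in ratios)
```

This sketch fixes the center at \(h^{*}\) for readability; the definition above additionally takes a supremum over \(h\in\mathcal{H}\) and restricts to \(r^{\prime}\geq 2\nu+\epsilon\).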
Theorem 1 tells us that Algorithm 1 enjoys the following upside over passive and collaborative learning approaches: the dependence on the standard interaction of the VC-dimension \(d\) and \(1/\epsilon^{2}\) is removed, and replaced with \(Gd\theta_{\mathcal{G}}^{2}\log^{2}(1/\epsilon)\nu^{2}/\epsilon^{2}\), which, in settings with small disagreement coefficients and low noise rates, is a significant improvement for small \(\epsilon\).

### Comparison to Lower Bounds in Collaborative Learning

We compare our label complexity guarantees to results in collaborative learning, where the learner has the power to ask for samples from specific group distributions, but not to selectively label these samples. This is a strictly more demanding comparison than to pure passive settings, but a fair one, given that active learners have the option of executing any collaborative learning strategy. In collaborative learning, for finite hypothesis classes \(\mathcal{H}\), it is known that \[\Omega\left(\frac{\log(|\mathcal{H}|)}{\epsilon^{2}}+\frac{G\log(\min(|\mathcal{H}|,G)/\delta)}{\epsilon^{2}}\right)\] total labels over all groups are necessary [7]. We consider comparing this lower bound to a simplified version of the label complexity guarantee in Theorem 1: \[\tilde{O}\left(dG\theta_{\mathcal{G}}^{2}\log^{2}(1/\epsilon)+\frac{G\log(1/\epsilon)}{\epsilon^{2}}\right),\] thus implicitly assuming \(\nu\) is negligibly small, and making all comparisons up to factors logarithmic in \(G\), \(1/\delta\) and \(\log(1/\epsilon)\). The former is a standard assumption under which we may hope an agnostic active learner to succeed [15].
```
procedure group_realizable(\(\mathcal{H},\epsilon,\delta\), active learner \(\mathcal{A}\), \(\{U_{g}(\cdot)\}_{g=1}^{G},\{O_{g}(\cdot)\}_{g=1}^{G}\))
  for \(g\in[G]\) do
    \(\hat{h}_{g}\leftarrow\mathcal{A}(\mathcal{H},\epsilon/6,\delta/2G,U_{g}(\mathcal{X}),O_{g})\)
    \(S^{\prime}_{g}\gets 144/\epsilon^{2}\left(2d\ln(24/\epsilon)+\ln(8G/\delta)\right)\) samples from oracle \(U_{g}(\mathcal{X})\)
    \(\hat{S}_{g}\leftarrow\left\{\left(x,\hat{h}_{g}(x)\right):x\in S^{\prime}_{g}\right\}\)
  end for
  return \(\hat{h}=\arg\min_{h\in\mathcal{H}}\max_{g\in[G]}\frac{1}{|\hat{S}_{g}|}\sum_{(x,\hat{y})\in\hat{S}_{g}}1\left[h(x)\neq\hat{y}\right]\)
end procedure
```
**Algorithm 2** Group Realizable Algorithm

Even the simplified upper bound does not admit the cleanest comparison to this lower bound, due to our excess factor of \(\log(1/\epsilon)\) in the second term. However, it does showcase that while we pay slightly more per group than necessary, under conditions amenable to active learning, we pay significantly less per dimension \(d\). Particularly for small \(\epsilon\), one can see that it is approximately sufficient that \(G<o\left(\left(\theta_{\mathcal{G}}^{2}\epsilon^{2}\log^{2}(1/\epsilon)\right)^{-1}\right)\) for the simplified upper bound to beat the lower bound. For a more fine-grained comparison that in some sense underestimates the power of Algorithm 1, assume that the following condition governs the relationship of \(G\), \(d\), and \(\epsilon\): \[G\log(1/\epsilon)\leq d<\left(\theta_{\mathcal{G}}^{2}\epsilon^{2}\log^{2}(1/\epsilon)\right)^{-1}.\] Then the simplified bound is smaller in order than the lower bound above.

## 6 Group Realizable Learning

A special case of the learning problem, where extreme active learning gains can be readily seen, comes when the hypothesis class \(\mathcal{H}\) achieves zero noise rate on each group \(D_{g}\). This setting has been considered in the passive "multi-group learning" literature [4].
Formally speaking, in the group realizable setting, the following condition holds: \[\forall g\in[G],\exists h_{g}^{*}\in\mathcal{H}\;s.t.\;L_{\mathcal{G}}(h_{g}^{*}\mid g)=0,\] i.e. for all groups in the collection \(\mathcal{G}\), there is some hypothesis achieving 0 error on that group. Note that this differs from the fully realizable setting where there is some \(h^{*}\in\mathcal{H}\) with \(L_{\mathcal{G}}^{\max}(h^{*})=0\). While fully realizable implies group realizable, the converse is not true. Thus, group realizability represents an intermediate regime between the realizable setting and the full agnostic setting.

### Algorithm

In the group realizable case, it is possible to show a reduction to the problem of active learning over a hypothesis class with respect to a single distribution. This can be accomplished as follows. For each \(D_{g}\), we call as a subroutine an active learner that is guaranteed to find an order \(\epsilon\)-optimal hypothesis \(\hat{h}_{g}^{*}\) with high probability over its queries. It then gathers new unlabeled samples from each \(D_{g}\), and instead of requesting labels from \(O_{g}(\cdot)\), labels each unlabeled point with \(\hat{h}_{g}^{*}\). The final step is to do an empirical risk minimization on these artificially labeled samples with respect to the multi-group objective. See Algorithm 2 for a formal specification of the strategy.

### Guarantees

The strategy given in Algorithm 2 leads to a consistent active learning scheme, provided the active learners called as subroutines have standard guarantees that can be inherited. Theorem 2 gives a guarantee to this end. The proof follows from an argument similar to one used in [10] - because the subroutine calls return hypotheses with near 0 error on each group, the artificially labeled training set used in the ERM step looks nearly identical to a counterfactual training set for the ERM step constructed by querying labels \(O_{g}(x)\) for each unlabeled \(x\).
This is similar to the idea in [23]. We present Theorem 2 assuming access to a classical, realizable active learner due to [25]. **Theorem 2**.: _Suppose Algorithm 2 is run with the active learner \(\mathcal{A}_{CAL}\) of [25]. Then for all \(\epsilon>0\), \(\delta\in(0,1)\), hypothesis classes \(\mathcal{H}\) with \(d<\infty\), and collections of groups \(\mathcal{G}\) with the group realizability property under \(\mathcal{H}\), with probability \(\geq 1-\delta\), the output \(\hat{h}\) satisfies_ \[L^{\max}_{\mathcal{G}}(\hat{h})\leq L^{\max}_{\mathcal{G}}(h^{*})+\epsilon,\] _and the number of labels requested is_ \[\tilde{O}\bigg{(}dG\theta_{\mathcal{G}}\log(1/\epsilon)\bigg{)}.\] Thus, when disagreement coefficients across the collection of groups are bounded independently of \(\epsilon\), the usual, passive dependence on \(1/\epsilon^{2}\) is replaced by \(\log(1/\epsilon)\). In the passive multi-group setting of [3], it has been shown that \(\tilde{O}\left(\log(|\mathcal{H}|)/\gamma\epsilon\right)\) samples are sufficient for group realizable learning, where we recall \(\gamma\) is a lower bound on the probability of getting a sample in each group [4].

## 7 Full Agnostic Approximation

### Inconsistency of the Reduction in the Full Agnostic Regime

Algorithm 2 admits a clean analysis, and nicely harnesses the power of realizable active learners for a single distribution. One might wonder whether a similar strategy might remain consistent in the full agnostic regime. Unfortunately, the direct application of Algorithm 2 using agnostic learners does not yield a consistent active learning algorithm. In fact, consistency fails even when for each \(g\in[G]\), \(h^{*}_{g}\) is the Bayes optimal classifier on \(D_{g}\), and \(\nu_{g}:=\inf_{h\in\mathcal{H}}L_{\mathcal{G}}(h\mid g)\) is small. This lack of consistency comes down to the fact that labeling with the Bayes optimal classifier underestimates noise rates on each group, which in turn may bias the output of the ERM step.
### A \(3\nu\)-Approximation Algorithm

Although the strategy of creating an artificially labeled training set with near-optimal hypotheses on each group fails outside of the group realizable case, it possesses a nice approximation property. We give a guarantee to this end in Theorem 3. It states that if we call an active learner with agnostic guarantees on each group \(D_{g}\), and then use the outputs \(\hat{h}^{*}_{g}\) to artificially label a new batch of unlabeled data from each group, using ERM on this artificially labeled data gives at worst a \(2\nu+\epsilon\)-optimal hypothesis with high probability. **Theorem 3**.: _Suppose Algorithm 2 is run with the agnostic active learner \(\mathcal{A}_{DHM}\) of [15]. Then for all \(\epsilon>0\), \(\delta\in(0,1)\), hypothesis classes \(\mathcal{H}\) with \(d<\infty\), and collections of groups \(\mathcal{G}\), with probability \(\geq 1-\delta\), the output \(\hat{h}\) satisfies_ \[L^{\max}_{\mathcal{G}}(\hat{h})\leq L^{\max}_{\mathcal{G}}(h^{*})+2\cdot\max_{g\in[G]}\nu_{g}+\epsilon\leq 3\cdot L^{\max}_{\mathcal{G}}(h^{*})+\epsilon,\] _and the number of labels requested is_ \[\tilde{O}\Bigg{(}dG\theta_{\mathcal{G}}\bigg{(}\log^{2}(1/\epsilon)+\frac{\nu^{2}}{\epsilon^{2}}\bigg{)}\Bigg{)}.\] The proof is very similar to that of Theorem 2, but notes in addition that \(\hat{h}^{*}_{g}\) mislabels on a roughly \(\nu_{g}\)-fraction of the unlabeled samples from each group \(g\). This allows us to upper bound the distortion of the ERM step.

## 8 Conclusion

In this work, we have taken a first look at active multi-group learning. Though the design of general agnostic strategies in this setting is quite challenging, an interesting future direction may be the search for strategies that work in more specific cases, for example extending our work in the group realizable setting.
In particular, the search for algorithms with small label complexities under specific low-noise conditions, such as Tsybakov noise on each \(D_{g}\), may prove fruitful [26].
2307.08391
S-duality in the Cardy-like limit of the superconformal index
We evaluate the superconformal index of 4d $\mathcal{N}=4$ SYM with gauge algebra $so(2N_c+1)$ in the Cardy-like limit. We then study the relation with the results obtained for the S-dual $usp(2N_c)$, discussing the fate of S-duality in different regions of charges. We find that S-duality is preserved thanks to a non-trivial integral identity that relates the three sphere partition functions of pure 3d Chern-Simons gauge theories.
Antonio Amariti, Andrea Zanetti
2023-07-17T11:07:52Z
http://arxiv.org/abs/2307.08391v1
# S-duality in the Cardy-like limit of the superconformal index

###### Abstract

We evaluate the superconformal index of 4d \({\cal N}=4\) SYM with gauge algebra \(so(2N_{c}+1)\) in the Cardy-like limit. We then study the relation with the results obtained for the S-dual \(usp(2N_{c})\), discussing the fate of S-duality in different regions of charges. We find that S-duality is preserved thanks to a non-trivial integral identity that relates the three sphere partition functions of pure 3d Chern-Simons gauge theories.

## 1 Introduction

The holographic interpretation of the entropy of 5d BPS Kerr-Newman black holes [1; 2] from the dual field theory point of view has been an active field of research in the recent past, thanks to the extremization principle of [3]. It has been indeed possible to find a microscopic way to count the microstates [4; 5], by extracting them from the superconformal index (SCI) [6; 7]. Further generalizations of these results have then been obtained [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31]. It was then realized that it is possible to furnish a field theoretical interpretation of the result in terms of an effective field theory analysis that follows from the compactification of 4d \(SU(N_{c})\) SYM on \(S^{1}\)[32; 33]. The analysis is performed by considering the most general supersymmetric action in 3d and by fixing the coefficients by a one-loop calculation of the Kaluza-Klein modes on the circle. The analysis generalizes the one done in [34] for the ordinary Cardy-limit of the SCI, and for the case of 4d \(SU(N_{c})\) SYM it reproduces the results expected from the matrix model [22]. The EFT corresponds to the \(SU(N_{c})_{\pm N_{c}}\) CS action of an \({\cal N}=2\) vector multiplet with further contributions of global CS that can be associated to the 4d global anomalies.
On the other hand the EFT interpretation is less clear when the analysis is performed in the regime of charges that dominates the behavior of the SCI for rational values of the fugacity associated to the rotation parameter [33]. Anyway, from the matrix model calculation, also in this case a 3d CS theory is expected. Indeed the 3d matrix model corresponds to the one obtained from an \(SU(N_{c}/C)\times U(1)^{C-1}\) gauge group, with (mixed) CS levels and further contributions that resemble the ones of the global CS discussed in the EFT interpretation of the SCI in the BH regime. It is natural to wonder how the EFT interpretation generalizes beyond the case of \(su(N_{c})\) SYM. A first attempt consists of considering the case of \(usp(2n)\) and \(so(m)\) gauge algebra, where some results from the matrix model perspective have been obtained in [23]. In this case for \(n=2N_{c}\) and \(m=2N_{c}+1\) a further question consists of understanding the fate of the S-duality under the Cardy-like limit. The role of the size of the circle (i.e. the fact that one sums over the whole KK tower) suggests that S-duality should be preserved in the 3d EFT. This is indeed very similar to the idea pursued in [35] for the reduction of 4d dualities to 3d. The finite size effects in the circle reduction there (on the first sheet in the language of [32]) encrypted in the constraints imposed by the KK monopole, became crucial in order to construct the 3d EFT preserving the 4d dualities. This expectation was confirmed from the matrix model calculation in [23], restricting to the saddles at vanishing holonomies, the ones that dominate the index in the BH regime. Physically the matching of the index evaluated on the saddles at vanishing holonomies can be understood from the EFT interpretation. First of all in this case the result can be expressed in terms of the 4d trace anomalies, that naturally match across S-dual phases. 
Second, the less trivial aspect of this matching consists of comparing the contributions from the CS sectors. The agreement in this case can be reformulated as the fact that S-duality is preserved because the topological sectors, identified by the saddle point holonomies, are equivalent. However, a full understanding of S-duality in the Cardy-like limit requires going beyond the case at vanishing holonomies. In [23] indeed further saddles of the SCI have been studied for the \(usp(2N_{c})\) case. The behavior of the SCI evaluated on these saddles is generically subleading in the region of charges that reproduces the BH entropy 1. On the other hand, for the orthogonal case the index has been evaluated so far only for the saddle at vanishing holonomies. The question is then whether the Cardy-like limit of the SCI of \(so(2N_{c}+1)\) SYM on these other saddles matches the results obtained in [23] for the \(usp(2N_{c})\) case. Indeed, despite the fact that such saddles are expected to be subleading in the BH regime, they dominate the index in other regions of charges. In this paper we provide an answer to this question, showing that the S-duality relating \(so(2N_{c}+1)\) and \(usp(2N_{c})\) is fully preserved in the Cardy-like limit of the SCI for small collinear angular momenta. In order to provide the complete answer we first study the saddle point equations for \(so(2N_{c}+1)\) SYM, expanding the index at finite \(N_{c}\) in terms of the small angular momenta. Then, we study the behaviour of the index focusing only on the leading terms in the Cardy-like expansion, showing that in various "physical" regions of charges the leading contributions to the index match across the S-dual phases. However, if we stick to a leading-order Cardy-like expansion, a large \(N_{c}\) limit is required to achieve a matching. 
For this reason, we then go beyond the leading order and we observe that S-duality is properly recovered at finite \(N_{c}\) only after including subleading terms in the expansions. These last expansions also provide 3d CS partition functions for topological gauge theories, and their evaluation is crucial for our purposes. Indeed by direct evaluation we show that such CS partition functions vanish on the saddles that are subleading at large \(N_{c}\) for any choice of charges. We refer to saddles of this type as perturbatively unstable, because even if they apparently contribute to the index at leading order in the angular momenta, they vanish once higher-order terms in the expansion are considered. Summarizing: we find that the SCI of \(so(2N_{c}+1)\) and \(usp(2N_{c})\) \(\mathcal{N}=4\) SYM in the Cardy-like limit receives non-vanishing contributions only from a subset of solutions of the saddle point equations. Furthermore, the index expanded in terms of the small collinear angular momenta around such solutions matches among the S-dual theories. ## 2 The Cardy-like limit of the SCI of \(\mathcal{N}=4\) \(usp(2N_{c})\) SYM In this section we overview the results of [23] for the evaluation of the Cardy-like limit of the superconformal index for \(\mathcal{N}=4\) SYM with gauge algebra\({}^{2}\) \(\mathfrak{g}=usp(2N_{c})\). The index corresponds to a matrix integral over the holonomies \(u_{i}\), \(i=1,\ldots,N_{c}\), and it can be written in terms of the elliptic Gamma functions as Footnote 2: Observe that in the following we will always refer to the gauge algebra instead of the gauge group because the superconformal index does not distinguish the global properties of the gauge group. 
\[\begin{split}\mathcal{I}^{usp(2N_{c})}=&\frac{(p;p)_{\infty}^{N_{c}}(q;q)_{\infty}^{N_{c}}}{2^{N_{c}}N_{c}!}\prod_{a=1}^{3}\tilde{\Gamma}(\Delta_{a})^{N_{c}}\int\prod_{i=1}^{N_{c}}\mathrm{d}u_{i}\frac{\prod_{a=1}^{3}\prod_{i<j}\tilde{\Gamma}(\pm u_{ij}^{(\pm)}+\Delta_{a})}{\prod_{i<j}\tilde{\Gamma}(\pm u_{ij}^{(\pm)})}\cdot\\ &\cdot\frac{\prod_{a=1}^{3}\prod_{i=1}^{N_{c}}\tilde{\Gamma}(\pm 2u_{i}+\Delta_{a})}{\prod_{i=1}^{N_{c}}\tilde{\Gamma}(\pm 2u_{i})}.\end{split} \tag{1}\] where \(\Delta_{1,2,3}\) are the R-charges of the three adjoints. We refer the reader to appendix A for the definition of the superconformal index and to appendix B for the definitions of the elliptic functions and their asymptotic behavior. The index can also be written as an integral of an effective action \(S_{\rm eff}^{usp(2N_{c})}\), that in this case reads \[\begin{split} S_{\rm eff}^{usp(2N_{c})}&=\sum_{a=1}^{3}\left(\sum_{i<j}\log\tilde{\Gamma}\left(\pm u_{ij}^{(\pm)}+\Delta_{a}\right)+\sum_{i=1}^{N_{c}}\log\tilde{\Gamma}\left(\pm 2u_{i}+\Delta_{a}\right)+N_{c}\log\tilde{\Gamma}\left(\Delta_{a}\right)\right)\\ &+\sum_{i<j}\log\theta_{0}\left(\pm u_{ij}^{(\pm)}\right)+\sum_{i=1}^{N_{c}}\log\theta_{0}\left(\pm 2u_{i}\right)+2N_{c}\log(p;p)_{\infty},\end{split} \tag{2}\] such that the matrix integral (1) becomes \[\mathcal{I}^{usp(2N_{c})}=\frac{1}{2^{N_{c}}N_{c}!}\int\prod_{i=1}^{N_{c}}\mathrm{d}u_{i}\,e^{-S_{\rm eff}^{usp(2N_{c})}} \tag{3}\] The next step consists of evaluating the index in the limit \(|\tau|\to 0\) (at fixed \(\arg\tau\in(0,\pi)\)), restricting to the case \(\tau=\sigma\) (see [36] for the generalization to \(\sigma\neq\tau\)). The evaluation of the index in this limit corresponds to a series expansion in \(\tau\); such an expansion is obtained by perturbations around the holonomies that solve the saddle point equations obtained from (2). 
As \(|\tau|\to 0\) the saddles will converge to the leading ones, capturing the full behaviour of the index up to exponentially suppressed terms in \(|\tau|\). The saddle point equations are \[\begin{split}\sum_{a=1}^{3}\Bigg{[}\sum_{\begin{subarray}{c}j=1\\ j\neq k\end{subarray}}^{N_{c}}\left(\!B_{2}\{u_{ij}^{(\pm)}+\Delta_{a}\}\!-\!B_{2}\{-u_{ij}^{(\pm)}+\Delta_{a}\}\!\right)+B_{2}\{2u_{i}+\Delta_{a}\}\!-\!B_{2}\{-2u_{i}+\Delta_{a}\}\Bigg{]}\!\!=\!0.\end{split} \tag{4}\] The analysis of the solutions of these equations and the expansion of the index has been performed in [23]. In the following we review the results. It has been observed that the index receives contributions from two families of saddle points and that the final sum over such saddles can be written as \[\mathcal{I}^{usp(2N_{c})}=\sum_{L=0}^{\lfloor\frac{N_{c}-1}{2}\rfloor}2\mathcal{I}^{usp(2N_{c})}_{L=0,N_{c}-L=1/2}+\mathcal{I}^{usp(2N_{c})}_{L=0,L=1/2,N_{c}-2L=1/4}+\Big{(}\mathcal{I}^{usp(2N_{c})}_{N_{c}/2=0,N_{c}/2=1/2}\ \ \text{if N}_{\text{c}}\ \text{even}\Big{)}. \tag{5}\] Each of the families has a distinct leading saddle point which dominates in a specific region of charges. * The first family consists of saddles with \(L\) holonomies at \(u_{i}=0\) and \(K\equiv N_{c}-L\) holonomies at \(u_{i}=1/2\). Such saddles are paired by the relation \(\mathcal{I}_{L,N_{c}-L}=\mathcal{I}_{N_{c}-L,L}\). For this reason it is convenient to count them starting from \(L=0\) up to \(\lfloor\frac{N_{c}-1}{2}\rfloor\) with a degeneracy factor 2. Their contribution to the index has been studied in [23]; here we only report the result. 
The saddle point and the effective action emerging near the saddle as \(|\tau|\to 0\) are \[\hat{\mathbf{u}}=\begin{cases}\bar{u}_{j}=v_{j}\tau&j=1,...,L\\ \frac{1}{2}+\bar{u}_{L+r}\equiv\frac{1}{2}+\bar{w}_{r}=\frac{1}{2}+w_{r}\tau&r =1,...,N_{c}-L.\end{cases} \tag{6}\] \[S_{L,K}^{usp(2N_{c})}=\] \[=-\frac{2\pi i}{\tau^{2}}\left(\eta_{1}(L+1-K)+K\eta_{2}\right) \sum_{i=1}^{L}\bar{u}_{i}^{2}-\frac{2\pi i}{\tau^{2}}\left(\eta_{1}(K+1-L)+L \eta_{2}\right)\sum_{r=1}^{K}\bar{w}_{r}^{2}\] \[+\sum_{i<j}^{L}\log\left[2\sin\left(\pm\frac{\pi\bar{u}_{ij}^{( \pm)}}{\tau}\right)\right]+\sum_{i=1}^{L}\log\left[2\sin\left(\pm\frac{2\pi \bar{u}_{i}}{\tau}\right)\right]\] \[+\sum_{r<s}^{K}\log\left[2\sin\left(\pm\frac{\pi\bar{w}_{rs}^{( \pm)}}{\tau}\right)\right]+\sum_{r=1}^{K}\log\left[2\sin\left(\pm\frac{2\pi \bar{w}_{r}}{\tau}\right)\right]\] \[-\frac{i\pi}{\tau^{2}}\left(2(L\!-\!K)^{2}+N_{c}\right)\prod_{a= 1}^{3}\left(\{\Delta_{a}\}_{\tau}\!-\!\frac{1+\eta_{1}}{2}\right)\!-\!\frac{i \pi}{\tau^{2}}LK\prod_{a=1}^{3}\left(\{2\Delta_{a}\}_{\tau}\!-\!\frac{1+\eta_ {2}}{2}\right)\] \[+i\pi\left(\frac{(6-5\eta_{1})\left(2(L-K)^{2}+N_{c}\right)}{12} \!+\!\frac{(12-5\eta_{2})LK}{3}-N_{c}^{2}\right)\!-\!N_{c}\log(\tau)+\mathcal{ O}(\tau), \tag{7}\] where \(\eta_{1}=\pm 1\) and \(\eta_{2}=\pm 1\) define different chambers for the chemical potentials, satisfying \[\sum_{a=1}^{3}\{I\Delta_{a}\}_{\tau}=2\tau+\frac{3+\eta_{I}}{2} \tag{8}\] These constraints arise as a consequence of the constraint \[\prod_{a=1}^{3}y_{a}=pq\implies\Delta_{1}+\Delta_{2}+\Delta_{3}-2\tau\in \mathbb{Z}, \tag{9}\] together with the requirement that \(\Delta_{a}\not\to 0,2\). The reduction over the thermal \(S^{1}\) with length \(\beta\) in the Cardy-like limit \(\tau\sim\beta\) produces 3d pure CS partition functions on \(S^{3}\) after the integration of the massive KK modes on \(S^{1}\). The original \(usp(2N_{c})\) gauge algebra is broken down to \(usp(2L)_{k_{1}}\times usp(2K)_{k_{2}}\). 
This can be read off directly from the effective action, as it is reflected in the logarithmic terms in (7), defining the measure of the CS partition function, upon exploiting property (C.3) (with \(\omega_{1}=\omega_{2}=i\)) for the hyperbolic gamma functions. The CS levels can be identified by recalling the expression for a pure 3d CS partition function on \(S^{3}\) with \(usp(2m)_{k}\) gauge algebra \[Z_{S^{3}}^{usp(2m)}=\frac{e^{i\pi m^{2}}}{|2^{m}m!|}\int\prod_{i=1}^{m}d\sigma_{i}e^{-i\pi 2k\sigma_{i}^{2}}\prod_{\alpha\in\Delta_{+}}4\sinh(\pm\pi\alpha(\sigma)).\] (10) Upon making the CS effective action apparent, through the change of variables \(\bar{u}_{j}=-i\sigma_{j}\tau\), the CS levels are \(k_{1,2}=-C_{1,2}\), with \(-\frac{2\pi i}{\tau^{2}}C_{1,2}\) being the coefficients of the quadratic terms in (7). All in all, the contribution to the SCI coming from the \((L,K)\) saddle point of this family is, up to exponentially suppressed corrections \(\sim\mathcal{O}(e^{-\frac{1}{|\tau|}})\) in the Cardy-like limit, \[\mathcal{I}_{L,K}^{usp(2N_{c})}=\tau^{N_{c}}e^{-\frac{i\pi\left(2(L-K)^{2}+N_{c}\right)}{2}}e^{-2i\pi LK}\mathcal{I}_{0}Z_{S^{3}}^{usp(2L)_{k_{1}}}Z_{S^{3}}^{usp(2K)_{k_{2}}},\] (11) where \[\log\mathcal{I}_{0}\equiv -\frac{i\pi}{\tau^{2}}\left(2(L-K)^{2}+N_{c}\right)\prod_{a=1}^{3}\left(\{\Delta_{a}\}_{\tau}-\frac{1+\eta_{1}}{2}\right)+\] \[-\frac{i\pi}{\tau^{2}}LK\prod_{a=1}^{3}\left(\{2\Delta_{a}\}_{\tau}-\frac{1+\eta_{2}}{2}\right)-N_{c}\log(\tau)+\] \[+i\pi\left(\frac{\left(6-5\eta_{1}\right)\left(2(L-K)^{2}+N_{c}\right)}{12}+\frac{(6-5\eta_{2})LK}{3}-(L^{2}+K^{2})\right).\] (12) and the CS levels for the partition functions of the 3d pure CS theories on \(S^{3}\) are \[\begin{cases}k_{1}=-((L+1-K)\eta_{1}+K\eta_{2})\\ k_{2}=-((K+1-L)\eta_{1}+L\eta_{2}).\end{cases}\] (13) * The second family is described by saddles with \(L\) holonomies at \(u_{i}=0\), \(L\) at \(u_{i}=1/2\) and \(K=N_{c}-2L\) at \(u_{i}=1/4\), with \(L\) ranging between zero 
and \(\lfloor\frac{N_{c}-1}{2}\rfloor\). The original gauge algebra is broken by these vacua, with breaking pattern \(usp(2N_{c})\to usp(2L)_{k_{1}}\times usp(2L)_{k_{2}}\times su(K)_{k_{3}}\times u(1)_{k_{4}}\), and a pure 3d CS partition function emerges. The contribution to the SCI coming from these saddles is \[\mathcal{I}_{L,L,K}^{usp(2N_{c})}=\tau^{N_{c}}e^{-\frac{i\pi(N_{c}+4L^{2}+K(K-1))}{2}}\mathcal{I}_{0}Z_{S^{3}}^{usp(2L)_{k_{1}}}Z_{S^{3}}^{usp(2L)_{k_{2}}}Z_{S^{3}}^{su(K)_{k_{3}}}Z_{S^{3}}^{u(1)_{k_{4}}}\] (14) with \[\log\mathcal{I}_{0}= \frac{i\pi(2L-K)}{\tau^{2}}\prod_{a=1}^{3}\left(\{\Delta_{a}\}_{\tau}-\frac{1+\eta_{1}}{2}\right)-\frac{i\pi LK}{4\tau^{2}}\prod_{a=1}^{3}\left(\{4\Delta_{a}\}_{\tau}-\frac{1+\eta_{4}}{2}\right)\] (15) \[- \frac{i\pi((2L-K)^{2}+K)}{4\tau^{2}}\prod_{a=1}^{3}\left(\{2\Delta_{a}\}_{\tau}-\frac{1+\eta_{2}}{2}\right)-i\pi(4L^{2}+K^{2})\] \[+ \frac{i\pi(6-5\eta_{1})(2L\!-\!K)}{12}\!+\!\frac{i\pi(12\!-\!5\eta_{2})((2L\!-\!K)^{2}+K)}{12}\!+\!\frac{i\pi(12\!-\!5\eta_{4})LK}{3},\] with the \(\eta_{i}\) defined as before and \[\begin{cases}k_{1}=-\frac{1}{2}\big{(}2\eta_{1}+(2L-K)\eta_{2}+K\eta_{4}\big{)}\\ k_{2}=-\frac{1}{2}\big{(}2\eta_{1}+(2L-K)\eta_{2}+K\eta_{4}\big{)}\\ k_{3}=-\big{(}-2\eta_{1}+(K-2L+2)\eta_{2}+2L\eta_{4}\big{)}\\ k_{4}=-2\big{(}-(K+1)\eta_{1}+(K+1-L)\eta_{2}+L\eta_{4}\big{)}.\end{cases}\] (16) * When \(N_{c}\) is even there is also a self-paired saddle with \(N_{c}/2\) holonomies at \(0\) and \(1/2\); such a saddle represents a limiting case of the other two families discussed above. Summarising, the index of \(\mathcal{N}=4\) \(usp(2N_{c})\) SYM receives contributions from \(N_{c}+1\) distinct saddle points, divided into two families. 
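The counting of \(N_{c}+1\) distinct saddle points can be reproduced directly from the ranges of the two families quoted above; a small sketch (the helper name is ours, not from the paper):

```python
def distinct_saddles(Nc):
    # first family: pairs (L, Nc-L) with L = 0, ..., floor((Nc-1)/2)
    first = (Nc - 1) // 2 + 1
    # second family: (L, L, Nc-2L) with L in the same range
    second = (Nc - 1) // 2 + 1
    # self-paired saddle (Nc/2 holonomies at 0 and at 1/2) for even Nc
    return first + second + (1 if Nc % 2 == 0 else 0)

# the two families always combine into Nc + 1 distinct saddle points
assert all(distinct_saddles(Nc) == Nc + 1 for Nc in range(2, 30))
```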
Employing the pairing degeneracy discussed above, the saddles of the two families can be combined naturally into one, parameterised by \(\mathcal{I}_{j}\), with \(j=0,\ldots,N_{c}\) and defined as follows: \[\begin{cases}\mathcal{I}_{j}\equiv\mathcal{I}_{j,j,N_{c}-2j}^{usp(2N_{c})}&0\leq j\leq\lfloor\frac{N_{c}}{2}\rfloor\\ \mathcal{I}_{j}\equiv 2\mathcal{I}_{j,N_{c}-j}^{usp(2N_{c})}&\lceil\frac{N_{c}}{2}\rceil\leq j\leq N_{c}.\end{cases} \tag{17}\] The limiting case \(j=\frac{N_{c}}{2}\) is common to both families and connects them, resulting in a well-ordered distribution of saddles, shown in Figure 1. ### Explicit evaluation In this section, we perform a complete analysis of the contributions to the SCI from each saddle. First, we focus on the leading-order Cardy-like limit, so as to identify the dominant saddle points in the regions of charges denoted as _physical_ in [33]. In our language these correspond to the choices \(\eta_{1}=-\eta_{2}=\pm 1\), which reduce to the cases discussed in [33] when \(\{\Delta_{a}\}=1/3,2/3\). We show that a leading-order analysis in \(1/|\tau|^{2}\), while being enough to determine the dominant saddle points in each region of charges, can miss physical properties of these vacua, such as S-duality and possible perturbative instabilities, that emerge in the calculation at subleading orders in \(|\tau|\) and that are encoded in the three-sphere CS partition functions. For this reason, we claim that a complete expansion beyond the leading order in the Cardy-like limit is necessary to achieve a physically reliable result. The leading-order \(1/|\tau|^{2}\) competition between the saddles in each family is determined by a parabola. For the first family we have \[-\frac{\tau^{2}}{i\pi}S_{L,N_{c}-L}^{usp(2N_{c})}=\left(2(N_{c}-2L)^{2}+N_{c}\right)\alpha_{1}-L(N_{c}-L)\alpha_{2}, \tag{18}\] where we defined \[\alpha_{I}\equiv\prod_{a=1}^{3}\left(\{I\Delta_{a}\}_{\tau}-\frac{1+\eta_{I}}{2}\right)\quad. 
\tag{19}\] We can determine the dominant saddle point in both chambers \(\eta_{1}=-\eta_{2}=\eta_{4}=\mp 1\). The net effect of switching from the first to the second region is to change the concavity of the parabola, switching from an M-shaped effective potential to a W-shaped one in the language of [9]. The vertex of (18) sits at \[L=\frac{N_{c}}{2} \tag{20}\] as expected due to the pairing between the \(L\) and the \(N_{c}-L\) saddles. Thus, the leading saddle is either the one closer to \(N_{c}/2\) or the saddle with \(N_{c}\) holonomies at zero, depending on the chamber of the chemical potentials we are in. Analogously, for the second family we find that the vertex of the corresponding parabola sits at \[L=\frac{N_{c}}{4}-\frac{8\alpha_{1}-\alpha_{2}}{2(8\alpha_{2}-\alpha_{4})} \tag{21}\] In the "physical" regions the relation \(\frac{8\alpha_{1}-\alpha_{2}}{2(8\alpha_{2}-\alpha_{4})}<0\) always holds, and it allows us to conclude that the leading saddle for this family is either the one with \(N_{c}\) holonomies at \(u_{i}=1/4\) or the saddle closer to the vertex (21), defined by some \(L\) (say \(L^{*}\)) for symmetry reasons. However, we notice that the two parabolas describing the two families of saddles have opposite concavities; thus, depending on the region of the chemical potentials, the leading saddle is either the one with the \(N_{c}\) holonomies sitting at zero, which dominates over the \(L^{*}\) saddle of the second family, or the one with \(N_{c}\) holonomies at \(1/4\) (in the W-shaped potential), as shown in Fig. 1. Borrowing again the terminology of [9], we refer to the choice where the vanishing holonomies dominate as the M-wing, while the region where the non-vanishing holonomies dominate is referred to as the W-wing. The first case corresponds to the choice \(\eta_{1}=-\eta_{2}=-1\), while the second case corresponds to \(\eta_{1}=-\eta_{2}=1\). 
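The pairing responsible for the vertex (20) can be made explicit on the leading-order coefficient (18), which is invariant under \(L\leftrightarrow N_{c}-L\) for any value of \(\alpha_{1,2}\). A quick numerical sketch (the helper function and the sample values of the \(\alpha\)'s are ours):

```python
from fractions import Fraction

def usp_leading(L, Nc, a1, a2):
    # leading 1/|tau|^2 coefficient of the first-family saddle (L, Nc-L),
    # eq. (18): (2(Nc-2L)^2 + Nc) a1 - L(Nc-L) a2
    return (2 * (Nc - 2 * L) ** 2 + Nc) * a1 - L * (Nc - L) * a2

Nc = 10
# sample values for alpha_1, alpha_2; the symmetry below holds for any choice
a1, a2 = Fraction(1, 27), Fraction(-1, 27)

# the coefficient is invariant under L <-> Nc - L, so the parabola has
# its vertex at L = Nc/2, eq. (20)
assert all(usp_leading(L, Nc, a1, a2) == usp_leading(Nc - L, Nc, a1, a2)
           for L in range(Nc + 1))
```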
Figure 1: Behaviour of the saddle points for \(usp(2N_{c})\) as \(x\coloneqq j/N_{c}\) ranges from \(0\) to \(1\) and \(N_{c}=10\). On the left: \(\{\Delta_{a}\}=1/3\); on the right: \(\{\Delta_{a}\}=2/3\). ## 3 The Cardy-like limit of the SCI of \(\mathcal{N}=4\) \(so(2N_{c}+1)\) SYM In this section we focus on the Cardy-like limit evaluation of the SCI for 4d \(\mathcal{N}=4\) SYM with \(so(2N_{c}+1)\) gauge algebra, determining the general structure of the saddle points. The index is given by \[\begin{split}\mathcal{I}^{so(2N_{c}+1)}=&\frac{(p;p)_{\infty}^{N_{c}}(q;q)_{\infty}^{N_{c}}}{2^{N_{c}}N_{c}!}\prod_{a=1}^{3}\tilde{\Gamma}(\Delta_{a})^{N_{c}}\int\prod_{i=1}^{N_{c}}\mathrm{d}u_{i}\frac{\prod_{a=1}^{3}\prod_{i<j}\tilde{\Gamma}(\pm u_{ij}^{(\pm)}+\Delta_{a})}{\prod_{i<j}\tilde{\Gamma}(\pm u_{ij}^{(\pm)})}\cdot\\ &\cdot\frac{\prod_{a=1}^{3}\prod_{i=1}^{N_{c}}\tilde{\Gamma}(\pm u_{i}+\Delta_{a})}{\prod_{i=1}^{N_{c}}\tilde{\Gamma}(\pm u_{i})}.\end{split} \tag{3.1}\] We define the effective action \[\begin{split} S_{\mathrm{eff}}^{so(2N_{c}+1)}&=\sum_{a=1}^{3}\left(\sum_{i<j}\log\tilde{\Gamma}\left(\pm u_{ij}^{(\pm)}+\Delta_{a}\right)+\sum_{i=1}^{N_{c}}\log\tilde{\Gamma}\left(\pm u_{i}+\Delta_{a}\right)+N_{c}\log\tilde{\Gamma}\left(\Delta_{a}\right)\right)\\ &+\sum_{i<j}\log\theta_{0}\left(\pm u_{ij}^{(\pm)}\right)+\sum_{i=1}^{N_{c}}\log\theta_{0}\left(\pm u_{i}\right)+2N_{c}\log(p;p)_{\infty},\end{split} \tag{3.2}\] such that the index is \[\mathcal{I}^{so(2N_{c}+1)}=\frac{1}{2^{N_{c}}N_{c}!}\int\prod_{i=1}^{N_{c}}\mathrm{d}u_{i}\,e^{-S_{\mathrm{eff}}^{so(2N_{c}+1)}} \tag{3.3}\] General solutions to the saddle point equations beyond the leading-order Cardy-like limit can be found by first focusing on the leading term in the Cardy-like limit and then by expanding around those solutions accordingly, following the strategy of [22]. As \(|\tau|\to 0\) the saddles will converge to the leading ones, capturing the full 
behaviour of the index up to exponentially suppressed terms in \(|\tau|\). The saddle point equations are \[\sum_{a=1}^{3}\bigg{[}\sum_{\begin{subarray}{c}j=1\\ j\neq k\end{subarray}}^{N_{c}}\bigg{(}B_{2}\{u_{ij}^{(\pm)}\!+\!\Delta_{a}\}\!-\!B_{2}\{-u_{ij}^{(\pm)}+\Delta_{a}\}\bigg{)}+B_{2}\{u_{i}+\Delta_{a}\}\!-\!B_{2}\{-u_{i}+\Delta_{a}\}\bigg{]}\!=\!0. \tag{3.4}\] The absence of a factor 2 in the \(\pm u_{i}\) roots of \(so(2N_{c}+1)\) with respect to the ones of \(usp(2N_{c})\) plays a crucial role in the structure of the saddles, leading to a rather different behaviour from the one of the symplectic case, as shown in Figure 2\({}^{3}\). We found that the solution with \(N_{c}\) holonomies at zero, already studied in [23], lies inside a more general family of saddles parameterised by \(L=0,\ldots,N_{c}\), which counts the number of holonomies set to zero. The general saddle point is of the form \((L,N_{c}-L)\), with \(L\) holonomies at zero and \(N_{c}-L\) at \(1/2\). As opposed to the symplectic case, there is no pairing between the \(L\) and \(L^{\prime}=N_{c}-L\) saddles. Footnote 3: Observe that the gauge symmetry breaking pattern is reminiscent of the one dictated by the split of an orientifold \(O4\) plane under T-duality along a compact direction. It would be interesting to investigate this relation further. The saddle point beyond the leading order in the Cardy-like limit and the corresponding subleading contributions to the index are then obtained by expanding around the leading saddles. 
### \(L\) holonomies at \(u_{i}=0\), \(K=N_{c}-L\) holonomies at \(u_{i}=1/2\) We make the following ansatz for the general saddle point: \[\hat{\mathbf{u}}=\begin{cases}\bar{u}_{j}=v_{j}\tau&j=1,...,L\\ \frac{1}{2}+\bar{u}_{L+r}\equiv\frac{1}{2}+\bar{w}_{r}=\frac{1}{2}+w_{r}\tau&r=1,...,K.\end{cases} \tag{3.5}\] Figure 2: Vacua distribution together with the corresponding group of the effective CS theory. Then, expanding the effective action near \(\hat{\mathbf{u}}\) for \(|\tau|\to 0\) we obtain \[S^{so(2N_{c}+1)}_{L,K}=-\frac{i\pi}{\tau^{2}}\left((2(L-K)-1)\eta_{1}+2K\eta_{2}\right)\sum_{i=1}^{L}\bar{u}_{i}^{2}+\sum_{i<j}^{L}\log\left(2\sin\left(\frac{\pm\pi\bar{u}_{ij}^{(\pm)}}{\tau}\right)\right)+\] \[+\sum_{i=1}^{L}\log\left(2\sin\left(\frac{\pm\pi\bar{u}_{i}}{\tau}\right)\right)-N_{c}\log(\tau)+\] \[-\frac{i\pi}{\tau^{2}}\left((2(K-L)-3)\eta_{1}+(2L+1)\eta_{2}\right)\sum_{r=1}^{K}\bar{w}_{r}^{2}+\sum_{r<s}^{K}\log\left(2\sin\left(\frac{\pm\pi\bar{w}_{rs}^{(\pm)}}{\tau}\right)\right)+\] \[-\frac{i\pi}{\tau^{2}}\left(2(L-K)^{2}+L-3K\right)\prod_{a=1}^{3}\left(\{\Delta_{a}\}_{\tau}-\frac{1+\eta_{1}}{2}\right)\] \[-\frac{i\pi}{2\tau^{2}}\left(K(1+2L)\right)\prod_{a=1}^{3}\left(\{2\Delta_{a}\}_{\tau}-\frac{1+\eta_{2}}{2}\right)+\] \[+i\pi\left(\frac{\left(6-5\eta_{1}\right)(2(L-K)^{2}+L-3K)}{12}+\frac{(12-5\eta_{2})K(1+2L)}{6}-N_{c}^{2}\right), \tag{3.6}\] where again \(\eta_{1}=\pm 1\) and \(\eta_{2}=\pm 1\). The action (3.6) is manifestly not invariant under \(L\leftrightarrow K\), in contrast to the symplectic case. Upon changing variables \(\bar{u}_{j}=-i\sigma_{j}\tau\), we can read off the three-sphere partition function of a 3d pure CS theory. Such CS theories arise by expanding the holonomies around the \(u_{i}=0\) and \(u_{i}=1/2\) vacua, and they give rise to an odd- and an even-rank orthogonal gauge group, respectively. 
We obtain the partition function of an \(s(o(2L+1)_{k_{1}}\times o(2K)_{k_{2}})\) pure CS theory with CS levels \[\begin{cases}k_{1}=-(2(L-K)-1)\eta_{1}-2K\eta_{2}\\ k_{2}=-(2(K-L)-3)\eta_{1}-(2L+1)\eta_{2}\end{cases} \tag{3.7}\] The index is then \[\mathcal{I}=\sum_{L=0}^{N_{c}}\mathcal{I}_{L,N_{c}-L},\quad\text{where}\quad\mathcal{I}_{L,K}=\tau^{N_{c}}e^{-\frac{i\pi((2K-1)K+(2L+1)L)}{2}}\mathcal{I}_{0}Z_{S^{3}}^{s(o(2L+1)_{k_{1}}\times o(2K)_{k_{2}})} \tag{3.8}\] and \[\log\mathcal{I}_{0} =-\frac{i\pi}{\tau^{2}}\left(2(L-K)^{2}+L-3K\right)\prod_{a=1}^{3}\left(\{\Delta_{a}\}_{\tau}-\frac{1+\eta_{1}}{2}\right)\] \[-\frac{i\pi}{2\tau^{2}}\left(K(1+2L)\right)\prod_{a=1}^{3}\left(\{2\Delta_{a}\}_{\tau}-\frac{1+\eta_{2}}{2}\right)-N_{c}\log(\tau)+\] \[+i\pi\left(\frac{\left(6-5\eta_{1}\right)(2(L-K)^{2}+L-3K)}{12}+\frac{(12-5\eta_{2})K(1+2L)}{6}-N_{c}^{2}\right). \tag{3.9}\] ### General behaviour of the saddles Again, the dominant saddle point in the Cardy-like limit depends on the region of chemical potentials we are in. To identify the leading saddle it is enough to focus on the leading-order term. The behaviour of the saddles is determined by a second-degree polynomial in \(L\in[0,N_{c}]\): \[-\frac{\tau^{2}}{i\pi}S_{L,N_{c}-L}^{so(2N_{c}+1)}=\left(2(N_{c}-2L)^{2}+4L-3N_{c}\right)\alpha_{1}+\frac{1}{2}\left((N_{c}-L)(1+2L)\right)\alpha_{2}, \tag{3.10}\] where \(\alpha_{1,2}\) are defined in (2.19). The parabola has its vertex at \[L=\frac{2N_{c}-1}{4} \tag{3.11}\] independently of the chemical potentials. Thus, it follows that the dominant saddle is either the one closer to the vertex, with \(L=\left\lfloor\frac{N_{c}}{2}\right\rfloor\) since \(L\) must be an integer, or the saddle with \(L=N_{c}\) at the extremum of the domain of the parabola, depending on the region of chemical potentials we are considering. 
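The independence of the vertex (3.11) from the chemical potentials can be checked numerically: a parabola is symmetric about its vertex, so (3.10) must take equal values at \(L^{*}\pm d\) for any choice of \(\alpha_{1,2}\). A quick sketch (the helper name and the sample values are ours):

```python
from fractions import Fraction

def so_leading(L, Nc, a1, a2):
    # leading 1/|tau|^2 coefficient of the (L, Nc-L) saddle, eq. (3.10):
    # (2(Nc-2L)^2 + 4L - 3Nc) a1 + (1/2)(Nc-L)(1+2L) a2
    return (2 * (Nc - 2 * L) ** 2 + 4 * L - 3 * Nc) * a1 \
        + Fraction(1, 2) * (Nc - L) * (1 + 2 * L) * a2

Nc = 8
vertex = Fraction(2 * Nc - 1, 4)   # the claimed vertex (3.11)

# the symmetry about the vertex must hold for ANY alpha_1, alpha_2
# if the vertex really is independent of the chemical potentials
for a1, a2 in [(Fraction(1, 27), Fraction(-1, 27)),
               (Fraction(-5, 7), Fraction(3, 11))]:
    for d in [1, Fraction(3, 2), Fraction(7, 3)]:
        assert so_leading(vertex + d, Nc, a1, a2) == \
               so_leading(vertex - d, Nc, a1, a2)
```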
The saddle with \(L=0\) is penalised, due to the vertex being closer to zero than to \(N_{c}\); only in the large \(N_{c}\) limit do we expect to recover a pairing between the \(L\) and the \(L^{\prime}=N_{c}-L\) saddles, as the symmetry axis of the parabola goes to \(N_{c}/2\). In the "physical" regions we are in the M-wing or in the W-wing. In the first case the dominant saddle point is the one with \(N_{c}\) holonomies at zero, while in the second case the dominant saddle is the one with \(L=\left\lfloor\frac{N_{c}}{2}\right\rfloor\). Summarizing, the M-wing is dominated by vanishing holonomies, while the saddle with \(L=\left\lfloor\frac{N_{c}}{2}\right\rfloor\) holonomies at \(u_{i}=0\) and the remaining at \(u_{i}=1/2\) dominates the W-wing. Figure 3: The behaviour of (3.10) as \(x\coloneqq L/N_{c}\) ranges from \(0\) to \(1\). The vertical line passes through the vertex of the parabola. For presentation purposes we plotted the case with \(N_{c}=8\). On the left: \(\{\Delta_{a}\}=1/3\); on the right: \(\{\Delta_{a}\}=2/3\). ## 4 S-duality In this section we study the fate of 4d S-duality in the Cardy-like limit of the SCI. We start by matching the \(1/|\tau|^{2}\) leading-order contribution to the index when all the holonomies are vanishing, in both the symplectic and the orthogonal case. This saddle dominates the index in the M-wing of the potential and it reproduces the entropy function of the would-be holographic dual black hole. Then we match the leading contributions in the region of parameters where the index is in the W-wing. Finally, we consider the fate of S-duality also in the presence of subleading contributions in \(|\tau|\). As discussed above, only a few saddles survive for both \(usp(2N_{c})\) and \(so(2N_{c}+1)\). These saddles are exactly the ones that dominate the index in the M-wing and in the W-wing. In the case of the M-wing the full matching was discussed in [23]. 
In the W-wing we show here that S-duality is preserved because of a non-trivial identity among the pure CS partition functions. ### S-duality at the leading order We begin our analysis by focusing on the leading \(1/|\tau|^{2}\) order expansion of the index. The dominant contributions to the index in each region \(\eta_{1}=-\eta_{2}=\mp 1\) have been identified in the previous sections and read * M-wing (\(\eta_{1}=-1\)): the dominant contribution in the orthogonal case is achieved for vanishing holonomies. The symplectic theory is dominated by the same configuration of holonomies, but the contribution is doubled due to the pairing between the saddle with \(N_{c}\) holonomies at \(u_{i}=0\) and the one with \(N_{c}\) holonomies at \(u_{i}=\frac{1}{2}\). As discussed in [23], the factor 2 degeneracy, understood as the presence of a \(\mathbb{Z}_{2}\) global 1-form symmetry, is not apparent in the orthogonal theory at this order, for any finite \(N_{c}\), and such a factor can be recovered only once subleading corrections in \(|\tau|\) are included. * W-wing (\(\eta_{1}=1\)): the symplectic theory is dominated by the saddle point with \(N_{c}\) holonomies at \(u_{i}=\frac{1}{4}\), while for the orthogonal case the dominant contribution arises when \(L=\lfloor\frac{N_{c}}{2}\rfloor\) holonomies sit at \(u_{i}=0\) and the remaining ones at \(u_{i}=\frac{1}{2}\). At this level the expectation is that S-duality manifests as a matching between the dominant saddles. A natural question regards the role of the regions of chemical potentials. When an EFT interpretation of the underlying 3d pure CS theory is understood, the matching between the saddles is actually constrained by S-duality independently of the specific region of \(\eta_{I}\) we sit in, because the topological sectors identified by the holonomies are equivalent. This is indeed the case for the saddle at vanishing holonomies, for which an EFT interpretation for the CS terms can be recovered. 
To be more explicit, one can readily observe that \[\mathcal{I}^{usp(2N_{c})}_{N_{c},0}=\mathcal{I}^{so(2N_{c}+1)}_{N_{c},0}=\exp\left[-\frac{i\pi N_{c}(2N_{c}+1)}{\tau^{2}}\prod_{a=1}^{3}\left(\{\Delta_{a}\}-\frac{1+\eta_{1}}{2}\right)\right] \tag{4.1}\] which holds for any value of \(\eta_{I}\) and thus persists independently of the specific wing we are in. Notice however that, sticking to order \(1/|\tau|^{2}\), a proper matching can be achieved only by considering the large \(N_{c}\) limit, when the reflexive symmetry \(L\leftrightarrow K\) between the saddles is recovered also in the orthogonal case. In fact, for \(N_{c}\rightarrow\infty\) we get \[\mathcal{I}^{usp(2N_{c})}_{N_{c},0}+\mathcal{I}^{usp(2N_{c})}_{0,N_{c}}=\mathcal{I}^{so(2N_{c}+1)}_{N_{c},0}+\mathcal{I}^{so(2N_{c}+1)}_{0,N_{c}}= \tag{4.2}\] \[= \,2\exp\left[-\frac{i\pi N_{c}(2N_{c}+1)}{\tau^{2}}\prod_{a=1}^{3}\left(\{\Delta_{a}\}-\frac{1+\eta_{1}}{2}\right)\right]\] The same argument cannot be employed for the saddles dominating the W-wing, as the EFT interpretation is less clear. However, also for these saddles the matching extends to any region of \(\{\Delta_{a}\}\), at least at leading order in \(|\tau|\). Indeed, \[\mathcal{I}^{usp(2N_{c})}_{0,0,N_{c}}=\mathcal{I}^{so(2N_{c}+1)}_{\lfloor\frac{N_{c}}{2}\rfloor,\lceil\frac{N_{c}}{2}\rceil}= \tag{4.3}\] \[=\exp\left[\frac{i\pi N_{c}}{\tau^{2}}\prod_{a=1}^{3}\left(\{\Delta_{a}\}-\frac{1+\eta_{1}}{2}\right)-\frac{i\pi(N_{c}^{2}+N_{c})}{4\tau^{2}}\prod_{a=1}^{3}\left(\{2\Delta_{a}\}-\frac{1+\eta_{2}}{2}\right)\right]\] At this level of the discussion the fate of S-duality on the other saddles is unclear. Indeed, we did not find a matching among the indices expanded around such saddles at leading order in \(1/|\tau|^{2}\). The situation is clarified by taking into account the complete expansion in \(|\tau|\), as we will show in the next subsection. 
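The agreement of the saddles with all holonomies at zero can be traced to the coefficients of \(\alpha_{1}\) and \(\alpha_{2}\) in the \(1/\tau^{2}\) terms of (12) and (3.9): at \((L,K)=(N_{c},0)\) both reduce to \(N_{c}(2N_{c}+1)\) and \(0\). A minimal numerical check (the helper names are ours):

```python
from fractions import Fraction

def usp_alpha_coeffs(L, K):
    # coefficients of alpha_1 and alpha_2 in the 1/tau^2 part of
    # log I_0 for usp(2(L+K)), read off from (12)
    return 2 * (L - K) ** 2 + (L + K), L * K

def so_alpha_coeffs(L, K):
    # same coefficients for so(2(L+K)+1), read off from (3.9)
    return 2 * (L - K) ** 2 + L - 3 * K, Fraction(K * (1 + 2 * L), 2)

Nc = 10
# with all Nc holonomies at zero both theories give Nc(2Nc+1) alpha_1
# and no alpha_2 term, so the leading exponents agree
assert usp_alpha_coeffs(Nc, 0) == so_alpha_coeffs(Nc, 0) == (Nc * (2 * Nc + 1), 0)
```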
### Beyond the leading order The 3d CS partition function on \(S^{3}\) is \[Z_{\mathfrak{g}}=\frac{1}{|W|}\int\prod_{i=1}^{\mathrm{rk}\,\mathfrak{g}}d\sigma_{i}e^{\frac{i\pi k\sigma_{i}^{2}}{\omega_{1}\omega_{2}}}\prod_{\alpha}\Gamma_{h}^{-1}(\alpha(\sigma)) \tag{4.4}\] The exact evaluation of this partition function is already known in the literature for algebras of type ABCD and it can be obtained by employing the Weyl character formula and its generalisations. As discussed before, the possible symmetry breaking patterns of the original gauge group for each holonomy configuration fall into an algebra of type ABCD. Therefore, we can get an explicit evaluation of the SCI on each saddle beyond the semiclassical expansion, with the most significant contribution coming from the CS partition functions. The explicit expression of such partition functions for each algebra is presented in appendix C. All of them exhibit similar features. The general structure is \[Z_{ABCD(m)_{k}}=\frac{\exp{(i\pi f(m,k))}}{g(m,k)}\prod_{n\in I_{m}}2\sin\Bigl{(}\pi\frac{n}{k}\Bigr{)}^{d(n)}, \tag{4.5}\] where \(I_{m}\) is some subset of consecutive (semi-)integers, typically depending on the rank \(m\) of the group, \(k\) is the CS level of the theory, while \(f(m,k)\) and \(g(m,k)\) are two functions depending on the details of \(\mathfrak{g}\), with \(g(m,k)\) such that \(g(m,0)=0\) and \(f(m,k)\) real, so that \(\exp(i\pi f(m,k))\) is a phase. The function \(d(n)\) represents a possible degeneracy, due to possible multiple occurrences of the same integer \(n\). A general consequence of (4.5) is that the level plays a crucial role in determining the physical relevance of the saddle point. First, for \(k=0\) the TFT is not well defined. Second, when \(k\) lies within \(I_{m}\) the partition function is zero. The only case in which (4.5) is non-vanishing is when \(k>\max(I_{m})\). 
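The vanishing mechanism behind (4.5) is elementary: if the level \(k\) lies in \(I_{m}\), the factor with \(n=k\) is \(2\sin\pi=0\) and the whole product vanishes, while for \(k>\max(I_{m})\) every argument lies strictly between \(0\) and \(\pi\). A schematic numerical illustration, with the purely illustrative choice \(I_{m}=\{1,\ldots,m\}\) and trivial degeneracy \(d(n)=1\) (not tied to a specific gauge algebra):

```python
import math

def sine_product(m, k):
    # schematic version of the product in (4.5) with I_m = {1, ..., m}
    # and degeneracy d(n) = 1 (an illustrative choice)
    prod = 1.0
    for n in range(1, m + 1):
        prod *= 2 * math.sin(math.pi * n / k)
    return prod

m = 5
# k inside I_m: the factor n = k gives sin(pi) = 0, so the product vanishes
assert abs(sine_product(m, k=3)) < 1e-12
# k > max(I_m): every factor has 0 < n/k < 1, so the product is non-zero
assert sine_product(m, k=7) > 0
```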
Since the CS level is determined by the holonomy configuration of a chosen saddle, we can predict the stability and the contribution of such a saddle to the index just by studying the CS levels of the emerging pure CS theories, expanding the effective action of the matrix model near such a vacuum. Focusing first on \(\mathcal{N}=4\) \(usp(2N_{c})\) SYM, the possible patterns of symmetry breaking found can be divided into two categories: \(usp(2N_{c})\to usp(2L)\times usp(2K)\) and \(usp(2N_{c})\to usp(2L)\times usp(2L)\times su(K)\times u(1)\). For the first case, by inspecting (C.6) and remembering that the CS levels for the two pure CS theories are defined as in (2.13), we find that \(Z^{usp(2(N_{c}-L))_{k_{2}}}=0\) when \(0<L\leq\lfloor N_{c}/2\rfloor\). Moreover, under the reflexive symmetry \(L\leftrightarrow N_{c}-L\) the role of \(Z^{usp(2L)_{k_{1}}}\) and \(Z^{usp(2(N_{c}-L))_{k_{2}}}\) is exchanged, and we can conclude that \(Z^{usp(2L)_{k_{1}}}=0\) when \(\lfloor N_{c}/2\rfloor\leq L<N_{c}\). In addition, it can sporadically happen that the CS level is zero for some saddles with \(L\neq 0,1/2\). Thus, beyond the semiclassical approximation the only non-vanishing saddle arising from the first family is the one with \(N_{c}\) holonomies at \(u_{i}=0\), together with its paired one with \(N_{c}\) holonomies at \(u_{i}=1/2\). All the other saddles give vanishing partition functions and they are then perturbatively unstable. The same argument can be applied to the second family of saddles, leaving only one non-vanishing saddle, with holonomy configuration defined by \(N_{c}\) holonomies at \(u_{i}=1/4\). While the \(1/|\tau|^{2}\) leading-order calculation identifies these two saddles as the dominant contributions to the index, the analysis beyond the leading order shows that they are the only contributions to the index. In addition, S-duality cannot be established without the explicit evaluation of the CS partition function obtained by a perturbation around the saddle. 
This is because S-duality is expected to manifest in the Cardy-like limit as a matching between saddles of the two theories. Then, without an analysis of the subleading contributions in \(|\tau|\) of each saddle to the index, not only is there no clear understanding of the role played by the subleading saddles within the context of S-duality, but even a partial matching between the dominant ones cannot be achieved, as discussed in [23]. The story proceeds in a similar way for the orthogonal \(so(2N_{c}+1)\) case. In this case, we have just one family of saddles, with \(L\) holonomies at \(u_{i}=0\) and \(N_{c}-L\) at \(u_{i}=1/2\), as discussed in Section 3. The original \(so(2N_{c}+1)\) gauge algebra breaks into \(s(o(2L+1)\times o(2(N_{c}-L)))\) and a \(Z^{s(o(2L+1)_{k_{1}}\times o(2(N_{c}-L))_{k_{2}})}\) factor (with \(k_{1}\) and \(k_{2}\) defined in (3.7)) appears in the evaluation of the subleading contributions in \(|\tau|\) to the index in the Cardy-like limit. Using the results presented in appendix C for the partition function of the CS gauge theories with orthogonal gauge algebra, together with (3.7) for the CS levels, we find that the only non-vanishing saddles are the ones with either \(N_{c}\) holonomies at zero or \(L=\lfloor\frac{N_{c}}{2}\rfloor\) holonomies at \(u_{i}=0\) and the remaining ones at \(u_{i}=\frac{1}{2}\). These have already been identified as the dominant contributions to the index in the M-wing and W-wing respectively. Summarising, S-duality predicts a matching between two pairs of saddles of the two theories, which must hold independently of the regions of charges that we are considering. In this sense also the distinction between the W- and M-shaped regions of the potential is unnecessary, because we have matched the whole expansions in \(|\tau|\) in both the wings\({}^{4}\).
The SCI for the two distinct 4d S-dual SYM theories reduces to Footnote 4: Observe that we did not mention the contribution at order \(|\tau|\) in our calculation: it corresponds, for vanishing holonomies, to the supersymmetric Casimir energy [32] and it always matches across dualities. Similarly, we have matched those terms across S-duality also in the cases without vanishing holonomies. * \(usp(2N_{c})\): \(\mathcal{I}^{usp(2N_{c})}=2\mathcal{I}_{N_{c},0}+\mathcal{I}_{0,0,N_{c}}\). * \(so(2N_{c}+1)\): \(\mathcal{I}^{so(2N_{c}+1)}=\mathcal{I}_{N_{c},0}+\mathcal{I}_{\lfloor\frac{N_{c}}{2}\rfloor,\lceil\frac{N_{c}}{2}\rceil}\). The saddles with vanishing holonomies agree in the two theories, as already discussed in [23]. For both theories their contribution to the SCI is \[\log\mathcal{I}_{N_{c},0}\sim-\frac{i\pi N_{c}(2N_{c}+1)}{\tau^{2}}\prod_{a=1}^{3}\left(\{\Delta_{a}\}_{\tau}-\frac{1+\eta_{1}}{2}\right)+\log 2. \tag{4.6}\] This result holds thanks to the crucial role played by the evaluation of the CS partition function, responsible in the orthogonal case for the appearance of a \(\log 2\), related to the \(\log|G|\) correction to the black hole entropy, discussed in [23] and understood as the presence of a 1-form symmetry. It remains to show that the saddle with \(N_{c}\) holonomies at \(u_{i}=\frac{1}{4}\) of the \(usp(2N_{c})\) theory agrees with the corresponding saddle with \(L=\lfloor\frac{N_{c}}{2}\rfloor\) holonomies at \(u_{i}=0\) and the remaining ones at \(u_{i}=\frac{1}{2}\) of the \(so(2N_{c}+1)\) theory.
In the symplectic theory the contribution of the saddle to the index is \[\mathcal{I}_{0,0,N_{c}}=\tau^{N_{c}}e^{-i\pi\frac{N_{c}^{2}}{2}}\mathcal{I}_{0}Z_{S^{3}}^{su(N_{c})_{h_{1}}}Z_{S^{3}}^{u(1)_{h_{2}}}, \tag{4.7}\] with \[\log\mathcal{I}_{0}=-\frac{i\pi N_{c}(N_{c}+1)}{4\tau^{2}}\prod_{a=1}^{3}\left(\{2\Delta_{a}\}_{\tau}-\frac{1+\eta_{2}}{2}\right)+\frac{i\pi N_{c}}{\tau^{2}}\prod_{a=1}^{3}\left(\{\Delta_{a}\}_{\tau}-\frac{1+\eta_{1}}{2}\right)+\frac{5i\pi N_{c}}{12}\left(\eta_{1}-(N_{c}+1)\eta_{2}\right)+\frac{i\pi N_{c}}{2}-N_{c}\log(\tau). \tag{4.8}\] For the orthogonal case, when \(L=\lfloor\frac{N_{c}}{2}\rfloor\), the general expression (3.8) reduces to \[\mathcal{I}_{\lfloor\frac{N_{c}}{2}\rfloor,\lceil\frac{N_{c}}{2}\rceil}=\tau^{N_{c}}e^{-\frac{i\pi N_{c}^{2}}{2}}\mathcal{I}_{0}Z_{S^{3}}^{s\left(o(2\lfloor\frac{N_{c}}{2}\rfloor+1)_{k_{1}}\times o(2\lceil\frac{N_{c}}{2}\rceil)_{k_{2}}\right)}, \tag{4.9}\] where \[\log\mathcal{I}_{0}=\frac{i\pi N_{c}}{\tau^{2}}\prod_{a=1}^{3}\left(\{\Delta_{a}\}_{\tau}-\frac{1+\eta_{1}}{2}\right)-\frac{i\pi N_{c}(N_{c}+1)}{4\tau^{2}}\prod_{a=1}^{3}\left(\{2\Delta_{a}\}_{\tau}-\frac{1+\eta_{2}}{2}\right)+\frac{5i\pi N_{c}}{12}\left(\eta_{1}-(N_{c}+1)\eta_{2}\right)+\frac{i\pi N_{c}}{2}-N_{c}\log(\tau) \tag{4.10}\] exactly matches the corresponding term (4.8) of the symplectic case. Assuming S-duality is preserved, a non-trivial integral identity between products of CS partition functions is expected. Thus, focusing on the regions where \(\eta_{1}=-\eta_{2}=-1\)\({}^{5}\), it remains to show that Footnote 5: The same identities hold also for the case \(\eta_{1}=-\eta_{2}=1\), which is related to the one discussed here by a parity transformation.
* \(N_{c}=2m\): \[Z_{S^{3}}^{s(o(2m+1)_{2m+1}\times o(2m)_{2m+4})}=Z_{S^{3}}^{su(2m)_{2m+4}}Z_{S^{3}}^{u(1)_{4(2m+1)}}.\] (4.11) * \(N_{c}=2m+1\): \[Z_{S^{3}}^{s(o(2m+2)_{2m+2}\times o(2m+1)_{2m+5})}=Z_{S^{3}}^{su(2m+1)_{2m+5}}Z_{S^{3}}^{u(1)_{4(2m+2)}}\] (4.12) It turns out that these identities indeed hold. The complete proof is presented in appendix D. At last, we have achieved a matching between all the saddle points emerging in the Cardy-like limit of the SCI for \({\cal N}=4\) SYM theory with \(usp(2N_{c})\) and \(so(2N_{c}+1)\), thus recovering S-duality in the Cardy-like limit of the index at finite \(N_{c}\). To conclude the analysis we comment on the two special cases of \(N_{c}=1\) and \(N_{c}=2\), where the isomorphisms between classical Lie algebras extend the matching between the saddle points to all the saddles of the two theories. We have * When \(N_{c}=1\), the isomorphism is made explicit upon changing variables in the SCI as \(u^{so}=2v^{usp}\), implying \({\cal I}^{usp(2)}={\cal I}^{so(3)}\). Accounting for the pairing degeneracy of the saddles with \(v=0\) and \(v=\frac{1}{2}\), we obtain the expected mapping between saddles: \[\begin{array}{ccc}&usp(2)&&so(3)\\ &0,\,1/2\,\,\longmapsto&0\\ &1/4\,\,\longmapsto&1/2.\end{array}\] (4.13) * The case of \(N_{c}=2\) is physically more interesting, being the only case where a third matching between saddles of the two theories appears. Again, defining \(u^{so}_{1,2}=v^{usp}_{1}\pm v^{usp}_{2}\), one can easily show that the two indices (2.1) and (3.1) can be mapped into each other.
The corresponding mapping between the saddles in the two theories is the following: \[\begin{array}{ccc}&usp(4)&&so(5)\\ (0,0),(1/2,1/2)&\longmapsto&(0,0)\\ (0,1/2)&\longmapsto&(1/2,1/2)\\ (1/4,1/4)&\longmapsto&(0,1/2).\end{array}\] (4.14) Besides the two saddles already discussed in full generality in the previous section, a third matching appears between the \((0,1/2)\) saddle of \(usp(4)\) and the \((1/2,1/2)\) saddle of \(so(5)\), as a consequence of the algebra isomorphism relating the two SCIs. However, the matching survives only at order \(1/|\tau|^{2}\) in the Cardy-like expansion: once the CS partition function contributions are included, an instability emerges in the two saddle points, because the CS levels (2.13) and (3.7) vanish in this case. ## 5 Conclusions In this paper we have studied the fate of S-duality in the Cardy-like limit of the SCI of \({\cal N}=4\) SYM for the cases with gauge algebra \(so(2N_{c}+1)\) and \(usp(2N_{c})\). We have found that such duality is preserved (at finite \(N_{c}\)) in a non-trivial way, and only after a complete analysis beyond the leading order \(1/|\tau|^{2}\) in the Cardy-like limit. The calculation of the subleading corrections in \(|\tau|\) requires a saddle-point analysis, and, as we have shown here, there are fewer saddles in the \(so(2N_{c}+1)\) case with respect to those found in [23] for \(usp(2N_{c})\). This is not a problem _per se_: already at the leading order in the W-wing, the index evaluated from the two degenerate \(usp(2N_{c})\) saddles coincides with the one evaluated on a single \(so(2N_{c}+1)\) saddle. However, by evaluating only the leading contribution of each saddle in the Cardy-like limit we have not been able to fully match the index of \(usp(2N_{c})\) with the one of \(so(2N_{c}+1)\). Nevertheless, we have separately matched the indices evaluated on the saddles that dominate in the M-wing and the indices evaluated on the saddles that dominate in the W-wing.
Even if the matching between these saddles holds at finite \(N_{c}\), there can also be other saddles that contribute to the index. We have shown that in general the latter never contribute to the SCI, because the CS partition function generated from the expansion in \(|\tau|\) vanishes for such saddles. We have eventually evaluated the CS partition functions and fully matched the index of the S-dual models in the Cardy-like limit. Many open questions remain. First, it would be interesting to study the fate of S-duality for models with less supersymmetry and multiple gauge groups. In principle we expect that the behavior studied here applies to these cases as well and that similar conclusions can be reached. Motivated by the study of cases with lower supersymmetry, another analysis that we did not perform here regards the study of the subleading corrections for \({\cal N}=4\)\(so(2N_{c})\) SYM. Even if this is a self-dual theory, understanding its behavior may be relevant for extending the analysis to models with \({\cal N}=1,2\), where also \(so(2N_{c})\) gauge nodes can appear. A further generalization regards the fate of Seiberg duality in models with four supercharges. In the toric case one can borrow the results of [22], where the matching is indeed straightforward in the solutions denoted as "C-center". Other solutions are nevertheless possible, as discussed in [16], and it is relevant to understand if they are perturbatively stable, i.e. if they are not vanishing once the subleading terms and the CS actions are considered. Partially related to the last issue, another consequence of our analysis regards the relation between the vacua of the \({\cal N}=1^{*}\) theory on the circle and the vacua extracted from the saddle-point analysis of the SCI in the Cardy-like limit.
We have seen here that such correspondence does not seem to hold in the \(usp(2N_{c})\) and \(so(2N_{c}+1)\) cases, where the number of solutions does not grow with \(N_{c}\) but is fixed to 3 in the first case and 2 in the second case. As observed in [32] this value is related to the presence of a 1-form global symmetry, and its value reflects the number of inequivalent lattices of charges of Wilson and 't Hooft lines under the unbroken subgroup of the center of the gauge group. For example, in the case of the C-center solutions for \(SU(N_{c})/\mathbb{Z}_{C}\)\({\cal N}=4\) SYM such number is \(N_{c}/C\). Furthermore, this number corresponds to a logarithmic correction to the contribution of the degenerate saddle to the index. As discussed in [23], despite the different degeneration of the saddles, there is a matching of these logs in the index of \(usp(2N_{c})\) and \(so(2N_{c}+1)\) in the W-wing, which emerges only after evaluating the CS partition function. It would be interesting to understand whether this behavior holds true in general, i.e. whether the number of lattices associated to the same modding corresponds to the log corrections associated to the index. To conclude, it is also tempting to associate, along the lines of [32; 33], the results obtained here to a 3d effective action emerging from the integration over the massive KK modes coming from the matter multiplets in the reduction on the thermal \(S^{1}\). While this interpretation is expected for the saddles at vanishing holonomies, it is less clear how to interpret our results for the other saddles along these lines. Indeed, even in absence of an EFT interpretation, following the discussion in [33] (see also [28]), one can associate the saddles at non-vanishing holonomies to the expansion of the index when \(e^{2\pi i\tau}\) approaches a root of unity. In the \(su(N_{c})\) case the CS partition function corresponds to an orbifold partition function on \(S^{3}/\mathbb{Z}_{C}\).
Furthermore, such solutions are related to the orbifolds of the Euclidean AdS\({}_{5}\) BH [25]. In our case such orbifold interpretation is not straightforward and it deserves further analysis. ###### Acknowledgements. The work of A.A. and A.Z. has been supported in part by the Italian Ministero dell'Istruzione, Università e Ricerca (MIUR), in part by Istituto Nazionale di Fisica Nucleare (INFN) through the "Gauge Theories, Strings, Supergravity" (GSS) research project and in part by MIUR-PRIN contract 2017CC72MK-003. ## Appendix A The superconformal index In this appendix we survey the main definitions of the superconformal index that we have used in the paper. The index is defined as \[\mathcal{I}\equiv\text{Tr}(-1)^{F}e^{-\beta H}p^{J_{1}+\frac{R}{2}}q^{J_{2}+\frac{R}{2}}\prod_{f=1}^{\mathrm{rk}\,\mathbf{F}}v_{f}^{q_{f}} \tag{A.1}\] In this trace formula, \(J_{i}\) are the angular momenta on the \(S^{3}\), \(R\) is the R-charge and \(q_{f}\) are the flavor charges of the flavor symmetry group \(\mathfrak{f}\), of rank \(\mathrm{rk}\,\mathbf{F}\). The fugacities of these symmetries are denoted as \(p,q\) and \(v_{f}\) respectively. Instead of the trace formula, it is useful to define the index for a gauge theory in terms of a matrix integral over the holonomies of the gauge algebra: \[\mathcal{I}=\frac{(p;p)_{\infty}^{\mathrm{rk}\,\mathfrak{g}}(q;q)_{\infty}^{\mathrm{rk}\,\mathfrak{g}}}{|W_{\mathfrak{g}}|}\oint_{T^{\mathrm{rk}\,\mathfrak{g}}}\prod_{i=1}^{\mathrm{rk}\,\mathfrak{g}}\frac{dz_{i}}{2\pi iz_{i}}\frac{\prod_{a=1}^{n_{\chi}}\prod_{\rho_{a}}\Gamma_{e}((pq)^{R_{a}/2}z^{\rho_{\mathfrak{g}}^{a}}v^{\rho_{\mathfrak{f}}^{a}})}{\prod_{\alpha}\Gamma_{e}(z^{\alpha_{\mathfrak{g}}})} \tag{A.2}\] where \(\rho_{\mathfrak{g},\mathfrak{f}}^{a}\) represent the gauge and flavor weights of the chiral multiplets. In the Cardy-like limit it is more convenient to work explicitly with the chemical potentials conjugated to the charges of the theory.
Therefore we define \[p\equiv e^{2\pi i\tau},\ \ \ \ q\equiv e^{2\pi i\sigma},\ \ \ \ v_{j}\equiv e^{2\pi i\xi_{j}},\ \ \ \ z_{i}\equiv e^{2\pi iu_{i}}. \tag{A.3}\] From (A.1) we can read off the chemical potential for the R-charge \[\nu_{R}\equiv\frac{1}{2}(\tau+\sigma). \tag{A.4}\] In the literature it is pretty common to encode all the charges associated with the global symmetries of the theory in a new set of fugacities \(y_{a}\), together with the charges \(\Delta_{a}\), defined by \[y_{a}\equiv e^{2\pi i\Delta_{a}}\equiv(pq)^{R_{a}/2}v^{\rho_{\mathfrak{f}}^{a}}. \tag{A.5}\] The charges \(\Delta_{a}=\rho_{\mathfrak{f}}^{a}(\xi)+\nu_{R}R_{a}\) encode all the information about the flavor and R-charges of the theory. ## Appendix B Asymptotic formulas In this appendix we collect the main formulas for the special functions and their asymptotic expansions needed to perform a saddle-point evaluation of the SCI in the Cardy-like limit. Let \(\tau,\sigma\in\mathbb{H}\) and \(p=e^{2\pi i\tau}\), \(q=e^{2\pi i\sigma}\). The elliptic gamma function is defined as the infinite product \[\Gamma_{e}(z;p,q)\equiv\Gamma_{e}(z)\coloneqq\prod_{j,k=0}^{\infty}\frac{1-p^{j+1}q^{k+1}/z}{1-zp^{j}q^{k}}. \tag{B.1}\] We also define the modified elliptic gamma function as \[\tilde{\Gamma}_{e}(u;\tau,\sigma)\equiv\tilde{\Gamma}_{e}(u)\coloneqq\Gamma_{e}(e^{2\pi iu};e^{2\pi i\tau},e^{2\pi i\sigma}). \tag{B.2}\] The Pochhammer symbol is defined for complex \(z,q\) with \(|q|<1\) by \[(z;q)_{\infty}\coloneqq\prod_{k=0}^{\infty}\left(1-zq^{k}\right). \tag{B.3}\] We can then define the elliptic function \(\theta_{0}\) as \[\theta_{0}(u;\tau)=(z;q)_{\infty}(q/z;q)_{\infty}=\prod_{j=0}^{\infty}\left(1-e^{2\pi i((j+1)\tau-u)}\right)\left(1-e^{2\pi i(u+j\tau)}\right), \tag{B.4}\] with \(z=e^{2\pi iu}\) and \(q=e^{2\pi i\tau}\), and for our purposes it is enough to remember that it satisfies \[\log\theta_{0}(u)=-\log\tilde{\Gamma}_{e}(u).
\tag{B.5}\] In the Cardy-like limit the asymptotic behaviour of these functions can be written by introducing the \(\tau\)-modded value of a complex number: \[\{u\}_{\tau}\equiv u-\lfloor\operatorname{Re}(u)-\cot(\arg(\tau))\operatorname{Im}(u)\rfloor. \tag{B.6}\] For \(u\in\mathbb{R}\) it reduces to the ordinary fractional part \(u-\lfloor u\rfloor\). Writing \(u\in\mathbb{C}\) as \(u=\tilde{u}+\tau\hat{u}\) with \(\tilde{u},\hat{u}\in\mathbb{R}\), the \(\tau\)-modded value satisfies \[\{u\}_{\tau}=\{\tilde{u}\}+\tau\hat{u}. \tag{B.7}\] Moreover, from the definition it follows that \[\{-u\}_{\tau}=\begin{cases}1-\{u\}_{\tau}&\tilde{u}\not\in\mathbb{Z}\\ -\{u\}_{\tau}&\tilde{u}\in\mathbb{Z}.\end{cases} \tag{B.8}\] Then, as \(|\tau|\to 0\) with \(\operatorname{Im}(\tau)>0\) and \(\arg(\tau)\) fixed, we have the following asymptotic behaviours \[\log\left(q;q\right)_{\infty}\sim-\frac{i\pi}{12}\left(\frac{1}{\tau}+\tau\right)-\frac{1}{2}\log(-i\tau)+\mathcal{O}\left(e^{-\frac{2\pi\sin\arg(\tau)}{|\tau|}}\right), \tag{B.9}\] \[\begin{split}\log\theta_{0}(u;\tau)\sim&\frac{i\pi}{\tau}\{u\}_{\tau}(1-\{u\}_{\tau})+i\pi\{u\}_{\tau}-\frac{i\pi}{6\tau}\left(1+3\tau+\tau^{2}\right)+\\ &+\log\left[\left(1-e^{-\frac{2\pi i}{\tau}\{u\}_{\tau}}\right)\left(1-e^{-\frac{2\pi i}{\tau}(1-\{u\}_{\tau})}\right)\right]+\mathcal{O}\left(e^{-\frac{2\pi\sin\arg(\tau)}{|\tau|}}\right),\end{split} \tag{B.10}\] provided that \(\tilde{u}\not\in\mathbb{Z}\). It is also convenient to introduce the function \(Q(u)\), defined as \[Q(u;\tau)=-\frac{B_{3}(u)}{6\tau^{2}}+\frac{B_{2}(u)}{2\tau}-\frac{5B_{1}(u)}{12}+\frac{\tau}{12}, \tag{B.11}\] where \(B_{n}(u)\) are the Bernoulli polynomials \[B_{1}(u)=u-\frac{1}{2},\hskip 28.452756ptB_{2}(u)=u^{2}-u+\frac{1}{6},\hskip 28.452756ptB_{3}(u)=u^{3}-\frac{3}{2}u^{2}+\frac{u}{2}.
\tag{B.12}\] The Bernoulli polynomials (and their modded version \(B_{n}(\{x\}_{\tau})\)) satisfy the following identity, known as Raabe's formula, \[\sum_{J=0}^{C-1}B_{n}\left(\frac{J}{C}+u\right)=\frac{1}{C^{n-1}}B_{n}\left(Cu\right), \tag{B.13}\] through which we expressed the effective actions in terms of products of \(\{C\Delta\}_{\tau}\) terms, with \(C=1,2,4\). ## Appendix C \(Z_{S^{3}}\) for pure 3d \(\mathcal{N}=2\) CS theories with ABCD gauge algebra In this appendix we collect some useful formulas on the exact evaluation of the three-sphere partition function of pure 3d CS gauge theories with gauge algebra \(\mathfrak{g}\) of ABCD type. The integral formula corresponds to a matrix integral of the form \[Z_{\mathfrak{g}}=\frac{1}{|W|}\int\prod_{i=1}^{\mathrm{rk}\,\mathfrak{g}}d\sigma_{i}\,e^{\frac{i\pi k\sigma_{i}^{2}}{\omega_{1}\omega_{2}}}\prod_{\alpha(\sigma)}\Gamma_{h}^{-1}(\alpha(\sigma)) \tag{C.1}\] where \(\alpha(\sigma)\) represent the roots of the algebra and \(\Gamma_{h}\) are hyperbolic gamma functions \[\Gamma_{h}(z)\equiv\prod_{m,n=0}^{\infty}\frac{(n+1)\omega_{1}+(m+1)\omega_{2}-z}{n\omega_{1}+m\omega_{2}+z}. \tag{C.2}\] Expression (C.1) can be rewritten in terms of sine functions, as in the main text, by employing the following property of the hyperbolic gamma functions \[\frac{1}{\Gamma_{h}(x)\Gamma_{h}(-x)}=-4\sin\!\left(\frac{\pi x}{\omega_{1}}\right)\sin\!\left(\frac{\pi x}{\omega_{2}}\right)\!. \tag{C.3}\] We have observed in the body of the paper that the Cardy-like limit of the SCI gives rise (for charges \(\Delta_{a}\neq 0,2\)) to a matrix integral of the type (C.1) for pure CS theories with gauge algebra \(\mathfrak{g}\) of ABCD type. Such partition functions can be exactly evaluated and the results have already been obtained in the literature. Here we collect these results, because the exact evaluation of \(Z_{\mathfrak{g}}\) has allowed us to perform the precision checks on S-duality.
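Before listing the explicit evaluations, the asymptotic ingredients of appendix B can be sanity-checked numerically. The following sketch (with purely illustrative parameter values) verifies the decomposition and reflection properties of the \(\tau\)-modded value, the standard quasi-periodicity \(\theta_{0}(u+\tau)=-e^{-2\pi iu}\theta_{0}(u)\) that follows from the infinite-product representation (a textbook property, not spelled out above), the small-\(|\tau|\) behaviour of \(\log(q;q)_{\infty}\) with the exponentially small error dropped, and Raabe's formula:

```python
import cmath
import math

def theta0(u, tau, jmax=400):
    """Truncated infinite product for theta_0(u; tau); converges fast for Im(tau) > 0."""
    q = cmath.exp(2j * cmath.pi * tau)
    z = cmath.exp(2j * cmath.pi * u)
    val = 1.0 + 0j
    for j in range(jmax):
        val *= (1 - q ** (j + 1) / z) * (1 - z * q ** j)
    return val

def tau_mod(u, tau):
    """tau-modded value {u}_tau = u - floor(Re u - cot(arg tau) Im u)."""
    u = complex(u)
    c = 1.0 / math.tan(cmath.phase(tau))
    return u - math.floor(u.real - c * u.imag)

# decomposition {u}_tau = {ut} + tau*uh for u = ut + tau*uh, and the reflection property
tau = 0.3 + 0.4j
ut, uh = 2.7, -1.3
u = ut + tau * uh
assert abs(tau_mod(u, tau) - ((ut % 1) + tau * uh)) < 1e-12
assert abs(tau_mod(-u, tau) - (1 - tau_mod(u, tau))) < 1e-12   # ut is not an integer

# quasi-periodicity theta_0(u + tau) = -exp(-2 pi i u) theta_0(u)
t = 0.5j
u0 = 0.13 + 0.07j
assert abs(theta0(u0 + t, t) + cmath.exp(-2j * cmath.pi * u0) * theta0(u0, t)) < 1e-9

# small-|tau| asymptotics of log (q;q)_infty
t = 0.02 + 0.05j
q = cmath.exp(2j * cmath.pi * t)
direct = sum(cmath.log(1 - q ** j) for j in range(1, 3000))
asym = -1j * cmath.pi / 12 * (1 / t + t) - 0.5 * cmath.log(-1j * t)
assert abs(direct - asym) < 1e-6

# Raabe's formula for the Bernoulli polynomials B_1, B_2, B_3
B = {1: lambda x: x - 0.5,
     2: lambda x: x ** 2 - x + 1.0 / 6,
     3: lambda x: x ** 3 - 1.5 * x ** 2 + 0.5 * x}
for n in (1, 2, 3):
    for C in (1, 2, 4):   # the values of C used in the main text
        lhs = sum(B[n](J / C + 0.37) for J in range(C))
        assert abs(lhs - B[n](C * 0.37) / C ** (n - 1)) < 1e-12
```

All checks pass in double precision; the truncation orders are more than enough for the chosen values of \(\tau\).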
Furthermore, we restrict to the case of \(S^{3}\), setting \(\omega_{1}=\omega_{2}\), because we have studied the case with collinear angular momenta in the body of the paper. Let us start surveying the various results. The evaluation of the partition function for pure CS \(\mathfrak{g}=u(N_{c})\) at level \(k\) was performed in [37]. The final formula is \[Z_{u(N_{c})_{k}}=\frac{e^{\frac{i\pi N_{c}\left(3kN_{c}-2N_{c}^{2}-6k+2\right)}{12k}}}{k^{\frac{N_{c}}{2}}}\prod_{m=1}^{N_{c}-1}\left(2\sin\left(\frac{\pi m}{k}\right)\right)^{N_{c}-m} \tag{C.4}\] It is also possible to relate the \(u(N_{c})\) case to the \(su(N_{c})\) one, thanks to the formula \[Z_{su(N_{c})_{k}}=\sqrt{\frac{k}{iN_{c}}}Z_{u(N_{c})_{k}} \tag{C.5}\] Such distinction is important in our analysis, because we often deal with \(su(N_{c})\times u(1)\) gauge theories, with a different CS level for the abelian factor. The partition function for 3d pure CS on \(S^{3}\) with \(\mathfrak{g}=usp(2N_{c})\) at level \(k\) is [38] \[Z^{usp(2N_{c})_{k}}=\frac{\exp\left(-\frac{i\pi N_{c}(2+6N_{c}+4N_{c}^{2}+6k+3|k|)}{12k}\right)}{(2|k|)^{\frac{N_{c}}{2}}}\cdot\prod_{1\leq j<\ell\leq N_{c}}4\sin\left(\frac{\pi(j+\ell)}{2k}\right)\sin\left(\frac{\pi(j-\ell)}{2k}\right)\prod_{j=1}^{N_{c}}2\sin\left(\frac{\pi j}{k}\right). \tag{C.6}\] To conclude the survey we consider the orthogonal cases, studied in [23]. The case of \(\mathfrak{g}=so(2N_{c}+1)\) at level \(k\) gives \[Z^{so(2N_{c}+1)_{k}}=\frac{\exp\left(\frac{i\pi N_{c}\left(12kN_{c}-8N_{c}^{2}-9|k|+2\right)}{12k}\right)}{|k|^{\frac{N_{c}}{2}}}\cdot\big(\text{product of sine factors, as given in [23]}\big). \tag{C.7}\]
## Appendix D Proof of the integral identities In this appendix we show how S-duality is preserved in the Cardy-like limit in the region where the index is dominated by the W-wing shaped potential. As discussed in the paper we have identified two different possibilities for the \(usp(2N_{c})/so(2N_{c}+1)\) duality, depending on the parity of \(N_{c}\). For \(N_{c}=2m\) the expected relation is (4.11), that we reproduce here for the ease of the reader \[Z_{S(O(2m+1)_{2m+1}\times O(2m)_{2m+4})}=Z_{U(2m)_{2m+4,4(2m+1)}}\] (D.1) while for \(N_{c}=2m+1\) the expected relation is (4.12) \[Z_{S(O(2m+2)_{2m+2}\times O(2m+1)_{2m+5})}=Z_{U(2m+1)_{2m+5,4(2m+2)}}\] (D.2) Two comments are in order. First, the normalization of the \(U(1)\) factor follows the conventions of appendix A of [39]. Second, the partition function for the orthogonal case has been denoted here schematically as \(S(O(n)\times O(m))\), but it coincides with the one of \(so(n)\times O(m)\) and \(O(n)\times so(m)\). The two identities (D.1) and (D.2) can be shown explicitly. In the following we give a direct derivation of (D.1). An analogous derivation holds for (D.2). We start by observing that the partition function \(Z_{so(2m+1)_{2m+1}}\) can be evaluated, inferring the result from the evaluation of \(Z_{so(2n+1)_{2n-1}}\) given in [23]. We have \[Z_{so(2m+1)_{2m+1}}=\frac{e^{\frac{1}{12}i\pi m(2m-1)}}{\sqrt{2m+1}}\] (D.3) On the other hand we can estimate the relation between the products of trigonometric functions that appear inside the \(so(2m)\) and the \(U(2m)\) partition functions.
They are \[\mathcal{P}_{SO}\equiv\prod_{p=1}^{m}\prod_{q=p+1}^{m}4\sin\left(\frac{\pi(p-q)}{2(m+2)}\right)\sin\left(\frac{\pi(p+q-2)}{2(m+2)}\right)\] (D.4) and \[\mathcal{P}_{SU}\equiv\prod_{p=1}^{2m-1}\left(2\sin\left(\frac{\pi p}{2m+4}\right)\right)^{2m-p}=\prod_{p=1}^{2m}\prod_{q=p+1}^{2m}2\sin\left(\frac{\pi(p-q)}{2m+4}\right)\] (D.5) respectively. The ratio between such quantities can be simplified by using the partition functions of the pure 3d \(\mathcal{N}=2\) CS \(so(2n)_{2(n-1)}\) and \(U(n)_{n}\) theories. The first one can be read from [23] and for \(Z_{so(2m+6)_{2(m+2)}}\) it gives \[\prod_{p=1}^{m+3}\prod_{q=p+1}^{m+3}4\sin\left(\frac{\pi(p-q)}{2(m+2)}\right)\sin\left(\frac{\pi(p+q-2)}{2(m+2)}\right)=2e^{\frac{1}{2}i\pi\left(m^{2}+m+2\right)}(2m+4)^{\frac{m+3}{2}}\] (D.6) while the second one, for \(Z_{U(2m+4)_{2m+4}}\), can be read from [37] and it gives \[\prod_{p=1}^{2m+4}\prod_{q=p+1}^{2m+4}2\sin\left(\frac{\pi(p-q)}{2m+4}\right)=(2m+4)^{m+2} \tag{D.7}\] Using (D.6) we simplify (D.4) as \[\mathcal{P}_{SO}=2e^{\frac{1}{2}i\pi\left(m^{2}+m+2\right)}(2m+4)^{\frac{m+3}{2}}\Theta_{SO} \tag{D.8}\] with \[\Theta_{SO}=\frac{1}{\prod_{p=1}^{m}\prod_{q=m+1}^{m+3}4\sin\left(\frac{\pi(p-q)}{2(m+2)}\right)\sin\left(\frac{\pi(p+q-2)}{2(m+2)}\right)}\times\frac{1}{\prod_{p=m+1}^{m+2}\prod_{q=p+1}^{m+3}4\sin\left(\frac{\pi(p-q)}{2(m+2)}\right)\sin\left(\frac{\pi(p+q-2)}{2(m+2)}\right)} \tag{D.9}\] Using (D.7) we simplify (D.5) as \[\mathcal{P}_{SU}=(2m+4)^{m+2}\Theta_{SU} \tag{D.10}\] with \[\Theta_{SU}=\frac{1}{\prod_{p=1}^{2m}\prod_{q=2m+1}^{2m+4}2\sin\left(\frac{\pi(p-q)}{2m+4}\right)\cdot\prod_{p=2m+1}^{2m+3}\prod_{q=p+1}^{2m+4}2\sin\left(\frac{\pi(p-q)}{2m+4}\right)} \tag{D.11}\] Next we want to show that \[\frac{\Theta_{SU}}{\Theta_{SO}}=\frac{(-1)^{m+1}}{m+2} \tag{D.12}\] In order to evaluate this ratio we start by observing that \[\frac{\prod_{p=m+1}^{m+2}\prod_{q=p+1}^{m+3}4\sin\left(\frac{\pi(p-q)}{2(m+2)}
\right)\sin\left(\frac{\pi(p+q-2)}{2(m+2)}\right)}{\prod_{p=2m+1}^{2m+3}\prod_{q=p+1}^{2m+4}2\sin\left(\frac{\pi(p-q)}{2m+4}\right)}=-1 \tag{D.13}\] We are then left with \[\frac{\Theta_{SU}}{\Theta_{SO}} = -\frac{\prod_{p=1}^{m}\prod_{q=m+1}^{m+3}4\sin\left(\frac{\pi(p-q)}{2(m+2)}\right)\sin\left(\frac{\pi(p+q-2)}{2(m+2)}\right)}{\prod_{p=1}^{2m}\prod_{q=2m+1}^{2m+4}2\sin\left(\frac{\pi(p-q)}{2m+4}\right)} = \frac{(-1)^{m+1}}{\sin\left(\frac{4\pi}{2m+4}\right)\cdot\prod_{p=1}^{m}2\sin\left(\frac{\pi(p+1)}{2m+4}\right)\cdot\prod_{p=1}^{m-1}2\sin\left(\frac{\pi p}{2m+4}\right)} \tag{D.14}\] In order to conclude the proof of (D.12) we need to estimate the denominator of (D.14). This can be done by using the relations \[\sin\left(\frac{4\pi}{2m+4}\right)=4\sin\left(\frac{\pi(2m+3)}{2(m+2)}\right)\sin\left(\frac{\pi(m+3)}{2(m+2)}\right)\sin\left(\frac{\pi(m+4)}{2(m+2)}\right) \tag{D.15}\] and \[\prod_{p=1}^{m}\sin\left(\frac{\pi(p+1)}{2(m+2)}\right)=\prod_{p=m+3}^{2m+2}\sin\left(\frac{\pi p}{2(m+2)}\right) \tag{D.16}\] such that the denominator of (D.14) becomes \[\frac{1}{2}\prod_{p=1}^{2m+3}2\sin\left(\frac{\pi p}{2m+4}\right)=m+2 \tag{D.17}\] Then by plugging this result in (D.14) we arrive at (D.12). Eventually we plug (D.3), (D.8), (D.10) and (D.12) in (D.1), verifying the latter. To conclude, we have not presented the explicit derivation of (D.2), because it can be derived along the same lines of the analysis performed in this appendix.
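The identities used in this appendix lend themselves to a direct numerical check. The sketch below verifies the evaluation (D.6), the ratio \(\Theta_{SU}/\Theta_{SO}=(-1)^{m+1}/(m+2)\) built from the explicit mixed sine products, and the full sine-product evaluation \(\tfrac{1}{2}\prod_{p=1}^{2m+3}2\sin(\pi p/(2m+4))=m+2\), for the first few values of \(m\) (note that the second block of the \(\Theta_{SO}\) product runs over \(p<q\)):

```python
import cmath
import math

def so_sine_product(m):
    """Left-hand side of (D.6): double sine product entering Z_{so(2m+6)_{2(m+2)}}."""
    val = 1.0
    for p in range(1, m + 4):
        for q in range(p + 1, m + 4):
            val *= 4 * math.sin(math.pi * (p - q) / (2 * (m + 2))) \
                     * math.sin(math.pi * (p + q - 2) / (2 * (m + 2)))
    return val

def theta_ratio(m):
    """Theta_SU / Theta_SO from the explicit 'mixed' sine products."""
    den_so = 1.0
    for p in range(1, m + 1):
        for q in range(m + 1, m + 4):
            den_so *= 4 * math.sin(math.pi * (p - q) / (2 * (m + 2))) \
                        * math.sin(math.pi * (p + q - 2) / (2 * (m + 2)))
    for p in range(m + 1, m + 3):
        for q in range(p + 1, m + 4):
            den_so *= 4 * math.sin(math.pi * (p - q) / (2 * (m + 2))) \
                        * math.sin(math.pi * (p + q - 2) / (2 * (m + 2)))
    den_su = 1.0
    for p in range(1, 2 * m + 1):
        for q in range(2 * m + 1, 2 * m + 5):
            den_su *= 2 * math.sin(math.pi * (p - q) / (2 * m + 4))
    for p in range(2 * m + 1, 2 * m + 4):
        for q in range(p + 1, 2 * m + 5):
            den_su *= 2 * math.sin(math.pi * (p - q) / (2 * m + 4))
    return den_so / den_su     # Theta_SU/Theta_SO = (1/den_su)/(1/den_so)

for m in range(1, 4):
    # (D.6): the product equals 2 exp(i pi (m^2+m+2)/2) (2m+4)^{(m+3)/2}
    rhs = 2 * cmath.exp(0.5j * math.pi * (m ** 2 + m + 2)) * (2 * m + 4) ** ((m + 3) / 2)
    assert abs(so_sine_product(m) - rhs) < 1e-8 * abs(rhs)
    # Theta_SU / Theta_SO = (-1)^{m+1} / (m+2)
    assert abs(theta_ratio(m) - (-1) ** (m + 1) / (m + 2)) < 1e-9
    # (1/2) prod_{p=1}^{2m+3} 2 sin(pi p / (2m+4)) = m + 2
    prod = 0.5
    for p in range(1, 2 * m + 4):
        prod *= 2 * math.sin(math.pi * p / (2 * m + 4))
    assert abs(prod - (m + 2)) < 1e-9
```

At \(m=1\), for instance, the first assertion reproduces \(2e^{2\pi i}\,6^{2}=72\).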
2305.12598
Probing the Temperature Structure of the Inner Region of a Protoplanetary Disk
Midplane heating induced by disk accretion plays a key role in determining the disk temperature, particularly at the inner disk midplane where planets form. However, the efficiency of accretion heating has not been well constrained by observations. We construct two-dimensional models of the Class II disk around CW Tau, taking into account the midplane heating. The models are compared with the ALMA dust continuum observations at Bands 4, 6, 7 and 8, with an angular resolution of 0.1 arcsec. The observed brightness temperatures are almost wavelength-independent at $\lesssim$10 au. We find that if the maximum dust size $a_{\rm max}$ is $\lesssim100~{\rm \mu m}$, the brightness temperatures predicted by the model exceed the observed values, regardless of the efficiency of accretion heating. The low observed brightness temperatures can be explained if millimeter scattering reduces the intensity. If the disk is passive, $a_{\rm max}$ needs to be either $\sim150~{\rm \mu m}$ or $\gtrsim$ few ${\rm cm}$. The accretion heating significantly increases the brightness temperature, particularly when $a_{\rm max}\lesssim300~{\rm \mu m}$, and hence $a_{\rm max}$ needs to be either $\sim300~{\rm \mu m}$ or $\gtrsim$ few ${\rm cm}$. The midplane temperature is expected to be $\sim$1.5-3 times higher than the observed brightness temperatures, depending on the models. The dust settling effectively increases the temperature of the dust responsible for the millimeter emission in the active disk, which makes the model with $300~{\rm \mu m}$-sized dust overpredict the brightness temperatures when strong turbulence is absent. Porous dust (porosity of 0.9) makes the accretion heating more efficient, so that some sort of reduction in accretion heating is required. Future longer-wavelength and higher-angular-resolution observations will help us constrain the heating mechanisms of the inner protoplanetary disks.
Takahiro Ueda, Satoshi Okuzumi, Akimasa Kataoka, Mario Flock
2023-05-21T23:13:30Z
http://arxiv.org/abs/2305.12598v1
# Probing the Temperature Structure of the Inner Region of a Protoplanetary Disk ###### Abstract Context: Disk temperature structure is crucial for the formation of planets. Midplane heating induced by disk accretion plays a key role in determining the disk temperature, particularly at the inner disk midplane where planets form. However, the efficiency of accretion heating has not been well constrained by observations. Aims: Our aim is to observationally constrain the physical properties of the inner region of the CW Tau disk where the midplane heating potentially takes place. Methods: We construct two-dimensional physical models of the CW Tau disk, taking into account the midplane heating. The models are compared with the ALMA dust continuum observations at Bands 4, 6, 7 and 8, with an angular resolution of 0.1 arcsec. The observed brightness temperatures are almost wavelength-independent at \(\lesssim\)10 au. Results: We find that if the maximum dust size \(a_{\rm max}\) is \(\lesssim 100\,\mu\)m, the brightness temperatures predicted by the model exceed the observed values, regardless of the efficiency of accretion heating. The low observed brightness temperatures can be explained if millimeter scattering reduces the intensity. If the disk is passive, \(a_{\rm max}\) needs to be either \(\sim 150\,\mu\)m or \(\gtrsim\) few cm. The accretion heating significantly increases the brightness temperature, particularly when \(a_{\rm max}\lesssim 300\,\mu\)m, and hence \(a_{\rm max}\) needs to be either \(\sim 300\,\mu\)m or \(\gtrsim\) few cm. The midplane temperature is expected to be \(\sim\)1.5-3 times higher than the observed brightness temperatures, depending on the models. The dust settling effectively increases the temperature of the dust responsible for the millimeter emission in the active disk, which makes the model with 300 \(\mu\)m-sized dust overpredict the brightness temperatures when strong turbulence is absent.
Porous dust (porosity of 0.9) makes the accretion heating more efficient, so that some sort of reduction in accretion heating is required. Conclusions: The brightness temperature is not a simple function of the dust temperature, because of the effect of scattering and midplane heating, even if the disk is optically thick. The current data of the CW Tau disk are not enough to discriminate between the passive and active disk models. Future longer-wavelength and higher-angular-resolution observations will help us constrain the heating mechanisms of the inner protoplanetary disks. ## 1 Introduction The temperature structure of inner protoplanetary disks controls the evolution of dust grains (e.g., Birnstiel et al., 2010; Okuzumi et al., 2016; Drazkowska & Alibert, 2017; Ueda et al., 2021), the subsequent formation of planets and their migration (e.g., Kley & Nelson, 2012; Bitsch et al., 2014; Bitsch, 2019; Savvidou & Bitsch, 2021) and eventually the chemical composition of planets (e.g., Oberg et al., 2011; Madhusudhan et al., 2014; Ohno & Ueda, 2021; Schneider & Bitsch, 2021; Notsu et al., 2022). Particularly at the midplane of the inner regions of disks (\(\lesssim 10\) au), the heating resulting from disk gas accretion is thought to play a key role in determining the disk temperature. In the classical model of accretion heating, the gravitational energy of the disk gas is released near the midplane, where it is difficult for the energy to escape from the disk, leading to efficient heating of the disk midplane (e.g., Hubeny, 1990; Nakamoto & Nakagawa, 1994). However, recent non-ideal magneto-hydrodynamical models have shown that accretion heating may be less efficient compared to the classical model when the accretion is primarily driven by the magnetorotational instability (MRI) in the upper layer of the disk (Hirose & Turner, 2011) or by magneto-hydrodynamical disk winds (Mori et al., 2019).
This is because the gravitational energy released at the upper layer can escape from the disk more easily. Still, accretion heating in magneto-hydrodynamically accreting disks can affect the disk temperature structure depending on the disk ionization state and opacity (Bethune & Latter, 2020; Kondo et al., 2022). Even though how the disk midplane is heated is crucial for the formation of planets, it is poorly constrained by observations. The Atacama Large Millimeter/submillimeter Array (ALMA) provides an opportunity to probe the temperature structure of the inner regions of disks, where accretion heating potentially takes place. The ALMA high-resolution multi-wavelength analysis allows us to evaluate the dust properties (e.g., dust size, temperature and surface density) as a function of radial distance based on the spectral behavior of the (sub-)millimeter dust thermal emission (Carrasco-Gonzalez et al., 2019; Macias et al., 2021; Sierra et al., 2021; Ueda et al., 2022; Guidi et al., 2022). However, previous studies have generally assumed that the observed dust temperatures at different ALMA wavelengths are identical (i.e., vertically isothermal). If the disk is optically thick at the observing wavelengths, the different observing wavelengths trace different heights within the disk, which may alter the interpretation of the disk properties (Sierra and Lizano, 2020; Ueda et al., 2021). In other words, the vertical disk structure can be probed by the multi-wavelength sub-millimeter to centimeter observations by leveraging the difference in optical depth at each wavelength. For instance, recent ALMA observations of CO isotopologue line emissions have provided insights into the vertical structure of the outer regions of disks by utilizing the expected emission heights of different CO isotopologues (Law et al., 2021, 2023). 
For the inner regions of disks, dust continuum observations would be more suitable than the gas molecule observations as it is easier to achieve better angular resolution and sensitivity with a reasonable observing time. Okuzumi et al. (2023) demonstrated that the vertical temperature structure of the inner few au of disks can be inferred from the multi-wavelength dust continuum observations using ALMA and the next generation Very Large Array (ngVLA). In this paper, we investigate the temperature structure of the disk around CW Tau using ALMA multi-wavelength dust continuum observations. The CW Tau disk has a high accretion rate (3-10 \(\times\) 10\({}^{-8}M_{\odot}\) yr\({}^{-1}\); McClure, 2019; Robinson et al., 2022; Gangi et al., 2022) and has been observed at ALMA Bands 4, 6, 7 and 8 (Ueda et al., 2022). This combination of a high accretion rate and rich observational data makes the CW Tau disk an excellent target for studying the vertical temperature structure of the inner region potentially heated by disk accretion. This paper is organized as follows. In Section 2, we introduce our observational data and theoretical models. The models are compared with the observations in Section 3. Discussion and a summary are given in Sections 4 and 5, respectively.

## 2 Methods

### ALMA observations

We make use of the ALMA data of the CW Tau disk taken and calibrated in Ueda et al. (2022). The observations were carried out at ALMA Bands 4 (\(\lambda=2.17\) mm), 6 (1.34 mm), 7 (0.89 mm) and 8 (0.75 mm).
The CW Tau disk is a Class II disk with the stellar luminosity of \(L_{\star}=0.45L_{\odot}\) (Herczeg and Hillenbrand, 2014), accretion luminosity of \(L_{\rm acc}=0.85L_{\odot}\) (Gangi et al., 2022) and accretion rate of \(\dot{M}=3-10\times 10^{-8}M_{\odot}\) yr\({}^{-1}\) (McClure, 2019; Robinson et al., 2022; Gangi et al., 2022), which is much higher than the typical value for Class II disks (e.g., \(\sim 3.6\times 10^{-9}M_{\odot}\) yr\({}^{-1}\); Manara et al., 2017; see also Manara et al., 2022). Its high accretion rate and relatively low stellar luminosity make the CW Tau disk suitable for studying accretion heating of the disk. In this work, we adopt \(\dot{M}=4\times 10^{-8}M_{\odot}\) yr\({}^{-1}\), which falls within the range of observed values for CW Tau. Figure 1 presents the observed brightness temperature at ALMA Bands 4, 6, 7 and 8. The observational data have a common angular resolution of 0.1 arcsec, corresponding to a spatial resolution of 13.2 au for the distance of CW Tau (132 pc; Gaia Collaboration et al., 2021). The brightness temperature at ALMA Bands 4, 6, 7, and 8 shows a nearly identical value of \(\sim 35\) K at \(\lesssim 10\) au. In the outer region, \(\gtrsim 20\) au, the brightness temperature exhibits variations among the different ALMA bands. In particular, the brightness temperature at Band 4 is lower compared to the other bands.

### Theoretical model

We construct two-dimensional (radial position \(r\) and vertical height \(z\)) models of the CW Tau disk in order to compare them with the ALMA observations. In this section, we provide a detailed description of our theoretical models of the disk structure.
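The beam conversion quoted above (0.1 arcsec at 132 pc giving 13.2 au) is just the small-angle relation, since 1 arcsec at 1 pc subtends 1 au by definition of the parsec; a quick check in Python (illustrative, not from the paper):

```python
def resolution_au(theta_arcsec: float, distance_pc: float) -> float:
    """Physical scale subtended by a small angle: 1 arcsec at 1 pc is 1 au."""
    return theta_arcsec * distance_pc

# ALMA beam of 0.1 arcsec at the Gaia distance of CW Tau (132 pc): ~13.2 au
beam_au = resolution_au(0.1, 132.0)
```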
#### 2.2.1 Temperature structure

The temperature structure of the disk is determined by the combined effects of stellar irradiation and accretion heating: \[T^{4}=T_{\rm irr}^{4}+T_{\rm acc}^{4}, \tag{1}\] where \(T_{\rm irr}\) and \(T_{\rm acc}\) denote the disk temperature determined by stellar irradiation and accretion heating, respectively. The temperature structure of the passively irradiated disk is calculated as \[T_{\rm irr}^{4}=\frac{L_{\star}+L_{\rm acc}}{16\pi\sigma r^{2}}\left[\exp\left(-\tau_{\star}\right)+2\psi\right], \tag{2}\] where \(\sigma\) is the Stefan-Boltzmann constant and \(\tau_{\star}\) is the radial optical depth for stellar irradiation. The angle \(\psi\) represents the angle between the ray from the central star and the disk surface. Its actual value depends on the distance from the central star and the detailed disk surface structure. We assume \(\psi=0.02\), which corresponds to \(h_{\star}/r\sim 0.07\) with \(h_{\star}\) being the height of the disk photosphere above the midplane (Dullemond et al., 2001). Equation (2) provides the black-body temperature in the optically thin regime when \(\tau_{\star}\ll 1\), whereas it yields the temperature profile of the classical optically thick passive disk in the limit of \(\tau_{\star}\gg 1\) (e.g., Dullemond et al., 2001; see also Huang et al., 2018). In the optically thick interior region (\(\tau_{\star}\gg 1\)), the temperature structure is vertically isothermal. It is worth noting that the temperature structure of the optically thin region (\(\tau_{\star}\ll 1\)) has no significant impact on our analysis as the emission observed by ALMA is dominated by the emission from the region where \(\tau_{\star}\gg 1\). The temperature of the optically thick region of the passively-heated disk can be lower than our model because of scattering of the stellar irradiation (Okuzumi et al., 2022), which is not included in our model. With our dust model, this could reduce the disk temperature by 10%.
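As a numerical illustration (a sketch, not the authors' code), Equation (2) can be evaluated directly with the stellar and accretion luminosities quoted in Section 2.1, the assumed grazing angle \(\psi=0.02\), and cgs constants:

```python
import numpy as np

SIGMA_SB = 5.670e-5  # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
L_SUN = 3.828e33     # solar luminosity [erg s^-1]
AU = 1.496e13        # astronomical unit [cm]

def T_irr(r_au, tau_star, L_star=0.45 * L_SUN, L_acc=0.85 * L_SUN, psi=0.02):
    """Irradiation temperature of Eq. (2): central luminosity spread over a
    sphere, attenuated radially by tau_star, plus the 2*psi grazing term."""
    r_cm = r_au * AU
    T4 = (L_star + L_acc) / (16.0 * np.pi * SIGMA_SB * r_cm**2) \
         * (np.exp(-tau_star) + 2.0 * psi)
    return T4**0.25

# deep in the optically thick interior (tau_star >> 1) at 10 au: ~42 K
t_thick = T_irr(10.0, 50.0)
```

In the optically thick limit the `exp(-tau_star)` term vanishes and only the grazing-angle term survives, which is why the interior becomes vertically isothermal in this passive model.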
We examine the validity of Equation (2) in Appendix A.

Figure 1: Brightness temperature profile observed at ALMA Bands 4, 6, 7 and 8 obtained by Ueda et al. (2022). The transparent region denotes the uncertainty arising from both flux calibration and thermal noise.

The temperature structure determined by accretion heating is (e.g., Nakamoto & Nakagawa 1994; Sierra & Lizano 2020) \[T_{\rm acc}^{4}=\frac{3}{4}\left(\tau_{\rm z}+\frac{2}{3}\right)\frac{3\dot{M}\Omega_{\rm K}^{2}}{8\pi\sigma}, \tag{3}\] where \(\Omega_{\rm K}\) is the Keplerian frequency and \(\tau_{\rm z}\) is the vertical optical depth for the dust thermal emission integrated from each position to \(z=\infty\) (or \(z=-\infty\) if \(z<0\)); \[\tau_{\rm z}(z)=\int_{z}^{\infty}\kappa_{\rm R}(z^{\prime})\rho_{\rm d}(z^{\prime})dz^{\prime}, \tag{4}\] with \(\kappa_{\rm R}\) and \(\rho_{\rm d}\) being the Rosseland-mean opacity and dust density, respectively. Because \(\tau_{\rm z}\) increases as \(z\) approaches the midplane, \(T_{\rm acc}\) also increases towards the midplane. We note that the actual value of \(\dot{M}\) can be lower/higher than our adopted value by a factor of \(\sim 2\) (see Section 2.1), which decreases/increases \(T_{\rm acc}\) by \(\sim\)20%. As \(T_{\rm acc}\) is a function of the product of \(\tau_{\rm z}\) and \(\dot{M}\), the disk temperature structure is almost identical as long as \(\tau_{\rm z}\dot{M}\) is identical. In this study, we consider two scenarios: a passive disk where the temperature structure is solely determined by stellar irradiation (i.e., \(T^{4}=T_{\rm irr}^{4}\)), and an active disk where additional heating due to accretion is taken into account (\(T^{4}=T_{\rm irr}^{4}+T_{\rm acc}^{4}\)). As shown in Section 3, accretion heating dominates over stellar irradiation (\(T\approx T_{\rm acc}\)) near the midplane within the inner few to few dozen au, depending on the specific disk models.
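Equation (3) is likewise straightforward to evaluate. The sketch below (not the paper's code) adopts \(\dot{M}=4\times 10^{-8}M_{\odot}\) yr\({}^{-1}\) from Section 2.1 and, as an assumption not stated in this excerpt, a stellar mass of \(1\,M_{\odot}\) for \(\Omega_{\rm K}\); the total temperature of Eq. (1) would then be `(T_irr**4 + T_acc**4)**0.25`:

```python
import numpy as np

SIGMA_SB = 5.670e-5  # [erg cm^-2 s^-1 K^-4]
G = 6.674e-8         # [cm^3 g^-1 s^-2]
M_SUN = 1.989e33     # [g]
AU = 1.496e13        # [cm]
YR = 3.156e7         # [s]

def T_acc(r_au, tau_z, mdot=4e-8 * M_SUN / YR, m_star=1.0 * M_SUN):
    """Accretion-heating temperature of Eq. (3); tau_z is the vertical
    Rosseland-mean optical depth from the given height to the disk surface."""
    omega_K2 = G * m_star / (r_au * AU)**3  # Keplerian frequency squared
    T4 = 0.75 * (tau_z + 2.0 / 3.0) * 3.0 * mdot * omega_K2 / (8.0 * np.pi * SIGMA_SB)
    return T4**0.25

# a large infrared optical depth (small grains) at 10 au gives roughly 200 K,
# consistent with the midplane values discussed in Section 3.2.1
t_mid = T_acc(10.0, 1.0e4)
```

Because \(T_{\rm acc}^{4}\propto\tau_{\rm z}\dot{M}\), doubling the optical depth and halving the accretion rate leaves the temperature unchanged, which is the degeneracy noted in the text.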
#### 2.2.2 Dust opacities

The dust opacity plays a crucial role in determining both the temperature structure of the disk and the (sub-)millimeter emission observed with ALMA. The dust composition is assumed to be that of the DSHARP model (Birnstiel et al. 2018), which is a mixture of water ice (Warren & Brandt 2008), astronomical silicates (Draine 2003), troilite and refractory organics (Henning & Stognienko 1996). It is important to note that the dust composition remains highly uncertain, and therefore, we also present modeling results using an alternative dust model, the DIANA dust model (Woitke et al. 2016), in Appendix B. The dust size distribution ranges from \(a_{\rm min}\) to \(a_{\rm max}\) with a power-law index of \(-p_{\rm d}\). We assume \(p_{\rm d}=3.5\), which is similar to the so-called MRN distribution (Mathis et al. 1977), as a fiducial case, while a flatter profile (\(p_{\rm d}=2.5\)) is also applied to investigate its impact on the brightness temperature. The latter corresponds to a more top-heavy distribution where the dust opacity is dominated by larger grains. The minimum dust size \(a_{\rm min}\) is set to be 0.05 \(\mu\)m. We confirmed that our conclusions remain unchanged even if we adopt a larger \(a_{\rm min}\) of 0.5 \(\mu\)m. We consider two different dust porosity values, namely \(p=0\) (corresponding to compact dust) and 0.9 (corresponding to filling factors of \(f=1\) and 0.1, respectively). We do not consider very porous dust (\(p\gg 0.9\)) because extremely porous dust may not account for the high polarization degree of the dust thermal emission observed toward the CW Tau disk (Bacciotti et al. 2018; Tazaki et al. 2019; Kirchschlager et al. 2019). The opacities are computed with Optool (Dominik et al. 2021). Figure 2 shows the effective extinction opacity \(\kappa_{\rm ext}\), effective albedo \(\omega_{\rm eff}\) and Rosseland-mean opacity \(\kappa_{\rm R}\) of our dust model.
The effective extinction opacity is defined as \(\kappa_{\rm ext}=\kappa_{\rm abs}+\kappa_{\rm sca}(1-g)\), where \(\kappa_{\rm abs},\kappa_{\rm sca}\) and \(g\) are the absorption opacity, scattering opacity and anisotropic scattering parameter, respectively. The effective albedo is given as \(\omega_{\rm eff}=\kappa_{\rm sca}(1-g)/\kappa_{\rm ext}\). The extinction opacity has a slope of \(\kappa_{\rm ext}\propto\lambda^{-1.7}\) when \(a_{\rm max}\ll\lambda/2\pi\), whereas it is flatter when \(a_{\rm max}\gg\lambda/2\pi\). In the intermediate regime (\(a_{\rm max}\sim\lambda/2\pi\)), the opacity slope is steeper than \(-1.7\) because of the Mie interference. The interference is significant only for compact grains, leading to a higher opacity compared to porous grains (Kataoka et al. 2014). The effective scattering albedo at the ALMA wavelengths (\(\sim\)0.8-2 mm) is \(\lesssim 0.2\) for \(a_{\rm max}\ll\lambda/2\pi\sim 300\)\(\mu\)m. However, for \(a_{\rm max}\gtrsim 300\)\(\mu\)m, the scattering albedo reaches 0.8 or even higher at the ALMA wavelengths. The high scattering albedo efficiently reduces the emergent intensity at the ALMA wavelengths, which makes the disk fainter than that without scattering (e.g., Liu 2019; Zhu et al. 2019). At the ALMA wavelengths, compact dust has slightly higher \(\omega_{\rm eff}\) compared to porous dust. This is because porous dust scatters incident radiation more efficiently in the forward direction, resulting in a lower effective albedo.

Figure 2: Effective extinction opacity (top), effective albedo (middle) and Rosseland-mean opacity (bottom) of compact (\(p=0\); left) and porous (\(p=0.9\); right) dust with different \(a_{\rm max}\). The power-law index of the dust size distribution is set to \(p_{\rm d}=3.5\). The vertical gray dotted lines denote the wavelengths of ALMA Bands 4, 6, 7 and 8.
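The effective quantities plotted in Figure 2 follow from \(\kappa_{\rm abs}\), \(\kappa_{\rm sca}\) and \(g\) by simple algebra; a sketch with made-up opacity values (the numbers are purely illustrative):

```python
def effective_opacity(kappa_abs, kappa_sca, g):
    """Effective extinction opacity and albedo with the (1 - g) correction
    that discounts forward-peaked (nearly unscattered) radiation."""
    kappa_sca_eff = kappa_sca * (1.0 - g)
    kappa_ext = kappa_abs + kappa_sca_eff
    omega_eff = kappa_sca_eff / kappa_ext
    return kappa_ext, omega_eff

# hypothetical scattering-dominated grain: isotropic vs strongly forward-peaked
k_iso, w_iso = effective_opacity(1.0, 9.0, g=0.0)  # omega_eff = 0.9
k_fwd, w_fwd = effective_opacity(1.0, 9.0, g=0.9)  # forward peaking lowers omega_eff
```

This is the mechanism behind the text's statement that porous dust, which scatters more strongly forward, ends up with a lower effective albedo despite similar raw scattering opacity.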
We note that the absorption and scattering opacity can be scaled by \(a_{\rm max}f\) except for the behavior of the Mie interference (Kataoka et al., 2014). However, the asymmetry parameter \(g\) depends on the porosity, which causes the optical properties of compact and porous dust to differ for a given \(a_{\rm max}f\) (Zhang et al., 2023).

#### 2.2.3 Dust disk

The dust surface density is given by \[\Sigma_{\rm d}=\frac{\tau_{\rm 10,B4}}{\kappa_{\rm ext,B4}}\left(\frac{r}{10\,{\rm au}}\right)^{-1}, \tag{5}\] where \(\tau_{\rm 10,B4}\) is the vertical extinction optical depth at ALMA Band 4 at 10 au. Although the true dust surface density is highly uncertain, the wavelength-independent brightness temperature implies that the inner disk region is at least moderately optically thick at these wavelengths (see Ueda et al., 2022). We adopt \(\tau_{\rm 10,B4}=3\) for all models. In Appendix C, we show the effect of \(\tau_{\rm 10,B4}\) on the brightness temperature. We confirmed that \(\tau_{\rm 10,B4}\) needs to be larger than unity in order to match the observed brightness temperature at ALMA Band 4. We will discuss the estimated dust and gas surface densities as well as the corresponding accretion efficiency parameter \(\alpha_{\rm acc}\) in Section 4.2. In the vertical direction, the dust density distribution is given by \[\rho_{\rm d}=\frac{\Sigma_{\rm d}}{\sqrt{2\pi}h_{\rm d}}\exp\left(-\frac{z^{2}}{2h_{\rm d}^{2}}\right), \tag{6}\] where \(h_{\rm d}\) represents the scale height of the dust disk. In our fiducial model, we assume that the dust is well coupled with the gas, i.e., \(h_{\rm d}=h_{\rm g}\) with \(h_{\rm g}\) being the gas scale height. The gas scale height is calculated as \(h_{\rm g}=c_{\rm s}/\Omega_{\rm K}\) with \(c_{\rm s}\) being the sound speed of the gas at the midplane. We also investigate the effect of dust settling.
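Equations (5) and (6) translate directly into code; a sketch (not the paper's implementation; the Band 4 opacity value used for the check is hypothetical):

```python
import numpy as np

def sigma_dust(r_au, kappa_ext_b4, tau_10_b4=3.0):
    """Dust surface density of Eq. (5): a 1/r profile normalized so that the
    vertical extinction optical depth at Band 4 equals tau_10_b4 at 10 au."""
    return tau_10_b4 / kappa_ext_b4 * (r_au / 10.0)**-1.0

def rho_dust(z, sigma_d, h_d):
    """Gaussian vertical dust density of Eq. (6); integrates to sigma_d."""
    return sigma_d / (np.sqrt(2.0 * np.pi) * h_d) * np.exp(-z**2 / (2.0 * h_d**2))

kappa_b4 = 2.0                         # hypothetical opacity [cm^2 g^-1]
sigma_10 = sigma_dust(10.0, kappa_b4)  # tau = kappa * Sigma = 3 at 10 au
```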
To incorporate dust settling, we define the Stokes number as \[{\rm St}=\frac{\pi}{2}\frac{\rho_{\rm m}fa}{\Sigma_{\rm g}}, \tag{7}\] where \(\rho_{\rm m}=1.675\ {\rm g\ cm^{-3}}\) is the material density, \(f=1-p\) is the dust filling factor, \(\Sigma_{\rm g}\) is the gas surface density and \(a\) is the dust radius. For simplicity, we assume that the gas surface density is 100 times larger than the dust surface density. The dust scale height is assumed to be in mixing-settling equilibrium (Dubrulle et al., 1995; Youdin and Lithwick, 2007); \[h_{\rm d}=\left(1+\frac{\rm St}{\alpha_{\rm t}}\frac{1+2{\rm St}}{1+{\rm St}}\right)^{-1/2}h_{\rm g}, \tag{8}\] where \(\alpha_{\rm t}\) is the turbulence strength for vertical dust mixing. In our model with settling, we assume relatively weak vertical mixing with \(\alpha_{\rm t}=10^{-4}\) in order to investigate the difference from the no-settling limit. The computational domain extends radially from \(r=0.5\) to 50 au, logarithmically divided into 100 cells. For the polar direction, the calculation domain covers from \(\theta=-\pi/6\) to \(\pi/6\) (where \(\theta=0\) corresponds to the disk midplane) with 400 uniformly spaced cells. To investigate the impact of different physical mechanisms on the brightness temperature, we constructed models with various setups. The details of these models are summarized in Table 1.

### Imaging simulation

Given the disk structure obtained in Section 2.2, we perform Monte Carlo radiative transfer calculations using RADMC-3D (Dullemond et al., 2012) to generate synthetic images. The dust opacities are the same as those described in Section 2.2.2. We use \(10^{7}\) photon packages for each simulation. To compare the synthetic images with the observations, we smooth the images with the observing beam size of 0.1 arcsec, which corresponds to a physical distance of 13.2 au for CW Tau (\(d=132\) pc).
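The mixing-settling prescription above (Equations (7) and (8)) reduces to two short functions; a sketch (not the paper's code) using the stated \(\rho_{\rm m}=1.675\) g cm\(^{-3}\) and \(\alpha_{\rm t}=10^{-4}\); the example grain size and gas column are hypothetical:

```python
import numpy as np

RHO_M = 1.675  # material density [g cm^-3]

def stokes_number(a_cm, sigma_g, filling=1.0):
    """Midplane Stokes number of Eq. (7) for a grain of radius a_cm [cm]
    in a gas column sigma_g [g cm^-2]."""
    return 0.5 * np.pi * RHO_M * filling * a_cm / sigma_g

def dust_scale_height(st, h_g, alpha_t=1e-4):
    """Mixing-settling equilibrium of Eq. (8) (Dubrulle et al. 1995;
    Youdin & Lithwick 2007): h_d -> h_g for St << alpha_t."""
    return h_g * (1.0 + st / alpha_t * (1.0 + 2.0 * st) / (1.0 + st))**-0.5

# a 1 mm compact grain in a hypothetical gas column of 10 g/cm^2:
st = stokes_number(0.1, 10.0)          # ~0.026
h_d = dust_scale_height(st, h_g=1.0)   # strongly settled: h_d << h_g
```

The size dependence of St is what drives the differential settling discussed in Section 3.3.1: for a fixed gas column, large grains settle into a thin layer while small grains stay lofted.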
The disk inclination is assumed to be \(58^{\circ}\) (Ueda et al., 2022) and the intensity is averaged over the azimuthal direction to obtain the radial intensity profile. The radial intensity profile is then converted into the brightness temperature using the full Planck function. To account for anisotropic scattering, we employ the Henyey-Greenstein approximation (Henyey and Greenstein, 1941).

## 3 Comparison of models and observations

In this section, we compare our theoretical models with the observations.

### Passive disk models

Let us start the comparison from the simplest cases: passive disks, where the temperature structure is determined solely by stellar irradiation. Figure 3 shows the brightness temperature profile of the passive disk model for different \(a_{\rm max}\). In Figure 3, we show the brightness temperature computed with and without scattering. When scattering is considered, the emergent intensity is reduced, leading to lower brightness temperatures compared to the actual dust temperature, even in optically thick disks (Liu, 2019; Zhu et al., 2019; Ueda et al., 2020). For \(a_{\rm max}=10\,\mu\)m, the model brightness temperatures at \(r\lesssim 10\) au are identical for all wavelengths and are \(\sim 1.5\) times higher than observed. Scattering has little effect on the brightness temperatures in this case because of the low albedo of the \(10\,\mu\)m-sized grains (see the center panel of Figure 2). At \(\gtrsim 15\) au, the brightness temperature at Band 4 is slightly lower than those at the other bands because that region is marginally optically thin at wavelengths \(\gtrsim 2\) mm.
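Throughout this comparison, model intensities are converted to brightness temperature with the full Planck function rather than the Rayleigh-Jeans law (Section 2.3), which matters at ALMA frequencies for the \(\sim\)35 K temperatures involved. A sketch of that inversion (not the paper's code):

```python
import numpy as np

H_PLANCK = 6.626e-27  # Planck constant [erg s]
K_B = 1.381e-16       # Boltzmann constant [erg K^-1]
C_LIGHT = 2.998e10    # speed of light [cm s^-1]

def planck(nu, temp):
    """Planck function B_nu(T) in cgs units."""
    return 2.0 * H_PLANCK * nu**3 / C_LIGHT**2 / np.expm1(H_PLANCK * nu / (K_B * temp))

def brightness_temperature(intensity, nu):
    """Invert B_nu(T) = I_nu for T: the 'full Planck' brightness temperature."""
    return H_PLANCK * nu / K_B / np.log1p(2.0 * H_PLANCK * nu**3 / (C_LIGHT**2 * intensity))

# round trip at ALMA Band 4 (lambda = 2.17 mm, i.e. ~138 GHz)
nu_b4 = C_LIGHT / 0.217
tb = brightness_temperature(planck(nu_b4, 35.0), nu_b4)  # recovers 35 K
```

Using `expm1`/`log1p` keeps the round trip numerically exact even when \(h\nu/kT\) is small.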
The observed brightness temperature in our models without scattering is affected by the optical depth of the disk. In our model, the dust surface density is adjusted such that the extinction optical depth at ALMA Band 4 reaches a value of 3 at a radial distance of 10 au (\(\tau_{\rm 10,B4}=3\)). In the models without scattering, the effect of scattering is taken into account in the calculation of the dust surface density. This means that the dust surface density remains the same for models with and without scattering. However, when calculating the emergent intensity, scattering is ignored.

| Model name | Accretion heating | Dust settling | \(p\) | \(p_{\rm d}\) |
| --- | --- | --- | --- | --- |
| Passive disk | off | off | 0 | 3.5 |
| Fiducial active disk | on | off | 0 | 3.5 |
| Settling | on | on | 0 | 3.5 |
| Top-heavy | on | off | 0 | 2.5 |
| Porous | on | off | 0.9 | 3.5 |

Table 1: Summary of our models.

Because of this setup, the vertical optical depth is effectively smaller for the models without scattering if scattering dominates over absorption. For instance, dust with \(a_{\rm max}=300\ \mu\)m has the effective albedo of \(\sim 0.9\) at ALMA Band 4, which corresponds to the absorption optical depth of \(\tau_{\rm abs,10,B4}=\tau_{\rm ext,10,B4}(1-\omega_{\rm eff})\sim 0.3\). Therefore, the no-scattering model with \(a_{\rm max}=300\ \mu\)m is optically thin at ALMA Band 4 and hence shows lower brightness temperature. For the models with scattering, the model disks are always optically thick at \(<10\) au at all wavelengths we considered. We see from Figure 3 that the brightness temperatures from the passive disk models without scattering exceed those from the observations. This implies that the scattering-induced intensity reduction takes place in the inner region of the CW Tau disk.
For \(a_{\rm max}\sim 150\ \mu\)m, the brightness temperatures are reduced by \(\sim\)30% by scattering, which makes the passive disk with 150 \(\mu\)m-sized dust consistent with the observations. The model with \(a_{\rm max}=300\ \mu\)m has a high effective albedo (\(\gtrsim 0.8\)) at all wavelengths considered, yielding brightness temperatures lower than observed. For \(a_{\rm max}\gtrsim 300\ \mu\)m, the effective albedo at ALMA Bands 7-8 decreases as dust size increases, which makes the models with large grains consistent with the observations at ALMA Bands 7-8. However, at Band 4, dust with sizes in the range of \(300\ \mu{\rm m}\lesssim a_{\rm max}\lesssim 1\) mm still exhibits a high albedo, leading to brightness temperatures that are too low compared to the observed values. Because the scattering albedo at ALMA Band 4 decreases for \(a_{\rm max}\gtrsim 1\) mm, very large dust with \(a_{\rm max}\gtrsim 10\) cm may account for the observations. Figure 4 presents the brightness temperature at the center of the model images normalized by the observed value, providing insight into the temperature structure within the central region with a diameter of \(\sim 13\) au, which corresponds to the size of one beam. We see that the brightness temperature of models with \(a_{\rm max}\lesssim 100\)\(\mu\)m exceeds the observed values by \(\sim 30\%\). In contrast, the brightness temperatures are significantly lower than the observations if \(a_{\rm max}=300\)\(\mu\)m. The intermediate size, \(a_{\rm max}\sim 150\)\(\mu\)m, reasonably explains the observations. If the dust size is larger than a millimeter, the model brightness temperatures become more similar to the observed values as the dust size increases, showing that very large grains (\(\gtrsim\) a few cm) are roughly consistent with the observations. In summary, when considering a disk heated solely by stellar irradiation, two possible ranges for the maximum dust radius emerge to explain the observed brightness temperatures: \(\sim 150\)\(\mu\)m or \(\gtrsim\) a few cm. Very small dust grains, \(a_{\rm max}\ll 150\)\(\mu\)m, are unlikely to account for the observations due to their low effective albedo at millimeter wavelengths, which results in higher brightness temperatures than observed. On the other hand, millimeter-sized dust grains are also improbable as their high effective albedo at ALMA Band 4 leads to significantly lower model brightness temperatures compared to the observed values.

Figure 3: Brightness temperature profile of the passive disk model (\(p=0\), \(\tau_{\rm 10,B4}=3\) and \(p_{\rm d}=3.5\)) with (top) and without (bottom) scattering with different \(a_{\rm max}\). The transparent shaded region shows the observed brightness temperature. The model brightness temperature is calculated after the intensity is convolved with the beam size of 0.1 arcsec, which corresponds to the physical distance of 13.2 au. The black dotted line denotes the midplane temperature profile convolved with a Gaussian with a full width at half maximum (FWHM) of 13.2 au.

Figure 4: Brightness temperature at the center of model images normalized by the observed brightness temperature. The gray shaded region denotes the region where the model explains the observations with an accuracy of 10%, which is the typical ALMA calibration uncertainty.

### Active disk models

The passive disk models considered in the previous subsection ignore any internal heating sources. However, the inner region of the disk can be heated by gas accretion, and hence, the disk may have a vertical temperature gradient that affects the brightness temperature. In this subsection, we will show how accretion heating affects the brightness temperature of the CW Tau disk.
#### 3.2.1 Case of \(a_{\rm max}=10\)\(\mu\)m

To clarify the impact of the vertical temperature structure on the ALMA brightness temperatures, we start the analysis by considering the scenario of \(a_{\rm max}=10\)\(\mu\)m, wherein scattering of dust thermal emission is negligible. The most notable factor in this case is the substantial vertical optical depth \(\tau_{\rm z}\), which establishes the accretion heating as the primary determinant of the inner disk temperature. Figure 5 shows the two-dimensional temperature structure of the disk with \(a_{\rm max}=10\)\(\mu\)m, both with and without considering the effects of accretion heating. It is evident that within a radius of approximately 20 au, the temperature structure is primarily influenced by accretion heating when \(a_{\rm max}=10\)\(\mu\)m. Additionally, Figure 5 presents the dust temperatures at the surfaces where the extinction optical depths at ALMA wavelengths reach unity (hereafter ALMA surfaces). It is worth noting that despite having identical total vertical optical depths in the passive and active disk models, the ALMA surfaces are higher in the active disk model due to the elevated gas scale height compared to the passive model. The dust temperature at the ALMA surfaces remains constant regardless of the observing wavelengths in the case of a passively heated disk. However, in the presence of accretion heating, the dust temperature at the ALMA wavelengths becomes wavelength-dependent. This dependence arises because longer observing wavelengths probe regions closer to the midplane, where accretion heating is more efficient. Consequently, longer observing wavelengths result in higher dust temperatures. For instance, at 10 au, the dust temperature observed with ALMA Band 8 is \(\sim 100\) K, while the dust temperature observed with ALMA Band 4 is \(\sim 180\) K. In comparison, the midplane temperature is \(\sim 200\) K.
Figure 5 (bottom panels) also compares the brightness temperature profile obtained from the radiative transfer simulation with the observed values. The bottom-left panel of Figure 5 corresponds to the top-left panel in Figure 3. The brightness temperature of the fiducial active disk is more than two times higher than that of the passive disk. Furthermore, a wavelength-dependent brightness temperature is seen within \(\sim 10\) au, which is not seen in the observations. This is because of the vertical temperature gradient shown in the top and middle panels in Figure 5. The brightness temperature is independent of the inclusion of scattering because of the negligible scattering albedo. Based on these results, we conclude that a maximum dust size of 10 \(\mu\)m is unlikely regardless of the efficiency of accretion heating.

Figure 5: Two-dimensional temperature structure (top), dust temperature at the ALMA surfaces (middle) and synthetic brightness temperature profiles (bottom). The maximum dust size is 10 \(\mu\)m. The left panels show the passive disk model where the disk is heated only by the stellar irradiation, whereas the right panels show the active disk model where the disk is heated by both stellar irradiation and disk accretion. The light green lines in the top panels denote the height where the vertical extinction optical depth reaches unity (ALMA surfaces) at ALMA Bands 4 (solid), 6 (dashed), 7 (dash-dotted) and 8 (dotted). The dust temperatures shown in the middle panels are not beam-convolved, while the brightness temperatures shown in the bottom panels are taken from the synthetic images convolved with a Gaussian with a FWHM of 13.2 au. The transparent regions denote the observed brightness temperatures.

#### 3.2.2 Fiducial active disks

Figure 6 shows the two-dimensional temperature structure of the fiducial active disk, considering different values of \(a_{\rm max}\).
We see that the temperature of the inner disk region is higher for smaller \(a_{\rm max}\) because of the higher vertical optical depth for dust thermal emission (\(\tau_{z}\)). For \(a_{\rm max}\lesssim 300\)\(\mu\)m, the dust temperature at the ALMA surfaces is higher at longer wavelengths, as shown in Section 3.2.1. In contrast, for \(a_{\rm max}\gtrsim 1\) mm, the dust temperature is not sensitive to the observing wavelengths. This is because the extinction opacity of large dust (\(\gtrsim 1\) mm) is less sensitive to the ALMA wavelength than that of small dust (see Figure 2), and hence all ALMA observations trace similar disk heights. The difference in the dust temperature is the most significant at \(a_{\rm max}\sim 100\)\(\mu\)m because of its steep slope in the extinction opacity. Although the dust temperature is nearly independent of the observing wavelengths when \(a_{\rm max}\gtrsim 1\) mm, it remains lower than the midplane temperature at \(\lesssim 6\) au. The temperature structure is almost identical for models with \(a_{\rm max}=1\) mm and 1 cm. This similarity arises because we ensure that the vertical extinction optical depth at ALMA Band 4 (\(\tau_{\rm 10,B4}\)) is constant across all models. In the regime of \(a_{\rm max}\gtrsim 1\) mm, both the millimeter extinction opacity and the Rosseland-mean opacity decrease with increasing dust radius in the same manner (\(\propto a_{\rm max}^{-1/2}\)). To keep \(\tau_{\rm 10,B4}\) constant, the dust surface density is inversely proportional to the millimeter extinction opacity. Consequently, despite the decrease in the Rosseland-mean opacity as the dust radius increases, the infrared optical depth \(\tau_{z}\), which governs the magnitude of accretion heating, remains unchanged when \(\tau_{\rm 10,B4}\) is held constant. Figure 7 shows the brightness temperature profile of the fiducial active disk model.
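The opacity-scaling argument above can be checked with a toy calculation (the power-law normalizations below are hypothetical; only the \(a_{\rm max}^{-1/2}\) scalings from the text matter):

```python
def kappa_mm(a_max_cm, k0=1.0):
    """Toy Band 4 extinction opacity in the large-grain regime [cm^2 g^-1]."""
    return k0 * (a_max_cm / 0.1)**-0.5

def kappa_ross(a_max_cm, k0=5.0):
    """Toy Rosseland-mean opacity with the same a_max^-1/2 scaling."""
    return k0 * (a_max_cm / 0.1)**-0.5

def tau_infrared(a_max_cm, tau_10_b4=3.0):
    """Infrared optical depth at 10 au when Sigma_d is renormalized by Eq. (5)."""
    sigma_d = tau_10_b4 / kappa_mm(a_max_cm)  # fixed millimeter optical depth
    return kappa_ross(a_max_cm) * sigma_d     # the a_max dependence cancels

tau_1mm, tau_1cm = tau_infrared(0.1), tau_infrared(1.0)  # identical
```

Because the renormalized \(\Sigma_{\rm d}\) cancels the common \(a_{\rm max}^{-1/2}\) factor, the infrared optical depth, and hence the accretion-heating temperature of Eq. (3), is the same for 1 mm and 1 cm grains.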
In contrast to the case of \(a_{\rm max}=10\)\(\mu\)m, when \(a_{\rm max}\gtrsim 100\)\(\mu\)m, scattering is no longer negligible and reduces the brightness temperature, which makes the interpretation of the brightness temperature more complicated. For \(a_{\rm max}=150\)\(\mu\)m, the model brightness temperatures are significantly higher than the observed values. The millimeter-wave scattering makes the brightness temperatures lower than those in the no-scattering limit. The model with \(a_{\rm max}=300\)\(\mu\)m shows brightness temperatures similar to the observations if scattering is included. The brightness temperatures with scattering are \(\sim 2\) times lower than those without scattering. Furthermore, the midplane temperature is \(\sim 3\) times higher than the brightness temperatures. This clearly demonstrates that, even if the disk is fully optically thick, the observed brightness temperature can be much lower than the true midplane temperature because of the combined effect of scattering and the vertical temperature gradient (see also Sierra & Lizano 2020).

Figure 6: Two-dimensional temperature structure and dust temperature at ALMA surfaces for different values of \(a_{\rm max}\). Top: two-dimensional temperature structure of the fiducial active disk. The light green lines denote the height where the vertical extinction optical depth reaches unity at ALMA Bands 4 (solid), 6 (dashed), 7 (dash-dotted) and 8 (dotted). Bottom: dust temperature at the height where the vertical extinction optical depth at the ALMA wavelengths reaches unity. The radial profiles are not beam-convolved. The dust settling is not taken into account. The dust is assumed to be compact with a size distribution of \(p_{\rm d}=3.5\). The dust surface density is set so that \(\tau_{\rm 10,B4}=3\).

When \(a_{\rm max}\) is of the order of 1 mm, the brightness temperature at Band 4 is significantly lower than the observed value because of scattering even if the accretion heating takes place (see also Figure 8). In contrast, when \(a_{\rm max}\) is greater than \(\sim 1\) cm, the model brightness temperatures are similar to the observations as seen for \(a_{\rm max}=300\)\(\mu\)m. This is because the effective albedo at ALMA Band 4 decreases as \(a_{\rm max}\) increases in the regime of \(a_{\rm max}\gtrsim 1\) mm. In this case, the midplane temperature is \(\sim 2\) times higher than the observed brightness temperatures. The central brightness temperature of the fiducial active disks is shown in Figure 8. In the small dust regime (\(<300\)\(\mu\)m), the model brightness temperatures are too high to account for the observations because of the efficient accretion heating. While the passive disk model with \(a_{\rm max}\sim 300\)\(\mu\)m underestimates the brightness temperature at ALMA Bands 4-7, the fiducial active disk model with \(a_{\rm max}\sim 300\)\(\mu\)m aligns well with the observations. This is attributed to the enhancement of the brightness temperature by accretion heating, particularly at longer wavelengths. Large-grain models (\(\gtrsim\) a few cm) are roughly consistent with the observations, although they tend to overestimate the brightness temperature at Band 8.

### Effect of dust settling, size distribution and porosity

In this section, we explore the influence of dust settling, dust size distribution, and dust porosity on the brightness temperatures of the active disk. Figure 9 shows the two-dimensional temperature structure of the fiducial active disk (same as the second column from the right of Figure 6), the settling model, the top-heavy model and the porous dust model. The maximum dust size is set to 1 mm. We discuss the effect of each component in the following subsections.
#### 3.3.1 Effect of dust settling

As dust grains grow larger, they are expected to settle down towards the midplane of the disk. If dust settling takes place, the effective dust size that contributes to the millimeter emission decreases, because larger grains can be hidden within the optically thick midplane layer (Ueda et al. 2021a).

Figure 7: Same as Figure 3 but for the fiducial active disk.

Figure 8: Same as Figure 4 but for the fiducial active disk.

Figure 9 compares the dust temperature at the ALMA surfaces of the models without (left) and with (second left) dust settling with \(\alpha_{\rm t}=10^{-4}\). We find that dust settling effectively increases the temperature of dust observed with ALMA. This behavior can be explained as follows: the efficiency of accretion heating is primarily determined by the distribution of small grains, whereas the height of the ALMA surfaces is predominantly influenced by the distribution of large grains. Due to settling, larger grains tend to selectively settle towards the midplane, causing the ALMA surfaces to descend more rapidly towards the midplane compared to the isotherm height of the active disk (as shown in the top rows of Figure 9). This allows ALMA to observe a hotter layer. Figure 10 shows the central brightness temperature of the beam-convolved images of the settling model. The radial brightness temperature profile of each model is shown in Appendix D (see Figure D.1). For \(a_{\rm max}\lesssim 3\) mm, the brightness temperature is systematically higher than that of the fiducial active disk model (i.e., without settling). This is because of the effect explained above. Particularly for \(a_{\rm max}=300\,\mu\)m, the settling model overpredicts the brightness temperatures because of the higher temperature of the dust responsible for the emission.
We note that the settling model with \(a_{\rm max}=300\,\mu\)m reasonably explains the observations if \(\alpha_{\rm t}=10^{-3}\), because dust settling is less effective compared to \(\alpha_{\rm t}=10^{-4}\). We also note that if the disk is active, the turbulence strength for dust mixing may not be significantly lower than that for accretion (\(\alpha_{\rm acc}\); Equation (10)), implying that \(\alpha_{\rm t}=10^{-3}\) may be more likely for the active disk model (see also Section 4.2). For \(a_{\rm max}\gtrsim 3\) mm, the brightness temperatures of the settling model are not simply higher than those of the model without settling. The temperature of dust observed with ALMA is higher for the models with settling compared to those without settling. However, the differential settling makes the effective dust size smaller, which increases the scattering albedo and hence reduces the emergent intensity at the ALMA wavelengths. The emergent intensity, and hence the brightness temperature, is determined by the balance between these two effects. Overall, large grain models (\(a_{\rm max}\gtrsim 3\) mm) predict brightness temperatures similar to the observations. Compared to the no-settling models (Figure 8), the settling model is more consistent with the observations if \(a_{\rm max}\gtrsim 3\) mm. We note that if \(\alpha_{\rm t}=10^{-3}\), dust settling has no significant impact on the brightness temperatures when \(a_{\rm max}\lesssim 1\) mm (see also Section 4.2).

Figure 9: Two-dimensional temperature structure and dust temperature at ALMA surfaces for different disk models. Top: two-dimensional temperature structure of the fiducial active disk model (left), settling model (\(\alpha_{\rm t}=10^{-4}\), second left), top-heavy model (second right) and porous model (right). The light green lines denote the height where the vertical extinction optical depth reaches unity at ALMA Bands 4 (solid), 6 (dashed), 7 (dash-dotted) and 8 (dotted). The maximum dust size \(a_{\rm max}\) is 1 mm in all the models. Bottom: dust temperature at the height where the vertical extinction optical depth at the ALMA wavelengths reaches unity. The radial profiles are not beam-convolved.

#### 3.3.2 Effect of dust size distribution

The dust size distribution plays a critical role in both the millimeter thermal emission and the efficiency of accretion heating. In our fiducial case, we assume a slope of \(p_{\rm d}=3.5\) for the dust size distribution. However, it is important to note that the dust size distribution in protoplanetary disks is uncertain and is influenced by dust coagulation and fragmentation processes. In particular, if dust fragmentation is not efficient, the dust size distribution can exhibit a more top-heavy nature, meaning it is dominated by larger grains, rather than following a slope of \(p_{\rm d}=3.5\) (e.g., Birnstiel et al., 2011). In the top-heavy model, we adopt a dust size distribution with a slope of \(p_{\rm d}=2.5\) instead of 3.5. As shown in Figure 9, the top-heavy dust size distribution leads to a cooler disk region near the midplane compared to the fiducial active disk. This is because the dust mass is more dominated by large grains, which have a lower surface-to-mass ratio, and hence accretion heating is less efficient. Figure 11 shows the central brightness temperature of the beam-convolved images of the top-heavy model. The radial brightness temperature profile of each model is shown in Appendix D (see Figure D.2). Although the dust temperature is lower than that of the fiducial active disk model, the brightness temperature does not simply follow the dust temperature. In the regime of \(a_{\rm max}\lesssim 1\) mm, scattering is more efficient than in the case of \(p_{\rm d}=3.5\), because mm-sized grains have a higher scattering albedo than micron-sized grains at ALMA wavelengths.
Therefore, for \(a_{\rm max}\lesssim 1\) mm, the brightness temperatures of the top-heavy model are lower compared to the fiducial active disk model. In contrast, in the regime of \(a_{\rm max}\gtrsim 1\) mm, scattering is less efficient than in the case of \(p_{\rm d}=3.5\), because cm-sized or larger grains have a lower scattering albedo than mm-sized grains at ALMA wavelengths. Therefore, for \(a_{\rm max}\gtrsim 1\) mm, the brightness temperatures of the top-heavy model are higher compared to the fiducial active disk model. In the middle of these two regimes (\(a_{\rm max}\sim 1\) mm), the brightness temperature is not sensitive to \(p_{\rm d}\). Overall, the top-heavy models do not appear to be consistent with the observations.

#### 3.3.3 Effect of porosity

In the previous sections, we focused on studying the brightness temperature of disks with compact dust grains. However, it is important to consider that dust in protoplanetary disks can be porous due to pairwise collisional growth (e.g., Okuzumi et al., 2012). In Figure 9, we compare the dust temperature between the fiducial active disk model (left) and the porous dust model (right). We see that the accretion heating is more efficient in the porous dust model. This is because, for a given optical depth at ALMA wavelengths (e.g., \(\tau_{\rm B4}\)), the optical depth for the dust thermal emission (\(\tau_{\rm z}\)) is higher for porous dust. Figure 12 shows the ratio between the Rosseland-mean opacity (at 100 K) and the extinction opacity at ALMA Band 4. For \(a_{\rm max}\gtrsim 100\)\(\mu\)m, porous dust has a higher \(\kappa_{\rm R,100K}/\kappa_{\rm ext,B4}\). This means that, for a given optical depth at ALMA wavelengths, porous dust has a larger vertical optical depth for accretion heating, and hence the accretion heating is more efficient compared to compact dust. Figure 13 shows the central brightness temperature of the beam-convolved images of the porous dust model.
The radial brightness temperature profile of each model is shown in Appendix D (see Figure D.3). If dust is porous, the brightness temperature is higher than that of the compact dust model regardless of the dust size. We see that the porous dust models predict \(\gtrsim 1.5\) times higher brightness temperatures than the observations. We also confirm that the brightness temperatures of the porous dust model are still higher even if \(p_{\rm d}=2.5\).

Figure 10: Same as Figure 4 but for the settling model with \(\alpha_{\rm t}=10^{-4}\).

Figure 11: Same as Figure 4 but for the top-heavy dust model.

## 4 Discussion

### Dust size in the inner region of the CW Tau disk

Our study reveals that the maximum dust size in the inner region (\(\lesssim 10\) au) of the CW Tau disk is preferred to be \(\sim\)150-300 \(\mu\)m (small grain solution) or larger than a few centimeters (large grain solution). Distinguishing between these two scenarios is essential for gaining insights into grain growth and the temperature structure of the disk. The CW Tau disk shows strong scattering-induced polarization at ALMA Band 7, with a polarization degree of \(\gtrsim 1\%\) (Bacciotti et al., 2018). This indicates that the thermal emission at ALMA Band 7 is primarily contributed by dust grains with a radius of \(\lambda/2\pi\sim 140\)\(\mu\)m (Kataoka et al. 2015), although the angular resolution of the polarimetric observations (\(\sim\)0.2″) is worse than ours (0.1″). This implies that the small grain solution is more consistent with the polarimetric observations than the large grain solution. The effective observed-dust size can be smaller than the true maximum dust size if differential dust settling takes place (Sierra & Lizano 2020; Ueda et al. 2021), which potentially makes the large grain solution match the polarimetric observations.
However, it is important to note that achieving the observed high polarization degree (\(\gtrsim 1\%\)) with \(a_{\rm max}\) larger than \(\sim 1\) mm is unlikely, even under the assumption of extremely low turbulence strength (Ueda et al. 2021). The polarization degree also depends on the detailed structure of dust grains. Recent laboratory experiments suggest that non-spherical, irregularly shaped dust can produce strong scattering-induced polarization even if the dust size is significantly larger than the observing wavelength (Lin et al. 2023). If this is the case, the polarization degree of non-spherical grains is less sensitive to the observing wavelength than that of compact spherical grains. Furthermore, the inclusion of a small amount of porosity (\(p\lesssim 0.9\)) also makes the polarization degree less sensitive to the dust size (Tazaki et al. 2019; Zhang et al. 2023). However, we note that, if the porosity is as high as 0.9, accretion heating is so efficient that the model brightness temperatures exceed the observed values. Therefore, if \(p\gtrsim 0.9\), some sort of reduction of accretion heating due to, e.g., wind-driven accretion (Mori et al. 2019; Kondo et al. 2022), is required. Future multi-wavelength polarimetric observations would help us constrain the detailed dust structure.

### Surface densities and effective accretion \(\alpha\)

In this section, we discuss the expected dust and gas surface densities, as well as the effective turbulence strength for the accretion of the CW Tau disk. Figure 14 shows the gas surface density of our fiducial active disk models (assuming a gas surface density of one hundred times the dust surface density). The gas surface density of the minimum mass solar nebula (MMSN), \(1700(r/{\rm au})^{-1.5}\) g cm\({}^{-2}\) (Hayashi 1981), is also shown for comparison. We also plot the criterion for the disk to be gravitationally unstable (Toomre 1964): \[Q\equiv\frac{\Omega_{\rm K}c_{\rm s}}{\pi G\Sigma_{\rm g}}=1.
\tag{9}\] In general, our model disks are expected to be gravitationally stable within the region of focus, approximately \(\lesssim 10\) au. If \(a_{\rm max}=100\)\(\mu\)m, the gas surface density reaches \(Q=1\) around \(\sim 10\) au. If \(a_{\rm max}\) were \(10\)\(\mu\)m, the disk mass would be \(\sim 1.5\) times higher; however, as already shown in Section 3, the model with \(a_{\rm max}=10\)\(\mu\)m fails to reproduce the observed brightness temperatures. On the other hand, for \(a_{\rm max}\gtrsim 300\)\(\mu\)m, the gas surface density is estimated to be much lower than the \(Q=1\) line within 20 au. Moreover, the estimated gas surface density aligns closely with the MMSN model within 10 au for \(a_{\rm max}\gtrsim 300\)\(\mu\)m. This indicates that the CW Tau disk is capable of forming a planetary system similar to our own solar system. Figure 14 also shows the effective turbulence strength for the disk accretion: \[\alpha_{\rm acc}=\frac{\dot{M}}{3\pi\Sigma_{\rm g}c_{\rm s}h_{\rm g}}. \tag{10}\] For the given accretion rate of \(4\times 10^{-8}M_{\odot}\) yr\({}^{-1}\), the effective turbulence strength accounting for the disk accretion needs to be \(\sim 10^{-4}\) for \(a_{\rm max}=100\)\(\mu\)m, while it is \(\sim 10^{-3}\)-\(10^{-2}\) for \(a_{\rm max}\gtrsim 300\)\(\mu\)m. The estimated \(\alpha_{\rm acc}\) falls into a reasonable range of \(\alpha_{\rm acc}=10^{-4}\)-\(10^{-2}\), although the high \(\alpha_{\rm acc}\) (\(\sim 10^{-2}\)) found for the models of \(a_{\rm max}=1\)-\(10\) mm is higher than the typical value estimated from the observed disk lifetime in various star-forming regions (Manara et al. 2022, but see also Hartmann et al. 1998). The model with \(a_{\rm max}=300\)\(\mu\)m yields \(\alpha_{\rm acc}\sim 10^{-3}\).
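Equations (9) and (10) can be evaluated directly. The sketch below (cgs units) assumes a Keplerian disk; the 1 \(M_\odot\) stellar mass, the 50 K temperature at 10 au and the MMSN-like surface density are illustrative assumptions rather than the paper's model values, while the accretion rate is the \(4\times 10^{-8}\,M_\odot\) yr\(^{-1}\) quoted in the text.

```python
import math

G = 6.674e-8           # gravitational constant [cgs]
K_B = 1.381e-16        # Boltzmann constant [erg/K]
M_H = 1.673e-24        # hydrogen mass [g]
M_SUN = 1.989e33       # solar mass [g]
AU = 1.496e13          # astronomical unit [cm]
YR = 3.156e7           # year [s]
MU = 2.34              # mean molecular weight

M_STAR = 1.0 * M_SUN          # assumed stellar mass (illustrative)
MDOT = 4e-8 * M_SUN / YR      # accretion rate quoted in the text

def _omega_cs(r_au, T):
    """Keplerian frequency and isothermal sound speed at radius r."""
    omega = math.sqrt(G * M_STAR / (r_au * AU) ** 3)
    c_s = math.sqrt(K_B * T / (MU * M_H))
    return omega, c_s

def toomre_q(r_au, sigma_g, T):
    """Toomre Q = Omega_K c_s / (pi G Sigma_g), Equation (9)."""
    omega, c_s = _omega_cs(r_au, T)
    return omega * c_s / (math.pi * G * sigma_g)

def alpha_acc(r_au, sigma_g, T):
    """alpha_acc = Mdot / (3 pi Sigma_g c_s h_g), Equation (10)."""
    omega, c_s = _omega_cs(r_au, T)
    h_g = c_s / omega  # gas scale height
    return MDOT / (3.0 * math.pi * sigma_g * c_s * h_g)

# MMSN-like surface density (Hayashi 1981) and an assumed temperature
r = 10.0                           # au
sigma_mmsn = 1700.0 * r ** -1.5    # g cm^-2
T = 50.0                           # assumed temperature at 10 au [K]
print(f"Q = {toomre_q(r, sigma_mmsn, T):.1f}")
print(f"alpha_acc = {alpha_acc(r, sigma_mmsn, T):.1e}")
```

For these assumed inputs the disk is comfortably Toomre-stable (\(Q\gg 1\)) and the implied \(\alpha_{\rm acc}\) lands in the \(10^{-3}\)-\(10^{-2}\) range discussed above; the exact numbers shift with the adopted temperature and stellar mass.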
As shown in Sections 3.2.2 and 3.3.1, the model with \(a_{\rm max}=300\)\(\mu\)m can account for the observations if vertical dust settling is ignored, but cannot if dust settling takes place with \(\alpha_{\rm t}=10^{-4}\). To further investigate this, we conducted a simulation of the settling model with \(\alpha_{\rm t}=10^{-3}\) and confirmed that the model with \(a_{\rm max}=300\)\(\mu\)m reasonably matches the observations if \(\alpha_{\rm t}=10^{-3}\). This suggests that the effective turbulence strength for vertical dust mixing, \(\alpha_{\rm t}\), is not much lower than the effective turbulence strength for disk accretion, \(\alpha_{\rm acc}\), if \(a_{\rm max}=300\)\(\mu\)m.

Figure 12: Ratio between the Rosseland-mean opacity (at 100 K) and the extinction opacity at ALMA Band 4. The red solid and blue dashed lines denote the compact and porous dust, respectively.

Figure 13: Same as Figure 4 but for the porous dust model.

### Predictions for future observations

Our results show that the observed brightness temperatures at ALMA Bands 4-8 can be explained by either the passive or active disk models. It is crucial to distinguish these models for understanding the heating mechanisms in the inner region of disks. Figure 15 shows the brightness temperatures at \(\lambda=3.1\) (ALMA Band 3), 4.1 (ALMA Band 2) and 7.0 mm (ALMA Band 1 or VLA Q band) computed from our models. The angular resolution is assumed to be 0.05″. The active disk with \(a_{\rm max}=300\)\(\mu\)m has \(\sim\)1.5-2 times higher brightness temperatures at \(\lambda=3.1\) and 4.1 mm than the other models. The peak brightness temperature reaches \(\sim 80\) K at \(\lambda=3.1\) and 4.1 mm. In contrast, at \(\lambda=7.0\) mm, the brightness temperature of the active disk model with \(a_{\rm max}=300\)\(\mu\)m is not significantly different from that with \(a_{\rm max}=10\) cm.
This is because the small grain solution has a higher temperature but a lower optical depth, resulting in the cancellation of these effects. The passive disk with \(a_{\rm max}=300\)\(\mu\)m can be distinguished from the other models by using \(\lambda=7.0\) mm because it has a significantly lower brightness temperature. This is because the passive disk has a lower temperature than the active disk and its optical depth is lower than that of the large grain model. If \(a_{\rm max}=10\) cm, the accretion heating is not significant, so both the passive and active disk models predict similar brightness temperatures. However, the difference in those models is potentially distinguishable by ALMA, because the typical uncertainty in the ALMA flux is 5-10%, which is smaller than the difference in the brightness temperatures predicted by the models.

Figure 14: Gas surface density (one hundred times the dust surface density; top) and effective turbulence strength for the disk gas accretion (bottom) estimated from our fiducial active disk models. The gray dashed line denotes the gas surface density of the minimum mass solar nebula (MMSN; Hayashi 1981). The dotted lines denote the criterion for the disk to be gravitationally unstable (\(Q=1\)).

Figure 15: Brightness temperature profile at \(\lambda=3.1\), 4.1 and 7.0 mm predicted by our active disk models (solid lines) and passive disk models (dotted lines). The angular resolution is assumed to be 0.05″. The scattering-induced intensity reduction is included.

Even though the finest angular resolution of current facilities (ALMA and VLA) at \(\lambda\gtrsim 3\) mm is limited to \(\sim\)0.05″, it can achieve \(\sim\)0.02″ at shorter wavelengths. Figure 16 shows the brightness temperature at \(\lambda=0.75\) (ALMA Band 8), 0.89 (Band 7) and 1.34 mm (Band 6) computed from our models with an angular resolution of 0.02″.
We can see that higher-spatial-resolution observations at shorter wavelengths can also distinguish between the active and passive disk models. The higher angular resolution allows us to detect a steeper brightness temperature profile in the active disk model. However, the difference in the brightness temperatures of the active and passive disks decreases as the observing wavelength decreases, because shorter wavelengths trace upper layers where the accretion heating is less effective. The difference in the brightness temperatures of the two passive disk models is maximized at \(\lambda\sim 1.34\) mm because 300 \(\mu\)m-sized grains have a maximum effective albedo at that wavelength. In contrast, the active disk models with \(a_{\rm max}=300\)\(\mu\)m and 10 cm have similar brightness temperature profiles because the 300 \(\mu\)m dust model has a higher dust temperature, which compensates for the efficient intensity reduction by scattering.

### Spectral energy distribution

Although our focus is on comparing our models with observations at ALMA wavelengths, it is worthwhile to compare them at infrared wavelengths as well. Figure 17 shows the comparison between our model spectral energy distributions (SEDs) and the observed data. The observed SED is taken from Andrews et al. (2013) (see also references therein). In order to evaluate the SEDs, we extrapolate the disk model down to 0.01 au and remove the dust with temperatures higher than 1400 K, which mimics the sublimation of silicate dust. This modification is necessary because the dust responsible for infrared emission could be located within 0.5 au, which is the inner boundary used for the calculations in the comparison at ALMA wavelengths. We find that the active disk models have higher luminosity at infrared wavelengths (\(\sim\)1-20 \(\mu\)m) than the passive disk models (see also, e.g., D'Alessio et al. 1998; Dullemond et al. 2007).
The passive disk models tend to underestimate the fluxes observed at infrared wavelengths. For the active disk models, the large grain models (\(a_{\rm max}=1\) mm and 1 cm) still underestimate the infrared fluxes, while the model with \(a_{\rm max}=300\)\(\mu\)m overestimates them. Although neither of these models fully explains the observed SED at infrared wavelengths, the active disk models resemble the observations more closely than the passive disk models do. However, it is important to note that the infrared SED is highly influenced by the intricate structure of the innermost region of the disk, specifically in the vicinity of the silicate sublimation radius (see Dullemond & Monnier 2010 and Kraus 2015 for review), which is beyond the scope of our study. For instance, the emission in the near-infrared range (\(\sim 2\)\(\mu\)m) is expected to be predominantly attributed to the scattering of stellar light at the inner rim of the disk (e.g., Monnier & Millan-Gabet 2002), rather than the thermal emission from the region we focus on. Additionally, the presence of a dust halo surrounding the inner disk, potentially induced by the inner disk wind, is anticipated to enhance the near-infrared flux (e.g., Vinkovic & Jurkic 2007). Therefore, detailed modeling of the innermost region of the disk is necessary to obtain accurate fluxes at near-infrared wavelengths. While significant uncertainties remain in the near-infrared range, the observed high fluxes at mid-infrared wavelengths may suggest a higher likelihood of active heating within the CW Tau disk.

Figure 16: Brightness temperature profile at \(\lambda=0.75\), 0.89 and 1.34 mm predicted by our active disk models (solid lines) and passive disk models (dotted lines). The angular resolution is assumed to be 0.02″. The scattering-induced intensity reduction is included.

## 5 Summary

We investigate the impact of accretion heating on the brightness temperatures of the inner region of the CW Tau disk. The key findings are as follows:

* If \(a_{\rm max}\lesssim 100\)\(\mu\)m, the model brightness temperatures are too high to explain the observations regardless of the efficiency of accretion heating.
* If the disk is passively heated, the maximum dust size needs to be either \(\sim 150\)\(\mu\)m or \(\gtrsim\) a few cm.
* If the disk is actively heated, small grain models significantly overpredict the brightness temperatures, and hence the maximum dust size needs to be either \(\sim 300\)\(\mu\)m or \(\gtrsim\) a few cm.
* The midplane temperature would be \(\sim\)1.5-3 times higher than the observed brightness temperatures because of the combined effect of scattering and accretion heating.
* If dust settling occurs in the active disk, the ALMA observations trace deeper disk layers compared to the disk without settling. Therefore, the dust settling effectively increases the temperature of the dust responsible for the millimeter emission from the active disk.
* If the turbulence strength parameter for vertical dust mixing \(\alpha_{\rm t}\) is \(10^{-4}\), the active disk model with \(a_{\rm max}=300\)\(\mu\)m overpredicts the brightness temperatures, suggesting that \(\alpha_{\rm t}>10^{-4}\).
* The estimated effective accretion efficiency \(\alpha_{\rm acc}\) falls into a reasonable range of \(10^{-4}\)-\(10^{-2}\), depending on the dust size. If \(a_{\rm max}=300\)\(\mu\)m, \(\alpha_{\rm acc}\) is expected to be \(\sim 10^{-3}\), indicating that \(\alpha_{\rm t}\) is not much smaller than \(\alpha_{\rm acc}\).
* The efficiency of accretion heating needs to be reduced if dust is porous (\(p=0.9\)) because the accretion heating is too efficient to explain the observations regardless of the dust size.

Future observations using longer wavelengths, such as ALMA Bands 1-3 and VLA, will play a crucial role in distinguishing between the active and passive disk models.
The results of our work highlight the importance of multi-wavelength observations in improving our understanding of the physical properties of the inner regions of protoplanetary disks.

###### Acknowledgements.

We thank the anonymous referee for useful comments. We also thank Sean Andrews for providing the observed SED of CW Tau. T.U. acknowledges the support of the DFG Grant "Inside: inner regions of protoplanetary disks: simulations and observations" (FL 909/5-1). S.O. is supported by JSPS KAKENHI Grant Numbers JP18H05438, JP20H00182, and JP20H01948. M.F. acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 757957).
2307.04583
Parameterised distance to local irregularity
A graph $G$ is \emph{locally irregular} if no two of its adjacent vertices have the same degree. In [Fioravantes et al. Complexity of finding maximum locally irregular induced subgraph. {\it SWAT}, 2022], the authors introduced and studied the problem of finding a locally irregular induced subgraph of a given graph $G$ of maximum order, or, equivalently, computing a subset $S$ of $V(G)$ of minimum order, whose deletion from $G$ results in a locally irregular graph; $S$ is denoted as an \emph{optimal vertex-irregulator of $G$}. In this work we provide an in-depth analysis of the parameterised complexity of computing an optimal vertex-irregulator of a given graph $G$. Moreover, we introduce and study a variation of this problem, where $S$ is a subset of the edges of $G$; in this case, $S$ is denoted as an \emph{optimal edge-irregulator of $G$}. In particular, we prove that computing an optimal vertex-irregulator of a graph $G$ is in FPT when parameterised by the vertex integrity, neighborhood diversity or cluster deletion number of $G$, while it is $W[1]$-hard when parameterised by the feedback vertex set number or the treedepth of $G$. In the case of computing an optimal edge-irregulator of a graph $G$, we prove that this problem is in FPT when parameterised by the vertex integrity of $G$, while it is NP-hard even if $G$ is a planar bipartite graph of maximum degree $4$, and $W[1]$-hard when parameterised by the size of the solution, the feedback vertex set or the treedepth of $G$. Our results paint a comprehensive picture of the tractability of both problems studied here, considering most of the standard graph-structural parameters.
Foivos Fioravantes, Nikolaos Melissinos, Theofilos Triommatis
2023-07-10T14:21:43Z
http://arxiv.org/abs/2307.04583v3
# Parameterised distance to local irregularity

###### Abstract

A graph \(G\) is _locally irregular_ if no two of its adjacent vertices have the same degree. In [Fioravantes et al. Complexity of finding maximum locally irregular induced subgraph. _SWAT_, 2022], the authors introduced and studied the problem of finding a locally irregular induced subgraph of a given graph \(G\) of maximum order, or, equivalently, computing a subset \(S\) of \(V(G)\) of minimum order, whose deletion from \(G\) results in a locally irregular graph; \(S\) is denoted as an _optimal vertex-irregulator of \(G\)_. In this work we provide an in-depth analysis of the parameterised complexity of computing an optimal vertex-irregulator of a given graph \(G\). Moreover, we introduce and study a variation of this problem, where \(S\) is a subset of the edges of \(G\); in this case, \(S\) is denoted as an _optimal edge-irregulator of \(G\)_. In particular, we prove that computing an optimal vertex-irregulator of a graph \(G\) is in FPT when parameterised by the vertex integrity, neighborhood diversity or cluster deletion number of \(G\), while it is W[1]-hard when parameterised by the feedback vertex set number or the treedepth of \(G\). In the case of computing an optimal edge-irregulator of a graph \(G\), we prove that this problem is in FPT when parameterised by the vertex integrity of \(G\), while it is \(\mathcal{NP}\)-hard even if \(G\) is a planar bipartite graph of maximum degree \(4\), and W[1]-hard when parameterised by the size of the solution, the feedback vertex set or the treedepth of \(G\). Our results paint a comprehensive picture of the tractability of both problems studied here, considering most of the standard graph-structural parameters.
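The central definition above, that no two adjacent vertices share a degree, can be checked in linear time in the size of the graph. A minimal sketch (the function name and edge-list representation are ours, not from the paper):

```python
def is_locally_irregular(n, edges):
    """Return True iff no two adjacent vertices of the graph on
    vertices 0..n-1 with the given edge list have the same degree."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # The property fails exactly when some edge joins equal-degree endpoints.
    return all(deg[u] != deg[v] for u, v in edges)

# A path on 3 vertices (degrees 1, 2, 1) is locally irregular,
# while a path on 4 vertices (degrees 1, 2, 2, 1) is not.
print(is_locally_irregular(3, [(0, 1), (1, 2)]))          # True
print(is_locally_irregular(4, [(0, 1), (1, 2), (2, 3)]))  # False
```

Note that the property is not hereditary: removing vertex 3 from the (non-locally-irregular) path on 4 vertices yields the locally irregular path on 3 vertices, which is exactly why deletion sets are the natural object of study here.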
**Keywords:** Locally irregular, largest induced subgraph, FPT, W-hardness

## 1 Introduction

A fundamental problem in graph theory is "given a graph \(G\), find an induced subgraph \(H\) of \(G\), of maximum order, that belongs in the family of graphs verifying a property \(\Pi\)", in which case we say that \(H\in\Pi\):

Largest Induced Subgraph with Property \(\Pi\) (ISP\(\Pi\)) [18]
_Input:_ A graph \(G=(V,E)\), an integer \(k\), a property \(\Pi\).
_Question:_ Does there exist a set \(S\subseteq V\) such that \(|S|\leq k\) and \(G-S\in\Pi\)?

There is a plethora of classical problems that fall under this general setting. Consider for example the Vertex Cover and the Feedback Vertex Set, where \(\Pi\) is the property "the graph is an independent set" and "the graph is a forest", respectively. In this paper we study the ISP\(\Pi\) problem where \(\Pi\) is the property "the graph is locally irregular", recently introduced in [15]. A graph \(G=(V,E)\) is called _locally irregular_ if no two adjacent vertices in \(V\) have the same degree. We extend the work presented in [15] by more thoroughly investigating the behaviour of the problem with regard to parameterised complexity. In addition, we take the first step towards the problem of finding large locally irregular (not necessarily induced) subgraphs of a given graph \(G\). In particular, we introduce the problem where the goal is to find a subset of edges of \(G\) of maximum order, whose removal renders the graph locally irregular. Our results allow us to paint a rather clear picture concerning the tractability of both problems studied here, considering many standard graph-structural parameters (see Figure 1 for an overview of our results).

**ISP\(\Pi\) and hereditarity.** The ISP\(\Pi\) problem has been extensively studied in the case where \(\Pi\) is a _hereditary_ property.
Formally, a property \(\Pi\) is _hereditary_ if, for any graph \(G\) verifying that property, it holds that any induced subgraph of \(G\) also verifies that property (notice that the properties mentioned previously are indeed hereditary). It was already shown in [27] that ISP\(\Pi\) is a hard problem for any non-trivial hereditary property. On the positive side, the ISP\(\Pi\) problem always admits an FPT algorithm, when parameterised by the size of the solution, if \(\Pi\) is a hereditary property [8, 23]. This is an important result, as it allows us to conceive efficient algorithms to solve computationally hard problems, as long as we restrict ourselves to graphs verifying such properties. It is also worth mentioning the work in [16], which provides a framework that yields exact algorithms that are significantly faster than brute-force to solve a more general version of the ISP\(\Pi\) problem: given a universe, find a subset of maximum cardinality which verifies some hereditary property. On a high level, the algorithm proposed in [16] builds the solution, which is a subset \(H\) of maximum cardinality with the wanted property, by continuously extending a partial solution \(X\subseteq H\). Note that this approach only works if \(\Pi\) is indeed a hereditary property.

Figure 1: Overview of our results. A parameter \(A\) appearing linked to a parameter \(B\) with \(A\) being below \(B\) is to be understood as "\(A\) is at least as large as \(B\)". In light blue and olive we exhibit the FPT results we provide for the problems of finding an optimal vertex- and edge-irregulator, respectively. In red we exhibit the W[1]-hardness results we provide for both problems.

More recently, this approach was generalised by the authors of [13], who provide a framework that yields exponential-time approximation algorithms. However, not all interesting properties are hereditary.
E.g., "all vertices of the induced subgraph have odd degree" and "the induced subgraph is \(d\)-regular", where \(d\) is an integer given in the input (recall that a graph is \(d\)-_regular_ if all of its vertices have the same degree \(d\)), are two non-hereditary properties. The authors of [5] studied the ISP\(\Pi\) problem for the former property, showing that this is an \(\mathcal{NP}\)-hard problem, and providing an FPT algorithm that solves the problem when parameterised by the rank-width. Also, the authors of [1, 3, 28] studied the ISP\(\Pi\) problem for the latter property. In particular, in [3] it is shown that finding a (connected) induced subgraph of maximum order that is \(d\)-regular is \(\mathcal{NP}\)-hard to approximate, even when restricted to bipartite or planar graphs. The authors of [3] also provide a linear-time algorithm to solve this problem for graphs with bounded treewidth. Lastly, it is also worth mentioning [7], where the authors consider the non-hereditary property "the induced subgraph is \(k\)-anonymous", where a graph \(G\) is \(k\)-anonymous if for each vertex of \(G\) there are at least \(k-1\) other vertices of the same degree. An important observation is that, in the case of non-hereditary properties, the ISP\(\Pi\) problem does not necessarily admit an FPT algorithm parameterised by the size of the solution. Indeed, the authors of [28] proved that when considering \(\Pi\) as "the induced subgraph is regular", the ISP\(\Pi\) problem is W[1]-hard when parameterised by the size of the solution. This indicates the importance of considering graph-structural parameters for conceiving efficient algorithms for such problems. This is exactly the approach followed in [17, 25], where the authors consider a generalisation of Vertex Cover, the ISP\(\Pi\) problem where \(\Pi\) is "the graph has maximum degree \(k\)", for an integer \(k\) given in the input.
**Distance from local irregularity.** In some sense, the property that interests us lies on the opposite side of the one studied in [1, 3, 28]. Recall that a graph \(G\) is locally irregular if no two of its adjacent vertices have the same degrees. The notion of locally irregular graphs was formally introduced in [4], where the authors take some steps towards proving the so-called 1-2-3 Conjecture proposed in [21] and claimed to be solved recently in [22]. Roughly, this conjecture is about functions assigning weights from \([k]=\{1,\ldots,k\}\) to the edges of a graph, called proper \(k\)-labellings, so that all adjacent vertices have different weighted degrees; the conjecture states that for any non-trivial graph, this should always be achievable for \(k\leq 3\). In [15], the authors introduced and studied the problem of finding a locally irregular induced subgraph of a given graph \(G\) of maximum order (a non-hereditary property). Equivalently, given a graph, find a set of _vertices_ of minimum cardinality, whose deletion renders the graph locally irregular; such sets are named _optimal vertex-irregulators_. The main focus of [15] was to study the complexity of computing an optimal vertex-irregulator of a given graph. Among other results, it was shown that this problem is \(\mathcal{NP}\)-hard even for subcubic planar bipartite graphs, W[2]-hard parameterised by the size of the solution and W[1]-hard parameterised by the treewidth of the input graph. Moreover, for any constant \(\varepsilon<1\), there cannot be a polynomial-time \(\mathcal{O}(n^{1-\varepsilon})\)-approximation algorithm. On the positive side, there are two FPT algorithms that solve this problem, parameterised by the maximum degree of the input graph plus either the size of the solution or the treewidth of the input graph. Note that the notion of vertex-irregulators proved to be fruitful in the context of proper labellings.
Indeed, the authors of [6] observed a connection between finding large locally irregular induced subgraphs and constructing proper \(k\)-labellings that also maximise the use of weight 1 on the edges of the given graph. Apart from improving the results of [15], in this paper we also introduce the novel problem of computing a subset of a graph's _edges_, of minimum order, whose deletion renders the graph locally irregular; such sets are named _optimal edge-irregulators_. This problem is introduced as a first step towards understanding the problem of finding large locally irregular (not necessarily induced) subgraphs of a given graph. Problems concerned with finding maximum subgraphs verifying a specific property have also been extensively studied (_e.g._, [9, 10, 2]). One might expect that finding edge-irregulators could be easier than finding vertex-irregulators, as it is often the case with graph-theoretical problems concerned with subsets of edges whose versions considering subsets of vertices are intractable (recall, _e.g._, Edge Cover, Feedback Edge Set and even Min Weighted Lower-Upper-Cover [29]). As it turns out, however, finding optimal edge-irregulators is also a computationally hard problem.

**Our contribution.** In this paper we study the complexity of computing optimal vertex- and edge-irregulators. Our results allow us to identify the parameters for which the tractability of the former problem changes, considering almost all standard graph-structural parameters. We also take steps towards the same goal for the latter problem. In Section 2 we introduce the needed notation and provide some first results. In particular, we observe that computing optimal vertex-irregulators is W[1]-hard when parameterised by the treedepth or the feedback vertex set number of the given graph.
Section 3 is focused on providing FPT algorithms for the problem of finding optimal vertex-irregulators, parameterised by the neighbourhood diversity or the vertex integrity of the input graph. In Section 4, we focus on the problem of finding optimal edge-irregulators. First, we prove that this problem is \(\mathcal{NP}\)-hard, even when restricted to planar bipartite graphs of maximum degree 4. We also show that the problem is W[1]-hard parameterised by the size of the solution or the feedback vertex set of the input graph. Lastly, we modify the FPT algorithm for computing an optimal vertex-irregulator parameterised by the vertex integrity in order to provide an FPT algorithm that solves the edge version of the problem (once more parameterised by the vertex integrity). We close the paper in Section 5, where we propose some directions for further research.

## 2 Preliminaries

For notions and definitions of graph theory not explained here, we refer the reader to [11]. Let \(G=(V,E)\) be a graph and \(G^{\prime}=(V^{\prime},E^{\prime})\) be a subgraph of \(G\) (_i.e._, created by deleting vertices and/or edges of \(G\)). Recall first that the subgraph \(G^{\prime}\) is _induced_ if it can be created only by deleting vertices of \(G\). That is, for each edge \(uv\in E\), if \(u,v\in V^{\prime}\), then \(uv\in E^{\prime}\). For any vertex \(v\in V\), let \(N_{G}(v)=\{u\in V:uv\in E\}\) denote the _neighbourhood_ of \(v\) in \(G\) and \(d_{G}(v)=|N_{G}(v)|\) denote the _degree_ of \(v\) in \(G\). Note that, whenever the graph \(G\) is clear from the context, we will omit the subscript and simply write \(N(v)\) and \(d(v)\). Also, for \(S\subseteq E\), denote by \(G-S\) the graph \(G^{\prime}=(V,E\setminus S)\). That is, \(G^{\prime}\) is the graph resulting from the deletion of the edges of \(S\) from the graph \(G\). Let \(G=(V,E)\) be a graph. We say that \(G\) is _locally irregular_ if, for every edge \(uv\in E\), we have \(d(u)\neq d(v)\).
Now, let \(S\subseteq V\) be such that \(G[V\setminus S]\) is a locally irregular graph; any set \(S\) that has this property is denoted as a _vertex-irregulator of \(G\)_. Moreover, let \(\mathrm{I}_{v}(G)\) be the minimum order that any vertex-irregulator of \(G\) can have. We will say that \(S\) is an _optimal_ vertex-irregulator of \(G\) if \(S\) is a vertex-irregulator of \(G\) and \(|S|=\mathrm{I}_{v}(G)\). Similarly, we define an _edge-irregulator of \(G\)_ to be any set \(S\subseteq E\) such that \(G-S\) is locally irregular. Moreover, let \(\mathrm{I}_{e}(G)\) be the minimum order that any edge-irregulator of \(G\) can have. We will say that \(S\) is an _optimal_ edge-irregulator of \(G\) if \(S\) is an edge-irregulator of \(G\) and \(|S|=\mathrm{I}_{e}(G)\). The next simple observation is quite useful when proving lower bounds on an optimal vertex- or edge-irregulator of a graph. **Observation 2.1**.: _Let \(G=(V,E)\) be a graph containing two vertices \(u,v\) such that \(uv\in E\) and \(d(u)=d(v)\). Any edge-irregulator of \(G\) contains at least one edge incident to \(u\) or \(v\). Also, any vertex-irregulator of \(G\) contains at least one vertex in \(N(u)\cup N(v)\)._ Let \(G=(V,E)\) be a graph. We say that two vertices \(u\), \(v\) of \(V\) are _twins_ if \(N(u)\setminus\{v\}=N(v)\setminus\{u\}\), i.e., they have the same neighbourhoods. **Observation 2.2**.: _Let \(G=(V,E)\) be a graph and \(u,v\in V\) be a pair of twins of \(G\) such that \(uv\in E\). Any vertex-irregulator of \(G\) contains at least one vertex in \(\{u,v\}\)._ Indeed, by Observation 2.1, we get that any vertex-irregulator \(S\) of \(G\) includes at least one neighbour of \(u\) or \(v\). If we assume that \(S\cap\{u,v\}=\emptyset\), then \(u\) and \(v\) are once more adjacent twins in \(G[V\setminus S]\), contradicting the fact that \(S\) is a vertex-irregulator.
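To make the definitions concrete, here is a small brute-force sketch (our own illustration, not part of the paper's algorithms; graphs are encoded as adjacency dictionaries, and the exhaustive search is exponential, so it is only meant for tiny instances) that checks local irregularity and computes \(\mathrm{I}_{v}(G)\) and \(\mathrm{I}_{e}(G)\):

```python
from itertools import combinations

def is_locally_irregular(adj):
    # adj: dict mapping each vertex to the set of its neighbours
    deg = {v: len(ns) for v, ns in adj.items()}
    return all(deg[u] != deg[v] for u in adj for v in adj[u])

def induced(adj, keep):
    # induced subgraph on the vertex set `keep`
    keep = set(keep)
    return {v: adj[v] & keep for v in keep}

def I_v(adj):
    # minimum order of a vertex-irregulator, by exhaustive search
    V = list(adj)
    for k in range(len(V) + 1):
        for S in combinations(V, k):
            if is_locally_irregular(induced(adj, set(V) - set(S))):
                return k

def I_e(adj):
    # minimum order of an edge-irregulator, by exhaustive search
    E = [(u, v) for u in adj for v in adj[u] if u < v]
    for k in range(len(E) + 1):
        for S in combinations(E, k):
            H = {v: set(ns) for v, ns in adj.items()}
            for u, v in S:
                H[u].discard(v)
                H[v].discard(u)
            if is_locally_irregular(H):
                return k

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}  # K_3: adjacent equal degrees
path3 = {0: {1}, 1: {0, 2}, 2: {1}}           # P_3: already locally irregular
```

On the triangle \(K_{3}\), for instance, the sketch returns \(\mathrm{I}_{v}(K_{3})=2\) but \(\mathrm{I}_{e}(K_{3})=1\) (deleting a single edge leaves the locally irregular path \(P_{3}\)), so the two quantities can differ.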
The importance of the upcoming Lemma 2.3 lies in the fact that we can repeatedly apply it and reduce the size of the graph on which we are searching for a vertex-irregulator, as long as the reduced graph contains a pair of adjacent twins. This is a core argument behind the algorithms presented in Theorems 3.2 and 3.6. **Lemma 2.3**.: _Let \(G=(V,E)\) be a graph and \(u,v\in V\) be a pair of adjacent twins. Let \(G^{\prime}=(V^{\prime},E^{\prime})\) be the graph resulting from the deletion of either \(u\) or \(v\) from \(G\). Then, \(\mathrm{I}_{v}(G)=\mathrm{I}_{v}(G^{\prime})+1\)._ Proof.: Assume w.l.o.g. that \(u\notin V^{\prime}\). We first prove that \(\mathrm{I}_{v}(G)\leq\mathrm{I}_{v}(G^{\prime})+1\). Indeed, assume that \(\mathrm{I}_{v}(G)>\mathrm{I}_{v}(G^{\prime})+1\) and let \(S^{\prime}\) be an optimal vertex-irregulator of \(G^{\prime}\). Next, consider the graph \(\tilde{G}=G[V\setminus(S^{\prime}\cup\{u\})]\). From the construction of \(G^{\prime}\), it follows that \(\tilde{G}=G^{\prime}[V^{\prime}\setminus S^{\prime}]\). Since \(S^{\prime}\) is a vertex-irregulator of \(G^{\prime}\), we obtain that \(\tilde{G}\) is locally irregular. In other words, the set \(S^{\prime}\cup\{u\}\) is a vertex-irregulator of \(G\) and \(|S^{\prime}\cup\{u\}|=I_{v}(G^{\prime})+1\), a contradiction. Next, assume that \(\mathrm{I}_{v}(G)<\mathrm{I}_{v}(G^{\prime})+1\) and let \(S\) be an optimal vertex-irregulator of \(G\). It follows from Observation 2.2 that \(|\{u,v\}\cap S|\geq 1\). Assume w.l.o.g. that \(u\in S\). Thus, and by the construction of \(G^{\prime}\), we have that \(G^{\prime}[V^{\prime}\setminus(S\setminus\{u\})]=G[V\setminus S]\) and the set \(S\setminus\{u\}\) is a vertex-irregulator of \(G^{\prime}\). In other words, \(\mathrm{I}_{v}(G^{\prime})\leq|S|-1=\mathrm{I}_{v}(G)-1\), a contradiction. 
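Lemma 2.3 yields a simple preprocessing routine: as long as the graph contains a pair of adjacent twins, delete one of them and increase a counter \(d\); the problem is then solved on the reduced graph and \(d\) is added back. A minimal sketch (our own illustration, using the same adjacency-dictionary encoding as before):

```python
def reduce_adjacent_twins(adj):
    """Repeatedly delete one vertex of a pair of adjacent twins.
    Returns (reduced graph G', d); by Lemma 2.3, I_v(G) = I_v(G') + d."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    d = 0
    found = True
    while found:
        found = False
        for u in list(adj):
            for v in list(adj[u]):
                # u, v adjacent twins: same neighbourhood up to each other
                if adj[u] - {v} == adj[v] - {u}:
                    for w in adj[v]:
                        adj[w].discard(v)
                    del adj[v]
                    d += 1
                    found = True
                    break
            if found:
                break
    return adj, d

# K_4: every two vertices are adjacent twins
k4 = {v: {0, 1, 2, 3} - {v} for v in range(4)}
```

On \(K_{4}\) the routine deletes three vertices and leaves a single vertex, matching \(\mathrm{I}_{v}(K_{n})=n-1\).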
We close this section with some observations on the proof that computing \(\mathrm{I}_{v}(G)\) is W[1]-hard parameterised by the treewidth of \(G\), initially presented in [15], which allow us to show that this result holds even if we consider more "generous" parameters, such as the treedepth or the feedback vertex set number (_i.e._, size of a minimum feedback vertex set) of the input graph. Recall that the _treedepth_ of a graph \(G=(V,E)\) can be defined recursively: if \(|V|=1\), then \(G\) has treedepth \(1\). Then, \(G\) has treedepth at most \(k\) if there exists a vertex \(v\in V\) such that every connected component of \(G[V\setminus\{v\}]\) has treedepth at most \(k-1\). Given a graph \(G\) and a tree \(T\) rooted at a vertex \(u\), by _attaching_ \(T\) on a vertex \(v\) of \(G\) we mean the operation of adding \(T\) to \(G\) and identifying \(u\) with \(v\). **Observation 2.4**.: _Let \(G\) be a graph with vertex cover number (i.e., size of a minimum vertex cover) \(k_{1}\) and \(T\) be a rooted tree of depth \(k_{2}\). Let \(G^{\prime}\) be the graph resulting from attaching an arbitrary number of copies of \(T\) directly on vertices of \(G\). Then \(G^{\prime}\) has treedepth \(\mathcal{O}(k_{1}+k_{2})\) and feedback vertex set number \(\mathcal{O}(k_{1})\)._ The reduction presented in [15, Theorem 16] starts with a graph \(G\) which is part of an instance of the List Colouring problem, and constructs a graph \(G^{\prime}\) by attaching some trees of depth at most \(3\) on each vertex of \(G\). The List Colouring problem was shown to be W[1]-hard in [14] when parameterised by the vertex cover number of the input graph.
Thus, and by Observation 2.4, we obtain the following: **Corollary 2.5**.: _Given a graph \(G\), it is W[1]-hard to compute \(\mathrm{I}_{v}(G)\) parameterised by either the treedepth or the feedback vertex set number of \(G\)._

## 3 FPT algorithms for vertex-irregulators

In this section we present two FPT algorithms that compute an optimal vertex-irregulator of a given graph \(G\), when parameterised by the neighbourhood diversity or the vertex integrity of \(G\). The latter algorithm is then used to show that this problem is in FPT also when parameterised by the cluster deletion number of \(G\). We begin by recalling the needed definitions. The _twin equivalence_ of \(G\) is the relation on the vertices of \(V\) according to which two vertices belong to the same equivalence class if and only if they are twins. **Definition 3.1** ([24]).: _The neighbourhood diversity of a graph \(G\), denoted by \(nd(G)\), is the number \(k\) of classes of the twin equivalence of \(G\)._ Let \(G=(V,E)\) be a graph with \(nd(G)=k\) and let \(V_{1},\ldots,V_{k}\) be the partition of \(V\) defined by the twin equivalence of \(G\). Observe that for any \(i\in[k]\), we have that \(G[V_{i}]\) is either an independent set or a clique. **Theorem 3.2**.: _Given a graph \(G=(V,E)\) such that \(nd(G)=k\), there exists an algorithm that computes \(\mathrm{I}_{v}(G)\) in FPT-time parameterised by \(k\)._ Proof.: Let \(V_{1},\ldots,V_{k}\) be the partition of \(V\) defined by the twin equivalence of \(G\). Recall that for any \(i\in[k]\), we have that \(G[V_{i}]\) is either an independent set or a clique. We begin by constructing an induced subgraph \(G^{\prime}=(V^{\prime},E^{\prime})\) of \(G\) by applying the following procedure: for each \(i\in[k]\), if \(G[V_{i}]\) is a clique on at least two vertices, then delete all the vertices of \(V_{i}\) except one; let \(D\) be the set of vertices that were deleted in this fashion throughout the procedure and \(d=|D|\).
Observe that this procedure terminates after \(k\) repetitions and, thus, runs in polynomial time (in regards to \(|V|\)). Moreover, it follows from Lemma 2.3 that \(\mathrm{I}_{v}(G)=\mathrm{I}_{v}(G^{\prime})+d\). Thus, in order to compute \(\mathrm{I}_{v}(G)\), it suffices to compute \(\mathrm{I}_{v}(G^{\prime})\). To achieve that, we model this problem as an ILP on a bounded number of variables. For every \(i\in[k]\), let \(V^{\prime}_{i}=V_{i}\cap V^{\prime}\). Also, for every \(i\in[k]\), let \(N(i)=\{j\in[k]\mid\exists u\in V^{\prime}_{j}\text{ and }v\in V^{\prime}_{i}\text{ s.t. }uv\in E^{\prime}\}\). That is, \(N(i)\) is the set of indices of the neighbouring partitions of vertices in \(V^{\prime}_{i}\). Finally, we guess a partition of \([k]\) into \(S_{1}\) and \(S_{2}\) (there are at most \(2^{k}\) such partitions), such that, if \(S^{\prime}\) is a vertex-irregulator of \(G^{\prime}\), then \(S^{\prime}\cap V^{\prime}_{i}=V^{\prime}_{i}\) for all \(i\in S_{2}\), and \(S^{\prime}\cap V^{\prime}_{i}\neq V^{\prime}_{i}\) for all \(i\in S_{1}\).

Variables:

* \(x_{i}\in[|V^{\prime}_{i}|]\), for \(i\in S_{1}\): the number of vertices remaining in \(V^{\prime}_{i}\) (for convenience, we set \(x_{i}=0\) for all \(i\in S_{2}\)).

Objective:

\[\max\sum_{i\in S_{1}}x_{i} \tag{3.1}\]

Constraints:

\[\sum_{\ell\in N(i)}x_{\ell}\neq\sum_{\ell\in N(j)}x_{\ell}\qquad\qquad\forall i,j\in S_{1}\text{ s.t. }j\in N(i) \tag{3.2}\]

The variable \(x_{i}\) is used in the above model to represent the number of vertices that will remain in \(V^{\prime}_{i}\), for each \(i\in S_{1}\), after the deletion of an optimal vertex-irregulator \(S^{\prime}\) of \(G^{\prime}\). Constraint 3.2 makes sure that any two adjacent vertices \(u,v\in V^{\prime}\) have different degrees in \(G^{\prime}[V^{\prime}\setminus S^{\prime}]\). Indeed, for each \(uv\in E^{\prime}\), there exist \(i,j\) such that \(u\in V^{\prime}_{i}\) and \(v\in V^{\prime}_{j}\).
If either \(i\in S_{2}\) or \(j\in S_{2}\) (or both), then \(u\in S^{\prime}\) or \(v\in S^{\prime}\) (or both). Thus, we can assume that \(i,j\in S_{1}\). In this case, it follows from constraint 3.2 that \(d_{G^{\prime}[V^{\prime}\setminus S^{\prime}]}(u)=\sum_{\ell\in N(i)}x_{\ell}\neq\sum_{\ell\in N(j)}x_{\ell}=d_{G^{\prime}[V^{\prime}\setminus S^{\prime}]}(v)\). In any case, \(G^{\prime}[V^{\prime}\setminus S^{\prime}]\) is locally irregular. Finally, since the model has \(k\) variables, we can solve it and obtain \(S^{\prime}\) in FPT time, parameterised by \(k\) (by running for example the Lenstra algorithm [26]). We now present an FPT algorithm to compute an optimal vertex-irregulator of an input graph \(G\) when parameterised by the vertex integrity of \(G\). **Definition 3.3**.: _A graph \(G=(V,E)\) has vertex integrity \(k\) if there exists a set \(U\subseteq V\) such that \(|U|=k^{\prime}\leq k\) and all connected components of \(G[V\setminus U]\) are of order at most \(k-k^{\prime}\)._ It is known that we can find such a set in FPT-time parameterised by \(k\) [12]. **Theorem 3.4**.: _Given a graph \(G=(V,E)\) with vertex integrity \(k\), there exists an algorithm that computes \(\mathrm{I}_{v}(G)\) in FPT-time parameterised by \(k\)._ Proof.: Let \(U\) be such that \(|U|=k^{\prime}\leq k\) and \(C_{1},\ldots,C_{m}\) be the vertex sets of the connected components of \(G[V\setminus U]\). It follows that \(|C_{j}|\leq k\), \(j\in[m]\). Assume that we know the intersection of an optimal vertex-irregulator \(S\) of \(G\) and the set \(U\), and let \(S^{\prime}=S\cap U\) and \(U^{\prime}=U\setminus S\) (there are at most \(2^{|U|}\leq 2^{k}\) possible intersections \(S^{\prime}\) of \(U\) and \(S\)). Notice that the graph \(G[V\setminus S^{\prime}]\) has an optimal vertex-irregulator that contains only vertices from \(\bigcup_{i\in[m]}C_{i}\). Indeed, assuming otherwise contradicts that \(S^{\prime}\) is the intersection of an optimal vertex-irregulator and \(U\).
Thus, in order to find an optimal vertex-irregulator \(S\) of \(G\), it suffices to compute \(S^{*}\subseteq\bigcup_{i\in[m]}C_{i}\), which is an optimal vertex-irregulator of \(G[V\setminus S^{\prime}]\), for every set \(S^{\prime}\subseteq U\). Then, we return the set \(S^{*}\cup S^{\prime}\) of minimum order. We compute \(S^{*}\) through an ILP with a bounded number of variables. To do so, we define types and sub-types of graphs \(G[U^{\prime}\cup C_{j}]\). Informally, the main idea is to categorise the graphs \(G[U^{\prime}\cup C_{j}]\), \(j\in[m]\), into types based on their structure (formally defined later), whose number is bounded by a function of \(k\). Each type \(i\) is associated to a number \(no_{i}\) that represents the number of the subgraphs \(G[U^{\prime}\cup C_{j}]\), \(j\in[m]\), that belong in that type. Then, for each type \(i\), we will define sub-types based on the induced subgraphs \(G[(U^{\prime}\cup C_{j})\setminus S_{q}]\), for \(S_{q}\subseteq C_{j}\). We also define a variable \(no_{i,q}\) that is the number of the subgraphs \(G[U^{\prime}\cup C_{j}]\), \(j\in[m]\), that are of type \(i\) and of sub-type \(q\) in \(G[V\setminus S]\). Note that knowing the structure of these types and sub-types, together with \(no_{i,q}\), is enough to compute the order of \(S^{*}\). Finally, for any \(j\in[m]\), the graph \(G[U^{\prime}\cup C_{j}]\) is of order at most \(k\). Thus, the number of types, sub-types and their corresponding variables, is bounded by a function of \(k\). We will present an ILP formulation whose objective is to minimise the order of \(S^{*}\). We begin by defining the types. Two graphs \(G[U^{\prime}\cup C_{i}]\) and \(G[U^{\prime}\cup C_{j}]\), \(i,j\in[m]\), are of the same type if there exists a bijection\({}^{1}\) \(f:C_{i}\cup U^{\prime}\to C_{j}\cup U^{\prime}\) such that \(f(u)=u\) for all \(u\in U^{\prime}\) and \(N_{G[U^{\prime}\cup C_{i}]}(u)=\{f^{-1}(v)\mid v\in N_{G[U^{\prime}\cup C_{j}]}(f(u))\}\) for all \(u\in C_{i}\).
Note that if such a function exists, then \(G[U^{\prime}\cup C_{i}]\) is isomorphic to \(G[U^{\prime}\cup C_{j}]\).

Footnote 1: Recall that a function \(f:A\to B\) is a _bijection_ if, for every \(a_{1},a_{2}\in A\) with \(a_{1}\neq a_{2}\), we have that \(f(a_{1})\neq f(a_{2})\) and for every \(b\in B\), there exists an \(a\in A\) such that \(f(a)=b\). Recall also that the _inverse_ function of \(f\), denoted as \(f^{-1}\), exists if and only if \(f\) is a bijection, and is such that \(f^{-1}:B\to A\) and for each \(b\in B\) we have that \(f^{-1}(b)=a\), where \(f(a)=b\).

Let \(p\) be the number of different types. Notice that \(p\) is bounded by a function of \(k\), as any graph \(G[U^{\prime}\cup C_{i}]\) has order at most \(k\). Also, we can decide if two graphs \(G[U^{\prime}\cup C_{i}]\) and \(G[U^{\prime}\cup C_{j}]\), \(i,j\in[m]\), are of the same type in FPT-time parameterised by \(k\). For each type \(i\in[p]\), set \(no_{i}\) to be the number of graphs \(G[U^{\prime}\cup C_{j}]\), \(j\in[m]\), of type \(i\). Furthermore, for each type \(i\in[p]\) we select a \(C_{j}\), \(j\in[m]\), such that \(G[U^{\prime}\cup C_{j}]\) is of type \(i\), to represent that type; we will denote this set of vertices by \(C^{\prime}_{i}\). We are now ready to define the sub-types. Let \(i\in[p]\) be a type represented by \(C^{\prime}_{i}\) and \(S^{i}_{1},\ldots,S^{i}_{2^{|C^{\prime}_{i}|}}\) be an enumeration of the subsets of \(C^{\prime}_{i}\). For any \(q\in[2^{|C^{\prime}_{i}|}]\) we define a sub-type \((i,q)\) which represents the induced subgraph \(G[(U^{\prime}\cup C^{\prime}_{i})\setminus S^{i}_{q}]\). Set \(no_{i,q}\) to be the number of graphs \(G[U^{\prime}\cup C_{j}]\), \(j\in[m]\), of type \(i\) that are of sub-type \((i,q)\) in \(G[V\setminus S^{*}]\), for a vertex-irregulator \(S^{*}\), _i.e._, \(S^{*}\cap C^{\prime}_{i}=S^{i}_{q}\).
Notice that, given a vertex-irregulator \(S^{*}\subseteq\bigcup_{j\in[m]}C_{j}\) of \(G[V\setminus S^{\prime}]\), for every \(j\in[m]\) there exists a sub-type \((i,q)\), \(i\in[p]\), \(q\in[2^{|C^{\prime}_{i}|}]\), such that the graph \(G[(U^{\prime}\cup C_{j})\setminus S^{*}]\) is of sub-type \((i,q)\). Also, assuming that we know the order \(|S^{i}_{q}|\) and the number \(no_{i,q}\) for all \(i\in[p]\), \(q\in[2^{|C^{\prime}_{i}|}]\), then \(|S^{*}|=\sum_{i\in[p]}\sum_{q\in[2^{|C^{\prime}_{i}|}]}no_{i,q}|S^{i}_{q}|\). Before giving the ILP formulation, whose goal is to find a vertex-irregulator \(S^{*}\) while minimising the above sum, we guess the pairs \((i,q)\) such that \(no_{i,q}\neq 0\). Let \(S_{2}\) be the set of pairs \((i,q)\), \(i\in[p]\), \(q\in[2^{|C^{\prime}_{i}|}]\), such that there are two vertices \(u,v\in C^{\prime}_{i}\setminus S^{i}_{q}\) where \(uv\in E(G[(U^{\prime}\cup C^{\prime}_{i})\setminus S^{i}_{q}])\) and \(d_{G[(U^{\prime}\cup C^{\prime}_{i})\setminus S^{i}_{q}]}(u)=d_{G[(U^{\prime}\cup C^{\prime}_{i})\setminus S^{i}_{q}]}(v)\). For every \((i,q)\in S_{2}\), we have that \(no_{i,q}=0\). Indeed, assuming otherwise contradicts the fact that \(S^{*}\) is a vertex-irregulator. We guess \(S_{1}\subseteq\{(i,q)\mid i\in[p],q\in[2^{|C^{\prime}_{i}|}]\}\setminus S_{2}\) such that \(no_{i,q}\neq 0\) for all \((i,q)\in S_{1}\). Observe that the number of different sets that are candidates for \(S_{1}\) is at most some function of \(k\).
Constants:

* \(no_{i}\), for \(i\in[p]\): the number of components of type \(i\);
* \(e_{uv}\in\{0,1\}\), for \(u,v\in U^{\prime}\): set to 1 iff \(uv\in E(G[U^{\prime}])\);
* \(e^{i,q}_{u,v}\in\{0,1\}\), for \(i\in[p]\), \(q\in[2^{|C^{\prime}_{i}|}]\), \(u\in U^{\prime}\) and \(v\in C^{\prime}_{i}\setminus S^{i}_{q}\): set to 1 iff \(uv\in E(G[(U^{\prime}\cup C^{\prime}_{i})\setminus S^{i}_{q}])\);
* \(b^{i,q}_{u}\in[n]\), for \(i\in[p]\), \(q\in[2^{|C^{\prime}_{i}|}]\) and \(u\in U^{\prime}\): set to the number of neighbours of \(u\) in \(C^{\prime}_{i}\setminus S^{i}_{q}\);
* \(d^{i,q}_{u}\in[n]\), for \(i\in[p]\), \(q\in[2^{|C^{\prime}_{i}|}]\) and \(u\in C^{\prime}_{i}\setminus S^{i}_{q}\): set to \(d_{G[(U^{\prime}\cup C^{\prime}_{i})\setminus S^{i}_{q}]}(u)\).

Variables:

* \(no_{i,q}\), for \(i\in[p]\) and \(q\in[2^{|C^{\prime}_{i}|}]\): the number of components of sub-type \((i,q)\).

Objective:

\[\min\sum_{i\in[p]}\sum_{q\in[2^{|C^{\prime}_{i}|}]}no_{i,q}|S^{i}_{q}| \tag{3.3}\]

Constraints:

\[no_{i,q}=0\quad\text{iff}\quad(i,q)\notin S_{1} \tag{3.4}\]

\[\sum_{q\in[2^{|C^{\prime}_{i}|}]}no_{i,q}=no_{i}\qquad\forall i\in[p] \tag{3.5}\]

\[\sum_{w\in U^{\prime}}e_{wv}+\sum_{(j,q^{\prime})\in S_{1}}no_{j,q^{\prime}}b^{j,q^{\prime}}_{v}\neq\sum_{w\in U^{\prime}}e_{wu}+\sum_{(j,q^{\prime})\in S_{1}}no_{j,q^{\prime}}b^{j,q^{\prime}}_{u}\qquad\forall u,v\in U^{\prime}\text{ s.t. }e_{uv}=1 \tag{3.6}\]

\[d^{i,q}_{v}\neq\sum_{w\in U^{\prime}}e_{wu}+\sum_{(j,q^{\prime})\in S_{1}}no_{j,q^{\prime}}b^{j,q^{\prime}}_{u}\qquad\forall(i,q)\in S_{1},\ u\in U^{\prime},\ v\in C^{\prime}_{i}\setminus S^{i}_{q}\text{ s.t. }e^{i,q}_{u,v}=1 \tag{3.7}\]

Assume that we have found the values \(no_{i,q}\) for all \(i\in[p]\), \(q\in[2^{|C^{\prime}_{i}|}]\). We construct an optimal vertex-irregulator of \(G[V\setminus S^{\prime}]\) as follows. Start with an empty set \(S^{*}\). For each \(i\in[p]\), take all components \(C_{j}\) of type \(i\). Partition them into \(2^{|C^{\prime}_{i}|}\) sets \(\mathcal{C}^{i}_{q}\) such that the set \(\mathcal{C}^{i}_{q}\), \(q\in[2^{|C^{\prime}_{i}|}]\), contains exactly \(no_{i,q}\) of these components. For any component \(C\in\mathcal{C}^{i}_{q}\), select all vertices represented by the set \(S^{i}_{q}\) (as it was defined before) and add them to \(S^{*}\).
The final \(S^{*}\) is an optimal vertex-irregulator for \(G[V\setminus S^{\prime}]\). Let \(S=S^{\prime}\cup S^{*}\). We show that \(S\) is a vertex-irregulator of \(G\). To do so, it suffices to verify that in the graph \(G[V\setminus S]\) there are no two adjacent vertices with the same degree. Let \(u,v\) be a pair of adjacent vertices in a component represented by \(C^{\prime}_{i}\setminus S\), which is of sub-type \((i,q)\). If \(d_{G[V\setminus S]}(u)=d_{G[V\setminus S]}(v)\), then \((i,q)\in S_{2}\). Therefore, \(no_{i,q}=0\) and we do not have such a component in \(G[V\setminus S]\). Thus, it suffices to focus on adjacent vertices such that at least one of them is in \(U^{\prime}\). Notice that, in \(G[V\setminus S]\), the degree of a vertex \(u\in U^{\prime}\) is equal to \(\sum_{w\in U^{\prime}}e_{wu}+\sum_{(i,q)\in S_{1}}no_{i,q}b^{i,q}_{u}\). In other words, no two adjacent vertices in \(U^{\prime}\) have the same degree, due to constraint 3.6. Lastly, constraint 3.7 guarantees that no vertex in \(U^{\prime}\) is adjacent to a vertex in \(C_{i}\setminus S\) (for some \(i\in[p]\)) such that both of them have the same degree in \(G[V\setminus S]\). Moreover, both \(S^{\prime}\) and \(S^{*}\) are constructed to be minimum such sets. Thus, \(S\) is an optimal vertex-irregulator of \(G\). Finally, since the number of variables in the model is bounded by a function of \(k\), we can solve it and obtain \(S^{*}\) in FPT time, parameterised by \(k\) (by running for example the Lenstra algorithm [26]). The previous algorithm can be used to find an optimal vertex-irregulator of a graph \(G\) in FPT-time when parameterised by the cluster deletion number of \(G\). Note that the cluster deletion number of a graph can be computed in FPT-time parameterised by \(k\) [20]. **Definition 3.5** ([20]).: _Let \(G=(V,E)\) be a graph and \(S\subseteq V\) be a set of minimum order such that all the connected components of \(G[V\setminus S]\) are cliques.
Then \(G\) has cluster deletion number \(k\), where \(k=|S|\)._ **Theorem 3.6**.: _Given a graph \(G=(V,E)\) with cluster deletion number \(k\), there exists an algorithm that computes \(\mathrm{I}_{v}(G)\) in FPT-time parameterised by \(k\)._ Proof.: Let \(S\) be such that \(|S|=k\) and \(G[V\setminus S]\) is a disjoint union of cliques \(C_{1},\ldots,C_{m}\) for \(m\geq 1\). Our goal is to reduce the size of these cliques so that each one of them has order at most \(2^{k}\). We achieve this through the following procedure. Let \(i\in[m]\) be such that the clique \(C_{i}=(V_{C_{i}},E_{C_{i}})\) has \(|V_{C_{i}}|>2^{k}\). Let \(V_{1},\ldots,V_{p}\) be the partition of \(V_{C_{i}}\) defined by the twin equivalence of \(C_{i}\). That is, two vertices \(u,v\in V_{C_{i}}\) belong to a \(V_{j}\), \(j\in[p]\), if and only if \(u\) and \(v\) are twins. Note that \(p\leq 2^{k}\). Observe that, since \(C_{i}\) is a clique, the graphs \(C_{i}[V_{j}]\), \(j\in[p]\), are also cliques. In other words, for each \(j\in[p]\), all the vertices of \(V_{j}\) are adjacent twins. We delete all but one vertex of \(V_{j}\), for each \(j\in[p]\), and repeat this process for every \(i\in[m]\) such that \(|V_{C_{i}}|>2^{k}\). Let \(G^{\prime}=(V^{\prime},E^{\prime})\) be the resulting subgraph of \(G\) and \(d=|D|\), where \(D\) is the set of vertices that were removed throughout this process. It follows from Lemma 2.3 that \(\mathrm{I}_{v}(G)=\mathrm{I}_{v}(G^{\prime})+d\). Observe also that \(S\subseteq V^{\prime}\) and that each connected component of \(G^{\prime}[V^{\prime}\setminus S]\) is a clique of at most \(2^{k}\) vertices. In other words, \(G^{\prime}\) has vertex integrity at most \(2^{k}+k\). To sum up, to compute \(\mathrm{I}_{v}(G)\) it suffices to compute \(\mathrm{I}_{v}(G^{\prime})\), which can be done in FPT-time by running the algorithm presented in Theorem 3.4.
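The reduction in the proof above is straightforward to phrase algorithmically. In the sketch below (our own illustration; `adj` is an adjacency dictionary and `S` a cluster deletion set, both assumed given), two vertices of the same clique component are adjacent twins precisely when they have the same neighbourhood inside \(S\), so we keep one representative per such class and count the deletions \(d\); by Lemma 2.3, \(\mathrm{I}_{v}(G)=\mathrm{I}_{v}(G^{\prime})+d\), and each remaining clique has at most \(2^{|S|}\) vertices:

```python
def clique_components(adj, S):
    # connected components of G[V \ S] (assumed to be cliques)
    seen, comps = set(), []
    for s in adj:
        if s in S or s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.add(v)
            stack.extend(w for w in adj[v] if w not in S and w not in seen)
        comps.append(comp)
    return comps

def shrink_cliques(adj, S):
    """Keep one vertex per twin class inside each clique component.
    Returns (reduced graph G', d) with I_v(G) = I_v(G') + d."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    d = 0
    for comp in clique_components(adj, S):
        seen_keys = set()
        for v in sorted(comp):
            key = frozenset(adj[v] & S)  # neighbourhood inside S
            if key in seen_keys:         # adjacent twin of a kept vertex
                for w in adj[v]:
                    adj[w].discard(v)
                del adj[v]
                d += 1
            else:
                seen_keys.add(key)
    return adj, d
```

For example, in a triangle \(\{1,2,3\}\) with \(S=\{0\}\) adjacent to vertices 1 and 2, vertices 1 and 2 share the key \(\{0\}\), so one of them is deleted (\(d=1\)).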
## 4 Edge-irregulators

In this section we begin the study of finding an optimal edge-irregulator of a given graph \(G\). It turns out that the decision version of this problem is \(\mathcal{NP}\)-complete, even for quite restrictive classes of graphs. Furthermore, it is also W[1]-hard parameterised by the size of the solution. **Theorem 4.1**.: _Let \(G\) be a graph and \(k\in\mathbb{N}\). Deciding if \(\mathrm{I}_{e}(G)\leq k\) is \(\mathcal{NP}\)-complete, even if \(G\) is a planar bipartite graph of maximum degree \(4\)._ Proof.: The problem is clearly in \(\mathcal{NP}\). We focus on showing it is also \(\mathcal{NP}\)-hard. This is achieved through a reduction from the Planar 3-SAT problem, which is known to be \(\mathcal{NP}\)-complete [18]. In that problem, a 3CNF formula \(\phi\) is given as an input. We say that a bipartite graph \(G^{\prime}=(V,C,E)\) _corresponds_ to \(\phi\) if it is constructed from \(\phi\) in the following way: for each literal \(x_{i}\) (resp. \(\neg x_{i}\)) that appears in \(\phi\), add the _literal vertex_ \(v_{i}\) (resp. \(v^{\prime}_{i}\)) in \(V\) (for \(1\leq i\leq n\)) and for each clause \(C_{j}\) of \(\phi\) add a _clause vertex_ \(c_{j}\) in \(C\) (for \(1\leq j\leq m\)). Then the edge \(v_{i}c_{j}\) (resp. \(v^{\prime}_{i}c_{j}\)) is added if the literal \(x_{i}\) (resp. \(\neg x_{i}\)) appears in the clause \(C_{j}\). Finally, we add the edge \(v_{i}v^{\prime}_{i}\) for every \(i\). A 3CNF formula \(\phi\) is valid as input to the Planar 3-SAT problem if the graph \(G^{\prime}\) that corresponds to \(\phi\) is planar. Furthermore, we may assume that each variable appears in \(\phi\) twice as a positive and once as a negative literal. The question is whether there exists a truth assignment to the variables of \(\phi\) satisfying \(\phi\). Starting from a 3CNF formula \(\phi\), we construct a graph \(G\) such that \(\mathrm{I}_{e}(G)\leq 3n\) if and only if \(\phi\) is satisfiable.
The construction of \(G\) is as follows: we start with the graph \(G^{\prime}\) that corresponds to \(\phi\). Then, for each \(1\leq i\leq n\), we remove the edge \(v_{i}v^{\prime}_{i}\), and attach the gadget illustrated in Figure 2 to \(v_{i}\) and \(v^{\prime}_{i}\). Let \(E_{i}\) denote the edges of the gadget attached to \(v_{i}\) and \(v^{\prime}_{i}\) plus the edges \(e^{1}_{i},e^{2}_{i},e^{3}_{i}\) and \(e^{4}_{i}\). Finally, for each \(1\leq j\leq m\), we add the star on \(5\) vertices, and identify one of its leaves with the vertex \(c_{j}\). Observe that the resulting graph \(G\) is planar, bipartite and \(\Delta(G)=4\). Before we provide the reduction, let us show two claims that are going to be useful. **Claim 4.2**.: _Let \(S\) be an edge-irregulator of \(G\) such that \(|S|\leq 3n\). For every \(1\leq i\leq n\), we have that \(|S\cap E_{i}|\geq 3\)._ Proof of the claim.: Observe that \(d_{G}(u_{5})=d_{G}(v_{i})=d_{G}(u_{6})=d_{G}(u_{7})\). It follows that \(S\) contains at least one edge \(a_{1}\) incident to \(u_{6}\) or \(u_{7}\) and one edge \(a_{2}\) incident to \(v_{i}\) or \(u_{5}\). We distinguish cases: 1. \(a_{1}=a_{2}=v^{\prime}_{i}u_{6}\). Then \(S\) also contains an edge \(a_{3}\) incident to \(v_{i}\) or \(u_{5}\). If \(a_{3}=u_{5}v^{\prime}_{i}\), then \(S\) contains an additional edge incident to \(u_{2}\) or \(u_{1}\). If \(a_{3}=u_{5}v_{i}\), then \(S\) contains an additional edge incident to \(u_{2}\) or \(u_{5}\). In any one of the above cases, we have that \(|S_{i}|\geq 3\). Thus, we may assume that \(a_{3}\) is incident to \(v_{i}\) but not to \(u_{5}\). If \(a_{3}=e^{1}_{i}\) or \(a_{3}=e^{2}_{i}\), then \(S\) contains an additional edge incident to \(v_{i}\) or \(u_{6}\). Finally, if \(a_{3}=v_{i}u_{6}\), then \(S\) contains an additional edge incident to \(u_{6}\) or \(u_{8}\).
Thus, if \(a_{1}=a_{2}=v^{\prime}_{i}u_{6}\), then \(|S_{i}|\geq 3\). 2. \(a_{1}\neq a_{2}\). We distinguish some additional cases: 1. \(a_{1}=v_{i}u_{6}\). If \(a_{2}\in\{e^{3}_{i},v^{\prime}_{i}u_{9},v^{\prime}_{i}u_{5}\}\), then \(S\) contains an additional edge incident to \(u_{7}\). If \(a_{2}\in\{v_{i}u_{5},u_{5}u_{4}\}\), then \(S\) contains an additional edge incident to \(u_{2}\). Finally, if \(a_{2}=u_{5}u_{4}\), then \(S\) contains an additional edge incident to \(u_{3}\). 2. \(a_{1}=u_{6}u_{7}\). Then \(S\) contains an additional edge incident to \(u_{9}\). 3. \(a_{1}\in\{u_{7}u_{9},u_{7}u_{10},u_{7}u_{11}\}\). Then \(S\) contains an additional edge incident to \(u_{7}\). 4. \(a_{1}=u_{6}u_{8}\). Then \(S\) contains an additional edge incident to \(u_{16}\). Thus, if \(a_{1}\neq a_{2}\), then \(|S_{i}|\geq 3\), which finishes the proof of the claim. **Claim 4.3**.: _Let \(S\) be an edge-irregulator of \(G\) such that \(|S|\leq 3n\). Then, for every \(1\leq i\leq n\), we have that_ * _if_ \(|S\cap\{e_{i}^{1},e_{i}^{2}\}|\geq 1\) _then_ \(|S\cap\{e_{i}^{3},e_{i}^{4}\}|=0\) _and_ * _if_ \(|S\cap\{e_{i}^{3},e_{i}^{4}\}|\geq 1\) _then_ \(|S\cap\{e_{i}^{1},e_{i}^{2}\}|=0\)_._ Proof of the claim.: Since the proofs of the two items are highly symmetrical, we will only prove the first item. To do that, it suffices to show that if \(S\) does not respect the statement for some \(1\leq i\leq n\), then \(|S\cap E_{i}|\geq 4\). Then, since \(|S|\leq 3n\), and \(1\leq i\leq n\), there exists a \(1\leq j\leq n\) such that \(i\neq j\) and \(|S\cap E_{j}|\leq 2\). This contradicts Claim 4.2. Let \(H=G-S\). Assume first that there exists an \(i\) such that, say, \(e_{i}^{1}\in S\) and \(e_{i}^{3}\in S\). Observe that \(S\) contains at least one edge \(e\) incident to \(u_{6}\) or \(u_{7}\), as otherwise we would have that \(d_{H}(u_{6})=d_{H}(u_{7})\), contradicting the fact that \(S\) is an edge-irregulator of \(G\). 
Thus, if we also have that \(e_{i}^{2}\in S\) or that \(e_{i}^{4}\in S\), it follows that \(|S\cap E_{i}|\geq 4\), a contradiction. Thus, we may assume that \(S\cap E_{i}=\{e_{i}^{1},e_{i}^{3},e\}\). If \(e\in\{u_{7}u_{9},u_{7}u_{10},u_{7}u_{11}\}\), say \(e=u_{7}u_{9}\), then \(d_{H}(u_{7})=d_{H}(u_{10})\). Also, if \(e=u_{6}u_{8}\), then \(S\) also contains \(u_{8}u_{16}\). Finally, if \(e=v_{i}u_{6}\) (resp. \(e=u_{6}v_{i}^{\prime}\)) then \(d_{H}(u_{6})=d_{H}(v_{i}^{\prime})\) (resp. \(d_{H}(u_{6})=d_{H}(v_{i})\)). It follows from Observation 2.1 that in all cases, we have that \(|S\cap E_{i}|\geq 4\), a contradiction. We are now ready to give the reduction. Let \(G\) be the graph constructed from the formula \(\phi\) as explained above. We show that there exists a satisfying truth assignment of \(\phi\) if and only if \(\mathrm{I}_{e}(G)\leq 3n\). For the first direction, let \(T\) be a satisfying truth assignment of \(\phi\). Let \(S\) be the set containing the edges \(e_{i}^{1},e_{i}^{2},u_{6}u_{7}\) for every \(1\leq i\leq n\) such that \(T(x_{i})=true\) and the edges \(e_{i}^{3},e_{i}^{4},v_{i}u_{6}\) for each \(i\) such that \(T(\neg x_{i})=true\). Let \(H=G-S\). Clearly, \(|S|=3n\). Also \(S\) is an edge-irregulator of \(G\). Indeed, the part of the graph \(H\) that corresponds to the gadget attached to \(v_{i}\) and \(v_{i}^{\prime}\) is clearly locally irregular for every \(i\). Also, for each \(j\), we have that \(d_{H}(c_{j})\leq 3\) (since \(C_{j}\) is satisfied by at least one literal) and any vertex in \(N_{H}(c_{j})\) has degree equal to \(4\). For the reverse direction, assume that \(\mathrm{I}_{e}(G)\leq 3n\) and let \(S\) be an edge-irregulator of \(G\) such that \(|S|=3n\). Recall that due to Claim 4.3, for each \(i\in[n]\), if \(S\) contains one edge in \(\{e_{i}^{1},e_{i}^{2}\}\) then it contains no edge in \(\{e_{i}^{3},e_{i}^{4}\}\) and _vice versa_. 
For each \(i\in[n]\), we set \(T(x_{i})=true\) if \(S\) contains one edge in \(\{e_{i}^{1},e_{i}^{2}\}\) and \(T(\neg x_{i})=true\) in any other case. We claim that \(T\) is indeed a truth assignment that satisfies \(\phi\). Indeed, due to Claim 4.3, we know that each variable will receive exactly one truth value. Also, since \(S\) is an edge-irregulator, and due to Claim 4.2, we know that for each \(j\in[m]\), there exists an \(i\in[n]\) such that either \(v_{i}c_{j}\in S\) or \(v_{i}^{\prime}c_{j}\in S\); that is, for each clause \(C_{j}\), there exists either a literal \(x_{i}\) or a literal \(\neg x_{i}\) that has been set to true. In other words, each clause of \(\phi\) is satisfied by \(T\). This ends the reduction. Figure 2: The construction in the proof of Theorem 4.1. The dashed lines are used to represent the edges between the literal and the clause vertices. **Theorem 4.4**.: _Let \(G\) be a graph and \(k\in\mathbb{N}\). Deciding if \(\mathrm{I}_{e}(G)\leq k\) is \(\mathrm{W}[1]\)-hard parameterised by \(k\)._ Proof.: The reduction is from \(k\)-Multicoloured Clique. _- \(k\)-Multicoloured Clique_ -- _Input:_ A graph \(G^{\prime}=(V,E)\) and a partition \((V_{1},\ldots,V_{k})\) of \(V\) into \(k\) independent sets. _Question:_ Does there exist a set \(S\subseteq V\) such that \(G^{\prime}[S]\) is a clique? It is known that \(k\)-Multicoloured Clique is \(\mathrm{W}[1]\)-hard parameterised by \(k\)[14]. On a high level, our reduction will proceed as follows. Starting with the graph \(G^{\prime}\) that is given in the input of \(k\)-Multicoloured Clique, we will first subdivide every edge of the graph \(G^{\prime}\). Then, for each \(i\in[k]\), we will attach one copy of a particular gadget to the vertices of \(V_{i}\). Also, for each \(1\leq i<j\leq k\), we will attach a copy of our gadget to the vertices that correspond to the edges \(v_{i}v_{j}\) of \(G^{\prime}\), with \(v_{i}\in V_{i}\) and \(v_{j}\in V_{j}\). 
In total, we will add \((k^{2}+k)/2\) gadgets. The gadgets are structured so that any edge-irregulator of the graph contains at least one edge for each gadget (so any solution has a size of at least \((k^{2}+k)/2\)). Furthermore, we prove that, if we have selected only one edge from a gadget, then that edge must be incident to either a vertex of the original graph or a vertex that represents an edge of the original graph. Finally, we show that: * an edge-irregulator \(S\) that contains exactly one edge from each gadget (i.e. an edge-irregulator of size \((k^{2}+k)/2\)) can give us a clique of size \(k\) in the original graph by selecting the vertices and edges (represented by vertices) of the original graph that are incident to the edges of \(S\) and * if we have a clique of size \(k\) in the original graph we can construct an optimal edge-irregulator \(S\) by selecting the edges of the gadgets that are incident to the \(k\) vertices of the clique and the \((k^{2}-k)/2\) vertices that represent the edges of the clique. We proceed with the formal proof. Assume that we are given an instance \(G^{\prime}=(V,E)\) with vertex partition \((V_{1},\ldots,V_{k})\) where \(|V_{i}|=n\) for all \(i\in[k]\). For each \(i\in[k]\), we denote by \(v_{i}^{p}\), for \(p\in[n]\), the vertices of \(V_{i}\). We construct a graph \(G\) as follows: * Start with a copy of \(G^{\prime}\). * Subdivide each edge \(e\in E\). Let \(u_{i,j}^{p,q}\) be the vertex that corresponds to the edge \(v_{i}^{p}v_{j}^{q}\in E\). Also, let \(U_{i,j}\) be the set of vertices that corresponds to the edges between the sets \(V_{i}\) and \(V_{j}\), _i.e._, the set \(\{u_{i,j}^{p,q}\mid v_{i}^{p}v_{j}^{q}\in E\}\). * For each pair \((i,j)\) where \(1\leq i<j\leq k\), create a copy of the gadget \(H_{|U_{i,j}|}\), illustrated in Figure 3, and add all the edges between the copy of \(w\) and the vertices of \(U_{i,j}\). 
We denote this copy of \(w\) by \(w_{i,j}\), the copy of \(H_{|U_{i,j}|}\) by \(H^{w_{i,j}}\) and the copy of \(y\) in \(H^{w_{i,j}}\) by \(y_{i,j}\). * For each \(i\in[k]\), create a copy of the gadget \(H_{|V_{i}|}\) and add all the edges between the copy of \(w\) and the vertices of \(V_{i}\). We denote this copy of \(w\) by \(w_{i}\), the copy of \(H_{|V_{i}|}\) by \(H^{w_{i}}\) and the copy of \(y\) in \(H^{w_{i}}\) by \(y_{i}\). * Finally, add leaves attached to the vertices of \(V_{i}\), \(i\in[k]\), so that each vertex of \(V_{i}\) has degree \(kn\) and attached to the vertices of \(U_{i,j}\), \(1\leq i<j\leq k\), so that each vertex of \(U_{i,j}\) has degree \(kn+1\). Let \(G\) be the resulting graph. We prove that \(G\) has an edge-irregulator of order \((k^{2}+k)/2\) if and only if \(G^{\prime}\) is a yes instance of \(k\)-Multicoloured Clique. Assume that \(G^{\prime}\) is a yes instance of \(k\)-Multicoloured Clique and \(C=\{c_{1},\ldots,c_{k}\}\) is a clique in \(G^{\prime}\) with \(c_{i}\in V_{i}\) for every \(i\in[k]\). We will construct an edge-irregulator of \(G\) as follows. Start with an empty set \(S\). Notice that, for each \(i\in[k]\), \(|V_{i}\cap C|=1\) and let \(p\in[n]\) be such that \(v_{i}^{p}=c_{i}\); we add to \(S\) the edge \(v_{i}^{p}w_{i}\). For each pair \((i,j)\), \(1\leq i<j\leq k\), let \(p,q\in[n]\) be such that \(v_{i}^{p}=c_{i}\) and \(v_{j}^{q}=c_{j}\); we add to \(S\) the edge \(u_{i,j}^{p,q}w_{i,j}\). Notice that the edge \(v_{i}^{p}v_{j}^{q}\) must exist in \(E\) since \(C\) is a clique. It follows that the vertex \(u_{i,j}^{p,q}\), and therefore the edge \(u_{i,j}^{p,q}w_{i,j}\), also exists in \(G\). By construction, \(|S|=(k^{2}+k)/2\). It only remains to prove that \(S\) is an edge-irregulator of \(G\). Consider the graph \(G-S\). Observe that, for every \(H^{w_{i}}\), \(i\in[k]\), we have reduced the degree of \(w_{i}\) by exactly one. 
Therefore, any two adjacent vertices of \(H^{w_{i}}\) have different degree (see Figure 3). The same holds true for every \(H^{w_{i,j}}\), \(1\leq i<j\leq k\). Consider now the edges \(xz\in E(G)\) such that \(x\in\{w_{i},w_{j},w_{i,j}\}\), and \(z\in V_{i}\cup U_{i,j}\cup V_{j}\), \(1\leq i<j\leq k\). Notice that \(d_{G-S}(x)=n^{2}-1\) and \(kn-1\leq d_{G-S}(z)\leq kn+1\). For sufficiently large \(n\), we have that \(n^{2}-1>kn+1\). It remains to consider the edges between vertices in \(V_{i}\cup V_{j}\) and in \(U_{i,j}\) for any \(1\leq i<j\leq k\). Notice that, for every \(1\leq i<j\leq k\), all vertices of \(V_{i}\cup V_{j}\), except one vertex \(v_{i}^{p}\in V_{i}\) and one vertex \(v_{j}^{q}\in V_{j}\), have degree \(kn\), and \(d_{G-S}(v_{i}^{p})=d_{G-S}(v_{j}^{q})=kn-1\). Also, all vertices of \(U_{i,j}\), except one vertex \(u^{\prime}\), have degree \(kn+1\), and \(d_{G-S}(u^{\prime})=kn\). So, \(u^{\prime}\) is the only vertex of \(U_{i,j}\) that could possibly have the same degree as a vertex in \(V_{i}\setminus\{v_{i}^{p}\}\) or \(V_{j}\setminus\{v_{j}^{q}\}\). It follows by the construction of \(S\) that \(u^{\prime}\) is actually \(u_{i,j}^{p,q}\). Also, by the construction of \(G\), \(u_{i,j}^{p,q}\) is adjacent only to \(v_{i}^{p}\) and \(v_{j}^{q}\), as it represents the edge between their corresponding vertices in \(G^{\prime}\). Thus, for every \(1\leq i<j\leq k\), no vertex in \(U_{i,j}\) has the same degree as any of its neighbours in \(V_{i}\) or \(V_{j}\). It follows from all the arguments above that \(S\) is indeed an edge-irregulator of \(G\). Now we show that if \(\mathrm{I}_{e}(G)=(k^{2}+k)/2\) then \(G^{\prime}\) has a clique of size \(k\). Let \(S\) be an edge-irregulator of \(G\) of order \((k^{2}+k)/2\). First, we notice that for each \(i\in[k]\), \(d_{G}(w_{i})=d_{G}(y_{i})\) and that for each \(1\leq i<j\leq k\), \(d_{G}(w_{i,j})=d_{G}(y_{i,j})\). 
Let \(E_{w_{i}}\) be the set of edges \(w_{i}v\) for \(v\in V_{i}\) and \(E_{w_{i,j}}\) be the set of edges \(w_{i,j}u\) for \(u\in U_{i,j}\). Also, let \(w\in\{w_{i}\mid i\in[k]\}\cup\{w_{i,j}\mid 1\leq i<j\leq k\}\). Since \(S\) is an edge-irregulator of \(G\), it follows that \(|S\cap(E(H^{w})\cup E_{w})|\geq 1\). Also, observe that for any pair of distinct vertices \(w,w^{\prime}\in\{w_{i}\mid i\in[k]\}\cup\{w_{i,j}\mid 1\leq i<j\leq k\}\), we have that \((E(H^{w})\cup E_{w})\cap(E(H^{w^{\prime}})\cup E_{w^{\prime}})=\emptyset\). Thus, and since \(|S|=(k^{2}+k)/2\), we obtain that, actually, \(|S\cap(E(H^{w})\cup E_{w})|=1\). Next, we show that \(S\) includes only edges from the set \(E_{w}\), for each \(w\in\{w_{i}\mid i\in[k]\}\cup\{w_{i,j}\mid 1\leq i<j\leq k\}\). In particular we claim the following: Figure 3: The gadget \(H_{b}\) used in the proof of Theorem 4.4. The black vertices represent the vertices of the gadget. The white vertices represent either a set of the original vertices \(V_{i}\), \(i\in[k]\), or a set of edge vertices \(U_{i,j}\), \(1\leq i<j\leq k\). In the construction, if \(w\) is adjacent to vertices of a \(V_{i}\), \(i\in[k]\), then \(b=|V_{i}|\) while if \(w\) is adjacent to vertices of a \(U_{i,j}\), \(1\leq i<j\leq k\), then \(b=|U_{i,j}|\). In each copy of the gadget the degrees of \(w\) and \(y\) are equal. **Claim 4.5**.: _Let \(w\in\{w_{i}\mid i\in[k]\}\cup\{w_{i,j}\mid 1\leq i<j\leq k\}\). It holds that \(S\cap E(H^{w})=\emptyset\) and that \(|S\cap E_{w}|=1\)._ Proof of the claim.: Assume that \(S\cap E(H^{w})\neq\emptyset\) and let \(e\in S\cap E(H^{w})\). We distinguish cases according to which edge of \(H^{w}\) is \(e\). In each case, we show that \(S\) must include an additional edge of \(E(H^{w})\), which is a contradiction to the fact that \(|S\cap(E(H^{w})\cup E_{w})|=1\). 
\(\boldsymbol{e}\) **is incident to neither \(\boldsymbol{w}\) nor \(\boldsymbol{y}\):** Then \(S\) must also include an additional edge incident to \(w\) or \(y\) (from previous discussion). \(\boldsymbol{e}\) **is incident to \(\boldsymbol{y}\):** Then, \(S\) must include an additional edge of \(E(H^{w})\), as otherwise \(d_{G-S}(y)=d-1\) and \(y\) would have at least one neighbour of degree \(d-1\). \(\boldsymbol{e}\) **is incident to \(\boldsymbol{w}\) and \(\boldsymbol{e\neq wy}\):** Then, \(S\) must include an additional edge of \(E(H^{w})\), as otherwise \(G-S\) would include a connected component isomorphic to \(K_{2}\). \(\diamond\) The previous claim also shows that \(S\subseteq\bigcup_{i\in[k]}E_{w_{i}}\cup\bigcup_{1\leq i<j\leq k}E_{w_{i,j}}\). We now explain how to construct a clique of \(G^{\prime}\) of order \(k\). Let \(\ell(i)=m(i)\) be the index that specifies which edge incident to \(w_{i}\) is included in \(S\). That is, \(\ell(i)\) is such that \(w_{i}v_{i}^{\ell(i)}\in S\). Similarly, for each \(1\leq i<j\leq k\), let \(\ell(i,j)\) and \(m(i,j)\) be the indices such that \(w_{i,j}u_{i,j}^{\ell(i,j),m(i,j)}\in S\). Notice that both \(\ell(i)\) and \(\ell(i,j)\) are unique as \(S\) contains exactly one edge incident to each of \(w_{i}\) and \(w_{i,j}\) (by Claim 4.5). **Claim 4.6**.: _The set \(C=\{v_{i}^{\ell(i)}\mid i\in[k]\}\) induces a clique of order \(k\) in \(G^{\prime}\)._ Proof of the claim.: First, for any \(1\leq i<j\leq k\), we show that \(\ell(i)=\ell(i,j)\) and \(m(j)=m(i,j)\). To simplify the notation let \(\ell=\ell(i,j)\) and \(m=m(i,j)\). By the definition of \(\ell\) and \(m\) we have that \(w_{i,j}u_{i,j}^{\ell,m}\in S\). Now, we consider the degrees of the vertices \(v_{i}^{\ell}\) and \(u_{i,j}^{\ell,m}\). Since \(w_{i,j}u_{i,j}^{\ell,m}\in S\), we have that \(d_{G-S}(u_{i,j}^{\ell,m})=kn\). If \(\ell(i)\neq\ell\), then \(d_{G-S}(v_{i}^{\ell})=kn\), as \(S\) would not include any edges incident to \(v_{i}^{\ell}\) in that case. 
This is a contradiction since \(v_{i}^{\ell}\) and \(u_{i,j}^{\ell,m}\) are adjacent in \(G\) (by construction) and remain so in \(G-S\) (as \(S\subseteq\bigcup_{i\in[k]}E_{w_{i}}\cup\bigcup_{1\leq i<j\leq k}E_{w_{i,j}}\)). Therefore, for any \(1\leq i<j\leq k\), \(\ell(i)=\ell=\ell(i,j)\). Similarly, we can show that for any \(1\leq i<j\leq k\), \(m(j)=m=m(i,j)\). Now we show that for every pair of distinct vertices \(u,v\in\{v_{i}^{\ell(i)}\mid i\in[k]\}\), we have that \(u\) and \(v\) are adjacent in \(G^{\prime}\). W.l.o.g. let \(u=v_{i}^{\ell(i)}\) and \(v=v_{j}^{\ell(j)}\) for some \(1\leq i<j\leq k\). We know that \(\ell(i)=\ell\) and \(\ell(j)=m(j)=m\). Therefore, the vertex \(u_{i,j}^{\ell(i,j),m(i,j)}=u_{i,j}^{\ell,m}\) of \(G\) is adjacent to \(v_{i}^{\ell(i)}\) and \(v_{j}^{\ell(j)}\). This means that \(v_{i}^{\ell(i)}\) and \(v_{j}^{\ell(j)}\) are adjacent in \(G^{\prime}\) as the vertex \(u_{i,j}^{\ell(i),m(j)}\) corresponds to the edge between these two vertices in \(G^{\prime}\) (recall the construction of \(G\)). Thus, any pair of vertices in \(C\) is a pair of adjacent vertices in \(G^{\prime}\). It follows that \(C\) is a clique. \(\diamond\) This completes the proof. Unfortunately, this problem exhibits a similar behaviour to finding optimal vertex-irregulators, as it also remains intractable even for "relatively large" structural parameters. **Theorem 4.7**.: _Let \(G\) be a graph and \(k\in\mathbb{N}\). Deciding if \(\mathrm{I}_{e}(G)\leq k\) is \(\mathrm{W}[1]\)-hard parameterised by either the feedback vertex set number or the treedepth of \(G\)._ Proof.: The reduction is from the General Factor problem: - General Factor -- _Input:_ A graph \(H=(V,E)\) and a list function \(L:V\to\mathcal{P}(\{0,\ldots,\Delta(H)\})\) that specifies the available degrees for each vertex \(u\in V\). _Question:_ Does there exist a set \(S\subseteq E\) such that \(d_{H-S}(u)\in L(u)\) for all \(u\in V\)? 
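The condition tested by General Factor is simple to state programmatically, which may help in following the reduction below. Here is a small brute-force sketch, purely illustrative (the problem is W[1]-hard parameterised by vertex cover, so no efficient general algorithm is expected); the encoding of \(H\) as a dict mapping each vertex to its neighbour set, and all function names, are our own choices, not part of the paper:

```python
from itertools import combinations

def satisfies_lists(adj, L, S):
    """General Factor condition: every vertex u must have deg_{H-S}(u) in L[u].
    Edges are frozensets of endpoints; adj maps each vertex to its neighbours."""
    return all(
        sum(1 for v in adj[u] if frozenset((u, v)) not in S) in L[u]
        for u in adj
    )

def general_factor(adj, L):
    """Brute-force search over all edge subsets S, smallest first.
    Exponential time; for tiny illustrative instances only."""
    edges = sorted({frozenset((u, v)) for u in adj for v in adj[u]}, key=sorted)
    for r in range(len(edges) + 1):
        for S in combinations(edges, r):
            if satisfies_lists(adj, L, set(S)):
                return set(S)
    return None  # no edge subset realises the degree lists
```

For instance, on a triangle with every list equal to \(\{2\}\) the empty set already works, while forcing degrees \(1,1,2\) forces the removal of one specific edge.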
This problem is known to be W[1]-hard when parameterised by the vertex cover number of \(H\)[19]. Starting from an instance \((H,L)\) of General Factor, we construct a graph \(G\) such that \(\mathrm{I}_{e}(G)\leq n^{2}\), where \(n=|V(H)|\), if and only if \((H,L)\) is a yes-instance. For every vertex \(u\in V(H)\), let us denote by \(\overline{L}(u)\) the set \(\{0,1,\ldots,d_{H}(u)\}\setminus L(u)\). In the case where \(\{0,1,\ldots,d_{H}(u)\}\setminus L(u)=\emptyset\), we set \(\overline{L}(u)=\{-1\}\). On a high level, the graph \(G\) is constructed by adding some trees on the vertices of \(H\). In particular, for each vertex \(u\in V(H)\) and for each element \(a\) in \(\overline{L}(u)\), we will attach a tree to \(u\) whose purpose is to prevent \(u\) from having degree \(a\) in \(G-S\), for any optimal edge-irregulator \(S\) of \(G\). We begin by defining an arbitrary order on the vertices of \(H\). That is, \(V(H)=\{u_{1},u_{2},\ldots,u_{n}\}\). Next, we describe the trees we will use in the construction of \(G\). In particular, we will describe the trees that we attach to the vertex \(u_{i}\), for every \(1\leq i\leq n\). First, for each \(a_{j}\in\overline{L}(u_{i})\), define the value \(a_{j}^{\prime}=d_{H}(u_{i})-a_{j}\). Also, for each \(j\), let \(d_{i,j}=2in^{4}-a_{j}^{\prime}\). For each "forbidden degree" \(a_{j}\) in the list \(\overline{L}(u_{i})\), we will attach a tree \(T_{i,j}\) to \(u_{i}\). We define the tree \(T_{i,j}\) as follows. First, for every \(0\leq k\leq n^{2}-1\), create \(n^{2}\) copies of \(S_{d_{i,j}-k}\) (the star on \(d_{i,j}-k\) vertices) and \(q\) additional copies of \(S_{d_{i,j}-n^{2}+1}\) (the exact value of \(q\) will be defined in what follows). Then, choose one leaf from each one of the above stars, and identify them into a single vertex denoted as \(u_{i,j}\); the value of \(q\) is such that \(d(u_{ij})=d_{i,j}-1=2in^{4}-a_{j}^{\prime}-1\). 
Let \(T_{i,j}\) be the resulting tree and let us say that \(u_{i,j}\) is the root of \(T_{i,j}\). Let us now describe the construction of \(G\). For each vertex \(u_{i}\in V(H)\) and for each \(a_{j}\in\overline{L}(u_{i})\), add the tree \(T_{i,j}\) to \(H\) and the edge \(u_{i,j}u_{i}\). Then, for each vertex \(u_{i}\in V(H)\), for any \(j\) such that \(u_{i,j}\) is a neighbour of \(u_{i}\), add \(p_{i}\) additional copies of the tree \(T_{i,j}\), as well as the edges between \(u_{i}\) and the roots of the additional trees, so that \(d_{G}(u_{i})=2in^{4}\). The resulting graph is \(G\). Note that, for each vertex of \(V(H)\), we are adding at most \(\mathcal{O}(n^{4})\) trees, each tree containing \(\mathcal{O}(n^{4})\) vertices. Thus, the construction of \(G\) is achieved in polynomial time. Figure 4: The tree \(T_{i,j}\) that is attached to the vertex \(u_{i}\), where \(j\) is such that \(a_{j}\in\overline{L}(u_{i})\), in the proof of Theorem 4.7. The value of \(q\) is such that, in total, \(d(u_{i,j})=2in^{4}-a_{j}^{\prime}-1\). We are now ready to present our reduction. Assume first that \((H,L)\) is a yes-instance of General Factor, and let \(S\subseteq E\) be such that \(d_{H-S}(u)\in L(u)\) for all \(u\in V(H)\). We claim that \(S\) is also an edge-irregulator of \(G\). By the construction of \(G\), and since \(S\) only contains edges from \(H\), there are no two adjacent vertices in \(G-H\) that have the same degree in \(G-S\). Thus, it remains to check the pairs of adjacent vertices \(x,y\) such that, either both \(x\) and \(y\) belong to \(V(H)\), or, w.l.o.g., \(x\in V(H)\) and \(y\in V(G-H)\). For the first case, let \(x=u_{i}\) and \(y=u_{i^{\prime}}\), for \(1\leq i<i^{\prime}\leq n\). Then, assuming that \(d_{G-S}(u_{i})=d_{G-S}(u_{i^{\prime}})\), we get that \(2in^{4}-p=2i^{\prime}n^{4}-p^{\prime}\), where \(S\) contains \(0\leq p\leq n^{2}\) and \(0\leq p^{\prime}\leq n^{2}\) edges incident to \(u_{i}\) and \(u_{i^{\prime}}\) respectively. Thus, \(2n^{4}(i-i^{\prime})=p-p^{\prime}\), a contradiction since \(-n^{2}\leq p-p^{\prime}\leq n^{2}\) and \(-n\leq i-i^{\prime}\leq n\). For the second case, for every \(i\), let \(d_{G-S}(u_{i})=2in^{4}-p\), where the set \(S\) contains \(0\leq p\leq n^{2}\) edges of \(H\) incident to \(u_{i}\). Also, by the construction of \(G\) and since \(S\) only contains edges from \(H\), we have that for every \(j\), \(d_{G-S}(u_{i,j})=d_{G}(u_{i,j})=2in^{4}-a_{j}^{\prime}\), where, recall, \(a_{j}^{\prime}=d_{H}(u_{i})-a_{j}\) for \(a_{j}\in\overline{L}(u_{i})\). Assume now that there exist \(i,j\) such that \(d_{G-S}(u_{i})=d_{G-S}(u_{i,j})\). Then, \(2in^{4}-p=2in^{4}-d_{H}(u_{i})+a_{j}\) and thus \(d_{H}(u_{i})-p=a_{j}\). But then \(d_{H-S}(u_{i})=a_{j}\), which is a contradiction since \(a_{j}\in\overline{L}(u_{i})\). Thus, \(S\) is an edge-irregulator of \(G\) and \(|S|\leq n^{2}\) since \(S\) only contains edges of \(E(H)\). For the reverse direction, assume that \(\operatorname{I_{e}}(G)\leq n^{2}\) and let \(S\) be an optimal edge-irregulator of \(G\). We will show that \(S\) is also such that \(d_{H-S}(u_{i})\in L(u_{i})\), for every \(i\). Let us first prove the following claim. **Claim 4.8**.: _Let \(S\) be an optimal edge-irregulator of \(G\). For every \(i,j\), let \(T\) be any copy of the \(T_{i,j}\) tree that is attached to \(u_{i}\), and let \(u\) be the root of this \(T_{i,j}\). If \(S\) contains \(x\geq 1\) edges of \(E_{i,j}=E(T)\cup\{uu_{i}\}\), then \(x\geq n^{2}\)._ Proof of the claim.: Assume there exist \(i,j\) such that \(|S\cap E_{i,j}|=x\geq 1\) and \(x<n^{2}\). Among those edges, there are \(x_{1}\geq 0\) edges incident to \(u\) and \(x_{2}\geq 0\) edges incident to children of \(u\) (but not to \(u\)), with \(x_{1}+x_{2}=x<n^{2}\). Assume first that \(x_{1}=0\). Then \(x=x_{2}\) and there is no edge of \(S\cap E_{i,j}\) that is incident to \(u\). 
Then \(d_{G-S}(u)=d_{G}(u)\) and observe that \(d_{G}(u)\) is strictly larger than that of any of its children (by the construction of \(G\)). It follows that \(S\setminus(S\cap E_{i,j})\) is also an edge-irregulator of \(G\), contradicting the optimality of \(S\). Thus \(x_{1}\geq 1\). It then follows from the construction of \(G\) that there exist at least \(n^{2}\) children of \(u\), denoted by \(z_{1},\ldots,z_{n^{2}}\), such that \(d_{G-S}(u)=d_{G}(z_{k})\), for every \(1\leq k\leq n^{2}\). Since \(x<n^{2}\), there exists at least one \(1\leq k\leq n^{2}\) such that \(d_{G-S}(u)=d_{G-S}(z_{k})\), contradicting the fact that \(S\) is an edge-irregulator. Thus \(x\geq n^{2}\). \(\diamond\) It follows directly from Claim 4.8 that \(S\) contains only edges of \(E(H)\). Assume that there exist \(i,j\) such that \(d_{H-S}(u_{i})=a_{j}\) and \(a_{j}\in\overline{L}(u_{i})\). Then \(d_{G-S}(u_{i})=2in^{4}-a_{j}^{\prime}\). Also, by the construction of \(G\), \(u_{i}\) is adjacent to a vertex \(u_{i,j}\) for which (since \(S\) contains only edges of \(E(H)\)) we have that \(d_{G-S}(u_{i,j})=d_{G}(u_{i,j})=2in^{4}-a_{j}^{\prime}\). This is contradicting the fact that \(S\) is an edge-irregulator of \(G\). Thus, for every \(i,j\), we have that if \(d_{H-S}(u_{i})=a_{j}\), then \(a_{j}\in L(u_{i})\), which finishes our reduction. Finally, if \(H\) has vertex cover number \(vc\), then, by Observation 2.4, and since \(G\) is constructed by attaching trees of depth \(3\) directly on the vertices of \(H\), we have that \(G\) has treedepth and feedback vertex set \(\mathcal{O}(vc)\). This concludes our proof. We close this section by observing that the proof of Theorem 3.4 can be adapted for the case of edge-irregulators. Indeed, it suffices to replace the guessing of vertices and the variables defined on vertices, by guessing of edges and variables defined on the edges of the given graph. 
Finally, the definition of the sub-types is done through subgraphs produced only by deletion of edges. This leads us to the following: **Corollary 4.9**.: _Given a graph \(G\) with vertex integrity \(k\), there exists an algorithm that computes \(\operatorname{I_{e}}(G)\) in FPT-time._ ## 5 Conclusion In this work we continued the study of the problem of finding optimal vertex-irregulators, and introduced the problem of finding optimal edge-irregulators. In the case of vertex-irregulators, our results are somewhat optimal, in the sense that we almost characterise exactly which are the "smallest" graph-structural parameters that render this problem tractable. The only meaningful parameter whose behaviour remains unknown is the modular-width of the input graph. The parameterised behaviour of the case of edge-irregulators is also somewhat understood, but there are still some parameters for which the problem remains open. Another interesting direction is that of approximating optimal vertex or edge-irregulators. In particular it would be interesting to identify parameters for which either problem becomes approximable in FPT-time (recall that vertex-irregulators are not approximable within any decent factor in polynomial time [15]). Finally, provided that the behaviour of edge-irregulators is better understood, we would also like to propose the problem of finding locally irregular minors, of maximum order, of a given graph \(G\).
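As a concrete companion to the definitions used throughout this section, the following sketch computes \(\mathrm{I}_{e}(G)\) by brute force, testing whether \(G-S\) is locally irregular for edge sets \(S\) of increasing size. This is purely illustrative — the results above show the problem is intractable in general — and the adjacency-dict encoding and function names are our own, not taken from the paper:

```python
from itertools import combinations

def is_locally_irregular(adj, removed):
    """True iff no two adjacent vertices of G - removed have the same degree.
    Edges are frozensets of endpoints; adj maps each vertex to its neighbours."""
    deg = {u: sum(1 for v in adj[u] if frozenset((u, v)) not in removed)
           for u in adj}
    return all(deg[u] != deg[v]
               for u in adj for v in adj[u]
               if frozenset((u, v)) not in removed)

def edge_irregulator_number(adj):
    """I_e(G): minimum size of an edge set S such that G - S is locally
    irregular. Exponential search over all edge subsets; tiny graphs only.
    Always terminates: removing every edge leaves a (vacuously) locally
    irregular empty graph."""
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    for k in range(len(edges) + 1):
        if any(is_locally_irregular(adj, set(S))
               for S in combinations(edges, k)):
            return k
```

For example, the path on three vertices is already locally irregular (degrees \(1,2,1\)), so its value is \(0\), while a single edge \(K_{2}\) requires removing that edge.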
2302.01225
Asymmetric Cryptosystem Using Careful Synchronization
We present public-private key cryptosystem which utilizes the fact that checking whether a partial automaton is carefully synchronizing is $PSPACE$-complete, even in the case of a binary alphabet.
Jakub Ruszil
2023-02-02T17:04:58Z
http://arxiv.org/abs/2302.01225v1
# Asymmetric Cryptosystem Using Careful Synchronization ###### Abstract We present a public-private key cryptosystem which utilizes the fact that checking whether a partial automaton is carefully synchronizing is \(PSPACE\)-complete, even in the case of a binary alphabet. ## 1 Introduction Cryptography has been an essential branch of mathematics since ancient times. Its main purpose is to ensure the privacy of information sent between sender and receiver through a possibly observed channel. Nowadays we distinguish symmetric cryptography - where the key used to cipher the message is the same as the one to decipher it - and asymmetric, where the key to cipher the message is commonly known and the one to decipher it is known only to the receiver of the message. In other words, asymmetric cryptography is referred to as public key cryptography, or public-private key cryptography. The idea of public key cryptography was first mentioned in a confidential report of GCHQ [4] (UK Government Communications Headquarters) and later independently by Diffie and Hellman in 1976 [27], along with the first practical public key cryptosystem, based on the knapsack problem. The best-known asymmetric cryptosystem (RSA) was invented by Rivest, Shamir and Adleman in 1978 [17] and has been applied since then to encryption and digital signatures. The concept of synchronization of finite automata is essential in various areas of computer science. It consists in regaining control over a system by applying a specific set of input instructions. These instructions lead the system to a fixed state no matter in which state it was at the beginning. The idea of synchronization has been studied for many classes of complete deterministic finite automata (DFA) [1, 2, 5, 9, 10, 16, 18, 20, 23, 22, 24, 25] and non-deterministic finite automata [7, 14]. 
One of the most famous longstanding open problems in automata theory, known as the Černý Conjecture, states that for a given synchronizing DFA with \(n\) states one can always find a synchronizing word of length at most \((n-1)^{2}\). This conjecture has been proven for numerous classes of automata, but the problem is still not solved in the general case. The concept of synchronization has also been considered in coding theory [3, 8], parts orienting in manufacturing [5, 15], testing of reactive systems [19] and Markov Decision Processes [11, 12]. Allowing no outgoing transitions from some states for certain letters helps us to model a system for which certain actions cannot be accomplished while being in a specified state. This leads to the problem of finding a synchronizing word for a finite automaton whose transition function is not defined for all states. Notice that this is the most frequent case if we use automata to model real-world systems. In practice, it rarely happens that a real system can be modeled with a DFA where the transition function is total. The transition function is usually a partial one. This fact motivated many researchers to investigate the properties of partial finite automata relevant to practical problems of synchronization. We know that, in the general case, checking if a partial automaton can be synchronized is PSPACE-complete [13], even for a binary alphabet [26], and these facts are essential in our later considerations. In this paper we present a public key cryptosystem utilizing the fact that checking if a PFA is carefully synchronizing is PSPACE-complete. This is, however, not the first attempt to develop asymmetric cryptosystems based on the notion of finite automata. Public key cryptography on finite automata with output is discussed in [21] and uses the notion of invertible automata to provide the hard computational problem necessary to design such a cryptosystem. The paper is organized as follows. 
In Section 2 we provide the basic notions and facts about synchronization of automata. In Sections 3 and 4 we present the basic method of encryption and decryption using our cryptosystem. In Section 5 we state a couple of additional improvements to ensure better security. Finally, we conclude the paper in Section 6 along with possible further research on the topic. ## 2 Preliminaries A _partial finite automaton_ (PFA) is an ordered tuple \(\mathcal{A}=(Q,\Sigma,\delta)\), where \(\Sigma\) is a finite set of letters, \(Q\) is a finite set of states and \(\delta:Q\times\Sigma\to Q\) is a transition function, possibly not everywhere defined. In this definition we omit initial and final states, since they are not relevant to the problem of synchronization. For \(\mathit{w}\in\Sigma^{*}\) and \(\mathit{q}\in Q\) we extend \(\delta(\mathit{q},\mathit{w})\) to words inductively: \(\delta(\mathit{q},\epsilon)=q\) and \(\delta(\mathit{q},\mathit{aw})=\delta(\delta(\mathit{q},\mathit{a}),\mathit{w})\) for \(\mathit{a}\in\Sigma\), where \(\epsilon\) is the empty word and \(\delta(\mathit{q},\mathit{a})\) is defined. A word \(\mathit{w}\in\Sigma^{*}\) is called _carefully synchronizing_ if there exists \(\overline{\mathit{q}}\in Q\) such that for every \(\mathit{q}\in Q\), \(\delta(\mathit{q},\mathit{w})=\overline{\mathit{q}}\) and all transitions \(\delta(\mathit{q},w^{\prime})\), where \(w^{\prime}\) is any prefix of \(\mathit{w}\), are defined. A PFA is called _carefully synchronizing_ if it admits any carefully synchronizing word. For a given \(\mathcal{A}\) we define its _power automaton_ (which is itself a PFA) as \(\mathcal{P}(\mathcal{A})=(2^{Q},\Sigma,\tau)\), where \(2^{Q}\) stands for the set of all subsets of \(Q\), and \(\Sigma\) is the same as in \(\mathcal{A}\). The transition function \(\tau:2^{Q}\times\Sigma\to 2^{Q}\) is defined as follows. Let \(Q^{\prime}\subseteq Q\). 
For every \(a\in\Sigma\) we define \(\tau(Q^{\prime},a)=\bigcup_{q\in Q^{\prime}}\delta(q,a)\) if \(\delta(q,a)\) is defined for all states \(q\in Q^{\prime}\); otherwise \(\tau(Q^{\prime},a)\) is not defined. We also write \(Q.w\) for the action of a word \(w\) on a set of states \(Q\) under the function \(\delta\). Let \(S\subseteq Q\). Then we denote by \(S.w^{-1}\) the preimage of \(S\) under the action of a word \(w\). We note that the above concepts can also be considered for _deterministic finite automata_ (DFA), for which the transition function is total. We define an \(a\)-cluster to be a DFA \(\mathcal{A}=(Q,\{a\},\delta)\) such that the automaton is connected. In other words, such an automaton is a cycle on the letter \(a\) together with paths that lead to the states of that cycle. The set of states that induce a cycle in the \(a\)-cluster is referred to as the _center_ of the cluster. The _depth_ of the cluster is the length of the longest path to the center of the cluster. If \(q\) belongs to the center of the \(a\)-cluster, the _branch_ of the state \(q\) is the set of states that have a path to \(q\) that does not contain any other state belonging to the center. The _destination_ of a branch is the state in the center that has an in-transition from the last state of the branch. An example of an \(a\)-cluster is depicted on Figure 1. The center of that \(a\)-cluster is the set \(\{3,4,5,6\}\), the depth is \(2\) and there are two branches: \(b_{1}=\{1,2\}\) and \(b_{2}=\{7\}\). The destination of the branch \(b_{1}\) is the state \(3\) and of the branch \(b_{2}\) is the state \(4\). We define the sum of two automata \(\mathcal{A}=(Q_{1},\Sigma_{1},\delta_{1})\) and \(\mathcal{B}=(Q_{2},\Sigma_{2},\delta_{2})\) as \(\mathcal{A}\cup\mathcal{B}=(Q_{1}\cup Q_{2},\Sigma_{1}\cup\Sigma_{2},\delta_{ 1}\cup\delta_{2})\). We can now state the obvious fact, useful to decide whether a given PFA is carefully synchronizing. 
Figure 1: Example of the \(a\)-cluster **Fact 1**.: _Let \(\mathcal{A}\) be a PFA and \(\mathcal{P}(\mathcal{A})\) be its power automaton. Then \(\mathcal{A}\) is carefully synchronizing if and only if for some state \(q\in Q\) there exists a path in \(\mathcal{P}(\mathcal{A})\) from \(Q\) to \(\{q\}\). The shortest carefully synchronizing word for \(\mathcal{A}\) corresponds to the shortest such path in \(\mathcal{P}(\mathcal{A})\)._ An example of a carefully synchronizing automaton \(\mathcal{A}_{car}\) is depicted in Fig. 2. One of its carefully synchronizing words is \(aa(ba)^{3}bbab\). We recall the result of Vorel [26] on the complexity of deciding whether a PFA is carefully synchronizing. **Theorem 1**.: _Given a PFA \(\mathcal{A}=(Q,\Sigma,\delta)\), checking if \(\mathcal{A}\) is carefully synchronizing is \(PSPACE\)-complete even for \(|\Sigma|=2\)._ Further we assume that \(\Sigma=\{a,b\}\) and that the letter \(a\) is defined for all states unless mentioned otherwise. With that, we can proceed to the description of our method. Figure 2: A carefully synchronizing automaton \(\mathcal{A}_{car}\). ## 3 Basic encryption Let the plain text be a word \(u\in\{0,1\}^{*}\). Choose as the public key a carefully synchronizing PFA \(\mathcal{A}=(Q,\Sigma,\delta)\) and as the private key any word \(w\) that carefully synchronizes \(\mathcal{A}\). For simplicity of further statements, we denote by \((Q_{i},\Sigma,\delta_{i})\) an automaton isomorphic to \(\mathcal{A}\) for any \(i\in\mathbb{N}\). First we describe the construction of the ciphertext. Define an automaton \(\mathcal{P}=(\{p_{1},p_{2},..,p_{|u|+1}\},\{0,1\},\gamma)\), where \(\gamma\) is defined as follows: for \(i\in\{1,..,|u|\}\) set \(\gamma(p_{i},u_{i})=p_{i+1}\), where \(u_{i}\) is the \(i\)-th letter of the word \(u\). In other words, we encode our plaintext in the form of a directed path, where consecutive edges correspond to the consecutive letters of the word \(u\). Encryption consists of four steps: 1.
Compute \(\mathcal{B}=\bigcup_{i=1}^{|u|+1}\mathcal{A}_{i}\) and denote \(\bigcup_{i=1}^{|u|+1}\delta_{i}=\rho\), \(P=\bigcup_{i=1}^{|u|+1}Q_{i}\), 2. for any transition \((p_{i},p_{j})\) in \(\mathcal{P}\) labelled with a letter \(x\in\{0,1\}\), choose any pair of states \(q^{i}\in Q_{i}\) and \(q^{j}\in Q_{j}\), and set \(\rho(q^{i},x)=q^{j}\), 3. for all \(i\in\{1,..,|u|+1\}\) and for every letter \(a\in\Sigma\), if \(q^{i}\in Q_{i}\) and \(\delta(q^{i},a)\) is undefined, then choose any \(j\) and any state \(q^{j}\in Q_{j}\) and set \(\rho(q^{i},a)=q^{j}\), 4. for all \(i\in\{1,..,|u|+1\}\) choose \(k_{i}\in\mathbb{N}\), then choose \(k_{i}\) pairs \((q_{p}^{i},q_{r}^{i})\), for each pair choose a letter \(x\in\{0,1\}\) and define \(\rho(q_{p}^{i},x)=q_{r}^{i}\). The automaton \(\mathcal{B}\) is our ciphertext. It is straightforward from the construction that computing such an automaton is polynomial in terms of \(|Q|\), \(|\Sigma|\) and the length of the plaintext. We also state two obvious observations. **Fact 2**.: _After removing the letters \(x\in\{0,1\}\) from the automaton \(\mathcal{B}\) we obtain a DFA over \(\Sigma\)._ **Fact 3**.: _After removing the letters \(a\in\Sigma\) from the automaton \(\mathcal{B}\) we obtain a digraph labelled with letters \(x\in\{0,1\}\) whose longest path between the vertices has length 1._ The procedure of encrypting the word \(01\) is depicted in Figures 3, 4, 5 and 6. As the public key we take the automaton depicted in Figure 2. The first step involves the summation of three copies of the public key that correspond to the three vertices of the word 01 encoded as a labelled path. The first vertex of the path corresponds to the automaton induced by the states with suffix \(a\), the second to the states with suffix \(b\), and the third to those with suffix \(c\). Figure 3: First step of encryption. The second step involves adding the transitions \(0\) and \(1\) to the states of the automata that correspond to the in and out vertices of the transition.
In the above example we define the transition \(\rho(5a,0)=2b\), which corresponds to the first transition of the encoded word, and \(\rho(12b,1)=9c\), which corresponds to the second transition of the encoded word. Transitions added in this step are bolded. Figure 4: Second step of encryption. The third step involves adding transitions from \(\Sigma\) to those states in \(\mathcal{B}\) which have undefined transitions for letters from \(\Sigma\). In this case we only add \(b\)-transitions. For example, we defined \(\rho(1a,b)=2b\). We should act similarly for all states for which \(b\) is undefined, but we have drawn only some of the necessary transitions to keep the figure readable. Figure 5: Third step of encryption. The last step involves adding some number of transitions under letters from the alphabet \(\{0,1\}\) between states belonging to the same copy of the public key in \(\mathcal{B}\). In this case we have added the transition \(\rho(9a,0)=10a\) (first copy), the transitions \(\rho(7b,1)=10b\) and \(\rho(8b,1)=3b\) (second copy) and the transition \(\rho(12c,0)=7c\) (third copy). Figure 6: Fourth step of encryption. ## 4 Basic decryption For this section we assume that we have a ciphertext automaton \(\mathcal{B}=(P,\Sigma,\rho)\) constructed from a public key \(\mathcal{A}=(Q,\Sigma,\delta)\), and that we know a private key \(w\), which is a carefully synchronizing word for the automaton \(\mathcal{A}\). First we state a lemma. **Lemma 1**.: _Let \(Q.w=q_{l}\). After removing the letters \(x\in\{0,1\}\) from the automaton \(\mathcal{B}\) we have that \(P.w=\{q_{l}^{1},q_{l}^{2},..,q_{l}^{|u|+1}\}\)._ Proof.: It is immediate from the construction, since we have not removed any transitions from \(\Sigma\) within any \(Q_{i}\), that for any \(Q_{i}\subseteq P\) it holds that \(Q_{i}.w=q_{l}^{i}\), because \(w\) carefully synchronizes \(\mathcal{A}\) and each \(Q_{i}\) on \(\Sigma\) induces an isomorphic copy of \(\mathcal{A}\). So we have that \(\{q_{l}^{1},q_{l}^{2},..,q_{l}^{|u|+1}\}\subseteq P.w\).
To prove that \(P.w\subseteq\{q_{l}^{1},q_{l}^{2},..,q_{l}^{|u|+1}\}\) it suffices to notice that, by Fact 2, the automaton \(\mathcal{B}\) is deterministic and for any prefix \(w^{\prime}\) of \(w\), if \(Q.w^{\prime}=\{q_{k_{1}},..,q_{k_{s}}\}\), then \(Q_{i}.w^{\prime}=\{q_{k_{1}}^{i},..,q_{k_{s}}^{i}\}\). **Lemma 2**.: _There exists an algorithm with \(O(|P||w|)\) time complexity and \(O(|P||w|)\) space complexity which computes the partition of \(P\) into the sets \(Q_{1}\), \(Q_{2}\),..., \(Q_{|u|+1}\)._ Proof.: We describe the desired algorithm. Suppose we have an array with \(|P|\) columns and \(|w|+1\) rows. Put every element of \(P\) in a different column of the first row. Then, for \(i\in\{2,..,|w|+1\}\), we fill the \(i\)-th row by taking the state from the \((i-1)\)-th row of the corresponding column and applying to it the \((i-1)\)-th letter of the word \(w\). After this procedure, by Lemma 1, the last row contains only the states from the set \(\{q_{l}^{1},q_{l}^{2},..,q_{l}^{|u|+1}\}\). We can now compute each \(Q_{i}\) by taking those states from the first row that lie in the same columns as the state \(q_{l}^{i}\). With these two lemmas we are ready to present the decryption method: 1. using Lemma 1, compute the set \(\{q_{l}^{1},q_{l}^{2},..,q_{l}^{|u|+1}\}\), 2. using Lemma 2, compute the partition of \(P\) into the sets \(Q_{1}\), \(Q_{2}\),..., \(Q_{|u|+1}\), 3. for every transition \(x\in\{0,1\}\) in \(\mathcal{B}\), if \(x\) joins states from different sets, say \(Q_{i}\) and \(Q_{j}\), then join \(q_{l}^{i}\) and \(q_{l}^{j}\) with a transition \(x\); otherwise remove the transition. Observe that after applying this procedure to the ciphertext \(\mathcal{B}\) we end up with the graph that was our plaintext, which can be concluded directly from the encryption procedure. In general, one can decipher the message only by knowing some carefully synchronizing word for \(\mathcal{A}\) or by computing every possible induced subautomaton isomorphic to \(\mathcal{A}\).
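The basic encryption and decryption procedures can be traced end to end on a toy instance. The public key, plaintext and all concrete transition choices below are hypothetical, made up for illustration (they are not the paper's running example with states \(1a,..,15c\)); the sketch builds a ciphertext \(\rho\) from a 3-state public key and then recovers the plaintext with the partition idea of Lemma 2:

```python
# Hypothetical toy public key: states {0,1,2}, 'a' total, 'b' partial.
# The word w = "aa" carefully synchronizes it to the state 2.
delta = {(0, 'a'): 1, (1, 'a'): 2, (2, 'a'): 2, (2, 'b'): 0}
u = "01"                                   # plaintext

rho = {}
for i in (1, 2, 3):                        # step 1: one copy Q_i per path vertex
    for (q, x), r in delta.items():
        rho[((i, q), x)] = (i, r)
rho[((1, 0), '0')] = (2, 1)                # step 2: edge for the letter u_1 = '0'
rho[((2, 2), '1')] = (3, 0)                # step 2: edge for the letter u_2 = '1'
for i in (1, 2, 3):                        # step 3: complete the missing
    for q in (0, 1):                       #         b-transitions (arbitrary targets)
        rho[((i, q), 'b')] = (i % 3 + 1, q)
rho[((2, 0), '1')] = (2, 2)                # step 4: an obfuscating intra-copy edge

# --- decryption, knowing only rho and the private key w ---
def apply_word(p, word):
    for a in word:
        p = rho[(p, a)]
    return p

w = "aa"
P = [(i, q) for i in (1, 2, 3) for q in (0, 1, 2)]
leader = {p: apply_word(p, w) for p in P}  # Lemmas 1-2: p lies in the copy of leader[p]

edges = {}                                 # keep only 0/1 edges joining different copies
for (p, x), r in rho.items():
    if x in "01" and leader[p] != leader[r]:
        edges[leader[p]] = (x, leader[r])

v = (set(edges) - {r for (_, r) in edges.values()}).pop()   # start of the path
recovered = ""
while v in edges:                          # read the plaintext off the labelled path
    x, v = edges[v]
    recovered += x

print(recovered)                           # -> 01
```

Note that the step-4 edge is discarded automatically because both its endpoints synchronize to the same leader, exactly as step 3 of the decryption method prescribes.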
## 5 Extensions As the ciphertext resulting from our encryption method consists of \(n\) copies of an isomorphic automaton with added transitions between those copies, one can think of a more "sophisticated" method of creating a ciphertext. As mentioned in the previous section, a potential attacker can decipher the message by computing every possible induced subautomaton isomorphic to the public key. However, the problem of determining, for two given graphs \(G\) and \(H\), whether \(G\) has a copy of \(H\) as an induced subgraph is \(NP\)-complete [6]. In this section we present two lemmas that can be used to obfuscate the ciphertext even more. The first one involves adding a state to the public key and the second one adding an arbitrary number of \(a\)-clusters to the ciphertext. **Lemma 3**.: _Let \(\mathcal{A}=(Q,\Sigma,\delta)\) be a PFA with a carefully synchronizing word \(w\). Further, let \(q\in Q\) be such that there exists \(p\in Q\) with \(q\in p.a^{-1}\). Let also \(\mathcal{A}^{\prime}=(Q\cup\{q^{\prime}\},\Sigma,\delta^{\prime})\), where \(\delta^{\prime}\) is defined as \(\delta\) on \(Q\) and \(\delta^{\prime}(q^{\prime},a)=q\). Then \(w\) carefully synchronizes \(\mathcal{A}^{\prime}\)._ Proof.: Since \(a\) is defined for all states of \(Q^{\prime}\) and \(|\Sigma|=2\), the first letter of \(w\) must be \(a\). Let \(w^{\prime}\) be the word \(w\) without its first letter. Since \(\delta^{\prime}(q^{\prime},a)=q\) and we assumed that there exists \(p\in Q\) such that \(q\in p.a^{-1}\), it is straightforward that \(Q^{\prime}.a=Q.a\). Since we have not added any other transitions to \(\mathcal{A}^{\prime}\) and \(\delta^{\prime}\) coincides with \(\delta\) on \(Q\), we obtain that \(Q^{\prime}.aw^{\prime}=Q.aw^{\prime}=Q.w\), which concludes the proof. For the next lemma we assume the notation of the former part of the paper.
**Lemma 4**.: _Let \(\mathcal{B}=\bigcup_{i=1}^{k}\mathcal{A}_{i}\) and let \(m\in\mathbb{N}\) be the smallest integer such that \(Q.a^{m}=Q.a^{m+1}\). Define \(B_{i}=Q_{i}.a^{m}b\) and let \(\mathcal{C}_{1}=(S_{1},\{a\},\eta_{1}),\ldots,\mathcal{C}_{l}=(S_{l},\{a\},\eta_{l})\) be \(a\)-clusters with depth \(1\) and centers \(K_{1},\ldots,K_{l}\), respectively. Let \(\mathcal{B}^{\prime}=\mathcal{B}\cup\bigcup_{i=1}^{l}\mathcal{C}_{i}=(P^{\prime},\Sigma,\rho^{\prime})\). If we define \(b\)-transitions for all states \(q\in\bigcup_{i=1}^{l}K_{i}\) such that there exists \(0<j<k+1\) with \(\rho^{\prime}(q,b)\in B_{j}\), then \(P^{\prime}.w=\{q_{l}^{1},...,q_{l}^{k}\}\)._ Proof.: Since \(a\) is the only letter defined for all states in \(\mathcal{A}\) and \(Q.a^{m}=Q.a^{m+1}\), the word \(w\) starts with a word \(a^{m_{1}}b\) for some \(0<m_{1}<m+1\). Write \(w=a^{m_{1}}bw^{\prime}\). Observe that \(Q.a^{i+1}\subseteq Q.a^{i}\) for all \(i\geq 0\). From that we have that \(Q.a^{m}\subseteq Q.a^{m_{1}}\), and further, for all copies of \(\mathcal{A}\) in \(\mathcal{B}^{\prime}\), we obtain that \(B_{i}\subseteq Q_{i}.a^{m_{1}}b\). Also, since the depth of any cluster \(\mathcal{C}_{i}\) is \(1\), we have that \(S_{j}.a^{m_{1}}=K_{j}\) for all \(0<j<l+1\). Notice that \(P^{\prime}=\bigcup_{i=1}^{k}Q_{i}\cup\bigcup_{i=1}^{l}S_{i}\), so \[P^{\prime}.w=\bigcup_{i=1}^{k}Q_{i}.w\cup\bigcup_{i=1}^{l}S_{i}.w=\bigcup_{i=1}^{k}Q_{i}.a^{m_{1}}bw^{\prime}\cup\bigcup_{i=1}^{l}S_{i}.a^{m_{1}}bw^{\prime}\] which gives \[P^{\prime}.w=\bigcup_{i=1}^{k}B_{i}.w^{\prime}\cup\bigcup_{i=1}^{l}K_{i}.bw^{\prime}.\] But we know that for all \(q\in\bigcup_{i=1}^{l}K_{i}\) there exists \(0<i<k+1\) such that \(\rho^{\prime}(q,b)\in B_{i}\). From that we obtain \[P^{\prime}.w=\bigcup_{i=1}^{k}B_{i}.w^{\prime},\] and since each \(B_{i}=Q_{i}.a^{m}b\), then \(B_{i}.w^{\prime}=Q_{i}.a^{m}bw^{\prime}=Q_{i}.w=\{q_{l}^{i}\}\), which concludes the proof.
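Fact 1 and Lemma 3 lend themselves to a mechanical check: a breadth-first search over the power automaton finds a carefully synchronizing word, and rerunning the search after attaching a fresh state as in Lemma 3 confirms that the same word still works. The toy PFA below is a hypothetical example, not one of the paper's automata; a minimal sketch:

```python
from collections import deque

def careful_sync_word(states, delta):
    """BFS in the power automaton (Fact 1): walk from the full state set
    towards a singleton, using a letter only when it is defined on the
    whole current set (the 'careful' requirement)."""
    letters = {a for (_, a) in delta}
    start = frozenset(states)
    seen, queue = {start}, deque([(start, "")])
    while queue:
        S, w = queue.popleft()
        if len(S) == 1:
            return w                          # shortest such word, by BFS
        for a in sorted(letters):
            if all((q, a) in delta for q in S):
                T = frozenset(delta[(q, a)] for q in S)
                if T not in seen:
                    seen.add(T)
                    queue.append((T, w + a))
    return None                               # not carefully synchronizing

# hypothetical toy PFA: 'a' is total, 'b' is defined only on state 2
delta = {(0, 'a'): 1, (1, 'a'): 2, (2, 'a'): 2, (2, 'b'): 0}
w = careful_sync_word([0, 1, 2], delta)       # -> "aa"

# Lemma 3: attach a new state 3 whose a-transition lands inside Q.a,
# so that Q'.a = Q.a and the old word still synchronizes carefully
delta2 = dict(delta)
delta2[(3, 'a')] = 1                          # 1 = delta[(0,'a')] lies in Q.a
assert careful_sync_word([0, 1, 2, 3], delta2) == w
```

The BFS runs in time exponential in \(|Q|\) in the worst case, which is consistent with the \(PSPACE\)-completeness of the decision problem (Theorem 1); it is practical only for small automata.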
Using these two lemmas we can move on to the description of the extended method of encryption and decryption. In the next two subsections we follow the notation introduced in Sections 3 and 4. ### Extended encryption The extension consists of adding two stages between stages 1 and 2 of the encryption method, defining sets \(Q^{\prime}_{i}\) and substituting them for \(Q_{i}\) in the later stages. Let us state the two additional stages: 1. add \(l\) \(a\)-clusters of depth 1 to the automaton obtained in stage 1 and define the letter \(b\) for the centers of those clusters in the function \(\rho\) (defined in Section 3) so as to fulfill the assumptions of Lemma 4, 2. for each copy \(\mathcal{A}_{i}\) of the public key in the automaton obtained in the previous stage, add \(k_{i}\) states and define transitions as in Lemma 3, and denote by \(A_{i}\) the set of the states added in this stage for \(\mathcal{A}_{i}\). Now let us define the sets \(Q^{\prime}_{i}\). For the clusters \(\mathcal{C}_{1}=(S_{1},\{a\},\gamma_{1}),\ldots,\mathcal{C}_{l}=(S_{l},\{a\},\gamma_{l})\) (from stage 1) with centers \(K_{1},\ldots,K_{l}\), respectively, we define sets \(C_{1},\ldots,C_{|u|+1}\) such that if for \(q\in K_{i}\) it holds that \(\rho(q,b)\in B_{j}\) (notation from Lemma 4), then \(q\) and its branch belong to the set \(C_{j}\). Then define \(Q^{\prime}_{i}=Q_{i}\cup A_{i}\cup C_{i}\). It is a simple exercise to prove that the sets \(Q^{\prime}_{1},\ldots,Q^{\prime}_{|u|+1}\) form a partition of \(P=\bigcup_{i=1}^{|u|+1}Q_{i}\cup A_{i}\cup C_{i}\), which is the set of all states of our ciphertext. The later stages remain as in Section 3. ### Extended decryption The deciphering algorithm is similar to the one described in Section 4. We state lemmas in strict correspondence with those proven in Section 4. **Lemma 5**.: _Let \(\mathcal{B}\) be a ciphertext computed by the extended encryption method using a public key \(\mathcal{A}=(Q,\Sigma,\delta)\) and \(Q.w=q_{l}\).
After removing the letters \(x\in\{0,1\}\) from the automaton \(\mathcal{B}\) we have that \(P.w=\{q_{l}^{1},q_{l}^{2},\ldots,q_{l}^{|u|+1}\}\)._ Proof.: Observe that after stage 1 we can apply Lemma 4 and we obtain that \((\bigcup_{i=1}^{|u|+1}Q_{i}\cup C_{i}).w=\{q_{l}^{1},q_{l}^{2},..,q_{l}^{|u|+1}\}\). Notice that after stage 2 we can apply Lemma 3 to any copy of the public key that was modified by that stage, and again \(P.w=\{q_{l}^{1},q_{l}^{2},..,q_{l}^{|u|+1}\}\). The rest of the proof is similar to the proof of Lemma 1. **Lemma 6**.: _There exists an algorithm with polynomial time complexity (depending on \(|P|\) and \(|w|\)) which computes the partition of \(P\) into the sets \(Q^{\prime}_{1}\), \(Q^{\prime}_{2}\),..., \(Q^{\prime}_{|u|+1}\)._ Proof.: Using the approach from the proof of Lemma 2 we can compute a similar matrix, say \(M\), in time \(O(|P||w|)\). From Lemma 5 we know that the last row contains only the states from the set \(\{q_{l}^{1},q_{l}^{2},\ldots,q_{l}^{|u|+1}\}\), and we can compute sets \(\bar{Q}_{1},\ldots,\bar{Q}_{|u|+1}\) such that if the column of the first row containing \(q\) is the same as the column of the last row containing \(q_{l}^{i}\), then \(q\in\bar{Q}_{i}\). Notice that there are three cases when \(q\in\bar{Q}_{i}\): * \(q\in Q_{i}\) * \(q\in A_{i}\) * \(q\in S_{m}\) such that there exists \(p\in C_{i}\cap S_{m}\) (notation from Lemma 4) The first two cases are straightforward. To prove the lemma in the third case, observe that if \(q\notin Q_{i}\cup A_{i}\), then \(q\notin A_{j}\) and \(q\notin Q_{k}\) for any \(j,k\neq i\), as otherwise \(\mathcal{B}\) would be non-deterministic. So we deduce that \(q\in S_{m}\) for some \(m\). For the sake of contradiction suppose that \(C_{i}\cap S_{m}=\varnothing\). But that means that \(q.a^{m}b\in B_{j}\) for \(j\neq i\) and further \(q.a^{m}bw^{\prime}=q.w=q_{l}^{j}\), which is a contradiction.
From these considerations we are able to determine, for each \(i\), the sets \(A_{i}\) and \(Q_{i}\) that are subsets of the set \(Q_{i}^{\prime}\). In order to compute the sets \(C_{i}\), we first compute the sets \(S_{1}^{\prime},..,S_{n}^{\prime}\) inducing all \(a\)-clusters in \(\mathcal{B}\) by removing the \(b,0,1\) transitions and determining all connected components of the resulting structure. Now we examine three cases for a cluster \(S_{j}^{\prime}\): * \(S_{j}^{\prime}\cap\bar{Q}_{i}=\varnothing\) * \(S_{j}^{\prime}\subseteq\bar{Q}_{i}\) * \(S_{j}^{\prime}\cap\bar{Q}_{i}\neq\varnothing\) and \(S_{j}^{\prime}\not\subset\bar{Q}_{i}\) Notice that if the first case holds, we know that no state of \(S_{j}^{\prime}\) belongs to \(C_{i}\). If the second case holds, we must check whether \(S_{j}^{\prime}\subseteq Q_{i}\cup A_{i}\). If this is not true, then we have found a cluster \(\mathcal{C}_{m}\) such that for all \(q\in K_{m}\) it holds that \(\rho(q,b)\in B_{i}\), and we have determined the \(a\)-cluster that belongs to \(C_{i}\). In the third case we know that some of the states of the cluster \(S_{j}^{\prime}\) are in \(C_{i}\) and some are not. To compute those that are, let us take the center of the \(a\)-cluster \(S_{j}^{\prime}\), say \(K_{j}^{\prime}\), and observe that \(q\in C_{i}\) if and only if \(q\in\{p\in K_{j}^{\prime}:\rho(p,b)\in B_{i}\}=K_{j}^{\prime\prime}\) or \(q\) belongs to some branch with destination in \(K_{j}^{\prime\prime}\). That concludes the proof. Using the two former lemmas, the decryption method is similar to that of Section 4. The extended step is depicted in Figure 7. If we choose the public key to be the automaton in Figure 2, then notice that in Lemma 4 we have \(m=2\) and \(Q.a^{2}b=\{2,3,7,12,13\}\). Observe that we can apply Lemma 3 to the states \(1,2,3\). We also added a cluster consisting of the states \(4,5,6,7\), so we can apply Lemma 4.
In the former method we had \(Q_{1}=\{1a,..,15a\},Q_{2}=\{1b,..,15b\},Q_{3}=\{1c,..,15c\}\), and now \(Q^{\prime}_{1}=Q_{1}\cup\{1\},Q^{\prime}_{2}=Q_{2}\cup\{2,4,5\},Q^{\prime}_{3}=Q_{3}\cup\{3,6,7\}\). ## 6 Conclusions and further work We proposed a method of utilizing careful synchronization to provide a brand new public-key cryptosystem. In Sections 3 and 4 we presented the core idea of our method and provided an example that illustrates it. As the ciphertext in that method consists of \(n\) copies of the same automaton, those two sections are included so that the reader can understand the method presented in Section 5. It should also be mentioned that Lemmas 3 and 4 are only examples of extensions of that cryptosystem. Indeed, observe that Lemma 4 provides the possibility to add "free" \(a\)-clusters to a ciphertext. Figure 7: Extended step of encryption. The disadvantage of that extension is that we can only add \(b\)-transitions to certain specified states of the copies of the public key. It is also possible to add an extension that allows us to define \(a\)-clusters whose \(b\)-transitions lead outside of the specific sets \(B_{i}\), to whatever state we want, even to another added \(a\)-cluster. However, this extension would cause that in Lemma 5 we would only have \(\{q_{l}^{1},..,q_{l}^{|u|+1}\}\subseteq P.w\), so the number of states added in such extensions would be bounded by \(\min(|Q_{1}^{\prime}|,..,|Q_{l}^{\prime}|)\); it would also demand modifications in Lemma 6, so we omitted that extension. Observe also that point 2 of the encryption procedure can be modified in many ways. For example, one can choose to define more than one transition between copies of the automata and, in decryption, choose the one that has an odd or even number in the ciphertext.
We end up with several questions and open problems: **Question 1**.: _What is the most reasonable way to define the lacking transitions in point 3?_ It is straightforward that if all lacking transitions in a copy of the public key were defined within the same copy, this would result in \(|u|+1\) connected automata that are not connected to each other, and that simplifies an attack on the cryptosystem. **Question 2**.: _What is the most reasonable way to define the transitions in point 4?_ We have defined step 4 in an abstract way so as to allow investigating many versions of adding those "obfuscating" \(\{0,1\}\) transitions. **Question 3**.: _Find an algorithm that generates pairs of public and private keys._ We believe that the most promising approach will be to construct a PFA that is carefully synchronized by a given \(w\). We also want to investigate whether it is possible to design an algorithm that, for a given word \(w\), generates \(n\) non-isomorphic PFAs that are carefully synchronized by \(w\). Having that, one could take as a public key a tuple of \(n\) automata that are synchronized by the same word \(w\). In that case, all methods presented in the paper would need only slight modifications to work properly.
2308.03469
Conformal Warped Product Submersion
In this paper, the concept of Riemannian warped product submersion is generalized to the conformal case. We introduce the notion of conformal warped product submersion. It is a submersion between warped product manifolds that preserves angles between the horizontal vectors. The fundamental tensors of submersion are derived for conformal warped product submersion.
Harmandeep Kaur, Abhishek Pandey, Gauree Shanker
2023-08-07T10:51:57Z
http://arxiv.org/abs/2308.03469v1
# Conformal Warped Product Submersion ###### Abstract In this paper, the concept of Riemannian warped product submersion is generalized to the conformal case. We introduce the notion of conformal warped product submersion. It is a submersion between warped product manifolds that preserves angles between the horizontal vectors. The fundamental tensors of submersion are derived for conformal warped product submersion. **Mathematics Subject Classification (2020):** 53C15, 53C18, 53C20, 53B25. **Keywords:** Warped product manifolds, Riemannian submersion, conformal submersion, Riemannian warped product submersion. ## 1 Introduction In order to compare the geometric structures of two manifolds, we need suitable types of maps between Riemannian manifolds. Given two manifolds, such maps are known as submersions if the rank of the differential map is equal to the dimension of the target manifold and immersions if the rank of the differential map is equal to the dimension of the source manifold. Moreover, if these maps are isometries between manifolds, then an immersion is called an isometric immersion and a submersion is called a Riemannian submersion. Riemannian submersions were introduced in the sixties by Gray [16] and O'Neill [23]. A Riemannian submersion is a tool to study the geometry of a Riemannian manifold with an additional structure in terms of certain components, that is, the fibers and the base space. Riemannian submersions are related to physics and have applications in the Yang-Mills theory [7, 28], Kaluza-Klein theory [8, 18], supergravity and superstring theories [19, 22]. The projection of a Riemannian product manifold onto one of its factors is a trivial example of a Riemannian submersion. The class of warped product manifolds has shown itself to be rich, both wide and diverse, playing important roles in differential geometry as well as in physics.
To illustrate, Bishop and O'Neill introduced warped products in [6] as a means to construct a large class of complete Riemannian manifolds with negative curvature. The notion of warped product manifolds is one of the most fruitful generalizations of Riemannian products. Such a notion plays very important roles in differential geometry as well as in physics, especially in general relativity. The Schwarzschild and Robertson-Walker cosmological models are well-known examples of warped product manifolds [24]. Warped product manifolds have been studied for a long period of time. In contrast, the study of warped product submanifolds was initiated only around the beginning of this century in a series of articles [9, 10, 11, 12]. The study of maps between Riemannian warped product manifolds is an active research field. The theory of warped product immersions has been studied extensively so far. On the other hand, the study of Riemannian warped product submersions is an emerging area of research. The notion of Riemannian warped product submersion was introduced by I. K. Erken and C. Murathan in 2021 [21]. Recently, I. K. Erken et al. extended the study of Riemannian warped product submersions and gave the curvature properties of such submersions in [14]. Conformal submersion is a generalization of Riemannian submersion. It is a submersion between Riemannian manifolds whose differential is conformal on horizontal vectors. Although, contrary to isometries, conformal maps do not preserve distances between points, they preserve angles between vector fields. This property enables one to transfer certain properties of a manifold to another manifold by deforming such properties. Conformal submersions were introduced independently by Fuglede [15] and Ishihara [20]; they are useful for the characterization of harmonic morphisms [5] and have applications in medical imaging and computer graphics. In [25], Ornea obtained the fundamental equations of such submersions.
The curvature relations for conformal submersions were given in [5, 17]. Further, conformal submersions were studied by many authors [1, 2, 3, 4, 26]. R. Tojeiro studied conformal immersions of warped product manifolds in [27]. So, it is interesting to study conformal warped product submersions. In this paper we introduce the notion of conformal warped product submersion, which is a generalization of Riemannian warped product submersion. ## 2 Preliminaries In this section, we recall the foundational concepts required to understand the notion of conformal warped product submersion. ### Warped Product Manifolds **Definition 2.1**.: _Let \((M_{1}^{m_{1}},g_{M_{1}})\) and \((M_{2}^{m_{2}},g_{M_{2}})\) be two Riemannian manifolds and let \(f\) be a positive smooth function on \(M_{1}\). Then the warped product \(M=M_{1}\times_{f}M_{2}\) of \(M_{1}\) and \(M_{2}\) is the product manifold \(M_{1}\times M_{2}\) endowed with the metric \(g_{M}\) defined as_ \[g_{M}(X,Y)=g_{M_{1}}(\pi_{1*}(X),\pi_{1*}(Y))+f^{2}(\pi_{1}(x,y))g_{M_{2}}(\pi_{2*}(X),\pi_{2*}(Y)), \tag{2.1}\] _where \(X\), \(Y\) are vector fields on \(M_{1}\times M_{2}\) and \(\pi_{1}\), \(\pi_{2}\) are the projection mappings of \(M\) onto \(M_{1}\), \(M_{2}\), respectively._ The fibers \(\{x\}\times M_{2}=\pi_{1}^{-1}(x)\) and the leaves \(M_{1}\times\{y\}=\pi_{2}^{-1}(y)\) are Riemannian submanifolds of \(M_{1}\times_{f}M_{2}\). Vectors tangent to the leaves and those tangent to the fibers are called horizontal and vertical, respectively [13]. If \(v\in T_{p}M_{1}\) and \(q\in M_{2}\), then the lift \(\overline{v}\) of \(v\) to \((p,q)\) is the unique vector in \(T_{(p,q)}M\) such that \((\pi_{1})_{*}(\overline{v})=v\). For a vector field \(X\in\Gamma(TM_{1})\), the lift of \(X\) to \(M\) is the vector field \(\overline{X}\) whose value at each \((p,q)\) is the lift of \(X_{p}\) to \((p,q)\). The set of all horizontal lifts is denoted by \(\mathcal{L}(M_{1})\). Similarly, we denote the set of all vertical lifts by \(\mathcal{L}(M_{2})\).
A vector field \(\overline{E}\) on \(M_{1}\times M_{2}\) can be written as \(\overline{E}=\overline{X}+\overline{U}\) with \(\overline{X}\in\mathcal{L}(M_{1})\) and \(\overline{U}\in\mathcal{L}(M_{2})\). **Lemma 2.1**.: _[_24_]_ _Let \(M=M_{1}\times_{f}M_{2}\) be a warped product manifold. For any \(E_{1},F_{1}\in\mathcal{L}(M_{1})\) and \(E_{2},F_{2}\in\mathcal{L}(M_{2})\),_ 1. \(\nabla_{E_{1}}F_{1}\) _is the lift of_ \(\nabla^{1}_{E_{1}}F_{1}\)_,_ 2. \(\nabla_{E_{1}}E_{2}=\nabla_{E_{2}}E_{1}=(E_{1}(f)/f)E_{2}\)_,_ 3. \(nor(\nabla_{E_{2}}F_{2})=-g_{M}(E_{2},F_{2})(D\ ln\ f)\)_,_ 4. \(tan(\nabla_{E_{2}}F_{2})\) _is the lift of_ \(\nabla^{2}_{E_{2}}F_{2}\)_._ Here \(\nabla,\nabla^{1}\) and \(\nabla^{2}\) denote the Riemannian connections on \(M,M_{1}\) and \(M_{2}\), respectively, and \(Df\) denotes the gradient of \(f\). **Corollary 2.1**.: _[_24_]_ _Let \(M=M_{1}\times_{f}M_{2}\) be a warped product manifold. Then the leaves \(M_{1}\times\{y\}\) and the fibers \(\{x\}\times M_{2}\) are totally geodesic and totally umbilical, respectively._ ### Riemannian submersions **Definition 2.2**.: _[_23_]_ _Let \((M^{m},g_{M})\) and \((N^{n},g_{N})\) be two Riemannian manifolds, where \(dim(M)=m\), \(dim(N)=n\) and \(m>n\). A Riemannian submersion \(F:M\to N\) is a surjective map of \(M\) onto \(N\) satisfying the following axioms:_ 1. _F has maximal rank._ 2. _The differential_ \(F_{*}\) _preserves the lengths of horizontal vectors._ For each \(b\in N\), \(F^{-1}(b)\) is a submanifold of \(M\) of dimension \((m-n)\), called a fiber of \(F\). A vector field on \(M\) is called vertical if it is always tangent to the fibers and horizontal if it is always orthogonal to the fibers.
The integrable distribution \(\mathcal{V}_{p}=kerF_{*p}\) is called the vertical distribution of the submersion \(F\), and the distribution \(\mathcal{H}_{p}=(\mathcal{V}_{p})^{\perp}\), which is complementary and orthogonal to \(\mathcal{V}\), is called the horizontal distribution. Thus, for every \(p\in M\), \(M\) has the following decomposition: \[T_{p}M=\mathcal{V}_{p}\oplus\mathcal{H}_{p}=\mathcal{V}_{p}\oplus\mathcal{V}_{p}^{\perp}. \tag{2.2}\] A vector field \(X\) on \(M\) is called basic if \(X\) is horizontal and \(F\)-related to a vector field \(X_{*}\) on \(N\), i.e., \(F_{*}X_{p}=X_{*F(p)}\) for all \(p\in M\). The geometry of Riemannian submersions is characterized by O'Neill's tensors [23] \(\mathcal{T}\) and \(\mathcal{A}\), defined for vector fields \(E\), \(F\) on \(M\) by \[\mathcal{A}_{E}F =\mathcal{H}\nabla_{\mathcal{H}E}\mathcal{V}F+\mathcal{V}\nabla_{\mathcal{H}E}\mathcal{H}F, \tag{2.3}\] \[\mathcal{T}_{E}F =\mathcal{H}\nabla_{\mathcal{V}E}\mathcal{V}F+\mathcal{V}\nabla_{\mathcal{V}E}\mathcal{H}F, \tag{2.4}\] where \(\nabla\) is the Levi-Civita connection of \(g_{M}\); the tensor \(\mathcal{T}\) acts as the second fundamental form of the fibers and the tensor \(\mathcal{A}\) determines the integrability of the horizontal distribution. A Riemannian submersion is called a Riemannian submersion with totally geodesic fibers if \(\mathcal{T}\) vanishes identically. A Riemannian submersion is called a Riemannian submersion with totally umbilical fibers if \[\mathcal{T}_{U}W=g_{M}(U,W)H \tag{2.5}\] for all \(U,W\in\Gamma(\mathcal{V})\), where \(H\) denotes the mean curvature vector field of the fibers.
### Conformal Submersion Let \((M,g_{M})\) and \((B,g_{B})\) be Riemannian manifolds and let \(F:M\to B\) be a smooth submersion; then \(F\) is called a conformal submersion if there is a positive function \(\lambda:M\rightarrow\mathbb{R}^{+}\) such that \[\lambda^{2}g_{M}(X,Y)=g_{B}(F_{*}X,F_{*}Y) \tag{2.6}\] for \(X,Y\in\Gamma((kerF_{*})^{\perp})\); the function \(\lambda\) is called the dilation. **Proposition 2.1**.: _[_17_]_ _Let \(\pi:(M^{m},g)\rightarrow(N^{n},h)\) be a conformal submersion with dilation \(\lambda\) and let \(X\), \(Y\) be horizontal vectors; then_ \[A_{X}Y=\frac{1}{2}\bigg{\{}\mathcal{V}[X,Y]-\lambda^{2}g(X,Y)\,\mathrm{grad}_{\mathcal{V}}\Big{(}\frac{1}{\lambda^{2}}\Big{)}\bigg{\}}. \tag{2.7}\] **Example 2.1**.: _Let \(F\) be the map defined by_ \[F:R^{4}\to R^{2};\quad(x_{1},x_{2},x_{3},x_{4})\rightarrow(e^{x_{3}}\sin x_{4},e^{x_{3}}\cos x_{4});\] _then it is a conformal submersion with \(\lambda=e^{x_{3}}\)._ ### Riemannian warped product submersion **Definition 2.3**.: _[_21_]_ _Let \(\phi_{i}\), \(i=1,2\), be Riemannian submersions from \(M_{i}\) to \(N_{i}\). If \(M=M_{1}\times_{f}M_{2}\) and \(N=N_{1}\times_{\rho}N_{2}\) are Riemannian warped product manifolds, then the map_ \[\phi=\phi_{1}\times\phi_{2}:M=M_{1}\times_{f}M_{2}\to N=N_{1}\times_{\rho}N_{2} \tag{2.8}\] _given by \((\phi_{1}\times\phi_{2})(p_{1},p_{2})=(\phi_{1}(p_{1}),\phi_{2}(p_{2}))\) is a Riemannian submersion. Riemannian submersions of this kind are called Riemannian warped product submersions._ A Riemannian warped product submersion \(\phi=(\phi_{1},\phi_{2}):M=M_{1}\times_{f}M_{2}\to N=N_{1}\times_{\rho}N_{2}\) has \(M_{i}\)-minimal fibers if \(H_{i}\) vanishes identically for \(i=1,2\), and mixed totally geodesic fibers if its second fundamental form \(T\) satisfies \(T(E,F)=0\) for any \(E\in\Gamma(\mathcal{V}_{1})\) and \(F\in\Gamma(\mathcal{V}_{2})\).
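The conformal condition (2.6) for the map of Example 2.1 can also be checked numerically: the kernel of \(F_{*}\) is spanned by \(\partial_{x_{1}},\partial_{x_{2}}\), so the horizontal frame is \(\partial_{x_{3}},\partial_{x_{4}}\), and a finite-difference pushforward should scale the Euclidean inner products of these vectors by \(\lambda^{2}=e^{2x_{3}}\). A minimal sketch; the sample point and step size are arbitrary choices for illustration:

```python
import math

def F(x):
    # Example 2.1: F(x1, x2, x3, x4) = (e^{x3} sin x4, e^{x3} cos x4)
    return (math.exp(x[2]) * math.sin(x[3]), math.exp(x[2]) * math.cos(x[3]))

def pushforward(x, v, h=1e-6):
    # central finite-difference approximation of F_* v at the point x
    xp = [xi + h * vi for xi, vi in zip(x, v)]
    xm = [xi - h * vi for xi, vi in zip(x, v)]
    return tuple((a - b) / (2 * h) for a, b in zip(F(xp), F(xm)))

x = (0.3, -1.2, 0.5, 0.9)                  # arbitrary sample point
lam2 = math.exp(2 * x[2])                  # lambda^2 = e^{2 x3}
horizontal = [(0, 0, 1, 0), (0, 0, 0, 1)]  # frame of (ker F_*)^perp

for X in horizontal:
    for Y in horizontal:
        lhs = sum(a * b for a, b in zip(pushforward(x, X), pushforward(x, Y)))
        rhs = lam2 * sum(a * b for a, b in zip(X, Y))
        assert abs(lhs - rhs) < 1e-4       # g_B(F_* X, F_* Y) = lambda^2 g_M(X, Y)
```

The off-diagonal case confirms that angles between horizontal vectors are preserved, while lengths are uniformly rescaled by the dilation.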
## 3 Conformal warped product submersion In this section, we introduce the notion of conformal warped product submersion, which is a generalization of Riemannian warped product submersion. It is a submersion between warped product manifolds that preserves the angles between the horizontal vectors. Further, the expressions for the fundamental tensors of submersion are derived for conformal warped product submersions. **Definition 3.1**.: _Let \(M_{1},M_{2}\) be Riemannian manifolds and \(\lambda_{1},\lambda_{2}\) be smooth positive functions on \(M_{1}\) and \(M_{2}\), respectively. If \(\lambda\) is a smooth positive function on \(M_{1}\times M_{2}\), i.e.,_ \[\lambda:M_{1}\times M_{2}\to R^{+} \tag{3.1}\] _such that \(\lambda|_{M_{1}}=\lambda_{1}\) and \(\lambda|_{M_{2}}=\lambda_{2}\), then \(\lambda\) is called the lift function of \(\lambda_{1}\) and \(\lambda_{2}\)._
It is a submersion between warped product manifolds that preserve the angles between the horizontal vectors. Further, the expressions for fundamental tensor of submersion are derived for conformal warped product submersion. **Definition 3.3**.: _Let \(M_{1},M_{2}\) be Riemannian manifolds and \(\lambda_{1},\lambda_{2}\) be smooth positive functions on \(M_{1}\) and \(M_{2},\) repectively. If \(\lambda\) is a smooth positive function on \(M_{1}\times\ M_{2}\) i.e.,_ \[\lambda:M_{1}\times\ M_{2}\to R^{+} \tag{3.1}\] _such that, \(\lambda|_{M_{1}}\)= \(\lambda_{1}\) and \(\lambda|_{M_{2}}\)= \(\lambda_{2},\) then \(\lambda\) is called the lift function of \(\lambda_{1}\) and \(\lambda_{2}.\)_ **Proposition 3.1**.: _Let \(\phi_{i}:M_{i}\to N_{i}\), \(i=\) 1, 2, be conformal submersions with dilation \(\lambda_{i}\), from Riemannian manifolds \(M_{i}\) to \(N_{i}\). If \(M=M_{1}\times_{f}\ M_{2}\) and \(N=N_{1}\times_{\rho}\ N_{2}\) are Riemannian warped product manifolds, then the map_ \[\phi=\phi_{1}\times\phi_{2}:M=M_{1}\times_{f}\ M_{2}\to N=N_{1}\times_{\rho}\ N_{2} \tag{3.2}\] _given by \((\phi_{1}\times\phi_{2})(p_{1},p_{2})=(\phi_{1}(p_{1}),\phi_{2}(p_{2}))\) is a conformal submersion with dilation \(\lambda\), where \(\lambda\) is the lift of \(\lambda_{1}\) and \(\lambda_{2}\) to \(M_{1}\times\ M_{2}.\) We call this submersion as conformal warped product submersion._ Proof.: Let \(X_{1},Y_{1}\) and \(X_{2},Y_{2}\) be horizontal vector fields on \(M_{1}\) and \(M_{2}\) repectively. As \(\phi_{i}:M_{i}\to N_{i}\) is conformal submersion for \(i=1,2\). So, from eq.(2.6) we have \[g_{N_{i}}(\phi_{i*}X_{i},\phi_{i*}Y_{i})=\lambda_{i}^{2}g_{M_{i}}(X_{i},Y_{i}). \tag{3.3}\] Since, \(\phi_{i}\) is a submersion from \(M_{i}\) to \(N_{i}\) for \(i=1,2\), the map \(\phi:M=M_{1}\times_{f}\ M_{2}\to N=N_{1}\times_{\rho}\ N_{2}\) is a submersion. 
The tangent space of M has the following decomposition for \(p=(p_{1},p_{2})\in M\), where \(p_{1}\in M_{1}\) and \(p_{2}\in M_{2}\), \[T_{(p_{1},p_{2})}\big{(}M_{1}\times M_{2}\big{)}=T_{(p_{1},p_{2})}\big{(}M_{1}\times\{p_{2}\}\big{)}\oplus T_{(p_{1},p_{2})}\big{(}\{p_{1}\}\times M_{2}\big{)}, \tag{3.4}\] \[T_{(p_{1},p_{2})}\big{(}M_{1}\times M_{2}\big{)}=\mathcal{H}_{(p_{1},p_{2})}\oplus\mathcal{V}_{(p_{1},p_{2})}, \tag{3.5}\] where \(\mathcal{H},\mathcal{V}\) denote the horizontal and vertical subspaces of M, respectively. Also, \(ker(\phi_{1}\times\phi_{2})_{*}=ker(\phi_{1*})\times ker(\phi_{2*}).\) Using (3.4) and (3.5), we have \(T_{(p_{1},p_{2})}\big{(}M_{1}\times\{p_{2}\}\big{)}=\big{(}(\mathcal{H}_{1})_{p_{1}}\times\{p_{2}\}\big{)}\oplus\big{(}(\mathcal{V}_{1})_{p_{1}}\times\{p_{2}\}\big{)},\) \(T_{(p_{1},p_{2})}\big{(}\{p_{1}\}\times M_{2}\big{)}=\big{(}\{p_{1}\}\times(\mathcal{H}_{2})_{p_{2}}\big{)}\oplus\big{(}\{p_{1}\}\times(\mathcal{V}_{2})_{p_{2}}\big{)}.\) Hence, we get the decomposition of the vertical and horizontal subspaces of M as \[\mathcal{V}_{(p_{1},p_{2})} =\big{(}(\mathcal{V}_{1})_{p_{1}}\times\{p_{2}\}\big{)}\oplus\big{(}\{p_{1}\}\times(\mathcal{V}_{2})_{p_{2}}\big{)},\] \[\mathcal{H}_{(p_{1},p_{2})} =\big{(}(\mathcal{H}_{1})_{p_{1}}\times\{p_{2}\}\big{)}\oplus\big{(}\{p_{1}\}\times(\mathcal{H}_{2})_{p_{2}}\big{)}.\] For a horizontal vector field \(X_{i}^{\mathcal{H}}\in\Gamma(\mathcal{H}_{i})\), the lift of \(X_{i}^{\mathcal{H}}\) to M is the vector field \((\overline{X_{i}^{\mathcal{H}}})=(\overline{X_{i}})^{\mathcal{H}}\). Similarly, for a vertical vector field \(X_{i}^{\mathcal{V}}\in\Gamma(\mathcal{V}_{i})\), the lift of \(X_{i}^{\mathcal{V}}\) to M is the vector field \((\overline{X_{i}^{\mathcal{V}}})=(\overline{X_{i}})^{\mathcal{V}}\). For brevity, both a vector field on \(M_{i}\) and its lift to M will be denoted by the same notation; the meaning will be clear from the context.
Now, in order to show that the submersion \(\phi:M\to N\) is a conformal submersion, we proceed as follows: \[g_{N}(\phi_{*}(X_{1},X_{2}),\phi_{*}(Y_{1},Y_{2})) =g_{N_{1}}(\phi_{1*}(X_{1}),\phi_{1*}(Y_{1}))+\rho^{2}(\phi_{1}(p_{1}))g_{N_{2}}(\phi_{2*}(X_{2}),\phi_{2*}(Y_{2}))\] \[=\lambda_{1}^{2}(p_{1})g_{M_{1}}(X_{1},Y_{1})+f^{2}(p_{1})\,\lambda_{2}^{2}(p_{2})g_{M_{2}}(X_{2},Y_{2})\] \[=\lambda_{1}^{2}(\pi_{1}(p))g_{M_{1}}(X_{1},Y_{1})+f^{2}(p_{1})\,\lambda_{2}^{2}(\pi_{2}(p))g_{M_{2}}(X_{2},Y_{2})\] \[=(\lambda_{1}\circ\pi_{1})^{2}g_{M_{1}}(X_{1},Y_{1})+f^{2}(p_{1})\,(\lambda_{2}\circ\pi_{2})^{2}g_{M_{2}}(X_{2},Y_{2})\] \[=\lambda^{2}(g_{M_{1}}(X_{1},Y_{1})+f^{2}g_{M_{2}}(X_{2},Y_{2}))\] \[=\lambda^{2}g_{M}((X_{1},X_{2}),(Y_{1},Y_{2})),\] where \(\lambda\) is the lift function of \(\lambda_{1}\) and \(\lambda_{2}\). Therefore, \(\phi:M=M_{1}\times_{f}\ M_{2}\to N=N_{1}\times_{\rho}\ N_{2}\) is a conformal warped product submersion with dilation \(\lambda\). **Corollary 3.1**.: _If the dilation \(\lambda\equiv 1\), then the conformal warped product submersion is a Riemannian warped product submersion. Thus, Riemannian warped product submersion is a particular case of conformal warped product submersion._ **Corollary 3.2**.: _If \(\phi\) is the conformal warped product submersion with dilation \(\lambda=e^{-\sigma};\sigma\in C^{\infty}(M),\) then the metric \(G_{M}\) given by \(G_{M}=e^{-2\sigma}g_{M}\) is the unique metric on \(M=M_{1}\times_{f}M_{2}\) conformal with \(g_{M}\) with the property that_ \[\varphi:(M,G_{M})\to(N,g_{N}), \tag{3.6}\] _where \(\varphi\) is a Riemannian submersion, defined by \(\varphi(p)=\phi(p)\), for \(p\in M.\)_ Proof.: Let \(X_{1},Y_{1}\) and \(X_{2},Y_{2}\) be horizontal vector fields on \(M_{1}\) and \(M_{2},\) respectively. Since \(\phi\) is the conformal warped product submersion with dilation \(\lambda,\) by Prop.
3.1, we have \[\begin{split} g_{N}(\phi_{*}(X_{1},X_{2}),\phi_{*}(Y_{1},Y_{2}))&=\lambda^{2}g_{M}\big{(}(X_{1},X_{2}),(Y_{1},Y_{2})\big{)}\\ &=e^{-2\sigma}g_{M}\big{(}(X_{1},X_{2}),(Y_{1},Y_{2})\big{)}\\ &=e^{-2\sigma}e^{2\sigma}G_{M}\big{(}(X_{1},X_{2}),(Y_{1},Y_{2})\big{)}\\ &=G_{M}\big{(}(X_{1},X_{2}),(Y_{1},Y_{2})\big{)}.\end{split}\] Hence, we get the assertion. **Theorem 3.1**.: _Let \(\phi=(\phi_{1},\phi_{2}):M=M_{1}\times_{f}M_{2}\to N=N_{1}\times_{\rho}N_{2}\) be a conformal warped product submersion with dilation \(\lambda\) between two Riemannian warped product manifolds. If \(X_{i},Y_{i}\in\Gamma(\mathcal{H}_{i})\), \(i=1,2\), on \(M_{i},\) then we have_ 1. \(A(X_{1},Y_{1})=A_{1}(X_{1},Y_{1})\,=\frac{1}{2}\Big{\{}\mathcal{V}[X_{1},Y_{1}]-\lambda_{1}^{2}g_{M_{1}}(X_{1},Y_{1})grad_{\mathcal{V}}\Big{(}\frac{1}{\lambda_{1}^{2}}\Big{)}\Big{\}},\) 2. \(A(X_{2},Y_{2})=\frac{1}{2}\Big{\{}A_{2}(X_{2},Y_{2})-A_{2}(Y_{2},X_{2})-\lambda_{2}^{2}g_{M_{2}}(X_{2},Y_{2})grad_{\mathcal{V}}\Big{(}\frac{f^{2}}{\lambda_{2}^{2}}\Big{)}\Big{\}}.\) Proof.: Extend \(X_{i},Y_{i}\) to basic vector fields.
In view of (2.7), we have \[\begin{split} A(X_{1},Y_{1})&=\frac{1}{2}\big{\{}\mathcal{V}[X_{1},Y_{1}]-grad_{\mathcal{V}}(g_{M}(X_{1},Y_{1}))\big{\}}\\ &=\frac{1}{2}\big{\{}\mathcal{V}[X_{1},Y_{1}]-grad_{\mathcal{V}}(g_{M_{1}}(X_{1},Y_{1}))\big{\}}.\end{split} \tag{3.7}\] Combining (2.6) and (3.7), we get \[\begin{split} A(X_{1},Y_{1})&=\frac{1}{2}\Big{\{}\mathcal{V}[X_{1},Y_{1}]-grad_{\mathcal{V}}\Big{(}\frac{1}{\lambda_{1}^{2}}g_{N_{1}}(\phi_{1*}(X_{1}),\phi_{1*}(Y_{1}))\Big{)}\Big{\}}\\ &=\frac{1}{2}\Big{\{}\mathcal{V}[X_{1},Y_{1}]-g_{N_{1}}\big{(}\phi_{1*}(X_{1}),\phi_{1*}(Y_{1})\big{)}grad_{\mathcal{V}}\Big{(}\frac{1}{\lambda_{1}^{2}}\Big{)}\Big{\}}\\ &=\frac{1}{2}\Big{\{}\mathcal{V}[X_{1},Y_{1}]-\lambda_{1}^{2}g_{M_{1}}\big{(}X_{1},Y_{1}\big{)}grad_{\mathcal{V}}\Big{(}\frac{1}{\lambda_{1}^{2}}\Big{)}\Big{\}}\\ &=A_{1}(X_{1},Y_{1}),\end{split}\] which proves (1). For (2), we proceed as follows \[\begin{split} A(X_{2},Y_{2})&=\frac{1}{2}\{\mathcal{V}[X_{2},Y_{2}]-grad_{\mathcal{V}}\big{(}g_{M}(X_{2},Y_{2})\big{)}\}\\ &=\frac{1}{2}\{\mathcal{V}[X_{2},Y_{2}]-grad_{\mathcal{V}}\big{(}f^{2}g_{M_{2}}(X_{2},Y_{2})\big{)}\}.\end{split} \tag{3.8}\] Combining (2.6) and (3.8), we get \[A(X_{2},Y_{2})=\frac{1}{2}\Big{\{}\mathcal{V}(\nabla_{X_{2}}Y_{2}-\nabla_{Y_{2}}X_{2})-grad_{\mathcal{V}}\Big{(}\frac{f^{2}}{\lambda_{2}^{2}}g_{N_{2}}(\phi_{2*}(X_{2}),\phi_{2*}(Y_{2}))\Big{)}\Big{\}}.\] Using Lemma 2.1 and the above equation, we obtain \[\begin{split} A(X_{2},Y_{2})&=\frac{1}{2}\Big{\{}\mathcal{V}((\nabla_{X_{2}}^{2}Y_{2}-g_{M}(X_{2},Y_{2})(grad\ ln\ f))\\ &-(\nabla_{Y_{2}}^{2}X_{2}-g_{M}(Y_{2},X_{2})(grad\ ln\ f)))\\ &-g_{N_{2}}(\phi_{2*}(X_{2}),\phi_{2*}(Y_{2}))grad_{\mathcal{V}}\Big{(}\ \frac{f^{2}}{\lambda_{2}^{2}}\Big{)}\Big{\}}\\ &=\frac{1}{2}\Big{\{}\mathcal{V}(\nabla_{X_{2}}^{2}Y_{2}-\nabla_{Y_{2}}^{2}X_{2})-\lambda_{2}^{2}g_{M_{2}}(X_{2},Y_{2})grad_{\mathcal{V}}\Big{(}\ \frac{f^{2}}{\lambda_{2}^{2}}\Big{)}\Big{\}}\\
&=\frac{1}{2}\Big{\{}A_{2}(X_{2},Y_{2})-A_{2}(Y_{2},X_{2})-\lambda_{2}^{2}g_{M_{2}}(X_{2},Y_{2})grad_{\mathcal{V}}\Big{(}\ \frac{f^{2}}{\lambda_{2}^{2}}\Big{)}\Big{\}},\end{split}\] which proves (2). In our study, the concept of Riemannian warped product submersion is generalized to the conformal case, which leads to a number of interesting and significant questions for further investigation. ## Acknowledgments The first author is thankful to UGC for providing financial assistance in terms of JRF scholarship vide NTA Ref. No.: 201610070797(CSIR-UGC NET June 2020). The third author is thankful to the Department of Science and Technology (DST), Government of India, for providing financial assistance in terms of FIST project (TPN-69301) vide the letter with Ref. No.: (SR/FST/MS-1/2021/104).
2303.08896
SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models
Generative Large Language Models (LLMs) such as GPT-3 are capable of generating highly fluent responses to a wide variety of user prompts. However, LLMs are known to hallucinate facts and make non-factual statements which can undermine trust in their output. Existing fact-checking approaches either require access to the output probability distribution (which may not be available for systems such as ChatGPT) or external databases that are interfaced via separate, often complex, modules. In this work, we propose "SelfCheckGPT", a simple sampling-based approach that can be used to fact-check the responses of black-box models in a zero-resource fashion, i.e. without an external database. SelfCheckGPT leverages the simple idea that if an LLM has knowledge of a given concept, sampled responses are likely to be similar and contain consistent facts. However, for hallucinated facts, stochastically sampled responses are likely to diverge and contradict one another. We investigate this approach by using GPT-3 to generate passages about individuals from the WikiBio dataset, and manually annotate the factuality of the generated passages. We demonstrate that SelfCheckGPT can: i) detect non-factual and factual sentences; and ii) rank passages in terms of factuality. We compare our approach to several baselines and show that our approach has considerably higher AUC-PR scores in sentence-level hallucination detection and higher correlation scores in passage-level factuality assessment compared to grey-box methods.
Potsawee Manakul, Adian Liusie, Mark J. F. Gales
2023-03-15T19:31:21Z
http://arxiv.org/abs/2303.08896v3
# SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models ###### Abstract Generative Large Language Models (LLMs) such as GPT-3 are capable of generating highly fluent responses to a wide variety of user prompts. However, LLMs are known to hallucinate facts and make non-factual statements which can undermine trust in their output. Existing fact-checking approaches either require access to the output probability distribution (which may not be available for systems such as ChatGPT) or external databases that are interfaced via separate, often complex, modules. In this work, we propose "SelfCheckGPT", a simple sampling-based approach that can be used to fact-check black-box models in a zero-resource fashion, i.e. without an external database. SelfCheckGPT leverages the simple idea that if an LLM has knowledge of a given concept, sampled responses are likely to be similar and contain consistent facts. However, for hallucinated facts, stochastically sampled responses are likely to diverge and contradict one another. We investigate this approach by using GPT-3 to generate passages about individuals from the WikiBio dataset, and manually annotate the factuality of the generated passages. We demonstrate that SelfCheckGPT can: i) detect non-factual and factual sentences; and ii) rank passages in terms of factuality. We compare our approach to several baselines and show that in sentence hallucination detection, our approach has AUC-PR scores comparable to or better than grey-box methods, while SelfCheckGPT is best at passage factuality assessment.1 Footnote 1: Code and dataset can be found on the project page at [https://github.com/potsawee/selfcheckgpt](https://github.com/potsawee/selfcheckgpt). ## 1 Introduction Large Language Models (LLMs) such as GPT-3 Brown et al. (2020), PaLM Chowdhery et al. (2022), and Chinchilla Hoffmann et al. (2022) are capable of generating highly fluent and realistic responses to a variety of user prompts.
They have been used in many applications such as automatic tools to draft reports, virtual assistants that retrieve information, summarization systems, as well as a multitude of other generative applications. Despite the convincing and realistic nature of LLM-generated texts, a concern with LLMs is their tendency to hallucinate facts and make up information. A method for hallucination detection is to leverage existing intrinsic uncertainty metrics such as token probability or entropy, since these metrics can be used to determine the parts of the output sequence the system is least certain of Yuan et al. (2021); Fu et al. (2023). However, all current uncertainty metrics require access to the output token-level probability distribution information that may not necessarily be available to users, e.g. when systems are accessed through limited external APIs such as ChatGPT. Further, there is an active field of fact-verification where evidence is retrieved from an external database to assess the veracity of a claim Thorne et al. (2018); Guo et al. (2022). However, facts can only be assessed relative to the knowledge present in the database. Though corpora such as Wikipedia can cover a great deal of general knowledge and serve as a useful database for fact verification, hallucination is observed over a wide range of tasks beyond pure fact verification. Figure 1: SelfCheckGPT with Question Answering. For example, summaries from automatic systems can contain information not present in the context (Kryscinski et al., 2019, 2020; Maynez et al., 2020). In this paper, we propose SelfCheckGPT, a simple sampling-based approach that can detect whether responses generated by LLMs are hallucinated or factual. SelfCheckGPT only uses sampled responses and can therefore be used on black-box models, while it also operates in a zero-resource fashion, i.e. with no external database.
The motivating idea of SelfCheckGPT is that when an LLM knows a given concept well, the sampled responses are likely to be similar and contain consistent facts. However, for hallucinated facts, stochastically sampled responses are likely to diverge and may completely contradict one another. By sampling multiple responses from an LLM, one can measure information consistency between the different responses and determine which statements are factual and which have been hallucinated. Three variants of SelfCheckGPT for measuring informational consistency are considered: BERTScore, question-answering, and n-gram. Through analysis of annotated articles generated by GPT-3, we show that SelfCheckGPT can effectively determine the factuality of documents in a black-box, zero-resource manner. ## 2 Related Work ### Large Language Models There has been rapid growth in the large language model (LLM) literature, with larger and better models being constantly released (Chowdhery et al., 2022). These models are commonly used as the backbone for a range of NLP tasks (Wang et al., 2018). Traditionally, these LLMs are fine-tuned to a specific task and/or domain (Devlin et al., 2019; Radford et al., 2019; Raffel et al., 2020). However, a fascinating finding is that as models scale up, they acquire the ability to solve a wide range of natural language tasks in a zero-shot fashion (Brown et al., 2020; Wei et al., 2022). ### Hallucination of Large Language Models Hallucination has been studied in text generation tasks, including summarization (Huang et al., 2021) and dialogue generation (Shuster et al., 2021). A survey of hallucination in a variety of natural language generation tasks has been conducted (Ji et al., 2023). Further, Liu et al. (2022) compiled a hallucination detection dataset, but the texts were obtained by perturbing factual texts; thus, this dataset may not reflect actual LLM hallucination.
Recently, Azaria and Mitchell (2023) trained a multi-layer perceptron classifier using the LLM's hidden representations as the input to predict the truthfulness of a sentence. This approach requires labelled data for supervised training as well as the internal states of the LLM, which may not be available through APIs. Another recent approach is self-evaluation (Kadavath et al., 2022), a method where the LLM is prompted to assess its previous prediction, e.g. the probability that its generated response/answer is true. ### Sequence Level Uncertainty Estimation Token probabilities have been used as an indication of model certainty. For example, OpenAI's GPT-3 web interface allows users to display token probabilities, as shown in Figure 2. Additionally, uncertainty estimation based on aleatoric and epistemic uncertainty for autoregressive generation has been studied (Xiao and Wang, 2021; Malinin and Gales, 2021). Further, conditional language model scores have been used to evaluate properties of texts (Yuan et al., 2021; Fu et al., 2023). Recently, semantic uncertainty has been proposed to address uncertainty in free-form generation tasks, where probabilities are attached to meanings of text instead of tokens (Kuhn et al., 2023). ### Fact Verification Existing fact-verification approaches follow a multi-stage pipeline of claim detection, evidence retrieval and verdict prediction (Guo et al., 2022; Zhong et al., 2020). Such methods, however, require access to external databases and can have considerable inference costs. Figure 2: Example of OpenAI’s GPT-3 interface with token probabilities displayed. ## 3 Grey-Box Factuality Assessment This section will introduce methods that can be used to determine the factuality of LLM responses in a zero-resource setting when one has full access to output distributions.2 We will use 'factual' to define when statements are grounded in valid information, i.e.
when hallucinations are avoided, and 'zero-resource' when no external database is used. Footnote 2: Alternatively, white-box approaches, such as the method in Azaria and Mitchell (2023), require access to full internal states of the LLM in addition to output distributions. As a result, they are less practical and not considered in this work. ### Uncertainty-based Assessment **Motivation**. To consider how the factuality of a generated response can be determined in a zero-resource setting, we consider LLM pre-training. During pre-training, the model is trained with next-word prediction over massive corpora of textual data. This gives the model a strong understanding of language (Jawahar et al., 2019; Raffel et al., 2020), powerful contextual reasoning (Zhang et al., 2020), as well as world knowledge (Liusie et al., 2022). Consider the input "Lionel Messi is a _". Since Messi is a world-famous athlete who may have appeared multiple times in pre-training, the LLM is likely to know who Messi is. Therefore, given the context, the token "footballer" may be assigned a very high probability while some other professions such as "carpenter" will be considered very improbable. However, for the input "John Smith is a _", the system may be unsure of how the sentence should continue, and have a flat probability distribution. During decoding, this will lead to a random word being generated, causing the system to hallucinate. This insight allows us to realize the connection between uncertainty metrics and factuality. Factual sentences are likely to contain tokens with higher likelihood and lower entropy, while hallucinations are likely to come from positions with flat probability distributions with high uncertainty.
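As a toy illustration of this intuition (our example, not from the paper), the entropy of a peaked next-token distribution is far lower than that of a flat one:

```python
import math

def entropy(dist):
    """Shannon entropy (natural log) of a next-token distribution."""
    return -sum(p * math.log(p) for p in dist if p > 0)

# "Lionel Messi is a ___": the model concentrates mass on "footballer".
peaked = [0.96, 0.01, 0.01, 0.01, 0.01]
# "John Smith is a ___": the model is unsure, so mass is spread out.
flat = [0.2, 0.2, 0.2, 0.2, 0.2]

# Flat (uncertain) positions carry much higher entropy.
assert entropy(peaked) < entropy(flat)
```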
**Token-level Probability \(p\)** Given the LLM's response \(R\), let \(i\) denote the \(i\)-th sentence in \(R\), \(j\) denote the \(j\)-th token in the \(i\)-th sentence, \(J\) be the number of tokens in the sentence, and \(p_{ij}\) be the probability of the word generated by the LLM at the \(j\)-th token of the \(i\)-th sentence. Two probability metrics are used: \[\text{Avg}(-\log p) =-\frac{1}{J}\sum_{j}\log p_{ij} \tag{1}\] \[\text{Max}(-\log p) =\underset{j}{\text{max}}\;(-\log p_{ij}) \tag{2}\] \(\text{Max}(-\log p)\) measures the sentence's likelihood by assessing the _least_ likely token in the sentence. **Entropy \(\mathcal{H}\)** The entropy of the output distribution is: \[\mathcal{H}_{ij}=-\sum_{\tilde{w}\in\mathcal{W}}p_{ij}(\tilde{w})\log p_{ij}(\tilde{w}) \tag{3}\] where \(p_{ij}(\tilde{w})\) is the probability of the word \(\tilde{w}\) being generated at the \(j\)-th token of the \(i\)-th sentence, and \(\mathcal{W}\) is the set of all possible words in the vocabulary. Similar to the probability-based metrics, two entropy-based metrics are used: \[\text{Avg}(\mathcal{H}) =\frac{1}{J}\sum_{j}\mathcal{H}_{ij} \tag{4}\] \[\text{Max}(\mathcal{H}) =\max_{j}\left[\mathcal{H}_{ij}\right] \tag{5}\] ## 4 Black-Box Factuality Assessment **Motivation**. A drawback of the previous grey-box methods is that they require output token-level probabilities. Though this may seem a reasonable requirement, for massive LLMs only available through limited API calls, such token-level information might not be available (such as with ChatGPT). Therefore, we consider black-box approaches because they remain applicable even when only text-based responses can be derived from the LLM. **Proxy LLMs** A simple baseline to consider is using a proxy LLM, i.e. another LLM that we have full access to, such as LLaMA (Touvron et al., 2023).
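Whichever model supplies the token-level distributions, the original LLM or a proxy, the four grey-box metrics of Equations 1-5 reduce to a few lines. A minimal sketch (function and variable names are ours; the convention that the first entry of each distribution is the generated token's probability is an assumption for illustration):

```python
import math

def sentence_uncertainty(token_dists):
    """Compute the grey-box metrics (Eqs. 1-5) for one sentence.

    token_dists: one probability distribution per generated token; by
    convention here, the first entry is the probability of the token
    that was actually generated.
    """
    neg_logp = [-math.log(d[0]) for d in token_dists]        # -log p_ij
    ent = [-sum(p * math.log(p) for p in d if p > 0)         # H_ij (Eq. 3)
           for d in token_dists]
    return {
        "Avg(-logp)": sum(neg_logp) / len(neg_logp),  # Eq. 1
        "Max(-logp)": max(neg_logp),                  # Eq. 2
        "Avg(H)": sum(ent) / len(ent),                # Eq. 4
        "Max(H)": max(ent),                           # Eq. 5
    }
```

In practice the distributions would be truncated, e.g. to the top-5 tokens, as in the entropy measures reported later.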
With no access to the full outputs of the LLM generating the text, a proxy LLM could be used to approximate the output token-level probabilities. In the next section, we propose SelfCheckGPT, which is also a black-box approach. ## 5 SelfCheckGPT **Notation**. Let \(R\) refer to the LLM response drawn from a given user query. SelfCheckGPT operates by drawing a further \(N\) stochastic LLM response samples \(\{S^{1},S^{2},..,S^{n},...,S^{N}\}\) from the same query, followed by measuring the consistency between the response and the stochastic samples. As a hallucination score of the \(i\)-th sentence, we design SelfCheckGPT \(\mathcal{S}(i)\) such that \(\mathcal{S}(i)\in[0.0,1.0]\) and \(\mathcal{S}(i)\to 1.0\) if the \(i\)-th sentence is hallucinated, and \(\mathcal{S}(i)\to 0.0\) if it is grounded in valid information. ### SelfCheckGPT with BERTScore Let \(\mathcal{B}(.,.)\) denote the BERTScore between two sentences. SelfCheckGPT with BERTScore computes, for each drawn sample, the BERTScore between a sentence and the most similar sentence in that sample, and averages over the samples: \[\mathcal{S}_{\text{BERT}}(i)=1-\frac{1}{N}\sum_{n=1}^{N}\underset{k}{\text{max}}\left(\mathcal{B}(r_{i},s_{k}^{n})\right) \tag{6}\] where \(r_{i}\) represents the \(i\)-th sentence in \(R\) and \(s_{k}^{n}\) represents the \(k\)-th sentence in the \(n\)-th sample \(S^{n}\). This way, if the information in a sentence appears in many drawn samples, one may assume that the information is factual, whereas if the statement appears in no other sample, it is likely a hallucination. ### SelfCheckGPT with Question Answering Based on the idea that information consistency could be assessed using question answering (QA), we apply the automatic multiple-choice question answering generation (MQAG) framework [14] to SelfCheckGPT. MQAG assesses consistency by generating multiple-choice questions that an answering system can independently answer given each passage.
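Equation 6 above can be sketched with any sentence-similarity function standing in for BERTScore (the `similarity` callable here is a hypothetical stand-in we inject for illustration; the paper uses BERTScore itself):

```python
def selfcheck_bert(response_sents, sample_sents, similarity):
    """Eq. 6: S_BERT(i) = 1 - mean_n max_k similarity(r_i, s_k^n).

    response_sents: the sentences r_i of the main response R
    sample_sents: N samples, each given as a list of sentences s_k^n
    similarity: callable in [0, 1]; stand-in for BERTScore B(., .)
    """
    scores = []
    for r_i in response_sents:
        # For each sample, keep the best-matching sentence, then average.
        per_sample_max = [max(similarity(r_i, s_k) for s_k in sample)
                          for sample in sample_sents]
        scores.append(1.0 - sum(per_sample_max) / len(per_sample_max))
    return scores  # -> 1.0 suggests hallucination, -> 0.0 suggests support
```

A sentence supported by most samples gets a score near 0; a sentence no sample corroborates gets a score near 1.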
If facts on consistent concepts are queried, the answering system is expected to predict similar answers. The MQAG framework consists of a question-answer generation system G1, distractor generation system G2, and answering system A. For the sentence \(r_{i}\) in the response \(R\), we draw questions \(q\), associated answers \(a\), and distractors \(\mathbf{o}_{\backslash a}\) as follows: \[q,a\sim P_{\texttt{G1}}(q,a|r_{i});\ \ \mathbf{o}_{\backslash a}\sim P_{\texttt{G2}}(\mathbf{o}_{\backslash a}|q,a,R) \tag{7}\] where \(\mathbf{o}=\{a,\mathbf{o}_{\backslash a}\}=\{o_{1},...,o_{4}\}\). To filter out bad (e.g. unanswerable) questions, we define an answerability score [14]: \[\alpha=P_{\texttt{U}}(\text{answerable}|q,\text{context}) \tag{8}\] where the context is either the response \(R\) or sampled passages \(S^{n}\), and \(\alpha\to 0.0\) for unanswerable and \(\alpha\to 1.0\) for answerable. We use \(\alpha\) to filter out unanswerable questions which have \(\alpha\) lower than a threshold. Subsequently, we use the answering system A to answer all answerable questions: \[a_{R} =\underset{k}{\text{argmax}}\left[P_{\texttt{A}}(o_{k}|q,R,\mathbf{o})\right] \tag{9}\] \[a_{S^{n}} =\underset{k}{\text{argmax}}\left[P_{\texttt{A}}(o_{k}|q,S^{n},\mathbf{o})\right] \tag{10}\] We compare whether \(a_{R}\) is equal to \(a_{S^{n}}\) for all samples \(\{S^{1},...,S^{N}\}\), yielding the number of matches \(N_{\texttt{m}}\) and the number of not-matches \(N_{\texttt{n}}\). Subsequently, a simple inconsistency score for the \(i\)-th sentence and question \(q\) based on the match/not-match counts is calculated: \(\mathcal{S}_{\text{QA}}(i,q)=\frac{N_{\texttt{n}}}{N_{\texttt{m}}+N_{\texttt{n}}}\). To take into account the number of answerable questions (i.e.
the evidence to assess the sentence), we use Bayes' theorem (derivation provided in Appendix B) to improve the simple score above to \[\mathcal{S}_{\text{QA}}(i,q)=\frac{\gamma_{2}^{N_{\texttt{n}}^{\prime}}}{\gamma_{1}^{N_{\texttt{m}}^{\prime}}+\gamma_{2}^{N_{\texttt{n}}^{\prime}}} \tag{11}\] where \(N_{\texttt{m}}^{\prime}\) is the effective match count, \(N_{\texttt{n}}^{\prime}\) is the effective mismatch count, and \(\gamma_{1}\) and \(\gamma_{2}\) are defined in Appendix B. Ultimately, SelfCheckGPT with QA is the average of the inconsistency scores across \(q\): \[\mathcal{S}_{\text{QA}}(i)=\mathbb{E}_{q}\left[\mathcal{S}_{\text{QA}}(i,q)\right] \tag{12}\] ### SelfCheckGPT with n-gram Given samples \(\{S^{1},S^{2},...,S^{N}\}\) generated by a LLM, one could train a new language model using these samples to approximate the LLM. As \(N\) gets larger, this new language model is closer to the LLM generating the response samples. Therefore, we can approximate the LLM's token probabilities using the newly trained language model. In practice, the number of samples \(N\) is limited due to time and/or cost constraints. Consequently, we train a simple n-gram model using the samples \(\{S^{1},...,S^{N}\}\) and the main response \(R\) (which will be assessed). We note that including \(R\) in training the n-gram model can be considered a smoothing method where the count of each token in \(R\) is increased by 1. Then, we compute the average of log-probabilities on the response \(R\), \[\mathcal{S}_{\text{n-gram}}^{\text{Avg}}(i)=-\frac{1}{J}\sum_{j}\log\tilde{p}_{ij} \tag{13}\] where \(\tilde{p}_{ij}\) is the probability (of the \(j\)-th token of the \(i\)-th sentence) computed using the n-gram model.
Alternatively, we can also use the maximum of negative log probabilities of the n-gram model, \[\mathcal{S}_{\text{n-gram}}^{\text{Max}}(i)=\max_{j}\left(-\log\tilde{p}_{ij}\right) \tag{14}\] ### SelfCheckGPT Combination Lastly, given the differences in the natures of the variants of SelfCheckGPT, we expect them to be complementary. As a result, we consider SelfCheckGPT-Combination, which is a simple combination of the normalized scores of the three variants, including \(\mathcal{S}_{\text{BERT}}\), \(\mathcal{S}_{\text{QA}}\), and \(\mathcal{S}_{\text{n-gram}}\). ## 6 Data and Annotation We evaluate hallucination detection approaches by 1) generating synthetic Wikipedia articles using GPT-3 on the individuals from the WikiBio dataset (Lebret et al., 2016); 2) manually annotating the factuality of the passages at a sentence level; 3) evaluating the systems' ability to detect hallucinations. WikiBio is a dataset of the first paragraph (along with tabular information) of Wikipedia biographies. We rank the WikiBio test set in terms of paragraph length and randomly sample 238 articles from the top 20% of longest articles (to ensure no obscure concept is selected). GPT-3 (text-davinci-003) is used to generate Wikipedia articles on a concept using the prompt "This is a Wikipedia passage about {concept}". Table 1 provides the statistics of GPT-3 generated passages. We then annotate the sentences of the generated passages using the guidelines shown in Figure 3, such that each sentence is classified as: * **Major Inaccurate** (Non-Factual, **1**): The sentence is entirely hallucinated, i.e. the sentence is unrelated to the topic. * **Minor Inaccurate** (Non-Factual, **0.5**): The sentence consists of some non-factual information, but the sentence is related to the topic. * **Accurate** (Factual, **0**): The information presented in the sentence is accurate.
Of the 1908 annotated sentences, 761 (39.9%) of the sentences were labelled major-inaccurate, 631 (33.1%) were minor-inaccurate, and 516 (27.0%) were accurate.3 Passage-level scores are obtained by averaging the sentence-level labels in each passage. The distribution of passage-level scores is shown in Figure 4, where we observe a large peak at +1.0. We refer to the points at this peak as _total hallucination_, i.e. the individual/concept was entirely made up and is unrelated to the real concept. Footnote 3: When selecting more obscure or more well-known concepts/individuals, the label distribution can be shifted to contain more or fewer hallucinations. A subset of the dataset consisting of 201 sentences was annotated by two annotators. To obtain a single label for this subset, if both annotators agree, we use the agreed label. However, if they disagree, we use the worse-case label, e.g. {minor inaccurate, major inaccurate} is mapped to major inaccurate. We report inter-annotator agreement as measured by Cohen's \(\kappa\)(Cohen, 1960) in Table 2. Cohen's \(\kappa\) values of 0.595 and 0.748 indicate _moderate_ and _substantial_ agreement (Viera et al., 2005) for the 3-label and 2-class scenarios, respectively. \begin{table} \begin{tabular}{c c c} \hline \hline \#Passages & \#Sentences & \#Tokens/passage \\ \hline 238 & 1908 & 184.7\(\pm\)36.9 \\ \hline \hline \end{tabular} \end{table} Table 1: The statistics of **WikiBio GPT-3 dataset** where the number of tokens is based on the OpenAI GPT-2 tokenizer. \begin{table} \begin{tabular}{c c c} \hline \hline Annotation & 3-label & 2-label \\ \hline Cohen’s \(\kappa\) & 0.595 & 0.748 \\ \hline \hline \end{tabular} \end{table} Table 2: Inter-annotator agreement where 3-label means selecting from accurate, minor inaccurate, major inaccurate. 2-label is calculated by combining minor/major into one label. 
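The label scheme and aggregation rules above map directly to numeric scores; a small sketch (function and label names are ours) of the worse-case disagreement resolution and the passage-level averaging:

```python
# Sentence-level label values from the annotation guidelines.
LABELS = {"accurate": 0.0, "minor_inaccurate": 0.5, "major_inaccurate": 1.0}

def resolve(label_a, label_b):
    """Merge two annotators' labels, keeping the worse case on disagreement."""
    return label_a if LABELS[label_a] >= LABELS[label_b] else label_b

def passage_score(sentence_labels):
    """Passage-level factuality score: average of the sentence-level labels."""
    return sum(LABELS[l] for l in sentence_labels) / len(sentence_labels)
```

A passage whose every sentence is major-inaccurate scores +1.0, the "total hallucination" peak visible in Figure 4.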
Figure 4: Document factuality scores histogram plot Figure 3: Flowchart of our annotation process ## 7 Experiments The main generative LLM is **GPT-3** (text-davinci-003), which was the state-of-the-art system at the time our experiments were conducted. To obtain the main response, we set the generation temperature to 0.0 and use beam search decoding. For the stochastically generated samples, we set the temperature to 1.0 and generate \(N\)=20 samples. For the proxy LLM approach, the main text shows the results on LLaMA, which is one of the best-performing open-source LLMs. The results on other proxy LLMs can be found in the appendix. Also, the details about the QG and QA systems are described in the appendix. ### Sentence-level Hallucination Detection First, we investigate whether our hallucination detection methods are capable of identifying the factuality of sentences. In detecting non-factual sentences, both major-inaccurate labels and minor-inaccurate labels are grouped together into the _non-factual_ class, while the _factual_ class refers to accurate sentences. In addition, we consider a more challenging task of detecting major-inaccurate sentences in passages that are _not_ total hallucination passages, which we refer to as _non-factual_*.4 Figure 5 and Table 3 show the performance of our approaches, where the following observations can be made: Footnote 4: In non-factual*, 206 passages (1632 sentences) remain. **1) LLM's probabilities \(p\) correlate well with factuality**. Our results show that probability measures (from the LLM generating the texts) are strong baselines for assessing factuality. Factual sentences can be identified with an AUC-PR of 53.97, significantly better than the random baseline of 27.04, with the AUC-PR for hallucination detection also increasing from 72.96 to 83.21.
This supports the hypothesis that when LLMs are uncertain about generated information, the generated tokens often have higher uncertainty, paving a promising direction for hallucination detection approaches. Also, the probability \(p\) measure performs better than the entropy \(\mathcal{H}\) measure of top-5 tokens. **2) Proxy LLMs perform noticeably worse than the LLM (GPT-3)**. Nevertheless, as opposed to the LLM, the results of the proxy LLM show that the entropy \(\mathcal{H}\) measures outperform the probability measures. This suggests that using richer uncertainty information could improve factuality/hallucination detection tasks, while the entropy of top-5 tokens is likely insufficient. In addition, when using other proxy LLMs such as GPT-NeoX or OPT-30B, the performance is worse than or marginally better than the random baseline. We believe this poor performance occurs because different LLMs have different generating patterns, and therefore even common uninformative tokens may have a low probability if they do not follow the style of the proxy LLM. We note that a weighted conditional language model score such as BARTScore (Yuan et al., 2021) could be incorporated in future investigations of the proxy LLM approach. **3) SelfCheckGPT rivals grey-box approaches**. SelfCheckGPT considerably outperforms the proxy LLM approach in all detection setups. Furthermore, SelfCheckGPT outperforms the grey-box probability-based approach in most setups. We also observe a performance gain when combining the variants of SelfCheckGPT. Interestingly, despite being the simplest method, SelfCheckGPT with unigram (max) works well across different setups. Essentially, when assessing a sentence, this method picks out the token with the _lowest_ occurrence given all the samples. For instance, if this token only appears a few times (or once) in the samples (\(N\)=20), it is likely non-factual.

Figure 5: PR-Curve of detecting non-factual and factual _sentences_ in the GPT-3 generated WikiBio passages.

Next, we investigate its performance as we vary from 1-gram to 5-gram. The results in Table 6 show that simply finding the least likely token/n-gram is more effective than computing the average n-gram language model score of the sentence (i.e., Avg(\(-\log p\))). As \(n\) increases, the performance of SelfCheckGPT with n-gram (max) drops, because the space of n-grams increases exponentially with \(n\); hence, exponentially more samples are required.

### Passage-level Factuality Ranking

The previous results show that SelfCheckGPT is an effective approach for predicting sentence-level factuality. An additional consideration, though, is whether SelfCheckGPT can be used to determine the overall factuality of passages. Passage-level factuality scores are obtained by averaging the sentence-level scores over all sentences, \[f_{\text{passage}}=\frac{1}{|R|}\sum_{i\in R}f(i) \tag{15}\] where \(f(i)\) is the sentence-level score of sentence \(i\), and \(|R|\) is the number of sentences in the passage. Note that for Avg(\(-\log p\)) and Avg(\(\mathcal{H}\)), we compute the average over all tokens in a passage, whereas for Max(\(-\log p\)) and Max(\(\mathcal{H}\)), we first take the maximum operation over tokens at the sentence level, and then average over all sentences following Equation 15. Since human judgement is somewhat subjective, averaging the sentence-level labels leads to ground truths with less noise. Our results in Table 3 and Figure 6 show that all of the SelfCheckGPT methods correlate far better with human judgements than all other methods, including the grey-box probability and entropy methods. Further, the three variants of SelfCheckGPT appear complementary, with the combined approach being the best-performing system, achieving the highest Pearson correlation of 69.05. Unsurprisingly, the proxy LLM approach again achieves considerably lower correlations.
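A minimal sketch of the unigram (max) scoring and the passage-level averaging of Equation 15 (the smoothing scheme and the toy samples below are illustrative assumptions, not the paper's exact implementation):

```python
import math
from collections import Counter

def unigram_max_score(sentence_tokens, sampled_passages, smoothing=1.0):
    """Score a sentence by its *least* frequent token across the samples:
    a token that (almost) never occurs in the N samples suggests hallucination."""
    counts = Counter(tok for passage in sampled_passages for tok in passage)
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 reserves smoothed mass for unseen tokens
    # Smoothed unigram probability, then take the worst (max -log p) token.
    return max(-math.log((counts[t] + smoothing) / (total + smoothing * vocab))
               for t in sentence_tokens)

def passage_score(sentence_scores):
    """Equation 15: average the sentence-level scores over the passage."""
    return sum(sentence_scores) / len(sentence_scores)

samples = [["john", "is", "a", "writer"], ["john", "is", "an", "actor"]]
rare = unigram_max_score(["john", "is", "an", "astronaut"], samples)
common = unigram_max_score(["john", "is", "a", "writer"], samples)
print(rare > common)  # the unseen token "astronaut" makes the sentence suspect
```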
### Ablation Studies

**External Knowledge (instead of SelfCheck)** If external knowledge is available, one could measure the informational consistency between the LLM response and the real-world document (instead of LLM self-samples). In this experiment, we have the related WikiBio passage and so can extract the first Wikipedia paragraph for each concept/individual.5 Our results in Table 4 show that for the QA and BERTScore methods, using SelfCheckGPT samples can yield a comparable or even better performance compared to using the WikiBio reference passage. This illustrates that SelfCheckGPT is a strong hallucination detection approach that is comparable to methods using stored external information. Lastly, the n-gram model shows a significant drop in performance when using the WikiBio passages instead of LLM self-samples. This failure is attributed to the fact that the WikiBio reference text alone is not sufficient to train an n-gram model, and we investigate the number of samples in more detail in the next ablation.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Sentence-level (AUC-PR)} & \multicolumn{2}{c}{Passage-level (Corr.)} \\ & NonFact & NonFact* & Factual & Pearson & Spearman \\ \hline Random & 72.96 & 29.72 & 27.04 & - & - \\ \hline GPT-3’s probabilities (_LLM_, _grey-box_) & & & & \\ \hline Avg(\(-\)log\(p\)) & 83.21 & 38.89 & 53.97 & 57.04 & 53.93 \\ Avg(\(\mathcal{H}\))\({}^{\dagger}\) & 80.73 & 37.09 & 52.07 & 55.52 & 50.87 \\ Max(\(-\)log\(p\)) & 87.51 & 35.88 & 50.46 & 57.83 & 55.69 \\ Max(\(\mathcal{H}\))\({}^{\dagger}\) & 85.75 & 32.43 & 50.27 & 52.48 & 49.55 \\ \hline LLaMA-30B’s probabilities (_Proxy LLM_, _black-box_) & & & & \\ \hline Avg(\(-\)log\(p\)) & 75.43 & 30.32 & 41.29 & 21.72 & 20.20 \\ Avg(\(\mathcal{H}\)) & 80.80 & 39.01 & 42.97 & 33.80 & 39.49 \\ Max(\(-\)log\(p\)) & 74.01 & 27.14 & 31.08 & -22.83 & -22.71 \\ Max(\(\mathcal{H}\)) & 80.92 & 37.32 & 37.90 & 35.57 & 38.94 \\ \hline **SelfCheckGPT** (_black-box_) & & & & & \\ \hline w/ BERTScore & 81.96 & 45.96 & 44.23 & 58.18 & 55.90 \\ w/ QA & 84.26 & 40.06 & 48.14 & 61.07 & 59.29 \\ w/ Unigram (max) & 85.63 & 41.04 & 58.47 & 64.71 & 64.91 \\ Combination & 87.33 & 44.37 & 61.83 & 69.05 & 67.77 \\ \hline \hline \end{tabular} \end{table} Table 3: AUC-PR for sentence-level detection tasks. Passage-level ranking performances are measured by Pearson correlation coefficient and Spearman’s rank correlation coefficient w.r.t. human judgements. The results of other proxy LLMs, in addition to LLaMA, can be found in the appendix. \({}^{\dagger}\)GPT-3 API returns the top-5 tokens’ probabilities, which are used to compute entropy.

#### Impact of the Number of Samples

Typically, sampling-based methods are expected to achieve better performance when more samples are drawn. However, drawing a higher number of samples leads to higher computational and/or API costs. Thus, we investigate the behaviour of SelfCheckGPT as we vary the number of samples drawn from 1 to 20. Our results in Figure 7 (and Figure 8 in the appendix) show that the performance of SelfCheckGPT increases as more samples are used, with the performance gain diminishing as we generate more samples. SelfCheckGPT with n-gram requires the highest number of samples before its performance reaches a plateau.

#### Model Choice for Proxy LLM

Figure 9 (in Appendix C) illustrates that LLaMA is far better than other LLMs, and the performance of the proxy LLM method increases with model size. Similarly, the average probability, Avg(\(p\)), is closer to that of GPT-3 when using a larger proxy LLM, as shown in Table 7 in the appendix.
SelfCheckGPT has been shown to be an effective approach for LLM hallucination assessment at both sentence and passage levels, and the approach is applicable to any LLM and any topic that the LLM is prompted to generate. Through experimental analysis of annotated GPT-3 responses, we show that SelfCheckGPT achieves performance competitive with the grey-box probability-based approach, and it also significantly outperforms the proxy LLM approach. In addition, this work releases a dataset for GPT-3 hallucination detection, consisting of 238 annotated passages.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Sent-lvl AUC-PR} & \multicolumn{3}{c}{Passage-lvl} \\ & NoFac & NoFac\({}^{*}\) & Fact & Pear. & Spear. \\ \hline SelfCk-BERT & 81.96 & 45.96 & 44.23 & 58.18 & 55.90 \\ WikiBio+BERT & 81.32 & 40.62 & 49.15 & 58.71 & 55.80 \\ \hline SelfCk-QA & 84.26 & 40.06 & 48.14 & 61.07 & 59.29 \\ WikiBio+QA & 84.18 & 45.40 & 52.03 & 57.26 & 53.62 \\ \hline SelfCk-1gm & 85.63 & 41.04 & 58.47 & 64.71 & 64.91 \\ WikiBio+1gm & 80.43 & 31.47 & 40.53 & 28.67 & 26.70 \\ \hline \hline \end{tabular} \end{table} Table 4: The performance when using SelfCheckGPT samples versus external stored knowledge.

Figure 6: Scatter plot of passage-level scores where Y-axis = Method scores, X-axis = Human scores. The scatter plots of other SelfCheckGPT variants are provided in Figure 10 in the appendix.

Figure 7: The performance of SelfCheckGPT methods on ranking passages (Spearman’s) versus the number of samples.

## 9 Limitations

In this study, the scope of GPT-3 generated texts is 238 passages about individuals in the WikiBio dataset; as a result, a wider range of concepts, e.g. locations and objects, could be investigated to better understand the nature of LLM hallucination.
2304.03964
Locally imprimitive points on elliptic curves
Under GRH, any element in the multiplicative group of a number field $K$ that is globally primitive (i.e., not a perfect power in $K^*$) is a primitive root modulo a set of primes of $K$ of positive density. For elliptic curves $E/K$ that are known to have infinitely many primes $\mathfrak p$ of cyclic reduction, possibly under GRH, a globally primitive point $P\in E(K)$ may fail to generate any of the point groups $E(k_{\mathfrak p})$. We describe this phenomenon in terms of an associated Galois representation $\rho_{E/K, P}:G_K\to\mathrm{GL}_3(\hat{\mathbf Z})$, and use it to construct non-trivial examples of global points on elliptic curves that are locally imprimitive.
Nathan Jones, Francesco Pappalardi, Peter Stevenhagen
2023-04-08T09:31:31Z
http://arxiv.org/abs/2304.03964v1
# Locally imprimitive points on elliptic curves ###### Abstract. Under GRH, any element in the multiplicative group of a number field \(K\) that is globally primitive (i.e., not a perfect power in \(K^{*}\)) is a primitive root modulo a set of primes of \(K\) of positive density. For elliptic curves \(E/K\) that are known to have infinitely many primes \(\mathfrak{p}\) of cyclic reduction, possibly under GRH, a globally primitive point \(P\in E(K)\) may fail to generate any of the point groups \(E(k_{\mathfrak{p}})\). We describe this phenomenon in terms of an associated Galois representation \(\rho_{E/K,P}:G_{K}\to\operatorname{GL}_{3}(\widehat{\mathbf{Z}})\), and use it to construct non-trivial examples of global points on elliptic curves that are locally imprimitive. Key words and phrases:Elliptic curves, primitive points, Galois representation 2010 Mathematics Subject Classification: Primary 11G05; Secondary 11F80 ## 1. Introduction Under the Generalized Riemann Hypothesis (GRH), every non-zero rational number that is not \(-1\) or a square is a primitive root modulo infinitely many primes \(p\). This was proved in 1967 by Hooley [5], forty years after Artin had stated it as a conjecture. For general number fields \(K\), there are counterexamples to the direct analogue of this statement, i.e., number fields \(K\) with non-torsion elements \(x\in K^{*}\) that are not \(\ell\)-th powers for any prime \(\ell\) for which \(K\) contains a primitive \(\ell\)-th root of unity, but that are nevertheless a primitive root in only finitely many residue class fields \(k_{\mathfrak{p}}\). The direct analogue of Artin's conjecture does however hold for \(x\in K^{*}\) that are _globally primitive_, i.e., not in \({K^{*}}^{\ell}\) for any prime \(\ell\). **Theorem 1.1**.: _Let \(K\) be a number field and \(x\in K^{*}\) globally primitive, and assume GRH. 
Then \(x\) is a primitive root modulo \(\mathfrak{p}\) for a set of primes \(\mathfrak{p}\) of \(K\) of positive density._ Informally stated: _globally primitive_ elements in \(K^{*}\) are _locally primitive_ in \(k_{\mathfrak{p}}^{*}\) at infinitely many places \(\mathfrak{p}\), with an element being 'primitive' in \(K^{*}\) or \(k_{\mathfrak{p}}^{*}\) meaning that it generates a subgroup that is _not_ contained in a strictly larger cyclic subgroup of \(K^{*}\) or \(k_{\mathfrak{p}}^{*}\). Section 3 provides a proof of Theorem 1.1, and counterexamples to stronger statements. Now replace the multiplicative group \(K^{*}\) by the point group \(E(K)\) of an elliptic curve \(E/K\), and the unit group \(k_{\mathfrak{p}}^{*}\) of the residue class field at \(\mathfrak{p}\) by the point group \(E(k_{\mathfrak{p}})\) at the primes of good reduction \(\mathfrak{p}\) of \(E\). Then we can add two natural questions to Artin's to obtain the following three problems: for a number field \(K\), determine the infinitude (or natural density) of the set of primes \(\mathfrak{p}\) in \(K\) for which 1. (Artin) a given element \(x\in K^{*}\) is a primitive root modulo \(\mathfrak{p}\), i.e., \(k_{\mathfrak{p}}^{*}=\langle\overline{x}\rangle\); 2. (Serre [15]) a given elliptic curve \(E/K\) has cyclic reduction modulo \(\mathfrak{p}\); 3. (Lang-Trotter [8]) a given point \(P\in E(K)\) generates the group \(E(k_{\mathfrak{p}})\) at \(\mathfrak{p}\). We will refer to the cases I, II, and III as the _multiplicative primitive root_, the _cyclic reduction_ and the _elliptic primitive root_ case, and denote the corresponding sets of primes by \(S_{K,x}\), \(S_{E/K}\), and \(S_{E/K,P}\), respectively. By definition, the finitely many primes of bad reduction for which we have \(\operatorname{ord}_{\mathfrak{p}}(x)\neq 0\) (in case I) or \(\operatorname{ord}_{\mathfrak{p}}(\Delta_{E})\neq 0\) (in case II and III, with \(\Delta_{E}\) the discriminant of \(E\)) are excluded from these sets.
Note that we have an obvious inclusion \(S_{E/K,P}\subset S_{E/K}\). In each of the three cases, we have a group theoretical statement that can be checked prime-wise at the primes \(\ell\) dividing the order of the groups \(k_{\mathfrak{p}}^{*}\) and \(E(k_{\mathfrak{p}})\) involved. The statement 'at \(\ell\)' has a translation in terms of the splitting behavior of \(\mathfrak{p}\) in a finite Galois extension \(K\subset K_{\ell}\) that we describe in Section 2. Combining the requirements for all \(\ell\) leads to (conjectural) density statements based on the Chebotarev density theorem [16]. Imposing infinitely many splitting conditions, one for each prime \(\ell\), leads to analytic problems with error terms that have been mastered under assumption of GRH in Cases I [1, 5, 9] and II [2, 3, 15], and that remain open in Case III. In each case, there is a conjectural density \(\delta_{K,x}\), \(\delta_{E/K}\), or \(\delta_{E/K,P}\) that is an upper bound for the upper density of the set of primes \(\mathfrak{p}\). Proving unconditionally that the set is infinite in case the conjectural density is positive is an open problem. If it is zero, we can however prove unconditionally that the corresponding set of primes is finite. This paper focuses on the vanishing of \(\delta_{E/K,P}\) in the elliptic primitive root case, which is much more subtle than the vanishing in the cases I and II. We call a global point \(P\in E(K)\)_locally imprimitive_ if it is a generator of the local point group \(E(k_{\mathfrak{p}})\) for only finitely many primes \(\mathfrak{p}\) of \(K\). Our analysis will yield 'elliptic counterexamples' \((E/K,P)\) to Theorem 1.1, i.e., elliptic curves \(E/K\) for which the cyclic reduction density \(\delta_{E/K}\) is positive but for which a globally primitive point \(P\in E(K)\) is locally imprimitive. 
Just as in the multiplicative primitive root and the cyclic reduction cases I and II, obstructions to local primitivity of a point \(P\in E(K)\) become visible in an associated Galois representation. In the elliptic primitive root case, the absolute Galois group \(G_{K}=\operatorname{Gal}(\overline{K}/K)\) of the number field \(K\) acts on the subgroup of \(E(\overline{K})\) consisting of the points \(Q\in E(\overline{K})\) satisfying \(kQ\in\langle P\rangle\subset E(K)\) for some \(k\in\mathbf{Z}_{\geq 1}\). This yields a representation \(\rho_{E/K,P}:G_{K}\to\operatorname{GL}_{3}(\widehat{\mathbf{Z}}).\) Just as in the two other cases [2], it suffices to consider the residual representation \[\overline{\rho}_{E,P}:G_{K}\to\operatorname{GL}_{3}(\widehat{\mathbf{Z}})\to \operatorname{GL}_{3}(\mathbf{Z}/N\mathbf{Z}) \tag{1}\] modulo a suitable squarefree integer \(N\) that is divisible by all 'critical primes'. Unlike the cases I and II, case III already allows non-trivial obstructions to local primitivity at prime level \(N=\ell\). In the multiplicative case I, the index \([k_{\mathfrak{p}}^{*}:\langle\overline{x}\rangle]\) can only be divisible by \(\ell\) for almost all \(\mathfrak{p}\) for the 'trivial reason' that \(K\) contains an \(\ell\)-th root of unity and \(x\) is an \(\ell\)-th power in \(K^{*}\). In the cyclic reduction case II, the group \(E(k_{\mathfrak{p}})\) can only have a non-cyclic \(\ell\)-part for almost all \(\mathfrak{p}\) for the 'trivial reason' that the full \(\ell\)-torsion of \(E/K\) is \(K\)-rational. In the elliptic primitive root case III however, there is a third reason why a point \(P\in E(K)\) can be _locally \(\ell\)-imprimitive_, meaning that \(\ell\) divides the index \([E(k_{\mathfrak{p}}):\langle\overline{P}\rangle]\) for all but finitely many \(\mathfrak{p}\). It is a less obvious one, and it was numerically discovered in 2015 in the case \(\ell=2\) by Meleleo [11], who restricted himself to the basic case \(K=\mathbf{Q}\). 
**Theorem 1.2**.: _Let \(P\in E(K)\) be a non-torsion point of an elliptic curve \(E\) defined over a number field \(K\), and \(\ell\) a prime number. Then \(P\) is locally \(\ell\)-imprimitive if and only if at least one of the following conditions holds_ A. \(E(K)\) _contains a torsion point of order_ \(\ell\) _and_ \(P\in\ell\cdot E(K)\)_;_ B. \(E\) _has complete rational_ \(\ell\)_-torsion over_ \(K\)_;_ C. _there exists an isogeny_ \(\phi:E^{\prime}\to E\) _defined over_ \(K\) _with kernel generated by a torsion point of_ \(E^{\prime}(K)\) _of order_ \(\ell\) _and_ \(P\in\phi[E^{\prime}(K)]\)_._ Condition A in Theorem 1.2 is the analogue of the trivial condition from case I: if \(E(K)\) has non-trivial \(\ell\)-torsion, then almost all \(E(k_{\mathfrak{p}})\) are groups of order divisible by \(\ell\), and for these \(\mathfrak{p}\) a point \(P\in\ell\cdot E(K)\) will have its reduction in the subgroup \(\ell E(k_{\mathfrak{p}})\subset E(k_{\mathfrak{p}})\) of index divisible by \(\ell\). Condition B bears no relation to \(P\), and is well known from case II: non-cyclicity of the \(\ell\)-part of the global torsion subgroup \(E(K)^{\text{tor}}\) implies non-cyclicity of the \(\ell\)-part of \(E(k_{\mathfrak{p}})\) at almost all \(\mathfrak{p}\). At these \(\mathfrak{p}\), no single point \(P\) can generate it. Condition C has no analogue in the multiplicative primitive root case, and it is a truly different condition as it includes cases in which we have both \(P\notin\ell\cdot E(K)\) and \(E[\ell](K)=0\). If it holds, the dual isogeny \(\widehat{\phi}:E\to E^{\prime}\) maps \(P\) into \(\ell E^{\prime}(K)\), and the pair \((E,P)\) is \(\ell\)-isogenous to the curve-point pair \((E^{\prime},\widehat{\phi}(P))\) satisfying Condition A. We call a locally \(\ell\)-imprimitive non-torsion point \(P\in E(K)\)_non-trivial_ if Condition C is satisfied, but _not_ Condition A or B.
By Theorem 1.2, non-trivial locally \(2\)-imprimitive points \(P\in E(K)\) can only exist for \(E/K\) having a single \(K\)-rational point of order \(2\), i.e., a \(2\)-torsion subgroup of order \(\#E(K)[2]=2\). Examples of such points are actually surprisingly abundant. **Theorem 1.3**.: _Let \(E/K\) be any elliptic curve with \(\#E(K)[2]=2\). Then there are infinitely many quadratic twists of \(E\) over \(K\) that have a non-trivial locally \(2\)-imprimitive point._ The proof of this theorem, which we give in Section 5, uses the fact that it is easy to create non-torsion points on twists of \(E\), and exploits the particularly explicit description of \(K\)-rational \(2\)-isogenies. For primes \(\ell>2\), it is harder to obtain families of elliptic curves with points of infinite order that are locally \(\ell\)-imprimitive in non-trivial ways. In Section 6 we provide an approach in the case \(\ell=3\). It can be extended to higher values of \(\ell\) (Section 7), but the examples rapidly become unwieldy. Non-torsion points that are locally imprimitive but not locally \(\ell\)-imprimitive for any single prime \(\ell\) do exist, but they are not easily found. They involve restrictions arising from reductions of \(\rho_{E/K,P}\) of composite level caused by non-trivial _entanglement_ between the fields \(K_{\ell}\). In the context of the easier cyclic reduction case II, this is discussed in [2], and we present a first type of examples for our Lang-Trotter case III in our final Section 8. Such higher level obstructions will be explored in more detail in a forthcoming paper. **Acknowledgements.** _All authors received support from the Max-Planck-Institut für Mathematik in Bonn while working on this paper. They thank the institute for its financial support and for its very inspiring atmosphere._

## 2. Characterization by splitting conditions

In each of the three cases discussed in the introduction, we can characterize the corresponding sets of primes \(S_{K,x}\), \(S_{E/K}\), and \(S_{E/K,P}\) of \(K\) in terms of the splitting behaviour of their elements \(\mathfrak{p}\) in suitable extensions \(K\subset K_{\ell}\), with \(\ell\) ranging over all prime numbers. **I. Multiplicative primitive root case.** Let \(K\) be a number field and \(x\in K^{*}\) non-torsion. Define \(K_{m}=K(\zeta_{m},\sqrt[m]{x})\) for \(m\in\mathbf{Z}_{\geq 1}\) as the '\(m\)-division field of \(x\)', i.e., the splitting field over \(K\) of the polynomial \(X^{m}-x\in K[X]\). If \(\mathfrak{p}\) is a prime of \(K\) of characteristic \(p\) for which \(x\) is a \(\mathfrak{p}\)-adic unit, the index \([k_{\mathfrak{p}}^{*}:\langle\overline{x}\rangle]\) is divisible by a prime \(\ell\neq p\) if and only if \(\mathfrak{p}\) splits completely in \(K\subset K_{\ell}\). Note that \([k_{\mathfrak{p}}^{*}:\langle\overline{x}\rangle]\) is never divisible by \(p=\operatorname{char}(\mathfrak{p})\), even though \(\mathfrak{p}\) may split completely in \(K_{p}\). Example: \(x=17\) is a primitive root modulo the prime \(\mathfrak{p}_{3}\) of norm \(3\) in \(K=\mathbf{Q}(\sqrt{-21})\), but \(\mathfrak{p}_{3}\) splits completely in the sextic extension \[K\subset K_{3}=K(\zeta_{3},\sqrt[3]{17})=K(\sqrt{7},\sqrt[3]{17}).\] This can however only happen for primes \(\mathfrak{p}|2\Delta_{K}\), with \(\Delta_{K}\) the discriminant of \(K\), since \(K\subset K_{p}\) is ramified at all \(\mathfrak{p}|p\) for \(p\) coprime to \(2\Delta_{K}\). In other words: for almost all \(\mathfrak{p}\), the 'condition at \(\ell\)' in the following lemma is automatically satisfied at \(\ell=\operatorname{char}\mathfrak{p}\).
**Lemma 2.1**.: _For \(\mathfrak{p}\) a prime of \(K\) outside the support of \(x\), we have \(k_{\mathfrak{p}}^{*}=\langle\overline{x}\rangle\) if and only if \(\mathfrak{p}\) does not split completely in \(K\subset K_{\ell}\) for any prime \(\ell\neq\operatorname{char}\mathfrak{p}\). _ By Lemma 2.1, the set \(S_{K,x}\) of primes in \(K\) for which \(x\) is a primitive root is up to finitely many primes equal to the set of primes that do not split completely in \(K\subset K_{\ell}\) for any prime \(\ell\). For \(m\in\mathbf{Z}_{\geq 1}\), the set of primes \(\mathfrak{p}\) of \(K\) that split completely in \(K\subset K_{m}\) has natural density \(1/[K_{m}:K]\). Under GRH, it follows from [9] that the set \(S_{K,x}\) has a natural density that is given by the inclusion-exclusion sum \[\delta_{K,x}=\sum_{m=1}^{\infty}\frac{\mu(m)}{[K_{m}:K]} \tag{2}\] that converges slowly, but that can be rewritten in 'factored form' as \[\delta_{K,x}=\sum_{m|N}\frac{\mu(m)}{[K_{m}:K]}\cdot\prod_{\ell\nmid N\ \text{prime}}(1-\frac{1}{\ell(\ell-1)}). \tag{3}\] Here we can take for \(N\) any integer divisible by the primes in some finite set of _critical primes_. One may take for this set the set of primes that are either in the support of \(x\) or divide \(2\Delta_{K}\), together with those primes \(\ell\) for which \(x\) is in \({K^{*}}^{\ell}\). The essential feature of \(N\) is that the family \(\{K_{\ell}\}_{\ell\nmid N}\) of '\(\ell\)-division fields of \(x\) outside \(N\)' is a linearly disjoint family over \(K\) with each \(K_{\ell}\) having the full Galois group \(\operatorname{Gal}(K_{\ell}/K)\cong\operatorname{Aff}_{1}(\mathbf{F}_{\ell})= \mathbf{F}_{\ell}\rtimes\mathbf{F}_{\ell}^{*}\) of order \(\ell(\ell-1)\), and that the compositum \(L\) of the fields in this family satisfies \(L\cap K_{N}=K\). **II.
Cyclic reduction case.** For an elliptic curve \(E/K\), we consider the set \(S_{E/K}\) of primes of cyclic reduction of \(E\), i.e., the primes \(\mathfrak{p}\) of \(K\) for which \(E\) has good reduction and the reduced elliptic curve point group \(E(k_{\mathfrak{p}})\) is cyclic. The condition that \(E\) have good reduction modulo \(\mathfrak{p}\) only excludes the finitely many primes dividing the discriminant \(\Delta_{E}\) of \(E\). For \(m\in\mathbf{Z}_{\geq 1}\), we define \(K_{m}=K(E[m](\overline{\mathbf{Q}}))\) in this case to be the \(m\)-division field of \(E\) over \(K\). The following elementary lemma [2, Corollary 2.2] is the analogue of Lemma 2.1. It expresses the fact that a finite abelian group is cyclic if and only if its \(\ell\)-primary part is cyclic for all primes \(\ell\). **Lemma 2.2**.: _A prime \(\mathfrak{p}\) of good reduction of \(E/K\) is a prime of cyclic reduction if and only if \(\mathfrak{p}\) does not split completely in \(K\subset K_{\ell}\) for any prime \(\ell\neq\operatorname{char}\mathfrak{p}\). _ As in the multiplicative Case I, cyclicity of the \(p\)-primary part of the groups \(E(k_{\mathfrak{p}})\) is automatic for \(p=\operatorname{char}(\mathfrak{p})\). Also here, total splitting of \(\mathfrak{p}\) in non-trivial extensions \(K\subset K_{p}\) for \(p=\operatorname{char}\mathfrak{p}\) does occur: it suffices to base change any elliptic curve \(E/K\) with \(K\subset K_{p}=K[X]/(f)\) non-trivial by an extension \(K\subset L=K[X]/(g)\) with \(f\) and \(g\) polynomials of the same degree that are \(\mathfrak{p}\)-adically close, but with \(g\) Eisenstein at a prime \(\mathfrak{q}\) that is unramified in \(K\subset K_{p}\). For \(E/L\), the non-trivial extension \(L\subset L_{p}\) will be totally split at all primes dividing \(\mathfrak{p}\). 
Again, this can happen only at primes \(\mathfrak{p}|2\Delta_{K}\), as otherwise \(K\subset K_{p}\) will be ramified at all primes \(\mathfrak{p}\) of characteristic \(p\) by the fact that \(K_{p}\) contains \(\zeta_{p}\). Thus, for almost all \(\mathfrak{p}\), the 'condition at \(\ell\)' in Lemma 2.2 is again automatically satisfied at \(\ell=\operatorname{char}\mathfrak{p}\). The finitely many primes dividing \(2\Delta_{K}\) are clearly irrelevant when dealing with the density of the set \(S_{E/K}\), which, just like in the previous case, coincides up to finitely many primes with the set of primes \(\mathfrak{p}\) of \(K\) that do not split completely in \(K\subset K_{\ell}\) for any prime \(\ell\). Under GRH, the density of \(S_{E/K}\) is again given [2, Section 2] by an inclusion-exclusion sum that we already know from (2): \[\delta_{E/K}=\sum_{m=1}^{\infty}\frac{\mu(m)}{[K_{m}:K]}. \tag{4}\] If \(E\) is without CM over \(\overline{\mathbf{Q}}\), or has CM by an order \(\mathcal{O}\subset K\), there is in each case a factorization of \(\delta_{E/K}\) that is typographically identical to (3), provided that \(N\) is divisible by all primes from an appropriately defined finite set of critical primes [2, Theorems 1.1 and 1.2]. If \(E\) has CM by an order \(\mathcal{O}\not\subset K\), there is a hybrid formula [2, Theorem 1.4] with different contributions from ordinary and supersingular primes. A 'factorization formula' for \(\delta_{K,x}\) and \(\delta_{E/K}\) as in (3) shows that the vanishing of these densities is always caused by an obstruction at some finite level \(N\). For such \(N\), no element in \(\operatorname{Gal}(K_{N}/K)\) restricts for all \(\ell|N\) to a non-trivial element of \(\operatorname{Gal}(K_{\ell}/K)\). As a consequence, there are no non-critical primes in \(S_{K,x}\) or \(S_{E/K}\): the Frobenius elements of such primes in \(\operatorname{Gal}(K_{N}/K)\) cannot exist for group theoretical reasons.
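As a sanity check on the factored form: for \(K=\mathbf{Q}\) and a generic \(x\) with no critical-prime corrections, the product over all primes is Artin's constant \(\prod_{\ell}(1-\frac{1}{\ell(\ell-1)})\approx 0.3739558\). A short numerical sketch (truncating the product at a finite bound):

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p in range(2, n + 1) if sieve[p]]

def artin_constant(bound=10**5):
    """Truncation of prod_ell (1 - 1/(ell*(ell-1))): the generic density of
    primes for which x is a primitive root (Artin's constant)."""
    prod = 1.0
    for ell in primes_up_to(bound):
        prod *= 1.0 - 1.0 / (ell * (ell - 1))
    return prod

print(round(artin_constant(), 6))  # ≈ 0.373956
```

The tail of the product contributes only \(O(1/\ell)\) summed over \(1/\ell^{2}\), so truncating at \(10^{5}\) already gives six correct digits.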
An obstruction at prime level \(N=\ell\), which means an equality \(K=K_{\ell}\), amounts in the cases I and II to a 'trivial' reason that we already mentioned in the context of Conditions A and B in Theorem 1.2. For \(K=\mathbf{Q}\), vanishing of \(\delta_{\mathbf{Q},x}\) and \(\delta_{E/\mathbf{Q}}\) only occurs if we have \(K=K_{\ell}\) for a prime \(\ell\), which in this case has to be \(\ell=2\). Over general number fields, vanishing may be caused by obstructions that occur _only_ at composite levels. Typical examples can be constructed by base changing a non-vanishing example \((K,x)\) or \(E/K\) to a suitable extension field \(K\subset L\). Example 3.1 accomplishes vanishing of \(\delta_{K,x}\) by an obstruction at level \(30=2\cdot 3\cdot 5\) for the field \(K=\mathbf{Q}(\sqrt{5})\) that does not arise at lower level by cleverly choosing \(x\). In [2, Example 5.4] we find a base change of an elliptic curve \(E/\mathbf{Q}\) to a field \(K\) of degree \(48\) with a similar level \(30\) obstruction to cyclic reduction. **III. Elliptic primitive root case.** In addition to the elliptic curve \(E/K\), we are now given a point \(P\in E(K)\) of infinite order. We consider the set \(S_{E/K,P}\) of primes \(\mathfrak{p}\) of \(K\) for which \(E\) has cyclic reduction and the reduction of the point \(P\) modulo \(\mathfrak{p}\) generates the group \(E(k_{\mathfrak{p}})\). Note the obvious inclusion \(S_{E/K,P}\subset S_{E/K}\). For \(m\in\mathbf{Z}_{\geq 1}\), we let \(K_{m}=K(m^{-1}P)\) be the \(m\)_-division field of \(P\)_, i.e., the extension of \(K\) generated by the points of the subgroup of \(E(\overline{K})\) defined as \[\langle m^{-1}P\rangle=\{Q\in E(\overline{K}):mQ\in\langle P\rangle\}.\] Note that this extension \(K_{m}\) contains the \(m\)-division field of the elliptic curve \(E\) that we encountered in the cyclic reduction case II. 
The \(m\)-division field \(K_{m}=K(m^{-1}P)\) of \(P\) is again unramified over \(K\) at primes \(\mathfrak{p}\) of good reduction coprime to \(m\). The proof of this fact is as for the \(m\)-division field of \(E\): as the \(m\)-th roots \(Q\in E(\overline{K})\) of \(P\) that generate \(K_{m}\) over \(K\) differ by \(m\)-torsion points, their reductions modulo a prime over \(\mathfrak{p}\) remain different, so inertia acts trivially on the set of such \(Q\). The quotient group \(V_{m}=\langle m^{-1}P\rangle/\langle P\rangle\) is a free module of rank \(3\) over \(\mathbf{Z}/m\mathbf{Z}\). It comes with a natural linear action of the absolute Galois group \(G_{K}\) of \(K\), and this mod-\(m\) Galois representation induces an embedding \[G_{m}=\operatorname{Gal}(K_{m}/K)\hookrightarrow\operatorname{GL}(V_{m}) \cong\operatorname{GL}_{3}(\mathbf{Z}/m\mathbf{Z}). \tag{5}\] As \(G_{K}\) stabilizes the rank \(2\) subspace \(U_{m}=E[m](\overline{K})\subset V_{m}\), this is a 'reducible' representation. Write \(V_{m}=U_{m}\oplus(\mathbf{Z}/m\mathbf{Z})\cdot\overline{Q}\) with \(\overline{Q}=(Q\bmod\langle P\rangle)\in V_{m}\) the image of a point \(Q\in E(\overline{K})\) satisfying \(mQ=P\). We then have an exact sequence \[0\to U_{m}\longrightarrow V_{m}\longrightarrow(\mathbf{Z}/m\mathbf{Z})\cdot\overline{Q}\to 0\] of free \(\mathbf{Z}/m\mathbf{Z}\)-modules that is split as a sequence of \((\mathbf{Z}/m\mathbf{Z})[G_{m}]\)-modules if and only if we have \(P\in m\cdot E(K)\).
After choosing a \(\mathbf{Z}/m\mathbf{Z}\)-basis \(\{T_{1},T_{2}\}\) of \(U_{m}\) and extending it by some \(\overline{Q}\) as above to a \(\mathbf{Z}/m\mathbf{Z}\)-basis for \(V_{m}\), the matrix representation of \(\sigma\in G_{m}\) becomes \[\sigma=\sigma_{A,b}=\begin{pmatrix}a_{11}&a_{12}&b_{1}\\ a_{21}&a_{22}&b_{2}\\ 0&0&1\end{pmatrix}, \tag{6}\] in which the linear action of \(\sigma\) on \(U_{m}\) with respect to the chosen basis \(\{T_{1},T_{2}\}\) of \(U_{m}\) is described by \(A=\begin{pmatrix}a_{11}&a_{12}\\ a_{21}&a_{22}\end{pmatrix}\in\operatorname{GL}(U_{m})\), and \[b=b_{1}T_{1}+b_{2}T_{2}=\sigma Q-Q\] is the translation action of \(\sigma\) on some chosen '\(m\)-th root' \(Q\) of \(P\) with respect to that same basis. In other words, \(V_{m}\) gives a Galois representation of \(G_{K}\) with image \(G_{m}\subset\operatorname{GL}(V_{m})\) that is contained in the \(2\)-dimensional affine group \[\operatorname{Aff}_{2}(\mathbf{Z}/m\mathbf{Z})=(\mathbf{Z}/m\mathbf{Z})^{2}\rtimes\operatorname{GL}_{2}(\mathbf{Z}/m\mathbf{Z}).\] In the important case where \(m=\ell\) is prime, we are in the classical situation of a \(3\)-dimensional Galois representation over the finite field \(\mathbf{F}_{\ell}\). The analogue in the elliptic primitive root case of the Lemmas 2.1 and 2.2 is a little more involved. We have to impose a condition on the Frobenius elements \(\operatorname{Frob}_{\mathfrak{p},\ell}\in G_{\ell}\) at all primes \(\ell\neq\operatorname{char}\mathfrak{p}\) different from being _equal_ to the identity element \(\operatorname{id}_{\ell}\in G_{\ell}\): in this case it only needs to be 'sufficiently close' to it.
**Lemma 2.3**.: _For \(P\in E(K)\) of infinite order and \(\mathfrak{p}\) a prime of good reduction of \(E\) of characteristic different from \(\ell\) we have_ \[\ell|[E(k_{\mathfrak{p}}):\langle\overline{P}\rangle]\quad\Longleftrightarrow \quad\operatorname{rank}(\operatorname{Frob}_{\mathfrak{p},\ell}-\operatorname{id }_{\ell})\leq 1.\] Proof.: As all \(V_{\ell}\) are \(3\)-dimensional over \(\mathbf{F}_{\ell}\), the condition \(\operatorname{rank}(\operatorname{Frob}_{\mathfrak{p},\ell}-\operatorname{id }_{\ell})\leq 1\) means that \(\operatorname{Frob}_{\mathfrak{p},\ell}\) is the identity on an \(\mathbf{F}_{\ell}\)-subspace of \(V_{\ell}\) of dimension at least \(2\). If it equals \(U_{\ell}\), then \(E(k_{\mathfrak{p}})\) has complete \(\ell\)-torsion of order \(\ell^{2}\) and every cyclic subgroup \(\langle\overline{P}\rangle\) has index divisible by \(\ell\). If not, it intersects \(U_{\ell}\) in a \(1\)-dimensional subspace, so we have a point of order \(\ell\) in \(E(k_{\mathfrak{p}})\) and a point \(\overline{Q}\in E(k_{\mathfrak{p}})\) satisfying \(\ell\overline{Q}=\overline{P}\). This also implies that \([E(k_{\mathfrak{p}}):\langle\overline{P}\rangle]\) is divisible by \(\ell\). Conversely, if \(\ell\) divides \([E(k_{\mathfrak{p}}):\langle\overline{P}\rangle]\) then either \(E(k_{\mathfrak{p}})\) has complete \(\ell\)-torsion or \(E(k_{\mathfrak{p}})\) has a cyclic non-trivial \(\ell\)-part and \(\overline{P}\) is contained in \(\ell\cdot E(k_{\mathfrak{p}})\). In both cases \(\operatorname{Frob}_{\mathfrak{p},\ell}\) is the identity on a subspace of \(V_{\ell}\) of dimension at least \(2\). **Corollary 2.4**.: _Let \(P\in E(K)\) be of infinite order and \(\mathfrak{p}\) a prime of good reduction of \(E\) of prime norm \(\operatorname{char}\mathfrak{p}>5\) for which \(\overline{P}\neq\overline{O}\in E(k_{\mathfrak{p}})\). 
Then we have_ \[E(k_{\mathfrak{p}})=\langle\overline{P}\rangle\quad\Longleftrightarrow\quad\operatorname{rank}(\operatorname{Frob}_{\mathfrak{p},\ell}-\operatorname{id}_{\ell})\geq 2\text{ for all primes }\ell.\] Proof.: By Lemma 2.3, the condition on the right side says that \(p=\operatorname{char}\mathfrak{p}\) is the only possible prime divisor of the index \([E(k_{\mathfrak{p}}):\langle\overline{P}\rangle]\). For a prime \(\mathfrak{p}\) of degree one, i.e., of prime norm \(p\), the index of a subgroup of \(E(k_{\mathfrak{p}})=E(\mathbf{F}_{p})\) can only be divisible by \(p\) if it is the trivial subgroup, as we have \(\#E(\mathbf{F}_{p})<p+1+2\sqrt{p}<2p\) for \(p>5\). So we have \(E(k_{\mathfrak{p}})=\langle\overline{P}\rangle\) unless \(\mathfrak{p}\) is a prime for which we have \(\overline{P}=\overline{O}\in E(k_{\mathfrak{p}})\). As we have \(P\neq O\in E(K)\), this happens only for finitely many \(\mathfrak{p}\). In density questions, we can disregard any finite set of primes, and more generally a set of primes of density zero. The set of primes of degree greater than one in a number field is such a zero-density set. For this reason, the density of the set \(S_{E/K,P}\) only depends on the primes of degree one outside any finite set of 'critical primes' that it contains. Thus, Corollary 2.4 can play the same role as the Lemmas 2.1 and 2.2.
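The local criterion of Corollary 2.4 is easy to experiment with. The following minimal brute-force sketch is our own illustration, not from the paper; the curve \(y^{2}=x^{3}+2x+3\) over \(\mathbf{F}_{97}\) is an arbitrary choice. It lists \(E(\mathbf{F}_{p})\) and determines which points \(\overline{P}\) satisfy \(E(\mathbf{F}_{p})=\langle\overline{P}\rangle\):

```python
# Toy illustration: brute-force the group E(F_p) of a short Weierstrass curve
# and find the points that generate it. Curve and prime are arbitrary choices.

def ec_points(a, b, p):
    """All points of y^2 = x^3 + ax + b over F_p; None denotes the point O."""
    squares = {}
    for y in range(p):
        squares.setdefault(y * y % p, []).append(y)
    pts = [None]  # point at infinity
    for x in range(p):
        for y in squares.get((x * x % p * x + a * x + b) % p, []):
            pts.append((x, y))
    return pts

def ec_add(P, Q, a, p):
    """Group law on y^2 = x^3 + ax + b over F_p (None is the identity)."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def subgroup_order(P, a, p):
    """Order of the cyclic subgroup generated by P."""
    n, Q = 1, P
    while Q is not None:
        Q = ec_add(Q, P, a, p)
        n += 1
    return n

a, b, p = 2, 3, 97
pts = ec_points(a, b, p)
N = len(pts)
generators = [P for P in pts if P is not None and subgroup_order(P, a, p) == N]
print(f"#E(F_{p}) = {N}, number of generating points: {len(generators)}")
```

Every subgroup order divides \(\#E(\mathbf{F}_{p})\), and the point count respects the Hasse bound; the script only exploits these elementary facts.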
In order to express the 'heuristic density' \(\delta_{E/K,P}\) of \(S_{E/K,P}\), we define the subset \(S_{\ell}\subset G_{\ell}=\operatorname{Gal}(K_{\ell}/K)\) of 'bad' elements at the prime \(\ell\) as \[S_{\ell}=\{\sigma\in G_{\ell}:\operatorname{rank}_{\mathbf{F}_{\ell}}(\sigma-\operatorname{id}_{\ell})\leq 1\}.\] For arbitrary \(m\in\mathbf{Z}_{\geq 1}\) and \(\ell|m\) prime we let \(\pi_{m,\ell}:G_{m}\to G_{\ell}\) be the natural restriction map, and define \(S_{m}\subset G_{m}\) as the set of elements that are 'bad' at every prime dividing \(m\), \[S_{m}=\bigcap_{\ell|m}\pi_{m,\ell}^{-1}[S_{\ell}].\] With \(s_{m}=\#S_{m}\) denoting the cardinality of \(S_{m}\), the _elliptic primitive root density_ is now given by the inclusion-exclusion sum \[\delta_{E,P}=\sum_{m=1}^{\infty}\frac{\mu(m)s_{m}}{[K_{m}:K]}. \tag{7}\] It is the elliptic analogue of the multiplicative primitive root density (2). It is an upper density for \(S_{E/K,P}\) that has not been proven to be its true density in cases with \(\delta_{E/K,P}>0\), not even under GRH. We can compute \(\delta_{E/K,P}\) using the methods of [2]. This is not directly relevant for us, as our focus in this paper is on cases where \(\delta_{E/K,P}\) vanishes in 'non-trivial' ways, so we merely sketch it here. In order to obtain a factorization \[\delta_{E/K,P}=\sum_{m|N}\frac{\mu(m)s_{m}}{[K_{m}:K]}\cdot\prod_{\ell\nmid N\text{ prime}}\Big(1-\frac{s_{\ell}}{[K_{\ell}:K]}\Big) \tag{8}\] as in (3), it suffices to have an 'open-image theorem' for the Galois representation \(\rho_{E,P}\) arising from the action of \(G_{K}\) on the subgroup \[R_{P}=\{Q\in E(\overline{K}):mQ\in\langle P\rangle\text{ for some }m\in\mathbf{Z}_{\geq 1}\}\cong(\mathbf{Q}/\mathbf{Z})^{2}\times\mathbf{Q}\] of \(E(\overline{K})\) generated by all the roots of \(P\) in \(E(\overline{K})\).
The Galois action of \(G_{K}\) on the quotient group \(V=R_{P}/\langle P\rangle\), which is free of rank \(3\) over \(\mathbf{Q}/\mathbf{Z}\), gives rise to a Galois representation \[\rho_{E,P}:G_{K}\longrightarrow\operatorname{Aut}(V)\cong\operatorname{GL}_{3}(\widehat{\mathbf{Z}}),\] which has (5) as its mod-\(m\) representation. It factors via \(\operatorname{Gal}(K(R_{P})/K)\), with \(K(R_{P})=\bigcup_{m}K_{m}\subset\overline{K}\) the compositum of all '\(m\)-division fields' of \(P\) inside \(\overline{K}\). The group \(U=E(\overline{K})^{\operatorname{tor}}\cong(\mathbf{Q}/\mathbf{Z})^{2}\) is a direct summand of \(V\), and if we choose a \(\mathbf{Q}/\mathbf{Z}\)-basis for \(V=U\oplus\mathbf{Q}/\mathbf{Z}\) as we did for \(V_{m}=U_{m}\oplus\mathbf{Z}/m\mathbf{Z}\cdot\overline{Q}\), the image of \(\rho_{E,P}\) is in \(\operatorname{Aff}_{2}(\widehat{\mathbf{Z}})=\widehat{\mathbf{Z}}^{2}\rtimes\operatorname{GL}_{2}(\widehat{\mathbf{Z}})\). For \(E\) without CM over \(\overline{K}\), one deduces from Serre's open image theorem that this image is of finite index in \(\operatorname{Aff}_{2}(\widehat{\mathbf{Z}})\), which yields (8) for any \(N\) divisible by some finite product \(N_{E/K,P}\in\mathbf{Z}_{>0}\) of critical primes. As in [2], one deduces that all non-CM-densities \(\delta_{E,P}\) are rational multiples of a universal constant. If \(E\) has CM over \(\overline{K}\) by an order \(\mathcal{O}\subset K\), one replaces \(\operatorname{Aff}_{2}(\widehat{\mathbf{Z}})\) by \(\operatorname{Aff}_{1}(\mathcal{O})\), and in the case of CM by an order \(\mathcal{O}\not\subset K\), one separates the contribution of ordinary and supersingular primes of \(E/K\) as in [2].

## 3. Multiplicative primitivity

Before focusing on the Lang-Trotter case III, we first settle the multiplicative primitive root case: under GRH, globally primitive elements \(x\in K^{*}\) are locally primitive for a set of primes of positive density \(\delta_{K,x}\).
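For \(K=\mathbf{Q}\) and, say, \(x=2\), there is no entanglement and the Euler product in (3) is the classical Artin constant \(\prod_{\ell}\bigl(1-\frac{1}{\ell(\ell-1)}\bigr)\approx 0.3739558\). This is a standard fact, quoted here only to illustrate numerically that such a product converges to a visibly non-zero value; the sketch and its truncation bound are our own:

```python
# Partial Euler product for Artin's constant, the primitive root density of
# x = 2 over Q (under GRH). The truncation bound 10**5 is arbitrary.

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, v in enumerate(sieve) if v]

artin = 1.0
for l in primes_up_to(10 ** 5):
    artin *= 1 - 1 / (l * (l - 1))
print(f"partial Artin product: {artin:.6f}")  # about 0.373956
```

The tail of the product beyond \(10^{5}\) changes the value by less than \(10^{-6}\), since \(\sum_{\ell>x}\ell^{-2}\) is tiny.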
Proof of Theorem 1.1.: Let \(x\in K^{*}\) be globally primitive. As we assume GRH, the primitive root density for \(x\in K^{*}\) exists and is equal to \(\delta_{K,x}\) defined in (2), by the results of [9]. We need to show that \(\delta_{K,x}\) does not vanish. In view of the factorization formula (3), it suffices to show that for any squarefree integer \(N>1\), the fraction \(\sum_{m|N}\mu(m)[K_{m}:K]^{-1}\) of elements in \(\operatorname{Gal}(K_{N}/K)\) that have non-trivial restriction to \(K_{\ell}\) for all primes \(\ell|N\) does not vanish. As \(x\) is not an \(\ell\)-th power in \(K^{*}\), the polynomial \(X^{\ell}-x\) is irreducible in \(K[X]\). It therefore gives rise to an extension \(K\subset K_{\ell}=\operatorname{Split}_{K}(X^{\ell}-x)=K(\zeta_{\ell},\sqrt[\ell]{x})\) of degree \(\ell\cdot c_{\ell}\), with \(c_{\ell}\) a divisor of \(\ell-1\). If \(\ell\) is the largest prime dividing the squarefree number \(N\), we conclude that \(K_{N/\ell}\subset K_{N}\) is Galois of degree divisible by \(\ell\). Showing that \(\operatorname{Gal}(K_{N}/K)\) contains an element of the required type is now easily done by induction on the number of primes dividing the squarefree integer \(N>1\). If \(N\) is prime, then \(\operatorname{Gal}(K_{N}/K)\) contains a non-trivial element of order \(N\). If not, we let \(\ell\) be the largest prime dividing \(N\) and observe that an automorphism of the required type in \(\operatorname{Gal}(K_{N/\ell}/K)\), which exists by the induction hypothesis, always possesses an extension to the compositum \(K_{N}\) of \(K_{N/\ell}\) and \(K_{\ell}\) that is non-trivial on \(K_{\ell}\). The assumption of global primitivity in Theorem 1.1 cannot be weakened to the assumption \(K\neq K_{\ell}\) for all prime numbers \(\ell\).
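This failure can be watched numerically. The following check is our own and anticipates the counterexample worked out in detail next: over \(K=\mathbf{Q}(\sqrt{5})\), with \(\pi=(-5+\sqrt{5})/2\) and \(x=(-3\pi)^{15}\), every degree-one prime \(\mathfrak{p}\) of \(K\) of characteristic \(p>5\) corresponds to a choice of square root of \(5\) modulo a rational prime \(p\equiv\pm 1\bmod 5\), and the index of \(\langle\overline{x}\rangle\) in \(k_{\mathfrak{p}}^{*}=\mathbf{F}_{p}^{*}\) always turns out to be divisible by \(2\), \(3\) or \(5\):

```python
# Numerical check for the entanglement example over Q(sqrt(5)); bounds are
# arbitrary. Both square roots of 5 mod p represent the two primes above p.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def mult_order(x, p):
    """Multiplicative order of x in F_p^* (p prime, p does not divide x)."""
    n, y = 1, x % p
    while y != 1:
        y = y * x % p
        n += 1
    return n

checked = 0
for p in range(7, 500):
    if not is_prime(p) or p % 5 not in (1, 4):
        continue  # keep only split primes p = +-1 mod 5, p > 5
    for s in [s for s in range(p) if s * s % p == 5]:  # the two primes above p
        pi = (-5 + s) * pow(2, -1, p) % p
        x = pow(-3 * pi % p, 15, p)
        index = (p - 1) // mult_order(x, p)
        assert index % 2 == 0 or index % 3 == 0 or index % 5 == 0, (p, s)
        checked += 1
print(f"index divisible by 2, 3 or 5 at all {checked} degree-one primes tested")
```

For instance, for \(p=11\) both square roots of \(5\) give index \(5\), and for \(p=19\) both give index \(3\).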
The resulting stronger statement is correct for \(K=\mathbf{Q}\), but counterexamples to it exist for general number fields \(K\), as the cyclotomic extensions \(K\subset K(\zeta_{\ell})\) for different \(\ell\) may all be non-trivial, but 'entangled' over \(K\). The following counterexample takes \(K\) to be quadratic. **Example 3.1**.: The quadratic field \(K=\mathbf{Q}(\sqrt{5})\) has fundamental unit \(\varepsilon=\frac{1+\sqrt{5}}{2}\). The element \(\pi=\varepsilon^{2}-4=\frac{-5+\sqrt{5}}{2}\in K\) has norm \(5\) and is a square modulo \(4\). The field \(K(\sqrt{\pi})\), which is cyclic of degree \(4\) over \(\mathbf{Q}\) and unramified outside \(5\), is therefore equal to \(K(\zeta_{5})\). Take \(y=-3\pi\in K\) and choose \(x=y^{15}\). We then have \[K_{3}=K(\zeta_{3})=K(\sqrt{-3})\qquad\text{ and }\qquad K_{5}=K(\zeta_{5})=K(\sqrt{\pi}),\] so \(K_{2}=K(\sqrt{x})=K(\sqrt{y})=K(\sqrt{-3\pi})\), \(K_{3}\) and \(K_{5}\) are three different quadratic extensions of \(K\) contained in the biquadratic extension \(K\subset K_{6}=K_{10}=K_{15}=K_{30}\). We have \(\mu_{K}=\{\pm 1\}\) and, even though \(x\) is not a square in \(K^{*}\), there is exactly one prime of \(K\) modulo which \(x\) is a primitive root: the prime \((2)\). For the primes \(\mathfrak{p}=(3)\) and \(\mathfrak{p}=(\sqrt{5})\) the element \(x\) reduces to \(0\), so it does not lie in \(k_{\mathfrak{p}}^{*}\), and for all primes of characteristic \(p>5\) the index \([k_{\mathfrak{p}}^{*}:\langle\overline{x}\rangle]\) is divisible by at least one of \(2\), \(3\) or \(5\). Indeed, no prime can be inert in all three quadratic subfields of an extension with group \(\mathbf{Z}/2\mathbf{Z}\times\mathbf{Z}/2\mathbf{Z}\). The simple observation that no prime \(\mathfrak{p}\) of a number field \(K\) can be inert in all three quadratic subextensions of an extension \(K\subset L\) with group \(\mathbf{Z}/2\mathbf{Z}\times\mathbf{Z}/2\mathbf{Z}\) underlies many 'entanglement obstructions', including the one in our final Section 8.

## 4. Proof of Theorem 1.2

In the Lang-Trotter situation, Lemma 2.3 shows that a point \(P\in E(K)\) will generate a subgroup of the local point group \(E(k_{\mathfrak{p}})\) of index divisible by \(\ell\) when \(\operatorname{Frob}_{\mathfrak{p},\ell}\in G_{\ell}\) pointwise fixes a \(2\)-dimensional subspace of the \(3\)-dimensional \(\mathbf{F}_{\ell}\)-vector space \(V_{\ell}\). Vanishing of the density \(\delta_{E/K,P}\) can therefore occur 'because of \(K_{\ell}\)' in cases where \(G_{\ell}=\operatorname{Gal}(K_{\ell}/K)\) is non-trivial, but only contains elements that pointwise fix a \(2\)-dimensional subspace of \(V_{\ell}\). Our proof of Theorem 1.2 is based on a general lemma that we phrase and prove in the generality that was suggested to us by Hendrik Lenstra. It describes the linear group actions on vector spaces of finite dimension over arbitrary fields that have 'many fixpoints' in the sense of the \(3\)-dimensional example \(V_{\ell}\) that we have at hand. Let \(V\) be any vector space on which a group \(G\) acts linearly, and denote by \[V^{G}=\{v\in V:\sigma v=v\text{ for all }\sigma\in G\}\qquad\text{and}\qquad V_{G}=V/(\sum_{\sigma\in G}(\sigma-1)V)\] the maximal subspace and quotient space of \(V\) on which \(G\) acts trivially. For every \(\sigma\in G\), we have an exact sequence of vector spaces \[0\longrightarrow V^{\langle\sigma\rangle}\longrightarrow V\xrightarrow{\ \sigma-1\ }V\longrightarrow V/(\sigma-1)V\to 0\] showing that for \(V\) of finite dimension \(n\), we have \[\dim V^{\langle\sigma\rangle}\geq n-1\quad\Longleftrightarrow\quad\dim(\sigma-1)V\leq 1. \tag{9}\] **Lemma 4.1**.: _Let \(G\) be a group acting linearly on a vector space \(V\) of dimension \(n\in\mathbf{Z}_{\geq 0}\). Then the following are equivalent:_ 1. \(\dim V^{\langle\sigma\rangle}\geq n-1\) _for all_ \(\sigma\in G\)_;_ 2.
\(\dim V^{G}\geq n-1\) _or_ \(\dim V_{G}\geq n-1\)_._ Proof.: The implication \((2)\Rightarrow(1)\) is immediate, as the inequality \(\dim V_{G}\geq n-1\) implies that, for all \(\sigma\in G\), we have \(\dim(\sigma-1)V\leq 1\) and, by (9), \(\dim V^{\langle\sigma\rangle}\geq n-1\). For \((1)\Rightarrow(2)\), we can assume there exists \(\sigma\in G\) acting non-trivially on \(V\), and define subgroups \(A_{\sigma},B_{\sigma}\subset G\) by \[A_{\sigma}=\{\tau\in G\colon V^{\langle\tau\rangle}\supset V^{\langle\sigma \rangle}\}\qquad\text{and}\qquad B_{\sigma}=\{\tau\in G\colon(\tau-1)V\subset( \sigma-1)V\}.\] The equality \(A_{\sigma}=G\) implies that \(V^{G}=V^{\langle\sigma\rangle}\) has dimension \(n-1\), and the equality \(B_{\sigma}=G\) implies that \(V_{G}=V/(\sigma-1)V\) has dimension \(n-1\). In order to show that we have one of these equalities, and therefore (2), we argue by contradiction. Assume \(A_{\sigma}\) and \(B_{\sigma}\) are _strict_ subgroups of \(G\), and pick \(\tau\in G\) outside \(A_{\sigma}\cup B_{\sigma}\). Then there exist \(s\in V^{\langle\sigma\rangle}\setminus V^{\langle\tau\rangle}\) and \(t\in V^{\langle\tau\rangle}\setminus V^{\langle\sigma\rangle}\), and \((\sigma-1)V\) and \((\tau-1)V\) are _different_ 1-dimensional subspaces of \(V\) spanned by \((\sigma-1)t\) and \((\tau-1)s\), respectively. The subspace \((\tau\sigma-1)V\) is 1-dimensional and spanned by \((\tau\sigma-1)s=(\tau-1)s\), so it equals \((\tau-1)V\). It contains \((\tau\sigma-1)t=\tau(\sigma-1)t\), but since \(\tau\) acts on \((\sigma-1)t\notin(\tau-1)V\) by translation along a vector in \((\tau-1)V\), we have \(\tau(\sigma-1)t\notin(\tau-1)V\). Contradiction. 
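For the smallest interesting case, \(V=\mathbf{F}_{2}^{3}\), Lemma 4.1 can also be confirmed exhaustively by machine. The following brute-force sketch is our own illustration: it runs over all subgroups of \(\operatorname{GL}_{3}(\mathbf{F}_{2})\) generated by two elements, keeps those whose every element satisfies condition (1), and verifies condition (2) for each:

```python
# Exhaustive check of Lemma 4.1 for V = F_2^3 and 2-generated subgroups G.
from itertools import product

I3 = ((1, 0, 0), (0, 1, 0), (0, 0, 1))

def mmul(A, B):
    """Product of 3x3 matrices over F_2."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3)) & 1
                       for j in range(3)) for i in range(3))

def rank(vectors):
    """F_2-rank of a collection of 0/1-vectors of length 3."""
    basis = {}  # leading-bit position -> representative (Gaussian elimination)
    for r in vectors:
        v = r[0] * 4 + r[1] * 2 + r[2]
        while v:
            h = v.bit_length() - 1
            if h in basis:
                v ^= basis[h]
            else:
                basis[h] = v
                break
    return len(basis)

def diff(M):
    """The matrix M - id over F_2, as a tuple of rows."""
    return tuple(tuple(M[i][j] ^ I3[i][j] for j in range(3)) for i in range(3))

# condition (1) for one element: dim V^<sigma> >= 2, i.e. rank(M - id) <= 1
mats = [tuple(tuple(bits[3 * i:3 * i + 3]) for i in range(3))
        for bits in product((0, 1), repeat=9)]
S = [M for M in mats if rank(M) == 3 and rank(diff(M)) <= 1]
S_set = set(S)

def closed_in_S(A, B):
    """Subgroup generated by A, B if all its elements satisfy (1), else None."""
    G, frontier = set(), {I3, A, B}
    while frontier:
        if not frontier <= S_set:
            return None
        G |= frontier
        frontier = {mmul(X, Y) for X in frontier for Y in G} - G
    return G

checked = 0
for A, B in product(S, repeat=2):
    G = closed_in_S(A, B)
    if G is None:
        continue
    dim_VG = 3 - rank([row for M in G for row in diff(M)])    # dim V^G
    cols = [tuple(diff(M)[i][j] for i in range(3)) for M in G for j in range(3)]
    dim_V_G = 3 - rank(cols)                                  # dim V_G
    assert dim_VG >= 2 or dim_V_G >= 2                        # condition (2)
    checked += 1
print(f"Lemma 4.1 confirmed for {checked} generated subgroups")
```

Here \(\dim V^{G}\) is computed as the corank of all rows of the matrices \(\sigma-1\) stacked together, and \(\dim V_{G}\) as the corank of the span of their columns.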
For those who like to think of Lemma 4.1 in terms of matrices, Condition (1) means that every element of \(G\) has a matrix representation with respect to a suitable basis that, according to (9), can be given in one of the equivalent forms \[\left(\begin{array}{c|c}I_{n-1}&\ast\\ \hline 0&\ast\end{array}\right)\qquad\text{or}\qquad\left(\begin{array}{c|c}\ast&\ast\\ \hline 0&I_{n-1}\end{array}\right). \tag{10}\] The first form shows \(n-1\) linearly independent vectors in \(V^{\langle\sigma\rangle}\), the second starts from a vector spanning \((\sigma-1)V\). The lemma then states that under this condition, a _single_ basis for \(V\) can be chosen such that either all elements of \(G\) have a matrix representation of the first form, or they all have one of the second form. **Example 4.2**.: For an elliptic curve \(E/K\), we can apply Lemma 4.1 to the action of the Galois group \(G\) of the \(\ell\)-division field of \(E\) over \(K\) on the 2-dimensional \(\mathbf{F}_{\ell}\)-vector space \(V=E[\ell](\overline{K})\) of \(\ell\)-torsion points of \(E\). In this case the point group \(E(k_{\mathfrak{p}})\) at a prime \(\mathfrak{p}\nmid\ell\) of good reduction is of order divisible by \(\ell\) if and only if \(\operatorname{Frob}_{\mathfrak{p}}\in G\) pointwise fixes a 1-dimensional subspace of \(V\). We find that almost all local point groups \(E(k_{\mathfrak{p}})\) are of order divisible by \(\ell\) if and only if the Galois representation \(\rho_{E/K,\ell}\) of \(G_{K}\) on the group of \(\ell\)-torsion points of \(E\) can be given in matrix form as \[\rho_{E/K,\ell}\sim\begin{pmatrix}1&\ast\\ 0&\ast\end{pmatrix}\qquad\text{or}\qquad\rho_{E/K,\ell}\sim\begin{pmatrix}\ast&\ast\\ 0&1\end{pmatrix}.\] In words: either \(E(K)\) contains an \(\ell\)-torsion point, or it is \(\ell\)-isogenous over \(K\) to an elliptic curve with a \(K\)-rational \(\ell\)-torsion point.
Moreover, for \(E/K\) of the first kind, with a point \(T\in E(K)\) of order \(\ell\), the quotient curve \(E^{\prime}=E/\langle T\rangle\) is of the second kind, with the dual isogeny \(E^{\prime}\to E\) being the \(\ell\)-isogeny in question. This is a well-known fact that occurs as the very first exercise in [14, p. I-2]. Proof of Theorem 1.2.: Let \(E/K\) be an elliptic curve, and \(P\in E(K)\) a non-torsion point that is locally \(\ell\)-imprimitive. We define \(K_{\ell}=K(\ell^{-1}P)\) as in Section 2, and view \(G_{\ell}=\operatorname{Gal}(K_{\ell}/K)\subset\operatorname{GL}(V_{\ell})\) as a group of \(\mathbf{F}_{\ell}\)-linear automorphisms of the \(3\)-dimensional vector space \(V_{\ell}=\langle\ell^{-1}P\rangle/\langle P\rangle\). As every element of \(G_{\ell}\) occurs as the Frobenius of infinitely many primes of good reduction, it follows from Lemma 2.3 that all elements of \(G_{\ell}\) leave a \(2\)-dimensional subspace of \(V_{\ell}\) pointwise invariant. We can now apply our Lemma 4.1 for \(n=3\) with \(G=G_{\ell}\) and \(V=V_{\ell}\) to conclude that at least one of the following occurs: either \(G_{\ell}\) acts trivially on a \(2\)-dimensional subspace of \(V_{\ell}\), or \(G_{\ell}\) acts trivially on a \(2\)-dimensional quotient space of \(V_{\ell}\). In the first case, if \(U_{\ell}=E[\ell](\overline{K})\subset V_{\ell}\) is a subspace with trivial \(G_{\ell}\)-action, then \(E(K)\) has complete \(\ell\)-torsion and Condition B is satisfied. If \(G_{\ell}\) acts trivially on a different \(2\)-dimensional subspace \(S_{\ell}\subset V_{\ell}\), then \(S_{\ell}\) is spanned by a non-zero vector in \(U_{\ell}\cap S_{\ell}\) and the non-zero image of a point of infinite order \(Q\in\langle\ell^{-1}P\rangle\) in the \(\mathbf{F}_{\ell}\)-vector space \(V_{\ell}\). In other words: \(E(K)\) contains a torsion point of order \(\ell\) and a point \(Q\) with \(\ell Q=mP\) for some \(m\in\mathbf{Z}\) that is not divisible by \(\ell\).
Writing \(am+b\ell=1\) in \(\mathbf{Z}\), the point \(Q^{\prime}=aQ+bP\in E(K)\) satisfies \(\ell Q^{\prime}=a\ell Q+b\ell P=amP+b\ell P=P\), so Condition A is satisfied. In the second case, where \(G_{\ell}\) acts trivially on a \(2\)-dimensional quotient space \(V_{\ell}/T_{\ell}\), it acts on \(V_{\ell}\) by translation along the \(1\)-dimensional subspace \(T_{\ell}\). We will assume that \(G_{\ell}\) does not act trivially on the subspace \(U_{\ell}\), as this would bring us back in the first case, with Condition B holding. As \(U_{\ell}=E[\ell](\overline{K})\subset V_{\ell}\) is \(G_{\ell}\)-stable, we have strict inclusions \(0\subsetneq T_{\ell}\subsetneq U_{\ell}\) of \(\mathbf{F}_{\ell}[G_{\ell}]\)-modules, so \(T_{\ell}\) is a \(K\)-rational subgroup of \(E(\overline{K})\) of order \(\ell\). The corresponding isogeny \(E\to E^{\prime}=E/T_{\ell}\) is defined over \(K\), and identifies the \(\mathbf{F}_{\ell}[G_{\ell}]\)-module \(U_{\ell}/T_{\ell}\), which has trivial \(G_{\ell}\)-action, with the subgroup of \(E^{\prime}(K)\) of order \(\ell\) that is the kernel of the isogeny \(\phi:E^{\prime}\to E\) dual to \(\widehat{\phi}:E\to E^{\prime}=E/T_{\ell}\). If \(Q\in E(\overline{K})\) satisfies \(\ell Q=P\), then \(P^{\prime}=\widehat{\phi}(Q)\) is in \(E^{\prime}(K)\), as it is the image of any point in the Galois orbit \(G_{\ell}\cdot Q\subset Q+T_{\ell}\). Moreover, we have \(\phi(P^{\prime})=\phi\widehat{\phi}(Q)=\ell Q=P\), so Condition C of Theorem 1.2 is satisfied. Conversely, each of the Conditions A, B, and C guarantees that \(P\in E(K)\) is locally \(\ell\)-imprimitive. For \(A\) and \(B\) this is immediate. If Condition C holds, we have an \(\ell\)-isogeny \(\phi:E^{\prime}\to E\) defined over \(K\) and a point \(P^{\prime}\in E^{\prime}(K)\) with \(\phi(P^{\prime})=P\). Pick \(Q^{\prime}\in E^{\prime}(\overline{K})\) with \(\ell Q^{\prime}=P^{\prime}\) and put \(Q=\phi(Q^{\prime})\in E(\overline{K})\). 
Writing \(\widehat{\phi}:E\to E^{\prime}\) for the dual isogeny, we are in the situation of Example 4.2, and we have \[\ell Q=\phi(\ell Q^{\prime})=\phi(P^{\prime})=P\qquad\text{and}\qquad\widehat{\phi}Q=\widehat{\phi}\phi Q^{\prime}=\ell Q^{\prime}=P^{\prime}\in E^{\prime}(K).\] As \(Q\) is in the fibre \(\widehat{\phi}^{-1}(P^{\prime})\), the \(G_{\ell}\)-action on \(Q\bmod\langle P\rangle\in V_{\ell}\), which is by translation over \(\ell\)-torsion points, gives rise to a Galois orbit of length dividing \(\ell\). If the length is \(1\), then Condition A is satisfied, and \(P\in E(K)\) is locally \(\ell\)-imprimitive. If the length is \(\ell\), then \(G_{\ell}\) acts on \(V_{\ell}\) by translation along the \(K\)-rational subgroup \(T_{\ell}=\ker\widehat{\phi}\subset U_{\ell}=E[\ell](\overline{K})\), and the matrix representation of \(G_{\ell}\) on \(V_{\ell}\) with respect to the filtration \(T_{\ell}\subset U_{\ell}\subset V_{\ell}\) is \[\rho_{E/K,\ell}\sim\begin{pmatrix}*&*&*\\ 0&1&0\\ 0&0&1\end{pmatrix}.\] By Lemma 2.3, we find that we have \(\ell|[E(k_{\mathfrak{p}}):\langle\overline{P}\rangle]\) for every prime \(\mathfrak{p}\) of good reduction of \(E\) that is of characteristic different from \(\ell\), so \(P\in E(K)\) is locally \(\ell\)-imprimitive.

## 5. Locally \(2\)-imprimitive points

A _non-trivial_ locally \(\ell\)-imprimitive point on an elliptic curve \(E/K\) is a non-torsion point \(P\in E(K)\) for which Condition C of Theorem 1.2 holds, but not Conditions A or B. If \(P\) is such a point, \(E\) admits a \(K\)-rational \(\ell\)-isogeny \(E\to E^{\prime}\), and the Galois representation \(\overline{\rho}_{E,\ell}\) of \(G_{K}\) on \(U_{\ell}=E[\ell](\overline{K})\) is non-trivial, with image contained in a Borel subgroup of \(\operatorname{GL}(U_{\ell})\). Let \(P\in E(K)\) be a non-trivial locally \(2\)-imprimitive point.
As a Borel subgroup of \(\operatorname{GL}(U_{2})\cong\operatorname{GL}_{2}(\mathbf{F}_{2})\) has order \(2\), the representation \(\overline{\rho}_{E,2}\) is a non-trivial quadratic character, and as a Weierstrass model for \(E\) we can take \[E:y^{2}=x(x^{2}+ax+b)\qquad\text{with }b,d=a^{2}-4b\in K^{*}. \tag{11}\] Here \((0,0)\) is the \(K\)-rational point of order \(2\), and \(\overline{\rho}_{E,2}\) corresponds to the \(2\)-division field of \(E\) over \(K\), which equals \(K(\sqrt{d})\). Addition by \((0,0)\) induces an involution \[(x,y)\mapsto(x_{1},y_{1})=(b/x,-by/x^{2})\] on the function field \(K(E)=K(x,y)\) of \(E\), and the invariant field \(K(x+x_{1},y+y_{1})\) is the function field of the \(2\)-isogenous curve \(E^{\prime}=E/\langle(0,0)\rangle\). Choosing \(u=x+x_{1}+a\) and \(v=y+y_{1}\) as generators for \(K(E^{\prime})\), we obtain a Weierstrass model \[E^{\prime}:v^{2}=u(u^{2}-2au+d)\qquad\text{with }d,d^{\prime}=(-2a)^{2}-4d=16b\in K^{*} \tag{12}\] for \(E^{\prime}\) that is of the same form (11), and an explicit \(2\)-isogeny \(\varphi:E\to E^{\prime}\) given by \[\varphi:(x,y)\longmapsto(u,v)=(x+x_{1}+a,y+y_{1})=\left(\frac{y^{2}}{x^{2}},(1-\frac{b}{x^{2}})y\right). \tag{13}\] An affine point \((u,v)\in E^{\prime}(K)\) different from \((0,0)\) is in the image of \(E(K)\) under this isogeny if and only if \(u\in K^{*}\) is a square. The point \((0,0)\) is in the image if and only if \(d\) is a square in \(K^{*}\), which amounts to saying that \(E(K)\) has full \(2\)-torsion. This is not the case for our \(E\). As \(E^{\prime}\) is again of the form (11), with \(d\) in the role of \(b\), one sees that the isogeny \(\widehat{\varphi}:E^{\prime}\to E\) dual to \(\varphi\) is given by \[\widehat{\varphi}:(u,v)\longmapsto\left(\frac{v^{2}}{4u^{2}},(1-\frac{d}{u^{2}})\frac{v}{8}\right). \tag{14}\] Proof of Theorem 1.3.: Let \(E/K\) be an elliptic curve with \(\#E[2](K)=2\).
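As an aside, the isogeny formula (13) is easy to sanity-check by a small computation of our own, with arbitrary rational samples: the identity to test only involves \(y^{2}\), so no square roots are needed, and the curve relation \(y^{2}=x(x^{2}+ax+b)\) must force \(v^{2}=u(u^{2}-2au+d)\):

```python
# Check that the image of (13) satisfies the Weierstrass equation (12) of E'.
# The random rational samples are arbitrary; only y^2 enters the identity.
from fractions import Fraction
import random

random.seed(1)
for _ in range(25):
    a, b, x = (Fraction(random.randint(-9, 9)) for _ in range(3))
    if x == 0:
        continue
    y2 = x * (x * x + a * x + b)      # y^2 for a point of (11) with abscissa x
    d = a * a - 4 * b
    u = y2 / x ** 2                   # first coordinate of (13)
    v2 = (1 - b / x ** 2) ** 2 * y2   # square of the second coordinate of (13)
    assert v2 == u * (u ** 2 - 2 * a * u + d)
print("the image of (13) satisfies the equation (12) of E'")
```

Exact `Fraction` arithmetic avoids any rounding issues in the comparison.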
We take \((0,0)\) as the point of order \(2\) in a Weierstrass model of \(E\) in order to obtain an equation \(E:y^{2}=x(x^{2}+ax+b)\) as in (11), with \(d=a^{2}-4b\in K^{*}\setminus K^{*2}\). Any quadratic twist of \(E\) over \(K\) is of the form \[E_{D}:y^{2}=x(x^{2}+aDx+bD^{2})\] for some \(D\in K^{*}\) that we may still rescale by squares in \(K^{*}\), and we can define the \(2\)-isogeny \(\varphi:E_{D}\to E^{\prime}_{D}=E/\langle(0,0)\rangle\) as above by replacing \((a,b)\) in (11), (12), and (13) by \((aD,bD^{2})\). Any point \(P\in E_{D}(K)\) satisfying Condition C from Theorem 1.2 is in the image \(\widehat{\varphi}(E^{\prime}_{D}(K))\) of the isogeny \(\widehat{\varphi}:E^{\prime}_{D}\to E_{D}\) dual to \(\varphi\). This means that its \(x\)-coordinate is a square in \(K^{*}\), which we can take to be \(1\) after rescaling the model of \(E\) over \(K\). Thus, the twists that are relevant for us are those for which the point \(P=(1,\pm Y)\in E_{D}(\overline{K})\) is \(K\)-rational. We want \((D,Y)\) to be a \(K\)-rational point on the conic \(Y^{2}=1+aD+bD^{2}\) different from \((0,\pm 1)\). Such points are obtained as the second point of intersection of this conic with the line \(Y-1=\lambda D\) through \((0,1)\) with slope \(\lambda\in K\). We find that the twists \(E_{\lambda}=E_{D_{\lambda}}\) of \(E\) by \[D_{\lambda}=\frac{a-2\lambda}{\lambda^{2}-b}\qquad\text{with }\lambda\in K \setminus\{a/2,\pm\sqrt{b}\} \tag{15}\] come by construction with a \(K\)-rational point \[P_{\lambda}=(1,Y_{\lambda})=\left(1,\frac{\lambda^{2}-a\lambda+b}{\lambda^{2 }-b}\right)\in E_{\lambda}(K). \tag{16}\] We can find \(K_{2}=K(\frac{1}{2}P_{\lambda})\) by solving \(2Q=\widehat{\varphi}(\varphi Q)=P_{\lambda}\) in \(E_{\lambda}(\overline{K})\). The equation \(\widehat{\varphi}(u,v)=P_{\lambda}\) has \(2\) solutions in \(E^{\prime}_{\lambda}(K)\), since we chose for the first coordinate of \(P_{\lambda}\) the square value \(1\). 
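As a quick check of (15) and (16), again our own and with arbitrary sample parameters: the point \(P_{\lambda}=(1,Y_{\lambda})\) does lie on the twisted curve \(E_{D_{\lambda}}\), since evaluating \(y^{2}=x(x^{2}+aDx+bD^{2})\) at \(x=1\) gives \(Y_{\lambda}^{2}=1+aD_{\lambda}+bD_{\lambda}^{2}\):

```python
# Verify that (D_lambda, Y_lambda) of (15)-(16) satisfies the conic relation
# Y^2 = 1 + aD + bD^2, i.e. that (1, Y_lambda) lies on the twist E_{D_lambda}.
from fractions import Fraction

def twist_data(a, b, lam):
    """D_lambda and Y_lambda of (15) and (16)."""
    D = Fraction(a - 2 * lam, lam * lam - b)
    Y = Fraction(lam * lam - a * lam + b, lam * lam - b)
    return D, Y

for a, b, lam in [(1, 2, 3), (-4, 7, 2), (5, -1, 4)]:  # arbitrary samples
    D, Y = twist_data(a, b, lam)
    assert Y * Y == 1 + a * D + b * D * D, (a, b, lam)
print("P_lambda lies on the twist E_lambda for all samples")
```

The samples avoid the excluded values \(\lambda=a/2\) and \(\lambda=\pm\sqrt{b}\), for which \(D_{\lambda}\) degenerates.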
By (14), these are the \(K\)-rational points \((u,v)\) different from \((0,0)\) that have \(v^{2}=4u^{2}\) and satisfy the equation \[v^{2}=u(u^{2}-2aD_{\lambda}u+dD_{\lambda}^{2})\] defining \(E^{\prime}_{\lambda}\). With \(d=a^{2}-4b\), the resulting equation \[u^{2}-2aD_{\lambda}u-4u+dD_{\lambda}^{2}=(u-aD_{\lambda}-2)^{2}-4(1+aD_{\lambda}+bD_{\lambda}^{2})=0\] for \(u\) yields two solutions \(u_{1},u_{2}=aD_{\lambda}+2\pm 2Y_{\lambda}\in K\) with product \(u_{1}u_{2}=dD_{\lambda}^{2}\). Writing \(D_{\lambda}\) and \(P_{\lambda}\) as in (15) and (16), we find \(aD_{\lambda}+2-2Y_{\lambda}=d/(\lambda^{2}-b)\), so the minimal extension \(K\subset K_{2}\) for which the corresponding points are in \(\varphi[E(K_{2})]\) is \[K_{2}=K(\tfrac{1}{2}P_{\lambda})=K(\sqrt{u_{1}},\sqrt{u_{2}})=K(\sqrt{d},\sqrt{\lambda^{2}-b}).\] If we avoid the values \(\lambda\in K\) for which \(\lambda^{2}-b\) is either a square or \(d\) times a square in \(K\) (this includes the \(3\) values excluded in (15)), then \(K_{2}\) is a biquadratic extension of \(K\) which, unsurprisingly, has the \(2\)-division field \(K(\sqrt{d})\) of \(E/K\) as one of its quadratic subextensions. For these \(\lambda\), our matrix representation from (5) and (6) of \(G_{2}=\operatorname{Gal}(K_{2}/K)\) becomes \[\operatorname{Gal}(K_{2}/K)=\left\{\begin{bmatrix}1&x&y\\ 0&1&0\\ 0&0&1\end{bmatrix}:x,y\in\mathbf{F}_{2}\right\},\] which implies that \(P_{\lambda}\in E_{\lambda}(K)\) is globally \(2\)-primitive, but locally \(2\)-imprimitive. There is still the possibility that \(P_{\lambda}\), though globally \(2\)-primitive, is a torsion point of even order \(m>2\). Examples: the point \((1,-3)\) on \(y^{2}=x(x^{2}-7x+3)\) has order \(4\), and the point \((1,1)\) on \(y^{2}=x(x^{2}+3x-3)\) has order \(6\). However, for fixed \(K\) there are only finitely many possibilities for \(m\) by Merel's theorem.
For every given even \(m\), the point \(P_{\lambda}\) is of order \(m\) if and only if the \(m\)-th division polynomial \(\psi_{m}(x)=y^{-1}f(x,E_{\lambda})\) vanishes at \(x=1\). This happens for only finitely many values of \(\lambda\), as \(f(1,E_{\lambda})\) is a non-constant rational function of \(\lambda\) if we fix \(a,b\in K\). Our understanding of local \(2\)-imprimitivity is more or less complete, as every non-trivial locally \(2\)-imprimitive point on an elliptic curve arises as in the construction in the proof of Theorem 1.3. Indeed, the hypothesis \(\#E[2](K)=2\) implies that \(E\) has a model as in (11), and as the \(x\)-coordinate of a point \(P\) satisfying Condition C of Theorem 1.2 is a square, the model can be scaled over \(K\) to have \(P=(1,y)\in E(K)\).

## 6. Locally \(3\)-imprimitive points

By Theorem 1.2, every pair \((E,P)\) of an elliptic curve \(E/K\) with a non-trivial locally \(\ell\)-imprimitive point \(P\in E(K)\) arises as the \(\ell\)-isogenous image of a \(K\)-rational curve-point-pair \((E^{\prime},P^{\prime})\) for which the kernel of \(E^{\prime}\to E\) is generated by an \(\ell\)-torsion point \(T\in E^{\prime}(K)\). In this situation, the Galois representations of \(G_{K}\) on the \(\ell\)-torsion subgroups of \(E^{\prime}\) and \(E=E^{\prime}/\langle T\rangle\) are, with respect to a suitable basis, of the form \[\rho_{E^{\prime}/K,\ell}\sim\begin{pmatrix}1&*\\ 0&\omega_{\ell}\end{pmatrix}\qquad\text{and}\qquad\rho_{E/K,\ell}\sim\begin{pmatrix}\omega_{\ell}&*\\ 0&1\end{pmatrix}. \tag{17}\] Here \(\omega_{\ell}\) is the cyclotomic character corresponding to the extension \(K\subset K(\zeta_{\ell})\). For \(\ell>2\), the \(K\)-rationality of \(\ell\)-torsion points of \(E\) is not preserved under twisting of \(E\), so there is no direct analogue of Theorem 1.3 for \(\ell\neq 2\). In this section we focus on the case \(\ell=3\).
**Lemma 6.1**.: _Let \(E/K\) be an elliptic curve of discriminant \(\Delta_{E}\) for which the Galois representation \(\rho_{E/K,3}\) on \(U_{3}=E[3](\overline{K})\) is of one of the two forms in (17). Then the \(3\)-division field of \(E\) over \(K\) equals the splitting field of the polynomial \(X^{3}-\Delta_{E}\)._ Proof.: Let \(H_{3}=\operatorname{Gal}(K(E[3])/K)\subset\operatorname{GL}(U_{3})\cong \operatorname{GL}_{2}(\mathbf{F}_{3})\) be the image of \(\rho_{E/K,3}\), and denote by \(\psi_{3}=\prod_{i=1}^{4}(X-x_{i})\in K[X]\) the \(3\)-division polynomial of \(E\) over \(K\). The quartic polynomial \(\psi_{3}\) comes with a Galois resolvent \(\delta_{3}\in K[X]\), a cubic having \[\alpha_{1}=x_{1}x_{2}+x_{3}x_{4},\qquad\alpha_{2}=x_{1}x_{3}+x_{2}x_{4},\qquad \alpha_{3}=x_{1}x_{4}+x_{2}x_{3}\] as its roots. Under the permutation action of \(\operatorname{GL}(U_{3})/\langle-1\rangle=S_{4}\) on the roots of \(\psi_{3}\), the normal subgroup \(V_{4}\triangleleft S_{4}\) of order \(4\) fixes each of these \(3\) roots \(\alpha_{i}\). The two natural surjections of Galois groups \[H_{3}\to\operatorname{Gal}(\psi_{3})\to\operatorname{Gal}(\delta_{3}) \tag{18}\] are isomorphisms as they arise as a restriction to suitable subgroups of the generic group theoretical maps \[\operatorname{GL}(U_{3})\to\operatorname{GL}(U_{3})/\langle-1\rangle=S_{4}\to S _{4}/V_{4}=S_{3}.\] More precisely, the first surjection in (18) is injective as we have \(-1\notin H_{3}\) for \(H_{3}\) as in (17), and the second is because we have \(\operatorname{Gal}(\psi_{3})\cap V_{4}=1\) in \(S_{4}\): the \(x\)-coordinate of the \(3\)-torsion point spanning the Galois invariant subspace corresponding to the first column of the matrices is fixed by \(\operatorname{Gal}(\psi_{3})\). Viewing \(H_{3}\) as \(\operatorname{Gal}(\delta_{3})\), we may finish the proof by quoting a classical formula [13, p. 
305] that expresses the three cube roots of \(\Delta_{E}\) as \(b_{4}-3\alpha_{i}\) (\(i=1,2,3\)), with \(b_{4}\in K\) a coefficient from the Weierstrass model of \(E\). This yields \(K(E[3])=\operatorname{Split}_{K}(\delta_{3})=\operatorname{Split}_{K}(X^{3}-\Delta_{E})\).

From Lemma 6.1, we see that for the representations in (17), the subgroup \(\binom{1\ *}{0\ 1}\) corresponds to the extension \(K(\zeta_{3})\subset K(\zeta_{3},\sqrt[3]{\Delta})\) for the discriminant values \(\Delta=\Delta_{E^{\prime}}\) and \(\Delta_{E}\). We can write the curve \(E^{\prime}\) in Deuring normal form [6, p. 89] as \[E^{\prime}:y^{2}+axy+by=x^{3}\qquad\text{with $a,b\in K$ and $\Delta_{E^{\prime}}=b^{3}(a^{3}-27b)\in K^{*}$.} \tag{19}\] Here \(T=(0,0)\in E^{\prime}(K)\) is the point of order \(3\), and the quotient curve \(E=E^{\prime}/\langle T\rangle\) has Weierstrass equation \[E:y^{2}+axy+by=x^{3}-5abx-(a^{3}+7b)b, \tag{20}\] with the explicit formula for the \(3\)-isogeny \(\varphi_{3}:E^{\prime}\to E=E^{\prime}/\langle T\rangle\) being given by \[\varphi_{3}(x,y)=\left(\frac{x^{3}+abx+b^{2}}{x^{2}},\frac{y(x^{3}-abx-2b^{2})-b(ax+b)^{2}}{x^{3}}\right). \tag{21}\] As \(E\) has discriminant \(\Delta_{E}=b(a^{3}-27b)^{3}\), the \(3\)-division field \(K(E[3])\) over \(K\) equals \(K(\zeta_{3})\) if and only if \(b\in K^{*}\) is a cube and different from \((a/3)^{3}\). If \(a\) is non-zero, we can rescale \((x,y)\mapsto(a^{2}x,a^{3}y)\) and simplify (19) to \[E^{\prime}_{b}:y^{2}+xy+by=x^{3}. \tag{22}\] For \(K\) a number field, we have infinitely many pairwise different \(3\)-isogenous images \((E_{b},P_{b})=\varphi_{3}[(E^{\prime}_{b},P^{\prime}_{b})]\) for which \(P_{b}\in E_{b}(K)\) is non-trivially locally \(3\)-imprimitive.
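As a sanity check that is not part of the original argument, the isogeny formula (21) can be verified with exact rational arithmetic: whenever \((x,y)\) satisfies the Deuring equation (19), its image under \(\varphi_{3}\) satisfies the equation (20) of \(E\). A minimal sketch, with the helper names being ours:

```python
from fractions import Fraction as Q

def phi3(x, y, a, b):
    # The 3-isogeny of equation (21) on E': y^2 + a x y + b y = x^3.
    X = (x**3 + a*b*x + b**2) / x**2
    Y = (y*(x**3 - a*b*x - 2*b**2) - b*(a*x + b)**2) / x**3
    return X, Y

def check(a, x, y):
    # Choose b so that (x, y) lies on E', then verify that its image
    # under phi3 satisfies the equation (20) of E = E'/<T>.
    b = (x**3 - y**2 - a*x*y) / y          # from y^2 + a x y + b y = x^3
    assert y**2 + a*x*y + b*y == x**3      # (x, y) is indeed on E'
    X, Y = phi3(x, y, a, b)
    return Y**2 + a*X*Y + b*Y == X**3 - 5*a*b*X - (a**3 + 7*b)*b

# A few sample curve-point pairs; exact Fractions keep the check honest.
assert all(check(Q(a), Q(x), Q(y))
           for a, x, y in [(1, 1, 2), (2, 1, 3), (-3, 2, 5), (0, 1, 4)])
```

This only spot-checks the identity at sample points; the symbolic identity itself is the content of (19)-(21).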
**Theorem 6.2**.: _For \(K\) a number field, take \(b=b(X)=(1-X-X^{2})/X\in K(X)\) and define the associated elliptic curve over \(K(X)\) as_ \[E_{b}:y^{2}+xy+by=x^{3}-5bx-(1+7b)b.\] _Then for infinitely many \(t\in K^{*}\), the specialization \(E_{b(t)}\) of \(E_{b}\) is an elliptic curve over \(K\) for which_ \[\left(\frac{(t^{2}+t)(t^{2}-1)+1}{t^{2}},\frac{(1-t^{2})\left(t^{4}+2t^{3}+t-1\right)}{t^{3}}\right)\in E_{b(t)}(K) \tag{23}\] _is a non-trivial locally 3-imprimitive point._

Proof.: For the curve \(E^{\prime}_{b}\) in (22) with \(b\in K(X)\) as defined, the point \(P^{\prime}_{b}=(1,X)\) lies on \(E^{\prime}_{b}\) by the very choice of \(b\): it satisfies \(X^{2}+X+bX=1\). Under the \(3\)-isogeny \(\varphi_{3}:E^{\prime}_{b}\to E_{b}\) from (21) to the curve \(E_{b}\) obtained by putting \(a=1\) in (20), it is mapped to \[P_{b}=(1+b+b^{2},(1-b-2b^{2})X-b(1+b)^{2})\in E_{b}(K(X)).\] Under the specialization \(X=t\in K^{*}\), we obtain a point \(P_{b(t)}\) on the curve \(E_{b(t)}\) defined over \(K\) that is given by (23). We are only interested in those specializations for which \(E_{b(t)}\) is an elliptic curve. As these are all \(t\in K^{*}\) for which \(b(t)\notin\{0,1/27\}\), at most \(4\) 'bad' values of \(t\) are excluded. Also, by the same argument as we gave for \(P_{\lambda}\) in the case \(\ell=2\), there are only finitely many \(t\in K^{*}\) for which \(P_{b(t)}\) is a torsion point. These finitely many \(t\) we also exclude as 'bad' values. We saw already that \(E_{b}\) has \(3\)-division field \(K(\zeta_{3},\sqrt[3]{b})\), and an explicit computation shows that the \(3\)-division field of the point \(P_{b}\in E_{b}(K(X))\) equals \[K(X)(\tfrac{1}{3}P_{b})=K(\zeta_{3},\sqrt[3]{b},\sqrt[3]{X})=K(\zeta_{3},\sqrt[3]{X},\sqrt[3]{X^{2}+X-1}).\] Over \(K(\zeta_{3},X)\), the elements \(X\) and \(X^{2}+X-1\) have 'independent' cube roots: it suffices to look at their ramification loci.
It follows that the Galois group of the \(3\)-division field of \(P_{b}\) over \(K(X)\) may be described as \[\operatorname{Gal}(K(X)(\tfrac{1}{3}P_{b})/K(X))=\left\{\begin{bmatrix}\omega_{3}&x&y\\ 0&1&0\\ 0&0&1\end{bmatrix}:x,y\in\mathbf{F}_{3}\right\},\] with \(\omega_{3}\) denoting the \(\mathbf{F}_{3}^{*}\)-valued character corresponding to the (possibly trivial) extension \(K(X)\subset K(X,\zeta_{3})\). By Hilbert irreducibility, it follows that for infinitely many \(t\in K^{*}\) outside the finite set of 'bad' values, the \(3\)-division field of the point \(P_{b(t)}\in E_{b(t)}(K)\) has the 'same' Galois group over \(K\), making it into a point that is globally \(3\)-primitive, but locally \(3\)-imprimitive. As \(E_{b(t)}\) does not have complete 3-torsion over \(K\), we conclude that the point \(P_{b(t)}\) given in (23) is a non-trivial locally 3-imprimitive point.

**Remark 6.3**.: The construction in the proof of Theorem 6.2 excludes all specializations for which \(b=b(t)\in K^{*}\) is a cube and the elliptic curve in (22) has 3-division field \(K(\zeta_{3})\). In this special case, we can also equip \(E=E_{b}\) with a non-trivial locally 3-imprimitive point for infinitely many \(b\in K^{*}{}^{3}\). We first write \(b=c^{-3}\) and transform the curve under \((x,y)\mapsto(c^{-2}x,c^{-3}y)\) into \(E^{\prime}:y^{2}+cxy+y=x^{3}\). As in the previous case, we have \(P^{\prime}=(1,t)\in E^{\prime}(K)\) for \(c=(-t^{2}-t+1)/t\), and the image of \(P^{\prime}\) under the map \[\varphi:E^{\prime}\to E=E^{\prime}/\langle(0,0)\rangle:y^{2}+cxy+y=x^{3}-5cx-(c^{3}+7)\] is the point \(P=\varphi_{3}(P^{\prime})=((-t^{2}+t+1)/t,(t^{2}-1)/t^{2})\in E(K)\), for which the 3-division field \(K(\frac{1}{3}P)\) is equal to \(K(\zeta_{3},\sqrt[3]{t})\). For almost all \(t\notin K^{*}{}^{3}\), this makes \(P\) into a globally 3-primitive point that is locally 3-imprimitive.
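The on-curve identity underlying Theorem 6.2 reduces to polynomial algebra in \(t\); the sketch below (not part of the proof) spot-checks at a few sample values with exact rational arithmetic that the point (23) satisfies the equation of \(E_{b(t)}\):

```python
from fractions import Fraction as Q

def on_curve(t):
    # The point (23) of Theorem 6.2 on E_b with b = b(t) = (1 - t - t^2)/t.
    t = Q(t)
    b = (1 - t - t**2) / t
    xP = ((t**2 + t)*(t**2 - 1) + 1) / t**2
    yP = (1 - t**2)*(t**4 + 2*t**3 + t - 1) / t**3
    lhs = yP**2 + xP*yP + b*yP
    rhs = xP**3 - 5*b*xP - (1 + 7*b)*b
    return lhs == rhs

# Exact check at several specializations t (any nonzero t works).
assert all(on_curve(t) for t in [2, 3, 5, -7, Q(1, 2)])
```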
We conclude our discussion for \(\ell=3\) with the remaining case in which the curve \(E^{\prime}\) in (19) has \(a=0\). In this case \(E^{\prime}\) has \(j\)-invariant \(0\), and writing \(c=b/2\) we may rescale the equation by \(y\mapsto y-c\) to the more familiar shape \(E^{\prime}:y^{2}=x^{3}+c^{2}\), with 3-torsion point \(T=(0,c)\) and CM by \(\mathbf{Z}[\zeta_{3}]\). We equip \(E^{\prime}\) with a \(K\)-rational point \((c,c\sqrt{c+1})\) by putting \(c=s^{2}-1\) with \(s\in K\). This leads to a 1-parameter family of 3-isogenies \[\varphi_{3}:E^{\prime}_{s}:y^{2}=x^{3}+(s^{2}-1)^{2}\longrightarrow E_{s}:y^{2}=x^{3}-27(s^{2}-1)^{2}\] \[P^{\prime}_{s}=(s^{2}-1,s(s^{2}-1))\longmapsto P_{s}=(s^{2}+3,s(s^{2}-9))\] between CM-curves with \(j\)-invariant \(0\). In this case the 3-division field of \(E_{s}\) over \(K\) is \(K(\zeta_{3},\sqrt[3]{2(s^{2}-1)})\), and the 3-division field of \(P_{s}\) over \(K\) equals \[K(\zeta_{3},\sqrt[3]{2(s^{2}-1)},\sqrt[3]{4(s+1)}).\] Again, for \(s\in K\) outside a thin set, the point \(P_{s}\in E_{s}(K)\) is globally 3-primitive but locally 3-imprimitive.

## 7. Further examples

Over \(K=\mathbf{Q}\), non-trivial locally \(\ell\)-imprimitive points can only occur for primes \(\ell\leq 7\). Examples for \(\ell=5\) and also \(\ell=7\) can be found by the techniques that we employed for \(\ell=3\), but the formulas and resulting curves rapidly become less suitable for presentation on paper.

### Curves with a locally 5-imprimitive point

In this case, we start from Tate's normal form \[E^{\prime}:y^{2}+(1-c)xy-cy=x^{3}-cx^{2} \tag{24}\] that parametrises elliptic curves with \((0,0)\) as a rational point of order 5 (see Kulesz [7]). It has further points \((0,c)\), \((c,0)\) and \((c,c^{2})\) of order 5, and its discriminant equals \(\Delta_{E^{\prime}}=c^{5}\left(c^{2}-11c-1\right)\).
Using Vélu's formula [18] or invoking Pari-GP, we compute the Weierstrass equation for the 5-isogenous curve \(E=E^{\prime}/\langle(0,0)\rangle\) \[E=E_{c}:y^{2}+(1-c)xy-cy=x^{3}-cx^{2}-5c(c^{2}+2c-1)x-c(c^{4}+10c^{3}-5c^{2}+15c-1),\] and also the explicit \(5\)-isogeny \(\varphi_{5}:E^{\prime}\to E\). The discriminants involved are \(\Delta_{E^{\prime}}=c^{5}(c^{2}-11c-1)\) and \(\Delta_{E}=c(c^{2}-11c-1)^{5}\), much like we saw for \(\ell=3\). The \(5\)-torsion representations of \(E^{\prime}\) and \(E\) are as in (17), and even though the proof of Lemma 6.1 for \(\ell=3\) does not generalize to \(\ell=5\), we found by a direct calculation that the \(5\)-division fields are \(K(E^{\prime}[5])=K(\zeta_{5},\sqrt[5]{c^{2}-11c-1})\) and \(K(E[5])=K(\zeta_{5},\sqrt[5]{c})\): generated over \(K(\zeta_{5})\) by the \(5\)-th root of the discriminant. We can equip \(E^{\prime}\) with a \(K\)-rational point \(P_{t}^{\prime}=(t,t)\) by putting \(c=t(2-t)\), and compute its image \(P_{t}=\varphi_{5}(P^{\prime})\in E_{t(2-t)}(K)\) as \[P_{t}=\big{(}\tfrac{2t^{4}-8t^{3}+11t^{2}-6t+2}{(t-1)^{2}},-\tfrac{t^{8}-7t^{7}+19t^{6}-23t^{5}+4t^{4}+23t^{3}-31t^{2}+19t-4}{(t-1)^{3}}\big{)}.\] The corresponding \(5\)-division field of \(P_{t}\) is \[K(\tfrac{1}{5}P_{t})=K(\zeta_{5},\sqrt[5]{t},\sqrt[5]{t-2}).\] If this is an extension of degree \(5\) of \(K(\zeta_{5},\sqrt[5]{c})=K(\zeta_{5},\sqrt[5]{t(t-2)})\), then \(P_{t}\) is a globally \(5\)-primitive but locally \(5\)-imprimitive point in \(E_{t(2-t)}(K)\).

**Example 7.1**.: Take \(K=\mathbf{Q}\). For \(t=1\) the point \(P_{t}\) above is the zero point as \(P_{t}^{\prime}\) is \(5\)-torsion, for \(t=2\) we have \(c=0\) and \(E\) is singular, while for \(t=3\) and \(4\) we encounter the 'accidents' \(t=-c\) and \(tc=2^{5}\) leading to points \(P_{t}\in 5E_{t(2-t)}(\mathbf{Q})\).
For \(t=5\) we obtain the point \(P_{5}=(497/16,-73441/64)\) on \[E_{-15}:y^{2}+16xy+15y=x^{3}+15x^{2}+14550x+232860,\] which is the curve \(5835.\mathrm{c2}\) in the LMFDB database. Note that for \(c=-15\) we have \[c(c^{2}-11c-1)=-5835=-3\cdot 5\cdot 389.\] The locally \(5\)-imprimitive point \(P_{5}\) is a generator of \(E_{-15}(\mathbf{Q})\cong\mathbf{Z}\). In fact, \(P_{t}\) will be globally \(5\)-primitive but locally \(5\)-imprimitive in \(E_{t(2-t)}(\mathbf{Q})\) for all \(t\in\mathbf{Z}_{\geq 5}\) that are not a fifth power or a fifth power plus \(2\), as for these \(t\) the subgroup of \(\mathbf{Q}^{*}/\mathbf{Q}^{*}{}^{5}\) generated by \(t\) and \(t-2\) has order \(25\).

### Curves with a locally \(7\)-imprimitive point

Again we start from Tate's normal equation \[E^{\prime}:y^{2}+(1-c)xy-by=x^{3}-bx^{2}\] but now we do not impose \(b=c\) as for \(\ell=5\), but instead \[c=d^{2}-d\quad\text{ and }b=d^{3}-d^{2}.\] The curve \(E^{\prime}=E_{d}^{\prime}\) parametrizes [7] elliptic curves with \((0,0)\) as point of order \(7\). A Weierstrass equation for the \(7\)-isogenous curve \(E=E^{\prime}/\langle(0,0)\rangle\) is \[E:y^{2}+(1-c)xy-by=x^{3}-bx^{2}-5\left(2b^{2}+b\left(c^{2}-3c-2\right)+c\left(c^{2}+4c+1\right)\right)x-b^{2}\left(12c^{2}+c+24\right)-6b^{3}+b\left(-c^{4}+9c^{3}+46c^{2}+24c+2\right)-c\left(c^{4}+16c^{3}+36c^{2}+16c+1\right).\] It has discriminant \(\Delta_{E}=d(d-1)(d^{3}-8d^{2}+5d+1)^{7}\), and this time we find its \(7\)-division field to be \(K(E[7])=K(\zeta_{7},\sqrt[7]{d(d-1)^{2}})\). We equip \(E^{\prime}\) with a \(K\)-rational point \(P_{t}^{\prime}=(d^{2}t,d^{3}t)\) by putting \(d=d(t)=(t+1)/(t^{2}-t+1)\).
The image of \(P_{t}^{\prime}\) under the \(7\)-isogeny \(E^{\prime}\to E\) is \[P_{t}=\left(-\frac{C(t)}{(2t-1)^{2}(t-1)^{2}\left(t^{2}-t+1\right)^{4}},\frac{D(t)}{(t-1)^{3}(2t-1)^{3}\left(t^{2}-t+1\right)^{6}}\right)\in E(K)\] for certain polynomials \(C(t)\) and \(D(t)\) in \(\mathbf{Z}[t]\) of degree \(12\) and \(18\). In terms of \(t\), the \(7\)-division field is \(K(E[7])=K(\zeta_{7},\sqrt[7]{t^{2}(t+1)(t-2)^{2}(t^{2}-t+1)^{4}})\), and the \(7\)-division field of \(P_{t}\) is \[K(\tfrac{1}{7}P_{t})=K(E[7])\left(\sqrt[7]{\tfrac{t(t+1)}{t-2}}\right)=K\left(\zeta_{7},\sqrt[7]{\tfrac{t(t^{2}-t+1)}{(t+1)}},\sqrt[7]{\tfrac{t(t+1)}{(t-2)}}\right).\] The point \(P_{t}\) is a globally \(7\)-primitive but locally \(7\)-imprimitive point when the extension \(K(\zeta_{7})\subset K(\tfrac{1}{7}P_{t})\) has its generic degree \(7^{2}\).

**Example 7.2**.: Take \(K=\mathbf{Q}\). For \(t=1\) the point \(P_{t}\) above is the zero point as \(P_{t}^{\prime}\) is \(7\)-torsion, and for \(t=2\) the curve \(E^{\prime}\) is singular. For \(t=3\) and \(d=\tfrac{4}{7}\) however we obtain the point \(P_{3}=(286019/490^{2},15951227/490^{3})\) on \[E:y^{2}+\tfrac{61}{7^{2}}xy+\tfrac{48}{7^{3}}y=x^{3}+\tfrac{48}{7^{3}}x^{2}-\tfrac{774780}{7^{7}}x-\tfrac{1047829260}{7^{11}},\] which is the curve 20622.j1 with minimal model \[E_{0}:y^{2}+xy=x^{3}-5455771x-5039899603,\] in the LMFDB database. Our locally \(7\)-imprimitive point \(P_{3}\) is a generator of \(E(\mathbf{Q})\cong\mathbf{Z}\). On \(E_{0}\) the corresponding generator is \((328219/10^{2},109777927/10^{3})\).

## 8. A composite level obstruction

So far we have focused on non-trivial obstructions to local primitivity at prime level \(\ell\), as this is a new phenomenon in the elliptic primitive root case III that does not arise in the multiplicative primitive root case I and the cyclic reduction case II.
In all three cases, there exist obstructions of different nature at composite levels that arise from the _entanglement_ between finitely many of the corresponding division fields \(K_{\ell}\). These obstructions do not arise over \(K=\mathbf{Q}\), and most examples in the cases I and II are created by base changing to a well-chosen finite extension of the fields of definition \(\mathbf{Q}(x)\) and \(\mathbf{Q}(j_{E})\). Again, case III is different here, as entanglement obstructions already occur over \(\mathbf{Q}\). In this section we construct a level \(6\) obstruction.

Let \(E/K\) be an elliptic curve with \(\#E[2](K)=2\), and \(P\in E(K)\) a point of infinite order. Then the \(2\)-division field \(K(E[2](\overline{K}))\) is a quadratic extension of \(K\). Assume that the \(2\)-division field \(K(\tfrac{1}{2}P)\) of \(P\) is of maximal degree \(4\) over it. Then \(G_{2}=\operatorname{Gal}(K(\tfrac{1}{2}P)/K)\) is a dihedral group of order \(8\) for which the matrix representation (6) on \(V_{2}=\langle\tfrac{1}{2}P\rangle/\langle P\rangle\) has the form \[G_{2}=\left\{\begin{bmatrix}1&a&c\\ 0&1&b\\ 0&0&1\end{bmatrix}:a,b,c\in\mathbf{F}_{2}\right\}\subset\operatorname{GL}_{3}(\mathbf{F}_{2}). \tag{25}\] There is a unique subfield \(L\subset K(\tfrac{1}{2}P)\) with Galois group over \(K\) isomorphic to the Klein \(4\)-group \(V_{4}=C_{2}\times C_{2}\), and we can view \(a\) and \(b\) in the matrix representation (25) of \(G_{2}\) as \(\mathbf{F}_{2}\)-valued quadratic characters on \(G_{2}\) that generate the character group of the quotient \(\operatorname{Gal}(L/K)\cong V_{4}\) of \(G_{2}\).
For a prime \(\mathfrak{p}\nmid 2\Delta_{E}\), the point \(P\) generates a subgroup of odd index in \(E(k_{\mathfrak{p}})\) if and only if for its Frobenius \(\operatorname{Frob}_{\mathfrak{p},2}\in G_{2}\), viewed as a matrix as in (25), the endomorphism \((\operatorname{Frob}_{\mathfrak{p},2}-\operatorname{id}_{2}):V_{2}\to V_{2}\) has \(\mathbf{F}_{2}\)-rank at least \(2\) (Lemma 2.3). We obtain the criterion \[2\nmid[E(k_{\mathfrak{p}}):\langle\overline{P}\rangle]\quad\Longleftrightarrow\quad a(\operatorname{Frob}_{\mathfrak{p},2})=b(\operatorname{Frob}_{\mathfrak{p},2})=1\in\mathbf{F}_{2}. \tag{26}\] More precisely, \(a(\operatorname{Frob}_{\mathfrak{p},2})=1\) implies that \(E(k_{\mathfrak{p}})\) does not have full \(2\)-torsion, and \(b(\operatorname{Frob}_{\mathfrak{p},2})=1\) implies that \(P\) is not only not in \(2E(k_{\mathfrak{p}})\), but also not a \(2\)-isogenous image as in Condition C of Theorem 1.2.

Suppose further that \(E\) has a \(K\)-rational \(3\)-torsion subgroup \(T\), and let \(\varphi_{3}:E^{\prime}\to E\) be the isogeny dual to the quotient map \(\phi:E\to E^{\prime}=E/T\). Assume that the point \(P\) is in \(\varphi_{3}[E^{\prime}(K)]\) but not in \(3E(K)\). Then the \(3\)-division field \(K(\frac{1}{3}P)\) of \(P\) has Galois group \(G_{3}=\operatorname{Gal}(K(\frac{1}{3}P)/K)\) for which the matrix representation on \(V_{3}=\langle\frac{1}{3}P\rangle/\langle P\rangle\) will 'generically' be the group \[G_{3}=\left\{\begin{bmatrix}d&e&f\\ 0&g&0\\ 0&0&1\end{bmatrix}:\quad d,g\in\mathbf{F}_{3}^{*},e,f\in\mathbf{F}_{3}\right\}\subset\operatorname{GL}_{3}(\mathbf{F}_{3}) \tag{27}\] of order \(36\). In this case \(d\) and \(g\) can be viewed as quadratic characters \(G_{3}\to\mathbf{F}_{3}^{*}\), and another application of Lemma 2.3 shows that for primes \(\mathfrak{p}\nmid 3\Delta_{E}\), we have \[g(\operatorname{Frob}_{\mathfrak{p},3})=1\in\mathbf{F}_{3}^{*}\quad\Longrightarrow\quad 3|[E(k_{\mathfrak{p}}):\langle\overline{P}\rangle].
\tag{28}\] Thus, for primes \(\mathfrak{p}\nmid 6\Delta_{E}\), a necessary condition for \(\overline{P}\in E(k_{\mathfrak{p}})\) to be an elliptic primitive root is that the three quadratic characters \(a\), \(b\) and \(g\) occurring in (26) and (28) take non-trivial values on the Frobenius automorphism of \(\mathfrak{p}\) in \(K_{6}=K(\frac{1}{6}P)\). In other words: the prime \(\mathfrak{p}\) has to be inert in the quadratic extensions \(K_{a}\), \(K_{b}\) and \(K_{g}\) of \(K\) corresponding to these \(3\) characters. Primes \(\mathfrak{p}\) satisfying the condition above exist if the quadratic extensions \(K_{a}\), \(K_{b}\) and \(K_{g}\) are linearly disjoint over \(K\), but _not_ if they are the three quadratic subfields of a \(V_{4}\)-extension \(K\subset K_{a}K_{b}K_{g}\). In the latter case, we have a splitting obstruction to local primitivity of \(P\) in \(K_{6}\) that does not exist in one of the smaller fields \(K(\frac{1}{2}P)\) or \(K(\frac{1}{3}P)\): it has level \(6\), but not \(2\) or \(3\), making it an obstruction caused by _entanglement_ of division fields.

**Example 8.1**.: An example is provided by the elliptic curve \(E/\mathbf{Q}\) with label \(12100\).j1 in the LMFDB database. The curve \(E\) has discriminant \[\Delta_{E}=2^{4}\cdot 5^{9}\cdot 11^{6},\] and if we take \((0,0)\) to be its unique \(\mathbf{Q}\)-rational \(2\)-torsion point, it has Weierstrass model \[E:y^{2}=x^{3}+605x^{2}-3025x.\] For this curve we have \(E(\mathbf{Q})=\langle T_{2}\rangle\times\langle P\rangle\cong\mathbf{Z}/2\mathbf{Z}\times\mathbf{Z}\) with \(T_{2}=(0,0)\) of order \(2\) and \(P=(\frac{-13475}{36},\frac{1249325}{216})\) a generator of infinite order.
We have \(K_{a}=\mathbf{Q}(E[2])=\mathbf{Q}(\sqrt{\Delta_{E}})=\mathbf{Q}(\sqrt{5})\), and over this field the \(2\)-division field \(\mathbf{Q}(\frac{1}{2}P)\) of \(P\) is the \(V_{4}\)-extension \[\mathbf{Q}(E[2])=\mathbf{Q}(\sqrt{5})\subset\mathbf{Q}(\tfrac{1}{2}P)=\mathbf{Q}(\sqrt{5},\sqrt{\pi},\sqrt{\overline{\pi}})\] generated by the square roots of \(\pi=3+2\sqrt{5}\) and its conjugate \(\overline{\pi}\). From \(\pi\overline{\pi}=-11\) we see that \(\mathbf{Q}(\frac{1}{2}P)\) is cyclic of degree \(4\) over \(\mathbf{Q}(\sqrt{-55})\), and that we have \(K_{b}=\mathbf{Q}(\sqrt{-11})\). As \(E\) acquires a \(3\)-torsion point \(T_{3}=(\frac{55}{3},\frac{275}{9}\sqrt{165})\) over the quadratic field \(\mathbf{Q}(\sqrt{165})=\mathbf{Q}(\sqrt{(-3)\cdot(-55)})\) that generates a \(\mathbf{Q}\)-rational torsion subgroup of order \(3\), the \(3\)-division field of \(E\) has quadratic subfields \(K_{d}=\mathbf{Q}(\sqrt{165})\) and \(K_{g}=\mathbf{Q}(\sqrt{-55})\), making \(K_{g}\) the third quadratic subfield in the \(V_{4}\)-extension \(\mathbf{Q}\subset K_{a}K_{b}\). Over the full 3-division field of \(E\), the 3-division field of \(P\) is the cubic extension \[\mathbf{Q}(E[3])=\mathbf{Q}(\sqrt{-3},\sqrt{-55},\sqrt[3]{2})\subset\mathbf{Q}(\tfrac{1}{3}P)=\mathbf{Q}(E[3],\sqrt[3]{\alpha})\] generated by a cube root of an element \(\alpha=(3+\sqrt{-55})/2\in K_{g}\) of norm 16, which shows that its Galois group over \(\mathbf{Q}\) is the group \(G_{3}\) in (27). We conclude that \(P\) is a locally never-primitive point of \(E(\mathbf{Q})\), as the index of \(\langle\overline{P}\rangle\) in \(E(\mathbf{F}_{p})\) is always divisible by 2 or 3.

An upcoming paper will have further details on obstructions to primitivity of composite level, and on how to find explicit examples.
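The rational point in Example 8.1 can be spot-checked with exact arithmetic; the snippet below only verifies that \(P\) lies on the given Weierstrass model (the curve label, the group structure, and the primitivity claims are quoted from the text, not re-derived here):

```python
from fractions import Fraction as Q

# Generator P of infinite order from Example 8.1 on
# E : y^2 = x^3 + 605 x^2 - 3025 x  (curve 12100.j1).
x = Q(-13475, 36)
y = Q(1249325, 216)
assert y**2 == x**3 + 605*x**2 - 3025*x

# The 2-torsion point T_2 = (0, 0) trivially satisfies the same equation.
assert Q(0)**2 == Q(0)**3 + 605*Q(0)**2 - 3025*Q(0)
```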
arXiv:2302.11036
CrowdLogo: crowd simulation in NetLogo
Davide Foini, Magdalena Rzyska, Katharina Baschmakov, Sergio Murino
2023-02-21T22:38:04Z
http://arxiv.org/abs/2302.11036v1
# CrowdLogo: crowd simulation in NetLogo

###### Abstract

Planning the evacuation of people from crowded places, such as squares, stadiums, or indoor arenas during emergency scenarios is a fundamental task that authorities must deal with. This article summarizes the work of the authors to simulate an emergency scenario in a square using NetLogo, a multi-agent programmable modeling environment. The emergency scenario is based on a real event, which took place in Piazza San Carlo, Turin, on the 3rd of June 2017. The authors have developed a model and conducted various experiments, the results of which are presented, discussed and analyzed. The article concludes by offering suggestions for further research and summarizing the key takeaways.

Emergency, simulation, modelling, NetLogo, multi-agent

## I Introduction

### _Context and motivation_

Emergency situations involving a crowd are many, and they happen in different areas of the world and in different settings, ranging from sports events [4] to concerts or celebrations [5]. In such settings, if adequate safety precautions and procedures are overlooked or ignored entirely, the evacuation can degenerate into a stampede, causing injuries and victims. Our efforts have been focused on modeling the event that happened in Piazza San Carlo, Turin, on the 3rd of June 2017, when, after firecrackers exploded in the middle of the crowd, the resulting stampede caused three victims and more than one thousand injuries; the investigations carried out proved that safety norms had not been observed [3]. In particular, the biggest oversights concerned the iron barriers placed at the entrances of the square to screen the people entering: when the evacuation started, they acted like a trap, blocking the way out and fostering obstructions.
Another fundamental safety norm that was not enforced was the ban on glass bottles: many street vendors were allowed to sell glass beer bottles, and when the crowd started escaping all those bottles began to break, creating more panic, making slipping more likely, and increasing the risk of being cut by shards of glass or overrun by others. In Fig. 1 a bird's-eye view of the square is displayed. The view from the northern side shows the street (northern gate) that was closed during the evacuation in 2017. The simulation model, however, allows this exit to be used for evacuation, as well as the other five.

### _Goals and expected contributions_

The main goals of this study tackled two different simulation approaches: the first was to create a model accurate enough to represent crowd behavior (_descriptive approach_), and the second was to verify what could have changed if more attention had been given to safety measures, such as a smartphone application that points the user to the nearest exit (_speculative approach_). We hope our work, despite its limited scope, will be a useful tool to analyze potentially dangerous situations and that it will serve as a source for further research and developments in this area. The full code of the model is available at [8].

### _Structure of the manuscript_

The paper is structured as follows. In Section II we analyze the works of other authors on the same topic. In Section III we first formalize the problem and then explain our model, both in terms of approach and in detail, including the metrics and KPIs used. Section IV shows the results obtained by experimenting with the model. The paper ends by summing up the discussion and with proposals for future work.
## II Literature Review

A comprehensive overview of evacuation models is presented in [9], where the authors suggest that agent-based models may be the most suitable for developing what-if scenarios, among a large list of different approaches such as cellular automata models or lattice gas models. Moreover, they present a categorization of evacuation models into classical, hybridized, and generic models, each of which is further split into subcategories, hence giving a comprehensive but fragmented list of possibilities. Following this scheme, our model could mostly be classified as: classical model \(\rightarrow\) microscopic model \(\rightarrow\) information of individual movement \(\rightarrow\) agent-based.

Fig. 1: Piazza San Carlo seen from the northern side [6]

In [1] the authors developed an evacuation scenario in NetLogo, but their focus was on the architecture of a closed public space, and only an adult population was considered. The experiments carried out varied the number of exits and their width, and better performances were observed when the Portuguese Fire Code requirements were respected. According to the "Review of Pedestrian and Evacuation Simulations" by G. Keith Still [10], past situations have an impact on the creation of new simulation models, as was the case after the World Trade Center attack on the 11th of September 2001. Those simulations aimed to analyze the problems that occurred during the evacuation, in order to prevent the same mistakes in the future and adjust the safety rules in buildings. The main goal of the simulations described in that paper was to reach the lowest evacuation time (due to the fire in the building and the high probability of building collapse). When it comes to open spaces, the evacuation in Piazza San Carlo happened in 2017 and the safety procedures were not scrupulously obeyed, which led to the death of 3 people and more than a thousand injured.
With the simulation model described in this report, we would like to analyze the main problems that can occur during mass events. Since the main concern in such events is panic, which may increase the number of injured participants and victims, the goal is not simply to minimize the evacuation time, but to keep the number of victims as low as possible while also keeping the evacuation time as low as possible. In [11] another agent-based model of evacuations inside buildings is developed and investigated. In particular, that study tries to include psychological factors in its model, such as group decision-making, leader-follower dynamics, and consensus. The results indicate that evacuating individually is faster than evacuating in groups and that evacuation time increases with the size of the group. Inspired by that, we also consider psychological factors, in our case the awareness fraction and the panic fraction. Lastly, the research in [12] shows how evacuation models and results can be integrated into cyber-physical systems (CPS) with the main aim of supporting decision-making processes in evacuation scenarios.

## III Methodological approach

### _Problem formalization_

The aim of the project is to show the behavior of pedestrians during an emergency situation: their decision-making process and its influence on others. The simulation will be extended with the possibility of adding new gates and will take into account whether a certain percentage of people is aware of the best evacuation path, for example through a mobile application installed on their smartphones. The analysis will show how the evacuation time and the damage during an emergency situation change under the different organizational patterns.

### _Modelling approach_

The simulation starts with the setup of the map of Piazza San Carlo and a certain number of pedestrians spawning within it.
When the alarm starts, indicating the beginning of the evacuation, their role is to find the closest exit that can be reached in the shortest possible time. Pedestrians have to adjust their velocity according to other pedestrians and must be able to change direction. They have attributes such as velocity, position, direction, the time they need to leave the area, and an index of their health state. Velocity and health state depend on the interactions between pedestrians, more specifically on the density of people per patch. The health state is categorized into seven levels of injury, which are taken from the Abbreviated Injury Scale [2]. Health status, evacuation time, and speed are displayed in plots whilst the simulation is running. Furthermore, we introduce two variables in the model: the aware fraction and the panic fraction. The aware fraction indicates the fraction of pedestrians who are aware of the coordinates of the exits; aware pedestrians then move directly towards them. The panic fraction defines the number of pedestrians who are experiencing panic and hence move more randomly. The simulation ends when all people have left the square. Figure 2 describes the model as a flow chart.

### _Detailed description_

In this subsection, we describe in more depth how the procedures we implemented work and what aspects of the real situation they are intended to capture. The first aim has been to reproduce a fair abstraction of the place where the event took place, Piazza San Carlo. We decided to create a bijection between a patch color and its type, namely, we associated one color with each of the following types: gate, wall, outside of the square, inside of the square, and obstacles (e.g. the statue in the center of the square). Then the start_simulation procedure was created, whose aim is to initialize the environment, spawning a number of people equal to the global variable population and making them move randomly in the square before the evacuation starts.
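The pedestrian state described above can be sketched as a small data structure. This is an illustrative Python reading of the model (the actual implementation is the NetLogo code of [8]), and all names below are ours:

```python
from dataclasses import dataclass
import random

# The seven injury levels follow the Abbreviated Injury Scale used in the model.
AIS_LEVELS = ["healthy", "minor", "moderate", "serious",
              "severe", "critical", "fatal"]

@dataclass
class Pedestrian:
    x: float
    y: float
    speed: float
    injury: int = 0          # index into AIS_LEVELS
    escaping: bool = False
    aware: bool = False      # knows the coordinates of the exits
    panic: bool = False      # moves with the crowd instead of rationally

def spawn(population, aware_fraction, panic_fraction, width, height):
    """Spawn `population` pedestrians at random positions, flagging each
    one as aware/panicking according to the two model fractions."""
    return [Pedestrian(
                x=random.uniform(0, width),
                y=random.uniform(0, height),
                speed=1.0,
                aware=random.random() < aware_fraction,
                panic=random.random() < panic_fraction)
            for _ in range(population)]
```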
After the alarm is activated, they will perform actions based on combinations of their own state variables, for example escaping, rational, or panic. The evacuation starts once the alarm button is pushed: with this procedure we set the escaping status of every turtle to true and make the turtles face their destination, which is a random gate if aware was (randomly) set to false, or the nearest gate if aware was set to true. The purpose of this aware binary variable is to reproduce a scenario (hence it refers to the speculative approach) in which people may receive a message, through an application specially created for the event, that computes for every user their nearest gate at every moment; this could be very useful since it may help to minimize the total evacuation time. Once all the people have their status set to escaping, they will follow either the procedure move_person or the procedure follow_crowd, based on the value of their panic binary variable. With this binary variable we wanted to reproduce how panic influences people's actions; the main assumption is that if panic is present, people will tend to randomly follow the crowd instead of taking a rational decision, like checking their smartphones and heading to the nearest gate. The global variable panic_fraction allows us to choose how much we want panic to influence the model. Moreover, once panic is present, every person who is actually panicking will have a different amount of panic, sampled randomly from a uniform distribution, which will then be the parameter of a Bernoulli distribution governing the variable rational (if rationality is not present, people will follow the crowd).

Fig. 2: Flow chart of the simulation process.
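The panic mechanism just described (an individual panic amount drawn from a uniform distribution, used as the parameter of a Bernoulli draw for rationality) can be sketched as follows; this is our illustrative reading of the text, not the NetLogo code of [8]:

```python
import random

def draw_rational(panicking: bool) -> bool:
    """Decide whether a pedestrian acts rationally this tick.

    Non-panicking people are always rational.  A panicking person has an
    individual panic amount u ~ Uniform(0, 1), which is then used as the
    parameter of a Bernoulli draw: with probability u the person abandons
    rationality and follows the crowd instead."""
    if not panicking:
        return True
    panic_amount = random.uniform(0.0, 1.0)
    return random.random() >= panic_amount  # rational with prob. 1 - u

def step_target(rational: bool, nearest_gate, crowd_direction):
    """Pick a movement target: rational people head for the nearest gate,
    the rest follow the most crowded neighbouring patch."""
    return nearest_gate if rational else crowd_direction
```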
The procedure move_person is the most detailed of our model, and it governs how people should move in the square once the evacuation has started and their rational status is set to true; the high amount of detail in it comes from the fact that we had to handle many different combinations of patch colors and numbers of people per patch. At every iteration of our simulation, namely every tick, we also update people's health status; this is done through a dedicated procedure and is based on [2]. After the health status is updated, the speed is updated as well, with the procedure update_speed, which takes into account the health status of every person together with their gender and age. Finally, we have the previously mentioned follow_crowd procedure, which either makes the turtle follow the most crowded neighboring patch or makes it exit the evacuation if a gate patch is one of its neighbors. This is based on the assumption that even a panicking person will go through the exit once a gate is really nearby.

### _Metrics and KPIs_

To evaluate the simulation results, the following metrics were taken into consideration:

* **gates throughput (evacuation speed)**: number of people exiting per second;
* **evacuation time**: time between the beginning of the life-threatening situation and the end of the evacuation. The evacuation is finished when all people have left the area;
* **average speed**: average velocity of people in the simulation;
* **injury level**: number of people with the specified health status based on the Injury Severity Score [2]: healthy, minor, moderate, serious, severe, critical and fatal.

Furthermore, a subsequent assessment considered the following Key Performance Indicators:

* **number of victims**: number of people with injury level specified as fatal;
* **number of injured**: number of people with minor, moderate, serious, severe, or critical injury level.
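The metrics and KPIs above can be computed from a per-pedestrian log of the simulation. A minimal sketch, assuming each record is a pair (exit tick, final injury level); the record layout and function names are of our choosing:

```python
from collections import Counter

AIS_LEVELS = ["healthy", "minor", "moderate", "serious",
              "severe", "critical", "fatal"]

def evacuation_metrics(records, ticks_per_second=1.0):
    """Compute the metrics and KPIs of the model from a list of
    (exit_tick, injury_level) pairs, one per pedestrian."""
    exit_ticks = [t for t, _ in records]
    evacuation_time = max(exit_ticks) / ticks_per_second
    throughput = len(records) / evacuation_time  # people per second
    injuries = Counter(level for _, level in records)
    victims = injuries["fatal"]
    injured = sum(injuries[l] for l in AIS_LEVELS[1:-1])  # minor..critical
    return {
        "evacuation_time": evacuation_time,
        "throughput": throughput,
        "victims": victims,
        "injured": injured,
        "injury_histogram": dict(injuries),
    }
```

For example, four pedestrians leaving at ticks 10, 20, 40, 40 with injury levels healthy, minor, fatal, critical give an evacuation time of 40 ticks, one victim, and two injured.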
## IV Results and discussion In this section, we explain the different scenarios we have simulated, the operational policies we used, the experiments we performed, and the results obtained. ### _Scenarios_ As already introduced in Section B, we simulated two different scenarios: the _descriptive scenario_ and the _speculative scenario_. The _descriptive scenario_ aims at reproducing the real event as realistically as possible. The _speculative scenario_ has been developed as a tool to assess what could have happened had the safety norms been observed, namely a reduced number of people in the square, the absence of glass bottles, more accessible gates, and the availability of the mobile application indicating the nearest exit. ### _Operation policies_ The operational policies employed differ in the two scenarios. In the _descriptive scenario_, just 50% of the population is aware of the nearest exit, while in the _speculative scenario_ this rate is increased in each experiment. We also analyzed the impact of glass bottles: if their presence is flagged as true, people have a chance of slipping and therefore not moving. The diffusion of a smartphone application indicating the nearest exit is also considered. Another operational policy is mediated via the panic fraction, which influences how people behave. The final operational policy used is the accessibility of gates, meaning that the maximum number of people on a gate patch is the same as on a "regular" patch. ### _Experiments_ All of the experiments were executed using the NetLogo _behavior space_ tool: one experiment for the descriptive scenario and five for the speculative one, for a total of six experiments. The first experiment refers to the descriptive scenario, with the aim of reproducing the real event and a focus on the number of victims. This experiment also represents the baseline against which the results of the later experiments are measured; the full parameter setup is available in Table I.
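A BehaviorSpace-style sweep over such operational policies can be emulated in plain Python with a Cartesian product over a parameter grid; the parameter names and values below are illustrative assumptions, not the exact setup of Tables I and II.

```python
from itertools import product

# Hypothetical parameter grid for the speculative experiments; the actual
# values are those defined in Tables I and II of the paper.
grid = {
    "n_people": [25000, 20000, 15000],
    "app_diffusion": [0.5, 0.7, 0.9, 1.0],   # fraction aware of the nearest exit
    "glass_bottles": [True, False],
    "panic_fraction": [0.0, 0.05, 0.10],
}

def run_configurations(grid):
    """Enumerate every combination of operational policies, one run each."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(run_configurations(grid))
# 3 * 4 * 2 * 3 = 72 candidate runs in this illustrative grid
```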
From the second to the last experiment, the impact of different situations has been analyzed, based on the different operational policies pointed out in the previous section. Table II describes all of the experiments in terms of the number of runs and the composition of such runs. ### _Results_ In this section, we show and discuss the results obtained from running the experiments, starting from the descriptive experiment and proceeding afterward to the speculative ones. The first result is the duration of the evacuation, shown in Table III, where the average value over the runs of each experiment is reported. The experiment that impacted the duration of the evacuation the most was the panic-fraction experiment, where increasing the panic fraction from 0% to 10% was almost enough to double the duration of the evacuation. Another interesting finding was that increasing the diffusion of the mobile application from 90% to 100% nearly halved the evacuation time. **Descriptive Scenario** The focus of this experiment is on the number of victims resulting from the evacuation. In Fig. 3 it is possible to note, in the last column of the second histogram, that in all the runs the number of victims is between zero and five, varying with the randomly generated initial positions of people; this is close to the real number of three victims. It is also worth noting that all the other levels of injury are at the same level independently of the run. With regard to the average speed and the evacuation speed, reported in Figures 4 and 5, the values and their evolution over time are homogeneous, except for the last part of the average-speed curve, which is due to some people that were stuck in the crowd and found a path only when most of the others had already evacuated. The last result obtained was the evolution of the evacuation through the simulation, displayed in Fig. 9, which has been divided into two plots for clarity.
It is possible to see how run #10 and run #2 have a lower evacuation time while the others have a higher one.

Fig. 3: Injury levels per run in the descriptive experiment.
Fig. 4: Average speed per run in the descriptive scenario.
Fig. 5: Evacuation speed per run in the descriptive scenario.

**Speculative Scenario** _Number of People_ With regard to the experiment based on the number of people, reported in Fig. 7, it is worth noting that decreasing the number of people gradually decreases the number of people with the first level of injury, while for the last three categories the count is almost halved every run; with five thousand fewer people there are no victims. What emerged from the experiments about the evacuation speed (Fig. 8) and the evacuation time (Fig. 9) is that all the runs follow the same trend, but with slightly higher values depending on the number of people, meaning that the values on the y axis decrease from the first run to the last one. The experiment about the average speed of the simulation showed that the values and their evolution do not change based on the number of people (Fig. 10). _Mobile Application_ This experiment is the one that has given the most interesting results. In Fig. 11 it is possible to notice how rapidly the average speed decreases with higher levels of diffusion of the smartphone application, while in Fig. 12 the evacuation speed shows higher peaks with higher degrees of diffusion. With regard to the evacuation time, plotted in Fig. 13, the best results have been obtained with a diffusion ratio of 70%, and the more stable ones with the higher ratios of 90% and 100%; the graph also shows that the maximum diffusion ratio obtained the best evacuation time. This result is also shown in Table III. The downside of having a higher percentage of people knowing the nearest exit is displayed in Fig. 14.
With the exception of moderate and serious injuries, all the other levels registered a worse value when increasing the diffusion ratio. This is most probably caused by the fact that a higher density is more easily reached when a lot of people approach their exit in a short time, creating a blockage in the proximity of the gates. To conclude, a better evacuation time is obtained, but in exchange more injuries and victims are observed. #### Glass Bottles Removing the glass bottles from the simulation has given the best results in terms of the number of injured people. Fig. 19 displays how the number of healthy people increased by almost 60%, while the low levels of injury are halved and the higher levels are more than halved. Slipping left some people with a higher residual speed in the simulation, as it is possible to note in Fig. 20, while the absence of glass bottles did not improve the evacuation time (Fig. 22) or the average speed (Fig. 21).

Fig. 14: Injury levels per run in the experiment based on the diffusion of the mobile application.
Fig. 15: Evacuation time per run in the experiment based on the accessibility of exits.
Fig. 16: Average speed per run in the experiment based on the accessibility of exits.
Fig. 17: Evacuation speed per run in the experiment based on the accessibility of exits.
Fig. 18: Injury levels per run in the experiment based on the accessibility of exits.
Fig. 19: Injury levels per run in the experiment based on the presence of glass bottles.
Fig. 20: Average speed per run in the experiment based on the presence of glass bottles.
Fig. 21: Evacuation speed per run in the experiment based on the presence of glass bottles.
Fig. 22: Evacuation time per run in the experiment based on the presence of glass bottles.
Fig. 23: Average speed per run in the experiment based on the panic fraction.
Fig. 24: Evacuation speed per run in the experiment based on the panic fraction.
Fig. 25: Evacuation time per run in the experiment based on the panic fraction.
Fig. 26: Injury levels per run in the experiment based on the panic fraction.

## V Conclusion In this paper, we created a model of a specific evacuation scenario in order to obtain a simulation that is quite realistic and to experiment with the impact that different safety measures would have had if not overlooked. The simulation model was able to fairly replicate the real event in terms of the number of fatalities, which was our main focus for the descriptive scenario. The experiments performed in the speculative scenario showed that reducing the number of people admitted to the event and not allowing glass bottles would have considerably lowered the number of injuries and also avoided victims. The mobile application revealed itself as a double-edged sword: on the one hand, it resulted in a reduction of the evacuation time, but on the other hand, more fatalities were registered. Future work could focus on using the same model in different scenarios, since it would be enough to redefine the map; on understanding the impact of various combinations of factors (for example, reducing the number of people and not allowing glass bottles at the same time); or on introducing a social-force model to regulate how people move inside the simulation (an example of a NetLogo implementation is available at [7]). In conclusion, this work demonstrated that when planning an event that involves a high number of people, all the necessary safety procedures must be observed thoroughly, because even the slightest carelessness can result in injuries and victims.
2302.09679
Subdiffusion with particle immobilization process described by differential equation with Riemann--Liouville type fractional time derivative
An equation describing subdiffusion with possible immobilization of particles is derived by means of the continuous time random walk model. The equation contains a fractional time derivative of Riemann--Liouville type which is a differential-integral operator with the kernel defined by the Laplace transform. We propose the method for calculating the inverse Laplace transform providing the kernel in the time domain. In the long time limit the subdiffusion--immobilization process reaches a stationary state in which the probability density of a particle distribution is an exponential function.
Tadeusz Kosztołowicz
2023-02-19T21:56:56Z
http://arxiv.org/abs/2302.09679v1
Subdiffusion with particle immobilization process described by differential equation with Riemann-Liouville type fractional time derivative ###### Abstract An equation describing subdiffusion with possible immobilization of particles is derived by means of the continuous time random walk model. The equation contains a fractional time derivative of Riemann-Liouville type which is a differential-integral operator with the kernel defined by its Laplace transform. We propose a method for calculating the inverse Laplace transform providing the kernel in the time domain. In the long time limit the subdiffusion-immobilization process reaches a stationary state in which the probability density of the particle distribution is an exponential function. ## I Introduction In a diffusion process, particles can be eliminated from further diffusion in different ways. A particle may decay due to a reaction when it meets other molecules. Since the particle disappears, the probability density \(P(x,t)\) that the particle is at a point \(x\) at time \(t\) is not normalized, \[\int_{-\infty}^{\infty}P(x,t)dx<1. \tag{1}\] Another process that eliminates a particle from further diffusion is the permanent immobilization of the particle. Both processes mentioned above can occur in the diffusion of antibiotic molecules in a bacterial biofilm. One of the defense mechanisms is to disintegrate the antibiotic molecules; this process can be described by diffusion-reaction equations. In the other one, bacteria can thicken the biofilm, immobilizing antibiotic molecules [1; 2], see also [3; 4] and the references cited therein. The immobilized molecules have not disappeared; they can further interact with the environment. In this case, the probability of finding a molecule in the system is equal to one at any time. We call this process subdiffusion with particle immobilization. It is obvious that this process cannot be described by a diffusion-reaction equation.
The immobilization of molecules can occur in a medium in which the movement of particles is very hindered, as in the biofilm mentioned above; subdiffusion may occur in such a system, see for example Refs. [5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. We derive an equation describing subdiffusion with particle immobilization in a one-dimensional homogeneous system. We assume that after each jump a particle can be immobilized with the same probability, which does not change with time and is independent of the particle position. ## II Model To derive the subdiffusion-immobilization equation we use the continuous time random walk (CTRW) model [5; 6; 7; 8; 9; 11; 13; 15; 16; 17; 18]. Within the model, when the average length of a single particle jump \(\epsilon\) is finite, the form of the subdiffusion equation is determined by the probability density \(\psi\) of the waiting time for the particle to jump. In terms of the Laplace transform, \(\mathcal{L}[f(t)](s)=\int_{0}^{\infty}\mathrm{e}^{-st}f(t)dt\equiv\hat{f}(s)\), the equation generated by the function \(\psi\) is as follows \[s\hat{P}(x,s)-P(x,0)=\frac{\epsilon^{2}s\hat{\psi}(s)}{2[1-\hat{\psi}(s)]}\frac{\partial^{2}\hat{P}(x,s)}{\partial x^{2}}, \tag{2}\] the derivation of this equation is described in the Appendix. We make the following assumptions: 1. the probability of finding a particle in the system is equal to one at any time, \[\int_{-\infty}^{\infty}P(x,t)dx=1,\] (3) 2. since the particle can be permanently stopped, the probability that the particle will make a jump is less than one, \[\int_{0}^{\infty}\psi(t)dt<1.\] (4) ### Subdiffusion equation To obtain the subdiffusion equation we assume \[\hat{\psi}(s)=\frac{1}{1+\tau s^{\alpha}}, \tag{5}\] \(0<\alpha<1\), where \(\tau\) is a parameter with the units of \(\mathrm{s}^{\alpha}\). This function satisfies the normalization condition \[\int_{0}^{\infty}\psi(t)dt\equiv\hat{\psi}(0)=1.
\tag{6}\] This condition means that the particle cannot be stopped permanently with non-zero probability. From Eqs. (2) and (5) we get \[s\hat{P}(x,s)-P(x,0)=Ds^{1-\alpha}\frac{\partial^{2}\hat{P}(x,s)}{\partial x^{2}}, \tag{7}\] where \(D=\epsilon^{2}/2\tau\) is a subdiffusion coefficient given in the units of m\({}^{2}\)/s\({}^{\alpha}\). Due to the relations \[\mathcal{L}^{-1}\left[s\hat{f}(s)-f(0)\right](t)=\frac{df(t)}{dt}, \tag{8}\] \[\mathcal{L}^{-1}\left[s^{\beta}\hat{f}(s)\right](t)=\frac{{}^{RL}d^{\beta}f(t)}{dt^{\beta}}, \tag{9}\] \(0<\beta<1\), where \[\frac{{}^{RL}d^{\beta}f(t)}{dt^{\beta}}=\frac{1}{\Gamma(1-\beta)}\frac{d}{dt}\int_{0}^{t}(t-u)^{-\beta}f(u)du \tag{10}\] is the Riemann-Liouville time fractional derivative of the order \(\beta\in(0,1)\). From Eqs. (7)-(9) we get the subdiffusion equation \[\frac{\partial P(x,t)}{\partial t}=D\frac{{}^{RL}\partial^{1-\alpha}}{\partial t^{1-\alpha}}\frac{\partial^{2}P(x,t)}{\partial x^{2}}. \tag{11}\] ### Subdiffusion-immobilization equation In order to find a function \(\psi(t)\) that satisfies Eq. (4), i.e. \(\hat{\psi}(0)<1\), we assume that the Laplace transform of the function is \[\hat{\psi}(s)=\frac{1}{1+\tau\gamma+\tau s^{\alpha}}, \tag{12}\] \(0<\alpha<1\); the parameter \(\gamma\), which controls molecule immobilization, is given in the units of \(1\)/s\({}^{\alpha}\). The probability \(p_{s}\) of stopping the molecule permanently is \(p_{s}=1-\hat{\psi}(0)=\tau\gamma/(1+\tau\gamma)\). From Eqs. (2) and (12) we get \[s\hat{P}(x,s)-P(x,0)=D\frac{s^{1-\alpha}}{1+\gamma s^{-\alpha}}\frac{\partial^{2}\hat{P}(x,s)}{\partial x^{2}} \tag{13}\] The inverse Laplace transform of the right-hand side of Eq.
(13) is calculated using the formula \[\mathcal{L}^{-1}\left[\frac{s^{1-\alpha}}{1+\gamma s^{-\alpha}}\hat{f}(s)\right](t)=\frac{{}^{RL}_{F}d^{1-\alpha}f(t)}{dt^{1-\alpha}}, \tag{14}\] where \[\frac{{}^{RL}_{F}d^{1-\alpha}f(t)}{dt^{1-\alpha}}=\frac{d}{dt}\int_{0}^{t}F_{\alpha}(t-t^{\prime};\gamma)f(t^{\prime})dt^{\prime} \tag{15}\] is the Riemann-Liouville type fractional derivative with the kernel \(F_{\alpha}\), which is defined by its Laplace transform \[\hat{F}_{\alpha}(s;\gamma)=\frac{1}{\gamma+s^{\alpha}}. \tag{16}\] For \(\gamma=0\), this derivative is the Riemann-Liouville derivative Eq. (10) of the order \(1-\alpha\). Eqs. (13)-(16) provide the following subdiffusion-immobilization equation \[\frac{\partial P(x,t)}{\partial t}=D\frac{{}^{RL}_{F}\partial^{1-\alpha}}{\partial t^{1-\alpha}}\frac{\partial^{2}P(x,t)}{\partial x^{2}}. \tag{17}\] Calculation of the inverse transform of Eq. (16) is usually done by a power series expansion of the function when \(\gamma/s^{\alpha}<1\), and then inverting the transform term by term using the formula \(\mathcal{L}^{-1}[1/s^{\beta}](t)=t^{\beta-1}/\Gamma(\beta)\), \(\beta>0\). The result is the Mittag-Leffler function [19; 20]. However, this procedure is valid for relatively large values of the parameter \(s\), which correspond to small values of the time variable. To get the inverse Laplace transform over the whole time domain we propose to use the following method: (1) instead of \(\hat{F}_{\alpha}\) Eq.
(16) find the inverse transform of \(\hat{F}_{\alpha}(s,\gamma)\mathrm{e}^{-as^{\mu}}\), \(a,\mu>0\), (2) expand \(\hat{F}_{\alpha}\) in a power series of \(s\) considering both cases \(s^{\alpha}>\gamma\) and \(s^{\alpha}<\gamma\) separately, (3) use the formula [21] \[\mathcal{L}^{-1}\left[s^{\nu}\mathrm{e}^{-as^{\mu}}\right](t)\equiv f_{\nu,\mu}(t;a) \tag{18}\] \[=\frac{1}{t^{\nu+1}}\sum_{n=0}^{\infty}\frac{1}{n!\Gamma(-n\mu-\nu)}\left(-\frac{a}{t^{\mu}}\right)^{n}\] \(a,\mu>0\), (4) calculate the limit of \(a\to 0^{+}\) in the obtained functions. We note that \[f_{\nu,\mu}(t;0^{+})=\frac{1}{t^{\nu+1}\Gamma(-\nu)}, \tag{19}\] and the result is independent of the parameter \(\mu\). From the formula \[\frac{\mathrm{e}^{-as^{\mu}}}{\gamma+s^{\alpha}}=\left\{\begin{array}{l}\mathrm{e}^{-as^{\mu}}\sum_{n=0}^{\infty}(-\gamma)^{n}s^{-(n+1)\alpha},\;s>\gamma^{1/\alpha},\\ \frac{\mathrm{e}^{-as^{\mu}}}{\gamma}\sum_{n=0}^{\infty}\left(-\frac{1}{\gamma}\right)^{n}s^{n\alpha},\;s<\gamma^{1/\alpha},\end{array}\right. \tag{20}\] and Eqs. (18) and (19) we obtain \[F_{\alpha}(t;\gamma)=\left\{\begin{array}{l}\frac{1}{t^{1-\alpha}}E_{\alpha,\alpha}(-\gamma t^{\alpha}),\;t<t_{b},\\ \\ -\frac{1}{\gamma^{2}t^{1+\alpha}}\tilde{E}_{\alpha,\alpha}\left(-\frac{1}{\gamma t^{\alpha}}\right),\;t>t_{b},\end{array}\right. \tag{21}\] where \(E_{\alpha,\beta}(u)=\sum_{n=0}^{\infty}\frac{u^{n}}{\Gamma(\alpha n+\beta)}\), \(\alpha,\beta>0\), is the two-parameter Mittag-Leffler (ML) function, and \(\tilde{E}_{\alpha,\beta}(u)=\sum_{n=0}^{\infty}\frac{u^{n}}{\Gamma(-\alpha n-\beta)}\) is a generalization of the ML function for negative parameters. We note that the conditions \(s>\gamma^{1/\alpha}\) and \(s<\gamma^{1/\alpha}\) do not determine the parameter \(t_{b}\). For example, the condition \(s>\gamma^{1/\alpha}\) is equivalent to \(1/s^{\beta+1}<1/(s^{\beta}\gamma^{1/\alpha})\) for \(\beta>0\) (assuming that \(s\) is a real positive parameter).
The inverse Laplace transform of the inequality provides \(t<\beta/\gamma^{1/\alpha}\), where \(\beta\) is a positive number. Thus, the above inequality does not determine \(t_{b}\). Here we define the parameter \(t_{b}\) as the shorter time at which the upper and the lower functions in Eq. (21) are matched, see Fig. 1. In terms of the Laplace transform the solution to Eq. (17) (the Green's function) for the initial condition \(P(x,0)=\delta(x)\), where \(\delta\) is the Dirac-delta function, and boundary conditions \(P(\pm\infty,t)=0\) is \[\hat{P}(x,s)=\frac{\sqrt{\gamma+s^{\alpha}}}{2s\sqrt{D}}\,{\rm e}^{-|x|\sqrt{\frac{\gamma+s^{\alpha}}{D}}} \tag{22}\] The solution fulfils the condition \(\int_{-\infty}^{\infty}\hat{P}(x,s)dx=1/s\), which provides the normalization of the function \(P\), Eq. (3). Let \(\gamma\neq 0\). We calculate the inverse Laplace transform of the function (22) for small and large values of \(s\) separately. In the calculation, we use the formulas \(\sqrt{1+u}\approx 1+u/2-u^{2}/8\) and \({\rm e}^{-u}\approx 1-u+u^{2}/2\), \(u\to 0\), and keep the leading terms in the obtained series. When \(s^{\alpha}>\gamma\) we obtain \[\hat{P}(x,s)=\frac{1}{2\sqrt{D}s^{1-\alpha/2}}\left(1-\frac{b_{1}}{s^{\alpha/2}}+\frac{b_{2}}{s^{\alpha}}\right){\rm e}^{-\frac{|x|}{\sqrt{D}}s^{\alpha/2}}, \tag{23}\] where \(b_{1}=\gamma|x|/2\sqrt{D}\) and \(b_{2}=(\gamma/2)(1+\gamma|x|^{2}/4D)\). If \(s^{\alpha}<\gamma\), we get \[\hat{P}(x,s)=\frac{\sqrt{\gamma}}{2s\sqrt{D}}{\rm e}^{-\sqrt{\frac{\gamma}{D}}|x|(1+\frac{s^{\alpha}}{2\gamma})}\left[1+\frac{s^{\alpha}}{2\gamma}-b\frac{s^{2\alpha}}{\gamma^{2}}\right], \tag{24}\] where \(b=\sqrt{\gamma/D}|x|+1/8\). Eqs. (18) and (23) provide the Green's function in the limit of short time \[P(x,t)=\frac{1}{2\sqrt{D}}\Big{[}f_{-1+\alpha/2,\alpha/2}(t;\eta) \tag{25}\] \[-b_{1}f_{-1,\alpha/2}(t;\eta)+b_{2}f_{-1-\alpha/2,\alpha/2}(t;\eta)\Big{]},\] where \(\eta=|x|/\sqrt{D}\). From Eqs.
(18) and (24) we get the Green's function in the long time limit \[P(x,t)=\frac{1}{2}\sqrt{\frac{\gamma}{D}}{\rm e}^{-\sqrt{\frac{\gamma}{D}}|x|}\Big{[}f_{-1,\alpha}(t;\xi) \tag{26}\] \[+\frac{1}{2\gamma}f_{\alpha-1,\alpha}(t;\xi)-\frac{b}{\gamma^{2}}f_{2\alpha-1,\alpha}(t;\xi)\Big{]},\] where \(\xi=|x|/2\sqrt{D\gamma}\). Since the mean particle position equals zero, in terms of the Laplace transform the mean square displacement of the particle is \[{\cal L}\left[\left\langle(\Delta x)^{2}(t)\right\rangle\right](s)=\int_{-\infty}^{\infty}x^{2}\hat{P}(x,s)dx=\frac{2D}{s(\gamma+s^{\alpha})}. \tag{27}\] When \(\gamma\neq 0\), for small \(s\) we have \({\cal L}\left[\left\langle(\Delta x)^{2}(t)\right\rangle\right](s)=(2D/\gamma)\left[1/s-1/(\gamma s^{1-\alpha})\right]\). Thus, in the limit of long time we get \[\left\langle(\Delta x)^{2}(t)\right\rangle=\frac{2D}{\gamma}\left[1-\frac{1}{\gamma\Gamma(1-\alpha)t^{\alpha}}\right]. \tag{28}\] In the limit \(t\rightarrow\infty\), the stationary state described by the following function is reached, \[P(x,t\rightarrow\infty)\equiv P_{st}(x)=\frac{1}{2}\sqrt{\frac{\gamma}{D}}\,{\rm e}^{-\sqrt{\frac{\gamma}{D}}|x|}. \tag{29}\] For illustration, plots of the functions \(F_{\alpha}\) and \(P\) are shown in Figs. 1 and 2, respectively. The parameters are \(\alpha=0.7\), \(\gamma=0.6\), and \(D=10\); all parameters are given in arbitrarily chosen units. In Fig. 3 the Green's functions for the stationary state are presented. ## III Final remarks The process of subdiffusion with particle immobilization can be described by an equation with a fractional time derivative of the Riemann-Liouville type, which is a differential-integral operator with the kernel \(F_{\alpha}\) defined by its Laplace transform Eq. (16). Normal diffusion and subdiffusion have a different stochastic interpretation. However, the normal diffusion-immobilization equation can be obtained from Eq. (17) by substituting \(\alpha=1\).
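As a quick numerical illustration of the kernel's series representation, the short-time branch of Eq. (21) can be evaluated by truncating the two-parameter Mittag-Leffler series. For \(\alpha=1\), the case just mentioned, the kernel is the inverse transform of \(1/(\gamma+s)\), i.e. \(\mathrm{e}^{-\gamma t}\), which gives a convenient check. The sketch below is our own illustrative plain-Python code, not the author's, with an arbitrary 50-term truncation (the paper's figures use the leading 20 terms).

```python
from math import gamma as Gamma, exp

def mittag_leffler(u, alpha, beta, n_terms=50):
    """Two-parameter Mittag-Leffler function E_{alpha,beta}(u), truncated series."""
    return sum(u**n / Gamma(alpha * n + beta) for n in range(n_terms))

def kernel_short_time(t, alpha, gam, n_terms=50):
    """Short-time branch of Eq. (21): F_alpha(t; gamma) = t^(alpha-1) E_{alpha,alpha}(-gamma t^alpha)."""
    return t ** (alpha - 1) * mittag_leffler(-gam * t**alpha, alpha, alpha, n_terms)
```

For \(\alpha=1\) and \(\gamma=0.6\), `kernel_short_time(t, 1.0, 0.6)` reproduces \(\mathrm{e}^{-0.6t}\) to machine precision for moderate \(t\).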
We have proposed a method for determining the inverse Laplace transform of the kernel. In our opinion, this method can be widely used for calculating inverse Laplace transforms \({\cal L}^{-1}[\hat{f}(s)](t)\) for a wide class of functions \(f\). In a homogeneous unbounded system the subdiffusion-immobilization process reaches a stationary state which is described by \(P_{st}(x)\), Eq. (29). This distribution depends only on the quotient \(\gamma/D\), expressed in the units of \(1/{\rm m}^{2}\), and it does not explicitly depend on the parameter \(\alpha\).

Figure 1: Plot of the function \(F_{\alpha}\). The dashed vertical line shows the location of the parameter \(t_{b}=11.5\). The solid line with squares is the plot of the upper function in Eq. (21), which describes \(F_{\alpha}\) for \(t<t_{b}\); the solid line with circles is the plot of the lower function in Eq. (21), which represents \(F_{\alpha}\) for \(t>t_{b}\). In the numerical calculations, the leading 20 terms in the series appearing in the functions \(E_{\alpha,\alpha}\) and \(\tilde{E}_{\alpha,\alpha}\) have been taken into account.

The achievement of the steady state is suggested by Fig. 2, where the Green's functions for relatively long times differ very little from each other. In the stationary state there is \(\left\langle\left(\Delta x\right)^{2}\left(t\rightarrow\infty\right)\right\rangle=\frac{2D}{\gamma}\); the particle is finally immobilized with probability equal to one. The subdiffusion-immobilization process is described by Eq. (17), which can be obtained in practice by replacing the time fractional Riemann-Liouville derivative, Eq. (10), with the more general Riemann-Liouville type derivative with the kernel \(F_{\alpha}\), Eq. (15), in the "ordinary" subdiffusion equation, Eq. (11); the orders of both derivatives are the same. The situation differs from that of the subdiffusion-reaction equation, in which a reaction term is added to the "ordinary" subdiffusion equation, see for example Refs.
[22; 23; 24]. We mention that Riemann-Liouville type fractional derivatives with different kernels have been considered in Refs. [25; 26; 27; 28]. ## Acknowledgment The author wishes to express his thanks to Aldona Dutkiewicz for fruitful discussions. ## Appendix. Derivation of Eq. (2) To derive the subdiffusion equation we use a simple model of a particle random walk along a one-dimensional homogeneous lattice. Usually, in the CTRW model both the particle jump length and the waiting time for a particle to jump are random variables. We assume that the jump length distribution \(\lambda\) has the form \(\lambda(x)=\frac{1}{2}[\delta(x-\epsilon)+\delta(x+\epsilon)]\). Random walk with discrete time \(n\) is described by the equation \(P_{n+1}(m)=\frac{1}{2}P_{n}(m+1)+\frac{1}{2}P_{n}(m-1)\), where \(P_{n}(m)\) is the probability that a diffusing particle is at the position \(m\) after the \(n\)-th step. Let the initial particle position be \(m=0\). Moving from the discrete \(m\) to the continuous \(x\) spatial variable, we assume \(x=m\epsilon\) and \(P_{n}(x)=P_{n}(m)/\epsilon\), where \(\epsilon\) is the distance between discrete sites. The above equations and the relation \([P_{n}(x+\epsilon)+P_{n}(x-\epsilon)-2P_{n}(x)]/\epsilon^{2}=\partial^{2}P_{n}(x)/\partial x^{2}\), \(\epsilon\to 0\), provide the following equation in the limit of small \(\epsilon\) \[P_{n+1}(x)-P_{n}(x)=\frac{\epsilon^{2}}{2}\frac{\partial^{2}P_{n}(x)}{\partial x^{2}}. \tag{30}\] To move from discrete to continuous time we use the formula \(P(x,t)=\sum_{n=0}^{\infty}Q_{n}(t)P_{n}(x)\)[15], where \(Q_{n}(t)\) is the probability that a diffusing particle makes \(n\) steps in the time interval \((0,t)\).
The function \(Q_{n}\) is a convolution of \(n\) distributions \(\psi\) of the waiting time for a particle to jump and a function \(U(t)=1-\int_{0}^{t}\psi(t^{\prime})dt^{\prime}\), which is the probability that the particle does not change its position after the \(n\)-th step, \(\hat{U}(s)=[1-\hat{\psi}(s)]/s\), \(Q_{n}(t)=(\underbrace{\psi*\psi*\ldots*\psi}_{n\ times}*U)(t)\), where \((f*h)(t)=\int_{0}^{t}f(u)h(t-u)du\). Due to the property \(\mathcal{L}[(f*h)(t)](s)=\hat{f}(s)\hat{h}(s)\) we obtain \[\hat{P}(x,s)=\frac{1-\hat{\psi}(s)}{s}\sum_{n=0}^{\infty}\hat{\psi}^{n}(s)P_{n}(x). \tag{31}\] Combining Eqs. (30) and (31) we get Eq. (2).

Figure 2: Plots of Green's functions for times given in the legend. The plots represent the function Eq. (23) for \(t=0.1,0.5\) and Eq. (24) for \(t=15,50,100\).

Figure 3: Plots of the function \(P_{st}\) Eq. (29) for different values of the ratio \(\gamma/D\) given in the legend.
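Relatedly, the stationary density \(P_{st}\) shown in Figure 3 is a two-sided exponential (Laplace) distribution, so its normalization and second moment can be checked numerically. The sketch below is our own illustrative plain-Python code, using the parameter values \(\gamma=0.6\), \(D=10\) from the paper's figures; it verifies \(\int P_{st}\,dx=1\) and \(\langle x^{2}\rangle=2D/\gamma\), consistent with the long-time limit of Eq. (28).

```python
from math import sqrt, exp

def p_stationary(x, gam, D):
    """Stationary density of Eq. (29): (1/2) sqrt(gamma/D) exp(-sqrt(gamma/D) |x|)."""
    k = sqrt(gam / D)
    return 0.5 * k * exp(-k * abs(x))

def trapezoid(f, a, b, n=100_000):
    """Simple trapezoidal quadrature for the numerical check."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

gam, D = 0.6, 10.0          # illustrative values, as in the paper's figures
L = 200.0                   # integration cutoff; the density decays exponentially
norm = trapezoid(lambda x: p_stationary(x, gam, D), -L, L)
msd = trapezoid(lambda x: x * x * p_stationary(x, gam, D), -L, L)
# norm should be close to 1, msd close to 2*D/gam
```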
2307.05845
PIGEON: Predicting Image Geolocations
Planet-scale image geolocalization remains a challenging problem due to the diversity of images originating from anywhere in the world. Although approaches based on vision transformers have made significant progress in geolocalization accuracy, success in prior literature is constrained to narrow distributions of images of landmarks, and performance has not generalized to unseen places. We present a new geolocalization system that combines semantic geocell creation, multi-task contrastive pretraining, and a novel loss function. Additionally, our work is the first to perform retrieval over location clusters for guess refinements. We train two models for evaluations on street-level data and general-purpose image geolocalization; the first model, PIGEON, is trained on data from the game of Geoguessr and is capable of placing over 40% of its guesses within 25 kilometers of the target location globally. We also develop a bot and deploy PIGEON in a blind experiment against humans, ranking in the top 0.01% of players. We further challenge one of the world's foremost professional Geoguessr players to a series of six matches with millions of viewers, winning all six games. Our second model, PIGEOTTO, differs in that it is trained on a dataset of images from Flickr and Wikipedia, achieving state-of-the-art results on a wide range of image geolocalization benchmarks, outperforming the previous SOTA by up to 7.7 percentage points on the city accuracy level and up to 38.8 percentage points on the country level. Our findings suggest that PIGEOTTO is the first image geolocalization model that effectively generalizes to unseen places and that our approach can pave the way for highly accurate, planet-scale image geolocalization systems. Our code is available on GitHub.
Lukas Haas, Michal Skreta, Silas Alberti, Chelsea Finn
2023-07-11T23:36:49Z
http://arxiv.org/abs/2307.05845v6
# PIGEON: Predicting Image Geolocations ###### Abstract Planet-scale image geolocalization remains a challenging problem, necessitating fine-grained understanding of visual information across countries, environments, and time. Although traditional retrieval-based approaches using hand-crafted features have recently been superseded by deep learning methods, transformer-based advances in machine learning have rarely been applied in image geolocalization. We introduce PIGEON, a novel deep multi-task model for planet-scale Street View image geolocalization that incorporates, inter alia, semantic geocell creation with label smoothing, conducts pretraining of a CLIP vision transformer on Street View images, and refines location predictions with ProtoNets across a candidate set of geocells. Our work presents three major contributions: first, we design a semantic geocell creation and splitting algorithm based on open-source data which can be adapted to any geospatial dataset. Second, we show the effectiveness of intra-geocell few-shot refinement and the applicability of unsupervised clustering and ProtoNets to the task. Finally, we make our pre-trained CLIP transformer model, StreetCLIP, publicly available for use in adjacent domains with applications to fighting climate change and urban and rural scene understanding. Motivated by the rising popularity of the online game GeoGuessr with over 50 million players worldwide, we focus specifically on Street View images and create the first AI model that consistently beats human players in GeoGuessr, ranking in the top 0.01% of players. In addition to our novel modeling approach, we create a new planet-scale dataset for image geolocalization of 400,000 images. Our model achieves impressive results, aided by positive multi-task transfer in both an implicit and explicit multi-task setting. We attain 91.96% country accuracy on our held-out set, and 40.36% of our guesses are within 25 km of the target.
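The distance-based figures above (such as the share of guesses within 25 km of the target) rest on great-circle distances between predicted and true coordinates. A minimal haversine sketch is given below; this is our own illustrative code, not the paper's evaluation pipeline, and the mean Earth radius of 6371 km is a standard spherical-Earth assumption.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0  # mean Earth radius (spherical-Earth assumption)

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points in degrees."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def accuracy_within(guesses, targets, threshold_km=25.0):
    """Fraction of guesses falling within threshold_km of their targets."""
    hits = sum(
        haversine_km(*g, *t) <= threshold_km for g, t in zip(guesses, targets)
    )
    return hits / len(guesses)
```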
One of the most important results of our work is demonstrating the domain generalization of our pre-trained CLIP model called StreetCLIP (Haas et al., 2023) and its robustness to distribution shifts. We apply StreetCLIP in a zero-shot fashion to the out-of-distribution benchmark datasets IM2GPS and IM2GPS3k and achieve state-of-the-art results, beating models finetuned on more than four million in-distribution images. Finally, we show that contrastive pretraining is an effective meta-learning technique for image geolocalization, with StreetCLIP realizing an accuracy increase of more than 10 percentage points over CLIP on countries not seen during StreetCLIP-specific pretraining. With image geolocalization datasets varying widely in terms of geographical distribution, our results demonstrate the effectiveness of applying StreetCLIP to a wide range of geolocalization and related problems. ## 1 Introduction The game of GeoGuessr has become a worldwide sensation in recent years, attracting over 50 million players globally and getting covered by the New York Times (Browning, 2022). On its surface, GeoGuessr seems quite simple: given a Street View location, players need to say where they find themselves in the world. Yet despite this seeming simplicity, the game is infamously difficult. As a result of the diversity of countries, seasons, and climates in the world, it is very hard for most humans to accurately pinpoint their locations. Motivated by GeoGuessr, we embarked on finding a state-of-the-art approach to planet-scale image geolocalization. The general problem of photo geolocation has a variety of popular use cases, ranging from geographic photo tagging and retrieval at large technology companies to academic, historical research based on archival images. The societal interest in artificial intelligence being able to recognize location from images became clear in 2016, when a paper published by Google garnered worldwide coverage by the media (Weyand et al., 2016).
Given the rising popularity of GeoGuessr, numerous amateur attempts have been made at "solving" the game (Suresh et al., 2018; de Fontnouvelle, 2021; Cassens, 2022). There is also an additional incentive to contribute to a growing community of geography enthusiasts, with AI models having the potential to improve geography education and the learned Street View representations having the potential to benefit applications in sustainability, e.g., the prediction of buildings' energy efficiency (Mayer et al., 2022). In this work, we present PIGEON, a model trained on Street View data drawn from the same distribution as GeoGuessr, achieving impressive image geolocalization results and consistently beating humans in the game of GeoGuessr, ranking amongst the top players globally. Some of our work's major contributions revolve around the use of CLIP, a recent multi-modal vision transformer which has been shown to be an effective few-shot learner (Radford et al., 2021), which is important given the geographical sparsity of images in most image geolocalization datasets. As such, our work innovates on approaches still leveraging convolutional neural networks (CNNs), such as Weyand et al. (2016). The remainder of this paper proceeds as follows. In Section 2, we outline past approaches to the problem of image geolocalization. In Section 3, we describe our dataset and the process of acquiring and augmenting our data. In Section 4, we discuss our proposed approach, outlining the six-step process comprising PIGEON. In Section 5, we present our results, discussing both distance-based metrics pertaining to our main image geolocalization task as well as other metrics relevant for our augmented dataset. In Section 6, we analyze the particularities of the performance of our model while attempting to interpret some of its predictions. Section 7 summarizes our work, and Section 8 outlines potential future directions for our research.
## 2 Related Work

### Traditional Image Geolocalization

The task of image geolocalization, also referred to as visual place recognition (Berton et al., 2022), is typically described as a difficult problem due to the sheer diversity of the conditions in which images are taken. An image can be taken during daytime or nighttime, with varying weather, illumination, season, traffic, occlusion, viewing angle, and many other factors. In fact, the task is deemed so difficult that it was not immediately clear that visual features could have superior predictive power in localizing images than textual features (Crandall et al., 2009). What is perhaps even more challenging, however, is the fact that images can be taken anywhere in the world, representing an extremely vast classification space. To that end, many of the previous approaches to image geolocalization were constrained to small parts of the world, such as looking exclusively at cities (Wu and Huang, 2022), specific mountain ranges like the Alps (Baatz et al., 2012; Saurer et al., 2016; Tomesek et al., 2022), deserts (Tzeng et al., 2013), or even beaches (Cao et al., 2012). Other approaches focused on highly constrained geographical areas, such as the United States (Suresh et al., 2018) or even specific cities like Pittsburgh and Orlando (Zamir and Shah, 2010) or San Francisco (Berton et al., 2022). The first modern attempt at planet-scale image geolocalization is attributed to IM2GPS in 2008 (Hays and Efros, 2008), a retrieval-based approach using nearest-neighbor search based on hand-crafted features. It was the first time that image geolocalization was considered in an unconstrained manner on a global scale. Yet despite this scale, dependence on nearest-neighbor retrieval methods (Zamir and Shah, 2014) meant that an enormous database of reference images would be necessary for accurate image geolocalization on the scale of the entire planet.
### Deep Image Geolocalization

#### 2.2.1 Convolutional Neural Networks (CNNs)

Interest in image geolocalization surged with the arrival of deep learning to computer vision, marking an evolution from hand-crafted to deep-learned features (Masone and Caputo, 2021). In 2016, Google released a paper called PlaNet (Weyand et al., 2016) that first applied convolutional neural networks (CNNs) (Krizhevsky et al., 2012) to photo geolocalization. It also first cast the problem as a classification task, which was particularly important as past research had shown that it was difficult for deep learning models to directly predict geographic coordinates (de Brebisson et al., 2015), both because most models do not learn the distributions of data points efficiently and because of the interdependence of latitude and longitude. The improvements made with deep learning led researchers to revisit IM2GPS (Vo et al., 2017), apply CNNs to massive datasets of mobile images (Howard et al., 2017), and make applications to GeoGuessr more widespread (Suresh et al., 2018; Luo et al., 2022). Nevertheless, some researchers argue for approaches combining classification and retrieval (Kordopatis-Zilos et al., 2021).

#### 2.2.2 Vision Transformers

Following the success of transformers (Vaswani et al., 2017) in natural language processing, the transformer architecture found its application to computer vision, such as through the ViT architecture (Kolesnikov et al., 2021). The global context of ViT architectures explains their immediate, significant improvements over CNNs (Raghu et al., 2021). Additionally, vision transformers have been found to be useful in multi-modal text and image settings, such as through OpenAI's CLIP model (Radford et al., 2021) being applied to image geolocalization (Wu and Huang, 2022; Luo et al., 2022). Prior papers have also used contrastive learning without the use of CLIP (Kordopatis-Zilos et al., 2021).
Although vision transformers have been successfully applied to a range of problems in computer science, applications of these models to image geolocalization have thus far been fairly limited (Pramanick et al., 2022), though they have recently been accelerating (Berton et al., 2022). In particular, vision transformer models have not been widely applied to the problem of geolocalization from Street View imagery.

### Multi-task Image Geolocalization

Multi-task approaches have been found to improve results on the main task by using complementary tasks (Ranjan et al., 2016), with certain types of tasks being more beneficial for the main task than others (Bingel and Sogaard, 2017). This, coupled with the fact that auxiliary information was found to be a vital pre-processing step for image geolocalization (Pramanick et al., 2022), pointed to the potential of multi-task learning to significantly accelerate the field of image geolocalization. Extracting sets of priors about objects that can potentially be seen in an image (Ardeshir et al., 2014) can be framed as ingredients for a multi-task setting, such as by using scene recognition as a secondary task in a multi-task framework (Pramanick et al., 2022). By using semantic segmentation, the problem of extreme variation can be alleviated (Seymour et al., 2018). In fact, until recently, state-of-the-art performance (Muller-Budack et al., 2018) was made possible by combining convolutional neural networks with contextual information about environmental scenes. This is particularly important as image geolocalization is very difficult in natural environments (Tomesek et al., 2022). More recent work showed that vision transformers and multi-task settings (Pramanick et al., 2022) contribute to superior performance, further accelerating research in the field.

### Geocell Partitioning

The chosen method of partitioning the world into geocells can have an enormous effect on downstream classification performance.
Previous approaches rely on geocells that are either plainly rectangular (de Fontnouvelle, 2021), rectangular using the S2 library (Muller-Budack et al., 2018), or effectively arbitrary, such as through combinatorial partitioning (Seo et al., 2018). While semantic construction of geocells has been found to be of high importance to image geolocalization (Theiner et al., 2022), even current state-of-the-art papers continue to use the S2 library (Pramanick et al., 2022). Alternative methods for achieving optimized geocells include creating specific loss functions for the classification layer (Izbicki et al., 2019).

### Additional Prior Work

Other prior academic work cited the need for cross-view image geolocalization, as photos tend to be concentrated in landmarks and urban areas with sparse ground-level geo-tagged photos. Cross-view approaches can combine ground-level appearance, overhead appearance, and land cover attributes (Lin et al., 2013). What is more, methods using Street View images have shown incredible potential in inferring factors such as income, race, education, and voting patterns (Gebru et al., 2017). In prior work, the Street View images were often input to the model in conjunction with images of landmarks (Weyand et al., 2020), images taken indoors, or cross-viewed with aerial images (Yang et al., 2021; Zhu et al., 2022). Moreover, recent papers have cited the potential of also geolocalizing objects within images (Wilson et al., 2021), factoring in differences in land cover (Russwurm et al., 2020), and setting new benchmarks (Berton et al., 2022). Further information about work done in image geolocalization can be found in various surveys of the field (Masone and Caputo, 2021; Wilson et al., 2021; Mai et al., 2022; Li and Hsu, 2022).
## 3 Dataset

### Dataset Acquisition

While most image geolocalization approaches rely on publicly available datasets, this is not the case for Street View given the lack of publicly available planet-scale Street View datasets. To that end, we decided to create an original dataset. We proactively reached out to Erland Ranvinge, the Chief Technology Officer of GeoGuessr, who generously agreed to share a dataset of 1 million locations used in the Competitive Duels mode of GeoGuessr. From this dataset, we randomly sampled 100,000 of the provided locations, or 10% of the overall dataset. For each of the locations, we downloaded four images, ending up with 400,000 images. The distribution of countries in our training set is displayed in Figure 20 in Section B of the Appendix, where the details of our process of querying the Street View API, including relevant parameters for both Street View metadata and Street View images, are also described. As can be seen, there are clear "tiers" of countries delineated by the frequency of sampling, and we denote each tier by a different color. Approximately 70% of the locations are in the "high" tier, 24% are in the "medium" tier, and the remaining 6% are in the "low" tier. For each location, we start with a random compass direction and take four images separated by 90 degrees, thus differing from the single-image setup typically seen in Street View image geolocalization (de Fontnouvelle, 2021). We carefully created non-overlapping image patches as in prior approaches (Cassens, 2022), and cropped images to remove auxiliary watermarks. Prior work using Street View for GeoGuessr image geolocalization did not specifically look at data obtained directly from the GeoGuessr game (Luo et al., 2022), making our approach particularly novel.

### Image Format

Four images for a sample location in our dataset are visualized in Figure 1.
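The four-heading sampling described in the acquisition procedure above (a random starting compass direction, then four images 90 degrees apart) can be sketched as follows; the function name and seeding are ours, not taken from the paper's code:

```python
import random

def panorama_headings(seed=None):
    """Return four compass headings 90 degrees apart, starting from a
    random direction, as used when downloading four images per location."""
    rng = random.Random(seed)
    start = rng.uniform(0.0, 360.0)
    return [(start + 90.0 * k) % 360.0 for k in range(4)]
```

Each heading could then be passed as a separate camera parameter when querying a Street View-style API for one of the four images.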
It is crucial to notice the advantage of a four-image setting compared to a single-image setting. The leftmost image in Figure 1 mainly contains information on vegetation, making it difficult to locate the image with confidence. However, the additional images provide clues pertaining to roads, buildings, and cars, pointing to the advantages of extending the dataset with additional images in lieu of taking a single image for each location.

### Dataset Augmentation

Recognizing that adding auxiliary geographic metadata can be beneficial for image geolocalization (Arbinger et al., 2022), we decided to augment our dataset with data on Koppen-Geiger climate zones (Beck et al., 2018), as well as elevation, temperature, precipitation, etc. We also capture information frequently used by human GeoGuessr players in placing their guesses, such as the side of the road that traffic travels on. Details regarding the specific datasets used in our dataset augmentation procedure are described in Section A of the Appendix.

## 4 Methodology

This work introduces a variety of technical novelties applied to the problem of image geolocalization, summarized in the following subsections.

### Geocell Creation

Prior research has shown that predicting latitudes and longitudes directly for any image geolocalization problem does not result in state-of-the-art performance (Theiner et al., 2022). Current methods all rely on the generation of geocells to discretize the coordinate regression problem and thus transform it into a classification setting, making geocell design "crucial for performance" (Theiner et al., 2022).

#### 4.1.1 Naive Geocells

Our initial geocell design is inspired by the approach undertaken by papers that had previously achieved state-of-the-art results on image geolocalization (Muller-Budack et al., 2018; Pramanick et al., 2022) using the S2 geometry library.
The S2 geocell algorithm uses numerous rectangles which observe the curvature of the earth and splits each rectangle into four equally-sized smaller rectangles if the number of data points within a given rectangle reaches a pre-defined threshold. Our naive geocell algorithm works in a similar fashion: it is initialized with one large rectangle which is in every subsequent step divided into two rectangles along the longest side, only dividing a rectangle further if the two resulting rectangles each contain a minimum of thirty points. Instead of splitting each rectangle into two equally-sized rectangles, \(k\)-means clustering is performed with \(k=2\) to find a decision boundary, only splitting the given rectangle if the minimum geocell size of thirty training data points is respected. Figure 2 illustrates the resulting rectangular geocells derived from our naive geocell creation algorithm for the metropolitan area of Paris.

#### 4.1.2 Semantic Geocells

A major contribution of this work is the generation of semantic geocells which automatically adapt to the geographic distribution of any training dataset's samples. The motivation behind a semantic geocell design is that visual features in images often follow the semantics of the given country (e.g., road markings), region (e.g., quality of infrastructure), or city (e.g., street signs). In addition, country or administrative boundaries often follow natural borders such as the flow of rivers or mountain ranges, which in turn influence visual features such as the type of vegetation, soil color, and more. We use planet-scale open-source administrative data for our semantic geocell design, relying on non-overlapping political shape files of three levels of administrative boundaries (country, admin 1, and admin 2 levels) obtained from GADM (2022).
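The naive splitting step from Section 4.1.1 can be sketched as below. This is a minimal pure-Python illustration of the idea (a 1-D Lloyd's iteration stands in for a library \(k\)-means, and the function name is ours, not the paper's); it finds the \(k=2\) decision boundary along the longest side of a cell's bounding box and refuses splits that would violate the minimum cell size:

```python
MIN_SIZE = 30  # minimum number of training points per geocell

def split_boundary(coords, min_size=MIN_SIZE):
    """Find a k-means (k=2) decision boundary along the longest side of the
    bounding box of `coords` (a list of (lon, lat) tuples). Returns
    (axis, threshold), or None if splitting would violate `min_size`."""
    lo = [min(p[i] for p in coords) for i in (0, 1)]
    hi = [max(p[i] for p in coords) for i in (0, 1)]
    axis = 0 if hi[0] - lo[0] >= hi[1] - lo[1] else 1  # longest side
    xs = [p[axis] for p in coords]
    c0, c1 = min(xs), max(xs)                 # centroid initialization
    for _ in range(50):                       # Lloyd's iterations, k = 2
        left = [x for x in xs if abs(x - c0) <= abs(x - c1)]
        right = [x for x in xs if abs(x - c0) > abs(x - c1)]
        if not left or not right:
            return None                       # degenerate cell, no split
        n0, n1 = sum(left) / len(left), sum(right) / len(right)
        if n0 == c0 and n1 == c1:
            break
        c0, c1 = n0, n1
    if min(len(left), len(right)) < min_size:
        return None                           # keep the cell undivided
    return axis, (c0 + c1) / 2                # decision boundary
```

The full algorithm would apply this recursively, starting from one large rectangle, until no cell can be split without producing a side with fewer than thirty points.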
Starting at the most granular level (admin 2), our algorithm merges adjacent admin 2 level polygons such that each geocell contains at least thirty training samples. Our method attempts to preserve the hierarchy given by admin 1 level boundaries and never merges cells across country borders (defined by distinct ISO country codes). It randomly merges geocells with adjacent cells using the following prioritization:

1. Small adjacent geocells in the same admin 1 area.
2. Large adjacent geocells in the same admin 1 area.
3. Small adjacent geocells in the same country.
4. Large adjacent geocells in the same country.

The above prioritization ensures that geocells containing fewer than the minimum threshold of training samples are not simply appended to large adjacent geocells; instead, low-density regions are aggregated into one larger cell, often surrounding major metropolitan areas. This further preserves rural and urban semantics. Figure 2 shows an example of our semantic geocell design preserving the urban area of Paris as well as the surrounding suburban regions. One limitation of aggregating admin 2 level areas as defined by GADM (2022) is that for some urban areas, the number of training examples for a single cell might greatly exceed the minimum sample threshold defined by the algorithm's user. In addition, through the process of merging adjacent geocells, some cells might be created which could be split again into multiple smaller cells based on different boundaries. We address this limitation in our geocell design through the following innovative algorithm, which uses Voronoi tessellation and the OPTICS clustering algorithm (Ankerst et al., 1999) to split a geocell into smaller semantic geocells.
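The four-tier merge prioritization above can be expressed as a small ranking helper. This is an illustrative sketch (the dictionary keys and thresholds are our own naming, not the paper's data model); lower tier numbers are merged first, and cross-country merges are disallowed:

```python
def merge_priority(cell, neighbor, min_size=30):
    """Rank a candidate neighbor for merging under the four-tier scheme:
    (1) small neighbor in the same admin 1 area, (2) large neighbor in the
    same admin 1 area, (3) small neighbor in the same country, (4) large
    neighbor in the same country. Returns None for cross-country pairs.
    `cell` and `neighbor` are dicts with keys 'country', 'admin1',
    'n_samples'."""
    if cell['country'] != neighbor['country']:
        return None                     # never merge across ISO borders
    same_admin1 = cell['admin1'] == neighbor['admin1']
    small = neighbor['n_samples'] < min_size
    if same_admin1:
        return 1 if small else 2
    return 3 if small else 4
```

A merge loop would then sort a cell's adjacent candidates by this priority (breaking ties randomly, as the text describes) and merge until the cell reaches the minimum sample count.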
Our Semantic Geocell Division Algorithm uses OPTICS (Ankerst et al., 1999) to find a large cluster within a cell, checking whether removing this cluster from the cell would result in two cells each having a larger number of training samples than MINSIZE. If this is the case, the new geocell's polygon is determined by performing Voronoi tessellation over all points in the initial cell, as depicted in Figure 3, and assigning the Voronoi polygons to a new cell containing all training samples in the computed OPTICS cluster. The area found through Voronoi tessellation is then removed from the old geocell. The splitting is performed until convergence for each OPTICS parameter setting. In our work, we use three distinct OPTICS settings with values minsamples = 8, 10, and 15 for the three respective rounds and xi parameters of 0.05, 0.025, and 0.015 for the same rounds. With each successive setting, the requirements defining a cluster are thus relaxed to find clusters even in cells which are difficult to further divide. Merging geocells according to administrative boundary hierarchies and dividing large cells based on our Semantic Geocell Division Algorithm results in geocells that are roughly balanced in size and that also preserve the semantics of cities, regions, countries, and the natural environment. By deploying our method on our training dataset, we compute the boundaries of a total of 2,203 geocells used for our experiments.

```
Input: geocell boundaries \(g\), training samples \(x\), OPTICS parameters \(p\), minimum cell size MINSIZE.
Initialize \(j=1\).
repeat
  Initialize \(C\) = OPTICS(\(p_{j}\)).
  for \(g_{i}\) in \(g\) do
    Define \(x_{i}=\{x_{j}|x_{j}\in x\wedge x_{j}\in g_{i}\}\).
    repeat
      Cluster \(c=C(x_{i})\).
      \(c_{max}=c_{k}\) where \(|x_{i,k}|\geq|x_{i,l}|\ \forall l\).
      if \(|c_{max}|>\) MINSIZE and \(|x_{i}\setminus x_{i,k}|>\) MINSIZE then
        New cell \(g_{new}\) = VORONOI(\(x_{i,k}\)).
        \(g_{i}=g_{i}\setminus g_{new}\).
        Assign \(x_{i}\) to cells \(i\) and \(new\).
      endif
    until convergence
  endfor
  \(j=j+1\)
until \(j=|p|\)
```
**Algorithm 1** Semantic Geocell Division Algorithm

Figure 1: Four images comprising a 360-degree panorama in Pegswood, England in our dataset.

### Label Smoothing

By discretizing our image geolocalization problem via our semantic geocell creation process, a trade-off is created between the granularity of geocells and predictive accuracy. The more granular the geocells are, the more precise a prediction can be, but the classification problem becomes more difficult due to higher cardinality. To address this issue, we devise a loss function which penalizes based on the distance between the predicted geocell and the correct geocell. By smoothing the one-hot geocell classification label according to Equation 1, we train our models in a much more data-efficient way, as the parameters for multiple geocells are trained concurrently with each training example. The value of the smoothed one-hot label \(L_{i}\) for geocell \(i\) given the correct geocell \(c\) is given by \[L_{i}=\exp(-\left[\text{Hav}(g_{i},x_{c})-\text{Hav}(g_{c},x_{c})\right]/75) \tag{1}\] where \(g_{i}\) are the centroid coordinates of the geocell polygon of cell \(i\) and \(x_{c}\) are the true coordinates of the example for which the label is computed. The constant of 75 acts as a temperature setting for the label smoothing, which worked well in our experiments. \(\text{Hav}(\cdot,\cdot)\) is the Haversine distance in kilometers defined as: \[2r\arcsin\left(\sqrt{\sin^{2}\left(\frac{\phi_{2}-\phi_{1}}{2}\right)+\cos(\phi_{1})\cos(\phi_{2})\sin^{2}\left(\frac{\lambda_{2}-\lambda_{1}}{2}\right)}\right) \tag{2}\] One advantage of using the Haversine distance between two points is that it respects the Earth's spherical geometry, giving accurate estimates of the distance between two points.
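Equations (1) and (2) translate directly into code. The sketch below (function names are ours) computes the Haversine distance between two latitude/longitude points and the smoothed label for an arbitrary geocell; by construction, the correct geocell always receives a label of exactly 1:

```python
import math

def haversine_km(p, q):
    """Great-circle distance (Eq. 2) between (lat, lon) points in degrees,
    using an Earth radius of 6371 km."""
    r = 6371.0
    phi1, lam1, phi2, lam2 = map(math.radians, (p[0], p[1], q[0], q[1]))
    a = (math.sin((phi2 - phi1) / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin((lam2 - lam1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def smoothed_label(cell_centroid, true_cell_centroid, true_coords, tau=75.0):
    """Smoothed one-hot label (Eq. 1): exp of the negated difference between
    a cell centroid's distance to the true point and the correct cell
    centroid's distance, divided by the temperature constant of 75."""
    delta = (haversine_km(cell_centroid, true_coords)
             - haversine_km(true_cell_centroid, true_coords))
    return math.exp(-delta / tau)
```

For the correct cell the bracketed difference in Equation (1) is zero, so the label is \(e^{0}=1\); labels decay exponentially for cells whose centroids lie farther from the true coordinates.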
Figure 4 demonstrates the results of smoothing geocell labels, which ideally results in lower geolocalization errors at the cost of slightly lower geocell prediction accuracy due to the added noise in the label. By combining our semantic geocell design with label smoothing, we optimize for our model to spread probabilities across semantically similar _and_ adjacent cells. Figure 5 shows the distribution of probabilities of our best model for a true location close to the sea in Jakobstad, Finland. Notably, our semantic geocell design and label smoothing result in our model placing high probabilities on semantically similar cells adjacent to the Gulf of Bothnia in Scandinavia.

### Vision Transformer (CLIP)

The input image is encoded using a pre-trained vision transformer (Kolesnikov et al., 2021). We utilize a pretrained ViT-L/14 architecture and either fine-tune only the prediction heads or additionally unfreeze the last vision transformer layer. For model versions with multiple image inputs, we average the embeddings of all four images. Averaging the embeddings performed better in our experiments than combining the embeddings via multi-head attention or an additional transformer layer. We were particularly interested in exploring the effect of the type of pretraining on downstream performance.

Figure 2: Ile-de-France area around Paris, France, under different geocell creation specifications.

Figure 3: Voronoi tessellation applied in the process of geocell creation.

We compare a ViT-L/16 that was pre-trained on ImageNet-21k with 14 million images (Deng et al., 2009) with CLIP ViT-L/14, a multi-modal model that utilized contrastive pre-training on a dataset of 400 million images and captions (Radford et al., 2021). Based on our priors and strategies commonly observed among professional GeoGuessr players, there are a variety of relevant features for the image geolocalization task, e.g., vegetation, road markings, street signs, and architecture.
We hypothesize that the multi-modal pre-training creates embeddings with a much deeper semantic understanding of the image, enabling the model to learn such features. As we show later, the CLIP vision transformer gives a substantial improvement over a comparable ImageNet vision transformer, and using attention maps, we can indeed show how this enables the model to learn these strategies in an interpretable way.

### StreetCLIP Contrastive Pretraining

Inspired by the substantial improvement that we observed from using CLIP's contrastive pre-training over the ImageNet pre-trained vision transformer, we explored designing a contrastive pre-training task that we could use to fine-tune our CLIP foundation model even before learning the geocell prediction head. For that, we augment our Street View dataset with geographic, demographic, and geological auxiliary data.

Figure 4: Impact of applying label smoothing over neighboring geocells for a location in Accra, Ghana.

Figure 5: Distribution of probabilities over geocells for a true location in Jakobstad, Finland.

This data is used to create randomized captions for each image using a rule-based system that samples components from different task categories and combines them in a randomized order. The probabilities for each category are adjusted based on priors. Some examples of categories and corresponding caption components include:

* Location: "A Street View photo in the region of Eastern Cape in South Africa."
* Climate: "This location has a temperate oceanic climate."
* Compass Direction: "This photo is facing north."
* Season: "This photo was taken in December."
* Traffic: "In this location, people drive on the left side of the road."

This creates an implicit multi-task setting and ensures the model maintains rich representations of the data while adjusting to the distribution of Street View images and learning features that are relevant to and correlated with geolocation.
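The rule-based caption system can be sketched as below. The caption templates mirror the examples listed above, but the per-category sampling probabilities are illustrative assumptions, not the paper's tuned priors:

```python
import random

def synthetic_caption(meta, rng=random):
    """Compose a randomized pretraining caption from auxiliary metadata:
    sample each category with some probability, then shuffle the chosen
    components into a randomized order. Probabilities here are placeholders."""
    components = [
        (1.0, f"A Street View photo in the region of {meta['region']} in {meta['country']}."),
        (0.5, f"This location has a {meta['climate']} climate."),
        (0.3, f"This photo is facing {meta['compass']}."),
        (0.3, f"This photo was taken in {meta['month']}."),
        (0.4, f"In this location, people drive on the {meta['traffic_side']} side of the road."),
    ]
    chosen = [text for p, text in components if rng.random() < p]
    rng.shuffle(chosen)                 # randomized component order
    return " ".join(chosen)
```

Pairing each image with a caption generated this way yields the contrastive (image, text) training pairs for StreetCLIP pretraining.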
### Multi-task Learning

We also experiment with making our multi-task setup explicit by creating task-specific prediction heads for auxiliary climate variables, population density, elevation, and the month (season) of the year. As climate variables, we include the Koppen-Geiger climate zone, the yearly average temperature and precipitation at the given location, as well as the difference in temperature and precipitation between the month with the highest average value and the month with the lowest average value. The climate zone and season prediction tasks are posed as classification problems, while the other six auxiliary tasks are formulated as regression tasks. In Hays & Efros (2014), the authors note that the "distribution of likely locations for an image provides huge amounts of additional meta-data for climate, average temperature for any day, vegetation index, elevation, population density, per capita income, average rainfall," and more, which can be leveraged for the task of geolocalization. We unfreeze the last CLIP layer to allow for parameter sharing across tasks, with the goal of observing a positive transfer from our auxiliary tasks to our geolocalization problem and of learning more general image representations which reduce the risk of overfitting to the training dataset. Our loss function weights the geolocalization task as much as all auxiliary tasks combined. A novel contribution of our work is that we use eight auxiliary prediction tasks instead of just two, in contrast to prior research employing multi-task methods (Pramanick et al., 2022), with multi-task methods having shown impressive results across fields (Ruder, 2017).

### ProtoNet Refinement

To further refine our model's guesses within a geocell and to improve street- and city-level performance, we perform intra-geocell refinement using ProtoNets (Snell et al., 2017).
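The loss weighting described in the Multi-task Learning subsection (the geolocalization task weighted as much as all auxiliary tasks combined) can be written as a small framework-agnostic helper; treating each of the \(N\) auxiliary losses as contributing \(1/N\) of the main task's weight is our reading of that statement:

```python
def multitask_loss(geo_loss, aux_losses):
    """Combine the main geolocalization loss with N auxiliary task losses,
    weighting the main task as much as all auxiliary tasks combined
    (each auxiliary task contributes 1/N of the main task's weight)."""
    n = len(aux_losses)
    return geo_loss + sum(aux_losses) / n
```

With the paper's eight auxiliary tasks, each auxiliary head would thus carry one eighth of the weight assigned to the geolocalization head.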
Instead of simply predicting the mean latitude and longitude of all points within a geocell, as in current state-of-the-art approaches such as Pramanick et al. (2022), we pose each cell's intra-cell refinement as a separate few-shot classification task. We again use the OPTICS clustering algorithm (Ankerst et al., 1999), with a minsample parameter of 3 and a xi parameter of 0.15, to cluster all points within a geocell and thus propose classes to learn in the intra-cell classification setting.

Figure 6: Contrastive pretraining of StreetCLIP (Haas et al., 2023) in an implicit multi-task setting using images from Varzea Grande, Mato Grosso, Brazil.

Each cluster consisting of at least three training examples forms a prototype, and its representation is computed by averaging the embeddings of all images within the prototype. To compute the prototype embeddings, we use the same model as in our geocell prediction task but remove the prediction heads and freeze all weights. Figure 7 illustrates examples of refinement clusters found by the OPTICS algorithm in the Greater Los Angeles metropolitan area. During inference, we first compute and average the new location's embeddings. After our geocell classification model predicts a cell, instead of predicting that cell's centroid coordinates, we take the Euclidean distance between the averaged image embeddings and all prototypes within the given geocell, selecting the prototype location with the smallest Euclidean image embedding distance to the inference location as the final geolocalization prediction. The creation of intra-cell location prototypes allows our model to predict one of more than 11,000 distinct locations for a training dataset of 90,000 locations, instead of just choosing from the 2,203 distinct geocell centroid coordinates, thus allowing for more precise decision making. While guess refinement via ProtoNets is in itself a novel idea, our work goes one step further by allowing our ProtoNet refiner to optimize across cells.
Instead of refining a geolocalization prediction in a single cell, our ProtoNet refiner optimizes across multiple cells, which further increases performance. During inference, our geocell classification model outputs the top five predicted geocells as well as the model's associated probabilities for these cells. The refinement model then picks the most likely location within each of the five proposed geocells, after which a softmax is computed across the five Euclidean image embedding distances yielded through ProtoNet refinement. We use a softmax with a temperature of \(1.6\), which was carefully tuned to balance probabilities across different geocells. Finally, these refinement probabilities are multiplied with the probabilities provided by the geocell classification model, and the refinement location corresponding to the highest joint probability is chosen as the final geolocalization prediction.

## 5 Results

The results of our best-performing PIGEON model are listed in the bottom row of Tables 1 and 2. We achieve an astounding 91.96% country accuracy (based on political boundaries), and 40.36% of guesses are within 25 km of the correct location. Moreover, the median kilometer error is 44.35 km and the average GeoGuessr score is 4,525. In Table 3, we list the results of our multi-task models on our augmented dataset. Our results show that geographical, demographic, and geological features can be inferred from Street View images.

### Ablation Studies on Geolocalization Accuracy

We perform a detailed ablation study for each of our methodological contributions as described in Section 4. We summarize our results in Table 1, displaying the percentage of our guesses that fall within a given kilometer radius from the actual location, using standard kilometer-based metrics in line with the literature (Pramanick et al., 2022).
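The cross-cell refinement procedure above can be sketched as follows. This is an illustrative pure-Python version (negating the distances inside the softmax, so that smaller embedding distances yield higher refinement probabilities, is our interpretation of the described softmax over distances):

```python
import math

def refine_across_cells(cell_probs, proto_dists, temperature=1.6):
    """Cross-cell ProtoNet refinement: a temperature-scaled softmax over the
    per-cell best prototype embedding distances (negated so that smaller
    distance -> higher probability), multiplied element-wise with the
    classifier's top-k geocell probabilities. Returns the winning index."""
    logits = [-d / temperature for d in proto_dists]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]     # numerically stable softmax
    z = sum(exps)
    refine_probs = [e / z for e in exps]
    joint = [p * r for p, r in zip(cell_probs, refine_probs)]
    return max(range(len(joint)), key=joint.__getitem__)
```

In this formulation, a cell that is a priori slightly less probable can still win if its best prototype lies much closer to the query embedding, which is exactly the behavior the cross-cell optimization is meant to enable.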
Furthermore, for each ablation, we calculate additional distance-based metrics in Table 2 that provide insights into the performance of our modeling approach. We have the following observations:

* Label smoothing, the four-image panorama, multi-task parameter sharing, semantic geocells, and CLIP pretraining all significantly improve continent-, country-, and region-level metrics.
* On the other hand, ProtoNet refinement has almost no effect on continent-, country-, and region-level metrics, but significantly improves street-level accuracy from 1.32% to 4.84% as well as city-level accuracy from 34.96% to 39.86%.
* Fine-tuning the last CLIP layer hurts model performance on its own; however, when performing multi-task training with the last CLIP layer as shared parameters, there is _positive transfer_ and it _increases_ performance. The multi-task training acts as a regularizer.
* When additionally performing the contrastive StreetCLIP pretraining, unfreezing the last CLIP layer again _hurts_ performance. In particular, there is no positive transfer from the multi-task training anymore. Presumably, all of the benefits from multi-task supervision have already been captured by the implicitly multi-task StreetCLIP pretraining.

Figure 7: Visualized ProtoNet clusters in the Greater Los Angeles metropolitan area.

In Figure 8, we visualize the improvement of the best-performing PIGEON models over the simplest model using CLIP Base, showing how the performance gains are more palpable at finer granularities of distance compared to coarser distance metrics.

### Contrastive Pretraining Results with StreetCLIP

The geolocation task is usually framed as a supervised learning problem. However, this has the major drawback that the resulting models are restricted to a specific task, e.g., a fixed number of classes and the distribution of the training data.
For example, our training dataset contains only Street View images during the day, whereas IM2GPS, a common benchmark dataset for geolocalization, contains a much wider distribution of images, e.g., images of the inside of buildings and images during the night. Moreover, both datasets have different non-overlapping sets of countries and differing definitions of countries, e.g., whether overseas territories like French Guiana or Guam are considered their own countries or not. We hypothesize that StreetCLIP (Haas et al., 2023), through our Street View Multi-task Contrastive Pretraining, learns relevant strategies for geolocalization but keeps the general world knowledge from the original CLIP Pretraining. Thereby, it can generalize to countries it has never seen during our Street View Pretraining and is robust with regard to distribution shift. We test our trained StreetCLIP model on the benchmark image geolocalization datasets IM2GPS and IM2GPS3k, which contain a much broader distribution of images than Street View. By generating an exhaustive list of 234 country captions, we perform a zero-shot linear probe of StreetCLIP to get country-level predictions, which we then translate into coordinates. Table 4 presents our results. We compare against TransLocator (Pramanick et al., 2022), the current state-of-the-art on both of these datasets, and following their work, we report our performance on continent-level accuracy. Whereas TransLocator was trained in a supervised manner on 4.72 million images, our model was trained in a semi-supervised manner on only 1 million Street View images. Surprisingly, despite the distribution shift, StreetCLIP outperforms the state-of-the-art on both benchmark datasets using just linear probing. In particular, StreetCLIP performs significantly better than CLIP, which implies that there is a transfer of image geolocalization performance onto new distributions.
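Conceptually, the zero-shot probe reduces to a cosine-similarity comparison between the image embedding and one caption embedding per country. A minimal sketch follows; the embeddings below are stand-ins (in practice they come from the CLIP image and text encoders), and the caption template "This photo is located in {country}." is our assumption modeled on the explainability examples:

```python
import numpy as np

def zero_shot_country(image_emb, caption_embs, countries):
    """Pick the country whose caption embedding (one per candidate
    country) is most cosine-similar to the image embedding."""
    img = np.asarray(image_emb, dtype=float)
    img = img / np.linalg.norm(img)
    txt = np.asarray(caption_embs, dtype=float)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    sims = txt @ img                       # cosine similarities, one per country
    return countries[int(np.argmax(sims))], sims
```

The winning country label is then translated into representative coordinates for the distance-based metrics.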
We conjecture that contrastive pretraining is performing _implicit meta-learning_. To further confirm this hypothesis, we investigated the performance of CLIP and StreetCLIP in countries that were not seen during StreetCLIP training (Haas et al., 2023). On the latest benchmark IM2GPS3k, StreetCLIP achieves an accuracy of 52.79% on countries unseen during StreetCLIP training vs. 41.51% for CLIP. An explanation for this surprising transfer is that the knowledge about these countries was already learned during the initial CLIP pretraining, e.g., the text encoder presumably has a good embedding of every country in the world. However, the StreetCLIP pretraining primes the model for the geolocalization tasks and unlocks additional knowledge from the original CLIP pretraining. Thereby, StreetCLIP can perform well on zero-shot transfer to new tasks (i.e., new countries), where our contrastive pretraining can be seen as a form of implicit meta-learning. ## 6 Analysis We analyze our results in detail both through quantitative and qualitative evaluations. We confirmed the accuracy of our results by deploying our model in the GeoGuessr game, where our model consistently beats high-ranking human players, ranking in the Top 1,000 globally. We try to understand whether StreetCLIP is learning interpretable strategies by utilizing an explainability method. Furthermore, we analyze some of our underperforming guesses and discuss the limitations of our work. ### Quantitative Evaluation #### 6.1.1 Comparison with Human Performance Using our Chrome extension (see Appendix D), we deploy PIGEON in online competitive GeoGuessr and aggregate the results of 298 rounds of the game mode Duel against human players of varying skill levels. We visualize the comparison of PIGEON with actual human in-game performance in Figure 9. Players are ranked into the following divisions by skill level: Bronze Division, Silver Division, Gold Division, Master Division, and Champion Division.
Figure 8: Geolocalization accuracy of our models within standard distance-based metrics (km radii). \begin{table} \begin{tabular}{c c c c c c} \hline \hline & \multicolumn{5}{c}{**Distance (\% @ km)**} \\ **Method** & _Street_ & _City_ & _Region_ & _Country_ & _Continent_ \\ & 1 km & 25 km & 200 km & 750 km & 2500 km \\ \hline CLIP Base & 1.28 & 24.08 & 55.38 & 80.20 & 92.00 \\ + Label Smoothing & 0.92 & 24.18 & 59.04 & 82.84 & 92.76 \\ + Four-image Panorama & 1.10 & 32.50 & 75.32 & 92.92 & 98.00 \\ + Fine-tuning Last CLIP Layer & 1.10 & 32.74 & 75.14 & 93.00 & 97.98 \\ + Multi-task Parameter Sharing & 1.18 & 33.22 & 75.42 & 93.42 & 98.16 \\ + Semantic Geocells & 1.24 & 34.54 & 76.36 & 93.36 & 97.94 \\ + Contrastive CLIP Pretraining & 1.32 & 34.96 & 78.48 & **94.82** & 98.48 \\ + ProtoNet Refinement & 4.84 & 39.86 & **78.98** & 94.76 & 98.48 \\ \hline - Unfreezing Last CLIP Layer & **5.36** & **40.36** & 78.28 & 94.52 & **98.56** \\ \hline \hline \end{tabular} \end{table} Table 1: Multi-step ablation study on our modeling approach to image geolocalization. \begin{table} \begin{tabular}{l c c c c} \hline \hline & **Country** & **Mean** & **Median** & **GeoGuessr** \\ **Method** & **Accuracy** & **km Error** & **km Error** & **Score** \\ & \(\%\) & \(km\) & \(km\) & \(points\) \\ \hline CLIP Base & 72.12 & 990.0 & 148.0 & 3,890 \\ + Label Smoothing & 74.74 & 877.4 & 131.1 & 3,986 \\ + Four-image Panorama & 87.64 & 315.7 & 60.81 & 4,442 \\ + Fine-tuning Last CLIP Layer & 87.90 & 312.7 & 61.81 & 4,442 \\ + Multi-task Parameter Sharing & 87.96 & 299.9 & 60.63 & 4,454 \\ + Semantic Geocells & 89.36 & 316.9 & 55.51 & 4,464 \\ + Contrastive CLIP Pretraining & 91.14 & 251.9 & 50.01 & 4,522 \\ + ProtoNet Refinement & 91.82 & 255.1 & 45.47 & **4,531** \\ \hline - Unfreezing Last CLIP Layer & **91.96** & **251.6** & **44.35** & 4,525 \\ \hline \hline \end{tabular} \end{table} Table 2: Results from the ablation study beyond the standard distance metrics (distance).
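The kilometer errors reported in these tables are great-circle distances between the predicted and true coordinates. A standard haversine computation is sketched below (our sketch; the paper does not state the exact formula it uses):

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))
```

For instance, `haversine_km(48.8566, 2.3522, 51.5074, -0.1278)` (Paris to London) gives roughly 344 km, the kind of error that would count at the 750 km country level but not at the 200 km region level.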
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & **Elevation Error** (\(m\)) & **Pop. Density Error** (\(people/km^{2}\)) & **Temp. Error** & **Precip. Error** (\(mm/day\)) & **Climate Accuracy** (\(\%\)) & **Month Accuracy** (\(\%\)) \\ \hline CLIP Base & & & & & \\ + Label Smoothing & & & & & \\ + Four-image Panorama & & & & & \\ + Fine-tuning Last CLIP Layer & & & & & \\ + Multi-task Parameter Sharing & 141.7 & 1,094 & 1.37 & 14.48 & 45.74 & 74.10 \\ + Semantic Geocells & 147.1 & **1,064** & 1.36 & 14.71 & 45.74 & 74.66 \\ + Contrastive CLIP Pretraining & **132.8** & 1,072 & **1.18** & **12.82** & **50.64** & **75.76** \\ + ProtoNet Refinement & & & & & \\ \hline - Unfreezing Last CLIP Layer & 149.6 & 1,119 & 1.26 & 15.08 & 45.42 & 75.22 \\ \hline \hline \end{tabular} \end{table} Table 3: Results from the ablation study beyond the standard distance metrics (non-distance). For reference, GeoGuessr has 30 million players worldwide, and the Master Division represents roughly the top 1% of players, whereas the Champion Division represents the Top 1000 players worldwide. As we observe in Figure 9, PIGEON comfortably outperforms human performance. It even beats Champion Division players in median kilometer distance and, therefore, belongs to the Top 0.1% or Top 1000 players globally. Moreover, PIGEON is able to perform guesses almost instantly. #### 6.1.2 Urban vs. Rural In order to elucidate the difficulty of different sub-distributions, we investigate whether a performance differential exists between urban and rural locations. Presumably, the density of relevant cues should be higher in Street View images from urban locations. We bin our validation dataset into quintiles by population density and visualize PIGEON's median kilometer error. In Figure 10, we observe that indeed higher population density correlates with better predictions.
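The quintile analysis behind Figure 10 can be sketched as follows. This is a minimal NumPy sketch under our own assumptions about the binning procedure, which the paper does not spell out:

```python
import numpy as np

def median_error_by_density_quintile(pop_density, km_error):
    """Bin validation samples into population-density quintiles and
    return the median kilometer error for each of the five bins."""
    density = np.asarray(pop_density, dtype=float)
    error = np.asarray(km_error, dtype=float)
    edges = np.quantile(density, [0.2, 0.4, 0.6, 0.8])  # quintile boundaries
    bins = np.digitize(density, edges)                  # 0..4 = quintile index
    return [float(np.median(error[bins == q])) for q in range(5)]
```

If denser locations are indeed easier, the returned medians should decrease from the first (most rural) to the last (most urban) quintile.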
In particular, there is a sharp dropoff in the highest quintile compared to the other four quintiles. This confirms our hypothesis that there is a higher density of cues in urban locations. ### Qualitative Evaluation #### 6.2.1 Explainability One of our hypotheses in Section 4.3 was that the contrastive pre-training used by CLIP gives the model a deeper semantic understanding of scenes and thereby enables it to discover strategies that are interpretable by humans. Surprisingly, the model was able to learn strategies that are taught in online GeoGuessr guides without ever having been directly supervised to learn these strategies. In order to visualize what patches of the image are considered relevant for a given caption, we visualize attention relevancy maps for our finetuned StreetCLIP model by implementing the method from Generic Attention-model Explainability for Bi-Modal Transformers (Chefer et al., 2021). In our experiments, we observed that this explainability method does not generalize well from a patch size of 32, as used in the official implementation, to our patch size of 14. Our hypothesis is that this is caused by the distribution of relevancy scores across patches having a lower entropy when the patch size is smaller. In order to resolve this issue, we modify the method by filtering out outliers and squaring relevancy scores. This significantly improved the interpretability of both regular CLIP and our StreetCLIP on smaller patch sizes and should be applicable beyond our project. For the visualizations in Figure 11, we generated relevancy maps for an image from the validation dataset and the corresponding ground-truth caption, e.g. "This photo is located in Canada". Indeed, the model pays attention to features that professional GeoGuessr players consider important, e.g., vegetation, road markings, utility posts, and signage. 
This makes the strong performance of the model explainable and could furthermore enable the discovery of new strategies that professional players have not yet discovered. #### 6.2.2 Error Analysis In spite of our model's generally high accuracy in estimating image geolocations, there were several scenarios in which our model underperformed. By computing entropy for the probabilities of top predicted geocells for each location in our validation set, we managed to identify the images whose geolocation our model was most uncertain about. We visualize those cases in Figure 12. The features of poorly classified images are aligned with our intuitions and prior literature about difficult settings for image geolocalization. Figure 12 shows that images from tunnels, bodies of water, poorly illuminated areas, forests, indoor areas and soccer stadiums are amongst the imagery that is the most difficult to pinpoint geographically. This makes sense: without recognizable features directly pertaining to a specific geographical area, their classification is much more difficult when compared to images with features that clearly distinguish a given geography. ### Limitations Nevertheless, several limitations remain. Although PIGEON can successfully identify the vast majority of countries in which photos were taken, it still cannot be used at extremely precise levels (street-level) that are necessary for detailed geo-tagging. Moreover, the Street View images in our dataset were taken during daytime, raising doubts over the generalization of the model to images taken during nighttime.
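The uncertainty ranking used in the error analysis above can be sketched as follows (a minimal sketch; the function name and the renormalization of the top-k probabilities are our assumptions):

```python
import numpy as np

def prediction_entropy(topk_probs):
    """Shannon entropy of the renormalized top-k geocell probabilities;
    the images the model is most uncertain about have the highest entropy."""
    p = np.asarray(topk_probs, dtype=float)
    p = p / p.sum()          # renormalize the truncated distribution
    p = p[p > 0]             # 0 * log(0) is defined as 0
    return float(-(p * np.log(p)).sum())
```

Sorting validation images by this value in descending order surfaces cases like those in Figure 12; a uniform top-k distribution gives the maximum entropy log(k), a one-hot distribution gives 0.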
\begin{table} \begin{tabular}{c c c} \hline \hline \multirow{2}{*}{**Benchmark**} & \multirow{2}{*}{**Method**} & **Distance (\% @ km)** \\ & & _Continent_ \\ & & 2500 km \\ \hline \multirow{2}{*}{**IM2GPS**} & TransLocator & 86.70 \\ & Zero-shot CLIP & 86.08 \\ & Zero-shot StreetCLIP & **88.19** \\ \hline \multirow{2}{*}{**IM2GPS3k**} & TransLocator & 80.10 \\ & Zero-shot CLIP & 77.28 \\ \cline{1-1} & Zero-shot StreetCLIP & **80.65** \\ \hline \hline \end{tabular} \end{table} Table 4: Results from zero-shot probing with StreetCLIP (Haas et al., 2023) contrastive pretraining on out-of-distribution benchmark datasets. Further testing under different appearance variations could provide insights into the robustness of PIGEON to different seasons, illuminations, weather, etc. Additionally, we recognize that some of our visualizations may be prone to cherry-picking, thus not being wholly representative of the underlying datasets. ## 7 Conclusion Overall, PIGEON presents multiple novel improvements to multi-task image geolocalization while providing important insights and artifacts for related problems in fighting climate change and in urban and rural scene understanding. PIGEON achieves impressive results in planet-scale image geolocalization on Street View images, achieving a country accuracy of 91.96% on our held-out dataset and placing 40.36% of our guesses within 25 km of the target. Our model consistently beats human players in the game of GeoGuessr, which samples data from the same distribution as our newly introduced dataset of 100,000 Street View locations. Figure 9: Comparison of the GeoGuessr in-game performance of PIGEON with the performance of actual online GeoGuessr players. Figure 10: Median km error by population density quintile. Figure 11: Attention attribution maps for a sample of locations in our dataset.
The three major contributions of our work can be summarized as follows: First, we introduce a semantic geocell creation and splitting algorithm based on open-source data that is adaptable to any geospatial dataset. Second, we show the effectiveness of intra-geocell few-shot refinement via ProtoNets and the use of clustering to generate potential prediction candidates. Finally, we make our pre-trained CLIP transformer model, StreetCLIP (Haas et al., 2023), publicly available for use by other researchers and show that contrastive pretraining is an effective meta-learning technique ideal for domain generalization and robustness to distribution shifts. One of the most important results of our work is achieving state-of-the-art performance on the IM2GPS and IM2GPS3k image geolocalization benchmark datasets, which are strongly out-of-distribution compared to the Street View dataset used for the pre-training of StreetCLIP. Most notably, this state-of-the-art performance is achieved zero-shot, shining light on the potential of StreetCLIP to help solve problems in many other domains. ## 8 Future Work ### Potential Extensions Going forward, several extensions can be made to make image geolocalization more precise. Future models can detect text included in images to leverage linguistic information for predictions, with textual data having previously been suggested as a potential feature aiding geolocalization (Arbinger et al., 2022). Instead of being constrained to street-level imagery, cross-view approaches could be employed, such as synthesizing satellite imagery with Street View (Toker et al., 2021). Although we propose novel semantic geocells, our experiments are constrained to one granularity of geocells; in the future, various granularities of geocells can be tested to find the optimal geocell sizes.
Ideally, future image geolocalization models would be robust to appearance changes, which brings up the need for incorporating changes over the years, requiring datasets of images collected over an extended period of more than a year (Ali-bye et al., 2022). In a multi-task setting, determining the optimal number of tasks is likely to be a priority. Additionally, image segmentation and concept influence could be used for further location prediction interpretability, and images could be fused to obtain information about the entire four-image panorama rather than just individual images. In the long term, future work could go beyond Street View, with the models able to geolocate any photo taken anywhere in the world at fine-grained granularity. To that end, future experiments in CLIP-based zero-shot settings should go beyond just the continent-level accuracy. Figure 12: Examples of images for which PIGEON was the most uncertain about the correct location. Some additional extensions we thought of exploring in this project, but did not end up pursuing, include using knowledge graphs, using road networks and compass directions for intra-geocell refinement, as well as adding an urban/rural scene recognition task to the multi-task setting. ### Social Impact The results we achieved have vast social impact potential. By predicting climate based on images, we could assess risks arising from the consequences of climate change. This is why we decided to augment our data specifically with the Köppen-Geiger climate classification system, given its emphasis on the geospatial understanding of the impacts of climate change (Beck et al., 2018). Image geolocalization can also be used for applications in autonomous driving (Wilson et al., 2021), in war zones (such as during the Russian invasion of Ukraine), for attributing location to archival images, helping historical research, as well as in promoting geography education through gamified e-learning (Girgin, 2017).
Despite these potential benefits, image geolocalization nevertheless raises various ethical issues. Some actors posting images might not want their images to be geolocalized, raising questions about the fragility of privacy protections. Furthermore, accurate image geolocalization systems could be used by governments for citizen surveillance, posing a threat to individual freedoms.
2303.06663
SAR-UNet: Small Attention Residual UNet for Explainable Nowcasting Tasks
The accuracy and explainability of data-driven nowcasting models are of great importance in many socio-economic sectors reliant on weather-dependent decision making. This paper proposes a novel architecture called Small Attention Residual UNet (SAR-UNet) for precipitation and cloud cover nowcasting. Here, SmaAt-UNet is used as a core model and is further equipped with residual connections, parallel to the depthwise separable convolutions. The proposed SAR-UNet model is evaluated on two datasets, i.e., Dutch precipitation maps ranging from 2016 to 2019 and French cloud cover binary images from 2017 to 2018. The obtained results show that SAR-UNet outperforms other examined models in precipitation nowcasting from 30 to 180 minutes in the future as well as cloud cover nowcasting in the next 90 minutes. Furthermore, we provide additional insights on the nowcasts made by our proposed model using Grad-CAM, a visual explanation technique, which is employed on different levels of the encoder and decoder paths of the SAR-UNet model and produces heatmaps highlighting the critical regions in the input image as well as intermediate representations to the precipitation. The heatmaps generated by Grad-CAM reveal the interactions between the residual connections and the depthwise separable convolutions inside of the multiple depthwise separable blocks placed throughout the network architecture.
Mathieu Renault, Siamak Mehrkanoon
2023-03-12T13:56:59Z
http://arxiv.org/abs/2303.06663v1
# SAR-UNet: Small Attention Residual UNet for Explainable Nowcasting Tasks ###### Abstract The accuracy and explainability of data-driven nowcasting models are of great importance in many socio-economic sectors reliant on weather-dependent decision making. This paper proposes a novel architecture called Small Attention Residual UNet (SAR-UNet) for precipitation and cloud cover nowcasting. Here, SmaAt-UNet is used as a core model and is further equipped with residual connections, parallel to the depthwise separable convolutions. The proposed SAR-UNet model is evaluated on two datasets, i.e., Dutch precipitation maps ranging from 2016 to 2019 and French cloud cover binary images from 2017 to 2018. The obtained results show that SAR-UNet outperforms other examined models in precipitation nowcasting from 30 to 180 minutes in the future as well as cloud cover nowcasting in the next 90 minutes. Furthermore, we provide additional insights on the nowcasts made by our proposed model using Grad-CAM, a visual explanation technique, which is employed on different levels of the encoder and decoder paths of the SAR-UNet model and produces heatmaps highlighting the critical regions in the input image as well as intermediate representations to the precipitation. The heatmaps generated by Grad-CAM reveal the interactions between the residual connections and the depthwise separable convolutions inside of the multiple depthwise separable blocks placed throughout the network architecture. UNet, Precipitation Nowcasting, Cloud Cover Nowcasting, Deep Learning ## I Introduction An accurate precipitation nowcast is essential in many domains such as urbanization, agriculture and tourism. Sudden heavy rainfalls can lead to major catastrophes such as floods and landslides and have a tremendous economic impact. Early Warning Systems (EWSs) use the nowcast/forecast to limit damages due to climate hazards. 
Therefore, quick, accurate and trustworthy forecasts are invaluable, especially for the near future, i.e., nowcasting. For instance, highly competitive mechanical sports such as Formula 1 rely on forecasts as detailed as minute-by-minute predictions on different parts of a circuit to decide the race setup and strategy. The aviation sector depends on various factors such as precipitation, wind speed and direction to ensure a safe journey for the passengers daily. State-of-the-art Numerical Weather Prediction (NWP) models rely on numerical methods simulating the intrinsic physical dynamics of the climate. These simulations are often complex and require vast computational resources. In this context, the development of advanced data-driven models such as Deep Learning (DL) in the past years has gained a lot of attention and has even challenged the performance of NWP models [1]. Deep Learning-based models have previously been successfully applied for forecasting tasks in a range of application domains including crop yield [2], solar irradiance [3], traffic [4] and weather [5, 6, 7, 8, 9, 10, 11, 12]. The weather datasets are time-series with observations of a weather element at each time step. Recurrent Neural Networks (RNNs) were designed to capture the order and time-dependency observed in such datasets [13]. Over the last decade, the literature has witnessed the development of Convolutional Neural Networks for several computer vision tasks [14]. In particular, LeNet-5 [15], GoogLeNet [16] and U-Net [17] have shown promising results for image segmentation or object detection, especially in the medical field [18, 19]. The literature on traditional computer vision tasks such as image classification is rich. However, image-to-image forecasting is rather a new field, and many architectures have yet to be developed to address problems such as the one studied here.
Despite the outstanding performance of deep learning approaches, they are not yet fully transparent in how they reach a particular decision. Recently, researchers have developed Explainable AI (XAI) techniques to help the user understand the logic that leads the model to the prediction [20]. Standard methods to achieve this are gradient-based saliency maps, Class Activation Maps, or Excitation Backpropagation. These post hoc methods produce heatmaps by computing the layers' activation after the network is trained. Other popular post hoc algorithms are perturbation-based, where one determines the essential features by altering or removing parts of the input and quantifying the impact on the prediction performance. Here, we use post hoc algorithms previously applied in image classification and segmentation to provide additional insights into model predictions. This paper proposes a novel architecture called Small Attention Residual UNet (SAR-UNet) which uses SmaAt-UNet [21] as its core model and equips it with a Residual Connection parallel to each Depthwise Separable Convolution on both encoder and decoder paths. We evaluate the introduced model on precipitation and cloud cover nowcasting over the Netherlands and France, respectively. The main contributions of the paper are the following: * We introduce a novel deep architecture for precipitation as well as cloud cover nowcasting. This network relies on the use of Depthwise Separable Convolution (DSC) with Residual connections and Attention mechanisms in its encoder and decoder. The SAR-UNet outperforms its predecessor, the SmaAt-UNet, in precipitation and cloud cover nowcasting tasks. * We utilize Grad-CAM, a visual explanation technique, in order to provide additional insights on the nowcasts. Given an input image, Grad-CAM produces heatmaps of the activations in different levels of our network. To the best of our knowledge, this is the first adaptation of Grad-CAM for an image-to-image nowcasting task.
This paper is organized as follows. A brief overview of the related research works is given in Section II. Section III introduces the proposed SAR-UNet model and explains the used visual explanation technique. The experimental settings and description of the used datasets are given in Section IV. The obtained results are discussed in Section V and the conclusion is drawn in Section VI. ## II Related Work Data-driven weather forecasting has recently gained a lot of attention. Among many successful deep learning architectures, Recurrent Neural Networks [22] and Long Short-Term Memory (LSTM) [23] have shown promising results in sequential data analysis. Xu et al. [24] combined a Generative Adversarial Network with an LSTM architecture to produce a network suitable for prediction using satellite cloud maps. An attempt to design an LSTM block that would encompass the treatment of the spatial aspect of the given input image led to the Convolutional LSTM Network (ConvLSTM) [25], which has been successfully applied to precipitation nowcasting and outperformed the Fully Connected LSTM network. UNet [17], the widely used architecture in computer vision, has been previously extended and modified in various ways. For instance, the authors in [21] introduced the SmaAt-UNet, using a UNet core model, by replacing all regular convolutions with Depthwise-Separable Convolution (DSC) [26] as well as adding Convolutional Block Attention Modules (CBAMs) [27] to the encoder part. The explored DSC mechanism in the SmaAt-UNet significantly reduced the number of trainable parameters of the network. Moreover, the utilized CBAM blocks create attention maps both for the channel dimensions and the spatial dimensions, and can therefore better learn the inter-channel as well as inter-spatial relationships of the weather variables. Diakogiannis et al. [28] proposed a UNet backbone enhanced with residual connections parallel to the convolutional blocks.
The use of the residual connections avoids the problem of exploding and vanishing gradients, allowing the creation of deeper networks. Explaining and interpreting deep weather forecasting models is still in its early stage. Abdellaoui et al. [29] used occlusion analysis to infer the importance of weather variables. The great advantage of this process is that it can be applied to any network architecture. In [7], the author quantifies the uncertainty in the prediction through Test Time Dropout, a method that approximates the Bayesian inference of Bayesian Neural Networks. An uncertainty map is then produced to visualize and interpret the prediction. The authors in [30, 31] use Local Interpretable Model-agnostic Explanations (LIME) in a weather forecast context to interpret the decisions from their model. LIME approximates the deep learning network with a simple model, such as a linear one, to understand the relationships in the weather features. Most of the explainable AI techniques adapted to CNN architectures have been applied to image classification or segmentation tasks. For instance, the authors in [32] introduced Gradient-weighted Class Activation Maps (Grad-CAM) to interpret a CNN's output. ## III Method ### _Proposed SAR-UNet model_ #### Iii-A1 Architecture Fig. 1(a) shows a diagram that summarizes the proposed model. The model receives several images of size 288 by 288 stacked over the channels as input. Similar to UNet, its encoder-decoder transforms the data into spatially smaller images with more channels, then back into spatially larger images with fewer channels on each level before the output. Each level of the encoder is made of three transformations: Residual DSC Block (blue arrow), CBAM (yellow arrow), and 2x2 Max Pooling (red arrow). The CBAM's output is used as input for the Max Pooling layer and is also sent to the corresponding level (purple arrow) of the decoder part, where it is concatenated to the images from the lower level.
The 2x2 Max Pooling reduces the spatial size of the images by 2, and its output is used as input to the next (lower) level of the encoder. A decoder level begins with 1x1 convolution followed by upsampling, represented by the green arrow in the diagram. The 1x1 convolution aims to halve the number of channels, while the upsampling doubles the spatial size of the images. The images are then concatenated to the CBAM's output over the channels and fed into a Residual DSC block. Note that the Residual DSC blocks double the number of channels in the encoder whilst they halve them in the decoder. A 1x1 output convolution (black arrow) layer is used in the end to obtain a single output image with the prediction. #### Iii-A2 Residual DSC Block Residual connections act as a shortcut in the network and therefore make it easier to optimize the network parameters, which can result in improving the training of the network and its accuracy. In addition, the residual connections allow for deeper networks to be trained without suffering from the vanishing gradients problem. In the proposed Residual DSC Block, shown in Fig. 1(b), we combine the Residual connection with the Depthwise Separable Convolution mechanism. The input goes through DSC with 3x3 kernel, Batch Normalization and ReLU twice. Moreover, this same input goes, in parallel, through a residual connection with 1x1 convolution to match the output channels of DSC. The output of the Residual DSC block is obtained by summing the output of the residual and the DSC paths. #### Iii-A3 CBAM We place CBAMs after each Residual DSC block in the encoder. As opposed to [21], where the CBAM's output is only used in the skip connection, here it is also used as the input of the next level. The last adaptation of the SmaAt-UNet resides in the number of channels in the bottleneck layer. Our network has 1024 channels, twice as many as in the bottleneck of the SmaAt-UNet.
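A minimal NumPy sketch of the Residual DSC Block follows. It is our simplification, not the paper's implementation: a single depthwise separable stage with 3x3 depthwise kernels instead of the two DSC + Batch Normalization + ReLU stages, and Batch Normalization omitted:

```python
import numpy as np

def conv2d_same(x, k):
    """Single-channel 2D cross-correlation with zero padding ('same' size)."""
    kh, kw = k.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (xp[i:i + kh, j:j + kw] * k).sum()
    return out

def residual_dsc_block(x, depth_k, point_w, res_w):
    """x: (C_in, H, W); depth_k: (C_in, 3, 3); point_w, res_w: (C_out, C_in)."""
    # Depthwise: one spatial kernel per input channel.
    depth = np.stack([conv2d_same(x[c], depth_k[c]) for c in range(x.shape[0])])
    # Pointwise 1x1 convolution mixes channels, followed by ReLU.
    main = np.maximum(np.einsum('oc,chw->ohw', point_w, depth), 0.0)
    # Residual path: 1x1 convolution matches the output channel count.
    residual = np.einsum('oc,chw->ohw', res_w, x)
    return main + residual
```

The residual path's 1x1 convolution (`res_w`) lets the two branches be summed even when the block changes the channel count. In the encoder these blocks double the channel count at every level, which is how the bottleneck reaches its 1024 channels.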
For this reason, we use 1x1 Convolutions before each upsampling operation to reduce the channels in the next decoder level. These adaptations induce a slight increase in the network's total number of trainable parameters. We compare the total number of parameters of UNet, SmaAt-UNet and SAR-UNet in Fig. 2. ### _Training_ For training we followed the guidelines of [21]. The initial learning rate is set to 0.001 with a learning rate scheduler dividing the learning rate by 10 every time the validation loss has not decreased in four consecutive epochs. The Adam optimizer is used for training the model. The maximum number of epochs is set to 200, with an early stopping criterion of 15 epochs, effectively stopping training if there was no improvement over the last 15 epochs. The models are trained using a batch size of 6 on the Google Colab Pro platform with a GPU (Nvidia Tesla P100). ### _Evaluation_ To evaluate the performance of the examined models, we use the Mean Squared Error (MSE) which is also used as the loss function during training. We also set a range of metrics after binarizing the output image. Following the lines of [21], using a threshold of 0.5\(mm/h\), we binarize each pixel of the output image and count the number of True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN). Thanks to this, we get the Accuracy, Precision, Recall, and the F1-score of the model. We emphasize the MSE as an indicator of performance as this is a regression task, and the objective is to predict the exact quantity of precipitation in millimeters. Moreover, we compare the performance of the proposed model with that of SmaAt-UNet and the Persistence method, which uses the last input image of a sequence as the prediction image. Fig. 1: SAR-UNet architecture. Arrows stand for a given transformation applied to the data, while the rectangles show the shape of the data after each step. 
The height and width of the images are shown on the left side at each level of the network. The number of filters used in the convolutions can be seen through the number of channels in the resulting images at each step. In the bottom left corner, we show the inside of a Residual DSC Block: the input goes, in parallel, through two DSCs on one side and through a 1x1 convolution on the other, and both sides are then summed. The network uses 12 channels as input, corresponding to 60 minutes of image data.

Fig. 2: Comparison of the number of trainable parameters between networks.

### _Explainability_

Explainability is an essential part of deep learning nowcasting. Here, we explain the predictions through activation heatmaps. These heatmaps show which areas of the input are responsible for the high values in the output, thus making the nowcast more transparent. Grad-CAM was originally developed by [32] to produce a visual explanation of a classification network. It takes the gradient of the score for a class with respect to the feature-map activations to obtain neuron-importance weights. Then it performs a linear combination of the forward activation maps, followed by a ReLU, to obtain the final result. As mentioned before, after binarizing the output radar image for the evaluation metrics, we obtain an image that is segmented into rain and no-rain classes. We thus obtain explanations for our nowcasting task by transforming it into an image segmentation task. We set the algorithm to produce heatmaps for the rain class on the entire image.

## IV Experiments

### _Precipitation nowcasting_

We use the same precipitation map dataset as in [8, 21]. This dataset was collected from 2016 to 2019 by the Royal Netherlands Meteorological Institute (Koninklijk Nederlands Meteorologisch Instituut) with two radars in Den Helder and De Bilt, The Netherlands, capturing precipitation intensities every 5 minutes.
Every pixel of the resulting image corresponds to one square kilometer of land, and its numerical value stands for the amount of rainfall, in hundredths of a millimeter. That means a pixel value of 1 translates to 0.01 mm of rainfall. We follow the preprocessing steps of [21], cropping the images to a size of 288x288 pixels, centered on the Netherlands. As our interest is focused on images with rain, similar to [21], we select only images in which at least 50% of the pixels have a value strictly greater than 0. The images used as input for the network are stacked over the channel dimension. We have conducted a set of experiments to quantify the nowcasting performance in different situations. In particular, we use three different input sizes with 6, 12 and 18 channels, corresponding to 30, 60 and 90 minutes of data input for the network. For each of the input sizes, we conduct a range of nowcasting tasks: 30, 60, 90, 120 and 180 minutes ahead. This amounts to a total of 15 different nowcasting tasks. We used checkpoints to save the model after each epoch during training, and for each setup we selected the checkpoint with the lowest loss on the validation set. An example of the nowcasts obtained by our proposed SAR-UNet model is shown in Fig. 3.

### _Cloud cover nowcasting_

We also examine our model on the French cloud cover dataset used in [21]. The images in this dataset are binary and of size 256x256; each pixel has the value 1 if there is a cloud and 0 if there is none. The images are recorded every 15 minutes. Following the lines of [21], we output six images (corresponding to the nowcasts for the next 1.5 hours) using four images (corresponding to the past 1 hour of data) as input. An example of the four input images is shown in Fig. 4.

## V Results and Discussion

### _Precipitation and cloud cover nowcasting_

This section presents the results obtained for the precipitation and cloud cover nowcasting tasks.
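The dataset handling described for the precipitation experiments (rainfall stored in hundredths of a millimeter, a minimum fraction of rainy pixels per image, and stacking the selected frames over the channel dimension) can be sketched as follows. This is a simplified illustration with names of our own choosing, not the released preprocessing code, and it conflates frame selection and stacking for brevity:

```python
import numpy as np

def select_and_stack(frames, min_rain_fraction=0.5):
    """Keep 288x288 frames where at least 50% of pixels are strictly positive,
    convert hundredths-of-a-millimeter values to mm, and stack over channels."""
    kept = [f / 100.0 for f in frames            # pixel value 1 -> 0.01 mm
            if np.mean(f > 0) >= min_rain_fraction]
    return np.stack(kept, axis=0)                # (channels, H, W)

rainy = np.ones((288, 288))   # every pixel has (a little) rain
dry = np.zeros((288, 288))    # no rain anywhere
batch = select_and_stack([rainy, dry, rainy])
print(batch.shape)  # (2, 288, 288): the dry frame is filtered out
```

A 6-, 12- or 18-frame sequence stacked this way corresponds to the 30-, 60- and 90-minute input configurations used in the experiments.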
The results obtained for the precipitation task are summarized in Table I, which shows that the SAR-UNet model is better than the other tested models in 13 of the 15 different setups when considering only the MSE. We emphasize this by plotting the average MSE (averaged over all minutes-ahead predictions) against the amount of input used for each model in Fig. 5. In this figure, we compare the performance of the examined models and visualize the superiority of the SAR-UNet model over the SmaAt-UNet and Persistence models. In addition, there does not seem to be a substantial increase or decrease in model performance when varying the amount of input data.

Fig. 3: Example of the nowcasting tasks, using 30 minutes of input data. The first column presents the 6 input images. The second column shows the ground truth for nowcasting from 30 to 180 minutes ahead. The proposed model's predictions are shown in the last column.

Table I also shows that nowcasting further into the future is a more challenging task: in general, the losses increase with the number of minutes ahead. For Precision, Recall and F1-score, the SAR-UNet is better than the SmaAt-UNet in about as many setups as the SmaAt-UNet is better than the SAR-UNet. Therefore, SAR-UNet and SmaAt-UNet are comparable in terms of these metrics. This result might be explained by the arbitrary choice of threshold used to binarize the prediction when calculating these metrics. The metrics obtained for the cloud cover nowcasting task are tabulated in Table II. Similar to the precipitation task, the SAR-UNet outperforms the SmaAt-UNet and Persistence models in most cases.

### _Activation heatmaps_

The heatmaps obtained using Grad-CAM bring additional clarity to how the SAR-UNet functions and which parts of the images are responsible for the prediction. In Fig. 6 and Fig. 7 we show the heatmaps of the network's activations for the precipitation nowcasting.
These heatmaps are obtained using the same data shown in Fig. 3. The figures contain heatmaps of the activations of different inner layers of the SAR-UNet to explain how they operate. Therefore, we output a heatmap for the whole Residual DSC blocks, as well as for each DSC sequence and Residual connection inside these blocks, and finally for all CBAMs in the encoder path of the model. We split the plots between the encoder part in Fig. 6 and the decoder part in Fig. 7, where the complete path followed by the data, from the very beginning to the very end, is shown. This provides additional insight into which part of the input image is most important at each layer and level of the model, and into how this information is combined to give a final prediction.

We start by analyzing the first column of the figures, representing the Residual DSC Blocks. The first three levels of the encoder in Fig. 6 seem to be focused on the areas where the last input image presents the highest precipitation amounts. In particular, at first only the areas with very high precipitation are activated in encoder depth 0, progressing to a larger activated area in encoder depth 2 corresponding to the areas with rain in the input. The deeper levels of the network, shown in Fig. 6 and 7, namely Encoder depths 3 and 4 and Decoder depth 3, appear to activate in more abstract areas, with a few dots being significantly activated, leaving the rest of the image inactivated. The final three levels (depths 2, 1 and 0) of the decoder in Fig. 7 are much more similar to the prediction made by the network. These layers smooth out the prediction so that areas vary gradually from little rain to substantial rain, giving a more realistic appearance to the prediction.

The activation from a Residual DSC block is generally similar to the ones from the DSC path and the Residual connection, as it is the sum of both. Nevertheless, it appears in Fig. 6 that the DSC path is closer to the output of the Residual DSC block, while the Residual connection is less important, with the first two depth levels being uniformly activated. On the other hand, in Fig. 7 it is the Residual connection that is activated similarly to the Residual DSC Block of the same row. Therefore, in the encoder and decoder parts, we can infer that the two paths of a Residual DSC Block switch roles and that their importance changes gradually. Finally, the fourth column in Fig. 6 presents the activations of the CBAMs in the encoder part of the network. We observe that the CBAMs activate almost exactly as the Residual DSC Blocks. The visible difference is in the intensity of the activation, especially near the points where the activation is at its maximum, i.e., the red and orange areas. The activation of the CBAM is less intense in these areas, leading to fewer red areas and more yellow and green ones.

Fig. 4: Example of the cloud cover data input. The time interval between each image is 15 minutes.

Fig. 5: The comparison of different metrics obtained by averaging over multi-step-ahead nowcasts using different amounts of data as input. (a) The average MSE. (b) The average Accuracy. (c) The average F1-score.

Fig. 6: Activation heatmaps obtained using Grad-CAM on all pixels predicted with rain after binarization. The first column represents the activation from an entire Residual DSC Block, while the second and third columns focus on parts of it, i.e., the activation of a DSC and the activation of the Residual connection, respectively. The activations of the CBAM are shown in the last column. Each row stands for a level (depth) of the encoder part of the SAR-UNet model. Thus, the first row is the first transformation of the input, while the bottom row is the last transformation of the encoder part.

Fig. 8 shows the activation heatmaps of Encoder depths 1 and 4, and Decoder depths 3 and 1, of the SAR-UNet for the cloud cover dataset.
We notice that Encoder depth 1, placed at the beginning of the network, is activated at the borders between cloud and non-cloud zones with high precision. Encoder depth 4 is also activated on these borders, but in wider patches, leading to larger red zones on the heatmap. In Decoder depth 2, we observe activation zones in the center of the cloudy areas of the image; it is focused on the inside of the cloud area delimited in the previous layers. Decoder depth 1 is one of the final layers of the network. Its activation heatmap is very different from the other heatmaps shown: this layer's activation covers the cloud areas of the image almost entirely. It is therefore a combination of the previous layers, with both the borders and the centers of the clouds activated.

## VI Conclusion

In this paper, a novel Small Attention Residual UNet (SAR-UNet) is proposed for weather-element nowcasting tasks. The model is based on a U-shaped convolutional network that combines Depthwise Separable Convolutions, Residual Connections and Convolutional Block Attention Modules to outperform its predecessor, the SmaAt-UNet. The proposed model is evaluated on two nowcasting tasks, i.e., precipitation and cloud cover nowcasting. The experimental results demonstrate that the SAR-UNet model outperforms the other tested models on the studied datasets. In order to shed light on the inner workings of the proposed SAR-UNet model, we have visualized the activation heatmaps for the nowcasts using the Grad-CAM technique. The implementation of our SAR-UNet model is available at GitHub1.

Footnote 1: [https://github.com/mathieureanult1/SAR-UNet](https://github.com/mathieureanult1/SAR-UNet)
2303.09550
Denominators of special values of zeta-functions count KU-local homotopy groups of mod p Moore spectra
In this note, for each odd prime $p$, we show that the orders of the $KU$-local homotopy groups of the mod $p$ Moore spectrum are equal to denominators of special values of certain quotients of Dedekind zeta-functions of totally real number fields. With this observation in hand, we give a cute topological proof of the Leopoldt conjecture for those number fields, by showing that it is a consequence of periodicity properties of $KU$-local stable homotopy groups.
A. Salch
2023-03-16T17:59:06Z
http://arxiv.org/abs/2303.09550v1
Denominators of special values of \(\zeta\)-functions count \(KU\)-local homotopy groups of mod \(p\) Moore spectra.

###### Abstract.

In this note, for each odd prime \(p\), we show that the orders of the \(KU\)-local homotopy groups of the mod \(p\) Moore spectrum are equal to denominators of special values of certain quotients of Dedekind zeta-functions of totally real number fields. With this observation in hand, we give a cute topological proof of the Leopoldt conjecture for those number fields, by showing that it is a consequence of periodicity properties of \(KU\)-local stable homotopy groups.

###### Contents

* 1 Introduction.
* 2 Review of \(KU\)-localization and the \(KU\)-local mod \(p\) Moore spectrum.
* 3 Review of some ideas from number theory.
  * 3.1 Review of Dirichlet characters and their \(L\)-functions.
  * 3.2 Review of Dedekind zeta-functions.
* 4 The \(L\)-function of the mod \(p\) Moore spectrum.
* 5 Consequences for the Leopoldt conjecture.
* 6 Appendix: a few entertaining numerical calculations.
  * 6.1 Computed examples of values of \(L(1-n,S/p)\).
  * 6.2 Some amusing probability arguments associated to homotopy groups.

## 1. Introduction.

This note combines a calculation in stable homotopy theory from the 1970s with some number-theoretic ideas of Leopoldt and Carlitz from the 1950s. In [1], J. F. Adams famously proved the following theorem:

**Theorem 1.1**.: **(Adams.)** _Let \(n\) be a positive integer, and let \(\operatorname{denom}(\zeta(-n))\) be the denominator of the rational number \(\zeta(-n)\) when written in reduced form.
Then the image of the Whitehead \(J\)-homomorphism_ \[\mathbb{Z}\cong\pi_{4n+3}(SO)\stackrel{{ J}}{{\longrightarrow}}\pi_{4n+3}^{S}(S^{0})\] _is a cyclic group of order equal to the denominator of \(\zeta(-2n-1)\), up to multiplication by a power of \(2\)._ Here \(\zeta\) is the Riemann zeta-function, \(\pi_{4n+3}(SO)\) is the \((4n+3)\)rd (unstable) homotopy group of the infinite special orthogonal group, and \(\pi_{4n+3}^{S}(S^{0})\) is the \((4n+3)\)rd stable homotopy group of the zero-sphere \(S^{0}\). Very closely related to the above result of Adams, one has Ravenel's computation (see [22], where it is mentioned that early versions of this computation were done by Adams and Baird): **Theorem 1.2**.: **(Ravenel.)** _Let \(KU\) be periodic complex \(K\)-theory, and let \(L_{KU}S\) be the Bousfield localization of the sphere spectrum \(S\) at \(KU\). Then, for all positive integers \(n\), the order \(\#(\pi_{2n}(L_{KU}S))\) of the \(2n\)th stable homotopy group \(\pi_{2n}(L_{KU}S)\) is a power of \(2\), and \(\#(\pi_{2n-1}(L_{KU}S))=\operatorname{denom}(\zeta(1-n))\) up to multiplication by a power of \(2\)._ In Theorem 1.2 we are adopting the convention that the denominator of the rational number \(0\) is \(1\). (This matters since \(\zeta(-n)=0\) for all even positive integers \(n\).) To date, Theorem 1.2 is unique in the literature, as the only description of the orders of the homotopy groups of a Bousfield-localized finite spectrum in terms of special values of \(L\)-functions. The purpose of this note is to give an infinite collection of new examples of this phenomenon, by extending Theorem 1.2 to a family of spectra other than the sphere spectrum: we prove that the orders of the groups \(\pi_{*}(L_{KU}S/p)\), where \(S/p\) is the mod \(p\) Moore spectrum (i.e., the homotopy cofiber of the degree \(p\) map \(S\to S\)), are also denominators of special values of a natural \(L\)-function, when \(p>2\).
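For a quick numerical look at the denominators appearing in Theorems 1.1 and 1.2, recall the standard fact that \(\zeta(-n)=-B_{n+1}/(n+1)\) for integers \(n\geq 1\), where \(B_{m}\) is the \(m\)th Bernoulli number. The following standard-library Python snippet (helper names are ours) computes these special values exactly:

```python
from fractions import Fraction
from math import comb

def bernoulli(m):
    """Bernoulli numbers B_m via the recurrence sum_{j<=m} C(m+1, j) B_j = 0,
    with B_0 = 1 (so B_1 = -1/2), computed in exact rational arithmetic."""
    B = [Fraction(1)]
    for k in range(1, m + 1):
        B.append(-sum(comb(k + 1, j) * B[j] for j in range(k)) / (k + 1))
    return B[m]

def zeta_neg(n):
    """zeta(-n) = -B_{n+1} / (n+1) for integers n >= 1."""
    return -bernoulli(n + 1) / (n + 1)

print([zeta_neg(n) for n in (1, 3)])  # [Fraction(-1, 12), Fraction(1, 120)]
print([zeta_neg(n).denominator for n in (1, 3, 5, 7)])  # [12, 120, 252, 240]
print(zeta_neg(2))  # 0 (zeta vanishes at negative even integers)
```

The denominators 12, 120, 252, 240 are, up to the power of 2 allowed by Theorem 1.1, the orders 24, 240, 504, 480 of the image of \(J\) in \(\pi_{3},\pi_{7},\pi_{11},\pi_{15}\).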
The work involved is not difficult, once one pinpoints what the correct \(L\)-function should be; the hard part in this project was simply finding that \(L\)-function to begin with. The main result in this note (proven in Proposition 4.4, Theorem 4.8, and Corollary 4.9) is as follows: **Theorem 1.3**.: _Let \(p\) be an odd prime. Then for all positive integers \(n\), if we agree to write \(\operatorname{denom}(x)\) for the denominator of a rational number \(x\) when written in reduced form, we have equalities_ \[\operatorname{denom}\left(\frac{\zeta_{F}(1-n)}{\zeta(1-n)}\right)=\#(\pi_{2n }(L_{KU}S/p))=\#(\pi_{2n-1}(L_{KU}S/p)), \tag{1.1}\] _where \(L_{KU}S/p\) is the Bousfield localization of the mod \(p\) Moore spectrum \(S/p\) at periodic complex \(K\)-theory \(KU\), and where \(F/\mathbb{Q}\) is the (unique) minimal subextension of \(\mathbb{Q}(\zeta_{p^{2}})/\mathbb{Q}\) in which \(p\) is wildly ramified._ In Theorem 5.3, we prove the Leopoldt conjecture, at the prime \(p\), for each of the number fields \(F\) described in the statement of Theorem 1.3. These number fields are all abelian, so the Leopoldt conjecture was already proven for them, by the work of Baker and Brumer [4]. The proof we offer for Theorem 5.3 has the curious feature that it deduces the relevant cases of the Leopoldt conjecture from Colmez's \(p\)-adic class number formula, and Theorem 1.3, and \(v_{1}\)-periodicity in stable homotopy groups. One naturally wants to know if other, perhaps nonabelian, cases of the Leopoldt conjecture can be verified using a similar topological approach. We describe the general approach in Observation 5.4: **Observation**.: _If \(F\) is a totally real number field and if we have integers \(j,k\) and spectra \(E,X\) such that_ 1. _the order of_ \(\pi_{2(p^{k}-1)p^{n}-1}(L_{E}X)\) _is equal to the denominator of the rational number_ \(\frac{\zeta_{F}(1-p^{n}(p^{k}-1))}{\zeta(1-p^{n}(p^{k}-1))}\)_,_ 2. 
\(X\) _admits a self-map_ \(\Sigma^{(2p^{k}-2)j}X\to X\) _which induces an isomorphism in_ \(E_{*}\)_-homology, and_ 3. \(\pi_{-1}(L_{E}X)\) _is finite,_ _then the Leopoldt conjecture holds for \(F\) at the prime \(p\)._ The periodicity theorem of Hopkins-Smith (see [13], or Theorem 1.5.4 of [23]) gives an ample supply of spectra \(X\) satisfying the second of the three conditions; see Remark 5.5 for some discussion. In the appendix, section 6, we give some examples of computed special values (including numerators) of \(\zeta_{F}(1-n)/\zeta(1-n)\), and an amusing relationship between the orders of homotopy groups of \(L_{KU}S/p\) and the probability that certain "random" collections of integers satisfy certain coprimality conditions. **Remark 1.4**.: There are various known and conjectured relationships between orders of algebraic \(K\)-groups of number rings, and special values of Dedekind zeta-functions, such as Lichtenbaum's conjecture, from [18]. Since the algebraic \(K\)-group \(K_{i}(R)\) of a ring \(R\) is the homotopy group \(\pi_{i}(\mathcal{K}(R))\) of the algebraic \(K\)-theory spectrum \(\mathcal{K}(R)\), one naturally wants to know how Theorem 1.3 fits with algebraic \(K\)-theory. The answer is this: it follows from Thomason's identification (in [26]) of \(p\)-complete \(\pi_{n}\left((L_{KU}\mathcal{K}(R))\hat{\ \ }_{p}\right)\) with \(p\)-complete etale \(K\)-theory \(K_{n}^{\mathrm{et}}(R)\hat{\ \ }_{p}\), for \(n\gg 0\), together with Quillen's calculation of the \(K\)-groups of finite fields in [21], that the \(p\)-completion of \(L_{KU}S\) is homotopy-equivalent to the \(p\)-completion of \(L_{KU}\mathcal{K}(\mathbb{F}_{\ell})\) for any prime \(\ell\) which is a primitive root modulo \(p^{2}\). So, with some effort, one can rewrite the homotopy groups appearing in Ravenel's theorem reproduced above as Theorem 1.2 as algebraic \(K\)-groups. 
But \(L_{KU}S/p\) is not homotopy-equivalent to the algebraic \(K\)-theory spectrum of any finite field, or any number ring, or any number field, even after \(p\)-completion, even after restricting attention to homotopy groups in degrees \(\gg 0\). So the results of this note do not seem to fit cleanly into any known or conjectured relationships between algebraic \(K\)-groups and special values. **History and status of this paper.** I wrote most of this material in 2016, but never publicly posted it, because I had the sense that there ought to be a more compelling, deeper, and more generalizable way to prove the same results. It took a few years for me to learn enough Iwasawa theory to find that _better_ proof of these results, a proof that generalizes far beyond the mod \(p\) Moore spectra, for example. But perhaps there is some value in making this note publicly available, since I think the ideas are quite interesting, and they are presented here in a way that doesn't require the reader to make an investment in learning Iwasawa theory, and because some versions of this note were privately circulated and I have been asked about it by several people. So I hope the reader will forgive me for presenting in this note only a precursor of what I think must be the really _effective_ techniques for relating orders of stable homotopy groups to special values of \(\zeta\)-functions. Below, in Remark 1.5, I sketch how to prove the main result of this note using those more effective (Iwasawa-theoretic) techniques. The author wants to emphasize that the proofs in this document are all pretty easy; the hard work involved in this project was finding the correct function \(L(s,S/p)=\zeta_{F}(s)/\zeta(s)\) and fields \(F\). 
The homotopy groups \(\pi_{*}(L_{KU}S/p)\) for \(p>2\) are very simple (namely, \(\pi_{n}(L_{KU}S/p)\) is isomorphic to \(\mathbb{Z}/p\mathbb{Z}\) if \(n\) is congruent to \(0\) or \(-1\) modulo \(2p-2\), and is trivial otherwise), but "handcrafting" an \(L\)-function to have rational special values with specified denominators at negative integers is a nontrivial task: "most" \(L\)-functions (in the usual families: Dedekind, Hasse-Weil, Artin...) with rational special values at negative integers typically _vanish_ at negative integers, and of those which do not vanish, most are _integral_ at negative integers, and of those which have nonzero noninteger rational special values, most seem to follow the same pattern of denominators as the Riemann zeta-function. Finding \(\zeta_{F}(s)/\zeta(s)\), and the particular number fields \(F\) described in Theorem 1.3, took the author some work; but once you have the right idea for \(F\) and the idea to study \(\zeta_{F}(s)/\zeta(s)\), the pieces fall into place using established methods. **Remark 1.5**.: The argument we present for (1.1) in this note is simply that one computes the denominators of \(\operatorname{denom}\left(\frac{\zeta_{F}(1-n)}{\zeta(1-n)}\right)\), one compares it to the (already computed) order of \(\pi_{2n}(L_{KU}S/p)\) and of \(\pi_{2n-1}(L_{KU}S/p)\), and one sees that they are equal. 
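The \((p-1)\)-periodic pattern just described is easy to tabulate. The following snippet (an illustrative helper of our own, not from the paper) encodes the orders \(\#\pi_{n}(L_{KU}S/p)\) and the degrees in which they are nontrivial:

```python
def order_pi(n, p):
    """Order of pi_n(L_KU S/p) for odd p: p if n = 0 or -1 (mod 2p-2), else 1."""
    return p if n % (2 * p - 2) in (0, 2 * p - 3) else 1

p = 5  # one full period of length 2p - 2 = 8
print([order_pi(n, p) for n in range(8)])  # [5, 1, 1, 1, 1, 1, 1, 5]

# pi_{2n} and pi_{2n-1} simultaneously have order p exactly when (p-1) | n:
print([n for n in range(1, 13) if order_pi(2 * n, p) == p])  # [4, 8, 12]
```

This is the \((p-1)\)-periodic pattern of denominators that the quotient \(\zeta_{F}(s)/\zeta(s)\) of Theorem 1.3 must reproduce at the negative integers.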
So it is important to ask: _is equation (1.1) just a coincidence?_ I think the really compelling argument that it _isn't_ a coincidence comes from an Iwasawa-theoretic proof of (1.1), which proceeds by _not_ computing both sides of the equation (1.1), but instead by showing that the input for the descent spectral sequence (described below, in (2.2)) computing \(\pi_{*}(L_{KU}S/p)\) is the cohomology of a certain unit group Iwasawa module, whose cohomology groups are also (by the totally real case of the Iwasawa main conjecture, as in [28]) the denominators of the special values of \(\zeta_{F}(s)/\zeta(s)\) at negative integers; then the vanishing of the differentials in the spectral sequence gives equality (1.1). That Iwasawa-theoretic argument is beyond the scope of this note, and I will have to present it elsewhere. Since that Iwasawa-theoretic argument requires much more knowledge of algebraic number theory than the more classical, Dirichlet-character-theoretic approach in this note, I believe this note will be far more readable to an audience of topologists than a paper which describes the more general and powerful Iwasawa-theoretic approach. I have tried to make this note readable for number theorists, but I think this note will still be most accessible to a reader who is, like the author, trained in homotopy theory but not in number theory. A "crash course" in the necessary results from number theory can be found in section 3, and a briefer crash course on the relevant topological results in section 2. It is a pleasure to thank R. Bruner for many fruitful conversations relating to this material, D. Ravenel for support and inspiration in studying connections between special values and orders of homotopy groups, and an anonymous referee for helpful comments.
The computer algebra packages MAGMA and SAGE were also indispensable in making large-scale systematic calculations of special values that led me, eventually, to zero in on the correct families of \(L\)-functions and finally the correct definition of \(L(s,S/p)\).

**Conventions 1.6**.: Throughout, we write \(S\) for the sphere spectrum, \(\zeta\) for the Riemann zeta-function, and \(\nu_{p}(x)\) for the \(p\)-adic valuation of a number \(x\).

## 2. Review of \(KU\)-localization and the \(KU\)-local mod \(p\) Moore spectrum.

This section explains some well-known ideas from stable homotopy theory which we will use. An excellent reference for this material is [22].

**Definition 2.1**.: _Fix a spectrum \(E\)._

* _We say that a map of spectra_ \(f:X\to Y\) _is an_ \(E\)-local equivalence _if_ \(E\wedge f\) _is a weak equivalence. In other words:_ \(f\) _is an_ \(E\)_-local equivalence if and only if_ \(f\) _induces an isomorphism_ \(E_{\ast}(X)\to E_{\ast}(Y)\)_._
* _We say that a spectrum_ \(X\) _is_ \(E\)-acyclic _if_ \(E\wedge X\) _is contractible._
* _We say that a spectrum_ \(X\) _is_ \(E\)-local _if, for each_ \(E\)_-acyclic spectrum_ \(Y\)_, every map of spectra_ \(Y\to X\) _is null homotopic._
* _We say that a map of spectra_ \(f:X\to L_{E}X\) _is_ the \(E\)-localization map on \(X\)_, and we call_ \(L_{E}X\) _the_ Bousfield \(E\)-localization of \(X\)_, if_ \(L_{E}X\) _is_ \(E\)_-local and_ \(f\) _is an_ \(E\)_-local weak equivalence._

Uniqueness (up to weak equivalence) of the Bousfield \(E\)-localization of \(X\) is not difficult to see. More difficult is the theorem of Bousfield that the Bousfield \(E\)-localization of \(X\) exists, for all \(E\) and all \(X\).1

Footnote 1: The approach to Bousfield localization we have presented here is close to the original 1970s approach, as summarized in [22].
There are later approaches as well: if we work with a model category of spectra which satisfies appropriate set-theoretic conditions, then there exists a “coarser” model structure on that same underlying category of spectra, whose cofibrations are the same, and whose “coarse” weak equivalences are precisely the \(E\)-local weak equivalences. Fibrant replacement in this “coarse” model structure is Bousfield \(E\)-localization. The book [12] is a very good reference for this elegant approach. Bousfield localization is, among other things, an analogue (for spectra) of the familiar notion of localization in commutative algebra: if \(H\mathbb{Z}_{(p)}\) is the Eilenberg-Mac Lane spectrum of the \(p\)-local integers (uniquely determined by the property that \(\pi_{0}(H\mathbb{Z}_{(p)})\cong\mathbb{Z}_{(p)}\) and \(\pi_{n}(H\mathbb{Z}_{(p)})\cong 0\) for all \(n\neq 0\)), then \(\pi_{\ast}\left(L_{H\mathbb{Z}_{(p)}}X\right)\cong\pi_{\ast}(X)_{(p)}\) for all spectra \(X\) whose homotopy groups are bounded below, and \(\pi_{\ast}\left(L_{L_{H\mathbb{Z}_{(p)}}S}X\right)\cong\pi_{\ast}(X)_{(p)}\) for all spectra \(X\) (without any bound required on homotopy groups). So _some_ Bousfield localizations (like the ones just described, which simply \(p\)-localize the homotopy groups) have a predictable effect on the homotopy groups of spectra. However, Bousfield localization \(L_{E}\) typically has a much more subtle effect on homotopy groups when \(E\) is a spectrum which admits a homotopy equivalence \(\Sigma^{n}E\xrightarrow{\cong}E\) for some \(n>0\). The effect of such Bousfield localizations on homotopy groups is at the core of the approach to stable homotopy groups of spheres via periodic phenomena in the chromatic tower and/or the Adams-Novikov spectral sequence; see [23] for a survey. Let's consider the simplest case, the case where \(E\) is \(KU\), the periodic complex \(K\)-theory spectrum. 
Here is a very well-known and classical computation, dating back to at least the earlier circulated versions of [22]: **Theorem 2.2**.: _Let \(p\) be an odd prime, and let \(S/p\) be the mod \(p\) Moore spectrum. Then there is an isomorphism of graded abelian groups2_ Footnote 2: This is also an isomorphism of graded rings, but we do not give a proof of that additional fact, because we do not work with multiplicative structure in this note. \[\pi_{\ast}(L_{KU}S/p)\cong E(\alpha_{1})\otimes_{\mathbb{F}_{p}}\mathbb{F}_{p }[v_{1}^{\pm 1}],\] _where \(E(\alpha_{1})\) is an exterior \(\mathbb{F}_{p}\)-algebra on a single generator \(\alpha_{1}\) in degree \(2p-3\), and \(v_{1}\) is in degree \(2p-2\)._ _Consequently \(\pi_{n}(L_{KU}S/p)\cong\mathbb{F}_{p}\) if \(n\) is congruent to \(0\) or \(-1\) modulo \(2p-2\), and \(\pi_{n}(L_{KU}S/p)\cong 0\) otherwise._ **Sketch of proof.** Here is one way (popularized by [9], where the ideas are generalized to formal groups of higher heights) to prove this result, which uses the spectral sequence \[H^{*}_{c}(\operatorname{Aut}(\mathbb{G}_{1}),E(\mathbb{G}_{1})_{*}(X))\Rightarrow \pi_{*}(L_{K(1)}X) \tag{2.2}\] of [9]. (Here \(H^{*}_{c}\) denotes profinite group cohomology, \(\mathbb{G}_{1}\) is a formal group over \(\mathbb{F}_{p}\) of height \(1\), \(E(\mathbb{G}_{1})\) is its associated Morava/Lubin-Tate \(E\)-theory spectrum, and \(K(1)\) is the first Morava \(K\)-theory at the prime \(p\).) It is classical (see e.g. [19]) that the profinite automorphism group \(\operatorname{Aut}(\mathbb{G}_{1})\) is isomorphic to the \(p\)-adic unit group \(\hat{\mathbb{Z}}_{p}^{\times}\), and that (see e.g. [8]) \(E(\mathbb{G}_{1})_{*}\) is isomorphic to \(\hat{\mathbb{Z}}_{p}[w^{\pm 1}]\) with \(w\) in degree \(-2\), with \(\operatorname{Aut}(\mathbb{G}_{1})\) acting on \(\hat{\mathbb{Z}}_{p}\{w^{n}\}\) by the \(n\)th power of the cyclotomic character, i.e., \(u\cdot w^{n}\) is defined as the product \(u^{n}w^{n}\). 
Consequently \(E(\mathbb{G}_{1})_{n}(S/p)\) vanishes if \(n\) is odd, and is isomorphic to \(\mathbb{Z}/p\mathbb{Z}\) with \(\operatorname{Aut}(\mathbb{G}_{1})\cong\hat{\mathbb{Z}}_{p}^{\times}\) acting transitively if \(n\) is even but not divisible by \(2p-2\), and acting trivially if \(n\) is divisible by \(2p-2\). Easy Lyndon-Hochschild-Serre spectral sequence arguments then show that \(H^{*}_{c}(\operatorname{Aut}(\mathbb{G}_{1});E(\mathbb{G}_{1})_{n}(S/p))\) vanishes unless \(n\) is divisible by \(2p-2\), and \[H^{*}_{c}(\operatorname{Aut}(\mathbb{G}_{1});E(\mathbb{G}_{1})_{n}(S/p))\cong H ^{*}_{c}(1+p\hat{\mathbb{Z}}_{p};\mathbb{F}_{p})\] if \(n\) is divisible by \(2p-2\). Here \(1+p\hat{\mathbb{Z}}_{p}\) is the subgroup of \(\hat{\mathbb{Z}}_{p}^{\times}\) consisting of units congruent to \(1\) modulo \(p\). For \(p>2\), convergence of the \(p\)-adic exponential map yields an isomorphism of profinite groups \(p\hat{\mathbb{Z}}_{p}\xrightarrow{\cong}1+p\hat{\mathbb{Z}}_{p}\), hence \[H^{j}_{c}(1+p\hat{\mathbb{Z}}_{p};\mathbb{F}_{p})\cong\operatorname{colim}_{m} H^{j}_{c}(\mathbb{Z}/p^{m}\mathbb{Z};\mathbb{F}_{p})\] is isomorphic to \(\mathbb{F}_{p}\) if \(j=0\) or \(j=1\), and vanishes otherwise. There is no room for differentials in spectral sequence (2.2), so we get an isomorphism of graded abelian groups \[\pi_{*}(L_{K(1)}S/p)\cong E(\alpha_{1})\otimes_{\mathbb{F}_{p}}\mathbb{F}_{p}[ v_{1}^{\pm 1}].\] with the degrees of \(\alpha_{1}\) and \(v_{1}\) as stated. Now since \(S/p\) is already \(S_{(p)}\)-local, the \(KU\)-localization of \(S/p\) coincides with the \(KU_{(p)}\)-localization of \(S/p\). The well-known splitting \(KU_{(p)}\simeq\coprod_{j=0}^{p-2}\Sigma^{2j}E(1)\), where \(E(1)\) is the \(p\)-local height \(1\) Johnson-Wilson spectrum, establishes that \(L_{KU_{(p)}}\) agrees with \(L_{E(1)}\). 
The well-known homotopy pullback square3

Footnote 3: The existence of this homotopy pullback square seems to have been known since at least the 1980s, but as far as I know, there is no clear person or paper to whom the result is attributed. A nice modern writeup appears in Bauer’s chapter [3] in the book [10].

\[\begin{array}{ccc}L_{E(1)}X&\longrightarrow&L_{K(1)}X\\ \downarrow&&\downarrow\\ L_{E(0)}X&\longrightarrow&L_{E(0)}L_{K(1)}X,\end{array}\]

in the case \(X=S/p\), then yields a weak equivalence \(L_{E(1)}X\simeq L_{K(1)}X\), since \(E(0)\)-localization coincides with rationalization and so \(L_{E(0)}S/p\) and \(L_{E(0)}L_{K(1)}S/p\) are both contractible. So \[L_{KU}S/p\simeq L_{KU_{(p)}}S/p\simeq L_{E(1)}S/p\simeq L_{K(1)}S/p\] has homotopy groups as stated.

So the homotopy groups of \(L_{KU}S/p\), for \(p\) odd, are of a very simple form: \(\pi_{n}(L_{KU}S/p)\) has order \(p\) if \(n\) is congruent to \(0\) or \(-1\) modulo \(2p-2\), and \(\pi_{n}(L_{KU}S/p)\) is trivial otherwise. To describe these groups in terms of special values of an \(L\)-function, as the work of Adams and Ravenel did (away from \(2\)) for \(L_{KU}S\) as described in Theorem 1.2, we need to find, for each odd prime, an \(L\)-function whose special values at negative integers are rational numbers whose denominators follow this same \((p-1)\)-periodic pattern. We accomplish this in section 4.

## 3. Review of some ideas from number theory.

### Review of Dirichlet characters and their \(L\)-functions

This section explains some well-known ideas from number theory which we will use. Excellent textbook references for this material include [2] and [20]. The definition of a Dirichlet \(L\)-series and Dirichlet characters is classical:

**Definition 3.1**.: _The Dirichlet \(L\)-series of a function \(\chi:\mathbb{N}\to\mathbb{C}\) is the series_ \[\sum_{n\geq 1}\frac{\chi(n)}{n^{s}}.
\tag{3.3}\] _If \(s\) is a complex number such that the series (3.3) converges, then we write \(L(s,\chi)\) for the number that the series converges to4._ Footnote 4: To be clear: for many functions \(\chi\) of number-theoretic interest, the function \(L(s,\chi)\) is meromorphic on some part of the complex plane, and admits a unique analytic continuation to a meromorphic function on a _larger_ part of the complex plane. That analytic continuation is still called \(L(s,\chi)\) for all \(s\) in its domain, even though \(L(s,\chi)\) only agrees with the series \(\sum_{n\geq 1}\frac{\chi(n)}{n^{s}}\) for all complex \(s\) such that the series \(\sum_{n\geq 1}\frac{\chi(n)}{n^{s}}\) actually converges. For example: as we explain below, when \(\chi\) is a Dirichlet character, the series \(\sum_{n\geq 1}\frac{\chi(n)}{n^{s}}\) converges for all complex \(s\) with \(\Re(s)>1\), but it analytically continues to a meromorphic function on all of \(\mathbb{C}\), and we write \(L(-1,\chi)\) for that value of that meromorphic function at \(s=-1\), even when the series \(\sum_{n\geq 1}\frac{\chi(n)}{n^{s}}\) fails to converge when \(s=-1\). _Given two functions \(\chi_{1},\chi_{2}:\mathbb{N}\to\mathbb{C}\), we define their Dirichlet convolution as the function \(\chi_{1}*\chi_{2}:\mathbb{N}\to\mathbb{C}\) given by_ \[(\chi_{1}*\chi_{2})(n)=\sum_{d|n}\chi_{1}(d)\chi_{2}(\frac{n}{d}),\] _so that \(L(s,\chi_{1}*\chi_{2})=L(s,\chi_{1})L(s,\chi_{2}).\) (See Theorem 11.5 of [2] for a proof.)_ **Definition 3.2**.: _Let \(f\) be a positive integer. 
A Dirichlet character of modulus \(f\) is a function \(\chi:\mathbb{Z}\to\mathbb{C}\) satisfying the axioms:_ * \(\chi(1)=1\)_,_ * \(\chi(n+f)=\chi(n)\) _for all_ \(n\in\mathbb{Z}\)_,_ * \(\chi(mn)=\chi(m)\chi(n)\) _for all_ \(m,n\)_, and_ * \(\chi(n)=0\) _if_ \(\gcd(n,f)\neq 1\)_._ _A Dirichlet character is a Dirichlet character of modulus \(f\) for some \(f\)._ _The Dirichlet character \(\chi_{0}\) of modulus \(f\) such that \(\chi_{0}(n)=1\) for all \(n\) coprime to \(f\) is called the principal Dirichlet character of modulus \(f\)._ _The Dirichlet \(L\)-function of a Dirichlet character \(\chi\) is the Dirichlet \(L\)-series \(\sum_{n\geq 1}\frac{\chi(n)}{n^{s}},\) which converges to a complex number \(L(s,\chi)\) for all complex numbers \(s\) with real part \(>1\)._ For example, if \(\chi_{0}\) is the (unique) character of modulus \(1\), i.e., \(\chi_{0}(n)=1\) for all \(n\), then \(L(s,\chi_{0})=\zeta(s)\), the Riemann zeta-function. The Dirichlet characters of modulus \(f\) form a group \(\operatorname{Dir}(f)\) under pointwise multiplication; this group has order \(\phi(f)\), and is cyclic if \(f\) is a power of an odd prime. The Dirichlet characters of modulus \(f\) do _not_ form a group under Dirichlet convolution (see Definition 3.1), since the Dirichlet convolution of two Dirichlet characters is not necessarily a Dirichlet character. **Definition 3.3**.: _Let \(\chi\) be a Dirichlet character of modulus \(f\). A divisor \(d\) of \(f\) is called an induced modulus for \(\chi\) if \(\chi(n)=1\) for all \(n\) relatively prime to \(f\) such that \(n\equiv 1\) modulo \(d\)._ _A Dirichlet character \(\chi\) of modulus \(f\) is called primitive if the smallest induced modulus of \(\chi\) is \(f\) itself._ Definition 3.4 originally appeared in [16]. **Definition 3.4**.: **(Generalized Bernoulli numbers.)** _Let \(\chi:\mathbb{Z}\to\mathbb{C}\) be a Dirichlet character of modulus \(f\). 
Let the sequence of numbers_ \[B_{1}^{\chi},B_{2}^{\chi},B_{3}^{\chi},\cdots\in\mathbb{Q}(\zeta_{\phi(f)})\subseteq\mathbb{C}\] _be defined as the Maclaurin coefficients of \(\sum_{r=1}^{f}\chi(r)\frac{te^{rt}}{e^{ft}-1}\), so that_ \[\sum_{r=1}^{f}\chi(r)\frac{te^{rt}}{e^{ft}-1}=\sum_{n\geq 0}B_{n}^{\chi}\frac{t^{n}}{n!}.\] The Euler product of \(L(s,\chi)\) is classical: \[L(s,\chi)=\prod_{\text{primes }p}\frac{1}{1-\chi(p)p^{-s}} \tag{3.4}\] for all complex numbers \(s\) with \(\Re(s)>1\). See e.g. Theorem VII.2.9 of [20] for Theorem 3.5: **Theorem 3.5**.: _The Dirichlet \(L\)-series \(\sum_{n\geq 1}\frac{\chi(n)}{n^{s}}\) admits a unique analytic continuation to a meromorphic function \(L(s,\chi)\) on the complex plane and satisfies a functional equation; its values at nonpositive integers are given by_ \[L(1-n,\chi)=\frac{-B_{n}^{\chi}}{n}\] _for positive integers \(n\)._ _Specifically, if \(\chi\) is a primitive Dirichlet character of modulus \(f\), then_ \[L(1-s,\chi)=\frac{f^{s-1}\Gamma(s)}{(2\pi)^{s}}\left(e^{-\pi is/2}+\chi(-1)e^{\pi is/2}\right)G(1,\chi)L(s,\overline{\chi}), \tag{3.5}\] _where \(\Gamma\) is the classical gamma-function (so \(\Gamma(n)=(n-1)!\) for positive integers \(n\)), \(\overline{\chi}\) is the complex-conjugate Dirichlet character of \(\chi\) (so \(\overline{\chi}(n)=\overline{\chi(n)}\)), and \(G(1,\chi)\) is the Gauss sum \(\sum_{r=1}^{f}\chi(r)e^{2\pi ir/f}\)._ See e.g. Theorem 12.11 of [2] for equation (3.5). It is also classical that \(L(s,\chi)\) is an entire function on the complex plane, if \(\chi\) is nonprincipal; see e.g. Theorem 12.5 of [2]. **Observation 3.6**.: If \(\chi\) is a nonprincipal Dirichlet character with \(\chi(-1)=1\), then it is an easy exercise to show that the function \(F(t)=\sum_{r=1}^{f}\chi(r)\frac{te^{rt}}{e^{ft}-1}\) satisfies \(F(t)=F(-t)\), and hence that \(B_{n}^{\chi}=0\) for all odd positive integers \(n\), hence that \(L(1-n,\chi)=0\) for all odd positive integers \(n\). 
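For computations, the generalized Bernoulli numbers of Definition 3.4 are conveniently evaluated through the standard closed form \(B_{n}^{\chi}=f^{n-1}\sum_{a=1}^{f}\chi(a)B_{n}(a/f)\), where \(B_{n}(x)\) is the \(n\)th Bernoulli polynomial; this identity follows from the generating function in Definition 3.4. The following sketch (ours, for illustration; it is not part of the argument) checks Theorem 3.5 and Observation 3.6 in two classical cases, the odd quadratic character of modulus \(3\) and the even quadratic character of modulus \(5\):

```python
from sympy import Rational, bernoulli

def gen_bernoulli(chi, f, n):
    # standard closed form: B_n^chi = f^(n-1) * sum_{a=1}^{f} chi(a) * B_n(a/f),
    # where bernoulli(n, x) is the nth Bernoulli polynomial B_n(x)
    return Rational(f) ** (n - 1) * sum(
        chi(a) * bernoulli(n, Rational(a, f)) for a in range(1, f + 1))

# odd quadratic character of modulus 3: chi(1) = 1, chi(2) = -1
chi3 = lambda a: [0, 1, -1][a % 3]
# even quadratic character of modulus 5: chi(1) = chi(4) = 1, chi(2) = chi(3) = -1
chi5 = lambda a: [0, 1, -1, -1, 1][a % 5]

print(-gen_bernoulli(chi3, 3, 1))      # L(0, chi3)  = -B_1^chi   = 1/3
print(gen_bernoulli(chi5, 5, 1))       # B_1^chi5 = 0: chi5 is even (Observation 3.6)
print(-gen_bernoulli(chi5, 5, 2) / 2)  # L(-1, chi5) = -B_2^chi/2 = -2/5
```

The value \(-2/5\) for \(L(-1,\chi_{5})\) is consistent with the classical evaluation \(\zeta_{\mathbb{Q}(\sqrt{5})}(-1)=\zeta(-1)L(-1,\chi_{5})=1/30\).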
Since a Dirichlet character \(\chi\) of modulus \(f\) takes values in the \(\phi(f)\)th roots of unity, the numbers \(B_{n}^{\chi}\) and \(L(1-n,\chi)\) lie in the number field \(\mathbb{Q}(\zeta_{\phi(f)})\). Here is a theorem of Carlitz (see [5] for full proofs, or [6] for a shorter version) on how far these numbers are from being in the _ring of integers_ of that number field: **Theorem 3.7**.: _Let \(\chi\) be a primitive Dirichlet character of modulus \(f\)._ * _If_ \(f\) _is not a prime power, then_ \(\frac{B_{n}^{\chi}}{n}\) _is an algebraic integer for all_ \(n\)_._ * _If_ \(f=p\) _for some prime_ \(p>2\)_, then let_ \(g\) _be a primitive root modulo_ \(p^{r}\) _for all_ \(r\)_, i.e.,_ \(g\in\mathbb{N}\) _is a topological generator of the group_ \(\hat{\mathbb{Z}}_{p}^{\times}\) _of_ \(p\)_-adic units. The number_ \(\frac{B_{n}^{\chi}}{n}\) _is an algebraic integer unless_ \((p,1-\chi(g))\neq(1)\)_, in which case_ \(pB_{n}^{\chi}\equiv p-1\) _modulo_ \((p,1-\chi(g))^{n+1}\)_._ * _If_ \(f=p^{\mu}\) _for some prime_ \(p>2\) _and some integer_ \(\mu>1\)_, then let_ \(g\) _be a primitive root modulo_ \(p^{r}\) _for all_ \(r\)_, i.e.,_ \(g\in\mathbb{N}\) _is a topological generator of the group_ \(\hat{\mathbb{Z}}_{p}^{\times}\) _of_ \(p\)_-adic units. The number_ \(\frac{B_{n}^{\chi}}{n}\) _is an algebraic integer unless_ \((p,1-\chi(g)g^{n})\neq(1)\)_, in which case_ \((1-\chi(1+p))\frac{B_{n}^{\chi}}{n}\equiv 1\) _modulo_ \((p,1-\chi(g)g^{n})\)_._ * _When_ \(f\) _is a power of a prime number_ \(p\)_, then for all positive integers_ \(n\)_,_ \(\frac{B_{n}^{\chi}}{n}\in\mathcal{O}_{\mathbb{Q}(\zeta_{\phi(f)})}[1/p]\)_. That is,_ \(\frac{B_{n}^{\chi}}{n}p^{a}\) _is an algebraic integer for some (sufficiently large) positive integer_ \(a\)_._ ### Review of Dedekind zeta-functions This material is classical; see e.g. chapter 3 of [27]. **Definition 3.8**.: _Let \(F/\mathbb{Q}\) be a finite field extension with ring of integers \(\mathcal{O}_{F}\). 
Then the Dedekind zeta-function of \(F\) is defined as the series_ \[\sum_{I\subseteq\mathcal{O}_{F}}\frac{1}{\left(\#\mathcal{O}_{F}/I\right)^{s}},\] _where the sum is taken over all nonzero ideals \(I\) of \(\mathcal{O}_{F}\), and \(\#\mathcal{O}_{F}/I\) is the number of elements in the residue ring \(\mathcal{O}_{F}/I\). This series converges for complex numbers \(s\) with real part \(\Re(s)>1\), and uniquely analytically continues to a meromorphic function \(\zeta_{F}(s)\) on the complex plane._ The function \(\zeta_{F}\) has the Euler product \[\zeta_{F}(s)=\prod_{\mathfrak{p}\subseteq\mathcal{O}_{F}}\frac{1}{1-\#\left(\mathcal{O}_{F}/\mathfrak{p}\right)^{-s}}\] for \(\Re(s)>1\), where the product is taken over all nonzero prime ideals \(\mathfrak{p}\) of \(\mathcal{O}_{F}\). **Definition 3.9**.: _Let \(f\) be a positive integer and let \(A\) be a subgroup of the group \(\operatorname{Dir}(f)\) of Dirichlet characters of modulus \(f\). Let \(\ker A\) denote the subgroup of \((\mathbb{Z}/f\mathbb{Z})^{\times}\) consisting of those residue classes \(x\) such that \(\chi(x)=1\) for all \(\chi\in A\). Finally, let \(G\) denote the subgroup of \(\operatorname{Gal}(\mathbb{Q}(\zeta_{f})/\mathbb{Q})\) corresponding to \(\ker A\subseteq(\mathbb{Z}/f\mathbb{Z})^{\times}\) under the usual isomorphism \((\mathbb{Z}/f\mathbb{Z})^{\times}\xrightarrow{\cong}\operatorname{Gal}(\mathbb{Q}(\zeta_{f})/\mathbb{Q})\). Then the number field associated to \(A\) is defined as the fixed field \(\mathbb{Q}(\zeta_{f})^{G}\)._ Theorem 3.10 combines Corollary 3.6 and Theorem 4.3 from [27]. **Theorem 3.10**.: _Let \(f\) be a positive integer, let \(A\) be a subgroup of the group \(\operatorname{Dir}(f)\) of Dirichlet characters of modulus \(f\), and let \(F\) be the number field associated to \(A\). Then a prime \(p\in\mathbb{Z}\) is unramified in \(\mathcal{O}_{F}\) if and only if \(\chi(p)\neq 0\) for all \(\chi\in A\). Furthermore:_ \[\zeta_{F}(s)=\prod_{\chi\in A}L(s,\chi). 
\tag{3.6}\] Theorem 3.10 requires a bit of care; the version of it expressed as Theorem 4.3 in [27] leaves one small (but important for getting correct Euler factors at ramified primes) point unexplained. The point is that the product should be taken over _primitive_ representatives for Dirichlet characters in the group \(A\). For the sake of the present work, what this means is the following: if \(p>2\) and we let \(A\) be the group \(\operatorname{Dir}(p^{2})[p]\) of \(p\)-torsion elements in the group of Dirichlet characters of modulus \(p^{2}\), then every nonidentity element in the group \(\operatorname{Dir}(p^{2})[p]\) is a primitive Dirichlet character, but the identity element in \(\operatorname{Dir}(p^{2})[p]\) is the principal Dirichlet character of modulus \(p^{2}\), which is imprimitive. Formula (3.6) is valid if, for the \(L\)-function factor corresponding to the identity element of \(\operatorname{Dir}(p^{2})[p]\), we use the Dirichlet \(L\)-function of the _primitive_, and consequently modulus \(1\), representative for that identity element; i.e., we use the Riemann zeta-function. If we had instead used the Dirichlet \(L\)-series of the (imprimitive) principal Dirichlet character of modulus \(p^{2}\), then formula (3.6) would be off by an Euler factor at \(p\). ## 4. The \(L\)-function of the mod \(p\) Moore spectrum. **Definition 4.1**.: _Let \(p\) be an odd prime. Since the group \(\operatorname{Dir}(p^{2})\) of Dirichlet characters of modulus \(p^{2}\) is cyclic of order \(\phi(p^{2})=p(p-1)\), there exists a unique subgroup of index \(p-1\) in \(\operatorname{Dir}(p^{2})\). We will write \(\operatorname{Dir}(p^{2})[p]\) for this subgroup. 
Since \(\operatorname{Dir}(p^{2})[p]\) consists of the Dirichlet characters \(\chi\) of modulus \(p^{2}\) such that \(\chi(n)\) is a \(p\)th root of unity for all \(n\), the Galois group \(G_{\mathbb{Q}(\zeta_{p})/\mathbb{Q}}\cong C_{p-1}\) acts on \(\operatorname{Dir}(p^{2})[p]\)._ **Definition 4.2**.: _Let \(\chi\) be a generator of \(\operatorname{Dir}(p^{2})[p]\), and let \(L(s,S/p)\) denote the product_ \[L(s,S/p)=\prod_{\sigma\in G_{\mathbb{Q}(\zeta_{p})/\mathbb{Q}}}L(s,\chi^{\sigma}).\] _Since the set of Dirichlet characters \(\{\chi^{\sigma}\}_{\sigma\in G_{\mathbb{Q}(\zeta_{p})/\mathbb{Q}}}\) is exactly the set of nontrivial elements (i.e., nonprincipal characters) in \(\operatorname{Dir}(p^{2})[p]\), the function \(L(s,S/p)\) is independent of the choice of \(\chi\)._ _Since each \(L(s,\chi^{\sigma})\) converges for all complex numbers \(s\) with real part \(\Re(s)>1\), the same is true of \(L(s,S/p)\). Since each \(L(s,\chi^{\sigma})\) has analytic continuation to a meromorphic function on the complex plane, the same is true of \(L(s,S/p)\)._ **Observation 4.3**.: Here are some easy observations about \(L(s,S/p)\): * Since each \(\chi^{\sigma}\) is nonprincipal, \(L(s,\chi^{\sigma})\) is entire, so the product \(L(s,S/p)\) is entire. * The function \(L(s,S/p)\) can be written as a single \(L\)-series, as follows: let \(\mathfrak{o}(p)\) denote the set of elements of order exactly \(p\) in the group \(\operatorname{Dir}(p^{2})\) of Dirichlet characters of modulus \(p^{2}\), i.e., \(\mathfrak{o}(p)\) is the set of nonidentity elements of \(\operatorname{Dir}(p^{2})[p]\). Let \(\Join_{\chi\in\mathfrak{o}(p)}\chi\) denote the Dirichlet convolution (see Definition 3.1) of the elements in \(\mathfrak{o}(p)\). Then \[L(s,S/p)=L\left(s,\Join_{\chi\in\mathfrak{o}(p)}\chi\right)=\sum_{n\geq 1}\frac{\left(\Join_{\chi\in\mathfrak{o}(p)}\chi\right)(n)}{n^{s}}.\] * It follows immediately from Observation 3.6 that \(L(1-n,S/p)=0\) for all odd positive integers \(n\). 
* We have the action of \(G_{\mathbb{Q}(\zeta_{p})/\mathbb{Q}}\) on \(\operatorname{Dir}(p^{2})[p]\) described in Definition 4.1, and \(G_{\mathbb{Q}(\zeta_{p})/\mathbb{Q}}\) of course acts on \(\mathbb{Q}(\zeta_{p})\), where the elements of \(\operatorname{Dir}(p^{2})[p]\) take their values. Using Definition 3.4, one can compute \(B_{n}^{\chi}\), for any fixed value of \(n\), by solving for Taylor coefficients in a way which only involves a finite sum, and in particular, finitely many applications of \(\chi\). So \(\sigma\) being a field automorphism implies \(B_{n}^{(\chi^{\sigma})}=(B_{n}^{\chi})^{\sigma}\), for any \(\sigma\in G_{\mathbb{Q}(\zeta_{p})/\mathbb{Q}}\), without needing to know anything about continuity of the Galois action. * In particular, since \(\mathbb{Q}(\zeta_{p})/\mathbb{Q}\) is Galois, \[L(1-n,S/p) =\prod_{\sigma\in G_{\mathbb{Q}(\zeta_{p})/\mathbb{Q}}}L(1-n,\chi^{\sigma})\] \[=\prod_{\sigma\in G_{\mathbb{Q}(\zeta_{p})/\mathbb{Q}}}L(1-n,\chi)^{\sigma}\] \[=N_{\mathbb{Q}(\zeta_{p})/\mathbb{Q}}(L(1-n,\chi)),\] the field norm of the extension \(\mathbb{Q}(\zeta_{p})/\mathbb{Q}\), evaluated at \(L(1-n,\chi)\). Consequently \(L(1-n,S/p)\in\mathbb{Q}\). In Proposition 4.4 we provide an Euler product formula for \(L(s,S/p)\). The result is not number-theoretically novel at all: the method used is classical and very well-known. The resulting formula involves a _division_ by \(\zeta(s)\), and consequently the numerators in special values of \(\zeta(s)\) contribute (in an indirect way) to denominators in special values of \(L(s,S/p)\). In Theorem 4.8 and Corollary 4.9 we prove that the special values of \(L(s,S/p)\) agree with the orders of the \(KU\)-local stable homotopy groups of the mod \(p\) Moore spectrum \(S/p\); consequently _numerators_ of Bernoulli numbers are entering (again, in an indirect way) into the orders of \(KU\)-local stable homotopy groups. 
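As a concrete instance of this numerology (the computation is ours, using the standard closed form \(B_{n}^{\chi}=f^{n-1}\sum_{a=1}^{f}\chi(a)B_{n}(a/f)\) together with the norm description of \(L(1-n,S/p)\) above): for \(p=3\), taking the generator \(\chi\) of \(\operatorname{Dir}(9)[3]\) with \(\chi(2)=e^{2\pi i/3}\), one finds \(L(0,S/3)=0\) and \(L(-1,S/3)=4/3\), whose denominator \(3\) is precisely the order of \(\pi_{4}(L_{KU}S/3)\) and of \(\pi_{3}(L_{KU}S/3)\):

```python
from sympy import I, Rational, bernoulli, expand, simplify, sqrt

f = 9
w = Rational(-1, 2) + sqrt(3) * I / 2  # primitive cube root of unity

# chi: a generator of Dir(9)[3], pinned down by chi(2) = w
# (2 is a primitive root modulo 9); chi(a) = 0 when 3 divides a
ind, x = {}, 1
for t in range(6):
    ind[x] = t
    x = (x * 2) % 9
chi = lambda a: w ** ind[a % 9] if (a % 9) in ind else 0
chibar = lambda a: chi(a) ** 2  # the complex conjugate of chi is chi^2

def gen_bernoulli(char, n):
    # standard closed form: B_n^chi = f^(n-1) * sum_a chi(a) * B_n(a/f)
    return Rational(f) ** (n - 1) * sum(
        char(a) * bernoulli(n, Rational(a, f)) for a in range(1, f + 1))

def L_S3(n):
    # L(1-n, S/3) = product of -B_n^{chi^sigma}/n over both Galois conjugates
    val = (-gen_bernoulli(chi, n) / n) * (-gen_bernoulli(chibar, n) / n)
    return simplify(expand(val))

print(L_S3(1))  # 0 (odd-index values vanish, cf. Observation 4.3)
print(L_S3(2))  # 4/3
```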
**Proposition 4.4**.: _Let \(p>2\) be a prime, and let \(G_{p}\) be the set of prime numbers \(\ell\neq p\) such that \(\ell^{p-1}\not\equiv 1\) modulo \(p^{2}\), i.e., such that the class of \(\ell\) is not a \(p\)th power in the group \((\mathbb{Z}/p^{2}\mathbb{Z})^{\times}\). (Every prime which is a primitive root modulo \(p^{2}\), i.e., which generates \((\mathbb{Z}/p^{2}\mathbb{Z})^{\times}\), belongs to \(G_{p}\), but \(G_{p}\) is in general strictly larger.) Let \(N_{p}\) be the set of prime numbers \(\ell\neq p\) not contained in \(G_{p}\). Then, for all complex numbers \(s\) with real part \(>1\), we have an equality_ \[L(s,S/p)=\frac{1-p^{-s}}{\zeta(s)}\left(\prod_{\ell\in N_{p}}\frac{1}{1-\ell^{-s}}\right)^{p}\left(\prod_{\ell\in G_{p}}\frac{1}{1-\ell^{-sp}}\right). \tag{4.7}\] Proof.: Since the character \(\chi\) of Definition 4.2 has order \(p\), its kernel is the unique index-\(p\) subgroup of \((\mathbb{Z}/p^{2}\mathbb{Z})^{\times}\), namely the subgroup of \(p\)th powers, which consists of exactly the classes \(n\) with \(n^{p-1}\equiv 1\) modulo \(p^{2}\). Hence \(\chi(\ell)=1\) if \(\ell\in N_{p}\), while \(\chi(\ell)\) is a nontrivial \(p\)th root of unity if \(\ell\in G_{p}\). The product of the Euler products (3.4) of the \(L\)-functions \(L(s,\chi^{\sigma})\) over all \(\sigma\in G_{\mathbb{Q}(\zeta_{p})/\mathbb{Q}}\), together with the Euler product of \(\zeta(s)=L(s,\chi_{0})\), has three types of factors: * the \(L\)-factor at \(\ell\), for primes \(\ell\in G_{p}\), is \[\zeta_{\ell}(s)L_{\ell}(s,S/p)=\frac{1}{1-\ell^{-s}}\left(\prod_{\sigma\in G_{\mathbb{Q}(\zeta_{p})/\mathbb{Q}}}(1-\chi^{\sigma}(\ell)\ell^{-s})\right)^{-1}.\] Since \(\ell\in G_{p}\), the value \(\chi(\ell)\) is a nontrivial, hence primitive, \(p\)th root of unity. Hence \[\frac{\ell^{ps}}{\zeta_{\ell}(s)L_{\ell}(s,S/p)}=\prod_{j=1}^{p}(\ell^{s}-\zeta_{p}^{j})=(\ell^{s}-1)\Phi_{p}(\ell^{s})=\ell^{ps}-1,\] with \(\zeta_{p}\) a primitive \(p\)th root of unity, and where \(\Phi_{p}\) is the \(p\)th cyclotomic polynomial. Solving for \(\zeta_{\ell}(s)L_{\ell}(s,S/p)\) yields \(\frac{1}{1-\ell^{-ps}}\). * For the \(L\)-factor at \(\ell\), for primes \(\ell\in N_{p}\), we observed above that \(\ell\) lies in the kernel of \(\chi\), and likewise in the kernel of every conjugate \(\chi^{\sigma}\); that is, \(\chi^{\sigma}(\ell)=1\) for every \(\sigma\). 
This lets us simplify an Euler factor: \[\zeta_{\ell}(s)L_{\ell}(s,S/p) =(1-\ell^{-s})^{-1}\left(\prod_{\sigma\in G_{\mathbb{Q}(\zeta_{p} )/\mathbb{Q}}}(1-\chi^{\sigma}(\ell)\ell^{-s})\right)^{-1}\] \[=\left(1-\ell^{-s}\right)^{-p}.\] * The third type of \(L\)-factor is simply the \(\ell=p\)\(L\)-factor. Since \(\chi(p)=0\), the \(p\)-local \(L\)-factor in \(\zeta(s)L(s,S/p)\) is simply the \(p\)-local \(L\)-factor in \(\zeta(s)\), i.e., \((1-p^{-s})^{-1}\). Taking a product over all primes \(\ell\) yields the formula (4.7). **Lemma 4.5**.: _The number field associated to the group \(\operatorname{Dir}(p^{2})[p]\) is the unique minimal subextension of \(\mathbb{Q}(\zeta_{p^{2}})/\mathbb{Q}\) in which \(p\) is wildly ramified. (See Definition 4.1 for the definition of \(\operatorname{Dir}(p^{2})[p]\), and see Definition 3.9 for the definition of the number field associated to a group of Dirichlet characters.)_ Proof.: Elementary exercise. **Proposition 4.6**.: _Let \(p\) be an odd prime, and let \(F/\mathbb{Q}\) be the minimal subextension of \(\mathbb{Q}(\zeta_{p^{2}})/\mathbb{Q}\) in which \(p\) ramifies wildly. Then:_ \[\frac{\zeta_{F}(s)}{\zeta(s)}=L(s,S/p). \tag{4.8}\] Proof.: Among the \(p\) elements of the group \(\operatorname{Dir}(p^{2})[p]\), there are \(p-1\) primitive Dirichlet characters, i.e., there are \(p-1\) nonprincipal \(\mathbb{Q}(\zeta_{p})\)-valued Dirichlet characters of modulus \(p^{2}\). In the group \(\operatorname{Dir}(p^{2})[p]\) we also have the one imprimitive character, namely, the principal Dirichlet character of modulus \(p^{2}\), which is the identity element of the group \(\operatorname{Dir}(p^{2})[p]\). Lemma 4.5 and Theorem 3.10 together let us express \(\zeta_{F}(s)\) as a product over _primitive representatives_ of the elements of \(\operatorname{Dir}(p^{2})[p]\). 
So let us write \(\operatorname{Dir}(p^{2})[p]^{\prime}\) for the set of primitive representatives for the elements of \(\operatorname{Dir}(p^{2})[p]\), and let us write \(\chi_{p}\) for any generator for the group \(\operatorname{Dir}(p^{2})[p]\). The Galois group \(G_{\mathbb{Q}(\zeta_{p})/\mathbb{Q}}\) acts freely on the \(\mathbb{Q}(\zeta_{p})\)-valued primitive Dirichlet characters of modulus \(p^{2}\), so that \[\operatorname{Dir}(p^{2})[p]^{\prime}=\{\chi_{0}\}\cup\{\chi_{p}^{\sigma}:\sigma\in G_{\mathbb{Q}(\zeta_{p})/\mathbb{Q}}\},\] where \(\chi_{0}\) is principal and of modulus \(1\). Now we have equalities: \[\frac{\zeta_{F}(s)}{\zeta(s)}=\frac{\prod_{\chi\in\operatorname{Dir}(p^{2})[p]^{\prime}}L(s,\chi)}{\zeta(s)} \tag{4.9}\] \[=\frac{L(s,\chi_{0})\cdot\prod_{\sigma\in G_{\mathbb{Q}(\zeta_{p})/\mathbb{Q}}}L(s,\chi_{p}^{\sigma})}{\zeta(s)}\] \[=\frac{L(s,\chi_{0})\cdot L(s,S/p)}{\zeta(s)} \tag{4.10}\] \[=L(s,S/p), \tag{4.11}\] where (4.9) is from Lemma 4.5 and Theorem 3.10, and (4.10) is from Definition 4.2. Of course (4.11) comes simply from the observation that the Riemann zeta-function is the Dirichlet \(L\)-function of the principal Dirichlet character of modulus \(1\). **Lemma 4.7**.: _Let \(p>2\) be a prime, let \(g\in\mathbb{N}\) be a primitive root modulo \(p^{2}\), let \(n\) be a positive integer divisible by \(p-1\), let \(\zeta_{p}\) denote a primitive \(p\)th root of unity, and let \(\mathcal{O}_{\mathbb{Q}(\zeta_{p})}\) denote the ring of integers in \(\mathbb{Q}(\zeta_{p})\). Then the ideal \((1-\zeta_{p}g^{n})\) in \(\mathcal{O}_{\mathbb{Q}(\zeta_{p})}\) is contained in the ideal \((1-\zeta_{p})\)._ Proof.: We will use the well-known factorization \((p)=(1-\zeta_{p})^{p-1}\) in the ring of integers \(\mathcal{O}_{\mathbb{Q}(\zeta_{p})}\) (see e.g. Lemma 10.1 of [20]), and its corollary, that \((1-\zeta_{p})\) is a maximal ideal in \(\mathcal{O}_{\mathbb{Q}(\zeta_{p})}\) with residue field \(\mathbb{F}_{p}\). 
Obviously \(1-\zeta_{p}g^{n}\) is congruent to \(1-g^{n}\) modulo \(1-\zeta_{p}\). Since \(g\) is nonzero modulo \(p\), we have \(g^{p-1}\equiv 1\) in the residue field \(\mathcal{O}_{\mathbb{Q}(\zeta_{p})}/(1-\zeta_{p})\cong\mathbb{F}_{p}\). So \(g^{n}\equiv 1\) modulo \((1-\zeta_{p})\), so \(1-\zeta_{p}g^{n}\equiv 0\) modulo \(1-\zeta_{p}\). So \((1-\zeta_{p}g^{n})\subseteq(1-\zeta_{p})\). In Theorem 4.8, we adopt the convention that, given a rational number \(x\), we write \(\operatorname{denom}(x)\) for the denominator of \(x\) when written in reduced form, and we let \(\operatorname{denom}(x)\) be \(1\) if \(x=0\). **Theorem 4.8**.: _Let \(p>2\) be a prime. Then, for each positive integer \(n\), the following four numbers are equal:_ * _the order of_ \(\pi_{2n}(L_{KU}S/p)\)_, the_ \((2n)\)_th stable homotopy group of the_ \(KU\)_-local mod_ \(p\) _Moore spectrum,_ * _the order of_ \(\pi_{2n-1}(L_{KU}S/p)\)_, the_ \((2n-1)\)_th stable homotopy group of the_ \(KU\)_-local mod_ \(p\) _Moore spectrum,_ * \(\operatorname{denom}(L(1-n,S/p))\)_, and_ * \(\operatorname{denom}\left(\frac{\zeta_{F}(1-n)}{\zeta(1-n)}\right)\)_, where_ \(F/\mathbb{Q}\) _is the (unique) smallest subextension of_ \(\mathbb{Q}(\zeta_{p^{2}})/\mathbb{Q}\) _in which_ \(p\) _is wildly ramified._ Proof.: By Theorem 2.2, \(\pi_{2n}(L_{KU}S/p)\cong\pi_{2n-1}(L_{KU}S/p)\cong\mathbb{Z}/p\mathbb{Z}\) if \((p-1)\mid n\), and \(\pi_{2n}(L_{KU}S/p)\) and \(\pi_{2n-1}(L_{KU}S/p)\) are trivial if \((p-1)\nmid n\), so there are just two cases to consider: * **If \(p-1\mid n\):**: The Dirichlet character \(\chi\) of Definition 4.2 has the property that \(\chi(g)\) is a primitive \(p\)th root of unity, for any primitive root \(g\) modulo \(p^{2}\). So by Lemma 4.7, \((1-\chi(g)g^{n})\subseteq(1-\zeta_{p})\), so \((p,1-\chi(g)g^{n})\subseteq(1-\zeta_{p})\) since \((1-\zeta_{p})^{p-1}=(p)\). 
In particular, \((p,1-\chi(g)g^{n})\) is contained in a maximal ideal of \(\mathcal{O}_{\mathbb{Q}(\zeta_{p})}\), so \((p,1-\chi(g)g^{n})\neq(1)\). (For this manipulation of ideals, it does not really matter which primitive \(p\)th root of unity \(\zeta_{p}\) we choose, or whether or not it is actually equal to \(\chi(g)\): there is a unique maximal ideal of \(\mathcal{O}_{\mathbb{Q}(\zeta_{p})}\) over \(p\), and it is of the form \((1-\zeta)\) for any primitive \(p\)th root of unity \(\zeta\) we choose. So \((1-\zeta_{p})=(1-\zeta)\) for any primitive \(p\)th root of unity \(\zeta\).) Now we invoke Carlitz's result, Theorem 3.7: since \((p,1-\chi(g)g^{n})\neq(1)\), we have that \((1-\chi(1+p))\frac{B_{n}^{\chi}}{n}\) is congruent to \(1\) modulo \((p,1-\chi(g)g^{n})\). Since \((p,1-\chi(g)g^{n})\subseteq(1-\zeta_{p})\), we now have \[(1-\chi(1+p))^{p-1}L(1-n,S/p) =(1-\chi(1+p))^{p-1}\prod_{\sigma\in G_{\mathbb{Q}(\zeta_{p})/ \mathbb{Q}}}\frac{-B_{n}^{\chi^{\sigma}}}{n} \tag{4.12}\] \[\equiv 1\mod(1-\zeta_{p}),\] and, on taking \(p\)-adic valuations of both sides of the equation (4.12) in \(\mathcal{O}_{\mathbb{Q}(\zeta_{p})}\), we have \[(p-1)\nu_{p}\left(1-\chi(1+p)\right)+\nu_{p}(L(1-n,S/p))=0. \tag{4.13}\] Now remember that \(\chi\) is a generator of \(\operatorname{Dir}(p^{2})[p]\), and in particular, \(\chi\) takes primitive \(p\)th roots of unity in \(\mathbb{Z}/p^{2}\mathbb{Z}\) to primitive \(p\)th roots of unity. By a very easy elementary computation, \(1+p\) is a primitive \(p\)th root of unity in \(\mathbb{Z}/p^{2}\mathbb{Z}\); and since we have an equality of ideals \((1-\zeta)=(1-\zeta_{p})\) for any primitive \(p\)th root of unity \(\zeta\) in \(\mathcal{O}_{\mathbb{Q}(\zeta_{p})}\), we now have \(\nu_{p}(1-\chi(1+p))=\nu_{p}(1-\zeta_{p})=\frac{1}{p-1}\). Hence equation (4.13) gives us that \(\nu_{p}(L(1-n,S/p))=-1\), i.e., the denominator of the rational number \(L(1-n,S/p)\) is divisible by \(p\), but not by \(p^{2}\). 
Carlitz's result, Theorem 3.7, also implies that \(\frac{p^{a}B_{n}^{\chi}}{n}\in\mathcal{O}_{\mathbb{Q}(\zeta_{p})}\) for some sufficiently large integer \(a\). So the product \(L(1-n,S/p)=\prod_{\sigma\in G_{\mathbb{Q}(\zeta_{p})/\mathbb{Q}}}\frac{-B_{n}^{\chi^{\sigma}}}{n}\) has denominator divisible by no primes other than the prime \(p\). Consequently the denominator of \(L(1-n,S/p)\) is exactly \(p\), when \(p-1\) divides \(n\). **If \(p-1\nmid n\):**: Let \(g\in\mathbb{N}\) be a primitive root modulo \(p^{2}\). We have the congruence \(1-\chi(g)g^{n}\equiv 1-g^{n}\) modulo \(1-\zeta_{p}\), for the same reasons as in the previous part of this proof. The difference is now that, since \(p-1\) does not divide \(n\), \(g^{n}\not\equiv 1\) modulo \(p\). So \(1-g^{n}\not\equiv 0\) modulo \((1-\zeta_{p})\), since \((1-\zeta_{p})\cap\mathbb{Z}=(p)\), so \((p,1-\chi(g)g^{n})=(1)\subseteq\mathcal{O}_{\mathbb{Q}(\zeta_{p})}\). Now Theorem 3.7 implies that \(\frac{B_{n}^{\chi}}{n}\) is an algebraic integer. Hence, taking a product over Galois conjugates, we have \[L(1-n,S/p)=\prod_{\sigma\in G_{\mathbb{Q}(\zeta_{p})/\mathbb{Q}}}\frac{-B_{n}^{\chi^{\sigma}}}{n}\in\mathbb{Z},\] as desired. Finally, the equality \(\operatorname{denom}(L(1-n,S/p))=\operatorname{denom}\left(\frac{\zeta_{F}(1-n)}{\zeta(1-n)}\right)\) is immediate from Proposition 4.6. Theorem 3 in [11] is very similar to Carlitz's theorem reproduced as Theorem 3.7, above. Consequently the role of Carlitz's theorem in the proof of Theorem 4.8 can also be filled by part 2 of Theorem 3 in Fresnel's paper. Corollary 4.9 ought to be understood as a distant descendant of the classical theorem that \(\frac{\zeta(n)}{\pi^{n}}\) is rational for positive even integers \(n\). **Corollary 4.9**.: _Let \(p>2\) be a prime, and let \(n\) be a positive even integer. Then \(L(s,S/p)\) satisfies the functional equation_ \[L(n,S/p)=\left(\frac{2^{n-1}\pi^{n}}{p^{2n-1}(n-1)!}\right)^{p-1}L(1-n,S/p),\] _up to sign. 
Consequently the number \(L(n,S/p)\) is equal to \(\left(\frac{2^{n-1}\pi^{n}}{p^{2n-1}(n-1)!}\right)^{p-1}\) times a rational number which, when written in reduced form, has denominator equal to the order of \(\pi_{2n}(L_{KU}S/p)\) and of \(\pi_{2n-1}(L_{KU}S/p)\), the \((2n)\)th and \((2n-1)\)st \(KU\)-local stable homotopy groups of the mod \(p\) Moore spectrum._ Proof.: Let \(\chi\) be as in Definition 4.2, i.e., \(\chi\) is any primitive Dirichlet character of modulus \(p^{2}\) taking values in \(\mathbb{Q}(\zeta_{p})\). The function \(L(1-n,\chi)\) vanishes when \(n\) is an odd positive integer (by Observation 3.6); so suppose instead that \(n\) is an even positive integer. Taking the complex norm of both sides of the functional equation (3.5), we have that: * \(|G(1,\chi)|=p\) (since, for \(\chi\) any primitive Dirichlet character of modulus \(f\), we have \(|G(1,\chi)|=\sqrt{f}\); see e.g. Theorem 8.15 of [2]), * since \(\chi(-1)=1\) and since \(n\) is an even positive integer, we have \[\left|e^{-\pi in/2}+\chi(-1)e^{\pi in/2}\right|=2.\] Consequently functional equation (3.5) yields \[|L(1-n,\chi)|=\frac{p^{2n-1}(n-1)!}{2^{n-1}\pi^{n}}\left|L(n,\overline{\chi})\right| \tag{4.14}\] for all positive even integers \(n\). In the group \(\operatorname{Dir}(p^{2})[p]\) of Dirichlet characters of modulus \(p^{2}\) taking values in \(\mathbb{Q}(\zeta_{p})\) (see Definition 4.1), complex conjugation acts freely on the nonprincipal characters; consequently, taking a product of the equation (4.14) over all nonprincipal \(\chi\in\operatorname{Dir}(p^{2})[p]\), we get \[\left(\frac{2^{n-1}}{p^{2n-1}(n-1)!}\right)^{p-1}\left|L(1-n,S/p)\right|=\frac{1}{\pi^{n(p-1)}}\left|L(n,S/p)\right| \tag{4.15}\] for all positive even integers \(n\). 
Now it follows from the Euler product for \(L(s,S/p)\) (see Proposition 4.4) that \(L(n,S/p)\) is a _real_ number, and it follows from the definition of \(L(s,S/p)\) as a product of Galois conjugates of Dirichlet \(L\)-functions that \(L(1-n,S/p)\), for positive integers \(n\), is a product of Galois conjugates of generalized Bernoulli numbers, hence is rational. Consequently equation (4.15) now reads \[L(n,S/p)=\pm\left(\frac{2^{n-1}\pi^{n}}{p^{2n-1}(n-1)!}\right)^{p-1}L(1-n,S/p)\] for all positive even integers \(n\). Now the description of the denominator of \(L(1-n,S/p)\) given in Theorem 4.8 implies the claimed result. ## 5. Consequences for the Leopoldt conjecture. For a totally real number field \(F\), the classical class number formula5 reads: Footnote 5: The form we give here is somewhat simpler than a typical textbook statement of the class number formula, since we give the formula only for totally real \(F\). For example, for totally real \(F\), the discriminant \(\Delta_{F}\) is always positive, so there is no need to take the absolute value of \(\Delta_{F}\) before taking its square root. \[\lim_{s\to 1}(s-1)\zeta_{F}(s)=\frac{2^{[F:\mathbb{Q}]}\operatorname{reg}_{F}h_{ F}}{w_{F}\sqrt{\Delta_{F}}}, \tag{5.16}\] where \(\operatorname{reg}_{F}\) is the classical regulator of \(F\), \(h_{F}\) the class number of \(F\), \(w_{F}\) the number of roots of unity in \(F\), and \(\Delta_{F}\) the discriminant of \(F\). 
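A quick numerical sanity check of (5.16) (ours, outside the paper) for the real quadratic field \(F=\mathbb{Q}(\sqrt{5})\): Theorem 3.10, applied to the group generated by the quadratic character \(\chi_{5}\) of modulus \(5\), gives \(\zeta_{F}(s)=\zeta(s)L(s,\chi_{5})\), so the residue on the left of (5.16) is \(L(1,\chi_{5})\); on the right, \(h_{F}=1\), \(w_{F}=2\), \(\Delta_{F}=5\), and \(\operatorname{reg}_{F}=\log\frac{1+\sqrt{5}}{2}\), the logarithm of the fundamental unit:

```python
import math

chi5 = [0, 1, -1, -1, 1]  # quadratic character of modulus 5

# residue of zeta_F at s = 1 for F = Q(sqrt(5)): since zeta_F(s) = zeta(s)L(s,chi5),
# the residue equals L(1, chi5), summed here over full periods of the character
N = 10 ** 6
L1 = sum(chi5[n % 5] / n for n in range(1, N + 1))

# right-hand side of (5.16): 2^[F:Q] * reg * h / (w * sqrt(Delta))
# with reg = log((1+sqrt(5))/2), h = 1, w = 2, Delta = 5
rhs = 4 * math.log((1 + math.sqrt(5)) / 2) / (2 * math.sqrt(5))

print(L1, rhs)  # both approximately 0.43041
```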
In [7], Colmez proved an analogue of (5.16) for the \(p\)-adic Dedekind \(\zeta\)-function of \(F\): **Theorem 5.1**.: **(Colmez.)** \[\lim_{s\to 1}(s-1)\zeta_{F,p}(s)=\frac{2^{[F:\mathbb{Q}]}\operatorname{reg}_{F,p}h_{F}\prod_{\mathfrak{p}|p}\left(1-\frac{1}{N(\mathfrak{p})}\right)}{w_{F}\sqrt{\Delta_{F}}}, \tag{5.17}\] where \(h_{F},w_{F}\), and \(\Delta_{F}\) are the same as in (5.16), \(N(\mathfrak{p})\) is the norm of the prime ideal \(\mathfrak{p}\), and \(\operatorname{reg}_{F,p}\) is Leopoldt's \(p\)_-adic regulator of \(F\)_, whose definition we give somewhat informally as follows: **Definition 5.2**.: _Fix a prime number \(p\), and6 an embedding \(\mathbb{C}_{p}\hookrightarrow\mathbb{C}\). Let \(\sigma_{1},\ldots,\sigma_{r}\) be the embeddings of \(F\) into \(\mathbb{C}_{p}\) (only list the complex embeddings "once"--leave their conjugates off the list). Let \(e_{1},\ldots,e_{s}\) be a \(\mathbb{Z}\)-linear basis for the unit group \(\mathcal{O}_{F}^{\times}\) modulo its torsion subgroup; it follows from the Dirichlet unit theorem that \(s=r-1\). The \(p\)-adic regulator of \(F\) is_ Footnote 6: This embedding is used only so that, given an embedding \(F\hookrightarrow\mathbb{C}_{p}\), we can say it’s “real” or “complex.” \[\operatorname{reg}_{F,p}=\det(\delta_{i}\log_{p}(\sigma_{i}(e_{j})))_{1\leqslant i,j\leqslant s}, \tag{5.18}\] _where \(\delta_{i}\) is \(1\) if \(\sigma_{i}\) is real and \(2\) if \(\sigma_{i}\) is complex, and where \(\log_{p}\) is the \(p\)-adic logarithm (take the Maclaurin series for \(\ln(1+x)\), but regard the coefficients as \(p\)-adic rationals)._ In Definition 5.2, we see that we naturally get an \(s\)-by-\((s+1)\) matrix of \(p\)-adic logarithms of the numbers \(\sigma_{i}(e_{j})\), and in (5.18) we simply ignore one of the columns to get a square matrix, whose determinant we define as the \(p\)-adic regulator; omitting a different column swaps the sign of the determinant, and so \(\operatorname{reg}_{F,p}\) is only well-defined up to sign. 
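The \(p\)-adic logarithm in Definition 5.2 is easy to experiment with: for an integer \(x\equiv 1\) modulo \(p\), the Maclaurin series converges \(p\)-adically, and \(\log_{p}\) is a homomorphism from \((1+p\hat{\mathbb{Z}}_{p},\times)\) to \((p\hat{\mathbb{Z}}_{p},+)\), which is the property underlying the regulator determinant. A small sketch (ours; exact rational arithmetic, truncated modulo \(p^{N}\)):

```python
from fractions import Fraction

def log_p(x, p, N, terms=60):
    """p-adic logarithm of an integer x = 1 (mod p), reduced mod p**N.

    Sums the Maclaurin series log(1+u) = sum_k (-1)^(k+1) u^k / k exactly
    over the rationals; every omitted term has p-adic valuation >= N for
    the sizes used here, so the truncation is exact mod p**N.
    """
    u = Fraction(x - 1)
    s = sum(Fraction((-1) ** (k + 1)) * u ** k / k for k in range(1, terms + 1))
    # s has positive p-adic valuation, so its denominator is invertible mod p**N
    M = p ** N
    return s.numerator * pow(s.denominator, -1, M) % M

p, N = 5, 8
a = log_p(1 + p, p, N)         # log_5(6)  mod 5^8
b = log_p((1 + p) ** 2, p, N)  # log_5(36) mod 5^8
print(a, b)
assert b == (2 * a) % p ** N   # homomorphism property: log(x^2) = 2 log(x)
```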
See [17] for further explanation. The Leopoldt conjecture is simply the conjecture that \(\operatorname{reg}_{F,p}\) is nonzero for all primes \(p\) and all number fields \(F\). Siegel [25] and Klingen [14] proved that \(\zeta_{F}(1-n)\) is a rational number when \(n\) is a positive integer, so we can think of the sequence \[\zeta_{F}(0),\zeta_{F}(-1),\zeta_{F}(-2),\ldots \tag{5.19}\] as a sequence of \(p\)_-adic_ numbers, and we can ask whether there exists some continuous function on the \(p\)-adic integers whose values at negative integers recover the sequence (5.19). This doesn't work, but for only two reasons, and each can be dealt with by modifying the question appropriately: what _does_ work is to cancel out the Euler factors in \(\zeta_{F}(s)\) corresponding to primes of the ring of integers \(\mathcal{O}_{F}\) over \(p\), and then to evaluate the result at \(s=1-(p-1),1-2(p-1),1-3(p-1),1-4(p-1),\dots\) instead of at \(s=0,-1,-2,\dots\). One then arrives at the result, from [24], that there exists a unique \(p\)-adically continuous function \(\zeta_{F,p}(s)\) such that, for \(n>1\) an integer, \[\zeta_{F,p}(1-n(p-1))=\zeta_{F}(1-n(p-1))\prod_{\mathfrak{p}|p}\left(1-N(\mathfrak{p})^{n(p-1)-1}\right) \tag{5.20}\] if \(p>2\), and \[\zeta_{F,p}(1-2n)=\zeta_{F}(1-2n)\prod_{\mathfrak{p}|p}\left(1-N(\mathfrak{p})^{2n-1}\right) \tag{5.21}\] if \(p=2\); this result extends the results of [15], which assumed \(F\) abelian. In particular, \(\nu_{p}\left(\zeta_{F}(1-n(p-1))\right)=\nu_{p}\left(\zeta_{F,p}(1-n(p-1))\right)\) and \(\nu_{2}\left(\zeta_{F}(1-2n)\right)=\nu_{2}\left(\zeta_{F,2}(1-2n)\right)\) for positive integers \(n\). 
Since \(\zeta_{F,p}\) is \(p\)-adically continuous and since the sequence of integers \[\left(1-(p-1),1-p(p-1),1-p^{2}(p-1),1-p^{3}(p-1),\dots\right)\] converges \(p\)-adically to \(1\), using (5.17) we get an equality: \[\frac{2^{[F:\mathbb{Q}]}\operatorname{reg}_{F,p}h_{F}\prod_{\mathfrak{p}|p} \left(1-\frac{1}{N(\mathfrak{p})}\right)}{w_{F}\sqrt{\Delta_{F}}}=\lim_{j\to \infty}(-p^{j}(p-1))\zeta_{F,p}\left(1-p^{j}(p-1)\right). \tag{5.22}\] The trick now is to compare (5.22) for a nontrivial choice of \(F\) to (5.22) for the trivial choice of \(F\), i.e., \(F=\mathbb{Q}\), and to compare the resulting ratio to the order of a homotopy group using Theorem 4.8. Suppose now that \(F\) is the smallest subextension of \(\mathbb{Q}(\zeta_{p^{2}})/\mathbb{Q}\) in which \(p\) is wildly ramified. Then we have: \[p =\#\left(\pi_{2(p-1)p^{n}-1}(L_{KU}S/p)\right)\] \[=\operatorname{denom}\left(\frac{\zeta_{F}\left(1-p^{n}(p-1) \right)}{\zeta\left(1-p^{n}(p-1)\right)}\right)\] \[=\operatorname{denom}\left(\frac{\zeta_{F,p}\left(1-p^{n}(p-1) \right)}{\zeta_{\mathbb{Q},p}\left(1-p^{n}(p-1)\right)}\right),\] so the order of vanishing of \(\zeta_{F,p}(s)\) at \(s=1\) is equal to the order of vanishing of \(\zeta_{\mathbb{Q},p}(s)\) at \(s=1\). We have that \[\left(1-\frac{1}{p}\right)\operatorname{reg}_{\mathbb{Q},p}=\lim_{s\to 1}(s-1) \zeta_{\mathbb{Q},p}(s),\] which is nonzero (here \(\operatorname{reg}_{\mathbb{Q},p}=1\), being the determinant of an empty matrix), so \(\lim_{s\to 1}(s-1)\zeta_{F,p}(s)\) also converges and is nonzero. Colmez's class number formula (5.17) then yields that \[\frac{2^{[F:\mathbb{Q}]}\operatorname{reg}_{F,p}h_{F}\prod_{\mathfrak{p}|p} \left(1-\frac{1}{N(\mathfrak{p})}\right)}{w_{F}\sqrt{\Delta_{F}}} \tag{5.23}\] must be nonzero. 
Each factor in (5.23) is automatically nonzero, except possibly for \(\operatorname{reg}_{F,p}\); so the \(p\)-adic regulator \(\operatorname{reg}_{F,p}\) of \(F\) must also be nonzero, i.e., **Theorem 5.3**.: _Let \(F\) be the smallest subextension of \(\mathbb{Q}(\zeta_{p^{2}})/\mathbb{Q}\) in which \(p\) is wildly ramified. Then the Leopoldt conjecture holds for \(F\) at the prime \(p\)._ As we already pointed out, the Leopoldt conjecture for abelian extensions of \(\mathbb{Q}\) has been settled since [4], over 50 years ago, so Theorem 5.3 is not a new case of the Leopoldt conjecture at all. The noteworthy thing about the argument we have given, above, is its use of the \(v_{1}\)-periodicity \(\pi_{-1}(L_{KU}S/p)\cong\pi_{2(p-1)-1}(L_{KU}S/p)\cong\pi_{4(p-1)-1}(L_{KU}S/p )\cong\pi_{6(p-1)-1}(L_{KU}S/p)\cong\dots\) in homotopy groups to deduce nonvanishing of the \(p\)-adic regulator. More generally: **Observation 5.4**.: If \(F\) is a totally real number field and if we have integers \(j,k\) and spectra \(E,X\) such that 1. for \(n\gg 0\), the order of \(\pi_{2(p^{k}-1)p^{n}-1}(L_{E}X)\) is equal to the denominator of the rational number \(\frac{\zeta_{F}(1-p^{n}(p^{k}-1))}{\zeta(1-p^{n}(p^{k}-1))}\), 2. \(X\) admits a self-map \(\Sigma^{(2p^{k}-2)j}X\to X\) which induces an isomorphism in \(E_{*}\)-homology, and 3. \(\pi_{-1}(L_{E}X)\) is finite, then the Leopoldt conjecture holds for \(F\) at the prime \(p\). The argument is as follows: the sequence \[\#(\pi_{-1}(L_{E}X)),\#(\pi_{2pj(p^{k}-1)-1}(L_{E}X)),\#(\pi_{2p^{2}j(p^{k}-1) -1}(L_{E}X)),\dots\] is constant, so the order of vanishing of \(\zeta_{F,p}(s)\) at \(s=1\) is equal to the order of vanishing of \(\zeta_{\mathbb{Q},p}(s)\) at \(s=1\), so by the same argument using Colmez's \(p\)-adic class number formula as above, \(\operatorname{reg}_{F,p}\) is nonzero. 
**Remark 5.5**.: Every \(E(k-1)\)-acyclic finite CW-complex \(X\) admits a self-map \(\Sigma^{(2p^{k}-2)j}X\to X\), for some \(j\), which induces an isomorphism in \(E(k)\)-homology, by the periodicity theorem of Hopkins-Smith (see [13], or Theorem 1.5.4 of [23]). Here \(E(k)_{*}\) is the height-\(k\) \(p\)-primary Johnson-Wilson theory. So we have a very powerful mechanism for arranging for the second condition in Observation 5.4 to be satisfied, and the third condition is, in many situations, amenable to explicit computation. It remains an open question how to produce spectra \(X\) and _nonabelian_ number fields \(F\) satisfying the first condition in Observation 5.4, in order to prove potentially _new_ cases of the Leopoldt conjecture. The Iwasawa-theoretic perspective sketched in Remark 1.5 represents my best hope for how one might go about producing such \(X\) and \(F\).

## 6. Appendix: a few entertaining numerical calculations.

### Computed examples of values of \(L(1-n,S/p)\)

While Theorem 4.8 describes the denominators of \(L(1-n,S/p)\) completely for positive integers \(n\), it says nothing about the numerators. These numerators are much less predictable than the denominators. 
We include a table of a few values of \(L(1-n,S/p)\) and the prime factorizations of their numerators, which might give the reader a sense of this unpredictability: \[L(0,S/3) =0\] \[L(-1,S/3) =\frac{4}{3}\] \[=\frac{2^{2}}{3}\] \[L(-2,S/3) =0\] \[L(-3,S/3) =\frac{796}{3}\] \[=\frac{2^{2}\cdot 199}{3}\] \[L(-4,S/3) =0\] \[L(-5,S/3) =\frac{1409884}{3}\] \[=\frac{2^{2}\cdot 7\cdot 43\cdot 1171}{3}\] \[L(-6,S/3) =0\] \[L(-7,S/3) =\frac{10595003836}{3}\] \[=\frac{2^{2}\cdot 2648750959}{3}\] \[L(0,S/5) =0\] \[L(-1,S/5) =1136\] \[=2^{4}\cdot 71\] \[L(-2,S/5) =0\] \[L(-3,S/5) =\frac{607045659856}{5}\] \[=\frac{2^{4}\cdot 37940353741}{5}\] \[L(-4,S/5) =0\] \[L(-5,S/5) =1293561684322985119376\] \[=2^{4}\cdot 41^{2}\cdot 3331\cdot 2486381\cdot 5807071\] \[L(-6,S/5) =0\] \[L(-7,S/5) =\frac{1280828318043498475058726863755856}{5}\] \[=\frac{2^{4}\cdot 401\cdot 1151\cdot 1171\cdot 281677007771\cdot 525827079851}{5}\] \[L(0,S/7) =0\] \[L(-1,S/7) =17624384\] \[=2^{6}\cdot 113\cdot 2437\] \[L(-2,S/7) =0\] \[L(-3,S/7) =60081275301219900531392\] \[=2^{6}\cdot 547\cdot 659\cdot 7477\cdot 348304469143\] \[L(-4,S/7) =0\] \[L(-5,S/7) =\frac{1448428968939581787932808098954336691322688}{7}\] \[=\frac{2^{6}\cdot 138054547\cdot 163933047708171216095114393777711}{7}\] \[L(-6,S/7) =0\] \[L(-7,S/7) =58235259522755629726600502123583976556247364608948281462604992\] \[=2^{6}\cdot 14912003737\cdot 61019695682165635074111760577075533607839054420619.\] These examples were computed by solving for Taylor coefficients to compute \(B_{n}^{\chi}\) for \(\chi\in\operatorname{Dir}(p^{2})[p]\), as in Definition 3.4, and then multiplying the resulting values of \(B_{n}^{\chi}\) as in the definition of \(L(s,S/p)\) in Definition 4.2. This process is not difficult to implement in a computer algebra package (the author did this in both MAGMA and SAGE). 
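The computation is straightforward to reproduce. The sketch below is illustrative only: Definitions 3.4 and 4.2 are not reproduced in this section, so it assumes (consistently with the table) that \(L(1-n,S/p)\) is the product of the Dirichlet \(L\)-values \(L(1-n,\chi)=-B_{n}^{\chi}/n\) over the nontrivial \(\chi\in\operatorname{Dir}(p^{2})[p]\), and it uses the standard generalized-Bernoulli formula \(B_{n}^{\chi}=f^{n-1}\sum_{a=1}^{f}\chi(a)B_{n}(a/f)\) with \(f=p^{2}\). With exact arithmetic in \(\mathbb{Q}(\omega)\), \(\omega\) a primitive cube root of unity, it recovers the \(p=3\) entries of the table:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(N):
    """B_0, ..., B_N (convention B_1 = -1/2) via the usual recurrence."""
    B = [Fraction(1)]
    for n in range(1, N + 1):
        B.append(-sum(comb(n + 1, k) * B[k] for k in range(n)) / (n + 1))
    return B

def bernoulli_poly(n, x, B):
    """B_n(x) = sum_k C(n,k) B_k x^(n-k) for a Fraction x."""
    return sum(comb(n, k) * B[k] * x ** (n - k) for k in range(n + 1))

# Q(w) with w^2 = -1 - w; elements stored as pairs (a, b) meaning a + b*w.
def wmul(u, v):
    a, b = u
    c, d = v
    return (a * c - b * d, a * d + b * c - b * d)

DLOG = {pow(2, j, 9): j for j in range(6)}            # 2 generates (Z/9)^*
OMEGA = [(Fraction(1), Fraction(0)), (Fraction(0), Fraction(1)),
         (Fraction(-1), Fraction(-1))]                # w^0, w^1, w^2

def gen_bernoulli(n, r, B):
    """B_n^chi for chi the order-3 character mod 9 with chi(2) = w^r."""
    f = 9
    tot = (Fraction(0), Fraction(0))
    for a in range(1, f + 1):
        if a % 3 != 0:
            c = OMEGA[(r * DLOG[a]) % 3]
            s = bernoulli_poly(n, Fraction(a, f), B)
            tot = (tot[0] + s * c[0], tot[1] + s * c[1])
    fn = Fraction(f) ** (n - 1)
    return (fn * tot[0], fn * tot[1])

def L_S3(one_minus_n):
    """L(1-n, S/3) as the product of -B_n^chi / n over nontrivial chi."""
    n = 1 - one_minus_n
    B = bernoulli_numbers(n)
    prod = (Fraction(1), Fraction(0))
    for r in (1, 2):
        b = gen_bernoulli(n, r, B)
        prod = wmul(prod, (-b[0] / n, -b[1] / n))
    assert prod[1] == 0                               # the product is rational
    return prod[0]

print(L_S3(-1), L_S3(-3))   # 4/3 796/3, matching the table
```

The two nontrivial characters are complex conjugates, so the product of their \(L\)-values is rational, which the final assertion checks.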
So, for example, as a special case of Theorem 1.3, we have that the denominator of \(L(-3,S/5)\) is the order of \(\pi_{7}(L_{KU}S/5)\), i.e., the \(5\) in the denominator of \(L(-3,S/5)=\frac{607045659856}{5}\) is the numerical "shadow" of \(\alpha_{1}\in\pi_{7}(S/5)\), while the \(5\) in the denominator of \(L(-7,S/5)=\frac{1280828318043498475058726863755856}{5}\) is the numerical "shadow" of \(v_{1}\alpha_{1}=\alpha_{2}\in\pi_{15}(S/5)\).

### Some amusing probability arguments associated to homotopy groups

The functional equation of Corollary 4.9, the Euler product of Proposition 4.4, and a computation of \(L(-n,S/p)\), for a positive integer \(n\), imply an asymptotic prime count. The fact that the denominator of \(L(-n,S/p)\) also counts the order of a homotopy group of \(L_{KU}S/p\) tells us that the homotopy groups of \(L_{KU}S/p\) have a relationship to the probability that certain "randomly chosen" collections of integers satisfy appropriate coprimality conditions. These arguments are straightforward extensions of the classical interpretation of \(1/\zeta(2)\) as the probability of two "randomly chosen" integers being coprime. For example: **Question 6.1**.: Choose an odd prime \(p\). Given a sequence \((m_{1},n_{1},m_{2},n_{2},\ldots,m_{p},n_{p})\) of "randomly chosen" integers, what is the probability that the conditions: 1. the integers \(m_{1},n_{1},m_{2},n_{2},\ldots,m_{p},n_{p}\) do not all share a common prime factor \(\ell\) which is a primitive root modulo \(p^{2}\), and 2. none of the pairs \(m_{i},n_{i}\) share a common prime factor \(\ell\neq p\) which is not a primitive root modulo \(p^{2}\), both hold? 
The answer to Question 6.1 is \[\frac{1}{(1-p^{-2})\zeta(2)L(2,S/p)},\] since \(\frac{\ell^{2p}-1}{\ell^{2p}}\) is the probability that \(m_{1},n_{1},\ldots,m_{p},n_{p}\) are not all divisible by \(\ell\), and \(\left(\frac{\ell^{2}-1}{\ell^{2}}\right)^{p}\) is the probability that, for each \(i\in\{1,\ldots,p\}\), \(m_{i}\) and \(n_{i}\) are not both divisible by \(\ell\). Then the Euler product of Proposition 4.4 gives us \[\left(\prod_{\ell\notin G_{p}}\frac{\ell^{2}-1}{\ell^{2}}\right)^{p}\left(\prod_{ \ell\in G_{p}}\frac{\ell^{2p}-1}{\ell^{2p}}\right)=\frac{1}{(1-p^{-2})\zeta(2) L(2,S/p)}.\] We can use the functional equation in Corollary 4.9 to simplify \(L(2,S/p)\). For example: **Example 6.2**.: Given a sequence \((m_{1},n_{1},m_{2},n_{2},m_{3},n_{3})\) of "randomly chosen" integers, let \(P\) denote the probability that the conditions: 1. the integers \(m_{1},n_{1},m_{2},n_{2},m_{3},n_{3}\) do not all share a common prime factor \(\ell\) which is a primitive root modulo \(9\), and 2. none of the pairs \(m_{i},n_{i}\) share a common prime factor \(\ell\neq 3\) which is not a primitive root modulo \(9\), both hold. Then \[P =\frac{1}{(1-3^{-2})}\frac{1}{\zeta(2)}\frac{1}{L(2,S/3)}\] \[=\frac{9}{8}\frac{6}{\pi^{2}}\left(\frac{3^{3}}{2\pi^{2}}\right) ^{2}\frac{1}{L(-1,S/3)}\] \[=\frac{3^{9}}{2^{4}\pi^{6}}\frac{3}{4}\] \[=\frac{59049}{64\pi^{6}}, \tag{6.24}\] which is approximately a \(96\) percent chance. The factor of \(3/4\) in (6.24) is the reciprocal of \(L(-1,S/3)=4/3\), given above. By Theorem 4.8, the factor of three in the denominator of \(L(-1,S/3)\), which is responsible for the probability \(P\) being approximately \(96\) percent instead of approximately \(32\) percent, is the same factor which accounts for the nonvanishing of the fourth \(KU\)-local stable homotopy group \(\pi_{4}(L_{KU}S/3)\) of the mod \(3\) Moore spectrum.
2301.07304
Coupling spin defects in hexagonal boron nitride to a microwave cavity
Optically addressable spin defects in hexagonal boron nitride (hBN) have become a promising platform for quantum sensing. While the sensitivity of these defects is limited by their interactions with the spin environment in hBN, inefficient microwave delivery can further reduce their sensitivity. Here, we design and fabricate a microwave double arc resonator for efficient delivery of the microwave field at 3.8 GHz. The spin transitions in the ground state of VB- are coupled to the frequency of the microwave cavity, which results in enhanced optically detected magnetic resonance (ODMR) contrast. In addition, the linewidth of the ODMR signal is further reduced, achieving a magnetic field sensitivity as low as 42.4 microtesla per square root of hertz. Our robust and scalable device engineering is promising for future employment of spin defects in hBN for quantum sensing.
Thinh N. Tran, Angus Gale, Benjamin Whitefield, Milos Toth, Igor Aharonovich, Mehran Kianinia
2023-01-18T04:49:07Z
http://arxiv.org/abs/2301.07304v1
# Coupling spin defects in hexagonal boron nitride to a microwave cavity

###### Abstract

Optically addressable spin defects in hexagonal boron nitride (hBN) have become a promising platform for quantum sensing. While the sensitivity of these defects is limited by their interactions with the spin environment in hBN, inefficient microwave delivery can further reduce their sensitivity. Here, we design and fabricate a microwave double arc resonator for efficient delivery of the microwave field at 3.8 GHz. The spin transitions in the ground state of \(V_{B}^{-}\) are coupled to the frequency of the microwave cavity, which results in enhanced optically detected magnetic resonance (ODMR) contrast. In addition, the linewidth of the ODMR signal is further reduced, achieving a magnetic field sensitivity as low as 42.4 \(\mu\)T/\(\sqrt{\mathrm{Hz}}\). Our robust and scalable device engineering is promising for future employment of spin defects in hBN for quantum sensing.

quantum sensor, Boron Vacancy, Hexagonal boron nitride, Optically detected magnetic resonance.

## 1 Introduction

Optically active spin defects constitute the main quantum hardware for applications in sensing and communication technologies[1-5]. Among existing solid-state materials, hexagonal boron nitride, a wide band gap two-dimensional material, has been shown to host a variety of spin defects at room temperature[6-8]. Recently, a new class of spin defects, namely the negatively charged boron vacancy \(V_{B}^{-}\) defects in hexagonal boron nitride (hBN), has emerged as a promising candidate for quantum sensing[6, 9]. The \(V_{B}^{-}\) emits at \(\sim\) 810 nm, and has a ground state spin transition at \(\sim\) 3.5 GHz. Coherent control of the spin state as well as preliminary imaging experiments were demonstrated, as a proof of principle to utilize this defect as a quantum sensor[9-15]. 
Furthermore, the superior properties of layered materials in achieving a precise thickness of the host hBN, and hence controlling the distance between the quantum sensor and the sample, have brought much attention and excitement to the community, with the possibility of performing quantum sensing in these unexplored regimes. With these fundamental attributes, the \(V_{B}^{-}\) defects have the potential to become an important tool to study physical properties of emerging 2D materials, devices and heterostructures[9, 10, 16]. To deliver the microwave fields necessary to control and manipulate the spin state, metal waveguides such as gold stripes are commonly used[12, 13, 17]. While microwave delivery can be easily achieved by transferring a hBN flake on top of a metal stripe, this is an inefficient, lossy way to deliver the microwaves. In an alternative approach, one can design and engineer a microwave cavity that resonates with the transitions of the defect spin states[18-23]. Such a cavity is poised to enhance ODMR contrast and prevent microwave power broadening, thus enhancing the spin sensitivity. In turn, the improvement of the ODMR contrast and sensitivity of \(V_{B}^{-}\) defects in hBN is highly sought after for magnetic, thermal and pressure sensing applications. In this work, we effectively facilitate the process of designing and fabricating a microwave resonator that matches the upper transition of \(V_{B}^{-}\), from \(|+1\rangle\) to \(|0\rangle\) in the ground level. The microwave resonators were engineered on low-cost printed circuit boards (PCBs) and can easily be tuned by changing the inner arc radius. Our results show an improvement in the ODMR contrast and the magnetic field detection sensitivity of the hybrid structure, paving the way to integrating a microwave cavity with spin defects in hBN for ultra-high-sensitivity quantum sensing. 
To create the \(V_{B}^{-}\) defects, we have used a low-energy nitrogen ion beam at 30 keV, as shown in figure 1a. First, hBN flakes were exfoliated on a clean silicon substrate with a thin layer of thermal oxide and further cleaned in a UV ozone chamber for 15 minutes to remove any organic residuals from the surface. During the implantation, the nitrogen ion fluence was maintained at \(2\times 10^{16}\) ions\(\cdot\)cm\({}^{-2}\) with an ion current of 21.9 pA[24, 25]. The \(V_{B}^{-}\) defects are a spin-1 system with a ground-state triplet in which the m\({}_{\mathrm{s}}=0\) and m\({}_{\mathrm{s}}=\pm 1\) states are separated by \(\sim\)3.47 GHz, as shown schematically in figure 1b. The degeneracy of the latter states is lifted even at zero external magnetic field. This is evident from the two distinct resonances, \(\nu_{1,2}\), in the ODMR spectra of the \(V_{B}^{-}\) defect. Under an external magnetic field (\(B\)), the resonant frequencies are given by \(\nu_{1,2}=D_{gs}/h\pm(1/h)\sqrt{E_{gs}^{2}+(g\mu_{B}B)^{2}}\), where \(D_{gs}\) and \(E_{gs}\) are zero-field splitting parameters, \(g\) is the Landé factor, \(\mu_{B}\) is the Bohr magneton and \(h\) is Planck's constant. Without an external magnetic field, the two resonant frequencies are split by only about \(E_{gs}/h\approx 50\) MHz, which could adversely affect the characterization of the microwave resonator and the \(V_{B}^{-}\) defects. Therefore, we intentionally designed the resonant frequency of the resonator, \(\omega_{\mathrm{c}}\), at \(\approx 3.8\) GHz and tuned the upper resonant frequency, \(\nu_{2}\), to match \(\omega_{\mathrm{c}}\) with an external magnetic field (Figure 1b). To confirm the successful generation of the \(V_{B}^{-}\) defects, a confocal microscopy characterization was carried out to detect the photoluminescence (PL) emission of the defects centered at \(\sim\)800 nm (figure 1c). 
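The field scale needed to tune the upper transition onto the cavity can be estimated by inverting the expression for \(\nu_{2}\). This is an illustrative back-of-the-envelope estimate only; \(D_{gs}/h=3.47\) GHz, \(E_{gs}/h=50\) MHz, \(g=2\) and the 3.8 GHz target are taken from (or assumed consistent with) the text:

```python
from math import sqrt

h = 6.62607015e-34        # Planck constant, J s
mu_B = 9.2740100783e-24   # Bohr magneton, J/T
g = 2.0                   # assumed Lande factor

D = 3.47e9                # D_gs / h, Hz
E = 50e6                  # E_gs / h, Hz (assumed)
target = 3.8e9            # cavity frequency, Hz

# nu_2 = D/h + (1/h) * sqrt(E_gs^2 + (g mu_B B)^2)  =>  solve for B
root = target - D
B = h * sqrt(root**2 - E**2) / (g * mu_B)
print(round(B * 1e3, 1), "mT")   # roughly 12 mT, consistent with the ~10 mT used later
```

The result agrees with the order of magnitude of the external field reported below for bringing \(\nu_{2}\) onto the cavity mode.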
Next, the hBN flake was transferred onto the fabricated microwave cavity, which resonates with the transition between m\({}_{\mathrm{s}}=+1\) and m\({}_{\mathrm{s}}=0\) in the ground state of \(V_{B}^{-}\). Figure 1: Spin defects in hBN. a) Creation of V\({}_{\mathrm{B}}^{-}\) by nitrogen (N) ion beam implantation into the hBN lattice. b) Electronic level structure of V\({}_{\mathrm{B}}^{-}\) in hBN with zero-field splitting at ground state D\({}_{\mathrm{gs}}=3.48\) GHz. By applying an external magnetic field, the ground state spin triplet is split so that the transition between m\({}_{\mathrm{s}}=+1\) and m\({}_{\mathrm{s}}=0\) matches the resonance frequency of the microwave cavity (\(\omega_{\mathrm{c}}\)). c) Photoluminescence spectrum from representative V\({}_{\mathrm{B}}^{-}\) defects in hBN. Inset, the scanning electron microscope (SEM) image of a hBN flake after N ion beam irradiation on areas of 50 \(\times\) 50 \(\upmu\)m\({}^{2}\) (scale bar: 50 \(\upmu\)m). Figure 2a shows the double arc resonator on a PCB (Rogers 4350B substrate) with a compact size (\(\sim\) 28 \(\times\) 15 mm\({}^{2}\)). The resonator consists of two arcs separated by a small gap. The microwave signal is delivered through a standard 50 \(\Omega\) microstripe line, which couples capacitively to the double gap-arc across a small distance. The design was inspired by the proposal from Shamonin et al. [18]. To further characterize the design and fine-tune the resonant frequency, electromagnetic numerical simulations (CST Studio Suite) were used. Figure 2b shows the simulated magnetic field strength distribution at the frequency of 3.78 GHz without any loads. The magnetic field strength is uniform at the center of the double gap-arc; however, it concentrates along the rims of the outer arc. 
The resonant frequency of the resonator was calculated from the simulated return loss (S\({}_{11}\)) of the resonator, as shown in Figure 2c, and confirmed experimentally by measuring the transmission of the resonator. The measured resonance has a marginally lower Q factor of \(\sim\) 65 and a resonant frequency of \(\sim\) 3.8 GHz. These differences are due to the capacitive coupling being modified by a metal holder used to mount the resonator to a piezo scanning stage. The design parameters of the resonator are given in Table 1. The resonance frequency of the resonators can be tuned by modifying the radius of the inner arc \(r_{1}\). Seven resonators with different inner radii were fabricated and measured to identify the resonance frequency. The correlation between the resonance frequencies and the inner radii is shown in Figure 2d. This result shows a linear dependence between the resonance frequency and the inner radius with a slope of \(-\)0.63 GHz/mm. Figure 2: Microwave resonator characterizations. a) A double arc resonator on a PCB. b) Simulated magnetic field strength distribution of the resonator with the intensity color bar on the right. c) Simulated (orange) and measured (blue) return loss (S11) of a resonator with geometrical parameters listed in the text. d) Resonance frequency of the double arc resonator as a function of the radius of the inner arc, together with the 2D design of the resonator. r\({}_{1}\) and r\({}_{2}\) are the radii of the inner and outer arcs, respectively; g\({}_{1}\) and g\({}_{2}\) are the cut widths of the inner and outer arcs, respectively; g\({}_{\mathrm{t}}\) is the distance from the transmission line to the outer arc. To couple the \(V_{B}^{-}\) defects to the resonators, the hBN flake containing the defects was transferred directly onto the PCB, in the region where the microwave field is homogeneous. 
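The reported linear trend gives a quick way to estimate the inner radius needed for a different target cavity frequency. The sketch below assumes the fit extrapolates linearly through the \(r_{1}\approx 2.79\) mm, \(3.78\) GHz design point with the reported slope of \(-0.63\) GHz/mm (an assumption for illustration, not a claim from the paper):

```python
def inner_radius_mm(f_target_ghz, f0=3.78, r0=2.79, slope=-0.63):
    """Invert the linear fit f(r1) = f0 + slope * (r1 - r0)."""
    return r0 + (f_target_ghz - f0) / slope

print(round(inner_radius_mm(3.5), 2), "mm")  # ~3.23 mm for a 3.5 GHz cavity
```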
To shift the upper resonant frequency, \(\nu_{2}\), of the \(V_{B}^{-}\) defects to the cavity mode, a small magnetic field was applied perpendicular to the flake. Figure 3a shows ODMR spectra of \(V_{B}^{-}\) in the absence of an external magnetic field (blue) and when the signal is brought to the resonance of the microwave cavity under \(\sim\) 10 mT of external magnetic field. The ODMR contrast increases to \(\sim\) 7% when the cavity resonance is employed, compared with a pristine contrast of \(\sim\) 1.7%, under the same microwave power (15 dBm). To further corroborate the coupling of the \(V_{B}^{-}\) defects to the microwave resonator, we perform ODMR measurements under various microwave powers. Figure 3(b, c) show the ODMR contrast and linewidth at different microwave powers. All experiments were carried out under laser excitation with a power of 2 mW. The ODMR contrast increases significantly when the B field is applied to tune the ODMR resonance to the cavity mode. Notably, even under very low microwave powers (\(\sim\) -20 dBm), where the signal is not detectable at zero magnetic field, detection becomes feasible when the ODMR signal is enhanced by the microwave cavity. At the maximum microwave power of 15 dBm, the ODMR contrast increases about 3.5 times, accompanied by a \(\sim\) 20% linewidth reduction, when the magnetic field is applied. \begin{table} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{Parameters} \\ \hline \(\varepsilon_{r}\sim 3.48\) & \(r_{1}\sim 2.79\) mm & \(g_{t}\sim 101\) μm \\ \hline \(t_{Cu}\sim 89\) μm & \(r_{2}\sim 4.55\) mm & \(g_{2}\sim 9.1\) mm \\ \hline \multicolumn{3}{|l|}{\(\varepsilon_{r}\), \(t_{Cu}\), tan(\(\sigma\)), \(w\), \(g_{t}\), \(r_{1}\), \(r_{2}\), \(g_{1}\), and \(g_{2}\) are the dielectric constant of the PCB, the thickness of the copper layer on the PCB, the loss tangent of the PCB material, the width of the copper traces, the coupling gap, the inner radius, the outer radius, and the cut widths of the inner and outer arcs, respectively.} \\ \hline \end{tabular} \end{table} Table 1: Design parameters for the resonator with Q factor of \(\sim\) 90 and resonant frequency of 3.78 GHz. With the enhancement of ODMR contrast and the reduction of linewidth broadening under high microwave power, the magnetic field sensitivity is expected to improve. The magnetic field sensitivity is defined by the ODMR contrast \(C\), the average photon count rate \(R\) and the linewidth \(\Delta\nu\) through \[\eta_{B}\approx P_{F}\frac{h}{g\mu_{B}}\frac{\Delta\nu}{C\sqrt{R}}\] where \(P_{F}\) is a numerical parameter related to the lineshape profile. In our case, \(P_{F}\approx 0.7\) for a Gaussian profile. Given the measured photon count rates, ODMR contrasts and linewidths, the magnetic field sensitivities can be calculated as shown in Figure 3d. We observed an improvement of about a factor of \(\sim\) 5 in magnetic field sensitivity, which reaches \(\sim\)42.4 \(\upmu\)T/\(\sqrt{\mathrm{Hz}}\) with the microwave resonator. To summarize, we have demonstrated the coupling of spin defects (\(V_{B}^{-}\)) in hBN to a microwave resonator. A higher ODMR contrast (\(\sim\) 6.8%) and a narrower linewidth (\(\sim\) 104 MHz) were achieved due to the coupling between the microwave resonator and the upper resonant frequency of the \(V_{B}^{-}\) under an external magnetic field. Furthermore, the magnetic field sensitivity can be reduced to as low as 42.4 \(\upmu\)T/\(\sqrt{\mathrm{Hz}}\), making it appropriate for detection of small magnetic fields. 
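As a sanity check on the sensitivity expression, one can plug in representative numbers. The contrast and linewidth below are the reported values; the photon count rate \(R\) is a hypothetical placeholder (it is not stated in this excerpt), chosen at a level typical for \(V_{B}^{-}\) ensembles:

```python
from math import sqrt

h = 6.62607015e-34        # Planck constant, J s
mu_B = 9.2740100783e-24   # Bohr magneton, J/T
g = 2.0                   # assumed Lande factor

P_F = 0.7                 # Gaussian lineshape factor (from the text)
C = 0.068                 # ODMR contrast, ~6.8%
dnu = 104e6               # linewidth, Hz
R = 8.1e5                 # photon count rate, counts/s (assumed, NOT from the text)

eta_B = P_F * (h / (g * mu_B)) * dnu / (C * sqrt(R))
print(round(eta_B * 1e6, 1), "uT/sqrt(Hz)")  # ~42.5 with these assumed inputs
```

With a count rate of this order, the formula reproduces a sensitivity in the tens of \(\upmu\)T/\(\sqrt{\mathrm{Hz}}\), the same scale as reported.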
Figure 3: ODMR comparison with (orange) and without (blue) an external magnetic field. a) ODMR spectrum with and without an external magnetic field at the same microwave power. b) ODMR contrasts, c) Linewidths, and d) Magnetic field sensitivity at different microwave powers, with the inset showing a zoom-in of the sensitivity from 0 to 15 dBm when an external magnetic field is applied. This result can be further improved by using different ion irradiation schemes, with the possibility of reaching sensitivities in the range of nT/\(\sqrt{\mathrm{Hz}}\) for \(V_{B}^{-}\) in hBN. This improvement in sensitivity would pave the way for quantum sensing applications using spin defects in the layered material hBN. This work is supported by the Australian Research Council (CE200100010, FT220100053) and the Office of Naval Research Global (N62909-22-1-2028).
2310.07445
The Alon-Tarsi number of $K_{3,3}$-minor-free graphs
The well known Wagner's theorem states that a graph is a planar graph if and only if it is $K_5$-minor-free and $K_{3,3}$-minor-free. Denote by $AT(G)$ the Alon-Tarsi number of a graph $G$. We show that for any $K_{3,3}$-minor-free graph $G$, $AT(G)\le 5$, and that there exist a matching $M$ and a forest $F$ such that $AT(G-M)\le 4$ and $AT(G-E(F))\le 3$, extending the result on the Alon-Tarsi number of $K_5$-minor-free graphs due to Abe, Kim and Ozeki.
Leyou Xu, Bo Zhou
2023-10-11T12:48:32Z
http://arxiv.org/abs/2310.07445v2
# The Alon-Tarsi number of \(K_{3,3}\)-minor-free graphs

###### Abstract

The well known Wagner's theorem states that a graph is a planar graph if and only if it is \(K_{5}\)-minor-free and \(K_{3,3}\)-minor-free. Denote by \(AT(G)\) the Alon-Tarsi number of a graph \(G\). We show that for any \(K_{3,3}\)-minor-free graph \(G\), \(AT(G)\leq 5\), and that there exist a matching \(M\) and a forest \(F\) such that \(AT(G-M)\leq 4\) and \(AT(G-E(F))\leq 3\), extending the result on the Alon-Tarsi number of \(K_{5}\)-minor-free graphs due to Abe, Kim and Ozeki.

**Keywords:** Alon-Tarsi number, \(K_{3,3}\)-minor-free graph, planar graph

**AMS Classifications:** 05C15

## 1 Introduction

We consider simple, finite and undirected graphs. For a graph \(G\), denote by \(V(G)\) the vertex set and \(E(G)\) the edge set of \(G\). A graph \(H\) is a minor of a connected graph \(G\) if we may obtain \(H\) from \(G\) by repeatedly deleting vertices and edges and contracting edges. A graph \(G\) is \(H\)-minor-free if \(H\) is not a minor of \(G\). For \(\emptyset\neq V_{1}\subseteq V(G)\), \(G[V_{1}]\) denotes the subgraph of \(G\) induced by \(V_{1}\). Denote by \(K_{n}\) the complete graph on \(n\) vertices and \(K_{s,t}\) the complete bipartite graph with \(s\) and \(t\) vertices in its color classes, respectively. A clique \(S\) of a graph \(G\) is a nonempty subset of \(V(G)\) such that \(G[S]\) is complete. Let \(G\) be a graph and \(D\) an orientation of \(G\). For \(v\in V(G)\), denote by \(d_{D}^{+}(v)\) (\(d_{D}^{-}(v)\), respectively) the out-degree (in-degree, respectively) of \(v\) in \(D\). The maximum out-degree of \(D\) is denoted by \(\Delta^{+}(D)\). An Eulerian subdigraph (or circulation) \(H\) of \(D\) is a spanning subdigraph of \(D\) such that \(d_{H}^{+}(v)=d_{H}^{-}(v)\) for each vertex \(v\in V(G)\). Denote by \(EE(D)\) (\(OE(D)\), respectively) the set of all Eulerian subdigraphs of \(D\) with an even (odd, respectively) number of edges. 
We say that \(D\) is acyclic if \(D\) does not contain any directed cycle. Using the Combinatorial Nullstellensatz, Alon and Tarsi [2] obtained a remarkable relationship between a special orientation of a graph and a certain graph polynomial, which is known as the Alon-Tarsi Theorem. The Alon-Tarsi number \(AT(G)\) of a graph \(G\), introduced by Jensen and Toft [8], is the minimum integer \(k\) such that there exists an orientation \(D\) of \(G\) with \(|EE(D)|\neq|OE(D)|\) and \(\Delta^{+}(D)<k\). A coloring of a graph \(G\) is a mapping \(c:V(G)\rightarrow\mathbb{R}\). For \(d\geq 0\), a \(d\)-defective coloring of \(G\) is a coloring such that each color class induces a subgraph of maximum degree at most \(d\). Particularly, a \(0\)-defective coloring is said to be proper. A graph is \(k\)-colorable if there exists a proper coloring \(c\) with \(|c(V(G))|\leq k\). The chromatic number \(\chi(G)\) of \(G\) is the least \(k\) such that \(G\) is \(k\)-colorable. A list assignment of \(G\) is a function \(L\) assigning to every vertex \(v\in V(G)\) a set \(L(v)\subset\mathbb{R}\). Given a list assignment \(L\), a \(d\)-defective-\(L\)-coloring of \(G\) is a \(d\)-defective coloring \(c\) such that \(c(v)\in L(v)\) for each vertex \(v\in V(G)\). A graph \(G\) is \(d\)-defective \(k\)-choosable if there exists a \(d\)-defective-\(L\)-coloring of \(G\) for each assignment \(L\) with \(|L(v)|\geq k\) for every \(v\in V(G)\). Particularly, we say \(G\) is \(k\)-choosable if \(G\) is \(0\)-defective \(k\)-choosable. The list chromatic number \(\chi_{\ell}(G)\) of \(G\) is the least integer \(k\) such that \(G\) is \(k\)-choosable. By the Alon-Tarsi Theorem [2], \(\chi(G)\leq\chi_{\ell}(G)\leq AT(G)\) for any \(G\). The famous Four-Color Theorem states that for any planar graph \(G\), \(\chi(G)\leq 4\). Vizing [13] asked whether every planar graph is \(5\)-choosable. 
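The parity count in the definition of \(AT(G)\) can be checked by brute force on small digraphs. The following illustrative sketch (not from the paper) enumerates all spanning Eulerian subdigraphs of an orientation \(D\) and returns \(|EE(D)|-|OE(D)|\); for the directed 4-cycle every vertex has out-degree 1 and the difference is nonzero, witnessing \(AT(C_{4})\leq 2\):

```python
from itertools import combinations

def eulerian_parity_difference(vertices, arcs):
    """|EE(D)| - |OE(D)| over all spanning Eulerian subdigraphs of D,
    where D has the given vertex set and arc list (u, v) meaning u -> v."""
    diff = 0
    for r in range(len(arcs) + 1):
        for sub in combinations(arcs, r):
            indeg = {v: 0 for v in vertices}
            outdeg = {v: 0 for v in vertices}
            for u, v in sub:
                outdeg[u] += 1
                indeg[v] += 1
            if all(indeg[v] == outdeg[v] for v in vertices):
                diff += 1 if r % 2 == 0 else -1
    return diff

# directed 4-cycle: the empty subdigraph and the whole cycle are Eulerian
V = [1, 2, 3, 4]
D = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(eulerian_parity_difference(V, D))  # 2
```

For an acyclic orientation the empty subdigraph is the only Eulerian subdigraph, so the difference is always 1; this is the reason acyclic orientations (as in Lemma 1 below) are automatically AT-orientations.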
Erdos, Rubin and Taylor [5] conjectured that every planar graph is \(5\)-choosable but not necessarily \(4\)-choosable. Thomassen [11] confirmed the former and Voigt [12] confirmed the latter. Cushing and Kierstead [3] proved that every planar graph is \(1\)-defective \(4\)-choosable. Let \(G\) be a planar graph. Zhu [16] improved Thomassen's upper bound by proving that \(AT(G)\leq 5\), solving a natural open problem [7]. Grytczuk and Zhu [6] showed that there exists a matching \(M\) of \(G\) such that \(AT(G-M)\leq 4\). Furthermore, Kim, Kim and Zhu [9] showed that there exists a forest \(F\) in \(G\) such that \(AT(G-E(F))\leq 3\). By the well known Wagner's theorem [14], a graph is a planar graph if and only if it is \(K_{5}\)-minor-free and \(K_{3,3}\)-minor-free. Recently, Abe, Kim and Ozeki [1] extended the above results on the Alon-Tarsi number from planar graphs to \(K_{5}\)-minor-free graphs. **Theorem 1**.: _[_1_, Theorem 1.6, Corollary 1.8]_ _Let \(G\) be a \(K_{5}\)-minor-free graph. The following statements are true._ 1. \(AT(G)\leq 5\)_._ 2. _There exists a matching_ \(M\) _of_ \(G\) _such that_ \(AT(G-M)\leq 4\)_._ 3. _There exists a forest_ \(F\) _in_ \(G\) _such that_ \(AT(G-E(F))\leq 3\)_._ In this note, we extend these results in another direction to \(K_{3,3}\)-minor-free graphs. **Theorem 2**.: _Let \(G\) be a \(K_{3,3}\)-minor-free graph. Then the following results hold._ 1. \(AT(G)\leq 5\) _._ 2. _There exists a matching_ \(M\) _of_ \(G\) _such that_ \(AT(G-M)\leq 4\)_._ 3. _There exists a forest_ \(F\) _of_ \(G\) _such that_ \(AT(G-E(F))\leq 3\)_._ As mentioned above, there exist planar graphs which are not \(4\)-choosable [12], which are surely \(K_{3,3}\)-minor-free, so the bound in Theorem 2 (i) is tight. Theorem 2 (ii) implies that a \(K_{3,3}\)-minor-free graph is \(1\)-defective \(4\)-choosable. **Corollary 1**.: _[_15_]_ _Let \(G\) be a \(K_{3,3}\)-minor-free graph. 
Then \(\chi_{\ell}(G)\leq 5\)._ The study of list coloring of \(K_{s,t}\)-minor-free graphs has received much attention [10, 15]. Steiner [10] proved that for every pair of constants \(\epsilon>0\) and \(C>1\), there exists a positive integer \(N=N(\epsilon,C)\) such that for all integers \(s\) and \(t\) satisfying \(N\leq s\leq t\leq Cs\), there exists a \(K_{s,t}\)-minor-free graph \(G\) such that \(\chi_{\ell}(G)>(1-\epsilon)(2s+t)\). For any such graph \(G\), \(AT(G)>(1-\epsilon)(2s+t)\). So, in general, it is impossible that \(AT(G)\leq s+t-1\) for a \(K_{s,t}\)-minor-free graph \(G\).

## 2 Proof of Theorem 2

We need the following important result due to Zhu and his coauthors [6, 9, 16]. In a plane graph every face is bounded by a closed walk (not necessarily a cycle), which is called a boundary walk. An orientation \(D\) of a graph \(G\) is said to be an AT-orientation if \(|EE(D)|\neq|OE(D)|\). **Lemma 1**.: _Let \(G\) be a nontrivial plane graph with a boundary walk \(v_{1}\ldots v_{m}\) of the infinite face. The following statements are true._ 1. _[_16_]_ _There exists an AT-orientation_ \(D\) _of_ \(G\) _such that_ \(d_{D}^{+}(v_{1})=0\)_,_ \(d_{D}^{+}(v_{2})=1\)_,_ \(d_{D}^{+}(v_{i})\leq 2\) _for each_ \(i=3,\ldots,m\) _and_ \(\Delta^{+}(D)\leq 4\)_._ 2. _[_6_]_ _There exists a matching_ \(M\) _of_ \(G\) _and an AT-orientation_ \(D\) _of_ \(G-M\) _such that_ \(d_{D}^{+}(v_{1})=d_{D}^{+}(v_{2})=0\)_,_ \(d_{D}^{+}(v_{i})\leq 2-d_{M}(v_{i})\) _for each_ \(i=3,\ldots,m\) _and_ \(\Delta^{+}(D)\leq 3\)_._ 3. _[_9_]_ _There exists a forest_ \(F\) _of_ \(G\) _and an acyclic orientation_ \(D\) _of_ \(G-E(F)\) _such that_ \(d_{D}^{+}(v_{1})=d_{D}^{+}(v_{2})=0\)_,_ \(d_{D}^{+}(v_{i})=1\) _for each_ \(i=3,\ldots,m\) _and_ \(\Delta^{+}(D)\leq 2\)_._ Let \(G_{1}\) and \(G_{2}\) be two vertex disjoint graphs. Suppose that \(X_{i}\subset V(G_{i})\) is a clique of \(G_{i}\) for \(i=1,2\) with \(|X_{1}|=|X_{2}|=k\). Let \(f:X_{1}\to X_{2}\) be a bijection. 
A graph \(G\) obtained from \(G_{1}\) and \(G_{2}\) by identifying \(x\) and \(f(x)\) for every \(x\in X_{1}\) and possibly deleting some edges of the clique is called a \(k\)-clique-sum of \(G_{1}\) and \(G_{2}\). Evidently, a \(0\)-sum of \(G_{1}\) and \(G_{2}\) is \(G_{1}\cup G_{2}\). **Lemma 2**.: _[_14_]_ _A graph \(G\) is \(K_{3,3}\)-minor-free if and only if \(G\) is a planar graph or \(K_{5}\), or \(G\) can be obtained from planar graphs and \(K_{5}\) by \(0\)-, \(1\)-, and \(2\)-sums._ **Lemma 3**.: _Let \(G\) be a graph obtained by the \(k\)-clique-sum of \(G_{1}\) and \(G_{2}\), where \(k\geq 1\). Let \(K\) be their common clique with \(K=\{v_{1},\ldots,v_{k}\}\). Let \(G^{\prime}_{i}=G_{i}\cap G\) for \(i=1,2\). Suppose that \(G^{\prime}_{1}\) has an AT-orientation \(D^{\prime}_{1}\) with \(\Delta^{+}(D^{\prime}_{1})\leq\ell\) and that \(G_{2}\) has an AT-orientation \(D_{2}\) with \(\Delta^{+}(D_{2})\leq\ell\) and \(d^{+}_{D_{2}}(v_{i})=i-1\) for \(i=1,\ldots,k\). Then \(G\) has an AT-orientation \(D\) such that \(\Delta^{+}(D)\leq\ell\) and \(d^{+}_{D}(v)=d^{+}_{D^{\prime}_{1}}(v)\) for each \(v\in V(G_{1})\)._ Proof.: Let \(D^{\prime}_{2}\) be the restriction of \(D_{2}\) to \(G_{2}-E(G[K])\). Then \(D=D^{\prime}_{1}\cup D^{\prime}_{2}\) is an orientation of \(G\). As \(d^{+}_{D_{2}}(v_{i})=i-1\) for \(i=1,\ldots,k\) and \(D_{2}[K]\) is a tournament, \(D_{2}[K]\) is transitive and there is no arc from \(K\) to \(V(G_{2})\setminus K\) in \(D_{2}\). Then \(d^{+}_{D^{\prime}_{2}}(u)=0\) for \(u\in K\). So \(d^{+}_{D}(v)=d^{+}_{D^{\prime}_{1}}(v)\) for each \(v\in V(G_{1})\). Note that \(d^{+}_{D}(v)=d^{+}_{D^{\prime}_{2}}(v)=d^{+}_{D_{2}}(v)\) for each \(v\in V(G_{2})\setminus K\). Thus \(\Delta^{+}(D)=\max\{\Delta^{+}(D^{\prime}_{1}),\Delta^{+}(D^{\prime}_{2})\}\leq\ell\). We are left to show that \(D\) is an AT-orientation of \(G\). Let \(H\) be an Eulerian subdigraph of \(D\) and \(H_{1}=H[V(G_{1})]\).
**Claim 1**.: \(H\) contains no arcs from \(V(G_{2})\setminus K\) to \(K\). Proof.: If there exist a vertex \(v\in K\) and a vertex \(w\in V(G_{2})\setminus K\) such that \((w,v)\) is an arc of \(H\), then \(d^{-}_{H_{1}}(v)<d^{-}_{H}(v)\). As there are no arcs from \(K\) to \(V(G_{2})\setminus K\) in \(D_{2}\), we have \(d^{+}_{H_{1}}(u)=d^{+}_{H}(u)\) for each \(u\in K\). Thus \(d^{+}_{H_{1}}(u)=d^{+}_{H}(u)\) for each \(u\in V(G_{1})\), so \[\sum_{u\in V(H_{1})}d^{+}_{H}(u)=\sum_{u\in V(H_{1})}d^{+}_{H_{1}}(u)=|A(H_{1})|=\sum_{u\in V(H_{1})}d^{-}_{H_{1}}(u)<\sum_{u\in V(H_{1})}d^{-}_{H}(u),\] a contradiction. By Claim 1, \(H\) contains no arcs either from \(K\) to \(V(G_{2})\setminus K\) or from \(V(G_{2})\setminus K\) to \(K\). Therefore, \(H\) has an edge-disjoint decomposition \(H=H_{1}\cup H_{2}\), where \(H_{1}\) and \(H_{2}\) are Eulerian subdigraphs of \(D^{\prime}_{1}\) and \(D^{\prime}_{2}\), respectively. If \(H\in EE(D)\), then either \(H_{1}\in EE(D^{\prime}_{1})\) and \(H_{2}\in EE(D^{\prime}_{2})\) or \(H_{1}\in OE(D^{\prime}_{1})\) and \(H_{2}\in OE(D^{\prime}_{2})\). On the other hand, \(H^{\prime}_{1}\cup H^{\prime}_{2}\in EE(D)\) for any \(H^{\prime}_{i}\in EE(D^{\prime}_{i})\) with \(i=1,2\) or \(H^{\prime}_{i}\in OE(D^{\prime}_{i})\) with \(i=1,2\). Thus, there is a bijection between \(EE(D)\) and \((EE(D^{\prime}_{1})\times EE(D^{\prime}_{2}))\cup(OE(D^{\prime}_{1})\times OE(D^{\prime}_{2}))\). Similarly, there is a bijection between \(OE(D)\) and \((EE(D^{\prime}_{1})\times OE(D^{\prime}_{2}))\cup(OE(D^{\prime}_{1})\times EE(D^{\prime}_{2}))\). Note that any Eulerian subdigraph of \(D_{2}\) contains no arc incident to vertices in \(K\) as \(d^{+}_{D_{2}}(v_{i})=i-1\) for \(i=1,\ldots,k\). Thus \(EE(D^{\prime}_{2})=EE(D_{2})\) and \(OE(D^{\prime}_{2})=OE(D_{2})\). Recall that \(D^{\prime}_{1}\) and \(D_{2}\) are AT-orientations. Thus \(|EE(D^{\prime}_{1})|-|OE(D^{\prime}_{1})|\neq 0\) and \(|EE(D^{\prime}_{2})|-|OE(D^{\prime}_{2})|\neq 0\).
It hence follows that \[|EE(D)|-|OE(D)| =(|EE(D^{\prime}_{1})|\times|EE(D^{\prime}_{2})|+|OE(D^{\prime}_{1})|\times|OE(D^{\prime}_{2})|)\] \[\quad-(|EE(D^{\prime}_{1})|\times|OE(D^{\prime}_{2})|+|OE(D^{\prime}_{1})|\times|EE(D^{\prime}_{2})|)\] \[=(|EE(D^{\prime}_{1})|-|OE(D^{\prime}_{1})|)\,(|EE(D^{\prime}_{2})|-|OE(D^{\prime}_{2})|)\] \[\neq 0,\] which implies that \(D\) is an AT-orientation of \(G\). Now we are ready to prove Theorem 2. Proof of Theorem 2.: It suffices to show that for each \(uv\in E(G)\), we have (a) There exists an AT-orientation \(D\) of \(G\) such that \(\Delta^{+}(D)\leq 4\), \(d_{D}^{+}(u)=0\) and \(d_{D}^{+}(v)=1\). (b) There exists a matching \(M\) of \(G\) and an AT-orientation \(D\) of \(G-M\) such that \(\Delta^{+}(D)\leq 3\) and \(d_{D}^{+}(u)=d_{D}^{+}(v)=0\). (c) There exists a forest \(F\) of \(G\) and an acyclic orientation \(D\) of \(G-E(F)\) such that \(\Delta^{+}(D)\leq 2\) and \(d_{D}^{+}(u)=d_{D}^{+}(v)=0\). If \(G\) is a planar graph, then we may assume that \(G\) is a plane graph such that the edge \(uv\) lies on the boundary of the infinite face, so Item (a) ((b), (c), respectively) follows from (a) ((b), (c), respectively) of Lemma 1. Suppose that \(G\cong K_{5}\). Let \(V(G)=\{v_{1},\ldots,v_{5}\}\) with \(u=v_{1},v=v_{2}\). Let \(D\) be the orientation of \(G\) such that \((v_{i},v_{j})\) is an arc of \(D\) if and only if \(i>j\). It is obvious that \(d_{D}^{+}(v_{1})=0\), \(d_{D}^{+}(v_{2})=1\) and \(\Delta^{+}(D)=4\). As \(D\) is an acyclic orientation, it is an AT-orientation. So (a) follows. Let \(M=\{v_{1}v_{2},v_{4}v_{5}\}\) and let \(D\) be the orientation of \(G-M\) such that \((v_{i},v_{j})\) is an arc of \(D\) if and only if \(i>j\). Then \(d_{D}^{+}(v_{1})=d_{D}^{+}(v_{2})=0\) and \(\Delta^{+}(D)=3\). As \(D\) is an acyclic orientation, it is an AT-orientation. So (b) follows.
Let \(F\) be a forest with \(E(F)=\{v_{1}v_{2},v_{3}v_{5},v_{2}v_{4},v_{4}v_{5}\}\) and let \(D\) be the orientation of \(G-E(F)\) such that \((v_{i},v_{j})\) is an arc of \(D\) if and only if \(i>j\). Then \(d_{D}^{+}(v_{1})=d_{D}^{+}(v_{2})=0\) and \(\Delta^{+}(D)=2\). It is easy to see that \(D\) is acyclic, so (c) follows. Suppose that \(G\) is not planar and \(G\ncong K_{5}\). Suppose for contradiction that \(G\) is a minimal counterexample with respect to the order. Then \(G\) is connected by the minimality of \(G\). As \(G\) is \(K_{3,3}\)-minor-free, we have by Lemma 2 that there exist two \(K_{3,3}\)-minor-free graphs \(G_{1}\) and \(G_{2}\) such that \(G\) is a \(k\)-clique-sum of \(G_{1}\) and \(G_{2}\) for some \(k\in\{1,2\}\). Let \(K\) be the common clique of \(G_{1}\) and \(G_{2}\) with \(K=\{x\}\) if \(k=1\) and \(K=\{x,y\}\) if \(k=2\). Let \(G_{i}^{\prime}=G\cap G_{i}\) for \(i=1,2\). Let \(uv\in E(G_{1}^{\prime})\). By the minimality of \(G\), we have (a1) There exists an AT-orientation \(D_{1}\) of \(G_{1}^{\prime}\) such that \(\Delta^{+}(D_{1})\leq 4\), \(d_{D_{1}}^{+}(u)=0\) and \(d_{D_{1}}^{+}(v)=1\). (b1) There exists a matching \(M_{1}\) of \(G_{1}^{\prime}\) and an AT-orientation \(D_{1}\) of \(G_{1}^{\prime}-M_{1}\) such that \(\Delta^{+}(D_{1})\leq 3\) and \(d_{D_{1}}^{+}(u)=d_{D_{1}}^{+}(v)=0\). (c1) There exists a forest \(F_{1}\) of \(G_{1}^{\prime}\) and an acyclic orientation \(D_{1}\) of \(G_{1}^{\prime}-E(F_{1})\) such that \(\Delta^{+}(D_{1})\leq 2\) and \(d_{D_{1}}^{+}(u)=d_{D_{1}}^{+}(v)=0\). Firstly, we show (a). By the minimality of \(G\), there exists an AT-orientation \(D_{2}\) of \(G_{2}\) such that \(\Delta^{+}(D_{2})\leq 4\), \(d_{D_{2}}^{+}(x)=0\) and, if \(k=2\), \(d_{D_{2}}^{+}(y)=1\). So, by (a1) and Lemma 3, \(G\) has an AT-orientation \(D\) such that \(\Delta^{+}(D)\leq 4\), \(d_{D}^{+}(u)=0\), and \(d_{D}^{+}(v)=1\), contradicting the choice of \(G\). Secondly, we show (b).
Denote by \(y\) a neighbor of \(x\) in \(G_{2}\) if \(k=1\). By the minimality of \(G\), there exists a matching \(M_{2}\) of \(G_{2}\) and an AT-orientation \(D_{2}\) of \(G_{2}-M_{2}\) such that \(\Delta^{+}(D_{2})\leq 3\), \(d^{+}_{D_{2}}(x)=0\) and \(d^{+}_{D_{2}}(y)=0\). Then \(xy\in M_{2}\) and neither \(x\) nor \(y\) is \(M_{2}\setminus\{xy\}\)-saturated. Let \(D^{\prime}_{2}\) be an orientation of \(G_{2}-(M_{2}\setminus\{xy\})\) with \(D^{\prime}_{2}=D_{2}\cup\{(y,x)\}\). Then \(d^{+}_{D^{\prime}_{2}}(y)=1\). So \(D^{\prime}_{2}\) is an AT-orientation with \(\Delta^{+}(D^{\prime}_{2})\leq 3\), \(d^{+}_{D^{\prime}_{2}}(x)=0\) and \(d^{+}_{D^{\prime}_{2}}(y)=1\). Let \(M=M_{1}\cup(M_{2}\setminus\{xy\})\). Then \(M\) is a matching of \(G\). By (b1) and Lemma 3, \(G-M\) has an AT-orientation \(D\) such that \(\Delta^{+}(D)\leq 3\) and \(d^{+}_{D}(u)=d^{+}_{D}(v)=0\), contradicting the choice of \(G\). Now we show (c). Denote by \(y\) a neighbor of \(x\) in \(G_{2}\) if \(k=1\). By the minimality of \(G\), there exists a forest \(F_{2}\) of \(G_{2}\) and an acyclic orientation \(D_{2}\) of \(G_{2}-E(F_{2})\) such that \(\Delta^{+}(D_{2})\leq 2\), \(d^{+}_{D_{2}}(x)=0\) and \(d^{+}_{D_{2}}(y)=0\). Then \(xy\in E(F_{2})\). Let \(D^{\prime}_{2}\) be an orientation of \(G_{2}-(E(F_{2})\setminus\{xy\})\) with \(D^{\prime}_{2}=D_{2}\cup\{(y,x)\}\). Then \(d^{+}_{D^{\prime}_{2}}(y)=1\). So \(D^{\prime}_{2}\) is an acyclic orientation with \(\Delta^{+}(D^{\prime}_{2})\leq 2\), \(d^{+}_{D^{\prime}_{2}}(x)=0\) and \(d^{+}_{D^{\prime}_{2}}(y)=1\). Let \(F=F_{1}\cup(F_{2}-xy)\). Then \(F\) is a forest of \(G\). By (c1) and Lemma 3, \(G-E(F)\) has an acyclic orientation \(D\) such that \(\Delta^{+}(D)\leq 2\) and \(d^{+}_{D}(u)=d^{+}_{D}(v)=0\), contradicting the choice of \(G\). A graph \(G\) is \(k\)-degenerate if each subgraph of \(G\) contains a vertex of degree at most \(k\); equivalently, \(G\) has an acyclic orientation \(D\) with \(\Delta^{+}(D)\leq k\).
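The explicit \(K_{5}\) orientations used in the proof above can be checked mechanically. Below is a small Python sketch (the function names are ours, not from the paper) that orients \((v_{i},v_{j})\) whenever \(i>j\), removes the stated matching or forest edges, and verifies the out-degree bounds, acyclicity, and the AT-condition \(|EE(D)|\neq|OE(D)|\) by brute-force enumeration of Eulerian subdigraphs.

```python
from collections import deque
from itertools import combinations

def k5_arcs(removed=()):
    """Arcs of K5 on {1,...,5}: (i, j) whenever i > j, minus removed edges."""
    removed = {frozenset(e) for e in removed}
    return [(i, j) for i in range(1, 6) for j in range(1, 6)
            if i > j and frozenset((i, j)) not in removed]

def out_degrees(n, arcs):
    d = [0] * (n + 1)
    for u, _ in arcs:
        d[u] += 1
    return d[1:]  # out-degrees of v1, ..., vn

def is_acyclic(n, arcs):
    """Kahn's algorithm: the digraph is acyclic iff every vertex gets peeled."""
    indeg = [0] * (n + 1)
    succ = {v: [] for v in range(1, n + 1)}
    for u, v in arcs:
        succ[u].append(v)
        indeg[v] += 1
    queue = deque(v for v in range(1, n + 1) if indeg[v] == 0)
    peeled = 0
    while queue:
        u = queue.popleft()
        peeled += 1
        for w in succ[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return peeled == n

def eulerian_counts(arcs):
    """(|EE(D)|, |OE(D)|): arc subsets with in-degree = out-degree at every
    vertex, split by even/odd number of arcs (feasible only for tiny digraphs)."""
    even = odd = 0
    for r in range(len(arcs) + 1):
        for sub in combinations(arcs, r):
            balance = {}
            for u, v in sub:
                balance[u] = balance.get(u, 0) + 1
                balance[v] = balance.get(v, 0) - 1
            if all(b == 0 for b in balance.values()):
                even, odd = even + (r % 2 == 0), odd + (r % 2 == 1)
    return even, odd

# (a): the transitive tournament on K5.
a = k5_arcs()
assert out_degrees(5, a) == [0, 1, 2, 3, 4] and is_acyclic(5, a)
# (b): remove the matching M = {v1v2, v4v5}.
b = k5_arcs(removed=[(1, 2), (4, 5)])
assert out_degrees(5, b)[:2] == [0, 0] and max(out_degrees(5, b)) == 3
# (c): remove the forest F with E(F) = {v1v2, v3v5, v2v4, v4v5}.
c = k5_arcs(removed=[(1, 2), (3, 5), (2, 4), (4, 5)])
assert out_degrees(5, c)[:2] == [0, 0] and max(out_degrees(5, c)) == 2
# An acyclic orientation has only the empty Eulerian subdigraph, so EE - OE = 1.
assert eulerian_counts(a) == (1, 0) and eulerian_counts(c) == (1, 0)
```

The last check illustrates why "acyclic implies AT-orientation" throughout the proof: any nonempty Eulerian subdigraph would contain a directed cycle, so for an acyclic orientation the only Eulerian subdigraph is the empty one and \(|EE(D)|-|OE(D)|=1\neq 0\).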
**Corollary 2**.: _Let \(G\) be a \(K_{3,3}\)-minor-free graph. Then there exists a forest \(F\) such that \(G-E(F)\) is \(2\)-degenerate._ Proof.: By the proof of Theorem 2, there exist a forest \(F\) and an acyclic orientation \(D\) of \(G-E(F)\) such that \(\Delta^{+}(D)\leq 2\). We may label the vertices of \(G\) such that \((v_{i},v_{j})\) is an arc of \(D\) if and only if \(i>j\). Let \(H\subseteq G-E(F)\) and let \(H^{\prime}\) be the orientation of \(H\) induced by \(D\). Let \(t=\max\{i:v_{i}\in V(H)\}\). Then \(d^{+}_{H^{\prime}}(v_{t})\leq 2\) and \(d^{-}_{H^{\prime}}(v_{t})=0\), so the degree of \(v_{t}\) in \(H\) is at most \(2\). Therefore, the result follows. **Remark 1**.: Theorem 1 may be extended to \(K_{5}^{\perp}\)-minor-free graphs, where \(K_{5}^{\perp}\) is the graph obtained from \(K_{4}\) by adding two adjacent vertices of degree three with no common neighbors. From [4, Lemmas 2.4 and 4.1], a graph \(G\) is \(K_{5}^{\perp}\)-minor-free if and only if \(G\) is a \(K_{5}\)-minor-free graph or \(K_{5}\), or \(G\) can be obtained from \(K_{5}\)-minor-free graphs and \(K_{5}\) by \(0\)-, \(1\)-, and \(2\)-sums. The result follows from a similar argument as in the proof of Theorem 2 (that \(K_{5}\) satisfies (a), (b) and (c) there) and [1, Lemma 3.1]. **Remark 2**.: Theorem 2 may be extended to \((K_{3,3}+e)\)-minor-free graphs. By [4, Lemmas 2.4 and 3.7], a graph \(G\) is \((K_{3,3}+e)\)-minor-free if and only if \(G\) is a \(K_{3,3}\)-minor-free graph or \(K_{3,3}\), or \(G\) can be obtained from \(K_{3,3}\)-minor-free graphs and \(K_{3,3}\) by \(0\)-, \(1\)- and \(2\)-sums. So, by a similar argument as in the proof of Theorem 2, we only need to check the validity of (a)-(c) there if \(G=K_{3,3}\). This is verified as follows: Let \(\{u_{1},u_{2},u_{3}\}\cup\{v_{1},v_{2},v_{3}\}\) be the bipartition of \(G\) with \(u=u_{1}\) and \(v=v_{1}\).
(a) Let \(D\) be the orientation of \(G\) with arc set \(\{(v_{i},u_{j}):i=2,3,j=1,2,3\}\cup\{(v_{1},u_{1}),(u_{2},v_{1}),(u_{3},v_{1})\}\). Then \(d^{+}_{D}(u_{1})=0\), \(d^{+}_{D}(v_{1})=1\) and \(\Delta^{+}(D)=3\). As \(D\) is an acyclic orientation, it is an AT-orientation. (b) Let \(M=\{v_{i}u_{i}:i=1,2,3\}\). Let \(D\) be the orientation of \(G-M\) with arc set \(\{(v_{2},u_{1}),(v_{3},u_{1}),(u_{2},v_{1}),(u_{2},v_{3}),(u_{3},v_{1}),(u_{3},v_{2})\}\). Then \(d^{+}_{D}(u_{1})=d^{+}_{D}(v_{1})=0\) and \(\Delta^{+}(D)=2\). As \(D\) is an acyclic orientation, it is an AT-orientation. (c) Let \(F\) be a forest with \(E(F)=\{u_{1}v_{1},u_{1}v_{2},u_{2}v_{2},u_{2}v_{3},u_{3}v_{3}\}\) and let \(D\) be the orientation of \(G-E(F)\) with arc set \(\{(v_{3},u_{1}),(u_{2},v_{1}),(u_{3},v_{1}),(u_{3},v_{2})\}\). Then \(d_{D}^{+}(u_{1})=d_{D}^{+}(v_{1})=0\) and \(\Delta^{+}(D)=2\). As \(D\) is an acyclic orientation, it is an AT-orientation. ## 3 Future research direction As mentioned above, it is impossible that \(AT(G)\leq s+t-1\) for a \(K_{s,t}\)-minor-free graph \(G\) in general. It is of interest to find a tight upper bound of the Alon-Tarsi number for any \(K_{s,t}\)-minor-free graph. Steiner [10] proposed an open problem: Is it true that for all integers \(1\leq s\leq t\), every \(K_{s,t}\)-minor-free graph \(G\) satisfies \(\chi_{\ell}(G)\leq 2s+t\)? Similarly, one may ask whether it is true that for all integers \(1\leq s\leq t\), every \(K_{s,t}\)-minor-free graph \(G\) satisfies \(AT(G)\leq 2s+t\). **Acknowledgement.** This work was supported by the National Natural Science Foundation of China (No. 12071158).
2302.08567
Coherent feedback control of quantum correlations in cavity magnomechanical system with magnon squeezing
We address a scheme to enhance the quantum correlations in cavity opto-magnomechanical system by using the coherent feedback loop in the presence of magnon squeezing. The proposed coherent feedback-control allows a significant enhancement of the entanglement of three bipartite subsystems, i.e., photon-phonon, photon-magnon and phonon-magnon. We also study the Einstein-Podolsky-Rosen steering and one-way steering in the presence of thermal effects without imposing additional conditions of asymmetric losses or noises in the subsystems. Furthermore, we investigate the sensitiveness of the scheme to the magnon squeezing, and its performance in non-ideal situations in which losses and noises are taken into account.
M. Amazioug, S. K. Singh, B. Teklu, M. Asjad
2023-02-16T20:18:57Z
http://arxiv.org/abs/2302.08567v2
Coherent feedback control of quantum correlations in cavity magnomechanical system with magnon squeezing ###### Abstract We address a scheme to enhance the quantum correlations in cavity opto-magnomechanical system by using the coherent feedback loop in the presence of magnon squeezing. The proposed coherent feedback-control allows a significant enhancement of the entanglement of three bipartite subsystems, i.e., photon-phonon, photon-magnon and phonon-magnon. We also study the Einstein-Podolsky-Rosen steering and one-way steering in the presence of thermal effects without imposing additional conditions of asymmetric losses or noises in the subsystems. Furthermore, we investigate the sensitiveness of the scheme to the magnon squeezing, and its performance in non-ideal situations in which losses and noises are taken into account. _Keywords_ : Cavity magnomechanics, Coherent feedback, Entanglement, Steerability. ## I Introduction Entanglement and Einstein-Podolsky-Rosen (EPR) steering are two quantum resources, which play a crucial role in quantum information processing and communication. Quantum entanglement plays an important role in various applications in quantum information processing, such as quantum teleportation [1], superdense coding [2], telecloning [3] and quantum cryptography [4]. Many schemes have been proposed over the past decades for processing quantum information in various physical platforms, such as spins [5; 6], ions [7; 8; 9; 10], atoms [11; 12; 13; 14; 15; 16], photons [17; 18; 19; 20; 21; 22; 23; 24; 25], and phonons [26; 27; 28]. Moreover, quantum steering is a class of asymmetric quantum correlations stronger than entanglement [29] but weaker than the violation of Bell's inequality [30]. The concept of quantum steering was first introduced by Schrödinger in the context of the EPR paradox [31; 32], and it can be asymmetric (one-way) or symmetric (two-way) [33]. Steering is then a natural resource for one-sided device-independent quantum key distribution [34; 35].
In recent years, magnons, as the quanta of collective spin excitations in yttrium iron garnet (\(\mathrm{Y_{3}Fe_{5}O_{12}}\), YIG), have been of paramount importance due to their high spin density, low damping rate and great tunability. Therefore, cavity magnomechanics has attracted considerable attention and offers a robust platform where a ferrimagnetic crystal (e.g., a yttrium iron garnet (\(YIG\)) sphere) is coupled with a microwave cavity [36; 37]. In cavity magnomechanics, a magnon mode (spin wave) is coupled to a vibrational deformation mode of a ferromagnet (or ferrimagnet) by the magnetostrictive force, and to a microwave cavity mode by the magnetic dipole interaction. The magnetostrictive interaction is a dispersive interaction, similar to radiation pressure, for a large ferromagnet in which the frequency of the mechanical mode is much lower than the magnon frequency [38; 41]. In this paper, we consider the coherent feedback technique [39; 40] to enhance the entanglement and steerability in an opto-magnomechanical system consisting of a cavity containing a (\(YIG\)) sphere with the magnon self-Kerr nonlinearity, as shown in Fig. 1. We find a significant enhancement of quantum correlations via magnon squeezing, which is generated by using the magnon self-Kerr nonlinearity [42; 43]. The magnon self-Kerr nonlinearity [44] can be generated via coupling the magnon mode to a superconducting qubit [45]. We consider the logarithmic negativity [46; 47] to quantify the quantum entanglement of the three bipartite subsystems. The steerability of a subsystem \(A\) by a subsystem \(B\) is used to quantify how strongly the entangled bipartite states are steerable. We discuss the enhancement of nonclassical correlations via the coherent feedback technique in the presence of the magnon self-Kerr nonlinearity.
We show that the feedback technique, in the presence of the magnon self-Kerr nonlinearity and for \(\beta=\pi\), makes the nonclassical correlations very robust to thermal effects. The paper is organized as follows. In Sec. II, we give the explicit expression of the Hamiltonian and the corresponding nonlinear quantum Langevin equations of the system. In Sec. III, we provide the linearized quantum Langevin equations for the system. We present a method in Sec. IV to quantify the entanglement of two-mode continuous-variable (CV) Gaussian states and the Gaussian quantum steering. The results and discussions are given in Sec. V. Concluding remarks are given in Sec. VI. ## II Model We consider a cavity magnomechanical system driven by a single coherent laser source, with a coherent feedback loop on the microwave cavity, as depicted in Fig. 1, where a yttrium iron garnet (\(YIG\)) sphere with a diameter of \(250\,\mu\)m [41] is placed inside the cavity. In this system, the coupling between magnons and cavity photons is due to the magnetic dipole interaction. The magnetostrictive interaction mediates the coupling between magnons and phonons. The variable magnetisation induced by the magnon excitation within the (\(YIG\)) sphere causes the deformation of its geometric structure, which forms the vibrational modes of the sphere, and vice versa [48]. We consider the influence of radiation pressure to be insignificant because the size of the sphere is much smaller than the microwave wavelength.
The Hamiltonian of the system is described by the form (with \(\hbar=1\)) \[\mathcal{H} = \omega_{a}a^{\dagger}a+\omega_{b}b^{\dagger}b+\frac{\omega_{m}}{2}(x^{2}+y^{2})+\xi(b^{\dagger}b)^{2}+g_{Gb}b^{\dagger}bx \tag{1}\] \[+ g_{Ga}(a+a^{\dagger})(b+b^{\dagger})+i\Omega(b^{\dagger}e^{-i\omega_{0}t}-be^{i\omega_{0}t})\] \[+ i\mathcal{E}(a^{\dagger}e^{-i\omega_{0}t}-ae^{i\omega_{0}t}),\] where \(a\) (\(a^{\dagger}\)) and \(b\) (\(b^{\dagger}\)) (\([O,O^{\dagger}]\!=\!1\), \(O\!=\!a,b\)) are the annihilation (creation) operators of the cavity and magnon modes, respectively, \(x\) and \(y\) (\([x,y]\!=\!i\)) are the dimensionless position and momentum quadratures of the mechanical mode, and \(\omega_{a}\), \(\omega_{b}\), and \(\omega_{m}\) are respectively the resonance frequencies of the cavity, magnon and mechanical modes. \(\xi\) is the self-Kerr coefficient. The magnon frequency is determined by the external bias magnetic field \(H\) and the gyromagnetic ratio \(\kappa\), i.e., \(\omega_{b}=\kappa H\). The single-magnon magnomechanical coupling rate \(g_{Gb}\) is small, but the magnomechanical interaction can be improved via driving the magnon mode with a strong microwave field (directly driving the (\(YIG\)) sphere with a microwave source [49; 50]). The coupling rate \(g_{Ga}\) between the magnon and microwave modes can be larger than the dissipation rates \(\gamma_{a}\) and \(\gamma_{b}\) of the cavity and magnon modes, respectively, entering into the strong-coupling regime, \(g_{Ga}>\gamma_{a},\gamma_{b}\). In the frame rotating at the drive frequency \(\omega_{0}\) and applying the rotating-wave approximation (RWA) of the system, \(g_{Ga}(a+a^{\dagger})(b+b^{\dagger})\to g_{Ga}(ab^{\dagger}+a^{\dagger}b)\) (valid when \(\omega_{a},\omega_{b}\gg g_{Ga},\gamma_{a},\gamma_{b}\), which is easily satisfied [41]).
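As a quick numerical sanity check on the magnitudes appearing in this model, the sketch below (variable names are ours) computes the bias field needed for a \(10\) GHz magnon mode via \(\omega_{b}=\kappa H\), the total spin number \(N=\rho V\) for the \(250\)-\(\mu\)m sphere with the spin density quoted in the text, and the equilibrium thermal occupations at \(T=10\) mK from the standard Bose-Einstein formula; the printed values are illustrative, not results of the paper.

```python
import math

hbar, kB = 1.054571817e-34, 1.380649e-23  # SI constants

# YIG sphere: diameter 250 um, spin density rho = 4.22e27 m^-3 (values from the text)
D = 250e-6
rho = 4.22e27
V = (4.0 / 3.0) * math.pi * (D / 2) ** 3
N = rho * V                      # total number of spins, ~3.5e16

# Bias field for omega_b/2pi = 10 GHz, with kappa/2pi = 28 GHz/T
kappa = 2 * math.pi * 28e9
omega_b = 2 * math.pi * 10e9
H = omega_b / kappa              # ~0.36 T

def n_th(omega, T):
    """Bose-Einstein occupation n(omega) = [exp(hbar*omega/(kB*T)) - 1]^-1."""
    return 1.0 / math.expm1(hbar * omega / (kB * T))

omega_m = 2 * math.pi * 10e6
print(f"N = {N:.2e}, H = {H:.3f} T, n_m(10 mK) = {n_th(omega_m, 10e-3):.1f}")
```

At \(10\) mK the \(10\) MHz phonon bath still holds roughly \(20\) thermal quanta, while the \(10\) GHz photon and magnon baths are essentially in vacuum, which is why the mechanical mode dominates the thermal decoherence discussed later.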
The parameter \(\Omega=\frac{\sqrt{5}}{4}\kappa\sqrt{N}B_{0}\) represents the Rabi frequency [51], which describes the coupling strength of the drive magnetic field (with amplitude \(B_{0}\) and frequency \(\omega_{0}\)) with the magnon mode, where \(\kappa/2\pi=28\) GHz/T, and the total number of spins is \(N=\rho V\), with \(V\) the volume of the sphere and \(\rho=4.22\times 10^{27}\) m\({}^{-3}\) the spin density of the (\(YIG\)). The Rabi frequency \(\Omega\) is derived under the assumption of low-lying excitations, \(\langle b^{\dagger}b\rangle\ll 2Ns\), where \(s=\frac{5}{2}\) is the spin number of the ground-state Fe\({}^{3+}\) ion in (\(YIG\)). Then the full dynamics in the presence of coherent feedback and noises is described by the corresponding quantum Langevin equations (QLEs) \[\dot{a} = -(i\Delta_{fb}+\gamma_{fb})a-ig_{Ga}b-\psi\mathcal{E}+\sqrt{2\gamma_{a}}\,a_{fb}^{\rm in},\] \[\dot{b} = -(i\Delta_{b}+\gamma_{b})b-ig_{Ga}a-ig_{Gb}bx-2i\xi b^{\dagger}bb+\Omega+\sqrt{2\gamma_{b}}\,b^{\rm in},\] \[\dot{x} = \omega_{m}y,\] \[\dot{y} = -\omega_{m}x-\gamma_{m}y-g_{Gb}b^{\dagger}b+\phi, \tag{2}\] where \(\Delta_{b}=\omega_{b}-\omega_{0}\), \(\gamma_{b}\) is the dissipation rate of the magnon mode, \(\gamma_{m}\) is the mechanical damping rate, \(\gamma_{fb}=\gamma_{a}(1-2\tau\cos\beta)\) is the modified cavity decay rate, and \(\Delta_{fb}=\Delta_{a}+2\gamma_{a}\tau\sin\beta\) is the effective cavity detuning, with \(\Delta_{a}=\omega_{a}-\omega_{0}\). Here, the quantities \(\psi\) and \(\tau\) denote the transmission and reflection coefficients of the beam splitter, respectively, and \(\beta\) describes the phase shift acquired by the output field upon reflection on the mirrors [52]. The operator \(a_{fb}^{in}\) describes the effective input noise in the presence of coherent feedback, and the corresponding description is based on input-output theory [53].
Specifically, it can be written as \(a_{fb}^{in}=\tau e^{i\beta}a^{out}+\psi a^{in}\), where \(a^{in}\) is the input noise operator associated with the microwave mode, with only non-zero correlations \(\langle a^{in}(t)a^{in\dagger}(t^{\prime})\rangle=(n_{a}(\omega_{a})+1)\delta(t-t^{\prime})\) and \(\langle a^{in\dagger}(t)a^{in}(t^{\prime})\rangle=n_{a}(\omega_{a})\delta(t-t^{\prime})\). Then the corresponding correlation functions for the effective input noise operator \(a_{fb}^{in}\) of the microwave mode can be written as [54] \[\langle a_{fb}^{\rm in}(t)\,a_{fb}^{\rm in\dagger}(t^{\prime})\rangle = \psi^{2}|1-\tau e^{i\beta}|^{2}\left[n_{a}(\omega_{a})+1\right]\delta(t-t^{\prime}),\] \[\langle a_{fb}^{\rm in\dagger}(t)\,a_{fb}^{\rm in}(t^{\prime})\rangle = \psi^{2}|1-\tau e^{i\beta}|^{2}\,n_{a}(\omega_{a})\,\delta(t-t^{\prime}). \tag{3}\] Moreover, \(b^{\rm in}\) and \(\phi\) are input noise operators for the magnon and mechanical modes, respectively, which have zero mean and are characterized by the following correlation functions [55] \[\langle b^{\rm in}(t)\,b^{\rm in\dagger}(t^{\prime})\rangle = [n_{b}(\omega_{b})+1]\,\delta(t-t^{\prime}), \tag{4}\] \[\langle b^{\rm in\dagger}(t)\,b^{\rm in}(t^{\prime})\rangle = n_{b}(\omega_{b})\,\delta(t-t^{\prime}), \tag{5}\] \[\langle\phi(t)\phi(t^{\prime})+\phi(t^{\prime})\phi(t)\rangle/2 \simeq \gamma_{m}[2n_{m}(\omega_{m})+1]\delta(t-t^{\prime}). \tag{6}\] The last expression holds for a large mechanical quality factor \(\mathcal{Q}=\omega_{m}/\gamma_{m}\gg 1\), which justifies the Markovian approximation [56]. Here \(n_{j}(\omega_{j})=\left[\exp\left(\frac{\hbar\omega_{j}}{k_{B}T}\right)-1\right]^{-1}\) (\(j=a,b,m\)) are the equilibrium mean thermal photon, magnon, and phonon numbers, respectively. ## III Linearization of quantum Langevin equations The quantum Langevin equations (2) are nonlinear in nature and generally cannot be solved analytically.
Figure 1: Schematic diagram of a single-mode cavity with a feedback loop and a (\(YIG\)) sphere with magnon self-Kerr nonlinearity. The magnons are embodied by a collective motion of a large number of spins in a macroscopic ferrimagnet, and the magnon mode is directly driven by a microwave source (not shown) to enhance the magnomechanical coupling. The cavity is also driven by an electromagnetic field with amplitude \(\mathcal{E}\). The photons and magnons of the cavity are coupled by the magnetic dipole interaction, and the magnons and phonons are coupled by the magnetostrictive interaction. At the (\(YIG\)) sphere, the magnetic field of the cavity mode (along the x-axis), the driving magnetic field (in the y-direction) and the bias magnetic field (z-direction) are mutually perpendicular. An input laser field enters the cavity through an asymmetric beam splitter (BS). The output field is totally reflected by the mirror \(M\), and part of it is fed back into the cavity by the beam splitter.

To solve these equations analytically, we use the following linearization scheme. We re-write each operator as the sum of its steady-state mean value and a fluctuation operator, \(O=O_{s}+\delta O\) (\(O=a,b,x,y\)), and neglect second-order fluctuation terms, since the magnon mode is strongly driven (large amplitude \(|\langle b\rangle|\gg 1\) at the steady state) and the cavity field also has a large amplitude \(|\langle a\rangle|\gg 1\) via the cavity-magnon beamsplitter interaction.
This gives the steady-state solutions according to \[\langle b\rangle =\frac{\Omega-ig_{Ga}\langle a\rangle}{i\tilde{\Delta}_{b}+\gamma_{b}}, \tag{7}\] \[\langle a\rangle =-\frac{ig_{Ga}\langle b\rangle+i\psi\mathcal{E}}{i\Delta_{fb}+\gamma_{fb}} \tag{8}\] and for \(|\tilde{\Delta}_{b}|,|\Delta_{fb}|\gg\gamma_{fb},\gamma_{b}\), one gets \[\langle b\rangle \simeq\frac{i\Omega\Delta_{fb}-i\psi\mathcal{E}}{g_{Ga}^{2}-\tilde{\Delta}_{b}\Delta_{fb}}, \tag{9}\] where \(\tilde{\Delta}_{b}=\Delta_{b}+g_{Gb}\langle x\rangle+2\xi|\langle b\rangle|^{2}\) is the effective magnon-drive detuning, including the frequency shifts due to the magnomechanical interaction and the self-Kerr effect, and \(\tilde{G}_{Gb}=i\sqrt{2}g_{Gb}\langle b\rangle\) is the effective magnomechanical coupling rate, where \(\langle x\rangle=-\frac{g_{Gb}}{\omega_{m}}|\langle b\rangle|^{2}\). The linearized QLEs describing the quadrature fluctuations \(\delta X_{a}=(\delta a+\delta a^{\dagger})/\sqrt{2}\), \(\delta Y_{a}=i(\delta a^{\dagger}-\delta a)/\sqrt{2}\), \(\delta X_{b}=(\delta b+\delta b^{\dagger})/\sqrt{2}\), \(\delta Y_{b}=i(\delta b^{\dagger}-\delta b)/\sqrt{2}\), \(\delta x\) and \(\delta y\) can be written in compact matrix form as \[\dot{u}(t)=\mathcal{L}u(t)+\mu(t), \tag{10}\] where \(u(t)=\left[\delta X_{a}(t),\delta Y_{a}(t),\delta X_{b}(t),\delta Y_{b}(t),\delta x(t),\delta y(t)\right]^{T}\) is the vector of quadrature fluctuation operators, \(\mu(t)=\left[\sqrt{2\gamma_{a}}X_{a}^{\rm in}(t),\sqrt{2\gamma_{a}}Y_{a}^{\rm in}(t),\sqrt{2\gamma_{b}}X_{b}^{\rm in}(t),\sqrt{2\gamma_{b}}Y_{b}^{\rm in}(t),0,\phi(t)\right]^{T}\) is the vector of input noise operators, and the drift matrix \(\mathcal{L}\) can be written as \[\mathcal{L}=\begin{pmatrix}-\gamma_{fb}&\Delta_{fb}&0&g_{Ga}&0&0\\ -\Delta_{fb}&-\gamma_{fb}&-g_{Ga}&0&0&0\\ 0&g_{Ga}&-\gamma_{b}+\xi&\tilde{\Delta}_{b}&-\tilde{G}_{Gb}&0\\ -g_{Ga}&0&-\tilde{\Delta}_{b}&-\gamma_{b}-\xi&0&0\\ 0&0&0&0&0&\omega_{m}\\ 0&0&0&\tilde{G}_{Gb}&-\omega_{m}&-\gamma_{m}\end{pmatrix}.
\tag{11}\] The drift matrix in Eq. (11) is provided under the condition \(|\tilde{\Delta}_{b}|,|\Delta_{fb}|\gg\gamma_{fb},\gamma_{b}\). In fact, we will show later that \(|\tilde{\Delta}_{b}|,|\Delta_{fb}|\simeq\omega_{m}\gg\gamma_{fb},\gamma_{b}\) [see Fig. 1 (b)] are optimal for the presence of all bipartite entanglements of the system. Note that Eq. (7) is intrinsically nonlinear since \(\tilde{\Delta}_{b}\) contains \(|\langle b\rangle|^{2}\). However, for a given value of \(\tilde{\Delta}_{b}\) (one can always tune \(\tilde{\Delta}_{b}\) by adjusting the bias magnetic field), \(\langle b\rangle\), and hence \(\tilde{G}_{Gb}\), can be obtained straightforwardly. ## IV Entanglement and steerabilities The steady state of the quantum fluctuations of the system is a continuous-variable (CV) three-mode Gaussian state, which is completely characterized by a \(6\times 6\) covariance matrix (CM) \(\mathcal{V}\), with \(\mathcal{V}_{ij}=\frac{1}{2}\langle u_{i}(t)u_{j}(t^{\prime})+u_{j}(t^{\prime})u_{i}(t)\rangle\) (\(i,j=1,2,\ldots,6\)). The CM \(\mathcal{V}\) satisfies the Lyapunov equation [57; 58] \[\mathcal{L}\mathcal{V}+\mathcal{V}\mathcal{L}^{T}=-\mathcal{K}, \tag{12}\] where \(\mathcal{K}=\mathrm{diag}[\gamma_{a}\psi^{2}|1-\tau e^{i\beta}|^{2}(2n_{a}+1),\gamma_{a}\psi^{2}|1-\tau e^{i\beta}|^{2}(2n_{a}+1),\gamma_{b}(2n_{b}+1),\gamma_{b}(2n_{b}+1),0,\gamma_{m}(2n_{m}+1)]\) is the diffusion matrix, which is defined through \(\langle\mu_{i}(t)\mu_{j}(t^{\prime})+\mu_{j}(t^{\prime})\mu_{i}(t)\rangle/2=\mathcal{K}_{ij}\delta(t-t^{\prime})\). The covariance matrix \(\sigma_{AB}\) of two modes \(A\) and \(B\) may be written as \[\sigma_{AB}=\begin{pmatrix}\mathcal{A}&\mathcal{C}\\ \mathcal{C}^{T}&\mathcal{B}\end{pmatrix}. \tag{13}\] The \(2\times 2\) sub-matrices \(\mathcal{A}\) and \(\mathcal{B}\) in Eq. (13) describe the autocorrelations of the two modes, and the \(2\times 2\) sub-matrix \(\mathcal{C}\) in Eq. (13) denotes the cross-correlations of the two modes.
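The covariance-matrix machinery of this section translates directly into a few lines of numerics. The sketch below is a minimal illustration: it solves the Lyapunov equation (12) with the drift matrix of Eq. (11) and evaluates the logarithmic negativity and Gaussian steering of a two-mode reduction. The parameter values (in units of \(\omega_{m}\)), the mode ordering, and the function names are assumptions of ours rather than fitted values of the paper, and the quantifiers are written in the vacuum-variance-\(1/2\) convention used for the CM here.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative parameters in units of omega_m (assumed, not from the paper)
wm = 1.0
g_ab = 0.32                    # photon-magnon coupling g_Ga
G_bm = 0.32                    # effective magnomechanical coupling G_Gb
ga, gb, gm = 0.1, 0.1, 1e-5    # gamma_fb, gamma_b, gamma_m
Da, Db = -wm, 0.9 * wm         # Delta_fb, tilde-Delta_b
xi = 0.0                       # magnon self-Kerr switched off in this sketch

# Drift matrix, Eq. (11)
L = np.array([
    [-ga,    Da,    0.0,      g_ab,     0.0,   0.0],
    [-Da,   -ga,   -g_ab,     0.0,      0.0,   0.0],
    [0.0,    g_ab, -gb + xi,  Db,      -G_bm,  0.0],
    [-g_ab,  0.0,  -Db,      -gb - xi,  0.0,   0.0],
    [0.0,    0.0,   0.0,      0.0,      0.0,   wm],
    [0.0,    0.0,   0.0,      G_bm,    -wm,   -gm],
])
# A physical steady state requires all eigenvalues of L in the left half-plane
stable = bool(np.max(np.linalg.eigvals(L).real) < 0)

# Diffusion matrix, Eq. (12), for n_a = n_b = 0, n_m = 20, psi = 1, tau = 0
nm = 20.0
K = np.diag([ga, ga, gb, gb, 0.0, gm * (2 * nm + 1)])
# scipy solves L V + V L^T = Q, so pass Q = -K
V = solve_continuous_lyapunov(L, -K)

def two_mode(V, i, j):
    """4x4 reduced CM of modes i and j (0: photon, 1: magnon, 2: phonon)."""
    idx = [2 * i, 2 * i + 1, 2 * j, 2 * j + 1]
    return V[np.ix_(idx, idx)]

def logneg(s):
    """Logarithmic negativity, Eq. (14); standard two-mode formula with
    X = det A + det B - 2 det C and vacuum variance 1/2."""
    A, B, C = s[:2, :2], s[2:, 2:], s[:2, 2:]
    X = np.linalg.det(A) + np.linalg.det(B) - 2.0 * np.linalg.det(C)
    lam = np.sqrt((X - np.sqrt(X**2 - 4.0 * np.linalg.det(s))) / 2.0)
    return max(0.0, -np.log(2.0 * lam))

def steering(s):
    """Gaussian A -> B steerability from the Schur complement of A (Eq. (15),
    rescaled to the vacuum-variance-1/2 convention)."""
    A, B, C = s[:2, :2], s[2:, 2:], s[:2, 2:]
    M = B - C.T @ np.linalg.inv(A) @ C
    nu = np.sqrt(np.linalg.det(M))   # symplectic eigenvalue of the 2x2 block
    return max(0.0, -np.log(2.0 * nu))

E_ab = logneg(two_mode(V, 0, 1))     # photon-magnon entanglement
S_ab = steering(two_mode(V, 0, 1))   # photon -> magnon steerability
```

Swapping the roles of the two \(2\times 2\) blocks in `steering` gives \(S^{B\to A}\), and their absolute difference is the steering asymmetry of Eq. (16).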
Characterizing, quantifying and classifying quantum correlations in multipartite quantum systems is one of the most challenging issues in quantum information, especially in optomagnomechanical systems in which information is encoded in continuous variables (CV). The entanglement in CV systems can be quantified by using the logarithmic negativity \(E_{N}\) [46; 47], \[E_{N}=\max[0,-\ln(2\Lambda^{-})], \tag{14}\] where \(\Lambda^{-}=\sqrt{\mathcal{X}-(\mathcal{X}^{2}-4\det\sigma_{AB})^{1/2}}/\sqrt{2}\) is the minimum symplectic eigenvalue of the partially transposed covariance matrix of the two-mode Gaussian state, with \(\mathcal{X}=\det\mathcal{A}+\det\mathcal{B}-2\det\mathcal{C}\). The two subsystems are entangled if \(E_{N}>0\); equivalently, the two-mode Gaussian state (13) is entangled if and only if \(\Lambda^{-}<1/2\). Another quantum correlation quantifier of paramount importance in optomagnomechanical systems is the quantum steering. The steerability of Bob (\(B\)) by Alice (\(A\)) (\(A\to B\)) for a (\(n_{A}+n_{B}\))-mode Gaussian state can be quantified by [33] \[S^{A\to B}(\sigma_{AB})=\max\left[0,-\sum_{j:\,\bar{\nu}_{j}^{AB/A}<1}\ln\left(\bar{\nu}_{j}^{AB/A}\right)\right], \tag{15}\] where \(\bar{\nu}_{j}^{AB/A}\) (\(j=1,\ldots,n_{B}\)) are the symplectic eigenvalues of \(\bar{\sigma}_{AB/A}=\mathcal{B}-\mathcal{C}^{T}\mathcal{A}^{-1}\mathcal{C}\), derived from the Schur complement of \(\mathcal{A}\) in the covariance matrix \(\sigma_{AB}\). The steerability of Alice by Bob [\(S^{B\to A}(\sigma_{AB})\)] can be obtained by swapping the roles of \(A\) and \(B\). We notice that a non-separable state is not always steerable, but a steerable state is always non-separable. Thus we have three possibilities between \(A\) and \(B\): (\(i\)) if \(\mathcal{S}^{A\to B}=\mathcal{S}^{B\to A}=0\), Alice cannot steer Bob and vice versa, even if they are entangled (i.e.
no-way steering); (\(ii\)) if \(\mathcal{S}^{A\to B}>0\) and \(\mathcal{S}^{B\to A}=0\), or \(\mathcal{S}^{A\to B}=0\) and \(\mathcal{S}^{B\to A}>0\), we have one-way steering, i.e. Alice can steer Bob but Bob cannot steer Alice, or vice versa; and (\(iii\)) if \(\mathcal{S}^{A\to B}>0\) and \(\mathcal{S}^{B\to A}>0\), Alice can steer Bob and vice versa (i.e. two-way steering). In addition, Gaussian steering is always bounded by the entanglement. In order to check the asymmetric steerability of the two-mode Gaussian state, we introduce the steering asymmetry, defined as \[S(AB)=|S^{A\to B}-S^{B\to A}|. \tag{16}\]

## V Results and discussion

In this section, we show the results and discuss the evolution of the quantum correlations of the system by considering experimentally accessible parameters reported in [54; 51]: \(\omega_{a}/2\pi=10\) GHz, \(\omega_{m}/2\pi=10\) MHz, \(\gamma_{m}/2\pi=100\) Hz, \(\gamma_{a}/2\pi=\gamma_{b}/2\pi=1\) MHz, \(g_{Ga}/2\pi=G_{Gj}/2\pi=3.2\) MHz, and the low temperature \(T=10\) mK. \(G_{Gb}=2\pi\times 3.2\) MHz implies the drive magnetic field \(B_{0}\approx 3.9\times 10^{-5}\) T for \(g_{Ga}\approx 2\pi\times 0.2\) Hz, corresponding to the drive power \(P=8.9\) mW. We present in Fig. (2) the steady state of the three bipartite entanglements \(E_{ab}\) (between the cavity and magnon modes), \(E_{bm}\) (between the magnon and mechanical modes) and \(E_{am}\) (between the cavity and mechanical modes) versus the detunings \(\Delta_{a}\) and \(\tilde{\Delta}_{b}\) in the presence of the coherent feedback loop with the magnon self-Kerr nonlinearity. We observe that the entanglement is very strong (\(E_{ab}>1.3\), \(E_{bm}>0.8\) and \(E_{am}>1.3\)) in comparison with the results in Refs. [51; 54]. The maximum value of all three bipartite entanglements is improved via the coherent feedback loop and the magnon self-Kerr nonlinearity when \(\beta=\pi\).
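The quantifiers of Eqs. (14)-(16) used throughout this section are straightforward to evaluate numerically. The sketch below assumes the \(\hbar=1\) convention in which the vacuum covariance matrix is \(I/2\) (consistent with the diffusion matrix above), which is why factors of 2 appear inside the logarithms; it is checked against the two-mode squeezed vacuum, for which \(E_{N}=2r\) and \(S^{A\to B}=S^{B\to A}=\ln(\cosh 2r)\) are known results:

```python
import numpy as np

def symplectic_eigs(V):
    """Symplectic eigenvalues of a 2n x 2n covariance matrix
    ordered as (x1, p1, x2, p2, ...); vacuum convention V_vac = I/2."""
    n = V.shape[0] // 2
    Omega = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    # eigenvalues of i*Omega*V come in +/- pairs; keep one of each
    return np.sort(np.abs(np.linalg.eigvals(1j * Omega @ V)))[::2]

def log_negativity(sigma):
    """Logarithmic negativity, Eq. (14), for a 4x4 two-mode CM."""
    A, B, C = sigma[:2, :2], sigma[2:, 2:], sigma[:2, 2:]
    X = np.linalg.det(A) + np.linalg.det(B) - 2.0 * np.linalg.det(C)
    lam = np.sqrt((X - np.sqrt(X**2 - 4.0 * np.linalg.det(sigma))) / 2.0)
    return max(0.0, -np.log(2.0 * lam))

def steering(sigma, direction="A->B"):
    """Gaussian steering, Eq. (15), via the Schur complement."""
    if direction == "B->A":  # swap the two modes first
        P = np.block([[np.zeros((2, 2)), np.eye(2)],
                      [np.eye(2), np.zeros((2, 2))]])
        sigma = P @ sigma @ P.T
    A, B, C = sigma[:2, :2], sigma[2:, 2:], sigma[:2, 2:]
    schur = B - C.T @ np.linalg.inv(A) @ C
    return max(0.0, -sum(np.log(2.0 * v)
                         for v in symplectic_eigs(schur) if 2.0 * v < 1.0))

# Sanity check on a two-mode squeezed vacuum with squeezing r
r = 0.8
c, s = np.cosh(2 * r) / 2, np.sinh(2 * r) / 2
Z = np.diag([1.0, -1.0])
sigma = np.block([[c * np.eye(2), s * Z], [s * Z, c * np.eye(2)]])
S_ab, S_ba = steering(sigma), steering(sigma, "B->A")
asym = abs(S_ab - S_ba)  # steering asymmetry, Eq. (16)
print(log_negativity(sigma), S_ab, asym)  # 2r, ln(cosh 2r), 0
```

Applied to the three reduced \(4\times 4\) CMs of the full system, these routines give \(E_{ab}\), \(E_{bm}\), \(E_{am}\) and the corresponding steerabilities.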
We remark that when \(\Delta_{a}=-\omega_{m}\) and \(\tilde{\Delta}_{b}=0.9\omega_{m}\), the entanglements \(E_{ab}\) and \(E_{am}\) are maximal while \(E_{bm}\approx 0.2\). In Fig. (3) we plot the three bipartite entanglements \(E_{ab}\), \(E_{bm}\) and \(E_{am}\) as functions of the reflectivity \(\tau\) and the phase \(\beta\). We remark that the entanglement increases with \(\tau\) and \(\beta\) and is robust when \(\beta=\pi\). Moreover, the entanglement reaches its maximum value when \(\gamma_{fb}=\gamma_{a}(1+2\tau)\). In Fig. (4) we plot the three bipartite entanglements \(E_{ab}\), \(E_{am}\) and \(E_{bm}\) versus different parameters. We remark the existence of genuine tripartite entanglement, since all bipartite entanglements are non-vanishing, as illustrated in Fig. (4). We notice that the entanglement is robust against temperature, as depicted in Fig. (4)(a), and survives above 3 K. We observe that the entanglement of all the subsystems diminishes with temperature due to the decoherence phenomenon [60]. Moreover, the entanglement between photon-magnon and photon-phonon persists up to temperatures above 3 K and up to \(T\approx 2.5\) K, respectively, whereas the entanglement between magnon-phonon vanishes at lower temperatures (\(T\approx 0.2\) K), even though this temperature is the maximum achieved in Ref. [51]. One can say that the photon-magnon and photon-phonon entanglements are stronger than the magnon-phonon entanglement. The entanglement between photon-magnon and magnon-phonon increases with increasing magnon self-Kerr nonlinearity coefficient \(\xi\), whereas the entanglement between photon-phonon decreases, as illustrated in Fig. (4)(b). The entanglement reaches \(E_{ab}\approx 0.25\) for \(\xi=10^{7}\) Hz, in comparison with \(E_{ab}\approx 0.125\) obtained in Ref. [51]. We remark in Fig. (4)(c) the enhancement of all three bipartite entanglements by the coherent feedback technique.
The maximum value of entanglement reached between photon-magnon and photon-phonon is significantly larger than that obtained in Refs. [51; 54]. In Fig. (5), we plot for each bipartition the entanglement, the Gaussian steerings \(S^{A\to B}\) and \(S^{B\to A}\), and the steering asymmetry versus the temperature \(T\). The entanglement and steerabilities diminish quickly with temperature due to the decoherence phenomenon. We note that one-way quantum steering is more robust than two-way quantum steering and survives to larger values of the temperature \(T\). An entangled state is not always steerable, whereas a steerable state must be entangled: \(S^{A\to B}>0\), \(S^{B\to A}>0\) and \(E_{N}>0\) witness the existence of Gaussian two-way steering, i.e. the two subsystems are entangled and steerable both from \(A\) to \(B\) and from \(B\) to \(A\) [33], while no-way steering appears when \(S^{A\to B}=S^{B\to A}=0\) and \(E_{N}>0\), as depicted in Fig. (5)(c). The measurement of Gaussian steering is always bounded by the entanglement \(E_{N}\), as also discussed in [61]. Finally, the steering asymmetry \(S(AB)\) is always less than \(\ln(2)\); it is maximal when the state is steerable in one way only, i.e. \(S^{A\to B}>0\) and \(S^{B\to A}=0\) or \(S^{A\to B}=0\) and \(S^{B\to A}>0\), and it decreases with increasing steerability in either way [33]. In Fig. (5)(a) the steering from the photon mode to the magnon mode, \(S^{a\to b}\), has a behavior similar to \(E_{N}\): it decreases from its maximum value to zero when \(T>3\) K. Besides, one-way steering appears when \(T>0.2\) K, i.e. \(S^{a\to b}>0\) and \(S^{b\to a}=0\), as shown in Fig. (5)(a). Moreover, the steering from the magnon mode to the photon mode, \(S^{b\to a}\), diminishes quickly and remains zero for \(T>0.2\) K, as depicted in Fig. (5)(a). Otherwise, when the temperature \(T<0.2\) K, two-way steering occurs between the optical mode and the magnon mode, i.e.
\(S^{a\to b}>0\) and \(S^{b\to a}>0\) (\(S(ab)=0\)). The steerability between the photon mode and the phonon mode always remains one-way, i.e. \(S^{a\to m}>0\) (\(S^{m\to a}=0\)) when \(T>0.2\) K, as shown in Fig. (5)(b). The steerability between the magnon mode and the phonon mode remains approximately two-way, with \(S^{b\to m}>S^{m\to b}\), when \(T<0.10\) K, and becomes no-way steering (\(S^{b\to m}=0\) and \(S^{m\to b}=0\), \(S(bm)=0\)) when \(T>0.10\) K, as shown in Fig. (5)(c).

## VI Conclusions

In conclusion, we have studied how a coherent feedback loop improves the quantum correlations between the three bipartite subsystems in the presence of the magnon self-Kerr nonlinearity in cavity-magnomechanics systems. We quantify steerability by using Gaussian quantum steering and show that Gaussian steering remains bounded by entanglement, i.e. the steerable modes are strictly entangled but the entangled modes are not necessarily steerable. We have found one-way steering between photon-magnon and photon-phonon. However, the steerability between magnon-phonon is approximately two-way. The entanglement and steerabilities are shown to be robust against temperature, with entanglement persisting above 3 K in the case of photon-magnon, and up to approximately 2.5 K for photon-phonon. Moreover, the entanglement and steerabilities between magnon-phonon are fragile under thermal effects. Our proposed scheme to improve entanglement can be of interest for various applications in quantum information processing.
2301.11566
Renovation of Seoul Radio Astronomy Observatory and Its First Millimeter VLBI Observations
The Seoul Radio Astronomy Observatory (SRAO) operates a 6.1-meter radio telescope on the Gwanak campus of Seoul National University. We present the efforts to reform SRAO to a Very Long Baseline Interferometry (VLBI) station, motivated by recent achievements by millimeter interferometer networks such as Event Horizon Telescope, East Asia VLBI Network, and Korean VLBI Network (KVN). For this goal, we installed a receiver that had been used in the Combined Array for Research in Millimeter-wave Astronomy and a digital backend, including an H-maser clock. The existing hardware and software were also revised, which had been dedicated only to single-dish operations. After several years of preparations and test observations in 1 and 3-millimeter bands, a fringe was successfully detected toward 3C 84 in 86 GHz in June 2022 for a baseline between SRAO and KVN Ulsan station separated by 300 km. Thanks to the dual frequency operation of the receiver, the VLBI observations will soon be extended to the 1 mm band and verify the frequency phase referencing technique between 1 and 3-millimeter bands.
Naeun Shin, Yong-Sun Park, Do-Young Byun, Jinguk Seo, Dongkok Kim, Cheulhong Min, Hyunwoo Kang, Keiichi Asada, Wen-Ping Lo, Sascha Trippe
2023-01-27T07:10:42Z
http://arxiv.org/abs/2301.11566v1
# Renovation of Seoul Radio Astronomy Observatory and Its First Millimeter VLBI Observations

###### Abstract

The Seoul Radio Astronomy Observatory (SRAO) operates a 6.1-meter radio telescope on the Gwanak campus of Seoul National University. We present the efforts to reform SRAO to a Very Long Baseline Interferometry (VLBI) station, motivated by recent achievements by millimeter interferometer networks such as Event Horizon Telescope, East Asia VLBI Network, and Korean VLBI Network (KVN). For this goal, we installed a receiver that had been used in the Combined Array for Research in Millimeter-wave Astronomy and a digital backend, including an H-maser clock. The existing hardware and software were also revised, which had been dedicated only to single-dish operations. After several years of preparations and test observations in 1 and 3-millimeter bands, a fringe was successfully detected toward 3C 84 in 86 GHz in June 2022 for a baseline between SRAO and KVN Ulsan station separated by 300 km. Thanks to the dual frequency operation of the receiver, the VLBI observations will soon be extended to the 1 mm band and verify the frequency phase referencing technique between 1 and 3-millimeter bands.
instrumentation: interferometers -- techniques: high angular resolution
We installed a receiver that had once operated in the Combined Array for Research in Millimeter-wave Astronomy (CARMA). The main reasons for adopting the CARMA receiver are that it is paired with an economical 15 K cooling system, and its performance is still good. The only drawback may be that the CARMA receiver works in double-sideband (DSB) mode. It features dual-band operation in the 1 mm and the 3 mm bands, receiver temperatures of 60-70 K (DSB), and compactness (Hull & Plambeck, 2015). The CARMA receiver is displayed in Figure 1. Millimeter waves from the antenna pass through two Teflon lenses without complicated beam-guiding optics and go into the dewar. The interior of the dewar is divided mainly into two parts--left for the 1 mm band and right for the 3 mm band, as shown in Figure 2.
The part of the 1 mm band consists of a feed horn, a circular polarizer, an orthomode transducer (OMT), superconductor-insulator-superconductor (SIS) mixers, and low-noise amplifiers (LNA) (Hull & Plambeck, 2015). This configuration allows dual circular polarization observations. The 3 mm band receiving system is similar but does not have an OMT, allowing only the right-handed circular polarization (RCP). Table 1 summarizes the receiver parameters in the two bands. Local oscillator (LO) plates at both sides of the dewar generate the LO signals for the two bands, as shown in Figure 1. The mylar beam combiners reflect the LO signals in front of the dewar. Then the LO signals combined with waves from celestial sources propagate into the dewar. After we took the receiver to the laboratory, we tested its components one by one and measured receiver temperatures representing its overall performance. Figure 3 displays the I-V curve of the 1 mm and the 3 mm mixers without LO power. One can see a steep current increase at bias voltages of around 10 mV. The measured receiver temperatures are similar to the ones in Hull & Plambeck (2015).

### Cryogenic System

The cryogenic system consists of a cold head and a compressor. Within the dewar is a CTI1020 cryogenic cold head that consists of two cooling stages, where the second stage cools down to 15 K. It was modified to have an additional third stage by the University of California, Berkeley, which is shown in Figure 2. The cooling power of the third stage is only 50 mW, but enough to cool down mixers and feed horns to around 4 K. The 15 K cryogenic system is not so expensive but has gained the 4 K stage after modification, which is one of the reasons that we adopt the CARMA receiver.
\begin{table} \begin{tabular}{c c c} \hline & 1 mm & 3 mm \\ \hline RF frequency & 215–270 GHz & 84–115 GHz \\ IF bandwidth & & 0–1 GHz \\ Polarization & LCP/RCP & RCP \\ Receiver temperature & & 60–70 K (DSB) \\ System temperature & 400 K (DSB) & 150 K (DSB) \\ \hline \end{tabular} \end{table} Table 1: Receiver parameters

Figure 1: The view of the CARMA receiver. Teflon lenses and beam combiners for the two bands are shown together. The LO plates are mounted on both sides of the dewar. (Photo by Richard Plambeck)

Figure 2: The cryogenically cooled dewar contains the 1 mm band receiver parts on the left side and the 3 mm band parts on the right side. The copper plates at the bottom are the first and second cooling stages, and the third is the square plate at the top of the pillar in the center. Feed horns, circular polarizers, an OMT, and mixers are cooled below 4 K, while LNAs are cooled to \(\sim\)15 K.

The third stage can be further cooled down by lowering the pumping frequency. The cold head typically operates at 72 rpm, which seems the most efficient cycle frequency for the heat transfer of the cryogenic regenerator (Ogawa et al., 1991). However, it is empirically found that if the pumping period is increased by a factor of two, then the temperature of the third stage further decreases below 3 K, probably because the cooling He gas spends more time inside the third stage (Plambeck et al., 1993). We usually set the pumping speed to 30 rpm using a frequency inverter, SV-is7, made by a local company, LS Electric Co. The typical temperatures during the regular operation are 47.4, 12.2, and 2.8 K, respectively, from the first to third stages. As for the compressor, we use the model M600 made by Trillion. Though a water-cooled compressor has a higher cooling capacity with a smaller volume, we adopt the air-cooled one since the air-cooled compressor requires fewer maintenance efforts.
### Signal Chain The signal's intermediate frequency (IF) bandwidth after the LNAs is rather wide but is narrowed down to 3.5-4.5 GHz through bandpass filters. The IF signal is then down-converted to 1-2 GHz in the IF processing box in the cabin. It goes to the observatory building and is converted to 0-1 GHz before being fed into the spectrometers and VLBI backends. The spectrometer covers 1 GHz bandwidth with 2\({}^{14}\) channels resulting in a spectral resolution of 61 kHz. The signal flows for single-dish and VLBI operation modes are summarized in Figure 4. ### Alignment of the Receiver Since the configuration changed near the secondary focus, we need to check whether the direction of the maximum gain of the feed horn points to the center of the subreflector. First, we point the telescope towards a mountain, which is seen at low elevations, i.e., the 300 K background. Then by moving an absorber immersed in liquid nitrogen at 80 K in front of the subreflector and by measuring the output power of the receiver, we can find the direction of the maximum response of the horn. Figure 5 indicates that the feed horn looks at the subreflector downward by 1\({}^{\circ}\), while it is well aligned in the E-W direction. We corrected this misalignment by adjusting the heights of the receiver plate in four corners. The change of the receiver position also affects the pointing offsets, which must be corrected. We established the pointing model by carrying out five-point observations for the standard stars in the Hipparcos catalog (Byun & Yun, 2002), using the optical telescope attached to the side of the antenna dish (Koo et al., 2003). The _rms_ pointing errors are around 15\({}^{\prime\prime}\) in both directions right after the model fitting. However, systematic offsets begin to appear, probably due to the differential solar heating of the antenna structure, which may affect the 1 mm band observation because of its smaller beam size. 
Figure 4: The system diagram including the conversion of IF frequencies. The VLBI instruments and the spectrometer for single-dish observation are located in the backend room. Figure 5: The relative output power of the receiver in dB for various locations of the 80 K absorber. The coordinate origin of the map refers to the center of the subreflector. The boresight of the feed horn is slightly shifted to the low elevation. Figure 3: The unpumped I-V curves of the 1 mm and the 3 mm mixers were measured in the laboratory. They exhibit a nice non-linearity. ## 3 Installation of VLBI Equipment A digital sampler, recorders, and an H-maser clock are installed to carry out VLBI experiments in SRAO, as shown in Figure 6. As for the digital sampler, we borrowed a ROACH 2 digital backend (R2DBE) from Academia Sinica Institute of Astronomy and Astrophysics (ASIAA) (Vertatschitsch et al., 2015). Its maximum data rate is 16 Gbps, from 4 Gbps samples per second in four levels for two data streams. We also borrowed a Mark 6 recorder from ASIAA and four disk packs from the Korea Astronomy and Space science Institute (KASI) with a total capacity of 256 TBytes. The H-maser clock, provided by KASI, distributes a reference frequency of 10 MHz to all the frequency synthesizers in the receiver system and the recorder. An additional component, the GPS receiver, compares its one pulse per second (PPS) signal and that of the H-maser clock for synchronization with other stations. Several frequency synthesizers in the receiving system were of low quality and independent of each other in the past since it was not so critical for single-dish observations. For the VLBI experiment, we bought a few high-quality frequency synthesizers, such as Keysight E8257D, to reduce the phase noises of the system. They replaced old synthesizers and are bound to the 10 MHz reference from the H-maser clock. 
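The spectrometer numbers quoted in the signal-chain description can be cross-checked quickly: 1 GHz of bandwidth over \(2^{14}\) channels gives the quoted 61 kHz, which at the CO \(J=2-1\) rest frequency of 230.538 GHz (used for the first-light spectrum, Figure 7) corresponds to 0.079 km s\({}^{-1}\) per channel:

```python
# Consistency check of the spectrometer figures quoted in the text:
# 1 GHz bandwidth, 2**14 channels, CO J=2-1 rest frequency.
C_KM_S = 299792.458              # speed of light [km/s]

d_nu = 1.0e9 / 2**14             # channel width [Hz]
d_v = C_KM_S * d_nu / 230.538e9  # velocity resolution [km/s]

print(round(d_nu / 1e3, 1), round(d_v, 3))  # 61.0 (kHz), 0.079 (km/s)
```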
## 4 Test Observations

### Single-Dish Observations

In February 2019, SRAO detected the first light of the CO \(J=2-1\) line at 230.538 GHz toward Orion KL with the CARMA receiver (Figure 7). The best system temperature for the 1 mm band is measured as 400 K (DSB). It is found that, contrary to expectations, the system temperature rises in the winter. The reduction of cooling capacity due to the hardening of oil in the compressor may cause this problem. We expect lower system temperatures by keeping the compressor warm in the winter. The 3 mm band receiver was operated in the spring during the recent two years. The lowest system temperature in the 3 mm band is 150 K (DSB) at 86 GHz.

Figure 6: The VLBI backends, R2DBE, Mark 6, and GPS receiver are located in the backend room of SRAO. A spectrometer and the second IF processing box are shown together on the left.

Figure 7: The first spectrum of CO \(J=2-1\) at 230.538 GHz toward Orion KL was obtained by the SRAO with the CARMA receiver. The temperature scale and the velocity of the spectrum are not calibrated accurately. The velocity resolution is 0.079 km s\({}^{-1}\) per channel.

We have made single-dish observations of several bright spectral line sources, such as Orion KL and TX-Cam, to inspect the pointing accuracy and verify the overall system performance before the VLBI observation.

### VLBI Test Observations

#### 4.2.1 Test Observations in the 1 mm Band

As soon as we got the first light at 230 GHz in 2019, we conducted international VLBI observations. During UT 11:00 to 18:00 on March 18th and 19th, 2019, the Greenland Telescope (GLT), built by ASIAA in Taiwan, and the Solar Planetary Atmosphere Research Telescope, operated by Osaka Prefecture University in Japan, joined the campaign. Targets were three bright AGNs (M87, NGC 6251, Mrk 501) and four bright calibrators (3C 371, 1928+738, 3C 345, 1633+382). The spectral line sources (NGC 7027, IRC+10216, DR21) were also observed for autocorrelation.
No fringes were detected for baselines that include SRAO. The suspected reason is that one LO accidentally missed the 10 MHz reference signal from the H-maser clock. The second test observation was run during UT from 08:00 to 15:00 on February 1st and 5th, 2020, collaborating with the GLT and the James Clerk Maxwell Telescope. Two bright AGNs (OJ 287, 3C 84) were observed, and data was transferred to the correlation center at the Shanghai Astronomical Observatory via the internet. The typical data transfer rate was around 500 Mbps. Unfortunately, the fringes were not detected in this session either. We concluded that the main reasons for the failure are the substantial system temperature, probably because of the issue mentioned in Section 4.1, and cloudy weather during the observation.

#### 4.2.2 VLBI Test Observations with KVN in the 3 mm Band

Since the scheduling is not easy in international VLBI observations, and thus the chance of observations is limited, we cooperated with the domestic VLBI system, the KVN. Since the KVN does not have 1 mm receivers, the VLBI test observation was made in the 3 mm band. From UT 05:00 June 3rd, 2022, the bright sources 3C 84 and Orion KL were observed alternately in three scans each, with 10 minutes of exposure per scan. 3C 84, a bright AGN, is selected as the main target to find a fringe. Orion KL is observed to check the frequency offset and the pointing accuracy. In SRAO, a data stream of 1024 MHz bandwidth from 86 to 87 GHz is Nyquist-sampled and recorded at 4 Gbps. On the other hand, KVN stations recorded signals of 2048 MHz bandwidth from 85 to 87 GHz for two polarizations at 16 Gbps using an OCTAD sampler (Oh et al., 2017) and a Mark 6 recorder. The system setup is summarized in Table 2. Because of instrumental problems, only the KVN Ulsan station recorded the data among the three KVN stations. The recorded data in the Mark 6 was transferred to the Daejeon correlation center located at KASI through the internet.
The DiFX software correlator performed the correlation in 65 K channels. Visibility data is analyzed with the NRAO Astronomical Image Processing System (AIPS). The fringe fitting solutions are found using the task FRING of AIPS with a solution interval of 30 seconds. The upper panel of Figure 8 shows the clear and consistent phase as a function of frequency after the fringe fitting. The cross-power spectrum between SRAO and KVN Ulsan is displayed together. Figure 9 presents a fringe solution in a delay and delay rate plane. The visibility amplitude is averaged over the central 640 MHz of the bandwidth, where phases remain constant. We can see a strong peak for a specific delay and delay rate. The obtained delay rate of 250 mHz between SRAO and KVN Ulsan is mainly due to the frequency offset of about 200 mHz of the H-maser clock at KVN Ulsan station. The observed SNRs are 60 on average for a solution interval of 30 seconds. We can estimate the expected SNR using the equation \[\mathrm{SNR}=0.88\,F\,\sqrt{\frac{2\,\Delta\nu\,\tau}{\mathrm{SEFD}_{1}\times \mathrm{SEFD}_{2}}},\] where \(F\) is the source flux density, \(\tau\) the integration time, and \(\Delta\nu\) the observing bandwidth. The SEFD\({}_{i}\) is the system equivalent flux density of station \(i\). We set \(\Delta\nu=640\) MHz and \(\tau=30\) seconds. \(F\) is assumed to be 16 Jy based on the single-dish mode observation of KVN toward 3C 84 in May 2022. The SEFD of the KVN is 3200 Jy. The SEFD of the SRAO is not measured, but it can be accurately inferred as \(4.1\times 10^{4}\) Jy from the comparison of the antenna temperatures of the SiO \(v=1\), \(J=2-1\) transition toward Orion KL obtained by both KVN in single-dish mode and SRAO on the same day. The resulting SNR is 120, much larger than the observed one. The factor of two difference may originate from two reasons. The first one is the averaging time used to derive visibility data from the raw data streams of the two antennas.
We set one second for it as usual, but the delay rate of \(-250\) mHz makes the phase rotate by \(90^{\circ}\) in one second, which results in a factor of \(2/\pi\) degradation of visibility amplitudes.

\begin{table} \begin{tabular}{c c c} \hline & SRAO & KVN \\ \hline Equipment & R2DBE / Mark 6 & OCTAD / Mark 6 \\ Bandwidth & 1024 MHz & 2048 MHz \\ Polarization & RCP & LCP/RCP \\ System temperature & 150 K (DSB) & 280 K (SSB) \\ \hline \end{tabular} \end{table} Table 2: 3 mm VLBI test observation parameters

The other one is related to the source size. According to the 86 GHz observation of 3C 84 with KVN, the core and the jet components are separated in the north-south direction by about 3 mas (Wajima et al., 2020). The SRAO-KVN Ulsan baseline length of 300 km results in a minimum fringe spacing of 2.5 mas, and thus the source might be partially resolved out. Figure 8 indeed shows a flux density of \(\sim\)5 Jy, weaker than that of the single-dish observation. A solution interval comparable to the coherence time of the atmosphere may also affect the SNR. However, it is found that data reduction with a solution interval of 10 seconds does not improve the SNR.

## 5 Discussion and Conclusions

The SRAO retrofitted the receiver and cooling systems, enabling observations in the 1 mm and the 3 mm bands with reasonable noise temperatures. We also installed instruments such as a digital backend and a high-speed recorder to reform SRAO to a VLBI station. After many years of single-dish observation and international VLBI campaigns, we detected fringes at 86 GHz between KVN Ulsan and SRAO in June 2022 for the first time. Since the LO chain of the 1 mm band is identical to that of the 3 mm band, except for the frequency tripler, it is expected to find fringes in the 1 mm band in the near future. To make routine VLBI observations possible, we need to improve our system further: a new receiver is under development, adopting sideband-separating mixers and a new cryostat.
It will widen the IF bandwidth from 1 GHz to 2 GHz and have a lower noise temperature than the CARMA receiver. Moreover, SRAO will be connected to the Korea Research Environment Open Network (KREONET), and the data transfer rate will be over 10 Gbps. In addition, with the help of KREONET, a 10 MHz reference signal from KVN Yonsei may be transmitted to SRAO via dark fibers, which will replace the H-maser clock, now operated beyond its life span. VLBI observations at millimeter wavelengths are considerably affected by rapid atmospheric phase variations. This effect can be minimized by applying the solutions of the phase variations taken at the lower frequencies to the higher target frequencies (Rioja et al., 2011, 2014; Algaba et al., 2015; Park et al., 2018). SRAO can implement the frequency phase referencing with minimal effort, since the 1 mm and the 3 mm receiving components are in one dewar, and the LOs share a common frequency reference. The only thing to do is to install a frequency-selective surface and a reflection mirror in front of the dewar. In summary, a series of test observations and planned future works guarantee that SRAO can be a member of the international mm VLBI network.

###### Acknowledgements.

The authors gratefully acknowledge the contribution of Jongho Park for providing a code of the 3D plot of fringes and a kind explanation. We are grateful to the staff of the KVN who helped to operate the array and to correlate the data. The KVN and a high-performance computing cluster are facilities operated by the KASI. The KVN observations and correlations are supported through the high-speed network connections among the KVN sites provided by the KREONET, which is managed and operated by the KISTI. This work was supported partially by the National Research Foundation of Korea grant funded by the Korean government (MEST) (No. 2019R1A6A1A1A0073437 and 2022R1F1A1075115) and partially by KASI under the R&D program (Project No.
2022-1-860-03) supervised by the Ministry of Science and ICT. Figure 8: The visibility phase (top) and the cross power spectrum (bottom) of 3C 84 for the SRAO to KVN Ulsan baseline. Figure 9: The visibility amplitude averaged over the central 640 MHz bandwidth as a function of the delay and the delay rate. The delay and the delay rate at the peak are about \(-\)580 ns and \(-\)250 mHz, respectively.
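As a quick consistency check, the minimum fringe spacing quoted in Section 4.2.2 follows from \(\lambda/B\) for the SRAO-KVN Ulsan baseline at 86 GHz. The 300 km baseline length used here is the approximate value given in the text, so the result only roughly matches the quoted 2.5 mas:

```python
import math

# Minimum fringe spacing lambda/B for the SRAO-KVN Ulsan baseline,
# using the approximate values quoted in the text (86 GHz, ~300 km).
C = 299792458.0            # speed of light [m/s]
wavelength = C / 86e9      # ~3.5 mm at 86 GHz
baseline = 300e3           # baseline length [m]

theta_mas = (wavelength / baseline) * (180.0 / math.pi) * 3600e3
print(round(theta_mas, 1))  # ~2.4 mas, the ~2.5 mas scale quoted above
```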
2302.10328
Hello Me, Meet the Real Me: Audio Deepfake Attacks on Voice Assistants
The radical advances in telecommunications and computer science have enabled a myriad of applications and novel seamless interaction with computing interfaces. Voice Assistants (VAs) have become a norm for smartphones, and millions of VAs incorporated in smart devices are used to control these devices in the smart home context. Previous research has shown that they are prone to attacks, leading vendors to countermeasures. One of these measures is to allow only a specific individual, the device's owner, to perform possibly dangerous tasks, that is, tasks that may disclose personal information, involve monetary transactions etc. To understand the extent to which VAs provide the necessary protection to their users, we experimented with two of the most widely used VAs, which the participants trained. We then utilised voice synthesis using samples provided by participants to synthesise commands that were used to trigger the corresponding VA and perform a dangerous task. Our extensive results showed that more than 30\% of our deepfake attacks were successful and that there was at least one successful attack for more than half of the participants. Moreover, they illustrate statistically significant variation among vendors and, in one case, even gender bias. The outcomes are rather alarming and require the deployment of further countermeasures to prevent exploitation, as the number of VAs in use is currently comparable to the world population.
Domna Bilika, Nikoletta Michopoulou, Efthimios Alepis, Constantinos Patsakis
2023-02-20T21:41:14Z
http://arxiv.org/abs/2302.10328v1
# Hello Me, Meet the Real Me: Audio Deepfake Attacks on Voice Assistants ###### Abstract The radical advances in telecommunications and computer science have enabled a myriad of applications and novel seamless interaction with computing interfaces. Voice Assistants (VAs) have become a norm for smartphones, and millions of VAs incorporated in smart devices are used to control these devices in the smart home context. Previous research has shown that they are prone to attacks, leading vendors to countermeasures. One of these measures is to allow only a specific individual, the device's owner, to perform possibly dangerous tasks, that is, tasks that may disclose personal information, involve monetary transactions etc. To understand the extent to which VAs provide the necessary protection to their users, we experimented with two of the most widely used VAs, which the participants trained. We then utilised voice synthesis using samples provided by participants to synthesise commands that were used to trigger the corresponding VA and perform a dangerous task. Our extensive results showed that more than 30% of our deepfake attacks were successful and that there was at least one successful attack for more than half of the participants. Moreover, they illustrate statistically significant variation among vendors and, in one case, even gender bias. The outcomes are rather alarming and require the deployment of further countermeasures to prevent exploitation, as the number of VAs in use is currently comparable to the world population. _Index terms--_ Voice Assistants, Audio deepfake, Android, iOS Privacy, Security, Synthesised voice ## 1 Introduction Digital Assistants (DAs), also referred to as Virtual Assistants, Intelligent Personal Assistants and Artificial Intelligence Assistants, are used more and more as their sophistication and capabilities are rapidly increasing, with the manufacturing of new products and services on top of them. 
DAs are defined as devices - usually speakers - or services integrated into mobile phones and web services that use advanced artificial intelligence (AI) and other advanced algorithmic approaches to i) perform tasks for an individual, ii) answer various questions, iii) maintain a conversation with the user and iv) retain information about the user and issue reminders and warnings based on environmental constraints, e.g., time and location. The above makes DAs extremely useful for people with mobility problems and the elderly. Chambers and Beaney [1] describe how VAs can be applied to patients' health and care needs. Also, Pradhan et al. [2] conducted an experiment with VAs over a period of 3 weeks on older people who do not use computing devices every day. In the end, elderly people consistently use the device to access information online, mainly to access health-related information. DAs use natural language processing (NLP), natural language understanding, and machine learning to learn and provide a personalised conversational experience continuously. Combining historical information such as purchase preferences, home ownership, location, family size, and so on, the underlying algorithms can create data models that identify behavioural patterns and then refine those patterns as data is added. By learning users' history, preferences, and other information, DAs can answer complex questions, provide recommendations, make predictions, and even initiate conversations. They facilitate users' daily lives by performing functions to manage electrical appliances, even those related to home security [3]. DAs are always there for users to inform them of any outstanding work to be done through reminders. A significant benefit is that they can simultaneously serve a large percentage of people. Finally, with prolonged use, they can gather more useful information to improve the user experience. Depending on the input format, DAs can be classified into three main categories. 
If their input is textual, we usually refer to them as _chatbots_. When the DA interacts with the user using voice, the DA is referred to as _voice assistant_. Finally, some DAs use visual input like digital images, videos, or a live camera. These assistants have the ability to do image processing to recognise objects in the image to help the users get better results from the clicked images. Computer vision also enables the system to recognise body language, which is a significant part of communication. To provide authentication followed by access to sensitive data, VAs can be trained to "obey" only one user or allow this single user to perform sensitive tasks, e.g., use services that involve payments. Every VA requires user training before it is used for the first time. Due to the increased volume of data they receive daily, they are becoming more and more efficient. However, training on a single voice is still quite challenging. It is worth noting that not all voice commands work in the same way: some commands can be executed by anybody, not only the user who 'trained' the VA. Still, other commands must be explicitly uttered by the user who trained the assistant, as they are considered dangerous, e.g., monetary transactions, phone calls and reading messages. As VAs become more popular, there are increasing security, privacy, and legal risks involved. This study aims to attack VAs trying to bypass the aforementioned restrictions. For this purpose, we collect, from different sources, voice data of the voice _trusted_ by the VA and use it to create voice samples that would trigger restricted voice commands. Thus, we examined ways in which we can extract information about a user through a number of available resources. For instance, we used face-to-face recordings and videos as input for our experiments. Another way of collecting data would be via phone calls. 
Nevertheless, it was inapplicable for most of our participants, so it has been excluded from our experiments. As a second step, following the data collection, voice synthesising with a third-party application took place in our experiments. Finally, the produced synthesised output was played from another device to attack the VA with appropriate commands. To do this, there are many methods, e.g., the adversary plays the audio when in proximity to the VA, or the audio is reproduced by the smartphone, triggering the VA [4, 5] exploiting the fact that some VAs do not distinguish the source of the audio. **Scope:** With the continuous integration of VAs in several devices in our homes, several devices are waiting to collect voice commands. As already discussed, some of these commands may have a significant impact on the users, e.g., sensitive information leakage and financial cost. To prevent such attacks, manufacturers have installed specific features that may allow only 'trusted' voices to perform such sensitive tasks. This begs the question of assessing the trustworthiness of such a protection mechanism. To answer this, we have to consider it in the context of voice synthesis and the wide availability of voice samples of users in modern societies. The latter is a crucial parameter, as in the ubiquitous computing environment that we are living in, a plethora of means can record one's voice. Therefore, the main goal of this research is to determine whether an adversary can replicate a VA's 'trusted' voice in a real-world setting. To this end, we conduct a set of targeted experiments that harness the voice of a user from various sources to train a voice synthesis model and use it to issue a sensitive command against two of the most widely used proprietary VAs, namely Google's Assistant and Apple's Siri. 
As a next step, we use the same trained model to attack voice authentication systems to determine whether commercial off-the-shelf systems that use such authentication are vulnerable to such attacks. To the best of our knowledge, this is the first work in the literature to perform a deepfake audio experiment in a broad and open setting, shedding light on the security of a technology that is continuously being integrated into devices and services. **Main contributions and results:** Our results illustrate a rather alarming state of practice in two of the most used VAs we tested. In practice, we show that around 3 out of 10 of our attacks successfully deceive the VAs into performing an action that should be performed only by authorised users, using an off-the-shelf open-source solution. Moreover, our research indicates that these results may significantly vary among vendors and even gender. Indeed, in our experiments, measurements between vendors illustrate huge differences in users' exposure. At the same time, for one OS, it is shown that there is a similar gap which depends on the gender of the simulated voice. The latter practically illustrates the possible gender biases in cybersecurity research. **Potential impact:** In 2019, approximately 3.25 billion VA devices were purchased around the world. Forecasts suggest that by the end of 2023, the number of VAs will reach around 8 billion units - a number in the scale of the world's population [6]. VAs are a feature found in many consumer electronics devices, ranging from smartphones to mobile-operated car systems. Thus, it is essential to understand the extent of the risks involved in attacks on VAs. Notably, VAs are usually not located in isolated environments or only as smartphone apps. A quite typical case where VAs are increasingly being used is a Smart Home. 
This term refers to an integrated system of interconnected devices, sensors, and services which automates various tasks inside a house and can be controlled remotely from Internet-connected devices such as smartphones and tablets. The user can remotely control and schedule functions such as access to the house and premises, activation and deactivation of a device, control of the alarm system etc. These devices are usually connected to a central "gateway". This way, the user can control all the connected appliances, including but not limited to lighting, the thermostat and boilers, through a personal device, even when physically far away from it. At any time, the user is aware of any operation of the house through relevant notifications. However, there are many risks, mainly regarding security issues affecting users. Given that VAs are often used to control home automation, if they can easily be deceived, the impact on home automation can be catastrophic. Beyond electricity costs, a Smart Home also allows for physical access automation, implying that attacks may extend beyond the cyber layer and reach the physical one. Figure 1 illustrates the devices and sensors of a Smart Home, indicating which ones a VA can control, to convey the potential risks from their abuse. **Ethics:** To perform this study, an extensive experiment was conducted with 140 people participating on a volunteer basis. We detailed the scope, goals, and steps of the experiment to all participants before their participation. All steps were performed by the participants without installing anything on their devices. Additionally, the attacks did not incur any costs for the participants. **Road map:** The rest of this work is organised as follows. In the next section, we analyse the related work regarding VAs and voice synthesis. In Section 3.2, we describe how data can be collected from various media and manipulated accordingly. 
We describe data collection with malicious software and proceed with the processing and synthesis of voice data. The two most important sections are the attacks on VAs, which are demonstrated through our application, showing how easily a system can be tricked with a synthesised voice into performing protected actions. The robustness and validity of our results are guaranteed through the experimental results of a large group of people that participated in our research. Finally, the work concludes by summarising our contributions and discussing possible countermeasures and future work. ## 2 Related Work In the following paragraphs, we first provide an overview of the attacks on VAs, trying to illustrate the various risks users are exposed to. Then we present how voice synthesis works, focusing on the methods employed by the tool we used to synthesise commands with the participants' voices. Figure 1: Devices and sensors in a Smart Home. The voice icon indicates devices controlled by VAs. ### Attacks on Voice Assistants Kumar et al. [7, 8] introduced the skill squatting attack targeted at Alexa. In essence, the researchers used prerecorded samples to identify where the VA misinterprets the audio input. Some of these errors were found to be consistent, so they could be exploited to lure a user to a malicious application without the user being aware of it. The attack could be further tuned into a spear phishing attack to target specific demographic groups. In the REEVE attack of Yuan et al. [9], the attacker performs radio signal injection to trick Alexa in Amazon's Echo into performing specific tasks, identifying more than one hundred vulnerable skills and applets. A critical issue in VAs is the fact that humans and machines conceive audio totally differently. The latter opens the door to a wide range of attacks [10] and wrong triggers [11], as what we hear and say is not translated the same way by a machine. 
Moreover, there is a wide audio spectrum that could be sensed by a machine but not by the human ear. As a result, an adversary can generate audio that does not make any sense for a human but is interpretable by a machine [12]. Practically, the adversary can use any device to play the audio which could trigger a command to a VA. Such audio may even be embedded in songs [13], making it impossible for the human to understand the origin of the attack. Similar attacks can be considered the DolphinAttack [14] and the ones from Schonherr et al. [15], Roy et al. [16] and Yan et al. [17] as the audio is inaudible to the human ear but interpretable by a machine. In general, the apps and skills of VAs are not properly monitored, allowing malicious ones to appear in the relevant stores [18, 19]. In this context, Zhang et al. [20] introduced two new attacks, namely voice squatting and voice masquerading. In the former, the attacker uses similarly pronounced phrases to trigger malicious skills instead of benign ones. In the voice masquerading attack, the adversary impersonates the voice of the VA or of another legitimate skill to lure the user into disclosing sensitive information. Attempting to infringe on the Android operating system is not an emerging research object. In every version of Android, attempts have been made to find vulnerabilities. Diao et al. [21] presented an approach (GVS-Attack) to launch permission bypassing attacks from a zero-permission Android application (VoicEmployer) through the speaker. Through the Android Intent mechanism, VoicEmployer triggers Google Voice Search in the foreground and then plays prepared audio files in the background. Google Voice Search can recognize this voice command and execute corresponding operations. Also, they found a vulnerability in status checking in the Google Search application, which can be utilised to dial arbitrary numbers even when the phone is securely locked with a password. 
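The inaudible-command attacks mentioned above (e.g., the DolphinAttack) rest on a simple signal-processing trick: the command is amplitude-modulated onto an ultrasonic carrier that humans cannot hear, while the non-linearity of the microphone hardware demodulates the envelope back into the audible band. The toy sketch below illustrates only this principle; the frequencies, the square-law demodulation model and the function names are simplifications of ours, not code from any of the cited works.

```python
import math

# Toy sketch of an inaudible-command attack: a low-frequency "command"
# (a single tone standing in for speech) is amplitude-modulated onto an
# ultrasonic carrier; a square-law non-linearity, as found in microphone
# hardware, recovers the low-frequency envelope. Values are illustrative.

SAMPLE_RATE = 192_000  # Hz, high enough to represent the carrier

def modulate(baseband_hz=400, carrier_hz=30_000, duration_s=0.01):
    n = int(SAMPLE_RATE * duration_s)
    signal = []
    for i in range(n):
        t = i / SAMPLE_RATE
        envelope = 0.5 * (1 + math.sin(2 * math.pi * baseband_hz * t))
        signal.append(envelope * math.sin(2 * math.pi * carrier_hz * t))
    return signal

def demodulate(signal):
    # Square-law non-linearity followed by a crude moving-average
    # low-pass filter recovers a signal that follows the envelope.
    squared = [s * s for s in signal]
    window = 64
    return [sum(squared[i:i + window]) / window
            for i in range(len(squared) - window)]

recovered = demodulate(modulate())
```

The carrier itself averages out in the low-pass step, so only the slow envelope (the "command") survives demodulation, which is exactly why such audio is inaudible to humans yet interpretable by the device.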
Alepis and Patsakis [4] thoroughly examine in their research the dangers lurking in mobiles with intelligent VAs. They show that this is not a fictitious threat but a real scenario that can greatly expose users. In this work, detailed real scenarios of attacks with voice commands were implemented. However, due to the use of AI, these systems are more difficult to break. An important point of their research is that the attacks were not limited to mobile devices: a variety of different devices incorporate VAs, such as smartwatches, personal computers, or even smart TVs. In another work, Zhang et al. [5] proposed a stealthy attacking method targeting VAs on smartphones, which could activate the VA and apply further attacks, such as leaking private information, sending forged SMS/Emails, and calling arbitrary numbers. To hide the attack from users, an optimal attacking time was chosen. Through their proof-of-concept attack targeting Google Assistant on the Android platform, they demonstrated the feasibility of the attack in real-world scenarios. Esposito et al. [22] recently took advantage of Alexa's inability to distinguish voice commands from audio files that it reproduces to trick the VA into performing unauthorised actions. Similarly, Vasy of Zhang et al. [23] is a spyware app which records activation keywords and replays them at targeted intervals to launch an attack on the VA locally, solely studying the case of a handful of Android phones. Chen et al. [24] synthesise voice commands that are not interpretable by most humans yet sound like normal speech to launch attacks against commercial VAs. However, their attacks target generic automatic speech recognition systems and do not cover the cases of trusted voices. Finally, Wenger et al. [25] used existing voice datasets to train commercial speaker-recognition systems and then used voice synthesisers to attack them with high success rates. 
For more on the security of VAs, the interested reader may refer to [26, 27, 28, 29]. ### Voice synthesis For centuries people have been experimenting with the development of devices that could replicate and produce the voice of animals and, ultimately, humans. Nevertheless, the first devices to produce human speech date back to the 18th century with the pioneering works of Kratzenstein and von Kempelen. With the technological advances, researchers managed not only to replicate the human voice but to synthesise voices that can even utter arbitrary texts. To achieve this, there are several approaches. For instance, we have articulatory synthesis [30] where, as the name implies, the goal is to simulate the mechanism that humans speak; thus, one simulates the lips, tongue etc. There is also concatenative synthesis [31] in which the goal is first to isolate various speech segments, ranging from sentences down to diphones and phonemes, and use this as a reference to transform text to voice. In formant synthesis, one tries to replicate the pitch (frequency) and volume (amplitude) of a voice, following the way that we replicate the music produced by musical instruments [32]. Statistical parametric speech synthesis [33] uses a statistical model to generate speech. The model is trained on a large dataset of speech samples to learn the statistical patterns that govern how sounds are produced. Once the model is trained, it can be used to generate new speech samples by sampling from the learned distribution of sounds. This can be done by specifying the desired text to be spoken, along with any desired prosodic features such as pitch and speaking rate. The model then generates a synthetic speech sample corresponding to the specified text and prosodic features. There are several different types of statistical parametric speech synthesis models, including hidden Markov models and neural network-based models. 
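To make the formant-synthesis approach mentioned above more concrete, the sketch below approximates a vowel-like sound by summing sinusoids at formant frequencies, much as one would replicate the tone of a musical instrument. The formant and amplitude values are illustrative assumptions of ours (roughly the first three formants of an /a/-like vowel), not taken from any particular speech-synthesis system.

```python
import math

# Toy formant synthesis: a vowel-like waveform built by summing
# sinusoids at assumed formant frequencies with decreasing amplitudes.

def formant_tone(duration_s=0.1, sample_rate=8000,
                 formants=(730, 1090, 2440),   # Hz, illustrative values
                 amplitudes=(1.0, 0.5, 0.25)):
    n_samples = int(duration_s * sample_rate)
    samples = []
    for i in range(n_samples):
        t = i / sample_rate
        value = sum(a * math.sin(2 * math.pi * f * t)
                    for f, a in zip(formants, amplitudes))
        samples.append(value)
    return samples

wave = formant_tone()
```

A real formant synthesiser would also shape the amplitude envelope over time and vary the formants to move between phonemes, but the core idea is this additive construction.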
Finally, we have neural speech synthesis, which started as a type of statistical parametric speech synthesis using neural networks as the underlying model. The basic idea is to use a neural network to map text input to speech output. The network is trained on a large dataset of speech samples to generate the corresponding speech waveform. Different types of neural networks can be used for speech synthesis, including feedforward networks and recurrent networks. Recurrent networks are particularly popular because they can handle sequential data, such as speech, which has temporal dependencies. In a neural speech synthesis system, the input text is first processed by a text encoder network that maps the text to a high-dimensional representation. This representation is then passed to a speech decoder network that generates the speech waveform. The decoder network is typically a generative model, such as a Generative Adversarial Network (GAN) or a Variational Autoencoder (VAE), which generates the final speech output. This method of speech synthesis has been shown to produce high-quality speech that is very similar to human speech. Additionally, neural speech synthesis can also be used to control various aspects of the speech output, such as speaking rate and pitch, by conditioning the network on these features during training. For the composition of a synthesised user's voice that we utilised in the attacks on VAs, we used the "Real Time Voice Cloning" (RTVC) [34], which relies on the work of Jia et al. [35]. It is a three-stage deep learning framework that performs voice cloning in real time. Using an utterance of speech of 5 seconds, the framework can capture in a digital format a meaningful representation of the spoken voice. Thus, by giving a text prompt, it can perform the text-to-speech conversion using any voice extracted by this process. 
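The three-stage structure of such a framework (speaker encoder, synthesizer, vocoder) can be sketched as follows. This is a structural illustration only: the class names are ours, and the toy arithmetic inside each stage stands in for the actual neural networks used by RTVC.

```python
# Structural sketch of the three-stage voice-cloning pipeline:
# speaker encoder -> synthesizer -> vocoder. The "networks" below are
# toy stand-ins, not the real models.

class SpeakerEncoder:
    """Maps a short reference utterance to a fixed-size speaker embedding."""
    def embed(self, utterance_samples):
        # Toy embedding: two aggregate statistics of the waveform.
        n = len(utterance_samples)
        mean = sum(utterance_samples) / n
        energy = sum(x * x for x in utterance_samples) / n
        return (mean, energy)

class Synthesizer:
    """Predicts spectrogram frames from text, conditioned on the embedding."""
    def synthesize(self, text, speaker_embedding):
        # Toy spectrogram: one "frame" per character, shifted by the embedding.
        mean, energy = speaker_embedding
        return [ord(c) * 0.01 + mean + energy for c in text]

class Vocoder:
    """Infers an audio waveform from the spectrogram frames."""
    def waveform(self, spectrogram):
        # Toy vocoder: expand each frame into a few "samples".
        return [frame for frame in spectrogram for _ in range(4)]

def clone_voice(reference_utterance, text):
    embedding = SpeakerEncoder().embed(reference_utterance)
    spectrogram = Synthesizer().synthesize(text, embedding)
    return Vocoder().waveform(spectrogram)

# A short reference utterance conditions every frame the synthesizer emits.
reference = [0.1, -0.2, 0.3, -0.1, 0.05]
audio = clone_voice(reference, "Hey Google, call John")
```

The key design point, mirrored here, is that the speaker embedding is computed once from a short utterance and then conditions every frame of the synthesizer, so arbitrary text can be spoken in the cloned voice.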
After long hours of training and by using a large dataset, the framework could clone voices it has never heard of and generate speech from arbitrary text. Since it performs neural speech synthesis, the application consists of three parts. The first part is a speaker encoder that derives an embedding from the short utterance of a single speaker. The embedding is a meaningful representation of the voice of the speaker. The second part is a synthesizer that, conditioned on the embedding of a speaker, generates a spectrogram from text and the last one is a vocoder that infers an audio waveform from the spectrograms generated by the synthesizer. More precisely, RTVC uses the synthesizer Tacotron [36], which is a recurrent sequence-to-sequence model that predicts a mel spectrogram from the text. RTVC features an encoder-decoder structure that is bridged by a location-sensitive attention mechanism. Individual characters from the text sequence are first embedded as vectors. Convolutional layers follow to increase the span of a single encoder frame. These frames are passed through a bidirectional Long Short-Term Memory (LSTM) to produce the encoder output frames. This is where SV2TTS (Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis) [35] modifies the architecture: a speaker embedding is concatenated to every frame that the Tacotron encoder produces. The attention mechanism attends to the encoder output frames to generate the decoder input frames. Each decoder input frame is concatenated with the previous decoder frame output passed through a pre-net, making the model autoregressive. This concatenated vector goes through two unidirectional LSTM layers before being projected to a single mel spectrogram frame. Another projection of the same vector to a scalar allows the network to predict on its own that it should stop generating frames by emitting a value above a certain threshold. 
The entire sequence of frames is passed through a residual post-net before it becomes the mel spectrogram. In addition, it is worth mentioning that in SV2TTS and Tacotron, WaveNet is the vocoder [37]. WaveNet has been at the heart of deep learning with audio since its release and remains state of the art regarding voice naturalness in Text-to-Speech (TTS). However, it is also known for being the slowest practical deep learning architecture at inference time. More recent research has improved it to make the generation near real-time or faster than real-time without significantly impacting the quality of the generated speech. Thus, in RTVC, WaveRNN [38] has been selected. ## 3 Attack scenarios ### Threat assumptions The goal of this work is to assess the security of commercial VAs against voice synthesis attacks. Since these commercial VAs are closed-source and little information about their internal mechanisms is provided, we consider them as black boxes. Moreover, we assume an adversary which can collect some voice samples of the victim (see paragraphs below) and has the capacity to get in proximity to the victim's VA unattended and play an audio. For the VA, we assume that it has been trained with the victim's voice to perform potentially dangerous tasks. In our experiments, we consider two of the most widely used VAs, namely Google Assistant and Siri. However, the method is generic enough to attack any other VA. ### Possible information sources As mentioned above, the aim of this research is to evaluate attacks on VAs using synthesised voice commands. Clearly, to achieve this, the attacker must collect voice samples which may originate from different sources from the potential victim. We highlight this heterogeneity as the quality of the samples from different sources may vary for various reasons. These may include, among others, the presence of background noise, small-length samples, and poor recording quality. 
Nevertheless, since we live in the information era, where people share almost everything through the internet, one has to understand that the large amount of data available online through, e.g., social media makes it easier to extract voice samples for attacks. More specifically, we consider four distinct and realistic attack scenarios, which are analysed in the following paragraphs. In three of them, we consider an active adversary who interacts with the potential victim via three different modalities. Finally, in the last scenario, we consider a passive adversary. #### 3.2.1 Face to Face user interaction The first way of collecting data requires personal contact with the _victim_, the targeted individual whose VA would be attacked. All that is needed is a recording device that can be activated during the conversation with the potential victim. Evidently, an important problem encountered in such cases is the maintenance of favourable conditions in terms of sound, as a lot of noise can distort the collected data. #### 3.2.2 Via a Call An alternative way of collecting the voice samples is via call recording. This was a less "risky" way, as it did not bring the attacker into personal contact with the "victim", who therefore could not perceive being recorded. All that was needed was to make a call to the victim under any pretext. This can be considered a social engineering attack, as the attacker needs to maintain a conversation with the victim for enough time to collect the necessary voice samples. Evidently, the network quality and the location of the victim play a crucial role, as these two factors can significantly affect the quality of the recording. #### 3.2.3 Spyware and app vulnerabilities Numerous malicious applications occasionally appear in Google Play and the App Store, many of which manage to trick millions of users into installing them. 
Moreover, the fact that some of them do not appear to act malicious does not necessarily mean that they are not abusing users' data or that they do not contain vulnerabilities that can be abused, even in the backend. Given that numerous applications in Google Play and App Store use the microphone permission to record users' voices, many of which may remotely store the collected information, it is clear that thousands of developers may access the users' voice samples. Should the application be considered trusted by the user, the samples can be ample, guaranteeing that there would be enough without background noise and of good quality to be used for synthesis. Similarly, unprotected AWS buckets and vulnerable APIs may grant an attacker access to backends that apps store video or audio samples. #### 3.2.4 Social media Technology is increasingly invading people's daily lives. Social media are gaining an indispensable role in our lives. Users publish various images, videos and audio content freely, providing access to them to almost everyone. This content is, in many cases, of very high quality as it is recorded via modern cameras and microphones, which have very high resolution, sensitivity and inherent noise-cancellation mechanisms. Moreover, on many social media, users share their thoughts or speak in interviews, so the background noise is minimal. Therefore, an attacker can easily get hold of ample content and extract many voice samples of a potential victim without the need for direct interaction or consent. ### Attack preparation and impact Based on the above four scenarios, we can safely assume that after the first steps, all four scenarios lead to the same output, an audio file containing the user's voice. In the first three scenarios, this file is the immediate outcome of the recording. However, in the case of social media, since users most often share videos than audio content, the attacker has to extract the audio from the video. 
Solutions such as FFmpeg1 can easily perform this task without degrading or altering the audio quality. Once the adversary has the audio files, their next task is to isolate the victim's voice. For this task, she may use Audacity2, which can automatically split audio files into segments based on silenced parts. Then, these files are sent to the voice synthesiser, which utilises them to create an audio file that, machine-wise, resembles the user's voice. This file is then reproduced to the VA, which would execute a command allowing the adversary to perform a task with elevated privileges, a task that would only be allowed by the trusted voice of the targeted user. Depending on the command, the impact could range from information leakage, e.g., reading emails, messages etc., and money loss, e.g., ordering a product/service using the user's stored credit card, to physical attacks, e.g. granting access to the victim's premises by unlocking a door. Other attacks, such as resource exhaustion and appliance fatigue leading to breakdowns or monetary loss and reputation damage through posting unauthorised content on social media and communication platforms, are also other possible impacts. The attacker could even exploit VAs to collect 2FA tokens, further deepening the possible impact of such an attack. Footnote 1: [https://ffmpeg.org/](https://ffmpeg.org/) Footnote 2: [https://www.audacityteam.org/](https://www.audacityteam.org/) In Figure 2, we illustrate the possible attack scenarios that were described above. Moreover, we illustrate the possible impact. ### Training the voice assistants and authentication Our experimental process took into consideration the training of the VAs from Google and Apple to authenticate the user's voice so that they respond only to the user to which they have been trained. 
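The silence-based splitting used in the attack-preparation pipeline above can be sketched as follows. This illustrates the idea of segmenting a recording at stretches of near-silence; it is not Audacity's actual algorithm, and the threshold and minimum-gap values are arbitrary choices of ours.

```python
# Simplified silence-based splitter: a segment is closed whenever the
# absolute amplitude stays below a threshold for min_gap consecutive
# samples. Samples below the threshold are treated as silence and are
# not included in any segment.

def split_on_silence(samples, threshold=0.05, min_gap=3):
    segments, current, silent_run = [], [], 0
    for s in samples:
        if abs(s) < threshold:
            silent_run += 1
            # Close the current segment once the silence has lasted
            # at least min_gap samples.
            if silent_run >= min_gap and current:
                segments.append(current)
                current = []
        else:
            silent_run = 0
            current.append(s)
    if current:
        segments.append(current)
    return segments

# Two bursts of "speech" separated by a stretch of near-silence.
audio = [0.4, 0.5, 0.3] + [0.0] * 5 + [0.6, 0.4]
segments = split_on_silence(audio)
# segments -> [[0.4, 0.5, 0.3], [0.6, 0.4]]
```

A production tool would additionally work on short energy windows rather than raw samples and keep some padding around each segment, but the segmentation logic is the same.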
In both cases of the examined VAs, Google Assistant and Siri, the mobile operating systems ask the user to speak specific commands, aiming both in the calibration of their speech recognition module and also to recognise the owner of the mobile device successfully. Following this, in our experiments, we have chosen a text containing commonly spoken phrases from users' daily lives. The fact that the companies acknowledge that they perform voice authentication is evident, for instance, by Google stating that _"When you turn on Voice Match, you can teach Google Assistant to recognize your voice so it can verify who you are before it gives you personal results."3_ Footnote 3: [https://support.google.com/assistant/answer/9071681](https://support.google.com/assistant/answer/9071681) Similarly, Apple states for iOS that _"You can control iPhone with just your voice. Speak commands to perform gestures, interact with screen elements, dictate and edit text, and more."4_ Moreover, Homepod also uses Siri and the fact that it is trained with a specific voice that it is considered trusted is acknowledged by the following quote: _"Siri can recognise multiple voices, so everyone in your home can use HomePod to enjoy personalised music recommendations, access their own playlists, send and read messages, make phone calls and more."_5 Footnote 5: [https://support.apple.com/en-gb/guide/homepod/apdi1841a8f81/homepod](https://support.apple.com/en-gb/guide/homepod/apdi1841a8f81/homepod) ## 4 Experimental results To assess the security offered by VAs, we conducted an experiment targeted on Android and iOS, the two most popular mobile Operating Systems, that tries to replicate the attacks that could be launched from face-to-face, spyware, and social media. We did not consider phone call attacks as they could be considered a case of a face-to-face attack with possibly additional noise due to the network. 
Moreover, they would require more interaction with the experiment participants, many of whom might not feel comfortable sharing their telephone numbers. In what follows, we detail the experiment and its findings. The phases of the experiment are illustrated in Figure 3.

Figure 3: The phases of the experiment.

Figure 2: The attack scenarios considered in this work.

### Concept and experimental setup

To conduct our experiment, we first modelled the attacks and tried to alleviate some technical issues, e.g. splitting the recordings to keep only the victim's voice. We concluded that we would request the participants to provide us with clear recordings of the same given text, see Figure 4. We consider that only two samples are needed, one recorded through a mobile phone and one from another device, e.g. a PC. This way, we would have the participant's voice reading an arbitrary text of realistic length, without requesting the participants to perform exhaustive actions, yet with enough material to synthesise their voice. To scale the experiment, and in order not to intervene and tamper with the participants' devices, we opted to provide the participants with the synthesised samples and request them to report the actions of their VA. This way, the participants had the necessary guarantees that no harm would be caused to their software and/or hardware, and also had full control of opting out. Based on the above, we created a description of the experiment and notified potential participants orally and in written form of the scope, goals, and steps of our experiments. Therefore, the participants were fully informed, voluntarily opted in, and, as already mentioned, could opt out whenever they deemed appropriate. Participants who undertook the experiments, after training the VA, emailed the two requested recordings. For each participant, we synthesised voice samples that would trigger the VA and request a phone call.
These recordings were sent to the participants, who replayed the recordings to their VAs, monitored their actions and subsequently sent us back their results for further processing and examination. Since Android and Apple smartphones have integrated VAs and represent approximately 99% of the mobile market, we opted to test only two VAs, Google Assistant and Siri. While others, e.g. Alexa and Bixby, might have millions of installations and dedicated devices, they might not be available to all potential participants and would introduce representation biases. All participants had to enable the VA and then train it using their own voice to be eligible for the experiment and be exposed to the attacks. As described, the experiment's execution consisted of three phases. In the first phase, data collection, the participants were requested to provide one mobile recording and a video recorded from another device, e.g. a desktop computer. The mobile recording represents the face-to-face and spyware attacks, and the video recording represents the attack from published social media content. We requested the participants to record the samples in places with no ambient noise. All participants read the same text, which did not include commands that could cause the activation of the VA, to increase the results' credibility. Additionally, the participants were requested to provide information about their mobile hardware and software versions to improve the quality of the statistical results. Using the files the participants sent, we synthesised the "attack" recordings with the phrases "Hey Google, call John" and "Hey Siri, call John" (depending on their devices) using an off-the-shelf open-source solution, namely the "Real Time Voice Cloning" project. In the second phase of the execution, we sent participants the synthesised voice samples and asked them to use them to attack their VAs, i.e. reproduce the content in proximity to the VA.
Figure 4: The dictated text that users recorded for the experiment.

Please note that we consider the smartphones unlocked. We argue that this is a soft requirement, as VAs in a smart-home environment are not integrated into a smartphone device and so would not have any type of lock. Finally, participants forwarded the results of the attacks and their observations for the last phase, which is the processing and presentation of the outcomes reported by the users.

### Dataset composition and overall findings

In total, 140 people participated in the experiment. The participants were not native English speakers and belonged to the age group of 18-40. In terms of the operating system used, 88 participants had Android devices, while the remaining 52 had iOS. Figure 5 demonstrates the distribution of OS per gender, while Table 1a shows the distribution of participants among Android vendors and versions. For simplicity, we only report the major Android version. Similarly, for iOS, Table 1b reports the iOS versions of the participants. For clarity, we report only the major release and branch. Regarding vendors, the vast majority is shared by two companies, namely Xiaomi and Samsung. Therefore, our sample resembles the mobile vendor market share in Europe ([https://gs.statcounter.com/vendor-market-share/mobile/europe](https://gs.statcounter.com/vendor-market-share/mobile/europe)). The participants performed multiple repetitions using the generated recordings. The results varied and can be separated into three categories: fully successful; semi-successful (noted as _trigger_), when the VA was triggered but did not understand the command that was addressed to it; and unsuccessful, when there was no response or action from the VA. From now on, we will refer to the attacks for which the participants provided their audio samples directly, and from which we synthesised the voice commands, as audio attacks.
For the voice samples whose audio was extracted from the videos, we will refer to the corresponding attacks as video attacks.

Table 1: Distribution of vendors and versions in our sample.

Figure 5: Participant distribution per OS and gender.

In general, as illustrated in Figure 6, some consistent patterns emerge. The overall success rate is roughly three attacks out of ten (28.39%), with a significantly higher success rate (31.17%) in audio attacks and a significantly lower success rate (25.34%) in video attacks. The overall variation in trigger results is on the scale of 0.5%, so it is considered insignificant. However, the success rates of the two OS platforms differ significantly (by 4.2%). More precisely, out of the 2180 attacks that were carried out, 619 were successful (28.39%), 96 were triggers (4.4%), and 1465 were unsuccessful (67.2%). Figure 6 illustrates all these results. As highlighted, there is a higher success rate in audio attacks. More precisely, out of 1142 audio recordings, 356 were successful (31.17%), and 741 failed (64.89%). On the contrary, out of the 1038 video recordings, only 263 were successful (25.34%), and 724 failed (69.75%).

## 5 Discussion

In all cases, for the most part, positive results emerged, which raised many questions. If a VA can be tricked so easily, that is, with applications that are either available for free or easy to implement, how easy could it be to trick a security system where all the devices are connected to the same network? For example, a Smart Home consists entirely of such devices. What if anyone could open a person's home anytime with just one voice command? Are we heading into a modern age faster than we should, leaving huge security gaps behind? Over the years, more and more devices have become part of Smart Homes, trying to offer more convenience to users, yet this may come at a considerable cost.
Human traits have been known to play a role in cybersecurity [39]; nevertheless, this role may not always be obvious. This can be augmented by biases in artificial intelligence [40, 41] since, over the past few years, the convergence of cybersecurity and artificial intelligence has been continuously growing to address the challenges posed by big data. Gender biases are often present in artificial intelligence [42] and may imply further issues for cybersecurity. Our work illustrates that there is a significant gender bias in attacks against iOS devices. Given that the samples of the participants in our experiment were processed in exactly the same way for both OSes, it is evident that iOS devices perceive the attacks entirely differently based on the gender of the user. This difference is so evident that the attacks against females were 10.98% successful versus 35.24% against males, which is more than a threefold increase. Notably, the gender imbalance was also exhibited in the experiments of [25], who categorise gender as a decisive factor for the success of their attacks. Nevertheless, in our experiments, attacks against females were always less successful. The above illustrates that there are inherent biases in the training of voice-authentication systems, which may be triggered differently.

Figure 6: Overview of experimental results.

Finally, in Figure 11, we illustrate the number of participants for which there was at least one successful attack. Given her proximity to the VA and lack of monitoring, an adversary may persistently perform attacks until she achieves the desired result. Again, we notice that there are significant differences among the vendors. More precisely, for Android, for 72.73% of the users there is at least one successful attack, while for iOS, this drops down to 28.84%. Practically, Android users are approximately 2.5 times more vulnerable than iOS users.
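As a sanity check, the percentages and ratios quoted above follow directly from the raw counts reported in this and the previous section; the short script below (all counts copied from the text) reproduces them:

```python
# All counts below are copied from the paper's reported results.
total, success, trigger, fail = 2180, 619, 96, 1465
assert success + trigger + fail == total

def pct(part, whole):
    # Percentage rounded to two decimals, as reported in the text.
    return round(100 * part / whole, 2)

overall = pct(success, total)   # 28.39 -> overall success rate
audio = pct(356, 1142)          # 31.17 -> audio attacks
video = pct(263, 1038)          # 25.34 -> video attacks

# iOS gender gap: 35.24% success against males vs 10.98% against females.
gender_ratio = 35.24 / 10.98    # ~3.2, "more than a threefold increase"

# At least one successful attack: 72.73% of Android vs 28.84% of iOS users.
exposure_ratio = 72.73 / 28.84  # ~2.5 times more vulnerable
```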
Having our presented results in mind, we should consider how much they could be improved if we incorporated even more sophisticated AI-empowered voice-synthesis modules, which could provide voice outputs even closer to those of the real user.

## 6 Conclusions

The continuous integration of DAs enables seamless human-computer interaction. Even more, the use of VAs provides this functionality in a more human way. It is easier and more direct for us to ask for something we want by speaking. This convenience comes with several drawbacks, as VAs are not bulletproof. On the contrary, many attacks in the literature illustrate various ways to trick VAs into performing tasks that are not initiated by the users.

Figure 7: Audio attacks per vendor.

Figure 8: Video attacks per vendor.

Following this line of research, we performed a large-scale experiment, the largest in the related work, to assess the security of two of the most widely used commercial VAs against synthesised voices. One of the key differences is that the VAs were trained with the users' voices so that they would not be triggered to perform any task by a voice other than their "master's". Our experiments illustrate some startling facts. More precisely, in all VAs, using an open-source voice synthesiser with voice samples provided by the participants, approximately one out of three attacks is successful. Even more, there is at least one successful attack for more than half of the participants. Our experiments indicate that there are big variations among vendors regarding their susceptibility to such attacks. Additionally, there are underlying gender biases that make the attacks significantly more effective against males in the case of iOS.

Figure 9: Audio attacks per OS and Gender.

Figure 10: Video attacks per OS and Gender.

We believe that the above requires more attention from manufacturers. The diversity of the results in terms of gender and manufacturer is subject to different interpretations.
Yet, we believe that for systems that are so widely used, integrated into millions of devices and interconnected to so many others, such issues are very grave. For starters, given that most of these attacks would initially be prerecorded, a randomised request for a response could be considered a temporary patch. In the long run, VAs should integrate audio deepfake detection mechanisms [43, 44] to determine whether a voice has been synthesised. Other additional measures may include the occasional use of 2FA, or checking the proximity of a user device associated with the trusted voice, for cases where sensitive content is requested, a monetary transaction is initiated, or a potentially dangerous command is detected.

## Acknowledgments

This work was supported by the European Commission under the Horizon Europe Programme, as part of the project LAZARUS ([https://lazarus-he.eu/](https://lazarus-he.eu/)) (Grant Agreement no. 101070303). The content of this article does not reflect the official opinion of the European Union. Responsibility for the information and views expressed therein lies entirely with the authors.
2303.10001
Improving Data Transfer Efficiency for AIs in the DareFightingICE using gRPC
This paper presents a new communication interface for the DareFightingICE platform, a Java-based fighting game focused on implementing AI for controlling a non-player character. The interface uses an open-source remote procedure call, gRPC to improve the efficiency of data transfer between the game and the AI, reducing the time spent on receiving information from the game server. This is important because the main challenge of implementing AI in a fighting game is the need for the AI to select an action to perform within a short response time. The DareFightingICE platform has been integrated with Py4J, allowing developers to create AIs using Python. However, Py4J is less efficient at handling large amounts of data, resulting in excessive latency. In contrast, gRPC is well-suited for transmitting large amounts of data. To evaluate the effectiveness of the new communication interface, we conducted an experiment comparing the latency of gRPC and Py4J, using a rule-based AI that sends a kick command regardless of the information received from the game server. The experiment results showed not only a 65\% reduction in latency but also improved stability and eliminated missed frames compared to the current interface.
Chollakorn Nimpattanavong, Ibrahim Khan, Thai Van Nguyen, Ruck Thawonmas, Worawat Choensawat, Kingkarn Sookhanaphibarn
2023-03-11T05:12:02Z
http://arxiv.org/abs/2303.10001v1
# Improving Data Transfer Efficiency for AIs in the DareFightingICE using gRPC

###### Abstract

This paper presents a new communication interface for the DareFightingICE platform, a Java-based fighting game focused on implementing AI for controlling a non-player character. The interface uses an open-source remote procedure call, gRPC to improve the efficiency of data transfer between the game and the AI, reducing the time spent on receiving information from the game server. This is important because the main challenge of implementing AI in a fighting game is the need for the AI to select an action to perform within a short response time. The DareFightingICE platform has been integrated with Py4J, allowing developers to create AIs using Python. However, Py4J is less efficient at handling large amounts of data, resulting in excessive latency. In contrast, gRPC is well-suited for transmitting large amounts of data. To evaluate the effectiveness of the new communication interface, we conducted an experiment comparing the latency of gRPC and Py4J, using a rule-based AI that sends a kick command regardless of the information received from the game server. The experiment results showed not only a 65% reduction in latency but also improved stability and eliminated missed frames compared to the current interface.

Remote Procedure Call, gRPC, Producer-consumer synchronization, Fighting Game, DareFightingICE

## I Introduction

FightingICE [1] is a fighting game platform ([http://www.ice.ci.iritsumei.ac.jp/~fugai/](http://www.ice.ci.iritsumei.ac.jp/~fugai/)) that is focused on implementing artificial intelligence (AI) for controlling a non-player character to fight against another computer-controlled character. In FightingICE, the game provides the AI with information about the current game state, allowing developers to create intelligent algorithms for the AI to use in order to decide on the best action to take within a short response time (16.66 ms in FightingICE).
The challenge for developers is to create AI that is able to make quick and effective decisions based on the current game state in order to emerge victorious in the fighting arena. This platform is written in Java, and initially only supported AI development in Java. Py4J [2] is a Python library that allows Python programs to access Java objects in a Java Virtual Machine (JVM) dynamically. It is a useful tool for integrating Python and Java code, and allows developers to call Java code from their Python programs and vice versa. Py4J is designed to be user-friendly and makes it easy to integrate Python and Java into a single application. The current version of the FightingICE platform has been integrated with Py4J implementation to provide a convenient way to combine the power of Python and Java, and has made it possible to use the strengths of Python programming to develop AIs for the FightingICE platform. DareFightingICE [3] is an enhanced version of the FightingICE platform that was proposed in 2022. This enhanced version includes improved sound design, allowing visually-impaired players to play the game, and providing a testing ground for AI algorithms that use sound as the sole input. However, the amount of time spent providing game information to the AI using Py4J is excessive, and sometimes exceeds the maximum response time, leaving no time for the AI to process the data. This is because Py4J relies heavily on sockets to communicate between Python and Java, which can add overhead and potentially limit its performance. In addition, Py4J is less efficient at handling large amounts of data or performing complex calculations compared to a native solution that is optimized for those specific tasks.
This can be a problem for developers who want to create AI algorithms that require a significant amount of computational time or involve complex calculations in order to function effectively. In this paper, we present a new communication interface for the DareFightingICE platform that uses an open-source remote procedure call framework, gRPC [4], instead of Py4J, as shown in Figure 1. Our proposed interface is based on a producer-consumer approach [5], where the platform acts as the producer of data and the AI acts as the consumer. This approach helps to coordinate the generation and consumption of data more efficiently, and prevents potential problems such as data loss or corruption. Additionally, gRPC is capable of efficiently transmitting large amounts of data between the producer and consumer, making it an ideal choice for the DareFightingICE platform, where the AI must be able to quickly and effectively process large amounts of data in order to make decisions within the short response time.

## II Related Work

### _gRPC Remote Procedure Call_

gRPC is an open-source remote procedure call framework that can run in any environment. It uses the HTTP/2 protocol [6] for transport, and Protocol Buffers [7] as the interface description language. gRPC enables client and server applications to communicate transparently, and simplifies the building of connected systems. One of the key features of gRPC is its ability to use a single HTTP/2 connection for bi-directional, full-duplex communication between client and server. This allows for the efficient exchange of large amounts of data and the ability to stream multiple messages in both directions. gRPC also supports a number of advanced features, such as authentication, flow control, blocking or non-blocking bindings, cancellation and timeouts. In addition, gRPC is designed to be highly performant. It uses a binary serialization format [8] that is compact and efficient, allowing for efficient transmission of data over the network.
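To make the above concrete, a gRPC service is declared in the Protocol Buffers interface description language. The following is a purely illustrative sketch (all service, message, and field names are our own assumptions, not the platform's published schema) showing a unary method and a server-streaming method of the kind discussed here:

```protobuf
// Hypothetical example of a gRPC service definition in proto3.
syntax = "proto3";

service GameService {
  // Unary RPC: one request, one response.
  rpc Register (RegisterRequest) returns (RegisterReply);
  // Server-streaming RPC: one request, a stream of replies,
  // multiplexed over a single HTTP/2 connection.
  rpc Subscribe (SubscribeRequest) returns (stream GameState);
}

message RegisterRequest  { string player_name = 1; }
message RegisterReply    { string player_id = 1; }
message SubscribeRequest { string player_id = 1; }
message GameState {
  int32 frame = 1;
  bytes audio_data = 2;
}
```

From such a definition, the `protoc` compiler generates client and server stubs in the target language, which is what makes the cross-language (Java server, Python client) setup described later in this paper possible.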
### _Producer-Consumer Synchronization_

Producer-consumer synchronization refers to the coordination of activities between producer and consumer threads [9] in a concurrent system. In a producer-consumer system, the producer thread is responsible for generating data, and the consumer thread is responsible for consuming the data. The key challenge in producer-consumer synchronization is to ensure that the producer and consumer threads coordinate their actions properly to avoid race conditions and other synchronization issues, while still allowing for efficient data transfer. This requires careful design and implementation of synchronization mechanisms, such as locks, semaphores, and monitors, to ensure that the producer and consumer threads can work together smoothly and efficiently.

### _Study on the performance of MCTS_

In FightingICE, the sample AI was implemented using Monte Carlo Tree Search [10], called MCTSAI. Three different MCTSAIs with varying parameter settings were used in the study in [3]. These AIs, named MCTSAI165, MCTSAI115, and MCTSAI65, are based on a sample MCTSAI from FightingICE, but with the MCTS execution time set to 16.5 ms, 11.5 ms, and 6.5 ms, respectively. The execution time determines the time budget for the MCTS algorithm, so reducing the execution time can theoretically decrease the strength of the AI. The performance of each MCTSAI was evaluated by calculating the ratio of winning rounds to the total number of rounds played (300). The winning ratios for MCTSAI165, MCTSAI115, and MCTSAI65 were 0.96, 0.49, and 0.05, respectively. In conclusion, providing a longer computation time for the MCTS algorithm can lead to improved performance from the AI.

## III Proposed Communication Interface

This section covers the details of our proposed interface, which has two parts. The first part covers the exposed services on the DareFightingICE platform that can be accessed using gRPC.
The second part covers the implementation of the DareFightingICE game server using a producer-consumer approach.

### _gRPC-accessible services_

The gRPC server is integrated into the DareFightingICE platform, as shown in Figure 2. The server is typically served on port 50051 by default. The AI depicted in the figure serves as a guide for creating an AI that can effectively interact with this interface. There are three remote procedure call methods on the gRPC server side, allowing for easy communication and interaction with the platform. The AI must first register its information and request a server-streaming of the game state, as described below. The _Initialize_ method is a unary RPC used to set up and configure an AI before the start of a game. This method requires a parameter of type _InitializeRequest_, which consists of a boolean value indicating the player number (i.e. true for player one and false for player two), the player's name, and a boolean value indicating whether the AI is blind or not. The response to this method is provided as an _InitializeResponse_ object, which contains the player's unique identifier. This unique ID is required as a parameter for both the _Participate_ and _Input_ methods, which are used to control the AI's actions during the game. The _Participate_ method is a server-streaming RPC that allows AIs to register and receive game state information from the game server. The method takes a _ParticipateRequest_, which consists of a player's unique identifier, as input. The output is a stream of _PlayerGameData_ objects, which contain information about the game state, such as audio data. If the AI is not registered as a blind AI, the _PlayerGameData_ object will also include information about the current frame and screen data. This information can be used by the AI to make decisions and take actions in the game. The _Input_ method is a unary RPC used by AIs to send their chosen actions to the game server.
This method takes a _PlayerAction_, which consists of a player's unique identifier along with a string representation of the selected action, as input. The game server will then use this information to update the game state. This allows the AI to participate in the game and make decisions based on its programmed strategy. These methods can be used by AIs to send and receive information, and to participate in the game. The gRPC server and these methods provide a convenient and efficient way for AIs to interact with the DareFightingICE platform.

Fig. 1: Proposed system architecture.

Fig. 2: The workflow of the gRPC-accessible services integrated into the DareFightingICE platform involves the flow of the game state from the main thread to the AI process, as indicated by the bold arrow. The flow of the player action from the AI process back to the main thread is indicated by the non-bold line.

### _Game server with producer-consumer approach_

In this proposed system, the DareFightingICE platform acts as a producer of game state information, while the AI acts as a consumer of this information. This system is based on the system architecture described in a previous study [1]. The DareFightingICE platform's processes consist of three main threads. The first is the main thread, which is responsible for all aspects of the game, including rendering the graphics and processing the game state. The other two threads are responsible for controlling the two AIs. These threads are called AI-Threads and are responsible for providing the AI with information about the current game state, processing the information, and updating the input received from the AI. In the current system, only local processes are supported, so there is no way to implement an AI that can connect remotely to the DareFightingICE platform.
To fix this, we modified the two AI-Threads to allow for the use of the producer-consumer approach and to provide the ability to connect to the platform remotely from a different process. In our modified system, the AI-Thread provides game state information to the AI and then halts until a response is received. Once the response is received, it is processed by the game server, and the responsible thread is resumed. This approach ensures efficient data transfer and avoids the possibility of the game server processing a command before the AI's action has been received.

## IV Evaluation

We implemented a rule-based AI that sends a kick command regardless of the information received from the game server. The purpose is to measure the amount of time the game server takes to provide information to the AI. We implemented two versions of this AI, one using Py4J and one using gRPC. We refer to the version implemented with Py4J as _Py4J_AI_ and the version implemented with gRPC as _gRPC_AI_. To measure the amount of time spent, we started a timer when the information from the game server began transmitting to the AI, and stopped the timer when the AI responded with the selected action to perform. The time was collected in nanoseconds and then divided by one million to convert it to milliseconds. We did this to ensure precision, as milliseconds are critical in this scenario. The PC used in the experiment had an Intel(R) Xeon(R) W-2123 @ 3.60GHz CPU, 16 GB DDR4 RAM, an NVIDIA Quadro P400 graphics card, and ran on the Windows 11 Pro for Workstations operating system. The latency was measured on both the current and proposed interfaces, and the results are shown in Figure 3. In the figure, we have added two dashed lines. The first is a blue, vertical line at the x-axis value of 0, which shows the start of the game. The second is a red, horizontal line at the y-axis value of 16.66, which indicates the maximum response time for the AI.
If the AI's response time exceeds this value, it means that the AI has no time left to process anything. The average latency using Py4J was 5.66 ms, while gRPC had an average latency of 1.99 ms. Our proposed interface significantly reduced the average latency, by about 65% compared to the current one. During the experiment, we observed that the maximum latency using Py4J was 39.20 ms, while gRPC had a maximum latency of 6.18 ms, which only occurred at the beginning of the game. In the later stages of the game, the latency for gRPC stabilized and remained consistently lower than that of Py4J. This indicates that our proposed interface is not only faster than the current one but also more stable. As mentioned earlier, exceeding the latency of 16.66 ms means that the AI will not be able to process anything, resulting in missed frames. Table I shows the miss rates for both _Py4J_AI_ and _gRPC_AI_. In DareFightingICE, there are 3,600 frames per game. However, taking into account the 15-frame delay described in [1], the total number of frames that the AI is responsible for processing is 3,585 per game. The results show that _Py4J_AI_ missed responding to the game server 68 times, which is 1.90% of all frames. In contrast, _gRPC_AI_ was able to process everything without missing any frames. Therefore, the results show that our proposed communication interface eliminates all missed frames present in the current interface.

## V Discussions

The results of our experiments indicate that the proposed gRPC communication interface is significantly faster and more stable than the current Py4J interface. The gRPC interface reduced the average latency by about 65% and eliminated all missed frames, which occurred in the Py4J interface. These improvements are important for the DareFightingICE platform, as a faster and more stable communication interface allows for better performance and responsiveness from the AIs.
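The halt-until-response exchange of Section III-B, together with the nanosecond-based latency measurement used in the evaluation, can be sketched in a few lines. This is a minimal single-process illustration: in-memory queues stand in for the gRPC transport, and all names are ours, not the platform's:

```python
import queue
import threading
import time

FRAME_BUDGET_MS = 16.66  # maximum AI response time per frame

state_q = queue.Queue(maxsize=1)    # game -> AI (producer side)
action_q = queue.Queue(maxsize=1)   # AI -> game (consumer side)

def rule_based_ai():
    """Consumer: always answers with a kick, like the evaluation AI."""
    while True:
        state = state_q.get()
        if state is None:           # shutdown sentinel
            break
        action_q.put("KICK")

def run_game(frames):
    """Producer: publish each frame's state, halt until the action
    arrives, and record the round-trip latency in milliseconds."""
    latencies = []
    for frame in range(frames):
        start = time.perf_counter_ns()
        state_q.put({"frame": frame})
        action = action_q.get()     # blocks: game halts until AI responds
        latencies.append((time.perf_counter_ns() - start) / 1_000_000)
        assert action == "KICK"
    state_q.put(None)
    return latencies

t = threading.Thread(target=rule_based_ai)
t.start()
latencies = run_game(10)
t.join()
missed = sum(lat > FRAME_BUDGET_MS for lat in latencies)
```

In-process queues are far cheaper than real inter-process RPC, so `missed` will normally be zero here; the point is the coordination pattern (the producer blocks until the consumer's action is received) and the nanosecond-to-millisecond conversion, not realistic latency figures.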
One potential limitation of the proposed interface is the warm-up period that is required before it can be used efficiently. This warm-up period takes place during the pre-game phase and does not impact the performance of the game once it has begun. However, it may be worthwhile to investigate ways to reduce or eliminate this warm-up period in order to further improve the performance of the gRPC interface. ## VI Conclusions In conclusion, we have proposed a new communication interface for the DareFightingICE platform that uses gRPC instead of Py4J. This interface is based on a producer-consumer approach, where the platform acts as the producer of data and the AI acts as the consumer. This approach helps to coordinate the generation and consumption of data more efficiently, and prevents potential problems such as data loss or corruption. Additionally, gRPC is capable of efficiently transmitting large amounts of data between the producer and consumer, making it an ideal choice for the DareFightingICE platform. We have also implemented two versions of AI to measure the amount of time the game server takes to provide information to the AI. Our experiments have shown that the proposed interface significantly reduces the average latency by about 65% compared to the current interface, and also eliminates all miss rates. Therefore, our proposed communication interface is a valuable addition to the DareFightingICE platform, providing improved performance and stability. ## Acknowledgement The first three authors would like to express their gratitude to the Japanese Government for providing them with MEXT scholarships during their graduate studies at the Intelligent Computer Entertainment Laboratory, Graduate School of Information Science and Engineering, Ritsumeikan University, under the guidance and supervision of the fourth author.
2308.12564
Some New Results for Generalized Incomplete Exponential Matrix Functions
The primary goal of this paper is to introduce and investigate generalized incomplete exponential functions with matrix parameters. Integral representation, differential formula, addition formula, multiplication formula, and recurrence relation obtained here are believed to be new in the theory of special matrix functions. We also establish the connection between these matrix functions and other matrix functions, such as the incomplete gamma matrix function, the Bessel and modified Bessel matrix functions.
Ashish Verma, Komal Singh Yadav
2023-08-24T05:10:37Z
http://arxiv.org/abs/2308.12564v1
# Some New Results for Generalized Incomplete Exponential Matrix Functions ###### Abstract The primary goal of this paper is to introduce and investigate generalized incomplete exponential functions with matrix parameters. Integral representation, differential formula, addition formula, multiplication formula, and recurrence relation obtained here are believed to be new in the theory of special matrix functions. We also establish the connection between these matrix functions and other matrix functions, such as the incomplete gamma matrix function, the Bessel and modified Bessel matrix functions. Keywords: Matrix functional calculus, Gamma matrix function, Incomplete gamma matrix function, generalized incomplete exponential matrix functions, Bessel and modified Bessel matrix function. AMS Subject Classification: 15A15; 33C65; 33C45; 34A05. ## 1 Introduction Tricomi studied the theory of incomplete gamma functions [22]. These are very important special functions that are used in a variety of problems in mathematics, astrophysics, applied statistics, and engineering. These functions are also useful in the study of Fourier and Laplace transforms, as well as probability theory. The incomplete exponential functions and the incomplete hypergeometric functions were introduced by Pal _et al_. [3, 5], who also presented applications in communication theory, probability theory, and groundwater pumping modelling. The incomplete Pochhammer symbols and the incomplete hypergeometric functions were introduced by Srivastava _et al_. [18]. Matrix theory is widely used throughout mathematics, and in particular in the study of orthogonal polynomials and special functions. Special matrix functions appear in the literature on statistics [6] and Lie theory [9]. They are also related to the matrix versions of the Laguerre, Hermite, and Legendre differential equations, whose corresponding polynomial families are listed in [10, 12, 13]. 
Jodar and Cortes introduced a matrix analogue of the Gauss hypergeometric function in [14]. Abdalla [1] introduced the incomplete hypergeometric matrix functions and discussed some of their fundamental properties. The Wright hypergeometric matrix functions and the incomplete Wright Gauss hypergeometric matrix functions were established in [2], where some of their properties were also obtained. In [20, 21], the incomplete second Appell hypergeometric matrix functions and the incomplete Srivastava's triple hypergeometric matrix functions were introduced, and some of their basic properties were studied, including matrix differential equations, integral formulas, recursion formulas, recurrence relations, and differentiation formulas. The paper is organized in the following manner. In Section 2, we list the fundamental definitions and results of special matrix functions that are required in the sequel. In Section 3, we define the incomplete exponential matrix functions \(e(x;t)\) and \(E(x;t)\). Some properties, namely integral representations, differentiation formulas and connections with Bessel matrix functions, are also derived. In Section 4, we introduce the incomplete exponential matrix functions \({}_{p}e_{q}(x;t)\) and \({}_{p}E_{q}(x;t)\) and investigate several properties of each of these functions. Finally, in Section 5, we define the generalized incomplete exponential matrix functions \({}_{p}e_{q}(x,A,B;v)\) and \({}_{p}E_{q}(x,A,B;v)\) and derive certain properties of each of these functions. ## 2 Preliminaries Throughout this paper, let \(\mathbb{C}^{r\times r}\) be the vector space of \(r\)-square matrices with complex entries. For any matrix \(E\in\mathbb{C}^{r\times r}\), its spectrum \(\sigma(E)\) is the set of eigenvalues of \(E\). A square matrix \(E\in\mathbb{C}^{r\times r}\) is said to be positive stable if \(\Re(\lambda)>0\) for all \(\lambda\in\sigma(E)\). Let \(E\) be a positive stable matrix in \(\mathbb{C}^{r\times r}\). 
The gamma matrix function \(\Gamma(E)\) is defined as follows [15]: \[\Gamma(E)=\int_{0}^{\infty}e^{-t}t^{E-I}dt;\hskip 28.452756ptt^{E-I}=\exp((E-I) \ln t). \tag{2.1}\] The reciprocal gamma function [15] satisfies \(\Gamma^{-1}(E)=(E)_{n}\ \Gamma^{-1}(E+nI)\), where \(E+nI\) is invertible for all integers \(n\geq 0\), \(I\) being the \(r\)-square identity matrix and \((E)_{n}\) the shifted factorial matrix function for \(E\in\mathbb{C}^{r\times r}\) defined in [14]. If \(E\in\mathbb{C}^{r\times r}\) is a positive stable matrix and \(n\geq 1\), then by [15] we have \(\Gamma(E)=\lim_{n\to\infty}(n-1)!(E)_{n}^{-1}n^{E}\). By application of the matrix functional calculus, the Pochhammer symbol for \(E\in\mathbb{C}^{r\times r}\) is given by [15] \[(E)_{n}=\begin{cases}I,&\text{if }n=0,\\ E(E+I)\dots(E+(n-1)I),&\text{if }n\geq 1.\end{cases} \tag{2.2}\] This gives \[(E)_{n}=\Gamma^{-1}(E)\ \Gamma(E+nI),\qquad n\geq 1. \tag{2.3}\] If \(E\) and \(F\) are positive stable matrices in \(\mathbb{C}^{r\times r}\), then, for \(EF=FE\), the beta matrix function is defined as [15] \[\mathfrak{B}(E,F)=\Gamma(E)\Gamma(F)\Gamma^{-1}(E+F) =\int_{0}^{1}t^{E-I}(1-t)^{F-I}dt \tag{2.4}\] \[=\int_{0}^{\infty}u^{E-I}(1+u)^{-(E+F)}du. \tag{2.5}\] Clearly, the generalized Pochhammer matrix symbol \((E)_{kn}\) can be represented in the following form \[(E)_{kn}=k^{kn}\,\left(\frac{E}{k}\right)_{n}\left(\frac{E+I}{k}\right)_{n} \dots\left(\frac{E+(k-1)I}{k}\right)_{n}. \tag{2.6}\] Let \(E\) be a positive stable matrix in \(\mathbb{C}^{r\times r}\) and let \(x\) be a positive real number. Then the incomplete gamma matrix functions \(\gamma(E,x)\) and \(\Gamma(E,x)\) are defined by [1] \[\gamma(E,x)=\int_{0}^{x}e^{-t}t^{E-I}dt \tag{2.7}\] and \[\Gamma(E,x)=\int_{x}^{\infty}e^{-t}t^{E-I}dt\,, \tag{2.8}\] respectively, and satisfy the following decomposition formula: \[\gamma(E,x)+\Gamma(E,x)=\Gamma(E). 
\tag{2.9}\] Let \(E\) be a matrix in \(\mathbb{C}^{r\times r}\) and let \(x\) be a positive real number. Then the incomplete Pochhammer matrix symbols \((E;x)_{n}\) and \([E;x]_{n}\) are defined as follows [1] \[(E;x)_{n}=\gamma(E+nI,x)\,\Gamma^{-1}(E) \tag{2.10}\] and \[[E;x]_{n}=\Gamma(E+nI,x)\,\Gamma^{-1}(E). \tag{2.11}\] In view of (2.9), the incomplete Pochhammer matrix symbols \((E;x)_{n}\) and \([E;x]_{n}\) satisfy the following decomposition relation \[(E;x)_{n}+[E;x]_{n}=(E)_{n}, \tag{2.12}\] where \((E)_{n}\) is the Pochhammer matrix symbol defined in [14]. Let \(E\), \(F\) and \(G\) be matrices in \(\mathbb{C}^{r\times r}\) such that \(G+nI\) is invertible for all integers \(n\geq 0.\) The incomplete Gauss hypergeometric matrix functions are defined by [1] \[{}_{2}\gamma_{1}\Big{[}(E;x),F;G;z\Big{]}=\sum_{n=0}^{\infty}(E;x)_{n}(F)_{n}( G)_{n}^{-1}\frac{z^{n}}{n!} \tag{2.13}\] and \[{}_{2}\Gamma_{1}\Big{[}[E;x],F;G;z\Big{]}=\sum_{n=0}^{\infty}[E;x]_{n}(F)_{n} (G)_{n}^{-1}\frac{z^{n}}{n!}. \tag{2.14}\] The matrix function \({}_{p}R_{q}(A,B;v)\) is defined in [8] by: \[{}_{p}R_{q}(A,B;v) ={}_{p}R_{q}\left[\begin{array}{c}E_{1},E_{2},\cdots,E_{p}\\ F_{1},F_{2},\cdots,F_{q}\end{array}|A,B;v\right]\] \[=\sum_{m=0}^{\infty}\Gamma^{-1}(mA+B)(E_{1})_{m}\cdots(E_{p})_{m }(F_{1})_{m}^{-1}\cdots(F_{q})_{m}^{-1}\frac{v^{m}}{m!}, \tag{2.15}\] where \(A\), \(B\), \(E_{i}\) and \(F_{j}\), \(1\leq i\leq p\), \(1\leq j\leq q\) are matrices in \(\mathbb{C}^{r\times r}\) such that \(F_{j}+kI\) are invertible for all integers \(k\geq 0\). The Bessel matrix function is defined in [11, 16, 17] by: \[J_{A}(z)=\sum_{m=0}^{\infty}\frac{(-1)^{m}\ \Gamma^{-1}(A+(m+1)I)}{m!} \Big{(}\frac{z}{2}\Big{)}^{A+2mI}, \tag{2.16}\] where \(A+nI\) is invertible for all integers \(n\geq 0\). 
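As a quick numerical sanity check of these preliminaries (an editorial addition, not part of the original development), the Pochhammer relation (2.3) and the decomposition (2.9) can be verified for a small positive stable test matrix by applying the corresponding scalar functions through an eigendecomposition; the matrix \(E\) and the value \(x\) below are arbitrary choices:

```python
import numpy as np
from numpy.linalg import inv, eig
from scipy.special import gamma, gammainc, gammaincc

# Arbitrary positive stable test matrix with distinct real eigenvalues (2 and 3).
E = np.array([[2.0, 0.5],
              [0.0, 3.0]])
I = np.eye(2)
x = 1.5

w, V = eig(E)        # E = V diag(w) V^{-1}
w = w.real           # eigenvalues are real for this test matrix

def matfun(f, shift=0.0):
    """Apply the scalar function f to E + shift*I via the eigendecomposition."""
    return V @ np.diag(f(w + shift)) @ inv(V)

Gamma_E = matfun(gamma)   # gamma matrix function (2.1)

# Pochhammer matrix symbol (E)_2 = E(E+I), identity (2.3):
assert np.allclose(E @ (E + I), inv(Gamma_E) @ matfun(gamma, shift=2.0))

# Incomplete gamma matrix functions (2.7)-(2.8) from the regularized P and Q:
lower = matfun(lambda a: gammainc(a, x) * gamma(a))    # gamma(E, x)
upper = matfun(lambda a: gammaincc(a, x) * gamma(a))   # Gamma(E, x)
# Decomposition formula (2.9):
assert np.allclose(lower + upper, Gamma_E)
```

Here `gammainc` and `gammaincc` are SciPy's regularized incomplete gamma functions \(P\) and \(Q\), so multiplying by `gamma(a)` recovers the unnormalized \(\gamma(a,x)\) and \(\Gamma(a,x)\).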
Therefore, the modified Bessel matrix functions are introduced in [17] in the form \[I_{A}=e^{\frac{-Ai\pi}{2}}J_{A}(ze^{\frac{i\pi}{2}});\ \ -\pi<arg(z)<\frac{\pi}{2},\] \[I_{A}=e^{\frac{Ai\pi}{2}}J_{A}(ze^{\frac{-i\pi}{2}});\ \ -\frac{\pi}{2}<arg(z)<\pi. \tag{2.17}\] Incomplete Exponential Matrix Functions \(e(x;t)\) and \(E(x;t)\): Definition and some new properties In this section, we define the incomplete exponential matrix functions as \[e\left((x,t);A\right)=\sum_{m=0}^{\infty}\Gamma^{-1}(A+mI)\gamma(A+mI,x)\frac{t^ {m}}{m!}, \tag{3.1}\] \[E\left((x,t);A\right)=\sum_{m=0}^{\infty}\Gamma^{-1}(A+mI)\Gamma(A+mI,x)\frac{ t^{m}}{m!}, \tag{3.2}\] where \(A\) is a positive stable matrix in \(\mathbb{C}^{r\times r}\) such that \(A+kI\) is invertible for all integers \(k\geq 0\). So that \[e\left((x,t);A\right)+E\left((x,t);A\right)=e^{t}. \tag{3.3}\] **Theorem 3.1**.: _Let \(A\) be positive stable matrix in \(\mathbb{C}^{r\times r}\) such that \(A+kI\) is invertible for all integers \(k\geq 0\). Then the following integral representation for the incomplete exponential matrix functions holds true:_ \[e\left((x,t);A\right) =\Gamma^{-1}(A)\int_{0}^{x}v^{A-I}e^{-v}\Big{(}\sum_{m=0}^{ \infty}(A)_{m}^{-1}\frac{(vt)^{m}}{m!}\Big{)}dv\] \[=\Gamma^{-1}(A)\int_{0}^{x}v^{A-I}e^{-v}\,_{0}F_{1}(-;A;vt)dv, \tag{3.4}\] \[E\left((x,t);A\right) =\Gamma^{-1}(A)\int_{x}^{\infty}v^{A-I}e^{-v}\Big{(}\sum_{m=0}^{ \infty}(A)_{m}^{-1}\frac{(vt)^{m}}{m!}\Big{)}dv\] \[=\Gamma^{-1}(A)\int_{x}^{\infty}v^{A-I}e^{-v}\,_{0}F_{1}(-;A;vt)dv. \tag{3.5}\] Proof.: From definition of the incomplete exponential matrix functions, we have \[e\left((x,t);A\right)=\sum_{m=0}^{\infty}\Gamma^{-1}(A+mI)\gamma(A+mI,x)\frac{ t^{m}}{m!}. \tag{3.6}\] Applying the definition of incomplete gamma matrix functions (2.7), we get (3.4). In a similar manner we can prove (3.5). This finishes the proof of this theorem. 
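For a diagonal matrix \(A\), the series (3.1) and (3.2) act entrywise through the regularized incomplete gamma functions, so the decomposition formula (3.3) is easy to confirm numerically. A minimal sketch assuming SciPy (the truncation order \(M\) and the sample values are arbitrary):

```python
import numpy as np
from scipy.special import gammainc, gammaincc, factorial

def e_lower(a, x, t, M=60):
    """e((x,t);A) for diagonal A = diag(a): sum_m P(a+m, x) t^m / m!  (3.1)."""
    m = np.arange(M)
    return np.array([np.sum(gammainc(ai + m, x) * t**m / factorial(m)) for ai in a])

def E_upper(a, x, t, M=60):
    """E((x,t);A) for diagonal A = diag(a): sum_m Q(a+m, x) t^m / m!  (3.2)."""
    m = np.arange(M)
    return np.array([np.sum(gammaincc(ai + m, x) * t**m / factorial(m)) for ai in a])

a = np.array([1.2, 2.5])   # eigenvalues of a diagonal, positive stable A
x, t = 1.5, 0.8
# Decomposition (3.3): e((x,t);A) + E((x,t);A) = e^t I, entrywise on the diagonal
assert np.allclose(e_lower(a, x, t) + E_upper(a, x, t), np.exp(t))
```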
**Corollary 3.1**.: _(Connections with Bessel matrix functions.) The following integral representations for the incomplete exponential matrix functions hold true:_ \[e\left((x,t);A+I\right)=t^{-\frac{A}{2}}\int_{0}^{x}v^{\frac{A}{2}}e^{-v}I_{A }(2\sqrt{vt})dv, \tag{3.7}\] \[E\left((x,t);A+I\right)=t^{-\frac{A}{2}}\int_{x}^{\infty}v^{\frac{A}{2}}e ^{-v}I_{A}(2\sqrt{vt})dv, \tag{3.8}\] \[e\left((x,-t);A+I\right)=t^{-\frac{A}{2}}\int_{0}^{x}v^{\frac{A}{2 }}e^{-v}J_{A}(2\sqrt{vt})dv, \tag{3.9}\] \[E\left((x,-t);A+I\right)=t^{-\frac{A}{2}}\int_{x}^{\infty}v^{ \frac{A}{2}}e^{-v}J_{A}(2\sqrt{vt})dv, \tag{3.10}\] _where \(J_{A}(v)\) and \(I_{A}(v)\) are the Bessel and modified Bessel matrix functions defined in [17]._ **Theorem 3.2**.: _Let \(A\) be a positive stable matrix in \(\mathbb{C}^{r\times r}\) such that \(A+kI\) is invertible for all integers \(k\geq 0\). Then the following derivative formulas for the incomplete exponential matrix functions hold true:_ \[\frac{\partial^{n}}{\partial t^{n}}e\left((x,t);A\right)=e\left((x,t); A+nI\right), \tag{3.11}\] \[\frac{\partial^{n}}{\partial t^{n}}E\left((x,t);A\right)=E\left((x,t );A+nI\right), \tag{3.12}\] \[\frac{\partial}{\partial x}e\left((x,t);A\right)=x^{A-I}e^{-x} \,\Gamma^{-1}(A)\,_{0}F_{1}(-;A;tx), \tag{3.13}\] \[\frac{\partial}{\partial x}E\left((x,t);A\right)=-x^{A-I}e^{-x} \,\Gamma^{-1}(A)\,_{0}F_{1}(-;A;tx). \tag{3.14}\] Proof.: Differentiating (3.1) with respect to \(t\), we obtain: \[\frac{\partial}{\partial t}e\left((x,t);A\right)=\sum_{m=1}^{\infty}\gamma(A +mI,x)\Gamma^{-1}(A+mI)\frac{t^{m-1}}{(m-1)!}. \tag{3.15}\] Changing \(m\) to \(m+1\) and \(A\) to \(A+I\) in (3.15), we obtain \[\frac{\partial}{\partial t}e\left((x,t);A\right)=e\left((x,t);A+I\right), \tag{3.16}\] which is (3.11) for \(n=1\). The general case follows by the principle of mathematical induction on \(n\). This completes the proof of (3.11). The formulas (3.12)-(3.14) can be proved in an analogous manner. 
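In the scalar (\(1\times 1\)) case, (3.7) relates the series \(\sum_{m}P(a+1+m,x)\,t^{m}/m!\) to an integral of the modified Bessel function \(I_{a}\), which numerical quadrature confirms. A sketch with arbitrary sample values:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammainc, iv, factorial

a, x, t = 1.3, 2.0, 0.5   # arbitrary scalar test values

# LHS of (3.7): e((x,t); a+1) = sum_m P(a+1+m, x) t^m / m!
m = np.arange(60)
lhs = np.sum(gammainc(a + 1 + m, x) * t**m / factorial(m))

# RHS of (3.7): t^(-a/2) * int_0^x v^(a/2) e^(-v) I_a(2 sqrt(v t)) dv
integrand = lambda v: v**(a / 2) * np.exp(-v) * iv(a, 2.0 * np.sqrt(v * t))
rhs = t**(-a / 2) * quad(integrand, 0.0, x)[0]

assert np.isclose(lhs, rhs, rtol=1e-6, atol=1e-8)
```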
## 4 Incomplete Exponential Matrix Functions \({}_{p}e_{q}(x;t)\) and \({}_{p}E_{q}(x;t)\): Definition and some new relations Let \(A\), \(E_{i}\) and \(F_{j}\), \(2\leq i\leq p\), \(2\leq j\leq q\), be matrices in \(\mathbb{C}^{r\times r}\) such that \(A+kI\), \(F_{j}+kI\), \(2\leq j\leq q\) are invertible for all integers \(k\geq 0\). Then we define the incomplete generalized exponential matrix functions as \[{}_{p}e_{q}\left[(x;t)|\begin{array}{l}(A,E_{2},\cdots,E_{p})\\ (A,F_{2},\cdots,F_{q})\end{array}\right]:= \sum_{m=0}^{\infty}\,\Gamma^{-1}(A+mI)\gamma(A+mI,x)(E_{2})_{m} \cdots(E_{p})_{m}\] \[\times(F_{2})_{m}^{-1}\cdots(F_{q})_{m}^{-1}\frac{t^{m}}{m!}, \tag{4.1}\] \[{}_{p}E_{q}\left[(x;t)|\begin{array}{l}(A,E_{2},\cdots,E_{p})\\ (A,F_{2},\cdots,F_{q})\end{array}\right]:= \sum_{m=0}^{\infty}\,\Gamma^{-1}(A+mI)\Gamma(A+mI,x)(E_{2})_{m} \cdots(E_{p})_{m}\] \[\times(F_{2})_{m}^{-1}\cdots(F_{q})_{m}^{-1}\frac{t^{m}}{m!}. \tag{4.2}\] In view of (2.9), we have the following decomposition formula \[{}_{p}e_{q}\left[(x;t)|\begin{array}{l}(A,E_{2},\cdots,E_{p})\\ (A,F_{2},\cdots,F_{q})\end{array}\right]+\,_{p}E_{q}\left[(x;t)|\begin{array}[] {l}(A,E_{2},\cdots,E_{p})\\ (A,F_{2},\cdots,F_{q})\end{array}\right]\] \[=\,_{p-1}F_{q-1}\left[\left(\begin{array}{l}E_{2},\cdots,E_{p} \\ F_{2},\cdots,F_{q}\end{array}\right);t\right], \tag{4.3}\] where \({}_{p-1}F_{q-1}\) is the generalized hypergeometric matrix function [7]. **Theorem 4.1**.: _Let \(A\), \(E_{i}\) and \(F_{j}\), \(2\leq i\leq p\), \(2\leq j\leq q\), be matrices in \(\mathbb{C}^{r\times r}\) such that \(A+kI\), \(F_{j}+kI\), \(2\leq j\leq q\) are invertible for all integers \(k\geq 0\). 
Then the following integral representations for generalized incomplete exponential matrix functions holds true:_ \[{}_{p}e_{q}\left[(x;t)|\begin{array}{l}(A,E_{2},\cdots,E_{p})\\ (A,F_{2},\cdots,F_{q})\end{array}\right]\] \[=\Gamma^{-1}(A)\int_{0}^{x}v^{A-I}e^{-v}\,_{p-1}F_{q}\left[ \left(\begin{array}{l}-,E_{2},\cdots,E_{p}\\ A,F_{2},\cdots,F_{q}\end{array}\right);vt\right]dv, \tag{4.4}\] \[{}_{p}E_{q}\left[(x;t)|\begin{array}{l}(A,E_{2},\cdots,E_{p})\\ (A,F_{2},\cdots,F_{q})\end{array}\right]\] \[=\Gamma^{-1}(A)\int_{x}^{\infty}v^{A-I}e^{-v}\,_{p-1}F_{q}\left[ \left(\begin{array}{l}-,E_{2},\cdots,E_{p}\\ A,F_{2},\cdots,F_{q}\end{array}\right);vt\right]dv,\] (4.5) \[{}_{p-1}F_{q-1}\left[\left(\begin{array}{l}E_{2},\cdots,E_{p}\\ F_{2},\cdots,F_{q}\end{array}\right);t\right]\] \[=\Gamma^{-1}(A)\int_{0}^{\infty}v^{A-I}e^{-v}\,_{p-1}F_{q}\left[ \left(\begin{array}{l}-,E_{2},\cdots,E_{p}\\ A,F_{2},\cdots,F_{q}\end{array}\right);vt\right]dv. \tag{4.6}\] **Corollary 4.1**.: _The following integral representation for the incomplete exponential matrix functions holds true:_ \[{}_{2}e_{1}\left[(x;t)|\begin{array}{l}(C,A)\\ (C)\end{array}\right]=\Gamma^{-1}(C)\int_{0}^{x}v^{C-I}e^{-v}\,_{1}F_{1}\left( A;C;vt\right)dv, \tag{4.7}\] \[{}_{2}E_{1}\left[(x;t)|\begin{array}{l}(C,A)\\ (C)\end{array}\right]=\Gamma^{-1}(C)\int_{x}^{\infty}v^{C-I}e^{-v}\,_{1}F_{1} \left(A;C;vt\right)dv,\] (4.8) \[{}_{2}e_{1}\left[(x;t)|\begin{array}{l}(C,A)\\ (C)\end{array}\right]+\,_{2}E_{1}\left[(x;t)|\begin{array}{l}(C,A)\\ (C)\end{array}\right]=(1-t)^{-A}, \tag{4.9}\] _where \(A\), \(C\) are matrices in \(\mathbb{C}^{r\times r}\) such that \(C+kI\) is invertible for all integers \(k\geq 0\)._ **Corollary 4.2**.: _The following integral representation for the incomplete exponential matrix functions holds true:_ \[{}_{2}e_{1}\left[(x;-t)|\begin{array}{c}(C,C)\\ (C)\end{array}\right] =\Gamma^{-1}(C)\int_{0}^{x}v^{C-I}e^{-(t+1)v}dv, \tag{4.10}\] \[{}_{2}E_{1}\left[(x;-t)|\begin{array}{c}(C,C)\\ 
(C)\end{array}\right] =\Gamma^{-1}(C)\int_{x}^{\infty}v^{C-I}e^{-(t+1)v}dv, \tag{4.11}\] where \(C\) is a matrices in \(\mathbb{C}^{r\times r}\) such that \(C+kI\) is invertible for all integers \(k\geq 0\). **Corollary 4.3**.: _The following integral representation for the incomplete exponential matrix functions holds true:_ \[{}_{3}e_{1}\left[(x;t)|\begin{array}{c}(C,A,B)\\ (C)\end{array}\right] =\Gamma^{-1}(C)\int_{0}^{x}v^{C-I}e^{-v}\,{}_{2}F_{1}\left(A,B;C;vt \right)dv, \tag{4.12}\] \[{}_{3}E_{1}\left[(x;t)|\begin{array}{c}(C,A,B)\\ (C)\end{array}\right] =\Gamma^{-1}(C)\int_{x}^{\infty}v^{C-I}e^{-v}\,{}_{2}F_{1}\left(A,B;C ;vt\right)dv, \tag{4.13}\] _where \(A\), \(B\), \(C\) are matrices in \(\mathbb{C}^{r\times r}\) such that \(C+kI\) is invertible for all integers \(k\geq 0\)._ Generalized Incomplete Exponential Matrix Functions \({}_{p}e_{q}(x,A,B;v)\) and \({}_{p}E_{q}(x,A,B;v)\): Definition and some new formulas In this section, we introduce the matrix analogs of generalized incomplete exponential functions [3, 5]. We denote these matrix functions by \({}_{p}e_{q}(x,A,B;v)\) and \({}_{p}E_{q}(x,A,B;v)\). Let \(A\), \(B\), \(E_{i}\) and \(F_{j}\), \(1\leq i\leq p\), \(1\leq j\leq q\) be matrices in \(\mathbb{C}^{r\times r}\) such that \(F_{j}+kI\) are invertible for all integers \(k\geq 0\). Then, we define \[{}_{p}e_{q}(x,A,B;v) =\,_{p}e_{q}\left[(x,A,B;v)|\begin{array}{c}E_{1},E_{2},\cdots,E_{p}\\ F_{1},F_{2},\cdots,F_{q}\end{array}\right]\] \[=\sum_{m=0}^{\infty}\Gamma^{-1}(mA+B)\gamma(mA+B,x)(E_{1})_{m} \ldots(E_{p})_{m}(F_{1})_{m}^{-1}\ldots(F_{q})_{m}^{-1}\frac{v^{m}}{m!}, \tag{5.1}\] \[{}_{p}E_{q}(x,A,B;v) =\,_{p}E_{q}\left[(x,A,B;v)|\begin{array}{c}E_{1},E_{2},\cdots,E_{p}\\ F_{1},F_{2},\cdots,F_{q}\end{array}\right]\] \[=\sum_{m=0}^{\infty}\Gamma^{-1}(mA+B)\Gamma(mA+B,x)(E_{1})_{m}\ldots(E_{ p})_{m}(F_{1})_{m}^{-1}\ldots(F_{q})_{m}^{-1}\frac{v^{m}}{m!}. 
\tag{5.2}\] From (5.1) and (5.2), we can obtain the following decomposition formula: \[{}_{p}e_{q}\left[(x,A,B;v)|\begin{array}{c}E_{1},E_{2},\cdots,E_ {p}\\ F_{1},F_{2},\cdots,F_{q}\end{array}\right]+{}_{p}E_{q}\left[(x,A,B;v)|\begin{array} []{c}E_{1},E_{2},\cdots,E_{p}\\ F_{1},F_{2},\cdots,F_{q}\end{array}\right]\] \[={}_{p}F_{q}\left[\begin{array}{c}E_{1},E_{2},\cdots,E_{p}\\ F_{1},F_{2},\cdots,F_{q}\end{array}|v\right], \tag{5.3}\] where \({}_{p}F_{q}\) is the generalized hypergeometric matrix function [7]. **Remark 1**.: _If we take \(p=q=0\) and \(A=I\), then (5.1) and (5.2) reduces to incomplete exponential matrix functions (3.1) and (3.2):_ \[{}_{0}e_{0}(x,I,B;v) ={}_{0}e_{0}\left[(x,I,B;v)|\begin{array}{c}-\\ -\end{array}\right]\] \[=\sum_{m=0}^{\infty}\Gamma^{-1}(mI+B)\,\gamma(mI+B,x)\frac{v^{m}} {m!}, \tag{5.4}\] _and_ \[{}_{0}E_{0}(x,I,B;v) ={}_{0}E_{0}\left[(x,I,B;v)|\begin{array}{c}-\\ -\end{array}\right]\] \[=\sum_{m=0}^{\infty}\Gamma^{-1}(mI+B)\,\Gamma(mI+B,x)\frac{v^{m}} {m!}. 
\tag{5.5}\] **Theorem 5.1**.: _The generalized incomplete exponential matrix function \({}_{p}E_{q}(x,A,B;v)\) satisfies the following integral representation:_ \[{}_{p}E_{q}\left[(x,A,B;v)|\begin{array}{c}E_{1},E_{2},\cdots,E_ {p}\\ F_{1},F_{2},\cdots,F_{q}\end{array}\right]=\int_{x}^{\infty}t^{B-I}e^{-t}\,{}_{ p}R_{q}\left[\begin{array}{c}E_{1},E_{2},\cdots,E_{p}\\ F_{1},F_{2},\cdots,F_{q}\end{array}|A,B;vt^{A}\right]dt, \tag{5.6}\] _where \(A\), \(B\), \(E_{i}\) and \(F_{j}\), \(1\leq i\leq p\), \(1\leq j\leq q\) are matrices in \(\mathbb{C}^{r\times r}\) such that \(F_{j}+kI\) are invertible for all integers \(k\geq 0\), and \(B\) is positive stable._ Proof.: Using the integral representation of the incomplete gamma matrix function defined by (2.8), we obtain \[{}_{p}E_{q}\left[(x,A,B;v)|\begin{array}{c}E_{1},E_{2},\cdots,E_ {p}\\ F_{1},F_{2},\cdots,F_{q}\end{array}\right]\] \[=\sum_{m=0}^{\infty}\Gamma^{-1}(mA+B)\left(\int_{x}^{\infty}t^{mA+B-I}e^{-t}dt\right)(E_{1})_{m}\cdots(E_{p})_{m}(F_{1})_{m}^{-1}\cdots(F_{q})_{m} ^{-1}\frac{v^{m}}{m!}. \tag{5.7}\] Reversing the order of summation and integration yields the R.H.S. of assertion (5.6). **Corollary 5.1**.: _Putting \(A=I,B=C\), \(p=1,q=0\) i.e. 
\(E_{1}=A\), (5.6) reduces to_ \[{}_{1}E_{0}\left[(x,I,C;v)|\begin{array}{c}A\\ -\end{array}\right]=\Gamma^{-1}(C)\int_{x}^{\infty}t^{C-I}e^{-t}\,{}_{1}F_{1} \left[\begin{array}{c}A\\ C\end{array}|vt\right]dt, \tag{5.8}\] _where \(A\) and \(C\) are matrices in \(\mathbb{C}^{r\times r}\) such that \(C+kI\) is invertible for all integers \(k\geq 0\) and \(C\) is positive stable._ **Corollary 5.2**.: _For the matrix function \({}_{p}R_{q}(A,B;v)\), the following integral representation holds true:_ \[{}_{p}R_{q}\left[\begin{array}{c}E_{1},E_{2},\cdots,E_{p}\\ F_{1},F_{2},\cdots,F_{q}\end{array}|A,B;v\right]\] \[=\Gamma^{-1}(E_{1})\int_{0}^{\infty}t^{E_{1}-I}e^{-t}\,{}_{p-1}R _{q}\left[\begin{array}{c}E_{2},\cdots,E_{p}\\ F_{1},F_{2},\cdots,F_{q}\end{array}|A,B;vt\right]dt, \tag{5.9}\] _where \(A\), \(B\), \(E_{i}\) and \(F_{j}\), \(1\leq i\leq p\), \(1\leq j\leq q\) are matrices in \(\mathbb{C}^{r\times r}\) such that \(F_{j}+kI\) are invertible for all integers \(k\geq 0\) and \(E_{1}\) is a positive stable matrix._ **Theorem 5.2**.: _Let \(A\), \(B\), \(E_{i}\) and \(F_{j}\), \(1\leq i\leq p\), \(1\leq j\leq q\) be matrices in \(\mathbb{C}^{r\times r}\) such that \(E_{p}F_{j}=F_{j}E_{p}\), \(F_{j}+kI\) are invertible for all integers \(k\geq 0\) and \(E_{p},F_{q},F_{q}-E_{p}\) are positive stable. Then the matrix function \({}_{p}E_{q}(x,A,B;v)\) defined in (5.2) can be put in the integral form_ \[{}_{p}E_{q}\left[(x,A,B;v)|\begin{array}{c}E_{1},E_{2},\cdots,E_ {p}\\ F_{1},F_{2},\cdots,F_{q}\end{array}\right]\] \[=\int_{0}^{1}\,{}_{p-1}E_{q-1}\left[(x,A,B;tv)|\begin{array}{c} E_{1},\cdots,E_{p-1}\\ F_{1},\cdots,F_{q-1}\end{array}\right]t^{E_{p}-I}(1-t)^{F_{q}-E_{p}-I}dt\] \[\times\left[\mathfrak{B}(E_{p},F_{q}-E_{p})\right]^{-1}. 
\tag{5.10}\] Proof.: \[{}_{p}E_{q}\left[(x,A,B;v)|\begin{array}{c}E_{1},E_{2},\cdots,E_ {p}\\ F_{1},F_{2},\cdots,F_{q}\end{array}\right]\] \[=\sum_{m=0}^{\infty}\Gamma^{-1}(mA+B)\Gamma(mA+B,x)(E_{1})_{m} \ldots(E_{p})_{m}(F_{1})_{m}^{-1}\ldots(F_{q})_{m}^{-1}\frac{v^{m}}{m!}\] \[=\sum_{m=0}^{\infty}\Gamma^{-1}(mA+B)\Gamma(mA+B,x)(E_{1})_{m} \ldots(E_{p-1})_{m}(F_{1})_{m}^{-1}\ldots(F_{q-1})_{m}^{-1}\frac{v^{m}}{m!}\] \[\times\,\mathfrak{B}(E_{p}+mI,F_{q}-E_{p})\left[\mathfrak{B}( E_{p},F_{q}-E_{p})\right]^{-1}. \tag{5.11}\] Applying the integral representation of the beta matrix function in (5.11), we get (5.10). This completes the proof of this theorem. **Corollary 5.3**.: _For the matrix function \({}_{p}R_{q}(A,B;v)\), we have the following integral representation:_ \[{}_{p}R_{q}\left[\begin{array}{c}E_{1},E_{2},\cdots,E_{p}\\ F_{1},F_{2},\cdots,F_{q}\end{array}|A,B;v\right]\] \[=\int_{0}^{1}{}_{p-1}R_{q-1}\left[\begin{array}{c}E_{1},\cdots,E _{p-1}\\ F_{1},\cdots,F_{q-1}\end{array}|A,B;vt\right]t^{E_{p}-I}(1-t)^{F_{q}-E_{p}-I}dt\] \[\times\left[\mathfrak{B}(E_{p},F_{q}-E_{p})\right]^{-1}, \tag{5.12}\] _where \(A\), \(B\), \(E_{i}\) and \(F_{j}\), \(1\leq i\leq p\), \(1\leq j\leq q\) be matrices in \(\mathbb{C}^{r\times r}\) such that \(E_{p}F_{j}=F_{j}E_{p}\), \(F_{j}+kI\) are invertible for all integers \(k\geq 0\) and \(E_{p},F_{q},F_{q}-E_{p}\) are positive stable._ **Theorem 5.3**.: _Let \(A\), \(B\), \(E_{i}\) and \(F_{j}\), \(1\leq i\leq p\), \(1\leq j\leq q\) be matrices in \(\mathbb{C}^{r\times r}\) such that \(E_{i}F_{j}=F_{j}E_{i}\), \(F_{j}+kI\) are invertible for all integers \(k\geq 0\). 
Then the generalized incomplete exponential matrix function \({}_{p}E_{q}(x,A,B;v)\) has the following derivative formula:_ \[\frac{d^{n}}{dv^{n}}\left({}_{p}E_{q}\left[(x,A,B;v)|\begin{array} []{c}E_{1},E_{2},\cdots,E_{p}\\ F_{1},F_{2},\cdots,F_{q}\end{array}\right]\right)\] \[=\,_{p}E_{q}\left[(x,A,nA+B;v)|\begin{array}{c}E_{1}+nI,E_{2}+nI, \cdots,E_{p}+nI\\ F_{1}+nI,F_{2}+nI,\cdots,F_{q}+nI\end{array}\right]\times(E_{1})_{n}\cdots(E_{p})_{n}(F_ {1})_{n}^{-1}\cdots(F_{q})_{n}^{-1}. \tag{5.13}\] Proof.: Differentiating (5.2) with respect to \(v\) and replacing \(m\to m+1\), we get \[\frac{d}{dv}\left({}_{p}E_{q}\left[(x,A,B;v)|\begin{array}{c}E_{1},E_{2},\cdots,E _{p}\\ F_{1},F_{2},\cdots,F_{q}\end{array}\right]\right)\] \[=\,\sum_{m=0}^{\infty}\Gamma^{-1}(mA+A+B)\Gamma(mA+A+B,x)(E_{1})_ {m+1}\ldots(E_{p})_{m+1}(F_{1})_{m+1}^{-1}\ldots(F_{q})_{m+1}^{-1}\frac{v^{m}} {m!}.\] Using the relation \((A)_{m+1}=A(A+I)_{m}\), we arrive at \[\frac{d}{dv}\left({}_{p}E_{q}\left[(x,A,B;v)|\begin{array}{c}E_ {1},E_{2},\cdots,E_{p}\\ F_{1},F_{2},\cdots,F_{q}\end{array}\right]\right)\] \[=\,_{p}E_{q}\left[(x,A,A+B;v)|\begin{array}{c}E_{1}+I,E_{2}+I, \cdots,E_{p}+I\\ F_{1}+I,F_{2}+I,\cdots,F_{q}+I\end{array}\right]\times(E_{1})_{1}\cdots(E_{p})_{1} \left(F_{1}\right)_{1}^{-1}\cdots(F_{q})_{1}^{-1}. \tag{5.14}\] Repeating the above procedure \(n\) times yields the R.H.S. of assertion (5.13). 
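In the simplest scalar instance \(p=q=0\), \(A=1\) (the setting of Remark 1), the derivative formula reduces to \(\frac{d}{dv}\,{}_{0}E_{0}(x,1,b;v)={}_{0}E_{0}(x,1,1+b;v)\); a central finite difference confirms this numerically (all sample values below are arbitrary):

```python
import numpy as np
from scipy.special import gammaincc, factorial

def E00(x, b, v, M=60):
    """_0E_0(x, 1, b; v) = sum_m Q(b+m, x) v^m / m!  (scalar case of (5.2))."""
    m = np.arange(M)
    return np.sum(gammaincc(b + m, x) * v**m / factorial(m))

x, b, v, h = 1.0, 1.7, 0.4, 1e-6
numeric = (E00(x, b, v + h) - E00(x, b, v - h)) / (2 * h)   # d/dv by central difference
exact = E00(x, b + 1.0, v)                                  # parameter shift b -> b + 1
assert np.isclose(numeric, exact, rtol=1e-6)
```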
**Corollary 5.4**.: _For the matrix function \({}_{p}R_{q}(A,B;v)\), we have the following derivative formula:_ \[\frac{d^{n}}{dv^{n}}\left({}_{p}R_{q}\left[\begin{array}{c}E_{1},E_{2}, \cdots,E_{p}\\ F_{1},F_{2},\cdots,F_{q}\end{array}|A,B;v\right]\right)\] \[=\,_{p}R_{q}\left[\begin{array}{c}E_{1}+nI,E_{2}+nI,\cdots,E_{p}+nI\\ F_{1}+nI,F_{2}+nI,\cdots,F_{q}+nI\end{array}|A,nA+B;v\right]\times(E_{1})_{n}\cdots(E_{p})_{ n}(F_{1})_{n}^{-1}\cdots(F_{q})_{n}^{-1}, \tag{5.15}\] _where \(A\), \(B\), \(E_{i}\) and \(F_{j}\), \(1\leq i\leq p\), \(1\leq j\leq q\) be matrices in \(\mathbb{C}^{r\times r}\) such that \(E_{i}F_{j}=F_{j}E_{i}\), \(F_{j}+kI\) are invertible for all integers \(k\geq 0\)._ **Theorem 5.4**.: _The generalized incomplete exponential matrix function \({}_{p}E_{q}(x,A,B;v)\) satisfies the following partial derivative formulas:_ \[\frac{\partial}{\partial v}\left({}_{p}E_{q}\left[(x,A,B;v)| \begin{array}{c}E_{1},E_{2},\cdots,E_{p}\\ F_{1},F_{2},\cdots,F_{q}\end{array}\right]\right) \tag{5.16}\] \[=\,_{p}E_{q}\left[(x,A,A+B;v)|\begin{array}{c}E_{1}+I,E_{2}+I, \cdots,E_{p}+I\\ F_{1}+I,F_{2}+I,\cdots,F_{q}+I\end{array}\right]\] \[\times E_{1}\cdots E_{p}\,F_{1}^{-1}\cdots F_{q}^{-1},\] \[\frac{\partial}{\partial x}\left({}_{p}E_{q}\left[(x,A,B;v)| \begin{array}{c}E_{1},E_{2},\cdots,E_{p}\\ F_{1},F_{2},\cdots,F_{q}\end{array}\right]\right) \tag{5.17}\] \[=-e^{-x}x^{B-I}\left({}_{p}R_{q}\left[\begin{array}{c}E_{1},E_ {2},\cdots,E_{p}\\ F_{1},F_{2},\cdots,F_{q}\end{array}|A,B;vx^{A}\right]\right),\] _where \(A\), \(B\), \(E_{i}\) and \(F_{j}\), \(1\leq i\leq p\), \(1\leq j\leq q\) be matrices in \(\mathbb{C}^{r\times r}\) such that \(E_{i}F_{j}=F_{j}E_{i}\), \(F_{j}+kI\) are invertible for all integers \(k\geq 0\)._ Proof.: Differentiating partially (5.2) with respect to \(v\), we get \[\frac{\partial}{\partial v}\left({}_{p}E_{q}\left[(x,A,B;v)| \begin{array}{c}E_{1},E_{2},\cdots,E_{p}\\ F_{1},F_{2},\cdots,F_{q}\end{array}\right]\right)\] \[=\,\frac{\partial}{\partial 
v}\left(\sum_{m=0}^{\infty}\Gamma^{-1 }(mA+B)\Gamma(mA+B,x)(E_{1})_{m}\ldots(E_{p})_{m}(F_{1})_{m}^{-1}\ldots(F_{q} )_{m}^{-1}\frac{v^{m}}{m!}\right),\] \[=\,\sum_{m=1}^{\infty}\Gamma^{-1}(mA+B)\Gamma(mA+B,x)(E_{1})_{m} \ldots(E_{p})_{m}(F_{1})_{m}^{-1}\ldots(F_{q})_{m}^{-1}\frac{v^{m-1}}{(m-1)!}.\] This leads to proof of (5.16) by replacing \(m{\rightarrow}m+1\). We differentiate partially (5.6) with respect to \(x\) to demonstrate (5.17). **Theorem 5.5**.: _For the generalized incomplete exponential matrix function \({}_{p}E_{q}(x,A,B;v)\), the following addition formula holds true:_ \[{}_{p}E_{q}\left[(x,A,B;w+v)|\begin{array}{c}E_{1},E_{2},\cdots,E_{p}\\ F_{1},F_{2},\cdots,F_{q}\end{array}\right]\] \[=\sum_{n=0}^{\infty}\,_{p}E_{q}\left[(x,A,nA+B;w)|\begin{array} []{c}E_{1}+nI,E_{2}+nI,\cdots,E_{p}+nI\\ F_{1}+nI,F_{2}+nI,\cdots,F_{q}+nI\end{array}\right]\frac{v^{n}}{n!}\] \[\times\Gamma(E_{1}+nI)\cdots\Gamma(E_{p}+nI)\Gamma(F_{1}+nI)^{-1} \cdots\Gamma(F_{q}+nI)^{-1}\] \[\times\Gamma(E_{1})^{-1}\cdots\Gamma(E_{p})^{-1}\Gamma(F_{1}) \cdots\Gamma(F_{q}), \tag{5.18}\] _where \(A\), \(B\), \(E_{i}\) and \(F_{j}\), \(1\leq i\leq p\), \(1\leq j\leq q\) be matrices in \(\mathbb{C}^{r\times r}\) such that \(E_{i}F_{j}=F_{j}E_{i}\), \(F_{j}+kI\) are invertible for all integers \(k\geq 0\)._ Proof.: Applying the definition of generalized incomplete exponential matrix function \({}_{p}E_{q}(x,A,B;v)\) given in (5.2), expanding L.H.S. of (5.18) and using the identity [19] \[\sum_{N=0}^{\infty}f(N)\frac{(w+v)^{N}}{N!}=\sum_{m,n=0}^{\infty}f(m+n)\frac{w ^{m}}{m!}\frac{v^{n}}{n!}.\] We get (5.18). This finishes the proof of this theorem. 
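For \(p=q=0\) and \(A=I\) the gamma-ratio factors in (5.18) reduce to the identity, and the addition formula becomes \({}_{0}E_{0}(x,1,b;w+v)=\sum_{n}{}_{0}E_{0}(x,1,n+b;w)\,v^{n}/n!\); a numerical sketch with arbitrary sample values:

```python
import numpy as np
from scipy.special import gammaincc, factorial

def E00(x, b, v, M=70):
    """_0E_0(x, 1, b; v) = sum_m Q(b+m, x) v^m / m!  (scalar case of (5.2))."""
    m = np.arange(M)
    return np.sum(gammaincc(b + m, x) * v**m / factorial(m))

x, b, w, v = 1.2, 0.9, 0.5, 0.3
lhs = E00(x, b, w + v)
# Addition formula (5.18) in the scalar p = q = 0, A = 1 case:
rhs = sum(E00(x, b + n, w) * v**n / factorial(n) for n in range(40))
assert np.isclose(lhs, rhs, rtol=1e-8)
```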
**Theorem 5.6**.: _For the generalized incomplete exponential matrix function \({}_{p}E_{q}(x,A,B;v)\), the following multiplication formula holds true:_ \[{}_{p}E_{q}\left[(x,A,B;wv)|\begin{array}{c}E_{1},E_{2},\cdots,E _{p}\\ F_{1},F_{2},\cdots,F_{q}\end{array}\right]\] \[=\sum_{n=0}^{\infty}\,{}_{p}E_{q}\left[(x,A,nA+B;w)|\begin{array} []{c}E_{1}+nI,E_{2}+nI,\cdots,E_{p}+nI\\ F_{1}+nI,F_{2}+nI,\cdots,F_{q}+nI\end{array}\right]\frac{w^{n}(v-1)^{n}}{n!}\] \[\times\Gamma(E_{1}+nI)\cdots\Gamma(E_{p}+nI)\Gamma(F_{1}+nI)^{-1 }\cdots\Gamma(F_{q}+nI)^{-1}\] \[\times\Gamma(E_{1})^{-1}\cdots\Gamma(E_{p})^{-1}\Gamma(F_{1}) \cdots\Gamma(F_{q}), \tag{5.19}\] _where \(A\), \(B\), \(E_{i}\) and \(F_{j}\), \(1\leq i\leq p\), \(1\leq j\leq q\) be matrices in \(\mathbb{C}^{r\times r}\) such that \(E_{i}F_{j}=F_{j}E_{i}\), \(F_{j}+kI\) are invertible for all integers \(k\geq 0\)._ Proof.: The proof of the above theorem is similar to that of Theorem 5.5. **Theorem 5.7**.: _For the generalized incomplete exponential matrix function \({}_{p}E_{q}(x,A,B;v)\), we have the following integral representation:_ \[\int_{0}^{t}v^{A-I}(t-v)^{B-I}\,{}_{p}E_{q}\left[(x,A,B;\lambda v ^{k})|\begin{array}{c}E_{1},E_{2},\cdots,E_{p}\\ F_{1},F_{2},\cdots,F_{q}\end{array}\right]dv\] \[=\mathfrak{B}(A,B)\,t^{A+B-I}\,{}_{p+k}E_{q+k}\left[(x,A,B;\lambda t ^{k})|\begin{array}{c}\Delta(k,A),E_{1},E_{2},\cdots,E_{p}\\ \Delta(k,A+B),F_{1},F_{2},\cdots,F_{q}\end{array}\right], \tag{5.20}\] _where \(A\), \(B\), \(E_{i}\) and \(F_{j}\), \(1\leq i\leq p\), \(1\leq j\leq q\) be matrices in \(\mathbb{C}^{r\times r}\) such that \(F_{j}+kI\) are invertible for all integers \(k\geq 0\). Also \(\Delta(k,A)\) stands for \(\frac{A}{k}\), \(\frac{A+I}{k}\),..., \(\frac{A+(k-1)I}{k}\)._ Proof.: Let \(\mathfrak{L}\) denote the L.H.S. of (5.20). 
Then, using (5.2), this gives \[\mathfrak{L}=\int_{0}^{t}v^{A-I}(t-v)^{B-I}\sum_{m=0}^{\infty}\Gamma^{-1}(mA+B )\Gamma(mA+B,x)(E_{1})_{m}\cdots(E_{p})_{m}(F_{1})_{m}^{-1}\cdots(F_{q})_{m}^{-1}\frac{(\lambda v^{k})^{m}}{m!}dv.\] Putting \(v=ts\), we get \[\begin{split}\mathfrak{L}&=t^{A+B-I}\int_{0}^{1}\sum_{m=0}^{\infty}s^{A+(km-1)I}(1-s)^{B-I}\,\Gamma^{-1}(mA+B)\Gamma(mA+B,x)\\ &\times(E_{1})_{m}\cdots(E_{p})_{m}(F_{1})_{m}^{-1}\cdots(F_{q})_{m}^{-1}\frac{(\lambda t^{k})^{m}}{m!}ds\\ &=t^{A+B-I}\sum_{m=0}^{\infty}\Gamma(A+kmI)\Gamma(B)\Gamma^{-1}(A+B+kmI)\Gamma^{-1}(mA+B)\Gamma(mA+B,x)\\ &\times(E_{1})_{m}\cdots(E_{p})_{m}(F_{1})_{m}^{-1}\cdots(F_{q})_{m}^{-1}\frac{(\lambda t^{k})^{m}}{m!}.\end{split}\] Now using the property of the Pochhammer matrix symbol defined in (2.6), this leads to the R.H.S. of (5.20). **Theorem 5.8**.: _Let \(A\), \(B\), \(C\), \(E_{i}\) and \(F_{j}\), \(1\leq i\leq p\), \(1\leq j\leq q\) be matrices in \(\mathbb{C}^{r\times r}\) such that \(F_{j}+kI\) are invertible for all integers \(k\geq 0\) and \(B\), \(C\), \(C+B\) are positive stable. Then the generalized incomplete exponential matrix function \({}_{p}E_{q}(x,A,B;v)\) satisfies the following integral representation:_ \[\begin{split}&\int_{t}^{y}(y-v)^{C-I}(v-t)^{B-I}\,_{p}E_{q}\left[(x,A,B; \lambda(v-t)^{k})|\begin{array}{c}E_{1},E_{2},\cdots,E_{p}\\ F_{1},F_{2},\cdots,F_{q}\end{array}\right]dv\\ &=\mathfrak{B}(B,C)\,(y-t)^{C+B-I}\,_{p+k}E_{q+k}\left[(x,A,B; \lambda(y-t)^{k})|\begin{array}{c}\Delta(k,B),E_{1},E_{2},\cdots,E_{p}\\ \Delta(k,B+C),F_{1},F_{2},\cdots,F_{q}\end{array}\right],\end{split} \tag{5.21}\] _where \(\Delta(k,A)\) stands for \(\frac{A}{k}\), \(\frac{A+I}{k}\), \(\dots\), \(\frac{A+(k-1)I}{k}\)._ Proof.: Let \(\mathfrak{A}\) denote the L.H.S. of (5.21). 
Then, using (5.2), this gives \[\begin{split}\mathfrak{A}&=\int_{t}^{y}(y-v)^{C-I}(v-t)^{ B-I}\sum_{m=0}^{\infty}\Gamma^{-1}(mA+B)\Gamma(mA+B,x)\\ &\times(E_{1})_{m}\cdots(E_{p})_{m}(F_{1})_{m}^{-1}\cdots(F_{q}) _{m}^{-1}\frac{(\lambda(v-t)^{k})^{m}}{m!}dv.\end{split}\] Putting \(s=\frac{v-t}{y-t}\), we get \[\begin{split}\mathfrak{A}&=(y-t)^{C+B-I}\int_{0}^{ 1}\sum_{m=0}^{\infty}s^{B+(km-1)I}(1-s)^{C-I}\,\Gamma^{-1}(mA+B)\Gamma(mA+B,x) \\ &\times(E_{1})_{m}\cdots(E_{p})_{m}(F_{1})_{m}^{-1}\cdots(F_{q}) _{m}^{-1}\frac{(\lambda(y-t)^{k})^{m}}{m!}ds,\end{split}\] \[=(y-t)^{C+B-I}\sum_{m=0}^{\infty}\mathfrak{B}(B+kmI,C)\,\Gamma^{-1}(mA+B) \Gamma(mA+B,x)\times(E_{1})_{m}\cdots(E_{p})_{m}(F_{1})_{m}^{-1}\cdots(F_{q})_{m }^{-1}\frac{(\lambda(y-t)^{k})^{m}}{m!},\] \[=(y-t)^{C+B-I}\sum_{m=0}^{\infty}\Gamma(B+kmI)\Gamma(C)\Gamma^{-1 }(B+C+kmI)\Gamma^{-1}(mA+B)\Gamma(mA+B,x)\times(E_{1})_{m}\cdots(E_{p})_{m}(F_{1})_{m}^{-1}\cdots(F_{q})_{m}^{-1}\frac{(\lambda(y-t)^{k})^{m}}{m!}.\] Now using the property of the Pochhammer matrix symbol defined in (2.6), this leads to the R.H.S. of (5.21). 
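Specialized to scalars with \(p=q=0\) and \(k=1\), (5.21) equates a beta-type integral of \({}_{0}E_{0}\) with a \({}_{1}E_{1}\)-type series carrying the extra parameters \(b\) and \(b+c\), which numerical quadrature confirms (all sample values below are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaincc, factorial, poch, beta

a, b, c, lam = 0.9, 1.4, 2.2, 0.7   # arbitrary scalar parameters (p = q = 0, k = 1)
x, t, y = 1.0, 0.2, 1.0
ms = np.arange(50)

def E00(arg):
    """_0E_0(x, a, b; arg) = sum_m Q(m*a + b, x) arg^m / m!  (scalar case of (5.2))."""
    return np.sum(gammaincc(ms * a + b, x) * arg**ms / factorial(ms))

# LHS of (5.21): quadrature in v over (t, y)
integrand = lambda v: (y - v)**(c - 1) * (v - t)**(b - 1) * E00(lam * (v - t))
lhs = quad(integrand, t, y)[0]

# RHS of (5.21): B(b, c) (y-t)^(b+c-1) times the series with parameters b and b+c
series = np.sum(gammaincc(ms * a + b, x) * poch(b, ms) / poch(b + c, ms)
                * (lam * (y - t))**ms / factorial(ms))
rhs = beta(b, c) * (y - t)**(b + c - 1) * series

assert np.isclose(lhs, rhs, rtol=1e-6)
```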
**Theorem 5.9**.: _The incomplete exponential matrix function \({}_{2}E_{1}(x,A,B;v)\) satisfies the following relation:_ \[\begin{split}&{}_{2}E_{1}\left[(x,A,B;1)|\begin{array}{c}E_{1},E_{2}\\ F_{1}\end{array}\right]\\ &=\Gamma(F_{1}-E_{1}-E_{2})\Gamma(F_{1})\Gamma^{-1}(F_{1}-E_{1})\Gamma^{-1}(F_{1}-E_{2})\\ &\quad-\sum_{m=0}^{\infty}\gamma(mA+B,x)\,\Gamma^{-1}(mA+B)(E_{1})_{m}(E_{2})_{m}(F_{1})_{m}^{-1}\frac{1}{m!}, \end{split} \tag{5.22}\] _where \(A\), \(B\), \(E_{1}\), \(E_{2}\) and \(F_{1}\) are matrices in \(\mathbb{C}^{r\times r}\) such that \(F_{1}+kI\) is invertible for all integers \(k\geq 0\), \(F_{1}\), \(F_{1}-E_{1}\), \(F_{1}-E_{2}\) and \(F_{1}-E_{1}-E_{2}\) are positive stable, and all the matrices commute._ Proof.: Putting \(p=2\), \(q=1\), \(v=1\) in (5.3), we obtain \[{}_{2}E_{1}\left[(x,A,B;1)|\begin{array}{c}E_{1},E_{2}\\ F_{1}\end{array}\right]\] \[={}_{2}F_{1}\left[\begin{array}{c}E_{1},E_{2}\\ F_{1}\end{array}|1\right]-{}_{2}e_{1}\left[(x,A,B;1)|\begin{array}{c}E_{1},E_{2}\\ F_{1}\end{array}\right]\] \[={}_{2}F_{1}\left[\begin{array}{c}E_{1},E_{2}\\ F_{1}\end{array}|1\right]-\int_{0}^{x}t^{B-I}e^{-t}{}_{2}R_{1}\left[\begin{array}[]{c}E_{1},E_{2}\\ F_{1}\end{array}|A,B;t^{A}\right]dt,\] \[={}_{2}F_{1}\left[\begin{array}{c}E_{1},E_{2}\\ F_{1}\end{array}|1\right]-\int_{0}^{x}t^{B-I}e^{-t}\sum_{m=0}^{\infty}\Gamma^{-1}(mA+B)(E_{1})_{m}(E_{2})_{m}(F_{1})_{m}^{-1}\frac{(t^{A})^{m}}{m!}dt. \tag{5.23}\] Applying the Gauss summation matrix formula [4] for \(v=1\) and changing the order of summation and integration, we get \[{}_{2}E_{1}\left[(x,A,B;1)|\begin{array}{c}E_{1},E_{2}\\ F_{1}\end{array}\right]\] \[=\Gamma(F_{1}-E_{1}-E_{2})\Gamma(F_{1})\Gamma^{-1}(F_{1}-E_{1})\Gamma^{-1}(F_{1}-E_{2})\] \[-\sum_{m=0}^{\infty}\Gamma^{-1}(mA+B)(E_{1})_{m}(E_{2})_{m}(F_{1})_{m}^{-1}\frac{1}{m!}\times\left(\int_{0}^{x}t^{mA+B-I}e^{-t}dt\right). \tag{5.24}\] After some simplification by using (2.7) and (2.15), we obtain the R.H.S. of (5.22). 
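The mechanism behind Theorem 5.9 is that \(\Gamma(a,x)+\gamma(a,x)=\Gamma(a)\) term by term, so the upper-incomplete series \({}_{2}E_{1}\) and the lower-incomplete series recombine into the ordinary Gauss sum \({}_{2}F_{1}(E_{1},E_{2};F_{1};1)\). A minimal sketch of this in the scalar case \(r=1\), with illustrative parameter values chosen so that \(F_{1}-E_{1}-E_{2}>0\) (the Gauss convergence condition), follows; it is a sanity check, not the paper's computation.

```python
# Scalar (r = 1) check of Theorem 5.9 at v = 1: the regularized upper and
# lower incomplete gamma pieces of each Gauss-series term recombine, since
# gammaincc(a, x) + gammainc(a, x) = 1, so 2E1 + (gamma-series) = 2F1(1).
from scipy.special import gamma, gammainc, gammaincc

A, B = 0.7, 1.3                 # illustrative scalar stand-ins
E1, E2, F1 = 0.3, 0.2, 3.0      # F1 - E1 - E2 = 2.5 > 0
x, M = 0.8, 500                 # truncation: terms decay like m^{-3.5}

# Hypergeometric coefficients c_m = (E1)_m (E2)_m / ((F1)_m m!) by recurrence.
upper = lower = 0.0
c = 1.0
for m in range(M):
    upper += gammaincc(m*A + B, x) * c   # term of 2E1[(x, A, B; 1)]
    lower += gammainc(m*A + B, x) * c    # term of the subtracted gamma-series
    c *= (E1 + m) * (E2 + m) / ((F1 + m) * (1 + m))

# Gauss summation: 2F1(E1, E2; F1; 1) in closed form.
gauss = gamma(F1 - E1 - E2) * gamma(F1) / (gamma(F1 - E1) * gamma(F1 - E2))
print(abs(upper - (gauss - lower)))  # residual is just the series tail
```

The residual equals the tail of the \({}_{2}F_{1}\) series beyond \(M\) terms, which is negligible for these parameters.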
**Theorem 5.10**.: _Let \(A\), \(B\), \(E_{1}\), \(E_{2}\) and \(F_{1}\) be matrices in \(\mathbb{C}^{r\times r}\) such that \(E_{1}F_{1}=F_{1}E_{1}\), \(E_{1}E_{2}=E_{2}E_{1}\), \(F_{1}+kI\) is invertible for all integers \(k\geq 0\) and \(E_{1}\), \(E_{1}-F_{1}+I\), \(F_{1}-I\) are positive stable. Then the matrix function \({}_{2}E_{1}(x,A,B;v)\) satisfies the following recurrence relation:_ \[\begin{split}&{}_{2}E_{1}\left[(x,A,B;v)|\begin{array}{c}E_{1},E_{2}\\ F_{1}\end{array}\right](E_{1}-F_{1}+I)\\ &=\,{}_{2}E_{1}\left[(x,A,B;v)|\begin{array}{c}E_{1}+I,E_{2}\\ F_{1}\end{array}\right]E_{1}-\,{}_{2}E_{1}\left[(x,A,B;v)|\begin{array}{c}E_{1},E_{2}\\ F_{1}-I\end{array}\right](F_{1}-I).\end{split} \tag{5.25}\] Proof.: Using the definition of the incomplete exponential matrix function \({}_{2}E_{1}(x,A,B;v)\) with \(p=2\), \(q=1\) in (5.2), the R.H.S. of (5.25) takes the form \[\begin{split} R.H.S.&=\sum_{m=0}^{\infty}\Gamma^{-1}(mA+B)\Gamma(mA+B,x)(E_{1}+I)_{m}(E_{2})_{m}(F_{1})_{m}^{-1}\frac{v^{m}}{m!}\,E_{1}\\ &-\sum_{m=0}^{\infty}\Gamma^{-1}(mA+B)\Gamma(mA+B,x)(E_{1})_{m}(E_{2})_{m}(F_{1}-I)_{m}^{-1}\frac{v^{m}}{m!}\,(F_{1}-I).\end{split} \tag{5.26}\] Using the relations \[\begin{split} E_{1}(E_{1}+I)_{m}&=(E_{1}+mI)(E_{1})_{m},\\ (F_{1}-I)(F_{1}-I)_{m}^{-1}&=(F_{1}+(m-1)I)(F_{1})_{m}^{-1},\end{split}\] and noting that \((E_{1}+mI)-(F_{1}+(m-1)I)=E_{1}-F_{1}+I\), this yields the L.H.S. of (5.25). ## 6 Conclusion We have investigated the generalized incomplete exponential matrix functions \({}_{p}e_{q}(x,A,B;v)\) and \({}_{p}E_{q}(x,A,B;v)\) and established several of their properties, including various integral representations, addition formulas and derivative formulas. Some integral representations involving the incomplete gamma matrix function and the Bessel and modified Bessel matrix functions are also presented. The results of this article generalize those of [3, 5] to the matrix case. 
These generalized incomplete exponential matrix functions have applications in many areas of mathematics and mathematical physics. **Acknowledgments.** The second author is grateful to the University Grants Commission of India for financial assistance in the form of a Junior Research Fellowship. **Declarations** **Conflict of interest** The authors declare no conflict of interest.